# UNDERSTANDING THE MASS PARTICIPATION FACTOR
In my post Example of how to use the mass participation factor in SolidWorks you
can find a practical example where this methodology is implemented.
Introduction
Every structure has the tendency to vibrate at certain frequencies, called natural or
resonant frequencies. Each natural frequency is associated with a certain shape,
called mode shape, that the model tends to assume when vibrating at that
frequency.
When a structure is properly excited by a dynamic load with a frequency that
coincides with one of its natural frequencies, the structure undergoes large
displacements and stresses. This phenomenon is known as resonance.
Several programs such as SolidWorks are provided with a Finite Element Analysis (FEA)
module, which helps to calculate properties such as the mode shapes, stresses,
displacements, strains, velocities, accelerations, etc. When I started using this type
of modal analysis I needed to answer some questions that somehow haunted me:
Q1: How do I know if a certain dynamic load will make the structure
where it is installed resonate?
Q2: When I am doing a modal analysis, how many resonant modes
should I check?
Q3: When should a mode shape be considered or not?
Q4: What is the effective mass participation factor?
Q5: How could I make the most of the effective mass participation
factor?
Firstly, I will give a very direct answer to the first question by applying a criterion
limiting the exciting frequencies (it uses the idea of the Design Factor of Safety
presented in the post What is your Design Factor?).
Frequency limits criterion Answer to Q1
After studying different information about modal analysis, I found out a criterion as
follows (this criterion is also followed by the ASHRAE, an association with an
important role in a lot of engineering fields):
f_r / (n · f_e) < 1 − 0,6 = 0,4 (it is 60% lower) Eq. 1

where f_e is the exciting frequency (i.e., the frequency of the expected dynamic
load), f_r is a particular resonant frequency, and n represents the integral
multiples of the exciting frequency (generally, you can use the first six integral
multiples to obtain reliable results). To make it clear, if I want to avoid resonance
problems, I should perform the design so that the resonant frequencies under
consideration are 40% or less of the expected exciting frequency, which means using
a DFoS of 0,4. In the same way, we can support that:
f_r / (n · f_e) > 1 + 0,6 = 1,6 (it is 60% higher) Eq. 2
which means that the resonant frequencies under consideration should be at least
60% higher than the expected exciting frequency (160% or more), which means using a DFoS of 1,6.
[Figure: the band from −60% to +60% around each integral multiple of the exciting frequency, marked "should not be in here"; the resonant frequencies must lie outside this band]
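To make the criterion concrete, here is a small sketch (Python; the function and parameter names are mine) that flags resonant frequencies falling inside the ±60% band around the first six integral multiples of the exciting frequency:

```python
def passes_frequency_limits(f_resonant, f_exciting, n_multiples=6,
                            lower=0.4, upper=1.6):
    """True if f_resonant avoids the forbidden band around every
    integral multiple n * f_exciting (n = 1..n_multiples)."""
    for n in range(1, n_multiples + 1):
        ratio = f_resonant / (n * f_exciting)
        if lower < ratio < upper:  # inside the +/-60% band: resonance risk
            return False
    return True

# Example: a 50 Hz excitation; only the mode well below 40% of 50 Hz passes
for f_r in (18.0, 55.0, 85.0):
    print(f_r, passes_frequency_limits(f_r, 50.0))
```

Note that a mode can clear the first multiple and still fall inside the band of a higher one (85 Hz clears 50 Hz but lands near 2 × 50 Hz), which is why the multiples are checked.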
This criterion shall be applied to each natural frequency taken under consideration to
evaluate if the phenomenon of resonance appears, but a new question arises.
Now, you will need to answer the second question: How many resonant modes
should I consider? Or, how do you know if you have chosen sufficient modes?
As a general rule, you probably want to look at as many modes as it takes to fully
explore the frequency excitation range you're expecting. For example, for structural
excitations you can check 6 modes minimum (obvious) and don't normally evaluate
more than 10 modes or a couple of hundred Hz (say 500 Hz). However, this
methodology is not always the right method and that is why I will introduce you to
the term of Effective Mass Participation Factor, EMPF (also known as Mass
Participation Factor).
What is the Effective Mass Participation Factor? Answer to Q4 and Q5
Basically, the EMPF provides a measure of the energy contained within each
resonant mode since it represents the amount of system mass participating in a
particular mode. For a particular structure, with a mass matrix $M$, normalized mode
shapes $\phi_i$, and a ground motion influence coefficient $r$, the participation of each mode $i$
can be obtained as the effective mass participation factor:

$\mathrm{EMPF}_i = \dfrac{(\phi_i^T M\, r)^2}{\phi_i^T M\, \phi_i}$ Eq. 3
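As a sketch of Eq. 3 on a toy system (Python with NumPy; the 2-DOF mass and stiffness values below are illustrative assumptions of mine, not from the post), the per-mode EMPFs can be computed and checked to sum to the total mass:

```python
import numpy as np

def empf(M, phi, r):
    """Effective mass participation of one mode (Eq. 3)."""
    return float((phi @ M @ r) ** 2 / (phi @ M @ phi))

# Toy 2-DOF system (illustrative values)
M = np.diag([2.0, 1.0])            # mass matrix, total mass 3.0
K = np.array([[300.0, -100.0],
              [-100.0, 100.0]])    # stiffness matrix
r = np.ones(2)                     # ground-motion influence vector

# Mode shapes from the generalized eigenproblem K*phi = w^2 * M*phi
Mi = np.diag(1.0 / np.sqrt(np.diag(M)))
_, Q = np.linalg.eigh(Mi @ K @ Mi)
modes = Mi @ Q                     # back to physical coordinates

values = [empf(M, modes[:, i], r) for i in range(2)]
print(sum(values))                 # equals the total mass r^T M r = 3.0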
Therefore, we can assert the following ideas:
## A mode with a large effective mass is usually a significant
contributor to the response of the system.
It is possible to calculate an EMPF for a particular direction (x, y or z).
The sum of the effective masses for all modes in a given response
direction must equal the total mass of the structure.
## How Can I Calculate EMPF Using SolidWorks?
To list mass participation factors:
1. Run a frequency or a linear dynamic study.
2. Right-click the Results folder and select List Mass Participation (Figure
1).
3. The Mass Participation (Normalized) dialog box opens.
4. Click Save to save the listed information to an Excel (*.csv) file or to a
plain text (*.txt) file.
## Figure 1. List of mass participation factor
Number of modes criterion Answer to Q2
Priestley et al (1996), among other authors, confirm that a sum of all EMPF (known
as Cumulative Effective Mass Participation Factor, CEMPF) of 80% to 90% in
any given response direction can be considered sufficient to capture the dominant
dynamic response of the structure:
$\mathrm{CEMPF} = \displaystyle\sum_{i=1}^{N} \mathrm{EMPF}_i \geq 80\%\ \text{to}\ 90\%$ Eq. 4

where $N$ is the number of modes taken under consideration. Therefore, if for
example we expect a vibration in the x direction, we need to keep calculating modes
until the sum of all EMPF in the x direction is about 80-90%. This should ensure a
consistency in the results since we can compare the exciting frequency with the
sufficient natural frequencies. In the previous example, you can see that the sum of
the EMPF for each direction is higher than 80%.
The frequency limits criterion is not the only criterion that we must apply to
evaluate if the expected dynamic load generates a resonance effect. For example, it
may be the case that the exciting frequency is close to one of the natural
frequencies but the energy contained within this resonant mode is a small
value and hence there is no resonance effect. That is why we need to use another
criterion:
$\mathrm{EMPF}_i > 1\%$ Eq. 5
One common rule is that a mode should be considered if it contributes more than 1%
of the total mass.
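Both criteria can be combined in a small post-processing sketch (Python; the function and threshold names are mine): accumulate EMPFs until the 80–90% target is met, and keep only the modes above the 1% share:

```python
def select_modes(empf_shares, target=0.80, min_share=0.01):
    """empf_shares: per-mode EMPF as a fraction of total mass, in order.
    Returns (indices of significant modes, cumulative share reached)."""
    selected, cumulative = [], 0.0
    for i, share in enumerate(empf_shares):
        cumulative += share
        if share > min_share:      # participation criterion (Eq. 5)
            selected.append(i)
        if cumulative >= target:   # number-of-modes criterion (Eq. 4)
            break
    return selected, cumulative

shares = [0.62, 0.005, 0.15, 0.08, 0.04]
print(select_modes(shares))        # dominant mode indices and cumulative share
```

If the loop exhausts the list before reaching the target, more modes should be requested from the solver and the study re-run.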
Methodology for performing a good, coherent and precise modal analysis
Let's finish the post by summarizing the main ideas presented and sorting those key
ideas as follows:
## 1. Evaluate the expected dynamic loads (frequencies and directions).
2. Run a frequency or a linear dynamic study for an initial number of
modes $N$.
3. Check if the CEMPF
is between 80% and 90% for those
directions (x, y or z) where you expect a dynamic load. If not, increase
the number of considered modes and re-run the simulation (Number of
modes criterion).
4. Check any EMPF
where the value is higher than 1% (Participation
criterion).
5. Apply the Frequency limits criterion for those selected modes.
In my post Example of how to use the mass participation factor in SolidWorks you
can find a practical example where this methodology is implemented.
I hope this post has been useful. If you have any concerns or questions, feel free
to contact me at jaime.martinez.verdu@gmail.com.
## If you liked it… Don't forget to share!
References:
ASHRAE publications:
Vibration Isolation and Control
A shot of isolation to prevent an outbreak of vibration
Priestley, M. J. N., Seible, S., Calvi, G. M., Seismic Design and Retrofit of
Bridges, John Wiley and Sons, 1996. p 184,242.
Giancarlo Genta (1998). Vibration of Structures and Machines: Practical Aspects. Springer; 3rd edition.
Tom Irvine's webpage: http://www.vibrationdata.com/
SolidWorks help: Mass Participation (Normalized)
# Problem 44365. An asteroid and a spacecraft
Solution 1871506
Submitted on 11 Jul 2019
This solution is locked. To view this solution, you need to provide a solution of the same size or smaller.
### Test Suite
Test Status Code Input and Output
1 Fail
p0 = [0 0 0]; p1 = [1 1 1]; p2 = [2 2 2]; p3 = [3 3 3]; t0 = 0; t1 = 1; d = 1; ok = true; assert(isequal(safetrip(d, t0, t1, p0, p1, p2, p3), ok))
Assertion failed.
2 Pass
p0 = [3 3 3]; p1 = [2 2 2]; p2 = [2 2 2]; p3 = [3 3 3]; t0 = 0; t1 = 1; d = 1; ok = false; assert(isequal(safetrip(d, t0, t1, p0, p1, p2, p3), ok))
3 Fail
p0 = [1 2 3]; p1 = [4 5 6]; p2 = [3 2 1]; p3 = [6 5 4]; t0 = 10; t1 = 20; d = 2; ok = true; assert(isequal(safetrip(d, t0, t1, p0, p1, p2, p3), ok))
Assertion failed.
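The solution itself is locked, but the three test cases are consistent with a closest-approach check between two linearly moving points. A hedged Python sketch (reading p0→p1 and p2→p3 as linear motion over [t0, t1] is my assumption; it is not stated on the page):

```python
import numpy as np

def safetrip(d, t0, t1, p0, p1, p2, p3):
    """True if the two objects, moving linearly from (p0 -> p1) and
    (p2 -> p3) over [t0, t1], always stay farther apart than d."""
    u = np.asarray(p2, float) - np.asarray(p0, float)        # relative position at t0
    v = (np.asarray(p3, float) - np.asarray(p1, float)) - u  # change over the trip
    if v.any():
        # minimize the quadratic |u + s*v|^2, clamped to the trip interval
        s = min(1.0, max(0.0, -float(u @ v) / float(v @ v)))
    else:
        s = 0.0
    return bool(np.linalg.norm(u + s * v) > d)

print(safetrip(1, 0, 1, [3, 3, 3], [2, 2, 2], [2, 2, 2], [3, 3, 3]))  # → False
```

This reproduces the expected `ok` values of all three tests above (the locked solution evidently failed tests 1 and 3).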
# why is my plot not showing lines
5 views (last 30 days)
Doyouknow on 9 Sep 2022
Commented: Ankit on 12 Sep 2022
%This script is to solve for the investment of \$50
%with an interest of 1% or 0.01 the equation uses the
% A * (1 + r) .^ n equation of interest with investment
%Script wrote by Doyouknow
format bank
monthlyInvestment = 50; %monthlyInvestment = 50;
interestRate = 0.01; %interestRate = 0.01;
monthEndBalance = 0;
disp('Month Month End Balance')
figure;
for month = 1:12
hold on;
monthEndBalance = monthEndBalance + (monthlyInvestment * (1 + interestRate) .^ month);
fprintf([num2str(month)]), disp(monthEndBalance(:))
plot (month,monthEndBalance,'r.')
title('Plot of Investment') %Testing out to see if i can plot
xlabel('Month (month)') %using the given data so far not
ylabel('Year (monthEndBalance)')
xticks (1:12) %sets the x values to be 1 - 12 to represent the graph
end
### Accepted Answer
Ankit on 9 Sep 2022
Edited: Ankit on 12 Sep 2022
Edited on 12.09.2022
The problem in your script is that you are not storing your results; the values are overwritten on each pass of your for loop.
%This script is to solve for the investment of \$50
%with an interest of 1% or 0.01 the equation uses the
% A * (1 + r) .^ n equation of interest with investment
%Script wrote by Doyouknow
format bank
monthlyInvestment = 50; %monthlyInvestment = 50;
interestRate = 0.01; %interestRate = 0.01;
j = 1;
monthEndBalance =zeros(1,12);
prevmonthEndBalance = 0;
disp('Month Month End Balance')
Month Month End Balance
figure;
month = 1:12;
for i = 1:length(month)
monthEndBalance(i) = prevmonthEndBalance + (monthlyInvestment * (1 + interestRate) .^ i);
prevmonthEndBalance = monthEndBalance(i);
fprintf([num2str(month(i)) '-' num2str(monthEndBalance(month(i))) '\n'])
end
1-50.5 2-101.505 3-153.02 4-205.0503 5-257.6008 6-310.6768 7-364.2835 8-418.4264 9-473.1106 10-528.3417 11-584.1252 12-640.4664
plot (month,monthEndBalance);hold on
title('Plot of Investment') %Testing out to see if i can plot
xlabel('Month (month)') %using the given data so far not
ylabel('Year (monthEndBalance)')
xticks (1:12) %sets the x values to be 1 - 12 to represent the graph
##### 3 Comments
Doyouknow on 9 Sep 2022
The only issue I noticed is that the values are wrong. They are calculated incorrectly from what I am seeing; moreover, the code you showed doesn't work with the values I am attempting to get.
Ankit on 12 Sep 2022
@Doyouknow Please check the updated solution.
### More Answers (1)
chrisw23 on 9 Sep 2022
Each plot call (one per loop iteration) creates a line with one point. (Debug by looking at the Figure.Children.Children property, or by saving the return value of plot().)
This results in a 12×1 Line array and not a single line as you expected. Precalculate your data before plotting or try to use an animated line and add points in your loop.
Microsoft® JScript™ Recursion JScript Tutorial
Recursion is an important programming technique. It's used to have a function call itself from within itself. One handy example is the calculation of factorials. The factorials of 0 and 1 are both defined specifically to be 1. The factorials of larger numbers are calculated by multiplying 1 * 2 * ..., incrementing by 1 until you reach the number for which you're calculating the factorial.
The following paragraph is a function, defined in words, that calculates a factorial.
"If the number is less than zero, reject it. If it isn't an integer, round it down to the next integer. If the number is zero or one, its factorial is one. If the number is larger than one, multiply it by the factorial of the next smaller number."
To calculate the factorial of any number that is larger than 1, you need to calculate the factorial of at least one other number. The function you use to do that is the function you're in the middle of already; the function must call itself for the next smaller number, before it can execute on the current number. This is an example of recursion.
Clearly, there is a way to get in trouble here. You can easily create a recursive function that doesn't ever get to a definite result, and cannot reach an endpoint. Such a recursion causes the computer to execute a so-called "infinite" loop. Here's an example: omit the first rule (the one about negative numbers) from the verbal description of calculating a factorial, and try to calculate the factorial of any negative number. This fails, because in order to calculate the factorial of, say, -24 you first have to calculate the factorial of -25; but in order to do that you first have to calculate the factorial of -26; and so on. Obviously, this never reaches a stopping place.
Thus, it is extremely important to design recursive functions with great care. If you even suspect that there's any chance of an infinite recursion, you can have the function count the number of times it calls itself, and thus make sure that if the function calls itself too many times, however many you decide that should be, it automatically quits.
Here's the factorial function again, this time written in JScript code.
```
function factorial(aNumber) {
aNumber = Math.floor(aNumber); // If the number is not an integer, round it down.
if (aNumber < 0) { // If the number is less than zero, reject it.
return "not a defined quantity";
}
if ((aNumber == 0) || (aNumber == 1)) { // If the number is 0 or 1, its factorial is 1.
return 1;
}
else return (aNumber * factorial(aNumber - 1)); // Otherwise, recurse until done.
}
```
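As a sketch of the call-counting safeguard described above (the depth parameter and the limit of 1000 are illustrative choices of mine, not part of the original tutorial):

```javascript
// Factorial with a recursion-depth guard: the function counts its own
// self-calls and throws once the limit is exceeded, instead of looping forever.
function safeFactorial(aNumber, depth) {
    depth = depth || 0;
    if (depth > 1000) {
        throw new Error("too many recursive calls");
    }
    aNumber = Math.floor(aNumber);       // round non-integers down
    if (aNumber < 0) {                   // reject negative numbers
        return "not a defined quantity";
    }
    if (aNumber == 0 || aNumber == 1) {  // base cases
        return 1;
    }
    return aNumber * safeFactorial(aNumber - 1, depth + 1);
}

console.log(safeFactorial(5)); // → 120
```

Here the guard turns a would-be infinite (or merely very deep) recursion into an explicit error the caller can handle.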
file: /Techref/inet/iis/jscript/htm/js920.htm, 3KB, updated: 1997/9/30 04:45
# Mixed Integer Problem: Formulation and Solution
Lauren Moore has sold her business for \$500,000 and wants to invest in condominium units ( which she intends to rent) and land ( which she will lease to a farmer). She estimates that she will receive an annual return of \$8,000 for each condominium and \$6,000 for each acre of land. A condominium unit costs \$70,000, and land is \$30,000 per acre. A condominium will cost her \$1,000 per unit, an acre of land will cost \$2,000 for maintenance and upkeep, and \$14,000 has been budgeted for these annual expenses. Lauren wants to know how much to invest in condominiums and land to maximize her annual return.
A. Formulate a mixed integer programming model for this problem.
B. Solve this model using the computer.
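A typical formulation: maximize 8000x + 6000y subject to 70000x + 30000y ≤ 500000 (purchase budget) and 1000x + 2000y ≤ 14000 (upkeep budget), with x a non-negative integer (condominium units) and y ≥ 0 continuous (acres). The posting's solution uses Excel Solver; as a cross-check, the model is small enough to brute-force over the integer variable (Python sketch; the variable names are mine):

```python
def best_plan():
    """Enumerate integer condo counts; for each, the best feasible
    continuous acreage is set by the tighter of the two budgets."""
    best = (0.0, 0, 0.0)                       # (annual return, condos, acres)
    for x in range(0, 500000 // 70000 + 1):    # condos limited by purchase budget
        y = min((500000 - 70000 * x) / 30000,  # purchase budget left for land
                (14000 - 1000 * x) / 2000)     # upkeep budget left for land
        if y < 0:
            continue
        annual_return = 8000 * x + 6000 * y
        if annual_return > best[0]:
            best = (annual_return, x, y)
    return best

print(best_plan())  # → (67000.0, 5, 4.5): 5 condos and 4.5 acres
```

Treating acres as continuous is the "mixed" part of the model; 5 units and 4.5 acres use the full $14,000 upkeep budget and yield an annual return of $67,000.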
#### Solution Summary
This posting contains the solution to the following mixed integer programming problem using Excel Solver.
Questions framed in SSC CHSL for 25th Jan 2017
In this article, we have gathered and presented almost maximum questions out of all three shifts for SSC CHSL held on 25th Jan 2017 exam. Some answers are listed against the questions as well.
Created On: Jan 31, 2017 15:38 IST
Modified On: Feb 8, 2017 13:09 IST
In this post, we have included the questions obtained from the CHSL aspirants and these questions are totally based upon consciousness for the 25th Jan 2017 exam. These questions will assist you in figuring out the following things:-
1. Level of questions- difficulty wise
2. Chapterwise question selection.
3. Questions repetitiveness.
4. Estimation of Cut off marking, whether it is high or low.
5. Devising a random study plan for the remaining attempts.
These questions would be fruitful in appearing for forthcoming exams i.e. SSC CGL, MTS and other competitive exams. Exams held from 7th Jan to 9th Jan 2017 will be held in two time slots, whereas, remaining exams will be conducted in three shifts.
** The following questions are from the all Shifts exams. As soon, as we receive new questions from the remaining shifts then we will upload in the same blog.
Quantitative Aptitude
First Shift Second Shift Third Shift Height and Volume given of cylinder, find radius. A zoo charges Rs 800 for an adult’s ticket and Rs 200 child’s ticket. However, with two adults, a child can enter the zoo for free. How much profit would the zoo get if 20 adults and 8 children visit the zoo? Cot2A Cos22A =? (7-15x)-(6x-7) =? 14 Find relative rate of 22% for 1 year (half yearly) = 21% 1, 6.21, 66…? = 201 Cot4 π/3= ? – 1/√3 a+b=10…. ab=24… Val of a^3 +b^3? A can do a work in 18 days.. he work for 9days. how much fraction left? Average of 3 number 72 then 1st number is 2/7th part of the Sum of the remaining number then what is the first number? A + B = 10; AB = 24; A3 + B3 = ? Tan^4(@)+tan^2(@)=? Cosec135…? A man is 24 years older than his son. In two years, his age will be twice the age of his son. The present age of his son is? An error 5% in excess is made while measuring the side of a square. The percentage of error in the calculated area of the square is? If 2994 ÷ 14.5 = 172, then 29.94 ÷ 1.45 = ? Three number are in the ratio of 3 : 4 : 5 and their L.C.M. is 2400. Their H.C.F. is?
General Awareness
First Shift Second Shift Third Shift Who invented aspirin Felix Hoffmann Majuli Island is in? Assam. Marble is which type of rock? Metamorphic Brihadeshwara temple is located in? Thanjavur, Tamil Nadu. Antimatter of electron? Positron. Indian Rail connects how many stations? 7,112. The largest fresh water lake in Asia? Baikal. Which country won the Copa America Tournament, 2016? Chile. With which sport is Apurvi Chandela associated? Shooting 10. ISRO was set up in? 1969. 11. First woman in space was? – Valentina Vladimirovna Tereshkova 12. Nonstick cooking utensils are coated with? – Teflon 13. The ‘Char Minar’ is in? – Hyderabad 14. The National Anthem was first sung in which year? – 1911 15. Which of these articles deals with sedition? – Article 124A 16. When did the Haldighati battle take place? – 1576 17. Which player holds the record for scoring most centuries in Test cricket? – Sachin Tendulkar 18. What is the unit of Magnetic Flux density? – Tesla 19. Scientific name for lizard? – Lacertilia 20. Vascular bundles are present? – Plants 21. Demand decreases from 750 to 650 and price increases from 15 to 20, find elasticity. – ? Who established the Pal dynasty? Next south Asian games will be hosted by? Author of Shiva trilogy? Full form of VOIP… Voice over internet protocol. Enzyme Lipase found, at which organ? Which of the following countries has highest density – India, China, Philippines,UK? Which of the following is both living and non living organisms – fungus , bacteria? ”Immortal of Meluha ” written by whom? Arctic circle, Antarctic circle,tropic of cancer, the tropic of Capricorn,_____ what is _____ 10. Full form of NCP… 11. One physics, theoretical law was asked 12. Formula for ammonium oxalate? 13. OUPA was formed in which year? Where is the Kanha national park? Which of the following is Green house gases? The Oscar nominated movie water was nominated for which category? First man who calculated the radius of earth? Full form of ABM? 
Plants having flower is called? Gymnosperm, Angiosperm, Bryophtes, Pteridophyte Who is the author of the book “lowland”? Who is Deepa Mehta? What is the full name of Sachin Tendulkar? 10. Chandragupta 322 to 298 belonged to which dynasty – Maurya Vansh 11. One question related to Myopia? 12. Total no. of articles in Constitution 395 13. Position of Venus in term of size in solar system? 14. Who invented electric stove? 15. Covalent bond is also known as
General Intelligence
First Shift Second Shift Third Shift Venn diagram of society, friends and enemy. A man had 20 cows. All but nine died. How many was he left with? – 9 Arctic circle, Antarctic circle, tropic of cancer, tropic of Capricorn, ___?___ 3, 4, 7, 8, 11, 12, … ? 31, 29, 24, 22, 17, …?
English Language
First Shift Second Shift Third Shift Idioms and Phrases: Bend over backward meaning? Synonyms of privy? – Aware The idiom “Apple of eye”? The criminal seems to have acted in …… the three others. The man came in a car to …… the television set. It is 20 years since I …… him.
We have presented all possible questions in this blog. If we found more questions then we will surely let all of you know. For further information, stay tuned to www.jagranjosh.com.
So, All the Best!!!
# strange inequality involving infinite series
While doing a complex analysis exercise, I came to a strange inequality which I don't know how to interpret. Suppose you have a sequence $\{a_j\}$ of positive real numbers. Let $\rho$ be a positive real number. The inequality I found after some calculation is $$\sum_{j=1}^{+\infty}\frac{1}{|a_j|^{\rho +\epsilon}}\leq \sum_{j=1}^{+\infty}\frac{1}{|a_j|^{\rho-\epsilon}}$$ for every $\epsilon>0$. My question is: can I deduce something from this inequality? For example, the convergence of the first series (the one with $+\epsilon$)? Can I deduce nothing? Is the inequality surely false? Is it always true, so that I can't deduce anything in particular? EDIT: the sequence $a_j$ tends to $\infty$
If $a_j\to 0$ then $\frac{1}{a_j}\not\to 0$ unless $\rho\pm\epsilon <0$ ... – DonAntonio Jan 11 '13 at 11:53
I've edited; the sequence tends to $+\infty$ – Federica Maggioni Jan 11 '13 at 11:59
Basically, what you are saying is that if $a_j>0$ and $\alpha<\beta$, then $$\sum_{j=1}^\infty\frac{1}{a_j^\beta}\le\sum_{j=1}^\infty\frac{1}{a_j^\alpha}.$$ This is certainly true if $a_j\ge1$. Since in your case $a_j\to\infty$, this holds for all $j$ large enough. But the inequality is not true in general. Let $N\in\mathbb{N}$ and define $$a_j=\begin{cases} 1/2 & \text{if }j\le N,\\ 2^{j-N-1} & \text{if }j> N. \end{cases}$$ Then $$\sum_{j=1}^\infty\frac{1}{a_j^\beta}-\sum_{j=1}^\infty\frac{1}{a_j^\alpha}= N(2^\beta-2^\alpha)+\frac{1}{1-2^{-\beta}}-\frac{1}{1-2^{-\alpha}},$$ which is positive if $N$ is large enough.
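A quick numeric check of the counterexample idea (Python; the convergent geometric tail is truncated at a finite number of terms):

```python
# Difference (beta-sum minus alpha-sum) for a_j = 1/2 (j <= N) followed
# by a geometric tail a_j = 2^k, k = 0, 1, 2, ...
def diff(alpha, beta, N, tail_terms=200):
    a = [0.5] * N + [2.0 ** k for k in range(tail_terms)]
    return sum(1 / x ** beta for x in a) - sum(1 / x ** alpha for x in a)

print(diff(1, 2, N=10) > 0)  # → True: with small terms, the inequality reverses
print(diff(1, 2, N=0) > 0)   # → False: with all a_j >= 1, it holds as stated
```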
In general there is nothing you can deduce from the inequality, since the right hand side can be $\infty$ and the left hand side $<\infty$.
If $|a_j|\geqslant1$ for every $j$ then $|a_j|^{\rho+\epsilon}\geqslant|a_j|^{\rho-\epsilon}$ hence indeed, $$\sum_{j=1}^{+\infty}\frac1{|a_j|^{\rho+\epsilon}}\leqslant\sum_{j=1}^{+\infty}\frac1{|a_j|^{\rho-\epsilon}}.$$ Otherwise no comparison holds (consider the limit $|a_1|\to0$, every other $a_j$ fixed).
But of course, if the series $\displaystyle\sum_j\frac1{|a_j|^{\rho-\epsilon}}$ converges, then $|a_j|\geqslant1$ for every $j$ large enough hence the series $\displaystyle\sum_j\frac1{|a_j|^{\rho+\epsilon}}$ converges as well. | 782 | 2,251 | {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0} | 3.703125 | 4 | CC-MAIN-2016-30 | latest | en | 0.717687 |
BME365S_Homework4_2011
# 4. (12 points) Divers must be careful when ascending to…
…out of solution and forming bubbles in the blood. Henry's Law states: P = kH·c, where P is the partial pressure of a gas (atm), c is its concentration in solution (mol/L), and kH is a proportionality constant specific to the gas and solution (L·atm/mol). At sea level and PN2 = 0.79, the concentration of N2 in blood is about 5 × 10⁻⁴ mol/L. For this problem, assume the concentration of nitrogen in a diver's blood reaches equilibrium 99 ft below the surface, and she is using a gas mixture with 79% N2. She has 5 liters of blood and swims very quickly to the surface. How much nitrogen, measured in liters, comes out of solution in her blood? Ignore nitrogen dissolved in fat and other tissue. (Hint: remember the ideal gas law?)

kH = P / c = 0.79 / (5 × 10⁻⁴) → kH = 1580 L·atm/mol
At 99 ft the absolute pressure is about 4 atm, so PN2 = 0.79 × 4 = 3.16 atm
c99 = 3.16 atm / (1580 L·atm/mol) → c99 = 0.002 mol/L

It's also easy to not…
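The excerpt cuts off before the final volume; completing the numbers as a sketch (Python; body temperature 310 K and 1 atm per 33 ft of seawater are my assumptions, not given in the excerpt):

```python
k_H = 0.79 / 5e-4                      # L*atm/mol, from the sea-level data
p_n2 = 0.79 * (1 + 99 / 33)            # N2 partial pressure at 99 ft: 3.16 atm
c_depth = p_n2 / k_H                   # mol/L dissolved at depth (0.002)
n_excess = (c_depth - 5e-4) * 5        # mol released from 5 L of blood
volume = n_excess * 0.08206 * 310 / 1  # ideal gas law: V = nRT/P at the surface
print(round(volume, 2))                # → 0.19 (liters of N2)
```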
This note was uploaded on 05/15/2012 for the course BME 365S taught by Professor Sokolov during the Spring '10 term at University of Texas.
https://mycqstate.wordpress.com/2019/04/14/randomness-and-interaction-entanglement-ups-the-game/?replytocom=3125 | 1,603,927,162,000,000,000 | text/html | crawl-data/CC-MAIN-2020-45/segments/1603107902038.86/warc/CC-MAIN-20201028221148-20201029011148-00220.warc.gz | 436,826,526 | 26,056 | ## Randomness and interaction? Entanglement ups the game!
[05/25/19 Update: Kevin Hartnett has a nice article at Quanta explaining Natarajan & Wright’s result in slightly more layman terms than I’d be able to…see here: Computer Scientists Expand the Frontier of Verifiable Knowledge]
The study of entanglement through the length of interactive proof systems has been one of the most productive applications of complexity theory to the physical sciences that I know of. Last week Anand Natarajan and John Wright, postdoctoral scholars at Caltech and MIT respectively, added a major stone to this line of work. Anand & John (hereafter “NW”) establish the following wild claim: it is possible for a classical polynomial-time verifier to decide membership in any language in non-deterministic doubly exponential time by asking questions to two infinitely powerful, but untrusted, provers sharing entanglement. In symbols, NEEXP ${\subseteq}$ MIP${^\star}$! (The last symbol is for emphasis — no, we don’t have an MIP${^\star}$! class — yet.)
What is amazing about this result is the formidable gap between the complexity of the verifier and the complexity of the language being verified. We know since the 90s that the use of interaction and randomness can greatly expand the power of polynomial-time verifiers, from NP to PSPACE (with a single prover) and NEXP (with two provers). As a result of the work of Natarajan and Wright, we now know that yet an additional ingredient, the use of entanglement between the provers, can be leveraged by the verifier — the same verifier as in the previous results, a classical randomized polynomial-time machine — to obtain an exponential increase in its verification power. Randomness and interaction brought us one exponential; entanglement gives us another.
To gain intuition for the result consider first the structure of a classical two-prover one-round interactive proof system for non-deterministic doubly exponential time, with exponential-time verifier. Cutting some corners, such a protocol can be obtained by “scaling up” a standard two-prover protocol for non-deterministic singly exponential time. In the protocol, the verifier would sample a pair of exponential-length questions ${(X,Y)}$, send ${X}$ and ${Y}$ to each prover, receive answers ${A}$ and ${B}$, and perform an exponential-time computation that verifies some predicate about ${(X,Y,A,B)}$.
How can entanglement help design an exponentially more efficient protocol? At first it may seem like a polynomial-time verifier has no way to even get started: if it can only communicate polynomial-length messages with the provers, how can it leverage their power? And indeed, if the provers are classical, it can’t: it is known that even with a polynomial number of provers, and polynomially many rounds of interaction, a polynomial-time verifier cannot decide any language beyond NEXP.
But the provers in the NW protocol are not classical. They can share entanglement. How can the verifier exploit this to its advantage? The key property that is needed is known as the rigidity of entanglement. In words, rigidity is the idea that by verifying the presence of certain statistical correlations between the provers’ questions and answers the verifier can determine precisely (up to a local change of basis) the quantum state and measurements that the provers must have been using to generate their answers. The most famous example of rigidity is the CHSH game: as already shown by Werner and Summers in 1982, the CHSH game can only be optimally, or even near-optimally, won by measuring a maximally entangled state using two mutually unbiased bases for each player. No other state or measurements will do, unless they trivially imply an EPR pair and mutually unbiased bases (such as a state that is the tensor product of an EPR pair with an additional entangled state).
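To make the optimal value concrete, here is a small numerical check (my own sketch of the standard textbook strategy, not code from the post) that the canonical entangled strategy for CHSH wins with probability cos²(π/8) ≈ 0.854, beating the classical optimum of 3/4:

```python
import numpy as np

# Shared state |phi+> = (|00> + |11>)/sqrt(2); Alice measures Z or X,
# Bob measures (Z+X)/sqrt(2) or (Z-X)/sqrt(2) -- the standard optimal bases.
I2 = np.eye(2)
Z = np.array([[1, 0], [0, -1]], dtype=float)
X = np.array([[0, 1], [1, 0]], dtype=float)

phi = np.array([1, 0, 0, 1], dtype=float) / np.sqrt(2)

A = [Z, X]                                        # Alice's observable per question x
B = [(Z + X) / np.sqrt(2), (Z - X) / np.sqrt(2)]  # Bob's observable per question y

def proj(obs, outcome):
    # projector onto the (-1)**outcome eigenspace of a +/-1-valued observable
    return (I2 + (-1) ** outcome * obs) / 2

win = 0.0
for x in (0, 1):
    for y in (0, 1):
        for a in (0, 1):
            for b in (0, 1):
                if a ^ b == x & y:                 # CHSH win condition
                    M = np.kron(proj(A[x], a), proj(B[y], b))
                    win += phi @ M @ phi / 4       # questions are uniform

print(win)
```

Rigidity is the converse statement: any strategy achieving (close to) this value must use (close to) this state and these bases, up to local isometries.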
Rigidity gives the verifier control over the provers’ use of their entanglement. The simplest use of this is for the verifier to force the provers to share a certain number ${N}$ of EPR pairs and measure them to obtain identical uniformly distributed ${N}$-bit strings. Such a test for ${N}$ EPR pairs can be constructed from ${N}$ CHSH games. In a paper with Natarajan we give a more efficient test that only requires questions and answers of length that is poly-logarithmic in ${N}$. Interestingly, the test is built on classical machinery — the low-degree test — that plays a central role in the analysis of some classical multi-prover proof systems for NEXP.
At this point we have made an inch of progress: it is possible for a polynomial-time (in ${n=\log N}$) verifier to “command” two quantum provers sharing entanglement to share ${N=2^n}$ EPR pairs, and measure them in identical bases to obtain identical uniformly random ${N}$-bit strings. What is this useful for? Not much — yet. But here comes the main insight in NW: suppose we could similarly force the provers to generate, not identical uniformly random strings, but a pair of ${N}$-bit strings ${(X,Y)}$ that is distributed as a pair of questions from the verifier in the aforementioned interactive proof system for NEEXP with exponential-time (in ${n}$) verifier. Then we could use a polynomial-time (in ${n}$) verifier to “command” the provers to generate their exponentially-long questions ${(X,Y)}$ by themselves. The provers would then compute answers ${(A,B)}$ as in the NEEXP protocol. Finally, they would prove to the verifier, using a polynomial interaction, that ${(A,B)}$ is a valid pair of answers to the pair of questions ${(X,Y)}$ — indeed, the latter verification is an NEXP problem, hence can be verified using a protocol with polynomial-time verifier.
Sounds crazy? Yes. But they did it! Of course there are many issues with the brief summary above — for example, how does the verifier even know the questions ${X,Y}$ sampled by the provers? The answer is that it doesn’t need to know the entire question; only that it was sampled correctly, and that the quadruple ${(X,Y,A,B)}$ satisfies the verification predicate of the exponential-time verifier. This can be verified using a polynomial-time interactive proof.
Diving in, the most interesting insight in the NW construction is what they call “introspection”. What makes multi-prover proof systems powerful is the ability for the verifier to send correlated questions to the provers, in a way such that each prover has only partial information about the other’s question — informally, the verifier plays a variant of prisoner’s dilemma with the provers. In particular, any interesting distribution ${(X,Y)}$ will have the property that ${X}$ and ${Y}$ are not fully correlated. For a concrete example think of the “planes-vs-lines” distribution, where ${X}$ is a uniformly random plane and ${Y}$ a uniformly random line in ${X}$. The aforementioned test for ${N}$ EPR pairs can be used to force both provers to sample the same uniformly random plane ${X}$. But how does the verifier ensure that one of the provers “forgets” parts of the plane, to only remember a uniformly random line ${Y}$ that is contained in it? NW’s insight is that the information present in a quantum state — such as the prover’s half-EPR pairs — can be “erased” by commanding the prover to perform a measurement in the wrong basis — a basis that is mutually unbiased with the basis used by the other prover to obtain its share of the query. Building on this idea, NW develop a battery of delicate tests that provide the verifier the ability to control precisely what information gets distributed to each prover. This allows a polynomial-time verifier to perfectly simulate the local environment that the exponential-time verifier would have created for the provers in a protocol for NEEXP, thus simulating the latter protocol with exponentially less resources.
One of the aspects of the NW result I like best is that they showed how the “history state barrier” could be overcome. Previous works attempting to establish strong lower bounds on the class MIP${^\star}$, such as the paper by Yuen et al., rely on a compression technique that requires the provers to share a history state of the computation performed by a larger protocol. Unfortunately, history states are very non-robust, and as a result such works only succeeded in developing protocols with vanishing completeness-soundness gap. NW entirely bypass the use of history states, and this allows them to maintain a constant gap.
Seven years ago Tsuyoshi Ito and I showed that MIP${}^\star$ contains NEXP. At the time, we thought this may be the end of the story — although it seemed challenging, surely someone would eventually prove a matching upper bound. Natarajan and Wright have defeated this expectation by showing that MIP${^\star}$ contains NEEXP. What next? NEEEXP? The halting problem? I hope to make this the topic of a future post.
## About Thomas
I am a professor in the department of Computing and Mathematical Sciences (CMS) at the California Institute of Technology, where I am also a member of the Institute for Quantum Information and Matter (IQIM). My research is in quantum complexity theory and cryptography.
This entry was posted in CHSH, QPCP, Quantum, Uncategorized.
### 8 Responses to Randomness and interaction? Entanglement ups the game!
1. RandomOracle says:
In the first paragraph, I guess it should NEEXP instead of NEXP 🙂
• Thomas says:
Fixed, thanks. I’m still recovering from the shock!
2. bentoner says:
Wow!
• Thomas says:
Hi Ben 🙂
3. Attila Pereszlényi says:
I would like to know what you think this result implies for the gap amplification version of the quantum PCP conjecture (i.e. for the inapproximability of the local Hamiltonian problem). More precisely, one could have hoped that a QMA lower bound on a certain (possibly restricted) subclass of MIP* with log-length questions would imply the QMA-hardness of the local Hamiltonian problem with constant relative gap. This result exceeds the expected QMA lower bound and made it all the way up to NEXP. This may indicate that the connection either does not exist (contrary to the classical case) or one would need to consider a very (artificially) restricted subclass of MIP*_log.
• Thomas says:
Yes, that’s a very good point. I think of it optimistically: as you write, what the result indicates is that there may exist restricted classes of MIP*_log protocols, such that their complexity would still equal QMA, but such that one would be able to transfer hardness from them to the local Hamiltonian problem. So you could say that the result is encouraging in giving a justification for our inability to relate the complexity of MIP* to that of the local Hamiltonian problem; indeed the former can contain much harder problems than the latter. It is now a very interesting question whether you can put a useful restriction on MIP* protocols that would allow a gap-preserving transfer of hardness to take place. An obvious one is limiting the entanglement to polynomially many qubits, but that doesn’t seem to be sufficient. | 2,390 | 11,128 | {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 39, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0} | 2.53125 | 3 | CC-MAIN-2020-45 | latest | en | 0.896012 |
https://www.mapleprimes.com/products/maple?page=9 | 1,642,654,297,000,000,000 | text/html | crawl-data/CC-MAIN-2022-05/segments/1642320301720.45/warc/CC-MAIN-20220120035934-20220120065934-00182.warc.gz | 924,826,399 | 46,430 | dsolve with timelimit hangs on first order homogen...
Correction
Please ignore this question. dsolve does hang, but I had a typo in the timelimit command itself when I wrote the test. Fixing this, it now times out OK.
Maybe someone can look into why dsolve hangs on this ode. But since timelimit does work, there is a workaround.
Original question
I was checking Maple's dsolve on this textbook problem
The book gives the answer in the back as
When using Maple's dsolve, I found it hangs. The strange thing is that adding timelimit() also hangs. I can understand dsolve() hanging sometimes. But what I do not understand is why it also hangs with timelimit.
I've waited 20 minutes and then gave up. As you see, the timelimit is 20 seconds. May be if I wait 2 hrs or 20 hrs or 20 days, it will finally timeout. I do not know but can't wait that long.
Do others see same problem on this ode? Does it hang for you? How about on the mac or Linux?
During this time, I see mserver.exe running at very high CPU. I restarted Maple few times, but this did not help.
Maple 2021.2 on windows 10. May be one day Maplesoft will fix timelimit so it works as expected.
> interface(version)
> Physics:-Version();
> restart;
> ode:=x*diff(y(x),x)=y(x)*cos(ln(y(x)/x)); try timeout(20,dsolve(ode)); catch: print("timedout"); end try; print("OK");
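As a side note, the ode in the test is homogeneous, so a hand solution is available to compare against once dsolve returns (my derivation, not Maple output). Substituting $v = y/x$, $y' = v + x\,v'$:

```latex
x\,y' = y\cos\!\left(\ln\frac{y}{x}\right)
\;\Longrightarrow\;
x\,\frac{dv}{dx} = v\,(\cos\ln v - 1),
\qquad
\int\frac{dv}{v\,(\cos\ln v - 1)} = \int\frac{dx}{x}.
```

With $u=\ln v$ one has $\int \frac{du}{\cos u - 1} = \cot\frac{u}{2} + C$, so the implicit solution is

```latex
\cot\!\left(\tfrac12\ln\frac{y}{x}\right) = \ln x + C .
```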
How to perform this complex substitution...
Some more difficult calculation with complex numbers
Starting from the equation
Thus obtaining
One of the important relations between logarithms and inverse trigonometric functions
How to act with a differential operator on a funct...
This might be a trivial question, but I have not been able to find the answer. I am using Maple 2018.
I am working with differential operators acting on real valued functions of a real variable. On the one hand, I want to be able to do simple algebra with these operators. For example, I might want to compute the commutator of two such operators. I have been using the DEtools package, together with the 'mult' command, to perform such calculations. See the attached file containing a simplified version of what I am doing.
Now, I also want to be able to act with my differential operators on a function, and get the resulting function.
(a) What command allows me to do that within my framework? (i.e. that of DEtools with the way I have defined and used my operators)
(b) I there a better way to proceed? (i.e. is there a better way to do both algebra with differential operators and to act with them on functions to get the resulting function)
Many thanks
Example.mw
disappearing units cause problems in plots...
# Below you find a small example about the function diffdiameter(conc, number), which has the unit length as long as the second argument number is not zero. Using the convert function it is possible to remove the units entirely, but this does not help if I want to plot the functions. Is there a possibility to plot diffdiameter(conc, number-not-zero) and diffdiameter(conc, 0) in one diagram?

```
with(Units[Simple]);
with(plots);
diffdiameter := (conc, number) -> -number*diameter*exp(-conc/Unit(mol/kg));
diameter := 2*Unit(m);
evalf(diffdiameter(Unit(mol/kg), 6));   # -4.414553294 Unit(m)
evalf(diffdiameter(Unit(mol/kg), 0));   # 0.
convert(evalf(diffdiameter(Unit(mol/kg), 6)), unit_free);   # -4.414553294
convert(evalf(diffdiameter(Unit(mol/kg), 0)), unit_free);   # 0.
plot([convert(evalf(diffdiameter(conc, 0.001)), unit_free),
      convert(evalf(diffdiameter(conc, 6)), unit_free)],
     conc = 0 .. 4*Unit(mol/kg),
     title = "derivative Diameter with conc",
     labels = ["concentration / mol/kg", "deriv diameter / m"],
     color = ["blue", "red"],
     legend = ["bare diameter", "PLUS diameter"],
     labeldirections = ["horizontal", "vertical"],
     titlefont = [Helvetica, bold, 16], axesfont = [Helvetica, 14],
     labelfont = [Helvetica, 14], axes = boxed);
plot([convert(evalf(diffdiameter(conc, 0)), unit_free),
      convert(evalf(diffdiameter(conc, 6)), unit_free)],
     conc = 0 .. 4*Unit(mol/kg));
# Error, (in plot) invalid subscript selector
```
how to obtain this transformation using dchange?...
on DLMF page, they show this transformation on independent variable for second order ode
https://dlmf.nist.gov/1.13#Px7
About half way down the page, under Elimination of First Derivative by Change of Independent Variable section.
I tried to verify it using Maple's dchange. But the problem is that dchange seems to want the old variable on the left side (z in this example) and the new variable (eta in this example) on the right side of the transformation. On the above web page it is the other way around.
Here are my attempts
```restart;
ode:=diff(w(z),z$2)+f(z)*diff(w(z),z)+g(z)*w(z)=0;
tranformation:=eta=int(exp(-int(f(z),z)),z);
PDEtools:-dchange({tranformation},ode,known={z},unknown={eta});
PDEtools:-dchange({tranformation},ode,{eta},known={z});
PDEtools:-dchange({tranformation},ode,{eta});
```
All give errors
Error, (in dchange/info) missing a list with the new variables
Error, (in dchange/info) the new variables are not contained in the rhs of the direct transformation equations
Error, (in dchange/info) the new variables are not contained in the rhs of the direct transformation equations
The problem is that it does not seem possible to invert the transformation shown on the webpage, so that the old variable z shows up on the left side and the new variable (eta) on the right side.
Why is this restriction on dchange, since one tells it which variable is new and which is old? Maybe I am not using dchange correctly in this example.
Any suggestion for a workaround to use dchange to verify the above result?
Here is my hand derivation (pdf file attached also)
Can the above be done using dchange?
Maple 2021.2 on windows 10
sol.pdf
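Independently of dchange, the DLMF identity can at least be verified by hand (my sketch, same notation as the ode above, writing $F=\int f\,dz$):

```latex
\eta = \int e^{-\int f\,dz}\,dz
\;\Longrightarrow\;
\frac{d\eta}{dz} = e^{-F},
\qquad
\frac{dw}{dz} = e^{-F}\frac{dw}{d\eta},
\qquad
\frac{d^2w}{dz^2} = e^{-2F}\frac{d^2w}{d\eta^2} - f\,e^{-F}\frac{dw}{d\eta}.
```

Substituting into $w''+f\,w'+g\,w=0$, the first-derivative terms cancel:

```latex
e^{-2F}\frac{d^2w}{d\eta^2} - f\,e^{-F}\frac{dw}{d\eta} + f\,e^{-F}\frac{dw}{d\eta} + g\,w = 0
\;\Longrightarrow\;
\frac{d^2w}{d\eta^2} + g\,e^{2F}\,w = 0,
```

which is the form stated on the DLMF page; a successful dchange invocation should reproduce it.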
Tetrahedron with integer volume...
A tetrahedron with the side lengths shown in this picture has a volume that is an integer. Is there another tetrahedron like that?
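Since the picture did not survive the copy, here is a generic way to check candidates: the Cayley–Menger determinant gives the volume of a tetrahedron directly from its six edge lengths. This is my own sketch; the edge labels d12, …, d34 (distance between vertices i and j) are my convention, not from the post.

```python
import numpy as np

def tetra_volume(d12, d13, d14, d23, d24, d34):
    """Volume from the six edge lengths via 288 * V^2 = det(CM)."""
    s = lambda d: d * d
    cm = np.array([
        [0,      1,      1,      1,      1],
        [1,      0,      s(d12), s(d13), s(d14)],
        [1, s(d12),      0,      s(d23), s(d24)],
        [1, s(d13), s(d23),      0,      s(d34)],
        [1, s(d14), s(d24), s(d34),      0],
    ], dtype=float)
    return np.sqrt(np.linalg.det(cm) / 288.0)
```

For example, the right-corner tetrahedron with mutually perpendicular edges 1, 2, 3 at one vertex (remaining edges √5, √10, √13) has volume 1·2·3/6 = 1, an integer; the regular tetrahedron with unit edges has volume √2/12.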
How to optimize code for counting experimental inv...
Hi! I have recently started using Maple for chaos in dynamical systems, and I am thinking about computing the experimental invariant density measure (which is, in brief, "how often the point visits a given interval") for some discrete mappings (in this case the logistic mapping 3.7*x*(1-x)).
restart;
with(plots);
x := array(1 .. 10^6 + 2);
x[1] := 0.2;
for i to 10^6 do
x[i + 1] := 3.7*x[i]*(1 - x[i]);
end do;
counter := 0;
for i to 10^6 do
if 0 <= x[i] and x[i] < 0.1 then counter := counter + 1; end if;
end do;
counter;
counter := 0;
for i to 10^6 do
if 0.2 <= x[i] and x[i] < 0.3 then counter := counter + 1; end if;
end do;
counter;
..........
counter := 0;
for i to 10^6 do
if 0.9 <= x[i] and x[i] < 1 then counter := counter + 1; end if;
end do;
counter;
display(plot([[0, 0], [0.3, 0]]), plot([[0.3, 74089], [0.4, 74089]]), plot([[0.4, 57290], [0.5, 57290]]), plot([[0.5, 86726], [0.6, 86726]]), plot([[0.6, 122087], [0.7, 122087]]), plot([[0.7, 269178], [0.8, 269178]]), plot([[0.8, 185490], [0.9, 185490]]), plot([[0.9, 115405], [1, 115405]]))
I don't know how I can automate this code. I need smaller intervals, because the length 0.1 I took is not good enough.
I want to get something like this (it is for logistic map such as above but in below example they plotted this graph for 4*x*(1-x) )
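One way to automate the binning, sketched here in Python with numpy (the same structure — one array of counts indexed by trunc(x*n_bins) — carries straight back to a Maple Array; the bin count and burn-in length are my choices, not from the post):

```python
import numpy as np

def orbit_histogram(r=3.7, x0=0.2, n_iter=10**5, n_bins=100, burn_in=1000):
    x = x0
    for _ in range(burn_in):          # discard the transient
        x = r * x * (1 - x)
    counts = np.zeros(n_bins, dtype=int)
    for _ in range(n_iter):
        x = r * x * (1 - x)
        counts[min(int(x * n_bins), n_bins - 1)] += 1
    return counts / n_iter            # fraction of visits per bin

density = orbit_histogram()
```

Each entry of `density` plays the role of one of the manual counters above, so refining the intervals is just a matter of raising `n_bins`; the resulting step plot approximates the invariant density of the map.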
Dutch math book comparing two functions with a tay...
Hello everyone,
The "rows and series" chapter is coming to an end. But im not getting this question. Ive got a feeling they are not really specific with this book. But that could just be me.
Any way here is the question:
"In classical physics there is the kinetic energy of a body with the mass m0 and the speed v given by E1=1/2*m0*v^2. According to Einstein the kinetic energy E2=(m*c^2)-(m0*c^2)=((m0*c^2)/sqrt(1-(v/c)^2))-m0*c^2, at which m is the relativistic mass with a speed v, and m0 the mass in rest. Further c is the speed of light. Wright down E2 as a linear function of v^2 and show that E2=E1 when v is small compared to c."
Now I can't see what they did to get this answer:
A Taylor series was probably used; the question before it also used a Taylor series.
If someone knows what they did: what steps lead to the answer the book gave?
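The book's worked answer was an image that didn't survive the copy, but the standard route (my reconstruction, not the book's wording) is a binomial/Taylor expansion in the small quantity $v^2/c^2$:

```latex
E_2 = m_0 c^2\!\left[\left(1-\frac{v^2}{c^2}\right)^{-1/2}-1\right],
\qquad
(1-x)^{-1/2} = 1 + \tfrac12 x + \tfrac38 x^2 + \cdots,\quad x=\frac{v^2}{c^2},
```

so that

```latex
E_2 = m_0 c^2\!\left(\frac{v^2}{2c^2} + \frac{3v^4}{8c^4} + \cdots\right)
    = \tfrac12 m_0 v^2\!\left(1 + \frac{3v^2}{4c^2} + \cdots\right).
```

Keeping only the term linear in $v^2$ gives $E_2 \approx \tfrac12 m_0 v^2 = E_1$ whenever $v \ll c$.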
Thank you!
Greetings,
The Function
Student[ODEs][ODESteps]: simple pendulum not suppo...
But ODESteps supports similar ODEs (see attached).
Is the ODESteps command not generic enough to cover the pendulum or have I missed something?
ODESteps.mw
How can i plot the phase portrait of a nonlinear d...
Hi everyone, how can I plot a nonlinear phase portrait? Here k, w, alpha, K, gamma, beta are arbitrary constants and I have three equilibrium points:
How can I plot these phase portraits? Thanks in advance.
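The system itself was in an image that didn't survive, so here is only the generic recipe with a placeholder system — the ODEs, constants, and plot ranges below are mine, not the poster's. (In Maple itself, DEtools[DEplot] or DEtools[phaseportrait] serve the same purpose.)

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # headless backend, safe without a display
import matplotlib.pyplot as plt

# Placeholder system: x' = y, y' = -alpha*sin(x) - beta*y (damped-pendulum-like)
alpha, beta = 1.0, 0.3

x, y = np.meshgrid(np.linspace(-6, 6, 200), np.linspace(-4, 4, 200))
u = y                                # x' component of the vector field
v = -alpha * np.sin(x) - beta * y    # y' component

fig, ax = plt.subplots()
ax.streamplot(x, y, u, v, density=1.2)
# mark the equilibria: y = 0 and sin(x) = 0, i.e. x = n*pi
eq = np.arange(-1, 2) * np.pi
ax.plot(eq, np.zeros_like(eq), "ko")
fig.savefig("phase_portrait.png")
```

Replace `u` and `v` with your own right-hand sides and mark your three equilibrium points the same way.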
plotting a sequence with animate...
hello,
I want to animate a sequence from a numeric solution I received. I have a sequence for displacement and another one for time; is there a way to do that?
Minus invisible on screen...
Has somebody experienced something like that before?
I was searching for a bug in a sheet, and was absolutely unable to find out why some results in the sheet were different from another sheet.
Found out that it actually was a pure graphics problem. With normal zoom (100%) the minus sign is not visible.
Blowing up the zoom to 125% shows the sign again.
Coefficients in expression which is not polynomial...
I have an expression of the form
Expr := n0*C[0] + n1*C[1] + ... + nk*C[k] + n = 0,
where the numbers n0,...,nk and n are known to Maple (after it made some calculations), whereas C[0],...,C[k] are undetermined.
I would like to know the values of all of n0,...,nk and n. For n0,...,nk, I found them with
coeff(Expr, C[m], 1),
with m in 0,...,k. But I don't know how to get the value of the "independent term" n.
Can someone help me with this?
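If Expr is literally linear in the C[m], substituting C[m] = 0 for all m leaves exactly the independent term — in Maple, something like eval(lhs(Expr), {seq(C[m] = 0, m = 0 .. k)}) (a sketch, untested against your worksheet). The same idea in Python/sympy for illustration, with made-up coefficients:

```python
import sympy as sp

C = sp.symbols("C0:3")                    # stand-ins for C[0], C[1], C[2]
expr = 3*C[0] + 5*C[1] - 2*C[2] + 7       # example: n0=3, n1=5, n2=-2, n=7

coeffs = [expr.coeff(c, 1) for c in C]    # the n0, ..., nk
const = expr.subs({c: 0 for c in C})      # the independent term n

print(coeffs, const)
```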
How do I make a new list from a list of lists?...
I have a list of lists made from a combinat and was wondering how to make a new list from it that only contains the lists whose elements sum to 0.
For example:
L1 := [[1,1,1,1], [1,2,0,0], [0,0,0,0], [1,1,-2,0]];
and I want the result to be
L2:= [[0,0,0,0], [1,1,-2,0]]
Thank you in advance! I'm new to maple so would appreciate the guidance!
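In Maple this should be a single select call, e.g. L2 := select(l -> evalb(add(i, i in l) = 0), L1); (a sketch, untested). For comparison, the same filter in Python:

```python
L1 = [[1, 1, 1, 1], [1, 2, 0, 0], [0, 0, 0, 0], [1, 1, -2, 0]]

# keep only the sublists whose elements sum to zero
L2 = [l for l in L1 if sum(l) == 0]

print(L2)
```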
Complex number commands in Maple ?...
I am studying something about complex numbers.
Which commands are specific to complex numbers in Maple?
• complexplot()
• conformal()
• conformal3d()
It seems that there are a lot of standard calculus commands that can be used by adding the word complex.
I am using here a package downloaded from the Maple website: Complex Analysis for Mathematics and Engineering.
I got the impression that some modern plot commands for complex numbers are not yet present in this book .. and how about other commands?
an_introduction_to_complex_numbers.mws
| 3,043 | 11,228 | {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0} | 2.5625 | 3 | CC-MAIN-2022-05 | latest | en | 0.941347 |
https://nados.io/question/score-of-parentheses | 1,675,042,399,000,000,000 | text/html | crawl-data/CC-MAIN-2023-06/segments/1674764499790.41/warc/CC-MAIN-20230130003215-20230130033215-00152.warc.gz | 432,162,199 | 21,764 | `{"id":"3376cafc-535a-43ac-98a2-b725de0babe5","name":"Score Of Parentheses","description":"Given a balanced parentheses string S, compute the score of the string based on the following rule:\r\n () has score 1\r\n AB has score A + B, where A and B are balanced parentheses strings.\r\n (A) has score 2 * A, where A is a balanced parentheses string.\r\n\r\nScore of ()()() string is 3 => 1 + 1 + 1\r\nScore of (()) string is 2 => 2 * 1","inputFormat":"Input is managed for you","outputFormat":"Output is managed for you","constraints":"1: S is a balanced parentheses string, containing only ( and ).\r\n2: 2 <= S.length <= 50","sampleCode":{"cpp":{"code":"#include <iostream>\n#include <stack>\nusing namespace std;\n \n// Function to calculate\n// score of parentheses\nlong long scoreOfParentheses(string S)\n{\n // Write your code here \n}\n \n\nint main()\n{\n string S1 ;\n cin>>S1;\n cout << scoreOfParentheses(S1) << endl;\n \n return 0;\n}"},"java":{"code":"import java.io.*;\r\nimport java.util.*;\r\n\r\npublic class Main {\r\n public static int scoreOfParentheses(String S) {\r\n return 0;\r\n }\r\n\r\n public static void main(String[] args) throws Exception {\r\n BufferedReader read = new BufferedReader(new InputStreamReader(System.in));\r\n\r\n int score = scoreOfParentheses(read.readLine());\r\n System.out.println(score);\r\n \r\n }\r\n}\r\n"},"python":{"code":""}},"points":10,"difficulty":"medium","sampleInput":"(()(()))","sampleOutput":"6\r\n","questionVideo":"https://www.youtube.com/embed/rWsv46ME6lI","hints":[],"associated":[],"solutionSeen":false,"tags":[],"meta":{"path":[{"id":0,"name":"home"},{"id":"0c54b191-7b99-4f2c-acb3-e7f2ec748b2a","name":"Data Structures and 
Algorithms","slug":"data-structures-and-algorithms","type":0},{"id":"8c6022a5-8654-4226-918f-8110af738bd4","name":"Stacks For Intermediate","slug":"stacks-for-intermediate-688","type":0},{"id":"b741e094-1a08-47d2-9939-09b35cb549c3","name":"Score Of Parentheses","slug":"score-of-parentheses","type":1}],"next":{"id":"178cde19-0d1c-4073-ba53-22a4fdcedd39","name":"Score Of Parentheses Medium MCQ","type":0,"slug":"score-of-parentheses-medium-mcq"},"prev":{"id":"0ab476db-73e0-4d99-a845-1d103b3464ea","name":"Remove outermost parentheses","type":3,"slug":"remove-outermost-parentheses"}}}`
# Score Of Parentheses
Given a balanced parentheses string S, compute the score of the string based on the following rule: () has score 1 AB has score A + B, where A and B are balanced parentheses strings. (A) has score 2 * A, where A is a balanced parentheses string. Score of ()()() string is 3 => 1 + 1 + 1 Score of (()) string is 2 => 2 * 1
`{"id":"3376cafc-535a-43ac-98a2-b725de0babe5","name":"Score Of Parentheses","description":"Given a balanced parentheses string S, compute the score of the string based on the following rule:\r\n () has score 1\r\n AB has score A + B, where A and B are balanced parentheses strings.\r\n (A) has score 2 * A, where A is a balanced parentheses string.\r\n\r\nScore of ()()() string is 3 => 1 + 1 + 1\r\nScore of (()) string is 2 => 2 * 1","inputFormat":"Input is managed for you","outputFormat":"Output is managed for you","constraints":"1: S is a balanced parentheses string, containing only ( and ).\r\n2: 2 <= S.length <= 50","sampleCode":{"cpp":{"code":"#include <iostream>\n#include <stack>\nusing namespace std;\n \n// Function to calculate\n// score of parentheses\nlong long scoreOfParentheses(string S)\n{\n // Write your code here \n}\n \n\nint main()\n{\n string S1 ;\n cin>>S1;\n cout << scoreOfParentheses(S1) << endl;\n \n return 0;\n}"},"java":{"code":"import java.io.*;\r\nimport java.util.*;\r\n\r\npublic class Main {\r\n public static int scoreOfParentheses(String S) {\r\n return 0;\r\n }\r\n\r\n public static void main(String[] args) throws Exception {\r\n BufferedReader read = new BufferedReader(new InputStreamReader(System.in));\r\n\r\n int score = scoreOfParentheses(read.readLine());\r\n System.out.println(score);\r\n \r\n }\r\n}\r\n"},"python":{"code":""}},"points":10,"difficulty":"medium","sampleInput":"(()(()))","sampleOutput":"6\r\n","questionVideo":"https://www.youtube.com/embed/rWsv46ME6lI","hints":[],"associated":[],"solutionSeen":false,"tags":[],"meta":{"path":[{"id":0,"name":"home"},{"id":"0c54b191-7b99-4f2c-acb3-e7f2ec748b2a","name":"Data Structures and Algorithms","slug":"data-structures-and-algorithms","type":0},{"id":"8c6022a5-8654-4226-918f-8110af738bd4","name":"Stacks For Intermediate","slug":"stacks-for-intermediate-688","type":0},{"id":"b741e094-1a08-47d2-9939-09b35cb549c3","name":"Score Of 
Parentheses","slug":"score-of-parentheses","type":1}],"next":{"id":"178cde19-0d1c-4073-ba53-22a4fdcedd39","name":"Score Of Parentheses Medium MCQ","type":0,"slug":"score-of-parentheses-medium-mcq"},"prev":{"id":"0ab476db-73e0-4d99-a845-1d103b3464ea","name":"Remove outermost parentheses","type":3,"slug":"remove-outermost-parentheses"}}}`
Editor
# Score Of Parentheses
medium
Given a balanced parentheses string S, compute the score of the string based on the following rule: () has score 1 AB has score A + B, where A and B are balanced parentheses strings. (A) has score 2 * A, where A is a balanced parentheses string. Score of ()()() string is 3 => 1 + 1 + 1 Score of (()) string is 2 => 2 * 1
## Constraints
1: S is a balanced parentheses string, containing only ( and ). 2: 2 <= S.length <= 50
## Format
### Input
Input is managed for you
### Output
Output is managed for you
## Example
### Sample Input
`(()(()))`
### Sample Output
```
6
```
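The three rules above translate into a single pass with a stack of per-depth partial scores: push a fresh 0 on every `(`, and on every `)` either count a unit pair or double the inner score. A sketch in Python (the starter stubs on this page are C++/Java; this is not the platform's reference solution):

```python
def score_of_parentheses(s):
    stack = [0]                       # partial score at each nesting depth
    for ch in s:
        if ch == '(':
            stack.append(0)           # open a new depth with score 0
        else:
            v = stack.pop()           # score accumulated inside this pair
            stack[-1] += max(2 * v, 1)  # "()" scores 1, "(A)" scores 2*A
    return stack[0]

print(score_of_parentheses("(()(()))"))
```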
https://www.unibo.it/en/teaching/course-unit-catalogue/course-unit/2021/406649 | 1,675,618,770,000,000,000 | text/html | crawl-data/CC-MAIN-2023-06/segments/1674764500273.30/warc/CC-MAIN-20230205161658-20230205191658-00389.warc.gz | 1,052,937,704 | 28,306 | 75229 - Mathematics (B.A.)
Course Unit Page
• Credits 12
• SSD SECS-S/06
• Language Spanish
• Campus of Bologna
• Degree Programme First cycle degree programme (L) in Business and Economics (cod. 8965)
SDGs
This teaching activity contributes to the achievement of the Sustainable Development Goals of the UN 2030 Agenda.
Learning outcomes
At the end of the course the student will be capable of using the techniques of Linear Algebra; furthermore he will have acquired a working knowledge of First Year Calculus, together with the related applications in Finance and Economics.
Course contents
Sequences and limits.
Definition of sequence. Convergence and divergence. Limit of a sequence. Limit of a function. Geometrical interpretation of a limit. Calculating lateral limits. Finding the limit of a function algebraically.
Derivative.
Functions and continuity. Definition of the derivative. Geometric interpretation of the derivative. Differentiable functions. Stationary and extreme points. Differentiation rules and associated theorems. High order derivatives. Taylor expansion. Second derivative condition. Concavity and extremizing a function. Applications of the derivative. Study and interpretation of graphs of functions. Optimization.
Integrals.
Riemann’s sums. Definition of integral. Geometric interpretation of the integral. Proper integrals. Anti-derivation. Fundamental theorem of calculus. Improper integrals. Integration techniques: integration by substitution, integration by parts. Application of integrals.
Matrices.
Definition of matrix. Matrices classification. Rank. Operations between matrices. Elementary transformations of a matrix. The inverse matrix. Determinants. The adjoint of a matrix.
System of linear equations.
Relation between matrices and systems of linear equations (SLE). Solution set and geometric interpretation. Algorithms for solving SLE: Gauss, Gauss-Jordan, inverse matrix.
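For illustration, the Gauss elimination algorithm named in this topic can be sketched in a few lines (a Python sketch; it is not part of the course materials):

```python
def gauss_solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting.

    A is a list of n rows of n floats, b a list of n floats.
    Returns the solution vector x. Assumes A is nonsingular.
    """
    n = len(A)
    # build the augmented matrix [A | b]
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        # partial pivoting: pick the row with the largest entry in this column
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        # eliminate the entries below the pivot
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    # back substitution
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        s = sum(M[r][c] * x[c] for c in range(r + 1, n))
        x[r] = (M[r][n] - s) / M[r][r]
    return x

# 2x + y = 3 and x + 3y = 5 have the solution x = 0.8, y = 1.4
print(gauss_solve([[2.0, 1.0], [1.0, 3.0]], [3.0, 5.0]))
```

The same triangularization, without the back-substitution step, is the Gauss-Jordan variant also listed above.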
Vector spaces.
Definition of vector space. Sub-spaces and their properties. Linear combination and linear independence. Inner product. Basis and dimension. Change of basis. Orthonormal bases.
Eigenvalues and eigenvectors.
Definition of eigenvalues and eigenvectors of matrices. Characteristic polynomial.
Applications of linear algebra.
Sequences and series.
Sequences and series. Convergence.
Multivariate functions.
Definition of function of several variables. Vector-valued functions and graphs in space.
Differentiation.
Limit and continuity. Partial derivatives. Directional derivative. Multivariate continuity. Gradient. Total derivative. Chain rule. Laplacian.
Optimization.
Extremum points: maximum, minimum and saddle points; Lagrange multipliers for constrained extrema.
Applications of multi-variable calculus.
Cálculo de una variable. J. Stewart, 7ma. edición, Cengage Learning Editores.
Cálculo de varias variables. J. Stewart, 7ma. edición, Cengage Learning Editores.
Calculus (vol. 1 y 2). T. M. Apostol. Wiley.
Calculus. M. Spivak. Editorial Reverté.
Álgebra lineal con aplicaciones. S. Grossman, Ed. McGraw Hill.
Introducción al álgebra lineal. Howard Antón, Editorial Limusa.
Teaching methods
Lectures, exercises, and multimedia tools
Assessment methods
Multiple choice exams, exercises in class and for homework, midterm projects.
An oral examination could be carried out in exceptional circumstances (if decided by the University).
Teaching tools
Digital notes, lists of exercises and TPs, videos
Office hours
See the website of Rocio Angelica Bermudez Ramos
https://authors.library.caltech.edu/75726/4/File_S2.txt

```matlab
function R = DA_integrate(S,p)
%% function takes argument of stimulus and a parameter set and implements the DA
%% model as described in Clark et al., 2013
%% the parameter set is as described in DA_model_script.m
%% This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 Unported License.
%% CC-BY-SA
%% Damon A. Clark, 2013

t = [0:3000]; % filters to be this long; don't use n*tau longer than a few hundred ms in this case...

Ky = generate_simple_filter(p.tau_y,p.n_y,t);
Kz = p.C*Ky + (1-p.C) * generate_simple_filter(p.tau_z,p.n_z,t);

y = filter(Ky,1,S);
z = filter(Kz,1,S);

% in the case that tau_r is 0, don't have to integrate; this approximation can
% also be used when tau_r/(1+p.B*z) << other time scales in y or z, for all or most z
if p.tau_r == 0
    R = p.A*y./(1+p.B*z);
    return;
end

% set up some variables to pass to the integration routine
pass.y = y;
pass.z = z;
pass.tau_r = p.tau_r;
pass.A = p.A;
pass.B = p.B;

% these options seem to work well, but can be modified, of course
opts = odeset('reltol',1e-6,'abstol',1e-5,'MaxStep',25);
T0 = [1,length(z)];
X0 = 0; % start from 0
% X0 = p.A*y(1)/(1+p.B*z(1)); % start from steady state of first inputs

% the equations can be quite stiff, and this ode solver works quite well
[tout,xout] = ode15s(@dxdt,T0,X0,opts,pass);

% linearly interpolate the output to time intervals of the inputs
R = interp1(tout,xout,[1:length(z)],'linear');

% uncomment this if you want to check out the filters
% figure; hold on;
% plot(cumsum(Ky));
% plot(cumsum(Kz));

function f = generate_simple_filter(tau,n,t)
f = t.^n.*exp(-t/tau); % functional form in paper
f = f/tau^(n+1)/gamma(n+1); % normalize appropriately

function dx = dxdt(t,x,pass)
B = pass.B;
A = pass.A;
tau_r = pass.tau_r;
% can also use qinterp1 here for some speed up; see mathworks website
zt = interp1([1:length(pass.z)],pass.z,t,'linear');
yt = interp1([1:length(pass.y)],pass.y,t,'linear');
% this is the DA model equation at time t
dx = 1/tau_r * (A*yt - (1 + B*zt) * x);
```
https://www.aa.quae.nl/en/verschijnselen/2038.html
The diagram below is explained further on the page for the year 2000.
For an explanation of the above pictures, see the page for the year 2000.
The last table on this page lists a number of noteworthy planetary phenomena during the listed year. The first column ("date") gives the date and hour, in the form year-month-dayThour in the Gregorian calendar, in Universal Time (UT). The second column ("pl") identifies the planet that shows the phenomenon:
pl Planet
Me Mercury
Ve Venus
Ma Mars
Ju Jupiter
Sa Saturn
Ur Uranus
Ne Neptune
Lu Moon
The third column ("*") provides the type of the phenomenon. The fourth column ("AU") shows the value (distance) in Astronomical Units, if applicable. The fifth column ("deg") gives the value of a relevant angle (declination $$δ$$, elongation $$E$$, conjunction spread $$σ$$, or ecliptic longitude $$λ$$) in degrees. Which angle it is for each phenomenon is shown below in column "?". Declinations and ecliptical longitudes are measured relative to the equinox of the date.
?
Explanation
p perigee
a apogee
+d δ greatest northern declination
−d δ greatest southern declination
c λ conjunction (with the Sun)
o λ opposition (relative to the Sun)
e E greatest elongation
xa λ passage through the ascending node (zero ecliptical latitude)
xd λ passage through the descending node (zero ecliptical latitude)
b σ least conjunction spread (Mercury - Saturn)
B σ least conjunction spread (Mercury - Neptune)
s σ greatest conjunction spread (Mercury - Saturn)
S σ greatest conjunction spread (Mercury - Neptune)
V λ begin eastward (prograde) acceleration
Pg λ stationary point: begin eastward (prograde) motion
v λ begin westward (retrograde) acceleration
Rg λ stationary point: begin westward (retrograde) motion
The calculations for the planets are based on the VSOP87 model (type A) of Laskar, and the calculations for the Moon are based on the most detailed model from [Meeus]. One position per day was calculated for each celestial object. The hour of the day for each phenomenon was determined by interpolation.
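The per-day sampling plus interpolation described above can be illustrated with a small sketch (Python; the sample values below are invented for illustration, not taken from the table):

```python
def event_fraction(f0, f1):
    """Given a quantity sampled on two consecutive days (f0 on day d,
    f1 on day d+1) that crosses zero in between, return the crossing
    time as a fraction of a day after the first sample, using inverse
    linear interpolation."""
    return f0 / (f0 - f1)

# e.g. an ecliptic latitude of +0.30 deg one day and -0.10 deg the next:
frac = event_fraction(0.30, -0.10)   # 0.75 of a day
hour = round(frac * 24)              # rounded to the whole hour, as in the table
print(hour)  # -> 18
```

The same idea applies to extrema (interpolating where the day-to-day difference changes sign) rather than zero crossings.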
date pl * AU deg
2038-01-01T10 Ne −d +8.54
2038-01-02T23 Ve p 0.2652
2038-01-03T10 Ve V +283.86
2038-01-03T13 Ve c +283.77
2038-01-04T11 ** S +284.58
2038-01-04T20 Lu −d −22.19
2038-01-05T01 Lu c +285.32
2038-01-05T09 Lu xd +289.62
2038-01-05T14 Ne Pg +26.32
2038-01-11T00 Me xa +272.79
2038-01-11T14 Ur V +112.15
2038-01-11T18 Ur o +112.15
2038-01-12T01 Lu a 0.002704
2038-01-12T02 Lu V +12.74
2038-01-12T09 Ur p 17.72
2038-01-13T08 Ju p 4.249
2038-01-13T22 Ju V +114.80
2038-01-14T07 Ju o +114.75
2038-01-16T03 Me −d −23.66
2038-01-18T12 ** B +178.26
2038-01-19T06 Lu +d +22.19
2038-01-19T20 Lu xa +109.69
2038-01-20T15 Lu o +121.23
2038-01-23T22 Lu p 0.002450
2038-01-23T22 Ve Pg +276.14
2038-01-23T22 Lu v +168.33
2038-01-24T15 Ve +d −16.78
2038-01-30T03 ** S +257.47
2038-01-30T16 ** s +271.40
2038-02-01T03 Lu −d −22.19
2038-02-01T17 Lu xd +289.66
2038-02-03T17 Lu c +315.49
2038-02-04T07 Me a 1.409
2038-02-08T21 Lu a 0.002709
2038-02-09T06 Lu V +22.18
2038-02-11T23 Me o +323.79
2038-02-14T09 ** B +157.63
2038-02-15T16 Lu +d +22.25
2038-02-16T05 Lu xa +109.13
2038-02-19T04 Lu o +151.11
2038-02-20T20 Lu p 0.002413
2038-02-20T20 Lu v +176.43
2038-02-21T00 Ve −d −17.53
2038-02-21T20 Me v +341.87
2038-02-25T09 ** S +204.43
2038-02-28T08 Lu −d −22.31
2038-02-28T20 Lu xd +288.54
2038-03-01T17 Me xd +356.30
2038-03-02T17 Sa p 8.350
2038-03-02T20 Sa o +162.85
2038-03-02T22 Sa V +162.85
2038-03-05T11 Lu c +345.47
2038-03-08T12 Lu a 0.002715
2038-03-09T02 Lu V +28.85
2038-03-10T09 Me e +18.19
2038-03-13T07 ** B +130.44
2038-03-15T01 Lu +d +22.45
2038-03-15T05 Ju Pg +109.73
2038-03-15T11 Lu xa +107.09
2038-03-15T15 Ve e −46.59
2038-03-16T07 Ju +d +22.44
2038-03-17T18 Me Rg +12.38
2038-03-19T03 Me +d +7.95
2038-03-20T14 Lu o +180.55
2038-03-21T05 Lu v +190.11
2038-03-21T05 Lu p 0.002389
2038-03-25T06 ** S +179.03
2038-03-25T07 Ur +d +22.42
2038-03-27T08 Me c +7.28
2038-03-27T14 Ur Pg +110.09
2038-03-27T14 Lu −d −22.55
2038-03-27T21 Lu xd +285.95
2038-03-28T05 Me V +6.53
2038-03-30T18 Me p 0.5936
2038-04-04T01 Ma +d +25.24
2038-04-04T04 Lu c +15.03
2038-04-04T17 Lu a 0.002718
2038-04-05T02 Lu V +25.74
2038-04-08T13 Ve xa +334.30
2038-04-09T00 Me xa +359.89
2038-04-09T08 ** B +124.16
2038-04-10T00 Me Pg +359.83
2038-04-11T08 Lu +d +22.72
2038-04-11T13 Lu xa +104.06
2038-04-15T06 Me −d −0.87
2038-04-18T01 Ne v +28.80
2038-04-18T05 Ne c +28.80
2038-04-18T15 Lu v +204.84
2038-04-18T16 Lu p 0.002385
2038-04-18T22 Lu o +209.52
2038-04-19T04 Ne a 30.82
2038-04-21T13 ** S +167.88
2038-04-23T21 Lu −d −22.82
2038-04-23T23 Lu xd +282.91
2038-04-24T04 Me e −27.16
2038-05-01T11 Lu V +15.54
2038-05-01T20 Lu a 0.002717
2038-05-03T21 Lu c +44.05
2038-05-07T07 Sa +d +9.92
2038-05-07T19 ** B +105.35
2038-05-08T14 Lu xa +101.53
2038-05-08T15 Lu +d +22.93
2038-05-10T17 Sa Pg +159.39
2038-05-16T23 Lu v +218.51
2038-05-17T01 Lu p 0.002401
2038-05-18T06 Lu o +237.94
2038-05-19T17 ** S +142.76
2038-05-21T05 Lu xd +280.92
2038-05-21T07 Lu −d −22.98
2038-05-28T16 Me xd +63.45
2038-05-29T01 Lu V +19.24
2038-05-29T08 Lu a 0.002712
2038-05-31T22 Me a 1.322
2038-05-31T23 Me o +70.66
2038-06-01T20 Me v +72.56
2038-06-02T12 Lu c +72.59
2038-06-04T18 Lu xa +100.44
2038-06-04T21 Lu +d +23.03
2038-06-04T23 ** B +83.53
2038-06-11T20 Me +d +25.25
2038-06-14T01 Lu v +228.74
2038-06-14T03 Lu p 0.002431
2038-06-16T14 Lu o +266.03
2038-06-17T14 Lu xd +280.34
2038-06-17T18 Lu −d −23.05
2038-06-17T19 ** S +126.98
2038-06-25T23 Lu V +26.39
2038-06-26T01 Lu a 0.002705
2038-07-02T00 Lu xa +100.39
2038-07-02T01 Lu c +100.79
2038-07-02T04 Lu +d +23.04
2038-07-03T02 ** B +76.40
2038-07-05T23 Me xa +130.61
2038-07-06T18 Me e +26.11
2038-07-11T03 Lu v +225.51
2038-07-11T07 Lu p 0.002463
2038-07-15T00 Lu xd +280.39
2038-07-15T03 Lu −d −23.03
2038-07-15T13 Ur a 19.69
2038-07-15T23 Lu o +294.09
2038-07-16T09 Ur c +114.45
2038-07-16T09 ** S +124.59
2038-07-16T13 Ur v +114.46
2038-07-20T13 Ve +d +22.89
2038-07-20T18 Me Rg +138.59
2038-07-23T19 Lu a 0.002702
2038-07-23T23 Lu V +34.31
2038-07-26T20 Me −d +11.43
2038-07-29T08 Lu xa +100.28
2038-07-29T10 Ne +d +10.33
2038-07-29T12 Lu +d +23.05
2038-07-30T16 Ve xd +107.22
2038-07-31T05 ** B +74.92
2038-07-31T07 Me p 0.5907
2038-07-31T12 Lu c +128.90
2038-08-01T18 Ju v +130.77
2038-08-02T16 Ju c +130.97
2038-08-02T18 Me V +132.50
2038-08-03T11 Ne Rg +31.38
2038-08-03T14 Ju a 6.317
2038-08-03T14 Me c +131.88
2038-08-05T00 Lu v +192.24
2038-08-05T10 Lu p 0.002465
2038-08-11T06 Lu xd +279.93
2038-08-11T10 Lu −d −23.08
2038-08-13T12 ** S +124.77
2038-08-13T20 Me Pg +127.05
2038-08-14T10 Lu o +322.26
2038-08-20T13 Lu a 0.002704
2038-08-20T23 Lu V +42.39
2038-08-21T18 Me +d +16.81
2038-08-21T21 Me e −18.46
2038-08-24T16 Me xd +134.06
2038-08-25T15 Lu xa +99.07
2038-08-25T20 Lu +d +23.15
2038-08-28T04 ** B +79.37
2038-08-29T22 Lu c +157.19
2038-09-01T05 Lu v +190.94
2038-09-01T11 Lu p 0.002434
2038-09-07T08 Lu xd +278.12
2038-09-07T15 Lu −d −23.24
2038-09-08T07 Me v +159.25
2038-09-08T10 ** b +29.56
2038-09-11T12 Sa v +169.53
2038-09-11T12 ** S +133.46
2038-09-11T16 Sa c +169.56
2038-09-11T20 Sa a 10.40
2038-09-13T00 Lu o +350.83
2038-09-16T00 Me o +173.85
2038-09-17T06 Lu a 0.002711
2038-09-17T22 Lu V +50.08
2038-09-21T19 Lu xa +96.49
2038-09-22T05 Lu +d +23.38
2038-09-23T07 Me a 1.402
2038-09-25T18 ** B +95.21
2038-09-28T06 Lu c +185.75
2038-09-29T08 Lu v +201.54
2038-09-29T12 Lu p 0.002403
2038-10-01T22 Me xa +201.35
2038-10-04T09 Lu xd +275.22
2038-10-04T20 Lu −d −23.49
2038-10-06T02 Ma a 2.590
2038-10-10T17 ** S +154.92
2038-10-12T16 Lu o +19.95
2038-10-14T18 Ve a 1.718
2038-10-14T18 Lu a 0.002716
2038-10-15T15 Lu V +55.15
2038-10-18T00 Ve o +205.34
2038-10-18T20 Lu xa +93.51
2038-10-19T12 Lu +d +23.64
2038-10-21T22 Ne p 28.82
2038-10-22T18 Ne o +29.95
2038-10-22T23 Ne V +29.95
2038-10-24T11 ** B +118.56
2038-10-27T15 Lu c +214.85
2038-10-27T18 Lu v +216.35
2038-10-27T22 Lu p 0.002386
2038-10-31T04 Ur −d +20.91
2038-10-31T12 Lu xd +272.60
2038-10-31T19 Ma c +218.98
2038-11-01T04 Lu −d −23.72
2038-11-01T08 Me e +23.52
2038-11-03T07 Ur Rg +118.80
2038-11-08T09 Me −d −24.36
2038-11-08T11 ** S +183.79
2038-11-10T21 Lu a 0.002717
2038-11-11T09 Lu V +48.95
2038-11-11T10 Lu o +49.65
2038-11-12T02 Me Rg +249.42
2038-11-14T23 Lu xa +91.57
2038-11-15T19 Lu +d +23.83
2038-11-19T06 Ve xa +245.77
2038-11-20T15 Me xd +243.56
2038-11-22T02 Me p 0.6775
2038-11-22T11 Me V +241.11
2038-11-22T13 ** B +135.49
2038-11-22T14 Me c +240.91
2038-11-22T20 Ma xa +234.23
2038-11-25T07 Lu v +232.69
2038-11-25T10 Lu p 0.002390
2038-11-26T01 Lu c +244.43
2038-11-26T18 Ve v +255.18
2038-11-27T20 Lu xd +271.31
2038-11-28T14 Lu −d −23.87
2038-12-02T00 Me Pg +233.24
2038-12-02T17 Me +d −16.04
2038-12-06T15 ** S +202.39
2038-12-08T00 Lu V +41.47
2038-12-08T01 Lu a 0.002714
2038-12-10T11 Me e −20.79
2038-12-11T05 Lu o +79.78
2038-12-11T12 Ve −d −24.25
2038-12-12T04 Lu xa +91.12
2038-12-13T01 Lu +d +23.89
2038-12-13T08 Ju −d +11.82
2038-12-16T19 Ju Rg +151.46
2038-12-19T21 ** B +158.44
2038-12-23T19 Lu v +248.68
2038-12-23T20 Lu p 0.002415
2038-12-25T07 Lu xd +271.21
2038-12-25T13 Lu c +274.32
2038-12-26T01 Lu −d −23.87
2038-12-28T21 Me xa +262.87
Last updated: 2021-07-19
https://zerkalowie.web.app/laforte81500nel/blackjack-running-count-true-count-3036.html

# Blackjack running count true count
True Count Versus Running Count in Blackjack
True Count Calculation — The Whole Story. Card counters know that before we bet or play using a balanced strategy, we must adjust the running count by the un-dealt cards. Blackjack card counting books generally provide a simple example, like a running count of +6 divided by three remaining decks yields a true count of +2.

Card counting - Wikipedia. Card counting is a casino card game strategy used primarily in the blackjack family of casino games to determine whether the next hand is likely to give a probable advantage to the player or to the dealer.

Running count to true count | Black Jack Basics. Counting cards in blackjack is not an overnight culture change to your gaming habits. It takes studying, learning and an endless amount of practice. Which brings me to the question of the day: how do you handle conversion of the running count to the true count if the fraction doesn't work out to an even number?

In blackjack, what does 'true count' and 'running count' mean? For card counters, the running count is the running total of the values (usually -1, 0, or +1) for all the cards which have been dealt since the beginning of the shoe. The true count is the running count divided by the number of decks (or half decks, for some counting systems) remaining in the shoe.
running count and true count - Blackjack and Card Counting: With a balanced count, the running count is an ongoing count that is kept and used to calculate the True Count. It is calculated by physically counting the cards as they are played, according to the tag numbers for each card of the system being used.

Card Counting - Wizard of Odds: To determine the "true count," divide the running count by the number of decks left to be played, or in some strategies, the number of half decks. This will tell you the relative richness of the deck in good cards. The true count is used in two ways: to determine how much to bet and how to play your hand.

How True is Your True Count? - Blackjack Forum Online: Arnold Snyder shows the mathematical impossibility of perfect true count accuracy in blackjack card counting, with implications for win rate, bet sizing and table hopping. This Blackjack Forum article is considered seminal card counting research.
### True Count Calculation — The Whole Story
True Count of Running Count 21 Half Decks 6 Say you have a running count of 21 and to get the true count you must divide the running count by the number of half decks. In this case let's say there are 6 half ... True Count Strategy | Blackjack-Play is dedicated to gambling! Now that you’ve learned how to keep the running count (see Part 2 Article) it’s time to learn how to compute the true count. You will use the latter to vary the ...
### A Blackjack card-counting calculator that provides Running Count, True Count, and Bet Amounts in realtime. - yashar1/Card-Counter.
Practicing Blackjack Part III: Practicing How to Get to the True Count . A lot of people begin having issues when they start dividing to convert from the running count to the true count, but if you separate the individual processes and master them, it’s not too complicated to put it all together. True Count Calculation — The Whole Story True Count Calculation — The Whole Story. Card counters know that before we bet or play using a balanced strategy, we must adjust the running count by the un-dealt cards. That is, convert the running count into a true count. True Count Versus Running Count in Blackjack The running count is what a player tallies up as each card is dealt, so essentially this can be used solely or as a means to divide by the decks left to reach the true count. Advantages of the Running Count. If you’re able to remember the numbers assigned to each card and keep a track of it while playing rounds of blackjack, then this will ... Blackjack Running Count Advantage Here we will look at the advantage at each running count. That is the percentage gain you can expect on average for the initial Blackjack bet. Six decks, two decks and single-deck with two or four players are displayed. When we looked at the chart for true counts (True Count Advantages) we saw four lines that looked very much alike. Here we see ...
## How to count cards in blackjack: counting cards in blackjack is a simple card game strategy used to determine whether the next hand is likely to give a probable advantage to the player or to the dealer. The running count consists of grouping cards based on their Effect of Removal, which is essentially how their elimination from the shoe affects your...
True Count Calculation — The Whole Story: Most explanations of true count calculation simply say that the running count is divided by the number of remaining decks in the shoe. Blackjack card counting books generally provide a simple example, like a running count of +6 divided by three remaining decks yields a true count of +2 — and they leave it at that.
How True is Your True Count? - Blackjack Forum Online: The whole purpose of adjusting running count to true count is to obtain an accurate estimate of your advantage at any deck level. Expert opinion in blackjack has always held that a running count of +6 with one deck dealt out of a 4-deck game indicates a player advantage equivalent to that of a running count...

When to run a true count? - Blackjack - Gambling - Page 1: Running Count: The running count is the summation of all of your +1's and -1's at ANY point in the shoe. True Count: The true count is the running count divided by the number of decks remaining at the given time of calculation. The true count must be constantly RE-CALCULATED. If you have a RC of +10 with 5 decks left, then the TC is +2.

Blackjack Running Count And True Count: In an attempt to thwart card counters, casinos began using multiple decks. Nice try, Casinos! To use our running count in a multiple deck game, we simply have to translate our information into a "True Count" or count per deck. Meet the real MIT Blackjack Team and ...

Using the True Count in Blackjack Card Counting
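The running-count and true-count arithmetic repeated throughout these excerpts can be condensed into a short sketch (Python; the Hi-Lo tag values are the standard ones the excerpts allude to, and the function names are my own):

```python
# Standard Hi-Lo tags: 2-6 count +1, 7-9 count 0, tens and aces count -1.
HI_LO = {**{r: +1 for r in "23456"},
         **{r: 0 for r in "789"},
         **{r: -1 for r in ("T", "J", "Q", "K", "A")}}

def running_count(cards_seen):
    """Sum of the Hi-Lo tag values of all cards dealt so far."""
    return sum(HI_LO[c] for c in cards_seen)

def true_count(rc, decks_remaining):
    """Running count adjusted for the un-dealt cards."""
    return rc / decks_remaining

# The example from the text: a running count of +6 with three decks
# remaining gives a true count of +2.
print(true_count(6, 3))  # -> 2.0
```

With half-deck systems the only change is dividing by the number of remaining half decks instead.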
https://espanol.libretexts.org/Under_Construction/Sandboxes/Henry/Webwork_Import/AlfredUniv/diffeq/ReductionOfOrder
# ReductionOfOrder
http://youtubedia.com/Pages/Show/595?courseid=5&chapterid=32&sectionid=122

# Price elasticity of demand
Summary
Price elasticity of demand
• $$q\left( p \right)$$ is a given demand function (market demand $$X^l$$ or individual demand $$x_i^l$$) where $$p$$ is the price of the good. $$q$$ will depend on other variables as well.
$ε= \frac{dq}{dp}⋅ \frac{p}{q\left( p \right)}$
• is called the price elasticity of demand or the elasticity of demand with respect to price.
• Example ( $$c$$ is an arbitrary constant)
$q\left( p \right)=c/p$
$\frac{dq}{dp}=- \frac{c}{p^2}$
$ε=- \frac{c}{p^2}⋅ \frac{p}{c/p}=-1$
• For small changes in price, $$Δp$$ small,
$ε≈ \frac{Δq}{Δp}⋅ \frac{p}{q}= \frac{Δq/q}{Δp/p}$
• $$(Δq/q)⋅100$$ is the percentage change in $$q$$.
• $$ε$$ is the approximate percentage change in $$q$$ when $$p$$ increases by 1%.
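The worked example above ($$q(p)=c/p$$ has elasticity $$-1$$) can be checked numerically; this is a small illustrative sketch (Python), not part of the original page:

```python
def elasticity(q, p, dp=1e-6):
    """Numerical price elasticity: (dq/dp) * p / q(p), using a central
    difference to approximate the derivative."""
    dq_dp = (q(p + dp) - q(p - dp)) / (2 * dp)
    return dq_dp * p / q(p)

c = 5.0
q = lambda p: c / p          # the demand function from the example
print(elasticity(q, 2.0))    # close to -1, for any c and any p > 0
```

Demand functions of the form $$q(p) = c p^{-a}$$ have constant elasticity $$-a$$, which the same routine reproduces numerically.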
https://virtualmathmuseum.org/SpaceCurves/autoevolute/autoevoluteN.html

## Closed Constant Curvature Autoevolutes
#### These curves are their own evolutes
These curves are defined by the following variation of the Serret-Frenet ODE:
T'(t) = kappa*vel(t)*N(t),
N'(t) = -kappa*vel(t)*T(t) + kappa/vel(t)*(T(t) x N(t)).
The velocity function vel(t) satisfies
vel(t+pi) = 1/vel(t). It is constructed from a function h(t) with h(t+pi) = -h(t)
as: vel(t) = sqrt(1 + h^2(t)) - h(t). Alternative: vel(t) = exp(h(t)).
Either form then gives vel(t+pi)*vel(t) = 1, since (sqrt(1 + h^2) - h)*(sqrt(1 + h^2) + h) = 1.
For more freedom we scale t := fr*t. This parameter is fairly redundant; by changing all the others one can obtain similar curves.
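The reciprocal property vel(t+pi) = 1/vel(t) can be checked numerically; here is a small sketch (Python) using the sample choice h(t) = sin(t), which satisfies h(t+pi) = -h(t):

```python
import math

def vel(t, h=math.sin):
    # vel(t) = sqrt(1 + h(t)^2) - h(t)
    return math.sqrt(1.0 + h(t) ** 2) - h(t)

# Because (sqrt(1+h^2) - h) * (sqrt(1+h^2) + h) = 1 and sin(t+pi) = -sin(t),
# vel(t + pi) * vel(t) = 1 for every t.
for t in (0.0, 0.7, 2.5):
    print(vel(t) * vel(t + math.pi))  # -> 1.0 (up to rounding)
```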
https://www.mathworks.com/matlabcentral/cody/problems/2176-which-way-to-go/solutions/403346

Cody
# Problem 2176. Which way to go?
Solution 403346
Submitted on 13 Feb 2014 by J-G van der Toorn
This solution is locked. To view this solution, you need to provide a solution of the same size or smaller.
### Test Suite
Test Status Code Input and Output
1 Pass
%% m=2 n=2 y_correct =6; assert(isequal(wayz_in_maze(m,n),y_correct))
``` m = 2 n = 2 ```
2 Pass
%% m=3 n=4 y_correct =35; assert(isequal(wayz_in_maze(m,n),y_correct))
``` m = 3 n = 4 ```
3 Pass
%% m=5 n=6 y_correct =462; assert(isequal(wayz_in_maze(m,n),y_correct))
``` m = 5 n = 6 ```
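The expected values 6, 35 and 462 match the monotone lattice-path count binomial(m+n, m), which suggests (though the page does not show the hidden function) an implementation along these lines, sketched here in Python:

```python
from math import comb

def ways_in_maze(m, n):
    # number of shortest paths through an m x n grid:
    # choose which m of the m+n unit steps go in one direction
    return comb(m + n, m)

print(ways_in_maze(2, 2), ways_in_maze(3, 4), ways_in_maze(5, 6))
# -> 6 35 462
```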
https://www.stumblingrobot.com/2016/03/17/find-z-series-n-n1-z-2z1n-converges/
# Find all z such that the series (n/(n+1)) (z/(2z+1))^n converges
Find all complex numbers $$z$$ such that the series

$$\sum_{n=1}^{\infty} \frac{n}{n+1} \left( \frac{z}{2z+1} \right)^n$$

converges.
First, writing $$w = \frac{z}{2z+1}$$, we have

$$\frac{n}{n+1} w^n = w^n - \frac{w^n}{n+1}.$$

We know from the previous exercise (Section 10.20, Exercise #44) that the series

$$\sum_{n=1}^{\infty} \frac{w^n}{n+1}$$

converges if and only if $$|w| \leq 1$$ and $$w \neq 1$$.

Therefore, the series converges if $$|w| < 1$$ since both terms in the sum converge (the geometric series $$\sum w^n$$ converges for $$|w| < 1$$). The series diverges if $$|w| = 1$$ with $$w \neq 1$$ since $$\sum w^n$$ diverges and $$\sum \frac{w^n}{n+1}$$ converges. Finally, the series diverges if $$|w| > 1$$ or $$w = 1$$ since the terms do not tend to $$0$$. In terms of $$z$$, the series converges exactly when $$|z| < |2z+1|$$.
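The condition |z/(2z+1)| < 1 suggested by the title can be probed numerically with partial sums; a small sketch (Python; the sample points are my own choice):

```python
def w(z):
    # the ratio appearing in the series
    return z / (2 * z + 1)

def partial_sum(z, N=2000):
    return sum(n / (n + 1) * w(z) ** n for n in range(1, N + 1))

# z = 1: |w| = 1/3 < 1, so the partial sums settle down quickly.
# z = -0.4: w = -0.4/0.2 = -2, |w| > 1, so the terms blow up.
print(abs(w(1)), abs(w(-0.4)))  # -> about 0.333 and about 2.0
```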
https://cboard.cprogramming.com/c-programming/130483-output-piecewise-defined-function.html

# Thread: Output of a piecewise defined function
1. ## Output of a piecewise defined function
Hi all, I'm having trouble printing the output of a piecewise defined function
the function is:
f(x) = 1 for x = 0, 1, 2
f(x) = f(x-2) + f(x-3) for x > 2
so the user will enter a value for x and I have to print all of the values of the function up to x. I'm trying to do this with loops, I'm not sure if recursion can be used but I would like to do it with loops/if statements only. I can obviously print the first 3 values of x. Can anyone add any tips? Thanks
2. Post what you've got so far and let the gang have a look...
I'm sure you'll get plenty of helpful hints and tips.
3. Okay, with this I'm actually having a bit of trouble even starting it. I use the obvious if (x<=2) then it will go into the block of the if statement and print either 1, 1 1, or 1 1 1 for the first three values.
For x>2, I don't even know how to get started. I'll think I have an idea, like adding two numbers and putting it into a variable and looping it somehow, but it just turns into a bunch of garble. What I'm looking for here is some way to start this program, like what the fundamental idea is behind it, or just to be thrown a bone.
4. recursion would be the way to go
5. No recursion please, only methods with loops and if statements.
6. So, the fact is you're waiting for one of us to write you your program?
7. No not at all, I was hoping for a one or two-liner hint that might be able to point me in the right direction because I am stuck.
8. Never mind, I got it.
Lock thread, ban op, etc.
Popular pages Recent additions | 421 | 1,610 | {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0} | 2.71875 | 3 | CC-MAIN-2017-13 | longest | en | 0.943624 |
https://ch.mathworks.com/matlabcentral/cody/problems/1871-numbers-in-extended-form/solutions/318612 | 1,603,122,984,000,000,000 | text/html | crawl-data/CC-MAIN-2020-45/segments/1603107863364.0/warc/CC-MAIN-20201019145901-20201019175901-00273.warc.gz | 270,713,817 | 17,075 | Cody
Problem 1871. Numbers in extended form
Solution 318612
Submitted on 12 Sep 2013 by Paul Berglund
This solution is locked. To view this solution, you need to provide a solution of the same size or smaller.
Test Suite
Test Status Code Input and Output
1 Pass
x = 8; y_correct = '8'; assert(strcmp(extended_form(x),y_correct))
2 Pass
%% x = 10234; y_correct = '10000+200+30+4'; assert(strcmp(extended_form(x),y_correct))
3 Pass
%% x=987654321; y_correct='900000000+80000000+7000000+600000+50000+4000+300+20+1'; assert(strcmp(extended_form(x),y_correct))
4 Pass
%% x = 1000; y_correct = '1000'; assert(strcmp(extended_form(x),y_correct))
5 Pass
%% x = 314159265358979; y_correct = '300000000000000+10000000000000+4000000000000+100000000000+50000000000+9000000000+200000000+60000000+5000000+300000+50000+8000+900+70+9'; assert(strcmp(extended_form(x),y_correct))
6 Pass
%% x=540200; y_correct='500000+40000+200'; assert(strcmp(extended_form(x),y_correct))
Community Treasure Hunt
Find the treasures in MATLAB Central and discover how the community can help you!
Start Hunting! | 344 | 1,101 | {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0} | 2.78125 | 3 | CC-MAIN-2020-45 | latest | en | 0.464328 |
http://nrich.maths.org/public/leg.php?code=-333&cl=1&cldcmpid=10588 | 1,503,558,333,000,000,000 | text/html | crawl-data/CC-MAIN-2017-34/segments/1502886133042.90/warc/CC-MAIN-20170824062820-20170824082820-00078.warc.gz | 297,514,548 | 10,095 | # Search by Topic
#### Resources tagged with Investigations similar to Doubling Fives:
Filter by: Content type:
Stage:
Challenge level:
### Exploring Wild & Wonderful Number Patterns
##### Stage: 2 Challenge Level:
EWWNP means Exploring Wild and Wonderful Number Patterns Created by Yourself! Investigate what happens if we create number patterns using some simple rules.
### Mobile Numbers
##### Stage: 1 and 2 Challenge Level:
In this investigation, you are challenged to make mobile phone numbers which are easy to remember. What happens if you make a sequence adding 2 each time?
### It Was 2010!
##### Stage: 1 and 2 Challenge Level:
If the answer's 2010, what could the question be?
### Train Carriages
##### Stage: 1 and 2 Challenge Level:
Suppose there is a train with 24 carriages which are going to be put together to make up some new trains. Can you find all the ways that this can be done?
### The Pied Piper of Hamelin
##### Stage: 2 Challenge Level:
This problem is based on the story of the Pied Piper of Hamelin. Investigate the different numbers of people and rats there could have been if you know how many legs there are altogether!
### Sometimes We Lose Things
##### Stage: 2 Challenge Level:
Well now, what would happen if we lost all the nines in our number system? Have a go at writing the numbers out in this way and have a look at the multiplications table.
### Bean Bags for Bernard's Bag
##### Stage: 2 Challenge Level:
How could you put eight beanbags in the hoops so that there are four in the blue hoop, five in the red and six in the yellow? Can you find all the ways of doing this?
##### Stage: 2 Challenge Level:
Write the numbers up to 64 in an interesting way so that the shape they make at the end is interesting, different, more exciting ... than just a square.
### It Figures
##### Stage: 2 Challenge Level:
Suppose we allow ourselves to use three numbers less than 10 and multiply them together. How many different products can you find? How do you know you've got them all?
### Polo Square
##### Stage: 2 Challenge Level:
Arrange eight of the numbers between 1 and 9 in the Polo Square below so that each side adds to the same total.
##### Stage: 2 Challenge Level:
What happens when you add the digits of a number then multiply the result by 2 and you keep doing this? You could try for different numbers and different rules.
##### Stage: 2 Challenge Level:
Lolla bought a balloon at the circus. She gave the clown six coins to pay for it. What could Lolla have paid for the balloon?
### Exploring Number Patterns You Make
##### Stage: 2 Challenge Level:
Explore Alex's number plumber. What questions would you like to ask? What do you think is happening to the numbers?
### Sets of Numbers
##### Stage: 2 Challenge Level:
How many different sets of numbers with at least four members can you find in the numbers in this box?
### Newspapers
##### Stage: 2 Challenge Level:
When newspaper pages get separated at home we have to try to sort them out and get things in the correct order. How many ways can we arrange these pages so that the numbering may be different?
### Number Squares
##### Stage: 1 and 2 Challenge Level:
Start with four numbers at the corners of a square and put the total of two corners in the middle of that side. Keep going... Can you estimate what the size of the last four numbers will be?
### Sending Cards
##### Stage: 2 Challenge Level:
This challenge asks you to investigate the total number of cards that would be sent if four children send one to all three others. How many would be sent if there were five children? Six?
### Caterpillars
##### Stage: 1 Challenge Level:
These caterpillars have 16 parts. What different shapes do they make if each part lies in the small squares of a 4 by 4 square?
### My New Patio
##### Stage: 2 Challenge Level:
What is the smallest number of tiles needed to tile this patio? Can you investigate patios of different sizes?
### Room Doubling
##### Stage: 2 Challenge Level:
Investigate the different ways you could split up these rooms so that you have double the number.
### 3 Rings
##### Stage: 2 Challenge Level:
If you have three circular objects, you could arrange them so that they are separate, touching, overlapping or inside each other. Can you investigate all the different possibilities?
##### Stage: 2 Challenge Level:
I like to walk along the cracks of the paving stones, but not the outside edge of the path itself. How many different routes can you find for me to take?
### Building with Rods
##### Stage: 2 Challenge Level:
In how many ways can you stack these rods, following the rules?
### Plants
##### Stage: 1 and 2 Challenge Level:
Three children are going to buy some plants for their birthdays. They will plant them within circular paths. How could they do this?
### Sweets in a Box
##### Stage: 2 Challenge Level:
How many different shaped boxes can you design for 36 sweets in one layer? Can you arrange the sweets so that no sweets of the same colour are next to each other in any direction?
### Sets of Four Numbers
##### Stage: 2 Challenge Level:
There are ten children in Becky's group. Can you find a set of numbers for each of them? Are there any other sets?
### New House
##### Stage: 2 Challenge Level:
In this investigation, you must try to make houses using cubes. If the base must not spill over 4 squares and you have 7 cubes which stand for 7 rooms, what different designs can you come up with?
### Homes
##### Stage: 1 Challenge Level:
There are to be 6 homes built on a new development site. They could be semi-detached, detached or terraced houses. How many different combinations of these can you find?
### Cubes Here and There
##### Stage: 2 Challenge Level:
How many shapes can you build from three red and two green cubes? Can you use what you've found out to predict the number for four red and two green?
### Stairs
##### Stage: 1 and 2 Challenge Level:
This challenge is to design different step arrangements, which must go along a distance of 6 on the steps and must end up at 6 high.
### Doplication
##### Stage: 2 Challenge Level:
We can arrange dots in a similar way to the 5 on a dice and they usually sit quite well into a rectangular shape. How many altogether in this 3 by 5? What happens for other sizes?
### Magic Constants
##### Stage: 2 Challenge Level:
In a Magic Square all the rows, columns and diagonals add to the 'Magic Constant'. How would you change the magic constant of this square?
### Ice Cream
##### Stage: 2 Challenge Level:
You cannot choose a selection of ice cream flavours that includes totally what someone has already chosen. Have a go and find all the different ways in which seven children can have ice cream.
### Month Mania
##### Stage: 1 and 2 Challenge Level:
Can you design a new shape for the twenty-eight squares and arrange the numbers in a logical way? What patterns do you notice?
### Street Party
##### Stage: 2 Challenge Level:
The challenge here is to find as many routes as you can for a fence to go so that this town is divided up into two halves, each with 8 blocks.
### Magazines
##### Stage: 2 Challenge Level:
Let's suppose that you are going to have a magazine which has 16 pages of A5 size. Can you find some different ways to make these pages? Investigate the pattern for each if you number the pages.
### Calcunos
##### Stage: 2 Challenge Level:
If we had 16 light bars which digital numbers could we make? How will you know you've found them all?
### Halloween Investigation
##### Stage: 2 Challenge Level:
Ana and Ross looked in a trunk in the attic. They found old cloaks and gowns, hats and masks. How many possible costumes could they make?
### Become Maths Detectives
##### Stage: 2 Challenge Level:
Explore Alex's number plumber. What questions would you like to ask? Don't forget to keep visiting NRICH projects site for the latest developments and questions.
### Abundant Numbers
##### Stage: 2 Challenge Level:
48 is called an abundant number because it is less than the sum of its factors (without itself). Can you find some more abundant numbers?
### Calendar Patterns
##### Stage: 2 Challenge Level:
In this section from a calendar, put a square box around the 1st, 2nd, 8th and 9th. Add all the pairs of numbers. What do you notice about the answers?
### Five Coins
##### Stage: 2 Challenge Level:
Ben has five coins in his pocket. How much money might he have?
### Tiles on a Patio
##### Stage: 2 Challenge Level:
How many ways can you find of tiling the square patio, using square tiles of different sizes?
### Worms
##### Stage: 2 Challenge Level:
Place this "worm" on the 100 square and find the total of the four squares it covers. Keeping its head in the same place, what other totals can you make?
### Division Rules
##### Stage: 2 Challenge Level:
This challenge encourages you to explore dividing a three-digit number by a single-digit number.
### Making Boxes
##### Stage: 2 Challenge Level:
Cut differently-sized square corners from a square piece of paper to make boxes without lids. Do they all have the same volume?
### Making Cuboids
##### Stage: 2 Challenge Level:
Let's say you can only use two different lengths - 2 units and 4 units. Using just these 2 lengths as the edges how many different cuboids can you make?
### Round and Round the Circle
##### Stage: 2 Challenge Level:
What happens if you join every second point on this circle? How about every third point? Try with different steps and see if you can predict what will happen.
### Cuboid-in-a-box
##### Stage: 2 Challenge Level:
What is the smallest cuboid that you can put in this box so that you cannot fit another that's the same into it?
### Three Sets of Cubes, Two Surfaces
##### Stage: 2 Challenge Level:
How many models can you find which obey these rules? | 2,222 | 9,916 | {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0} | 4.25 | 4 | CC-MAIN-2017-34 | latest | en | 0.908781 |
https://mathisradical.com/algebra-problem/highest-common-factor-11-plus-.html | 1,656,112,096,000,000,000 | text/html | crawl-data/CC-MAIN-2022-27/segments/1656103033816.0/warc/CC-MAIN-20220624213908-20220625003908-00629.warc.gz | 432,686,146 | 12,549 | Algebra Tutorials!
Friday 24th of June
Try the Free Math Solver or Scroll down to Tutorials!
Depdendent Variable
Number of equations to solve: 23456789
Equ. #1:
Equ. #2:
Equ. #3:
Equ. #4:
Equ. #5:
Equ. #6:
Equ. #7:
Equ. #8:
Equ. #9:
Solve for:
Dependent Variable
Number of inequalities to solve: 23456789
Ineq. #1:
Ineq. #2:
Ineq. #3:
Ineq. #4:
Ineq. #5:
Ineq. #6:
Ineq. #7:
Ineq. #8:
Ineq. #9:
Solve for:
Please use this form if you would like to have this math solver on your website, free of charge. Name: Email: Your Website: Msg:
### Our users:
The best part of The Algebrator is its approach to mathematics. Not only it guides you on the solution but also tells you how to reach that solution.
Sarah Johnston, WA
The simple interface of Algebrator makes it easy for my son to get right down to solving math problems. Thanks for offering such a useful program.
David Figueroa, NY.
I appreciate that it is basicIt has helped me tremendously
D.E., Kentucky
The Algebrator is the perfect algebra tutor. It covers everything you need to know about algebra in an easy and comprehensive manner.
Max Duncan, OH
### Students struggling with all kinds of algebra problems find out that our software is a life-saver. Here are the search phrases that today's searchers used to find our site. Can you find yours among them?
#### Search phrases used on 2009-09-04:
• dividing fractions calculator
• algibra
• whole number operation worksheets
• non homogeneous first order system
• Graphic Designing Software
• how to write algebraic expressions in java
• square root worksheets
• solving equations with multiple variables
• World Cruises
• Vitamins Analysis
• Variable Universal Life Insurance Policy
• how to simplify cube roots
• solving fraction exponents
• inequality solver online step by step
• linear algebra exam with solutions
• dividing multiplying and subtracting fractions
• Equations Systems
• Bahamas Gyms
• Beijing Travels
• source code for combination and permutation for visual basic
• symbolic equations java source code
• Algebra 1 Prentice Hall
• Disability Insurance in California
• free parabola calculator
• WWW Gaiam
• synthetic division calculator matlab
• matlab convert decimal to fraction
• ti-89 graph system of equations
• CRM Guru
• find probability, TI-83 Plus
• Loans Tenants
• System Equations
• Chapter 7 Bankruptcy Rule
• formula to find "lowest common denominator"
• 1 800 Contact
• algebra 1 free homework solver
• half life algebra 2
• how to factor out cubed root
• Solving Linear Inequality
• binary code calculator
• pre algebra 6th grade book
• Why is it important to simplify radical expressions before adding or subtracting?
• VoIP WiFi
• graphing log base on TI-89 calculator
• algebra calculator systems substitution method
• Solution for elementary fluid mechanics seventh edition
• Best Rated Secured Credit Cards
• cheat on plato software test
• Credit Decisions
• linear ecuation class power point
• free examples of prealgebra
• distributive property worksheets
• math trivia
• Columbia Cosmetic Surgery
• Carnival Cruise Line New Ship
• FREE KS3 SATS PAPER WITH ANSWERS
• Worksheets or Exercises in Fourier transforms maths
• scientific calculator cubed root
• Beacon Credit Scoring System
• step by step factoring calculator
• square root of 512
• Three Credit Reporting Agencies
• Software Consulting Companies
• year 9 algebra revision sheets
• Andrew Gross Books
• trinomial calculator
• 8TH GRADE MATH pre algebra
• Year 8 Maths Exercise online
• middle school algebra sample sheet
• Health Insurance in New York
• Goldmine FrontOffice
• add and subtract polynomial functions +"lesson plan"
• gre permutation combination question
• HP Presario S6104NX P4
• Free Online Programming Textbooks Algebra Structure and Method book 1
• Southfield Mi Jobs
• Bankruptcy Attorney Houston
• ti-89 number bases | 940 | 3,903 | {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0} | 2.671875 | 3 | CC-MAIN-2022-27 | latest | en | 0.843325 |
https://cdn.goconqr.com/mindmap/3117669/chapter-7-investigating-data | 1,721,588,271,000,000,000 | text/html | crawl-data/CC-MAIN-2024-30/segments/1720763517768.40/warc/CC-MAIN-20240721182625-20240721212625-00346.warc.gz | 143,654,431 | 10,859 | # Chapter 7: Investigating Data
### Description
8 Maths Mind Map on Chapter 7: Investigating Data, created by Sarah L on 27/07/2015.
Mind Map by Sarah L, updated more than 1 year ago
Created by Sarah L almost 9 years ago
138
5
## Resource summary
Chapter 7: Investigating Data
1. Mean and mode
1. mean = sum of scores/ number of score
1. Mode=most frequent score
2. Organising and displaying data
1. Graphs
1. Column
1. Secto
2. Tables
1. Charts
1. Pie chart
2. Types of Data
1. Categorical
1. Groups in categories
2. Numerical
1. Numbers
1. Discrete
1. distinct values
2. Contiuous
1. not exact
1. decimal points
2. Median and range
1. median= middle score( in order)
1. range= highest score- lowest score
2. Analysing frequency tables
1. fx colum= score x frequency
2. Dot plots and Stem and leaf plots
1. Dot plots show cluster and outliers
1. Stem and lea plots show modes, clusters and how the scores are spread out
2. Frequency histograms and ploygons
1. Sampling
1. Sample
1. selection of people
2. Census
1. whole population
2. random sample
1. NOT biased
1. =not opiniated questions
3. Designing survey questions
1. simple
1. not biased
1. easy to answer
1. not too long
2. Analysing data
1. Mean
1. affected by outliers
1. *best when no outliers
2. Mode
1. not affected by outliers
1. *best when most common score is needed
2. Median
1. not affected by outliers
1. *when data has outliers
2. Range
1. *when a measure of spread is needed
### Similar
Statistics, Data and Area (Semester 2 Exam)
The Exponential Model
TYPES OF DATA
Statistics Key Words
STEM AND LEAF DIAGRAMS
FREQUENCY TABLES: MODE, MEDIAN AND MEAN
HISTOGRAMS
CUMULATIVE FREQUENCY DIAGRAMS
GROUPED DATA FREQUENCY TABLES: MODAL CLASS AND ESTIMATE OF MEAN
Maths GCSE - What to revise!
GCSE Maths Symbols, Equations & Formulae | 536 | 1,804 | {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0} | 3.9375 | 4 | CC-MAIN-2024-30 | latest | en | 0.738419 |
http://gmatclub.com/forum/collection-of-remainder-problems-in-gmat-74776-40.html?sort_by_oldest=true | 1,484,626,377,000,000,000 | text/html | crawl-data/CC-MAIN-2017-04/segments/1484560279410.32/warc/CC-MAIN-20170116095119-00303-ip-10-171-10-70.ec2.internal.warc.gz | 121,618,381 | 65,449 | Collection of remainder problems in GMAT : GMAT Quantitative Section - Page 3
Math Expert | 04 Dec 2010, 07:58
Fijisurf wrote:
I guess I am still not sure why in the original post sandipchowdhury wrote:
sandipchowdhury wrote:
St 1: when n is divided by 21 ( 7 and 3) the remainder is an odd number.
But it cannot be 7, 3 or 9. Hence the possibilities are: 1 and 5.
Why can't the remainder be 7, 3 or 9? (Because in the case of 28 divided by 21 the remainder is 7.)
OK. It's just not correct.
When n is divided by 21 the remainder is an odd number --> $$n=21q+odd=7*3q+odd$$. Now, this odd number can be ANY odd number from 1 to 19, inclusive.
As for r:
If $$n=22$$ then $$n$$ divided by 21 gives a remainder of 1 and $$n$$ divided by 7 also gives a remainder of 1;
If $$n=24$$ then $$n$$ divided by 21 gives a remainder of 3 and $$n$$ divided by 7 also gives a remainder of 3;
If $$n=26$$ then $$n$$ divided by 21 gives a remainder of 5 and $$n$$ divided by 7 also gives a remainder of 5;
If $$n=28$$ then $$n$$ divided by 21 gives a remainder of 7 and $$n$$ divided by 7 gives a remainder of 0.
So r can equal 1, 3, 5 or 0, among other possible values: r is not uniquely determined.
Hope it helps.
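A quick brute-force check of the examples above (a Python sketch; the variable names are my own) confirms the remainders and shows that an odd remainder on division by 21 does not pin down the remainder on division by 7:

```python
# Each tuple is (n, remainder on division by 21, remainder on division by 7).
examples = [(22, 1, 1), (24, 3, 3), (26, 5, 5), (28, 7, 0)]
for n, r21, r7 in examples:
    assert n % 21 == r21 and n % 7 == r7

# All four values of n leave an odd remainder mod 21, yet their
# remainders mod 7 differ, so r is not uniquely determined.
remainders = sorted({n % 7 for n, _, _ in examples})
print(remainders)  # -> [0, 1, 3, 5]
```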
Senior Manager | 13 Dec 2010, 12:43
Sandipchowdhury, thanks for the collection
Manager | 13 Dec 2010, 19:55
Great, thanks guys. This helps out a great deal; however, I am still a bit confused. Does anyone have any suggestions on where to get more help with these types of problems? Thanks in advance.
Manager (Stanford GSB) | 19 Dec 2010, 19:12
It was a pleasant surprise to see my one-year-old post still being discussed.
Currently I am studying at Stanford GSB.
If anyone has any question about Stanford GSB, you may contact me chowdhury_sandip@gsb.stanford.edu
Intern | 31 Dec 2010, 15:56
9. If n is a positive integer and r is the remainder when (n - 1)(n + 1) is divided by 24, what is the value of r?
(1) n is not divisible by 2.
(2) n is not divisible by 3.
St 1: if n is not divisible by 2, then n is odd, so both (n - 1) and (n + 1) are even. Moreover, since every other even number is a multiple of 4, one of those two factors is a multiple of 4. So the product (n - 1)(n + 1) contains one multiple of 2 and one multiple of 4, so it contains at least three 2's (2 x 2 x 2) in its prime factorization.
But this is not sufficient, because (n - 1)(n + 1) can be 2*4 = 8, where the remainder is 8, or 4*6 = 24, where the remainder is 0.
St 2: if n is not divisible by 3, then exactly one of (n - 1) and (n + 1) is divisible by 3, because every third integer is divisible by 3. Therefore the product (n - 1)(n + 1) contains a 3 in its prime factorization.
Just like St 1, this is not sufficient on its own.
Together: the overall prime factorization of (n - 1)(n + 1) contains three 2's and a 3.
Therefore, it is a multiple of 24.
Sufficient.
What if N=1? 1 is not divisible by 2 or 3.
Math Expert | 01 Jan 2011, 02:21
girishkakkar wrote:
What if N=1? 1 is not divisible by 2 or 3.
Please discuss specific questions in PS or DS subforums.
This question is discussed here: ds-from-gmatprep-96712.html
As for your question: if n = 1 then (n - 1)(n + 1) equals zero, and zero is divisible by every integer (except zero itself), so by 24 too.
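The combined argument, including the n = 1 edge case, is easy to verify numerically. A minimal sketch (Python; the loop bound is arbitrary):

```python
# For every positive odd n that is not divisible by 3,
# (n - 1)(n + 1) should be a multiple of 24.
for n in range(1, 1000):
    if n % 2 != 0 and n % 3 != 0:
        assert (n - 1) * (n + 1) % 24 == 0, n

# The edge case n = 1: the product is 0, and 0 is divisible by 24.
print((1 - 1) * (1 + 1) % 24)  # -> 0
```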
Manager | 03 Jan 2011, 04:44
Great collection. Thanks.
Intern | 10 Apr 2011, 10:46
8. If n is a positive integer and r is the remainder when 4 + 7n is divided by 3, what is the value of r?
(1) n + 1 is divisible by 3.
(2) n > 20
St 1: n + 1 is divisible by 3, so n = 2, 5, 8, 11, ...
This gives 4 + 7n = 18, 39, 60, ..., with remainder 0 in each case.
St 2: insufficient; n can have any value.
Another approach:
4 + 7n = 4 + 4n + 3n = 4(n + 1) + 3n
From statement (1) we can see that 4(n + 1) and 3n are both divisible by 3, so the remainder is 0.
From statement (2): insufficient; n can have any value.
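Both the enumeration and the 4(n + 1) + 3n decomposition can be sanity-checked with a short loop. A sketch (Python; the ranges are arbitrary):

```python
# Statement (1): whenever n + 1 is divisible by 3, the remainder of
# (4 + 7n) divided by 3 is 0, since 4 + 7n = 4(n + 1) + 3n and both
# terms are multiples of 3.
for n in range(1, 300):
    if (n + 1) % 3 == 0:
        assert (4 + 7 * n) % 3 == 0, n

# Statement (2): n > 20 on its own allows every possible remainder.
print(sorted({(4 + 7 * n) % 3 for n in range(21, 100)}))  # -> [0, 1, 2]
```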
Manager | 02 Jul 2011, 05:33
awesome set of questions...VERY VERY helpful
Manager | 07 Aug 2011, 10:31
Quote:
st1. take multiples of 8....divide them by 4...remainder =1 in each case...
But how can that be? Multiples of 8 are also multiples of 4.
Manager | 07 Aug 2011, 18:25
Galiya wrote:
but how can it be so? multiples of 8 are also multiples of 4

Hey, because p has a remainder of 5 after dividing by 8, p = (a multiple of 8) + 5, not a multiple of 8 itself.
Take a number: (40 + 5)/8 = 5 + 5/8, so 45 leaves a remainder of 5 when divided by 8.
Divide 45 by 4: the remainder is 1.
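The 45 example generalizes: every p that leaves remainder 5 on division by 8 has the form 8k + 5 = 4(2k + 1) + 1, so it leaves remainder 1 on division by 4. A minimal check (Python sketch; the bound is arbitrary):

```python
# p % 8 == 5  means  p = 8k + 5 = 4(2k + 1) + 1,
# so p % 4 is always 1.
for k in range(500):
    p = 8 * k + 5
    assert p % 8 == 5
    assert p % 4 == 1

print(45 % 4)  # the (40 + 5) example from the post -> 1
```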
Manager | 07 Aug 2011, 21:06
Hi, Silver!
Yep, I missed it.
Thanks a lot!
+1
Manager | 07 Aug 2011, 21:15
You are welcome.
Intern | 07 May 2012, 15:28
sandipchowdhury wrote:
I have collected these problems on remainder. This type of problem is frequently asked in DS.
Answers are also given.
3. If p is a positive odd integer, what is the remainder when p is divided by 4?
(1) When p is divided by 8, the remainder is 5.
(2) p is the sum of the squares of two positive integers.
st1. take multiples of 8....divide them by 4...remainder =1 in each case...
st2. p is odd; since p is the sum of the squares of two integers, one will be even and the other odd. When we divide any even square by 4 we get remainder 0, and when we divide an odd square by 4 we get remainder 1, so in total the remainder = 1.
Ans: D

I don't agree with your approach: it is clearly said that p is NOT a multiple of 8, since it has a remainder of 5. So what about your st 1?
For st 2, why do you consider that of the two squares in the sum one is even and one is odd? Did you consider that it is the sum of the following squares?
I am very confused.
Best regards
Manager | 07 Jul 2012, 09:52
Good stuff, bookmarking for future reference.
Thank you
GMAT Club Legend
Joined: 09 Sep 2013
Posts: 13420
Followers: 575
Kudos [?]: 163 [0], given: 0
Re: Collection of remainder problems in GMAT [#permalink]
### Show Tags
17 Feb 2014, 18:35
Hello from the GMAT Club BumpBot!
Thanks to another GMAT Club member, I have just discovered this valuable topic, yet it had no discussion for over a year. I am now bumping it up - doing my job. I think you may find it valuable (esp those replies with Kudos).
Want to see all other topics I dig out? Follow me (click follow button on profile). You will receive a summary of all topics I bump in your profile area as well as via email.
Current Student
Joined: 14 Jul 2013
Posts: 32
Re: Collection of remainder problems in GMAT [#permalink]
18 Apr 2014, 02:01
8.If n is a positive integer and r is the remainder when 4 + 7n is divided by 3, what is the value of r ?
(1) n + 1 is divisible by 3.
(2) n > 20
My method -
1) n+1 is divisible by 3 =>
7n + 4 can be written as (4n+3n + 4)
now, check for divisibility
[4(n+1) + 3n]/3 => 4(something divisible by 3)/3 + 3n/3
hence, the expression is divisible by 3, so r = 0. Sufficient.
2) n > 20
not sufficient, as different values of r are possible.
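The same conclusion can be checked numerically (an editor's sketch, not part of the original post):

```python
# Statement 1: if 3 divides n + 1, then 7n + 4 = 4(n + 1) + 3n is divisible by 3.
rems_st1 = {(7 * n + 4) % 3 for n in range(1, 300) if (n + 1) % 3 == 0}

# Statement 2: n > 20 alone leaves the remainder undetermined.
rems_st2 = {(7 * n + 4) % 3 for n in range(21, 300)}

print(rems_st1, rems_st2)   # {0} and {0, 1, 2}
```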
Math Expert
Joined: 02 Sep 2009
Posts: 36520
Re: Collection of remainder problems in GMAT [#permalink]
18 Apr 2014, 02:10
honey86 wrote:
8.If n is a positive integer and r is the remainder when 4 + 7n is divided by 3, what is the value of r ?
OPEN DISCUSSION OF THIS QUESTION IS HERE: if-n-is-a-positive-integer-and-r-is-the-remainder-when-93364.html
Minimalist grammars are a class of formal grammars that aim to provide a more rigorous, usually proof-theoretic, formalization of the Chomskyan Minimalist Program than is normally provided in the mainstream Minimalist literature. A variety of particular formalizations exist, most developed by Edward Stabler, Alain Lecomte, Christian Retoré, or combinations thereof.
## Lecomte and Retoré's extensions of the Lambek Calculus
Lecomte and Retoré (2001) [1] introduce a formalism that modifies the core of the Lambek Calculus to allow movement-like processes to be described without resort to the combinatorics of Combinatory Categorial Grammar. The formalism is presented in proof-theoretic terms. Differing only slightly in notation from Lecomte and Retoré (2001), we can define a minimalist grammar as a 3-tuple $G = (C, F, L)$, where $C$ is a set of "categorial" features, $F$ is a set of "functional" features (which come in two flavors, "weak", denoted simply $f$, and "strong", denoted $f^*$), and $L$ is a set of lexical atoms, denoted as pairs $\langle w, t \rangle$, where $w$ is some phonological/orthographic content and $t$ is a syntactic type defined recursively as follows:
all features in C and F are (atomic) types, and
if $X$ and $Y$ are types, then so are $X/Y$, $X \backslash Y$, and $X \circ Y$.
We can now define six inference rules:

1. $\vdash w : t$, for all $\langle w, t \rangle \in L$
2. $x : t \vdash x : t$, for all types $t$
3. $\dfrac{\Gamma \vdash a : X/Y \quad \Gamma' \vdash b : Y}{\Gamma; \Gamma' \vdash ab : X}[/E]$
4. $\dfrac{\Gamma \vdash b : Y \quad \Gamma' \vdash a : X \backslash Y}{\Gamma; \Gamma' \vdash ba : X}[\backslash E]$
5. $\dfrac{\Gamma; \Gamma' \vdash \alpha}{\Gamma, \Gamma' \vdash \alpha}[\text{entropy}]$
6. $\dfrac{\vdash a : X \circ Y \quad \Gamma, x : X, y : Y, \Gamma' \vdash c : Z}{\Gamma, \Gamma' \vdash c' : Z}[\circ E]$, where $c'$ is obtained from $c$ by substituting $a$ for one of the hypotheses $x, y$ and the empty string for the other, as described below.
The first rule merely makes it possible to use lexical items with no extra assumptions. The second rule is just a means of introducing assumptions into the derivation. The third and fourth rules just perform directional feature checking, combining the assumptions required to build the subparts that are being combined. The entropy rule presumably allows the ordered sequents to be broken up into unordered sequents. And finally, the last rule implements "movement" by means of assumption elimination.
The last rule can be given a number of different interpretations in order to fully mimic movement of the normal sort found in the Minimalist Program. The account given by Lecomte and Retoré (2001) is that if one of the product types is a strong functional feature, then the phonological/orthographic content associated with that type on the right is substituted with the content of $a$, and the other is substituted with the empty string; whereas if neither is strong, then the phonological/orthographic content is substituted for the category feature, and the empty string is substituted for the weak functional feature. That is, we can rephrase the rule as two sub-rules as follows:
where $X \in C, Y^\left\{*\right\} \in F$
where $X \in C, Y \in F$
Another alternative would be to construct pairs in the /E and \E steps, and use the $\circ E$ rule as given, substituting the phonological/orthographic content a into the highest of the substitution positions, and the empty string in the rest of the positions. This would be more in line with the Minimalist Program, given that multiple movements of an item are possible, where only the highest position is "spelled out".
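The directional feature checking performed by the /E and \E rules can be sketched in a few lines of Python (an editor's illustration; the tuple encoding and function names are invented for this sketch and are not part of the published formalism):

```python
# Types: an atom is a string; ("/", X, Y) encodes X/Y (seeks a Y on its right);
# ("\\", X, Y) encodes X\Y (seeks a Y on its left). This follows the article's
# example, where see : (S\N)/N first takes an N on the right, then an N on the
# left, yielding S.

def forward(f, arg):
    """/E: combine a : X/Y with b : Y to give ab : X."""
    op, result, wanted = f
    assert op == "/" and wanted == arg
    return result

def backward(arg, f):
    """\\E: combine b : Y with a : X\\Y to give ba : X."""
    op, result, wanted = f
    assert op == "\\" and wanted == arg
    return result

see = ("/", ("\\", "S", "N"), "N")   # see : (S\N)/N
step1 = forward(see, "N")            # "see y" : S\N
step2 = backward("N", step1)         # "John see y" : S
print(step2)                         # -> S
```

This mirrors the two innermost steps of the derivation in the example below.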
## Example
As a simple example of this system, we can show how to generate the sentence who did John see with the following toy grammar:
Let $G = (\{N, S\}, \{W\}, L)$, where $L$ contains the following words:

- who : $N \circ W$
- did : $(S\backslash W)/S$
- John : $N$
- see : $(S\backslash N)/N$
The proof for the sentence who did John see is therefore:
$$\dfrac{\vdash \text{who} : N \circ W \quad \dfrac{\text{x} : W \vdash \text{x} : W \quad \dfrac{\vdash \text{did} : (S\backslash W)/S \quad \dfrac{\vdash \text{John} : N \quad \dfrac{\text{y} : N \vdash \text{y} : N \quad \vdash \text{see} : (S\backslash N)/N}{\text{y} : N \vdash \text{see y} : S\backslash N}[/E]}{\text{y} : N \vdash \text{John see y} : S}[\backslash E]}{\text{y} : N \vdash \text{did John see y} : S\backslash W}[/E]}{\text{x} : W, \text{y} : N \vdash \text{x did John see y} : S}[\backslash E]}{\vdash \text{who did John see} : S}[\circ E]$$
## References

1. Lecomte, A. and Retoré, C. (2001).
https://www.reference.com/world-view/factors-49-59bdba82ddbfd371 | 1,652,707,298,000,000,000 | text/html | crawl-data/CC-MAIN-2022-21/segments/1652662510117.12/warc/CC-MAIN-20220516104933-20220516134933-00340.warc.gz | 1,167,965,365 | 14,496 | # What Are the Factors of 49?
The factors of 49 are 1, 7 and 49. The factors of a given number are those numbers that divide into it evenly leaving no remainder. One and the number itself, 49 in this case, are always factors.
The number 49 is a square, which means that a number multiplied by itself yields 49, and that number is seven. Seven is called the square root of 49. Seven is also a prime number, which means it is evenly divisible by only one and itself. The prime factorization of 49 therefore is 7 x 7 or 7^2. Prime factorization of a number involves finding the prime numbers that multiply together to yield that number. | 153 | 635 | {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0} | 4.03125 | 4 | CC-MAIN-2022-21 | latest | en | 0.963443 |
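A short trial-division routine (an editor's illustration, not part of the original answer) reproduces the factor list:

```python
def factors(n):
    """Return all positive divisors of n, checking candidates up to sqrt(n)."""
    result = set()
    for i in range(1, int(n ** 0.5) + 1):
        if n % i == 0:
            result.update({i, n // i})   # i and its cofactor are both divisors
    return sorted(result)

print(factors(49))   # [1, 7, 49]
```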
https://math.stackexchange.com/questions/1787981/how-to-prove-the-group-of-roots-of-unity-in-mathbbc-is-a-group | 1,713,688,739,000,000,000 | text/html | crawl-data/CC-MAIN-2024-18/segments/1712296817729.87/warc/CC-MAIN-20240421071342-20240421101342-00477.warc.gz | 342,264,302 | 35,928 | # How to prove the group of roots of unity in $\mathbb{C}$ is a group
I mostly need help with proving $G$ is closed but a verification of the other parts is appreciated.
Let $G = \{z \in \mathbb{C} \mid z^n=1$ for some $n\in \mathbb{Z^+}\}$ I want to start by proving $G$ is closed under multiplication. So $z_1 z_2 = z_{1 \cdot 2}$ and I need to show that $z_{1\cdot2}^n = 1$ I was thinking of breaking $z_1$ and $z_2$ into prime factors(?) to show that the property is retained through multiplication. Is this the correct place to start?
To prove multiplication is associative, Let $z_1 = a+bi$ and $z_2 = c+di$ and $z_3 = e + fi$ $$(z_1 z_2) z_3 = ((a+bi) (c+di)) (e + fi) = (ac+adi+cbi-bd)(e + fi)$$ $$(z_1 z_2) z_3 = ace+adei+cbei-bde + acfi - adf - cbf - bdfi$$ $$z_1 (z_2 z_3) = (a+bi) ((c+di) (e+fi)) = (a+bi) (ce+cfi+edi-df)$$ $$z_1 (z_2 z_3) = cbei - cbf - bde - bdfi + ace + acfi + adei - adf$$ it's messy but this shows they are equal.
The identity element would be 1, since $(a+bi) \cdot 1 = a+bi$, and the inverse would be $z^{-1} = \frac{1}{a+bi}$, so that $z z^{-1} = \dfrac{a+bi}{a+bi} = 1$.
• You don't need to prove that multiplication is associative because multiplication is associative in $\mathbb C$.
– lhf
May 16, 2016 at 18:47
• Is $n$ fixed? In your set-theoretic description of $G$, each $n$ seemingly depends on $z$. May 16, 2016 at 18:47
• $n$ is not fixed May 16, 2016 at 18:58
• This question got two down-votes. Can anyone explain why? $\qquad$ May 16, 2016 at 19:19
Hint: if $x^n = 1$ and $y^m = 1$, what is $(xy)^{mn}$?
The only other thing you really need to verify (given that associativity holds in $\mathbb C$) is that the reciprocal of a root of unity is a root of unity.
• Oh I see. $(xy)^{mn} = x^{mn} \cdot y^{mn} = (x^n)^m \cdot (y^m)^n = 1^m \cdot 1^n = 1$ for all $m,n \in \mathbb{Z^+}$. Thank you very much. How should I prove the reciprocal of a root of unity is a root of unity? If $z^n = 1$, then $\left(\frac{1}{z}\right)^n = \frac{1}{z^n} = \frac{1}{1} = 1$. Is that all I need to do? May 16, 2016 at 18:57
• As a side question: Is this group also abelian? It seems like $(a+bi)(c+di) = (c+di)(a+bi)$ May 16, 2016 at 19:23 | 835 | 2,194 | {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0} | 4.125 | 4 | CC-MAIN-2024-18 | latest | en | 0.910727 |
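A numerical spot-check of the closure and inverse arguments (an editor's addition; the actual proof is the algebra above, not a computation):

```python
import cmath

x = cmath.exp(2j * cmath.pi / 3)   # a cube root of unity: x**3 = 1
y = cmath.exp(2j * cmath.pi / 4)   # i, a fourth root of unity: y**4 = 1

# Closure: (xy)**(3*4) = (x**3)**4 * (y**4)**3 = 1.
assert abs((x * y) ** 12 - 1) < 1e-9

# Inverse: (1/x)**3 = 1 / x**3 = 1, so 1/x is again a root of unity.
assert abs((1 / x) ** 3 - 1) < 1e-9

# And the group is abelian, since multiplication in C is commutative.
assert abs(x * y - y * x) < 1e-9
```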
https://www.greenemath.com/Precalculus/82/Nonlinear-Systems-of-EquationsTest.html | 1,726,491,660,000,000,000 | text/html | crawl-data/CC-MAIN-2024-38/segments/1725700651697.32/warc/CC-MAIN-20240916112213-20240916142213-00250.warc.gz | 749,518,452 | 3,674 | Question 1 of 5Solve each system:
Select the Correct Answer Below:
A
$$(8, 5)$$
B
$$(0, 1)$$
C
$$(8, 5), (8, 6)$$
D
No Solution
E
$$(0, 1), (8, 5)$$
Question 2 of 5: Solve each system:
Select the Correct Answer Below:
A
$$(-3, 5)$$
B
$$(6, 2)$$
C
$$(-3, 5), (-1, -1)$$
D
$$(-3, 5), (6, 2)$$
E
No Solution
Question 3 of 5: Solve each system:
Select the Correct Answer Below:
A
No Solution
B
$$(-1, 1)$$
C
$$(5, 0), (-4, 10)$$
D
$$(3, 2)$$
E
$$(-1, 1), (-4, 10), (3, 2)$$
Question 4 of 5: Solve each system:
Select the Correct Answer Below:
A
$$(-7, -7)$$
B
$$(4, 5), (-7, 3)$$
C
$$(1, 3), (0, -10)$$
D
$$(-4, -7), (-7, -7)$$
E
No Solution
Question 5 of 5: Solve each system:
Select the Correct Answer Below:
A
$$\left(\sqrt{3}, \sqrt{3}\right), \left(-\sqrt{3}, -\sqrt{3}\right),$$ $$\left(2, 1\right), \left(-2, -1\right)$$
B
$$(3, 5), (\sqrt{3}, \sqrt{3})$$
C
$$(-1, 3), (2, 6)$$
D
$$(5, 7), (3, 10)$$
E
No Solution
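The systems themselves are rendered as images on the original page and are not reproduced above. As a stand-in, here is a hypothetical nonlinear system solved by substitution, the same technique these items call for (the equations below are invented for illustration):

```python
# Hypothetical system: y = x**2 + 1 and y = 2*x + 1.
# Substituting gives x**2 - 2*x = 0, so x = 0 or x = 2.
solutions = [(x, 2 * x + 1) for x in range(-50, 51) if x * x + 1 == 2 * x + 1]
print(solutions)   # [(0, 1), (2, 5)]
```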
https://forum.mikrotik.com/viewtopic.php?f=8&t=132987&sid=e81c3577f6b326e59e75f8b7890f6ff5&view=print | 1,627,091,277,000,000,000 | text/html | crawl-data/CC-MAIN-2021-31/segments/1627046150067.87/warc/CC-MAIN-20210724001211-20210724031211-00281.warc.gz | 272,485,178 | 2,204 | Page 1 of 1
### Graph a value even though it is down
Posted: Tue Apr 10, 2018 4:19 pm
Hi Guys
We have a function Voltage() which we would like to graph. The function simply looks up the OID and pulls a value.
If the voltage falls below 24, we want the probe to be considered down, but still output a value for graphing.
At the moment, if the voltage is above 24 it graphs perfectly well, but if it falls below 24 it doesn't show anything.
On the probe settings I have
Available: Voltage()
Error: if(Voltage()>24, "", "down")
Value: Voltage()
If I change the error line to
Error: if(Voltage()>24, "", "")
then it never considers it to be a problem, even if the value drops below 24. I tested this by setting the threshold to 26 instead of 24.
Your help or suggestions are much appreciated.
### Re: Graph a value even though it is down [SOLVED]
Posted: Wed Apr 11, 2018 10:48 pm
Put in two probes: one with a 24 V warning threshold, the other with a 0 V warning threshold.
The first warns you; the second simply records the voltage.
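The two-probe idea can be sketched outside The Dude (an editor's illustration; the probe() helper and its return shape are invented for this sketch and are not MikroTik scripting):

```python
def probe(voltage, threshold):
    """Mimic a probe: the value is always the reading; the error string is
    non-empty only when the reading is at or below the warning threshold."""
    error = "" if voltage > threshold else "down"
    return voltage, error

# Alarm probe (threshold 24) flags the fault...
assert probe(22.0, 24) == (22.0, "down")
assert probe(26.0, 24) == (26.0, "")
# ...while the recording probe (threshold 0) still returns a value to graph.
assert probe(22.0, 0) == (22.0, "")
```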
### Re: Graph a value even though it is down
Posted: Thu Apr 12, 2018 2:25 pm
Thanks - that's an awesome idea.
So simple, yet brilliant!
https://testbook.com/question-answer/in-a-3-%CF%95-delta-star-connected-transfor--5f57b8d58ce2ca036e72caaa | 1,632,807,853,000,000,000 | text/html | crawl-data/CC-MAIN-2021-39/segments/1631780060201.9/warc/CC-MAIN-20210928032425-20210928062425-00261.warc.gz | 588,498,015 | 30,120 | # In a 3 – ϕ, delta / star connected transformer, the ratio of phase voltages (primary to secondary) is 3 : 1. Find the ratio of their line voltages?
1. 1 : 2
2. √3 : 1
3. 3 : 1
4. 3 : 2
Option 2 : √3 : 1
## Detailed Solution
Concept:
In star connection
VL = √3VPh
IL = Iph
In delta connection
VL = Vph
IL = √3Iph
VL = line voltage
Vph = phase voltage
IL = line current
Iph = phase current
In transformer
$$a = \frac{V_P}{V_S} = \frac{N_P}{N_S} = \frac{I_S}{I_P}$$
Calculation:
Given that delta (primary) phase voltage : star (secondary) phase voltage = 3 : 1, so a = 3
Line voltage in star is$${V_L} = \sqrt 3 {V_{ph}}$$
Line voltage in delta is $${V_L} = {V_{ph}}$$
Now the line-to-line voltage ratio of delta to star will be $$\frac{a}{\sqrt 3} = \frac{3}{\sqrt 3} = \sqrt 3$$
∴ Ratio of their line voltages = √3 : 1 | 332 | 877 | {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0} | 4.21875 | 4 | CC-MAIN-2021-39 | latest | en | 0.546063 |
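As a quick numeric check of the result (an editor's sketch, not part of the original solution):

```python
import math

a = 3.0                        # phase-voltage ratio, delta primary : star secondary
line_ratio = a / math.sqrt(3)  # V_L(delta) / V_L(star) = V_ph1 / (sqrt(3) * V_ph2)
print(line_ratio)              # 1.732... = sqrt(3), i.e. the ratio sqrt(3) : 1
```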
https://www.univerkov.com/a-concrete-slab-2-m-long-and-1-5-m-wide-exerts-a-pressure-of-3-kpa-on-the-ground-determine-the-slab-mass/ | 1,657,201,652,000,000,000 | text/html | crawl-data/CC-MAIN-2022-27/segments/1656104692018.96/warc/CC-MAIN-20220707124050-20220707154050-00600.warc.gz | 1,088,508,201 | 6,250 | # A concrete slab 2 m long and 1.5 m wide exerts a pressure of 3 kPa on the ground. Determine the slab mass.
L = 2 m.
a = 1.5 m.
P = 3 kPa = 3000 Pa.
g = 9.8 m/s².
m -?
P = F / S – the formula for determining the pressure of the plate on the soil.
F = m * g is the weight of the concrete slab.
The slab has the shape of a rectangular parallelepiped, so the area S, on which the weight of the concrete slab acts, has the shape of a rectangle.
S = L * a.
The formula for determining the pressure of the plate will be: P = m * g / (L * a).
We express the mass of the concrete slab by the formula: m = P * L * a / g.
m = 3000 Pa * 2 m * 1.5 m / (9.8 m/s²) ≈ 918 kg.
Answer: a concrete slab has a mass of m = 918 kg.
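The same calculation as a short script (an editor's sketch):

```python
P = 3000.0   # pressure on the ground, Pa
L = 2.0      # slab length, m
a = 1.5      # slab width, m
g = 9.8      # gravitational acceleration, m/s^2

m = P * L * a / g   # from P = m*g / (L*a)
print(round(m))     # 918 kg
```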
https://pratt.duke.edu/news/katie-vanderkam-improving-the-efficiency-of-wind-farms/ | 1,726,382,846,000,000,000 | text/html | crawl-data/CC-MAIN-2024-38/segments/1725700651616.56/warc/CC-MAIN-20240915052902-20240915082902-00603.warc.gz | 413,481,276 | 19,488 | # Katie VanderKam: Improving the Efficiency of Wind Farms
10/12/22 Pratt School of Engineering
A profile of an undergraduate engineering research fellow at Duke University.
• Major: Mechanical Engineering
• Advisor: Donald B. Bliss, Associate Professor of Mechanical Engineering and Materials Science
• Pratt Fellows Project: Wind Turbine Designs to Improve Efficiency and Allow Higher Turbine Density in Wind Farms
How did you get interested in engineering?
I was always a math and science person, and I think engineering is a good way to merge those two things together. It’s kind of the best of both worlds. I did an engineering summer program a couple of years before coming to college and really enjoyed all the hands-on projects, so I definitely wanted to continue that. I’ve always liked building. I wrote my Common Application essay about Legos, so it was a pretty easy choice in that regard.
What sorts of projects are you working on as a Pratt Fellow?
I am working with Dr. Bliss on modeling the wake behind vertical axis wind turbines (VAWT). Before my project, Dr. Bliss had done a lot of research with wake steering in horizontal axis wind turbines, that is, designing a turbine to direct the wake behind it such that it would not interfere with the wind going through other turbines. This would result in more efficient arrays of turbines. He had not yet done this with VAWTs, but studying this type of turbine was of interest. We have been doing a lot of computations about how the wind is passing through this turbine, where it’s going and how we can calculate that and figure out how it is operating.
Normally when you say “windmill” or “wind turbine,” people are going to picture the horizontal axis ones that spin parallel to the ground. Vertical axis ones spin perpendicular to the ground. There is a lot of research about horizontal axis wind turbines, but not so much for vertical axis wind turbines. My project works on developing computational models to track the wake as it passes through a VAWT in a way that is hopefully faster or less computationally expensive than running a full computational fluid dynamics (CFD) model.
Why did you want to work on this project?
When I applied to the Pratt Research Fellows program, I went through the list of available projects and ranked the ones in which I was most interested. This one was my first choice. Dr. Bliss said he was looking for someone who was good at spatially visualizing how things were working together, so you could picture formations in your head. I am also a dancer and I have choreographed a bunch. So, I was used to picturing formations, such as people moving around together on a stage. I thought that is kind of how we would picture vortices moving around each other behind a wind turbine. I saw that connection and that’s why this project was interesting to me.
What do you think of the Pratt Research Fellows program?
I think it is a really good program for people who want to get deeply involved with a research project. Since it goes on for three semesters and a summer, you can really dedicate a lot of time and get involved in the nitty-gritty of your project.
What’s it like working with the faculty?
I’ve really enjoyed it. They can be great advisors for a lot of different things—earning their trust so they ask what you think, and you can come to them about classes you are taking from them. Dr. Bliss was a great help, as I was applying to PhD programs. It’s a great way to get closer to faculty and to hear about what they’re interested in.
[Note: Since her interview, Katie has been accepted into a PhD program at Princeton University where she plans to continue her research in aerodynamics.] | 770 | 3,714 | {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0} | 2.625 | 3 | CC-MAIN-2024-38 | latest | en | 0.960434 |
https://acemyhomeworkwriters.com/economics-and-quantitative-analysis-linear-regression-report-due-date-27-may-2019-word-limit-1200-words-weighting-40-instructions-as-an-economist-working-in-the-oecd-you-have-been-asked-to-prepare-a-s/ | 1,638,088,067,000,000,000 | text/html | crawl-data/CC-MAIN-2021-49/segments/1637964358480.10/warc/CC-MAIN-20211128073830-20211128103830-00424.warc.gz | 165,244,393 | 12,122 | # ECONOMICS AND QUANTITATIVE ANALYSIS LINEAR REGRESSION REPORT DUE DATE 27 May 2019 WORD LIMIT 1200 words WEIGHTING 40% Instructions As an economist working in the OECD you have been asked to prepare a short report that examines the statistical association between average life satisfaction and GDP per capita using the data contained in the spreadsheet (linear regression assignment data). Your report needs to be structured as follow
1
ECONOMICS AND QUANTITATIVE ANALYSIS
LINEAR REGRESSION REPORT
DUE DATE: 27 May 2019
WORD LIMIT: 1200 words
WEIGHTING: 40%
Instructions
As an economist working in the OECD you have been asked to prepare a short report that examines
the statistical association between average life satisfaction and GDP per capita using the data
contained in the spreadsheet (linear regression assignment data).
Your report needs to be structured as follow:
1. Purpose (2 marks)
In this section, the purpose of the report needs to be clearly and concisely stated.
2. Background (4 marks)
In this section, a brief literature review on the association between life satisfaction and GDP is
required. Why are economists interested in this particular issue?
3. Method (4 marks)
In this section, the data source and empirical approach used to examine the relationship between life
satisfaction and GDP needs to be detailed.
4. Results (20 marks)
In this section, you need to present and summarize the results from your statistical analysis. In particular, the results section must:

- Provide a descriptive analysis of the two variables (e.g., mean, standard deviation, minimum and maximum). Which countries have the lowest and highest average life satisfaction scores? Which countries have the lowest and highest GDPs per capita? (2 marks)
- Develop a scatter diagram with GDP per capita as the independent variable. What does the scatter diagram indicate about the relationship between the two variables? (3 marks)
- Develop and estimate a regression equation that can be used to predict average life satisfaction given GDP per capita. (2 marks)
- State the estimated regression equation and interpret the meaning of the slope coefficient (to make the interpretation easier, multiply the estimated coefficient by 10,000). (3 marks)
- Is there a statistically significant association between GDP per capita and average life satisfaction? What is your conclusion? (2 marks)
- Did the regression equation provide a good fit? Explain. (3 marks)
- Luxembourg, Ireland, and Norway appear to be outliers in terms of GDP per capita. Re-estimate your regression model without Luxembourg, Ireland, and Norway. How does this affect the slope coefficient and goodness of fit? Explain. (5 marks)
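The estimation and goodness-of-fit steps can be sketched with plain ordinary least squares. The numbers below are invented for illustration only; they are NOT the OECD spreadsheet data, which is not reproduced on this page:

```python
# Hypothetical observations: GDP per capita (USD) and average life satisfaction (0-10).
gdp = [20_000, 35_000, 50_000, 65_000, 80_000]
ls = [5.4, 6.1, 6.7, 7.0, 7.4]

n = len(gdp)
mx = sum(gdp) / n
my = sum(ls) / n

# Slope and intercept of the least-squares line ls = b0 + b1 * gdp.
b1 = sum((x - mx) * (y - my) for x, y in zip(gdp, ls)) / sum((x - mx) ** 2 for x in gdp)
b0 = my - b1 * mx

# Slope per extra $10,000 of GDP per capita, as the brief asks.
print(round(b1 * 10_000, 3))

# Goodness of fit: R^2 = 1 - SSE / SST.
sse = sum((y - (b0 + b1 * x)) ** 2 for x, y in zip(gdp, ls))
sst = sum((y - my) ** 2 for y in ls)
r2 = 1 - sse / sst
print(round(r2, 3))
```

Dropping outlier rows and re-fitting with the same two formulas is how the re-estimation step would be done.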
5. Discussion (5 marks)
In this section, provide a brief overview of the results. What are the key strengths and limitations of
this analysis? (e.g., data, method, etc.). How do the results from this analysis compare with other
studies? (e.g., are the findings consistent?). Do these findings have clear policy implications?
6. Recommendations (5 marks).
In this section, you should present three to five well-considered recommendations.
Pages (550 words)
Approximate price: -
Quality Research Papers
We always make sure that our academic writers follow all your instructions precisely. You can choose your academic level and we will assign a writer who has a respective degree.
We have a team of professional writers with experience in academic and business writing. Many are native speakers and able to perform any task for which you need help.
Unlimited Revisions
If you think we missed something, send your order for a free revision. You have 10 days to submit the order for review after you have received the final document.
On time Delivery
All papers are always delivered on time. In case we need more time to master your paper, we may contact you regarding the deadline extension. Otherwise a 100% refund is guaranteed.
Original & Confidential
We use several writing tools checks to ensure that all documents you receive are free from plagiarism. We also promise maximum confidentiality in all of our services.
Our support agents are available 24 hours a day 7 days a week and committed to providing you with the best customer experience. Get in touch whenever you need any assistance.
Try it now!
## Calculate the price of your order
Total price:
\$0.00
How it works?
Fill in the order form and provide all details of your assignment.
Proceed with the payment
Choose the payment system that suits you most.
Whether you have an urgent deadline or those that have time. You can take some time and relax after trusting us with your paper. We make sure that we conduct the academic writing services diligently.
## Essay Writing Service
Among the wide variety of academic work, essay writing is one of the simplest a student can ever come across. Usually, it is a task which students encounter and learn how to write whilst in high school. However, the case is quite different when it comes to university and college.
Term Paper Writing
Are you looking for an online writing firm that can offer you reliable custom term paper writing help? Is your wish and desire to get someone who can guide you throughout the process of writing term papers? If yes, then you have come to the right place.
Coursework Writing Help
Coursework is essential for every student in order to graduate from college. However, most of it is deadline-centric, and that becomes a challenge to most learners. With the amount of work, learners are receiving every day, finding time to work on every task is not easy.
## Online Homework Help
Online homework help services are an answer to the challenges students go through. Despite differences in needs and levels of learning, all students can benefit from these services. Acemyhomework is one of the best online homework help companies you can find on the internet.
# Untitled
https://pastebin.com/sn3AT1np
Pasted by a guest on Nov 20th, 2014
```
clock_gettime(CLOCK_MONOTONIC, {279, 416419245}) = 0
poll([{fd=3, events=POLLIN}, {fd=21, events=POLLIN}, {fd=24, events=POLLIN}, {fd=25, events=POLLIN}, {fd=26, events=POLLIN}, {fd=27, events=POLLIN}, {fd=29, events=POLLIN}, {fd=30, events=POLLIN}, {fd=34, events=POLLIN}, {fd=35, events=POLLIN}, {fd=37, events=POLLIN}, {fd=43, events=POLLIN}, {fd=48, events=POLLIN}, {fd=69, events=POLLIN}, {fd=84, events=POLLIN}, {fd=85, events=POLLIN}, {fd=88, events=POLLIN}, {fd=89, events=POLLIN}, {fd=110, events=POLLIN}], 19, 0) = 1 ([{fd=3, revents=POLLIN}])
read(3, "\10\0\0\0\0\0\0\0", 16) = 8
clock_gettime(CLOCK_MONOTONIC, {279, 419928766}) = 0
futex(0x861e44, FUTEX_WAKE_OP_PRIVATE, 1, 1, 0x861e40, {FUTEX_OP_SET, 0, FUTEX_OP_CMP_GT, 1}) = 1
futex(0x861e28, FUTEX_WAKE_PRIVATE, 1) = 1
futex(0x861cf4, FUTEX_WAKE_PRIVATE, 1) = 1
futex(0x861cc4, FUTEX_WAKE_PRIVATE, 1) = 1
futex(0x861dd0, FUTEX_WAKE_PRIVATE, 1) = 1
futex(0x861de8, FUTEX_WAKE_PRIVATE, 1) = 1
futex(0x861dec, FUTEX_WAIT_PRIVATE, 681, NULL) = -1 EAGAIN (Resource temporarily unavailable)
futex(0x861dd0, FUTEX_WAKE_PRIVATE, 1) = 0
clock_gettime(CLOCK_MONOTONIC, {279, 430762507}) = 0
clock_gettime(CLOCK_MONOTONIC, {279, 431494929}) = 0
epoll_wait(43, {}, 32, 0) = 0
poll([{fd=3, events=POLLIN}, {fd=21, events=POLLIN}, {fd=24, events=POLLIN}, {fd=25, events=POLLIN}, {fd=26, events=POLLIN}, {fd=27, events=POLLIN}, {fd=29, events=POLLIN}, {fd=30, events=POLLIN}, {fd=34, events=POLLIN}, {fd=35, events=POLLIN}, {fd=37, events=POLLIN}, {fd=43, events=POLLIN}, {fd=48, events=POLLIN}, {fd=69, events=POLLIN}, {fd=84, events=POLLIN}, {fd=85, events=POLLIN}, {fd=88, events=POLLIN}, {fd=89, events=POLLIN}, {fd=110, events=POLLIN}], 19, 0) = 1 ([{fd=3, revents=POLLIN}])
read(3, "\2\0\0\0\0\0\0\0", 16) = 8
epoll_wait(43, {}, 32, 0) = 0
clock_gettime(CLOCK_MONOTONIC, {279, 436438777}) = 0
poll([{fd=3, events=POLLIN}, {fd=21, events=POLLIN}, {fd=24, events=POLLIN}, {fd=25, events=POLLIN}, {fd=26, events=POLLIN}, {fd=27, events=POLLIN}, {fd=29, events=POLLIN}, {fd=30, events=POLLIN}, {fd=34, events=POLLIN}, {fd=35, events=POLLIN}, {fd=37, events=POLLIN}, {fd=43, events=POLLIN}, {fd=48, events=POLLIN}, {fd=69, events=POLLIN}, {fd=84, events=POLLIN}, {fd=85, events=POLLIN}, {fd=88, events=POLLIN}, {fd=89, events=POLLIN}, {fd=110, events=POLLIN}], 19, 1) = 0 (Timeout)
read(3, 0xbebc23ec, 16) = -1 EAGAIN (Resource temporarily unavailable)
clock_gettime(CLOCK_MONOTONIC, {279, 440894344}) = 0
clock_gettime(CLOCK_MONOTONIC, {279, 441138484}) = 0
futex(0x861e44, FUTEX_WAKE_OP_PRIVATE, 1, 1, 0x861e40, {FUTEX_OP_SET, 0, FUTEX_OP_CMP_GT, 1}) = 1
futex(0x861e28, FUTEX_WAKE_PRIVATE, 1) = 1
futex(0x861cf4, FUTEX_WAKE_PRIVATE, 1) = 1
futex(0x861cc4, FUTEX_WAKE_PRIVATE, 1) = 1
futex(0x861dd0, FUTEX_WAKE_PRIVATE, 1) = 1
futex(0x861de8, FUTEX_WAKE_PRIVATE, 1) = 1
futex(0x861dec, FUTEX_WAIT_PRIVATE, 683, NULL) = -1 EAGAIN (Resource temporarily unavailable)
futex(0x861dd0, FUTEX_WAKE_PRIVATE, 1) = 0
clock_gettime(CLOCK_MONOTONIC, {279, 449347712}) = 0
clock_gettime(CLOCK_MONOTONIC, {279, 450385311}) = 0
clock_gettime(CLOCK_MONOTONIC, {279, 452155330}) = 0
epoll_wait(43, {}, 32, 0) = 0
poll([{fd=3, events=POLLIN}, {fd=21, events=POLLIN}, {fd=24, events=POLLIN}, {fd=25, events=POLLIN}, {fd=26, events=POLLIN}, {fd=27, events=POLLIN}, {fd=29, events=POLLIN}, {fd=30, events=POLLIN}, {fd=34, events=POLLIN}, {fd=35, events=POLLIN}, {fd=37, events=POLLIN}, {fd=43, events=POLLIN}, {fd=48, events=POLLIN}, {fd=69, events=POLLIN}, {fd=84, events=POLLIN}, {fd=85, events=POLLIN}, {fd=88, events=POLLIN}, {fd=89, events=POLLIN}, {fd=110, events=POLLIN}], 19, 0) = 2 ([{fd=3, revents=POLLIN}, {fd=43, revents=POLLIN}])
epoll_wait(43, {{EPOLLIN, {u32=16430512, u64=16430512}}}, 32, 0) = 1
recvmsg(112, {msg_name(0)=NULL, msg_iov(2)=[{"\26\0\0\0\1\0\24\0$\0\0\0\0\0\0\0\0\0\0\0\26\0\0\0\2\0\30\0\0\0\0\0"..., 680}, {"", 3416}], msg_controllen=0, msg_flags=MSG_CMSG_CLOEXEC}, MSG_DONTWAIT|MSG_CMSG_CLOEXEC) = 52
epoll_wait(43, {}, 32, 0) = 0
clock_gettime(CLOCK_MONOTONIC, {279, 465125301}) = 0
poll([{fd=3, events=POLLIN}, {fd=21, events=POLLIN}, {fd=24, events=POLLIN}, {fd=25, events=POLLIN}, {fd=26, events=POLLIN}, {fd=27, events=POLLIN}, {fd=29, events=POLLIN}, {fd=30, events=POLLIN}, {fd=34, events=POLLIN}, {fd=35, events=POLLIN}, {fd=37, events=POLLIN}, {fd=43, events=POLLIN}, {fd=48, events=POLLIN}, {fd=69, events=POLLIN}, {fd=84, events=POLLIN}, {fd=85, events=POLLIN}, {fd=88, events=POLLIN}, {fd=89, events=POLLIN}, {fd=110, events=POLLIN}], 19, 0) = 1 ([{fd=3, revents=POLLIN}])
read(3, "\2\0\0\0\0\0\0\0", 16) = 8
clock_gettime(CLOCK_MONOTONIC, {279, 468238094}) = 0
futex(0x861e44, FUTEX_WAKE_OP_PRIVATE, 1, 1, 0x861e40, {FUTEX_OP_SET, 0, FUTEX_OP_CMP_GT, 1}) = 1
futex(0x861e28, FUTEX_WAKE_PRIVATE, 1) = 1
futex(0x861cf4, FUTEX_WAKE_PRIVATE, 1) = 1
futex(0x861cc4, FUTEX_WAKE_PRIVATE, 1) = 1
futex(0x861dd0, FUTEX_WAKE_PRIVATE, 1) = 1
futex(0x861de8, FUTEX_WAKE_PRIVATE, 1) = 1
futex(0x861dec, FUTEX_WAIT_PRIVATE, 685, NULL) = -1 EAGAIN (Resource temporarily unavailable)
futex(0x861dd0, FUTEX_WAKE_PRIVATE, 1) = 0
epoll_wait(43, {{EPOLLIN, {u32=16430512, u64=16430512}}}, 32, 0) = 1
recvmsg(112, {msg_name(0)=NULL, msg_iov(2)=[{"\26\0\0\0\1\0\24\0&\0\0\0\0\0\0\0\0\0\0\0\26\0\0\0\2\0\30\0\0\0\0\0"..., 628}, {"", 3468}], msg_controllen=0, msg_flags=MSG_CMSG_CLOEXEC}, MSG_DONTWAIT|MSG_CMSG_CLOEXEC) = 52
clock_gettime(CLOCK_MONOTONIC, {279, 478980281}) = 0
sendmsg(112, {msg_name(0)=NULL, msg_iov(1)=[{"\"\0\0\0\0\0\10\0", 8}], msg_controllen=0, msg_flags=0}, MSG_DONTWAIT|MSG_NOSIGNAL) = 8
poll([{fd=3, events=POLLIN}, {fd=21, events=POLLIN}, {fd=24, events=POLLIN}, {fd=25, events=POLLIN}, {fd=26, events=POLLIN}, {fd=27, events=POLLIN}, {fd=29, events=POLLIN}, {fd=30, events=POLLIN}, {fd=34, events=POLLIN}, {fd=35, events=POLLIN}, {fd=37, events=POLLIN}, {fd=43, events=POLLIN}, {fd=48, events=POLLIN}, {fd=69, events=POLLIN}, {fd=84, events=POLLIN}, {fd=85, events=POLLIN}, {fd=88, events=POLLIN}, {fd=89, events=POLLIN}, {fd=110, events=POLLIN}], 19, 0) = 1 ([{fd=3, revents=POLLIN}])
read(3, "\6\0\0\0\0\0\0\0", 16) = 8
clock_gettime(CLOCK_MONOTONIC, {279, 488288142}) = 0
clock_gettime(CLOCK_MONOTONIC, {279, 488867975}) = 0
clock_gettime(CLOCK_MONOTONIC, {279, 489142634}) = 0
clock_gettime(CLOCK_MONOTONIC, {279, 489325739}) = 0
clock_gettime(CLOCK_MONOTONIC, {279, 490088679}) = 0
clock_gettime(CLOCK_MONOTONIC, {279, 490332819}) = 0
clock_gettime(CLOCK_MONOTONIC, {279, 491034723}) = 0
clock_gettime(CLOCK_MONOTONIC, {279, 492163874}) = 0
clock_gettime(CLOCK_MONOTONIC, {279, 492438532}) = 0
clock_gettime(CLOCK_MONOTONIC, {279, 493018366}) = 0
epoll_wait(43, {{EPOLLIN, {u32=16430512, u64=16430512}}}, 32, 0) = 1
recvmsg(112, {msg_name(0)=NULL, msg_iov(2)=[{"\26\0\0\0\1\0\24\0\"\0\0\0\0\0\0\0\0\0\0\0\26\0\0\0\2\0\30\0\0\0\0\0"..., 576}, {"", 3520}], msg_controllen=0, msg_flags=MSG_CMSG_CLOEXEC}, MSG_DONTWAIT|MSG_CMSG_CLOEXEC) = 52
sendmsg(112, {msg_name(0)=NULL, msg_iov(1)=[{"&\0\0\0\0\0\10\0", 8}], msg_controllen=0, msg_flags=0}, MSG_DONTWAIT|MSG_NOSIGNAL) = 8
clock_gettime(CLOCK_MONOTONIC, {279, 496710993}) = 0
poll([{fd=3, events=POLLIN}, {fd=21, events=POLLIN}, {fd=24, events=POLLIN}, {fd=25, events=POLLIN}, {fd=26, events=POLLIN}, {fd=27, events=POLLIN}, {fd=29, events=POLLIN}, {fd=30, events=POLLIN}, {fd=34, events=POLLIN}, {fd=35, events=POLLIN}, {fd=37, events=POLLIN}, {fd=43, events=POLLIN}, {fd=48, events=POLLIN}, {fd=69, events=POLLIN}, {fd=84, events=POLLIN}, {fd=85, events=POLLIN}, {fd=88, events=POLLIN}, {fd=89, events=POLLIN}, {fd=110, events=POLLIN}], 19, 0) = 0 (Timeout)
read(3, 0xbebc23ec, 16) = -1 EAGAIN (Resource temporarily unavailable)
clock_gettime(CLOCK_MONOTONIC, {279, 498694635}) = 0
futex(0x861e44, FUTEX_WAKE_OP_PRIVATE, 1, 1, 0x861e40, {FUTEX_OP_SET, 0, FUTEX_OP_CMP_GT, 1}) = 1
futex(0x861e28, FUTEX_WAKE_PRIVATE, 1) = 1
futex(0x861cf4, FUTEX_WAKE_PRIVATE, 1) = 1
futex(0x861cc4, FUTEX_WAKE_PRIVATE, 1) = 1
futex(0x861dd0, FUTEX_WAKE_PRIVATE, 1) = 1
futex(0x861de8, FUTEX_WAKE_PRIVATE, 1) = 1
futex(0x861dec, FUTEX_WAIT_PRIVATE, 687, NULL) = -1 EAGAIN (Resource temporarily unavailable)
futex(0x861dd0, FUTEX_WAKE_PRIVATE, 1) = 0
epoll_wait(43, {}, 32, 0) = 0
sendmsg(112, {msg_name(0)=NULL, msg_iov(1)=[{"$\0\0\0\0\0\10\0", 8}], msg_controllen=0, msg_flags=0}, MSG_DONTWAIT|MSG_NOSIGNAL) = 8
poll([{fd=3, events=POLLIN}, {fd=21, events=POLLIN}, {fd=24, events=POLLIN}, {fd=25, events=POLLIN}, {fd=26, events=POLLIN}, {fd=27, events=POLLIN}, {fd=29, events=POLLIN}, {fd=30, events=POLLIN}, {fd=34, events=POLLIN}, {fd=35, events=POLLIN}, {fd=37, events=POLLIN}, {fd=43, events=POLLIN}, {fd=48, events=POLLIN}, {fd=69, events=POLLIN}, {fd=84, events=POLLIN}, {fd=85, events=POLLIN}, {fd=88, events=POLLIN}, {fd=89, events=POLLIN}, {fd=110, events=POLLIN}], 19, 0) = 1 ([{fd=3, revents=POLLIN}])
clock_gettime(CLOCK_MONOTONIC, {279, 520179010}) = 0
clock_gettime(CLOCK_MONOTONIC, {279, 520941949}) = 0
clock_gettime(CLOCK_MONOTONIC, {279, 521125055}) = 0
clock_gettime(CLOCK_MONOTONIC, {279, 521735406}) = 0
clock_gettime(CLOCK_MONOTONIC, {279, 522498346}) = 0
clock_gettime(CLOCK_MONOTONIC, {279, 523108697}) = 0
clock_gettime(CLOCK_MONOTONIC, {279, 523352838}) = 0
clock_gettime(CLOCK_MONOTONIC, {279, 524085259}) = 0
clock_gettime(CLOCK_MONOTONIC, {279, 524359918}) = 0
clock_gettime(CLOCK_MONOTONIC, {279, 524543023}) = 0
epoll_wait(43, {}, 32, 0) = 0
clock_gettime(CLOCK_MONOTONIC, {279, 526038384}) = 0
poll([{fd=3, events=POLLIN}, {fd=21, events=POLLIN}, {fd=24, events=POLLIN}, {fd=25, events=POLLIN}, {fd=26, events=POLLIN}, {fd=27, events=POLLIN}, {fd=29, events=POLLIN}, {fd=30, events=POLLIN}, {fd=34, events=POLLIN}, {fd=35, events=POLLIN}, {fd=37, events=POLLIN}, {fd=43, events=POLLIN}, {fd=48, events=POLLIN}, {fd=69, events=POLLIN}, {fd=84, events=POLLIN}, {fd=85, events=POLLIN}, {fd=88, events=POLLIN}, {fd=89, events=POLLIN}, {fd=110, events=POLLIN}], 19, 1402) = 1 ([{fd=3, revents=POLLIN}])
read(3, "\6\0\0\0\0\0\0\0", 16) = 8
clock_gettime(CLOCK_MONOTONIC, {279, 529029107}) = 0
clock_gettime(CLOCK_MONOTONIC, {279, 529608941}) = 0
poll([{fd=3, events=POLLIN}, {fd=21, events=POLLIN}, {fd=24, events=POLLIN}, {fd=25, events=POLLIN}, {fd=26, events=POLLIN}, {fd=27, events=POLLIN}, {fd=29, events=POLLIN}, {fd=30, events=POLLIN}, {fd=34, events=POLLIN}, {fd=35, events=POLLIN}, {fd=37, events=POLLIN}, {fd=43, events=POLLIN}, {fd=48, events=POLLIN}, {fd=69, events=POLLIN}, {fd=84, events=POLLIN}, {fd=85, events=POLLIN}, {fd=88, events=POLLIN}, {fd=89, events=POLLIN}, {fd=110, events=POLLIN}], 19, 1398) = 1 ([{fd=26, revents=POLLIN}])
read(3, 0xbebc23ec, 16) = -1 EAGAIN (Resource temporarily unavailable)
clock_gettime(CLOCK_MONOTONIC, {279, 635504937}) = 0
read(26, "\27\1\0\0z\210\t\0\5\0\0\0\0\0\0\0\27\1\0\0\231\210\t\0\0\0\0\0\0\0\0\0", 512) = 32
epoll_wait(43, {}, 32, 0) = 0
clock_gettime(CLOCK_MONOTONIC, {279, 639075494}) = 0
poll([{fd=3, events=POLLIN}, {fd=21, events=POLLIN}, {fd=24, events=POLLIN}, {fd=25, events=POLLIN}, {fd=26, events=POLLIN}, {fd=27, events=POLLIN}, {fd=29, events=POLLIN}, {fd=30, events=POLLIN}, {fd=34, events=POLLIN}, {fd=35, events=POLLIN}, {fd=37, events=POLLIN}, {fd=43, events=POLLIN}, {fd=48, events=POLLIN}, {fd=69, events=POLLIN}, {fd=84, events=POLLIN}, {fd=85, events=POLLIN}, {fd=88, events=POLLIN}, {fd=89, events=POLLIN}, {fd=110, events=POLLIN}], 19, 1289) = 1 ([{fd=34, revents=POLLIN}])
clock_gettime(CLOCK_MONOTONIC, {279, 866614556}) = 0
recvmsg(34, {msg_name(0)=NULL, msg_iov(1)=[{"l\4\1\1\334\0\0\0\347\1\0\0m\0\0\0\1\1o\0\1\0\0\0/\0\0\0\0\0\0\0"..., 2048}], msg_controllen=0, msg_flags=MSG_CMSG_CLOEXEC}, MSG_CMSG_CLOEXEC) = 348
recvmsg(34, 0xbebc2050, MSG_CMSG_CLOEXEC) = -1 EAGAIN (Resource temporarily unavailable)
write(3, "\1\0\0\0\0\0\0\0", 8) = 8
write(3, "\1\0\0\0\0\0\0\0", 8) = 8
epoll_wait(43, {}, 32, 0) = 0
clock_gettime(CLOCK_MONOTONIC, {279, 872107721}) = 0
poll([{fd=3, events=POLLIN}, {fd=21, events=POLLIN}, {fd=24, events=POLLIN}, {fd=25, events=POLLIN}, {fd=26, events=POLLIN}, {fd=27, events=POLLIN}, {fd=29, events=POLLIN}, {fd=30, events=POLLIN}, {fd=34, events=POLLIN}, {fd=35, events=POLLIN}, {fd=37, events=POLLIN}, {fd=43, events=POLLIN}, {fd=48, events=POLLIN}, {fd=69, events=POLLIN}, {fd=84, events=POLLIN}, {fd=85, events=POLLIN}, {fd=88, events=POLLIN}, {fd=89, events=POLLIN}, {fd=110, events=POLLIN}], 19, 0) = 1 ([{fd=3, revents=POLLIN}])
clock_gettime(CLOCK_MONOTONIC, {279, 874671197}) = 0
epoll_wait(43, {}, 32, 0) = 0
clock_gettime(CLOCK_MONOTONIC, {279, 876654840}) = 0
poll([{fd=3, events=POLLIN}, {fd=21, events=POLLIN}, {fd=24, events=POLLIN}, {fd=25, events=POLLIN}, {fd=26, events=POLLIN}, {fd=27, events=POLLIN}, {fd=29, events=POLLIN}, {fd=30, events=POLLIN}, {fd=34, events=POLLIN}, {fd=35, events=POLLIN}, {fd=37, events=POLLIN}, {fd=43, events=POLLIN}, {fd=48, events=POLLIN}, {fd=69, events=POLLIN}, {fd=84, events=POLLIN}, {fd=85, events=POLLIN}, {fd=88, events=POLLIN}, {fd=89, events=POLLIN}, {fd=110, events=POLLIN}], 19, 1051) = 1 ([{fd=3, revents=POLLIN}])
read(3, "\2\0\0\0\0\0\0\0", 16) = 8
clock_gettime(CLOCK_MONOTONIC, {279, 881995416}) = 0
clock_gettime(CLOCK_MONOTONIC, {279, 882666803}) = 0
poll([{fd=3, events=POLLIN}, {fd=21, events=POLLIN}, {fd=24, events=POLLIN}, {fd=25, events=POLLIN}, {fd=26, events=POLLIN}, {fd=27, events=POLLIN}, {fd=29, events=POLLIN}, {fd=30, events=POLLIN}, {fd=34, events=POLLIN}, {fd=35, events=POLLIN}, {fd=37, events=POLLIN}, {fd=43, events=POLLIN}, {fd=48, events=POLLIN}, {fd=69, events=POLLIN}, {fd=84, events=POLLIN}, {fd=85, events=POLLIN}, {fd=88, events=POLLIN}, {fd=89, events=POLLIN}, {fd=110, events=POLLIN}], 19, 1045) = 0 (Timeout)
read(3, 0xbebc23ec, 16) = -1 EAGAIN (Resource temporarily unavailable)
clock_gettime(CLOCK_MONOTONIC, {280, 931189754}) = 0
clock_gettime(CLOCK_MONOTONIC, {280, 931861140}) = 0
clock_gettime(CLOCK_MONOTONIC, {280, 932624080}) = 0
write(3, "\1\0\0\0\0\0\0\0", 8) = 8
write(3, "\1\0\0\0\0\0\0\0", 8) = 8
clock_gettime(CLOCK_MONOTONIC, {280, 934943416}) = 0
clock_gettime(CLOCK_MONOTONIC, {280, 935340144}) = 0
epoll_wait(43, {}, 32, 0) = 0
poll([{fd=3, events=POLLIN}, {fd=21, events=POLLIN}, {fd=24, events=POLLIN}, {fd=25, events=POLLIN}, {fd=26, events=POLLIN}, {fd=27, events=POLLIN}, {fd=29, events=POLLIN}, {fd=30, events=POLLIN}, {fd=34, events=POLLIN}, {fd=35, events=POLLIN}, {fd=37, events=POLLIN}, {fd=43, events=POLLIN}, {fd=48, events=POLLIN}, {fd=69, events=POLLIN}, {fd=84, events=POLLIN}, {fd=85, events=POLLIN}, {fd=88, events=POLLIN}, {fd=89, events=POLLIN}, {fd=110, events=POLLIN}], 19, 0) = 1 ([{fd=3, revents=POLLIN}])
write(3, "\1\0\0\0\0\0\0\0", 8) = 8
write(3, "\1\0\0\0\0\0\0\0", 8) = 8
write(3, "\1\0\0\0\0\0\0\0", 8) = 8
epoll_wait(43, {}, 32, 0) = 0
sendmsg(112, {msg_name(0)=NULL, msg_iov(1)=[{"\27\0\0\0\0\0\f\0\3\0\0\0", 12}], msg_controllen=0, msg_flags=0}, MSG_DONTWAIT|MSG_NOSIGNAL) = 12
clock_gettime(CLOCK_MONOTONIC, {280, 945105769}) = 0
poll([{fd=3, events=POLLIN}, {fd=21, events=POLLIN}, {fd=24, events=POLLIN}, {fd=25, events=POLLIN}, {fd=26, events=POLLIN}, {fd=27, events=POLLIN}, {fd=29, events=POLLIN}, {fd=30, events=POLLIN}, {fd=34, events=POLLIN}, {fd=35, events=POLLIN}, {fd=37, events=POLLIN}, {fd=43, events=POLLIN}, {fd=48, events=POLLIN}, {fd=69, events=POLLIN}, {fd=84, events=POLLIN}, {fd=85, events=POLLIN}, {fd=88, events=POLLIN}, {fd=89, events=POLLIN}, {fd=110, events=POLLIN}], 19, 0) = 2 ([{fd=3, revents=POLLIN}, {fd=43, revents=POLLIN}])
read(3, "\5\0\0\0\0\0\0\0", 16) = 8
clock_gettime(CLOCK_MONOTONIC, {280, 948523737}) = 0
write(3, "\1\0\0\0\0\0\0\0", 8) = 8
clock_gettime(CLOCK_MONOTONIC, {280, 950232721}) = 0
write(3, "\1\0\0\0\0\0\0\0", 8) = 8
epoll_wait(43, {{EPOLLIN, {u32=16430512, u64=16430512}}}, 32, 0) = 1
recvmsg(112, {msg_name(0)=NULL, msg_iov(2)=[{"\27\0\0\0\0\0\f\0\3\0\0\0", 524}, {"", 3572}], msg_controllen=0, msg_flags=MSG_CMSG_CLOEXEC}, MSG_DONTWAIT|MSG_CMSG_CLOEXEC) = 12
write(3, "\1\0\0\0\0\0\0\0", 8) = 8
write(3, "\1\0\0\0\0\0\0\0", 8) = 8
clock_gettime(CLOCK_MONOTONIC, {280, 954688287}) = 0
clock_gettime(CLOCK_MONOTONIC, {280, 955664850}) = 0
futex(0x861e44, FUTEX_WAKE_OP_PRIVATE, 1, 1, 0x861e40, {FUTEX_OP_SET, 0, FUTEX_OP_CMP_GT, 1}) = 1
futex(0x861e28, FUTEX_WAKE_PRIVATE, 1) = 1
futex(0x861cf4, FUTEX_WAKE_PRIVATE, 1) = 1
futex(0x861cc4, FUTEX_WAKE_PRIVATE, 1) = 1
futex(0x861dd0, FUTEX_WAKE_PRIVATE, 1) = 1
futex(0x861de8, FUTEX_WAKE_PRIVATE, 1) = 1
futex(0x861dec, FUTEX_WAIT_PRIVATE, 689, NULL) = -1 EAGAIN (Resource temporarily unavailable)
futex(0x861dd0, FUTEX_WAKE_PRIVATE, 1) = 0
epoll_wait(43, {}, 32, 0) = 0
poll([{fd=3, events=POLLIN}, {fd=21, events=POLLIN}, {fd=24, events=POLLIN}, {fd=25, events=POLLIN}, {fd=26, events=POLLIN}, {fd=27, events=POLLIN}, {fd=29, events=POLLIN}, {fd=30, events=POLLIN}, {fd=34, events=POLLIN}, {fd=35, events=POLLIN}, {fd=37, events=POLLIN}, {fd=43, events=POLLIN}, {fd=48, events=POLLIN}, {fd=69, events=POLLIN}, {fd=84, events=POLLIN}, {fd=85, events=POLLIN}, {fd=88, events=POLLIN}, {fd=89, events=POLLIN}, {fd=110, events=POLLIN}], 19, 0) = 1 ([{fd=3, revents=POLLIN}])
read(3, "\6\0\0\0\0\0\0\0", 16) = 8
clock_gettime(CLOCK_MONOTONIC, {280, 964728570}) = 0
clock_gettime(CLOCK_MONOTONIC, {280, 965003228}) = 0
clock_gettime(CLOCK_MONOTONIC, {280, 965766168}) = 0
clock_gettime(CLOCK_MONOTONIC, {280, 966712213}) = 0
epoll_wait(43, {}, 32, 0) = 0
clock_gettime(CLOCK_MONOTONIC, {280, 968054986}) = 0
poll([{fd=3, events=POLLIN}, {fd=21, events=POLLIN}, {fd=24, events=POLLIN}, {fd=25, events=POLLIN}, {fd=26, events=POLLIN}, {fd=27, events=POLLIN}, {fd=29, events=POLLIN}, {fd=30, events=POLLIN}, {fd=34, events=POLLIN}, {fd=35, events=POLLIN}, {fd=37, events=POLLIN}, {fd=43, events=POLLIN}, {fd=48, events=POLLIN}, {fd=69, events=POLLIN}, {fd=84, events=POLLIN}, {fd=85, events=POLLIN}, {fd=88, events=POLLIN}, {fd=89, events=POLLIN}, {fd=110, events=POLLIN}], 19, 5032) = 1 ([{fd=43, revents=POLLIN}])
read(3, 0xbebc23ec, 16) = -1 EAGAIN (Resource temporarily unavailable)
clock_gettime(CLOCK_MONOTONIC, {281, 365699030}) = 0
epoll_wait(43, {{EPOLLIN, {u32=11510104, u64=11510104}}}, 32, 0) = 1
recvmsg(82, {msg_name(0)=NULL, msg_iov(2)=[{"\22\0\0\0\1\0\f\0\v\0\0\0\22\0\0\0\0\0\f\0\3\0\0\0\4\0\0\0\0\0\f\0"..., 3492}, {"", 604}], msg_controllen=0, msg_flags=MSG_CMSG_CLOEXEC}, MSG_DONTWAIT|MSG_CMSG_CLOEXEC) = 48
fcntl64(47, F_DUPFD_CLOEXEC, 0) = 95
sendmsg(82, {msg_name(0)=NULL, msg_iov(1)=[{"\v\0\0\0\0\0\20\0\1\0\0\0\\\260\0\0", 16}], msg_controllen=16, {cmsg_len=16, cmsg_level=SOL_SOCKET, cmsg_type=SCM_RIGHTS, {95}}, msg_flags=0}, MSG_DONTWAIT|MSG_NOSIGNAL) = 16
close(95) = 0
epoll_wait(43, {}, 32, 0) = 0
clock_gettime(CLOCK_MONOTONIC, {281, 371649957}) = 0
poll([{fd=3, events=POLLIN}, {fd=21, events=POLLIN}, {fd=24, events=POLLIN}, {fd=25, events=POLLIN}, {fd=26, events=POLLIN}, {fd=27, events=POLLIN}, {fd=29, events=POLLIN}, {fd=30, events=POLLIN}, {fd=34, events=POLLIN}, {fd=35, events=POLLIN}, {fd=37, events=POLLIN}, {fd=43, events=POLLIN}, {fd=48, events=POLLIN}, {fd=69, events=POLLIN}, {fd=84, events=POLLIN}, {fd=85, events=POLLIN}, {fd=88, events=POLLIN}, {fd=89, events=POLLIN}, {fd=110, events=POLLIN}], 19, 4629) = 1 ([{fd=34, revents=POLLIN}])
clock_gettime(CLOCK_MONOTONIC, {285, 872107708}) = 0
recvmsg(34, {msg_name(0)=NULL, msg_iov(1)=[{"l\4\1\1\334\0\0\0\351\1\0\0m\0\0\0\1\1o\0\1\0\0\0/\0\0\0\0\0\0\0"..., 2048}], msg_controllen=0, msg_flags=MSG_CMSG_CLOEXEC}, MSG_CMSG_CLOEXEC) = 348
recvmsg(34, 0xbebc2050, MSG_CMSG_CLOEXEC) = -1 EAGAIN (Resource temporarily unavailable)
write(3, "\1\0\0\0\0\0\0\0", 8) = 8
write(3, "\1\0\0\0\0\0\0\0", 8) = 8
epoll_wait(43, {}, 32, 0) = 0
clock_gettime(CLOCK_MONOTONIC, {285, 876837933}) = 0
poll([{fd=3, events=POLLIN}, {fd=21, events=POLLIN}, {fd=24, events=POLLIN}, {fd=25, events=POLLIN}, {fd=26, events=POLLIN}, {fd=27, events=POLLIN}, {fd=29, events=POLLIN}, {fd=30, events=POLLIN}, {fd=34, events=POLLIN}, {fd=35, events=POLLIN}, {fd=37, events=POLLIN}, {fd=43, events=POLLIN}, {fd=48, events=POLLIN}, {fd=69, events=POLLIN}, {fd=84, events=POLLIN}, {fd=85, events=POLLIN}, {fd=88, events=POLLIN}, {fd=89, events=POLLIN}, {fd=110, events=POLLIN}], 19, 0) = 1 ([{fd=3, revents=POLLIN}])
clock_gettime(CLOCK_MONOTONIC, {285, 880103314}) = 0
epoll_wait(43, {}, 32, 0) = 0
clock_gettime(CLOCK_MONOTONIC, {285, 882117474}) = 0
poll([{fd=3, events=POLLIN}, {fd=21, events=POLLIN}, {fd=24, events=POLLIN}, {fd=25, events=POLLIN}, {fd=26, events=POLLIN}, {fd=27, events=POLLIN}, {fd=29, events=POLLIN}, {fd=30, events=POLLIN}, {fd=34, events=POLLIN}, {fd=35, events=POLLIN}, {fd=37, events=POLLIN}, {fd=43, events=POLLIN}, {fd=48, events=POLLIN}, {fd=69, events=POLLIN}, {fd=84, events=POLLIN}, {fd=85, events=POLLIN}, {fd=88, events=POLLIN}, {fd=89, events=POLLIN}, {fd=110, events=POLLIN}], 19, 118) = 1 ([{fd=3, revents=POLLIN}])
read(3, "\2\0\0\0\0\0\0\0", 16) = 8
clock_gettime(CLOCK_MONOTONIC, {285, 893836224}) = 0
clock_gettime(CLOCK_MONOTONIC, {285, 894873822}) = 0
poll([{fd=3, events=POLLIN}, {fd=21, events=POLLIN}, {fd=24, events=POLLIN}, {fd=25, events=POLLIN}, {fd=26, events=POLLIN}, {fd=27, events=POLLIN}, {fd=29, events=POLLIN}, {fd=30, events=POLLIN}, {fd=34, events=POLLIN}, {fd=35, events=POLLIN}, {fd=37, events=POLLIN}, {fd=43, events=POLLIN}, {fd=48, events=POLLIN}, {fd=69, events=POLLIN}, {fd=84, events=POLLIN}, {fd=85, events=POLLIN}, {fd=88, events=POLLIN}, {fd=89, events=POLLIN}, {fd=110, events=POLLIN}], 19, 106) = 0 (Timeout)
read(3, 0xbebc23ec, 16) = -1 EAGAIN (Resource temporarily unavailable)
clock_gettime(CLOCK_MONOTONIC, {286, 12824260}) = 0
clock_gettime(CLOCK_MONOTONIC, {286, 14594280}) = 0
clock_gettime(CLOCK_MONOTONIC, {286, 15326701}) = 0
write(3, "\1\0\0\0\0\0\0\0", 8) = 8
write(3, "\1\0\0\0\0\0\0\0", 8) = 8
clock_gettime(CLOCK_MONOTONIC, {286, 17188274}) = 0
clock_gettime(CLOCK_MONOTONIC, {286, 17615520}) = 0
epoll_wait(43, {}, 32, 0) = 0
poll([{fd=3, events=POLLIN}, {fd=21, events=POLLIN}, {fd=24, events=POLLIN}, {fd=25, events=POLLIN}, {fd=26, events=POLLIN}, {fd=27, events=POLLIN}, {fd=29, events=POLLIN}, {fd=30, events=POLLIN}, {fd=34, events=POLLIN}, {fd=35, events=POLLIN}, {fd=37, events=POLLIN}, {fd=43, events=POLLIN}, {fd=48, events=POLLIN}, {fd=69, events=POLLIN}, {fd=84, events=POLLIN}, {fd=85, events=POLLIN}, {fd=88, events=POLLIN}, {fd=89, events=POLLIN}, {fd=110, events=POLLIN}], 19, 0) = 1 ([{fd=3, revents=POLLIN}])
write(3, "\1\0\0\0\0\0\0\0", 8) = 8
write(3, "\1\0\0\0\0\0\0\0", 8) = 8
write(3, "\1\0\0\0\0\0\0\0", 8) = 8
epoll_wait(43, {}, 32, 0) = 0
sendmsg(112, {msg_name(0)=NULL, msg_iov(1)=[{"\27\0\0\0\0\0\f\0\4\0\0\0", 12}], msg_controllen=0, msg_flags=0}, MSG_DONTWAIT|MSG_NOSIGNAL) = 12
clock_gettime(CLOCK_MONOTONIC, {286, 27106487}) = 0
poll([{fd=3, events=POLLIN}, {fd=21, events=POLLIN}, {fd=24, events=POLLIN}, {fd=25, events=POLLIN}, {fd=26, events=POLLIN}, {fd=27, events=POLLIN}, {fd=29, events=POLLIN}, {fd=30, events=POLLIN}, {fd=34, events=POLLIN}, {fd=35, events=POLLIN}, {fd=37, events=POLLIN}, {fd=43, events=POLLIN}, {fd=48, events=POLLIN}, {fd=69, events=POLLIN}, {fd=84, events=POLLIN}, {fd=85, events=POLLIN}, {fd=88, events=POLLIN}, {fd=89, events=POLLIN}, {fd=110, events=POLLIN}], 19, 0) = 2 ([{fd=3, revents=POLLIN}, {fd=43, revents=POLLIN}])
read(3, "\5\0\0\0\0\0\0\0", 16) = 8
clock_gettime(CLOCK_MONOTONIC, {286, 31409466}) = 0
write(3, "\1\0\0\0\0\0\0\0", 8) = 8
clock_gettime(CLOCK_MONOTONIC, {286, 32721722}) = 0
write(3, "\1\0\0\0\0\0\0\0", 8) = 8
epoll_wait(43, {{EPOLLIN, {u32=16430512, u64=16430512}}}, 32, 0) = 1
recvmsg(112, {msg_name(0)=NULL, msg_iov(2)=[{"\27\0\0\0\0\0\f\0\4\0\0\0", 512}, {"", 3584}], msg_controllen=0, msg_flags=MSG_CMSG_CLOEXEC}, MSG_DONTWAIT|MSG_CMSG_CLOEXEC) = 12
write(3, "\1\0\0\0\0\0\0\0", 8) = 8
write(3, "\1\0\0\0\0\0\0\0", 8) = 8
clock_gettime(CLOCK_MONOTONIC, {286, 36963665}) = 0
clock_gettime(CLOCK_MONOTONIC, {286, 38306438}) = 0
futex(0x861e44, FUTEX_WAKE_OP_PRIVATE, 1, 1, 0x861e40, {FUTEX_OP_SET, 0, FUTEX_OP_CMP_GT, 1}) = 1
futex(0x861e28, FUTEX_WAKE_PRIVATE, 1) = 1
futex(0x861cf4, FUTEX_WAKE_PRIVATE, 1) = 1
futex(0x861cc4, FUTEX_WAKE_PRIVATE, 1) = 1
futex(0x861dd0, FUTEX_WAKE_PRIVATE, 1) = 1
futex(0x861de8, FUTEX_WAKE_PRIVATE, 1) = 1
futex(0x861dec, FUTEX_WAIT_PRIVATE, 691, NULL) = -1 EAGAIN (Resource temporarily unavailable)
futex(0x861dd0, FUTEX_WAKE_PRIVATE, 1) = 0
epoll_wait(43, {}, 32, 0) = 0
poll([{fd=3, events=POLLIN}, {fd=21, events=POLLIN}, {fd=24, events=POLLIN}, {fd=25, events=POLLIN}, {fd=26, events=POLLIN}, {fd=27, events=POLLIN}, {fd=29, events=POLLIN}, {fd=30, events=POLLIN}, {fd=34, events=POLLIN}, {fd=35, events=POLLIN}, {fd=37, events=POLLIN}, {fd=43, events=POLLIN}, {fd=48, events=POLLIN}, {fd=69, events=POLLIN}, {fd=84, events=POLLIN}, {fd=85, events=POLLIN}, {fd=88, events=POLLIN}, {fd=89, events=POLLIN}, {fd=110, events=POLLIN}], 19, 0) = 1 ([{fd=3, revents=POLLIN}])
read(3, "\6\0\0\0\0\0\0\0", 16) = 8
clock_gettime(CLOCK_MONOTONIC, {286, 48407757}) = 0
clock_gettime(CLOCK_MONOTONIC, {286, 49292767}) = 0
clock_gettime(CLOCK_MONOTONIC, {286, 49597943}) = 0
clock_gettime(CLOCK_MONOTONIC, {286, 50330365}) = 0
epoll_wait(43, {}, 32, 0) = 0
clock_gettime(CLOCK_MONOTONIC, {286, 51764691}) = 0
poll([{fd=3, events=POLLIN}, {fd=21, events=POLLIN}, {fd=24, events=POLLIN}, {fd=25, events=POLLIN}, {fd=26, events=POLLIN}, {fd=27, events=POLLIN}, {fd=29, events=POLLIN}, {fd=30, events=POLLIN}, {fd=34, events=POLLIN}, {fd=35, events=POLLIN}, {fd=37, events=POLLIN}, {fd=43, events=POLLIN}, {fd=48, events=POLLIN}, {fd=69, events=POLLIN}, {fd=84, events=POLLIN}, {fd=85, events=POLLIN}, {fd=88, events=POLLIN}, {fd=89, events=POLLIN}, {fd=110, events=POLLIN}], 19, 4949) = 1 ([{fd=26, revents=POLLIN}])
read(3, 0xbebc23ec, 16) = -1 EAGAIN (Resource temporarily unavailable)
clock_gettime(CLOCK_MONOTONIC, {287, 276129924}) = 0
read(26, "\37\1\0\0\250\360\3\0\1\0t\0\1\0\0\0\37\1\0\0\345\360\3\0\0\0\0\0\0\0\0\0", 512) = 32
epoll_wait(43, {}, 32, 0) = 0
clock_gettime(CLOCK_MONOTONIC, {287, 286505901}) = 0
poll([{fd=3, events=POLLIN}, {fd=21, events=POLLIN}, {fd=24, events=POLLIN}, {fd=25, events=POLLIN}, {fd=26, events=POLLIN}, {fd=27, events=POLLIN}, {fd=29, events=POLLIN}, {fd=30, events=POLLIN}, {fd=34, events=POLLIN}, {fd=35, events=POLLIN}, {fd=37, events=POLLIN}, {fd=43, events=POLLIN}, {fd=48, events=POLLIN}, {fd=69, events=POLLIN}, {fd=84, events=POLLIN}, {fd=85, events=POLLIN}, {fd=88, events=POLLIN}, {fd=89, events=POLLIN}, {fd=110, events=POLLIN}], 19, 3714) = 1 ([{fd=26, revents=POLLIN}])
clock_gettime(CLOCK_MONOTONIC, {287, 549872599}) = 0
read(26, "\37\1\0\0U\316\7\0\1\0t\0\0\0\0\0\37\1\0\0\222\316\7\0\0\0\0\0\0\0\0\0", 512) = 32
epoll_wait(43, {}, 32, 0) = 0
clock_gettime(CLOCK_MONOTONIC, {287, 555518352}) = 0
poll([{fd=3, events=POLLIN}, {fd=21, events=POLLIN}, {fd=24, events=POLLIN}, {fd=25, events=POLLIN}, {fd=26, events=POLLIN}, {fd=27, events=POLLIN}, {fd=29, events=POLLIN}, {fd=30, events=POLLIN}, {fd=34, events=POLLIN}, {fd=35, events=POLLIN}, {fd=37, events=POLLIN}, {fd=43, events=POLLIN}, {fd=48, events=POLLIN}, {fd=69, events=POLLIN}, {fd=84, events=POLLIN}, {fd=85, events=POLLIN}, {fd=88, events=POLLIN}, {fd=89, events=POLLIN}, {fd=110, events=POLLIN}], 19, 3445) = 1 ([{fd=34, revents=POLLIN}])
clock_gettime(CLOCK_MONOTONIC, {287, 565619671}) = 0
recvmsg(34, {msg_name(0)=NULL, msg_iov(1)=[{"l\1\1\1l\0\0\0\7\1\0\0\235\0\0\0\1\1o\0\v\0\0\0/screenl"..., 2048}], msg_controllen=0, msg_flags=MSG_CMSG_CLOEXEC}, MSG_CMSG_CLOEXEC) = 423
recvmsg(34, 0xbebc2050, MSG_CMSG_CLOEXEC) = -1 EAGAIN (Resource temporarily unavailable)
write(3, "\1\0\0\0\0\0\0\0", 8) = 8
write(3, "\1\0\0\0\0\0\0\0", 8) = 8
write(3, "\1\0\0\0\0\0\0\0", 8) = 8
epoll_wait(43, {}, 32, 0) = 0
clock_gettime(CLOCK_MONOTONIC, {287, 578192913}) = 0
poll([{fd=3, events=POLLIN}, {fd=21, events=POLLIN}, {fd=24, events=POLLIN}, {fd=25, events=POLLIN}, {fd=26, events=POLLIN}, {fd=27, events=POLLIN}, {fd=29, events=POLLIN}, {fd=30, events=POLLIN}, {fd=34, events=POLLIN}, {fd=35, events=POLLIN}, {fd=37, events=POLLIN}, {fd=43, events=POLLIN}, {fd=48, events=POLLIN}, {fd=69, events=POLLIN}, {fd=84, events=POLLIN}, {fd=85, events=POLLIN}, {fd=88, events=POLLIN}, {fd=89, events=POLLIN}, {fd=110, events=POLLIN}], 19, 0) = 1 ([{fd=3, revents=POLLIN}])
clock_gettime(CLOCK_MONOTONIC, {287, 584876263}) = 0
write(3, "\1\0\0\0\0\0\0\0", 8) = 8
write(3, "\1\0\0\0\0\0\0\0", 8) = 8
epoll_wait(43, {}, 32, 0) = 0
clock_gettime(CLOCK_MONOTONIC, {287, 590460980}) = 0
poll([{fd=3, events=POLLIN}, {fd=21, events=POLLIN}, {fd=24, events=POLLIN}, {fd=25, events=POLLIN}, {fd=26, events=POLLIN}, {fd=27, events=POLLIN}, {fd=29, events=POLLIN}, {fd=30, events=POLLIN}, {fd=34, events=POLLIN}, {fd=35, events=POLLIN}, {fd=37, events=POLLIN}, {fd=43, events=POLLIN}, {fd=48, events=POLLIN}, {fd=69, events=POLLIN}, {fd=84, events=POLLIN}, {fd=85, events=POLLIN}, {fd=88, events=POLLIN}, {fd=89, events=POLLIN}, {fd=110, events=POLLIN}], 19, 0) = 1 ([{fd=3, revents=POLLIN}])
read(3, "\5\0\0\0\0\0\0\0", 16) = 8
clock_gettime(CLOCK_MONOTONIC, {287, 595313275}) = 0
write(3, "\1\0\0\0\0\0\0\0", 8) = 8
clock_gettime(CLOCK_MONOTONIC, {287, 598700726}) = 0
write(70, "\1\0\0\0\0\0\0\0", 8) = 8
write(70, "\1\0\0\0\0\0\0\0", 8) = 8
write(70, "\1\0\0\0\0\0\0\0", 8) = 8
write(70, "\1\0\0\0\0\0\0\0", 8) = 8
write(3, "\1\0\0\0\0\0\0\0", 8) = 8
epoll_wait(43, {}, 32, 0) = 0
clock_gettime(CLOCK_MONOTONIC, {287, 632819380}) = 0
poll([{fd=3, events=POLLIN}, {fd=21, events=POLLIN}, {fd=24, events=POLLIN}, {fd=25, events=POLLIN}, {fd=26, events=POLLIN}, {fd=27, events=POLLIN}, {fd=29, events=POLLIN}, {fd=30, events=POLLIN}, {fd=34, events=POLLIN}, {fd=35, events=POLLIN}, {fd=37, events=POLLIN}, {fd=43, events=POLLIN}, {fd=48, events=POLLIN}, {fd=69, events=POLLIN}, {fd=84, events=POLLIN}, {fd=85, events=POLLIN}, {fd=88, events=POLLIN}, {fd=89, events=POLLIN}, {fd=110, events=POLLIN}], 19, 0) = 2 ([{fd=3, revents=POLLIN}, {fd=34, revents=POLLIN}])
read(3, "\10\0\0\0\0\0\0\0", 16) = 8
clock_gettime(CLOCK_MONOTONIC, {287, 641455855}) = 0
clock_gettime(CLOCK_MONOTONIC, {287, 641730513}) = 0
clock_gettime(CLOCK_MONOTONIC, {287, 641944137}) = 0
recvmsg(34, {msg_name(0)=NULL, msg_iov(1)=[{"l\1\0\1\4\0\0\0\t\1\0\0\225\0\0\0\1\1o\0\1\0\0\0/\0\0\0\0\0\0\0"..., 2048}], msg_controllen=0, msg_flags=MSG_CMSG_CLOEXEC}, MSG_CMSG_CLOEXEC) = 172
recvmsg(34, 0xbebc2050, MSG_CMSG_CLOEXEC) = -1 EAGAIN (Resource temporarily unavailable)
write(3, "\1\0\0\0\0\0\0\0", 8) = 8
write(3, "\1\0\0\0\0\0\0\0", 8) = 8
epoll_wait(43, {}, 32, 0) = 0
clock_gettime(CLOCK_MONOTONIC, {287, 651404586}) = 0
poll([{fd=3, events=POLLIN}, {fd=21, events=POLLIN}, {fd=24, events=POLLIN}, {fd=25, events=POLLIN}, {fd=26, events=POLLIN}, {fd=27, events=POLLIN}, {fd=29, events=POLLIN}, {fd=30, events=POLLIN}, {fd=34, events=POLLIN}, {fd=35, events=POLLIN}, {fd=37, events=POLLIN}, {fd=43, events=POLLIN}, {fd=48, events=POLLIN}, {fd=69, events=POLLIN}, {fd=84, events=POLLIN}, {fd=85, events=POLLIN}, {fd=88, events=POLLIN}, {fd=89, events=POLLIN}, {fd=110, events=POLLIN}], 19, 0) = 1 ([{fd=3, revents=POLLIN}])
read(3, "\6\0\0\0\0\0\0\0", 16) = 8
clock_gettime(CLOCK_MONOTONIC, {287, 654456344}) = 0
clock_gettime(CLOCK_MONOTONIC, {287, 655677047}) = 0
clock_gettime(CLOCK_MONOTONIC, {287, 655921187}) = 0
clock_gettime(CLOCK_MONOTONIC, {287, 656531539}) = 0
clock_gettime(CLOCK_MONOTONIC, {287, 657233443}) = 0
clock_gettime(CLOCK_MONOTONIC, {287, 657630172}) = 0
clock_gettime(CLOCK_MONOTONIC, {287, 658179488}) = 0
clock_gettime(CLOCK_MONOTONIC, {287, 658454146}) = 0
clock_gettime(CLOCK_MONOTONIC, {287, 658789840}) = 0
clock_gettime(CLOCK_MONOTONIC, {287, 659400191}) = 0
clock_gettime(CLOCK_MONOTONIC, {287, 660315719}) = 0
clock_gettime(CLOCK_MONOTONIC, {287, 660529342}) = 0
sendmsg(112, {msg_name(0)=NULL, msg_iov(1)=[{"\v\0\0\0\2\0\20\0\5\0\0\0\26\0\0\0", 16}], msg_controllen=0, msg_flags=0}, MSG_DONTWAIT|MSG_NOSIGNAL) = 16
```
306. write(3, "\1\0\0\0\0\0\0\0", 8) = 8
307. futex(0x861e44, FUTEX_WAKE_OP_PRIVATE, 1, 1, 0x861e40, {FUTEX_OP_SET, 0, FUTEX_OP_CMP_GT, 1}) = 1
308. futex(0x861e28, FUTEX_WAKE_PRIVATE, 1) = 1
309. futex(0x861cf4, FUTEX_WAKE_PRIVATE, 1) = 1
310. futex(0x861cc4, FUTEX_WAKE_PRIVATE, 1) = 1
311. futex(0x861dd0, FUTEX_WAKE_PRIVATE, 1) = 1
312. futex(0x861de8, FUTEX_WAKE_PRIVATE, 1) = 1
313. futex(0x861dec, FUTEX_WAIT_PRIVATE, 693, NULL) = -1 EAGAIN (Resource temporarily unavailable)
314. futex(0x861dd0, FUTEX_WAKE_PRIVATE, 1) = 0
315. futex(0x861e44, FUTEX_WAKE_OP_PRIVATE, 1, 1, 0x861e40, {FUTEX_OP_SET, 0, FUTEX_OP_CMP_GT, 1}) = 1
316. futex(0x861e28, FUTEX_WAKE_PRIVATE, 1) = 1
317. futex(0x861cf4, FUTEX_WAKE_PRIVATE, 1) = 1
318. futex(0x861cc4, FUTEX_WAKE_PRIVATE, 1) = 1
319. futex(0x861dd0, FUTEX_WAKE_PRIVATE, 1) = 1
320. futex(0x861de8, FUTEX_WAKE_PRIVATE, 1) = 1
321. futex(0x861dec, FUTEX_WAIT_PRIVATE, 695, NULL) = -1 EAGAIN (Resource temporarily unavailable)
322. futex(0x861dd0, FUTEX_WAKE_PRIVATE, 1) = 0
323. sendmsg(4, {msg_name(29)={sa_family=AF_LOCAL, sun_path="/run/systemd/journal/socket"}, msg_iov(13)=[{"MESSAGE=[D] HwComposerContext::s"..., 62}, {"\n", 1}, {"PRIORITY=7", 10}, {"\n", 1}, {"CODE_FILE=hwcomposer_context.cpp", 32}, {"\n", 1}, {"CODE_LINE=180", 13}, {"\n", 1}, {"CODE_FUNC=void HwComposerContext"..., 52}, {"\n", 1}, {"SYSLOG_IDENTIFIER=", 18}, {"lipstick", 8}, {"\n", 1}], msg_controllen=0, msg_flags=0}, MSG_NOSIGNAL) = 201
324. writev(6, [{"\3", 1}, {"qdhwcomposer\0", 13}, {"hwc_blank: Blanking display: 0\0", 31}], 3) = 45
326. ioctl(5, FBIOBLANK, 0x4) = 0
327. writev(6, [{"\3", 1}, {"qdhwcomposer\0", 13}, {"hwc_blank: Done blanking display"..., 36}], 3) = 50
328. sendmsg(34, {msg_name(0)=NULL, msg_iov(2)=[{"l\2\1\1\0\0\0\0\253\1\0\0\30\0\0\0\6\1s\0\4\0\0\0:1.0\0\0\0\0"..., 40}, {"", 0}], msg_controllen=0, msg_flags=0}, MSG_NOSIGNAL) = 40
329. epoll_wait(43, {{EPOLLIN, {u32=16430512, u64=16430512}}}, 32, 0) = 1
330. recvmsg(112, {msg_name(0)=NULL, msg_iov(2)=[{"\1\0\0\0\0\0\f\0*\0\0\0", 500}, {"", 3596}], msg_controllen=0, msg_flags=MSG_CMSG_CLOEXEC}, MSG_DONTWAIT|MSG_CMSG_CLOEXEC) = 12
331. sendmsg(112, {msg_name(0)=NULL, msg_iov(1)=[{"*\0\0\0\0\0\f\0\5\0\0\0\1\0\0\0\1\0\f\0*\0\0\0", 24}], msg_controllen=0, msg_flags=0}, MSG_DONTWAIT|MSG_NOSIGNAL) = 24
332. clock_gettime(CLOCK_MONOTONIC, {288, 79444142}) = 0
333. poll([{fd=3, events=POLLIN}, {fd=21, events=POLLIN}, {fd=24, events=POLLIN}, {fd=25, events=POLLIN}, {fd=26, events=POLLIN}, {fd=27, events=POLLIN}, {fd=29, events=POLLIN}, {fd=30, events=POLLIN}, {fd=34, events=POLLIN}, {fd=35, events=POLLIN}, {fd=37, events=POLLIN}, {fd=43, events=POLLIN}, {fd=48, events=POLLIN}, {fd=69, events=POLLIN}, {fd=84, events=POLLIN}, {fd=85, events=POLLIN}, {fd=88, events=POLLIN}, {fd=89, events=POLLIN}, {fd=110, events=POLLIN}], 19, 0) = 1 ([{fd=3, revents=POLLIN}])
334. read(3, "\1\0\0\0\0\0\0\0", 16) = 8
335. clock_gettime(CLOCK_MONOTONIC, {288, 83136770}) = 0
336. clock_gettime(CLOCK_MONOTONIC, {288, 85211965}) = 0
337. write(3, "\1\0\0\0\0\0\0\0", 8) = 8
338. clock_gettime(CLOCK_MONOTONIC, {288, 109961721}) = 0
339. epoll_wait(43, {}, 32, 0) = 0
340. poll([{fd=3, events=POLLIN}, {fd=21, events=POLLIN}, {fd=24, events=POLLIN}, {fd=25, events=POLLIN}, {fd=26, events=POLLIN}, {fd=27, events=POLLIN}, {fd=29, events=POLLIN}, {fd=30, events=POLLIN}, {fd=34, events=POLLIN}, {fd=35, events=POLLIN}, {fd=37, events=POLLIN}, {fd=43, events=POLLIN}, {fd=48, events=POLLIN}, {fd=69, events=POLLIN}, {fd=84, events=POLLIN}, {fd=85, events=POLLIN}, {fd=88, events=POLLIN}, {fd=89, events=POLLIN}, {fd=110, events=POLLIN}], 19, 0) = 2 ([{fd=3, revents=POLLIN}, {fd=34, revents=POLLIN}])
341. read(3, "\1\0\0\0\0\0\0\0", 16) = 8
342. clock_gettime(CLOCK_MONOTONIC, {288, 117591116}) = 0
343. write(3, "\1\0\0\0\0\0\0\0", 8) = 8
344. clock_gettime(CLOCK_MONOTONIC, {288, 121924613}) = 0
345. clock_gettime(CLOCK_MONOTONIC, {288, 124152396}) = 0
346. clock_gettime(CLOCK_MONOTONIC, {288, 126013968}) = 0
347. write(3, "\1\0\0\0\0\0\0\0", 8) = 8
348. clock_gettime(CLOCK_MONOTONIC, {288, 130683159}) = 0
349. clock_gettime(CLOCK_MONOTONIC, {288, 136603568}) = 0
350. sendmsg(110, {msg_name(0)=NULL, msg_iov(2)=[{"l\1\0\1\0\0\0\0\5\0\0\0f\0\0\0\1\1o\0 \0\0\0/com/mee"..., 120}, {"", 0}], msg_controllen=0, msg_flags=0}, MSG_NOSIGNAL) = 120
351. clock_gettime(CLOCK_MONOTONIC, {288, 141028618}) = 0
352. write(3, "\1\0\0\0\0\0\0\0", 8) = 8
353. recvmsg(34, {msg_name(0)=NULL, msg_iov(1)=[{"l\4\1\1\10\0\0\0\f\1\0\0u\0\0\0\1\1o\0\25\0\0\0/com/nok"..., 2048}], msg_controllen=0, msg_flags=MSG_CMSG_CLOEXEC}, MSG_CMSG_CLOEXEC) = 144
354. recvmsg(34, 0xbebc2050, MSG_CMSG_CLOEXEC) = -1 EAGAIN (Resource temporarily unavailable)
355. write(3, "\1\0\0\0\0\0\0\0", 8) = 8
356. write(3, "\1\0\0\0\0\0\0\0", 8) = 8
357. write(3, "\1\0\0\0\0\0\0\0", 8) = 8
358. write(3, "\1\0\0\0\0\0\0\0", 8) = 8
359. write(3, "\1\0\0\0\0\0\0\0", 8) = 8
360. write(3, "\1\0\0\0\0\0\0\0", 8) = 8
361. write(3, "\1\0\0\0\0\0\0\0", 8) = 8
362. write(3, "\1\0\0\0\0\0\0\0", 8) = 8
363. epoll_wait(43, {}, 32, 0) = 0
364. sendmsg(112, {msg_name(0)=NULL, msg_iov(1)=[{"\33\0\0\0\0\0\f\0\3\0\0\0", 12}], msg_controllen=0, msg_flags=0}, MSG_DONTWAIT|MSG_NOSIGNAL) = 12
365. clock_gettime(CLOCK_MONOTONIC, {288, 164527154}) = 0
366. poll([{fd=3, events=POLLIN}, {fd=21, events=POLLIN}, {fd=24, events=POLLIN}, {fd=25, events=POLLIN}, {fd=26, events=POLLIN}, {fd=27, events=POLLIN}, {fd=29, events=POLLIN}, {fd=30, events=POLLIN}, {fd=34, events=POLLIN}, {fd=35, events=POLLIN}, {fd=37, events=POLLIN}, {fd=43, events=POLLIN}, {fd=48, events=POLLIN}, {fd=69, events=POLLIN}, {fd=84, events=POLLIN}, {fd=85, events=POLLIN}, {fd=88, events=POLLIN}, {fd=89, events=POLLIN}, {fd=110, events=POLLIN}], 19, 0) = 3 ([{fd=3, revents=POLLIN}, {fd=43, revents=POLLIN}, {fd=110, revents=POLLIN}])
367. read(3, "\v\0\0\0\0\0\0\0", 16) = 8
368. clock_gettime(CLOCK_MONOTONIC, {288, 166510796}) = 0
369. clock_gettime(CLOCK_MONOTONIC, {288, 166754937}) = 0
370. clock_gettime(CLOCK_MONOTONIC, {288, 166999077}) = 0
371. write(3, "\1\0\0\0\0\0\0\0", 8) = 8
372. write(3, "\1\0\0\0\0\0\0\0", 8) = 8
373. epoll_wait(43, {{EPOLLIN, {u32=16430512, u64=16430512}}}, 32, 0) = 1
374. recvmsg(112, {msg_name(0)=NULL, msg_iov(2)=[{"\32\0\0\0\4\0\30\0\26\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\31\0\0\0\1\0\24\0"..., 488}, {"", 3608}], msg_controllen=0, msg_flags=MSG_CMSG_CLOEXEC}, MSG_DONTWAIT|MSG_CMSG_CLOEXEC) = 128
375. clock_gettime(CLOCK_MONOTONIC, {288, 172187066}) = 0
376. clock_gettime(CLOCK_MONOTONIC, {288, 172797418}) = 0
377. clock_gettime(CLOCK_MONOTONIC, {288, 173163629}) = 0
378. clock_gettime(CLOCK_MONOTONIC, {288, 173377252}) = 0
379. clock_gettime(CLOCK_MONOTONIC, {288, 173590875}) = 0
380. clock_gettime(CLOCK_MONOTONIC, {288, 173773980}) = 0
381. clock_gettime(CLOCK_MONOTONIC, {288, 173987603}) = 0
382. clock_gettime(CLOCK_MONOTONIC, {288, 174170709}) = 0
383. clock_gettime(CLOCK_MONOTONIC, {288, 174384332}) = 0
384. clock_gettime(CLOCK_MONOTONIC, {288, 174933648}) = 0
385. clock_gettime(CLOCK_MONOTONIC, {288, 175208306}) = 0
386. clock_gettime(CLOCK_MONOTONIC, {288, 175605035}) = 0
387. clock_gettime(CLOCK_MONOTONIC, {288, 175818658}) = 0
388. clock_gettime(CLOCK_MONOTONIC, {288, 176001763}) = 0
389. clock_gettime(CLOCK_MONOTONIC, {288, 176215386}) = 0
390. clock_gettime(CLOCK_MONOTONIC, {288, 176398492}) = 0
391. clock_gettime(CLOCK_MONOTONIC, {288, 176581597}) = 0
392. clock_gettime(CLOCK_MONOTONIC, {288, 177222466}) = 0
393. clock_gettime(CLOCK_MONOTONIC, {288, 177619195}) = 0
394. clock_gettime(CLOCK_MONOTONIC, {288, 178199029}) = 0
395. clock_gettime(CLOCK_MONOTONIC, {288, 178412652}) = 0
396. clock_gettime(CLOCK_MONOTONIC, {288, 178992486}) = 0
397. sendmsg(112, {msg_name(0)=NULL, msg_iov(1)=[{"\37\0\0\0\0\0\10\0", 8}], msg_controllen=0, msg_flags=0}, MSG_DONTWAIT|MSG_NOSIGNAL) = 8
398. recvmsg(110, {msg_name(0)=NULL, msg_iov(1)=[{"l\2\1\1\0\0\0\0\5\0\0\0\10\0\0\0\5\1u\0\5\0\0\0", 2048}], msg_controllen=0, msg_flags=MSG_CMSG_CLOEXEC}, MSG_CMSG_CLOEXEC) = 24
399. recvmsg(110, 0xbebc2050, MSG_CMSG_CLOEXEC) = -1 EAGAIN (Resource temporarily unavailable)
400. write(3, "\1\0\0\0\0\0\0\0", 8) = 8
401. clock_gettime(CLOCK_MONOTONIC, {288, 183509088}) = 0
402. clock_gettime(CLOCK_MONOTONIC, {288, 184210992}) = 0
403. clock_gettime(CLOCK_MONOTONIC, {288, 184821344}) = 0
404. write(3, "\1\0\0\0\0\0\0\0", 8) = 8
405. write(70, "\1\0\0\0\0\0\0\0", 8) = 8
406. write(70, "\1\0\0\0\0\0\0\0", 8) = 8
407. write(70, "\1\0\0\0\0\0\0\0", 8) = 8
408. write(70, "\1\0\0\0\0\0\0\0", 8) = 8
409. gettimeofday({1416519573, 699274}, NULL) = 0
410. stat64("/etc/localtime", {st_mode=S_IFREG|0644, st_size=2211, ...}) = 0
411. stat64("/etc/localtime", {st_mode=S_IFREG|0644, st_size=2211, ...}) = 0
412. stat64("/etc/localtime", {st_mode=S_IFREG|0644, st_size=2211, ...}) = 0
413. stat64("/etc/localtime", {st_mode=S_IFREG|0644, st_size=2211, ...}) = 0
414. clock_gettime(CLOCK_MONOTONIC, {288, 225287654}) = 0
415. sendmsg(34, {msg_name(0)=NULL, msg_iov(2)=[{"l\1\0\1\0\0\0\0\254\1\0\0\\\0\0\0\1\1o\0\1\0\0\0/\0\0\0\0\0\0\0"..., 112}, {"", 0}], msg_controllen=0, msg_flags=0}, MSG_NOSIGNAL) = 112
416. clock_gettime(CLOCK_MONOTONIC, {288, 233496883}) = 0
417. sendmsg(34, {msg_name(0)=NULL, msg_iov(2)=[{"l\1\0\1\0\0\0\0\255\1\0\0a\0\0\0\1\1o\0\1\0\0\0/\0\0\0\0\0\0\0"..., 120}, {"", 0}], msg_controllen=0, msg_flags=0}, MSG_NOSIGNAL) = 120
418. clock_gettime(CLOCK_MONOTONIC, {288, 254706599}) = 0
419. sendmsg(34, {msg_name(0)=NULL, msg_iov(2)=[{"l\1\0\1\0\0\0\0\256\1\0\0`\0\0\0\1\1o\0\1\0\0\0/\0\0\0\0\0\0\0"..., 112}, {"", 0}], msg_controllen=0, msg_flags=0}, MSG_NOSIGNAL) = 112
420. gettimeofday({1416519573, 752375}, NULL) = 0
421. stat64("/etc/localtime", {st_mode=S_IFREG|0644, st_size=2211, ...}) = 0
422. stat64("/etc/localtime", {st_mode=S_IFREG|0644, st_size=2211, ...}) = 0
423. stat64("/etc/localtime", {st_mode=S_IFREG|0644, st_size=2211, ...}) = 0
424. gettimeofday({1416519573, 757807}, NULL) = 0
425. stat64("/etc/localtime", {st_mode=S_IFREG|0644, st_size=2211, ...}) = 0
426. stat64("/etc/localtime", {st_mode=S_IFREG|0644, st_size=2211, ...}) = 0
427. stat64("/etc/localtime", {st_mode=S_IFREG|0644, st_size=2211, ...}) = 0
428. stat64("/etc/localtime", {st_mode=S_IFREG|0644, st_size=2211, ...}) = 0
429. write(3, "\1\0\0\0\0\0\0\0", 8) = 8
430. epoll_wait(43, {}, 32, 0) = 0
431. poll([{fd=3, events=POLLIN}, {fd=21, events=POLLIN}, {fd=24, events=POLLIN}, {fd=25, events=POLLIN}, {fd=26, events=POLLIN}, {fd=27, events=POLLIN}, {fd=29, events=POLLIN}, {fd=30, events=POLLIN}, {fd=34, events=POLLIN}, {fd=35, events=POLLIN}, {fd=37, events=POLLIN}, {fd=43, events=POLLIN}, {fd=48, events=POLLIN}, {fd=69, events=POLLIN}, {fd=84, events=POLLIN}, {fd=85, events=POLLIN}, {fd=88, events=POLLIN}, {fd=89, events=POLLIN}, {fd=110, events=POLLIN}], 19, 0) = 2 ([{fd=3, revents=POLLIN}, {fd=34, revents=POLLIN}])
432. read(3, "\21\0\0\0\0\0\0\0", 16) = 8
433. clock_gettime(CLOCK_MONOTONIC, {288, 278540826}) = 0
434. clock_gettime(CLOCK_MONOTONIC, {288, 279334283}) = 0
435. clock_gettime(CLOCK_MONOTONIC, {288, 279608941}) = 0
436. recvmsg(34, {msg_name(0)=NULL, msg_iov(1)=[{"l\2\1\1\370\v\0\0\352\1\0\0005\0\0\0\6\1s\0\5\0\0\0:1.73\0\0\0"..., 2048}], msg_controllen=0, msg_flags=MSG_CMSG_CLOEXEC}, MSG_CMSG_CLOEXEC) = 2048
437. recvmsg(34, {msg_name(0)=NULL, msg_iov(1)=[{"G\0\0\0/net/connman/service/wifi_00"..., 2048}], msg_controllen=0, msg_flags=MSG_CMSG_CLOEXEC}, MSG_CMSG_CLOEXEC) = 1088
438. write(3, "\1\0\0\0\0\0\0\0", 8) = 8
439. write(3, "\1\0\0\0\0\0\0\0", 8) = 8
440. epoll_wait(43, {}, 32, 0) = 0
441. clock_gettime(CLOCK_MONOTONIC, {288, 292945122}) = 0
442. poll([{fd=3, events=POLLIN}, {fd=21, events=POLLIN}, {fd=24, events=POLLIN}, {fd=25, events=POLLIN}, {fd=26, events=POLLIN}, {fd=27, events=POLLIN}, {fd=29, events=POLLIN}, {fd=30, events=POLLIN}, {fd=34, events=POLLIN}, {fd=35, events=POLLIN}, {fd=37, events=POLLIN}, {fd=43, events=POLLIN}, {fd=48, events=POLLIN}, {fd=69, events=POLLIN}, {fd=84, events=POLLIN}, {fd=85, events=POLLIN}, {fd=88, events=POLLIN}, {fd=89, events=POLLIN}, {fd=110, events=POLLIN}], 19, 0) = 1 ([{fd=3, revents=POLLIN}])
443. read(3, "\2\0\0\0\0\0\0\0", 16) = 8
444. write(3, "\1\0\0\0\0\0\0\0", 8) = 8
445. sendmsg(34, {msg_name(0)=NULL, msg_iov(2)=[{"l\1\1\1\230\0\0\0\257\1\0\0\177\0\0\0\1\1o\0\25\0\0\0/org/fre"..., 144}, {"\223\0\0\0type='signal',sender='net.co"..., 152}], msg_controllen=0, msg_flags=0}, MSG_NOSIGNAL) = 296
446. clock_gettime(CLOCK_MONOTONIC, {288, 300940728}) = 0
447. sendmsg(34, {msg_name(0)=NULL, msg_iov(2)=[{"l\1\1\1\236\0\0\0\260\1\0\0\177\0\0\0\1\1o\0\25\0\0\0/org/fre"..., 144}, {"\231\0\0\0type='signal',sender='net.co"..., 158}], msg_controllen=0, msg_flags=0}, MSG_NOSIGNAL) = 302
448. clock_gettime(CLOCK_MONOTONIC, {288, 303870416}) = 0
449. sendmsg(34, {msg_name(0)=NULL, msg_iov(2)=[{"l\1\1\1\257\0\0\0\261\1\0\0\177\0\0\0\1\1o\0\25\0\0\0/org/fre"..., 144}, {"\252\0\0\0type='signal',sender='net.co"..., 175}], msg_controllen=0, msg_flags=0}, MSG_NOSIGNAL) = 319
450. clock_gettime(CLOCK_MONOTONIC, {288, 308936333}) = 0
451. open("/proc/net/route", O_RDONLY|O_LARGEFILE|O_CLOEXEC) = 120
452. fcntl64(120, F_SETFD, FD_CLOEXEC) = 0
453. fstat64(120, {st_mode=S_IFREG|0444, st_size=0, ...}) = 0
454. fstat64(120, {st_mode=S_IFREG|0444, st_size=0, ...}) = 0
455. read(120, "Iface\tDestination\tGateway \tFlags"..., 16384) = 512
456. read(120, "", 15872) = 0
457. read(120, "", 16384) = 0
458. close(120) = 0
459. open("/proc/net/ipv6_route", O_RDONLY|O_LARGEFILE|O_CLOEXEC) = 120
460. fcntl64(120, F_SETFD, FD_CLOEXEC) = 0
461. fstat64(120, {st_mode=S_IFREG|0444, st_size=0, ...}) = 0
462. fstat64(120, {st_mode=S_IFREG|0444, st_size=0, ...}) = 0
463. read(120, "00000000000000000000000000000000"..., 16384) = 1950
464. read(120, "", 14434) = 0
465. read(120, "", 16384) = 0
466. close(120) = 0
467. write(3, "\1\0\0\0\0\0\0\0", 8) = 8
468. clock_gettime(CLOCK_MONOTONIC, {288, 328955865}) = 0
469. clock_gettime(CLOCK_MONOTONIC, {288, 329779840}) = 0
470. clock_gettime(CLOCK_MONOTONIC, {288, 330237604}) = 0
471. epoll_wait(43, {}, 32, 0) = 0
472. poll([{fd=3, events=POLLIN}, {fd=21, events=POLLIN}, {fd=24, events=POLLIN}, {fd=25, events=POLLIN}, {fd=26, events=POLLIN}, {fd=27, events=POLLIN}, {fd=29, events=POLLIN}, {fd=30, events=POLLIN}, {fd=34, events=POLLIN}, {fd=35, events=POLLIN}, {fd=37, events=POLLIN}, {fd=43, events=POLLIN}, {fd=48, events=POLLIN}, {fd=69, events=POLLIN}, {fd=84, events=POLLIN}, {fd=85, events=POLLIN}, {fd=88, events=POLLIN}, {fd=89, events=POLLIN}, {fd=110, events=POLLIN}], 19, 0) = 2 ([{fd=3, revents=POLLIN}, {fd=34, revents=POLLIN}])
473. read(3, "\2\0\0\0\0\0\0\0", 16) = 8
474. recvmsg(34, {msg_name(0)=NULL, msg_iov(1)=[{"l\2\1\0018V\0\0\353\1\0\0005\0\0\0\6\1s\0\5\0\0\0:1.73\0\0\0"..., 2048}], msg_controllen=0, msg_flags=MSG_CMSG_CLOEXEC}, MSG_CMSG_CLOEXEC) = 2048
475. recvmsg(34, {msg_name(0)=NULL, msg_iov(1)=[{"\v\0\0\0Nameservers\0\2as\0\0\0\0\0\31\0\0\0Name"..., 2048}], msg_controllen=0, msg_flags=MSG_CMSG_CLOEXEC}, MSG_CMSG_CLOEXEC) = 2048
476. epoll_wait(43, {}, 32, 0) = 0
477. clock_gettime(CLOCK_MONOTONIC, {288, 347846247}) = 0
478. poll([{fd=3, events=POLLIN}, {fd=21, events=POLLIN}, {fd=24, events=POLLIN}, {fd=25, events=POLLIN}, {fd=26, events=POLLIN}, {fd=27, events=POLLIN}, {fd=29, events=POLLIN}, {fd=30, events=POLLIN}, {fd=34, events=POLLIN}, {fd=35, events=POLLIN}, {fd=37, events=POLLIN}, {fd=43, events=POLLIN}, {fd=48, events=POLLIN}, {fd=69, events=POLLIN}, {fd=84, events=POLLIN}, {fd=85, events=POLLIN}, {fd=88, events=POLLIN}, {fd=89, events=POLLIN}, {fd=110, events=POLLIN}], 19, 0) = 1 ([{fd=34, revents=POLLIN}])
479. read(3, 0xbebc23ec, 16) = -1 EAGAIN (Resource temporarily unavailable)
480. recvmsg(34, {msg_name(0)=NULL, msg_iov(1)=[{"ation\0\2as\0\0\0\0\0\0\0\7\0\0\0Domains\0\2as\0"..., 2048}], msg_controllen=0, msg_flags=MSG_CMSG_CLOEXEC}, MSG_CMSG_CLOEXEC) = 2048
481. recvmsg(34, {msg_name(0)=NULL, msg_iov(1)=[{"/connman/service/ethernet_de6f87"..., 2048}], msg_controllen=0, msg_flags=MSG_CMSG_CLOEXEC}, MSG_CMSG_CLOEXEC) = 2048
482. clock_gettime(CLOCK_MONOTONIC, {288, 353766658}) = 0
483. clock_gettime(CLOCK_MONOTONIC, {288, 354041316}) = 0
484. epoll_wait(43, {}, 32, 0) = 0
485. clock_gettime(CLOCK_MONOTONIC, {288, 355475642}) = 0
486. poll([{fd=3, events=POLLIN}, {fd=21, events=POLLIN}, {fd=24, events=POLLIN}, {fd=25, events=POLLIN}, {fd=26, events=POLLIN}, {fd=27, events=POLLIN}, {fd=29, events=POLLIN}, {fd=30, events=POLLIN}, {fd=34, events=POLLIN}, {fd=35, events=POLLIN}, {fd=37, events=POLLIN}, {fd=43, events=POLLIN}, {fd=48, events=POLLIN}, {fd=69, events=POLLIN}, {fd=84, events=POLLIN}, {fd=85, events=POLLIN}, {fd=88, events=POLLIN}, {fd=89, events=POLLIN}, {fd=110, events=POLLIN}], 19, 6) = 1 ([{fd=34, revents=POLLIN}])
487. recvmsg(34, {msg_name(0)=NULL, msg_iov(1)=[{"\1b\0\0\0\0\0\0\10\0\0\0Ethernet\0\5a{sv}\0\0\0\0\0"..., 2048}], msg_controllen=0, msg_flags=MSG_CMSG_CLOEXEC}, MSG_CMSG_CLOEXEC) = 2048
488. recvmsg(34, {msg_name(0)=NULL, msg_iov(1)=[{"\2as\0\0\0\0\0\31\0\0\0Nameservers.Configur"..., 2048}], msg_controllen=0, msg_flags=MSG_CMSG_CLOEXEC}, MSG_CMSG_CLOEXEC) = 2048
489. epoll_wait(43, {}, 32, 0) = 0
490. clock_gettime(CLOCK_MONOTONIC, {288, 358588435}) = 0
491. poll([{fd=3, events=POLLIN}, {fd=21, events=POLLIN}, {fd=24, events=POLLIN}, {fd=25, events=POLLIN}, {fd=26, events=POLLIN}, {fd=27, events=POLLIN}, {fd=29, events=POLLIN}, {fd=30, events=POLLIN}, {fd=34, events=POLLIN}, {fd=35, events=POLLIN}, {fd=37, events=POLLIN}, {fd=43, events=POLLIN}, {fd=48, events=POLLIN}, {fd=69, events=POLLIN}, {fd=84, events=POLLIN}, {fd=85, events=POLLIN}, {fd=88, events=POLLIN}, {fd=89, events=POLLIN}, {fd=110, events=POLLIN}], 19, 3) = 1 ([{fd=34, revents=POLLIN}])
492. recvmsg(34, {msg_name(0)=NULL, msg_iov(1)=[{"\5\0\0\0Proxy\0\5a{sv}\0\0\0\0\0\0\0\0\23\0\0\0Prox"..., 2048}], msg_controllen=0, msg_flags=MSG_CMSG_CLOEXEC}, MSG_CMSG_CLOEXEC) = 2048
493. recvmsg(34, {msg_name(0)=NULL, msg_iov(1)=[{"\0\0\0\0\0\0\0\0\10\0\0\0Security\0\2as\0\0\0\0\0\0\0\0"..., 2048}], msg_controllen=0, msg_flags=MSG_CMSG_CLOEXEC}, MSG_CMSG_CLOEXEC) = 2048
494. epoll_wait(43, {}, 32, 0) = 0
495. clock_gettime(CLOCK_MONOTONIC, {288, 363105037}) = 0
496. poll([{fd=3, events=POLLIN}, {fd=21, events=POLLIN}, {fd=24, events=POLLIN}, {fd=25, events=POLLIN}, {fd=26, events=POLLIN}, {fd=27, events=POLLIN}, {fd=29, events=POLLIN}, {fd=30, events=POLLIN}, {fd=34, events=POLLIN}, {fd=35, events=POLLIN}, {fd=37, events=POLLIN}, {fd=43, events=POLLIN}, {fd=48, events=POLLIN}, {fd=69, events=POLLIN}, {fd=84, events=POLLIN}, {fd=85, events=POLLIN}, {fd=88, events=POLLIN}, {fd=89, events=POLLIN}, {fd=110, events=POLLIN}], 19, 0) = 1 ([{fd=34, revents=POLLIN}])
497. recvmsg(34, {msg_name(0)=NULL, msg_iov(1)=[{"a{sv}\0\0\0\0\0\0\0\0\0\0\0\4\0\0\0IPv6\0\5a{sv}\0"..., 2048}], msg_controllen=0, msg_flags=MSG_CMSG_CLOEXEC}, MSG_CMSG_CLOEXEC) = 2048
498. recvmsg(34, {msg_name(0)=NULL, msg_iov(1)=[{"servers.Configuration\0\2as\0\0\0\0\0\0\0"..., 2048}], msg_controllen=0, msg_flags=MSG_CMSG_CLOEXEC}, MSG_CMSG_CLOEXEC) = 2048
499. epoll_wait(43, {}, 32, 0) = 0
500. poll([{fd=3, events=POLLIN}, {fd=21, events=POLLIN}, {fd=24, events=POLLIN}, {fd=25, events=POLLIN}, {fd=26, events=POLLIN}, {fd=27, events=POLLIN}, {fd=29, events=POLLIN}, {fd=30, events=POLLIN}, {fd=34, events=POLLIN}, {fd=35, events=POLLIN}, {fd=37, events=POLLIN}, {fd=43, events=POLLIN}, {fd=48, events=POLLIN}, {fd=69, events=POLLIN}, {fd=84, events=POLLIN}, {fd=85, events=POLLIN}, {fd=88, events=POLLIN}, {fd=89, events=POLLIN}, {fd=110, events=POLLIN}], 19, 0) = 1 ([{fd=34, revents=POLLIN}])
501. recvmsg(34, {msg_name(0)=NULL, msg_iov(1)=[{"sv}\0\0\0\0\0000\0\0\0/net/connman/service"..., 2048}], msg_controllen=0, msg_flags=MSG_CMSG_CLOEXEC}, MSG_CMSG_CLOEXEC) = 2048
502. recvmsg(34, {msg_name(0)=NULL, msg_iov(1)=[{"\1\0\0\0\0\0\0\0\t\0\0\0Connected\0\1b\0\0\0\0\1\0\0\0"..., 2048}], msg_controllen=0, msg_flags=MSG_CMSG_CLOEXEC}, MSG_CMSG_CLOEXEC) = 600
503. write(3, "\1\0\0\0\0\0\0\0", 8) = 8
504. write(3, "\1\0\0\0\0\0\0\0", 8) = 8
505. write(3, "\1\0\0\0\0\0\0\0", 8) = 8
506. epoll_wait(43, {}, 32, 0) = 0
507. poll([{fd=3, events=POLLIN}, {fd=21, events=POLLIN}, {fd=24, events=POLLIN}, {fd=25, events=POLLIN}, {fd=26, events=POLLIN}, {fd=27, events=POLLIN}, {fd=29, events=POLLIN}, {fd=30, events=POLLIN}, {fd=34, events=POLLIN}, {fd=35, events=POLLIN}, {fd=37, events=POLLIN}, {fd=43, events=POLLIN}, {fd=48, events=POLLIN}, {fd=69, events=POLLIN}, {fd=84, events=POLLIN}, {fd=85, events=POLLIN}, {fd=88, events=POLLIN}, {fd=89, events=POLLIN}, {fd=110, events=POLLIN}], 19, 0) = 1 ([{fd=3, revents=POLLIN}])
508. sendmsg(34, {msg_name(0)=NULL, msg_iov(2)=[{"l\1\1\1\230\0\0\0\262\1\0\0\177\0\0\0\1\1o\0\25\0\0\0/org/fre"..., 144}, {"\223\0\0\0type='signal',sender='net.co"..., 152}], msg_controllen=0, msg_flags=0}, MSG_NOSIGNAL) = 296
509. clock_gettime(CLOCK_MONOTONIC, {288, 395240046}) = 0
510. sendmsg(34, {msg_name(0)=NULL, msg_iov(2)=[{"l\1\1\1\255\0\0\0\263\1\0\0\177\0\0\0\1\1o\0\25\0\0\0/org/fre"..., 144}, {"\250\0\0\0type='signal',sender='net.co"..., 173}], msg_controllen=0, msg_flags=0}, MSG_NOSIGNAL) = 317
511. clock_gettime(CLOCK_MONOTONIC, {288, 399604059}) = 0
512. sendmsg(34, {msg_name(0)=NULL, msg_iov(2)=[{"l\1\1\1\230\0\0\0\264\1\0\0\177\0\0\0\1\1o\0\25\0\0\0/org/fre"..., 144}, {"\223\0\0\0type='signal',sender='net.co"..., 152}], msg_controllen=0, msg_flags=0}, MSG_NOSIGNAL) = 296
513. clock_gettime(CLOCK_MONOTONIC, {288, 403907037}) = 0
514. sendmsg(34, {msg_name(0)=NULL, msg_iov(2)=[{"l\1\1\1\230\0\0\0\265\1\0\0\177\0\0\0\1\1o\0\25\0\0\0/org/fre"..., 144}, {"\223\0\0\0type='signal',sender='net.co"..., 152}], msg_controllen=0, msg_flags=0}, MSG_NOSIGNAL) = 296
515. clock_gettime(CLOCK_MONOTONIC, {288, 407904840}) = 0
516. sendmsg(34, {msg_name(0)=NULL, msg_iov(2)=[{"l\1\1\1\257\0\0\0\266\1\0\0\177\0\0\0\1\1o\0\25\0\0\0/org/fre"..., 144}, {"\252\0\0\0type='signal',sender='net.co"..., 175}], msg_controllen=0, msg_flags=0}, MSG_NOSIGNAL) = 319
517. clock_gettime(CLOCK_MONOTONIC, {288, 412299371}) = 0
518. sendmsg(34, {msg_name(0)=NULL, msg_iov(2)=[{"l\1\1\1\230\0\0\0\267\1\0\0\177\0\0\0\1\1o\0\25\0\0\0/org/fre"..., 144}, {"\223\0\0\0type='signal',sender='net.co"..., 152}], msg_controllen=0, msg_flags=0}, MSG_NOSIGNAL) = 296
519. clock_gettime(CLOCK_MONOTONIC, {288, 417090630}) = 0
520. sendmsg(34, {msg_name(0)=NULL, msg_iov(2)=[{"l\1\1\1\230\0\0\0\270\1\0\0\177\0\0\0\1\1o\0\25\0\0\0/org/fre"..., 144}, {"\223\0\0\0type='signal',sender='net.co"..., 152}], msg_controllen=0, msg_flags=0}, MSG_NOSIGNAL) = 296
521. clock_gettime(CLOCK_MONOTONIC, {288, 421149469}) = 0
522. sendmsg(34, {msg_name(0)=NULL, msg_iov(2)=[{"l\1\1\1\230\0\0\0\271\1\0\0\177\0\0\0\1\1o\0\25\0\0\0/org/fre"..., 144}, {"\223\0\0\0type='signal',sender='net.co"..., 152}], msg_controllen=0, msg_flags=0}, MSG_NOSIGNAL) = 296
523. clock_gettime(CLOCK_MONOTONIC, {288, 425147271}) = 0
524. sendmsg(34, {msg_name(0)=NULL, msg_iov(2)=[{"l\1\1\1\230\0\0\0\272\1\0\0\177\0\0\0\1\1o\0\25\0\0\0/org/fre"..., 144}, {"\223\0\0\0type='signal',sender='net.co"..., 152}], msg_controllen=0, msg_flags=0}, MSG_NOSIGNAL) = 296
525. clock_gettime(CLOCK_MONOTONIC, {288, 429145074}) = 0
526. sendmsg(34, {msg_name(0)=NULL, msg_iov(2)=[{"l\1\1\1\230\0\0\0\273\1\0\0\177\0\0\0\1\1o\0\25\0\0\0/org/fre"..., 144}, {"\223\0\0\0type='signal',sender='net.co"..., 152}], msg_controllen=0, msg_flags=0}, MSG_NOSIGNAL) = 296
527. clock_gettime(CLOCK_MONOTONIC, {288, 433264948}) = 0
528. sendmsg(34, {msg_name(0)=NULL, msg_iov(2)=[{"l\1\1\1\230\0\0\0\274\1\0\0\177\0\0\0\1\1o\0\25\0\0\0/org/fre"..., 144}, {"\223\0\0\0type='signal',sender='net.co"..., 152}], msg_controllen=0, msg_flags=0}, MSG_NOSIGNAL) = 296
529. clock_gettime(CLOCK_MONOTONIC, {288, 437293268}) = 0
530. sendmsg(34, {msg_name(0)=NULL, msg_iov(2)=[{"l\1\1\1\230\0\0\0\275\1\0\0\177\0\0\0\1\1o\0\25\0\0\0/org/fre"..., 144}, {"\223\0\0\0type='signal',sender='net.co"..., 152}], msg_controllen=0, msg_flags=0}, MSG_NOSIGNAL) = 296
531. clock_gettime(CLOCK_MONOTONIC, {288, 441413142}) = 0
532. sendmsg(34, {msg_name(0)=NULL, msg_iov(2)=[{"l\1\1\1\230\0\0\0\276\1\0\0\177\0\0\0\1\1o\0\25\0\0\0/org/fre"..., 144}, {"\223\0\0\0type='signal',sender='net.co"..., 152}], msg_controllen=0, msg_flags=0}, MSG_NOSIGNAL) = 296
533. clock_gettime(CLOCK_MONOTONIC, {288, 445471979}) = 0
534. sendmsg(34, {msg_name(0)=NULL, msg_iov(2)=[{"l\1\1\1\230\0\0\0\277\1\0\0\177\0\0\0\1\1o\0\25\0\0\0/org/fre"..., 144}, {"\223\0\0\0type='signal',sender='net.co"..., 152}], msg_controllen=0, msg_flags=0}, MSG_NOSIGNAL) = 296
535. clock_gettime(CLOCK_MONOTONIC, {288, 449500299}) = 0
536. sendmsg(34, {msg_name(0)=NULL, msg_iov(2)=[{"l\1\1\1\230\0\0\0\300\1\0\0\177\0\0\0\1\1o\0\25\0\0\0/org/fre"..., 144}, {"\223\0\0\0type='signal',sender='net.co"..., 152}], msg_controllen=0, msg_flags=0}, MSG_NOSIGNAL) = 296
537. clock_gettime(CLOCK_MONOTONIC, {288, 453620173}) = 0
538. sendmsg(34, {msg_name(0)=NULL, msg_iov(2)=[{"l\1\1\1\230\0\0\0\301\1\0\0\177\0\0\0\1\1o\0\25\0\0\0/org/fre"..., 144}, {"\223\0\0\0type='signal',sender='net.co"..., 152}], msg_controllen=0, msg_flags=0}, MSG_NOSIGNAL) = 296
539. clock_gettime(CLOCK_MONOTONIC, {288, 457709528}) = 0
540. sendmsg(34, {msg_name(0)=NULL, msg_iov(2)=[{"l\1\1\1\230\0\0\0\302\1\0\0\177\0\0\0\1\1o\0\25\0\0\0/org/fre"..., 144}, {"\223\0\0\0type='signal',sender='net.co"..., 152}], msg_controllen=0, msg_flags=0}, MSG_NOSIGNAL) = 296
541. clock_gettime(CLOCK_MONOTONIC, {288, 461829402}) = 0
542. sendmsg(34, {msg_name(0)=NULL, msg_iov(2)=[{"l\1\1\1\230\0\0\0\303\1\0\0\177\0\0\0\1\1o\0\25\0\0\0/org/fre"..., 144}, {"\223\0\0\0type='signal',sender='net.co"..., 152}], msg_controllen=0, msg_flags=0}, MSG_NOSIGNAL) = 296
543. clock_gettime(CLOCK_MONOTONIC, {288, 466956355}) = 0
544. sendmsg(34, {msg_name(0)=NULL, msg_iov(2)=[{"l\1\1\1\230\0\0\0\304\1\0\0\177\0\0\0\1\1o\0\25\0\0\0/org/fre"..., 144}, {"\223\0\0\0type='signal',sender='net.co"..., 152}], msg_controllen=0, msg_flags=0}, MSG_NOSIGNAL) = 296
545. clock_gettime(CLOCK_MONOTONIC, {288, 470710018}) = 0
546. sendmsg(34, {msg_name(0)=NULL, msg_iov(2)=[{"l\1\1\1\230\0\0\0\305\1\0\0\177\0\0\0\1\1o\0\25\0\0\0/org/fre"..., 144}, {"\223\0\0\0type='signal',sender='net.co"..., 152}], msg_controllen=0, msg_flags=0}, MSG_NOSIGNAL) = 296
547. clock_gettime(CLOCK_MONOTONIC, {288, 474250057}) = 0
548. sendmsg(34, {msg_name(0)=NULL, msg_iov(2)=[{"l\1\1\1\230\0\0\0\306\1\0\0\177\0\0\0\1\1o\0\25\0\0\0/org/fre"..., 144}, {"\223\0\0\0type='signal',sender='net.co"..., 152}], msg_controllen=0, msg_flags=0}, MSG_NOSIGNAL) = 296
549. clock_gettime(CLOCK_MONOTONIC, {288, 478278377}) = 0
550. sendmsg(34, {msg_name(0)=NULL, msg_iov(2)=[{"l\1\1\1\230\0\0\0\307\1\0\0\177\0\0\0\1\1o\0\25\0\0\0/org/fre"..., 144}, {"\223\0\0\0type='signal',sender='net.co"..., 152}], msg_controllen=0, msg_flags=0}, MSG_NOSIGNAL) = 296
551. clock_gettime(CLOCK_MONOTONIC, {288, 482154109}) = 0
552. sendmsg(34, {msg_name(0)=NULL, msg_iov(2)=[{"l\1\1\1\230\0\0\0\310\1\0\0\177\0\0\0\1\1o\0\25\0\0\0/org/fre"..., 144}, {"\223\0\0\0type='signal',sender='net.co"..., 152}], msg_controllen=0, msg_flags=0}, MSG_NOSIGNAL) = 296
553. clock_gettime(CLOCK_MONOTONIC, {288, 486182429}) = 0
554. sendmsg(34, {msg_name(0)=NULL, msg_iov(2)=[{"l\1\1\1\230\0\0\0\311\1\0\0\177\0\0\0\1\1o\0\25\0\0\0/org/fre"..., 144}, {"\223\0\0\0type='signal',sender='net.co"..., 152}], msg_controllen=0, msg_flags=0}, MSG_NOSIGNAL) = 296
555. clock_gettime(CLOCK_MONOTONIC, {288, 489844538}) = 0
556. sendmsg(34, {msg_name(0)=NULL, msg_iov(2)=[{"l\1\1\1\230\0\0\0\312\1\0\0\177\0\0\0\1\1o\0\25\0\0\0/org/fre"..., 144}, {"\223\0\0\0type='signal',sender='net.co"..., 152}], msg_controllen=0, msg_flags=0}, MSG_NOSIGNAL) = 296
557. clock_gettime(CLOCK_MONOTONIC, {288, 493994928}) = 0
558. sendmsg(34, {msg_name(0)=NULL, msg_iov(2)=[{"l\1\1\1\230\0\0\0\313\1\0\0\177\0\0\0\1\1o\0\25\0\0\0/org/fre"..., 144}, {"\223\0\0\0type='signal',sender='net.co"..., 152}], msg_controllen=0, msg_flags=0}, MSG_NOSIGNAL) = 296
559. clock_gettime(CLOCK_MONOTONIC, {288, 497779108}) = 0
560. sendmsg(34, {msg_name(0)=NULL, msg_iov(2)=[{"l\1\1\1\230\0\0\0\314\1\0\0\177\0\0\0\1\1o\0\25\0\0\0/org/fre"..., 144}, {"\223\0\0\0type='signal',sender='net.co"..., 152}], msg_controllen=0, msg_flags=0}, MSG_NOSIGNAL) = 296
561. clock_gettime(CLOCK_MONOTONIC, {288, 501898981}) = 0
562. sendmsg(34, {msg_name(0)=NULL, msg_iov(2)=[{"l\1\1\1\230\0\0\0\315\1\0\0\177\0\0\0\1\1o\0\25\0\0\0/org/fre"..., 144}, {"\223\0\0\0type='signal',sender='net.co"..., 152}], msg_controllen=0, msg_flags=0}, MSG_NOSIGNAL) = 296
563. clock_gettime(CLOCK_MONOTONIC, {288, 505561090}) = 0
564. sendmsg(34, {msg_name(0)=NULL, msg_iov(2)=[{"l\1\1\1\230\0\0\0\316\1\0\0\177\0\0\0\1\1o\0\25\0\0\0/org/fre"..., 144}, {"\223\0\0\0type='signal',sender='net.co"..., 152}], msg_controllen=0, msg_flags=0}, MSG_NOSIGNAL) = 296
565. clock_gettime(CLOCK_MONOTONIC, {288, 509619927}) = 0
566. sendmsg(34, {msg_name(0)=NULL, msg_iov(2)=[{"l\1\1\1\230\0\0\0\317\1\0\0\177\0\0\0\1\1o\0\25\0\0\0/org/fre"..., 144}, {"\223\0\0\0type='signal',sender='net.co"..., 152}], msg_controllen=0, msg_flags=0}, MSG_NOSIGNAL) = 296
567. clock_gettime(CLOCK_MONOTONIC, {288, 514258599}) = 0
568. sendmsg(34, {msg_name(0)=NULL, msg_iov(2)=[{"l\1\1\1\230\0\0\0\320\1\0\0\177\0\0\0\1\1o\0\25\0\0\0/org/fre"..., 144}, {"\223\0\0\0type='signal',sender='net.co"..., 152}], msg_controllen=0, msg_flags=0}, MSG_NOSIGNAL) = 296
569. clock_gettime(CLOCK_MONOTONIC, {288, 518714165}) = 0
570. sendmsg(34, {msg_name(0)=NULL, msg_iov(2)=[{"l\1\1\1\230\0\0\0\321\1\0\0\177\0\0\0\1\1o\0\25\0\0\0/org/fre"..., 144}, {"\223\0\0\0type='signal',sender='net.co"..., 152}], msg_controllen=0, msg_flags=0}, MSG_NOSIGNAL) = 296
571. clock_gettime(CLOCK_MONOTONIC, {288, 522467828}) = 0
572. write(3, "\1\0\0\0\0\0\0\0", 8) = 8
573. write(3, "\1\0\0\0\0\0\0\0", 8) = 8
574. clock_gettime(CLOCK_MONOTONIC, {288, 526923394}) = 0
575. sendmsg(34, {msg_name(0)=NULL, msg_iov(2)=[{"l\1\0\1\0\0\0\0\322\1\0\0~\0\0\0\1\1o\0 \0\0\0/net/con"..., 144}, {"", 0}], msg_controllen=0, msg_flags=0}, MSG_NOSIGNAL) = 144
576. clock_gettime(CLOCK_MONOTONIC, {288, 532660699}) = 0
577. sendmsg(34, {msg_name(0)=NULL, msg_iov(2)=[{"l\1\0\1\0\0\0\0\323\1\0\0~\0\0\0\1\1o\0 \0\0\0/net/con"..., 144}, {"", 0}], msg_controllen=0, msg_flags=0}, MSG_NOSIGNAL) = 144
578. clock_gettime(CLOCK_MONOTONIC, {288, 538275933}) = 0
579. sendmsg(34, {msg_name(0)=NULL, msg_iov(2)=[{"l\1\0\1\0\0\0\0\324\1\0\0v\0\0\0\1\1o\0\34\0\0\0/net/con"..., 136}, {"", 0}], msg_controllen=0, msg_flags=0}, MSG_NOSIGNAL) = 136
580. clock_gettime(CLOCK_MONOTONIC, {288, 544043756}) = 0
581. sendmsg(34, {msg_name(0)=NULL, msg_iov(2)=[{"l\1\0\1\0\0\0\0\325\1\0\0~\0\0\0\1\1o\0!\0\0\0/net/con"..., 144}, {"", 0}], msg_controllen=0, msg_flags=0}, MSG_NOSIGNAL) = 144
582. clock_gettime(CLOCK_MONOTONIC, {288, 549781061}) = 0
583. sendmsg(34, {msg_name(0)=NULL, msg_iov(2)=[{"l\1\0\1\0\0\0\0\326\1\0\0v\0\0\0\1\1o\0\33\0\0\0/net/con"..., 136}, {"", 0}], msg_controllen=0, msg_flags=0}, MSG_NOSIGNAL) = 136
584. write(3, "\1\0\0\0\0\0\0\0", 8) = 8
585. epoll_wait(43, {}, 32, 0) = 0
586. clock_gettime(CLOCK_MONOTONIC, {288, 557318903}) = 0
587. poll([{fd=3, events=POLLIN}, {fd=21, events=POLLIN}, {fd=24, events=POLLIN}, {fd=25, events=POLLIN}, {fd=26, events=POLLIN}, {fd=27, events=POLLIN}, {fd=29, events=POLLIN}, {fd=30, events=POLLIN}, {fd=34, events=POLLIN}, {fd=35, events=POLLIN}, {fd=37, events=POLLIN}, {fd=43, events=POLLIN}, {fd=48, events=POLLIN}, {fd=69, events=POLLIN}, {fd=84, events=POLLIN}, {fd=85, events=POLLIN}, {fd=88, events=POLLIN}, {fd=89, events=POLLIN}, {fd=110, events=POLLIN}], 19, 0) = 2 ([{fd=3, revents=POLLIN}, {fd=34, revents=POLLIN}])
588. read(3, "\6\0\0\0\0\0\0\0", 16) = 8
589. recvmsg(34, {msg_name(0)=NULL, msg_iov(1)=[{"l\2\1\1\220\0\0\0\355\1\0\0005\0\0\0\6\1s\0\5\0\0\0:1.73\0\0\0"..., 2048}], msg_controllen=0, msg_flags=MSG_CMSG_CLOEXEC}, MSG_CMSG_CLOEXEC) = 1078
590. recvmsg(34, 0xbebc2050, MSG_CMSG_CLOEXEC) = -1 EAGAIN (Resource temporarily unavailable)
591. write(3, "\1\0\0\0\0\0\0\0", 8) = 8
592. write(3, "\1\0\0\0\0\0\0\0", 8) = 8
593. write(3, "\1\0\0\0\0\0\0\0", 8) = 8
594. write(3, "\1\0\0\0\0\0\0\0", 8) = 8
595. write(3, "\1\0\0\0\0\0\0\0", 8) = 8
596. write(3, "\1\0\0\0\0\0\0\0", 8) = 8
597. clock_gettime(CLOCK_MONOTONIC, {288, 568457819}) = 0
598. clock_gettime(CLOCK_MONOTONIC, {288, 568701960}) = 0
599. clock_gettime(CLOCK_MONOTONIC, {288, 569770075}) = 0
600. clock_gettime(CLOCK_MONOTONIC, {288, 571021297}) = 0
601. epoll_wait(43, {}, 32, 0) = 0
602. sendmsg(112, {msg_name(0)=NULL, msg_iov(1)=[{"\30\0\0\0\0\0\f\0\0\0\0\0", 12}], msg_controllen=0, msg_flags=0}, MSG_DONTWAIT|MSG_NOSIGNAL) = 12
603. poll([{fd=3, events=POLLIN}, {fd=21, events=POLLIN}, {fd=24, events=POLLIN}, {fd=25, events=POLLIN}, {fd=26, events=POLLIN}, {fd=27, events=POLLIN}, {fd=29, events=POLLIN}, {fd=30, events=POLLIN}, {fd=34, events=POLLIN}, {fd=35, events=POLLIN}, {fd=37, events=POLLIN}, {fd=43, events=POLLIN}, {fd=48, events=POLLIN}, {fd=69, events=POLLIN}, {fd=84, events=POLLIN}, {fd=85, events=POLLIN}, {fd=88, events=POLLIN}, {fd=89, events=POLLIN}, {fd=110, events=POLLIN}], 19, 0) = 1 ([{fd=3, revents=POLLIN}])
604. read(3, "\6\0\0\0\0\0\0\0", 16) = 8
605. write(3, "\1\0\0\0\0\0\0\0", 8) = 8
606. write(3, "\1\0\0\0\0\0\0\0", 8) = 8
607. write(3, "\1\0\0\0\0\0\0\0", 8) = 8
608. write(3, "\1\0\0\0\0\0\0\0", 8) = 8
609. write(3, "\1\0\0\0\0\0\0\0", 8) = 8
610. write(3, "\1\0\0\0\0\0\0\0", 8) = 8
611. epoll_wait(43, {}, 32, 0) = 0
612. clock_gettime(CLOCK_MONOTONIC, {288, 588965633}) = 0
613. poll([{fd=3, events=POLLIN}, {fd=21, events=POLLIN}, {fd=24, events=POLLIN}, {fd=25, events=POLLIN}, {fd=26, events=POLLIN}, {fd=27, events=POLLIN}, {fd=29, events=POLLIN}, {fd=30, events=POLLIN}, {fd=34, events=POLLIN}, {fd=35, events=POLLIN}, {fd=37, events=POLLIN}, {fd=43, events=POLLIN}, {fd=48, events=POLLIN}, {fd=69, events=POLLIN}, {fd=84, events=POLLIN}, {fd=85, events=POLLIN}, {fd=88, events=POLLIN}, {fd=89, events=POLLIN}, {fd=110, events=POLLIN}], 19, 0) = 1 ([{fd=3, revents=POLLIN}])
614. read(3, "\6\0\0\0\0\0\0\0", 16) = 8
615. clock_gettime(CLOCK_MONOTONIC, {288, 591010311}) = 0
616. clock_gettime(CLOCK_MONOTONIC, {288, 591773251}) = 0
617. epoll_wait(43, {}, 32, 0) = 0
618. poll([{fd=3, events=POLLIN}, {fd=21, events=POLLIN}, {fd=24, events=POLLIN}, {fd=25, events=POLLIN}, {fd=26, events=POLLIN}, {fd=27, events=POLLIN}, {fd=29, events=POLLIN}, {fd=30, events=POLLIN}, {fd=34, events=POLLIN}, {fd=35, events=POLLIN}, {fd=37, events=POLLIN}, {fd=43, events=POLLIN}, {fd=48, events=POLLIN}, {fd=69, events=POLLIN}, {fd=84, events=POLLIN}, {fd=85, events=POLLIN}, {fd=88, events=POLLIN}, {fd=89, events=POLLIN}, {fd=110, events=POLLIN}], 19, 0) = 0 (Timeout)
619. read(3, 0xbebc23ec, 16) = -1 EAGAIN (Resource temporarily unavailable)
620. clock_gettime(CLOCK_MONOTONIC, {288, 606238582}) = 0
621. clock_gettime(CLOCK_MONOTONIC, {288, 606574275}) = 0
622. clock_gettime(CLOCK_MONOTONIC, {288, 606818416}) = 0
623. epoll_wait(43, {}, 32, 0) = 0
624. clock_gettime(CLOCK_MONOTONIC, {288, 607306697}) = 0
625. poll([{fd=3, events=POLLIN}, {fd=21, events=POLLIN}, {fd=24, events=POLLIN}, {fd=25, events=POLLIN}, {fd=26, events=POLLIN}, {fd=27, events=POLLIN}, {fd=29, events=POLLIN}, {fd=30, events=POLLIN}, {fd=34, events=POLLIN}, {fd=35, events=POLLIN}, {fd=37, events=POLLIN}, {fd=43, events=POLLIN}, {fd=48, events=POLLIN}, {fd=69, events=POLLIN}, {fd=84, events=POLLIN}, {fd=85, events=POLLIN}, {fd=88, events=POLLIN}, {fd=89, events=POLLIN}, {fd=110, events=POLLIN}], 19, 194) = 0 (Timeout)
626. clock_gettime(CLOCK_MONOTONIC, {288, 804114558}) = 0
627. clock_gettime(CLOCK_MONOTONIC, {288, 804358699}) = 0
628. epoll_wait(43, {}, 32, 0) = 0
629. clock_gettime(CLOCK_MONOTONIC, {288, 805335261}) = 0
630. poll([{fd=3, events=POLLIN}, {fd=21, events=POLLIN}, {fd=24, events=POLLIN}, {fd=25, events=POLLIN}, {fd=26, events=POLLIN}, {fd=27, events=POLLIN}, {fd=29, events=POLLIN}, {fd=30, events=POLLIN}, {fd=34, events=POLLIN}, {fd=35, events=POLLIN}, {fd=37, events=POLLIN}, {fd=43, events=POLLIN}, {fd=48, events=POLLIN}, {fd=69, events=POLLIN}, {fd=84, events=POLLIN}, {fd=85, events=POLLIN}, {fd=88, events=POLLIN}, {fd=89, events=POLLIN}, {fd=110, events=POLLIN}], 19, 4) = 0 (Timeout)
631. clock_gettime(CLOCK_MONOTONIC, {288, 812262752}) = 0
632. clock_gettime(CLOCK_MONOTONIC, {288, 812842586}) = 0
633. epoll_wait(43, {}, 32, 0) = 0
634. clock_gettime(CLOCK_MONOTONIC, {288, 813971736}) = 0
635. poll([{fd=3, events=POLLIN}, {fd=21, events=POLLIN}, {fd=24, events=POLLIN}, {fd=25, events=POLLIN}, {fd=26, events=POLLIN}, {fd=27, events=POLLIN}, {fd=29, events=POLLIN}, {fd=30, events=POLLIN}, {fd=34, events=POLLIN}, {fd=35, events=POLLIN}, {fd=37, events=POLLIN}, {fd=43, events=POLLIN}, {fd=48, events=POLLIN}, {fd=69, events=POLLIN}, {fd=84, events=POLLIN}, {fd=85, events=POLLIN}, {fd=88, events=POLLIN}, {fd=89, events=POLLIN}, {fd=110, events=POLLIN}], 19, 82) = 0 (Timeout)
636. clock_gettime(CLOCK_MONOTONIC, {288, 897986626}) = 0
637. clock_gettime(CLOCK_MONOTONIC, {288, 898719048}) = 0
638. epoll_wait(43, {}, 32, 0) = 0
639. clock_gettime(CLOCK_MONOTONIC, {288, 899970269}) = 0
640. poll([{fd=3, events=POLLIN}, {fd=21, events=POLLIN}, {fd=24, events=POLLIN}, {fd=25, events=POLLIN}, {fd=26, events=POLLIN}, {fd=27, events=POLLIN}, {fd=29, events=POLLIN}, {fd=30, events=POLLIN}, {fd=34, events=POLLIN}, {fd=35, events=POLLIN}, {fd=37, events=POLLIN}, {fd=43, events=POLLIN}, {fd=48, events=POLLIN}, {fd=69, events=POLLIN}, {fd=84, events=POLLIN}, {fd=85, events=POLLIN}, {fd=88, events=POLLIN}, {fd=89, events=POLLIN}, {fd=110, events=POLLIN}], 19, 0) = 0 (Timeout)
641. clock_gettime(CLOCK_MONOTONIC, {288, 904212213}) = 0
642. epoll_wait(43, {}, 32, 0) = 0
643. clock_gettime(CLOCK_MONOTONIC, {288, 905860162}) = 0
644. poll([{fd=3, events=POLLIN}, {fd=21, events=POLLIN}, {fd=24, events=POLLIN}, {fd=25, events=POLLIN}, {fd=26, events=POLLIN}, {fd=27, events=POLLIN}, {fd=29, events=POLLIN}, {fd=30, events=POLLIN}, {fd=34, events=POLLIN}, {fd=35, events=POLLIN}, {fd=37, events=POLLIN}, {fd=43, events=POLLIN}, {fd=48, events=POLLIN}, {fd=69, events=POLLIN}, {fd=84, events=POLLIN}, {fd=85, events=POLLIN}, {fd=88, events=POLLIN}, {fd=89, events=POLLIN}, {fd=110, events=POLLIN}], 19, 3) = 0 (Timeout)
645. clock_gettime(CLOCK_MONOTONIC, {288, 912329889}) = 0
646. clock_gettime(CLOCK_MONOTONIC, {288, 913031793}) = 0
647. epoll_wait(43, {}, 32, 0) = 0
648. clock_gettime(CLOCK_MONOTONIC, {288, 914191461}) = 0
649. poll([{fd=3, events=POLLIN}, {fd=21, events=POLLIN}, {fd=24, events=POLLIN}, {fd=25, events=POLLIN}, {fd=26, events=POLLIN}, {fd=27, events=POLLIN}, {fd=29, events=POLLIN}, {fd=30, events=POLLIN}, {fd=34, events=POLLIN}, {fd=35, events=POLLIN}, {fd=37, events=POLLIN}, {fd=43, events=POLLIN}, {fd=48, events=POLLIN}, {fd=69, events=POLLIN}, {fd=84, events=POLLIN}, {fd=85, events=POLLIN}, {fd=88, events=POLLIN}, {fd=89, events=POLLIN}, {fd=110, events=POLLIN}], 19, 3) = 0 (Timeout)
650. clock_gettime(CLOCK_MONOTONIC, {288, 919165826}) = 0
651. clock_gettime(CLOCK_MONOTONIC, {288, 919379449}) = 0
652. epoll_wait(43, {}, 32, 0) = 0
653. clock_gettime(CLOCK_MONOTONIC, {288, 922827936}) = 0
654. poll([{fd=3, events=POLLIN}, {fd=21, events=POLLIN}, {fd=24, events=POLLIN}, {fd=25, events=POLLIN}, {fd=26, events=POLLIN}, {fd=27, events=POLLIN}, {fd=29, events=POLLIN}, {fd=30, events=POLLIN}, {fd=34, events=POLLIN}, {fd=35, events=POLLIN}, {fd=37, events=POLLIN}, {fd=43, events=POLLIN}, {fd=48, events=POLLIN}, {fd=69, events=POLLIN}, {fd=84, events=POLLIN}, {fd=85, events=POLLIN}, {fd=88, events=POLLIN}, {fd=89, events=POLLIN}, {fd=110, events=POLLIN}], 19, 0) = 0 (Timeout)
655. clock_gettime(CLOCK_MONOTONIC, {288, 924964167}) = 0
656. epoll_wait(43, {}, 32, 0) = 0
657. clock_gettime(CLOCK_MONOTONIC, {288, 926123835}) = 0
658. poll([{fd=3, events=POLLIN}, {fd=21, events=POLLIN}, {fd=24, events=POLLIN}, {fd=25, events=POLLIN}, {fd=26, events=POLLIN}, {fd=27, events=POLLIN}, {fd=29, events=POLLIN}, {fd=30, events=POLLIN}, {fd=34, events=POLLIN}, {fd=35, events=POLLIN}, {fd=37, events=POLLIN}, {fd=43, events=POLLIN}, {fd=48, events=POLLIN}, {fd=69, events=POLLIN}, {fd=84, events=POLLIN}, {fd=85, events=POLLIN}, {fd=88, events=POLLIN}, {fd=89, events=POLLIN}, {fd=110, events=POLLIN}], 19, 0) = 0 (Timeout)
659. clock_gettime(CLOCK_MONOTONIC, {288, 928382135}) = 0
660. epoll_wait(43, {}, 32, 0) = 0
661. clock_gettime(CLOCK_MONOTONIC, {288, 930701472}) = 0
662. poll([{fd=3, events=POLLIN}, {fd=21, events=POLLIN}, {fd=24, events=POLLIN}, {fd=25, events=POLLIN}, {fd=26, events=POLLIN}, {fd=27, events=POLLIN}, {fd=29, events=POLLIN}, {fd=30, events=POLLIN}, {fd=34, events=POLLIN}, {fd=35, events=POLLIN}, {fd=37, events=POLLIN}, {fd=43, events=POLLIN}, {fd=48, events=POLLIN}, {fd=69, events=POLLIN}, {fd=84, events=POLLIN}, {fd=85, events=POLLIN}, {fd=88, events=POLLIN}, {fd=89, events=POLLIN}, {fd=110, events=POLLIN}], 19, 0) = 0 (Timeout)
663. clock_gettime(CLOCK_MONOTONIC, {288, 932593562}) = 0
664. epoll_wait(43, {}, 32, 0) = 0
665. clock_gettime(CLOCK_MONOTONIC, {288, 944037654}) = 0
666. poll([{fd=3, events=POLLIN}, {fd=21, events=POLLIN}, {fd=24, events=POLLIN}, {fd=25, events=POLLIN}, {fd=26, events=POLLIN}, {fd=27, events=POLLIN}, {fd=29, events=POLLIN}, {fd=30, events=POLLIN}, {fd=34, events=POLLIN}, {fd=35, events=POLLIN}, {fd=37, events=POLLIN}, {fd=43, events=POLLIN}, {fd=48, events=POLLIN}, {fd=69, events=POLLIN}, {fd=84, events=POLLIN}, {fd=85, events=POLLIN}, {fd=88, events=POLLIN}, {fd=89, events=POLLIN}, {fd=110, events=POLLIN}], 19, 0) = 0 (Timeout)
667. clock_gettime(CLOCK_MONOTONIC, {288, 945990779}) = 0
668. epoll_wait(43, {}, 32, 0) = 0
669. clock_gettime(CLOCK_MONOTONIC, {288, 947241999}) = 0
670. poll([{fd=3, events=POLLIN}, {fd=21, events=POLLIN}, {fd=24, events=POLLIN}, {fd=25, events=POLLIN}, {fd=26, events=POLLIN}, {fd=27, events=POLLIN}, {fd=29, events=POLLIN}, {fd=30, events=POLLIN}, {fd=34, events=POLLIN}, {fd=35, events=POLLIN}, {fd=37, events=POLLIN}, {fd=43, events=POLLIN}, {fd=48, events=POLLIN}, {fd=69, events=POLLIN}, {fd=84, events=POLLIN}, {fd=85, events=POLLIN}, {fd=88, events=POLLIN}, {fd=89, events=POLLIN}, {fd=110, events=POLLIN}], 19, 3) = 0 (Timeout)
671. clock_gettime(CLOCK_MONOTONIC, {288, 952826717}) = 0
672. clock_gettime(CLOCK_MONOTONIC, {288, 953589656}) = 0
673. epoll_wait(43, {}, 32, 0) = 0
674. clock_gettime(CLOCK_MONOTONIC, {288, 955207088}) = 0
675. poll([{fd=3, events=POLLIN}, {fd=21, events=POLLIN}, {fd=24, events=POLLIN}, {fd=25, events=POLLIN}, {fd=26, events=POLLIN}, {fd=27, events=POLLIN}, {fd=29, events=POLLIN}, {fd=30, events=POLLIN}, {fd=34, events=POLLIN}, {fd=35, events=POLLIN}, {fd=37, events=POLLIN}, {fd=43, events=POLLIN}, {fd=48, events=POLLIN}, {fd=69, events=POLLIN}, {fd=84, events=POLLIN}, {fd=85, events=POLLIN}, {fd=88, events=POLLIN}, {fd=89, events=POLLIN}, {fd=110, events=POLLIN}], 19, 0) = 0 (Timeout)
676. clock_gettime(CLOCK_MONOTONIC, {288, 957617976}) = 0
677. epoll_wait(43, {}, 32, 0) = 0
678. clock_gettime(CLOCK_MONOTONIC, {288, 959296443}) = 0
679. poll([{fd=3, events=POLLIN}, {fd=21, events=POLLIN}, {fd=24, events=POLLIN}, {fd=25, events=POLLIN}, {fd=26, events=POLLIN}, {fd=27, events=POLLIN}, {fd=29, events=POLLIN}, {fd=30, events=POLLIN}, {fd=34, events=POLLIN}, {fd=35, events=POLLIN}, {fd=37, events=POLLIN}, {fd=43, events=POLLIN}, {fd=48, events=POLLIN}, {fd=69, events=POLLIN}, {fd=84, events=POLLIN}, {fd=85, events=POLLIN}, {fd=88, events=POLLIN}, {fd=89, events=POLLIN}, {fd=110, events=POLLIN}], 19, 0) = 0 (Timeout)
680. clock_gettime(CLOCK_MONOTONIC, {288, 961859919}) = 0
681. epoll_wait(43, {}, 32, 0) = 0
682. clock_gettime(CLOCK_MONOTONIC, {288, 963172175}) = 0
683. poll([{fd=3, events=POLLIN}, {fd=21, events=POLLIN}, {fd=24, events=POLLIN}, {fd=25, events=POLLIN}, {fd=26, events=POLLIN}, {fd=27, events=POLLIN}, {fd=29, events=POLLIN}, {fd=30, events=POLLIN}, {fd=34, events=POLLIN}, {fd=35, events=POLLIN}, {fd=37, events=POLLIN}, {fd=43, events=POLLIN}, {fd=48, events=POLLIN}, {fd=69, events=POLLIN}, {fd=84, events=POLLIN}, {fd=85, events=POLLIN}, {fd=88, events=POLLIN}, {fd=89, events=POLLIN}, {fd=110, events=POLLIN}], 19, 4) = 0 (Timeout)
684. clock_gettime(CLOCK_MONOTONIC, {288, 968573786}) = 0
685. clock_gettime(CLOCK_MONOTONIC, {288, 969672419}) = 0
686. epoll_wait(43, {}, 32, 0) = 0
687. clock_gettime(CLOCK_MONOTONIC, {288, 973639704}) = 0
688. poll([{fd=3, events=POLLIN}, {fd=21, events=POLLIN}, {fd=24, events=POLLIN}, {fd=25, events=POLLIN}, {fd=26, events=POLLIN}, {fd=27, events=POLLIN}, {fd=29, events=POLLIN}, {fd=30, events=POLLIN}, {fd=34, events=POLLIN}, {fd=35, events=POLLIN}, {fd=37, events=POLLIN}, {fd=43, events=POLLIN}, {fd=48, events=POLLIN}, {fd=69, events=POLLIN}, {fd=84, events=POLLIN}, {fd=85, events=POLLIN}, {fd=88, events=POLLIN}, {fd=89, events=POLLIN}, {fd=110, events=POLLIN}], 19, 0) = 0 (Timeout)
689. clock_gettime(CLOCK_MONOTONIC, {288, 976325251}) = 0
690. epoll_wait(43, {}, 32, 0) = 0
691. clock_gettime(CLOCK_MONOTONIC, {288, 977668024}) = 0
692. poll([{fd=3, events=POLLIN}, {fd=21, events=POLLIN}, {fd=24, events=POLLIN}, {fd=25, events=POLLIN}, {fd=26, events=POLLIN}, {fd=27, events=POLLIN}, {fd=29, events=POLLIN}, {fd=30, events=POLLIN}, {fd=34, events=POLLIN}, {fd=35, events=POLLIN}, {fd=37, events=POLLIN}, {fd=43, events=POLLIN}, {fd=48, events=POLLIN}, {fd=69, events=POLLIN}, {fd=84, events=POLLIN}, {fd=85, events=POLLIN}, {fd=88, events=POLLIN}, {fd=89, events=POLLIN}, {fd=110, events=POLLIN}], 19, 1) = 0 (Timeout)
693. clock_gettime(CLOCK_MONOTONIC, {288, 982428766}) = 0
694. clock_gettime(CLOCK_MONOTONIC, {288, 983161188}) = 0
695. epoll_wait(43, {}, 32, 0) = 0
696. clock_gettime(CLOCK_MONOTONIC, {288, 984503961}) = 0
697. poll([{fd=3, events=POLLIN}, {fd=21, events=POLLIN}, {fd=24, events=POLLIN}, {fd=25, events=POLLIN}, {fd=26, events=POLLIN}, {fd=27, events=POLLIN}, {fd=29, events=POLLIN}, {fd=30, events=POLLIN}, {fd=34, events=POLLIN}, {fd=35, events=POLLIN}, {fd=37, events=POLLIN}, {fd=43, events=POLLIN}, {fd=48, events=POLLIN}, {fd=69, events=POLLIN}, {fd=84, events=POLLIN}, {fd=85, events=POLLIN}, {fd=88, events=POLLIN}, {fd=89, events=POLLIN}, {fd=110, events=POLLIN}], 19, 2) = 0 (Timeout)
698. clock_gettime(CLOCK_MONOTONIC, {288, 988837457}) = 0
699. clock_gettime(CLOCK_MONOTONIC, {288, 989295220}) = 0
700. epoll_wait(43, {}, 32, 0) = 0
701. clock_gettime(CLOCK_MONOTONIC, {288, 991950250}) = 0
702. poll([{fd=3, events=POLLIN}, {fd=21, events=POLLIN}, {fd=24, events=POLLIN}, {fd=25, events=POLLIN}, {fd=26, events=POLLIN}, {fd=27, events=POLLIN}, {fd=29, events=POLLIN}, {fd=30, events=POLLIN}, {fd=34, events=POLLIN}, {fd=35, events=POLLIN}, {fd=37, events=POLLIN}, {fd=43, events=POLLIN}, {fd=48, events=POLLIN}, {fd=69, events=POLLIN}, {fd=84, events=POLLIN}, {fd=85, events=POLLIN}, {fd=88, events=POLLIN}, {fd=89, events=POLLIN}, {fd=110, events=POLLIN}], 19, 0) = 0 (Timeout)
703. clock_gettime(CLOCK_MONOTONIC, {288, 993903374}) = 0
704. epoll_wait(43, {}, 32, 0) = 0
705. clock_gettime(CLOCK_MONOTONIC, {288, 995063042}) = 0
706. poll([{fd=3, events=POLLIN}, {fd=21, events=POLLIN}, {fd=24, events=POLLIN}, {fd=25, events=POLLIN}, {fd=26, events=POLLIN}, {fd=27, events=POLLIN}, {fd=29, events=POLLIN}, {fd=30, events=POLLIN}, {fd=34, events=POLLIN}, {fd=35, events=POLLIN}, {fd=37, events=POLLIN}, {fd=43, events=POLLIN}, {fd=48, events=POLLIN}, {fd=69, events=POLLIN}, {fd=84, events=POLLIN}, {fd=85, events=POLLIN}, {fd=88, events=POLLIN}, {fd=89, events=POLLIN}, {fd=110, events=POLLIN}], 19, 0) = 0 (Timeout)
707. clock_gettime(CLOCK_MONOTONIC, {289, 5133843}) = 0
708. epoll_wait(43, {}, 32, 0) = 0
709. clock_gettime(CLOCK_MONOTONIC, {289, 6476617}) = 0
710. poll([{fd=3, events=POLLIN}, {fd=21, events=POLLIN}, {fd=24, events=POLLIN}, {fd=25, events=POLLIN}, {fd=26, events=POLLIN}, {fd=27, events=POLLIN}, {fd=29, events=POLLIN}, {fd=30, events=POLLIN}, {fd=34, events=POLLIN}, {fd=35, events=POLLIN}, {fd=37, events=POLLIN}, {fd=43, events=POLLIN}, {fd=48, events=POLLIN}, {fd=69, events=POLLIN}, {fd=84, events=POLLIN}, {fd=85, events=POLLIN}, {fd=88, events=POLLIN}, {fd=89, events=POLLIN}, {fd=110, events=POLLIN}], 19, 0) = 0 (Timeout)
711. clock_gettime(CLOCK_MONOTONIC, {289, 8704400}) = 0
712. epoll_wait(43, {}, 32, 0) = 0
713. clock_gettime(CLOCK_MONOTONIC, {289, 10352350}) = 0
714. poll([{fd=3, events=POLLIN}, {fd=21, events=POLLIN}, {fd=24, events=POLLIN}, {fd=25, events=POLLIN}, {fd=26, events=POLLIN}, {fd=27, events=POLLIN}, {fd=29, events=POLLIN}, {fd=30, events=POLLIN}, {fd=34, events=POLLIN}, {fd=35, events=POLLIN}, {fd=37, events=POLLIN}, {fd=43, events=POLLIN}, {fd=48, events=POLLIN}, {fd=69, events=POLLIN}, {fd=84, events=POLLIN}, {fd=85, events=POLLIN}, {fd=88, events=POLLIN}, {fd=89, events=POLLIN}, {fd=110, events=POLLIN}], 19, 0) = 0 (Timeout)
715. clock_gettime(CLOCK_MONOTONIC, {289, 12244440}) = 0
716. epoll_wait(43, {}, 32, 0) = 0
717. clock_gettime(CLOCK_MONOTONIC, {289, 13526178}) = 0
718. poll([{fd=3, events=POLLIN}, {fd=21, events=POLLIN}, {fd=24, events=POLLIN}, {fd=25, events=POLLIN}, {fd=26, events=POLLIN}, {fd=27, events=POLLIN}, {fd=29, events=POLLIN}, {fd=30, events=POLLIN}, {fd=34, events=POLLIN}, {fd=35, events=POLLIN}, {fd=37, events=POLLIN}, {fd=43, events=POLLIN}, {fd=48, events=POLLIN}, {fd=69, events=POLLIN}, {fd=84, events=POLLIN}, {fd=85, events=POLLIN}, {fd=88, events=POLLIN}, {fd=89, events=POLLIN}, {fd=110, events=POLLIN}], 19, 1) = 0 (Timeout)
719. clock_gettime(CLOCK_MONOTONIC, {289, 25031305}) = 0
720. clock_gettime(CLOCK_MONOTONIC, {289, 25244928}) = 0
721. epoll_wait(43, {}, 32, 0) = 0
722. clock_gettime(CLOCK_MONOTONIC, {289, 26374079}) = 0
723. poll([{fd=3, events=POLLIN}, {fd=21, events=POLLIN}, {fd=24, events=POLLIN}, {fd=25, events=POLLIN}, {fd=26, events=POLLIN}, {fd=27, events=POLLIN}, {fd=29, events=POLLIN}, {fd=30, events=POLLIN}, {fd=34, events=POLLIN}, {fd=35, events=POLLIN}, {fd=37, events=POLLIN}, {fd=43, events=POLLIN}, {fd=48, events=POLLIN}, {fd=69, events=POLLIN}, {fd=84, events=POLLIN}, {fd=85, events=POLLIN}, {fd=88, events=POLLIN}, {fd=89, events=POLLIN}, {fd=110, events=POLLIN}], 19, 2249) = 0 (Timeout)
724. clock_gettime(CLOCK_MONOTONIC, {291, 279944635}) = 0
725. clock_gettime(CLOCK_MONOTONIC, {291, 283698297}) = 0
726. clock_gettime(CLOCK_MONOTONIC, {291, 284796930}) = 0
727. write(3, "\1\0\0\0\0\0\0\0", 8) = 8
728. write(3, "\1\0\0\0\0\0\0\0", 8) = 8
729. clock_gettime(CLOCK_MONOTONIC, {291, 286750055}) = 0
730. clock_gettime(CLOCK_MONOTONIC, {291, 287360406}) = 0
731. clock_gettime(CLOCK_MONOTONIC, {291, 288123346}) = 0
732. epoll_wait(43, {}, 32, 0) = 0
733. poll([{fd=3, events=POLLIN}, {fd=21, events=POLLIN}, {fd=24, events=POLLIN}, {fd=25, events=POLLIN}, {fd=26, events=POLLIN}, {fd=27, events=POLLIN}, {fd=29, events=POLLIN}, {fd=30, events=POLLIN}, {fd=34, events=POLLIN}, {fd=35, events=POLLIN}, {fd=37, events=POLLIN}, {fd=43, events=POLLIN}, {fd=48, events=POLLIN}, {fd=69, events=POLLIN}, {fd=84, events=POLLIN}, {fd=85, events=POLLIN}, {fd=88, events=POLLIN}, {fd=89, events=POLLIN}, {fd=110, events=POLLIN}], 19, 0) = 1 ([{fd=3, revents=POLLIN}])
734. write(3, "\1\0\0\0\0\0\0\0", 8) = 8
735. write(3, "\1\0\0\0\0\0\0\0", 8) = 8
736. write(3, "\1\0\0\0\0\0\0\0", 8) = 8
737. epoll_wait(43, {}, 32, 0) = 0
738. clock_gettime(CLOCK_MONOTONIC, {291, 303321100}) = 0
739. poll([{fd=3, events=POLLIN}, {fd=21, events=POLLIN}, {fd=24, events=POLLIN}, {fd=25, events=POLLIN}, {fd=26, events=POLLIN}, {fd=27, events=POLLIN}, {fd=29, events=POLLIN}, {fd=30, events=POLLIN}, {fd=34, events=POLLIN}, {fd=35, events=POLLIN}, {fd=37, events=POLLIN}, {fd=43, events=POLLIN}, {fd=48, events=POLLIN}, {fd=69, events=POLLIN}, {fd=84, events=POLLIN}, {fd=85, events=POLLIN}, {fd=88, events=POLLIN}, {fd=89, events=POLLIN}, {fd=110, events=POLLIN}], 19, 0) = 1 ([{fd=3, revents=POLLIN}])
740. read(3, "\5\0\0\0\0\0\0\0", 16) = 8
741. clock_gettime(CLOCK_MONOTONIC, {291, 307135797}) = 0
742. write(3, "\1\0\0\0\0\0\0\0", 8) = 8
743. write(3, "\1\0\0\0\0\0\0\0", 8) = 8
744. clock_gettime(CLOCK_MONOTONIC, {291, 309149957}) = 0
745. epoll_wait(43, {}, 32, 0) = 0
746. poll([{fd=3, events=POLLIN}, {fd=21, events=POLLIN}, {fd=24, events=POLLIN}, {fd=25, events=POLLIN}, {fd=26, events=POLLIN}, {fd=27, events=POLLIN}, {fd=29, events=POLLIN}, {fd=30, events=POLLIN}, {fd=34, events=POLLIN}, {fd=35, events=POLLIN}, {fd=37, events=POLLIN}, {fd=43, events=POLLIN}, {fd=48, events=POLLIN}, {fd=69, events=POLLIN}, {fd=84, events=POLLIN}, {fd=85, events=POLLIN}, {fd=88, events=POLLIN}, {fd=89, events=POLLIN}, {fd=110, events=POLLIN}], 19, 0) = 1 ([{fd=3, revents=POLLIN}])
747. read(3, "\2\0\0\0\0\0\0\0", 16) = 8
748. clock_gettime(CLOCK_MONOTONIC, {291, 314612604}) = 0
749. clock_gettime(CLOCK_MONOTONIC, {291, 316199518}) = 0
750. clock_gettime(CLOCK_MONOTONIC, {291, 317176080}) = 0
751. epoll_wait(43, {}, 32, 0) = 0
752. clock_gettime(CLOCK_MONOTONIC, {291, 322150446}) = 0
753. poll([{fd=3, events=POLLIN}, {fd=21, events=POLLIN}, {fd=24, events=POLLIN}, {fd=25, events=POLLIN}, {fd=26, events=POLLIN}, {fd=27, events=POLLIN}, {fd=29, events=POLLIN}, {fd=30, events=POLLIN}, {fd=34, events=POLLIN}, {fd=35, events=POLLIN}, {fd=37, events=POLLIN}, {fd=43, events=POLLIN}, {fd=48, events=POLLIN}, {fd=69, events=POLLIN}, {fd=84, events=POLLIN}, {fd=85, events=POLLIN}, {fd=88, events=POLLIN}, {fd=89, events=POLLIN}, {fd=110, events=POLLIN}], 19, 0) = 0 (Timeout)
754. read(3, 0xbebc23ec, 16) = -1 EAGAIN (Resource temporarily unavailable)
755. clock_gettime(CLOCK_MONOTONIC, {291, 324866511}) = 0
756. epoll_wait(43, {}, 32, 0) = 0
757. clock_gettime(CLOCK_MONOTONIC, {291, 325812556}) = 0
758. poll([{fd=3, events=POLLIN}, {fd=21, events=POLLIN}, {fd=24, events=POLLIN}, {fd=25, events=POLLIN}, {fd=26, events=POLLIN}, {fd=27, events=POLLIN}, {fd=29, events=POLLIN}, {fd=30, events=POLLIN}, {fd=34, events=POLLIN}, {fd=35, events=POLLIN}, {fd=37, events=POLLIN}, {fd=43, events=POLLIN}, {fd=48, events=POLLIN}, {fd=69, events=POLLIN}, {fd=84, events=POLLIN}, {fd=85, events=POLLIN}, {fd=88, events=POLLIN}, {fd=89, events=POLLIN}, {fd=110, events=POLLIN}], 19, 5) = 0 (Timeout)
759. clock_gettime(CLOCK_MONOTONIC, {291, 337531306}) = 0
760. clock_gettime(CLOCK_MONOTONIC, {291, 338263728}) = 0
761. clock_gettime(CLOCK_MONOTONIC, {291, 339118220}) = 0
762. clock_gettime(CLOCK_MONOTONIC, {291, 339423396}) = 0
763. epoll_wait(43, {}, 32, 0) = 0
764. clock_gettime(CLOCK_MONOTONIC, {291, 343482234}) = 0
765. poll([{fd=3, events=POLLIN}, {fd=21, events=POLLIN}, {fd=24, events=POLLIN}, {fd=25, events=POLLIN}, {fd=26, events=POLLIN}, {fd=27, events=POLLIN}, {fd=29, events=POLLIN}, {fd=30, events=POLLIN}, {fd=34, events=POLLIN}, {fd=35, events=POLLIN}, {fd=37, events=POLLIN}, {fd=43, events=POLLIN}, {fd=48, events=POLLIN}, {fd=69, events=POLLIN}, {fd=84, events=POLLIN}, {fd=85, events=POLLIN}, {fd=88, events=POLLIN}, {fd=89, events=POLLIN}, {fd=110, events=POLLIN}], 19, 1) = 0 (Timeout)
766. clock_gettime(CLOCK_MONOTONIC, {291, 346900203}) = 0
767. clock_gettime(CLOCK_MONOTONIC, {291, 347724177}) = 0
768. clock_gettime(CLOCK_MONOTONIC, {291, 348334529}) = 0
769. epoll_wait(43, {}, 32, 0) = 0
770. clock_gettime(CLOCK_MONOTONIC, {291, 349860408}) = 0
771. poll([{fd=3, events=POLLIN}, {fd=21, events=POLLIN}, {fd=24, events=POLLIN}, {fd=25, events=POLLIN}, {fd=26, events=POLLIN}, {fd=27, events=POLLIN}, {fd=29, events=POLLIN}, {fd=30, events=POLLIN}, {fd=34, events=POLLIN}, {fd=35, events=POLLIN}, {fd=37, events=POLLIN}, {fd=43, events=POLLIN}, {fd=48, events=POLLIN}, {fd=69, events=POLLIN}, {fd=84, events=POLLIN}, {fd=85, events=POLLIN}, {fd=88, events=POLLIN}, {fd=89, events=POLLIN}, {fd=110, events=POLLIN}], 19, 13) = 0 (Timeout)
772. clock_gettime(CLOCK_MONOTONIC, {291, 366278865}) = 0
773. clock_gettime(CLOCK_MONOTONIC, {291, 367285945}) = 0
774. clock_gettime(CLOCK_MONOTONIC, {291, 367591120}) = 0
775. epoll_wait(43, {}, 32, 0) = 0
776. clock_gettime(CLOCK_MONOTONIC, {291, 369116999}) = 0
777. poll([{fd=3, events=POLLIN}, {fd=21, events=POLLIN}, {fd=24, events=POLLIN}, {fd=25, events=POLLIN}, {fd=26, events=POLLIN}, {fd=27, events=POLLIN}, {fd=29, events=POLLIN}, {fd=30, events=POLLIN}, {fd=34, events=POLLIN}, {fd=35, events=POLLIN}, {fd=37, events=POLLIN}, {fd=43, events=POLLIN}, {fd=48, events=POLLIN}, {fd=69, events=POLLIN}, {fd=84, events=POLLIN}, {fd=85, events=POLLIN}, {fd=88, events=POLLIN}, {fd=89, events=POLLIN}, {fd=110, events=POLLIN}], 19, 10) = 0 (Timeout)
778. clock_gettime(CLOCK_MONOTONIC, {291, 384558893}) = 0
779. clock_gettime(CLOCK_MONOTONIC, {291, 384803033}) = 0
780. clock_gettime(CLOCK_MONOTONIC, {291, 385443902}) = 0
781. epoll_wait(43, {}, 32, 0) = 0
782. clock_gettime(CLOCK_MONOTONIC, {291, 386786676}) = 0
783. poll([{fd=3, events=POLLIN}, {fd=21, events=POLLIN}, {fd=24, events=POLLIN}, {fd=25, events=POLLIN}, {fd=26, events=POLLIN}, {fd=27, events=POLLIN}, {fd=29, events=POLLIN}, {fd=30, events=POLLIN}, {fd=34, events=POLLIN}, {fd=35, events=POLLIN}, {fd=37, events=POLLIN}, {fd=43, events=POLLIN}, {fd=48, events=POLLIN}, {fd=69, events=POLLIN}, {fd=84, events=POLLIN}, {fd=85, events=POLLIN}, {fd=88, events=POLLIN}, {fd=89, events=POLLIN}, {fd=110, events=POLLIN}], 19, 8) = 0 (Timeout)
784. clock_gettime(CLOCK_MONOTONIC, {291, 397040581}) = 0
785. clock_gettime(CLOCK_MONOTONIC, {291, 397406792}) = 0
786. clock_gettime(CLOCK_MONOTONIC, {291, 398169732}) = 0
787. epoll_wait(43, {}, 32, 0) = 0
788. clock_gettime(CLOCK_MONOTONIC, {291, 399481988}) = 0
789. poll([{fd=3, events=POLLIN}, {fd=21, events=POLLIN}, {fd=24, events=POLLIN}, {fd=25, events=POLLIN}, {fd=26, events=POLLIN}, {fd=27, events=POLLIN}, {fd=29, events=POLLIN}, {fd=30, events=POLLIN}, {fd=34, events=POLLIN}, {fd=35, events=POLLIN}, {fd=37, events=POLLIN}, {fd=43, events=POLLIN}, {fd=48, events=POLLIN}, {fd=69, events=POLLIN}, {fd=84, events=POLLIN}, {fd=85, events=POLLIN}, {fd=88, events=POLLIN}, {fd=89, events=POLLIN}, {fd=110, events=POLLIN}], 19, 12) = 0 (Timeout)
790. clock_gettime(CLOCK_MONOTONIC, {291, 416175104}) = 0
791. clock_gettime(CLOCK_MONOTONIC, {291, 417212701}) = 0
792. clock_gettime(CLOCK_MONOTONIC, {291, 417609430}) = 0
793. epoll_wait(43, {}, 32, 0) = 0
794. clock_gettime(CLOCK_MONOTONIC, {291, 419593072}) = 0
795. poll([{fd=3, events=POLLIN}, {fd=21, events=POLLIN}, {fd=24, events=POLLIN}, {fd=25, events=POLLIN}, {fd=26, events=POLLIN}, {fd=27, events=POLLIN}, {fd=29, events=POLLIN}, {fd=30, events=POLLIN}, {fd=34, events=POLLIN}, {fd=35, events=POLLIN}, {fd=37, events=POLLIN}, {fd=43, events=POLLIN}, {fd=48, events=POLLIN}, {fd=69, events=POLLIN}, {fd=84, events=POLLIN}, {fd=85, events=POLLIN}, {fd=88, events=POLLIN}, {fd=89, events=POLLIN}, {fd=110, events=POLLIN}], 19, 8) = 0 (Timeout)
796. clock_gettime(CLOCK_MONOTONIC, {291, 434577204}) = 0
797. clock_gettime(CLOCK_MONOTONIC, {291, 435004450}) = 0
798. clock_gettime(CLOCK_MONOTONIC, {291, 435218073}) = 0
799. epoll_wait(43, {}, 32, 0) = 0
800. clock_gettime(CLOCK_MONOTONIC, {291, 436957575}) = 0
801. poll([{fd=3, events=POLLIN}, {fd=21, events=POLLIN}, {fd=24, events=POLLIN}, {fd=25, events=POLLIN}, {fd=26, events=POLLIN}, {fd=27, events=POLLIN}, {fd=29, events=POLLIN}, {fd=30, events=POLLIN}, {fd=34, events=POLLIN}, {fd=35, events=POLLIN}, {fd=37, events=POLLIN}, {fd=43, events=POLLIN}, {fd=48, events=POLLIN}, {fd=69, events=POLLIN}, {fd=84, events=POLLIN}, {fd=85, events=POLLIN}, {fd=88, events=POLLIN}, {fd=89, events=POLLIN}, {fd=110, events=POLLIN}], 19, 6) = 0 (Timeout)
802. clock_gettime(CLOCK_MONOTONIC, {291, 444617488}) = 0
803. clock_gettime(CLOCK_MONOTONIC, {291, 444831111}) = 0
804. clock_gettime(CLOCK_MONOTONIC, {291, 445624568}) = 0
805. epoll_wait(43, {}, 32, 0) = 0
806. clock_gettime(CLOCK_MONOTONIC, {291, 447028376}) = 0
807. poll([{fd=3, events=POLLIN}, {fd=21, events=POLLIN}, {fd=24, events=POLLIN}, {fd=25, events=POLLIN}, {fd=26, events=POLLIN}, {fd=27, events=POLLIN}, {fd=29, events=POLLIN}, {fd=30, events=POLLIN}, {fd=34, events=POLLIN}, {fd=35, events=POLLIN}, {fd=37, events=POLLIN}, {fd=43, events=POLLIN}, {fd=48, events=POLLIN}, {fd=69, events=POLLIN}, {fd=84, events=POLLIN}, {fd=85, events=POLLIN}, {fd=88, events=POLLIN}, {fd=89, events=POLLIN}, {fd=110, events=POLLIN}], 19, 12) = 0 (Timeout)
808. clock_gettime(CLOCK_MONOTONIC, {291, 464575985}) = 0
809. clock_gettime(CLOCK_MONOTONIC, {291, 465613582}) = 0
810. clock_gettime(CLOCK_MONOTONIC, {291, 466529109}) = 0
811. epoll_wait(43, {}, 32, 0) = 0
812. clock_gettime(CLOCK_MONOTONIC, {291, 468543269}) = 0
813. poll([{fd=3, events=POLLIN}, {fd=21, events=POLLIN}, {fd=24, events=POLLIN}, {fd=25, events=POLLIN}, {fd=26, events=POLLIN}, {fd=27, events=POLLIN}, {fd=29, events=POLLIN}, {fd=30, events=POLLIN}, {fd=34, events=POLLIN}, {fd=35, events=POLLIN}, {fd=37, events=POLLIN}, {fd=43, events=POLLIN}, {fd=48, events=POLLIN}, {fd=69, events=POLLIN}, {fd=84, events=POLLIN}, {fd=85, events=POLLIN}, {fd=88, events=POLLIN}, {fd=89, events=POLLIN}, {fd=110, events=POLLIN}], 19, 7) = 0 (Timeout)
814. clock_gettime(CLOCK_MONOTONIC, {291, 477454403}) = 0
815. clock_gettime(CLOCK_MONOTONIC, {291, 477668026}) = 0
816. clock_gettime(CLOCK_MONOTONIC, {291, 478553035}) = 0
817. epoll_wait(43, {}, 32, 0) = 0
818. clock_gettime(CLOCK_MONOTONIC, {291, 479865291}) = 0
819. poll([{fd=3, events=POLLIN}, {fd=21, events=POLLIN}, {fd=24, events=POLLIN}, {fd=25, events=POLLIN}, {fd=26, events=POLLIN}, {fd=27, events=POLLIN}, {fd=29, events=POLLIN}, {fd=30, events=POLLIN}, {fd=34, events=POLLIN}, {fd=35, events=POLLIN}, {fd=37, events=POLLIN}, {fd=43, events=POLLIN}, {fd=48, events=POLLIN}, {fd=69, events=POLLIN}, {fd=84, events=POLLIN}, {fd=85, events=POLLIN}, {fd=88, events=POLLIN}, {fd=89, events=POLLIN}, {fd=110, events=POLLIN}], 19, 11) = 0 (Timeout)
820. clock_gettime(CLOCK_MONOTONIC, {291, 495734431}) = 0
821. clock_gettime(CLOCK_MONOTONIC, {291, 495948054}) = 0
822. clock_gettime(CLOCK_MONOTONIC, {291, 496833064}) = 0
823. epoll_wait(43, {}, 32, 0) = 0
824. clock_gettime(CLOCK_MONOTONIC, {291, 498114802}) = 0
825. poll([{fd=3, events=POLLIN}, {fd=21, events=POLLIN}, {fd=24, events=POLLIN}, {fd=25, events=POLLIN}, {fd=26, events=POLLIN}, {fd=27, events=POLLIN}, {fd=29, events=POLLIN}, {fd=30, events=POLLIN}, {fd=34, events=POLLIN}, {fd=35, events=POLLIN}, {fd=37, events=POLLIN}, {fd=43, events=POLLIN}, {fd=48, events=POLLIN}, {fd=69, events=POLLIN}, {fd=84, events=POLLIN}, {fd=85, events=POLLIN}, {fd=88, events=POLLIN}, {fd=89, events=POLLIN}, {fd=110, events=POLLIN}], 19, 9) = 0 (Timeout)
826. clock_gettime(CLOCK_MONOTONIC, {291, 509162164}) = 0
827. clock_gettime(CLOCK_MONOTONIC, {291, 509406305}) = 0
828. clock_gettime(CLOCK_MONOTONIC, {291, 513343072}) = 0
829. epoll_wait(43, {}, 32, 0) = 0
830. clock_gettime(CLOCK_MONOTONIC, {291, 515296197}) = 0
831. poll([{fd=3, events=POLLIN}, {fd=21, events=POLLIN}, {fd=24, events=POLLIN}, {fd=25, events=POLLIN}, {fd=26, events=POLLIN}, {fd=27, events=POLLIN}, {fd=29, events=POLLIN}, {fd=30, events=POLLIN}, {fd=34, events=POLLIN}, {fd=35, events=POLLIN}, {fd=37, events=POLLIN}, {fd=43, events=POLLIN}, {fd=48, events=POLLIN}, {fd=69, events=POLLIN}, {fd=84, events=POLLIN}, {fd=85, events=POLLIN}, {fd=88, events=POLLIN}, {fd=89, events=POLLIN}, {fd=110, events=POLLIN}], 19, 8) = 0 (Timeout)
832. clock_gettime(CLOCK_MONOTONIC, {291, 525611139}) = 0
833. clock_gettime(CLOCK_MONOTONIC, {291, 526374078}) = 0
834. clock_gettime(CLOCK_MONOTONIC, {291, 527137018}) = 0
835. write(3, "\1\0\0\0\0\0\0\0", 8) = 8
836. epoll_wait(43, {}, 32, 0) = 0
837. poll([{fd=3, events=POLLIN}, {fd=21, events=POLLIN}, {fd=24, events=POLLIN}, {fd=25, events=POLLIN}, {fd=26, events=POLLIN}, {fd=27, events=POLLIN}, {fd=29, events=POLLIN}, {fd=30, events=POLLIN}, {fd=34, events=POLLIN}, {fd=35, events=POLLIN}, {fd=37, events=POLLIN}, {fd=43, events=POLLIN}, {fd=48, events=POLLIN}, {fd=69, events=POLLIN}, {fd=84, events=POLLIN}, {fd=85, events=POLLIN}, {fd=88, events=POLLIN}, {fd=89, events=POLLIN}, {fd=110, events=POLLIN}], 19, 0) = 1 ([{fd=3, revents=POLLIN}])
838. write(3, "\1\0\0\0\0\0\0\0", 8) = 8
839. write(3, "\1\0\0\0\0\0\0\0", 8) = 8
840. epoll_wait(43, {}, 32, 0) = 0
841. clock_gettime(CLOCK_MONOTONIC, {291, 535498835}) = 0
842. poll([{fd=3, events=POLLIN}, {fd=21, events=POLLIN}, {fd=24, events=POLLIN}, {fd=25, events=POLLIN}, {fd=26, events=POLLIN}, {fd=27, events=POLLIN}, {fd=29, events=POLLIN}, {fd=30, events=POLLIN}, {fd=34, events=POLLIN}, {fd=35, events=POLLIN}, {fd=37, events=POLLIN}, {fd=43, events=POLLIN}, {fd=48, events=POLLIN}, {fd=69, events=POLLIN}, {fd=84, events=POLLIN}, {fd=85, events=POLLIN}, {fd=88, events=POLLIN}, {fd=89, events=POLLIN}, {fd=110, events=POLLIN}], 19, 0) = 1 ([{fd=3, revents=POLLIN}])
843. read(3, "\3\0\0\0\0\0\0\0", 16) = 8
844. clock_gettime(CLOCK_MONOTONIC, {291, 539099909}) = 0
845. clock_gettime(CLOCK_MONOTONIC, {291, 541114070}) = 0
846. epoll_wait(43, {}, 32, 0) = 0
847. clock_gettime(CLOCK_MONOTONIC, {291, 542517878}) = 0
848. poll([{fd=3, events=POLLIN}, {fd=21, events=POLLIN}, {fd=24, events=POLLIN}, {fd=25, events=POLLIN}, {fd=26, events=POLLIN}, {fd=27, events=POLLIN}, {fd=29, events=POLLIN}, {fd=30, events=POLLIN}, {fd=34, events=POLLIN}, {fd=35, events=POLLIN}, {fd=37, events=POLLIN}, {fd=43, events=POLLIN}, {fd=48, events=POLLIN}, {fd=69, events=POLLIN}, {fd=84, events=POLLIN}, {fd=85, events=POLLIN}, {fd=88, events=POLLIN}, {fd=89, events=POLLIN}, {fd=110, events=POLLIN}], 19, 5458) = 0 (Timeout)
849. read(3, 0xbebc23ec, 16) = -1 EAGAIN (Resource temporarily unavailable)
850. clock_gettime(CLOCK_MONOTONIC, {297, 15570857}) = 0
851. clock_gettime(CLOCK_MONOTONIC, {297, 16333796}) = 0
852. clock_gettime(CLOCK_MONOTONIC, {297, 17523982}) = 0
853. clock_gettime(CLOCK_MONOTONIC, {297, 18653132}) = 0
854. sendmsg(30, {msg_name(0)=NULL, msg_iov(2)=[{"l\1\0\0010\0\0\0\217\0\0\0\210\0\0\0\1\1o\0\36\0\0\0/org/pul"..., 152}, {"\34\0\0\0org.PulseAudio.ServerLookup1"..., 48}], msg_controllen=0, msg_flags=0}, MSG_NOSIGNAL) = 200
855. clock_gettime(CLOCK_MONOTONIC, {297, 23169734}) = 0
856. poll([{fd=30, events=POLLIN}], 1, 25000) = 1 ([{fd=30, revents=POLLIN}])
857. recvmsg(30, {msg_name(0)=NULL, msg_iov(1)=[{"l\3\1\1J\0\0\0(\0\0\0u\0\0\0\6\1s\0\5\0\0\0:1.22\0\0\0"..., 2048}], msg_controllen=0, msg_flags=MSG_CMSG_CLOEXEC}, MSG_CMSG_CLOEXEC) = 210
858. recvmsg(30, 0xbebc1c30, MSG_CMSG_CLOEXEC) = -1 EAGAIN (Resource temporarily unavailable)
859. clock_gettime(CLOCK_MONOTONIC, {297, 26831844}) = 0
860. epoll_wait(43, {}, 32, 0) = 0
861. clock_gettime(CLOCK_MONOTONIC, {297, 28113582}) = 0
862. poll([{fd=3, events=POLLIN}, {fd=21, events=POLLIN}, {fd=24, events=POLLIN}, {fd=25, events=POLLIN}, {fd=26, events=POLLIN}, {fd=27, events=POLLIN}, {fd=29, events=POLLIN}, {fd=30, events=POLLIN}, {fd=34, events=POLLIN}, {fd=35, events=POLLIN}, {fd=37, events=POLLIN}, {fd=43, events=POLLIN}, {fd=48, events=POLLIN}, {fd=69, events=POLLIN}, {fd=84, events=POLLIN}, {fd=85, events=POLLIN}, {fd=88, events=POLLIN}, {fd=89, events=POLLIN}, {fd=110, events=POLLIN}], 19, 8972) = 1 ([{fd=26, revents=POLLIN}])
863. read(26, ")\1\0\0\23a\6\0\5\0\0\0\1\0\0\0)\1\0\0001a\6\0\0\0\0\0\0\0\0\0", 512) = 32
864. epoll_wait(43, {}, 32, 0) = 0
865. clock_gettime(CLOCK_MONOTONIC, {297, 428961970}) = 0
866. poll([{fd=3, events=POLLIN}, {fd=21, events=POLLIN}, {fd=24, events=POLLIN}, {fd=25, events=POLLIN}, {fd=26, events=POLLIN}, {fd=27, events=POLLIN}, {fd=29, events=POLLIN}, {fd=30, events=POLLIN}, {fd=34, events=POLLIN}, {fd=35, events=POLLIN}, {fd=37, events=POLLIN}, {fd=43, events=POLLIN}, {fd=48, events=POLLIN}, {fd=69, events=POLLIN}, {fd=84, events=POLLIN}, {fd=85, events=POLLIN}, {fd=88, events=POLLIN}, {fd=89, events=POLLIN}, {fd=110, events=POLLIN}], 19, 8572) = 1 ([{fd=26, revents=POLLIN}])
867. read(26, "-\1\0\0D6\0\0\5\0\0\0\0\0\0\0-\1\0\0b6\0\0\0\0\0\0\0\0\0\0", 512) = 32
868. epoll_wait(43, {}, 32, 0) = 0
869. clock_gettime(CLOCK_MONOTONIC, {301, 26465633}) = 0
870. poll([{fd=3, events=POLLIN}, {fd=21, events=POLLIN}, {fd=24, events=POLLIN}, {fd=25, events=POLLIN}, {fd=26, events=POLLIN}, {fd=27, events=POLLIN}, {fd=29, events=POLLIN}, {fd=30, events=POLLIN}, {fd=34, events=POLLIN}, {fd=35, events=POLLIN}, {fd=37, events=POLLIN}, {fd=43, events=POLLIN}, {fd=48, events=POLLIN}, {fd=69, events=POLLIN}, {fd=84, events=POLLIN}, {fd=85, events=POLLIN}, {fd=88, events=POLLIN}, {fd=89, events=POLLIN}, {fd=110, events=POLLIN}], 19, 4974) = 1 ([{fd=34, revents=POLLIN}])
871. recvmsg(34, {msg_name(0)=NULL, msg_iov(1)=[{"l\1\1\1\4\0\0\0\22\1\0\0\225\0\0\0\1\1o\0\v\0\0\0/screenl"..., 2048}], msg_controllen=0, msg_flags=MSG_CMSG_CLOEXEC}, MSG_CMSG_CLOEXEC) = 485
872. recvmsg(34, 0xbebc2050, MSG_CMSG_CLOEXEC) = -1 EAGAIN (Resource temporarily unavailable)
873. write(3, "\1\0\0\0\0\0\0\0", 8) = 8
874. write(3, "\1\0\0\0\0\0\0\0", 8) = 8
875. write(3, "\1\0\0\0\0\0\0\0", 8) = 8
876. write(3, "\1\0\0\0\0\0\0\0", 8) = 8
877. epoll_wait(43, {}, 32, 0) = 0
878. poll([{fd=3, events=POLLIN}, {fd=21, events=POLLIN}, {fd=24, events=POLLIN}, {fd=25, events=POLLIN}, {fd=26, events=POLLIN}, {fd=27, events=POLLIN}, {fd=29, events=POLLIN}, {fd=30, events=POLLIN}, {fd=34, events=POLLIN}, {fd=35, events=POLLIN}, {fd=37, events=POLLIN}, {fd=43, events=POLLIN}, {fd=48, events=POLLIN}, {fd=69, events=POLLIN}, {fd=84, events=POLLIN}, {fd=85, events=POLLIN}, {fd=88, events=POLLIN}, {fd=89, events=POLLIN}, {fd=110, events=POLLIN}], 19, 0) = 1 ([{fd=3, revents=POLLIN}])
879. write(3, "\1\0\0\0\0\0\0\0", 8) = 8
880. sendmsg(4, {msg_name(29)={sa_family=AF_LOCAL, sun_path="/run/systemd/journal/socket"}, msg_iov(13)=[{"MESSAGE=[D] HwComposerContext::s"..., 64}, {"\n", 1}, {"PRIORITY=7", 10}, {"\n", 1}, {"CODE_FILE=hwcomposer_context.cpp", 32}, {"\n", 1}, {"CODE_LINE=183", 13}, {"\n", 1}, {"CODE_FUNC=void HwComposerContext"..., 52}, {"\n", 1}, {"SYSLOG_IDENTIFIER=", 18}, {"lipstick", 8}, {"\n", 1}], msg_controllen=0, msg_flags=0}, MSG_NOSIGNAL) = 203
881. writev(6, [{"\3", 1}, {"qdhwcomposer\0", 13}, {"hwc_blank: Unblanking display: 0"..., 33}], 3) = 47
882. ioctl(5, FBIOBLANK
http://www.sputtr.com/digit?page=2
Changes to Local Digit Maps on SoundPoint IP Phones
November, 2011 3725-17471-001/D Technical Bulletin 11572 Changes to Local Digit Maps on SoundPoint ® IP, SoundStation ® IP, and Polycom ® VVX ® 1500 Phones This technical bulletin provides detailed information on how to modify the configuration files to automate the setup phase of number ...
Largest 3-digit Sum
Name: Date: Sam picked the numbers 2, 7, 5, 0, 6 and 8. • How should Sam arrange the numbers to get the largest sum? Write the numbers 2, 7, 5, 0, 6 and 8 in the boxes below and then find the sum.
Printer/Ink Specifications/Kodak1200i printer
Your wide-format business demands higher productivity. The 1200idelivers. For over 20 years we have been a pioneer in the field of wide-format inkjet printing technology.
Multiplying 10-digit numbers using Flickr: The power of ...
Multiplying 10-digit numbers using Flickr: The power of recognition memory Andrew Drucker 1 The recognition method In this informal article, I'lldescribethe\recognition method"|asimple, powerful technique for memorization and mental calculation.
Two- and Three-Digit Squares
Problem-of-the-Week Two- and Three-Digit Squares Copyright © Glencoe Division, Macmillan/McGraw-Hill Algebra 2 The Problem Each array shows two two-digit squares reading from left to right, and two two-digit squares reading from top to bottom.
sub3digitre
3-Digit Subtraction Facts With Re-Grouping 177-88_ 654-316_ 861 - 612_ 441-193_ 125-118_ 444-435_ 662-529_ 840-421_ 863-287_ 755 - 655_ 684 - 432_ 909-409_ 726-289_ 917-248_ 705 - 266_ 801-714_ 748 - 29_ 667 - 139_ 503-229_ 518-359_ 804-216_ 777-669_ 934 - 268 ...
Check Digit Algorithms
C HECK D IGIT A LGORITHMS i May 2007 Table of Contents Section 1 - Check Digit Algorithm Description for Subscriber IDs..... 1 2.1 MEDS ID Check Digit Example..... 1 2.1 CIN Check Digit Example ...
STATE OF INDIANA
S TATE OF I NDIANA I NDIANA G OVERNMENT C ENTER N ORTH 100 N ORTH S ENATE A VENUE N1058(B) I NDIANAPOLIS , IN 46204 P HONE (317) 232-3777 F AX (317) 232-8779 D EPARTMENT OF L OCAL G OVERNMENT F INANCE TO: County Auditors and Assessors FROM: Charlie Bell, Operations, DLGF RE: 18 - Digit Parcel ...
MISSOURI’S 12-DIGIT HYDROLOGIC UNITS
data development for the national watershed boundary dataset: mapping missouri's 12-digit hydrologic units missouri’s 12-digit hydrologic units | 707 | 2,409 | {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0} | 2.734375 | 3 | CC-MAIN-2021-17 | latest | en | 0.558103 |
https://archives2.twoplustwo.com/archive/index.php/t-152766.html
View Full Version : Which is correct regarding a flush draw?
Dave H.
11-23-2004, 03:11 PM
Two different sites and I quote:
[ QUOTE ]
N.B. If you take two suited cards to the river, you have a 15/1 (6.4%) chance of making a flush in your suit by then.
[/ QUOTE ]
[ QUOTE ]
Finally for all the possibilities if you start suited and stay to see all seven cards (your two and the five board cards) the probability that you will make a flush is 5.77%. The odds against you are 16.3:1
[/ QUOTE ]
Which of the above is correct, please?
Also, I thought I read (and I searched for this a long time) that there was something like a 60% probability that the board could be exactly 2 suited. Can that be??
Thank you!
Lost Wages
11-23-2004, 04:25 PM
Question 1)
If you start with 2 suited cards then you will make a flush (or straight flush):
C(11,3)*C(47,2)/C(50,5) = 8.42% = 10.9:1 against
However, that includes the times that the board contains 4 or 5 of your suit which you generally would not prefer.
The probability of making exactly a 5 card flush (or straight flush) is:
C(11,3)*C(39,2)/C(50,5) = 5.77% = 16.3:1 against
Question 2)
Not sure what you mean by "exactly 2 suited". Can you give an example?
Lost Wages
Stork
11-23-2004, 09:46 PM
I'm pretty sure the second one is correct. I just remember hearing that you make the flush slightly less than 6% of the time.
gaming_mouse
11-24-2004, 09:42 AM
5.77% is the chance of having a board with EXACTLY 3 of your suit.
6.4% is the chance of EXACTLY 3 OR EXACTLY 4.
Lost wages gives the correct calculation for having a flush made any way at all (including a str8 flush).
gm
EDIT: Actually, LostWages calculation is not quite right for making a flush any way at all. It counts certain hands multiple times. The correct answer is 6.4%, because having a board of 5 suited cards does not even affect the calculation by as much as 1 digit.
gaming_mouse
11-24-2004, 10:03 AM
Dave,
In case you want to see the math, here it is:
ncr(11,3)*ncr(39,2)/ncr(50,5)=.0577 (EXACTLY 3)
ncr(11,4)*ncr(39,1)/ncr(50,5)=.0060 (EXACTLY 4)
ncr(11,5)/ncr(50,5)=.0002 (EXACTLY 5)
gm
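These three cases, and their 6.4% total, are easy to reproduce with a few lines of Python, where math.comb plays the role of ncr (shown as a quick check of the arithmetic, nothing more):

```python
from math import comb

boards = comb(50, 5)  # possible boards once we hold two suited cards

p_exactly_3 = comb(11, 3) * comb(39, 2) / boards
p_exactly_4 = comb(11, 4) * comb(39, 1) / boards
p_exactly_5 = comb(11, 5) / boards

p_flush = p_exactly_3 + p_exactly_4 + p_exactly_5
print(round(p_exactly_3, 4), round(p_flush, 3))  # 0.0577 0.064
```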
Lost Wages
11-24-2004, 10:30 AM
Good catch, I fudged it.
Lost Wages
Dave H.
11-24-2004, 12:03 PM
By 2 suited, I meant two and ONLY two suits on the board, i.e. 2 diamonds and 3 spades, 4 hearts and 1 club, etc.
Dave H.
11-24-2004, 12:05 PM
Yes, I absolutely wanted to see the math. Thank you very much!
gaming_mouse
11-24-2004, 02:29 PM
[ QUOTE ]
By 2 suited, I meant two and ONLY two suits on the board, i.e. 2 diamonds and 3 spades, 4 hearts and 1 club, etc.
[/ QUOTE ]
ncr(4,2)* (ncr(13,1)*ncr(13,4)*2 + ncr(13,2)*ncr(13,3)*2)/ncr(52,5)
This works out to 14.5%. So yes, the 60% is way off.
gm
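For readers who want to verify the arithmetic, the formula evaluates to about 14.6%:

```python
from math import comb

boards = comb(52, 5)
# 1-and-4 splits plus 2-and-3 splits; the *2 swaps which suit
# gets the larger share, per the explanation later in the thread
splits = comb(13, 1) * comb(13, 4) * 2 + comb(13, 2) * comb(13, 3) * 2
p_two_suits = comb(4, 2) * splits / boards
print(round(p_two_suits, 3))  # 0.146
```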
Lost Wages
11-24-2004, 02:53 PM
GM has it right. Link (http://www.math.sfu.ca/~alspach/comp3.pdf) to the solution for every possible suit distribution.
Lost Wages
semipro
11-24-2004, 04:32 PM
Pardon my ignorance,
What is ncr? And please explain the math.
Thank you.
gaming_mouse
11-24-2004, 06:09 PM
nCr(n,r) is the number of ways you can choose r things from a total of n things.
Thus, for example:
nCr(2,2) = 1
nCr(3,1) = 3
nCr(4,2) = 6
The formula is given by:
nCr = n!/( (n-r)! * r!)
where n! = n*(n-1)*(n-2)*...*3*2*1
If you are not familiar with these concepts, I won't be able to explain the math to you.
gm
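In code, the definition translates one-for-one (integer division keeps the result exact):

```python
from math import factorial

def ncr(n, r):
    # n! / ((n - r)! * r!)
    return factorial(n) // (factorial(n - r) * factorial(r))

print(ncr(2, 2), ncr(3, 1), ncr(4, 2))  # 1 3 6
```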
mannika
11-24-2004, 10:20 PM
The 60% probability probably refers to the probability of the flop being two-suited, not the entire board.
Probability of flop being two suited = 1 - prob(rainbow) - prob(single suit)
prob(rainbow) = (52/52)*(39/51)*(26/50) = 0.3976
prob(single suit) = (52/52)*(12/51)*(11/50) = 0.0518
prob(two-suited) = 1 - 0.3976 - 0.0518 = 0.5506
Which is fairly close to your 60% figure. Not sure if this is what was intended when you heard that remark.
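The three flop probabilities can be confirmed directly:

```python
p_rainbow = (39 / 51) * (26 / 50)  # 2nd and 3rd cards bring new suits
p_mono = (12 / 51) * (11 / 50)     # both match the first card's suit
p_two_suited = 1 - p_rainbow - p_mono
print(round(p_two_suited, 4))  # 0.5506
```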
Dave H.
11-26-2004, 10:01 AM
That may very well be what I read...thanx much!
Dave H.
11-26-2004, 10:38 AM
[ QUOTE ]
ncr(4,2)* (ncr(13,1)*ncr(13,4)*2 + ncr(13,2)*ncr(13,3)*2)/ncr(52,5)
This works out to 14.5%. So yes, the 60% is way off.
[/ QUOTE ]
gm,
I'm struggling with this...can you help?
1. ncr(4,2) would be the number of ways to combine 4 suits two ways...CORRECT?
2. ncr(13,1)*ncr(13,4) refers to the number of ways to get 1 card of one suit and 4 cards of another suit...CORRECT?
3. ncr(13,2)*ncr(13,3) refers to the number of ways to get 2 cards of one suit and 3 cards of another suit...CORRECT?
4. the ncr(52,5) term is obvious and the '+' sign is obvious
5. What is the *2 factor that's used with both terms in the parentheses?
Thank you.
gaming_mouse
11-26-2004, 01:26 PM
ncr(4,2) would be the number of ways to combine 4 suits two ways...CORRECT?
Correct.
ncr(13,1)*ncr(13,4) refers to the number of ways to get 1 card of one suit and 4 cards of another suit...CORRECT?
Correct.
ncr(13,2)*ncr(13,3) refers to the number of ways to get 2 cards of one suit and 3 cards of another suit...CORRECT?
Correct.
What is the *2 factor that's used with both terms in the parentheses?
Take two particular suits, hearts and clubs. We can have:
1 heart, 4 clubs
2 heart, 3 clubs
3 heart, 2 clubs
4 heart, 1 clubs
But the # of combos for "1 heart, 4 clubs" = # of combos for "4 clubs, 1 heart". Similarly for 2,3. So we just multiply each by 2.
gm | 1,823 | 5,312 | {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0} | 3.765625 | 4 | CC-MAIN-2021-17 | latest | en | 0.938641 |
https://link.springer.com/article/10.1007/s11229-023-04110-9
Classical statistics can be considered the traditional workhorse of many disciplines, that, as a consequence, has been studied by philosophers for a long time. Yet methods of machine learning (ML) are gaining relevance in science. In particular, they are successfully employed in predictive tasks.Footnote 1 This raises the question whether ML faces the same methodological problems as classical statistics. This paper sheds light on this question by investigating a long-standing challenge to classical statistics: the reference class problem (RCP). Focusing on deep neural networks (DNNs), one of the most popular methods in ML, I try to carefully carve out how they cope with the RCP. I will conclude that although it remains a serious methodological challenge for them in many situations, some DNNs are able to overcome specific instantiations of the RCP.
In general, the RCP arises whenever the objective probability of possessing a certain property should be assigned to an individual. According to the frequentist account, this probability should be based on an observed relative frequency.Footnote 2 Yet an individual belongs to different reference classes and relative frequencies may vary across these classes. Consequently, it is unclear which reference class should be chosen to determine said single-case probability. Apart from this probabilistic version of the problem, there is a version that is structurally similar, yet not concerned with the rational determination of single-case probabilities, but rather with the rational construction of predictions. The present paper focuses on this predictive version of the RCP.Footnote 3
For instance, consider William Smith who wants to predict whether he will be alive 15 years from now (Salmon, 1989, p. 69).Footnote 4 He belongs to different reference classes: the class of 40-year-old American males, the class of heavy cigarette-smoking individuals, and several other classes. Clearly, the evidence for 15-year survival varies considerably between them. It is therefore not straightforward to choose the class that should serve as a basis for making the prediction. The example illustrates that the RCP is central to situations in which statistical evidence is used to make a prediction for an individual case, even when the prediction is not a probability, but rather a real number or a discrete classification.Footnote 5 Consequently, it arises regularly within the framework of classical statistics, encompassing a variety of fields such as evolutionary biology (Strevens, 2016) or law (Colyvan et al., 2001; Colyvan and Regan, 2016).
An influential suggestion to solve the RCP is due to Reichenbach (1949). He proposes to base one’s inferences on the reference class that is as narrow as possible while also allowing compiling reliable statistics. The narrowness of a class increases with the number of predicates that determine the class. Additionally, Salmon (1971) proposes to counterbalance a strict preference for narrower reference classes with the requirement of homogeneity. Briefly put, this means that the reference class should only be determined by those predicates that are relevant for a particular prediction.
Several authors have thus interpreted the predictive RCP as a problem of statistical model selection (Cheng, 2009; Franklin, 2010): the predicates that determine a reference class can be expressed by the variables in a statistical model. Solving the RCP then reduces to identifying the model with the ‘right’ set of variables. Clearly then, any strategy that identifies one set of variables as the ‘right’ one ultimately needs to take into account the criteria of narrowness, reliability, and homogeneity that make for a suitable reference class.
However, in the context of classical statistics, existing strategies to approach the RCP with model selection techniques pose an additional challenge instead of offering a remedy: from a statistical point of view, there is a tradeoff between the narrowness of the class considered and the reliability of the information that this class contains.Footnote 6 A narrower reference class will contain fewer observations. Thus, by an argument along the lines of the law of large numbers, it will also have an inferior statistical reliability. Furthermore, the combination of fewer observations and a higher number of predicates defining a narrow reference class is problematic for another reason: expressing a narrow reference class by a model with a high number of variables and fitting it to a low number of observations leads to a situation in which the model can memorize the given data, but might predict new observations rather poorly. Thus, inferences derived from information in that reference class are likely susceptible to overfitting (Shalev-Shwartz and Ben-David, 2016). For the same reason, it is difficult to determine a homogeneous reference class using methods of classical statistics: using a model with a high number of variables, thereby considering all predictively relevant predicates, might lead to a homogeneous reference class, but also to a low number of observations in that class and thus, ultimately, to the risk of overfitting.
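The narrowness–reliability tradeoff can be made vivid with a toy simulation (purely illustrative; the population size, the predicates, and the survival probabilities are invented for the sketch, not drawn from the paper): each predicate added to the reference class roughly halves the evidence available within it.

```python
import random

random.seed(0)

# Toy population: 8 binary predicates per individual plus a survival
# outcome; only predicate 0 is actually relevant to survival.
population = []
for _ in range(5000):
    predicates = tuple(random.randint(0, 1) for _ in range(8))
    p_survive = 0.7 if predicates[0] == 1 else 0.4
    population.append((predicates, random.random() < p_survive))

# Narrow the reference class one predicate at a time ("male", then
# "male and over 80 kg", ...) and record how much evidence remains.
class_sizes = []
for depth in range(9):
    members = [alive for preds, alive in population
               if all(p == 1 for p in preds[:depth])]
    class_sizes.append(len(members))

print(class_sizes)  # shrinks roughly by half with each added predicate
```

With eight predicates the class holds only a few dozen individuals, so any relative frequency compiled from it is statistically unreliable even though the class is maximally narrow.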
With the rise of big data, rapidly growing computational resources and datasets, methods of ML are gradually complementing, sometimes even replacing methods of classical statistics in science (Mjolsness and DeCoste, 2001; Wheeler, 2016). In this paper, I focus on DNNs, one of the most popular ML methods. They are employed frequently and with astonishing success in predictive tasks (LeCun et al., 2015; Goodfellow et al., 2016). DNNs perform particularly well in settings involving so-called high-dimensional data, where the number of features associated with each observation is very high, usually much higher than the overall sample size (Belkin et al., 2019). This particular field of application serves as the starting point for my argumentation that proceeds in two steps.
First, I argue that the notion of ‘big data’ can be conceived along two perspectives. A dataset might be large simply because of the number of observations it contains. But the high dimensionality of many contemporary datasets adds a second perspective to the understanding of big data. I show that the combination of both perspectives can be connected to the notions of narrowness and reliability in the debate surrounding the RCP. On the one hand, high dimensionality of a dataset and thus a high number of features associated with each observation can be linked with the idea of a narrow reference class that is defined by a high number of predicates. On the other hand, a high number of observations can be interpreted as being related to the reliability of the information in a dataset.
Second, I argue that the particular functionality of some DNNs predestines them to exploit settings involving big data. For methods of classical statistics and many ML approaches, high-dimensional data involves the risk of overfitting. However, recent ML research reveals that there are DNNs for which this risk is much less prevalent: in many settings, they perfectly fit the training data, but also exhibit high predictive accuracy on new inputs (Belkin et al., 2019; Berner et al., 2021, p. 17).Footnote 7
I argue that this gives rise to a situation in which DNNs remedy particular instantiations of the RCP, namely those involving high-dimensional or ‘big’ data. Their specific functionality enables them to exploit high-dimensional data without incurring the risk of overfitting which allows them to make predictions with high accuracy. I argue that this is akin to an accurate inference from relevant and reliable information in a very narrow reference class to previously unseen individuals.
The remainder of the paper is organized as follows: Sect. 2 introduces the RCP and reviews criteria for the suitability of a reference class. Section 3 provides the necessary background on ML and DNNs. Section 4 outlines existing strategies to solve the RCP that rely on the framework of classical statistics and shows that they fail in some situations. Section 5 argues that DNNs offer a remedy to specific cases of the RCP.
## 2 The reference class problem
This section discusses the RCP. It carves out important distinctions that have been introduced in the literature and their relevance for the present paper. Additionally, this section outlines criteria for the suitability of a reference class.
### 2.1 The problem
The RCP originates in the assignment of an objective probability to an individual object, that is, a single-case probability. According to the frequentist account, this probability should be based on an observed relative frequency. Yet an individual belongs to different classes, so-called reference classes, and relative frequencies may vary across these classes. Consequently, it is unclear which reference class should be chosen to determine the single-case probability (Reichenbach, 1949, p. 374, Venn, 1876, p. 194). I will refer to this original version of the problem as the probabilistic RCP. However, the treatment of the problem has gradually become more fine-grained.Footnote 8 The present paper focuses on the epistemological RCP as it arises in the context of prediction.
The context of prediction was introduced as a specific instantiation of the RCP by Fetzer (1977) and Salmon (1989). In this context, an individual should be assigned to a suitable reference class so as to allow for an accurate prediction. To do so, all available evidence relevant to the prediction at hand should be used.Footnote 9
The epistemological RCP concerns situations in which a rational agent is dealing with the question on which part of given statistical evidence they should base their inductive inferences and decision-making (Hájek, 2007). As illustrated using the case of William Smith who tries to predict his 15-year survival, statistical evidence is relative to a particular reference class, the problem being that it is unclear which reference class is the correct one.
To illustrate the specific instantiation of the RCP examined in this paper, consider the widely discussed legal case United States v. Shonubi.Footnote 10 The case is about Charles Shonubi, a Nigerian citizen, who was apprehended on December 10, 1991 at New York’s John F. Kennedy Airport (JFK), carrying 427.7 grams of heroin. The evidence gathered during the subsequent trial revealed that Shonubi had made at least seven smuggling trips between Nigeria and the United States prior to his detention. As a consequence, sentencing guidelines required an estimate of the overall amount of heroin that Shonubi imported during all eight of his trips (Tillers, 2005, p. 34). It was also required that this estimate be based on ‘specific evidence’. In response to both requirements, data of 117 Nigerian drug smugglers that were apprehended at JFK in the period between Shonubi’s first and last known smuggling trip was analyzed. In particular, the amounts of heroin found on these smugglers served as a basis for estimating the amount Shonubi carried during his first seven trips. This estimated amount was subsequently added to the known amount of 427.7 grams that resulted in the eighth trip (Colyvan et al., 2001, p. 169).
The case clearly involves a prediction problem, since the overall amount of heroin that Shonubi carried during his first seven trips was unknown at the time of the trial. Furthermore, the case is about predicting a quantity rather than a probability. So although the case does not involve the probabilistic RCP, it certainly involves a structurally similar problem: in order to predict the overall amount of heroin based on statistical evidence, Shonubi had to be assigned to some reference class. Yet it also had to be determined what constitutes ‘specific evidence’, that is, a suitable reference class in this particular situation. As several authors rightly point out, it is unclear why “Nigerian drug smugglers apprehended at JFK during the given time period” was chosen as Shonubi’s reference class rather than “all drug smugglers at JFK, all Nigerian smugglers regardless of airport, or smugglers in general” (Cheng, 2009, p. 2082). In fact, there is an indefinite number of classes to which Shonubi could have been assigned. This includes apparently unsuspicious classes such as the class of all airline passengers or the class of toll collectors at New York’s George Washington Bridge which was Shonubi’s day job (Colyvan et al., 2001, p. 172).Footnote 11 Each of them would have resulted in very different predictions for the overall amount of heroin. Thus, when trying to make an individual prediction based on statistical evidence, it is unclear which part of the evidence should have a bearing on the prediction. Put differently, it is unclear which reference class to use to make the prediction. This is the epistemological RCP as it arises in the context of prediction.
### 2.2 Criteria for a suitable reference class
The previous section revealed that the RCP is about choosing a suitable reference class when applying statistical evidence to an individual object. Consequently, a solution to the RCP needs to spell out two things: first, a criterion for what constitutes a suitable reference class and second, a method for actually finding that class. This section discusses criteria for a suitable reference class. Strategies for actually finding it are outlined in Sect. 4.
One influential proposal of a solution to the RCP is due to Reichenbach (1949). For him, there are two criteria determining a suitable reference class: it should be as narrow as possible while also allowing compiling reliable statistics (Reichenbach, 1949, p. 374). What is meant by narrow and reliable?
On the predominant view, the concept of narrowness can be linked to the number of predicates by which a class is determined. For instance, given data about the entire population (no predicate) and data about males in that population (one predicate) when predicting the amount of heroin in the case of Shonubi, one should opt for the more specific data, thereby assigning Shonubi to the narrowest reference class possible that is refined by the highest number of predicates. This seems intuitive. Additionally, Thorn (2017) and Wallmann (2017) show that the preference for narrow reference classes can be formally justified: choosing the narrowest reference class maximizes accuracy in the sense that the difference between prediction and actual value will be minimal.
However, there are at least two problems with the criterion of narrowness. First, reference classes cannot “be totally ordered according to their narrowness” (Hájek, 2007, p. 568). For instance, given data about the entire population and data about males, it is straightforward to identify the narrowest reference class. Yet in a situation in which there is only reliable data regarding males that weigh more than 80 kilograms and regarding males with dark hair, this is not as straightforward. Obviously, each of the classes is narrower than the class of all males, but there is no reliable information as to which of them should be considered the narrowest reference class. Furthermore, it would be a mistake to judge them as equally narrow simply because both classes are determined by one further predicate (Hájek, 2007, p. 569).Footnote 12
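Hájek’s point can be made concrete in a small sketch (pure Python; the population records are invented for illustration): each refined class is narrower than the class of all males, yet neither refined class contains the other, so narrowness yields only a partial order.

```python
# Hypothetical population records illustrating the partial-order problem:
# "males over 80 kg" and "dark-haired males" are incomparable by inclusion.
population = [
    {"id": 1, "male": True, "weight_kg": 85, "dark_hair": False},
    {"id": 2, "male": True, "weight_kg": 70, "dark_hair": True},
    {"id": 3, "male": True, "weight_kg": 90, "dark_hair": True},
    {"id": 4, "male": False, "weight_kg": 85, "dark_hair": True},
]

males = {p["id"] for p in population if p["male"]}
heavy_males = {p["id"] for p in population if p["male"] and p["weight_kg"] > 80}
dark_males = {p["id"] for p in population if p["male"] and p["dark_hair"]}

# Each refined class is strictly narrower than the class of all males ...
print(heavy_males < males, dark_males < males)                # True True
# ... but neither refined class is a subset of the other.
print(heavy_males <= dark_males, dark_males <= heavy_males)   # False False
```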
Second, solely focusing on the criterion of narrowness implies that one should always prefer evidence for singleton reference classes (Thorn, 2012, p. 303).Footnote 13 Thus, in the Shonubi case, the overall amount of heroin should have been determined based on the reference class containing only Charles Shonubi.Footnote 14 This clearly misguided strategy illustrates what might have been obvious from the outset: that a strict preference for narrow and hence ultimately singleton reference classes is untenable.
Reichenbach (1949) seems to attenuate the strict preference for narrower reference classes by additionally requiring reliable information: one should choose the narrowest reference class that also contains reliable information. However, Reichenbach does not further specify the concept of reliability. Hájek (2007, p. 568) even argues that it is a vague concept per se that cannot be pinned down employing ideas of classical statistics such as a sufficiently large sample size. I partially disagree with this observation. Although there might be more to reliability than purely statistical aspects like a large sample, the latter aspects are certainly an important part of it. This is due to the fact that theoretical results that guarantee the reliability of statistical methods rely on precisely these aspects.Footnote 15 Hájek (2007, p. 568) also notes that the meaning of reliability might in fact be context-dependent and sensitive to pragmatic considerations. I agree with this observation. However, it seems unproblematic as soon as the specific context is made explicit. Here, the focus is on the RCP as it arises in the context of prediction. Thus it is reasonable to argue that information is reliable to the extent that it leads to accurate predictions.Footnote 16
Apart from reliability, Salmon (1971, 1989) proposes homogeneity as another criterion to counterbalance a strict preference for narrower reference classes. As mentioned above, he argues that when concerned with prediction, one should exploit all available evidence. Yet what is crucial to achieve homogeneity is the statistical relevance of the evidence, which Salmon (1971, p. 42) defines as follows: when trying to predict the probability that an individual has some property B based on an overall set of evidence A, another property C is statistically relevant to B just in case $$P(B|A,C) \ne P(B|A)$$, that is, just in case conditioning on A and C leads to another probability for the individual to have property B than conditioning only on A.Footnote 17 Thus, to determine a suitable reference class for a prediction concerning property B, one should start by considering the broadest class A and partition it in terms of all predicates $$C_1, C_2, \dots$$ that are statistically relevant to the question at hand; yet one should avoid partitioning the class in terms of statistically irrelevant predicates, since this would reduce the available evidence with no good reason. According to Salmon (1971, p. 43, 1989, p. 69), one should ultimately choose the broadest homogeneous reference class. This is the class that is subdivided by a homogeneous partition, that is, by a partition that includes all predicates that are known to be statistically relevant and that does not include any statistically irrelevant predicates.Footnote 18
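Salmon’s relevance test amounts to comparing two relative frequencies. A minimal sketch with invented records (the properties A, C and B carry no substantive interpretation here):

```python
# Each record is (has_A, has_C, has_B) for one hypothetical individual.
records = [
    (True, True,  True), (True, True,  True), (True, True,  False),
    (True, True,  True), (True, False, False), (True, False, True),
    (True, False, False), (True, False, False),
]

def p_b_given(condition):
    """Relative frequency of B among records satisfying the condition."""
    selected = [b for (a, c, b) in records if condition(a, c)]
    return sum(selected) / len(selected)

p_b_a = p_b_given(lambda a, c: a)           # P(B | A)
p_b_ac = p_b_given(lambda a, c: a and c)    # P(B | A, C)

# C is statistically relevant to B (relative to A) iff the two values differ,
# in which case C should be used to partition the reference class A.
print(p_b_a, p_b_ac, p_b_ac != p_b_a)
```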
While Salmon focuses on the prediction of probabilities and hence formulates the notion of statistical relevance in terms of probabilities as well, Colyvan et al. (2001) emphasize the importance of homogeneity even in settings like the case of Shonubi, where the prediction to be made is not a probability, but rather a real number. They argue that choosing the right reference class “is not just a question of specifying enough predicates to be jointly satisfied so that the reference class in question contains very few (but non-zero) members” (Colyvan et al., 2001, p. 172). Instead, the reference class should be homogeneous in the sense that refining the partition by adding another predicate does not (significantly) change the predicted value. I will refer to this idea as predictive homogeneity. The formulation resembles Salmon’s definition of a homogeneous partition in terms of statistical relevance, yet it differs in one respect: it replaces the focus on changes in probabilities, which is central to the definition of statistical relevance, with the more general focus on changes in predicted values.
Overall, the criterion of homogeneity complements the criterion of narrowness and can be considered as a lower bound to it: while the criterion of narrowness requires choosing a class that is determined by as many predicates as possible, the criterion of homogeneity requires choosing a class that is determined only by those predicates that are relevant to the question at hand.
In sum, the discussion reveals that Reichenbach’s proposal to solve the RCP is still an important point of reference. The criterion of narrowness is intuitively plausible, yet it requires a counterpart to avoid shortcomings like singleton reference classes. In the context of prediction, both the criterion of reliability and the criterion of homogeneity serve as such a counterpart.
## 3 Machine learning and deep neural networks
This section provides an overview of central aspects of ML and DNNs. Readers familiar with the material may safely skip to Sect. 4.
### 3.1 Machine learning
The main focus of ML is on the problem of generalization: how to make accurate predictions for new instances based on empirical observations?Footnote 19 In the following, I will focus on the case of supervised learning. In this setting, there is an input space, X, an output space, Y, and it is assumed that they are governed by an unknown functional relationship $$f :X \rightarrow Y$$. I will focus on a regression task in which $$X = {\mathbb {R}}^{d}$$ and $$Y = {\mathbb {R}}$$.Footnote 20
A set of training data, $$\langle x_1,y_1\rangle , \ldots , \langle x_n,y_n\rangle \in {\mathbb {R}}^{d} \times {\mathbb {R}}$$, is essential to most ML tasks. A concise way of capturing the data sampled from the input space is by means of a design matrix $${\textbf{X}} \in {\mathbb {R}}^{n \times d}$$. Here, n is the number of observations and d is the number of features associated with each observation. Often, d is referred to as the dimension of the data. In many applications involving texts, speech or images, the number of features d in a dataset is high, in some cases even considerably higher than the number of observations, such that $$d \gg n$$. This issue is discussed under the headline of high-dimensional data. It is commonly encountered in fields such as astronomy, climate science, economics or genomics (Bühlmann and van de Geer, 2011; Johnstone and Titterington, 2009). The increasing prevalence of high-dimensional data is mainly driven by two factors: a dataset can be inherently high-dimensional because a high number of features is available for each observation. Yet a dataset can also become high-dimensional because researchers are unsure about the functional relationship between available features. In this case, they might construct a wide range of new features by interacting and transforming the available ones (Belloni et al., 2014). The issue of high-dimensional data will come up again in the discussion below. For the moment, note that the features included in a dataset are somehow related to properties associated with the objects that constitute the observations in the dataset. They might consequently provide a link to the analysis of the RCP.
Based on the set of training data, the goal in ML is to find a function $$h :{\mathbb {R}}^{d} \rightarrow {\mathbb {R}}$$ that takes a new and previously unseen point x as input and predicts the corresponding label y as accurately as possible. This is why it is also called a prediction rule. The function h is usually chosen from a so-called hypothesis class $${\mathcal {H}}$$. This is a class of functions that is predetermined by the developers or operators of an ML system. In most cases, empirical risk minimization (ERM) or some variant guides the choice of the final prediction rule $$h \in {\mathcal {H}}$$. This means that the function h is chosen such that it minimizes the training risk, that is, the average deviation between predicted labels, $$h(x_i), i = 1, \dots , n$$, and true labels, $$y_i$$, in the training data. It is in this sense that the final prediction rule h should be as accurate as possible: the goal is to get as close as possible to the labels generated by the true but unknown underlying function f.
However, as mentioned above, the focus of ML is on generalization, that is, on predictions for new observations. So there needs to be a link between ERM on training data and generalization to unseen data. This link is established by the so-called i.i.d.-assumption that all input-output pairs, $$\langle x, y\rangle$$, are independent of each other and drawn from the identical but unknown probability distribution P over $${\mathbb {R}}^{d} \times {\mathbb {R}}$$ (von Luxburg and Schölkopf, 2011, p. 653). This makes it possible to assess the performance of h on new input-output pairs sampled independently from P, giving rise to the test risk. The goal of successful generalization is then operationalized by the requirement that in addition to minimizing the training risk (the goal of ERM), the gap between training and test risk should be minimized as well (Goodfellow et al., 2016, p. 109). It is within this setting that the reliability of the data, that is, whether it leads to accurate predictions, can be assessed to some extent before making predictions for unseen observations: given the i.i.d.-assumption, the training data is structurally similar to the test data, which is why accurate predictions on the latter are more likely given accurate predictions on the former.Footnote 21
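The setup described so far can be condensed into a small sketch (pure Python; the data-generating function, the grid of hypotheses and the sample sizes are all invented for illustration): ERM selects the hypothesis with the lowest training risk, and under the i.i.d. assumption its test risk remains close to its training risk.

```python
import random

random.seed(0)

# The "true but unknown" function f, observed with noise, plus an
# i.i.d. sampling mechanism standing in for the distribution P.
def sample(n):
    return [(x, 2.0 * x + random.gauss(0.0, 0.1))
            for x in (random.uniform(-1.0, 1.0) for _ in range(n))]

train, test = sample(50), sample(50)

# Hypothesis class H: linear rules h_c(x) = c * x for c on a coarse grid.
hypotheses = [c / 10.0 for c in range(-50, 51)]

def risk(c, data):
    """Average squared deviation between predictions c*x and labels y."""
    return sum((c * x - y) ** 2 for x, y in data) / len(data)

# Empirical risk minimization: choose the rule with lowest training risk.
c_hat = min(hypotheses, key=lambda c: risk(c, train))

train_risk, test_risk = risk(c_hat, train), risk(c_hat, test)
print(c_hat, train_risk, test_risk)
```

Because training and test data are drawn from the same distribution, the selected slope lands close to the true value and the two risks stay close to each other.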
The relation between training and test risk and hence the ability to generalize is closely linked to two central challenges in ML: underfitting and overfitting. Underfitting occurs when a prediction rule is overly simplistic, lacks the capacity to capture the complexity in the data and hence achieves poor accuracy on the training data. Overfitting occurs when a prediction rule fits the training data very closely and achieves high accuracy on the training data, thereby also fitting idiosyncrasies of the sample at hand that are not relevant for future observations. This usually leads to poor generalization and hence to a large gap between low training and high test risk. Just in case there is such a large gap, a prediction rule is said to be subject to overfitting (Goodfellow et al., 2016, p. 110). Consequently, a very close fit to the training data is not equivalent to overfitting, but usually makes it more likely to occur.
Whether a prediction rule tends to underfit or overfit is closely tied to the capacity of the underlying hypothesis class. This is illustrated in Figure 1. A hypothesis class with low capacity contains rather simplistic prediction rules that may struggle to fit the training data and will be prone to underfitting. A hypothesis class with high capacity contains highly complex prediction rules that may even fit random patterns in the training data and will be prone to overfitting. Consequently, to balance over- and underfitting, it is usually necessary to impose certain restrictions on the hypothesis class.
For instance, given that the structure of input and output data points towards a linear relationship, one might restrict the hypothesis class such that it only contains linear prediction rules.Footnote 22 In this case, the hypothesis class would be given by all prediction rules of the form $$h(x) = x_1 c_1 + \dots + x_d c_d$$. Determining the final prediction rule would amount to determining the coefficients $$c_1, \dots , c_d$$. On the one hand, this restriction would lead to at least an approximate fit between the final prediction rule and the training data, thereby avoiding underfitting. On the other hand, the restriction would ensure that the final prediction cannot fit the training data too closely, thereby avoiding overfitting.
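The tradeoff can be illustrated on an invented dataset with three hypothesis classes of increasing capacity: constant rules (too little capacity), rules of the form $$h(x) = c \cdot x^2$$ (matched capacity), and exact interpolation of the training points (excess capacity). Only the middle class keeps both training and test risk low; the names and numbers below are purely illustrative.

```python
import random

random.seed(1)

# Hypothetical ground truth f(x) = x^2, observed with noise.
points = [(x / 10.0, (x / 10.0) ** 2 + random.gauss(0.0, 0.1))
          for x in range(-10, 11)]
train, test = points[::2], points[1::2]

def risk(h, data):
    return sum((h(x) - y) ** 2 for x, y in data) / len(data)

# Low capacity: constant rules; ERM picks the mean label (underfits).
mean_y = sum(y for _, y in train) / len(train)
h_low = lambda x: mean_y

# Matched capacity: rules h(x) = c * x**2, c chosen by ERM on a grid.
grid = [c / 100.0 for c in range(0, 201)]
c_hat = min(grid, key=lambda c: risk(lambda x, c=c: c * x * x, train))
h_mid = lambda x: c_hat * x * x

# High capacity: Lagrange interpolation through all training points,
# giving zero training risk but also fitting the noise (overfits).
def h_high(x):
    total = 0.0
    for i, (xi, yi) in enumerate(train):
        term = yi
        for j, (xj, _) in enumerate(train):
            if j != i:
                term *= (x - xj) / (xi - xj)
        total += term
    return total

for name, h in [("low", h_low), ("mid", h_mid), ("high", h_high)]:
    print(name, risk(h, train), risk(h, test))
```

The high-capacity rule attains (numerically) zero training risk yet a larger test risk than the matched rule, which is exactly the gap between training and test risk that defines overfitting.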
### 3.2 Deep neural networks
DNNs are usually depicted as graphs consisting of nodes, the neurons, and edges transmitting information between neurons.Footnote 23 For simplicity, I focus on fully connected feedforward networks in which the graph contains no cycles.Footnote 24
More formally, a DNN can be described as a (directed and acyclic) graph, $$G = \langle V,E\rangle$$. The set of neurons is denoted by V, the set of edges is denoted by E. Typically, a DNN is structured in layers. If the DNN is fully connected, each node from one layer is connected to each node from the next layer by one edge. A network’s number of layers is commonly referred to as the depth of the network. DNNs contain a high number of layers which is why they are called ‘deep’.
Data is processed through the network as follows: first, it enters the network at the input layer. This layer contains one node per dimension of the input data. Then, the data is transmitted to the next layer. An activation function that is associated with the nodes in the network determines whether and in what form the data is processed from one neuron to another. A weight function determines, for each edge, the importance of the data passed on along that edge. Consequently, the input of a neuron consists of the weighted sum of the transformed outputs of all nodes connected to it.Footnote 25 Finally, for each input x, the network produces an output y at the output layer.
In practical applications, developers or operators of a DNN usually predefine the architecture of the network. It consists of a graph and an activation function. Thus, the output labels that a network produces depend on the predefined architecture and on the weights, w. Consequently, the learning process of a DNN amounts to finding the best among all possible configurations of weights for a given architecture. In this context, ‘best’ means most accurate according to ERM. The most common method to minimize the empirical risk of DNNs is the so-called stochastic gradient-descent (SGD) algorithm. Its underlying rationale is to initialize the weights with random values, to update them stepwise and to converge to that configuration of weights that leads to the lowest empirical risk. This configuration is then used to compute new predictions y for previously unseen observations x.Footnote 26
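The mechanics just described (forward propagation through weighted sums and activations, then SGD weight updates) can be sketched in a few lines of plain Python. The network below is deliberately tiny (one input, one hidden layer of tanh neurons, a linear output); the target function, layer size and learning rate are invented for illustration and bear no relation to the architectures discussed later.

```python
import math
import random

random.seed(0)

H = 8                                              # hidden neurons
w1 = [random.gauss(0.0, 0.5) for _ in range(H)]    # input -> hidden weights
b1 = [0.0] * H
w2 = [random.gauss(0.0, 0.5) for _ in range(H)]    # hidden -> output weights
b2 = 0.0

def forward(x):
    hidden = [math.tanh(w1[j] * x + b1[j]) for j in range(H)]
    return sum(w2[j] * hidden[j] for j in range(H)) + b2, hidden

# Training data for the invented target f(x) = x^2.
data = [(x / 10.0, (x / 10.0) ** 2) for x in range(-10, 11)]

def empirical_risk():
    return sum((forward(x)[0] - y) ** 2 for x, y in data) / len(data)

initial = empirical_risk()
lr = 0.03
for _ in range(1000):                  # SGD: one weight update per example
    random.shuffle(data)
    for x, y in data:
        out, hidden = forward(x)
        err = 2.0 * (out - y)          # gradient of squared error wrt output
        for j in range(H):
            grad_pre = err * w2[j] * (1.0 - hidden[j] ** 2)  # tanh' backprop
            w2[j] -= lr * err * hidden[j]
            b1[j] -= lr * grad_pre
            w1[j] -= lr * grad_pre * x
        b2 -= lr * err

final = empirical_risk()
print(initial, final)   # the empirical risk drops markedly during training
```

The loop converges toward a weight configuration with low empirical risk, which is precisely the rationale of SGD described above.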
## 4 Statistical strategies to solve the reference class problem
Section 2.2 discussed three important criteria for a suitable reference class: narrowness, reliability, and homogeneity. However, little has been said about strategies to find the reference class for which these criteria are fulfilled. In particular, while it might be straightforward to determine a narrowest reference class, it is unclear how to discern relevant from irrelevant evidence and hence how to establish predictive homogeneity of a reference class. As mentioned above, several authors have interpreted the RCP as a problem of statistical model selection, which is why they try to address this issue within the framework of classical statistics.
For instance, Cheng (2009) argues that the predicates that determine a reference class can be expressed by the variables in a statistical model. So in a linear model of the form $$h(x) = x_1 c_1 + \dots + x_d c_d$$, the variables $$x_1, \dots , x_d$$ are taken to be predicates that determine a reference class, while h(x) would be a prediction based on these variables. Thus, in the case of Shonubi, $$x_1$$ might encode ‘age’, $$x_2$$ might encode ‘citizenship’ and h(x) might encode the overall quantity of heroin predicted based on the variables included in the model.Footnote 27
Given this setup, choosing the right reference class for making a prediction reduces to identifying the model with the right set of variables. With respect to the reference class, the criteria of narrowness, reliability, and homogeneity are constitutive for what is ‘right’. With respect to the set of variables, the model should be selected such that it avoids under- and overfitting (Cheng, 2009, p. 2095). As mentioned above, the latter is closely related to a model’s complexity and thus, given the model’s overall structure (i.e., a linear function, a specific architecture, etc.), to the number of variables it contains: the model should include enough variables to avoid underfitting; yet it should also contain only relevant variables to avoid overfitting. Consequently, when framing the RCP as a problem of statistical model selection, there is a close connection between the goal of avoiding under- and overfitting and the goal of choosing a reference class that is as narrow as possible while also being homogeneous.
When interpreting the RCP as a problem of statistical model selection, it seems straightforward to solve it using model selection methods.Footnote 28 Accordingly, Cheng (2009) argues that statistical measures like the Akaike Information Criterion (AIC) should be employed to determine the right reference class. The AIC evaluates a statistical model by measuring the model’s fit to the evidence as well as its complexity.Footnote 29 Thus, it evaluates how well the model balances over- and underfitting. Both poor fit to the evidence and high complexity of the model lead to higher values of the AIC. If, instead, a model achieves a reasonably close fit to the evidence while being relatively simple, the AIC has a small value. Consequently, the best model is the one that minimizes the AIC. According to Cheng (2009, p. 2094), this also solves the RCP, for the variables of the best model in terms of the AIC determine the best reference class in a given situation.
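For least-squares models with Gaussian errors, the AIC can be written (up to an additive constant) as $$n \ln (RSS/n) + 2k$$, where RSS is the residual sum of squares and k the number of parameters. The sketch below uses invented values of n, RSS and k to show how the criterion trades fit against complexity; it is an illustration of the formula, not of Cheng’s own computations.

```python
import math

def aic(n, rss, k):
    """Gaussian-likelihood AIC up to an additive constant: n*ln(RSS/n) + 2k."""
    return n * math.log(rss / n) + 2 * k

n = 100
# Hypothetical fits: the richer model fits slightly better but uses
# several additional, barely relevant variables (predicates).
aic_small = aic(n, rss=52.0, k=3)   # e.g. a model with few predicates
aic_large = aic(n, rss=51.5, k=8)   # adds five further predicates

best = min([("small", aic_small), ("large", aic_large)], key=lambda t: t[1])[0]
print(aic_small, aic_large, best)
```

Here the marginal gain in fit does not offset the complexity penalty, so the smaller model (and hence the reference class determined by its variables) is preferred.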
A related approach is proposed by Franklin (2010). He also frames the RCP as a problem of statistical model selection. Yet contrary to Cheng, he suggests using feature selection methods to solve the problem. These methods are commonly used as follows: first, a complex model is specified that contains as many variables as possible given the available data. Next, the model is fitted to the data using a feature selection method that retains relevant variables in the model, while weighting irrelevant variables less or even discarding them altogether.Footnote 30 This leads to a fitted model that contains the relevant variables and in which the weights for irrelevant variables are small or even zero. According to Franklin, the variables that are identified as relevant by the feature selection method determine the right reference class in a given situation.
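One widely used method of the kind Franklin describes, offered here merely as an illustration rather than as his specific choice, is $$\ell _1$$-penalized regression (the lasso). The sketch below implements a simple proximal gradient scheme (ISTA) on invented data with one relevant and one irrelevant feature; the penalty drives the irrelevant weight to (near) zero while retaining the relevant one.

```python
import random

random.seed(2)

# Invented data: y depends on x1 only; x2 is predictively irrelevant.
n = 200
x1 = [random.uniform(-1.0, 1.0) for _ in range(n)]
x2 = [random.uniform(-1.0, 1.0) for _ in range(n)]
y = [3.0 * a + random.gauss(0.0, 0.3) for a in x1]

def soft_threshold(v, t):
    """Proximal operator of the l1 penalty: shrink v toward 0 by t."""
    if v > t:
        return v - t
    if v < -t:
        return v + t
    return 0.0

w = [0.0, 0.0]
lr, lam = 0.1, 0.2
for _ in range(500):
    residuals = [w[0] * a + w[1] * b - c for a, b, c in zip(x1, x2, y)]
    g1 = 2.0 * sum(r * a for r, a in zip(residuals, x1)) / n
    g2 = 2.0 * sum(r * b for r, b in zip(residuals, x2)) / n
    # Gradient step on the squared loss, then soft-thresholding (ISTA).
    w[0] = soft_threshold(w[0] - lr * g1, lr * lam)
    w[1] = soft_threshold(w[1] - lr * g2, lr * lam)

print(w)   # the irrelevant weight is (near) zero; the relevant one survives
```

In Franklin’s terms, the fitted model retains the relevant variable and discards the irrelevant one, and the retained variables would determine the reference class.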
There are certainly many aspects about both approaches that require further discussion. Yet there is one general issue that affects both of them. In fact, it even invalidates them as a remedy to the RCP in many situations. Both Cheng (2009) and Franklin (2010) develop their proposals using the case of Shonubi as their point of departure. The discussion above revealed that all reference classes considered in this case were determined by a rather low number of predicates. This means that statistical models applied to the case will have a rather low number of predictively relevant variables, thereby avoiding over- and presumably also underfitting.
However, suppose the proposed strategies were applied to a setting involving high-dimensional data. In this case, a wide range of variables would be predictively relevant. Additionally, due to the high-dimensional setting, the sample size would be relatively low compared to the number of features associated with each observation. Thus, this situation embraces two scenarios, both of which would be problematic from the perspective of classical statistics: on the one hand, a statistical model could exploit all predictively relevant variables. This would correspond to a reference class that is both narrow and predictively homogeneous. However, it would also lead to overfitting, since a model including a large number of variables would be flexible enough to fit idiosyncrasies of the relatively small sample. Consequently, the information in the reference class would not be reliable in the sense that it gives rise to accurate predictions. On the other hand, employing the AIC or feature selection methods would lead to a model that is sufficiently simple to avoid overfitting. Yet this would prevent many predictively relevant variables from entering the model, thereby leading to a reference class that is neither narrow nor predictively homogeneous.Footnote 31
Overall, the example reveals that there are situations in which it is not possible to simultaneously achieve all desiderata for a suitable reference class within the framework of classical statistics. Consequently, proposals to solve the RCP using methods of classical statistics often fall short of doing so, because they cannot escape the fundamental tradeoff between overfitting and underfitting that is particularly challenging in the context of high-dimensional data.
## 5 The argument
The previous sections examined the RCP and central ideas of ML separately. To answer the guiding question of this text, both subjects have to be taken together: how, if at all, are DNNs suited to deal with the RCP? In this section, I argue that there are situations in which DNNs remedy specific instantiations of the RCP. By clearly demarcating these situations, my argument also makes it possible to distinguish them from situations in which the RCP remains the intricate methodological problem it is known to be.
### 5.1 ‘Big Data’ is related to narrowness and reliability
DNNs gained their relevance mainly from what Wheeler (2016) refers to as “the era of big data”. Thus, as a first step, it is worth analyzing what ‘big data’ actually means.
First, the sheer number of observations in many contemporary datasets is vast. While classical statistics is often concerned with assessing the significance and precision of inferences made from a restricted sample, “we are now routinely handling population datasets directly or sample sizes so immense [...] that they behave like population data” (Wheeler, 2016, p. 330). Given this observation and the common assumption that “[t]he larger the sample gets, the more likely it is to reflect more accurately the distribution and labeling used to generate it” (Shalev-Shwartz and Ben-David, 2016, p. 38), considerations regarding the reliability of inferences in classical statistics do not carry over to applications of ML, or do so only to a far lesser extent.Footnote 32 Here, the sheer size of a sample already makes it much more likely that the sample is representative of the entire population.
Second, many datasets nowadays belong to the high-dimensional setting outlined above. Thus, in addition to a large number of observations, each observation is associated with a—possibly much higher—number of features (Bühlmann and van de Geer, 2011). This is interesting from the perspective of the RCP, where a reference class gets narrower with any further predicate that is added to its definition. Consequently, when framing the RCP as a problem of statistical model selection, high-dimensional datasets give rise to very narrow reference classes.Footnote 33
Before proceeding, let me address two potential objections to this interpretation of features in a dataset as predicates that determine a reference class. First, consider the example of a dataset consisting of images. Images are usually stored in a dataset such that for each pixel in the image, there is one feature in the dataset giving the color of the pixel as a numeric value. Suppose further that the goal is image classification, that is, to determine a suitable reference class or, equivalently, to find a statistical model based on the given data that makes it possible to classify future images correctly. Clearly then, selecting features that give the color of singular pixels seems to be something entirely different from selecting a feature such as ‘age’ in the case of Shonubi: one might object that features giving the color of pixels do not have an immediately obvious meaning and that, as a consequence, such features give rise to reference classes that do not have an immediately obvious meaning either.Footnote 34 This would call into question the strategy of approaching the RCP as a problem of statistical model selection in such settings. However, in the context of prediction, it is not the goal to investigate reference classes themselves or the predicates by which they are determined. Instead, the goal is to identify those features that determine a reference class for making accurate predictions. Thus, the criterion of predictive relevance alone discerns suitable from unsuitable features in this context. Whether or not the features and the reference class they determine have an immediately obvious meaning is less important.Footnote 35
Second, one might object that when interested in the predicates that determine a reference class, what is relevant are not features in the dataset, but rather the values taken on by the features. For instance, in a demographic dataset, the predicate ‘age’ will be satisfied for each observation and hence irrelevant to determine a reference class. What is relevant is the value of ‘age’ for each individual in the dataset. This objection can be addressed by constructing a binary variable for each value taken on by a feature like ‘age’, leading to a dataset that contains features like ‘age30’ that equal one if an individual is 30 years old and zero otherwise. These can be interpreted as useful predicates to determine reference classes.
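The construction is mechanical, as a short sketch with invented records shows: each value taken on by ‘age’ becomes a binary indicator feature of its own.

```python
# Turn each value of 'age' into a binary indicator feature such as 'age30'.
records = [{"age": 30}, {"age": 45}, {"age": 30}]
values = sorted({r["age"] for r in records})

encoded = [{f"age{v}": int(r["age"] == v) for v in values} for r in records]
print(encoded)
```

Each resulting feature (e.g. 'age30') is satisfied or not by every individual and can therefore serve as a predicate that determines a reference class.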
To summarize: in this section I argued that ‘big data’ can be understood along two dimensions. Together, they provide a promising basis to approach the RCP employing DNNs, because they address both components of Reichenbach’s proposal: to choose a reference class that is narrow and for which reliable statistics are available. What remains is the problem of over- and underfitting when trying to determine predictively relevant features.
### 5.2 Deep neural networks can exploit high-dimensional data
We have seen above that strategies to solve the RCP with classical model selection techniques fail in applications involving high-dimensional data. On the one hand, statistical models could include a high number of variables in such situations. In this way they would fulfill the requirement of narrowness, but they would also overfit the information in the reference class which would prevent them from predicting accurately. On the other hand, statistical models could include a low number of variables. This would prevent them from overfitting, yet it would also prevent the choice of a predictively homogeneous reference class, since not all predictively relevant variables would be part of the model.
Contrary to this observation, recent results reveal that some DNNs possess a remarkable feature: they perform particularly well on high-dimensional data (Berner et al., 2021, p. 19, Neyshabur et al., 2017, p. 5947). In this setting, they are able to interpolate, that is, to exactly fit the training data, thereby achieving zero training error (Belkin et al., 2019, p. 15849). Given the preceding discussion of central ideas in ML, one might take this behavior as an indication for overfitting and a poor ability to generalize. However, as several authors show, DNNs possess a high ability to generalize to previously unseen data (Belkin et al., 2019; Zhang et al., 2017). This seems peculiar as it is at odds with the standard framework of ML, especially regarding its treatment of the under- versus overfitting problem. It is also at odds with the conventional wisdom presented in standard textbooks that “a model with zero training error is overfit to the training data and will typically generalize poorly” (Hastie et al., 2009, p. 221).
Thus, apparently, the case of DNNs is not appropriately captured by the depiction in Figure 1 where an algorithm’s predictive ability diminishes with increasing capacity of the underlying hypothesis class. As a consequence, Belkin et al. (2019) propose and empirically confirm an alternative framework that combines the traditional context of under- and overfitting—the ‘classical’ regime as they call it—with the specific behavior of some DNNs—the ‘modern’ interpolating regime. The main feature of their framework is what the authors refer to as the double-descent risk curve depicted in Figure 2. It corresponds to the classical U-shaped curve depicted in Figure 1 above, as long as an algorithm’s capacity is below the so-called interpolation threshold. This threshold marks the point beyond which an algorithm interpolates the training data. While prediction rules obtained directly at the threshold generally exhibit a high test risk indicating a low predictive accuracy, Belkin et al. (2019, p. 15850) “show that increasing the function class capacity beyond this point leads to decreasing risk, typically going below the risk achieved at the sweet spot in the ‘classical’ regime.” This means that large DNNs with a complex architecture involving many layers and incorporating a high number of features as inputs are suited particularly well for any kind of prediction task.
Many insights about the generalization ability of DNNs rely on empirical studies conducted with specific network architectures, but there is theoretical progress for some aspects of the problem (Zhang et al., 2021).Footnote 36 Perhaps most importantly, recent analyses of the SGD algorithm revealed that the algorithm exhibits a behavior of implicit regularization (Neyshabur et al., 2015, Poggio et al., 2020, Theorem 4). Mathematically, this means that the final configuration of weights to which the algorithm converges has a small norm.Footnote 37 With respect to the structure of a DNN, a small norm corresponds to a final configuration of weights or, equivalently, to a final prediction rule that is relatively simple. In particular, this means that many weights within the network will have a small value and that some of them will even be assigned a value of zero. So after the learning process, a DNN might locally ‘look’ considerably simpler than its initial architecture, since several input features might not be processed to the next layer and the flow of information along edges might be muted at various points in the network.
The observation of implicit regularization can be considered as one possible explanation for the astonishing generalization ability of DNNs.Footnote 38 In a way, it also helps to reconcile the behavior of DNNs with conventional statistical wisdom: just as in other methods of classical statistics and ML, accuracy and simplicity need to be balanced in DNNs as well. What remains surprising, however, is that this balance is struck automatically by the SGD algorithm and without being enforced at some point during the learning process. While statistical measures like the AIC explicitly incorporate the tradeoff between accuracy and simplicity as the objective of model selection, the SGD algorithm operates solely with the objective of maximizing accuracy—yet implicitly restricts the complexity of the final network as well.
In sum, recent ML research reveals that highly complex DNNs are often not susceptible to overfitting, because they achieve both a low training and a low test error.Footnote 39 Consequently, when framing the RCP as a problem of statistical model selection, they seem superior to methods of classical statistics in determining reference classes that are both narrow and predictively homogeneous. I will carve out this last step of my argument in the next section.
### 5.3 The deep neural network approach to the reference class problem
According to the discussion above, a solution to the predictive RCP needs to propose a method that identifies relevant predicates so as to achieve accurate predictions. Framing the RCP as a model selection problem, this means that the method should find the predictively relevant features to be included in the final model.
When approaching the RCP using DNNs, everything starts with input data in a design matrix, $${\textbf{X}} \in {\mathbb {R}}^{n \times d}$$. The dimension d indicates the number of features associated with each observation, $$x_i, i=1, \dots , n$$, so each observation might be interpreted as possessing d different properties or characteristics. We have seen that there are DNNs which perform best in high-dimensional settings and that the “era of big data” regularly brings about datasets that belong to precisely this setting. Consequently, it is reasonable to focus on cases where $$d \gg n$$. The task of image classification is an excellent example for such cases, since storing images in a dataset often gives rise to a setting in which the number of pixels in each image, corresponding to the number of features, is larger than the number of stored images, corresponding to the number of observations. Additionally, DNNs are considered the state-of-the-art method to perform image classification (Berner et al., 2021, p. 2).
The discussion above revealed that a reference class gets narrower with each predicate that is added to its definition. It also revealed that features in a dataset can be interpreted as predicates that determine a reference class. Taking these aspects together, one can conclude that given high-dimensional input data, a DNN starts a prediction exercise like image classification with the narrowest reference class possible that is defined by a high number of features.Footnote 40 Thus, this very first step is in line with Reichenbach’s recommendation to use information for the narrowest reference class available. It is also in line with Franklin’s (2010) feature-selection approach according to which one should start the process of finding a suitable reference class by considering the model that contains the highest number of variables. However, there have to be safeguards that counterbalance a strict preference for narrow reference classes and prevent overfitting.
When trying to determine a suitable reference class, the criterion of predictive homogeneity introduced above can be seen as a counterpart to the criterion of narrowness. Recall that a reference class is predictively homogeneous just in case it is determined by all and only those features that are predictively relevant (see Sect. 2.2). In the context of DNNs, predictive relevance is assessed via ERM: given the training data, the SGD algorithm chooses all weights within the network such that the empirical risk is minimized. As long as the empirical risk is not minimal, the algorithm proceeds by altering the weights to get closer to the minimum. Once the minimum is reached, the algorithm terminates. Put differently, the algorithm only converges to the minimum and terminates once everything predictively relevant is taken into account and appropriately weighted, since otherwise, the empirical risk could be decreased even further.Footnote 41 We have seen that very complex DNNs often achieve perfect accuracy and hence zero empirical risk in the training sample as well as a high ability to generalize to new data. In the context of the RCP, this means that such DNNs are able to exploit the large number of features in the data to an extent that allows them to make accurate predictions on both the training and the test data.Footnote 42 For instance, in the example of image classification, DNNs are highly successful in selecting and appropriately weighting those features that correspond to the pixels that are crucial for classifying new images (Huh et al., 2021; Krizhevsky et al., 2012). Consequently, it is reasonable to assume that DNNs operating within the ERM paradigm take into account all predictively relevant features during their learning process.
However, we have seen that a reference class is predictively homogeneous just in case it is determined by all and only those features that are predictively relevant. Maximizing accuracy alone is therefore insufficient, because apart from all relevant features, the most accurate model might also include irrelevant features. Furthermore, maximizing accuracy alone involves the risk of overfitting. Above, I discussed how classical model selection techniques try to address this issue and fail to consider all predictively relevant features in high-dimensional settings. DNNs are different in this respect. The previous section revealed the central role of implicit regularization that takes place in the determination of a network’s weights. In addition to maximizing accuracy, the SGD algorithm generally yields a final prediction rule that is simple in the sense that the network’s weights have a small norm. This means that some weights are assigned a high value, since the associated input is considered to be of high predictive relevance for the output, but others are assigned a low value—maybe even zero—, since the associated input is considered less relevant—or not relevant at all—for the output. Put bluntly, irrelevant features are downweighted or eliminated to achieve a simple configuration of weights.
We can now combine both insights. First, within the framework of ERM and assuming that a global minimum for the empirical risk was reached, the final prediction rule is the one that maximizes accuracy and hence includes all predictively relevant features (otherwise the risk could be decreased further by including additional features). Second, given maximal accuracy, the final prediction rule is also the simplest solution and hence only includes predictively relevant features due to the simplicity bias of SGD.Footnote 43 Taking both aspects together reveals that the combination of ERM and the simplicity bias of SGD seems to identify all and only those features that are predictively relevant, thereby giving rise to a predictively homogeneous reference class.Footnote 44,Footnote 45
So in sum, the learning process of DNNs is governed by ERM, leading to the consideration of all predictively relevant features and to maximal accuracy. However, it is also governed by a bias towards simple solutions, leading to the consideration of predictively relevant features only, thereby preventing overfitting. Thus, in situations involving big data, the specific functionality of DNNs allows them to exploit data for very narrow yet predictively homogeneous reference classes and to incorporate the relevant information in a combination of weights that maximizes predictive accuracy. This is why DNNs are suited to deal with the RCP as it arises in the context of prediction. Contrary to methods of classical statistics, they might offer a remedy to it in these situations.
Clearly, there is a flipside to the latter reasoning: by illustrating how DNNs can offer a remedy to the RCP in some very specific situations, it also suggests that in many others, DNNs fare no better than methods of classical statistics.
First, I emphasized that the concept of predictive homogeneity crucially depends on the minimization of the empirical risk. Yet I also pointed out that, sometimes, the SGD algorithm might fail to achieve this minimization and converge to a local instead of a global minimum of the loss function (see Fn. 41). Consequently, predictive homogeneity cannot be achieved in these situations and neither do they give rise to a suitable reference class of features.
Second, we have seen that the criterion of reliability is crucial for determining a suitable reference class. Above, I explicitly tied reliability to characteristics of the data, in particular to the sample size (see Fn. 32). On the one hand, this seems to be very much in the spirit of Reichenbach’s (1949) requirement to compile reliable statistics. On the other hand, one might question whether this is sufficient or whether reliability should also be an explicit requirement for the method that does the compiling. This question is particularly pressing in the case of DNNs, since several network architectures have been shown to lack robustness and to be easily fooled by slight perturbations of the input data.Footnote 46 Tying reliability to the data, however, the above reasoning rests on the assumption that DNNs indeed work reliably and thus only applies to situations in which this really is the case.
## 6 Conclusion
This paper set out to answer the question whether ML faces the same methodological problems as classical statistics. I tried to shed light on this question by investigating the RCP, a long-standing challenge to classical statistics. Albeit originating as a problem of (frequentist) probability theory, the RCP also concerns the more general question as to how statistical evidence should have a bearing on individual cases. My focus in this paper was on cases in which a reference class should be chosen so as to allow for accurate predictions, that is, on the epistemological RCP as it arises in the context of prediction.
I argued that one particular method of ML, namely DNNs, are sometimes able to overcome the RCP in settings involving high-dimensional data. First, the high dimensionality of the data can be linked to the concepts of narrowness (via a high number of features) and reliability (via a high number of observations), both of which were proposed as criteria for a suitable reference class by Reichenbach (1949). Second, the particular functionality of DNNs predestines them to exploit high-dimensional settings. Due to the SGD algorithm’s behavior of implicit regularization, they are less susceptible to overfitting. Consequently, they can select a narrow reference class consisting of a high number of features that is also predictively homogeneous in the sense that it only includes features that are relevant to make accurate predictions.
In sum, I conclude that contrary to methods of classical statistics, DNNs can offer a remedy to the RCP in settings involving high-dimensional data. However, and this is just as important a conclusion, there are also many settings in which DNNs cannot provide such a remedy—and in which, consequently, the RCP remains a serious methodological challenge. | 11,668 | 57,851 | {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0} | 2.515625 | 3 | CC-MAIN-2024-18 | latest | en | 0.931917 |
http://www.nag.com/numeric/FL/nagdoc_fl23/examples/source/f07fefe.f90

    PROGRAM f07fefe

!      F07FEF Example Program Text
!      Mark 23 Release. NAG Copyright 2011.

!      .. Use Statements ..
       USE nag_library, ONLY : dpotrf, dpotrs, nag_wp, x04caf
!      .. Implicit None Statement ..
       IMPLICIT NONE
!      .. Parameters ..
       INTEGER, PARAMETER :: nin = 5, nout = 6
!      .. Local Scalars ..
       INTEGER :: i, ifail, info, lda, ldb, n, nrhs
       CHARACTER (1) :: uplo
!      .. Local Arrays ..
       REAL (KIND=nag_wp), ALLOCATABLE :: a(:,:), b(:,:)
!      .. Executable Statements ..
       WRITE (nout,*) 'F07FEF Example Program Results'

!      Skip heading in data file
       READ (nin,*)
       READ (nin,*) n, nrhs
       lda = n
       ldb = n
       ALLOCATE (a(lda,n),b(ldb,nrhs))

!      Read A and B from data file
       READ (nin,*) uplo
       IF (uplo=='U') THEN
          READ (nin,*) (a(i,i:n),i=1,n)
       ELSE IF (uplo=='L') THEN
          READ (nin,*) (a(i,1:i),i=1,n)
       END IF
       READ (nin,*) (b(i,1:nrhs),i=1,n)

!      Factorize A
!      The NAG name equivalent of dpotrf is f07fdf
       CALL dpotrf(uplo,n,a,lda,info)

       WRITE (nout,*)
       FLUSH (nout)

       IF (info==0) THEN

!         Compute solution
!         The NAG name equivalent of dpotrs is f07fef
          CALL dpotrs(uplo,n,nrhs,a,lda,b,ldb,info)

!         Print solution
!         ifail: behaviour on error exit
!                =0 for hard exit, =1 for quiet-soft, =-1 for noisy-soft
          ifail = 0
          CALL x04caf('General',' ',n,nrhs,b,ldb,'Solution(s)',ifail)

       ELSE
          WRITE (nout,*) 'A is not positive definite'
       END IF

    END PROGRAM f07fefe
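For readers without the NAG Library, the same factorize-then-solve pattern (dpotrf, then dpotrs) can be sketched in plain Python. The routines below are textbook Cholesky factorization and triangular substitution, and the 3-by-3 system is made up for illustration; this shows the idea, not the NAG interface.

```python
import math

def cholesky(A):
    """Return lower-triangular L with A = L L^T (A must be symmetric positive definite)."""
    n = len(A)
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = sum(L[i][k] * L[j][k] for k in range(j))
            if i == j:
                L[i][j] = math.sqrt(A[i][i] - s)      # diagonal entry
            else:
                L[i][j] = (A[i][j] - s) / L[j][j]     # below-diagonal entry
    return L

def solve_spd(A, b):
    """Solve A x = b for SPD A: factor once ("dpotrf"), then two triangular solves ("dpotrs")."""
    n = len(b)
    L = cholesky(A)
    y = [0.0] * n                 # forward substitution: L y = b
    for i in range(n):
        y[i] = (b[i] - sum(L[i][k] * y[k] for k in range(i))) / L[i][i]
    x = [0.0] * n                 # back substitution: L^T x = y
    for i in reversed(range(n)):
        x[i] = (y[i] - sum(L[k][i] * x[k] for k in range(i + 1, n))) / L[i][i]
    return x

A = [[4.0, 2.0, 2.0],
     [2.0, 5.0, 3.0],
     [2.0, 3.0, 6.0]]
b = [10.0, 13.0, 17.0]
print(solve_spd(A, b))   # -> [1.0, 1.0, 2.0]
```

As in the Fortran example, a non-positive-definite A would surface during factorization (here as a math domain error from the square root) rather than during the solve.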
https://fulltimeroadwarrior.com/recreational-vehicle/you-asked-will-rv-pipes-freeze-at-32-degrees.html

# You asked: Will RV pipes freeze at 32 degrees?
As we mentioned above, there isn’t a definitive temperature where your RV water lines will freeze. However, you definitely need to start to worry once the temperature gets below the freezing temperature of 32 degrees Fahrenheit or 0 degrees Celsius. … This way, you won’t risk being too late and your pipes won’t freeze.
## At what temperature will RV pipes freeze?
How long does it take for RV pipes to freeze? In general, the temperature has to dip below freezing (32 F) for approximately 24 hours for RV pipes to freeze.
## How cold can it get before I have to winterize my camper?
As a general rule of thumb, even if your RV is in use, you should probably winterize if: Temperatures are consistently at 20 degrees Fahrenheit or lower. You can’t insulate and heat your RV’s underbelly, or you don’t have heated tanks. You’re boondocking and can only run your furnace at certain times.
## How long does it take pipes to freeze at 32 degrees?
Using ½” copper pipe with ½” fiberglass insulation, at an ambient temperature of 20°F, it took about 2-hours for the pipe to reach 32°. This is the point at which the water in the pipe begins to freeze.
## Is 32 degrees cold enough to freeze pipes?
Information varies on how cold it has to be for pipes to freeze, but the freezing temperature of water is 32 degrees. So, theoretically, your pipes could freeze at any temperature lower than that. But for your pipes to literally freeze overnight, the temperature would probably have to drop to at least 20 degrees.
## Will RV pipes freeze in one night?
If the temperature drops drastically to a temperature that’s a lot below the freezing temperature, your RV pipes will freeze much quicker. However, if the temperature drops to the freezing temperature precisely, you can expect it to take roughly 24 hours for your pipes to freeze.
## Will pipes freeze at 27 degrees?
There is no simple answer. Water freezes at 32 degrees Fahrenheit, but indoor pipes are somewhat protected from outdoor temperature extremes, even in unheated areas of the house like in the attic or garage. … As a general rule, temperatures outside must drop to at least 20 degrees or lower to cause pipes to freeze.
## What happens if you don’t winterize RV?
What Happens if You Don’t Winterize Your RV or Camper? If you choose not to winterize your RV and temperatures fall below 32 degrees Fahrenheit, you run the risk of severe damage to your RV. When temperatures fall below the freezing point of 32 degrees Fahrenheit, water freezes.
## How do you winterize a camper to live in?
There are several ways to insulate them: foam insulation boards, bubble insulation, solar blankets, etc. For extra warmth, line your windows with heavy-weight thermal curtains. You may also want to go over your RV windows and doors with a layer of RV sealant or caulk, just to ensure they’re nice and weather-tight.
## Will pipes burst at 32 degrees?
Water freezes at 32 degrees Fahrenheit. … Even so, outside temperatures generally have to fall to about 20 degrees Fahrenheit or below before your pipes will freeze or burst due to freezing.
## How long does it take pipes to freeze at 10 degrees?
All that said, the basic rule of thumb is to generally expect pipes to freeze within 3 – 6 hours of drawn out subnormal temperatures.
## At what temp should I drip my faucets?
When a cold snap hovers around or below 20 degrees Fahrenheit (-6 degrees Celsius), it’s time to let at least one faucet drip. Pay close attention to water pipes that are in attics, garages, basements or crawl spaces because temperatures in these unheated interior spaces usually mimic outdoor temperatures.
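The article's rules of thumb (water freezes at 32 °F, pipes need roughly 24 hours at or below freezing, and risk rises sharply below 20 °F) can be collected into one small helper. The function below merely encodes those heuristics for illustration; it is not a substitute for actually winterizing.

```python
FREEZING_F = 32            # water freezes at 32 degrees Fahrenheit
HIGH_RISK_F = 20           # rule of thumb: serious risk at 20 F and below
TYPICAL_FREEZE_HOURS = 24  # roughly how long RV pipes need below freezing

def rv_pipe_risk(temp_f, hours_at_temp):
    """Rough risk label for RV pipes, per the rules of thumb in this article."""
    if temp_f >= FREEZING_F:
        return "low"
    if temp_f <= HIGH_RISK_F:
        return "high"                      # well below freezing: freezes much faster
    if hours_at_temp >= TYPICAL_FREEZE_HOURS:
        return "high"                      # roughly a day right at/below freezing
    return "elevated"

print(rv_pipe_risk(35, 48))   # -> low
print(rv_pipe_risk(28, 6))    # -> elevated
print(rv_pipe_risk(15, 2))    # -> high
```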
https://kr.mathworks.com/matlabcentral/cody/problems/2308-rectangle-in-circle/solutions/857378

Cody
# Problem 2308. rectangle in circle
Solution 857378
Submitted on 27 Mar 2016 by William
### Test Suite
| Test | Status | Code | Output |
| --- | --- | --- | --- |
| 1 | Pass | `a=6; b=12; y_correct=30 assert(isequal(your_fcn_name(a,b),y_correct))` | `y_correct = 30 y = 30` |
| 2 | Pass | `a=7; b=14; y_correct=35 assert(isequal(your_fcn_name(a,b),y_correct))` | `y_correct = 35 y = 35` |
| 3 | Pass | `a=3; b=6; y_correct=15 assert(isequal(your_fcn_name(a,b),y_correct))` | `y_correct = 15 y = 15` |
| 4 | Pass | `a=5; b=40; y_correct=65 assert(isequal(your_fcn_name(a,b),y_correct))` | `y_correct = 65 y = 65` |
| 5 | Pass | `a=5; b=90; y_correct=125 assert(isequal(your_fcn_name(a,b),y_correct))` | `y_correct = 125 y = 125` |
| 6 | Pass | `a=12; b=54; y_correct=102 assert(isequal(your_fcn_name(a,b),y_correct))` | `y_correct = 102 y = 102` |
| 7 | Pass | `a=16; b=18; y_correct=58 assert(isequal(your_fcn_name(a,b),y_correct))` | `y_correct = 58 y = 58` |
| 8 | Pass | `a=16; b=72; y_correct=136 assert(isequal(your_fcn_name(a,b),y_correct))` | `y_correct = 136 y = 136` |
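The winning MATLAB code is locked, but the eight test cases pin down a closed form: every expected value satisfies y = a + b + sqrt(2ab). The Python below states that inferred formula; it is a reconstruction from the tests, not the hidden solution, and the geometric reading of a and b on the original problem page is not shown here.

```python
import math

def rectangle_in_circle(a, b):
    # Closed form inferred from the eight test cases below; the locked
    # MATLAB solution itself is not visible on the page.
    return a + b + math.sqrt(2 * a * b)

cases = [(6, 12, 30), (7, 14, 35), (3, 6, 15), (5, 40, 65),
         (5, 90, 125), (12, 54, 102), (16, 18, 58), (16, 72, 136)]
for a, b, y in cases:
    print(a, b, rectangle_in_circle(a, b), y)
```

Every test pair makes 2ab a perfect square, so the formula reproduces the expected integers exactly.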
https://coursepals.com/social-science-course-pals/mat-221-week-3-discussion/

# Mat 221 week 3 discussion
Ashford 4: – Week 3 – Discussion 1
Your initial discussion thread is due on Day 3 (Thursday) and you have until Day 7 (Monday) to respond to your classmates. Your grade will reflect both the quality of your initial post and the depth of your responses.
Parallel and Perpendicular
Read the following instructions in order to complete this discussion, and review the example of how to complete the math required for this assignment:
• Given an equation of a line, find equations for lines parallel or perpendicular to it going through specified points. Find the appropriate equations and points from the table below. Simplify your equations into slope-intercept form.

| If your first name starts with | Write the equation of a line parallel to the given line but passing through the given point. | Write the equation of a line perpendicular to the given line but passing through the given point. |
| --- | --- | --- |
| A or N | | |
| B or O | | |
| C or P | | |
| D or Q | | |
| E or R | | |
| F or S | | |
| G or T | | |
| H or U | | |
| I or V | | |
| J or W | | |
| K or X | | |
| L or Y | | |
| M or Z | | |
• Discuss the steps necessary to carry out each activity. Describe briefly what each line looks like in relation to the original given line.
• What does it mean for one line to be parallel to another?
• What does it mean for one line to be perpendicular to another?
• Incorporate the following five math vocabulary words into your discussion. Use bold font to emphasize the words in your writing (Do not write definitions for the words; use them appropriately in sentences describing your math work.):
• Origin
• Ordered pair
• X- or y-intercept
• Slope
• Reciprocal
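Because the per-letter equations and points in the table above were not preserved, here is the general procedure on a made-up example: the given line y = 2x + 1 and the point (3, 4). A parallel line keeps the same slope; a perpendicular line uses the negative reciprocal of the slope; in both cases the y-intercept comes from forcing the line through the ordered pair.

```python
from fractions import Fraction

def parallel_and_perpendicular(m, b, x0, y0):
    """Given y = m*x + b and a point (x0, y0), return slope-intercept
    coefficients of the parallel and perpendicular lines through that point."""
    m = Fraction(m)
    par_m = m                       # parallel lines share the same slope
    par_b = Fraction(y0) - par_m * x0
    perp_m = -1 / m                 # perpendicular slope: negative reciprocal
    perp_b = Fraction(y0) - perp_m * x0
    return (par_m, par_b), (perp_m, perp_b)

(pm, pb), (qm, qb) = parallel_and_perpendicular(2, 1, 3, 4)
print(f"parallel:      y = {pm}x + {pb}")    # y = 2x + -2
print(f"perpendicular: y = {qm}x + {qb}")    # y = -1/2x + 11/2
```

Note that the given intercept b never enters the calculation; only the slope of the given line and the chosen point matter.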
Your initial post should be 150-250 words in length. Respond to at least two of your classmates’ posts by Day 7. Make sure you choose people who don’t have the same equations as you worked. Do you agree with how they used the vocabulary? Do their equations seem reasonable given what they started with?
Carefully review the Grading Rubric for the criteria that will be used to evaluate your discussion.
https://www.homeworksolving.com/order-programming-data-visualization-project/

Order Programming Data Visualization Project
Description
Using the MyOpenMath generated hockey data, create the following charts:
1) A bar chart of the divisions showing how many teams are in each one.
2) A histogram of goals allowed. Describe the shape of this data (symmetric, normal, skewed (if so, with the correct direction)).
3) Box plots of goals scored grouped by division. Make a conclusion about the variance of goals scored based on these box plots. (i.e. which divisions have larger/smaller variance, which ones are similar, etc)
4) A Q-Q plot for goal differential. Then make a conclusion (using just the plot, don’t stress about different statistics) on whether this variable follows a normal distribution. (Note that this part is NOT included in the example provided. A lot of coding involves researching how to do things on your own, so I’m testing your research skills here a bit!)
You might not know what all of the variables mean in this data set. That’s ok, and part of data science — learning about things you aren’t familiar with. (I worked on a clinical team where I had to learn about bilirubins, something I’d literally never heard of until thrust into that role!) On this hockey data, I can assure you that a quick google search will answer any questions you have about what the different things mean.
Then, using the MyOpenMath generated house data (note that this will be a *new* data set from the previous assignment), create the following charts:
5) A bar chart showing the number of stories of the houses.
6) A histogram of bedrooms. Describe the shape of this data (symmetric, normal, skewed (if so, with the correct direction)).
7) Box plots of square footage grouped by the number of stories.
8) A Q-Q plot for square footage. Then make a conclusion (using just the plot, don’t stress about different statistics) on whether this variable follows a normal distribution.
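Neither MyOpenMath dataset is attached here, so the sketch below computes the numbers each requested chart is built from (category counts for a bar chart, bin counts for a histogram, a five-number summary for box plots, and normal-quantile pairs for a Q-Q plot) on tiny made-up data, using only the Python standard library. The drawing itself would normally be done with a plotting library such as matplotlib.

```python
import statistics
from collections import Counter
from statistics import NormalDist

# Tiny made-up stand-ins for the hockey/house data
divisions = ["Atlantic", "Metro", "Atlantic", "Central", "Metro", "Atlantic"]
goals_allowed = [210, 198, 251, 230, 224, 240, 199, 260]

# 1) Bar chart: counts per category
bar = Counter(divisions)
print(bar)

# 2) Histogram: counts over equal-width bins
lo, hi, nbins = 190, 270, 4
width = (hi - lo) / nbins
hist = [sum(lo + i * width <= g < lo + (i + 1) * width for g in goals_allowed)
        for i in range(nbins)]
print(hist)

# 3) Box plot: five-number summary (min, Q1, median, Q3, max)
q1, q2, q3 = statistics.quantiles(goals_allowed, n=4)
print(min(goals_allowed), q1, q2, q3, max(goals_allowed))

# 4) Q-Q plot: sorted sample vs. theoretical normal quantiles
nd = NormalDist(statistics.mean(goals_allowed), statistics.stdev(goals_allowed))
qq = [(nd.inv_cdf((i + 0.5) / len(goals_allowed)), g)
      for i, g in enumerate(sorted(goals_allowed))]
print(qq)
```

If the Q-Q pairs fall roughly on a straight line, the normal-distribution conclusion asked for in items 4 and 8 is supported; systematic curvature suggests skew.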
https://jp.mathworks.com/help/finance/plotting-sensitivities-of-an-option.html

# Plotting Sensitivities of an Option
This example creates a three-dimensional plot showing how gamma changes relative to price for a Black-Scholes option.
Recall that gamma is the second derivative of the option price relative to the underlying security price. The plot in this example shows a three-dimensional surface whose z-value is the gamma of an option as price (x-axis) and time (y-axis) vary. The plot adds yet a fourth dimension by showing option delta (the first derivative of option price to security price) as the color of the surface. First set the price range of the options, and set the time range to one year divided into half-months and expressed as fractions of a year.
Range = 10:70;
Span = length(Range);
j = 1:0.5:12;
Newj = j(ones(Span,1),:)'/12;
For each time period, create a vector of prices from 10 to 70 and create a matrix of all ones.
JSpan = ones(length(j),1);
NewRange = Range(JSpan,:);
Calculate the gamma and delta sensitivities (greeks) using the blsgamma and blsdelta functions. Gamma is the second derivative of the option price with respect to the stock price, and delta is the first derivative of the option price with respect to the stock price. The exercise price is \$40, the risk-free interest rate is 10%, and volatility is 0.35 for all prices and periods. (The two calls below are reconstructed from this description; blsdelta returns the call delta as its first output.)

ZVal = blsgamma(NewRange, 40, 0.1, Newj, 0.35);

Color = blsdelta(NewRange, 40, 0.1, Newj, 0.35);
Display the greeks as a function of price and time. Gamma is the z-axis; delta is the color.
mesh(Range, j, ZVal, Color);
xlabel('Stock Price (\$)');
ylabel('Time (months)');
zlabel('Gamma');
title('Call Option Price Sensitivity');
axis([10 70 1 12 -inf inf]);
view(-40, 50);
colorbar('horiz');
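Outside MATLAB, the two surfaces come straight from the Black-Scholes closed forms. The standard-library Python below evaluates the call delta and gamma at one point of the surface, using the same parameters as above (strike 40, rate 10%, volatility 0.35); it approximates what blsdelta and blsgamma compute but is not a drop-in replacement for them.

```python
from math import log, sqrt
from statistics import NormalDist

N = NormalDist()  # standard normal

def bls_delta_gamma(S, K, r, T, sigma):
    """Black-Scholes call delta and gamma for spot S, strike K, rate r,
    time to expiry T (years), volatility sigma (no dividends)."""
    d1 = (log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * sqrt(T))
    delta = N.cdf(d1)                          # first derivative of price in S
    gamma = N.pdf(d1) / (S * sigma * sqrt(T))  # second derivative of price in S
    return delta, gamma

# One point of the surface: stock at 40 (at the money), 6 months to expiry
delta, gamma = bls_delta_gamma(40, 40, 0.10, 0.5, 0.35)
print(round(delta, 4), round(gamma, 4))
```

Sweeping S over 10:70 and T over the month grid reproduces the mesh data: gamma peaks near the strike, while delta rises from 0 toward 1 as the option moves into the money.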
https://www.hackmath.net/en/example/974?tag_id=64,159

# Prism
Calculate the volume of the rhombic prism. The base of the prism is a rhombus with one diagonal of 47 cm and a base edge of 28 cm. The ratio of the base edge length of the prism to its height is 3:5.
Result
V = 33389.9 cm3
#### Solution:
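The site's step-by-step solution did not survive in this copy, but the stated result can be reproduced from the given values (rhombus side a = 28 cm, diagonal u1 = 47 cm, edge-to-height ratio 3 : 5). The short script below redoes the computation.

```python
import math

a = 28.0          # side of the rhombus (cm)
u1 = 47.0         # first diagonal (cm)

# The diagonals of a rhombus bisect each other at right angles,
# so (u1/2)^2 + (u2/2)^2 = a^2.
u2 = 2 * math.sqrt(a ** 2 - (u1 / 2) ** 2)

area = u1 * u2 / 2            # rhombus area from its diagonals
h = a * 5 / 3                 # edge : height = 3 : 5
V = area * h

print(round(u2, 2), round(area, 1), round(V, 1))   # V -> 33389.9 cm^3
```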
## Next similar examples:
1. Rhombus base
Calculate the volume and surface area of prisms whose base is a rhombus with diagonals u1 = 12 cm and u2 = 10 cm. Prism height is twice base edge length.
2. Prism
The base of the prism is a rhombus with a side 30 cm and height 27 cm. The height of the prism is 180% longer than the side length of the rhombus. Calculate the volume of the prism.
3. Tetrahedral prism
Calculate surface and volume tetrahedral prism, which has a rhomboid-shaped base, and its dimensions are: a = 12 cm, b = 7 cm, ha = 6 cm and prism height h = 10 cm.
4. Cylinder - h
Cylinder volume is 215 cm3. Base radius is 2 cm. Calculate the height of the cylinder.
5. Hexagonal prism
The base of the prism is a regular hexagon consisting of six triangles with side a = 12 cm and height va = 10.4 cm. The prism height is 5 cm. Calculate the volume and surface of the prism!
6. Triangular prism
Calculate the surface area and volume of a triangular prism, base right triangle if a = 3 cm, b = 4 cm, c = 5 cm and height of prism h=12 cm.
7. Triangular prism
The base of a vertical triangular prism is a right triangle with a leg of length 5 cm. The area of the largest lateral face is 130 cm² and the height of the prism is 10 cm. Calculate its volume.
8. Vertical prism
The base of a vertical prism is a right triangle with leg a = 5 cm and hypotenuse c = 13 cm. The height of the prism is equal to the circumference of the base. Calculate the surface area and volume of the prism.
9. Building base
Excavation for the building base is 350x600x26000. Calculate its volume in m3.
10. Pine wood
From a pine trunk 6 m long and 35 cm in diameter, a beam with a square cross-section is carved so that the square has the greatest possible area. Calculate the length of the sides of the square. Calculate the volume of the beam in cubic meters of lumber.
11. Pool
Mr. Peter builds a pool in his garden in the shape of a four-sided prism with a rhombus base. The base edge length is 8 m and the distance between the opposite walls of the pool is 7 m. The estimated depth is 144 cm. How many hectoliters of water does Mr. Peter need to fill the pool?
12. TV transmitter
The volume of water in the rectangular swimming pool is 6998.4 hectoliters. The promotional leaflet states that if we wanted all the pool water to flow into a regular quadrangle with a base edge equal to the average depth of the pool, the prism would have.
13. Tetrapack
How high should a milk box in the shape of a prism with base dimensions 8 cm and 8.8 cm be if its volume is 1 liter?
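Height problems of this type all reduce to h = V / (base area). A minimal Python sketch for the Tetrapack example above (assuming 1 liter = 1000 cm³ and a rectangular base):

```python
# Height of a prism from its volume: h = V / (a * b).
V = 1000          # volume in cm^3 (1 liter)
a, b = 8, 8.8     # base dimensions in cm
h = V / (a * b)
print(round(h, 1))  # about 14.2 cm
```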
# SQL Eval Function SQL Server Eval
## Introduction
How to evaluate an arithmetic expression in SQL Server is a common question. There are several reasons why an "Eval" function like JavaScript's could be useful in SQL, such as evaluating custom report fields for a trusted user.
Multiple partial solutions exist, like using "EXEC (Transact-SQL)", which is limited (it cannot be used inside a SELECT statement and can lead to SQL injection), or using a homemade function which, most of the time, fails to support even simple operator precedence and parentheses.
SQL Eval.NET is a complete solution which, not only lets you evaluate dynamic arithmetic expression, but lets you use the full C# language directly in T-SQL stored procedures, functions and triggers.
```DECLARE @tableFormula TABLE (Formula VARCHAR(255), X INT, Y INT, Z INT)
INSERT INTO @tableFormula
VALUES ( 'x+y*z', 1, 2, 3 ),
( '(x+y)*z', 1, 2, 3 )
-- Select_0: 7
-- Select_1: 9
SELECT SQLNET::New(Formula).ValueInt('x', X).ValueInt('y', Y).ValueInt('z', Z).EvalInt() as Result
FROM @tableFormula
```
## SQL Eval - Arithmetic / Math Expression
### Problem
You need to evaluate a dynamic arithmetic operation specified by a trusted user or check a dynamic rule.
• Dynamic report calculation field
• Dynamic report query filter
• Dynamic rule validation
### Solution
Eval SQL.NET supports all C# operators, including operator precedence and parentheses.
Evaluating an expression is very fast and scalable. You can see performance 3-20x faster than a User-Defined Function (UDF), and you can evaluate an expression as many as ONE MILLION times under a second.
```DECLARE @items TABLE (Quantity INT, Price MONEY)
INSERT INTO @items
VALUES ( 2, 10 ),
( 9, 6 ),
( 15, 2 ),
( 6, 0 ),
( 84, 5 )
DECLARE @customColumn SQLNET = SQLNET::New('(quantity * price).ToString("$#.00")')
DECLARE @customFilter SQLNET = SQLNET::New('quantity > 3 && price > 0')
-- Select_0: 9, 6.00, $54.00
-- Select_1: 15, 2.00, $30.00
-- Select_2: 84, 5.00, $420.00
SELECT * ,
@customColumn.ValueInt('quantity', Quantity).Val('price', Price).EvalString() as Result
FROM @items
WHERE @customFilter.ValueInt('quantity', Quantity).Val('price', Price).EvalBit() = 1
```
## SQL Eval - Dynamic Expression
### Problem
You need to evaluate and execute a dynamic SQL expression which requires more than basic arithmetic operators.
• if/else
• switch/case
• try/catch
### Solution
Eval SQL.NET is flexible and supports almost all C# keywords and features including:
• Anonymous Type
• Generic Type
• Lambda Expression
• LINQ
```CREATE PROCEDURE [dbo].[Select_Switch] @x INT, @y INT, @z INT
AS
BEGIN
DECLARE @result INT
SET @result = SQLNET::New('
switch(x)
{
case 1: return y + z;
case 2: return y - z;
case 3: return y * z;
default: return Convert.ToInt32(y ^^ z); // Pow
}
').ValueInt('x', @x).ValueInt('y', @y).ValueInt('z', @z).EvalInt()
SELECT @result as Result
END
GO
-- RETURN 5
EXEC Select_Switch 1, 2, 3
-- RETURN -1
EXEC Select_Switch 2, 2, 3
-- RETURN 6
EXEC Select_Switch 3, 2, 3
-- RETURN 8
EXEC Select_Switch 4, 2, 3
```
## SQL Eval - Framework class Library
### Problem
You have complex SQL, and you know that C# syntax and C# objects could solve the problem much more easily.
• Regex
• DirectoryInfo / FileInfo
• String.Format
### Solution
Eval SQL.NET improves readability and maintainability over complex SQL. It supports all .NET Framework Class Library (FCL) types that are supported by the SQL CLR.
```-- CREATE test
DECLARE @t TABLE (Id INT , Input VARCHAR(MAX))
INSERT INTO @t VALUES ( 1, '1, 2, 3; 4; 5' ), ( 2, '6;7,8;9,10' )
-- SPLIT with many delimiters: ',' and ';'
DECLARE @sqlnet SQLNET = SQLNET::New('Regex.Split(input, ",|;")')
SELECT *
FROM @t AS A
CROSS APPLY ( SELECT *
FROM dbo.SQLNET_EvalTVF_1(@sqlnet.ValueString('input', Input))
) AS B
```
## Conclusion
Eval SQL.NET can be seen in SQL Server as the equivalent of JavaScript's "eval()" function. Unlike common solutions limited to very simple math expressions, Eval SQL.NET features go way beyond:
The Dependence of Force on the Product of Charges Implies the Discretization and Spatial Distribution of the Charges
San José State University
applet-magic.com
Thayer Watkins
Silicon Valley
USA
The force between two bodies with electrical charges q and Q is given by the Coulombic formula
#### F = kqQ/s²
where k is a constant and s is the separation distance between them. The dependence on the product qQ raises the question of how Nature can generate a product of two real numbers. How would Nature generate the product of two irrational numbers such as √2 and √3? The dependence of force on inverse distance squared is handled by the force being propagated by particles that are spread over an area of 4πs², and so their intensity is diminished by this factor.
It is conjectured here that the product of charges is generated by a charge being composed of discrete spatially separated units. Let n and N be the numbers of these discrete units for the charges of q and Q, respectively. Nature establishes the force between two discrete units at a separation distance s. One unit of the charge q then interacts with the N units of the charge Q. There are n units for the charge q so the number of interactions is automatically nN. The force between the charges is then proportional to nN and hence also to qQ.
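This counting argument can be checked numerically. The sketch below (illustrative values only, not taken from the text) sums the unit-to-unit Coulomb force over all n·N pairs and compares it with the product formula:

```python
# n discrete units of charge e on one body, N on the other, all pairs at
# (approximately) the same separation s: summing k*e*e/s^2 over the n*N
# interactions reproduces F = k*(n*e)*(N*e)/s^2.
k = 8.9875e9       # Coulomb constant, N*m^2/C^2
e = 1.602e-19      # elementary charge, C
n, N, s = 3, 5, 0.01
pairwise = sum(k * e * e / s**2 for _ in range(n) for _ in range(N))
product = k * (n * e) * (N * e) / s**2
print(pairwise, product)  # equal up to floating-point rounding
```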
The gravitational force between masses of magnitudes m and M also depends upon the product mM.
#### F = GmM/s²
All of the force formulas show this dependence so all of the generic charges are composed of discrete units spatially separated.
The product dependence has the special property that if the charges are decomposed such that ∪qᵢ = q and ∪Qⱼ = Q with no overlaps, and the differences in separation distances are insignificant, then
#### F = kqQ/s² = Σᵢ Σⱼ kqᵢQⱼ/s²
Thus the dependence has to be on the product of the charges.
# How far is Santa Cruz from Puerto Rico?
The distance between Puerto Rico (Puerto Rico Airport) and Santa Cruz (Viru Viru International Airport) is 538 miles / 866 kilometers / 468 nautical miles.
The driving distance from Puerto Rico (PUR) to Santa Cruz (VVI) is 872 miles / 1404 kilometers, and travel time by car is about 29 hours 29 minutes.
• 538 miles / 866 kilometers / 468 nautical miles
• Flight time: 1 h 31 min
• CO2 per passenger: 104 kg
## Distance from Puerto Rico to Santa Cruz
There are several ways to calculate the distance from Puerto Rico to Santa Cruz. Here are two standard methods:
Vincenty's formula (applied above)
• 538.009 miles
• 865.842 kilometers
• 467.517 nautical miles
Vincenty's formula calculates the distance between latitude/longitude points on the earth's surface using an ellipsoidal model of the planet.
Haversine formula
• 539.666 miles
• 868.509 kilometers
• 468.957 nautical miles
The haversine formula calculates the distance between latitude/longitude points assuming a spherical earth (great-circle distance – the shortest distance between two points).
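The haversine formula described above is straightforward to implement. The sketch below reproduces the Puerto Rico to Santa Cruz figure from the airport coordinates listed further down (a mean Earth radius of 6371 km is assumed, so the last decimals may differ slightly from the site's value):

```python
import math

def haversine_km(lat1, lon1, lat2, lon2, r=6371.0):
    """Great-circle distance between two lat/lon points on a spherical Earth."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

# PUR 11°6'27"S 67°33'4"W  and  VVI 17°38'41"S 63°8'7"W
pur = (-(11 + 6 / 60 + 27 / 3600), -(67 + 33 / 60 + 4 / 3600))
vvi = (-(17 + 38 / 60 + 41 / 3600), -(63 + 8 / 60 + 7 / 3600))
print(haversine_km(*pur, *vvi))  # about 868.5 km
```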
## How long does it take to fly from Puerto Rico to Santa Cruz?
The estimated flight time from Puerto Rico Airport to Viru Viru International Airport is 1 hour and 31 minutes.
## Flight carbon footprint between Puerto Rico Airport (PUR) and Viru Viru International Airport (VVI)
On average, flying from Puerto Rico to Santa Cruz generates about 104 kg of CO2 per passenger, and 104 kilograms equals 230 pounds (lbs). The figures are estimates and include only the CO2 generated by burning jet fuel.
## Map of flight path and driving directions from Puerto Rico to Santa Cruz
See the map of the shortest flight path between Puerto Rico Airport (PUR) and Viru Viru International Airport (VVI).
## Airport information
Origin Puerto Rico Airport
City: Puerto Rico
Country: Bolivia
IATA Code: PUR
ICAO Code: SLPR
Coordinates: 11°6′27″S, 67°33′4″W
Destination Viru Viru International Airport
City: Santa Cruz
Country: Bolivia
IATA Code: VVI
ICAO Code: SLVR
Coordinates: 17°38′41″S, 63°8′7″W | 513 | 2,059 | {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0} | 2.671875 | 3 | CC-MAIN-2024-18 | latest | en | 0.818029 |
## Conversion formula
The conversion factor from kilometers to miles is 0.62137119223733, which means that 1 kilometer is equal to 0.62137119223733 miles:
1 km = 0.62137119223733 mi
To convert 20 kilometers into miles we have to multiply 20 by the conversion factor in order to get the length amount from kilometers to miles. We can also form a simple proportion to calculate the result:
1 km → 0.62137119223733 mi
20 km → L(mi)
Solve the above proportion to obtain the length L in miles:
L(mi) = 20 km × 0.62137119223733 mi
L(mi) = 12.427423844747 mi
The final result is:
20 km → 12.427423844747 mi
We conclude that 20 kilometers is equivalent to 12.427423844747 miles:
20 kilometers = 12.427423844747 miles
## Alternative conversion
We can also convert by utilizing the inverse value of the conversion factor. In this case 1 mile is equal to 0.0804672 × 20 kilometers.
Another way is saying that 20 kilometers is equal to 1 ÷ 0.0804672 miles.
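Both directions of the conversion can be sketched in a couple of lines of Python:

```python
KM_TO_MI = 0.62137119223733   # miles per kilometer

km = 20
mi = km * KM_TO_MI            # 12.427423844747 miles
inverse = 1 / mi              # about 0.0804672 (1 mile as a fraction of 20 km)
print(mi, inverse)
```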
## Approximate result
For practical purposes we can round our final result to an approximate numerical value. We can say that twenty kilometers is approximately twelve point four two seven miles:
20 km ≅ 12.427 mi
An alternative is also that one mile is approximately zero point zero eight times twenty kilometers.
## Conversion table
### kilometers to miles chart
For quick reference purposes, below is the conversion table you can use to convert from kilometers to miles
| kilometers (km) | miles (mi) |
| --- | --- |
| 21 kilometers | 13.049 miles |
| 22 kilometers | 13.67 miles |
| 23 kilometers | 14.292 miles |
| 24 kilometers | 14.913 miles |
| 25 kilometers | 15.534 miles |
| 26 kilometers | 16.156 miles |
| 27 kilometers | 16.777 miles |
| 28 kilometers | 17.398 miles |
| 29 kilometers | 18.02 miles |
| 30 kilometers | 18.641 miles |
# Parallel Resistance Calculation Questions and Answers
### Examples of series and parallel resistance calculations
When several resistors are connected in series, the same current flows through each of them, and the voltage across each resistor equals that current multiplied by its resistance. Here the current equals the total voltage of 20 V divided by the total resistance (the sum of the three resistors, 300 + 150 + 100 = 550 ohms), which is approximately 0.036 A. In short: resistors in series divide the voltage while the current through them stays the same, and resistors in parallel divide the current while the voltage across them stays the same. In a circuit, series connections are used to divide voltage and parallel connections are used to divide current.
The relationship between current, resistance, and voltage can be calculated using Ohm’s law i (current) = u voltage/r resistance.
### A few circuit questions to find the equivalent resistance (please include the steps to answer or analyze)
1. According to the parallel resistance calculation, the final result: Rab = 1
2. When the current is in the xy direction, there is no current in the middle resistor, according to the series-parallel way of calculating: Rxy/Rgh = 2
3. According to the bridge, there is no current in the series resistor in the upper right and lower left. Rpq=R
4. Pulling the middle resistor out will clearly give: Rzh=0.5
### Parallel Resistance Calculation Assuming 4 resistors in parallel as shown in the diagram each resistor is 10K R total = ?
If the resistors are identical and there are n of them in parallel, then R_total = R/n. This is very easy to prove; do the derivation yourself and substitute the values to get the answer: 2.5k.
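The reciprocal rule behind this answer can be written down directly (a quick sketch):

```python
def parallel(*resistors):
    """Equivalent resistance of resistors in parallel: 1/R = sum(1/Ri)."""
    return 1 / sum(1 / r for r in resistors)

print(parallel(10e3, 10e3, 10e3, 10e3))  # 2500.0 ohms, i.e. R/n = 10k/4
```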
### A simple physics series-parallel resistance calculation question, as shown
The questioner is probably a middle school student. This kind of circuit is not very common in middle and high school, so I'll give you an idea of the equivalence method for this kind of circuit:
In your question, it should be written like this:
Last R:
Hope you'll accept this answer!
# How do you find the limit of xlnx as x->oo?
Nov 18, 2017
The answer is $\infty$
#### Explanation:
In this exercise $x > 0$
${\lim}_{x \to \infty} x = \infty$
${\lim}_{x \to \infty} \ln x = \infty$
Therefore,
${\lim}_{x \to \infty} x \ln x = \infty \cdot \infty = \infty$
A big number times a big number is a big number.
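For a slightly more rigorous version of the same idea, a comparison argument works (added here as a sketch):

```latex
% For x > e we have \ln x > 1, so x\ln x is eventually larger than x:
x \ln x > x \quad (x > e)
\qquad\Longrightarrow\qquad
\lim_{x \to \infty} x \ln x \ge \lim_{x \to \infty} x = \infty
```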
graph{xlnx [-10, 10, -5, 5]}
Metamath Proof Explorer
Theorem copco 22626
Description: The composition of a concatenation of paths with a continuous function. (Contributed by Mario Carneiro, 9-Jul-2015.)
Hypotheses
Ref Expression
pcoval.2 (𝜑𝐹 ∈ (II Cn 𝐽))
pcoval.3 (𝜑𝐺 ∈ (II Cn 𝐽))
pcoval2.4 (𝜑 → (𝐹‘1) = (𝐺‘0))
copco.6 (𝜑𝐻 ∈ (𝐽 Cn 𝐾))
Assertion
Ref Expression
copco (𝜑 → (𝐻 ∘ (𝐹(*𝑝𝐽)𝐺)) = ((𝐻𝐹)(*𝑝𝐾)(𝐻𝐺)))
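Informally, the assertion says that composing a continuous map with the concatenation of two paths equals the concatenation of the two composites. A numeric sanity check of that identity (plain Python functions standing in for the paths F, G and the map H; this is an illustration, not part of the formal proof):

```python
def pco(f, g):
    """Concatenate two paths on [0,1]: run f on [0,1/2], then g on [1/2,1]."""
    return lambda x: f(2 * x) if x <= 0.5 else g(2 * x - 1)

F = lambda t: t            # F(1) = 1
G = lambda t: 1.0 + t      # G(0) = 1, so the endpoints match
H = lambda y: y * y        # any map applied after the paths

lhs = lambda x: H(pco(F, G)(x))                     # H composed with (F *p G)
rhs = pco(lambda t: H(F(t)), lambda t: H(G(t)))     # (H∘F) *p (H∘G)
print(max(abs(lhs(i / 100) - rhs(i / 100)) for i in range(101)))  # 0.0
```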
Proof of Theorem copco
Dummy variables 𝑥 𝑦 are mutually distinct and distinct from all other variables.
StepHypRef Expression
1 pcoval.2 . . . . . . . 8 (𝜑𝐹 ∈ (II Cn 𝐽))
2 iiuni 22492 . . . . . . . . 9 (0[,]1) = II
3 eqid 2610 . . . . . . . . 9 𝐽 = 𝐽
42, 3cnf 20860 . . . . . . . 8 (𝐹 ∈ (II Cn 𝐽) → 𝐹:(0[,]1)⟶ 𝐽)
51, 4syl 17 . . . . . . 7 (𝜑𝐹:(0[,]1)⟶ 𝐽)
6 elii1 22542 . . . . . . . 8 (𝑥 ∈ (0[,](1 / 2)) ↔ (𝑥 ∈ (0[,]1) ∧ 𝑥 ≤ (1 / 2)))
7 iihalf1 22538 . . . . . . . 8 (𝑥 ∈ (0[,](1 / 2)) → (2 · 𝑥) ∈ (0[,]1))
86, 7sylbir 224 . . . . . . 7 ((𝑥 ∈ (0[,]1) ∧ 𝑥 ≤ (1 / 2)) → (2 · 𝑥) ∈ (0[,]1))
9 fvco3 6185 . . . . . . 7 ((𝐹:(0[,]1)⟶ 𝐽 ∧ (2 · 𝑥) ∈ (0[,]1)) → ((𝐻𝐹)‘(2 · 𝑥)) = (𝐻‘(𝐹‘(2 · 𝑥))))
105, 8, 9syl2an 493 . . . . . 6 ((𝜑 ∧ (𝑥 ∈ (0[,]1) ∧ 𝑥 ≤ (1 / 2))) → ((𝐻𝐹)‘(2 · 𝑥)) = (𝐻‘(𝐹‘(2 · 𝑥))))
1110anassrs 678 . . . . 5 (((𝜑𝑥 ∈ (0[,]1)) ∧ 𝑥 ≤ (1 / 2)) → ((𝐻𝐹)‘(2 · 𝑥)) = (𝐻‘(𝐹‘(2 · 𝑥))))
1211ifeq1da 4066 . . . 4 ((𝜑𝑥 ∈ (0[,]1)) → if(𝑥 ≤ (1 / 2), ((𝐻𝐹)‘(2 · 𝑥)), ((𝐻𝐺)‘((2 · 𝑥) − 1))) = if(𝑥 ≤ (1 / 2), (𝐻‘(𝐹‘(2 · 𝑥))), ((𝐻𝐺)‘((2 · 𝑥) − 1))))
13 pcoval.3 . . . . . . . 8 (𝜑𝐺 ∈ (II Cn 𝐽))
142, 3cnf 20860 . . . . . . . 8 (𝐺 ∈ (II Cn 𝐽) → 𝐺:(0[,]1)⟶ 𝐽)
1513, 14syl 17 . . . . . . 7 (𝜑𝐺:(0[,]1)⟶ 𝐽)
16 elii2 22543 . . . . . . . 8 ((𝑥 ∈ (0[,]1) ∧ ¬ 𝑥 ≤ (1 / 2)) → 𝑥 ∈ ((1 / 2)[,]1))
17 iihalf2 22540 . . . . . . . 8 (𝑥 ∈ ((1 / 2)[,]1) → ((2 · 𝑥) − 1) ∈ (0[,]1))
1816, 17syl 17 . . . . . . 7 ((𝑥 ∈ (0[,]1) ∧ ¬ 𝑥 ≤ (1 / 2)) → ((2 · 𝑥) − 1) ∈ (0[,]1))
19 fvco3 6185 . . . . . . 7 ((𝐺:(0[,]1)⟶ 𝐽 ∧ ((2 · 𝑥) − 1) ∈ (0[,]1)) → ((𝐻𝐺)‘((2 · 𝑥) − 1)) = (𝐻‘(𝐺‘((2 · 𝑥) − 1))))
2015, 18, 19syl2an 493 . . . . . 6 ((𝜑 ∧ (𝑥 ∈ (0[,]1) ∧ ¬ 𝑥 ≤ (1 / 2))) → ((𝐻𝐺)‘((2 · 𝑥) − 1)) = (𝐻‘(𝐺‘((2 · 𝑥) − 1))))
2120anassrs 678 . . . . 5 (((𝜑𝑥 ∈ (0[,]1)) ∧ ¬ 𝑥 ≤ (1 / 2)) → ((𝐻𝐺)‘((2 · 𝑥) − 1)) = (𝐻‘(𝐺‘((2 · 𝑥) − 1))))
2221ifeq2da 4067 . . . 4 ((𝜑𝑥 ∈ (0[,]1)) → if(𝑥 ≤ (1 / 2), (𝐻‘(𝐹‘(2 · 𝑥))), ((𝐻𝐺)‘((2 · 𝑥) − 1))) = if(𝑥 ≤ (1 / 2), (𝐻‘(𝐹‘(2 · 𝑥))), (𝐻‘(𝐺‘((2 · 𝑥) − 1)))))
2312, 22eqtrd 2644 . . 3 ((𝜑𝑥 ∈ (0[,]1)) → if(𝑥 ≤ (1 / 2), ((𝐻𝐹)‘(2 · 𝑥)), ((𝐻𝐺)‘((2 · 𝑥) − 1))) = if(𝑥 ≤ (1 / 2), (𝐻‘(𝐹‘(2 · 𝑥))), (𝐻‘(𝐺‘((2 · 𝑥) − 1)))))
2423mpteq2dva 4672 . 2 (𝜑 → (𝑥 ∈ (0[,]1) ↦ if(𝑥 ≤ (1 / 2), ((𝐻𝐹)‘(2 · 𝑥)), ((𝐻𝐺)‘((2 · 𝑥) − 1)))) = (𝑥 ∈ (0[,]1) ↦ if(𝑥 ≤ (1 / 2), (𝐻‘(𝐹‘(2 · 𝑥))), (𝐻‘(𝐺‘((2 · 𝑥) − 1))))))
25 copco.6 . . . 4 (𝜑𝐻 ∈ (𝐽 Cn 𝐾))
26 cnco 20880 . . . 4 ((𝐹 ∈ (II Cn 𝐽) ∧ 𝐻 ∈ (𝐽 Cn 𝐾)) → (𝐻𝐹) ∈ (II Cn 𝐾))
271, 25, 26syl2anc 691 . . 3 (𝜑 → (𝐻𝐹) ∈ (II Cn 𝐾))
28 cnco 20880 . . . 4 ((𝐺 ∈ (II Cn 𝐽) ∧ 𝐻 ∈ (𝐽 Cn 𝐾)) → (𝐻𝐺) ∈ (II Cn 𝐾))
2913, 25, 28syl2anc 691 . . 3 (𝜑 → (𝐻𝐺) ∈ (II Cn 𝐾))
3027, 29pcoval 22619 . 2 (𝜑 → ((𝐻𝐹)(*𝑝𝐾)(𝐻𝐺)) = (𝑥 ∈ (0[,]1) ↦ if(𝑥 ≤ (1 / 2), ((𝐻𝐹)‘(2 · 𝑥)), ((𝐻𝐺)‘((2 · 𝑥) − 1)))))
311, 13pcoval 22619 . . . . . 6 (𝜑 → (𝐹(*𝑝𝐽)𝐺) = (𝑥 ∈ (0[,]1) ↦ if(𝑥 ≤ (1 / 2), (𝐹‘(2 · 𝑥)), (𝐺‘((2 · 𝑥) − 1)))))
32 pcoval2.4 . . . . . . 7 (𝜑 → (𝐹‘1) = (𝐺‘0))
331, 13, 32pcocn 22625 . . . . . 6 (𝜑 → (𝐹(*𝑝𝐽)𝐺) ∈ (II Cn 𝐽))
3431, 33eqeltrrd 2689 . . . . 5 (𝜑 → (𝑥 ∈ (0[,]1) ↦ if(𝑥 ≤ (1 / 2), (𝐹‘(2 · 𝑥)), (𝐺‘((2 · 𝑥) − 1)))) ∈ (II Cn 𝐽))
352, 3cnf 20860 . . . . 5 ((𝑥 ∈ (0[,]1) ↦ if(𝑥 ≤ (1 / 2), (𝐹‘(2 · 𝑥)), (𝐺‘((2 · 𝑥) − 1)))) ∈ (II Cn 𝐽) → (𝑥 ∈ (0[,]1) ↦ if(𝑥 ≤ (1 / 2), (𝐹‘(2 · 𝑥)), (𝐺‘((2 · 𝑥) − 1)))):(0[,]1)⟶ 𝐽)
3634, 35syl 17 . . . 4 (𝜑 → (𝑥 ∈ (0[,]1) ↦ if(𝑥 ≤ (1 / 2), (𝐹‘(2 · 𝑥)), (𝐺‘((2 · 𝑥) − 1)))):(0[,]1)⟶ 𝐽)
37 eqid 2610 . . . . 5 (𝑥 ∈ (0[,]1) ↦ if(𝑥 ≤ (1 / 2), (𝐹‘(2 · 𝑥)), (𝐺‘((2 · 𝑥) − 1)))) = (𝑥 ∈ (0[,]1) ↦ if(𝑥 ≤ (1 / 2), (𝐹‘(2 · 𝑥)), (𝐺‘((2 · 𝑥) − 1))))
3837fmpt 6289 . . . 4 (∀𝑥 ∈ (0[,]1)if(𝑥 ≤ (1 / 2), (𝐹‘(2 · 𝑥)), (𝐺‘((2 · 𝑥) − 1))) ∈ 𝐽 ↔ (𝑥 ∈ (0[,]1) ↦ if(𝑥 ≤ (1 / 2), (𝐹‘(2 · 𝑥)), (𝐺‘((2 · 𝑥) − 1)))):(0[,]1)⟶ 𝐽)
3936, 38sylibr 223 . . 3 (𝜑 → ∀𝑥 ∈ (0[,]1)if(𝑥 ≤ (1 / 2), (𝐹‘(2 · 𝑥)), (𝐺‘((2 · 𝑥) − 1))) ∈ 𝐽)
40 eqid 2610 . . . . . 6 𝐾 = 𝐾
413, 40cnf 20860 . . . . 5 (𝐻 ∈ (𝐽 Cn 𝐾) → 𝐻: 𝐽 𝐾)
4225, 41syl 17 . . . 4 (𝜑𝐻: 𝐽 𝐾)
4342feqmptd 6159 . . 3 (𝜑𝐻 = (𝑦 𝐽 ↦ (𝐻𝑦)))
44 fveq2 6103 . . . 4 (𝑦 = if(𝑥 ≤ (1 / 2), (𝐹‘(2 · 𝑥)), (𝐺‘((2 · 𝑥) − 1))) → (𝐻𝑦) = (𝐻‘if(𝑥 ≤ (1 / 2), (𝐹‘(2 · 𝑥)), (𝐺‘((2 · 𝑥) − 1)))))
45 fvif 6114 . . . 4 (𝐻‘if(𝑥 ≤ (1 / 2), (𝐹‘(2 · 𝑥)), (𝐺‘((2 · 𝑥) − 1)))) = if(𝑥 ≤ (1 / 2), (𝐻‘(𝐹‘(2 · 𝑥))), (𝐻‘(𝐺‘((2 · 𝑥) − 1))))
4644, 45syl6eq 2660 . . 3 (𝑦 = if(𝑥 ≤ (1 / 2), (𝐹‘(2 · 𝑥)), (𝐺‘((2 · 𝑥) − 1))) → (𝐻𝑦) = if(𝑥 ≤ (1 / 2), (𝐻‘(𝐹‘(2 · 𝑥))), (𝐻‘(𝐺‘((2 · 𝑥) − 1)))))
4739, 31, 43, 46fmptcof 6304 . 2 (𝜑 → (𝐻 ∘ (𝐹(*𝑝𝐽)𝐺)) = (𝑥 ∈ (0[,]1) ↦ if(𝑥 ≤ (1 / 2), (𝐻‘(𝐹‘(2 · 𝑥))), (𝐻‘(𝐺‘((2 · 𝑥) − 1))))))
4824, 30, 473eqtr4rd 2655 1 (𝜑 → (𝐻 ∘ (𝐹(*𝑝𝐽)𝐺)) = ((𝐻𝐹)(*𝑝𝐾)(𝐻𝐺)))
Colors of variables: wff setvar class Syntax hints: ¬ wn 3 → wi 4 ∧ wa 383 = wceq 1475 ∈ wcel 1977 ∀wral 2896 ifcif 4036 ∪ cuni 4372 class class class wbr 4583 ↦ cmpt 4643 ∘ ccom 5042 ⟶wf 5800 ‘cfv 5804 (class class class)co 6549 0cc0 9815 1c1 9816 · cmul 9820 ≤ cle 9954 − cmin 10145 / cdiv 10563 2c2 10947 [,]cicc 12049 Cn ccn 20838 IIcii 22486 *𝑝cpco 22608 This theorem was proved from axioms: ax-mp 5 ax-1 6 ax-2 7 ax-3 8 ax-gen 1713 ax-4 1728 ax-5 1827 ax-6 1875 ax-7 1922 ax-8 1979 ax-9 1986 ax-10 2006 ax-11 2021 ax-12 2034 ax-13 2234 ax-ext 2590 ax-rep 4699 ax-sep 4709 ax-nul 4717 ax-pow 4769 ax-pr 4833 ax-un 6847 ax-inf2 8421 ax-cnex 9871 ax-resscn 9872 ax-1cn 9873 ax-icn 9874 ax-addcl 9875 ax-addrcl 9876 ax-mulcl 9877 ax-mulrcl 9878 ax-mulcom 9879 ax-addass 9880 ax-mulass 9881 ax-distr 9882 ax-i2m1 9883 ax-1ne0 9884 ax-1rid 9885 ax-rnegex 9886 ax-rrecex 9887 ax-cnre 9888 ax-pre-lttri 9889 ax-pre-lttrn 9890 ax-pre-ltadd 9891 ax-pre-mulgt0 9892 ax-pre-sup 9893 ax-mulf 9895 This theorem depends on definitions: df-bi 196 df-or 384 df-an 385 df-3or 1032 df-3an 1033 df-tru 1478 df-ex 1696 df-nf 1701 df-sb 1868 df-eu 2462 df-mo 2463 df-clab 2597 df-cleq 2603 df-clel 2606 df-nfc 2740 df-ne 2782 df-nel 2783 df-ral 2901 df-rex 2902 df-reu 2903 df-rmo 2904 df-rab 2905 df-v 3175 df-sbc 3403 df-csb 3500 df-dif 3543 df-un 3545 df-in 3547 df-ss 3554 df-pss 3556 df-nul 3875 df-if 4037 df-pw 4110 df-sn 4126 df-pr 4128 df-tp 4130 df-op 4132 df-uni 4373 df-int 4411 df-iun 4457 df-iin 4458 df-br 4584 df-opab 4644 df-mpt 4645 df-tr 4681 df-eprel 4949 df-id 4953 df-po 4959 df-so 4960 df-fr 4997 df-se 4998 df-we 4999 df-xp 5044 df-rel 5045 df-cnv 5046 df-co 5047 df-dm 5048 df-rn 5049 df-res 5050 df-ima 5051 df-pred 5597 df-ord 5643 df-on 5644 df-lim 5645 df-suc 5646 df-iota 5768 df-fun 5806 df-fn 5807 df-f 5808 df-f1 5809 df-fo 5810 df-f1o 5811 df-fv 5812 df-isom 5813 df-riota 6511 df-ov 6552 df-oprab 6553 df-mpt2 6554 df-of 6795 df-om 6958 df-1st 7059 df-2nd 7060 df-supp 7183 
df-wrecs 7294 df-recs 7355 df-rdg 7393 df-1o 7447 df-2o 7448 df-oadd 7451 df-er 7629 df-map 7746 df-ixp 7795 df-en 7842 df-dom 7843 df-sdom 7844 df-fin 7845 df-fsupp 8159 df-fi 8200 df-sup 8231 df-inf 8232 df-oi 8298 df-card 8648 df-cda 8873 df-pnf 9955 df-mnf 9956 df-xr 9957 df-ltxr 9958 df-le 9959 df-sub 10147 df-neg 10148 df-div 10564 df-nn 10898 df-2 10956 df-3 10957 df-4 10958 df-5 10959 df-6 10960 df-7 10961 df-8 10962 df-9 10963 df-n0 11170 df-z 11255 df-dec 11370 df-uz 11564 df-q 11665 df-rp 11709 df-xneg 11822 df-xadd 11823 df-xmul 11824 df-ioo 12050 df-icc 12053 df-fz 12198 df-fzo 12335 df-seq 12664 df-exp 12723 df-hash 12980 df-cj 13687 df-re 13688 df-im 13689 df-sqrt 13823 df-abs 13824 df-struct 15697 df-ndx 15698 df-slot 15699 df-base 15700 df-sets 15701 df-ress 15702 df-plusg 15781 df-mulr 15782 df-starv 15783 df-sca 15784 df-vsca 15785 df-ip 15786 df-tset 15787 df-ple 15788 df-ds 15791 df-unif 15792 df-hom 15793 df-cco 15794 df-rest 15906 df-topn 15907 df-0g 15925 df-gsum 15926 df-topgen 15927 df-pt 15928 df-prds 15931 df-xrs 15985 df-qtop 15990 df-imas 15991 df-xps 15993 df-mre 16069 df-mrc 16070 df-acs 16072 df-mgm 17065 df-sgrp 17107 df-mnd 17118 df-submnd 17159 df-mulg 17364 df-cntz 17573 df-cmn 18018 df-psmet 19559 df-xmet 19560 df-met 19561 df-bl 19562 df-mopn 19563 df-cnfld 19568 df-top 20521 df-bases 20522 df-topon 20523 df-topsp 20524 df-cld 20633 df-cn 20841 df-cnp 20842 df-tx 21175 df-hmeo 21368 df-xms 21935 df-ms 21936 df-tms 21937 df-ii 22488 df-pco 22613 This theorem is referenced by: pi1coghm 22669 cvmlift3lem6 30560
Copyright terms: Public domain
# Kolmogorov–Arnold Networks
Neural networks are largely seen as black boxes which magically provide the required output for a particular input, highlighting that they are not really interpretable and that the whole process of training and tracking the learning of any neural network is unintuitive. That is, even if a neural network can provide correct answers, we cannot understand the reasons why it makes those decisions. When I say "neural networks", I primarily refer to a class of networks called "Multi Layered Perceptrons" (MLPs), which are modelled on the basis of the "Universal Approximation Theorem" (UAT) and use linear weight and bias matrices as trainable parameters. Most learning algorithms today are based on MLPs and the UAT.
A few weeks ago, researchers from MIT, CalTech and Northeastern University released a paper which tries to use the Kolmogorov–Arnold Representation Theorem, trainable activation functions and B-Splines in order to model a network namely, the Kolmogorov–Arnold Network (KAN). This Network Architecture proposes a more interpretable and potentially effective alternative to MLPs. In this blog, we’ll be discussing about the architecture of KANs and what differentiates it from MLPs; along with the supplementary concepts required to build the mathematical framework and an intuition behind the Network.
Some of the aforementioned terms might sound way too intimidating at the first look, but they are just simple functions represented with such a complex level of Greek notation (That’s what mathematicians do 🙃) that they look more like alien scriptures than mathematics. But don’t worry, in this blog I’d try to break down these mathematically intensive theorems and architectures built on them in a much intuitive manner. Note, that the purpose of this blog is not only to provide intuition but to build that intuition into a mathematical rigor. Moreover, I’ve tried to keep this as sparse as possible, but cover all the important aspects, so this might be a mildly long read. The blog assumes basic knowledge about Highschool Calculus, Linear Models, Neural Networks and Activation Functions. A quick read, or binging the first four videos of the 3B1B playlist on Neural Networks over a Dark Chocolate will be enough.
## Chapter 1| The Status Quo: Multi Layered Perceptron
If you’re already well versed with the Universal Approximation Theorem, MLP Architecture and potential roadblocks with the same, you can skip to Chapter 2
### 1.1| Universal Approximation Theorem
Ever wondered why and how MLPs held their ground as the default foundational building block of basically every deep neural network, from simple classification models to those sophisticated LLMs, for easily over 20 years? If you really think about it, given the rapid pace and remarkable progress of machine learning, it's pretty crazy for MLPs not to see any contender for their position for such a long time. This just goes to show how good they actually are and how difficult it is to find something better.
Their strength lies in their very foundation: the Universal Approximation Theorem. It states that no matter what the function $y=f(x)$ is, there exists a sequence of neural nets $\phi _{1},\phi _{2},…$ such that eventually $\phi _{n}\rightarrow f(x)$. Okay, let's end the mathematical jargon here and let me try to tell you what it aims to do. First imagine any nonlinear continuous function of $x$ (say a sine curve); now try to draw line segments on that curve such that those segments fit the curve as neatly as they can. In other words, define a function that says: if $x \in (a,b)$ then follow this straight line, where the intervals $(a,b)$ do not overlap and together cover all possible values of $x$.
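The line-segment picture can be made concrete in a few lines: sample the target function on a grid and interpolate linearly between samples; the error shrinks as the number of segments grows (a quick sketch):

```python
import math

def piecewise_linear(f, a, b, n):
    """Approximate f on [a, b] with n straight-line segments."""
    xs = [a + (b - a) * i / n for i in range(n + 1)]
    def approx(x):
        i = min(int((x - a) / (b - a) * n), n - 1)   # segment containing x
        t = (x - xs[i]) / (xs[i + 1] - xs[i])
        return (1 - t) * f(xs[i]) + t * f(xs[i + 1])
    return approx

g = piecewise_linear(math.sin, 0, 2 * math.pi, 100)
err = max(abs(g(x) - math.sin(x)) for x in [2 * math.pi * k / 997 for k in range(997)])
print(err)  # tiny: the 100-segment chain of lines hugs the sine curve
```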
### 1.2| MLP Architecture
We have been discussing MLPs for a very long time here, but do you know what they actually are? Multi Layered Perceptrons are really just a type of artificial neural network consisting of multiple layers of neurons. The neurons in an MLP typically use nonlinear activation functions, allowing the network to learn complex patterns in data. You might actually be very familiar with their architecture: it is just a feed-forward neural network consisting of an input layer, hidden layers and finally an output layer, all linked through nonlinear activation functions. Well, the only difference between an ANN and an MLP is that MLPs always have nonlinear activation functions while ANNs may or may not use them. If you are familiar with machine learning, all this must already be known to you, but in case you aren't, here is a pretty great video for you to understand all the stuff related to ANNs along with the essential maths.
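For reference, the forward pass of the MLP described above is just alternating affine maps and nonlinearities. A bare-bones sketch (tanh is an arbitrary choice of activation here, and the tiny weights are made up for illustration):

```python
import math

def mlp_forward(x, layers):
    """layers = [(W, b), ...]; tanh between layers, linear final layer."""
    a = x
    for i, (W, b) in enumerate(layers):
        z = [sum(w * v for w, v in zip(row, a)) + bi for row, bi in zip(W, b)]
        a = z if i == len(layers) - 1 else [math.tanh(v) for v in z]
    return a

# one hidden neuron, one output: y = w2 * tanh(w1 * x + b1) + b2
layers = [([[2.0]], [0.5]), ([[3.0]], [-1.0])]
print(mlp_forward([0.25], layers))
```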
### 1.3| Issues with MLP
Even though MLPs may be the best solution to most of the problems we aim to solve in machine learning, they do have their own disadvantages.
1. Vanishing/Exploding Gradients: MLPs use the backpropagation algorithm to train themselves. During training, gradients can become very small (vanishing) or very large (exploding) as they travel through multiple layers. This is a remarkable drawback of MLPs, because as we add more and more hidden layers to deal with more complex data, vanishing and exploding gradients make it very ineffective and inefficient for the network to learn.
2. Hyperparameter Tuning: MLPs require careful tuning of hyperparameters like the learning rate, the number of layers and neurons, and the activation functions. Believe me when I say that hyperparameter tuning is quite a process: turn the learning rate up and here come the exploding gradients; turn it down and now the network doesn't converge. Too few hidden layers and the network doesn't learn anything; too many and the model just over-fits the data. There are techniques and rules of thumb that help, but they cannot eliminate the problem, and even with their help there is essentially no way to determine whether the hyperparameters behind the best model you trained actually give the best possible model (it's pretty sad imo).
3. Interpretability: Understanding how MLPs arrive at their decisions can be difficult, especially when there are multiple layers, but understanding them is also very essential for several reasons: getting insight into the data (crucial for research purposes), improving the model, and checking the fairness and biases of the network. The passing of data between weights and activation functions makes it a challenge to interpret the inner workings of the network. It's like the game where we connect dots to reveal a picture, except the dots are all jumbled, there are a lot of them (really, a lot), and each one connects to multiple other dots.
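The vanishing-gradient issue from point 1 can be seen numerically. The sketch below is my own toy setup (not a reference implementation): it backpropagates a gradient through a deep chain of sigmoid layers, and since the sigmoid's derivative is at most 0.25, the gradient's norm collapses layer after layer.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def dsigmoid(z):
    s = sigmoid(z)
    return s * (1.0 - s)

rng = np.random.default_rng(42)

# Forward pass through a deep chain of width-16 sigmoid layers,
# keeping the pre-activations needed for backprop.
width, depth = 16, 30
x = rng.normal(size=width)
Ws, pres = [], []
for _ in range(depth):
    W = rng.normal(size=(width, width)) / np.sqrt(width)
    z = W @ x
    Ws.append(W)
    pres.append(z)
    x = sigmoid(z)

# Backward pass: at each layer the gradient is multiplied by
# W.T @ diag(sigmoid'(z)); sigmoid' <= 0.25 shrinks it every time.
g = np.ones(width)
norms = []
for W, z in zip(reversed(Ws), reversed(pres)):
    g = W.T @ (dsigmoid(z) * g)
    norms.append(np.linalg.norm(g))

print(f"gradient norm after 1 layer:  {norms[0]:.3e}")
print(f"gradient norm after {depth} layers: {norms[-1]:.3e}")
```

After 30 layers the gradient is many orders of magnitude smaller than after the first layer, so the early layers barely learn.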
## Chapter 2| The Prequels: KART and Splines
### 2.1| Kolmogorov–Arnold Representation Theorem
The theorem can be summed up in the following simple equation:
$f(X) = f(x_1, x_2, ..., x_n) = \sum_{q=0}^{2n} \Phi _q \left(\sum_{p=1}^{n} \phi _{q,p} (x_p) \right)$ where $\phi_{p,q}: [0,1] \rightarrow \mathbb{R}$ and $\Phi_{q}: \mathbb{R} \rightarrow \mathbb{R}$
This notation at a surface level might seem to be extremely complex (Honestly, I was flabbergasted when I first saw it), but it simply states that any multivariate continuous function (here, $f(X)$) can be represented as a superposition of continuous functions of one variable and the additive operation. Let’s break it down.
Let’s ignore the outer summation for a bit and focus on the inner summation for one fixed $q$ (dropping the $q$ subscript to reduce clutter): $\sum_{p=1}^{n} \phi _p (x_p) = \phi _1 (x_1) + \phi _2 (x_2) + \ldots + \phi _n (x_n)$. Here, $\phi _p (x_p)$ is just an arbitrary univariate function (called an inner function) of $x_p$, and we are summing $n$ such univariate functions in $n$ different variables.
$\Phi_{q}$ is also a univariate function, taking $\sum_{p=1}^{n} \phi _{q,p}(x_p)$ as a whole as its single variable. Note that when we apply $\Phi_{q}$ to the inner sum, we achieve a mixing of variables. That is, we can have terms like $x_{1}^{a}x_{2}^{b}$ or $x_3^{3}\cos(x_2)\sin(x_1)$ or $e^{x_{2}^2 + x_{3}^{3}}\sin(x_{1})$ in our output. This allows for a wide variety of forms of terms which might be very similar to the final function. Note that we might not be able to achieve $f(X)$ with a single such term, so we superimpose $2n+1$ such functions in order to obtain $f(X)$.
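A concrete (if toy) instance of this "mixing" idea: multiplication itself, a multivariate function, can be written using only univariate functions and addition. The snippet below shows two exact decompositions; these illustrate the flavor of KART, not the theorem's actual construction.

```python
import math

def multiply_via_univariate(x, y):
    """x * y (for x, y > 0) written KART-style: an outer univariate
    function (exp) applied to a sum of inner univariate functions
    (log) -- multiplication emerges from addition + composition."""
    inner = math.log(x) + math.log(y)   # sum of inner functions phi_p(x_p)
    return math.exp(inner)              # outer function Phi applied once

def multiply_via_squares(x, y):
    """Another decomposition, valid for all reals:
    x*y = ((x+y)^2 - (x-y)^2) / 4, built only from the univariate
    function t -> t^2 and additions."""
    return ((x + y) ** 2 - (x - y) ** 2) / 4.0

print(multiply_via_univariate(3.0, 7.0))   # ~ 21.0
print(multiply_via_squares(-2.5, 4.0))     # -10.0
```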
The proof as to why we will necessarily be able to decompose $f(X)$ into such a superposition and why we need $2n+1$ functions in the outer summation is beyond the scope of this blog. Moreover, there exist variations of the KART, for instance replacement of the outer functions $\Phi_{q}$ with a single outer function $\Phi$ by George Lorentz, replacement of the inner functions $\phi_{q,p}$ by one single inner function with an appropriate shift in its argument by David Sprecher and generalization of KART by Phillip A. Ostrand via compact metric spaces. Again, these variations are not really important for understanding KANs, but present pretty interesting manipulations and applications (might cover the proofs, variations and results in another blog later 🥰)
The KART in a way tries to approximate the function $f(X)$ via decomposition. Since in machine learning models, the main task loosely is to find a function that can approximate the behavior of the system we are trying to model (called the “Hypothesis function”), this decomposition might somewhat be helpful in approximation of the function. Even before the extremely popular 2024 paper on KANs, multiple attempts have been made to implement the KART to develop network architectures, but they have largely failed in doing so. Why? We shall explore that in the next section.
### 2.2| “Kolmogorov’s Theorem is Irrelevant”, lmao
In 1989, Girosi and Poggio released a paper which literally, without hesitation claimed that “Kolmogorov’s Theorem is Irrelevant” ☠️. We’ll cover the underlying reasons behind this claim in this section.
Girosi and Poggio primarily claimed two fallacies:

1. Non-smoothness of functions: It is claimed that the inner functions of KART are not necessarily smooth. Here, I shall not delve into the mathematical definition of smooth, but at a high level, these are functions whose derivatives can be calculated to a sufficient depth (simply put, the curves look smooth 👍). Kolmogorov's construction, in fact, relies on inner functions that are highly non-smooth. In the context of neural networks, smooth activation functions are preferred because they facilitate gradient-based optimization techniques such as backpropagation, which needs multiple partial derivatives. Non-smooth functions, on the other hand, introduce difficulties in optimization and can lead to poor performance in learning tasks. This is a major roadblock.

2. Lack of parameterization: The functions provided by Kolmogorov's theorem are not parameterized in a form that can be easily adjusted or learned from data. MLPs can easily be parameterized via weights and biases, but it becomes increasingly difficult to parameterize a KART function in such a way that, at each iteration, it can be altered on the basis of some learning. Neural networks require functions with parameters that can be tuned during training to approximate the desired outputs. The lack of such parameterization in KART functions makes them impractical for real-world neural network applications.
Had KART actually been irrelevant and its implementation in a network actually not been feasible, I'd not have been writing this blog. So obviously, the new paper presents methods to overcome these problems, including nonlinear and learnable activation functions and their implementation via B-splines. Before delving into the actual model, let's understand splines.
### 2.3| Beautiful Curves: Splines 🥰
#### 2.3.1| Splines as threads
Imagine a thread which can be easily deformed to change its shape. Stretch it out to form a line segment and now select 4 points (not the endpoints) A, B, C and D, in that order, on the curve. Observe the shape of the curve between the extreme points (A and D). Take any point (say, B) and alter the shape of the curve by moving the curve at that point, keeping the other points fixed. The line now becomes a curve. Such a curve is called a spline. Observe that by altering the 4 points in various fashions, you can create infinitely many curves, and these curves will just be a function of the initial and final positions of the points. These fixed points (A, B, C and D) are called control points. The control points help to "pull" the curve (spline) to its desired shape; in this example, the shape is determined via the natural tension in the thread.
Note that this is just one example of a spline which follows certain properties. We can define HOW the spline is constructed from its control points via an algorithm, and this gives rise to different types of splines. Moreover, it is not necessary that the control points always lie on the spline; they might or might not lie on it. When one or more control points lie on the spline, we say that the spline "interpolates" those points. In the following subsections, we'll explore various properties, advantages and classes of splines and finally conclude with B-splines.
#### 2.3.2| Piecewise Cubic Curves and C1 Continuity
Given 4 points in the Cartesian plane, can you find an equation which passes through those 4 points? Just assume a cubic equation, substitute the values and solve the resulting system of 4 linear equations. What about 2 points and the slopes of the tangents at those points? Again, you can assume a cubic equation, differentiate it to get the slope of the tangent, substitute the values into the cubic equation and the slope equation, and find the coefficients.
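The 4-point procedure above is literally a 4×4 linear solve. A quick sketch (the sample points are arbitrary):

```python
import numpy as np

# Fit y = a x^3 + b x^2 + c x + d through four given points by
# solving the 4x4 linear system described above.
pts = [(-1.0, 2.0), (0.0, 1.0), (1.0, 0.0), (2.0, 5.0)]  # example points
A = np.array([[px**3, px**2, px, 1.0] for px, _ in pts])
y = np.array([py for _, py in pts])
a, b, c, d = np.linalg.solve(A, y)

def cubic(x):
    return a * x**3 + b * x**2 + c * x + d

for px, target in pts:
    print(f"cubic({px:+.1f}) = {cubic(px):+.4f}  (target {target:+.4f})")
```

As long as the four x-values are distinct, the system (a Vandermonde system) has exactly one solution.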
What if we have more than 4 points? For instance, 10 or 20 points. We can follow the same procedure and calculate the coefficients, but observe that the resulting high-degree curve will behave in a crazy fashion, oscillating wildly between the points (this kind of oscillation in high-degree polynomial interpolation is known as Runge's phenomenon).
In order to solve this issue, we define a piecewise function composed of functions of lower order, such that the resulting piecewise function is continuous and differentiable on the given domain. For simplicity, let's take the order to be 3. Assume that we need to interpolate $n$ points on a Cartesian plane to form a curve. Consider the first two points. Assume the slopes of the tangents at these 2 points and then fit a cubic polynomial between them. Repeat this for all consecutive pairs of points, hence obtaining $(n-1)$ cubic polynomials. Observe that each control point (except the endpoints) is shared by 2 curves, and a tangent slope is defined at that point for both curves. Such a shared joining point is called a knot. If the two slopes defined at a knot are unequal, the resulting piecewise function is no longer differentiable there; to prevent this, we keep the left-hand derivative equal to the right-hand derivative at each knot. In this way, we are able to define our function using $3n-2$ parameters: $n$ points, each contributing 2 coordinate values, and $n-2$ slope values.
This gives rise to a smooth enough curve which does not go crazy like those higher-degree curves. Note that we ensure that the function is continuous and differentiable at least once. This type of continuity is called $C^1$ continuity.
In order to reduce the number of parameters, let's set the slope of the tangent at the $i^{th}$ point to be the slope of the line joining the $(i-1)^{th}$ and $(i+1)^{th}$ points. Such a class of splines is called Catmull-Rom splines.
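A minimal sketch of evaluating one Catmull-Rom segment, using the common cubic Hermite form with tangents $(P_{i+1}-P_{i-1})/2$ (the control points here are arbitrary):

```python
import numpy as np

def catmull_rom_segment(p0, p1, p2, p3, t):
    """Point on the Catmull-Rom segment between p1 and p2 at
    parameter t in [0, 1].  Tangents come from neighbouring control
    points, which is what gives the spline its local control."""
    p0, p1, p2, p3 = map(np.asarray, (p0, p1, p2, p3))
    m1 = (p2 - p0) / 2.0          # tangent at p1
    m2 = (p3 - p1) / 2.0          # tangent at p2
    h00 = 2*t**3 - 3*t**2 + 1     # cubic Hermite basis functions
    h10 = t**3 - 2*t**2 + t
    h01 = -2*t**3 + 3*t**2
    h11 = t**3 - t**2
    return h00*p1 + h10*m1 + h01*p2 + h11*m2

P = [np.array(p, float) for p in [(0, 0), (1, 2), (3, 3), (4, 1)]]
print(catmull_rom_segment(*P, 0.0))  # equals p1: [1. 2.]
print(catmull_rom_segment(*P, 1.0))  # equals p2: [3. 3.]
```

Moving one control point only changes the few segments whose tangents use it, which is the "local control" property discussed below.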
#### 2.3.3| Pretty Natural Curves
Let’s get back to our initial condition: 3 points (2 endpoints and one knot point in the middle). To fit 2 cubic curves, we require 8 variables, hence 8 consistent linear equations. We get 4 equations by substituting the control points (2 points each for the 2 cubic curves). To ensure differentiability, we equate the derivatives of both curves at the knot point. Let's also make the second derivatives equal at the knot point, ensuring that the final function is twice differentiable. We now have 6 equations. Such a spline, which is continuous and has continuous first and second derivatives, is called a $C^2$ spline. Since the spline interpolates all the control points, it is a $C^2$ interpolating spline.
Then, we set the second derivatives at the endpoints to 0, providing us with 8 equations; now we can solve for the curves. This additional property makes these splines fall under the class of "natural cubic splines".
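The 8-equation construction described above can be solved directly. Here is a sketch for three arbitrary points in pure NumPy, mirroring the equations in the text:

```python
import numpy as np

def natural_cubic_3pts(pts):
    """Solve the 8-equation system for two cubics p1 on [x0,x1] and
    p2 on [x1,x2], each with coefficients [a, b, c, d]."""
    (x0, y0), (x1, y1), (x2, y2) = pts
    row = lambda x: [x**3, x**2, x, 1.0]       # p(x)
    drow = lambda x: [3*x**2, 2*x, 1.0, 0.0]   # p'(x)
    ddrow = lambda x: [6*x, 2.0, 0.0, 0.0]     # p''(x)
    Z = [0.0] * 4
    A = np.array([
        row(x0) + Z,                          # p1(x0) = y0
        row(x1) + Z,                          # p1(x1) = y1
        Z + row(x1),                          # p2(x1) = y1
        Z + row(x2),                          # p2(x2) = y2
        drow(x1) + [-v for v in drow(x1)],    # p1'(x1)  = p2'(x1)
        ddrow(x1) + [-v for v in ddrow(x1)],  # p1''(x1) = p2''(x1)
        ddrow(x0) + Z,                        # natural end: p1''(x0) = 0
        Z + ddrow(x2),                        # natural end: p2''(x2) = 0
    ])
    b = np.array([y0, y1, y1, y2, 0, 0, 0, 0], float)
    coeffs = np.linalg.solve(A, b)
    return coeffs[:4], coeffs[4:]

c1, c2 = natural_cubic_3pts([(0.0, 0.0), (1.0, 2.0), (3.0, 1.0)])
print(np.polyval(c1, 1.0), np.polyval(c2, 1.0))  # both ~ 2.0 (the knot)
```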
Go ahead and play with these curves using the interactive plots presented below!
If you fiddle around with the control points of the Catmull-Rom and natural cubic splines, you'll notice that if you alter the position of a control point in the C-R spline, the change is only observed in the cubic pieces controlled by that control point, and the rest of the curve is almost unchanged. This property is called "local control". On the other hand, this property is not observed in the natural cubic splines. Go back to the interactive curves and fiddle with them a bit more to get a hang of it.
#### 2.3.4| B-Splines
$C^2$ continuity offers greater smoothness to the curve, and local control provides the ability to change a function locally without altering the rest of the curve. This is an important feature when it comes to learning algorithms, because local control provides greater retention and memory (more on that in Chapter 3; for now just assume that it is important). B-splines are piecewise functions wherein each piece is a cubic Bézier curve. A cubic Bézier curve is defined by a parametric equation in its four control points, $B(t) = (1-t)^{3}P_0 + 3(1-t)^{2}tP_1 + 3(1-t)t^{2}P_2 + t^{3}P_3$ for $t \in [0,1]$.
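In practice, B-spline curves are evaluated through basis functions built with the standard Cox-de Boor recursion. Here is a rough sketch (the uniform knot vector is chosen just for illustration):

```python
import numpy as np

def bspline_basis(i, k, t, knots):
    """Cox-de Boor recursion: value of the i-th B-spline basis
    function of degree k at parameter t, for a given knot vector."""
    if k == 0:
        return 1.0 if knots[i] <= t < knots[i + 1] else 0.0
    left = right = 0.0
    denom = knots[i + k] - knots[i]
    if denom > 0:
        left = (t - knots[i]) / denom * bspline_basis(i, k - 1, t, knots)
    denom = knots[i + k + 1] - knots[i + 1]
    if denom > 0:
        right = ((knots[i + k + 1] - t) / denom
                 * bspline_basis(i + 1, k - 1, t, knots))
    return left + right

# Uniform knot vector 0..7 gives 4 cubic (degree-3) basis functions.
knots = np.arange(8.0)
for t in np.linspace(3.0, 3.99, 5):
    vals = [bspline_basis(i, 3, t, knots) for i in range(4)]
    print(f"t={t:.2f}  sum of basis = {sum(vals):.6f}")  # partition of unity
```

Each basis function is nonzero only on a few knot spans; that local support is exactly what gives B-splines the local control property, and it is why KANs use them for their learnable activation functions.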
## Chapter 3| The Kolmogorov–Arnold Network
### 3.1| Architecture
Assume we have a task at hand where we are given data points $\{x_1, x_2, \ldots, x_n, y\}$ and need to find an $f$ such that $f(x_1, \ldots, x_n) \approx y$ (consider the housing price prediction problem). Now, KART states that if we can find $\Phi _q$ and $\phi _{q,p}$ in the equation below, then we are done.
$f(X) = f(x_1, x_2, ..., x_n) = \sum_{q=0}^{2n} \Phi _q \left(\sum _{p=1}^{n} \phi _{q,p} (x_p) \right)$
Now, in order to find the univariate functions $\Phi _q$ and $\phi _{q,p}$, we just use splines, specifically B-splines. But we encounter a problem again: even though this can be easily implemented, such a network would be too simple to learn things. What the network currently is, is just a 2-layer equivalent of an MLP. In order to make this network more complex and deeper, we need to add more "layers", but how do we do so? The answer comes from looking at MLPs and KANs in parallel.
When we describe an MLP layer with $n$ inputs $(a_1,\ldots,a_n)$ and $m$ outputs $(b_1,\ldots,b_m)$, what we are essentially doing is multiplying input values with learned weights and adding biases (a linear transformation), and then passing these values through a nonlinear function. We can stack more of these layers on top of one another to make the network deeper. Now, how do we make a KAN "deeper"? Before answering this question, we first have to define what a "layer" means for a KAN.
The original paper says
It turns out that a KAN layer with $n_{\text{in}}$-dimensional inputs and $n_{\text{out}}$-dimensional outputs can be defined as a matrix of 1D functions where the functions $\phi_{q,p}$ have trainable parameters.
Now let me explain what exactly is happening over here. Say $(a_1,\ldots,a_n)$ are the $n$ inputs and $(b_1,\ldots,b_m)$ are the $m$ outputs. The first step is to define $m$ functions $\{\phi_{1,i},\phi_{2,i},\ldots,\phi_{m,i}\}$ for each $i \in \{1,2,\ldots,n\}$ (a total of $m \cdot n$ functions). For the next step, we pass each input $a_i$ through the functions defined for it to get the values $\{\phi _{1,i}(a_i),\phi _{2,i}(a_i),\ldots,\phi _{m,i}(a_i)\}$ (a total of $m \cdot n$ values). Finally, to get the outputs $(b_1,\ldots,b_m)$, we calculate $b _j = \sum _{i=1}^n \phi _{j,i}(a _i)$, and voila.
Another way to interpret what's written above is by looking at the matrix multiplication equation given in the paper, where the input dimension is $n$ and the output dimension is $2n+1$: the layer is described as a matrix of functions, and the input and output are column matrices. Just remember that what is going on there is not ordinary matrix multiplication; rather, the multiplication of two elements is replaced by passing one element as an argument to the function, i.e. $\phi _{j,i} \cdot a _{i} \rightarrow \phi _{j,i}(a _{i})$.
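Here is a toy sketch of such a KAN layer's forward pass. One caveat: the paper parameterizes each $\phi_{j,i}$ as a B-spline plus a SiLU term with learnable coefficients; to keep this sketch short I use a fixed grid of Gaussian bumps instead (incidentally, the DeepOKAN variant mentioned later also swaps B-splines for Gaussian RBFs), and nothing here is trained.

```python
import numpy as np

rng = np.random.default_rng(0)

class ToyKANLayer:
    """n_in -> n_out KAN layer: one learnable univariate function
    phi[j, i] per (output, input) pair, each parameterized as a
    linear combination of fixed Gaussian bumps."""
    def __init__(self, n_in, n_out, n_basis=8):
        self.centers = np.linspace(-2, 2, n_basis)       # basis grid
        self.width = self.centers[1] - self.centers[0]
        # Trainable coefficients: one per (output, input, basis fn).
        self.coef = rng.normal(size=(n_out, n_in, n_basis)) * 0.1

    def __call__(self, x):                  # x: (batch, n_in)
        # Gaussian basis evaluated at every input value.
        z = (x[:, :, None] - self.centers) / self.width
        basis = np.exp(-0.5 * z**2)         # (batch, n_in, n_basis)
        # phi[j, i](x_i) for every pair, then sum over inputs i.
        phi = np.einsum('bik,jik->bji', basis, self.coef)
        return phi.sum(axis=2)              # (batch, n_out)

layer1 = ToyKANLayer(n_in=3, n_out=5)
layer2 = ToyKANLayer(n_in=5, n_out=1)      # stacking makes it "deeper"
x = rng.normal(size=(4, 3))
print(layer2(layer1(x)).shape)             # (4, 1)
```

Stacking two such layers is exactly the "going deeper" step described above: the outputs of one matrix-of-functions become the inputs of the next.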
### 3.2| Interpretability
We have talked a lot about interpretability in this article. What it basically means is the ability to decipher how much and in what ways a given input variable affects the output (e.g., how the cost relates to the number of units sold, or how the appearance of a shape in an image relates to the image belonging to a particular class). One of the main reasons that KANs are gaining popularity at such a pace is their improved interpretability. MLPs are known as black boxes because, after the inputs go through multiple transformations (multiplication by weights) and are passed through activation functions, tracing these transformations from input to output is barely doable (it's like being asked to name all the ingredients in a dish you have never tasted in your life). The nonlinear activation functions are what really make MLPs hard to interpret, because they modify the input in a very non-interpretable way (think of how tanh and sigmoid squishify stuff). In KANs, since we never multiply inputs by weights and only ever pass them through activation functions, and since those activation functions are learnable splines (much more 'readable'), the networks become much more interpretable by humans. We can easily see how an input value translates to the output value, because the activation functions can be visualized directly. This interpretability makes KANs particularly useful in scientific applications where understanding how the model arrives at its results is crucial.
### 3.3| Applications
Okay, let's draw a parallel again between a layer of an MLP and a layer of a KAN. Both do the same thing, i.e., take $n$ inputs and give out $m$ outputs, even though by different methods. So wherever an MLP layer (a fully connected layer) is used, it can be replaced by a KAN layer, given that we use the correct training methods. So yes, simple neural networks can be fully replaced by KANs very easily.
But what about other neural networks such as CNNs or Transformers? How do you find the 'KA-equivalent' of a convolutional layer? Well, machine learning is one of the fastest developing fields. The paper on KANs was released on 30 Apr 2024, and not even a month and a half later, on 13 Jun 2024, a paper titled Suitability of KANs for Computer Vision: A preliminary investigation was released, exploring the application of KAN concepts in computer vision tasks (mainly classification). They did so by defining the KAN convolution layer, which works in a similar way to a KAN layer, that is, by eliminating the weights and biases and changing the activation functions to learnable splines. This paper goes out of scope for this article, but the key takeaways are that KConvKANs do outperform traditional CNNs on small datasets such as MNIST, while on slightly bigger datasets such as CIFAR-10 the margin by which they outperform becomes pretty minuscule. (Note: the comparisons made here are based on the number of parameters.)
Integrating KANs into Large Language Models (LLMs) could provide a window into how these models process and generate language. This could lead to more interpretable and efficient LLMs. It has not been done yet, but if KANs do turn out to be better at this task than MLPs, then they could potentially revamp the whole LLM scene. One thing to keep in mind is that they are still a work in progress, and optimization of such new techniques takes time. There have been proposed models, though, such as Temporal-KANs and the Temporal Kolmogorov-Arnold Transformer (TKAT), that try to mimic and replace the existing LSTMs and Transformers based on MLPs.
### 3.4| Advantages and drawbacks
Even though many new things are developed in the world, they don't gain traction unless they are better than previously existing solutions. KANs, too, must have some benefits over MLPs, given their popularity. Let's take a look at some of those.
1. Interpretability - We have talked about it a lot in the previous sections, so I won't repeat the same things here again.
2. Efficiency - KANs are a large step up from MLPs in terms of accuracy per parameter: with fewer parameters, KANs can produce the same or better results than MLPs. This means a smaller model with better accuracy (who doesn't want that?).
3. Complexity handling - KANs are inherently better at handling complex data because they use sums of splines to represent functions, and they can do so using far fewer parameters than MLPs. Using sums of splines allows KANs to capture much more detail, because splines themselves can capture relations between input and output much better.
Though KANs have their benefits over MLPs, they have their drawbacks as well:
1. Training time - Some research does suggest that KANs should converge faster, but currently they do not. This is because MLPs take advantage of GPU parallelization and optimized techniques for matrix multiplication and training. Such techniques have not yet been developed for KANs, and thus they are slower to train.
2. Computational resources - KANs, being new, have not been optimized for efficiency and thus take up a lot of computational resources to train. Moreover, the use of splines makes them more computationally heavy than MLPs, whose matrix operations have been heavily optimized.
### 3.5| Current stage of development
Given how promising KANs are, it should not be very surprising that they have seen a plethora of developments towards their current and upcoming applications:
1. KConvKAN - KConvKANs are an architecture based entirely on KAN linear layers and KANConv layers. These, as discussed before, are currently being developed for image processing tasks, using the KAN-convolution layer built from the KAN layer and the classic convolutional layer.
2. TKAN (Temporal-KAN) - Temporal KANs combine KANs and LSTMs in order to perform time series analysis. Their architecture is quite similar to that of LSTMs, but the key difference is the use of Recurring-KAN layers. Recurring Kolmogorov-Arnold Network (RKAN) layers integrate memory management capabilities into the network. This allows the model to better capture and retain long-term dependencies in the input data, leading to significant improvements in the accuracy and efficiency of multi-step time series forecasting.
3. TKAT(Temporal Kolmogorov Arnold Transformer)- Why stop at RNNs and LSTMs? Researchers proposed Temporal Kolmogorov Arnold Transformer that takes its concepts from KANs and the Transformer architecture. Integration of these two provided very effective capturing of long-term and short-term memory in complex time-series data. TKAT shows a lot of promise and is a significant step forward in time-series forecasting. Who knows, maybe the future LLMs would be based on TKATs.
4. DeepOKAN- DeepOKAN is a Deep Operator network based on KAN. The key innovation here is that DeepOKAN uses Gaussian radial basis functions (RBFs) rather than the B-splines. There have already been techniques to efficiently handle RBFs, so they can be applied in the case of DeepOKANs which in turn makes them more efficient than normal KANs. They were developed to handle complex engineering scenario predictions and computational mechanics.
5. WavKAN- WavKAN is another recent development inspired by Kolmogorov-Arnold Networks (KANs). It was aimed at improving traditional KANs and addressing some of their limitations. It introduces the use of wavelet functions into the KANs. Wavelets are mathematical tools used to analyze data at different scales, capturing both high-frequency (details) and low-frequency (overall trends) information. WavKAN can reportedly achieve better accuracy and train faster compared to traditional KANs (Spline-KAN) and MLPs and they also can adapt to the specific structure of the data, leading to increased robustness.
### 3.6| Do It Yourself
We have talked a lot about splines and KANs and their architecture, so how about getting some hands-on experience with them? Follow the given Colab notebook to try KANs out and see for yourself how they fare against MLPs.
_Authors: Anirudh Singh, Himanshu Sharma_
http://oeis.org/A005112 | 1,669,466,840,000,000,000 | text/html | crawl-data/CC-MAIN-2022-49/segments/1669446706291.88/warc/CC-MAIN-20221126112341-20221126142341-00614.warc.gz | 37,228,841 | 4,573 | The OEIS is supported by the many generous donors to the OEIS Foundation.
Year-end appeal: Please make a donation to the OEIS Foundation to support ongoing development and maintenance of the OEIS. We are now in our 59th year, we have over 358,000 sequences, and we’ve crossed 10,300 citations (which often say “discovered thanks to the OEIS”). Other ways to Give
Hints (Greetings from The On-Line Encyclopedia of Integer Sequences!)
A005112: Class 4- primes (for definition see A005109). (Formerly M5289)

47, 139, 167, 179, 269, 277, 347, 461, 467, 499, 599, 643, 691, 709, 797, 827, 829, 839, 857, 863, 967, 997, 1013, 1019, 1039, 1063, 1069, 1151, 1163, 1181, 1289, 1367, 1381, 1399, 1427, 1487, 1493, 1499, 1579, 1609, 1619, 1657, 1867, 1877, 1889, 1933, 1979

OFFSET: 1,1

REFERENCES: R. K. Guy, Unsolved Problems in Number Theory, A18. N. J. A. Sloane and Simon Plouffe, The Encyclopedia of Integer Sequences, Academic Press, 1995 (includes this sequence).

LINKS: R. J. Mathar, Table of n, a(n) for n = 1..20000

MATHEMATICA:
PrimeFactors[n_Integer] := Flatten[Table[#[[1]], {1}] & /@ FactorInteger[n]];
f[n_Integer] := Block[{m = n}, If[m == 0, m = 1, While[IntegerQ[m/2], m /= 2]; While[IntegerQ[m/3], m /= 3]]; Apply[Times, PrimeFactors[m] - 1]];
ClassMinusNbr[n_] := Length[NestWhileList[f, n, UnsameQ, All]] - 3;
Prime[Select[Range[300], ClassMinusNbr[Prime[#]] == 4 &]]

CROSSREFS: Cf. A005113, A056637, A005109, A005110, A005111, A081424, A081425, A081426, A081427, A081428, A081429, A081430.

KEYWORD: nonn

AUTHOR:

EXTENSIONS: Edited and extended by Robert G. Wilson v, Mar 20 2003

STATUS: approved
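For readers who prefer Python to the Mathematica above, here is an equivalent brute-force computation using the Erdős–Selfridge definition directly: a prime is class 1- if p−1 has no prime factor other than 2 and 3; otherwise its class is one more than the largest class among the prime factors greater than 3 of p−1.

```python
from functools import lru_cache

def prime_factors(n):
    """Distinct prime factors of n by trial division (n is small here)."""
    factors, d = set(), 2
    while d * d <= n:
        while n % d == 0:
            factors.add(d)
            n //= d
        d += 1
    if n > 1:
        factors.add(n)
    return factors

@lru_cache(maxsize=None)
def class_minus(p):
    """Erdos-Selfridge class c- of the prime p."""
    big = [q for q in prime_factors(p - 1) if q > 3]
    if not big:
        return 1                      # p - 1 = 2^i * 3^j  ->  class 1-
    return 1 + max(class_minus(q) for q in big)

def is_prime(n):
    return n > 1 and all(n % d for d in range(2, int(n**0.5) + 1))

a005112 = [p for p in range(2, 300) if is_prime(p) and class_minus(p) == 4]
print(a005112)   # [47, 139, 167, 179, 269, 277]
```

For example, 47 is class 4- because 46 = 2·23, 22 = 2·11, 10 = 2·5, and 4 = 2², giving the chain 5 (class 1), 11 (class 2), 23 (class 3), 47 (class 4).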
https://math.libretexts.org/Bookshelves/PreAlgebra/Prealgebra_(Arnold)/05%3A_Decimals | 1,716,583,926,000,000,000 | text/html | crawl-data/CC-MAIN-2024-22/segments/1715971058736.10/warc/CC-MAIN-20240524183358-20240524213358-00474.warc.gz | 340,836,658 | 28,859 | # 5: Decimals
Decimals, like fractions, represent numbers that are not whole numbers.
• 5.1: Decimals
On 1/29/2001, the New York Stock Exchange ended its 200-year tradition of quoting stock prices in fractions and switched to decimals. It was said that pricing stocks the same way other consumer items were priced would make it easier for investors to understand and compare stock prices. Supporters of the change claimed that trading volume, the number of shares of stock traded, would increase and improve efficiency. But switching to decimals would have another effect: narrowing the spread.
• 5.2: Introduction to Decimals
Recall that whole numbers are constructed by using digits.
• 5.3: Adding and Subtracting Decimals
Addition of decimal numbers is quite similar to addition of whole numbers. For example, suppose that we are asked to add 2.34 and 5.25. We could change these decimal numbers to mixed fractions and add.
• 5.4: Multiplying Decimals
Multiplying decimal numbers involves two steps: (1) multiplying the numbers as whole numbers, ignoring the decimal point, and (2) placing the decimal point in the correct position in the product or answer.
• 5.5: Dividing Decimals
In this and following sections we make use of the terms divisor, dividend, quotient, and remainder.
• 5.6: Fractions and Decimals
When converting a fraction to a decimal, only one of two things can happen. Either the process will terminate or the decimal representation will begin to repeat a pattern of digits. In each case, the procedure for changing a fraction to a decimal is the same.
• 5.7: Equations with Decimals
We can add or subtract the same decimal number from both sides of an equation without affecting the solution.
• 5.8: Introduction to Square Roots
Once you’ve mastered the process of squaring a whole number, then you are ready for the inverse of the squaring process, taking the square root of a whole number.
• 5.9: The Pythagorean Theorem
The Pythagorean theorem relates the lengths of the legs of a right triangle to the length of its hypotenuse.
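As a quick illustration of the procedures summarized in 5.4 (multiply as whole numbers, then place the decimal point) and 5.6 (a fraction's decimal form either terminates or repeats), here is a short Python sketch; the chapter itself, of course, works these by hand.

```python
from fractions import Fraction

def multiply_decimals(a: str, b: str) -> str:
    """Multiply two decimal strings (each must contain a '.') the
    textbook way: multiply as whole numbers, then place the decimal
    point using the total number of decimal places in both factors."""
    places = len(a.split(".")[1]) + len(b.split(".")[1])
    whole = int(a.replace(".", "")) * int(b.replace(".", ""))
    s = str(whole).rjust(places + 1, "0")   # pad so the point fits
    return s[:-places] + "." + s[-places:]

def terminates(frac: Fraction) -> bool:
    """A fraction in lowest terms has a terminating decimal exactly
    when its denominator has no prime factors other than 2 and 5."""
    d = frac.denominator
    for p in (2, 5):
        while d % p == 0:
            d //= p
    return d == 1

print(multiply_decimals("2.34", "5.2"))                      # 12.168
print(terminates(Fraction(3, 8)), terminates(Fraction(1, 3)))  # True False
```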
This page titled 5: Decimals is shared under a CC BY-NC-SA 3.0 license and was authored, remixed, and/or curated by David Arnold. | 1,134 | 3,732 | {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0} | 3.703125 | 4 | CC-MAIN-2024-22 | latest | en | 0.386754 |
https://math.stackexchange.com/questions/3411984/how-many-4-digit-numbers-divisible-by-4-can-be-formed-using-the-digits-0-1 | 1,620,939,756,000,000,000 | text/html | crawl-data/CC-MAIN-2021-21/segments/1620243992514.37/warc/CC-MAIN-20210513204127-20210513234127-00629.warc.gz | 394,647,381 | 38,668 | # How many $4$ digit numbers divisible by $4$ can be formed using the digits $0,1,2,3,4$ (without repetition)?
Here is my approach:- Firstly, I fixed the last digit as $$4$$ then there will be only $$2$$ numbers $$(0,2)$$ for the ten's digit, $$3$$ numbers for the hundred's digit and $$2$$ numbers for the Thousand's digit (so that they don't repeat). Number of $$4$$ digit numbers in which $$4$$ is the last digit and is divisible by $$4 = 2 \times 3 \times 2 = 12$$. As there can be only $$4,2,0$$ as the last digit so there are $$12\times 3 = 36$$ numbers possible but that is an incorrect answer. The correct answer is $$30$$. Where did I go wrong?
• The last two digits must be divisible by 4. So, the last digit must be $0,4$ and the second last even, or the last digit must be $2$ and the second last $1$ or $3$. $\\$For the first possibility, there are $2\cdot2\cdot3\cdot2=24$ possibilities. For the second, there are $1\cdot2\cdot1\cdot3=6$. $$24+6=30$$ – Don Thousand Oct 28 '19 at 5:30
• @DonThousand How are there 3 possibilities for the ten's digit if we fix the last digit as 4? The tens digit must be 0 or 2 then. – Ali Oct 28 '19 at 5:47
• I'm not doing it in that order. If I order it like that, it'd be $2\cdot3\cdot2\cdot2$ and $1\cdot3\cdot2\cdot1$. – Don Thousand Oct 28 '19 at 5:52
• Why did you separately calculate the the possibilities of 0,4 and 2. Why not calculate it altogether ? – Ali Oct 28 '19 at 6:20
Case 1: $$\underbrace{**}_{\{1,3,4\}}20 \Rightarrow P(3,2)=3!=6$$.
Case 2: $$**04 \Rightarrow P(3,2)=3!=6$$.
Case 3: $$**40 \Rightarrow P(3,2)=3!=6$$.
Case 4: $$**12=\underbrace{**}_{\{3,4\}}12+\underbrace{*}_{\{3,4\}}012 \Rightarrow P(2,2)+C(2,1)=4$$.
Case 5: $$**32 \Rightarrow P(2,2)+C(2,1)=4$$.
Case 6: $$**24 \Rightarrow P(2,2)+C(2,1)=4$$.
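The case count can be sanity-checked by brute force — an illustrative Python sketch, not part of the original answer:

```python
from itertools import permutations

count = 0
for digits in permutations((0, 1, 2, 3, 4), 4):
    if digits[0] == 0:          # a 4-digit number cannot have a leading zero
        continue
    n = int("".join(map(str, digits)))
    if n % 4 == 0:
        count += 1

print(count)  # 30
```

This enumerates all 96 valid four-digit arrangements and keeps the multiples of 4, confirming the total of $6+6+6+4+4+4=30$.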
• Thank you for your solution. But can you tell where did i go wrong in calculating the answer this way ? – Ali Oct 28 '19 at 9:30
• You are saying "I fixed the last digit as 4 then there will be only 2 numbers (0,4) for the ten's digit". No, if you fix last digit as $4$, then the ten's digit cannot be $4$ again, the condition says "without repetition". – farruhota Oct 28 '19 at 9:34
• 3 ways to fill the last digit (4,2,0). 2 ways to fill the tens digit for every number we fill in the ones place. 3 ways to fill the hundreds digit and 2 ways to fill the thousands digit which makes 3×2×3×2=36 – Ali Oct 28 '19 at 9:37
• my bad I meant to say 2 and 0 if we select last digit as 4. – Ali Oct 28 '19 at 9:40
• remember, the thousand's digit cannot be $0$. – farruhota Oct 28 '19 at 9:41
1-34.
Compute without using a calculator.
1. $−15+7$
Try thinking of the problem as $7−15$.
$−8$
1. $8−(−21)$
Remember that subtracting a negative number is the same as adding a positive number.
Read the problem as $8+21$.
1. $6(−8)$
A positive multiplied by a negative equals a negative.
$−48$
1. $−9+(−13)$
Both numbers have a negative sign in front of them. Add them and keep the negative sign.
1. $−50−30$
$−80$
1. $3−(−9)$
See the hint for part (a).
1. $−75−(−75)$
See the hint for part (b).
$−75−(−75)=−75+75=0$
1. $(−3)+6$
$6−3$.
1. $9+(−14)$
What if the question were $14−9$?
$−5$
1. $28−(−2)$
See the hint for part (b).
$28−(−2)=28+2=30$
1. $−3+(−2)+5$
It may be easier to compute this in more than one step.
Because the only operations are addition and subtraction, the order in which they are carried out does not matter.
1. $3+2+5$
See the hint for part (k).
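Every part is a one-line integer computation, so the answers worked out above can be checked mechanically. An illustrative Python sketch (the letters follow the order the parts appear in):

```python
assert -15 + 7 == -8                 # (a)
assert 8 - (-21) == 8 + 21 == 29     # (b) subtracting a negative adds
assert 6 * (-8) == -48               # (c)
assert -9 + (-13) == -22             # (d)
assert -50 - 30 == -80               # (e)
assert 3 - (-9) == 3 + 9 == 12       # (f)
assert -75 - (-75) == 0              # (g)
assert (-3) + 6 == 6 - 3 == 3        # (h)
assert 9 + (-14) == -5               # (i)
assert 28 - (-2) == 28 + 2 == 30     # (j)
assert -3 + (-2) + 5 == 0            # (k)
assert 3 + 2 + 5 == 10               # (l)
print("all parts check out")
```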
MatematikaRia.com
# 3, 4, 5 Triangle
## Need a Right Angle (90°) ?
Make a 3, 4, 5 triangle! Connect three lines:

- one 3 long
- one 4 long
- one 5 long

and you will have a right angle (90°).
## Other Lengths
You can use other lengths by multiplying each side by 2. Or by 10. Or any multiple.
## Drawing It
Let us say you need to mark a right angle coming from a point on a wall.
You decide to use 300, 400 and 500 cm lines.
1. Draw a 300 line along the wall.
2. Draw an arc 400 away from the start of the 300 line.
3. Draw an arc 500 away from the end of the 300 line.
4. Connect from the start of the 300 line to where the arcs cross.

And you have your "3,4,5" triangle with its right angle.
## The Mathematics Behind It
The Pythagoras Theorem says:
The square of a (a²) plus the square of b (b²) is equal to the square of c (c²): a² + b² = c²
Let's check if it does work: 3² + 4² = 5². Calculating this becomes: 9 + 16 = 25. Yes, it works!
## Other Combinations
Yes, there are other combinations that work (not just by multiplying). Here are two others:
- 5, 12, 13 triangle: 5² + 12² = 13²
- 9, 40, 41 triangle: 9² + 40² = 41²
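Both the listed triples and the multiply-each-side rule can be verified in a couple of lines — an illustrative Python sketch:

```python
triples = [(3, 4, 5), (5, 12, 13), (9, 40, 41)]
for a, b, c in triples:
    assert a**2 + b**2 == c**2          # Pythagoras: a² + b² = c²

# Scaling every side by the same factor preserves the right angle:
for k in (2, 10, 100):
    assert (3 * k)**2 + (4 * k)**2 == (5 * k)**2
print("all right angles confirmed")
```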
# The consecutive digits of a three digit number are in G.P. If the middle digit is increased by 2 then they form an A.P. If 792 is subtracted from this number then we get the number consisting of the same three digits but in reverse order. Find the number.
Last updated date: 04th Aug 2024
Hint: Firstly assume the digits to be $a,ar,a{{r}^{2}}$; after that you can apply the conditions accordingly. Use the formula of A.P. given by: if a,b,c are in A.P., then $b=\dfrac{a+c}{2}$, so that you will get a simpler equation. After that use the last condition and solve it to find 'r', and on further solving you will get 'a'.
To solve the given problem we have to assume the digits. As they are in G.P., we use the standard form for consecutive terms of a G.P.
Therefore the three consecutive digits of the number are given by,
$a,ar,a{{r}^{2}}$ …………………………………………… (1)
Where, ‘a’ is the first term and ’r’ is the common ratio of G.P.
Therefore the three-digit number formed by these digits is given by,
$100a+10ar+a{{r}^{2}}$
If we take ‘a’ as common we will get,
$a\left( 100+10r+{{r}^{2}} \right)$ …………………………….. (2)
As given in the problem if the middle digit is increased by 2 then it will form an A.P.
Therefore,
$a,ar+2,a{{r}^{2}}$ form an A.P.
To proceed further we should know the formula given below,
Formula:
If a,b,c are in A.P. then , $b=\dfrac{a+c}{2}$.
By using the formula we can write,
$ar+2=\dfrac{a+a{{r}^{2}}}{2}$
$\therefore 2\left( ar+2 \right)=a+a{{r}^{2}}$
$\therefore 2ar+4=a+a{{r}^{2}}$
$\therefore 4=a{{r}^{2}}-2ar+a$
By taking ‘a’ common we will get,
$\therefore 4=a\left( {{r}^{2}}-2r+1 \right)$
To proceed further in the solution we should know the formula given below,
Formula:
${{\left( a-b \right)}^{2}}={{a}^{2}}-2ab+{{b}^{2}}$
By using the above formula we can write,
$\therefore 4=a{{\left( r-1 \right)}^{2}}$
By rearranging the above equation we will get,
$\therefore a{{\left( r-1 \right)}^{2}}=4$ …………………………………….. (3)
To proceed further we should write the number in reverse order therefore, from equation (1) we can write the number in reverse order as,
$100a{{r}^{2}}+10ar+a$
By taking ‘a’ common we will get,
$a\left( 100{{r}^{2}}+10r+1 \right)$ ………………………………. (4)
Now, by using the second condition given in the problem and using equation (2) and equation (4), we will get,
$a\left( 100+10r+{{r}^{2}} \right)-792=a\left( 100{{r}^{2}}+10r+1 \right)$
By taking 792 to the right side of the equation and the reversed number to the left hand side, we will get
$a\left( 100+10r+{{r}^{2}} \right)-a\left( 100{{r}^{2}}+10r+1 \right)=792$
By taking ‘a’ common we will get,
$a\left[ \left( 100+10r+{{r}^{2}} \right)-\left( 100{{r}^{2}}+10r+1 \right) \right]=792$
By opening the parenthesis we will get,
$\therefore a\left[ 100+10r+{{r}^{2}}-100{{r}^{2}}-10r-1 \right]=792$
$\therefore a\left[ 100+{{r}^{2}}-100{{r}^{2}}-1 \right]=792$
$\therefore a\left[ 99-99{{r}^{2}} \right]=792$
By taking 99 common we will get,
$\therefore 99a\left[ 1-{{r}^{2}} \right]=792$
$\therefore a\left[ 1-{{r}^{2}} \right]=\dfrac{792}{99}$
Dividing the numerator and denominator on the right hand side of the equation by 9 we will get,
$\therefore a\left[ 1-{{r}^{2}} \right]=\dfrac{88}{11}$
$\therefore a\left[ 1-{{r}^{2}} \right]=8$
Dividing above equation by equation (3) we will get,
$\therefore \dfrac{a\left( 1-{{r}^{2}} \right)}{a{{\left( r-1 \right)}^{2}}}=\dfrac{8}{4}$
$\therefore \dfrac{\left( 1-{{r}^{2}} \right)}{{{\left( r-1 \right)}^{2}}}=2$
By taking -1 common from denominator of left hand side of the equation we will get,
$\therefore \dfrac{\left( 1-{{r}^{2}} \right)}{{{\left[ -1\left( 1-r \right) \right]}^{2}}}=2$
$\therefore \dfrac{\left( 1-{{r}^{2}} \right)}{{{\left( 1-r \right)}^{2}}}=2$
To proceed further we should know the formula given below,
Formula:
${{a}^{2}}-{{b}^{2}}=\left( a+b \right)\times \left( a-b \right)$
By using above formula we will get,
$\therefore \dfrac{\left( 1-r \right)\left( 1+r \right)}{{{\left( 1-r \right)}^{2}}}=2$
$\therefore \dfrac{\left( 1+r \right)}{\left( 1-r \right)}=2$
$\therefore \left( 1+r \right)=2\left( 1-r \right)$
$\therefore \left( 1+r \right)=2-2r$
$\therefore r+2r=2-1$
$\therefore 3r=1$
$\therefore r=\dfrac{1}{3}$ …………………………………………… (5)
By substituting the value of equation (5) in equation (3) we will get,
$\therefore a{{\left( \dfrac{1}{3}-1 \right)}^{2}}=4$
$\therefore a{{\left( \dfrac{1-3}{3} \right)}^{2}}=4$
$\therefore a{{\left( \dfrac{-2}{3} \right)}^{2}}=4$
$\therefore a\times \dfrac{4}{9}=4$
$\therefore a=4\times \dfrac{9}{4}$
$\therefore a=9$ …………………………………………. (6)
Now if we put the values of equation (5) and equation (6) in equation (2) we will get,
$\therefore$The required number $=a\left( 100+10r+{{r}^{2}} \right)$
$\therefore$The required number $=9\times \left[ 100+10\times \left( \dfrac{1}{3} \right)+{{\left( \dfrac{1}{3} \right)}^{2}} \right]$
$\therefore$The required number $=9\times 100+9\times 10\times \left( \dfrac{1}{3} \right)+9\times {{\left( \dfrac{1}{3} \right)}^{2}}$
$\therefore$The required number $=900+3\times 10\ +9\times \dfrac{1}{9}$
$\therefore$The required number $=900+30+1$
$\therefore$The required number $=931$
Therefore the number which satisfies all the conditions given in the problem is 931.
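The answer can be double-checked by brute force over all three-digit numbers — an illustrative Python sketch, not part of the original solution (we require the middle digit to be nonzero so the G.P. does not degenerate):

```python
matches = []
for n in range(100, 1000):
    d1, d2, d3 = n // 100, (n // 10) % 10, n % 10
    in_gp = d2 * d2 == d1 * d3 and d2 != 0   # digits form a geometric progression
    in_ap = 2 * (d2 + 2) == d1 + d3          # middle digit + 2 gives an arithmetic progression
    reversed_n = 100 * d3 + 10 * d2 + d1
    if in_gp and in_ap and n - 792 == reversed_n:
        matches.append(n)

print(matches)  # [931]
```

The search confirms that 931 is the only number satisfying all three conditions.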
Note: Don’t assume the digits to be $\dfrac{a}{r},a,ar$, as it will become very much difficult for calculation, it will be better if we use the digits as$a,ar,a{{r}^{2}}$, as it will save your time in solving which is beneficial in competitive exams. | 1,997 | 5,608 | {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0} | 4.71875 | 5 | CC-MAIN-2024-33 | latest | en | 0.831 |
# 211441 (number)
211,441 (two hundred eleven thousand four hundred forty-one) is an odd six-digit prime number following 211440 and preceding 211442. In scientific notation, it is written as 2.11441 × 10⁵. The sum of its digits is 13. It has a total of 1 prime factor and 2 positive divisors. There are 211,440 positive integers (up to 211441) that are relatively prime to 211441.
## Basic properties
• Is Prime? Yes
• Number parity Odd
• Number length 6
• Sum of Digits 13
• Digital Root 4
## Name
Short name: 211 thousand 441 — Full name: two hundred eleven thousand four hundred forty-one
## Notation
Scientific notation: 2.11441 × 10⁵ — Engineering notation: 211.441 × 10³
## Prime Factorization of 211441
Prime Factorization 211441
Prime number
ω(n) = 1 — total number of distinct prime factors
Ω(n) = 1 — total number of prime factors (with multiplicity)
rad(n) = 211441 — product of the distinct prime factors
λ(n) = −1 — parity of Ω(n), such that λ(n) = (−1)^Ω(n)
μ(n) = −1 — Möbius function: −1 because n has an odd number of prime factors and is square free
Λ(n) ≈ 12.2617 — von Mangoldt function: log(p) if n is a power pᵏ of a prime p (k ≥ 1), else 0
The prime factorization of 211,441 is 211441. Since it has a total of 1 prime factor, 211,441 is a prime number.
## Divisors of 211441
2 divisors
Even divisors: 0 — Odd divisors: 2 — 4k+1 divisors: 2 — 4k+3 divisors: 0

τ(n) = 2 — total number of the positive divisors of n
σ(n) = 211442 — sum of all the positive divisors of n
s(n) = 1 — sum of the proper positive divisors of n
A(n) = 105721 — sum of divisors σ(n) divided by the total number of divisors τ(n)
G(n) ≈ 459.827 — the τ(n)th root of the product of the divisors
H(n) ≈ 1.99999 — total number of divisors τ(n) divided by the sum of the reciprocals of the divisors
The number 211,441 can be divided by 2 positive divisors (out of which 0 are even, and 2 are odd). The sum of these divisors (counting 211,441) is 211,442, the average is 105,721.
## Other Arithmetic Functions (n = 211441)
φ(n) = 211440 — total number of positive integers not greater than n that are coprime to n
λ(n) = 211440 — Carmichael lambda: smallest positive number such that a^λ(n) ≡ 1 (mod n) for all a coprime to n
π(n) ≈ 18868 — total number of primes less than or equal to n
r2(n) = 8 — the number of ways n can be represented as the sum of 2 squares
There are 211,440 positive integers (less than 211,441) that are coprime with 211,441. And there are approximately 18,868 prime numbers less than or equal to 211,441.
## Divisibility of 211441
| m | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| n mod m | 1 | 1 | 1 | 1 | 1 | 6 | 1 | 4 |
211,441 is not divisible by any number less than or equal to 9.
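All of the values above follow from 211441 being prime, and can be reproduced with a few lines of plain Python (an illustrative sketch using trial division, no external libraries):

```python
n = 211441

# Trial division up to √n is enough to certify that n is prime.
is_prime = n > 1 and all(n % d != 0 for d in range(2, int(n**0.5) + 1))

# Collect divisors in pairs (d, n // d); for a prime this is just {1, n}.
divisors = [d for d in range(1, int(n**0.5) + 1) if n % d == 0]
divisors += [n // d for d in reversed(divisors) if n // d not in divisors]

print(is_prime)                       # True
print(divisors)                       # [1, 211441]
print(sum(divisors))                  # 211442  (σ(n))
print(sum(int(c) for c in str(n)))    # 13      (digit sum)
print([n % m for m in range(2, 10)])  # [1, 1, 1, 1, 1, 6, 1, 4]
```

For a prime p, τ(p) = 2, σ(p) = p + 1, and φ(p) = p − 1, which matches the table values 2, 211442, and 211440.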
• Arithmetic
• Prime
• Deficient
• Polite
• Prime Power
• Square Free
## Base conversion (211441)
Base System Value
2 Binary 110011100111110001
3 Ternary 101202001011
4 Quaternary 303213301
5 Quinary 23231231
6 Senary 4310521
8 Octal 634761
10 Decimal 211441
12 Duodecimal a2441
20 Vigesimal 168c1
36 Base36 4j5d
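Each row of the base-conversion table can be verified with Python's built-in `int(string, base)` — an illustrative sketch:

```python
n = 211441
conversions = {
    2: "110011100111110001",
    3: "101202001011",
    4: "303213301",
    5: "23231231",
    6: "4310521",
    8: "634761",
    10: "211441",
    12: "a2441",
    20: "168c1",
    36: "4j5d",
}
for base, digits in conversions.items():
    assert int(digits, base) == n, (base, digits)
print("all base representations check out")
```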
## Basic calculations (n = 211441)
### Multiplication
n×y
n×2 = 422882, n×3 = 634323, n×4 = 845764, n×5 = 1057205
### Division
n÷y
n÷2 = 105720.5, n÷3 ≈ 70480.3, n÷4 ≈ 52860.2, n÷5 = 42288.2
### Exponentiation
n^y
n² = 44707296481, n³ = 9452955475239121, n⁴ = 1998742358640034983361, n⁵ = 422616083053207636916833201
### Nth Root
y√n
²√n ≈ 459.827, ³√n ≈ 59.5749, ⁴√n ≈ 21.4436, ⁵√n ≈ 11.6155
## 211441 as geometric shapes
### Circle
Diameter = 422882 — Circumference ≈ 1.32852e+06 — Area ≈ 1.40452e+11
### Sphere
Volume ≈ 3.95964e+16 — Surface area ≈ 5.61808e+11 — Circumference ≈ 1.32852e+06
### Square
Length = n
Perimeter = 845764 — Area ≈ 4.47073e+10 — Diagonal ≈ 299023
### Cube
Length = n
Surface area ≈ 2.68244e+11 — Volume ≈ 9.45296e+15 — Space diagonal ≈ 366227
### Equilateral Triangle
Length = n
Perimeter = 634323 — Area ≈ 1.93588e+10 — Height ≈ 183113
### Triangular Pyramid
Length = n
Surface area ≈ 7.74353e+10 — Volume ≈ 1.11404e+15 — Height ≈ 172641
## Cryptographic Hash Functions
md5: 6313cf52e71abe523f710037b5ac4d61
sha1: c7aaa4d9e9526b861c797d2c91deb2a8f14acef4
sha256: 14223c08636deea223c3b5da1836d6f8ae8d375402fe1993dc5119d37d85b46a
sha512: e7714d85a31355cee57605f05e384ba6943eacceb12e09c6202e482224c53452480aecf0aa048c87271b32b164791fcb70263ebae0f5c332cc790260e93ef567
ripemd-160: 3f57ab3020b5881f577ceee0cfc329f461bd24b0
# Lesson 4
Fitting a Line to Data
Let’s look at the scatter plots as a whole.
### 4.1: Predict This
Here is a scatter plot that shows weights and fuel efficiencies of 20 different types of cars.
If a car weighs 1,750 kg, would you expect its fuel efficiency to be closer to 22 mpg or to 28 mpg? Explain your reasoning.
### 4.2: Shine Bright
Here is a table that shows weights and prices of 20 different diamonds.
| weight (carats) | actual price (dollars) | predicted price (dollars) |
| --- | --- | --- |
| 1 | 3,772 | 4,429 |
| 1 | 4,221 | 4,429 |
| 1 | 4,032 | 4,429 |
| 1 | 5,385 | 4,429 |
| 1.05 | 3,942 | 4,705 |
| 1.05 | 4,480 | 4,705 |
| 1.06 | 4,511 | 4,760 |
| 1.2 | 5,544 | 5,533 |
| 1.3 | 6,131 | 6,085 |
| 1.32 | 5,872 | 6,195 |
| 1.41 | 7,122 | 6,692 |
| 1.5 | 7,474 | 7,189 |
| 1.5 | 5,904 | 7,189 |
| 1.59 | 8,706 | 7,686 |
| 1.61 | 8,252 | 7,796 |
| 1.73 | 9,530 | 8,459 |
| 1.77 | 9,374 | 8,679 |
| 1.85 | 8,169 | 9,121 |
| 1.9 | 9,541 | 9,397 |
| 2.04 | 9,125 | 10,170 |
The scatter plot shows the prices and weights of the 20 diamonds together with the graph of $$y = 5,\!520x- 1,\!091$$.
The function described by the equation $$y = 5,\!520x- 1,\!091$$ is a model of the relationship between a diamond’s weight and its price.
This model predicts the price of a diamond from its weight. These predicted prices are shown in the third column of the table.
1. Two diamonds that both weigh 1.5 carats have different prices. What are their prices? How can you see this in the table? How can you see this in the graph?
2. The model predicts that when the weight is 1.5 carats, the price will be \$7,189. How can you see this in the graph? How can you see this using the equation?
3. One of the diamonds weighs 1.9 carats. What does the model predict for its price? How does that compare to the actual price?
4. Find a diamond for which the model makes a very good prediction of the actual price. How can you see this in the table? In the graph?
5. Find a diamond for which the model’s prediction is not very close to the actual price. How can you see this in the table? In the graph?
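The predicted prices in the table come from evaluating the linear model at each weight. A small illustrative Python sketch (the function name `predicted_price` is ours):

```python
def predicted_price(weight_carats):
    """The lesson's model: y = 5520x - 1091 (price in dollars, weight in carats)."""
    return 5520 * weight_carats - 1091

print(predicted_price(1))            # 4429
print(predicted_price(1.5))          # 7189.0
print(round(predicted_price(2.04)))  # 10170

# Residual (actual minus predicted) for the 1.9-carat diamond:
print(round(9541 - predicted_price(1.9)))  # 144
```

A positive residual means the model underpredicts that diamond's price; a negative one means it overpredicts.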
### 4.3: The Agony of the Feet
Here is a scatter plot that shows lengths and widths of 20 different left feet. Use the double arrows to show or hide the expressions list.
1. Estimate the widths of the longest foot and the shortest foot.
2. Estimate the lengths of the widest foot and the narrowest foot.
3. Click on the gray circle next to the words “The Line” in the expressions list. The graph of a linear model should appear. Find the data point that seems weird when compared to the model. What length and width does that point represent?
### Summary
Sometimes, we can use a linear function as a model of the relationship between two variables. For example, here is a scatter plot that shows heights and weights of 25 dogs together with the graph of a linear function which is a model for the relationship between a dog’s height and its weight.
We can see that the model does a good job of predicting the weight given the height for some dogs. These correspond to points on or near the line. The model doesn’t do a very good job of predicting the weight given the height for the dogs whose points are far from the line.
For example, there is a dog that is about 20 inches tall and weighs a little more than 16 pounds. The model predicts that the weight would be about 48 pounds. We say that the model overpredicts the weight of this dog. There is also a dog that is 27 inches tall and weighs about 110 pounds. The model predicts that its weight will be a little less than 80 pounds. We say the model underpredicts the weight of this dog.
Sometimes a data point is far away from the other points or doesn’t fit a trend that all the other points fit. We call these outliers.
### Glossary Entries
• outlier
An outlier is a data value that is far from the other values in the data set.
Here is a scatter plot that shows lengths and widths of 20 different left feet. The foot whose length is 24.5 cm and width is 7.8 cm is an outlier.
# How do you find a unit vector which is parallel to the vector which points from (4,13) to (19,8)?
Mar 3, 2018
$\frac{\sqrt{10}}{10} \left(\begin{matrix}3 \\ - 1\end{matrix}\right) = \frac{\sqrt{10}}{10} \left(3 \hat{i} - \hat{j}\right)$
#### Explanation:
call the first coordinate $A$, and the second $B$
then
$\vec{O A} = \left(\begin{matrix}4 \\ 13\end{matrix}\right)$
$\vec{O B} = \left(\begin{matrix}19 \\ 8\end{matrix}\right)$
now
$\vec{A B} = \vec{A O} + \vec{O B}$
$\therefore \vec{A B} = - \vec{O A} + \vec{O B}$
$\vec{A B} = - \left(\begin{matrix}4 \\ 13\end{matrix}\right) + \left(\begin{matrix}19 \\ 8\end{matrix}\right)$
$\vec{A B} = \left(\begin{matrix}15 \\ - 5\end{matrix}\right)$
now
$| \vec{A B} | = \sqrt{{15}^{2} + {5}^{2}}$
$| \vec{A B} | = \sqrt{250} = 5 \sqrt{10}$
a unit vector parallel to$\text{ } \vec{A B}$
is given by
$\frac{1}{5 \sqrt{10}} \left(\begin{matrix}15 \\ - 5\end{matrix}\right)$
$\frac{\sqrt{10}}{50} \left(\begin{matrix}15 \\ - 5\end{matrix}\right)$
cancelling
$\frac{\sqrt{10}}{10} \left(\begin{matrix}3 \\ - 1\end{matrix}\right) = \frac{\sqrt{10}}{10} \left(3 \hat{i} - \hat{j}\right)$
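The same computation can be carried out numerically in a few lines of Python (an illustrative sketch; `math.hypot` supplies the magnitude):

```python
import math

A, B = (4, 13), (19, 8)
AB = (B[0] - A[0], B[1] - A[1])   # vector from A to B: (15, -5)
norm = math.hypot(*AB)            # |AB| = sqrt(250) = 5*sqrt(10)
unit = (AB[0] / norm, AB[1] / norm)

print(AB)    # (15, -5)
print(unit)  # approximately (0.9487, -0.3162), i.e. (3, -1)/sqrt(10)
assert math.isclose(math.hypot(*unit), 1)  # a unit vector has length 1
```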
# Math Word Wall for 3rd, 4th, and 5th Grade
File Type: Compressed Zip File (45 MB, 81 pages)
Product Description
Help your students remember and better understand key math concepts with these beautiful Math bulletin board headers, pennant letters, and sample cards. Build the most beautiful Math Wall with this amazing 80 page resource!
This word wall is also part of a money-saving bundle, which includes No-Prep Math Practice for the Year AND this beautiful, coordinating Math Word Wall!
Don't need it all, but would love to have the practice pages and examples? Click HERE!
I’ve teamed up with Shelly Rees to make sure that your Math Wall will be THE place to spend time in your classroom this year. Each concept covered in grades 3-5 has a bright and colorful header with a corresponding sample card, giving students a visual example of the concept. We have also included the pennants and cut-out letters you need to make your Math Wall a visually attractive place for students to spend time learning and interacting with content.
CLICK on the PREVIEW above to see the contents of this incredible resource!
• 63 Colorful Math Word Wall Headers (1/2 page size)
• 63 Visually Appealing Sample Cards (1/4 page size)
• Math Word Wall Pennant Headers and Bulletin Board Letters
There are headers and sample cards for each of the following topics:
Numbers and Operations in Base Ten
Place Value - Whole Numbers
Place Value - Decimals
Comparing Whole Numbers
Comparing Decimal Numbers
Rounding Whole Numbers
Rounding Decimal Numbers
Number Form (Expanded, Word, Standard)
Decimal Number Form (Expanded, Word, Standard)
Subtraction with Regrouping
Multiplication by 1-Digit Numbers
Multiplication (2 Digits x 2 Digits)
Division (1 Digit Divisors)
Division (2 Digit Divisors)
Decimal Multiplication
Decimal Division
Operations and Algebraic Thinking
Word Problems
Multistep Word Problems
Patterns
Factor Pairs
Multiples
Prime and Composite Numbers
Order of Operations (Numerical Expressions)
Numbers and Operations: Fractions
Fractions on a Number Lines
Equivalent Fractions
Comparing Fractions
Fraction Subtraction (Like Denominators)
Fraction Subtraction (Unlike Denominators)
Fraction Multiplication
Multiplying Fractions by Mixed Numbers
Fraction Division
Fractions and Decimal Equivalency
Measurement and Data
Measurement Conversion (Customary)
Measurement Conversion (Metric)
Measurement Word Problems
Area and Perimeter
Angle Measurement
Volume
Telling Time to the Nearest Minute
Bar Graphs
Picture Graphs
Line Plots
Measure to ¼ Inch
Range, Mean, Median, Mode
Measurement Benchmarks
Elapsed Time Word Problems
Angles within a Circle
Geometry
Angles: Right, Acute, Obtuse
Triangle Classification by Length of Sides
Triangle Classification by Types of Angles
Triangle Classification by Length of Sides AND Types of Angles
Other Polygons
Lines, Line Segments, and Rays
Parallel, Intersecting, & Perpendicular Lines
Lines of Symmetry
Coordinate Graphing
This resource will be the perfect companion to your math instruction. It caters to EVERY level of learning, and can be used for...
♥...extra practice for your at-level kiddos
♥...and for enrichment for those students that excel in this area!
We worked very hard to make sure you will have every, single thing you need to have an easier, more productive, and highly successful school year! You will NOT be disappointed!
★ ★ ★ ★ ★ ★ ★ ★ ★ ★ ★ ★ ★ ★ ★ ★ ★ ★ ★ ★ ★ ★ ★ ★ ★ ★ ★ ★ ★ ★ ★ ★
Want to learn more? Check out our FREE Video Guide in Shelly's shop, where we take you step by step through this product! PLEASE NOTE: The video guide also models the use of the Math Word Wall, including the Headers and Samplers. Those are NOT included in this Math Practice for the Year resource, but are included in the money saving Math Centers Ultimate Bundle for the Year.
Need even MORE details and an even CLOSER LOOK?! Check out my super detailed BLOG POST.
★ ★ ★ ★ ★ ★ ★ ★ ★ ★ ★ ★ ★ ★ ★ ★ ★ ★ ★ ★ ★ ★ ★ ★ ★ ★ ★ ★ ★ ★ ★ ★
You may also be interested in:
Joey's Interactive Writing Center for Grades 3-6
Shelly's Math Task Card and Poster Sets
Don't forget that leaving feedback earns you points toward FREE TPT purchases. We love that feedback! As always, please feel free to contact us with any questions!
Thank you so much,
Joey Udovich and Shelly Rees
*Please note that this purchase is good for ONE user license. The option of multiple licenses will be available during checkout. Please respect the time and effort that went into creating this product, and if you are wanting to share it with your colleagues, simply purchase the additional licenses.
*Professionally Edited and Revised on 1/5/18.
Total Pages
81 pages
N/A
Teaching Duration
1 Year
Report this Resource
\$10.00
# Mutable Poset
This module provides a class representing a finite partially ordered set (poset) for the purpose of being used as a data structure. Thus the posets introduced in this module are mutable, i.e., elements can be added and removed from a poset at any time.
To get in touch with Sage’s “usual” posets, start with the page Posets in the reference manual.
## Examples
### First Steps
We start by creating an empty poset. This is simply done by
sage: from sage.data_structures.mutable_poset import MutablePoset as MP
sage: P = MP()
sage: P
poset()
A poset should contain elements, thus let us add them with
sage: P.add(42)
sage: P.add(7)
sage: P.add(13)
sage: P.add(3)
Let us look at the poset again:
sage: P
poset(3, 7, 13, 42)
We see that the elements are sorted using $$\leq$$, which exists on the integers $$\ZZ$$. Since this is even a total order, we could have used a more efficient data structure. Alternatively, we can write
sage: MP([42, 7, 13, 3])
poset(3, 7, 13, 42)
to add several elements at once on construction.
### A less boring Example
Let us continue with a less boring example. We define the class
sage: class T(tuple):
....: def __le__(left, right):
....: return all(l <= r for l, r in zip(left, right))
It is equipped with a $$\leq$$-operation such that $$a \leq b$$ if all entries of $$a$$ are at most the corresponding entry of $$b$$. For example, we have
sage: a = T((1,1))
sage: b = T((2,1))
sage: c = T((1,2))
sage: a <= b, a <= c, b <= c
(True, True, False)
The last comparison gives False, since the comparison of the first component checks whether $$2 \leq 1$$.
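Outside of Sage, the componentwise order itself can be exercised in plain Python — an illustrative sketch mirroring the class above:

```python
class T(tuple):
    def __le__(left, right):
        # componentwise comparison: every entry of `left` is at most
        # the corresponding entry of `right`
        return all(l <= r for l, r in zip(left, right))

a, b, c = T((1, 1)), T((2, 1)), T((1, 2))
assert a <= b and a <= c          # (1, 1) is below both
assert not b <= c and not c <= b  # (2, 1) and (1, 2) are incomparable
```

Pairs like `b` and `c` are exactly what make this a *partial* order: neither is below the other, so a poset (rather than a sorted list) is needed to store them.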
Now, let us add such elements to a poset:
sage: Q = MP([T((1, 1)), T((3, 3)), T((4, 1)),
....: T((3, 2)), T((2, 3)), T((2, 2))]); Q
poset((1, 1), (2, 2), (2, 3), (3, 2), (3, 3), (4, 1))
In the representation above, the elements are sorted topologically, smallest first. This does not (directly) show more structural information. We can overcome this and display a “wiring layout” by typing:
sage: print(Q.repr_full(reverse=True))
poset((3, 3), (2, 3), (3, 2), (2, 2), (4, 1), (1, 1))
+-- oo
| +-- no successors
| +-- predecessors: (3, 3), (4, 1)
+-- (3, 3)
| +-- successors: oo
| +-- predecessors: (2, 3), (3, 2)
+-- (2, 3)
| +-- successors: (3, 3)
| +-- predecessors: (2, 2)
+-- (3, 2)
| +-- successors: (3, 3)
| +-- predecessors: (2, 2)
+-- (2, 2)
| +-- successors: (2, 3), (3, 2)
| +-- predecessors: (1, 1)
+-- (4, 1)
| +-- successors: oo
| +-- predecessors: (1, 1)
+-- (1, 1)
| +-- successors: (2, 2), (4, 1)
| +-- predecessors: null
+-- null
| +-- successors: (1, 1)
| +-- no predecessors
Note that we use reverse=True to let the elements appear from largest (on the top) to smallest (on the bottom).
If you look at the output above, you’ll see two additional elements, namely oo ($$\infty$$) and null ($$\emptyset$$). So what are these strange animals? The answer is simple and maybe you can guess it already. The $$\infty$$-element is larger than every other element, therefore a successor of the maximal elements in the poset. Similarly, the $$\emptyset$$-element is smaller than any other element, therefore a predecessor of the poset’s minimal elements. Both do not have to scare us; they are just there and sometimes useful.
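The wiring layout above is exactly the cover relation of the poset: each shell lists its direct successors and predecessors, with oo sitting above the maximal elements and null below the minimal ones. As an illustration (the helper names here are made up, not part of the module), the direct successors can be recomputed in plain Python:

```python
def leq(a, b):
    # componentwise comparison, as for the class T above
    return all(x <= y for x, y in zip(a, b))

def direct_successors(elements):
    """Map each element to the elements covering it, i.e. the
    minimal elements strictly above it."""
    cov = {}
    for a in elements:
        above = {b for b in elements if b != a and leq(a, b)}
        # b covers a iff nothing in `above` lies strictly below b
        cov[a] = {b for b in above
                  if not any(leq(z, b) for z in above - {b})}
    return cov

Q = [(1, 1), (3, 3), (4, 1), (3, 2), (2, 3), (2, 2)]
cov = direct_successors(Q)
# matches the layout: (1, 1) is covered by (2, 2) and (4, 1),
# while (3, 3) and (4, 1) have no successors except oo
```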
AUTHORS:
• Daniel Krenn (2015)
ACKNOWLEDGEMENT:
• Daniel Krenn is supported by the Austrian Science Fund (FWF): P 24644-N26.
## Classes and their Methods#
class sage.data_structures.mutable_poset.MutablePoset(data=None, key=None, merge=None, can_merge=None)#
Bases: SageObject
A data structure that models a mutable poset (partially ordered set).
INPUT:
• data – data from which to construct the poset. It can be any of the following:
1. None (default), in which case an empty poset is created,
2. a MutablePoset, which will be copied during creation,
3. an iterable, whose elements will be in the poset.
• key – a function which maps elements to keys. If None (default), this is the identity, i.e., keys are equal to their elements.
Two elements with the same keys are considered as equal; so only one of these two elements can be in the poset.
This key is not used for sorting (in contrast to sorting-functions, e.g. sorted).
• merge – a function which merges its second argument (an element) to its first (again an element) and returns the result (as an element). If the return value is None, the element is removed from the poset.
This hook is called by merge(). Moreover it is used during add() when an element (more precisely its key) is already in this poset.
merge is None (default) is equivalent to merge returning its first argument. Note that it is not allowed that the key of the returning element differs from the key of the first input parameter. This means merge must not change the position of the element in the poset.
• can_merge – a function which checks whether its second argument can be merged to its first.
This hook is called by merge(). Moreover it is used during add() when an element (more precisely its key) is already in this poset.
can_merge is None (default) is equivalent to can_merge returning True in all cases.
OUTPUT:
A mutable poset.
You can find a short introduction and examples here.
EXAMPLES:
sage: from sage.data_structures.mutable_poset import MutablePoset as MP
We illustrate the different input formats
1. No input:
sage: A = MP(); A
poset()
2. A MutablePoset:
sage: B = MP(A); B
poset()
sage: B.add(42)
sage: C = MP(B); C
poset(42)
3. An iterable:
sage: C = MP([5, 3, 11]); C
poset(3, 5, 11)
add(element)#
Add the given object as element to the poset.
INPUT:
• element – an object (hashable and supporting comparison with the operator <=).
OUTPUT:
Nothing.
EXAMPLES:
sage: from sage.data_structures.mutable_poset import MutablePoset as MP
sage: class T(tuple):
....: def __le__(left, right):
....: return all(l <= r for l, r in zip(left, right))
sage: P = MP([T((1, 1)), T((1, 3)), T((2, 1)),
....: T((4, 4)), T((1, 2))])
sage: print(P.repr_full(reverse=True))
poset((4, 4), (1, 3), (1, 2), (2, 1), (1, 1))
+-- oo
| +-- no successors
| +-- predecessors: (4, 4)
+-- (4, 4)
| +-- successors: oo
| +-- predecessors: (1, 3), (2, 1)
+-- (1, 3)
| +-- successors: (4, 4)
| +-- predecessors: (1, 2)
+-- (1, 2)
| +-- successors: (1, 3)
| +-- predecessors: (1, 1)
+-- (2, 1)
| +-- successors: (4, 4)
| +-- predecessors: (1, 1)
+-- (1, 1)
| +-- successors: (1, 2), (2, 1)
| +-- predecessors: null
+-- null
| +-- successors: (1, 1)
| +-- no predecessors
sage: P.add(T((2, 2)))
sage: reprP = P.repr_full(reverse=True); print(reprP)
poset((4, 4), (1, 3), (2, 2), (1, 2), (2, 1), (1, 1))
+-- oo
| +-- no successors
| +-- predecessors: (4, 4)
+-- (4, 4)
| +-- successors: oo
| +-- predecessors: (1, 3), (2, 2)
+-- (1, 3)
| +-- successors: (4, 4)
| +-- predecessors: (1, 2)
+-- (2, 2)
| +-- successors: (4, 4)
| +-- predecessors: (1, 2), (2, 1)
+-- (1, 2)
| +-- successors: (1, 3), (2, 2)
| +-- predecessors: (1, 1)
+-- (2, 1)
| +-- successors: (2, 2)
| +-- predecessors: (1, 1)
+-- (1, 1)
| +-- successors: (1, 2), (2, 1)
| +-- predecessors: null
+-- null
| +-- successors: (1, 1)
| +-- no predecessors
When adding an element which is already in the poset, nothing happens:
sage: e = T((2, 2))
sage: P.add(e)
sage: P.repr_full(reverse=True) == reprP
True
We can influence the behavior when an element with existing key is to be inserted in the poset. For example, we can perform an addition on some argument of the elements:
sage: def add(left, right):
....: return (left[0], ''.join(sorted(left[1] + right[1])))
sage: A = MP(key=lambda k: k[0], merge=add)
sage: A.add((3, 'a'))
sage: A
poset((3, 'a'))
sage: A.add((3, 'b'))
sage: A
poset((3, 'ab'))
We can also deal with cancellations. If the return value of our hook-function is None, then the element is removed from the poset:
sage: def add_None(left, right):
....: s = left[1] + right[1]
....: if s == 0:
....: return None
....: return (left[0], s)
sage: B = MP(key=lambda k: k[0],
....:        merge=add_None)
sage: B.add((7, 42))
sage: B.add((7, -42))
sage: B
poset()
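The add/merge mechanics just described can be mimicked with an ordinary dict indexed by the key function. This hedged sketch (the helper `add_with_merge` is made up for illustration) reproduces both the merging and the cancellation example:

```python
def add_with_merge(store, element, key, merge):
    """Insert element into store; on a key collision, call merge.
    A merge result of None removes the entry (cancellation)."""
    k = key(element)
    if k not in store:
        store[k] = element
        return
    merged = merge(store[k], element)
    if merged is None:
        del store[k]
    else:
        store[k] = merged

key = lambda e: e[0]

def add(left, right):
    return (left[0], ''.join(sorted(left[1] + right[1])))

A = {}
add_with_merge(A, (3, 'a'), key, add)
add_with_merge(A, (3, 'b'), key, add)   # merged into (3, 'ab')

def add_None(left, right):
    s = left[1] + right[1]
    return None if s == 0 else (left[0], s)

B = {}
add_with_merge(B, (7, 42), key, add_None)
add_with_merge(B, (7, -42), key, add_None)  # cancels: B is empty again
```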
clear()#
Remove all elements from this poset.
INPUT:
Nothing.
OUTPUT:
Nothing.
contains(key)#
Test whether key is encapsulated by one of the poset’s elements.
INPUT:
• key – an object.
OUTPUT:
True or False.
copy(mapping=None)#
Create a shallow copy.
INPUT:
• mapping – a function which is applied on each of the elements.
OUTPUT:
A poset with the same content as self.
difference(*other)#
Return a new poset where all elements of this poset, which are contained in one of the other given posets, are removed.
INPUT:
• other – a poset or an iterable. In the latter case the iterated objects are seen as elements of a poset. It is possible to specify more than one other as variadic arguments (arbitrary argument lists).
OUTPUT:
A poset.
Note
The key of an element is used for comparison. Thus elements with the same key are considered as equal.
EXAMPLES:
sage: from sage.data_structures.mutable_poset import MutablePoset as MP
sage: P = MP([3, 42, 7]); P
poset(3, 7, 42)
sage: Q = MP([4, 8, 42]); Q
poset(4, 8, 42)
sage: P.difference(Q)
poset(3, 7)
difference_update(*other)#
Remove all elements of another poset from this poset.
INPUT:
• other – a poset or an iterable. In the latter case the iterated objects are seen as elements of a poset. It is possible to specify more than one other as variadic arguments (arbitrary argument lists).
OUTPUT:
Nothing.
Note
The key of an element is used for comparison. Thus elements with the same key are considered as equal.
EXAMPLES:
sage: from sage.data_structures.mutable_poset import MutablePoset as MP
sage: P = MP([3, 42, 7]); P
poset(3, 7, 42)
sage: Q = MP([4, 8, 42]); Q
poset(4, 8, 42)
sage: P.difference_update(Q)
sage: P
poset(3, 7)
discard(key, raise_key_error=False)#
Remove the given object from the poset.
INPUT:
• key – the key of an object.
• raise_key_error – (default: False) switch raising KeyError on and off.
OUTPUT:
Nothing.
If the element is not a member and raise_key_error is set (not default), raise a KeyError.
Note
As with Python’s set, the methods remove() and discard() only differ in their behavior when an element is not contained in the poset: remove() raises a KeyError whereas discard() does not raise any exception.
This default behavior can be overridden with the raise_key_error parameter.
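In terms of a key-indexed store, the remove/discard distinction is a one-flag difference, just as for Python's set. A minimal sketch (the `delete` helper and the `store` dict are hypothetical):

```python
def delete(store, key, raise_key_error):
    """remove() corresponds to raise_key_error=True,
    discard() to raise_key_error=False."""
    if key in store:
        del store[key]
    elif raise_key_error:
        raise KeyError('Key %r is not contained in this poset.' % (key,))

store = {(1, 2): 'element'}
delete(store, (1, 2), raise_key_error=False)   # discard: element removed
delete(store, (1, 2), raise_key_error=False)   # discard again: no error
raised = False
try:
    delete(store, (1, 2), raise_key_error=True)  # remove: raises KeyError
except KeyError:
    raised = True
```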
EXAMPLES:
sage: from sage.data_structures.mutable_poset import MutablePoset as MP
sage: class T(tuple):
....: def __le__(left, right):
....: return all(l <= r for l, r in zip(left, right))
sage: P = MP([T((1, 1)), T((1, 3)), T((2, 1)),
....: T((4, 4)), T((1, 2)), T((2, 2))])
sage: P.remove(T((1, 2)))
Traceback (most recent call last):
...
KeyError: 'Key (1, 2) is not contained in this poset.'
element(key)#
Return the element corresponding to key.
INPUT:
• key – the key of an object.
OUTPUT:
An object.
EXAMPLES:
sage: from sage.data_structures.mutable_poset import MutablePoset as MP
sage: P = MP()
sage: P.add(42)
sage: e = P.element(42); e
42
sage: type(e)
<class 'sage.rings.integer.Integer'>
elements(**kwargs)#
Return an iterator over all elements.
INPUT:
OUTPUT:
An iterator.
EXAMPLES:
sage: from sage.data_structures.mutable_poset import MutablePoset as MP
sage: P = MP([3, 42, 7])
sage: [(v, type(v)) for v in sorted(P.elements())]
[(3, <class 'sage.rings.integer.Integer'>),
(7, <class 'sage.rings.integer.Integer'>),
(42, <class 'sage.rings.integer.Integer'>)]
Note that
sage: it = iter(P)
sage: sorted(it)
[3, 7, 42]
returns all elements as well.
elements_topological(**kwargs)#
Return an iterator over all elements in topological order.
INPUT:
OUTPUT:
An iterator.
EXAMPLES:
sage: from sage.data_structures.mutable_poset import MutablePoset as MP
sage: class T(tuple):
....: def __le__(left, right):
....: return all(l <= r for l, r in zip(left, right))
sage: P = MP([T((1, 1)), T((1, 3)), T((2, 1)),
....: T((4, 4)), T((1, 2)), T((2, 2))])
sage: [(v, type(v)) for v in P.elements_topological(key=repr)]
[((1, 1), <class '__main__.T'>),
((1, 2), <class '__main__.T'>),
((1, 3), <class '__main__.T'>),
((2, 1), <class '__main__.T'>),
((2, 2), <class '__main__.T'>),
((4, 4), <class '__main__.T'>)]
get_key(element)#
Return the key corresponding to the given element.
INPUT:
• element – an object.
OUTPUT:
An object (the key of element).
intersection(*other)#
Return the intersection of the given posets as a new poset.
INPUT:
• other – a poset or an iterable. In the latter case the iterated objects are seen as elements of a poset. It is possible to specify more than one other as variadic arguments (arbitrary argument lists).
OUTPUT:
A poset.
Note
The key of an element is used for comparison. Thus elements with the same key are considered as equal.
EXAMPLES:
sage: from sage.data_structures.mutable_poset import MutablePoset as MP
sage: P = MP([3, 42, 7]); P
poset(3, 7, 42)
sage: Q = MP([4, 8, 42]); Q
poset(4, 8, 42)
sage: P.intersection(Q)
poset(42)
intersection_update(*other)#
Update this poset with the intersection of itself and another poset.
INPUT:
• other – a poset or an iterable. In the latter case the iterated objects are seen as elements of a poset. It is possible to specify more than one other as variadic arguments (arbitrary argument lists).
OUTPUT:
Nothing.
Note
The key of an element is used for comparison. Thus elements with the same key are considered as equal; A.intersection_update(B) and B.intersection_update(A) might result in different posets.
EXAMPLES:
sage: from sage.data_structures.mutable_poset import MutablePoset as MP
sage: P = MP([3, 42, 7]); P
poset(3, 7, 42)
sage: Q = MP([4, 8, 42]); Q
poset(4, 8, 42)
sage: P.intersection_update(Q)
sage: P
poset(42)
is_disjoint(other)#
Return whether another poset is disjoint to this poset.
INPUT:
• other – a poset or an iterable. In the latter case the iterated objects are seen as elements of a poset.
OUTPUT:
True or False.
Note
If this poset uses a key-function, then all comparisons are performed on the keys of the elements (and not on the elements themselves).
EXAMPLES:
sage: from sage.data_structures.mutable_poset import MutablePoset as MP
sage: P = MP([3, 42, 7]); P
poset(3, 7, 42)
sage: Q = MP([4, 8, 42]); Q
poset(4, 8, 42)
sage: P.is_disjoint(Q)
False
sage: P.is_disjoint(Q.difference(P))
True
is_subset(other)#
Return whether another poset contains this poset, i.e., whether this poset is a subset of the other poset.
INPUT:
• other – a poset or an iterable. In the latter case the iterated objects are seen as elements of a poset.
OUTPUT:
True or False.
Note
If this poset uses a key-function, then all comparisons are performed on the keys of the elements (and not on the elements themselves).
EXAMPLES:
sage: from sage.data_structures.mutable_poset import MutablePoset as MP
sage: P = MP([3, 42, 7]); P
poset(3, 7, 42)
sage: Q = MP([4, 8, 42]); Q
poset(4, 8, 42)
sage: P.is_subset(Q)
False
sage: Q.is_subset(P)
False
sage: P.is_subset(P)
True
sage: P.is_subset(P.union(Q))
True
is_superset(other)#
Return whether this poset contains another poset, i.e., whether this poset is a superset of the other poset.
INPUT:
• other – a poset or an iterable. In the latter case the iterated objects are seen as elements of a poset.
OUTPUT:
True or False.
Note
If this poset uses a key-function, then all comparisons are performed on the keys of the elements (and not on the elements themselves).
EXAMPLES:
sage: from sage.data_structures.mutable_poset import MutablePoset as MP
sage: P = MP([3, 42, 7]); P
poset(3, 7, 42)
sage: Q = MP([4, 8, 42]); Q
poset(4, 8, 42)
sage: P.is_superset(Q)
False
sage: Q.is_superset(P)
False
sage: P.is_superset(P)
True
sage: P.union(Q).is_superset(P)
True
isdisjoint(other)#
Alias of is_disjoint().
issubset(other)#
Alias of is_subset().
issuperset(other)#
Alias of is_superset().
keys(**kwargs)#
Return an iterator over all keys of the elements.
INPUT:
OUTPUT:
An iterator.
EXAMPLES:
sage: from sage.data_structures.mutable_poset import MutablePoset as MP
sage: P = MP([3, 42, 7], key=lambda c: -c)
sage: [(v, type(v)) for v in sorted(P.keys())]
[(-42, <class 'sage.rings.integer.Integer'>),
(-7, <class 'sage.rings.integer.Integer'>),
(-3, <class 'sage.rings.integer.Integer'>)]
sage: [(v, type(v)) for v in sorted(P.elements())]
[(3, <class 'sage.rings.integer.Integer'>),
(7, <class 'sage.rings.integer.Integer'>),
(42, <class 'sage.rings.integer.Integer'>)]
sage: [(v, type(v)) for v in sorted(P.shells(),
....: key=lambda c: c.element)]
[(3, <class 'sage.data_structures.mutable_poset.MutablePosetShell'>),
(7, <class 'sage.data_structures.mutable_poset.MutablePosetShell'>),
(42, <class 'sage.data_structures.mutable_poset.MutablePosetShell'>)]
keys_topological(**kwargs)#
Return an iterator over all keys of the elements in topological order.
INPUT:
OUTPUT:
An iterator.
EXAMPLES:
sage: from sage.data_structures.mutable_poset import MutablePoset as MP
sage: P = MP([(1, 1), (2, 1), (4, 4)],
....: key=lambda c: c[0])
sage: [(v, type(v)) for v in P.keys_topological(key=repr)]
[(1, <class 'sage.rings.integer.Integer'>),
(2, <class 'sage.rings.integer.Integer'>),
(4, <class 'sage.rings.integer.Integer'>)]
sage: [(v, type(v)) for v in P.elements_topological(key=repr)]
[((1, 1), <... 'tuple'>),
((2, 1), <... 'tuple'>),
((4, 4), <... 'tuple'>)]
sage: [(v, type(v)) for v in P.shells_topological(key=repr)]
[((1, 1), <class 'sage.data_structures.mutable_poset.MutablePosetShell'>),
((2, 1), <class 'sage.data_structures.mutable_poset.MutablePosetShell'>),
((4, 4), <class 'sage.data_structures.mutable_poset.MutablePosetShell'>)]
map(function, topological=False, reverse=False)#
Apply the given function to each element of this poset.
INPUT:
• function – a function mapping an existing element to a new element.
• topological – (default: False) if set, then the mapping is done in topological order, otherwise unordered.
• reverse – is passed on to topological ordering.
OUTPUT:
Nothing.
Note
Since this method works inplace, it is not allowed that function alters the key of an element.
Note
If function returns None, then the element is removed.
EXAMPLES:
sage: from sage.data_structures.mutable_poset import MutablePoset as MP
sage: class T(tuple):
....: def __le__(left, right):
....: return all(l <= r for l, r in zip(left, right))
sage: P = MP([T((1, 3)), T((2, 1)),
....: T((4, 4)), T((1, 2)), T((2, 2))],
....: key=lambda e: e[:2])
sage: P.map(lambda e: e + (sum(e),))
sage: P
poset((1, 2, 3), (1, 3, 4), (2, 1, 3), (2, 2, 4), (4, 4, 8))
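The in-place semantics of map(), including removal on a None result and the requirement that keys stay fixed, can be mimicked over a key-indexed dict. A sketch under those assumptions (`map_inplace` is a made-up helper):

```python
def map_inplace(store, function):
    """Apply function to every stored element in place; a result of
    None removes the element (the key must not change)."""
    for k in list(store):        # copy keys: we may delete while iterating
        new = function(store[k])
        if new is None:
            del store[k]
        else:
            store[k] = new

key = lambda e: e[:2]            # key is the first two entries
P = {key(e): e for e in [(1, 3), (2, 1), (4, 4)]}
map_inplace(P, lambda e: e + (sum(e),))   # append the sum; keys untouched
map_inplace(P, lambda e: None if e[0] == 4 else e)  # drop (4, 4, 8)
```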
mapped(function, topological=False, reverse=False)#
Return a poset where on each element the given function was applied.
INPUT:
• function – a function mapping an existing element to a new element.
• topological – (default: False) if set, then the mapping is done in topological order, otherwise unordered.
• reverse – is passed on to topological ordering.
OUTPUT:
A poset.
Note
function is not allowed to change the order of the keys, but changing the keys themselves is allowed (in contrast to map()).
EXAMPLES:
sage: from sage.data_structures.mutable_poset import MutablePoset as MP
sage: class T(tuple):
....: def __le__(left, right):
....: return all(l <= r for l, r in zip(left, right))
sage: P = MP([T((1, 3)), T((2, 1)),
....: T((4, 4)), T((1, 2)), T((2, 2))])
sage: P.mapped(lambda e: str(e))
poset('(1, 2)', '(1, 3)', '(2, 1)', '(2, 2)', '(4, 4)')
maximal_elements()#
Return an iterator over the maximal elements of this poset.
INPUT:
Nothing.
OUTPUT:
An iterator.
EXAMPLES:
sage: from sage.data_structures.mutable_poset import MutablePoset as MP
sage: class T(tuple):
....: def __le__(left, right):
....: return all(l <= r for l, r in zip(left, right))
sage: P = MP([T((1, 1)), T((1, 3)), T((2, 1)),
....: T((1, 2)), T((2, 2))])
sage: sorted(P.maximal_elements())
[(1, 3), (2, 2)]
merge(key=None, reverse=False)#
Merge the given element with its successors/predecessors.
INPUT:
• key – the key specifying an element or None (default), in which case this method is called on each element in this poset.
• reverse – (default: False) specifies which direction to go first: False searches towards 'oo' and True searches towards 'null'. When key=None, then this also specifies which elements are merged first.
OUTPUT:
Nothing.
This method tests all (not necessarily direct) successors and predecessors of the given element whether they can be merged with the element itself. This is done by the can_merge-function of MutablePoset. If this merge is possible, then it is performed by calling MutablePoset’s merge-function and the corresponding successor/predecessor is removed from the poset.
Note
can_merge is applied in the sense of the condition of depth first iteration, i.e., once can_merge fails, the successors/predecessors are no longer tested.
Note
The motivation for such a merge behavior comes from asymptotic expansions: $$O(n^3)$$ merges with, for example, $$3n^2$$ or $$O(n)$$ to $$O(n^3)$$ (as $$n$$ tends to $$\infty$$; see Wikipedia article Big_O_notation).
EXAMPLES:
sage: from sage.data_structures.mutable_poset import MutablePoset as MP
sage: class T(tuple):
....: def __le__(left, right):
....: return all(l <= r for l, r in zip(left, right))
sage: key = lambda t: T(t[0:2])
sage: def add(left, right):
....:     return (left[0], left[1],
....:             ''.join(sorted(left[2] + right[2])))
sage: def can_add(left, right):
....:     return key(left) >= key(right)
sage: P = MP([(1, 1, 'a'), (1, 3, 'b'), (2, 1, 'c'),
....:         (4, 4, 'd'), (1, 2, 'e'), (2, 2, 'f')],
....:        key=key, merge=add, can_merge=can_add)
sage: Q.merge(T((1, 3)))
sage: print(Q.repr_full(reverse=True))
poset((4, 4, 'd'), (1, 3, 'abe'), (2, 2, 'f'), (2, 1, 'c'))
+-- oo
| +-- no successors
| +-- predecessors: (4, 4, 'd')
+-- (4, 4, 'd')
| +-- successors: oo
| +-- predecessors: (1, 3, 'abe'), (2, 2, 'f')
+-- (1, 3, 'abe')
| +-- successors: (4, 4, 'd')
| +-- predecessors: null
+-- (2, 2, 'f')
| +-- successors: (4, 4, 'd')
| +-- predecessors: (2, 1, 'c')
+-- (2, 1, 'c')
| +-- successors: (2, 2, 'f')
| +-- predecessors: null
+-- null
| +-- successors: (1, 3, 'abe'), (2, 1, 'c')
| +-- no predecessors
sage: for k in sorted(P.keys()):
....: Q = copy(P)
....: Q.merge(k)
....: print('merging %s: %s' % (k, Q))
merging (1, 1): poset((1, 1, 'a'), (1, 2, 'e'), (1, 3, 'b'),
(2, 1, 'c'), (2, 2, 'f'), (4, 4, 'd'))
merging (1, 2): poset((1, 2, 'ae'), (1, 3, 'b'), (2, 1, 'c'),
(2, 2, 'f'), (4, 4, 'd'))
merging (1, 3): poset((1, 3, 'abe'), (2, 1, 'c'), (2, 2, 'f'),
(4, 4, 'd'))
merging (2, 1): poset((1, 2, 'e'), (1, 3, 'b'), (2, 1, 'ac'),
(2, 2, 'f'), (4, 4, 'd'))
merging (2, 2): poset((1, 3, 'b'), (2, 2, 'acef'), (4, 4, 'd'))
merging (4, 4): poset((4, 4, 'abcdef'))
sage: Q = copy(P)
sage: Q.merge(); Q
poset((4, 4, 'abcdef'))
minimal_elements()#
Return an iterator over the minimal elements of this poset.
INPUT:
Nothing.
OUTPUT:
An iterator.
EXAMPLES:
sage: from sage.data_structures.mutable_poset import MutablePoset as MP
sage: class T(tuple):
....: def __le__(left, right):
....: return all(l <= r for l, r in zip(left, right))
sage: P = MP([T((1, 3)), T((2, 1)),
....: T((4, 4)), T((1, 2)), T((2, 2))])
sage: sorted(P.minimal_elements())
[(1, 2), (2, 1)]
property null#
The shell $$\emptyset$$ whose element is smaller than any other element.
EXAMPLES:
sage: from sage.data_structures.mutable_poset import MutablePoset as MP
sage: P = MP()
sage: z = P.null; z
null
sage: z.is_null()
True
property oo#
The shell $$\infty$$ whose element is larger than any other element.
EXAMPLES:
sage: from sage.data_structures.mutable_poset import MutablePoset as MP
sage: P = MP()
sage: oo = P.oo; oo
oo
sage: oo.is_oo()
True
pop(**kwargs)#
Remove and return an arbitrary poset element.
INPUT:
OUTPUT:
An object.
Note
The special elements 'null' and 'oo' cannot be popped.
EXAMPLES:
sage: from sage.data_structures.mutable_poset import MutablePoset as MP
sage: P = MP()
sage: P.add(3)
sage: P
poset(3)
sage: P.pop()
3
sage: P
poset()
sage: P.pop()
Traceback (most recent call last):
...
KeyError: 'pop from an empty poset'
remove(key, raise_key_error=True)#
Remove the given object from the poset.
INPUT:
• key – the key of an object.
• raise_key_error – (default: True) switch raising KeyError on and off.
OUTPUT:
Nothing.
If the element is not a member and raise_key_error is set (default), raise a KeyError.
Note
As with Python’s set, the methods remove() and discard() only differ in their behavior when an element is not contained in the poset: remove() raises a KeyError whereas discard() does not raise any exception.
This default behavior can be overridden with the raise_key_error parameter.
EXAMPLES:
sage: from sage.data_structures.mutable_poset import MutablePoset as MP
sage: class T(tuple):
....: def __le__(left, right):
....: return all(l <= r for l, r in zip(left, right))
sage: P = MP([T((1, 1)), T((1, 3)), T((2, 1)),
....: T((4, 4)), T((1, 2)), T((2, 2))])
sage: print(P.repr_full(reverse=True))
poset((4, 4), (1, 3), (2, 2), (1, 2), (2, 1), (1, 1))
+-- oo
| +-- no successors
| +-- predecessors: (4, 4)
+-- (4, 4)
| +-- successors: oo
| +-- predecessors: (1, 3), (2, 2)
+-- (1, 3)
| +-- successors: (4, 4)
| +-- predecessors: (1, 2)
+-- (2, 2)
| +-- successors: (4, 4)
| +-- predecessors: (1, 2), (2, 1)
+-- (1, 2)
| +-- successors: (1, 3), (2, 2)
| +-- predecessors: (1, 1)
+-- (2, 1)
| +-- successors: (2, 2)
| +-- predecessors: (1, 1)
+-- (1, 1)
| +-- successors: (1, 2), (2, 1)
| +-- predecessors: null
+-- null
| +-- successors: (1, 1)
| +-- no predecessors
sage: P.remove(T((1, 2)))
sage: print(P.repr_full(reverse=True))
poset((4, 4), (1, 3), (2, 2), (2, 1), (1, 1))
+-- oo
| +-- no successors
| +-- predecessors: (4, 4)
+-- (4, 4)
| +-- successors: oo
| +-- predecessors: (1, 3), (2, 2)
+-- (1, 3)
| +-- successors: (4, 4)
| +-- predecessors: (1, 1)
+-- (2, 2)
| +-- successors: (4, 4)
| +-- predecessors: (2, 1)
+-- (2, 1)
| +-- successors: (2, 2)
| +-- predecessors: (1, 1)
+-- (1, 1)
| +-- successors: (1, 3), (2, 1)
| +-- predecessors: null
+-- null
| +-- successors: (1, 1)
| +-- no predecessors
repr(include_special=False, reverse=False)#
Return a representation of the poset.
INPUT:
• include_special – (default: False) a boolean indicating whether to include the special elements 'null' and 'oo' or not.
• reverse – (default: False) a boolean. If set, then largest elements are displayed first.
OUTPUT:
A string.
repr_full(reverse=False)#
Return a representation with ordering details of the poset.
INPUT:
• reverse – (default: False) a boolean. If set, then largest elements are displayed first.
OUTPUT:
A string.
shell(key)#
Return the shell of the element corresponding to key.
INPUT:
• key – the key of an object.
OUTPUT:
An instance of MutablePosetShell.
Note
Each element is contained/encapsulated in a shell inside the poset.
EXAMPLES:
sage: from sage.data_structures.mutable_poset import MutablePoset as MP
sage: P = MP()
sage: P.add(42)
sage: e = P.shell(42); e
42
sage: type(e)
<class 'sage.data_structures.mutable_poset.MutablePosetShell'>
shells(include_special=False)#
Return an iterator over all shells.
INPUT:
• include_special – (default: False) if set, then including shells containing a smallest element ($$\emptyset$$) and a largest element ($$\infty$$).
OUTPUT:
An iterator.
Note
Each element is contained/encapsulated in a shell inside the poset.
EXAMPLES:
sage: from sage.data_structures.mutable_poset import MutablePoset as MP
sage: P = MP()
sage: tuple(P.shells())
()
sage: tuple(P.shells(include_special=True))
(null, oo)
shells_topological(include_special=False, reverse=False, key=None)#
Return an iterator over all shells in topological order.
INPUT:
• include_special – (default: False) if set, then including shells containing a smallest element ($$\emptyset$$) and a largest element ($$\infty$$).
• reverse – (default: False) if set, reverses the order, i.e., False gives smallest elements first, True gives largest first.
• key – (default: None) a function used for sorting the direct successors of a shell (used in case of a tie). If this is None, no sorting occurs.
OUTPUT:
An iterator.
Note
Each element is contained/encapsulated in a shell inside the poset.
EXAMPLES:
sage: from sage.data_structures.mutable_poset import MutablePoset as MP
sage: class T(tuple):
....: def __le__(left, right):
....: return all(l <= r for l, r in zip(left, right))
sage: P = MP([T((1, 1)), T((1, 3)), T((2, 1)),
....: T((4, 4)), T((1, 2)), T((2, 2))])
sage: list(P.shells_topological(key=repr))
[(1, 1), (1, 2), (1, 3), (2, 1), (2, 2), (4, 4)]
sage: list(P.shells_topological(reverse=True, key=repr))
[(4, 4), (1, 3), (2, 2), (1, 2), (2, 1), (1, 1)]
sage: list(P.shells_topological(include_special=True, key=repr))
[null, (1, 1), (1, 2), (1, 3), (2, 1), (2, 2), (4, 4), oo]
sage: list(P.shells_topological(
....: include_special=True, reverse=True, key=repr))
[oo, (4, 4), (1, 3), (2, 2), (1, 2), (2, 1), (1, 1), null]
symmetric_difference(other)#
Return the symmetric difference of two posets as a new poset.
INPUT:
• other – a poset.
OUTPUT:
A poset.
Note
The key of an element is used for comparison. Thus elements with the same key are considered as equal.
EXAMPLES:
sage: from sage.data_structures.mutable_poset import MutablePoset as MP
sage: P = MP([3, 42, 7]); P
poset(3, 7, 42)
sage: Q = MP([4, 8, 42]); Q
poset(4, 8, 42)
sage: P.symmetric_difference(Q)
poset(3, 4, 7, 8)
symmetric_difference_update(other)#
Update this poset with the symmetric difference of itself and another poset.
INPUT:
• other – a poset.
OUTPUT:
Nothing.
Note
The key of an element is used for comparison. Thus elements with the same key are considered as equal; A.symmetric_difference_update(B) and B.symmetric_difference_update(A) might result in different posets.
EXAMPLES:
sage: from sage.data_structures.mutable_poset import MutablePoset as MP
sage: P = MP([3, 42, 7]); P
poset(3, 7, 42)
sage: Q = MP([4, 8, 42]); Q
poset(4, 8, 42)
sage: P.symmetric_difference_update(Q)
sage: P
poset(3, 4, 7, 8)
union(*other)#
Return the union of the given posets as a new poset.
INPUT:
• other – a poset or an iterable. In the latter case the iterated objects are seen as elements of a poset. It is possible to specify more than one other as variadic arguments (arbitrary argument lists).
OUTPUT:
A poset.
Note
The key of an element is used for comparison. Thus elements with the same key are considered as equal.
Due to keys and a merge function (see MutablePoset) this operation might not be commutative.
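The non-commutativity caused by the merge hook can be seen with a plain dict model of key-based insertion. In this sketch (helper names invented; the merge deliberately keeps the existing element), both unions contain the same key, but different elements survive:

```python
def union_update(store, elements, key, merge):
    """Insert each element; on a key collision keep merge(existing, new)."""
    for e in elements:
        k = key(e)
        store[k] = merge(store[k], e) if k in store else e

key = lambda e: e[0]
keep_left = lambda left, right: left   # merge keeping the existing element

A = {}
union_update(A, [(42, 'from A')], key, keep_left)
B = {}
union_update(B, [(42, 'from B')], key, keep_left)

AB = dict(A); union_update(AB, B.values(), key, keep_left)
BA = dict(B); union_update(BA, A.values(), key, keep_left)
# same key 42 on both sides, but different surviving elements
```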
Todo
Use the already existing information in the other poset to speed up this function. (At the moment each element of the other poset is inserted one by one and without using this information.)
EXAMPLES:
sage: from sage.data_structures.mutable_poset import MutablePoset as MP
sage: P = MP([3, 42, 7]); P
poset(3, 7, 42)
sage: Q = MP([4, 8, 42]); Q
poset(4, 8, 42)
sage: P.union(Q)
poset(3, 4, 7, 8, 42)
union_update(*other)#
Update this poset with the union of itself and another poset.
INPUT:
• other – a poset or an iterable. In the latter case the iterated objects are seen as elements of a poset. It is possible to specify more than one other as variadic arguments (arbitrary argument lists).
OUTPUT:
Nothing.
Note
The key of an element is used for comparison. Thus elements with the same key are considered as equal; A.union_update(B) and B.union_update(A) might result in different posets.
Todo
Use the already existing information in the other poset to speed up this function. (At the moment each element of the other poset is inserted one by one and without using this information.)
EXAMPLES:
sage: from sage.data_structures.mutable_poset import MutablePoset as MP
sage: P = MP([3, 42, 7]); P
poset(3, 7, 42)
sage: Q = MP([4, 8, 42]); Q
poset(4, 8, 42)
sage: P.union_update(Q)
sage: P
poset(3, 4, 7, 8, 42)
update(*other)#
Alias of union_update().
class sage.data_structures.mutable_poset.MutablePosetShell(poset, element)#
Bases: SageObject
A shell for an element of a mutable poset.
INPUT:
• poset – the poset to which this shell belongs.
• element – the element which should be contained/encapsulated in this shell.
OUTPUT:
A shell for the given element.
Note
If the element() of a shell is None, then this element is considered as “special” (see is_special()). There are two special elements, namely
• a 'null' (an element smaller than each other element; it has no predecessors) and
• an 'oo' (an element larger than each other element; it has no successors).
EXAMPLES:
sage: from sage.data_structures.mutable_poset import MutablePoset as MP
sage: P = MP()
sage: P.add(66)
sage: P
poset(66)
sage: s = P.shell(66)
sage: type(s)
<class 'sage.data_structures.mutable_poset.MutablePosetShell'>
property element#
The element contained in this shell.
eq(other)#
Return whether this shell is equal to other.
INPUT:
• other – a shell.
OUTPUT:
True or False.
Note
This method compares the keys of the elements contained in the (non-special) shells. In particular, elements/shells with the same key are considered as equal.
is_null()#
Return whether this shell contains the null-element, i.e., the element smaller than any possible other element.
OUTPUT:
True or False.
is_oo()#
Return whether this shell contains the infinity-element, i.e., the element larger than any possible other element.
OUTPUT:
True or False.
is_special()#
Return whether this shell contains either the null-element, i.e., the element smaller than any possible other element or the infinity-element, i.e., the element larger than any possible other element.
INPUT:
Nothing.
OUTPUT:
True or False.
iter_depth_first(reverse=False, key=None, condition=None)#
Iterate over all shells in depth first order.
INPUT:
• reverse – (default: False) if set, reverses the order, i.e., False searches towards 'oo' and True searches towards 'null'.
• key – (default: None) a function used for sorting the direct successors of a shell (used in case of a tie). If this is None, no sorting occurs.
• condition – (default: None) a function mapping a shell to True (include in iteration) or False (do not include). None is equivalent to a function returning always True. Note that the iteration does not go beyond a not included shell.
OUTPUT:
An iterator.
Note
The depth first search starts at this (self) shell. Thus only this shell and shells greater than (in case of reverse=False) this shell are visited.
ALGORITHM:
See Wikipedia article Depth-first_search.
EXAMPLES:
sage: from sage.data_structures.mutable_poset import MutablePoset as MP
sage: class T(tuple):
....: def __le__(left, right):
....: return all(l <= r for l, r in zip(left, right))
sage: P = MP([T((1, 1)), T((1, 3)), T((2, 1)),
....: T((4, 4)), T((1, 2)), T((2, 2))])
sage: list(P.null.iter_depth_first(reverse=False, key=repr))
[null, (1, 1), (1, 2), (1, 3), (4, 4), oo, (2, 2), (2, 1)]
sage: list(P.oo.iter_depth_first(reverse=True, key=repr))
[oo, (4, 4), (1, 3), (1, 2), (1, 1), null, (2, 2), (2, 1)]
sage: list(P.null.iter_depth_first(
....: condition=lambda s: s.element[0] == 1))
[null, (1, 1), (1, 2), (1, 3)]
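The traversal above can be sketched in plain Python over any successor relation (an illustration only; the real method also honors reverse and sorts ties with key):

```python
def iter_depth_first(start, successors, condition=lambda n: True):
    """Yield nodes reachable from `start` in depth-first order.

    `successors` maps a node to its direct successors; a node failing
    `condition` is skipped, and the search does not go beyond it.
    """
    if not condition(start):
        return
    seen = {start}
    stack = [start]
    while stack:
        node = stack.pop()
        yield node
        # push in reverse so the first successor is explored first
        for nxt in reversed(list(successors(node))):
            if nxt not in seen and condition(nxt):
                seen.add(nxt)
                stack.append(nxt)

# a tiny diamond-shaped poset: null < a, b < top
edges = {'null': ['a', 'b'], 'a': ['top'], 'b': ['top'], 'top': []}
print(list(iter_depth_first('null', lambda n: edges[n])))
```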
iter_topological(reverse=False, key=None, condition=None)#
Iterate over all shells in topological order.
INPUT:
• reverse – (default: False) if set, reverses the order, i.e., False searches towards 'oo' and True searches towards 'null'.
• key – (default: None) a function used for sorting the direct predecessors of a shell (used in case of a tie). If this is None, no sorting occurs.
• condition – (default: None) a function mapping a shell to True (include in iteration) or False (do not include). None is equivalent to a function returning always True. Note that the iteration does not go beyond a not included shell.
OUTPUT:
An iterator.
Note
The topological search will only find shells smaller than (in case of reverse=False) or equal to this (self) shell. This is in contrast to iter_depth_first().
ALGORITHM:
Here a simplified version of the algorithm found in [Tar1976] and [CLRS2001] is used. See also Wikipedia article Topological_sorting.
EXAMPLES:
sage: from sage.data_structures.mutable_poset import MutablePoset as MP
sage: class T(tuple):
....: def __le__(left, right):
....: return all(l <= r for l, r in zip(left, right))
sage: P = MP([T((1, 1)), T((1, 3)), T((2, 1)),
....: T((4, 4)), T((1, 2)), T((2, 2))])
sage: for e in P.shells_topological(include_special=True,
....: reverse=True, key=repr):
....: print(e)
....: print(list(e.iter_topological(reverse=True, key=repr)))
oo
[oo]
(4, 4)
[oo, (4, 4)]
(1, 3)
[oo, (4, 4), (1, 3)]
(2, 2)
[oo, (4, 4), (2, 2)]
(1, 2)
[oo, (4, 4), (1, 3), (2, 2), (1, 2)]
(2, 1)
[oo, (4, 4), (2, 2), (2, 1)]
(1, 1)
[oo, (4, 4), (1, 3), (2, 2), (1, 2), (2, 1), (1, 1)]
null
[oo, (4, 4), (1, 3), (2, 2), (1, 2), (2, 1), (1, 1), null]
sage: for e in P.shells_topological(include_special=True,
....: reverse=True, key=repr):
....: print(e)
....: print(list(e.iter_topological(reverse=False, key=repr)))
oo
[null, (1, 1), (1, 2), (1, 3), (2, 1), (2, 2), (4, 4), oo]
(4, 4)
[null, (1, 1), (1, 2), (1, 3), (2, 1), (2, 2), (4, 4)]
(1, 3)
[null, (1, 1), (1, 2), (1, 3)]
(2, 2)
[null, (1, 1), (1, 2), (2, 1), (2, 2)]
(1, 2)
[null, (1, 1), (1, 2)]
(2, 1)
[null, (1, 1), (2, 1)]
(1, 1)
[null, (1, 1)]
null
[null]
sage: list(P.null.iter_topological(
....: reverse=True, condition=lambda s: s.element[0] == 1,
....: key=repr))
[(1, 3), (1, 2), (1, 1), null]
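The essence of the DFS-based topological sort from [Tar1976]/[CLRS2001] can be sketched in plain Python (a simplified illustration ignoring reverse, key and condition):

```python
def iter_topological(node, predecessors):
    """Yield every node below-or-equal to `node`, predecessors first,
    so the output is in topological order and ends at `node`."""
    seen = set()

    def visit(n):
        if n in seen:
            return
        seen.add(n)
        for p in predecessors(n):
            yield from visit(p)
        yield n

    yield from visit(node)

# the same diamond poset, expressed via direct predecessors
preds = {'top': ['a', 'b'], 'a': ['null'], 'b': ['null'], 'null': []}
print(list(iter_topological('top', lambda n: preds[n])))
```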
property key#
The key of the element contained in this shell.
The key of an element is determined by the mutable poset (the parent) via the key-function (see construction of a MutablePoset).
le(other, reverse=False)#
Return whether this shell is less than or equal to other.
INPUT:
• other – a shell.
• reverse – (default: False) if set, then return whether this shell is greater than or equal to other.
OUTPUT:
True or False.
Note
The comparison of the shells is based on the comparison of the keys of the elements contained in the shells, except for special shells (see MutablePosetShell).
lower_covers(shell, reverse=False)#
Return the lower covers of the specified shell; the search is started at this (self) shell.
A lower cover of x is an element y of the poset such that y < x and there is no element z of the poset so that y < z < x.
INPUT:
• shell – the shell for which to find the covering shells. shell need not be contained in the poset; if it is, the methods predecessors() and successors() are more efficient.
• reverse – (default: False) if set, then find the upper covers (see also upper_covers()) instead of the lower covers.
OUTPUT:
A set of shells.
Note
Suppose reverse is False. This method starts at the calling shell (self) and searches towards 'oo'. Thus, only shells which are (not necessarily direct) successors of this shell are considered.
If reverse is True, then the reverse direction is taken.
EXAMPLES:
sage: from sage.data_structures.mutable_poset import MutablePoset as MP
sage: class T(tuple):
....: def __le__(left, right):
....: return all(l <= r for l, r in zip(left, right))
sage: P = MP([T((1, 1)), T((1, 3)), T((2, 1)),
....: T((4, 4)), T((1, 2)), T((2, 2))])
sage: e = P.shell(T((2, 2))); e
(2, 2)
sage: sorted(P.null.lower_covers(e),
....: key=lambda c: repr(c.element))
[(1, 2), (2, 1)]
sage: set(_) == e.predecessors()
True
sage: sorted(P.oo.upper_covers(e),
....: key=lambda c: repr(c.element))
[(4, 4)]
sage: set(_) == e.successors()
True
sage: Q = MP([T((3, 2))])
sage: f = next(Q.shells())
sage: sorted(P.null.lower_covers(f),
....: key=lambda c: repr(c.element))
[(2, 2)]
sage: sorted(P.oo.upper_covers(f),
....: key=lambda c: repr(c.element))
[(4, 4)]
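For a finite set of elements, the cover relation itself is easy to state in plain Python (a sketch of the definition, not of the shell-based search the method actually performs):

```python
def lower_covers(elements, le, x):
    """Maximal elements strictly below x: the y with y < x such that
    no z satisfies y < z < x."""
    strictly_below = [y for y in elements if le(y, x) and y != x]
    return [y for y in strictly_below
            if not any(le(y, z) and z != y for z in strictly_below)]

# componentwise order on tuples, as in the examples above
le_tuple = lambda l, r: all(a <= b for a, b in zip(l, r))
elements = [(1, 1), (1, 3), (2, 1), (4, 4), (1, 2), (2, 2)]
print(sorted(lower_covers(elements, le_tuple, (2, 2))))  # [(1, 2), (2, 1)]
```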
merge(element, check=True, delete=True)#
Merge the given element with the element contained in this shell.
INPUT:
• element – an element (of the poset).
• check – (default: True) if set, then the can_merge-function of MutablePoset determines whether the merge is possible. can_merge is None means that this check is always passed.
• delete – (default: True) if set, then element is removed from the poset after the merge.
OUTPUT:
Nothing.
Note
This operation depends on the parameters merge and can_merge of the MutablePoset this shell is contained in. These parameters are defined when the poset is constructed.
Note
If the merge function returns None, then this shell is removed from the poset.
EXAMPLES:
sage: from sage.data_structures.mutable_poset import MutablePoset as MP
sage: def add(left, right):
....:     return (left[0], ''.join(sorted(left[1] + right[1])))
sage: def can_add(left, right):
....:     return left[0] <= right[0]
sage: P = MP([(1, 'a'), (3, 'b'), (2, 'c'), (4, 'd')],
....:        key=lambda c: c[0], merge=add, can_merge=can_add)
sage: P
poset((1, 'a'), (2, 'c'), (3, 'b'), (4, 'd'))
sage: P.shell(2).merge((3, 'b'))
sage: P
poset((1, 'a'), (2, 'bc'), (4, 'd'))
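In plain Python the check-then-combine step looks roughly like this (merge_elements is a hypothetical helper illustrating the semantics of the merge and can_merge parameters, not the Sage implementation):

```python
def merge_elements(this, other, merge, can_merge, check=True):
    """Combine `other` into `this`; `can_merge` guards the operation."""
    if check and can_merge is not None and not can_merge(this, other):
        raise RuntimeError('cannot merge these elements')
    return merge(this, other)

# the same merge functions as in the example above
add = lambda left, right: (left[0], ''.join(sorted(left[1] + right[1])))
can_add = lambda left, right: left[0] <= right[0]
print(merge_elements((2, 'c'), (3, 'b'), add, can_add))  # (2, 'bc')
```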
property poset#
The poset to which this shell belongs.
predecessors(reverse=False)#
Return the predecessors of this shell.
INPUT:
• reverse – (default: False) if set, then return successors instead.
OUTPUT:
A set.
successors(reverse=False)#
Return the successors of this shell.
INPUT:
• reverse – (default: False) if set, then return predecessors instead.
OUTPUT:
A set.
upper_covers(shell, reverse=False)#
Return the upper covers of the specified shell; the search is started at this (self) shell.
An upper cover of x is an element y of the poset such that x < y and there is no element z of the poset so that x < z < y.
INPUT:
• shell – the shell for which to find the covering shells. shell need not be contained in the poset; if it is, the methods predecessors() and successors() are more efficient.
• reverse – (default: False) if set, then find the lower covers (see also lower_covers()) instead of the upper covers.
OUTPUT:
A set of shells.
Note
Suppose reverse is False. This method starts at the calling shell (self) and searches towards 'null'. Thus, only shells which are (not necessarily direct) predecessors of this shell are considered.
If reverse is True, then the reverse direction is taken.
EXAMPLES:
sage: from sage.data_structures.mutable_poset import MutablePoset as MP
sage: class T(tuple):
....: def __le__(left, right):
....: return all(l <= r for l, r in zip(left, right))
sage: P = MP([T((1, 1)), T((1, 3)), T((2, 1)),
....: T((4, 4)), T((1, 2)), T((2, 2))])
sage: e = P.shell(T((2, 2))); e
(2, 2)
sage: sorted(P.null.lower_covers(e),
....: key=lambda c: repr(c.element))
[(1, 2), (2, 1)]
sage: set(_) == e.predecessors()
True
sage: sorted(P.oo.upper_covers(e),
....: key=lambda c: repr(c.element))
[(4, 4)]
sage: set(_) == e.successors()
True
sage: Q = MP([T((3, 2))])
sage: f = next(Q.shells())
sage: sorted(P.null.lower_covers(f),
....: key=lambda c: repr(c.element))
[(2, 2)]
sage: sorted(P.oo.upper_covers(f),
....: key=lambda c: repr(c.element))
[(4, 4)]
sage.data_structures.mutable_poset.is_MutablePoset(P)#
Test whether P inherits from MutablePoset.
https://chemistryhelpforum.com/threads/heat-capacity.21104/

# Heat Capacity
#### ashleyirene
1. The molar heat capacity is similar to the specific heat, but it is given in units J/mol*degrees Celsius instead of J/g*degrees Celsius. Calculate the molar heat capacity of gold (MW 197.0 g/mol) and water (MW 18.02 g/mol)
2. 1.000 L of boiling water is added to 3.000 L of water at 25.00 degrees Celsius. What will be the final temperature? (Hint: write expressions for the heat gained by each, with Tf as a variable, and solve for Tf.) (Double hint: losing heat is the same thing as gaining a negative amount of heat.)
3. How many grams of propane would you have to burn to bring the water in Example 1 to the boiling point?
Example 1 is: 0.800 g of propane (C3H8) is burned in a flame. All the heat released is absorbed by a bucket containing 3.500 kg of water. The temperature of water increases by 2.531 degrees Celsius.
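For question 1, the conversion is just specific heat times molar mass. A minimal sketch (the specific heats 0.129 J/g·°C for gold and 4.184 J/g·°C for water are assumed textbook values, not given in the thread):

```python
def molar_heat_capacity(specific_heat, molar_mass):
    # J/(g*C) times g/mol gives J/(mol*C)
    return specific_heat * molar_mass

print(round(molar_heat_capacity(0.129, 197.0), 1))   # gold: ~25.4 J/mol*C
print(round(molar_heat_capacity(4.184, 18.02), 1))   # water: ~75.4 J/mol*C
```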
#### bjhopper
heat capacity
1 liter of water @ 100 C weighs 958 g
3 liters of water @ 25 C weigh 996 * 3 = 2988 g
Wt of mix = 3946 g
Heat in boiling water above 25 C: 958 * 75 * 4.216 joules
This equals the heat in the mix above 25 C:
3946 * 4.216 * (t - 25)
t = 43.2 C
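The balance above can be checked quickly in Python. Since the specific heat cancels, the final temperature is just the mass-weighted mean temperature (the density figures 958 g/L at 100 C and 996 g/L at 25 C are the ones assumed in the reply):

```python
def final_temp(m_hot, t_hot, m_cold, t_cold):
    """Solve m_hot*(t_hot - tf) = m_cold*(tf - t_cold) for tf;
    the specific heat appears on both sides and cancels."""
    return (m_hot * t_hot + m_cold * t_cold) / (m_hot + m_cold)

# masses from the densities used in the reply
print(round(final_temp(958.0, 100.0, 3 * 996.0, 25.0), 1))  # 43.2
```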
https://devel.isa-afp.org/browser_info/current/AFP/Goedel_HFSet_Semanticless/Goedel_I.html

# Theory Goedel_I
chapter ‹Section 6 Material and Gödel's First Incompleteness Theorem›
theory Goedel_I
imports Pf_Predicates Functions II_Prelims
begin
section‹The Function W and Lemma 6.1›
subsection‹Predicate form, defined on sequences›
nominal_function SeqWRP :: "tm ⇒ tm ⇒ tm ⇒ fm"
where "⟦atom l ♯ (s,k,sl); atom sl ♯ (s)⟧ ⟹
SeqWRP s k y = LstSeqP s k y AND
HPair Zero Zero IN s AND
All2 l k (Ex sl (HPair (Var l) (Var sl) IN s AND
HPair (SUCC (Var l)) (Q_Succ (Var sl)) IN s))"
by (auto simp: eqvt_def SeqWRP_graph_aux_def flip_fresh_fresh) (metis obtain_fresh)
nominal_termination (eqvt)
by lexicographic_order
lemma
shows SeqWRP_fresh_iff [simp]: "a ♯ SeqWRP s k y ⟷ a ♯ s ∧ a ♯ k ∧ a ♯ y" (is ?thesis1)
and SeqWRP_sf [iff]: "Sigma_fm (SeqWRP s k y)" (is ?thsf)
and SeqWRP_imp_OrdP: "{SeqWRP s k t} ⊢ OrdP k" (is ?thOrd)
and SeqWRP_LstSeqP: "{SeqWRP s k t} ⊢ LstSeqP s k t" (is ?thlstseq)
proof -
obtain l::name and sl::name where "atom l ♯ (s,k,sl)" "atom sl ♯ (s)"
by (metis obtain_fresh)
thus ?thesis1 ?thsf ?thOrd ?thlstseq
by (auto intro: LstSeqP_OrdP[THEN cut1])
qed
lemma SeqWRP_subst [simp]:
"(SeqWRP s k y)(i::=t) = SeqWRP (subst i t s) (subst i t k) (subst i t y)"
proof -
obtain l::name and sl::name
where "atom l ♯ (s,k,sl,t,i)" "atom sl ♯ (s,k,t,i)"
by (metis obtain_fresh)
thus ?thesis
by (auto simp: SeqWRP.simps [where l=l and sl=sl])
qed
lemma SeqWRP_cong:
assumes "H ⊢ s EQ s'" and "H ⊢ k EQ k'" and "H ⊢ y EQ y'"
shows "H ⊢ SeqWRP s k y IFF SeqWRP s' k' y'"
by (rule P3_cong [OF _ assms], auto)
declare SeqWRP.simps [simp del]
subsection‹Predicate form of W›
nominal_function WRP :: "tm ⇒ tm ⇒ fm"
where "⟦atom s ♯ (x,y)⟧ ⟹
WRP x y = Ex s (SeqWRP (Var s) x y)"
by (auto simp: eqvt_def WRP_graph_aux_def flip_fresh_fresh) (metis obtain_fresh)
nominal_termination (eqvt)
by lexicographic_order
lemma
shows WRP_fresh_iff [simp]: "a ♯ WRP x y ⟷ a ♯ x ∧ a ♯ y" (is ?thesis1)
and sigma_fm_WRP [simp]: "Sigma_fm (WRP x y)" (is ?thsf)
proof -
obtain s::name where "atom s ♯ (x,y)"
by (metis obtain_fresh)
thus ?thesis1 ?thsf
by auto
qed
lemma WRP_subst [simp]: "(WRP x y)(i::=t) = WRP (subst i t x) (subst i t y)"
proof -
obtain s::name where "atom s ♯ (x,y,t,i)"
by (metis obtain_fresh)
thus ?thesis
by (auto simp: WRP.simps [of s])
qed
lemma WRP_cong: "H ⊢ t EQ t' ⟹ H ⊢ u EQ u' ⟹ H ⊢ WRP t u IFF WRP t' u'"
by (rule P2_cong) auto
declare WRP.simps [simp del]
lemma ground_WRP [simp]: "ground_fm (WRP x y) ⟷ ground x ∧ ground y"
by (auto simp: ground_aux_def ground_fm_aux_def supp_conv_fresh)
lemma SeqWRP_Zero: "{} ⊢ SyntaxN.Ex s (SeqWRP (Var s) Zero Zero)"
proof -
obtain l sl :: name where "atom l ♯ (s, sl)" "atom sl ♯ s" by (metis obtain_fresh)
then show ?thesis
apply (subst SeqWRP.simps[of l _ _ sl]; simp)
apply (rule Ex_I[where x="(Eats Zero (HPair Zero Zero))"], simp)
apply (auto intro!: Mem_Eats_I2)
done
qed
lemma WRP_Zero: "{} ⊢ WRP Zero Zero"
by (subst WRP.simps[of undefined]) (auto simp: SeqWRP_Zero)
lemma SeqWRP_HPair_Zero_Zero: "{SeqWRP s k y} ⊢ HPair Zero Zero IN s"
proof -
let ?vs = "(s,k,y)"
obtain l::name and sl::name
where "atom l ♯ (?vs,sl)" "atom sl ♯ (?vs)" by (metis obtain_fresh)
then show ?thesis
by (subst SeqWRP.simps[of l _ _ sl]) auto
qed
lemma SeqWRP_Succ:
assumes "atom s ♯ (s1,k1,y)"
shows "{SeqWRP s1 k1 y} ⊢ SyntaxN.Ex s (SeqWRP (Var s) (SUCC k1) (Q_Succ y))"
proof -
let ?vs = "(s,s1,k1,y)"
obtain l::name and sl::name and l1::name and sl1::name
where atoms:
"atom l ♯ (?vs,sl1,l1,sl)"
"atom sl ♯ (?vs,sl1,l1)"
"atom l1 ♯ (?vs,sl1)"
"atom sl1 ♯ (?vs)"
by (metis obtain_fresh)
let ?hyp = "{RestrictedP s1 (SUCC k1) (Var s), OrdP k1, SeqWRP s1 k1 y}"
show ?thesis
using assms atoms
apply (auto simp: SeqWRP.simps [of l "Var s" _ sl])
apply (rule cut_same [where A="OrdP k1"])
apply (rule SeqWRP_imp_OrdP)
apply (rule cut_same [OF exists_RestrictedP [of s s1 "SUCC k1"]])
apply (rule AssumeH Ex_EH Conj_EH | simp)+
apply (rule Ex_I [where x="Eats (Var s) (HPair (SUCC k1) (Q_Succ y))"])
apply (simp_all (no_asm_simp))
apply (rule Conj_I)
apply (blast intro: RestrictedP_LstSeqP_Eats[THEN cut2] SeqWRP_LstSeqP[THEN cut1])
apply (rule Conj_I)
apply (rule Mem_Eats_I1)
apply (blast intro: RestrictedP_Mem[THEN cut3] SeqWRP_HPair_Zero_Zero[THEN cut1] Zero_In_SUCC[THEN cut1])
proof (rule All2_SUCC_I, simp_all)
show "?hyp ⊢ SyntaxN.Ex sl
(HPair k1 (Var sl) IN Eats (Var s) (HPair (SUCC k1) (Q_Succ y)) AND
HPair (SUCC k1) (Q_Succ (Var sl)) IN
Eats (Var s) (HPair (SUCC k1) (Q_Succ y)))"
― ‹verifying the final values›
apply (rule Ex_I [where x="y"])
using assms atoms apply simp
apply (rule Conj_I[rotated])
apply (rule Mem_Eats_I2, rule Refl)
apply (rule Mem_Eats_I1)
apply (rule RestrictedP_Mem[THEN cut3])
apply (rule AssumeH)
apply (simp add: LstSeqP_imp_Mem SeqWRP_LstSeqP thin1)
apply (rule Mem_SUCC_Refl)
done
next
show "?hyp ⊢ All2 l k1
(SyntaxN.Ex sl
(HPair (Var l) (Var sl) IN
Eats (Var s) (HPair (SUCC k1) (Q_Succ y)) AND
HPair (SUCC (Var l)) (Q_Succ (Var sl)) IN
Eats (Var s) (HPair (SUCC k1) (Q_Succ y))))"
― ‹verifying the sequence buildup›
apply (rule All_I Imp_I)+
using assms atoms apply simp_all
― ‹... the sequence buildup via s1›
apply (simp add: SeqWRP.simps [of l s1 _ sl])
apply (rule AssumeH Ex_EH Conj_EH)+
apply (rule All2_E [THEN rotate2], auto del: Disj_EH)
apply (rule Ex_I [where x="Var sl"], simp)
apply (rule Conj_I)
apply (blast intro: Mem_Eats_I1 [OF RestrictedP_Mem [THEN cut3]] Mem_SUCC_I1)
apply (blast intro: Mem_Eats_I1 [OF RestrictedP_Mem [THEN cut3]] OrdP_IN_SUCC)
done
qed
qed (*>*)
lemma WRP_Succ: "{OrdP i, WRP i y} ⊢ WRP (SUCC i) (Q_Succ y)"
proof -
obtain s t :: name where "atom s ♯ (i, y)" "atom t ♯ (s,i, y)" by (metis obtain_fresh)
then show ?thesis
by (subst WRP.simps[of s], simp, subst WRP.simps[of t], simp) (force intro: SeqWRP_Succ[THEN cut1])
qed
lemma WRP: "{} ⊢ WRP (ORD_OF i) «ORD_OF i»"
by (induct i)
(auto simp: WRP_Zero quot_Succ intro!: WRP_Succ[THEN cut2])
lemma prove_WRP: "{} ⊢ WRP «Var x» ««Var x»»"
unfolding quot_Var quot_Succ
by (rule WRP_Succ[THEN cut2]) (auto simp: WRP)
subsection‹Proving that these relations are functions›
lemma SeqWRP_Zero_E:
assumes "insert (y EQ Zero) H ⊢ A" "H ⊢ k EQ Zero"
shows "insert (SeqWRP s k y) H ⊢ A"
proof -
obtain l::name and sl::name
where "atom l ♯ (s,k,sl)" "atom sl ♯ (s)"
by (metis obtain_fresh)
thus ?thesis
apply (auto simp: SeqWRP.simps [where s=s and l=l and sl=sl])
apply (rule cut_same [where A = "LstSeqP s Zero y"])
apply (blast intro: thin1 assms LstSeqP_cong [OF Refl _ Refl, THEN Iff_MP_same])
apply (rule cut_same [where A = "y EQ Zero"])
apply (blast intro: LstSeqP_EQ)
apply (metis rotate2 assms(1) thin1)
done
qed
lemma SeqWRP_SUCC_lemma:
assumes y': "atom y' ♯ (s,k,y)"
shows "{SeqWRP s (SUCC k) y} ⊢ Ex y' (SeqWRP s k (Var y') AND y EQ Q_Succ (Var y'))"
proof -
obtain l::name and sl::name
where atoms: "atom l ♯ (s,k,y,y',sl)" "atom sl ♯ (s,k,y,y')"
by (metis obtain_fresh)
thus ?thesis using y'
apply (auto simp: SeqWRP.simps [where s=s and l=l and sl=sl])
apply (rule All2_SUCC_E' [where t=k, THEN rotate2], auto)
apply (rule Ex_I [where x = "Var sl"], auto)
apply (blast intro: LstSeqP_SUCC) ― ‹showing @{term"SeqWRP s k (Var sl)"}›
apply (blast intro: ContraProve LstSeqP_EQ)
done
qed
lemma SeqWRP_SUCC_E:
assumes y': "atom y' ♯ (s,k,y)" and k': "H ⊢ k' EQ (SUCC k)"
shows "insert (SeqWRP s k' y) H ⊢ Ex y' (SeqWRP s k (Var y') AND y EQ Q_Succ (Var y'))"
using SeqWRP_cong [OF Refl k' Refl] cut1 [OF SeqWRP_SUCC_lemma [of y' s k y]]
by (metis Assume Iff_MP_left Iff_sym y')
lemma SeqWRP_unique: "{OrdP x, SeqWRP s x y, SeqWRP s' x y'} ⊢ y' EQ y"
proof -
obtain i::name and j::name and j'::name and k::name and sl::name and sl'::name and l::name and pi::name
where i: "atom i ♯ (s,s',y,y')" and j: "atom j ♯ (s,s',i,x,y,y')" and j': "atom j' ♯ (s,s',i,j,x,y,y')"
and atoms: "atom k ♯ (s,s',i,j,j')" "atom sl ♯ (s,s',i,j,j',k)" "atom sl' ♯ (s,s',i,j,j',k,sl)"
"atom pi ♯ (s,s',i,j,j',k,sl,sl')"
by (metis obtain_fresh)
have "{OrdP (Var i)} ⊢ All j (All j' (SeqWRP s (Var i) (Var j) IMP (SeqWRP s' (Var i) (Var j') IMP Var j' EQ Var j)))"
apply (rule OrdIndH [where j=k])
using i j j' atoms apply auto
apply (rule rotate4)
apply (rule OrdP_cases_E [where k=pi], simp_all)
― ‹Zero case›
apply (rule SeqWRP_Zero_E [THEN rotate3])
prefer 2 apply blast
apply (rule SeqWRP_Zero_E [THEN rotate4])
prefer 2 apply blast
apply (blast intro: ContraProve [THEN rotate4] Sym Trans)
― ‹SUCC case›
apply (rule Ex_I [where x = "Var pi"], auto)
apply (metis ContraProve EQ_imp_SUBS2 Mem_SUCC_I2 Refl Subset_D)
apply (rule cut_same)
apply (rule SeqWRP_SUCC_E [of sl' s' "Var pi", THEN rotate4], auto)
apply (rule cut_same)
apply (rule SeqWRP_SUCC_E [of sl s "Var pi", THEN rotate7], auto)
apply (rule All_E [where x = "Var sl", THEN rotate5], simp)
apply (rule All_E [where x = "Var sl'"], simp)
apply (rule Imp_E, blast)+
apply (rule cut_same [OF Q_Succ_cong [OF Assume]])
apply (blast intro: Trans [OF Hyp Sym] HPair_cong)
done
hence "{OrdP (Var i)} ⊢ (All j' (SeqWRP s (Var i) (Var j) IMP (SeqWRP s' (Var i) (Var j') IMP Var j' EQ Var j)))(j::=y)"
by (metis All_D)
hence "{OrdP (Var i)} ⊢ (SeqWRP s (Var i) y IMP (SeqWRP s' (Var i) (Var j') IMP Var j' EQ y))(j'::=y')"
using j j'
by simp (drule All_D [where x=y'], simp)
hence "{} ⊢ OrdP (Var i) IMP (SeqWRP s (Var i) y IMP (SeqWRP s' (Var i) y' IMP y' EQ y))"
using j j'
by simp (metis Imp_I)
hence "{} ⊢ (OrdP (Var i) IMP (SeqWRP s (Var i) y IMP (SeqWRP s' (Var i) y' IMP y' EQ y)))(i::=x)"
by (metis Subst emptyE)
thus ?thesis using i
by simp (metis anti_deduction insert_commute)
qed
theorem WRP_unique: "{OrdP x, WRP x y, WRP x y'} ⊢ y' EQ y"
proof -
obtain s::name and s'::name
where "atom s ♯ (x,y,y')" "atom s' ♯ (x,y,y',s)"
by (metis obtain_fresh)
thus ?thesis
by (auto simp: SeqWRP_unique [THEN rotate3] WRP.simps [of s _ y] WRP.simps [of s' _ y'])
qed
section‹The Function HF and Lemma 6.2›
subsection ‹Defining the syntax: quantified body›
nominal_function SeqHRP :: "tm ⇒ tm ⇒ tm ⇒ tm ⇒ fm"
where "⟦atom l ♯ (s,k,sl,sl',m,n,sm,sm',sn,sn');
atom sl ♯ (s,sl',m,n,sm,sm',sn,sn');
atom sl' ♯ (s,m,n,sm,sm',sn,sn');
atom m ♯ (s,n,sm,sm',sn,sn');
atom n ♯ (s,sm,sm',sn,sn');
atom sm ♯ (s,sm',sn,sn');
atom sm' ♯ (s,sn,sn');
atom sn ♯ (s,sn');
atom sn' ♯ (s)⟧ ⟹
SeqHRP x x' s k =
LstSeqP s k (HPair x x') AND
All2 l (SUCC k) (Ex sl (Ex sl' (HPair (Var l) (HPair (Var sl) (Var sl')) IN s AND
((OrdP (Var sl) AND WRP (Var sl) (Var sl')) OR
Ex m (Ex n (Ex sm (Ex sm' (Ex sn (Ex sn' (Var m IN Var l AND Var n IN Var l AND
HPair (Var m) (HPair (Var sm) (Var sm')) IN s AND
HPair (Var n) (HPair (Var sn) (Var sn')) IN s AND
Var sl EQ HPair (Var sm) (Var sn) AND
Var sl' EQ Q_HPair (Var sm') (Var sn')))))))))))"
by (auto simp: eqvt_def SeqHRP_graph_aux_def flip_fresh_fresh) (metis obtain_fresh)
nominal_termination (eqvt)
by lexicographic_order
lemma
shows SeqHRP_fresh_iff [simp]:
"a ♯ SeqHRP x x' s k ⟷ a ♯ x ∧ a ♯ x' ∧ a ♯ s ∧ a ♯ k" (is ?thesis1)
and SeqHRP_sf [iff]: "Sigma_fm (SeqHRP x x' s k)" (is ?thsf)
and SeqHRP_imp_OrdP: "{ SeqHRP x y s k } ⊢ OrdP k" (is ?thord)
and SeqHRP_imp_LstSeqP: "{ SeqHRP x y s k } ⊢ LstSeqP s k (HPair x y)" (is ?thlstseq)
proof -
obtain l::name and sl::name and sl'::name and m::name and n::name and
sm::name and sm'::name and sn::name and sn'::name
where atoms:
"atom l ♯ (s,k,sl,sl',m,n,sm,sm',sn,sn')"
"atom sl ♯ (s,sl',m,n,sm,sm',sn,sn')" "atom sl' ♯ (s,m,n,sm,sm',sn,sn')"
"atom m ♯ (s,n,sm,sm',sn,sn')" "atom n ♯ (s,sm,sm',sn,sn')"
"atom sm ♯ (s,sm',sn,sn')" "atom sm' ♯ (s,sn,sn')"
"atom sn ♯ (s,sn')" "atom sn' ♯ (s)"
by (metis obtain_fresh)
thus ?thesis1 ?thsf ?thord ?thlstseq
by (auto intro: LstSeqP_OrdP)
qed
lemma SeqHRP_subst [simp]:
"(SeqHRP x x' s k)(i::=t) = SeqHRP (subst i t x) (subst i t x') (subst i t s) (subst i t k)"
proof -
obtain l::name and sl::name and sl'::name and m::name and n::name and
sm::name and sm'::name and sn::name and sn'::name
where "atom l ♯ (s,k,t,i,sl,sl',m,n,sm,sm',sn,sn')"
"atom sl ♯ (s,t,i,sl',m,n,sm,sm',sn,sn')"
"atom sl' ♯ (s,t,i,m,n,sm,sm',sn,sn')"
"atom m ♯ (s,t,i,n,sm,sm',sn,sn')" "atom n ♯ (s,t,i,sm,sm',sn,sn')"
"atom sm ♯ (s,t,i,sm',sn,sn')" "atom sm' ♯ (s,t,i,sn,sn')"
"atom sn ♯ (s,t,i,sn')" "atom sn' ♯ (s,t,i)"
by (metis obtain_fresh)
thus ?thesis
by (auto simp: SeqHRP.simps [of l _ _ sl sl' m n sm sm' sn sn'])
qed
lemma SeqHRP_cong:
assumes "H ⊢ x EQ x'" and "H ⊢ y EQ y'" "H ⊢ s EQ s'" and "H ⊢ k EQ k'"
shows "H ⊢ SeqHRP x y s k IFF SeqHRP x' y' s' k'"
by (rule P4_cong [OF _ assms], auto)
subsection ‹Defining the syntax: main predicate›
nominal_function HRP :: "tm ⇒ tm ⇒ fm"
where "⟦atom s ♯ (x,x',k); atom k ♯ (x,x')⟧ ⟹
HRP x x' = Ex s (Ex k (SeqHRP x x' (Var s) (Var k)))"
by (auto simp: eqvt_def HRP_graph_aux_def flip_fresh_fresh) (metis obtain_fresh)
nominal_termination (eqvt)
by lexicographic_order
lemma
shows HRP_fresh_iff [simp]: "a ♯ HRP x x' ⟷ a ♯ x ∧ a ♯ x'" (is ?thesis1)
and HRP_sf [iff]: "Sigma_fm (HRP x x')" (is ?thsf)
proof -
obtain s::name and k::name where "atom s ♯ (x,x',k)" "atom k ♯ (x,x')"
by (metis obtain_fresh)
thus ?thesis1 ?thsf
by auto
qed
lemma HRP_subst [simp]: "(HRP x x')(i::=t) = HRP (subst i t x) (subst i t x')"
proof -
obtain s::name and k::name where "atom s ♯ (x,x',t,i,k)" "atom k ♯ (x,x',t,i)"
by (metis obtain_fresh)
thus ?thesis
by (auto simp: HRP.simps [of s _ _ k])
qed
subsection‹Proving that these relations are functions›
lemma SeqHRP_lemma:
assumes "atom m ♯ (x,x',s,k,n,sm,sm',sn,sn')" "atom n ♯ (x,x',s,k,sm,sm',sn,sn')"
"atom sm ♯ (x,x',s,k,sm',sn,sn')" "atom sm' ♯ (x,x',s,k,sn,sn')"
"atom sn ♯ (x,x',s,k,sn')" "atom sn' ♯ (x,x',s,k)"
shows "{ SeqHRP x x' s k }
⊢ (OrdP x AND WRP x x') OR
Ex m (Ex n (Ex sm (Ex sm' (Ex sn (Ex sn' (Var m IN k AND Var n IN k AND
SeqHRP (Var sm) (Var sm') s (Var m) AND
SeqHRP (Var sn) (Var sn') s (Var n) AND
x EQ HPair (Var sm) (Var sn) AND
x' EQ Q_HPair (Var sm') (Var sn')))))))"
proof -
obtain l::name and sl::name and sl'::name
where atoms:
"atom l ♯ (x,x',s,k,sl,sl',m,n,sm,sm',sn,sn')"
"atom sl ♯ (x,x',s,k,sl',m,n,sm,sm',sn,sn')"
"atom sl' ♯ (x,x',s,k,m,n,sm,sm',sn,sn')"
by (metis obtain_fresh)
thus ?thesis using atoms assms
apply (simp add: SeqHRP.simps [of l s k sl sl' m n sm sm' sn sn'])
apply (rule Conj_E)
apply (rule All2_SUCC_E' [where t=k, THEN rotate2], simp_all)
apply (rule rotate2)
apply (rule Ex_E Conj_E)+
apply (rule cut_same [where A = "HPair x x' EQ HPair (Var sl) (Var sl')"])
apply (metis Assume LstSeqP_EQ rotate4, simp_all, clarify)
apply (rule Disj_E [THEN rotate4])
apply (rule Disj_I1)
apply (metis Assume AssumeH(3) Sym thin1 Iff_MP_same [OF Conj_cong [OF OrdP_cong WRP_cong] Assume])
― ‹auto could be used but is VERY SLOW›
apply (rule Disj_I2)
apply (rule Ex_E Conj_EH)+
apply simp_all
apply (rule Ex_I [where x = "Var m"], simp)
apply (rule Ex_I [where x = "Var n"], simp)
apply (rule Ex_I [where x = "Var sm"], simp)
apply (rule Ex_I [where x = "Var sm'"], simp)
apply (rule Ex_I [where x = "Var sn"], simp)
apply (rule Ex_I [where x = "Var sn'"], simp)
apply (simp add: SeqHRP.simps [of l _ _ sl sl' m n sm sm' sn sn'])
apply (rule Conj_I, blast)+
― ‹first SeqHRP subgoal›
apply (rule Conj_I)+
apply (blast intro: LstSeqP_Mem)
apply (rule All2_Subset [OF Hyp], blast)
apply (blast intro!: SUCC_Subset_Ord LstSeqP_OrdP, blast, simp)
― ‹next SeqHRP subgoal›
apply (rule Conj_I)+
apply (blast intro: LstSeqP_Mem)
apply (rule All2_Subset [OF Hyp], blast)
apply (auto intro!: SUCC_Subset_Ord LstSeqP_OrdP)
― ‹finally, the equality pair›
apply (blast intro: Trans)+
done
qed
lemma SeqHRP_unique: "{SeqHRP x y s u, SeqHRP x y' s' u'} ⊢ y' EQ y"
proof -
obtain i::name and j::name and j'::name and k::name and k'::name and l::name
and m::name and n::name and sm::name and sn::name and sm'::name and sn'::name
and m2::name and n2::name and sm2::name and sn2::name and sm2'::name and sn2'::name
where atoms: "atom i ♯ (s,s',y,y')" "atom j ♯ (s,s',i,x,y,y')" "atom j' ♯ (s,s',i,j,x,y,y')"
"atom k ♯ (s,s',x,y,y',u',i,j,j')" "atom k' ♯ (s,s',x,y,y',k,i,j,j')" "atom l ♯ (s,s',i,j,j',k,k')"
"atom m ♯ (s,s',i,j,j',k,k',l)" "atom n ♯ (s,s',i,j,j',k,k',l,m)"
"atom sm ♯ (s,s',i,j,j',k,k',l,m,n)" "atom sn ♯ (s,s',i,j,j',k,k',l,m,n,sm)"
"atom sm' ♯ (s,s',i,j,j',k,k',l,m,n,sm,sn)" "atom sn' ♯ (s,s',i,j,j',k,k',l,m,n,sm,sn,sm')"
"atom m2 ♯ (s,s',i,j,j',k,k',l,m,n,sm,sn,sm',sn')" "atom n2 ♯ (s,s',i,j,j',k,k',l,m,n,sm,sn,sm',sn',m2)"
"atom sm2 ♯ (s,s',i,j,j',k,k',l,m,n,sm,sn,sm',sn',m2,n2)" "atom sn2 ♯ (s,s',i,j,j',k,k',l,m,n,sm,sn,sm',sn',m2,n2,sm2)"
"atom sm2' ♯ (s,s',i,j,j',k,k',l,m,n,sm,sn,sm',sn',m2,n2,sm2,sn2)" "atom sn2' ♯ (s,s',i,j,j',k,k',l,m,n,sm,sn,sm',sn',m2,n2,sm2,sn2,sm2')"
by (metis obtain_fresh)
have "{OrdP (Var k)}
⊢ All i (All j (All j' (All k' (SeqHRP (Var i) (Var j) s (Var k) IMP (SeqHRP (Var i) (Var j') s' (Var k') IMP Var j' EQ Var j)))))"
apply (rule OrdIndH [where j=l])
using atoms apply auto
apply (rule Swap)
apply (rule cut_same)
apply (rule cut1 [OF SeqHRP_lemma [of m "Var i" "Var j" s "Var k" n sm sm' sn sn']], simp_all, blast)
apply (rule cut_same)
apply (rule cut1 [OF SeqHRP_lemma [of m2 "Var i" "Var j'" s' "Var k'" n2 sm2 sm2' sn2 sn2']], simp_all, blast)
apply (rule Disj_EH Conj_EH)+
― ‹case 1, both are ordinals›
apply (blast intro: cut3 [OF WRP_unique])
― ‹case 2, @{term "OrdP (Var i)"} but also a pair›
apply (rule Conj_EH Ex_EH)+
apply simp_all
apply (rule cut_same [where A = "OrdP (HPair (Var sm) (Var sn))"])
apply (blast intro: OrdP_cong [OF Hyp, THEN Iff_MP_same], blast)
― ‹towards second two cases›
apply (rule Ex_E Disj_EH Conj_EH)+
― ‹case 3, @{term "OrdP (Var i)"} but also a pair›
apply (rule cut_same [where A = "OrdP (HPair (Var sm2) (Var sn2))"])
apply (blast intro: OrdP_cong [OF Hyp, THEN Iff_MP_same], blast)
― ‹case 4, two pairs›
apply (rule Ex_E Disj_EH Conj_EH)+
apply (rule All_E' [OF Hyp, where x="Var m"], blast)
apply (rule All_E' [OF Hyp, where x="Var n"], blast, simp_all)
apply (rule Disj_EH, blast intro: thin1 ContraProve)+
apply (rule All_E [where x="Var sm"], simp)
apply (rule All_E [where x="Var sm'"], simp)
apply (rule All_E [where x="Var sm2'"], simp)
apply (rule All_E [where x="Var m2"], simp)
apply (rule All_E [where x="Var sn", THEN rotate2], simp)
apply (rule All_E [where x="Var sn'"], simp)
apply (rule All_E [where x="Var sn2'"], simp)
apply (rule All_E [where x="Var n2"], simp)
apply (rule cut_same [where A = "HPair (Var sm) (Var sn) EQ HPair (Var sm2) (Var sn2)"])
apply (blast intro: Sym Trans)
apply (rule cut_same [where A = "SeqHRP (Var sn) (Var sn2') s' (Var n2)"])
apply (blast intro: SeqHRP_cong [OF Hyp Refl Refl, THEN Iff_MP2_same])
apply (rule cut_same [where A = "SeqHRP (Var sm) (Var sm2') s' (Var m2)"])
apply (blast intro: SeqHRP_cong [OF Hyp Refl Refl, THEN Iff_MP2_same])
apply (rule Disj_EH, blast intro: thin1 ContraProve)+
apply (blast intro: Trans [OF Hyp Sym] intro!: HPair_cong)
done
hence "{OrdP (Var k)}
⊢ All j (All j' (All k' (SeqHRP x (Var j) s (Var k)
IMP (SeqHRP x (Var j') s' (Var k') IMP Var j' EQ Var j))))"
apply (rule All_D [where x = x, THEN cut_same])
using atoms by auto
hence "{OrdP (Var k)}
⊢ All j' (All k' (SeqHRP x y s (Var k) IMP (SeqHRP x (Var j') s' (Var k') IMP Var j' EQ y)))"
apply (rule All_D [where x = y, THEN cut_same])
using atoms by auto
hence "{OrdP (Var k)}
⊢ All k' (SeqHRP x y s (Var k) IMP (SeqHRP x y' s' (Var k') IMP y' EQ y))"
apply (rule All_D [where x = y', THEN cut_same])
using atoms by auto
hence "{OrdP (Var k)} ⊢ SeqHRP x y s (Var k) IMP (SeqHRP x y' s' u' IMP y' EQ y)"
apply (rule All_D [where x = u', THEN cut_same])
using atoms by auto
hence "{SeqHRP x y s (Var k)} ⊢ SeqHRP x y s (Var k) IMP (SeqHRP x y' s' u' IMP y' EQ y)"
by (metis SeqHRP_imp_OrdP cut1)
hence "{} ⊢ ((SeqHRP x y s (Var k) IMP (SeqHRP x y' s' u' IMP y' EQ y)))(k::=u)"
by (metis Subst emptyE Assume MP_same Imp_I)
hence "{} ⊢ SeqHRP x y s u IMP (SeqHRP x y' s' u' IMP y' EQ y)"
using atoms by simp
thus ?thesis
by (metis anti_deduction insert_commute)
qed
theorem HRP_unique: "{HRP x y, HRP x y'} ⊢ y' EQ y"
proof -
obtain s::name and s'::name and k::name and k'::name
where "atom s ♯ (x,y,y')" "atom s' ♯ (x,y,y',s)"
"atom k ♯ (x,y,y',s,s')" "atom k' ♯ (x,y,y',s,s',k)"
by (metis obtain_fresh)
thus ?thesis
by (auto simp: SeqHRP_unique HRP.simps [of s x y k] HRP.simps [of s' x y' k'])
qed
lemma HRP_ORD_OF: "{} ⊢ HRP (ORD_OF i) «ORD_OF i»"
proof -
let ?vs = "(i)"
obtain s k l::name and sl::name and sl'::name and m::name and n::name and
sm::name and sm'::name and sn::name and sn'::name
where atoms:
"atom s ♯ (?vs,sl,sl',m,n,sm,sm',sn,sn',l,k)"
"atom k ♯ (?vs,sl,sl',m,n,sm,sm',sn,sn',l)"
"atom l ♯ (?vs,sl,sl',m,n,sm,sm',sn,sn')"
"atom sl ♯ (?vs,sl',m,n,sm,sm',sn,sn')" "atom sl' ♯ (?vs,m,n,sm,sm',sn,sn')"
"atom m ♯ (?vs,n,sm,sm',sn,sn')" "atom n ♯ (?vs,sm,sm',sn,sn')"
"atom sm ♯ (?vs,sm',sn,sn')" "atom sm' ♯ (?vs,sn,sn')"
"atom sn ♯ (?vs,sn')" "atom sn' ♯ ?vs"
by (metis obtain_fresh)
then show ?thesis
apply (subst HRP.simps[of s _ _ k]; simp)
apply (subst SeqHRP.simps[of l _ _ sl sl' m n sm sm' sn sn']; simp?)
apply (rule Ex_I[where x="Eats Zero (HPair Zero (HPair (ORD_OF i) «ORD_OF i»))"]; simp)
apply (rule Ex_I[where x="Zero"]; simp)
apply (rule Conj_I[OF LstSeqP_single])
apply (rule All2_SUCC_I, simp)
apply auto [2]
apply (rule Ex_I[where x="ORD_OF i"], simp)
apply (rule Ex_I[where x="«ORD_OF i»"], simp)
apply (auto intro!: Disj_I1 WRP Mem_Eats_I2)
done
qed
lemma SeqHRP_HPair:
assumes "atom s ♯ (k,s1,s2,k1,k2,x,y,x',y')" "atom k ♯ (s1,s2,k1,k2,x,y,x',y')"
shows "{SeqHRP x x' s1 k1,
SeqHRP y y' s2 k2}
⊢ Ex s (Ex k (SeqHRP (HPair x y) (Q_HPair x' y') (Var s) (Var k)))" (*<*)
proof -
let ?vs = "(s1,s2,s,k1,k2,k,x,y,x',y')"
obtain km::name and kn::name and j::name and k'::name
and l::name and sl::name and sl'::name and m::name and n::name
and sm::name and sm'::name and sn::name and sn'::name
where atoms2: "atom km ♯ (kn,j,k',l,s1,s2,s,k1,k2,k,x,y,x',y',sl,sl',m,n,sm,sm',sn,sn')"
"atom kn ♯ (j,k',l,s1,s2,s,k1,k2,k,x,y,x',y',sl,sl',m,n,sm,sm',sn,sn')"
"atom j ♯ (k',l,s1,s2,s,k1,k2,k,x,y,x',y',sl,sl',m,n,sm,sm',sn,sn')"
and atoms: "atom k' ♯ (l,s1,s2,s,k1,k2,k,x,y,x',y',sl,sl',m,n,sm,sm',sn,sn')"
"atom l ♯ (s1,s2,s,k1,k2,k,x,y,x',y',sl,sl',m,n,sm,sm',sn,sn')"
"atom sl ♯ (s1,s2,s,k1,k2,k,x,y,x',y',sl',m,n,sm,sm',sn,sn')"
"atom sl' ♯ (s1,s2,s,k1,k2,k,x,y,x',y',m,n,sm,sm',sn,sn')"
"atom m ♯ (s1,s2,s,k1,k2,k,x,y,x',y',n,sm,sm',sn,sn')"
"atom n ♯ (s1,s2,s,k1,k2,k,x,y,x',y',sm,sm',sn,sn')"
"atom sm ♯ (s1,s2,s,k1,k2,k,x,y,x',y',sm',sn,sn')"
"atom sm' ♯ (s1,s2,s,k1,k2,k,x,y,x',y',sn,sn')"
"atom sn ♯ (s1,s2,s,k1,k2,k,x,y,x',y',sn')"
"atom sn' ♯ (s1,s2,s,k1,k2,k,x,y,x',y')"
by (metis obtain_fresh)
let ?hyp = "{HaddP k1 k2 (Var k'), OrdP k1, OrdP k2, SeqAppendP s1 (SUCC k1) s2 (SUCC k2) (Var s),
SeqHRP x x' s1 k1, SeqHRP y y' s2 k2}"
show ?thesis
using assms atoms
apply (auto simp: SeqHRP.simps [of l "Var s" _ sl sl' m n sm sm' sn sn'])
apply (rule cut_same [where A="OrdP k1 AND OrdP k2"])
apply (metis Conj_I SeqHRP_imp_OrdP thin1 thin2)
apply (rule cut_same [OF exists_SeqAppendP [of s s1 "SUCC k1" s2 "SUCC k2"]])
apply (rule AssumeH Ex_EH Conj_EH | simp)+
apply (rule cut_same [OF exists_HaddP [where j=k' and x=k1 and y=k2]])
apply (rule AssumeH Ex_EH Conj_EH | simp)+
apply (rule Ex_I [where x="Eats (Var s) (HPair (SUCC(SUCC(Var k'))) (HPair(HPair x y)(Q_HPair x' y')))"])
apply (simp_all (no_asm_simp))
apply (rule Ex_I [where x="SUCC (SUCC (Var k'))"], simp)
apply (rule Conj_I)
apply (blast intro: LstSeqP_SeqAppendP_Eats SeqHRP_imp_LstSeqP [THEN cut1])
proof (rule All2_SUCC_I, simp_all)
show "?hyp ⊢ SyntaxN.Ex sl
(SyntaxN.Ex sl'
(HPair (SUCC (SUCC (Var k'))) (HPair (Var sl) (Var sl')) IN
Eats (Var s) (HPair (SUCC (SUCC (Var k'))) (HPair (HPair x y) (Q_HPair x' y'))) AND
(OrdP (Var sl) AND WRP (Var sl) (Var sl') OR
SyntaxN.Ex m
(SyntaxN.Ex n
(SyntaxN.Ex sm
(SyntaxN.Ex sm'
(SyntaxN.Ex sn
(SyntaxN.Ex sn'
(Var m IN SUCC (SUCC (Var k')) AND
Var n IN SUCC (SUCC (Var k')) AND
HPair (Var m) (HPair (Var sm) (Var sm')) IN
Eats (Var s) (HPair (SUCC (SUCC (Var k'))) (HPair (HPair x y) (Q_HPair x' y'))) AND
HPair (Var n) (HPair (Var sn) (Var sn')) IN
Eats (Var s) (HPair (SUCC (SUCC (Var k'))) (HPair (HPair x y) (Q_HPair x' y'))) AND
Var sl EQ HPair (Var sm) (Var sn) AND Var sl' EQ Q_HPair (Var sm') (Var sn'))))))))))"
― ‹verifying the final values›
apply (rule Ex_I [where x="HPair x y"])
using assms atoms apply simp
apply (rule Ex_I [where x="Q_HPair x' y'"], simp)
apply (rule Conj_I, metis Mem_Eats_I2 Refl)
apply (rule Disj_I2)
apply (rule Ex_I [where x=k1], simp)
apply (rule Ex_I [where x="SUCC (Var k')"], simp)
apply (rule Ex_I [where x=x], simp)
apply (rule_tac x=x' in Ex_I, simp)
apply (rule Ex_I [where x=y], simp)
apply (rule_tac x=y' in Ex_I, simp)
apply (rule Conj_I)
apply (blast intro: HaddP_Mem_I LstSeqP_OrdP Mem_SUCC_I1)
apply (rule Conj_I [OF Mem_SUCC_Refl])
apply (blast intro: Disj_I1 Mem_Eats_I1 Mem_SUCC_Refl SeqHRP_imp_LstSeqP [THEN cut1]
LstSeqP_imp_Mem SeqAppendP_Mem1 [THEN cut3] SeqAppendP_Mem2 [THEN cut4] HaddP_SUCC1 [THEN cut1])
done
next
show "?hyp ⊢ All2 l (SUCC (SUCC (Var k')))
(SyntaxN.Ex sl
(SyntaxN.Ex sl'
(HPair (Var l) (HPair (Var sl) (Var sl')) IN
Eats (Var s) (HPair (SUCC (SUCC (Var k'))) (HPair (HPair x y) (Q_HPair x' y'))) AND
(OrdP (Var sl) AND WRP (Var sl) (Var sl') OR
SyntaxN.Ex m
(SyntaxN.Ex n
(SyntaxN.Ex sm
(SyntaxN.Ex sm'
(SyntaxN.Ex sn
(SyntaxN.Ex sn'
(Var m IN Var l AND
Var n IN Var l AND
HPair (Var m) (HPair (Var sm) (Var sm')) IN
Eats (Var s) (HPair (SUCC (SUCC (Var k'))) (HPair (HPair x y) (Q_HPair x' y'))) AND
HPair (Var n) (HPair (Var sn) (Var sn')) IN
Eats (Var s) (HPair (SUCC (SUCC (Var k'))) (HPair (HPair x y) (Q_HPair x' y'))) AND
Var sl EQ HPair (Var sm) (Var sn) AND Var sl' EQ Q_HPair (Var sm') (Var sn')))))))))))"
― ‹verifying the sequence buildup›
apply (rule cut_same [where A="HaddP (SUCC k1) (SUCC k2) (SUCC (SUCC (Var k')))"])
apply (rule All_I Imp_I)+
using assms atoms atoms2 apply simp_all
apply (rule AssumeH)
apply (blast intro: OrdP_SUCC_I LstSeqP_OrdP)
― ‹... the sequence buildup via s1›
apply (simp add: SeqHRP.simps [of l s1 _ sl sl' m n sm sm' sn sn'])
apply (rule AssumeH Ex_EH Conj_EH)+
apply (rule All2_E [THEN rotate2])
apply (simp | rule AssumeH Ex_EH Conj_EH)+
apply (rule Ex_I [where x="Var sl"], simp)
apply (rule Ex_I [where x="Var sl'"], simp)
apply (rule Conj_I [OF Mem_Eats_I1])
apply (metis SeqAppendP_Mem1 rotate3 thin2 thin4)
apply (rule AssumeH Disj_IE1H Ex_EH Conj_EH)+
apply (rule Ex_I [where x="Var m"], simp)
apply (rule Ex_I [where x="Var n"], simp)
apply (rule Ex_I [where x="Var sm"], simp)
apply (rule Ex_I [where x="Var sm'"], simp)
apply (rule Ex_I [where x="Var sn"], simp)
apply (rule Ex_I [where x="Var sn'"], simp_all (no_asm_simp))
apply (rule Conj_I, rule AssumeH)+
apply (rule Conj_I)
apply (blast intro: OrdP_Trans [OF OrdP_SUCC_I] Mem_Eats_I1 [OF SeqAppendP_Mem1 [THEN cut3]] Hyp)
apply (blast intro: Disj_I1 Disj_I2 OrdP_Trans [OF OrdP_SUCC_I] Mem_Eats_I1 [OF SeqAppendP_Mem1 [THEN cut3]] Hyp)
― ‹... the sequence buildup via s2›
apply (simp add: SeqHRP.simps [of l s2 _ sl sl' m n sm sm' sn sn'])
apply (rule AssumeH Ex_EH Conj_EH)+
apply (rule All2_E [THEN rotate2])
apply (simp | rule AssumeH Ex_EH Conj_EH)+
apply (rule Ex_I [where x="Var sl"], simp)
apply (rule Ex_I [where x="Var sl'"], simp)
apply (rule cut_same [where A="OrdP (Var j)"])
apply (rule Conj_I)
apply (blast intro: Mem_Eats_I1 SeqAppendP_Mem2 [THEN cut4] del: Disj_EH)
apply (rule AssumeH Disj_IE1H Ex_EH Conj_EH)+
apply (rule cut_same [OF exists_HaddP [where j=km and x="SUCC k1" and y="Var m"]])
apply (blast intro: Ord_IN_Ord, simp)
apply (rule cut_same [OF exists_HaddP [where j=kn and x="SUCC k1" and y="Var n"]])
apply (metis AssumeH(6) Ord_IN_Ord0 rotate8, simp)
apply (rule AssumeH Ex_EH Conj_EH | simp)+
apply (rule Ex_I [where x="Var km"], simp)
apply (rule Ex_I [where x="Var kn"], simp)
apply (rule Ex_I [where x="Var sm"], simp)
apply (rule Ex_I [where x="Var sm'"], simp)
apply (rule Ex_I [where x="Var sn"], simp)
apply (rule Ex_I [where x="Var sn'"], simp_all (no_asm_simp))
apply (rule Conj_I [OF _ Conj_I])
apply (blast intro!: HaddP_Mem_cancel_left [THEN Iff_MP2_same] OrdP_SUCC_I intro: LstSeqP_OrdP Hyp)+
apply (blast del: Disj_EH intro: OrdP_Trans Hyp
intro!: Mem_Eats_I1 SeqAppendP_Mem2 [THEN cut4] HaddP_imp_OrdP [THEN cut1])
done
qed
qed (*>*)
lemma HRP_HPair: "{HRP x x', HRP y y'} ⊢ HRP (HPair x y) (Q_HPair x' y')"
proof -
obtain k1::name and s1::name and k2::name and s2::name and k::name and s::name
where "atom s1 ♯ (x,y,x',y')" "atom k1 ♯ (x,y,x',y',s1)"
"atom s2 ♯ (x,y,x',y',k1,s1)" "atom k2 ♯ (x,y,x',y',s2,k1,s1)"
"atom s ♯ (x,y,x',y',k2,s2,k1,s1)" "atom k ♯ (x,y,x',y',s,k2,s2,k1,s1)"
by (metis obtain_fresh)
thus ?thesis
by (force simp: HRP.simps [of s "HPair x y" _ k]
HRP.simps [of s1 x _ k1]
HRP.simps [of s2 y _ k2]
intro: SeqHRP_HPair [THEN cut2])
qed
lemma HRP_HPair_quot: "{HRP x «x», HRP y «y»} ⊢ HRP (HPair x y) «HPair x y»"
using HRP_HPair[of x "«x»" y "«y»"]
unfolding HPair_def quot_simps by auto
lemma prove_HRP_coding_tm: fixes t::tm shows "coding_tm t ⟹ {} ⊢ HRP t «t»"
by (induct t rule: coding_tm.induct)
(auto simp: quot_simps HRP_ORD_OF HRP_HPair_quot[THEN cut2])
lemmas prove_HRP = prove_HRP_coding_tm[OF quot_fm_coding]
section‹The Function K and Lemma 6.3›
nominal_function KRP :: "tm ⇒ tm ⇒ tm ⇒ fm"
where "atom y ♯ (v,x,x') ⟹
KRP v x x' = Ex y (HRP x (Var y) AND SubstFormP v (Var y) x x')"
by (auto simp: eqvt_def KRP_graph_aux_def flip_fresh_fresh) (metis obtain_fresh)
nominal_termination (eqvt)
by lexicographic_order
lemma KRP_fresh_iff [simp]: "a ♯ KRP v x x' ⟷ a ♯ v ∧ a ♯ x ∧ a ♯ x'"
proof -
obtain y::name where "atom y ♯ (v,x,x')"
by (metis obtain_fresh)
thus ?thesis
by auto
qed
lemma KRP_subst [simp]: "(KRP v x x')(i::=t) = KRP (subst i t v) (subst i t x) (subst i t x')"
proof -
obtain y::name where "atom y ♯ (v,x,x',t,i)"
by (metis obtain_fresh)
thus ?thesis
by (auto simp: KRP.simps [of y])
qed
declare KRP.simps [simp del]
lemma prove_SubstFormP: "{} ⊢ SubstFormP «Var i» ««A»» «A» «A(i::=«A»)»"
using SubstFormP by blast
lemma prove_KRP: "{} ⊢ KRP «Var i» «A» «A(i::=«A»)»"
by (auto simp: KRP.simps [of y]
intro!: Ex_I [where x="««A»»"] prove_HRP prove_SubstFormP)
lemma KRP_unique: "{KRP v x y, KRP v x y'} ⊢ y' EQ y"
proof -
obtain u::name and u'::name where "atom u ♯ (v,x,y,y')" "atom u' ♯ (v,x,y,y',u)"
by (metis obtain_fresh)
thus ?thesis
by (auto simp: KRP.simps [of u v x y] KRP.simps [of u' v x y']
intro: SubstFormP_cong [THEN Iff_MP2_same]
SubstFormP_unique [THEN cut2] HRP_unique [THEN cut2])
qed
lemma KRP_subst_fm: "{KRP «Var i» «β» (Var j)} ⊢ Var j EQ «β(i::=«β»)»"
by (metis KRP_unique cut0 prove_KRP)
end
``` | 12,252 | 30,710 | {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0} | 3.078125 | 3 | CC-MAIN-2023-23 | latest | en | 0.580322 |
http://nortonkit.co.in/protrain/dc_theory/rc_circuits.html | 1,516,488,369,000,000,000 | text/html | crawl-data/CC-MAIN-2018-05/segments/1516084889736.54/warc/CC-MAIN-20180120221621-20180121001621-00652.warc.gz | 261,438,932 | 6,071 | www.nortonkit.com 18 अक्तूबर 2013
Resistors and Capacitors Together
Now that we have looked at each of the three basic types of electronic components, we need to explore how they behave in various combinations. As we do so, remember that while each component still retains its basic properties, the combination can have its own characteristics, which may not seem intuitive at first.
On this page, we'll begin by considering a resistor and capacitor working together in a circuit, and see how the resistor affects the charging and discharging of the capacitor.
Consider the circuit shown to the right. Initially, we will have switch S in position 2. Capacitor C is fully discharged and no current flows through R and C. The circuit is quiescent at this point.
Now we move the switch to position 1. This connects the series combination of R and C to the battery. Current can flow through the circuit, and the capacitor will begin to charge. The question is, how fast?
Keep in mind that the voltage across a capacitor cannot change instantaneously. Therefore, at that first instant the entire battery voltage, E, appears across resistor R, and the charging current for C is determined by Ohm's Law: I = E/R.
However, now the capacitor voltage, VC begins to increase. This reduces the voltage, VR, that remains across the resistor. Therefore the charging current will be reduced slightly, and the capacitor will charge more slowly than before. This will continue until the capacitor has charged to the voltage E, and there is no further current flow in this circuit.
But this isn't quite enough. We can see that the values of R and C will affect the amount of time it takes for C to become fully charged. But we'd like to be able to state the appropriate equation so we can determine not only the charging time but also the way in which VC and the circuit current will change while C is charging.
To accomplish this, we go back to some basic definitions. First, we note that by moving one coulomb of electric charge from one plate of a 1 farad capacitor to the other, we will change the voltage between plates by 1 volt. We can change the size of the capacitance, adjust the voltage, and thereby adjust the amount of charge required to make the change. However, the basic equation still holds: E = q/C, where q is the electric charge in coulombs.
The other definition is for the current flowing in the circuit: one Ampere of current consists of one coulomb of charge passing a given point in a circuit in one second.
In this circuit, however, the charging current is constantly changing as the voltage across C increases. Therefore, we must look at a steadily decreasing rate of charge. This brings us to a bit of differential calculus. If you aren't familiar with this, don't worry; you'll be able to make use of the results. But for completeness, we include the appropriate expression here:
iC = C × dvC/dt
Qualitatively, the current flowing through the capacitor is directly proportional to the value of the capacitor itself (high value capacitors charge more slowly), and is directly proportional to the change in capacitor voltage over time. The use of differential calculus allows us to track the changing current on an instant-by-instant basis.
But the current charging the capacitor is also the current flowing through R. And the voltage across R is whatever part of E that hasn't already been built up as the charge on C. Therefore, we can apply Ohm's Law here:
iC = iR = (E - vC) / R
Solving differential equations is beyond the scope of this page. However, we can present the final equation that describes the capacitor voltage at any time t, for any values of R and C, and any battery voltage E:
vC = E(1 - e^(-t/RC))
Here, e is the base for natural logarithms, with a value of approximately 2.7182818. Because we are using it in this fashion, the equation above is known as an exponential equation.
At the moment switch S is closed, time t = 0. Since e^0 = 1, we see that:
vC = E(1 - 1) = 0
This is exactly what we would expect, since the capacitor is completely discharged at the start of the sequence. But what does the rest of the charging curve look like? Let's plot this expression over time, as the capacitor charges, and see how it behaves.
The figure to the right shows the capacitor voltage, vC, as a percentage of E as the capacitor charges over time. Thus, it is the plot of the equation:
vC / E = 1 - e^(-t/RC)
Note the product RC in the exponent of e. This is very important, because it shows that both R and C control the charging time equally. Also, because that exponent is (-t/RC), if we set RC = t, the exponent will be -1. Therefore, we show time in this graph as multiples of RC. In addition, the RC product is identified as the time constant for this circuit.
At first glance, this would seem to be very strange. How can the product of a resistance in ohms and a capacitance in farads possibly give us a time in seconds? To understand how this is possible, we go back to the basic definitions and some dimensional analysis.
Resistance
Resistance opposes the flow of current through a circuit. By Ohm's Law, R = E/I. Thus, 1 ohm may also be expressed as 1 volt/ampere.
Current
Current is a measure of the amount of charge flowing through a circuit in a given amount of time. By primary definition, 1 ampere is equal to 1 coulomb/second.
Capacitance
Capacitance is the capacity to hold an electrical charge. A capacitance of 1 farad will exhibit a change of 1 volt if 1 coulomb of charge is moved from one plate to the other. Hence, 1 farad can be expressed as 1 coulomb/volt.
Putting these three basic definitions together we get the following progression:
RC = ohms × farads
   = (volts / amperes) × (coulombs / volts)
   = (volts × coulombs) / ((coulombs / seconds) × volts)
   = (volts × coulombs × seconds) / (coulombs × volts)
   = seconds
Thus, we see that the RC product is indeed a measure of time, and can properly be described as the time constant of this circuit. This in turn means that this curve can be used to determine the voltage to which any capacitor will charge through any resistance, over any period of time, towards any source voltage. It is the general curve describing the voltage across a charging capacitor, over time.
Theoretically, C will never fully charge to the source voltage, E. In the first time constant, C charges to 63.2% of the source voltage. During the second time constant, C charges to 86.5% of the source voltage, which is also 63.2% of the remaining voltage difference between E and vC. This continues indefinitely, with vC continually approaching, but never quite reaching, the full value of E. However, at the end of 5 time constants (5RC), vC has reached 99.3% of E. This is considered close enough for practical purposes, and the capacitor is deemed fully charged at the end of this period of time.
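The charging percentages quoted above follow directly from the exponential equation. Here is a short Python sketch (an illustration, not part of the original article) that evaluates vC/E = 1 - e^(-t/RC) at whole multiples of the time constant; the R and C values are arbitrary, since only their product matters:

```python
import math

def charge_fraction(t, R, C):
    """Fraction of the source voltage E reached by a charging capacitor at time t."""
    return 1 - math.exp(-t / (R * C))

# Illustrative values: 10 kilohms and 100 microfarads give RC = 1 second.
R = 10_000
C = 100e-6
for n in range(1, 6):
    print(f"{n} time constant(s): {charge_fraction(n * R * C, R, C):.1%}")
# 1 -> 63.2%, 2 -> 86.5%, 3 -> 95.0%, 4 -> 98.2%, 5 -> 99.3%
```

Note how the fraction at five time constants matches the 99.3% figure at which the capacitor is deemed fully charged.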
Now that we have charged our capacitor, what happens if we move switch S in our original circuit to position 2 (We have repeated the circuit to the right for easier reference)? We have disconnected resistor R from the source, E, and connected it in parallel with capacitor C instead.
At this point, the capacitor has a discharge path, so it will begin discharging through R. However, as vC continues to drop, the discharge current likewise decreases, in accordance with Ohm's Law. Therefore, it is logical to assume that the capacitor discharge curve will probably follow some of the same rules as the capacitor charge curve above. However, the resistor voltage is now the same as the capacitor voltage, since R and C are now in parallel. So what is the resulting equation?
The figure to the right shows the appropriate RC discharge curve. This graph shows the function:
vC / E = e^(-t/RC)
Here, E refers to the starting voltage on the capacitor, which need not be the same as the battery voltage E in the schematic diagram above. In fact, if you look at the capacitor voltage after it has partly discharged, the same curve applies.
As with the charge curve, the discharge curve is exponential in shape. The RC time constant still applies; the capacitor is deemed to be fully discharged at the completion of five time constants. | 2,088 | 9,083 | {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0} | 3.5625 | 4 | CC-MAIN-2018-05 | latest | en | 0.784442 |
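As a quick numerical check of the discharge behaviour, this small Python sketch (again illustrative, not from the original page) evaluates vC/E = e^(-t/RC):

```python
import math

def discharge_fraction(t, R, C):
    """Fraction of the starting voltage remaining on a discharging capacitor at time t."""
    return math.exp(-t / (R * C))

# Illustrative values: 1 kilohm and 1000 microfarads give RC = 1 second.
R, C = 1_000, 1e-3
print(f"after 1 time constant: {discharge_fraction(1 * R * C, R, C):.1%} remains")
print(f"after 5 time constants: {discharge_fraction(5 * R * C, R, C):.1%} remains")
# ~36.8% remains after one time constant; well under 1% after five.
```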
https://iitutor.com/courses/qce-general-mathematics-bivariate-data-sequences-and-change-and-earth-geometry/ | 1,721,388,899,000,000,000 | text/html | crawl-data/CC-MAIN-2024-30/segments/1720763514902.63/warc/CC-MAIN-20240719105029-20240719135029-00304.warc.gz | 269,561,361 | 26,746 | # QCE General Mathematics – Bivariate Data, Sequences and Change, and Earth Geometry
3.1 Bivariate Data
3.2 Time Series
3.3 Arithmetic Sequences and Series
3.4 Geometric Sequences and Series
4.4 Spherical Geometry
### Master Bivariate Data Analysis
Decode Complex Relationships with Ease!
Are you struggling to make sense of bivariate data, sequences and change, and earth geometry in your QCE General Mathematics course? You’re not alone. Many students find these topics challenging, feeling overwhelmed and unsure how to proceed.
### Unlock the Secrets of Sequences and Change
Excel in Mathematical Patterns!
But fear not – our online mathematics course is here to provide you with the guidance and support you need to conquer these challenges and succeed in your studies. With our carefully crafted curriculum and experienced instructors, you’ll gain the skills and confidence to excel in your mathematics journey.
### Explore Earth Geometry
Navigate the World of Spatial Mathematics!
Bivariate data analysis can be particularly daunting for students, as it focuses on understanding the relationship between two variables. From scatter plots to correlation coefficients, our course will help you navigate complex data sets and interpret relationships with ease.
### Conquer QCE General Mathematics
Your Path to Success Starts Here!
Sequences and change are fundamental concepts in mathematics, but they can be difficult to grasp. Whether you’re dealing with arithmetic or geometric sequences, our comprehensive lessons will break down these concepts into simple steps, making them easier to understand and apply.
### Dive into Mathematics Mastery
Earth geometry introduces students to the mathematics of shapes and figures on the Earth’s surface. From calculating distances to understanding map projections, our instructors will guide you through the intricacies of spatial mathematics, empowering you to tackle real-world problems confidently.
Embark on your journey to mathematics mastery with our QCE General Mathematics course. With our expert guidance and comprehensive curriculum, you’ll develop a deep understanding of bivariate data, sequences and change, and earth geometry, setting yourself up for success in your studies and beyond.
We meticulously craft our online courses to adhere to the Queensland Curriculum and Assessment Authority‘s guidelines, ensuring a comprehensive and suitable learning experience.
## Course Content
### Bivariate Data
Arithmetic Sequences and Series
Geometric Sequences and Series
Spherical Geometry
• 26 Lessons
• 165 Topics
• 48 Quizzes | 796 | 3,583 | {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0} | 3.25 | 3 | CC-MAIN-2024-30 | latest | en | 0.871498 |
http://www.codingforums.com/javascript-programming/33276-mortgage-calculation-please.html | 1,429,448,012,000,000,000 | text/html | crawl-data/CC-MAIN-2015-18/segments/1429246639057.4/warc/CC-MAIN-20150417045719-00207-ip-10-235-10-82.ec2.internal.warc.gz | 413,168,031 | 14,931 | Hello and welcome to our community! Is this your first visit?
Enjoy an ad free experience by logging in. Not a member yet? Register.
1. mortgage calculation, please
Can anyone show me, even just algebraically, how to calculate a mortgage payment? Relation between:
var r = rate (/per month)
var m = number of month
var val = total initial value
var mort (/per month) = ?
• I found it ... sorry for bothering you, but I have an accountant who does this stuff in my business
var r = rate (/per month)
var m = number of month
var val = total initial value
var mort = mortgage(/per month)
var mort = val * (r / (1 - (1 / Math.pow(1+r,m))))
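Wrapped up as a runnable function (a sketch based on the formula above — the example numbers are only there to sanity-check it, they are not from the thread):

```javascript
// Standard annuity payment: r is the *monthly* rate, m the number of months,
// val the initial loan value.
function monthlyPayment(val, r, m) {
  return val * (r / (1 - 1 / Math.pow(1 + r, m)));
}

// e.g. a 100,000 loan over 360 months at 1% per month -> about 1028.61/month
console.log(monthlyPayment(100000, 0.01, 360).toFixed(2)); // "1028.61"
```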
https://www.teacherspayteachers.com/Product/Zero-Exponent-Rule-Riddle-Activity-4212192 | 1,652,836,704,000,000,000 | text/html | crawl-data/CC-MAIN-2022-21/segments/1652662520936.24/warc/CC-MAIN-20220517225809-20220518015809-00365.warc.gz | 1,188,854,989 | 34,454 | # Zero Exponent Rule Riddle Activity
Grade Levels
8th - 9th, Homeschool
Subjects
Standards
Resource Type
Formats Included
• PDF
Pages
5 pages
#### Also included in
1. Looking for individual practice for each Exponent Rule? This bundle has a great variety of activities! This bundle also includes a variety of problems that help students practice power rule, product rule, quotient rule, negative rule and zero exponent rule, individually. Check out the individual fil
\$19.00
\$24.00
Save \$5.00
2. Looking for individual practice for each Exponent Rule? This bundle has a great variety of activities! This bundle also includes a variety of problems that help students practice power rule, product rule, quotient rule, negative rule and zero exponent rule, individually. Check out the individual fil
\$35.00
\$50.00
Save \$15.00
### Description
Practice working through the Zero Exponent Rule problems, only! There is a fun riddle for the students to solve when completing the correct answers. A great self-directed activity that has 16 problems for students to solve. A great introduction activity for students to practice that is print-ready!
Included
*16 Zero Exponent Rule Problems
*Completed Answer Key
*****************************************************************************
More resources that are included in Part 2 of the Exponent Rule Bundle!
*Exponent Rule Bookmark Notes
*Exponent Power Rule Cut and Paste Activity
*Exponent Product Rule Color By Number Activity
*Quotient Exponent Rule Maze Activity
*Negative Exponent Rule Chain Reaction Activity
*Exponent Rule (ALL) Color By Number Activity
*Exponent Rule (ALL) Chain Reaction Activity
Here are MORE resources that are included in Part 1 of the Exponent Rule Bundle!
-Exponent Rule Notes - Power, Product, Quotient, Zero and Negative
-Product Rule Task Cards
-Power Rule Task Cards
-Quotient Rule Task Cards
-Negative Exponent Riddle Me Worksheet
-Zero Exponent Rule Color By Number Activity
-Exponent Rule Puzzle Activity
-Exponent Rules Hole Punch Game (Power, Product, Quotient, Negative & Zero)
-Exponent Rule Bundle (Part 1) - Power, Product, Quotient, Negative & Zero)
OR find Part 1 AND 2 in this bundle here; Exponent Rule Part 1 and 2 Bundle!
Google Digital Versions - (NOT Included in either Part 1 or Part 2 Bundles)
*Exponent Rule - Power Rule Google Slides Digital Version*
*Exponent Rule - Quotient Rule Google Form Digital Version
*Exponent Rule - Quotient Rule Google Slides Digital Version
*Exponent Rule - Product Rule Google Slides Digital Version
*Exponent Rule Puzzle GOOGLE SLIDES Activity
*****************************************************************************
How to get TPT credit to use on future purchases:
• Please go to your My Purchases page (you may need to login). Beside each purchase you'll see a Provide Feedback button. Simply click it and you will be taken to a page where you can give a quick rating and leave a short comment for the product. Each time you give feedback, TPT gives you feedback credits that you use to lower the cost of your future purchases. I value your feedback greatly as this is a new journey for my family and me. It helps me determine which products are most valuable for your classroom so I can create more for you. If you see any errors, please notify me ASAP. I will get it updated and send over a new version to you.
*****************************************************************************
* Find me in the SOCIAL world - These places you will find new resource postings, fun tidbits and more! *
**Click here to Follow me on Facebook
**Click here to Follow me on Instagram
**Click here to Follow me on Pinterest
**Click here to Follow my Blog
© 2017 Learning Made Radical (Kara Holland) Please note that this resource is for one teacher resource . Additional teachers need to purchase their own license. If you need to purchase several licenses, please contact me. kholland2007@gmail.com
Total Pages
5 pages
Answer Key
Included
Teaching Duration
40 minutes
### Standards
Log in to see state-specific standards (only available in the US).
Know and apply the properties of integer exponents to generate equivalent numerical expressions. For example, 3² × (3⁻⁵) = (3⁻³) = 1/3³ = 1/27.
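The worked example in that standard can be checked exactly with Python's `Fraction` type (a quick illustration, not part of the product itself):

```python
from fractions import Fraction

three = Fraction(3)

# 3^2 x 3^-5 = 3^-3 = 1/27, exactly (no floating-point rounding).
assert three**2 * three**-5 == three**-3 == Fraction(1, 27)

# The zero exponent rule itself: any nonzero base raised to the power 0 is 1.
assert all(Fraction(b)**0 == 1 for b in (2, 3, -5, 7))
```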
http://www.haskell.org/haskellwiki/index.php?title=User:Benmachine/Cont&oldid=44081 | 1,409,455,623,000,000,000 | text/html | crawl-data/CC-MAIN-2014-35/segments/1408500835872.63/warc/CC-MAIN-20140820021355-00256-ip-10-180-136-8.ec2.internal.warc.gz | 396,911,954 | 9,884 | # User:Benmachine/Cont
## 1 A practical Cont tutorial
It seems to me like `Cont` and `ContT` are way simpler than people make them. I think it's just a way to give a name to the "tail" of a do-block.
Warning: I'm going to use some metaphors here, like the Magic type, that probably shouldn't be interpreted perfectly literally. Try to understand the gist, rather than worrying about the details.
```contstuff :: Magic
contstuff = do
thing1
thing2
-- Now I want to manipulate the rest of the computation.
-- So I want a magic function that will give me the future to
-- play with.
  magic $ \rest ->
-- 'rest' is the rest of the computation. Now I can just do it,
-- or do it twice and combine the results, or discard it entirely,
-- or do it and then use the result to do it again... it's easy to
-- imagine why this might be useful.
thing3 -- these might get done once, several times,
thing4 -- or not at all.```
The question is, what type should `magic` have? Well, let's say the whole do-block results in a thing of type `r` (without thinking too hard about what this means). Then certainly the function we give `magic` should result in type `r` as well, since it can run that do-block. The function should also accept a single parameter, referring to the tail of the computation. That's the rest of the do-block, which has type `r`, right? Well, more or less, with one caveat: we might bind the result of `magic`:
```
x <- magic $ \rest -> -- ...
thingInvolving x
```
so the rest of the do-block has an `x` in it that we need to supply (as well as other variables, but `magic` already has access to those). So the rest of the do-block can be thought of as a bit like `a -> r`. Given access to the rest of that do-block, we need to produce something of type `r`. So our lambda has type `(a -> r) -> r` and hence

```
magic :: ((a -> r) -> r) -> Magic
```

... oh but this looks familiar...
```
newtype Cont r a = Cont { runCont :: (a -> r) -> r }
-- Magic = Cont r a
magic = Cont
```
Tada! The moral of the story is, if you got up one morning and said to yourself "I want to stop in the middle of a do-block and play about with the last half of it", then Cont is the type you would have come up with.
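To make the "do it twice and combine the results" idea concrete, here is a small self-contained sketch (the names `twice` and `demo` are made up for illustration, not from the original page):

```
newtype Cont r a = Cont { runCont :: (a -> r) -> r }

-- 'twice' grabs the rest of the computation and runs it twice,
-- once with 1 and once with 10, adding the results together.
twice :: Cont Int Int
twice = Cont $ \rest -> rest 1 + rest 10

demo :: Int
demo = runCont twice (\x -> x * 2)
-- rest is (\x -> x * 2), so demo = (1 * 2) + (10 * 2) = 22
```

Whatever function you hand to `runCont` plays the role of "the rest of the do-block"; passing `id` instead would give 11.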
Now you know what the Cont type is, you can implement pretty much all of its type class instances just from there, since the types force you to apply this to that and compose that with this. But that doesn't really help you to understand what's going on: here's a way of using the intuition introduced above to implement `Functor` without thinking about the types too much:
```
instance Functor (Cont r) where
  fmap f (Cont g) = -- ...
```
Well, we've got to build a
Cont
value, and those always start the same way:
```
fmap f (Cont g) = Cont $ \rest -> -- ...
```
Now what? Well, remember what `g` is. It comes from inside a `Cont`, so it looks like `\rest -> stuffWith (rest val)`, where `val` is the 'value' of the computation (what would be bound with `<-`). So we want to give it a `rest`, but we don't want it to be called with the 'value' of the computation - we want `f` to be applied to it first. Well, that's easy:

```
fmap f (Cont g) = Cont $ \rest -> g (\val -> rest (f val))
```
Load it in `ghci` and the types check. Amazing! Emboldened, let's try `Applicative`:

```
instance Applicative (Cont r) where
  pure x = Cont $ \rest -> -- ...
```

We don't want to do anything special here. The rest of the computation wants a value, let's just give it one:

```
pure x = Cont $ \rest -> rest x
```
What about `<*>`?

```
Cont f <*> Cont x = Cont $ \rest -> -- ...
```

This is a little trickier, but if we look at how we did `fmap` we can guess at how we get the function and the value out to apply one to the other:

```
Cont f <*> Cont x = Cont $ \rest -> f (\fn -> x (\val -> rest (fn val)))
```
`Monad` is a harder challenge, but the same basic tactic applies. Hint: remember to unwrap the newtype with `runCont`, `case`, or `let` when necessary.
### 1.1 So what's callCC?
"Call with current continuation". Basically, you use
callCC
like this:
``` ret <- callCC \$ \exit -> do
-- A mini Cont block.
-- You can bind things to ret in one of two ways: either return
-- something at the end as usual, or call exit with something of
-- the appropriate type, and the rest of the block will be ignored.
when (n < 10) \$ exit "small!"
when (n > 100) \$ exit "big!"
return "somewhere in between!"```
See if you can work out the type (not too hard: work out the type of exit first, then the do block) then the implementation. Try not to follow the types too much: they will tell you what to write, but not why. Think instead about the strategies we used above, and what each bit means. Hints: remember that `exit` throws stuff away, and remember to use `runCont` or similar, as before.
### 1.2 ContT

The thing to understand with `ContT` is that it's exactly the same trick. Literally. To the point where I think the following definition works fine:

```
newtype ContT r m a = ContT (Cont (m r) a)

runContT :: ContT r m a -> (a -> m r) -> m r
runContT (ContT m) = runCont m
```

The only reason the newtype exists at all is to shuffle the type parameters around a bit, so that instances of things like `MonadTrans` can be defined.
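For instance, a `MonadTrans`-style `lift` comes out as a one-liner under this wrapped-`Cont` definition (a sketch, not from the original page): run the underlying action and hand its result to the continuation.

```
lift :: Monad m => m a -> ContT r m a
lift m = ContT (Cont (\rest -> m >>= rest))
```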
### 1.3 Some real examples
I find the examples in the mtl doc unconvincing. They don't do anything genuinely daring, and some of them don't use the features of Cont at all – they work in any monad! Here's a more complex example:

```
-- This tends to be useful.
runC :: Cont a a -> a
runC c = runCont c id

faff :: Integer -> Maybe Integer
faff n = runC $ do
  test <- Cont $ \try -> case try n of
    Nothing -> try (2*n)
    res -> fmap (subtract 10) res
  return $ if test < 10 then Nothing else Just test
```
In `faff`, the rest of the do-block (the final `return` statement) is run with `test = n`: if it succeeds then we subtract 10 from the result and return it. If it fails we try again, but with `(2*n)`: note that if this succeeds, we don't subtract 10.
As an exercise, work out how to make the function return:
1. Nothing
2. Just 12
3. Just 0
(just mapping it over a large range and filtering the results is discouraged!)
### 1.4 Acknowledgements
I think it was the legendary sigfpe who made this click for me, after thinking about how this works:
and there's also this:
which is more-or-less the above trick but in a bit more detail.
### 1.5 Disclaimer
I'm currently unsure if I've fallen victim to Brent's (in)famous monad tutorial fallacy. I know that there was more in my learning process than I've been able to reproduce above, but I do think I'm doing this in a genuinely new style – `Cont` always seems to be presented in such vague terms, and people don't provide actual examples of the way it works.
### 1.6 A moderately heretical conclusion
Sometimes looking at types isn't the best way to understand things! I've implemented the `Cont` type class instances before, and the types ensure that you pretty much can't help but do it the right way. But that doesn't tell you what you're doing, or why you did it that way. I never understood `Cont` until I came across the natural interpretation of its content. It's a bit like fitting the pieces of a puzzle together without looking at the picture.
On Sunday, May 23, 2021 8:16:49 PM
File Name: 3d shapes names faces edges and vertices .zip
Size: 1373Kb
Published: 23.05.2021
Three dimensional shapes can be picked up and held because they have length, width and depth. Faces are the surfaces that make up the outside of a shape. Edges are the lines in between the faces. Vertices are the corners where edges meet.
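A handy cross-check when counting these parts: for any convex polyhedron, the numbers of faces ($F$), vertices ($V$) and edges ($E$) satisfy Euler's formula:

```latex
F + V - E = 2 \qquad \text{e.g. for a cube: } 6 + 8 - 12 = 2
```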
Welcome to the Math Salamanders 3d Shapes Worksheets. Here you will find our range of free Shape worksheets which involve naming and identifying 3d shapes and their properties. There are a range of worksheets at different levels, suitable for children from Kindergarten up to 3rd grade. If you are looking for some different geometry worksheets, such as symmetry or 2d shape sheets, try one of the links below!
## Faces, Edges and Vertices of 3D Shapes
Welcome to our 3d Shapes Worksheets page. Here you will find our selection of free shape worksheets to help you child to name and learn some of the properties of the 3d shapes they will meet at 2nd grade. The main focus on this page is the identification and properties of different types of 3d shapes: cubes, cuboids, prisms, pyramids, cones, cylinders and spheres. During 2nd grade, children are introduced to a wider range of 2d and 3d shapes. Children also start to look more closely at the properties shapes have to categorise them.
## edges faces and vertices worksheet
Faces, edges, and vertices worksheets are a must-have for your grade 1 through grade 5 kids to enhance vocabulary needed to describe and label different 3D shapes. Children require ample examples and adequate exercises to remember the attributes of each 3D figure. Begin with the printable properties of solid shapes chart, proceed to recognizing and counting the faces, edges, and vertices of each shape, expand horizons while applying the attributes to real-life objects, add a bonus with comparing attributes of different solid figures and many more pdf worksheets.
2. Rachelle P.
A 3D shape is described by its edges, faces, and vertices (vertex is the singular form of vertices).
Get answers to all NCERT exercise questions, examples, supplementary exercise questions and Sample Papers with expert created video lecture, pdf notes and assignments for Integrals Class 12 Maths.
Summary:
Integration is the inverse process of differentiation. In the differential calculus, we are given a function and we have to find the derivative or differential of this function, but in the integral calculus, we are to find a function whose differential is given. Thus, integration is a process which is the inverse of differentiation.
Let $\frac{d}{dx}F(x) = f(x)$. Then, we write $\int{f(x) dx} = F(x) + C$. These integrals are called indefinite integrals or general integrals, C is called constant of integration. All these integrals differ by a constant.
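For example, since $\frac{d}{dx}\left(\frac{x^{3}}{3}+C\right)=x^{2}$ for every constant $C$, one family of antiderivatives is:

```latex
\int x^{2}\, dx = \frac{x^{3}}{3} + C
```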
From the geometric point of view, an indefinite integral is a family of curves, each of which is obtained by translating one of the curves parallel to itself upwards or downwards along the y-axis.
## Lectures for Integrals Class 12 Maths
• Lectures 1 and 2 are based on NCERT Exercises 7.1, 7.2 and 7.3
• These lectures also have questions from R. D. Sharma, NCERT Exemplar Problems, CBSE Question Bank (Support Material) and books from other states.
## Lecture - 1
This Lecture is based on First Method Direct Integration. This lecture has five parts with detailed explanation of concepts, derivations of all identities or formulas and explanation of sixty (60) practice questions from various books of integrals class 12 maths.
## Lecture - 2
This Lecture is based on the Second Method: Substitution. This lecture has eight parts with detailed explanation of concepts, derivations and eighty three (83) practice questions from various textbooks. Although this lecture is based on the Substitution method, there are also some questions based on the previous lecture on Direct Integration.
## NCERT Exercise 7.4
This section has complete explanation of concepts based on NCERT Exercise 7.4 with each and every NCERT solution.
https://www.hackmath.net/en/word-math-problems/quadratic-equation?tag_id=49&page_num=5 | 1,638,568,107,000,000,000 | text/html | crawl-data/CC-MAIN-2021-49/segments/1637964362919.65/warc/CC-MAIN-20211203212721-20211204002721-00301.warc.gz | 871,743,190 | 7,152 | Quadratic equation + area - math problems
Number of problems found: 101
• Perimeter and legs
Determine the perimeter of a right triangle if the length of one leg is 75% of the length of the second leg and its area is 24 cm2.
• Triangle
Calculate the triangle sides if its area S = 630 and the second cathetus is shorter by 17.
• Cylinder diameter
The surface of the cylinder is 149 cm2. The cylinder height is 6 cm. What is the diameter of this cylinder?
• Do you solve this?
Determine area S of rectangle and length of its sides if its perimeter is 102 cm.
• Diagonals in the diamond
The length of one diagonal in a diamond is 24 cm greater than the length of the second diagonal, and the diamond area is 50 m2. Determine the sizes of the diagonals.
• Tiles
From how many tiles 20 cm by 30 cm, we can build a square of maximum dimensions if we have maximum 881 tiles.
• Rectangle - sides
What is the perimeter of a rectangle with area 266 cm2 if length of the shorter side is 5 cm shorter than the length of the longer side?
• Rhombus
The rhombus with area 137 has one diagonal that is longer by 5 than the second one. Calculate the length of the diagonals and rhombus sides.
• Built-up area
John built up an area of 5 x 7 = 35 m2 with a building whose walls are 30 cm thick. By how many centimeters would the wall thickness have to be reduced so that the built-up area falls by 9%?
• Garden
The area of a square garden is 2/9 of triangle garden with sides 160 m, 100 m, and 100 m. How many meters of fencing need to fence a square garden?
• Similarity
The area of the regular 10-gon is 563 cm2. The area of similar 10-gon is 606 dm2. What is the coefficient of similarity.
• Coins
Harvey had saved up a number of 2-euro coins. He stored the coins in a single layer in a square; 6 coins were left over. To make a square with one more row, he was missing 35 coins. How many euros does he have?
• Rectangular cuboid
The rectangular cuboid has a surface area 5334 cm2, and its dimensions are in the ratio 2:4:5. Find the volume of this rectangular cuboid.
• Trapezoid MO
The rectangular trapezoid ABCD with the right angle at point B, |AC| = 12, |CD| = 8, diagonals are perpendicular to each other. Calculate the perimeter and area of the trapezoid.
• Cubes
The surfaces of two cubes, one of which has an edge 48 cm shorter than the other, differ by 36288 dm2. Determine the lengths of the edges of these cubes.
• R triangle
Calculate the right triangle area whose longer leg is 6 dm shorter than the hypotenuse and 3 dm longer than the shorter leg.
• Special cube
Calculate the edge of cube, if its surface and its volume is numerically equal number.
• Heron backlaw
Calculate missing side in a triangle with sides 25 and 13 and area 152.
• Right triangle
Legs of the right triangle are in the ratio a:b = 2:8. The hypotenuse has a length of 87 cm. Calculate the perimeter and area of the triangle.
• Rectangle
Area of rectangle is 3002. Its length is 41 larger than the width. What are the dimensions of the rectangle?
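Each of these reduces to a quadratic equation in one unknown. As an illustrative sketch (not part of the original problem set), here is the last rectangle problem above solved with the quadratic formula in Python:

```python
import math

# Rectangle: area 3002, length is 41 larger than the width.
# The width w satisfies w * (w + 41) = 3002, i.e. w^2 + 41*w - 3002 = 0.
area, diff = 3002, 41
disc = diff ** 2 + 4 * area          # b^2 - 4ac with a = 1, b = 41, c = -3002
w = (-diff + math.sqrt(disc)) / 2    # keep the positive root
length = w + diff

print(w, length)                     # 38.0 79.0 (check: 38 * 79 = 3002)
```

The same pattern (set up ax² + bx + c = 0 and keep the positive root) handles most of the other problems in the list.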
This is an elaboration on a question/answer posted on MathOverflow.
As per the MO question, the actual question is (with the typo corrected)
Suppose the cocycle $u\in C^{2p}(X;Z)$ satisfies $\delta u=2a$ for some $a$.
i. Show that $u \cup_0 u + u\cup_1 \delta u$ is a cocycle mod 4.
ii. Define a natural operation, the Pontrjagin square, $P_2:H^{2p}(-;Z_2)\rightarrow H^{4p}(-;Z_4)$.
iii. Show that $\rho P_2(u)=u\cup u$, where $\rho:H^*(-;Z_4)\rightarrow H^*(-;Z_2)$ denotes reduction mod 2.
iv. Show that $P_2(u+v)=P_2(u)+P_2(v)+u\cup v$, where $u\cup v$ is computed with the non-trivial pairing $Z_2 \otimes Z_2\rightarrow Z_4$.
I am quite stuck on part (iv). If you just plug in, expand everything and simplify, you get
$P_2(u+v)=P_2(u)+P_2(v)+u\cup_0 v+v\cup_0 u+u\cup_1 \delta v+v\cup_1 \delta u$.
Now the answer given on MathOverflow is that $u \cup_0 v$ is not quite commutative "you need to subtract off a coboundary (involving the cup-1 product). So essentially you need to take the expression that you already have and introduce correction coboundary terms (which don't change the cohomology class) to reduce it to the form you're interested in."
Now the only real cup-1 product that seems viable is $$\delta(u \cup_1 v) = -\delta u \cup_1 v - u \cup_1 \delta v + u\cup_0v - v \cup_0 u$$
(or swap $u$ and $v$ in the formula). I write -1, but I am thinking of the coefficients modulo 4.
No matter what I do I can't seem to get the expression to simplify nicely. Moreover, I realised I can't work out what type of expression gives $u \cup v$ - I don't quite understand the part about computing it using the non-trivial pairing $Z_2 \otimes Z_2\rightarrow Z_4$. What sort of expression should I be seeking?
Update: I'll post some more working here. I am focusing on the expression $$f=u\cup_0 v+v\cup_0 u+u\cup_1 \delta v+v\cup_1 \delta u$$
From the coboundary formula above we see we can write this as (noting that $\delta(u \cup_1 \delta v)$ does not change the cohomology class, and taking everything modulo 4)
\begin{align} f &= \delta u \cup_1 v + u \cup_1 \delta v + v \cup_0 u + v\cup_0 u+u\cup_1 \delta v+v\cup_1 \delta u \\ &= \delta u \cup_1 v + 2\left(v \cup_0 u\right) +v\cup_1 \delta u \end{align}
Now consider $\delta(\delta u \cup_2 v)$. By the coboundary formula we have (again modulo 4)
$$\delta(\delta u \cup_2 v) = - \delta u \cup_1 v - v \cup_1 \delta u$$ and thus we have $$f = 2\left(v \cup_0 u\right)$$
and so
$$P_2(u+v)=P_2(u)+P_2(v)+2\left(v \cup_0 u\right)$$
Is this right? I'm still interested in how to get to an expression involving $u \cup v$?
Related meta thread for reference – Juan S Jul 30 '11 at 8:30
I posted a comment to Tyler's answer in order to let him know about your question. – t.b. Jul 30 '11 at 11:20
Thanks for that Theo! – Juan S Jul 30 '11 at 12:14
My guess is that the main difficulty in simplifying is that you're stuck with terms $\delta u \cup_1 v$ and $v \cup_1 \delta u$ (signs ignored, because these are multiples of 2 and we're working mod 4). If the cup-1 product was commutative, you would be done. However, it's not commutative, but only commutative up to chain homotopy. The correction factor is a cup-2 product.
Dear Quirk, you've essentially got the correct formula. I see that I was too casual in my post on MathOverflow about the method to get the solution. If you instead add $\delta(u \cup_1 v)$ rather than subtract it, your final formula should be $f = 2 (u \cup_0 v) = 2 (u \cup v)$. – Tyler Lawson Jul 31 '11 at 1:33
Dear Tyler - don't we require $u \cup v$ not $2(u \cup v)$ ? – Juan S Jul 31 '11 at 2:09
Quirk, well... In the question you are computing using the nontrivial pairing $Z/2 \otimes Z/2 \to Z/4$, which sends $1 \otimes 1$ to $2$. The notation then becomes a little misleading, in my opinion! If you've chosen integer-valued cochain representatives for $u$ and $v$ then this pairing has a lift by first taking the integer-level cup product, multiplying by two, and then reducing mod 4. – Tyler Lawson Jul 31 '11 at 5:00 | 1,319 | 4,086 | {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 1, "equation": 0, "x-ck12": 0, "texerror": 0} | 3.78125 | 4 | CC-MAIN-2016-30 | latest | en | 0.836478 |
## What does the Greek root word kilo mean?
chilioi
Is kilo a Greek word?
Definition for kilo (2 of 2) a Greek combining form meaning “thousand,” introduced from French in the nomenclature of the metric system (kiloliter); on this model, used in the formation of compound words in other scientific measurements (kilowatt).
### What does the word kilo mean?
: a unit of mass or weight equaling one thousand grams or approximately 2.2 pounds : kilogram Each sack weighs 50 kilos.
What is the root word of kilogram?
The word kilogramme or kilogram is derived from the French kilogramme, which itself was a learned coinage, prefixing the Greek stem of χίλιοι khilioi “a thousand” to gramma, a Late Latin term for “a small weight”, itself from Greek γράμμα.
## What’s another name for 1000 grams?
Cultural definitions for kilogram A unit of mass in the metric system, equal to one thousand grams.
What is 1kg mass?
It is defined as the mass of a particular international prototype made of platinum-iridium and kept at the International Bureau of Weights and Measures. It was originally defined as the mass of one liter (10-3 cubic meter) of pure water. At the Earth’s surface, a mass of 1 kg weighs approximately 2.20 pounds (lb).
### What is a kilogram in math?
A Metric measure of mass (which we feel as weight). The abbreviation is kg. 1 kg = 1000 grams. 1 kg ≈ 2.2 pounds.
How much does a kilo weigh?
About 2.2 pounds.
## Which is heavier 1Kg or 1lb?
A kilogram (kg) is stated to be about 2.2 times heavier than a pound (represented as lb). Thus, one kilo of mass is equal to about 2.2 lb.
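The 2.2 figure comes from the international definition of the pound as exactly 0.45359237 kg, so the conversion is easy to check (a quick sketch in Python):

```python
KG_PER_LB = 0.45359237          # one pound is defined as exactly this many kilograms
lb_per_kg = 1 / KG_PER_LB

print(round(lb_per_kg, 4))      # 2.2046, so 1 kg outweighs 1 lb
```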
How heavy is 1Kg example?
A kilogram is about:
• the mass of a liter bottle of water
• very close to 10% more than 2 pounds (within a quarter of a percent)
• very very close to 2.205 pounds
### How much is a kilo of clothing?
The premise is very much explained in the title – you pay for your vintage clothes by the kilo – normally £15.00 per kilo. How does it work? You go around putting all of your favourite items in a huge bag and then at the end they weigh and you pay.
How much clothes is 7kg?
7kg washing machine – can fit around 35 T-Shirts or a double duvet and is suited for a small sized family. 8kg washing machine – can fit around 40 T-Shirts or a queen sized duvet and is suited for a medium sized family.
## How many dresses is 1 kg?
6–7 garments will weigh 1 kg. Normally one upper garment like a T-shirt/shirt weighs around 150–170 grams.
How many clothes can we wash in a 6kg washing machine?
So by following this guideline, a 6kg washing machine should be able to efficiently clean a load of around 30 shirts, or a dozen bath towels.
### What is meaning of 6.5 kg in washing machine?
The size or load capacity of a washing machine is indicated in kilograms. It is the maximum weight of laundry that the machine can clean efficiently and comfortably without wasting excess power, water or time. This weight is measured in terms of dry clothes and not wet clothes.
How many clothes is 2 kg?
If it is men’s clothing, 1 kilo will be about 1 pair of pants or 4 T-shirts. A pair of jeans will be between 1.5 and 2 kilograms. If it is woman’s clothing, 1 kilo will be about a pair of jeans or T-shirts.
## How many pants can go in a 7kg washing machine?
What Capacity Washing machine you need? Get the right size Washer
| Washing machine capacity | Ideal for | Clothes per load |
| --- | --- | --- |
| 7 kg (Medium) | Family of 3-4 | 3 shirts, 1 pair of children's jeans, 2 pairs of adult jeans |
| 8 kg (Medium) | Family of 4-6 | 3 shirts, 3 pairs of adult jeans |
| 10 kg (Large) | Large families | 4 shirts, 3 pairs of adult jeans |
How many clothes is 8kg?
An 8 kg machine can wash 40 t-shirts, a 10 kg machine 50 t-shirts, and a 12 kg machine 60 t-shirts. For larger households or those who want to wash bigger duvets, larger capacity washing machines can be really useful.
### What does 7kg wash capacity mean?
So, for a 7kg washing machine, you can expect to fit a maximum of seven kilograms of dry laundry. For example, a 7kg washing machine will only wash a full 7kg load on specific programs.
How many beds fit in a 7kg washing machine?
In a 7kg washing machine you can wash five sheets easily, depending on the weight of the sheets.
## What is meant by 6 kg washing machine?
The kilogram capacity is in reference to dry clothing, yes, dry. When you see a washer that says its capacity is 6kg, know that the 6kg is the weight of dry clothes you can put in. Now obviously when these clothes get wet they might weigh up to and even above 12kg.
Which washing machine is best?
Top Washing Machine to Buy in India
• LG 8 Kg Inverter Fully Automatic Top Loading Washing Machine – T9077NEDL1.
• LG 6.5 Kg Inverter Fully Automatic Top Loading Washing Machine – T7581NDDLG.
• Bosch 7 Kg Fully Automatic Front Loading Washing Machine – WAK24168IN.
### Can bedsheets be washed in washing machine?
Most sheets can be washed at home in your washing machine, but specialty fabrics may require careful consideration. Wash with the hottest water temperature setting listed on the care label. Hotter water kills most germs and also takes care of dust mites that thrive in bedding.
Should bed sheets be washed in hot water?
For the best clean, wash sheets in the hottest water on the heavy-duty cycle. Washing bedding in water that’s too hot can cause them to shrink and fade over time. Similarly, constant washing on the heavy-duty cycle may cause them to wear out.
## How often should you wash bed sheets?
Most people should wash their sheets once per week. If you don’t sleep on your mattress every day, you may be able to stretch this to once every two weeks or so. Some people should wash their sheets even more often than once a week.
How do hotels keep sheets white?
First, they wash with laundry detergent. Then, they wash again with fabric softener. The final wash includes bleach to bring out the white color. In other words, hotels don’t bleach the linens within an inch of its life and call it “good.”
### What laundry detergent do hotels use?
When turning over hundreds of pounds of laundry every day, there is no time to waste fighting stains individually as you might at home. Hotels need a laundry detergent that fights stains the first time through. That’s why many hotels choose to use HTD Heavy Duty Detergent.
What are the worst laundry detergents?
Next: These are the absolute worst detergents money can buy.
• Xtra ScentSations.
• Trader Joe’s Liquid Laundry HE.
• Woolite Everyday.
• Home Solv 2X Concentrated.
• Xtra Plus OxiClean.
• Sun Triple Clean.
• Arm & Hammer Toss ‘N Done Ultra Power Paks.
• Tide Plus Ultra Stain Release and Persil ProClean Power-Liquid 2in1.
## Why do hotels have white bed sheets?
The reason behind using white colour bed sheets is that they don't hide stains. Therefore, all the guests remain alert while eating food or doing anything else on the bedsheet.
Understand this common type of special counting on the GMAT!
Question #1: On Monday, there were 29 bananas in the cafeteria. No new bananas were brought in after Monday. Two days later, on Wednesday, there were 14 bananas left. How many were eaten in that time?
Question #2: In January of last year, MicroCorp start-up had 14 employees. All of those employees have stayed on through June. In June of last year, they had 29 employees. How many new employees did they hire?
Clearly, for both of these questions, we simply need the difference: 29 – 14 = 15. That's the correct answer for each of them. Now, try a very different kind of question, with a different answer.
Question #3: A certain workshop begins on the 14th of this month and ends on the 29th of this month. How many days long is this workshop?
Question #4: How many multiples of 5 are there from 70 to 145?
Those two have the same answer as each other, but it’s not the same as the answer for #1 and #2.
## Inclusive
We say the final two require “inclusive” counting, because both endpoints are included, whereas in the first two questions, not both endpoints were included. What do I mean?
In Question #1, the 29th banana was eaten, the 28th banana was eaten, … the 15th banana was eaten, BUT the 14th banana was not eaten, because there were 14 remaining. The endpoint 14 is “not included” in those eaten.
In Question #2, employees #1 – #14 were already hired by last January. Employees #15 – #29 were new hires. The lower endpoint, employee #14, was “not included” in the group of new hires.
Now, by contrast, the 14th is the first day of the workshop, and the 29th is the last day of the workshop. Both the 14th and 29th are days when the workshop is happening. Both endpoints are included.
Similarly, in #4, the multiples of 5 from 70 to 145 include both 70 and 145. Again, both endpoints are included. Incidentally, the connection to the other three questions: 70 = 5*14 and 145 = 5*29, so the list of multiples of 5 from 70 to 145 is really 5 times the list of consecutive integers from 14 to 29, so the underlying question is really: how many consecutive integers are there from 14 to 29?
As may be apparent, the inclusive scenario always includes exactly one more member than the “not included” scenario. Therefore, the formula for inclusive counting is
number included = (last) – (first) + 1
Notice, this only works if the numbers are consecutive. If the question is like #4, you may have to notice what change you can make to match the given list with a list of consecutive integers.
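The “last − first + 1” rule is easy to sanity-check with a short script (the helper name below is mine, not from the article):

```python
def inclusive_count(first, last, step=1):
    """Count the terms of an arithmetic list from `first` to `last`,
    with BOTH endpoints included, stepping by `step`."""
    return (last - first) // step + 1

# Question #3: a workshop running from the 14th to the 29th, inclusive.
print(inclusive_count(14, 29))      # prints 16 (days)

# Question #4: multiples of 5 from 70 to 145; dividing through by 5
# reduces this to counting the consecutive integers 14..29.
print(inclusive_count(70, 145, 5))  # prints 16 (multiples)
```

Compare with Questions #1 and #2, where the answer is the plain difference 29 – 14 = 15.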
## Author
• Mike served as a GMAT Expert at Magoosh, helping create hundreds of lesson videos and practice questions to help guide GMAT students to success. He was also featured as "member of the month" for over two years at GMAT Club. Mike holds an A.B. in Physics (graduating magna cum laude) and an M.T.S. in Religions of the World, both from Harvard. Beyond standardized testing, Mike has over 20 years of both private and public high school teaching experience specializing in math and physics. In his free time, Mike likes smashing foosballs into orbit, and despite having no obvious cranial deficiency, he insists on rooting for the NY Mets. Learn more about the GMAT through Mike's Youtube video explanations and resources like What is a Good GMAT Score? and the GMAT Diagnostic Test.
https://qa.answers.com/entertainment/What_is_Q3_of_2009
What is Q3 of 2009?

Updated 4/28/2022 (Wiki User, 14y ago)

July, August, and September make up Q3 of the year 2009.

Related questions
When will Final Fantasy Dissidia come out in the US?
It is scheduled to be released in Q3 of 2009.
What is q3 plus 2336?
q3 + 2336 is an algebraic expression which cannot be simplified.
When does the omnia 2 come out?
Later this year, in Q3 2009, though it depends on Samsung's fiscal year: it could mean July to September or October to December. Many companies have Oct-Dec as Q4, but some have it as Q3, with Q4 being Jan-Mar.
What is the formula for coefficient of quartile deviation?
coefficient of quartile deviation: (Q3-Q1)/(Q3+Q1)
How do you do interquartile range step by step?
Step 1: Find the upper quartile, Q3.
Step 2: Find the lower quartile, Q1.
Step 3: Calculate IQR = Q3 - Q1.
If q1q2q3 are three quartiles then Coefficent of quartile deviation is?
The coefficient of quartile deviation is (Q3 - Q1)/(Q3 + Q1).
How do you find the maximum that is less than the limit for the Upper Quartile. In other words I want to find the maximum value of a dataset that EXCLUDES outliers?
There is no universally agreed definition of an outlier. One conventional definition classifies an observation x as an outlier if x > Q3 + 1.5*IQR = Q3 + 1.5*(Q3 - Q1). A similar definition applies to outliers that are too small. So, to find the maximum that is not an outlier, find the upper and lower quartiles (Q3 and Q1 respectively) and then take the largest observation that is smaller than Q3 + 1.5*(Q3 - Q1).
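That recipe can be sketched with Python's standard library (the data set and helper name here are illustrative, not from the original answer):

```python
import statistics

def outlier_fences(data):
    """Conventional 1.5*IQR fences: observations outside [lo, hi]
    are classed as outliers."""
    q1, _, q3 = statistics.quantiles(data, n=4)  # lower and upper quartiles
    iqr = q3 - q1
    return q1 - 1.5 * iqr, q3 + 1.5 * iqr

data = [2, 4, 5, 5, 6, 7, 8, 9, 30]
lo, hi = outlier_fences(data)                      # (-1.5, 14.5) for this data
max_non_outlier = max(x for x in data if x <= hi)  # 30 is fenced out, so 9

# The related coefficient of quartile deviation is (Q3 - Q1)/(Q3 + Q1):
q1, _, q3 = statistics.quantiles(data, n=4)
coeff = (q3 - q1) / (q3 + q1)
```

Note that `statistics.quantiles` uses the "exclusive" interpolation method by default, so the exact quartile values depend on that choice.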
Is World Championship Athletics also known as Summer Athletics 2009 coming out for ps2?
World Championship Athletics/Summer Athletics 2009 is expected to be released September 29th, 2009 (Q3 2009) for PS2, Xbox 360, Wii and PC consoles. 27 days left!
When is Q3?
Q3 starts July 1st and ends in September. Think about splitting the year into quarters and taking the months in the third quarter.
http://turkishmedals.net/isometric-drawing-worksheet-maths

# Isometric Drawing Worksheet Maths
Here you are at [blog]. Many people have searched online for data, guidelines, articles, or other references for their purposes, just like you. Do you come here to find new, unique information about Isometric Drawing Worksheet Maths? How many websites have you read to learn more about Isometric Drawing Worksheet Maths?

Isometric Drawing Worksheet Maths is one of the most-discussed topics at the moment. We know this from search-engine statistics such as Google AdWords or Google Trends. In an effort to provide useful information to our audience, we have attempted to find the closest relevant photo about Isometric Drawing Worksheet Maths. And, as you can see here, this picture has been taken from a trustworthy resource.

We expect this Isometric Drawing Worksheet Maths picture will give you some extra points for your needs, and we hope you enjoy it. We know we might have different views about this, but at least we have tried our best.

This picture has been uploaded by our team. You can easily browse more valuable posts in the [cat] group. We thank you for your visit to our website. Make sure you get the information you are looking for. Do not forget to share and like our reference to help further develop our website.
## D Isometric Drawing Worksheets Solve My Maths On Orthogonal And
### Isometric Drawing Exercises For Kids Cerca Con Google Art By
#### 63 Best Isometric Drawing Images On Pinterest Perspective
##### Isometric Letters Good To Know When I Am Drawing Typography
###### 3D Front Elevation Side And Plan Isometric Drawing Part 2 YouTube
9 Best Drawing Images On Pinterest Technical Drawings
3d Isometric Drawing At GetDrawings Com Free For Personal Use 3d
Isometric Drawing Worksheet Ks3 Images Worksheet Math For Kids
Isometric Drawing Worksheet Maths Choice Image Worksheet For Kids
1 Cm Isometric Grid Paper Portrait A Math Worksheet Freemath
145 Best Isometric Drawing Images On Pinterest Technical Drawings
3D Isometric Drawing Worksheets 3D Drawing Solve My Maths
Isometric Drawing Exercise Draw Drawi On How To Draw D Shapes
Quiz Worksheet Isometric Drawing Study Com
Isometric Drawing Worksheet Choice Image Worksheet For Kids Maths
3D Isometric Drawing Worksheets Image Gallery Isometric Drawings
Beautiful Isometric Drawing Worksheet Maths Inspiration Worksheet
Isometric Drawings Lessons Tes Teach
Collection Of Difficult Isometric Drawing High Quality On Isometric
Education Info Pinterest Math
1 3 Drawings And Models Foundational Orthographic And Isometric
Steps Isometric Drawing Worksheet 2 Question 4 YouTube
Workbooks A Isometric Drawing Worksheets Free Printable On Its All
06 Isometric Drawing CBSE MATHS YouTube
What Is An Isometric Drawing Definition Examples Video
25 Best Isometric Images On Pinterest Technical Drawings
1 3 Drawings And Models Foundational Orthographic And Isometric
3D Isometric Drawing Worksheets Worksheets Mrs Lay S Webpage 2011
Printable Isometric Graph Paper Zoey S Room Pinterest Graph
GCSE REVISION Drawing On Isometric Paper YouTube
A Isometric Technical Graphics
Isometric Drawing Paper Thatswhatsup
How To Draw D Shapes Worksheet Drawing Isometric Pap On Isometric
Pin By Miki On Technical Drawings Pinterest Isometric Drawing
Isometric Drawing Exercises For Kids Search Google Visual
Design Process Orthographic Views Jpg 760 647 Graphics
3D Isometric Drawing Worksheets How To Draw A Cube On An Isometric
Maths Higher Worksheets
Isometric Dot Paper Customizable STEM Sheets
Imagen Relacionada Drawing Tutorial Pinterest Drawings And
28 Collection Of Easy Isometric Drawing Exercises High Quality
Simple Drawing Exercises Simple Isometric Drawing Exercises Unit 1
Orthographic Projection Worksheets The Best Worksheets Image
Cm Isometric Portrait Math Worksheet Freemath Cm Online Grid Paper
Simple Drawing Exercises Simple Isometric Drawing Exercises
3D Shapes Worksheets PDF Properties Of 3D Shapes Worksheet
28 Collection Of Isometric Drawing Exercises Pdf High Quality
Shape Drawing Worksheets At GetDrawings Com Free For Personal Use
28 Collection Of Basic Isometric Drawing Exercises High Quality
1886 Technical Drawing Antique Math Geometric Mechanical Drafting
Drawing Template Paper Geocvcco Cm Grid Portrait A Math Worksheet
28 Collection Of Isometric Drawing And Oblique Drawing High
NCERT Solutions For Class 7th Maths Chapter 15 Visualising Solid
Excel Multiplication Worksheets Generator Math Horizontal
28 Collection Of Graph Paper For Isometric Drawing High Quality
NCERT Solutions For Class 7 Maths Chapter 15 Visualising Solid
Pictures Basic Technical Drawing Worksheets DRAWING ART GALLERY
1 Cm Isometric Grid Paper Landscape A Math Worksheet Freemath
Solved Draw Isometric Pictorials Of 1 Orthographic Proj
Maths Drawing At GetDrawings Com Free For Personal Use Maths
https://www.solutioninn.com/this-elementary-problem-begins-to-explore-propagation-delay-and-transmission

# This elementary problem begins to explore propagation delay and transmission delay
## Question:
This elementary problem begins to explore propagation delay and transmission delay, two central concepts in data networking. Consider two hosts, A and B, connected by a single link of rate R bps. Suppose that the two hosts are separated by m meters, and suppose the propagation speed along the link is s meters/sec. Host A is to send a packet of size L bits to Host B.
a. Express the propagation delay, d_prop, in terms of m and s.
b. Determine the transmission time of the packet, d_trans, in terms of L and R.
c. Ignoring processing and queuing delays, obtain an expression for the end-to-end delay.
d. Suppose Host A begins to transmit the packet at time t = 0. At time t = d_trans, where is the last bit of the packet?
e. Suppose d_prop is greater than d_trans. At time t = d_trans, where is the first bit of the packet?
f. Suppose d_prop is less than d_trans. At time t = d_trans, where is the first bit of the packet?
g. Suppose s = 2.5·10^8 meters/sec, L = 120 bits, and R = 56 kbps. Find the distance m so that d_prop equals d_trans.
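For part (g), here is a quick numerical sketch (my own arithmetic, not an official solution): setting d_prop = d_trans gives m/s = L/R, so m = s·L/R.

```python
s = 2.5e8   # propagation speed, meters/sec
L = 120     # packet size, bits
R = 56e3    # transmission rate, bits/sec (56 kbps)

d_trans = L / R         # transmission delay, about 2.14 ms
m = s * d_trans         # distance at which d_prop equals d_trans
print(round(m / 1000))  # prints 536, i.e. roughly 536 km
```

So the two delays coincide when the hosts are roughly 536 km apart.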
https://www.indianuniversityquestionpapers.com/2016/03/apgenco-ae-mechanical-2012-question-paper.html

# APGENCO AE MECHANICAL 2012 Question Paper
Are you looking for APGENCO AE MECHANICAL 2012 question paper in PDF? Download it here from the contents provided below.
APGENCO AE MECHANICAL
Class: APPSC
Subject : General Studies - APGENCO Syllabus
QP.Type: Previous Year
Exam Year: 2012
Question Paper Download Format: PDF and Some samples of questions in text format
Attachments:
Some Sample Questions...
Q.14. In an impulse steam turbine, the steam expands in
Q.15. In an air craft gas turbine, the axial flow compressor is preferred because of
(A) High pressure rise (B) Low frontal area (C) High thrust (D) High propulsion
Q.16. The essential function of the carburetor in a S.I. Engine is to
(A) Meter the fuel into the air stream in an amount dictated by the load and speed
(B) Vaporize the fuel
(C) Distribute the fuel uniformly into all cylinders
(D) Both (B) and (C)
Q.17. The most popular firing order in case of a four cylinder in line IC engine is
(A) 1-2-3-4 (B) 1-3-2-4 (C) 1-3-4-2 (D) 1-2-4-3
Q.18.The air fuel ratio for idling speed of an automobile petrol engine is close to
(A)10:1 (B) 15:1 (C) 17:1 (D) 21:1
Q.19. A power screw is a device used for power transmission to convert
(A) Rotary motion into a linear motion (B) Linear motion into rotary motion
(C) Sliding motion (D) Centrifugal motion into rotary motion
Q.20. Creep depends on
(A) Pressure (B) Temperature (C) Load applied (D) Stiffness
Q.21. Ratio of force transmitted to the force applied is known as
(A) Damping factor (B) Damping coefficient
(C) Transmissibility (D) Magnification factor
Q.22. A simple gas turbine power plant used for air craft propulsion works on
(A) Rankine cycle (B) Carnot cycle (C) Brayton cycle (D) Otto cycle
Q.23. In EDM process, the tool and work piece are separated by
(A) An electrolyte (B) A metal conductor (C) Dielectric fluid (D) Metallic slum
Q.24. Surface roughness on a drawing is represented by
(A) Triangles (B) Circles (C) Squares (D) Rectangles
Q.25. Poor fusion in a welded joint is due to
(A) High welding speed (B) Dirty metal surface (C) Improper current (D) Lack of flux
Q.26. Mechanical properties of the metal improves in hot working due to
(A) Recovery of grains (B) Recystallisation
(C) Grain growth (D) Refinement of grain size
Q.27. A certain pilot study showed the percentage of occurrence of an activity as 50%, with a 95% confidence level and an accuracy of ±2%; the number of observations is
(A) 2500 (B) 2300 (C) 2200 (D) 2000
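A quick check of Q.27, assuming the standard work-sampling formula N = Z²·p·q / E² with the common Z ≈ 2 shortcut for a 95% confidence level (my reading of the problem, not an official solution):

```python
p, q, E, Z = 50, 50, 2, 2   # p, q = 50% occurrence each; E = ±2%; Z ≈ 2 for 95%
N = Z**2 * p * q // E**2    # N = Z^2 * p * (100 - p) / E^2, in percent units
print(N)                    # prints 2500, matching option (A)
```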
Q.28. An IC engine has a bore and stroke of 2 units each. The area to calculate heat loss can be taken as
(A) 4π (B) 5π (C) 6π (D) 8π
Q.29. In a weaving operation, the parameter to be controlled in the number of defects per 10 square yards of material, control chart appropriate for this task is
(A) P-chart (B) C-chart (C) R-chart (D) X-chart
Q.30. The profile of a cam in a particular zone is given by x = √3cosθ and y = sinθ. The normal to the cam profile at θ = π/4 is at an angle (with respect to x axis)
(A) π /4 (B) π /2 (C) π/3 (D) 0
Q.31. A heat engine operates at 75% of the maximum possible efficiency. The ratio of the heat source temperature to the heat sink temperature (in Kelvin) is 5/3. The fraction of the heat supplied that is converted to work is
(A) 0.6 (B) 0.4 (C) 0.3 (D) 0.7
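Similarly, a sanity check of Q.31, assuming the intended reading is that the engine achieves 75% of the Carnot limit:

```python
from fractions import Fraction

T_ratio = Fraction(5, 3)        # T_source / T_sink, in Kelvin
eta_max = 1 - 1 / T_ratio       # Carnot efficiency = 1 - T_sink/T_source = 2/5
eta = Fraction(3, 4) * eta_max  # engine runs at 75% of the limit
print(float(eta))               # prints 0.3, matching option (C)
```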
http://mathematica.stackexchange.com/questions/7784/how-can-i-regulate-play-sound-duration-in-mathematica-8

# How can I regulate 'Play sound duration' in Mathematica 8
How can I regulate the playback duration of Sound in Mathematica 8? I want to import and save a WAV file in a *.nb document and manipulate the sound duration, for example to select a sound fragment, split it, etc. I cannot present sounds with a duration of more than 5 or 7 seconds in wavelet form: there is a problem with CPU or memory ("very large..."). Thank you for any answers.
I tried to solve the problem in a primitive way:
file="C:/**.wav";
data=Flatten@Import[file,"Data"];
Import[file,"Options"];
data=Import[file,"Sound"];
L=Length[data]; Manipulate[Sound[data,t],{t,0.1,L}]
it works, but it is probably not optimal; it works slowly. In general, my problem arose from a desire to manipulate the time (duration) of the WAV sound file. Your solution is perfect, since it allows one to select the sample.

Before that I was faced with freezing when trying to view a 10-second file in wavelet form: «No more memory available. Mathematica kernel has shut down. Try quitting other applications and then retry». I used the "Direct conversion of acoustic data" Mathematica code (direct analysis of acoustic data using continuous wavelet transforms).
http://www.wolfram.com/mathematica/new-in-8/wavelet-analysis/directly-transform-sound.html
snd = ExampleData[{"Sound", "Apollo11ReturnSafely"}]
cwd = ContinuousWaveletTransform[snd,
GaborWavelet[6], {Automatic, 12}]
sty = Directive[14, FontFamily -> "Helvetica"];
WaveletScalogram[cwd, {3 | 4 | 5 | 6 | 7, _}, ImageSize -> 570,
Ticks -> {Automatic, {#, Superscript[2, #]} & /@ Range[7]},
TicksStyle -> sty, AxesLabel -> {Style["Time", sty], Style["Scale", sty]},
ColorFunction -> ColorData["SiennaTones"]]
But it hangs when I try to submit a file whose duration is 10 seconds or more. Maybe it depends on how the sound file was recorded.
Welcome Alexander. I am not quite sure what you are trying to achieve. Do you want to import only parts of the sound as the files are so large? Could you post some code of what you have tried. This would give me/us some better idea of what you are after. – Matariki Jul 2 '12 at 4:36
Alexander, the "answer" you posted was not an answer, therefore I deleted it according the standard operating rules of this site. I have editing your addendum into your question. You should edit it further as necessary and remove the personal comments. – Mr.Wizard Jul 2 '12 at 15:07
Alexander, the code posted on this site is under a specific CreativeCommons license (CC by-sa). See the link at the bottom right corner of every page… – F'x Sep 7 '12 at 15:12
Let's take an example of WAV sound data:
data=Import[ "ExampleData/rule30.wav"]
You can see the sampling rate (44100 Hz) and duration (1.8 s) of your sample. This function extracts data for a specific time interval:
TakeSound[d_, s_, e_] := {d[[1, 1, 1, Round[44100 s] ;; Round[44100 e]]]}
And this app allows you to cut and play sub-samples of your data:
Manipulate[ Column@{ListLinePlot[
Transpose[{Table[N[t/44100], {t, 1, Length[data[[1, 1, 1]]], 1}],
data[[1, 1, 1]]}], PlotRange -> All, Frame -> True,
Epilog -> Dynamic@{Opacity[.3], Rectangle[{s, -1}, {e, 1}]},
AspectRatio -> 1/4, ImageSize -> 250,
FrameTicks -> {Automatic, None}],
Dynamic@Sound[SampledSoundList[TakeSound[data, s, e], 44100]]},
{{s, .2, "start"}, .00003, 1.8, Appearance -> "Labeled",
ImageSize -> Small}, {{e, 1.5, "end"}, 0, 1.8,
Appearance -> "Labeled", ImageSize -> Small}, AppearanceElements -> None]
Of course, always keep end > start; otherwise the interface will break. It is easy to build in protection against this, but I wanted to keep it simple and clear. You can add it yourself.
http://sbseminar.wordpress.com/2009/08/06/algebraic-geometry-without-prime-ideals/

## Algebraic geometry without prime ideals August 6, 2009
Posted by Joel Kamnitzer in Algebraic Geometry, Anton Geraschenko, things I don't understand.
The first definition in “Grothendieck-style” algebraic geometry is the affine scheme $Spec R$ for any ring $R$. This is a topological space whose set of points is the set of prime ideals in $R$. Then one defines a scheme to be a locally ringed space locally isomorphic to an affine scheme.
The definition of $Spec R$ goes against intuition since it involves prime ideals, not just maximal ideals. Maximal ideals are more natural, since if $R = k[x_1, \dots, x_n]/I$ for some alg closed field $k$, then the set of maximal ideals of $R$ is in bijection with the vanishing set in the affine space $k^n$ of the ideal $I$. (Of course one can give a geometric meaning to the prime ideals in terms of subvarieties, but it is less natural.)
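Concretely, the bijection mentioned here is the Nullstellensatz correspondence; in symbols (for $k$ algebraically closed):

```latex
% Points of the vanishing set V(I) \subseteq k^n correspond to
% maximal ideals of R = k[x_1,\dots,x_n]/I via
a = (a_1, \dots, a_n) \in V(I)
\quad\longleftrightarrow\quad
\mathfrak{m}_a = (x_1 - a_1, \dots, x_n - a_n)/I \subseteq R .
```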
However, in Daniel Perrin’s text Algebraic geometry, an introduction, he states/implies that one can define affine schemes just using maximal ideals (at least for finitely-generated $k$ algebras) and still get a good theory of schemes and varieties. Is this true? If so why don’t we all learn it this way? (One answer to the this latter question could be that some people are interested in non-algebraically closed fields.)
Let me be more specific. Fix an algebraically closed field $k$. Suppose that $R$ is a finitely generated $k$ algebra. Then we define a locally ringed space $Specm R$ whose points are the maximal ideals of $R$. and we give it a sheaf of rings in the usual manner. Let us call these affine m-schemes over $k$.
Then we define a “m-scheme over $k$” to be a locally ringed space which is locally isomorphic to an affine m-scheme over $k$.
I believe we have a functor from finite type schemes over $k$ to m-schemes over $k$. which on the level of topological spaces is taking closed points.
Is the following true:
This gives an equivalence of categories between the category of m-schemes over $k$ and the category of finite type schemes over $k$.
Following a reference in Perrin, I looked at EGA IV, section 10.9, which discusses these issues. In particular, what I call “m-schemes” above seems to be called “ultra-schemes” there (or rather ultra-preschema). Proposition 10.9.6 shows that the functor from Jacobson schemes to ultra-schemes is an equivalence. (This differs in two ways from what I ask above: first, I have no idea what a Jacobson scheme is; second, this is not dealing with finite type $k$ schemes.)
One last question. One way to justify the usual $Spec R$ is to say that $Spec$ is the adjoint functor to the global sections functor which goes from locally ringed spaces to rings. Is there a similar way to justify $Specm$?
(By the way, I’m asking these questions since I’m teaching introductory algebraic geometry in the fall and I’m planning on using Perrin’s book.)
1. Matthew Emerton - August 6, 2009
Jacobson schemes are schemes that are locally the Spec of Jacobson rings. Jacobson rings are rings satisfying the Nullstellensatz (i.e. for any ideal I, the intersection of the maximal ideals containing I equals the intersection of the prime ideals containing I, and so equals the radical of I). Obviously any field k is Jacobson, and a general form of the Nullstellensatz says that if A is Jacobson, so is A[x]. Hence any finite type algebra over a field is Jacobson, and thus so is any finite type k-scheme.
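In symbols, the Jacobson condition described here says that for every ideal $I$ of $R$:

```latex
\sqrt{I}
\;=\; \bigcap_{\substack{\mathfrak{p} \supseteq I \\ \mathfrak{p}\ \text{prime}}} \mathfrak{p}
\;=\; \bigcap_{\substack{\mathfrak{m} \supseteq I \\ \mathfrak{m}\ \text{maximal}}} \mathfrak{m} .
```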
The reason we don’t learn it this way is that for Grothendieck-style algebraic geometry (as to opposed to say Serre-style, as in FAC, where I think he did work with maximal spectra) it is important to pass to the Spec of local rings, say (which are not Jacobson), and many arguments use passage to generic points.
Also, if one is working in mixed characteristic (say taking a variety over Q and spreading out over Z, which is an important techinque in number theory, but also in parts of birational geometry, e.g. in proving bend-and-break theorems), a closed point on the variety over Q is no longer a closed point when one spreads out to a scheme over Z.
Another example: if one has a family f:X –> C, where C is a smooth irreducible curve, and the total space X is reduced, then f is flat provided that each generic point of each component of X is mapped by C to the generic point of C. Another way to say this, that avoids talking about generic points, is to say that f is flat if each component of X dominates C. But having the generic points around makes many arguments easier, because one now test flatness by checking something on certain points. I think there is an example of this worked out somewhere in Hartshorne’s section on flatness (which is where I first saw the above criterion).
By the way, there is a section of Hartshorne (in chapter II, I think) where he defines what he means by a variety over an algebraically closed field, and shows that no information is lost by forgetting non-closed points. (This is just a simpler version of the EGA result cited in your post.)
2. Akhil Mathew - August 6, 2009
Hartshorne Proposition II.4.10, I think.
3. David Speyer - August 6, 2009
Matthew Emerton gives a great list of the reasons why one might want to work with nonclosed points. I’ll add two which have come up in my life: (1) “normal implies smooth in codimension 2″ is incredibly useful, easy to prove using generic points, and a real pain otherwise. (2) Most Bertini-like theorems are best proved by showing that they hold at the generic point, and showing that they hold on an open set. The same is true of Grothendieck’s generic flatness theorem.
However, if you are doing an introductory course, I think sticking to closed points is a great way to reduce complexity. My first algebraic geometry course, taught by Brian Conrad, took this approach and it worked great.
4. Joel Kamnitzer - August 6, 2009
Thanks Matt for explaining about the Jacobson schemes and thanks to Akhil for the Hartshorne reference which I haven’t had a chance to check out yet.
My point wasn’t so much why do we use schemes with nonclosed points, but rather why do we learn about the nonclosed points right away. When you are learning algebraic geometry (French style) for the first time, you have to learn about sheaves, locally ringed spaces, etc. Adding nonclosed points to the mix seems to make things unnecessarily complicated.
Anyway, as David points out, perhaps many people do learn algebraic geometry with only closed points in their introductory course on the subject. Perhaps my problem is that I never had an introductory course on the subject — after doing some reading on my own and taking commutative algebra, I jumped into a second semester class with Tom Graber which was all cohomology of sheaves, Riemann-Roch, etc (and was a great class).
The other point of my post is that I never knew all this schemes machinery worked fine with only using the closed points. I am wondering how many people out there were/are like me (so far based on the comments, not many …).
5. Akhil Mathew - August 6, 2009
Incidentally, is the approach you mentioned by Perrin the same as or similar to what Mumford does in the first chapter of the Red Book or what James Milne does in his online notes?
6. Allen Knutson - August 6, 2009
“normal implies smooth in codimension 2″
a) Codimension 1, surely.
b) Since this post has so much to do with distinguishing between working over an algebraically closed field or not, don’t say “smooth” when you mean “regular”.
(If I remember the issue correctly, there are normal schemes defined over a non-algebraically-closed field of char p that become non-normal upon base extension.)
Joel, if you’re looking for other pedagogical suggestions re teaching schemes, I would suggest you point out to people that the three extensions
1) moving beyond affine (or projective over affine) schemes to general ones by gluing
2) allowing nilpotents in the structure sheaf
3) working over a non-algebraically-closed field, or indeed, over a ring that doesn’t contain a field
that are included in the definition of “scheme” are independent, and it’s only by historical accident that they are all sprung on the unwary at the same time. I was happy with just #2 for years, and have only comparatively recently allowed for #3. I still haven’t ever personally needed #1.
7. Florian - August 7, 2009
Joel, James Milne’s Algebraic Geometry course notes (200+ pages) might be useful if you consider using Specm.
http://www.jmilne.org/math/CourseNotes/math631.html
8. Jim Humphreys - August 7, 2009
Allen’s comments underscore the fact that “algebraic geometry” has become many subjects rolled into one. It’s impossible to learn or teach or use algebraic geometry without having some motivating problems and examples. Much of the classical work can avoid schemes, but for me the entry point was the study of linear algebraic groups over algebraically closed fields (later arbitrary fields) of any characteristic. You need some of the scheme viewpoint to study homogeneous spaces $G/H$, to compare rational points over a smaller field with those over an algebraic closure, and especially to make sense of the role of the Lie algebra in prime characteristic, where the notion of Frobenius kernel is needed. There is no easy way to do all this, as seen in the texts by Borel, Springer, and me, as well as Jantzen’s big book on representation theory (where some foundational results must be quoted from Demazure-Gabriel’s optimistically labelled Tome I). The Borel-Tits papers on reductive groups over arbitrary fields require at least some scheme ideas, but starting with SGAD or Demazure-Gabriel will turn off anyone’s interest in the whole subject.
9. Matthew Emerton - August 7, 2009
Dear Joel,
I think it’s a good idea to work just with MaxSpec (or Specm if you prefer) in an introductory course; I have done this before myself. (And Chapter I of Hartshorne does this, at least implicitly.) In fact, I like to begin just by working in affine space (and a little later on, in projective space), and consider solutions of affine (then homogeneous) equations. One can then introduce affine rings, and using the Nullstellensatz, prove that solutions match with maximal ideals in the affine ring. This then motivates the introduction of MaxSpec, and one can shift from the extrinsic viewpoint (where everything happens in an ambient affine or projective space, which provides the geometric glue that gives everything meaning) to a more intrinsic viewpoint in which the glue is supplied by the Zariski topology and sheaves of rings.
Working with MaxSpec (and reduced varieties) also has the advantage that the structure sheaf can just be defined as a certain subsheaf of the sheaf of all k-valued functions, which eliminates a lot of sheaf-theoretic machinery (which I think can be unnecessarily distracting in a beginning course).
Also, the Zariski topology can be motivated in the following way (closely tied to the idea of having a locally ringed space, rather than just a ringed space): it is the weakest topology on an affine variety with the property that if f is a regular function that is non-zero at some point P, then there is a neighborhood of P on which f is invertible.
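(Concretely, a basis for this topology is given by the distinguished opens $D(f) = \{P : f(P) \neq 0\}$ for $f$ in the affine ring $A$; the regular functions on $D(f)$ are then just the localization $A_f$, so $f$ becomes invertible exactly where it is non-zero.)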
By the way, I think this kind of development of the foundations is extremely close (maybe identical to ?) Serre’s treatment in FAC, which is probably worth looking at.
10. Joel Kamnitzer - August 7, 2009
Thanks for pointing me to Milne’s notes, which I had looked at briefly before. Indeed the approach taken by Milne is very similar to that taken by Perrin and very similar to that suggested by Matt above. Namely, they work just with MaxSpec and define the structure sheaf to be a subsheaf of the sheaf of k-valued functions.
To answer Allen’s comment: for me it is extremely “psychologically important” to define varieties in an abstract way (i.e. as a locally ringed space locally isomorphic to Spec of a ring) rather than just thinking about affine or projective (or projective over affine) varieties. Even if one is ultimately interested in just affine varieties, it seems to me extremely important to communicate the idea of what a variety is; also, understanding that varieties (schemes) are glued from local pieces motivates many later definitions (e.g. line bundles).
11. David Speyer - August 7, 2009
I have not yet had a serious reason to deal with a non-quasiprojective variety. But I constantly work in local coordinates, and compute the effects of changing coordinates. Once you’ve gotten that far, it seems silly to me to insist that all the coordinates come from some projective embedding.
12. Jason Starr - August 7, 2009
Regarding Allen’s point of working with non-projective objects, it has become fairly common to prove that new objects (moduli spaces, etc.) actually are projective by first studying them as not-necessarily-projective objects (i.e., algebraic spaces or algebraic stacks). It is often easier to first construct them as not-necessarily-projective objects, then show that they have some nice properties as such objects, and finally use these properties to prove that the space is projective after all (e.g., Knudsen-Mumford, Kollar’s article, Cornalba’s article, etc.).
13. David Ben-Zvi - August 8, 2009
Is it completely unreasonable not to teach schemes as locally ringed spaces (either with Spec or MaxSpec) and just to think of functors of points? (I know how people will respond to this, but anyway..) That picture feels to me much closer to the intuitive idea of closed points – after all it just means solve the equations defining the scheme in various coefficient rings. (I never liked the locally ringed space POV and never use it — and I can, albeit unfairly, blame Matt Emerton, who taught me algebraic geometry!) This can be explained very intuitively as testing a space by points in it, lines in it, and other families, or as thinking of generalized functions as taking values on test functions, and is a point of view with much greater applicability. That seems to me much more natural than either 1. discarding crucial information you need to do geometry (the max-spec idea) or 2. working with a strange topological space with weird points, rather than developing intuition from the point of view of machines for constructing solutions over different rings..
oh well. (luckily I’m not teaching intro algebraic geometry so no need to warn me of the irreparable damage I’ll do to impressionable minds.)
14. Charles Siegel - August 8, 2009
David, isn’t that pretty much the point of view Grothendieck was switching to for the second edition of EGA? I know he only updated the first book, but I recall spotting functors of points being featured prominently.
15. Emmanuel Kowalski - August 8, 2009
The “functor of points” approach is definitely the way the students of Grothendieck (or at least one of them, who I will only identify with his initials L. I.) teach (or used to teach, before retirement) introductory algebraic geometry…
16. Joel Kamnitzer - August 8, 2009
Just to be clear, in the “functor of points” approach, do the basic definitions go like this:
- the category of rings with its Grothendieck topology
- sheaves (valued in sets) on the category of rings
- representable functors = affine schemes
- schemes = sheaves locally isomorphic to affine schemes
Is that logically coherent? (if not pedagogically coherent)
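(For concreteness, the representable functors here would be $h_A(R) = \mathrm{Hom}_{\mathrm{Rings}}(A,R)$ for a ring $A$; e.g. for $A = \mathbb{Z}[x,y]/(y^2 - x^3 + x)$, the set $h_A(R)$ is just the set of solutions of $y^2 = x^3 - x$ in $R$.)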
17. David Ben-Zvi - August 8, 2009
Joel – yes, that seems coherent to me. An affine scheme is by definition a commutative ring (or k-algebra if you’d prefer) thought of contravariantly. All objects in algebraic geometry are things you can test on rings (by mapping affines to them), i.e. functors rings –> sets (say). The most basic thing is what the points of your affine are, and these come in flavors: you have K-points for any algebraically closed field K, which you can think of as geometric points, and R-points for various rings R, which you can think of as families of geometric points. Then, nice functors on rings are ones coming from taking some quotient of an affine (representable functor) by an equivalence relation – the various Grothendieck topologies tell you how general a gluing you’re going to allow (giving rise to schemes, algebraic spaces, etc). Equivalently, geometric quantities can be patched together from local information, and this notion of locality is encoded in the topology. This point of view has innumerable advantages; for example, it makes the idea of moduli problems very natural. It also makes the construction of projective space pretty natural and coordinate-free, as the moduli of lines in affine space (as opposed to defining it as a patching, or as a whole new concept, that of Proj of a graded ring, as opposed to the geometric notion of quotient – after puncturing – by the multiplicative group).
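(In symbols, the moduli description would be $\mathbb{P}^n(R) = \{ \text{rank-one direct summands } L \subseteq R^{n+1} \}$; over a field $K$ this recovers the classical picture of lines through the origin in $K^{n+1}$.)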
18. James Borger - August 8, 2009
I agree completely with David Ben-Zvi. It seems vastly preferable to me to *define* the category of schemes (or better, algebraic spaces) as a certain full subcategory of the category of functors from Rings to Sets. Then things simplify greatly. No silly topological spaces, no prime ideals, no axiom of choice, no local rings, no fields. All you need is all you ever used anyway: the category of rings, covers, and descent. Of course, you can use ideals, for instance prime and maximal ideals, if you want– they’re just no longer necessary to set up the theory.
I would love to teach a class from this point of view. From a pedagogical point of view, the key thing would be to explain how to visualize schemes defined in this way, without relying on their substrate of Zariski points. But first, you really ought to do that with scheme theory qua locally ringed point sets anyway. And second, it shouldn’t be that hard by fleshing out each concept in the case of finitely generated algebras over C, which can be accurately visualized by using their sets of C-valued points.
One objection might be that students often learn point-set topology before algebraic geometry, and so teaching them a second gluing formalism might seem wasted time. But in my opinion, continuing with the point-set-theoretic point of view is the pedagogical equivalent of throwing good money after bad, and is probably in the long run even worse.
I guess this is one of my pet peeves. It just seems that since the SGA seminars (which, being seminar notes, are not at all ideal introductions to the subject), each introductory text in algebraic geometry has taken another step in the wrong direction (with the notable exception of two books on group theory: Demazure-Gabriel and Waterhouse). The attempt to explain the theory in its simplest form is laudable and no doubt sincere, but I think these introductions make things more complicated in that, for them, simplicity equates to using the formalisms the authors learned as students, rather than the formalisms best suited for the job. The fundamental functorial concepts are really very natural and simple. We shouldn’t be thinking about how to vulgarize them; we should be thinking about how to explain them.
19. Andy P. - August 8, 2009
Are people seriously suggesting that we should teach beginners algebraic geometry via the “functor of points” approach? Frankly, I can’t imagine such a course not ending in complete disaster.
For my money, the best book for beginners (i.e. people who don’t yet know what a variety is) is Joe Harris’s “First Course”. I’m a firm believer in the philosophy that to understand a big machine like post-Grothendieck algebraic geometry, you first have to acquire a deep understanding of a good number of classical examples (for instance: algebraic curves, linear algebraic groups, Grassmannians, etc.).
Maybe this is a function of the areas I work in, but my feeling is that most mathematicians today secretly understand their machines via these sorts of examples anyway, so why short-circuit the process?
20. Carl - August 8, 2009
While trying to learn algebraic geometry, I have found the first chunk of Eisenbud-Harris’ little book “Geometry of Schemes” to be really helpful in getting my head around schemes and various ways of looking at them. You might suggest it as supplementary reading. I think it does a beautiful job of explaining the basic set-up, intricacies, and intuition in a very concise way – something I have not found in the standard algebraic geometry textbooks. (although I haven’t gotten a chance to look at Perrin’s book)
21. James Borger - August 8, 2009
Andy: I wouldn’t jump right into the functors, but I would take the pedagogically best path towards them. So I’d probably first cover affine schemes, not as schemes per se (locally ringed spaces of prime ideals), but just as rings. I’d focus on finitely generated C-algebras, explaining how to visualize them and how ring-theoretic concepts (etaleness, flatness, surjectivity, etc) can be interpreted visually. You could also talk about how we can visualize other rings, like finitely generated Z-algebras, analogously (if perhaps less realistically). You could easily talk about affine algebraic groups and their functors of points here. You could also talk about other moduli problems, like those represented by affine cells in flag varieties.
Once the students have a good feel for how to think about commutative algebra geometrically, then I’d address the issue of gluing, pointing out that this can be handled with two different formalisms: locally ringed spaces of prime ideals, and functors of points. (I might provocatively call them the pre and post 1960 (?) ways.) Then I would explain why the second approach is better. The main reason is that any space X whose points have a “meaning” can be handled very easily using the functor of points. This includes Grassmannians, classical groups, moduli problems. On the other hand, the points of the topological space underlying a scheme are silly formal constructions, and they lead to false phenomena such as the fact that the set of points of a product of two schemes is not the product of the two sets of points.
To be sure, it would require some care, but no more, I think, than teaching any other class for which there is no adequate textbook. It seems half the people think this obvious, and the other half think it’s impossible. It would be great if someone who really believes it could be done would actually do it. (Or maybe someone could just post lecture notes from Prof. L.I.’s class…)
22. David Ben-Zvi - August 9, 2009
Andy – You make an important pedagogical point, but I believe it’s somewhat orthogonal to James’ and my point – the assertion is not that we should shortchange examples, or have less familiarity with classical intuitions, or rush into machines, but rather that when you do get to introducing topics like schemes, the now-standard presentation in terms of locally ringed spaces has serious drawbacks, and that going directly to functors of points has advantages. In fact, as James points out, having a strong background in classical thinking will make the transition to functors easier, since many classical examples are very conducive to writing as functors.
23. Jason Starr - August 9, 2009
Complex analytic spaces are also locally ringed spaces. The nicest formulations of GAGA use the comparison of a complex, proper scheme (or algebraic space) with its associated complex analytic space, as locally ringed spaces. So this is one point in favor of locally ringed spaces.
24. Akhil Mathew - August 9, 2009
“David, isn’t that pretty much the point of view Grothendieck was switching to for the second edition of EGA? I know he only updated the first book, but I recall spotting functors of points being featured prominently.”
Charlie, I think EGA I *defines* schemes in the second edition as ringed spaces which are locally isomorphic to affine schemes, although he talks about functors of points quite a bit.
25. Scott Carnahan - August 9, 2009
I think A.O. taught from a functor of points perspective in 2004-2005 at Berkeley, but maybe it’s not surprising, since he’s been known to “pal around with” L.I.
I’m not sure where I stand on this question. Allcock once told me an idea he had for an “algebraic geometry from nature” class, basically teaching a large collection of examples, all drawn from real-life phenomena like shadows, reflections, configurations of linkages, etc., and it sounded rather appealing. On the other hand, modern toolsets can be both useful and compelling.
The functors associated to non-affine schemes like projective space can be a bit more subtle than the locally ringed spaces, since you might have to do work to make sure you have a sheaf. At least, it took me a while to see why the “stupid quotient” doesn’t give projective space. I could see people arguing both ways about whether it is better to cover this sort of thing early.
26. Scott Carnahan - August 9, 2009
Joel: This is a minor point, but if your functor is from rings, then you should have a cosheaf of sets.
27. Allen Knutson - August 9, 2009
I still haven’t ever personally needed #1.
Jason was kind enough in his comment not to point out that I had indeed needed some algebraic spaces (constructed, really from their functors of points) in a paper of mine that he generalized. I’m still hoping they’re projective, though!
28. Emmanuel Kowalski - August 9, 2009
I think the “functor of points” is particularly popular and helpful for algebraic group theory, because it is much easier to define the many homomorphisms required by group theory using their functorial incarnations on sets/groups of points — the extra points of schemes complicate the picture here. Similarly, the type of complications that schemes have been invented to deal with are often much milder for algebraic groups than for schemes (to say it differently, the difficulties only arise quite a bit deeper in the theory, or at least I’ve heard this from someone who is involved in the retyping/updating of SGA3). So for instance even such a book as Platonov and Rapinchuk’s on arithmetic groups — truly a research book, and not a textbook — deals with algebraic groups as sets of points over a “universal domain”, with some “rational structure” to deal with the questions of field of definition.
I’ll try to see if I can get some lecture notes of this course of “L. I”. I didn’t attend it myself, but a friend did and may have taken notes; that friend highly approves of a functorial presentation of algebraic geometry, so of course he liked it, but from what he said, it was not a universal feeling among the students…
I’m actually fairly interested in the pedagogical questions here because a pet project of mine is to write a book on exponential sums over algebraic varieties, mixing the algebraic-geometry and the applications to analytic number theory; since the goal is for the resulting text to be an accessible reference for analytic number theorists who are not acquainted deeply with algebraic geometry, I will have to find a way to present things which is suitable. Fortunately, most exponential sums of analytic interest live over affine schemes of finite type over the integers, so a fairly simple “functor of points” description is what I’m currently thinking of using (of course, the proofs or sketches thereof will involve more complicated things, but the statements of the important applicable results might not).
29. Matthew Emerton - August 9, 2009
I would like to make an argument in favour of locally ringed spaces over functors of points. But I will say at the beginning that it may just reflect my own psychological weaknesses. In general in such a discussion, I think it’s important to remember that most of our preferences are based on habit and related psychological factors. It’s normally not very difficult to translate from one point of view to another if one tries, and once one becomes used to a certain view-point, it becomes easy to work with it. So the feeling that a certain view-point provides something indispensable that is missing in a different view-point should probably not be taken so seriously.
This being said, I find a lot of utility to the topological space underlying a scheme, to the points that are part of it, to their various closure relations, and so on. I’ve never had to work with stacks or even algebraic spaces, and so I’ve never had to kick this habit.
As I said in an earlier comment, when one has a point of a scheme over Q, which one then spreads out over Z, the point which was (say) closed over Q becomes non-closed over Z, and has various specializations, into the different possible residue characteristics. Thinking about these points in a concrete way is something I find to be useful, and helps me understand what’s going on. I can localize sheaves around these various points, and this has various meanings related to interesting number-theoretic operations (inverting primes, p-adically completing, etc.).
Maybe more generally, I might say that one interpretation of geometry is that it is about arguing from pictures, and to me there is no question that a scheme with its Zariski topology is much more directly pictorial (and hence geometric, in this sense) than a functor on the category of rings. Speaking a little more technically, this might mean that something like intersection theory is easier to explain when one works with point-sets than with functors.
Now of course, we could just do intersection theory by considering sheaves instead of the cycles that support them, and computing derived tensor products and so on, and in fact there are good arguments for doing this in certain contexts. This is analogous to the fact that a lot of what we might call geometric topology (and here, I don’t necessarily mean what is usually meant by that field, but just geometric ideas of intersecting, homotoping, cutting along curves, concrete obstruction theory, etc.) can be encoded in algebraic topology. But there are some intuitions to be gained from the geometric pictures that can be lost in the algebraic formulation, even if the latter is often more powerful and general, and so similarly I would say that there are some geometric intuitions that are lost in the functorial formulation. (Of course, different intuitions, such as the importance of gluing and equivalence relations, are brought out, but to me these seem to be somewhat softer than the very geometric ideas in something like intersection theory. This is a fairly strong statement, and probably many people will disagree. Again, it may well just reflect my own hangups.)
Although (as Grothendieck pointed out, surprisingly) many constructions in algebraic geometry can be understood as quotient constructions of various kinds, starting from very simple objects, my own feeling is that this is not what should be emphasized at the beginning. When I introduce projective space, I explain it in classical terms as being obtained from affine space by adding points at infinity, so that we don’t miss any points (and hence any intersections). The description as affine space minus the origin modulo G_m I would mention only as a technical device, and (in a more subtle way) as a point of view that demonstrates the homogeneity of projective space. But again, this homogeneity, while obviously very important, wouldn’t be the first thing I emphasize.
I’m sure that even those who prefer the functor-of-points view-point, when they think of a space, imagine something geometric (in the literal, pictorial, sense). Beginning students have to learn to construct these pictures. Teaching them schemes (or at least, varieties) is one way for them to learn it.
David and James, feel free to critique this (and/or to subject me to psychoanalysis, which, in the spirit of my opening paragraph, is probably the same thing).
30. Danny Calegari - August 9, 2009
This is a very interesting discussion to read as someone who (very sporadically) uses a tiny little bit of algebraic geometry, of the extremely un-modern and complex analytic sort, and who sat uncomprehendingly through many, many lectures by a certain R. H. when I was a grad student at Berkeley.
As an ignoramus, I have a few questions for the cognoscenti.
I wonder if it’s fair to say that as a general rule, one wants the most functorial approach to a subject when one already knows in advance what kinds of things one wants to do with the objects. Going back later and adding “extra” flexibility to your objects can be a major headache; maybe this is one of the reasons why programming (or, more accurately, maintaining programs) in strongly typed languages can sometimes be very time-consuming. It is also, I think, one of the reasons why analysts typically have less use for functorial language than algebraists: every problem requires a slightly different estimation technique or function space, and book-keeping is less important than flexibility.
Does the functorial language in algebraic geometry obscure some ideas and constructions, and uncover others? I guess I’m specifically wondering whether Grothendieck-style algebraic geometers would have invented pseudo-holomorphic curves, and the application of “softer” (i.e. symplectic) methods to enumerative complex geometry. What about mirror symmetry? In fact, what actually is the history here? BZ?
For what it’s worth, the one time I taught (introductory) Algebraic Geometry, I used Mumford’s “Complex projective varieties” book, which was just right for me (I can’t speak for the students . . .)
31. Matthew Emerton - August 9, 2009
Dear D,
I think that the functorial approach certainly obscures certain ideas and constructions (like any dogma, I would guess). In general, in taking a functorial approach to a subject, one sets boundaries from the beginning by specifying the category, which place a priori constraints on what one is allowed to talk about (and hence, in some subtle sense, to think about). My understanding is that symplectic methods came from outside of algebraic geometry in the beginning. On the other hand, once one knows to use them, they can be recast in purely algebro-geometrical terms, and the functorial methods make it easy to work with them (e.g. by making it easy to make spaces of stable maps, following Kontsevich). (Others can critique this analysis if my (implicit) history of the situation is wrong.)
I should add that working scheme theoretically is also a functorial approach, in the sense of the preceding paragraph, in that one has a certain prescribed category that limits the scope of the mathematics.
But, as Jason pointed out above, by virtue of being point-set topological spaces, schemes are closer to complex analytic spaces (and hence, to classical manifolds), than functors. So learning to work with them (or at least with classical varieties) might provide some intuition which is closer to other parts of geometry.
By the way, I think that Mumford’s “Complex projective varieties” is great.
32. David Ben-Zvi - August 9, 2009
I don’t think there’s any disagreement here that a strictly dogmatic mathematician, or one who ignores geometric intuition, is at a severe disadvantage in this subject. I think that’s irrelevant to the question at hand. I certainly think about algebraic geometry in a very pictorial and informal way (thanks in large part to having people like Matthew and James teach me enough intuition that I can get away without really working through the formalism). In fact I always visualize algebraic objects in terms of complex varieties (with the complex topology) – part of learning algebraic geometry (for me) is learning how to take that intuition and make it into rigorous math, in particular learning how to adapt one’s intuitions when your variety is over another field or ring, how nilpotents work, etc. I still claim however that this is independent of whether you think of schemes in terms of functors of points or locally ringed spaces. After all the functor of points POV is just a way to think of gluing affines — so you better have a good intuitive feel for affines, one way or another. And all the geometric information is encoded very pleasantly in the functorial POV.
I think it’s also crucial what kind of applications you have in mind – Matthew, you say you usually deal with schemes, and so the topological picture is completely adequate (and perhaps has advantages) – though I’m not sure I believe this, since much of what you care about with schemes involves their (topological, not coherent) cohomology theories, which are all defined using the functor of points perspective (i.e. the etale or crystalline topologies or variants)…
On the other hand if your interests are in representation theory, noncommutative geometry, algebraic topology, for example I think it’s unassailably true that the functor of points IS the way to go. Representation theory for example is mostly concerned with stacks – stacks with very few C-points, like say the flag variety modulo the Borel. Tannakian theory naturally concerns the functorial POV and stacks. The theory of moduli spaces is concerned inevitably with algebraic spaces at least and usually stacks. The theory of classifying spaces for beginners, or the chromatic picture of stable homotopy theory, are about functors or stacks. Noncommutative spaces (both the algebraic and the C^* algebra-ic ones) can be represented very nicely in terms of stacks as well. Deformation theory is much more naturally a subject in functorial algebraic geometry. And so on… None of which is to say I would teach first year students stacks – but I would teach them a picture of algebraic geometry that is flexible enough to handle all of these contexts, which the theory of locally ringed spaces fails miserably at.
Danny, as for mirror symmetry and pseudoholomorphic curves, no algebraic geometer could have come up with them IMHO (nonfalsifiable statements are fun!) In particular the idea to think of a complex variety with its complex topology (which is necessary to make the leap you suggest) is equally natural in either of the points of view discussed (again the question is how you think of your building blocks, the affines). When you speak of a “Grothendieckian” algebraic geometer, though, it’s unfair to imply this means a less geometrically-intuitive one than a classical one. In fact the geometric intuition for moduli stacks (like stable maps) and their derived enhancements (like virtual fundamental classes) which are crucial for making sense of mirror symmetry are much more naturally “functorial” than “ringed-spaceish”..
wow that was quite a rant. not sure where it came from – probably my current failures to make an abstract functorial construction concrete :-) at the end of the day I’ll probably end up teaching the way every one else does and wish I had James’ courage!
33. James Borger - August 9, 2009
Once again, I agree completely with David. In particular, his first paragraph (in 32) expresses perfectly my thoughts on all this.
Jason: I agree with your point about holomorphic geometry. This is a good reason not to ban locally ringed spaces entirely. For a while, I have wanted to know if the functor of points approach can be done there. Presumably you can do things the same way once you’ve defined the category of Stein spaces with its Grothendieck topology. Does anyone know if there’s a direct way of doing this? (Maybe this is obvious. The only time I asked an expert on the foundations of these things, the answer was, “Who cares?”.)
Matt: I would probably agree that an abstract functor cannot really be visualized, but I also think that abstract locally ringed spaces can’t really be visualized, so I don’t think that’s so interesting. (For example, can you visualize the spectrum of an infinite product of fields as a locally ringed space?) So really the question is which point of view is better for visualizing affine schemes. I don’t think it really makes a difference. When I think about an elliptic curve, I see a torus. This is not because the locally ringed space or the functor it represents look like a torus in any real sense. It’s because one of the first things I figured out when learning algebraic geometry is that it’s always better to picture the analogous holomorphic space than to try to picture the locally ringed space. Now, if in my work, I visualize the holomorphic space but make it into real math by using the formalism of functors, is that any less geometric than keeping the same picture in mind but making it real by using the formalism of locally ringed spaces? I would say not.
I agree that geometry in its broadest sense is arguing from pictures. But I don’t see any reason why you can’t use the same pictures for the functors as for the locally ringed spaces. With the example of an integral model of a curve over Q, I probably have a very similar picture in mind to yours, but I just wouldn’t use the formalism of topological spaces to make it real.
One difference, though, might be in how we view spectra of fields (and hence scheme-theoretic points). In scheme theory proper, Spec of a field is a point. This is quite reasonable from the point of view of the Zariski topology, essentially module theory. But there are many other points of view from which it isn’t reasonable. For instance (as you well know), from the point of view of the etale topology, the fundamental group of Spec of a field is its absolute Galois group, and so Spec of a field is not *really* a point unless it is separably closed. On the other hand, from the crystalline point of view, you’d want it to be perfect. For another example, should we view Spec C((z)) as a point or an infinitesimal punctured disk? So should Spec of a field “really” be viewed as a point or not? I would say the question is meaningless; it is not “really” anything. In particular, it is no more a point than some other figure. It should formally be just what it is (a certain object of the opposite of the category of rings), but we should have the flexibility to picture it however we find most helpful. For example, I like to view Spec C(t) as the limit of increasingly punctured copies of the complex numbers, like a fly screen. (For instance, then it’s expected that its absolute Galois group is free on uncountably many generators.) So if the spectrum of a field is not really a point, why insist on making it one?
Anyway, I agree that it’s not hard to translate back and forth between the two formalisms (at least if you’re talking about schemes and you don’t mind invoking the axiom of choice). So the real question is whether one can teach a class or write a textbook from the functor of points point of view that will be as good or better, by general agreement, than one taught from the locally ringed spaces point of view. I think it would be possible, but it’s pretty clear that to convince other people, one would actually have to do it. Maybe when I’m old. (So much for courage.)
34. Jason Starr - August 10, 2009
For the folks who are against non-closed points, how do you intend to talk about Hasse’s principle? How will you describe the Brauer-Manin obstruction? How will you define the Brown-Gersten-Quillen spectral sequence for K-theory? Regarding “pointless stacks”, do you realize how often arguments about BG reduce to classical algebraic varieties like P(V) for V a faithful representation (e.g., equivariant Chow theory)? Regarding algebraic spaces, do you realize how many theorems in Knutson’s book are reduced to the “classical” case of schemes via Chow’s theorem for algebraic spaces? I am all for algebraic spaces, stacks, and the functor of points — I use all of those. But the perspective of locally ringed spaces is also very useful. Why not teach both?
35. James Borger - August 11, 2009
Coincidentally, today someone gave a talk about visualizing Spec Z[i] in a student seminar I organize. Of course, it was from the point of view of spaces of prime ideals, rather than functors of points. And it was clear that that approach has a big advantage in that it is possible to define Spec A and describe Spec Z[i] in a concrete sense as a cover of Spec Z, drawn as the usual two wiggly lines, all within one lecture. Even though the picture should not be taken too seriously, it’s better than no picture and is good for students to see at least once. Doing something similar with the functor of points would require much more discussion about analogies with Riemann surfaces, fibers of morphisms, and so on. I believe that in a class on algebraic geometry, all that should be done anyway, but doing it in one lecture is impossible.
So I actually probably would explain the prime spectrum of a ring in my ideal class on algebraic geometry, but I would emphasize that it is just a way of modeling certain functors using point-set topology or is one way of visualizing them. So it’s a point of view that can be psychologically helpful, but it’s not essential. I think there is a continuum of approaches between this one and the ones in the standard textbooks. I just prefer it a bit further along that continuum.
36. Scott Carnahan - August 11, 2009
Jason,
You make good points, but I might suggest that a “barrage of questions” may not be the best way to maintain the appearance of a civil discussion (verbally or in writing).
37. Jason Starr - August 11, 2009
Dear Scott,
No offense intended. If there is a clever way of presenting any of those topics using functor of points only, I would really like to learn about it. Over the 6 or so years I have been teaching algebraic geometry (certainly not as long as Matt, from whom I also learned much of algebraic geometry), I have had many discussions with students and fellow instructors over how to present the material. I have followed Shafarevich, Mumford’s Red Book, Hartshorne, Harris’s “First Course”, Cox-Little-O’Shea and my own notes (photocopied for students). Of course every approach has its advantages and disadvantages. But my experience is that Hartshorne’s book, whatever its flaws, gets students up-and-running and ready for more advanced material the most quickly.
38. David Ben-Zvi - August 11, 2009
Jason – Certainly the point was not to eliminate locally ringed spaces.
We can all agree that a mathematician ought to have as many tools as possible at their disposal and different pictures are better adapted to different goals. James and I were I believe combating the perception
(not shared by you obviously) that functors of points are these abstract unintuitive things and a first course is better off focusing only on the supposedly more accessible locally ringed spaces – i.e. we’re advocating, as are you, that a well informed algebraic geometer should be exposed to both. Or as Danny would say, should be exposed to much more, in particular should be aware of the potential impact of the great flexibility found in less rigid worlds than algebraic geometry. (Well ok we’re saying more, both of us strictly prefer one picture over the other, and in our favor is the point that one strictly contains the other, but that’s a personal decision in any case.)
As to your questions, I’m not sure I understand the complaint. Of course generic points are absolutely essential to understanding algebraic geometry, and are one of the most useful tools there are
(for example for formulating the things you mention). However that doesn’t mean one needs to think about them as points of a locally ringed space – they are after all (very special) k-points for k an
appropriate field. To put it another way, anything you can say in ring theory (like the field of fractions of an integral quotient of your ring) you can equally say with functors (by putting “op” in front – ie the notion of localization, field, and closed subvariety are equally accessible from the functorial point of view). So I’m not sure why say the Brown-Gersten resolution favors one picture over another
(but obviously it’s far closer to your expertise than mine). Again the point was that for certain kinds of algebraic geometry (say the kinds James or I practice) one picture is strictly better, and I’m happy to believe that for your purposes the converse is true.
39. Matthew Emerton - August 11, 2009
I think it might be interesting to isolate at least one question which is underlying some of this discussion. I’m not sure how to phrase it pithily, unfortunately, but I will try to explain it as best I can.
An illustration of what I have in mind is given in (I think) one of the appendices to Miles Reid’s book “Undergraduate algebraic geometry” (or, as it was apparently translated in Russian, “Algebraic geometry for everyone”), where Reid describes the case of a thesis about cubic surfaces (or something similar) which was derided by the examiners because the natural setting was “an arbitrary ringed topos” (or something similar).
Probably there is agreement among most of the commenters here that many problems in geometry (and especially moduli-type problems) are about making gluing constructions, or fibre products, beginning with fairly basic objects. And this is probably a fairly general principle, at least in algebra. (The kind of thing I have in mind is, for example, the way many constructions that one makes with modules in algebra are (perhaps secretly, at least at first) just special cases of either a Hom or a tensor product, and hence are subsumed in the general theory of those two operations.)
Hence it might well be that the solution to some question posed about a cubic surface might well actually involve a framework whose natural setting is a general ringed topos.
The question I want to ask now (since I think it might be underlying some part of the preceding discussion) is: how does one effectively teach this principle?
It’s easy to imagine that if one spends all ones time teaching Hartshorne Chapter I type stuff, then students will have a good feeling for a lot of examples of different varieties, but won’t have a sense of how to make general arguments with them, or to work with important constructions involving them. (The basic theory of individual curves is quite a bit simpler than the theory of the moduli spaces of curves.) On the other hand, most of us have probably had the experience of teaching general machinery in a course, and having the unpleasant feeling that it is completely meaningless to the students.
I think one merit of Hartshorne’s book is just that it is a very substantial book. Taken as a whole, it develops a fair bit of machinery, but also actually does a lot of concrete geometry, including a lot of the theory of curves and surfaces. If you study the whole book carefully, then you can get a sense of how the theory and examples fit together, and how you can actually use the theory to make particular arguments and computations. Many of the other texts available may not be flawed in their approach, but simply don’t go as far as Hartshorne, and so in the end, don’t serve as well as self-contained texts.
As one possible counterexample to this, I would propose Mumford’s “Lectures on curves on an algebraic surface”, which I think is one of the really great algebraic geometry texts. It deals not just with foundations, but with really substantial, and concrete, geometric questions, whose solutions depend on all the subtleties of the foundations.
It would be nice to have something like this in the spirit that James and David are proposing: a text that worked with the functor of points approach, perhaps entering into stacks and algebraic spaces, but which also illustrated everything with substantial applications, which could be used as a way to lead students into this point of view.
Maybe there are some reasonable texts out there that I just don’t know about. E.g. is there a textbook treatment of Deligne and Mumford’s paper on moduli of curves, which really exposes it in the same accessible way that Hartshorne exposes the theory of surfaces?
Or is it still the case that students just read the original paper?
40. Matthew Emerton - August 11, 2009
Dear BZ,
Our posts just crossed, I saw.
I think that one thing Jason was getting at was that in the technical theory of algebraic spaces and stacks, the proofs of several results are reduced to the scheme case by Chow-type lemmas (just as in the theory of proper maps, several results are reduced to the projective case in the same way). The question (at least implicitly, and maybe made explicit in his reply to Scott) is whether one can deal with these kind of question directly from the functor of points view-point, without at some stage having to reduce to the scheme case and get ones hands dirty with the locally ringed space.
I am not an expert on this kind of question, but I wouldn’t be surprised if the answer was “yes”, one can find other ways to argue that more intrinsically adopt a functor of points view-point, but I also wouldn’t be surprised if a lot of such arguments aren’t yet developed.
I think the comparison with proper maps might be a good one. My understanding is that it is a fairly recent development (the last 10 years or so?) that people have known how to prove things like “pushforward of coherent is coherent” for proper maps in general settings, without having to reduce to the projective case. (I remember hearing about a theorem of Faltings of this type for proper maps of stacks in some generality roughly ten years ago. Hopefully I’m not totally mistaken in what I’m saying.)
It seems that there are imperatives from non-commutative geometry, and derived algebraic geometry, which are stimulating people to find new characterizations of various geometric properties, e.g. in terms of properties of derived categories of coherent sheaves, which are leading to greater flexibility of working with these notions in a more stacky/functor of points, rather than in a traditional scheme-theoretic way. BZ and Jim, is this right? How far can one go in developing the full geometric theory of stacks (including proper maps and the like) without ever using the crutch of (non-affine) schemes as locally ringed spaces?
Best wishes,
Matthew
41. Allen Knutson - August 11, 2009
Probably there is agreement among most of the commenters here that many problems in geometry (and especially moduli-type problems) are about making gluing constructions
Can you give me an example where you actually glue, say along an open set? My limited experience with moduli spaces involves overparametrizing, then dividing by a group.
BTW most commenters here know that the “Knutson’s book” that Jason refers to is not by me, but my father Donald Knutson; just thought I should mention that in case anyone else is still reading.
42. Matthew Emerton - August 11, 2009
I guess by gluing I meant “dividing out by an equivalence relation”; but in many questions one does break things up into open sets, work on these individually (perhaps taking a quotient, among other things), and then glue. For example, I would think this happens when one makes the Picard scheme of a variety. (At least, on the few occasions when I’ve tried to think this through for myself, this was one of the steps.)
43. Jason Starr - August 11, 2009
Dear David,
You and I might mean different things by “Brown-Gersten-Quillen” (there seems to be disagreement, for instance, between “BGQ” in McCleary’s book and in Srinivas’s book). I mean Theorem 5.20 on p. 65 of Srinivas’s “Algebraic K-theory”. This is a spectral sequence for the higher K-theory of a Noetherian scheme X in which the (p,q)-part of the 1st page is a product over all codimension p points of the scheme of the K-theory of the residue field of that point. For me this is strong motivation to include the codimension p points as part of the “primary definition” of a scheme rather than a “secondary definition”.
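Explicitly (writing from memory, so the indexing conventions may differ slightly from Srinivas), the spectral sequence in question reads:

```latex
% E_1-page of the Brown-Gersten-Quillen (coniveau) spectral sequence.
% X is a Noetherian scheme, X^{(p)} its set of codimension-p points,
% k(x) the residue field at x; the abutment is G-theory, which agrees
% with K-theory when X is regular.
\[
  E_1^{p,q} \;=\; \bigoplus_{x \in X^{(p)}} K_{-p-q}\bigl(k(x)\bigr)
  \;\Longrightarrow\; G_{-p-q}(X).
\]
```

So the point-set data of the scheme (which points exist, their codimensions, their residue fields) is visibly built into the E_1-page.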
44. David Ben-Zvi - August 11, 2009
Hi Jason – I certainly agree, that’s a beautiful picture taking great advantage of generic points of subvarieties, and makes a good case for their centrality (another very similar example is the Beilinson adelic Cousin complex). It might be worth pointing out though that the spectral sequence can be defined without mentioning nonclosed points – it comes from the filtration on the category of sheaves on a space by codimension of support, i.e. it’s a fancy version of the spectral sequence calculating the cohomology of a stratified space, in which you don’t fix a stratification. I think you can define the same for the cohomology of any sheaf (the BGQ case being the sheaf of K-theory spectra) – though if you’re not in a Noetherian space (or in the presence of some similar finiteness, as in categories of constructible sheaves) this likely gives intractable nonsense.
45. James Borger - August 11, 2009
This discussion seems to be fraying into more than one issue: MaxSpec vs Spec (both as locally ringed spaces, the original issue), functors of points vs locally ringed spaces, algebraic spaces vs schemes vs projective schemes, Hartshorne’s book vs someone else’s, and maybe some more. I don’t think these are all orthogonal issues, but they are definitely distinct, and it seems to me that Matt and Jason are at times identifying them.
So, for example, I would not be surprised at all if one needs (in some sense) projective schemes to prove basic finiteness theorems about proper algebraic spaces. But this doesn’t mean that we have to model the gluing of affines that produces projective space using the formalism of locally ringed spaces. It’s easy enough from the functor of points POV to make the following real: take affine n-space, delete the origin, and then quotient out by the action of G_m. I’ve been told before that you need general schemes (rather than just projective schemes) to prove basic facts about algebraic spaces. Although I’m a bit skeptical, no problem: just define a scheme to be a certain kind of functor (e.g., an algebraic space, which is very easily defined from the functor of points POV, with a covering family of open immersions from affine schemes). It would be great if, as Matt said, one could prove such statements without first reducing to the case of schemes or projective schemes, but the functor of points POV doesn’t require that at all. On the other hand, as Matt said, probably many of the arguments needed to do this from the functor of points POV aren’t yet written down, which is to me the main reason to use the ringed spaces point of view.
Similarly, as David pointed out, you can still talk about generic points if you want to. They’re just certain maps from Spec of certain fields. (However, from this point of view, there is less of a reason to call them points, which I actually like, as I wrote before.)
Finally, let me say something about Matt’s question of writing a textbook that needs the formalism of the functor of points, for example with some treatment of Deligne-Mumford. I now think something implicit in this question is really what’s at the heart of this discussion. That is that the functor of points, being a harder formalism than ringed spaces, needs more justification than, say, the justification for the formalism of schemes found in Hartshorne. I, and I think David, would disagree that the functor of points is a harder formalism than ringed spaces. In fact, I think it’s an easier one, and I think that’s really what’s going on in this discussion. Why do I think this? For example, suppose you had to explain to the analyst down the hall what the group scheme GL_2 is. I would say this: In algebraic geometry, a space is a functor from Rings to Sets, i.e. a rule that gives a set for every ring in a natural way. Then we define GL_2 to be the rule that assigns the group GL_2(R) to the ring R. What could be easier and more natural? That’s exactly what GL_2 *should* be, once you accept the very natural point that we want to look at all rings. I wouldn’t even have to define what an ideal is, much less a prime ideal, the spectrum of a ring, a locally ringed space, and so on.
When I first learned scheme theory, there were lots of geometrically minded students in the class who just couldn’t get past the crazy topological spaces. The simplest varieties, such as n-dimensional affine space over the complex numbers, were these unvisualizable things. The problem was that they took the Spec construction too seriously. Completely understandably, they really wanted to understand Spec A (A=C-algebra, say), when in fact the right approach is to view it as a formal gadget and imagine it the high school way as spaces cut out by equations. Now I believe that the functor of points POV is, not just formally more powerful, but easier and closer to the best intuitive (=high school) way of looking at varieties. What is the variety cut out by equations f_1,…,f_m? It’s the set of simultaneous solutions to these polynomials in R^n, for R a variable ring. That’s exactly what the definition should be, and you could explain it to anyone who knows what a ring is.
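To make the “rule that gives a set for every ring” completely concrete, here is a toy Python sketch (purely illustrative; all names are mine) that computes GL_2(Z/nZ) directly from this description:

```python
# Toy illustration of the functor-of-points view: GL_2 is the rule
# sending a ring R to the group GL_2(R). For R = Z/nZ we can compute
# this set naively, straight from the definition: 2x2 matrices whose
# determinant is a unit. (Illustrative sketch only, not a library.)
from itertools import product
from math import gcd

def units(n):
    """The unit group of Z/nZ."""
    return {a for a in range(n) if gcd(a, n) == 1}

def GL2(n):
    """GL_2(Z/nZ): tuples (a, b, c, d) with ad - bc a unit mod n."""
    U = units(n)
    return [(a, b, c, d)
            for a, b, c, d in product(range(n), repeat=4)
            if (a * d - b * c) % n in U]

# |GL_2(F_2)| = (2^2 - 1)(2^2 - 2) = 6
print(len(GL2(2)))
```

The point is that nothing about prime ideals or topologies is needed: the functor is evaluated just by solving the defining condition in the given ring.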
Becoming comfortable with the whole functor of points formalism of course requires more effort, but I think if it were properly taught, all that would actually require less effort than with the ringed spaces formalism.
46. Jason Starr - August 12, 2009
Dear James,
I agree that there are several related questions. Regarding “locally ringed spaces” versus some other way of introducing schemes (i.e., as locally representable functors on the category of rings with the Zariski topology), I still consider the “Brown-Gersten-Quillen” spectral sequence as very good motivation. Whichever approach to schemes we choose, and whichever approach to K-theory we choose, I think we would agree about the higher K-theory groups of a Noetherian scheme, as Abelian groups. The BGQ spectral sequence has for (p,q)-term a product over the codimension-p points of the K-theory of the residue field at that point. So the spectral sequence already encodes a lot about the locally ringed space of the scheme: the point-set, the codimensions of the points, and their residue fields. As David points out, this spectral sequence is an instance of a more general construction. But I still don’t see how one would naturally introduce this spectral sequence without first introducing the locally ringed space of a scheme (of course, I would love to learn if there is a way to do it). There are some similar spectral sequences in Grothendieck’s Brauer exposes (and I think also in SGA 2 somewhere).
I mentioned Hartshorne’s book because, if one decides that locally ringed spaces are the way to go, then I think Hartshorne is the textbook which most quickly gets students up-to-speed with that approach.
Of course that is just my opinion, heavily influenced by first learning algebraic geometry from Matt and Brian Conrad. But I also teach my students about the functor of points at the earliest possible moment. And I try to teach about algebraic spaces and algebraic stacks as well, although with limited success. Johan is running a student seminar on stacks for the second time this semester. Maybe my opinion will completely change after that.
Best regards,
Jason
47. Matthew Emerton - August 12, 2009
Dear James,
It’s not that I think that the functor of points POV is intrinsically harder (although I think your “analyst down the hall” scenario is slightly rosy, just because, although the operation of forming GL_2(R) is very simple, the idea that the object itself *is* this operation is something that I think could be hard to get your head around if you hadn’t thought about it before). What I am curious about is the following:
Properness is usually defined for schemes in terms of the concept “universally closed”, which uses the topological spaces that you want to avoid. Now one can rephrase this via the valuative criterion, which is pure functor of points. But can all results about properness be proved just using the valuative criterion?
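(To fix notation: under the usual finiteness hypotheses, the valuative criterion says that f : X → S is proper precisely when, for every valuation ring V with fraction field K, the natural map

```latex
\[
  X(V) \;\longrightarrow\; X(K) \times_{S(K)} S(V)
\]
```

is a bijection, i.e. every K-point of X lying over a V-point of S extends uniquely to a V-point of X. This is indeed a statement purely about the functors of points.)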
Another concept that comes to mind which superficially uses the underlying topological space of the scheme is arguments via specialization or generalization. Presumably, though, these are not hard to rephrase functorially.
My point is that if there are arguments along the way that really use the topological spaces associated to non-affine schemes, then one has to develop this theory as well as the functor of points POV. But if one can really make all the arguments without this, I think it would interesting to record this fact, and to record some of the arguments.
Incidentally, I hadn’t realized that non-closed points were such a pedagogical issue before taking part in this thread; despite the sentiment of Joel’s original post, they always seemed quite natural to me.
Cheers,
Matt
P.S. The fact that a variety or scheme is just the high-school notion of a system of equations is something that I try to emphasize from the beginning. But actually, I think this is a difficult thing to convince students of (and not just because of any particular choice of formalism — I think that there is a tendency to psychologically separate high school and early undergraduate mathematics like calculus, linear algebra, and analytic geometry, from the “sophisticated” mathematics of upperclass undergraduate and graduate courses, which is not easily overcome).
48. anon - August 12, 2009
I had a question stimulated by the K-theory discussion: Is there a notion of Krull dimension at a k-valued point of a functor F: (k-alg) -> (Sets) for an algebraically closed field k, say when F is a locally finitely presented fppf sheaf (but not necessarily formally smooth)? Ideally, I’d like a notion which spits out the Krull dimension of F at the image point when F is a scheme.
49. Akhil Mathew - August 12, 2009
“Properness is usually defined for schemes in terms of the concept “universally closed”, which uses the topological spaces that you want to avoid. Now one can rephrase this via the valuative criterion, which is pure functor of points. But can all results about properness be proved just using the valuative criterion?”
I was also curious (and sorry if this is irrelevant): Is it possible to define a notion such as quasi-compactness of morphisms, which is quite natural via topological spaces, easily using the “functor of points” approach?
50. Eric Zaslow - August 12, 2009
This has been an enormously fun discussion to read (or, more honestly, skim). I lament only that Danny Calegari has hijacked the “ignoramus” moniker — for if he’s that, what am I? Let’s go with “doofus.”
This doofus never got a proper foundational grounding in algebraic geometry (or any other math subject) and can appreciate all the thought that is being put into what the students might need down the road. But… what do you want to *do* in your course? Why not just work backward from there? If the answer to this question truly is, “set the foundations for algebraic geometry in its most broadly applicable setting,” then go slowly and start with categories/functors/representability (or whatever). If the answer is, “explain the Kodaira embedding theorem” then start with manifolds and complex structures — then maybe reprove it from another perspective for students who will advance.
I guess I’m just saying introduce the “hard” stuff only if you’ll need it (I know this repeats many previous thoughts in the thread).
No calculus student needs to learn the vectors-as-differentials point of view. Further, every student familiar with multivariable calculus can prove the equivalence between vectors and differentials whenever s/he needs to. Worldviews can change over time, but learning must needs be connected to familiar objects, whatever those may be.
Doofusfully yours,
-Eric
p.s. Mirror symmetry comment. The flexibility of symplectic techniques in mirror symmetry is really cool. But the cat got loose and became feral. Now there are all kinds of efforts to tame it through the *algebraic* structures of the Fukaya category. Algebra always wins in the end.
51. Emmanuel Kowalski - August 12, 2009
Still thinking about how I will try to present algebraic geometry for my intended exponential sums book, here’s another concrete example of a question where one formulation seems to me certainly clearer to present, but it’s rather the usual topological/ringed space one: in families of exponential sums over finite fields (and in other things like Katz-Sarnak-type studies of L-functions over finite fields) it is essential to be able to speak of the Zariski closure of a subset of some GL(n,k), where k is an ell-adic field. How easy would it be to do this as a “functor of points” construction? (Note that the subsets in question have no structure a priori, except being subgroups of GL(n,k)).
52. David Ben-Zvi - August 12, 2009
Emmanuel – I imagine this is not helpful (ie just a tautological reformulation), but I think this can be said functorially as follows: given a subset S of X(k), the k-points of a variety (X=GLn in your case), its Zariski closure is the initial object of the category of closed subvarieties of X with an inclusion of S in their k-points. Or equivalently, Zariski closure is the left adjoint of the functor from closed subvarieties of X (or varieties with a closed embedding into X) to subsets of X(k) (or sets with an embedding into X(k)), so that inclusions of sets S into Y(k) are the same as embeddings of closed subvarieties Closure(S) into Y. (Of course this presupposes we know closed subvarieties in the functorial language, but that’s immediate from the ring-theoretic definition of closed subsets or Zariski opens, which is part of the functor-of-points package.)
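In symbols: for S a subset of X(k) and Z ranging over closed subvarieties of X, the adjunction is just

```latex
\[
  \overline{S} \subseteq Z
  \quad\Longleftrightarrow\quad
  S \subseteq Z(k),
\]
```

so Closure(S) is the smallest closed subvariety of X whose k-points contain S.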
53. James Borger - August 12, 2009
Regarding quasi-compactness and properness, I believe that some properties cannot be made sense of on the category of all functors, though as you point out, Matt, properness can. But in general, I think we need to look at certain subcategories. So I should probably be more specific about what I have in mind. I pretty much think of this as Grothendieck’s point of view, perhaps as interpreted by others, such as Artin. Maybe some flourishes are due to me, but I doubt it.
The basic point is that there are three categories: the category of functors from Rings to Sets, the category of sheaves contained in it, and the category of algebraic spaces contained in that. I think of these as loosely corresponding to set theory, topology, and geometry. Here’s what I mean:
1. Let Aff denote the opposite of the category of rings. Call the objects “affine schemes”.
2. Put the etale topology on Aff. For many purposes, you could probably also use the Zariski topology. (It’s not necessary at this point, but note that etaleness can be easily defined for any functor X from Rings to Sets. This is using the nilpotent lifting criterion for formal etaleness and the commutation with filtered colimits criterion for locally of finite presentation.) Note that we have to make sense of an etale cover here. I’m not sure what the best way of doing this is. You could use prime ideals of rings, but that goes against the spirit of what I’m proposing. You could also use faithful flatness. Since we’re just talking about the surjectivity of etale maps, there might be a nice way of defining cover that I haven’t thought of.
3. Define the category of spaces to be the category of sheaves of sets on Aff. (Actually, one would probably want some set-theoretic smallness conditions on such sheaves so we don’t have to work with larger universes. This is alas not addressed in SGA 4.) Any affine scheme represents a functor (its functor of points) which is in fact a sheaf, so we can view Aff as a (full) subcategory of Spaces.
The category of spaces forms a topos, and so we can probably define most properties of schemes we think of as topological at this stage. For example, a space X is defined to be quasi-compact if every covering family has a finite subcover. This answers Akhil Mathew’s question. There is a lot about such topological properties in the first two volumes of SGA 4. For quasi-compactness and quasi-separatedness, see the Grothendieck-Verdier expose VI. In particular open and closed subspaces are defined (exp IV, I think). I would imagine you can define what it means for a map of spaces to be closed.
4. Let AlgSp, the category of algebraic spaces, denote the closure of Aff under disjoint unions and quotienting (in Spaces) by etale equivalence relations. (Every algebraic space in the sense of D Knutson is an algebraic space in the sense here, and the converse is true if and only if the space is quasi-separated. Also, note that once you add all disjoint unions, the quotienting procedure terminates in two steps, I think. So they really are accessible. After the first step, you get the algebraic spaces with affine diagonal.)
5. Define an open immersion of algebraic spaces to be an etale monomorphism. (This definition can be stated for general functors, but I don’t know if it’s reasonable in that generality.) Define a scheme to be an algebraic space with a covering by open immersions from affine schemes.
Thus we have schemes and open and closed subsets, so I would expect that any topological property in scheme theory could be defined from this point of view, though I’d have to think about closed morphisms. One nice thing about this approach is that because our underlying “set theory” (i.e. functors from Rings to Sets) is in some ways quite rich, the passage from set theory to topology to geometry is made by adding *properties* to our “sets” (=functors), rather than structure, as is the case with differential geometry in the usual sense. So defining GL_2 is easy, but the more sophisticated the questions you ask, the deeper you’ll have to go into the theory. So the scenario of the analyst down the hall is a bit rosy, I admit, but I think there is some genuine truth to it.
Now, you might be tempted to read this as a satire on the categorical point of view. A thousand pages of SGA 4 to eliminate prime spectra of rings! A few comments on that: First, just as you don’t need a definitive tome on general topology to teach scheme theory, you could probably introduce the general sheaf theory using only a fraction of the account in SGA 4. That said, I could easily imagine that if I were to teach such an algebraic geometry class, I would retreat quite a bit because students know topological spaces and I wouldn’t want to spend the whole semester on sheaf theory (actually, I might…).
But I do believe that, in principle, this is the right way of thinking about the foundations of the subject, and so one should try to teach as close to this point of view as is reasonable. But some people who know the details of the big arguments in EGA much better than I do disagree with me. Probably Jason and Matt know them better than I do. Certainly Brian Conrad does, and I don’t think he’ll mind me saying that he thinks I’m crazy (presumably only about this). I suppose the only way I’d be convinced otherwise is to try to teach a class and fail, the flip side to the remark that the only way to convince the unbelievers is to teach a successful class.
Finally, another thing I like about this approach is that it would clearly work for categories much more general than the category of commutative rings. Indeed, Toen-Vaquie have a paper “Au-dessous de Spec Z” about these things. In particular, I think it would be interesting to look at semi-algebraic geometry from this point of view. Also, this approach doesn’t use the axiom of choice, because we don’t need to have prime ideals to make our underlying space: we just need the ring itself. Now, at a practical level, I don’t actually care about these two issues, but it is my experience that the way of understanding things that generalizes most cleanly, that uses no “recondite axioms of set theory”, is usually the right way and that understanding things in general improves your point of view, even if you only want to work in a special context.
54. Matthew Emerton - August 12, 2009
Dear Jim,
Thanks for the nice survey of the category of points set-up.
Regarding Akhil Mathew’s question, let me take one more step, just to be completely explicit:
a morphism X –> Y is quasi-compact if and only if the preimage of any affine is quasi-compact, if and only if its fibre product with any map S –> Y, where S is affine, has affine domain. Since we can define fibre product in the category of functors (the categorical definition of fibre product just means that we define the fibre product of functors in a “point-wise” fashion), and since Jim above defined what it means for a given space to be quasi-compact, we can define what it means for a morphism to be quasi-compact.
(More generally, since fibre products are, by their very nature, something that fits very well in the functor of points POV, once you can define an absolute notion (e.g. quasi-compactness or affine), the corresponding relative notion is very easily defined in the functor of points POV.)
Also, it is worth pointing out for any non-experts still reading that the exercise of making definitions (and proofs) in the functor of points set-up is an important one (as far as I understand) whether or not one wants to retain the locally ringed space POV on schemes, because often this is what is needed to extend definitions and theorems to the setting of stacks, or non-commutative spaces, or (perhaps?) the setting of derived algebraic geometry.
Cheers,
Matt
55. Peter - August 12, 2009
There are still non-experts reading. I think that learning the different ways that experts think about the things they know is not too much less important than learning the details of what they know, and this kind of discussion is perfect for that. I'm glad this post has generated such vigorous discussion, since I'd like to learn more about both points of view :-) It seems (to a non-expert) like there's quite a lot of machinery required for the locally-ringed spaces POV also. Eisenbud's "commutative algebra, with a view towards algebraic geometry" is 800+ pages, and one of the goals was to prove all the results assumed in Hartshorne. Of course, not all of it is necessary, but I'd assume not all of SGA 4 is needed to give the needed results for the 'functor of points' POV.
56. James Borger - August 12, 2009
Peter, I think there wouldn’t be much difference with the commutative algebra requirements. The functorial POV might require a little bit less to get the general categories defined (you probably only need some basic facts about nilpotent elements to prove a few basic facts about etale ring maps). But most of the commutative algebra is for doing real work with finite-dimensional varieties, and I don’t think that would be any different from the functorial POV. You might save what you need to prove that the opposite of the category of rings is equivalent to a full subcategory of the category of locally ringed spaces, but I doubt much more than that.
57. Aise Johan de Jong - August 13, 2009
It seems to be very hard to find a good way to teach algebraic geometry. The main problem is that it is impossible to get very far into the material, even in a year long course. Every time I teach it I use a different approach. I have had most success by teaching a first semester course on varieties over algebraically closed fields with closed points only (usually a la the red book), and to follow this up with a second semester on schemes.
Note that Hartshorne’s book on algebraic geometry has the same structure. I claim we would not have such a successful and active field of research had Hartshorne not written his book. It is a marvellous book.
I have recently taught an algebraic geometry course where the first semester was spent teaching commutative algebra, and the second semester was a course on schemes. This did not work as well, perhaps because I was not able to get as far as with the other method.
In any course on schemes you will mention Grothendieck and the functor of points, fibre products, the notion of base change, etc. In fact, this is how most students are introduced to the “functorial POV”. You can start convincing them that this is useful in the discussion of separated and proper morphisms (where you show that certain diagrams are fibre product diagrams by thinking in terms of the functor of points).
The next layer in the story are algebraic spaces. Depending on their current topic of research an algebraic geometer may not need to use, or know about these at all.
Why are algebraic spaces important? My favorite example is the following: Take a degree d > 3 surface X in P^3 with a single ordinary double point and otherwise smooth. Let X’ be the resolution of singularities of X. Then there exists a flat proper family Y –> T of algebraic spaces over T = Spec(dvr) with Y_0 = X’ and Y_\eta a surface of degree d in P^3 with general moduli (in particular with Picard number 1). However, the algebraic space Y is not a scheme.
After algebraic spaces one introduces algebraic stacks, in order to coherently think about moduli problems, such as moduli of curves, surfaces, etc. But note that for certain questions, involving moduli you can find substitutes for arguments that would seem to require knowing about algebraic stacks, for example if you want to prove that there exist nontrigonal curves of a given genus, then you can get by with naively counting moduli. The key is to think about the collection of all families of curves — which is also how the algebraic stack M_g is defined theoretically.
After having introduced algebraic stacks you can start to think about higher algebraic stacks, noncommutative spaces, etc.
It seems madness to try and teach algebraic geometry starting with the category of rings, and then introducing functors, sheaves, etc. For example, without introducing varieties or schemes, the Zariski topology may seem artificial. Why consider etale maps if you do not know that a variety of dimension d is in reality a 2d-dimensional gadget? In fact, introducing etale ring maps and proving even the most trivial properties is difficult and takes lots of lectures. Explaining what those properties mean will be hard if you have not previously introduced any geometry. Finally, I think the devil is in the details and that it would be very hard to actually prove anything geometrically interesting even in a year long course using the “functorial POV”.
What I am really trying to say is this. If you are an algebraic geometer you probably enjoy the layering and abstraction that we do in our field — roughly in the order I sketched above. You likely enjoy the fact that there is a lot of commutative algebra underpinning algebraic geometry, and also that there are spaces, sheaves, and cohomology. Once you learn about Artin’s work on algebraic spaces you are amazed at how a simple list of conditions on a functor encapsulates the notion of an algebraic space. You enjoy reading Schlessinger’s article. You admire how Deligne and Mumford introduced algebraic stacks as a good way of thinking about moduli of curves, and you love how it fits with your notion of a family of curves. And so on.
Getting rid of one of the layers is not a good idea IMHO, especially when teaching students. I often find myself telling students: “You have to know everything!”
58. Emmanuel Kowalski - August 13, 2009
Thanks to David (Comment 52) for ways of describing the Zariski closure in functorial form. I like it in an abstract way, but it seems quite clear to me that this will not be the right way to present things optimally for a book for analytic number theory…
59. Akhil Mathew - August 13, 2009
Thanks to James Borger and Matthew Emerton for describing how quasi-compactness can be expressed functorially. Perhaps this is a better approach for generalizability.
I will also take a look at SGA 4.
60. Allen Knutson - August 13, 2009
"I claim we would not have such a successful and active field of research had Hartshorne not written his book."
(Off topic)
My father once described to me what it was like doing algebraic geometry before [Hartshorne] (and before Xerox machines). He would go to conferences, tell people he was at MIT, and see naked envy in their eyes that he didn't have to pore through 6th-generation mimeographed notes — he could learn algebraic geometry by just breathing the Boston air.
61. Dipankar Ray - August 14, 2009
Hey, if y’all are going to *insist* on continuing this conversation – which, as I remember it, started at least as far back as that party at BZ’s place, in Berkeley back in ’98 – well, then we should at least be at Rivoli or something. I may have to pore over all these posts one more time, but I’m pretty sure Jim said he’d buy (as long as Danny springs for the wine…)
62. Ben Webster - August 14, 2009
Well, this blog was intended to be a continuation of drunken conversation by other means….
63. Anonymous - August 14, 2009
When studying a new subject, the first thing that (a majority of) people learn are the formal definitions and how to manipulate them. As time progresses, we start to develop an intuitive feel for the material, and finally, we create our own internal stories for what is “really” going on. But these internal stories don’t make any sense without the hard work that went into creating them. It’s tempting to imagine that with the help of a few well placed examples we can impart all our hard won secrets about algebraic geometry (or any other subject), but I think this is misguided. I’m not quite prepared to say that teaching an algebraic geometry course from the functor of points POV is falling for this trap, on the other hand, I think we can all agree that a certain book by “Geometrix the Gaul” (or R.H. for those unfamiliar with the work of Goscinny) has lots of great exercises.
The first time I thought about a scheme was while looking at Mazur’s Eisenstein Ideal paper. He draws a “picture” of Spec(T), where T is the Hecke Algebra. (T is finite over Z, and one may as well imagine it to be Z[x]/(x^2-px) for this discussion.) For me, the cartoon of two copies of Spec(Z) intersecting transversely at p was illuminating and instructive, and it’s also a picture that is ultimately relevant to the decomposition (up to isogeny) of the Jacobian of a modular curve.
Later on, I learnt about finite group schemes (over affine bases, so we are still talking about affine schemes here). It is somehow obvious in this context to think about such objects as functors — even though the underlying rings T may be very similar to the ones considered above. All of which is to say that context is surely everything when deciding what perspective to consider, and understanding every approach is useful.
In response to Dipankar, _any_ conversation would be better at Rivoli, my friend.
64. Dipankar Ray - August 14, 2009
OK, a real post: why don’t the Shafarevich books come up at all in these discussions? We all know a few Russians who learned this stuff reasonably well – did they start with Shafarevich, or did their adviser also tell them “first do all the problems in that R.H. book, and then we’ll talk” (something i never did, btw, which is why my knowledge of AG never budged beyond G&H). Or did they all read EGA/SGA on their mother’s knee like we were supposed to?
I always fantasized about spending a year doing the problems in Harris’s book, followed by going thru Shafarevich’s books, but that fantasy never got airtime during waking hours. Of course, this is along the lines of my other fantasy from those days, which was to go thru C.L. Siegel’s three-volume series… which is I guess the antipodal stance from the Arakelov/Lang thing. Ah well…
65. Dipankar Ray - August 14, 2009
Hmm, I see now that Jason Starr did indeed mention Shafarevich in passing…
66. James Borger - August 14, 2009
Dipankar! I’ve never looked at Shafarevich’s books, so I’m afraid I don’t have anything to say. (And I think you might be mistaken about the origins of this conversation, because as far as I can remember, no one talked about anything then but hair.)
67. Matthew Emerton - August 14, 2009
I think that both Johan and Anonymous have made thoughtful comments about the pedagogy of algebraic geometry. It’s a little depressing (from the point of view of a teacher of algebraic geometry) that both of them conclude with some version of “you have to know everything”.
On the other hand, despite the comment of Anonymous that it is "somehow obvious" to think of (say) finite flat group schemes as functors, I've had the experience of watching people battle with Hopf algebras to prove statements that were fairly evident from the functorial point of view. I think that there is something to be taught about the effective use of the functorial viewpoint, and this viewpoint is underrepresented in Hartshorne. (This is not a criticism of Hartshorne — it is already quite a tome, and it can't do everything.) I learnt the functor of points viewpoint myself by reading Grothendieck's Bourbaki seminar about faithfully flat descent. He states that this is obvious when the faithfully flat map has a section, and then gives a simple argument via the functor of points. I remember spending a long time trying to resist this argument, and instead battling with legions of diagrams in the category of schemes, until I became convinced that the Yoneda viewpoint was the best one to adopt.
It would be nice to have a more text-bookish discussion of the functorial point of view, with examples and exercises, not (at least to my mind) as any kind of replacement for Hartshorne, but as another resource for students. (My experience with algebraic geometry students is that they’re grateful for whatever resources they can get.)
After all, if they have to learn everything, the more resources that are available, the better.
68. Matthew Emerton - August 14, 2009
P.S. When I wrote that “the Yoneda viewpoint was the best one to adopt”, I meant for the particular issue at hand. (Effectiveness of faithfully flat descent data in the presence of a section.)
69. James Borger - August 14, 2009
Regarding Matt’s “text-bookish discussion of the functorial point of view”, what do people think of Waterhouse’s book Introduction to Affine Group Schemes? He starts right away with functors of points. It’s a book on algebraic groups, so doesn’t really deal with varieties much, but it is at least approximately the group-theoretic analogue of the kind of introduction I’ve had in mind. Has anyone tried to teach a class using it? While it might be hard for students on their own, it might be reasonable if they have access to someone who is comfortable with that point of view.
70. Jim Humphreys - August 17, 2009
Like James, I'm curious as to what people think of the book by Waterhouse and especially its pedagogical value. There is no doubt that the book outlines an attractive spectrum of ideas in relatively few pages, but with few big theorems proved completely along the way. I've always been reluctant to recommend it to graduate students, partly because of my own interest in characteristic p and the role of quotient varieties such as flag varieties. The somewhat impenetrable book by Demazure-Gabriel is longer and more detailed but also fails to get very far toward current research. (It costs a fortune, too.)
71. Various and Sundry « Not Even Wrong - August 20, 2009
[...] the all-time highest quality discussion ever held in a blog comment section goes to the comments on this posting at Secret Blogging Seminar, where several of the best (relatively)-young algebraic geometers in the [...]
72. Z - August 21, 2009
So Danny Calegari is the ignoramus and Eric Zaslow is the doofus. Then what am I? I shudder just to think about it.
As someone who remembers vividly what I felt when I tried to learn algebraic geometry, I will just mention that I, for one, like prime ideals (which are not maximal), because they are so important in my trade. Had I learned algebraic geometry the Yoneda way, or even the spec max way, I wouldn't have found a lot of what motivated me to learn in the first place.
But this emphatically doesn't mean that either way is bad; it is just a reminder that different students with different tastes, backgrounds and especially learning habits will react differently to different material. I think algebraic geometry might be particularly hard to teach in that respect because students will typically expect something different from the course, depending on their main research interests.
73. James Milne - August 21, 2009
If someone already ancient can intrude, I’d like to make a couple of points.
(a) In my view one should first learn algebraic geometry in the context of algebraic varieties over algebraically closed fields. Someone with a very strong background in commutative algebra can read Hartshorne without much difficulty, but it is possible to do this without acquiring much geometric intuition.
Also I think algebraic varieties should be defined as ringed spaces from the start (i.e., maxspecs) for a number of reasons: it's actually easier; someone who has learned to think of manifolds as ringed spaces will find that he already understands much of the theory of algebraic varieties; it makes the transition to schemes easier; etc.
In fact, many algebraic geometers write “scheme” but think “closed points” — for example Lazarsfeld in his two volume work “Positivity in Algebraic Geometry” writes in Notations and Conventions: A scheme is an algebraic scheme of finite type over C… We deal exclusively with closed points.
However, for other algebraic geometers, and all arithmetic geometers, schemes (including nonclosed points) are vital.
(b) For a while in the 1960s, true believers *knew* that the *only* way to introduce schemes was as functors, and there are several horror stories from students who attended such a course.
Nevertheless, I do think that the correct way to define affine (algebraic) groups is as functors. For example, what *is* SLn? A commutative Hopf algebra with antipode? A ringed space whose underlying space has nonclosed points and is not even a group? I think not. It is something that, when applied to a commutative ring R gives the group of nxn matrices of determinant one, in other words, it is a functor.
Waterhouse’s book is a fine work, but it doesn’t get very far. The heart of the theory of algebraic groups after all is the study of semisimple algebraic groups (roots and weights), which isn’t covered. Incidentally, I have taught a course on algebraic groups using the functorial point of view, which (I think) went quite well.
Regarding algebraic groups as functors allows you to go a long way without using much algebraic geometry. In the book I’m working on, I plan to assume little algebraic geometry in the first three Chapters (only what’s in my notes on Commutative Algebra), but then use whatever algebraic geometry (including schemes) I want to in the last three.
Incidentally, some of the links to my website in the above discussion have already changed — you should always go to the top.
James Milne
74. Dinakar Muthiah - October 4, 2009
In my opinion, one main benefit of working with varieties (reduced, and finite type over an algebraically closed field) is that morphisms between them are completely determined by the set-theoretic maps of their closed points. So being a morphism is a property of a set map rather than additional structure, which is the case in rings, groups, topological spaces, differential manifolds, complex manifolds, etc. Because of the fully faithful functor to schemes, one doesn't have to worry about non-closed points or specifying a pull-back map of structure sheaves. So both pedagogically and in practice, varieties have an important role.
Joel, as for your question of Spec-m as an adjoint. I think that if you restrict yourself to locally ringed spaces where the structure sheaf is a sheaf of finite-type k-algebras, and all residue fields are k, then Spec-m will be adjoint to the global section functor.
75. MaxSpec is not a functor « Annoying Precision - December 22, 2009
[...] this post we’ll discuss this choice. I should mention that the Secret Blogging Seminar has discussed this point very thoroughly already, but from a much more high-brow [...]
76. Left-Handed Commutative Algebra « Effective descent - February 8, 2010
[...] In the sequel, we will construct the category of Algebraic spaces following the program outlined by James Borger in an interesting SBS discussion, that is, relying only on commutative [...]
77. Passage from compact Lie groups to complex reductive groups « Secret Blogging Seminar - November 25, 2010
[...] again, I’m preparing to teach a class and needing some advice concerning an important point. [...]
78. Functor of Points Versus Locally Ringed Spaces « Ars Mathematica - December 19, 2010
[...] year ago, the Secret Blogging Seminar had a long thread on how to teach algebraic geometry, one that I never managed to read in its entirety before now. [...] | 21,630 | 97,350 | {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 27, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0} | 3.015625 | 3 | CC-MAIN-2013-20 | longest | en | 0.906978 |
https://lungemine.com/greatest-common-factor-of-8-and-32/ | 1,656,363,983,000,000,000 | text/html | crawl-data/CC-MAIN-2022-27/segments/1656103341778.23/warc/CC-MAIN-20220627195131-20220627225131-00585.warc.gz | 421,147,062 | 5,343 | GCF that 8 and 32 is the largest possible number that divides 8 and also 32 precisely without any kind of remainder. The factors of 8 and 32 are 1, 2, 4, 8 and 1, 2, 4, 8, 16, 32 respectively. There space 3 typically used techniques to discover the GCF the 8 and also 32 - lengthy division, Euclidean algorithm, and also prime factorization.
1. GCF of 8 and 32
2. List of Methods
3. Solved Examples
4. FAQs
Answer: GCF of 8 and 32 is 8.
Explanation:
The GCF of two non-zero integers, x(8) and y(32), is the greatest positive integer m(8) that divides both x(8) and y(32) without any remainder.
Let's look at the various methods for finding the GCF of 8 and 32.
- Prime Factorization Method
- Long Division Method
- Using Euclid's Algorithm
### GCF of 8 and 32 by Prime Factorization
Prime factorization of 8 and 32 is (2 × 2 × 2) and (2 × 2 × 2 × 2 × 2) respectively. As visible, 8 and 32 have common prime factors. Hence, the GCF of 8 and 32 is 2 × 2 × 2 = 8.
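The same method can be sketched in code: factor each number by trial division and multiply the primes common to both factorizations. This is a minimal illustration (the function names are my own); the multiset intersection keeps each shared prime at its minimum multiplicity.

```python
from collections import Counter

def prime_factors(n):
    """Multiset of prime factors of n, found by trial division."""
    factors = Counter()
    d = 2
    while d * d <= n:
        while n % d == 0:
            factors[d] += 1
            n //= d
        d += 1
    if n > 1:
        factors[n] += 1
    return factors

def gcf_by_factorization(a, b):
    """GCF = product of the primes common to both factorizations."""
    common = prime_factors(a) & prime_factors(b)  # & keeps minimum multiplicities
    result = 1
    for p, exp in common.items():
        result *= p ** exp
    return result

print(gcf_by_factorization(8, 32))  # 8, i.e. 2 × 2 × 2
```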
### GCF of 8 and 32 by Long Division
GCF of 8 and 32 is the divisor that we get when the remainder becomes 0 after doing long division repeatedly.
Step 1: Divide 32 (the larger number) by 8 (the smaller number).
Step 2: Since the remainder = 0, the divisor (8) is the GCF of 8 and 32.
The corresponding divisor (8) is the GCF of 8 and 32.
### GCF of 8 and 32 by Euclidean Algorithm
As per the Euclidean Algorithm, GCF(X, Y) = GCF(Y, X mod Y), where X > Y and mod is the modulo operator.
Here X = 32 and Y = 8.
GCF(32, 8) = GCF(8, 32 mod 8) = GCF(8, 0). GCF(8, 0) = 8 (∵ GCF(X, 0) = |X|, where X ≠ 0).
Therefore, the value of the GCF of 8 and 32 is 8.
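The Euclidean algorithm above translates directly into a few lines of code. A minimal Python sketch (the function name is my own; Python's standard library already provides `math.gcd` for the same job):

```python
def gcf(x, y):
    """GCF(X, Y) = GCF(Y, X mod Y), repeated until the second argument is 0."""
    while y:
        x, y = y, x % y
    return abs(x)  # GCF(X, 0) = |X|

print(gcf(32, 8))  # 8
```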
## GCF of 8 and 32 Examples
Example 1: Find the GCF of 8 and 32, if their LCM is 32.
Solution:
∵ LCM × GCF = 8 × 32 ⇒ GCF(8, 32) = (8 × 32)/32 = 8. Therefore, the greatest common factor of 8 and 32 is 8.
Example 2: The product of 2 numbers is 256. If their GCF is 8, what is their LCM?
Solution:
Given: GCF = 8 and product of numbers = 256. ∵ LCM × GCF = product of numbers ⇒ LCM = Product/GCF = 256/8. Therefore, the LCM is 32.
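The identity used here, LCM × GCF = product of the two numbers, is easy to check in code (a one-line sketch; the function name is my own):

```python
def lcm_from_gcf(a, b, gcf_value):
    """Recover the LCM from the identity LCM × GCF = a × b."""
    return (a * b) // gcf_value

print(lcm_from_gcf(8, 32, 8))  # 32, since 256 / 8 = 32
```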
Example 3: Find the greatest number that divides 8 and 32 exactly.
Solution:
The greatest number that divides 8 and 32 exactly is their greatest common factor, i.e. the GCF of 8 and 32. ⇒ Factors of 8 and 32:
Factors of 8 = 1, 2, 4, 8. Factors of 32 = 1, 2, 4, 8, 16, 32.
Therefore, the GCF of 8 and 32 is 8.
## FAQs on GCF of 8 and 32
### What is the GCF of 8 and 32?
The GCF of 8 and 32 is 8. To calculate the greatest common factor (GCF) of 8 and 32, we need to factor each number (factors of 8 = 1, 2, 4, 8; factors of 32 = 1, 2, 4, 8, 16, 32) and choose the greatest factor that exactly divides both 8 and 32, i.e., 8.
### How to Find the GCF of 8 and 32 by Prime Factorization?
To find the GCF of 8 and 32, we will find the prime factorization of the given numbers, i.e. 8 = 2 × 2 × 2; 32 = 2 × 2 × 2 × 2 × 2. ⇒ Since 2, 2, 2 are common terms in the prime factorization of 8 and 32, GCF(8, 32) = 2 × 2 × 2 = 8.
### What is the Relation between LCM and GCF of 8, 32?
The following equation can be used to express the relation between LCM (Least Common Multiple) and GCF of 8 and 32, i.e. GCF × LCM = 8 × 32.
### How to Find the GCF of 8 and 32 by the Long Division Method?
To find the GCF of 8 and 32 using the long division method, 32 is divided by 8. The corresponding divisor (8) when the remainder equals 0 is taken as the GCF.
### What are the Methods to Find the GCF of 8 and 32?
There are three commonly used methods to find the GCF of 8 and 32.
- By Prime Factorization
- By Listing Common Factors
- By Long Division
### If the GCF of 32 and 8 is 8, Find its LCM.
GCF(32, 8) × LCM(32, 8) = 32 × 8. Since the GCF of 32 and 8 = 8, ⇒ 8 × LCM(32, 8) = 256. Therefore, LCM = 32.
http://ergobalance.blogspot.com/2008/06/predicting-oil-hubbert-liearization.html | 1,481,449,984,000,000,000 | text/html | crawl-data/CC-MAIN-2016-50/segments/1480698544672.33/warc/CC-MAIN-20161202170904-00284-ip-10-31-129-80.ec2.internal.warc.gz | 93,072,072 | 19,749 | ## Sunday, June 15, 2008
### Predicting Oil - The Hubbert Linearization.
The name of Marion King Hubbert is revered as that of a pioneer in the field of predicting likely future oil production. Hubbert first applied his analysis to the lower 48 states of the US (i.e. those excluding Hawaii and Alaska) in 1956 and predicted that oil production would reach a maximum (peak oil) either in 1965 or 1970, depending on his estimate of the total volume of the oil-reserve there, of either 150 or 200 billion barrels, respectively [1]. While the latter input yielded the timing of the peak for US oil with uncanny accuracy, the method was not per se predictive of the total volume of the resource; that required a prior estimate. One derivation of Hubbert's analysis that is predictive [2,3] has become known as the "Hubbert Linearization". The Hubbert Peak can be represented by the logistic differential equation (1):
dQ/dt = P = kQ(1 - Q/Qt) ......................(1).
Here, P is the production (number of barrels of oil) per year, Q is the cumulative production (i.e. the total amount of oil recovered from the source to date), Qt is the total amount of oil that will ever be recovered from it and k is the logistic growth rate (described by Kenneth Deffeyes [3] as a "sort of compound interest"). Equation (1) is quadratic in Q (plotted against Q, P describes a parabola; plotted against time, it gives the bell-shaped production curve of Figure 1) but may be re-written in linear form, by dividing through by Q to give equation (2):
P/Q = k - kQ/Qt ......................................(2).
Thus a plot of P/Q (i.e. the number of barrels of oil produced each year divided by the total amount of oil extracted to date) versus Q (the total amount of oil extracted to date) directly, gives a straight line (Figure 2) with a y-axis intercept equal to k and a slope -k/Qt. From k and Qt, values for P can be estimated using equation (1), for each unit value of Q, from which it is apparent that the ability to produce oil depends entirely on the "unproduced fraction", (1-Q/Qt), i.e. how much oil there is remaining in the well... and on nothing else. Qt is also given by the intercept on the x-axis, since it corresponds to the point at which the resource is exhausted and P/Q = 0.
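As an illustration of how the linearization is applied, the sketch below fits the straight line of equation (2) by ordinary least squares and reads off k and Qt. It is checked here against exact logistic data generated from the illustrative values k=0.061 and Qt=198.395 Gb used later in the text; real production data are noisy, and in practice only the later, near-linear portion of the P/Q-versus-Q plot would normally be fitted.

```python
def fit_hubbert(Q, P):
    """Least-squares fit of P/Q = k - (k/Qt)*Q; returns (k, Qt)."""
    y = [p / q for p, q in zip(P, Q)]
    n = len(Q)
    mean_Q = sum(Q) / n
    mean_y = sum(y) / n
    slope = (sum((q - mean_Q) * (v - mean_y) for q, v in zip(Q, y))
             / sum((q - mean_Q) ** 2 for q in Q))
    k = mean_y - slope * mean_Q   # intercept on the P/Q axis
    Qt = -k / slope               # x-intercept: total recoverable oil
    return k, Qt

# Sanity check on exact logistic data (illustrative k and Qt)
k0, Qt0 = 0.061, 198.395
Q = [float(q) for q in range(50, 170, 5)]
P = [k0 * q * (1 - q / Qt0) for q in Q]
print(fit_hubbert(Q, P))  # recovers (0.061, 198.395) up to rounding
```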
To make a plot of P against time - i.e. a classic production curve - it is necessary to replace Q as the x-axis unit by time (e.g. by year). This can be done by noting that P = dQ/dt, as in equation (1).
Hence, 1/P = dt/dQ. By using equation (1), values of P can be predicted for each barrel of oil (billion barrels of oil are a more convenient unit), by increasing or decreasing Q by increments of one (billion barrel) unit from cumulative production at a specified year (to act as a "clock", e.g. Q = 169 billion barrels by 2002 for the US). By then dividing the P values into 1, we get the reciprocals (1/P) which are in units of years/billion barrels rather than of billion barrels per year (P). Then for each value of P, we calculate a year-fraction* (i.e. how long it took to produce each billion barrel unit) and make a production plot of P versus year-fraction, giving the curve in Figure 1, the area under which is equal to the total volume of the resource, Qt.
[*i.e. the division does not come out conveniently in round year units, but is usually fractional. For example, for the US production we obtain k=0.061 and Qt=198.395 Gb. When Q=169 Gb, P=1.532, 1/P = 0.653, and we set the year at 2002, by when production data show that 169 Gb had been produced. This is our "clock". We then calculate for Q=168, P=1.574, 1/P=0.635, and so the "year fraction" is 2002-0.653=2001.347. For Q=167, P=1.617, 1/P=0.618, and the year fraction is 2001.347-0.635=2000.712. The points can be extended above the "clock" year too, e.g. for Q=170, P=1.488, 1/P=0.672, and the "year"= 2002+0.672=2002.672. The procedure is continued for all values of Q to obtain a good data set for the plot of P versus "year"].
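The footnote's "clock" construction can be scripted directly. This sketch uses the quoted values (k = 0.061, Qt = 198.395 Gb, with the clock set at Q = 169 Gb in 2002) and steps the year back or forward by 1/P for each billion-barrel unit:

```python
# Equation (1) with the quoted US parameters; units are Gb and years.
k, qt = 0.061, 198.395

def production(q):
    return k * q * (1 - q / qt)   # P, in Gb per year

clock_q, clock_year = 169, 2002.0
years = {clock_q: clock_year}

# Producing the Q-th Gb takes 1/P(Q) years: step the clock backwards...
y = clock_year
for q in range(clock_q, 100, -1):
    y -= 1.0 / production(q)
    years[q - 1] = y

# ...and forwards above the clock year.
y = clock_year
for q in range(clock_q + 1, 180):
    y += 1.0 / production(q)
    years[q] = y

print(round(production(169), 3), round(years[168], 3))
```

The small differences from the footnote's printed values come only from rounding k; the (year, P) pairs produced this way trace out the production curve of Figure 1.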
For world oil reserves, the analysis predicts a value for Qt of around 2 trillion barrels, which would suggest we are close to (or past) the half-way point, i.e. we have used around half our original bestowal of oil.
The method has been extended to using second derivatives [4], e.g. in the form of equation (3):
(1/P)dP/dt = k(1 - 2 Q/Qt) ....................(3).
In equation 3, the term before the equals sign is often called the decline rate (of a resource). Use of this formula has been called the "Second Hubbert Linearization". A plot of delta-P/P versus 2Q gives a value of 2634 billion barrels for Qt and k = 4.6%. There are two potential matters of import here, if this analysis is correct: (1) we may have another 600 billion barrels of oil available to us, (2) the date of peak oil is shifted from around 2006 [as is obtained from equations (1) and (2)] to around 2013.
According to the summary of a recent oil conference, the consensus on peak oil is that it will be with us by 2012 [5]. This is in accord with the prognosis made by the CEO of Shell who, earlier this year, stated that he expected to see a gap between demand and supply for oil at some time between 2010 and 2015.
That additional 600 billion barrels, if real, may not help us much though, because it is the rate of recovery that matters in closing the demand-supply gap for oil. If more oil cannot be pumped out per day and refined fast enough to match demand, high prices will remain and there will be shortfalls in supply... somewhere or another, both in terms of fuel and chemical feedstocks for industry.
## EEL 3705: Fundamental of Digital Circuit
1) A sequential circuit with two D flip-flops A and B, two inputs, x and y, and one output z is
specified by the following next-state and output equations: (10 points)
A(t+1) = ??′ + ??
B(t+1) = ?? + ??
z = ?
a) Draw the logic diagram of the circuit.
b) List the state table for the sequential circuit.
c) Draw the corresponding state diagram.
2) A sequential circuit has two JK flip-flops A and B and one input x. The circuit is described by the
following flip-flop input equations: (15 points)
J_A = ?    K_A = ?
J_B = ?    K_B = ?
a) Derive the state equations A(t+1) and B(t+1) by substituting the input equations for the J
and K variables.
b) Draw the state diagram of the circuit.
3) A sequential circuit has three flip-flops A, B, C; one input x_in; and one output y_out. The
state diagram is shown in the accompanying figure. The circuit is to be designed by treating
the unused states as don't-care conditions. Analyze the circuit obtained from the design to
determine the effect of the unused states. (15 points)
a) Using D flip-flops
b) Using T flip-flops
4) Design a sequential circuit with two D flip-flops A and B, and one input x_in. (15 points)
a) When x_in = 0, the state of the circuit remains the same. When x_in = 1, the circuit goes
through the state transitions ?? → ?? → ?? → ?? → ?? and repeats the sequence.
b) When x_in = 0, the state of the circuit remains the same. When x_in = 1, the circuit goes
through the state transitions ?? → ?? → ?? → ?? → ?? and repeats the sequence.
5) Design a four-bit shift register with parallel load using D flip-flops. There are two control inputs:
shift and load. When shift = 1, the content of the register is shifted by one position. New data are
transferred into the register when load = 1 and shift = 0. If both control inputs are equal to 0, the
content of the register does not change. (10 points)
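To pin down the control behavior before designing the hardware, here is a small behavioral sketch in Python (an illustration only, not the flip-flop-level design the problem asks for; the shift direction and serial input are assumptions, since the problem statement leaves them open):

```python
# One clock edge of the 4-bit shift register with parallel load:
# shift=1 shifts (assumed right, with an assumed serial input),
# load=1 with shift=0 loads, otherwise the contents are held.
def clock_edge(register, shift, load, serial_in, parallel_in):
    if shift == 1:
        return [serial_in] + register[:-1]
    if load == 1:
        return list(parallel_in)
    return list(register)

r = [0, 0, 0, 0]
r = clock_edge(r, shift=0, load=1, serial_in=0, parallel_in=[1, 0, 1, 1])
r = clock_edge(r, shift=1, load=0, serial_in=0, parallel_in=[0, 0, 0, 0])
print(r)
```

The priority order (shift before load) matches the wording "load = 1 and shift = 0": load only takes effect when shift is inactive.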
6) Draw the logic diagram: (15 points)
a) A four-bit register with four D flip-flops and four
4 × 1 multiplexers with mode selection inputs 1 and 0.
The register operates according to the following
function table.
b) A four-bit binary ripple countdown counter using
flip-flops that trigger on the positive-edge of the clock.
c) A timing circuit that provides an output signal that stays on for exactly twelve clock
cycles. A start signal sends the output to the 1 state, and after twelve clock cycles the
signal returns to the 0 state.
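Part (c)'s specification can be sanity-checked with a short behavioral sketch (the start time and total clock count below are arbitrary test values, not part of the problem):

```python
# Output rises with the start pulse and stays 1 for exactly twelve clocks.
def timing_circuit(start_at, n_clocks):
    out, count, trace = 0, 0, []
    for t in range(n_clocks):
        if t == start_at:          # start signal arrives on this clock
            out, count = 1, 0
        trace.append(out)
        if out == 1:
            count += 1
            if count == 12:        # twelfth cycle done: drop back to 0
                out = 0
    return trace

trace = timing_circuit(start_at=2, n_clocks=20)
print(sum(trace))
```

In hardware this behavior is typically realized with a mod-12 counter gated by a start flip-flop; the sketch only fixes what "exactly twelve clock cycles" means.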
7) Using D flip-flops: (20 points)
a) Design a counter with the following repeating binary sequence ? → ? → ? → ? → ?.
Draw the logic diagram of the counter.
b) Design a counter with the following repeated binary sequence 0 → ? → ? → ? → ?.
Draw the logic diagram of the counter.
# 10001st prime
###### 06/05/2019
By listing the first six prime numbers: 2, 3, 5, 7, 11, and 13, we can see that the 6th prime is 13. What is the 10 001st prime number?
At first glance, a simple iteration, checking whether each number is prime, would suffice.
A pseudocode attempt:
```
sum = 0
for i = 2; i <= ?; i++ {
    if (isPrime(i)) {
        sum++
        if (sum >= 10001) {
            return i
        }
    }
}
```
Using the Go solution from former examples to check if a number is prime:
```go
func isPrime(x int64) bool {
	var i int64 = 2
	for ; i < x; i++ {
		if x%i == 0 {
			return false
		}
	}
	return true
}
```
The solution would scale to $$O(n^2)$$.
This is not viable. Also, there's no upper bound to set our iteration limit, meaning we don't know up to which number we should iterate to find the 10,001st prime.
So digging further, I found out about the Sieve of Eratosthenes:
An ancient algorithm for finding all prime numbers up to any given limit
This is exactly what we need. Let's look at the procedure:
To find all the prime numbers less than or equal to a given integer n by Eratosthenes' method:
1. Create a list of consecutive integers from 2 ... n (2,3,4,...,n)
2. Initially, let p equal 2, the smallest prime.
3. Enumerate multiples of p by counting in increments of p from 2p to n, and mark them in the list (these will be 2p, 3p, 4p, ...; the p itself should not be marked)
4. Find the first number greater than p in the list that is not marked. If there is no such number, stop. Otherwise, let p now equal this number (which is the next prime), and repeat step 3.
5. When the algorithm terminates, the numbers remaining unmarked in the list are all the primes below n.
Here's the code I came up with:

```go
package main

import (
	"fmt"
	"math"
	"time"
)

// primeSieve returns the nth prime, found with the Sieve of Eratosthenes
// over the range [2, limit].
func primeSieve(n, limit int) int {
	// Create and populate the array of values; a[i] == true means that
	// i is still considered prime.
	a := make([]bool, limit+1)
	for i := 0; i < limit+1; i++ {
		a[i] = true
	}

	// p*p <= limit is equivalent to p <= int(math.Sqrt(float64(limit)))
	for p := 2; p <= int(math.Sqrt(float64(limit))); p++ {
		if a[p] {
			// Mark the multiples of p (2p, 3p, 4p, ...) as composite.
			for i := p * 2; i <= limit; i += p {
				a[i] = false
			}
		}
	}

	// Count the primes up to limit; the nth one counted is the answer.
	var primes, sievePrime int
	for p := 2; p <= limit; p++ {
		if a[p] {
			primes++
			if primes <= n {
				sievePrime = p
			}
		}
	}
	return sievePrime
}

func main() {
	start := time.Now()
	// Nth prime to find; the limit is an arbitrary upper bound found by testing.
	var n, limit int = 10001, 105000
	fmt.Println("10,001st Prime: ", primeSieve(n, limit))
	fmt.Println("Execution time: ", time.Since(start))
}
```
Convert months into the formatting of "[x] years, [y] months"
Topic Labels: Dates & Timezones
Hiya!
I’m trying to calculate employee tenure with DATETIME_DIFF. I’m able to show it by years, months, or days. What I’d ideally like is to break it down to say: 3 years, 2 months.
So I have a field for start date and end date (as screenshot suggests). I have the formula applying `today()` if they are still employed. Just can’t figure out how to get that month figure to be months into the current year.
Thought I’d add my current formula.
```
DATETIME_DIFF(
  IF({End Date}=BLANK(), TODAY(), {End Date}),
  {Start Date}, 'Years') & " Years, "
& DATETIME_DIFF(
  IF({End Date}=BLANK(), TODAY(), {End Date}),
  {Start Date}, 'M') & " Months"
```
Thanks!!
I think I may have gotten it in a not very elegant way (but it works lol)
```
DATETIME_DIFF(
  IF({End Date}=BLANK(), TODAY(), {End Date}), {Start Date}, 'Years')
& " Years, "
& ((DATETIME_DIFF(IF({End Date}=BLANK(), TODAY(), {End Date}), {Start Date}, 'M'))
  - (12 * DATETIME_DIFF(IF({End Date}=BLANK(), TODAY(), {End Date}), {Start Date}, 'Y')))
& " Months"
```
Basically what I did was repeat the formula to get the years. I used that to multiply by 12. I subtracted that from the original month calculation to get the months into the current year. It’s not pretty but it works… lol.
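The same arithmetic, sketched outside Airtable in Python for comparison (an analogy only, not Airtable formula syntax):

```python
# divmod splits a total month count into whole years plus leftover months,
# the same step as (total months) - 12 * (whole years) in the formula above.
def tenure_label(total_months):
    years, months = divmod(total_months, 12)
    return f"{years} Years, {months} Months"

print(tenure_label(38))
```

For a tenure of 38 months this yields "3 Years, 2 Months", matching the example at the top of the thread.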
Welcome to the community, @chrisko! :grinning_face_with_big_eyes: Yup, that’s pretty much the process. The logic to use either `TODAY()` or `{End Date}` can be simplified a bit to omit the `BLANK()` function. With the exception of numeric fields1, you can use this pattern:
```
IF({Field Name}, result_if_field_is_full, result_if_field_is_empty)
```
A non-empty field is equivalent to `True` (or truthy), and an empty field is equivalent to `False` (or falsy).
That simplification turns your formula into this:
```
DATETIME_DIFF(
  IF({End Date}, {End Date}, TODAY()), {Start Date}, 'Years')
& " Years, "
& ((DATETIME_DIFF(IF({End Date}, {End Date}, TODAY()), {Start Date}, 'M'))
  - (12 * DATETIME_DIFF(IF({End Date}, {End Date}, TODAY()), {Start Date}, 'Y')))
& " Months"
```
1 For a numeric field, concatenate the field name with an empty string:
```
IF({Field Name} & "", result_if_field_is_full, result_if_field_is_empty)
```
This is required because a value of 0 is also falsy. Even though a 0 in a field makes it non-empty, the first formula above would still treat it as false. Concatenating a number with an empty string turns it into a string: 0 becomes "0", and a non-empty string is also truthy.
# Probability of sequence being longer than some length
We are given a random bit generator which generates 0s and 1s with equal probability 1/2. We have an algorithm which generates random numbers using this random bit generator in this way: we look for subsequences of 1s, and the length of each subsequence gives us one random number (let's forget that this is a really bad random generator).
So for example: 0011011100010 gives us random numbers [2, 3, 1]
The task is to show that the probability of seeing a number greater than $c+\log_2n$ falls exponentially for $c\in\mathbb{N}$.
Intuitively, every time we increase $c$ by one, there is a $1/2$ probability that the next bit is going to be 1, so the probability falls exponentially; but I guess I have to prove that the expected longest run of ones in the sequence is going to be $\log_2n$.
Here is an article about the distribution of the longest run in coin flips (we can see our random bit generator as a fair coin flip). On page 6 an approximation for the length of the longest run is given, but the thing that bothers me is that in the article, n is the number of all trials, while in this task, n is the number of subsequences of ones (equal to the number of "random numbers"), while the number of trials can be much larger.
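A quick Monte Carlo sketch of the claim (here n is taken as the number of generated bits per trial, which is the article's convention rather than the count of "random numbers"; the seed and sizes are arbitrary):

```python
import math
import random

def longest_run_of_ones(x):
    # Longest run of 1s in the binary expansion of x.
    return max(len(s) for s in bin(x)[2:].split("0"))

random.seed(1)
n, trials = 1 << 12, 2000   # n random bits per trial
runs = [longest_run_of_ones(random.getrandbits(n)) for _ in range(trials)]

probs = []
for c in range(4):
    threshold = math.log2(n) + c   # = 12 + c here
    probs.append(sum(r > threshold for r in runs) / trials)
print(probs)
```

Each increment of c roughly halves the estimated probability, consistent with the expected exponential fall-off.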
Your question based on the number of $1$s [each with probability $\frac12$], rather than the number of trials, is equivalent to the paper's "longest run of heads or tails" applied to a fair coin, and in particular where it gives $$B_n (x) = 2 A_{n-1}(x-1)\qquad \qquad (2)$$ and explains this by saying
... the distribution of the longest run of heads or tails is simply the distribution of the longest run of heads alone for a sequence containing one fewer coin toss, shifted to the right by one.
# Bessel Function in Sturm-Liouville problem
I have the following Sturm-Liouville problem: $$\frac{d^2 y}{dx^2}+\lambda x^2y=0,$$ where $y(0)=0$ and $y(1)=0$.
I have solved this using MAPLE and found the exact solution to be: $$y(x)=c_1\sqrt{x}J_\frac{1}{4}(\frac{1}{2}\sqrt{\lambda}x^2)+c_2\sqrt{x}J_\frac{1}{4}(\frac{1}{2}\sqrt{\lambda}x^2).$$ Where J is the Bessel function.
I am told to use the "BesselJZeros" command in MAPLE to find the smallest eigenvalue, any help much appreciated.
The first term vanishes at the origin but the second, which should read $c_2\sqrt{x}J_{-1/4}(\sqrt{\lambda}x^2/2)$, has a nonzero limit there. So the first boundary condition requires $c_2=0$. Then the second boundary condition says $J_{1/4}(\sqrt{\lambda}/2)=0$, so you'll need the first zero of $J_{1/4}$.
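As a rough numeric cross-check (a sketch, not MAPLE's `BesselJZeros`: it evaluates a truncated power series for $J_{1/4}$ and bisects; the bracket [1, 4] is an assumption that it contains the first zero and no other):

```python
import math

def bessel_j(nu, x, terms=40):
    # Truncated series: J_nu(x) = sum_m (-1)^m (x/2)^(2m+nu) / (m! * Gamma(m+nu+1))
    total = 0.0
    for m in range(terms):
        total += ((-1) ** m * (x / 2) ** (2 * m + nu)
                  / (math.factorial(m) * math.gamma(m + nu + 1)))
    return total

def first_zero(nu, lo=1.0, hi=4.0):
    # Plain bisection; assumes J_nu changes sign exactly once in [lo, hi].
    for _ in range(80):
        mid = (lo + hi) / 2
        if bessel_j(nu, lo) * bessel_j(nu, mid) <= 0:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2

j1 = first_zero(0.25)            # first positive zero of J_{1/4}
lam = (2 * j1) ** 2              # since sqrt(lambda)/2 must equal that zero
print(j1, lam)
```

The zero lies between the first zeros of $J_0$ (about 2.405) and $J_1$ (about 3.832), since the first zero of $J_\nu$ increases with the order $\nu$.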
## Monday, July 2, 2012
### Baseball Math Match-Up
My son’s baseball team is mid-way through the season and with each practice, I see a love of the game growing with all the boys on the team. It’s exciting to witness such enthusiasm, see their understanding of the game grow, and skills improve.
When I conceptualized this activity, I asked my son which sport (baseball, basketball, soccer, or football) he wanted to “play.”
“Baseball,” he said without hesitation. Of course.
I designed a page of four pinstripe baseball jerseys and printed three of the pages onto cardstock. Then I trimmed two of the pages so they were slightly narrower than the third (I wanted to be able to tape the three pages together so the two outer pages of the game board could be folded in on the middle page).
I laminated the pages, along with 12 cut-outs of baseballs. (Download a PDF of the jerseys and baseballs here.)
With the baseballs cut out, I attached velco dots to the back of each and the game board, beneath each jersey. I also taped the game board together, using clear tape on the back to “hinge” the jersey pages, putting the widest page in the middle.
Now, all that was left to do was write the players’ numbers on the jerseys and corresponding math problems on the baseballs (e.g. player 12 would be matched with the ball marked 19-7).
It was up to my son to solve the math problem and place the ball under the jersey with the answer.
Three wrong answers (i.e. “strikes”) and the game was over. Every three problems he answered correctly (i.e. “balls”), he was given a small piece of candy (I used Smarties).
This game was a home run!
1. Very cute. Thanks!
imgoingfirst@gmail.com
2. What a great activity! And how fun to add the element of strikes to the game! and tee hee to it being a homerun! Thanks for sharing at the Sunday Showcase!
3. What a great way to connect it to something he enjoys!! Thank you for sharing at Sharing Saturday!!
4. What a GREAT idea! Thanks for linking up to TGIF =)
Beth
### Essay details:
• Subject area(s): Engineering
• Published on: 7th September 2019
STUDY OF HYDROMAGNETIC FLOW PAST A VERTICAL SEMI INFINITE POROUS PLATE
CHEPKONGA DAVID
BACHELOR OF SCIENCE
(Mathematics and Computer Science)
A research proposal submitted in partial fulfillment for the Degree of Bachelor of Science in Mathematics and Computer Science in the Taita Taveta University College
2016
DECLARATION
This proposal is my original work and has not been presented for a degree in any other university.
Signature: ………………………………… Date:…………………………….
This proposal has been submitted for examination with our approval as university supervisors.
Signature: ………………………………… Date:…………………………….
Dr. Phineas Roy Kiogora
JKUAT, KENYA
Signature: ………………………………… Date:…………………………….
Mr. Nicholas Muthama
TTUC, KENYA
DECLARATION
NOMENCLATURE
ABBREVIATIONS
ABSTRACT
1.0 INTRODUCTION AND LITERATURE REVIEW
1.1 BACKGROUND INFORMATION
1.2 LITERATURE REVIEW
1.3 STATEMENT OF THE PROBLEM
1.4 OBJECTIVES OF THE STUDY
1. To determine velocity profiles
2. To determine skin friction
3. To determine the rate of mass transfer
1.5 HYPOTHESIS
1.6 GOVERNING EQUATIONS AND ASSUMPTIONS
1.6.1 Introduction
1.6.2 The equation of continuity
1.6.3 The equation of conservation of momentum
1.6.4 The equation of species concentration
1.7 METHODOLOGY
1.8 RESEARCH SCHEDULE
1.9 GANTT CHART
REFERENCES
NOMENCLATURE
Symbol Meaning
B Magnetic field vector
ρ Density
F Body force
ABBREVIATIONS
MHD Magneto hydrodynamic
ABSTRACT
In this research, the hydromagnetic flow past a vertical porous semi-infinite plate is considered. The effects of the induced magnetic field arising due to the motion of the fluid are taken into account. The governing equations will be solved by adopting the finite difference method. The effects of various non-dimensional parameters on the velocity profile, the induced magnetic field and the temperature profile will be discussed. The results will be tabulated, and a computer program will be written to help in the clear simulation of the solutions; where possible, graphs will also be used to present the results. It is expected that the variations of the parameters have no effect on the skin friction.
1.0 INTRODUCTION AND LITERATURE REVIEW
1.1 BACKGROUND INFORMATION
Matter is classified into fluids and solids. A solid can resist shear stress by static deformation, but a fluid cannot. Fluid flow may be one-, two- or three-dimensional. Individual particles move in the direction of the flow to constitute fluid flow. Fluids are classified as incompressible or compressible. A fluid is said to be compressible if its density varies with pressure, whereas it is incompressible if the change in density with pressure is negligible.
Hydromagnetic flow is the science which deals with the motion of electrically conducting fluids in the presence of magnetic fields. It is the synthesis of two classical sciences, fluid mechanics and electromagnetic field theory. It is a well-known result in electromagnetic theory that when a conductor moves in a magnetic field, electric currents are induced in it. These currents experience a mechanical force, called the 'Lorentz force', due to the presence of the magnetic field. This force tends to modify the initial motion of the conductor. Moreover, the induced currents generate their own magnetic field, which is added on to the primitive magnetic field.
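The coupling just described is conventionally written with the following standard relations (quoted for context, not taken from this proposal; σ, E and q denote the electrical conductivity, the electric field and the fluid velocity):

```latex
% Ohm's law for a moving conductor, and the resulting Lorentz body force
\begin{align}
  \mathbf{J}     &= \sigma\left(\mathbf{E} + \mathbf{q}\times\mathbf{B}\right),\\
  \mathbf{F}_{L} &= \mathbf{J}\times\mathbf{B}.
\end{align}
```

The term $\mathbf{F}_{L}$ enters the momentum balance as a body force, which is how the magnetic field modifies the motion of the conducting fluid.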
The science of fluid dynamics describes the motion of liquids and gases and their interaction with solid bodies. It is a broad, interdisciplinary field that touches almost every aspect of our daily lives, and it is central to much of science and engineering. Fluid dynamics impacts defense, homeland security, transportation, manufacturing, medicine, biology, energy and the environment. Predicting the flow of blood in the human body, the behavior of microfluidic devices, the aerodynamic performance of airplanes, cars, and ships, the cooling of electronic components, or the hazards of weather and climate all require a detailed understanding of fluid dynamics and therefore substantial research. Fluid dynamics is one of the most challenging and exciting fields of scientific activity simply because of the complexity of the subject and the breadth of the applications.
1.2 LITERATURE REVIEW
The phenomenon of hydromagnetic flow past a vertical semi-infinite porous plate has attracted the attention of a good number of investigators because of its various applications in engineering. The study of magnetohydrodynamics (MHD) was started by Faraday (1859), who carried out an experiment in which an electrically conducting fluid was passed between the poles of a magnet in a vacuum glass.
Alam et al. (2006) investigated the effects of mass transfer on steady two-dimensional free convection flow past a continuously moving semi-infinite vertical porous plate in a porous medium. They observed that the temperature decreases with an increase in the suction parameter, and concluded from their analysis that the temperature and concentration fields are influenced by the Dufour and Soret effects.
Tamana et al. (2009) analyzed heat transfer in a porous medium over a stretching surface with internal heat generation and suction or injection. They observed that velocity profiles decrease with an increase in injection or suction.
Das et al. (2011) investigated the mass transfer effect on unsteady hydromagnetic convective flow past a vertical porous plate in a porous medium with a heat source, where they observed that the velocity of the flow field changes more or less with the variation of the flow parameters.
S. Velmuragan (2014) conducted research on hydromagnetic flow past a parabolic started vertical plate in the presence of a homogeneous chemical reaction of first order. He used a Laplace transform solution of unsteady flow past the parabolic starting motion of an infinite vertical plate with variable temperature and uniform mass diffusion, in the presence of a homogeneous chemical reaction of first order. The plate temperature was raised linearly with time, and the concentration fields were studied for the different physical parameters. He observed that the velocity increases with increasing values of the thermal Grashof number or mass Grashof number, and noticed that the trend is just reversed with respect to the chemical reaction parameter as well as the magnetic field parameter.
Ashok (1988) carried out research on a similarity solution for hydromagnetic flow of an incompressible, viscous, electrically conducting fluid past a continuously moving semi-infinite porous plate in the presence of a magnetic field, in the case of small magnetic Reynolds number; the perturbation technique was applied to solve the equations. He realized that the effect of the magnetic parameter was to increase the skin friction coefficient, while it had no significant effect on the Nusselt number.
Raptis et al. (1987) studied the unsteady free convective flow through a porous medium adjacent to a semi-infinite vertical plate using a finite difference scheme.
Ramana (1991) studied heat transfer in flow past a continuously moving semi-infinite flat plate in a transverse magnetic field with heat flux. He found that the temperature of the thermal boundary layer falls with an increase in the magnetic field parameter.
Emma Marigi et al. (2012) studied hydromagnetic turbulent flow past a semi-infinite vertical plate subjected to heat flux. They noticed that an induced electric current, known as the Hall current, exists due to the presence of both electric and magnetic fields. They used a finite difference scheme to solve the partial differential equations, and noted that the Hall current, rotation parameter, Eckert number, injection and Schmidt number affect the velocity, temperature, magnetic field and concentration profiles.
G. Palani and U. Srikanth (2009) carried out an analysis of MHD flow past a semi-infinite vertical plate with mass transfer. They analyzed incompressible, viscous flow past a semi-infinite vertical plate with mass transfer under the action of a transversely applied magnetic field, where the heat due to viscous dissipation and the induced magnetic field were assumed to be negligible. The dimensionless governing equations were unsteady, two-dimensional, coupled and nonlinear partial differential equations. They used a fast-converging implicit difference scheme to solve the non-dimensional governing equations.
A. S. Gupta (1974) carried out an investigation of hydromagnetic flow past a porous flat plate with Hall effects. He realized that asymptotic solutions for the velocity and magnetic field exist both for suction and for blowing at the plate. He also concluded that when the magnetic Reynolds number is very small, the flow pattern is remarkably similar to that for a non-conducting flow past a flat plate in a rotating frame.
Ashraf A. Moniem (2013) researched a model of mass transfer on free convective flow of a viscous, incompressible, electrically conducting fluid past a vertical porous plate through a porous medium with time-dependent permeability and oscillatory suction in the presence of a transverse magnetic field. He applied the perturbation technique to obtain the solution for the velocity field and concentration distribution analytically.
D. Latha Mathuri (2012) carried out a study of an unsteady, two-dimensional, hydromagnetic, laminar, mixed convective boundary layer flow of an incompressible, Newtonian, electrically conducting and radiating fluid along a semi-infinite vertical permeable moving plate with heat and mass transfer, taking into account the effect of viscous dissipation. He solved the equations using the finite difference method.
J. Anand Rao (2012) carried out a study of hydromagnetic heat and mass transfer in MHD flow of an incompressible, electrically conducting, viscous fluid past an infinite vertical porous medium of time-dependent permeability under oscillatory suction velocity normal to the plate. He noted that the uniform magnetic field acts normal to the flow and that the permeability of the porous medium fluctuates with time. He used the Galerkin finite element method for the velocity, temperature and concentration fields and for expressions for the skin friction.
Throughout the literature review I noted that much is still not covered on the method of finite differences applied to hydromagnetic flow. Many researchers have dwelt on the unsteady state of the flow, but in my present work I am going to consider a case where the flow field is steady. Another vital thing that we are bringing on board, and that remains untouched, is that the semi-infinite plate is fixed; it is not movable.
1.3 STATEMENT OF THE PROBLEM
In this study we are going to use steady incompressible flow, where the equation of conservation of momentum will be analyzed under various conditions, and the concentration equation shall also be considered. We will adopt the finite difference method to obtain solutions and compare them with results from other researchers. I chose a finite difference scheme because much has not been done with this method.
1.4 JUSTIFICATION
Flows of hydromagnetic fluid through a porous medium are very prevalent in nature, and therefore the study of such flows has become a principal interest in many scientific and engineering fields: in the study of the movement of natural gas, oil and water through oil reservoirs, and in chemical engineering for filtration and water purification processes. It is also applicable in MHD generators, plasma studies, nuclear reactors, oil exploration, flows in oil, control of pollutants in ground water, coolers, fuel and gas filters, geothermal energy extraction, and boundary layer control in the field of aerodynamics.
1.5 OBJECTIVES OF THE STUDY
The objectives of the study will be;
1. To determine velocity profiles
2. To determine skin friction
3. To determine the rate of mass transfer
1.6 HYPOTHESIS
The flow field variables and parameters of steady hydromagnetic flow past a vertical semi-infinite porous plate have no effect on the primary velocity profiles.
1.7 GOVERNING EQUATIONS AND ASSUMPTIONS
1.7.1 Introduction:
The equations governing hydromagnetic flow are as follows:
1.7.2 The Assumptions:
In order to reduce complexity and achieve the outlined objectives, the following assumptions are made:
ii The fluid is incompressible
iii There are no chemical reactions taking place
iv The fluid flow is laminar
1.7.3 The equation of continuity.
The principle of conservation of mass says that the mass of a fluid element remains the same as it moves through the fluid. In fluid dynamics, the continuity equation is a mathematical statement that, in any steady state process, the rate at which mass enters a system is equal to the rate at which mass leaves the system.
The differential form of the equation is given by:

∂ρ/∂t + ∇·(ρu) = 0

For the case of incompressible flow, the density ρ is assumed to be constant, and the equation simplifies to

∇·u = 0
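As a quick illustration (not from the proposal), the incompressibility condition ∇·u = 0 can be checked symbolically. The velocity field u = (x, −y) below is an invented example chosen to be divergence-free:

```python
import sympy as sp

x, y = sp.symbols("x y")

# Illustrative 2-D velocity field u = (u1, u2) = (x, -y)
u1, u2 = x, -y

# Divergence of u: du1/dx + du2/dy
div = sp.diff(u1, x) + sp.diff(u2, y)
print(div)  # 0 -> the field satisfies the incompressible continuity equation
```

Any field whose divergence vanishes identically is a kinematically admissible incompressible flow.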
1.7.4 The equation of conservation of momentum
This is derived from Newton's second law of motion, which states that the resultant force is equal to the rate of change of momentum of the flow. The law requires that the sum of all forces acting on a control volume be equal to the rate of increase of the fluid momentum within the control volume. The equation can be expressed as:

ρ Du/Dt = ∇·T + F
The F term in this study represents the body force which will be taken as the magnetic force, while T represents the traction force.
From Maxwell's electromagnetic equations, when the magnetic Reynolds number is small the induced magnetic field is negligible in comparison with the applied magnetic field, so that the field reduces to the applied transverse field B = (0, B₀, 0), with B₀ a constant. The equation of conservation of charge, ∇·J = 0, gives J_y = constant, where J is the current density; since the plate is non-conducting, this constant is zero. The Lorentz force J × B then becomes −σB₀²u, where σ is the electrical conductivity and u the velocity along the plate.
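A small numerical sketch of this reduction. The numbers, and the use of Ohm's law J = σ(u × B) with the electric field neglected, are illustrative assumptions rather than taken from the proposal; the point is that a transverse field yields a retarding force proportional to −σB₀² times the velocity components perpendicular to B:

```python
import numpy as np

sigma, B0 = 1.2, 0.8                  # hypothetical conductivity and field strength
B = np.array([0.0, B0, 0.0])          # transverse applied magnetic field
u = np.array([2.0, 0.0, 0.5])         # hypothetical fluid velocity

J = sigma * np.cross(u, B)            # Ohm's law with negligible electric field
F = np.cross(J, B)                    # Lorentz body force J x B

# F opposes the velocity components perpendicular to B:
print(F)                              # equals -sigma * B0**2 * [u_x, 0, u_z]
```

The output matches −σB₀²(u_x, 0, u_z), i.e. the force acts as a magnetic drag on the flow.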
1.7.5 The equation of species concentration
The equation of species concentration is based on the conservation of mass. It is applicable when
i The porous medium is saturated with fluid
The equation for species concentration is given by:

∂C/∂t + (u·∇)C = D ∇²C

where C is the species concentration and D the mass diffusivity.
1.8 METHODOLOGY
The partial differential equations that are obtained are nonlinear; it is therefore not possible to obtain an exact analytical solution. The equations are solved numerically using the finite difference method, a numerical method that makes use of finite difference codes/solvers that require little computation memory and are easy to program and modify, hence advantageous in electrical problems. The scheme is second order, accurate, unconditionally stable, and has a low computational cost. According to Steven and Raymond, the Crank-Nicolson method provides an implicit scheme which is accurate in both space and time; to provide this accuracy, difference approximations are developed at the midpoint of the time increment.
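To illustrate the kind of scheme described, here is a minimal Crank-Nicolson sketch. It solves a linear 1-D diffusion model problem with Dirichlet boundaries, not the coupled nonlinear MHD equations of this study; all parameter values are illustrative:

```python
import numpy as np

# Crank-Nicolson for u_t = alpha * u_xx on [0, 1] with u(0) = u(1) = 0.
alpha, nx, nt = 1.0, 21, 200
dx, dt = 1.0 / (nx - 1), 1e-3
r = alpha * dt / (2 * dx**2)

x = np.linspace(0.0, 1.0, nx)
u = np.sin(np.pi * x)                 # initial profile (a single decaying mode)

# Implicit system (I - r*D2) u^{n+1} = (I + r*D2) u^n, averaged at the half step
A = (np.diag((1 + 2*r) * np.ones(nx))
     + np.diag(-r * np.ones(nx - 1), 1) + np.diag(-r * np.ones(nx - 1), -1))
B = (np.diag((1 - 2*r) * np.ones(nx))
     + np.diag(r * np.ones(nx - 1), 1) + np.diag(r * np.ones(nx - 1), -1))
A[0, :], A[-1, :] = 0.0, 0.0
A[0, 0] = A[-1, -1] = 1.0             # Dirichlet boundary rows
B[0, :], B[-1, :] = 0.0, 0.0

for _ in range(nt):
    u = np.linalg.solve(A, B @ u)

# Compare with the exact solution sin(pi x) * exp(-pi^2 * alpha * t)
t = nt * dt
err = np.max(np.abs(u - np.sin(np.pi * x) * np.exp(-np.pi**2 * alpha * t)))
print(f"max error: {err:.2e}")
```

The error stays small because the scheme is second-order accurate in both space and time, which is the property the text attributes to Crank-Nicolson.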
1.9 RESEARCH SCHEDULE
| Activity | Duration (weeks) | Start date | Finish date | Deliverable |
|---|---|---|---|---|
| Preliminary work | 3 | 7th Jan 2016 | 28th Jan 2016 | Problem statement |
| Project identification | 1 | 29th Jan 2016 | 5th Feb 2016 | Research definition |
| Draft proposal | 3 | 6th Feb 2016 | 27th Feb 2016 | Draft proposal |
| Proposal presentation | 1 | 28th Feb 2016 | 6th March 2016 | Final proposal |
| Proposal defense | 1 | 7th March 2016 | 14th March 2016 | Final proposal |
| Literature search & mathematical formulation | 1 | 15th March 2016 | 22nd March 2016 | Project report |
| Actual mathematical analysis | 1 | 23rd March 2016 | 30th March 2016 | Project report |
| Modeling of the solution | 1 | 31st March 2016 | 7th April 2016 | Project report |
| Drafting final report | 3 | 8th April 2016 | 29th April 2016 | Draft project report |
| Report publishing & submission | 5 | 30th April 2016 | 4th June 2016 | Project report booklet |
| Final presentation | 1 | 5th June 2016 | 12th June 2016 | Project presentation |
2.0 GANTT CHART
(Gantt chart: the eleven activities of the research schedule above, from preliminary work through final presentation, plotted across weeks 1-21; the chart graphics and key are not reproduced here.)
2.1BUDGET ESTIMATES
| Item | Description | Cost (KSh) |
|---|---|---|
| Flash disk | For backing up research files and documents | 1500.00 |
| Travel (field study) | To gather the necessary information as pertains to the research proposal | 4000.00 |
| Binding and photocopying | Proposal papers | 3000.00 |
| Total | | 10000.00 |
2.2 REFERENCES
A. S. Gupta, Hydromagnetic flow past a porous flat plate with Hall effects, Acta Mechanica 22, pp. 281-287, 1975.
Ashraf A. Moniem, Solution of MHD flow past a vertical porous plate through a porous medium under oscillatory suction, vol. 4, no. 4, April 2013.
Alam, M. S. (2006), Dufour and Soret effects on unsteady hydromagnetic convective flow past a vertical porous plate in a porous medium, International Journal of Applied Mechanics and Engineering, 11(30), 535-545.
D. Latha, Finite difference analysis of an unsteady mixed convection flow past a semi-infinite vertical permeable moving plate with heat and mass transfer with radiation and viscous dissipation, vol. 3(4), pp. 2266-2279, 2012.
Das, Biswal (2011), Mass transfer effects on unsteady hydromagnetic convective flow past a vertical porous plate in a porous medium with heat source, Journal of Applied Fluid Mechanics, 4(4), 91-100.
G. Palani, U. Srikanth, MHD flow past a semi-infinite vertical plate with mass transfer, vol. 14, no. 3, pp. 345-356, 2009.
J. Anand Rao, Finite element solution of heat and mass transfer in MHD flow of a viscous fluid past a vertical plate under oscillatory suction velocity, Journal of Applied Fluid Mechanics, Vol. 5, No. 3, pp. 1-10, 2012.
Tamana (2009), Heat transfer in a porous medium over a stretching surface with internal heat generation and suction or injection in the presence of radiation, Journal of Mechanical Engineering, 40(1), 22-28.
https://brainy.expertwritershub.com/which-formula-should-be-used-to-determine-the-total-cost/
Mathematics
The wedding photographer for the Smith/Jones wedding charges $1,000 for her preparation and first 60 prints. The cost is $2.00 per photo for photos beyond the first 60. Which formula should be used to determine the total cost, C, as a function of the number of photos, p, that are purchased, assuming at least 60 are purchased?
a. C = 1000 p + 2
b. C = 1000(2p – 60)
c. C = 1000 + 2(p – 60) | 133 | 468 | {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0} | 3.65625 | 4 | CC-MAIN-2022-49 | latest | en | 0.880616 |
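A quick sanity check (my addition, not part of the original question) supports option (c): at p = 60 the cost should be exactly the $1,000 base fee, and each photo beyond 60 should add $2.

```python
def cost(p):
    # Option (c): base fee covers the first 60 prints, then $2 per extra photo
    return 1000 + 2 * (p - 60)

print(cost(60))   # 1000 -> just the base fee
print(cost(100))  # 1080 -> 40 extra photos at $2 each
```

Options (a) and (b) fail this check at p = 60, which is why (c) is the formula that fits the pricing described.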
https://www.mindyourlogic.com/matchstick-puzzles/roman-number-1+11+111=111-matchstick-puzzle
###### 2. Matchstick Puzzles
`Correct the equation by moving 1 matchstick.`
Explanation :
```
Move one matchstick from the first + sign to the I:
the + becomes - and the I becomes II, so the equation reads
II - II + III = III   (2 - 2 + 3 = 3)
```
https://puzzling.stackexchange.com/questions/94819/shots-shots-shots-shots
Once in a while I go out for a drink at my favourite bar. Normally I just drink a beer, eat some snacks and have a good time with my three friends. But last night I wanted to do some shots. The bartender said there were only 4 shots available:
1-digit : contains 2 types of high-percentage liquors and 3 types of low-percentage liquors
2-digit : contains 2 types of high-percentage liquors and 0 types of low-percentage liquors
..-digit : contains...
5-digit : contains 1 type of high-percentage liquors and 6 types of low-percentage liquors
But the bartender forgot the name of the third shot and what it contains… He told me: find out what shot it is, and the next shots are on the house! For all four! Plus those other two at the bar. Can you guys help me out and get free shots for everyone at the bar?
Hint 1:
I asked the bartender for more information about the shots menu and he said: Actually there were 5 possible shots, but the fourth was taken off the menu because people said it tasted like it contained 0 types of high-percentage liquor and 0 types of low-percentage liquor. The fourth shot is called 4-digit.
Hint 2:
The bartender said that the order of mixing is important to him. When he knows the name of the shots (by counting to 5), he knows how many high-percentage liquors he has to pour first and secondly how many low-percentage liquors he has to pour. Thus for the 1-digit shot he pours first 2 and then 3, for the 2-digit: 2,0; 4-digit: 0,0 and 5-digit: 1,6. The bartender also found out that the amount of people he would have to give a free shot, is an important number!
Hint 3:
During deciphering the shot, we talked to the bartender. He just has a new hobby, encrypting messages. He mentioned he recently started, therefore only uses the basic method concerning letters and numbers. He is a more experienced mathematician and just had an exam last week. He knew for sure that he answered question 1a, 2b and 3c correctly by using multiplication, addition, brackets and subtraction to find the correct 2 digit number he could split. He told us this when he was preparing the two + four! free shots, because he thinks we can find the answer!
Hint 4:
$$2+4! = 2+4*3*2*1 = 26$$
Hint 5:
The solution can be found through deciphering the name of the shot by combining the number in the name of the shot, the word digit and mathematical operations. This will lead to a number which can be translated into a 2-digit number (23 for example). This only works one way, from the name of the shot to the 2-digit number.
Hint 6:
The barman confirms to us that the letters of the word digit can be transformed using A1Z26 ciphering. Using multiplication and addition depending on the number of the shot (all 5 letters in digit are used), a number comes out. By using the amount of people who get a free shot, the different types of high- and low-percentage liquors can be found (this system only works one way!).
• when mixing a shot, does the order matter? e.g. for a 1-digit, are the two high-percentage liquors added first, then the 3 low-percentage liquors, or is that not relevant? Mar 11, 2020 at 18:43
• Yes, shots are always made by pouring first the high-percentage liquors and then combining this with the low-percentage liquors. I added a new hint concerning this.
– tyui
Mar 12, 2020 at 8:05
2 types of high-percentage liquors and 1 type of low-percentage liquors?
Because
the word "digit" in A1Z26 is $$4, 9, 7, 9, 20$$
And the fact that
1-digit: $$4 + 9 + 7 + 9 + 20 = 49 \equiv 23 \mod 26$$
2-digit: $$4 \times 9 + 7 + 9 + 20 = 72 \equiv 20 \mod 26$$
3-digit: ???
4-digit: $$4 \times 9 \times 7 \times 9 + 20 = 2288 \equiv 0 \mod 26$$
5-digit: $$4 \times 9 \times 7 \times 9 \times 20 = 45360 \equiv 16 \mod 26$$
3-digit: $$4 \times 9 \times 7 + 9 + 20 = 281 \equiv 21 \mod 26$$
Which matches Hint 6, where the parts before each "-" are the original hint text:
The barman confirms to us that the letters of the word digit can be transformed using A1Z26 ciphering.
- This is how we get the number $$4, 9, 7, 9, 20$$.
Using multiplication and addition depending on the number of the shot (alle 5 letters in digit are used) a number comes out.
- See the calculation above.
By using the amount of people who get a free shot the different types
- It uses mod 26, which is the amount of people who get a free shot
(this system only works one way!)
- The modulo function is many-to-one; you have no way to know the original number if you only know the result.
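The calculations in the answer above can be reproduced mechanically. This sketch is my own, not the answerer's: for the n-digit shot, the first n-1 operations over the A1Z26 values of "digit" are multiplications and the rest additions; the result is reduced modulo 26 (the number of people getting a free shot), and its tens and units digits give the high- and low-percentage liquor counts.

```python
vals = [4, 9, 7, 9, 20]  # A1Z26 values of the letters in "digit"

def shot(n):
    """n-digit shot: the first n-1 operations are x, the rest are +; result mod 26."""
    total = vals[0]
    for i, v in enumerate(vals[1:], start=1):
        total = total * v if i < n else total + v
    return total % 26

results = [shot(n) for n in range(1, 6)]
print(results)            # [23, 20, 21, 0, 16]

high, low = divmod(shot(3), 10)
print(high, low)          # 2 1 -> 2 high-percentage, 1 low-percentage liquor
```

The computed sequence reproduces all four known shots ((2,3), (2,0), (0,0), (1,6)) and fills in the missing third one as (2,1).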
This is not a full answer, but it looks like you're getting quite desperate for some response to this, so I'll write up the thoughts I've had so far, in the hope that either someone can take it and continue to get the full answer, or at least that you will see which parts are clear to potential solvers and which parts may still need hints.
Basically we're looking for some kind of function $$f$$ such that $$f(1)=(2,3),f(2)=(2,0),f(4)=(0,0),f(5)=(1,6),$$ and it seems $$f(n)$$ may relate somehow to an $$n$$-digit number or numbers. Also this may only work for $$n=1,2,3,(4),5$$ as there is no mention of possible extensions to the menu.
We know that somehow
number-letter correspondences will be important. The number of people he'll offer a free drink is $$26$$.
Also
the order of high-percentage and low-percentage is important. That could be as simple as just using them for tens place and units place in a number ($$23,20,?,0,16$$ aka $$W,T,?,0,P$$), or it could be something else.
From the penultimate hint and the new tags, it seems the answer will be something to do with
putting numbers together to form other numbers, using $$\times,+,(,),-$$ and maybe $$!$$ as operations.
So how could this be?
• $$f(n)$$, as a two-digit number, is the number of ways to do something? Unlikely, with 23 possibilities for the 1-digit problem.
• $$f(n)$$, as two separate digits, describes some way of forming $$n$$-digit numbers? Like all $$n$$-digit numbers can be formed from those two digits using the given operations (that's not the answer, but something like that)?
• Maybe the numbers-to-letter conversion is for the left-hand side? So instead of considering $$n$$-digit numbers, we consider $$n$$-letter words. But again, 23 possibilities for anything involving 1-letter words (of which there are only two) seems unlikely.
• I'll happily delete this answer if people feel it's too partial, or not saying anything that wasn't obvious from the hints. I don't expect to get the bounty since surely somebody will get a fuller answer before then. Mar 27, 2020 at 16:04
• I'm not sure this is much of an answer, as you acknowledge -- it seems to be more speculation than answer, as I don't think you say anything really concrete here.
– Deusovi
Mar 27, 2020 at 16:06
• Granted, but I can see the OP's frustration here: they're adding hint after hint but nobody is really responding. At least seeing somebody's thoughts and directions for an attempted solution might help them to decide which aspect to hint at next - or of course, as always, help someone else to get a fuller answer. Mar 27, 2020 at 16:09
• True, thoughts like this could definitely be helpful to OP! But that doesn't make them an answer. I'd summarize them in comments, or maybe make a chat room for this puzzle and share thoughts there.
– Deusovi
Mar 27, 2020 at 16:16
• @Randal'Thor You're right about getting a bit desperate :). Your thoughts aren't bad either. The number 26 is important in letter-number correspondences, but not in the way you describe it in your second blockquote (the found numbers are correct btw). Also the number 26 is used in another way than letter-number correspondences.
– tyui
Mar 27, 2020 at 17:27 | 2,034 | 7,746 | {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 24, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0} | 3.828125 | 4 | CC-MAIN-2024-10 | longest | en | 0.965372 |
http://gmatclub.com/forum/when-to-start-timing-105066.html#p820827
It is currently 26 Aug 2016, 18:12
### GMAT Club Daily Prep
#### Thank you for using the timer - this advanced tool can estimate your performance and suggest more practice questions. We have subscribed you to Daily Prep Questions via email.
Customized
for You
we will pick new questions that match your level based on your Timer History
Track
Your Progress
every week, we’ll send you an estimated GMAT score based on your performance
Practice
Pays
we will pick new questions that match your level based on your Timer History
# Events & Promotions
###### Events & Promotions in June
Open Detailed Calendar
# When to start Timing
Current Student
Joined: 10 Aug 2009
Posts: 67
Location: United States (AL)
Concentration: Entrepreneurship, Strategy
GMAT 1: 600 Q43 V30
GMAT 2: 640 Q41 V36
WE: Business Development (Other)
When to start Timing [#permalink]
19 Nov 2010, 08:18
So I have made it through MGMAT Foundations of Math and MGMAT Number Properties. I wanted to know when should I start trying to attempt to complete questions within a time frame, at the current moment I am just trying to get them correct. Any suggestions for when you practice timing in a study plan?
_________________
"Popular opinion is the greatest lie in the world"-Thomas Carlyle
Kaplan GMAT Instructor
Joined: 21 Jun 2010
Posts: 148
Location: Toronto
Re: When to start Timing [#permalink]
19 Nov 2010, 11:03
Bigred2008 wrote:
So I have made it through MGMAT Foundations of Math and MGMAT Number Properties. I wanted to know when should I start trying to attempt to complete questions within a time frame, at the current moment I am just trying to get them correct. Any suggestions for when you practice timing in a study plan?
Hi!
Before answering your question, we need to gather a bit more information.
First, how are you doing on those concepts? Have you mastered the basics (not everything, but at least the foundations) to the point at which you're comfortable answering untimed questions?
Second, have you only been focusing on algebraic approaches or have you also worked on strategic approaches such as picking numbers, backsolving and strategic guessing?
Third, how's your work on the verbal side of the exam? Have you solely been focusing on math or have you been attacking verbal concepts as well?
Fourth, for how long have you been studying and when is your test date?
Current Student
Joined: 10 Aug 2009
Posts: 67
Location: United States (AL)
Concentration: Entrepreneurship, Strategy
GMAT 1: 600 Q43 V30
GMAT 2: 640 Q41 V36
WE: Business Development (Other)
Re: When to start Timing [#permalink]
19 Nov 2010, 11:11
Thanks for your response, I have mastered the basics of number properties, geometry, I went through the Kaplan Math Workbook, MGMAT Foundations of Math and was able to complete everything in there without trouble. I am preparing to take my first practice test next week, so I'll have a better idea of where I am scoring (I had not used math in a while and was worried about bombing)
I have been focusing solely on solving everything through algebra,
Verbal side is fine I believe, I took the lsat before and I know CR is ok. I am getting ready to gear up for SC, but wanted to have a foundation in math before I did so.
I have been studying for a month so far, and I'm looking to take the test early Feb.
Thanks
_________________
"Popular opinion is the greatest lie in the world"-Thomas Carlyle
Kaplan GMAT Instructor
Joined: 21 Jun 2010
Posts: 148
Location: Toronto
Re: When to start Timing [#permalink]
19 Nov 2010, 11:22
Bigred2008 wrote:
Thanks for your response, I have mastered the basics of number properties, geometry, I went through the Kaplan Math Workbook, MGMAT Foundations of Math and was able to complete everything in there without trouble. I am preparing to take my first practice test next week, so I'll have a better idea of where I am scoring (I had not used math in a while and was worried about bombing)
I have been focusing solely on solving everything through algebra,
Verbal side is fine I believe, I took the lsat before and I know CR is ok. I am getting ready to gear up for SC, but wanted to have a foundation in math before I did so.
I have been studying for a month so far, and I'm looking to take the test early Feb.
Thanks
Then I'd start working on timing immediately, to get a feel for how long it takes you to solve problems.
That said, you need to also learn alternative approaches to problems - something that will help immeasurably with timing. It's been conclusively shown that if all you do is algebra, no matter how good you are at it, you can't max out your quant score because the questions will eventually get so complicated that you'll run out of time.
There's definitely an adjustment from LSAT LR/RC to GMAT CR/RC, mostly due to the format of the GMAT. On the LSAT you can take notes right on the page and you're reading off of a booklet; on the GMAT you have to take notes on your noteboard and you're reading off of a computer. You'll find there's a definite adjustment period that slows you down, so it's essential to practice on the computer as much as possible. The brain actually processes information differently in the two formats, so transitioning to the GMAT format is likely to take some time.
https://www.jiskha.com/display.cgi?id=1295212798
Joe is sledding down a snow hill when he collides with Mike half way down the hill. Joe and the sled have a mass of 65 kg and their velocity before the collision was 12 m/s. If Mike has a mass of 55 kg, what velocity would Joe, Mike, and the sled have after the collision?
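The question doesn't state it, but if Joe, Mike and the sled move off together after the collision (a perfectly inelastic collision, which is the usual assumption here), conservation of momentum gives the shared final velocity. A small sketch under that assumption:

```python
m_joe_sled = 65.0   # kg: Joe plus the sled
v_initial = 12.0    # m/s: their velocity before the collision
m_mike = 55.0       # kg: Mike, initially at rest

# Momentum before = momentum after (everyone moves together afterwards)
v_final = (m_joe_sled * v_initial) / (m_joe_sled + m_mike)
print(v_final)  # 6.5 m/s
```

So under the stick-together assumption, Joe, Mike and the sled would move at 6.5 m/s after the collision.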
## Similar Questions
1. ### phys. help
A 30-kg child coast down a hill on a 20-kg sled. She pushes off from the top of the hill with a velocity of 1 m/s. At the bottom of the hill, she is moving 4 m/s. If there is no heat generated when she slides down, how high is this …
2. ### Physics
A sled slides down a snow-covered hill at constant speed. If the hill makes an angle of 10° above the horizontal, what is the coefficient of kinetic friction between the sled and the snow?
3. ### Physics
Can you show work please..? 1. Joe is sledding down a snow hill when he collides with Mike half way down the hill. Joe and the sled have a mass of 65 kg and their velocity before the collision was 12 m/s. If Mike has a mass of 55 kg,
4. ### Physics
Marcus and Santa are sliding down a snowy hill that is 13.3 m high. On the hill itself friction and air resistance can be ignored. At the bottom of hill the sled hits a long patch of rough snow that slows the sled down by exerting …
5. ### physics
You and your friend are sledding on two sides of a triangle-shaped hill. On your side, the hill slopes up at 30.0° from the horizontal; on your friend's side, it slopes down at the same angle. You do not want to climb up the hill, …
6. ### Physics
A child insists on going sledding on a barely snow-covered hill. The child starts from rest at the top of the 60 m long hill which is inclined at an angle of 30o to the horizontal, and arrives at the bottom 8.0 s later. What is the …
7. ### Physics URGENT!!!
Gayle runs at a speed of 9.00 m/s and dives on a sled, initially at rest on the top of a frictionless, snow-covered hill, that has a vertical drop of 20.0 m. After she has descended a vertical distance of 4.00 m, her brother, who is …
8. ### physics
A child and a sled have a combined mass of 87.5 kg slides down a friction-less hill that is 7.34 m high. If the sled starts from rest, what is the velocity of the sled at the bottom of the hill?
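The constant-speed sled question in the list above has a neat closed-form answer. At constant speed the net force along the slope is zero, so gravity's component down the incline balances kinetic friction, and the mass and g cancel out. A minimal sketch:

```python
import math

# Sled at constant speed down a 10° incline.
# Net force along the slope is zero:
#   m*g*sin(theta) = mu_k * m*g*cos(theta)  ->  mu_k = tan(theta)
theta = math.radians(10.0)
mu_k = math.tan(theta)
print(f"Coefficient of kinetic friction: {mu_k:.3f}")  # 0.176
```

Because m and g cancel, the answer depends only on the incline angle — a useful sanity check for any "constant speed on a slope" friction problem.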
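The frictionless-hill question in the list (child and sled, combined mass 87.5 kg, hill 7.34 m high, starting from rest) is a direct energy-conservation calculation; note that the mass cancels and never enters the answer. A sketch, assuming g = 9.81 m/s²:

```python
import math

# Frictionless hill, sled starts from rest at height h.
# Energy conservation: m*g*h = 0.5*m*v**2  ->  v = sqrt(2*g*h)
g = 9.81   # m/s^2, assumed value
h = 7.34   # m, height of the hill
v_bottom = math.sqrt(2 * g * h)
print(f"Speed at the bottom: {v_bottom:.1f} m/s")  # 12.0 m/s
```

The 87.5 kg figure given in the question is a distractor: for a frictionless slope the final speed is independent of mass.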