http://en.wikipedia.org/wiki/Parallel_Processing_(DSP_implementation)
# Parallel Processing (DSP implementation)
Parallel processing in digital signal processing (DSP) is a technique that duplicates function units so that different tasks (signals) can be processed simultaneously.[1] Accordingly, the same processing can be performed on different signals by the corresponding duplicated function units. Because a parallel DSP design produces multiple outputs per clock cycle, it achieves a higher throughput than a non-parallel (serial) design.
## Conceptual Example
Consider a function unit (F0) and three tasks (T0, T1, and T2). The times required for the function unit F0 to process those tasks are t0, t1, and t2 respectively. If we run these three tasks in sequential order, the time required to complete them is t0 + t1 + t2.
However, if we duplicate the function unit into two additional copies (F1 and F2), the aggregate time is reduced to max(t0, t1, t2), which is smaller than the sequential total.
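A minimal numeric sketch of this timing argument, with made-up task durations:

```python
# Sequential vs. parallel completion time for three tasks, using
# illustrative durations t0, t1, t2 (arbitrary time units).
t = [3.0, 5.0, 2.0]  # t0, t1, t2

sequential_time = sum(t)  # one unit F0 runs T0, T1, T2 back to back
parallel_time = max(t)    # F0, F1, F2 each run one task concurrently

print(sequential_time)  # 10.0
print(parallel_time)    # 5.0 = max(t0, t1, t2)
```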
## Parallel Processing Versus Pipelining
Mechanism:
• Parallel processing: duplicated function units working in parallel; each task is processed entirely by a different function unit.
• Pipelining: different function units working in parallel; each task is split into a sequence of sub-tasks, which are handled by specialized and distinct function units.
Objective:
• Pipelining leads to a reduction in the critical path, which can increase the sample speed or reduce power consumption at the same speed.
• Parallel processing techniques require multiple outputs, which are computed in parallel in a clock period. Therefore, the effective sample speed is increased by the level of parallelism.
When we are able to apply both parallel processing and pipelining techniques, it is often better to choose parallel processing, for the following reasons:
• Pipelining usually causes I/O bottlenecks.
• Parallel processing can also be used to reduce power consumption by running with slower clocks.
• A hybrid of pipelining and parallel processing further increases the speed of the architecture.
## Parallel FIR Filters
Consider a 3-tap FIR filter:[2]
$y(n)=ax(n)+bx(n-1)+cx(n-2)$
which is shown in the following figure.
Assume the computation time of a multiplication unit is Tm and that of an add unit is Ta. The sample period is then bounded by
${T_{sample} \ge T_m + 2T_a }$
By parallelizing it into N = 3 copies, the resulting architecture is shown below. The sample period now becomes
${T_{sample} \ge \frac{T_{clock}}{N} = \frac{T_m + 2T_a}{3} }$
where N represents the number of copies.
Please note that $T_{sample} \ne T_{clock}$ in a parallel system, whereas $T_{sample}=T_{clock}$ holds in a pipelined system.
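As a concrete sketch of the block-processing idea behind this architecture, the following code computes three FIR outputs per clock from a block of three inputs; the coefficients and input samples are made-up values:

```python
# 3-parallel 3-tap FIR y(n) = a*x(n) + b*x(n-1) + c*x(n-2):
# one block of three inputs in, three outputs out, per clock cycle.
a, b, c = 0.5, 0.3, 0.2  # illustrative coefficients

def fir_block(x3, state):
    """x3 = [x(3k), x(3k+1), x(3k+2)]; state = [x(3k-2), x(3k-1)]."""
    xm2, xm1 = state
    y0 = a * x3[0] + b * xm1 + c * xm2      # y(3k)
    y1 = a * x3[1] + b * x3[0] + c * xm1    # y(3k+1)
    y2 = a * x3[2] + b * x3[1] + c * x3[0]  # y(3k+2)
    return [y0, y1, y2], [x3[1], x3[2]]     # outputs, updated state

x = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
state = [0.0, 0.0]  # x(-2), x(-1)
for k in range(0, len(x), 3):
    y, state = fir_block(x[k:k + 3], state)
    print(y)  # three output samples per iteration ("clock")
```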
## Parallel 1st-order IIR Filters
Consider the transfer function of a 1st–order IIR filter formulated as
$H(z)=\frac{z^{-1}}{1-az^{-1}}$
where |a| < 1 for stability; such a filter has a single pole, located at z = a.
The corresponding recursive representation is
$y(n+1) = ay(n) + u(n)$
Consider the design of a 4-parallel architecture (N = 4). In such a parallel system, each delay element represents a block delay, and the clock period is four times the sample period.
Therefore, by iterating the recursion with n=4k, we have
$y(n+4) = a^{4}y(n) + a^{3}u(n) + a^{2}u(n+1) + au(n+2) + u(n+3)$
$\rightarrow y(4k+4) = a^{4}y(4k) + a^{3}u(4k) + a^{2}u(4k+1) + au(4k+2) + u(4k+3)$
The corresponding architecture is shown as follows.
The resultant parallel design has the following properties.
• The pole of the original filter is at $z=a$, while the pole of the parallel system is at $z=a^4$, which is closer to the origin.
• The pole movement improves the robustness of the system to the round-off noise.
• Hardware complexity of this architecture: $N \times N$ multiply-add operations.
Please note that this quadratic increase in hardware complexity can be reduced by exploiting concurrency and incremental computation to avoid repeated work.
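The block recursion derived above can be cross-checked against the sample-by-sample filter in a few lines; the coefficient a and the input sequence are illustrative values:

```python
# 4-parallel first-order IIR: y(4k+4) = a^4 y(4k) + a^3 u(4k)
#                                      + a^2 u(4k+1) + a u(4k+2) + u(4k+3)
a = 0.9  # illustrative coefficient, |a| < 1

def iir_block(u4, y_prev):
    """u4 = [u(4k), u(4k+1), u(4k+2), u(4k+3)]; y_prev = y(4k)."""
    return (a**4 * y_prev + a**3 * u4[0] + a**2 * u4[1]
            + a * u4[2] + u4[3])

u = [1.0, 0.5, -0.2, 0.3, 0.1, 0.0, 0.7, -0.4]

# Serial reference: y(n+1) = a y(n) + u(n), starting from y(0) = 0.
y = 0.0
for n in range(8):
    y = a * y + u[n]
print(y)   # y(8), computed sample by sample

# Block version: two iterations of the 4-parallel recursion.
yb = 0.0
for k in range(0, 8, 4):
    yb = iir_block(u[k:k + 4], yb)
print(yb)  # identical to the serial result, up to rounding
```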
## Parallel Processing for Low Power
Another advantage of parallel processing is that it can reduce the power consumption of a system by allowing the supply voltage to be lowered.
Consider the power consumption of an ordinary (sequential) CMOS circuit:
$P_{seq}=C_{total} V_0^{2} f$
where $C_{total}$ represents the total capacitance of the CMOS circuit.
For the parallel version, the charging capacitance per operation remains the same, but the total capacitance increases by a factor of N.
To maintain the same sample rate, the clock period of the N-parallel circuit increases to N times the propagation delay of the original circuit.
This prolongs the available charging time by a factor of N, so the supply voltage can be reduced to βV0 (with β < 1).
Therefore, the power consumption of the N-parallel system can be formulated as
$P_{para}=(NC_{total})(\beta V_0)^{2}\frac{f}{N}=\beta^2 P_{seq}$
where β can be computed by solving
$N(\beta V_{0}-V_{t})^{2} = \beta(V_{0}-V_{t})^{2}$
in which $V_t$ is the device threshold voltage.
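As a numerical illustration, this condition can be rearranged into a quadratic in β and solved directly; the supply and threshold voltages below are assumed values, not taken from the article:

```python
# Solve N*(beta*V0 - Vt)^2 = beta*(V0 - Vt)^2 for beta, then report
# the power ratio beta^2. V0 and Vt are illustrative values.
import numpy as np

N, V0, Vt = 4, 3.3, 0.6

# Expanded into a quadratic in beta:
# (N*V0^2)*beta^2 - (2*N*V0*Vt + (V0 - Vt)^2)*beta + N*Vt^2 = 0
coeffs = [N * V0**2, -(2 * N * V0 * Vt + (V0 - Vt) ** 2), N * Vt**2]
roots = np.roots(coeffs)

# Keep the physically meaningful root: beta*V0 must stay above Vt,
# otherwise the scaled circuit could not switch at all.
beta = max(r.real for r in roots if Vt / V0 < r.real < 1)
print(f"beta = {beta:.3f}, P_para/P_seq = beta^2 = {beta**2:.3f}")
# beta ~ 0.46 here, i.e. roughly a 4-5x power reduction for N = 4
```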
## References
1. K. K. Parhi, *VLSI Digital Signal Processing Systems: Design and Implementation*, John Wiley, 1999.
2. Slides for *VLSI Digital Signal Processing Systems: Design and Implementation*, John Wiley & Sons, 1999 (ISBN 0-471-24186-5): http://www.ece.umn.edu/users/parhi/slides.html
https://iitbrain.com/error-analysis/
# 2 - Error Analysis Questions Answers
A resistor of 10 kΩ with a tolerance of 10 % is connected in series with another resistor of 20 kΩ with a tolerance of 20 %. The tolerance of the combination will be approximately:
Joshi sir comment
resultant = (10 ± 1) + (20 ± 4) = 30 ± 5
so tolerance = 5 × 100/30 = 50/3 ≈ 16.67 %
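A one-line check of this arithmetic:

```python
# Worst-case tolerance of two resistors in series.
R1, tol1 = 10e3, 0.10  # 10 kΩ ± 10 %
R2, tol2 = 20e3, 0.20  # 20 kΩ ± 20 %

dR = R1 * tol1 + R2 * tol2  # ±(1 kΩ + 4 kΩ) = ±5 kΩ
R = R1 + R2                 # 30 kΩ
print(f"tolerance = {100 * dR / R:.2f} %")  # 16.67 %
```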
The time period of oscillation of a simple pendulum is 2π (l/g)^(1/2). l is about 10 cm and is known to 1 mm accuracy. The time period of oscillation is about 0.5 s. The time of 100 oscillations is measured with a wrist watch of 1 s resolution. What is the accuracy in the determination of g?
Ans: 5 %
Solution by Joshi sir
T = 2π (l/g)1/2
so g ∝ l/T^2
so % error in g = % error in l + 2 × (% error in T)
= 0.1 × 100/10 + 2 × (1 × 100/50) = 1 + 4 = 5
Here you should remember that if you count 100 oscillations, the 1 s timing uncertainty applies to the total time for all 100 oscillations (about 50 s), which is what keeps the fractional error in T small.
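The same propagation can be checked numerically:

```python
# Fractional-error propagation for g ∝ l/T^2:
# Δg/g = Δl/l + 2·ΔT/T.
dl_over_l = 0.1 / 10           # 1 mm accuracy on l ≈ 10 cm -> 1 %
dT_over_T = 1.0 / (100 * 0.5)  # 1 s resolution over 100 periods of 0.5 s -> 2 %

error_g = dl_over_l + 2 * dT_over_T
print(f"{100 * error_g:.0f} %")  # 5 %
```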
https://calculator.academy/horizontal-acceleration-calculator/
Enter the magnitude of the acceleration and the angle of the acceleration into the calculator to determine the Horizontal Acceleration.
## Horizontal Acceleration Formula
The following equation is used to calculate the Horizontal Acceleration.
Ax = A * Cos(a)
• Where Ax is the Horizontal Acceleration (m/s^2)
• A is the magnitude of the acceleration (m/s^2)
• a is the angle of acceleration (degrees)
To calculate the horizontal acceleration, multiply the magnitude of the acceleration by the cosine of its direction angle.
## What are the units for Horizontal Acceleration?
The most common units for Horizontal Acceleration are m/s^2.
## How to Calculate Horizontal Acceleration?
Example Problem:
The following example problem outlines the steps and information needed to calculate the Horizontal Acceleration.
First, determine the magnitude of the acceleration. In this example, the magnitude of the acceleration is determined to be 130 (m/s^2).
Next, determine the angle of acceleration. For this problem, the angle of acceleration is measured to be 60 (degrees).
Finally, calculate the Horizontal Acceleration using the formula above:
Ax = A * Cos(a)
Inserting the values from above and solving the equation with the input values gives:
Ax = 130 * Cos(60 deg) = 65 (m/s^2)
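A minimal sketch of the same calculation in Python; note that the angle must be converted from degrees to radians for math.cos:

```python
import math

def horizontal_acceleration(A, angle_deg):
    """Ax = A * cos(a), with the angle a given in degrees."""
    return A * math.cos(math.radians(angle_deg))

print(horizontal_acceleration(130, 60))  # ~65.0 m/s^2
```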
https://www.enotes.com/homework-help/write-an-equation-rational-function-with-these-582710?en_action=hh-question_click&en_label=hh_carousel&en_category=internal_campaign
# Write an equation of a rational function with these conditions: No Vertical Asymptote Horizontal Asymptote at y=5 Y-intercept at (0,3)
A rational function has the form
`f(x) = [p(x)]/[g(x)]`
The function has no vertical asymptote, so the denominator has no real roots. The simplest such polynomial is `x^2 + 1`
`=> f(x) = [p(x)]/(x^2 +1)`
The function has a horizontal asymptote at y = 5, which is greater than 0. So the numerator and denominator have the same degree, and the numerator has leading coefficient equal to 5.
`p(x) = 5x^2 + ax + b`
The y-intercept (0, 3) gives `p(0) = b = 3`; taking `a = 0` for simplicity,
Now `p(x) = 5x^2 + 3`
`therefore ` the required equation for the rational function = `(5x^2 +3)/(x^2 + 1)`
The function having no vertical asymptote means the denominator never equals 0 at any value of x; in other words, the polynomial should have no real roots. A simple denominator of this type would be `ax^2 + 1` with `a > 0`.
The function has a horizontal Asymptote at y=5. So the polynomial of the numerator would have a type like `5x^2+bx+c` .
So from these data we can say the function is;
`f(x) = (5x^2+bx+c)/(ax^2+1)`
It is given that at x = 0 then y = 3.
`3 = c/1`
`c = 3`
`f(x) = (5x^2+bx+3)/(ax^2+1)`
So b can be any rational value; a must be positive so that the denominator has no real roots, and in fact the horizontal asymptote of this form is y = 5/a, so a = 1 is required for the asymptote to be exactly y = 5.
A simple form of the answer would be at a = 1 and b = 0;
`f(x) =(5x^2+3)/(x^2+1)`
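A quick numerical check of the final answer against the three conditions:

```python
# f(x) = (5x^2 + 3)/(x^2 + 1): no vertical asymptote (denominator > 0),
# horizontal asymptote y = 5, and y-intercept (0, 3).
def f(x):
    return (5 * x**2 + 3) / (x**2 + 1)

print(f(0))     # 3.0 -> y-intercept (0, 3)
print(f(1e6))   # ~5.0 -> horizontal asymptote as x -> +inf
print(f(-1e6))  # ~5.0 -> horizontal asymptote as x -> -inf
```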
https://math.stackexchange.com/questions/2964587/forward-euler-method-given-two-step-sizes
# Forward Euler Method Given Two Step Sizes
I am attempting to compute an approximation of the solution with the forward Euler method in $$[0,1]$$ with step lengths $$h_{1}= 0.2$$, $$h_{2}= 0.1$$ given the initial value problem below
$$\frac{dy}{dz}=\frac{1}{1+z}-y(z)\quad y(0)=1$$
I am not sure what to do when I am given two step sizes instead of one. I know how to compute the approximation when a single step size is given. Am I supposed to find the approximation for two different step sizes? Or is there something I am missing?
The problem asks you to solve the differential equation twice: once with the step size $$h=0.2$$ and once with the step size $$h=0.1$$, and to compare the results. As you know, different step sizes give different results; with a smaller step size, a smaller error is made.
• Ah ok I understood completely another thing! – gimusi Oct 21 '18 at 15:09
• Thanks a lot for the explanation – enes Oct 21 '18 at 17:29
• Thanks for your attention and understanding – Mohammad Riazi-Kermani Oct 21 '18 at 18:19
We can apply Euler's method as usual, using $$h_1$$ for the first solution, that is
$$y_{i+1}=y_i+h_1F(z_i,y_i)$$
and $$h_2$$ for the second one, that is
$$y_{i+1}=y_i+h_2F(z_i,y_i)$$
in order to compare the results, since the smaller the step, the more accurate the solution.
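A short sketch of the suggested comparison for this particular initial value problem:

```python
# Forward Euler on dy/dz = 1/(1+z) - y, y(0) = 1, over [0, 1],
# run once with h = 0.2 and once with h = 0.1.
def F(z, y):
    return 1.0 / (1.0 + z) - y

def euler(h, z_end=1.0):
    z, y = 0.0, 1.0
    while z < z_end - 1e-12:  # guard against floating-point drift
        y += h * F(z, y)
        z += h
    return y

for h in (0.2, 0.1):
    print(f"h = {h}: y(1) ≈ {euler(h):.6f}")
```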
https://transcendent-ai-labs.github.io/DynaML/utils/package/
# utils object
Summary
The utils object contains some useful helper functions which are used by a number of API components of DynaML.
## String/File Processing

### Load File into a Stream

    val content = utils.textFileToStream("data.csv")
### String Replace

Replace all occurrences of a string (or regular expression) in a target string.

    val new_str = utils.replace(find = ",")(replace = "|")(input = "1,2,3,4")
### Download URL

Download the content of a URL to a specified location on disk.

    utils.downloadURL("www.google.com", "google_home_page.html")
### Write to File

    val content: Stream[String] = _
    utils.writeToFile("foo.csv")(content)
## Numerics

### log1p

Calculates $log_{e}(1+x)$.

    val l = utils.log1pExp(0.02)
### Haar DWT Matrix

Constructs the Haar discrete wavelet transform matrix for orders which are powers of two.

    val dwt_mat = utils.haarMatrix(math.pow(2, 3).toInt)
### Hermite Polynomials

The Hermite polynomials are an important class of orthogonal polynomials used in numerical analysis. There are two definitions of the Hermite polynomials, i.e. the probabilist and physicist definitions, which are equivalent up to a scale factor. In the utils object, the probabilist polynomials are calculated.

    //Calculate the 3rd order Hermite polynomial
    val h3 = (x: Double) => utils.hermite(3, x)
    h3(2.5)
### Chebyshev Polynomials

Chebyshev polynomials are another important class of orthogonal polynomials used in numerical analysis. There are two types: the first kind and the second kind.

    //Calculate the Chebyshev polynomial of second kind, order 3
    val c23 = (x: Double) => utils.chebyshev(3, x, kind = 2)
    c23(2.5)
### Quick Select

Quick select finds the $k^{th}$ smallest element of a list of numbers.

    val second = utils.quickselect(List(3,2,4,5,1,6), 2)
### Median

    val med = utils.median(List(3,2,4,5,1,6))
### Sample Statistics

Calculate the mean and variance (or covariance), minimum, and maximum of a list of DenseVector[Double] instances.

    val data: List[DenseVector[Double]] = _

    val (mu, vard): (DenseVector[Double], DenseVector[Double]) = utils.getStats(data)

    val (mean, cov): (DenseVector[Double], DenseMatrix[Double]) = utils.getStatsMult(data)

    val (min, max) = utils.getMinMax(data)
https://stats.stackexchange.com/questions/387929/a-reward-becomes-a-penalty-if
# A reward becomes a penalty if
I am working on building a reinforcement learning agent with DQN. The agent would place buy and sell orders for day-trading purposes. I am facing a small problem with this project. The question is: how do I tell the agent to maximize profit while avoiding any transaction whose profit is less than $100? I want to maximize the profit within a trading day and avoid placing the pair (limit buy order, limit sell order) if the profit on that transaction would be less than $100. The idea here is to avoid the small noisy movements; instead, I prefer long, beautiful, profitable movements. Be aware that I thought of using the profit and loss (P&L) as the reward.
"I want the minimal profit per transaction to be 100$" ==> It seems this is not something that is enforceable. I can train the agent to maximize profit per transaction, but how that profit is cannot be ensured. At the beginning, I wanted to tell the agent, if the profit of a transaction is 50 dollars, I will remove 100 dollars, then It becomes a penalty of 50 dollars for the agent. I thought it was a great way to tell the agent to not place a limit buy order if you are not sure it will give us a minimal profit of 100$. It seems that all I would be doing there is simply shifting the value of the reward. The agent only cares about maximizing the sum of rewards and not taking care of individual transactions.
How can I tell the agent to maximize profit while avoiding transactions whose profit is less than $100? With that strategy, what guarantees that the agent will never make a buy/sell decision that results in less than $100 profit? Could "sum of rewards − (number of transactions × $100)" be a solution?

## 1 Answer

Your utility function is basically $$U(x) = \max(\$100, x)$$ so all the profits below $100 are equally bad. Above this, the more profit, the better. The problem is that the function is flat below $100, so the optimizer can get stuck in that region. To avoid this, you would need to use some kind of optimizer that is able to make "jumps" out of such a region, rather than something that only makes incremental improvements (like gradient descent). This would possibly depend on initialization. I am not an expert in reinforcement learning, so I don't feel I can give you more detailed hints.

"With that strategy, what guarantees that the agent will never make a buy/sell decision that results in less than $100 profit?"
Nothing would give you such guarantees. What you are describing is simply an if (profit <= 100) ... else ... block of code inside your agent that reacts to profits below $100 (e.g., fails and restarts).
• With that strategy, what guarantee is there that the agent will never make a buy/sell decision that results in less than $100 profit? Jan 18, 2019 at 17:59
• @fgauth of course you can never guarantee that you won't lose money / make less than $100 on a trade. If you could, you'd be rich. Jan 19, 2019 at 15:11
• Your answer is good, but I am not a fan. If the transaction gives a negative profit, then this reward function still gives a minimal reward of $100. We need to punish the agent if it placed a pair (limit buy order, limit sell order) that incurred a negative profit. Jan 19, 2019 at 21:43
• @fgauth so a profit that is greater than zero but less than $100 is acceptable? You said that it is unacceptable for the profit to go below $100. – Tim, Jan 20, 2019 at 7:41
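For concreteness, the shaping the asker describes (subtracting $100 so that sub-threshold trades become penalties) can be written as a small reward function; as the answer points out, this shifts the reward rather than enforcing a hard constraint:

```python
# Per-transaction reward shaping with a $100 profit threshold.
# The penalty scale is a free design choice, not something from the thread.
THRESHOLD = 100.0

def shaped_reward(profit):
    if profit >= THRESHOLD:
        return profit              # profitable trades keep their full reward
    return profit - THRESHOLD      # a $50 profit becomes a -$50 reward

print(shaped_reward(150.0))  # 150.0
print(shaped_reward(50.0))   # -50.0
print(shaped_reward(-20.0))  # -120.0 (losses are punished hardest)
```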
https://www.iacr.org/cryptodb/data/paper.php?pubkey=12760
## CryptoDB
### Paper: Solutions to Key Exposure Problem in Ring Signature
Authors: Joseph K. Liu, Duncan S. Wong
URL: http://eprint.iacr.org/2005/427
In this paper, we suggest solutions to the key exposure problem in ring signature. In particular, we propose the first forward secure ring signature scheme and the first key-insulated ring signature schemes. Both constructions allow a $(t,n)$-threshold setting. That is, even if $t$ secret keys are compromised, the validity of all forward secure ring signatures generated in the past is still preserved. Conversely, the compromise of up to all secret keys does not allow any adversary to generate a valid key-insulated ring signature for the remaining time periods.
##### BibTeX
@misc{eprint-2005-12760,
title={Solutions to Key Exposure Problem in Ring Signature},
booktitle={IACR Eprint archive},
keywords={public-key cryptography / Signatures},
url={http://eprint.iacr.org/2005/427},
note={ liu@cs.bris.ac.uk 13110 received 23 Nov 2005},
author={Joseph K. Liu and Duncan S. Wong},
year=2005
}
https://physics.stackexchange.com/questions/645193/color-of-pinholes-of-two-different-sized-blackbody-enclosures
# Color of pinholes of two different-sized blackbody enclosures
Like this video shows, blackbody enclosures held at the same temperature and having the same dimensions, albeit made of different materials, show, as expected, the same color of the pinhole, despite their different overall color.
If instead we have two blackbody enclosures of the same material, held at the same temperature, but having different dimensions, will the color of the pinholes differ?
I argue that they would differ, because the energies (not the energy densities $$\rho$$ in Planck's, respectively Wien's, laws) satisfy $$\rho v_{small} \ne \rho v_{large}$$, where $$v$$ with the subscript is volume. Besides, according to wave theory, the frequency $$\nu$$ depends on the dimensions of the cavity, $$\nu = \frac{c}{2L}n$$, where $$L$$ is the length of the enclosure, $$c$$ is the velocity of light, and $$n$$ is the mode number.
Do you agree with that? Is there an experiment demonstrating the above?
As you mentioned, photons in the box have quantised momenta, based on the fact that they are standing waves with an integer number of half wavelengths:
$$\lambda_n = \frac{2L}{n}$$
$$p_n = \frac{h}{\lambda_n} = \frac{hn}{2L}$$
There are photons of all allowed momenta in the box. Which momenta are allowed is determined by the expression for $$p_n$$ above, where n ranges over the positive integers. So for a macroscopic system, where L is much, much larger than $$\lambda$$, we can think of $$\lambda$$ as a function of a continuous variable: rather than being an integer, n can take on any value (basically, we can ignore the step-like nature of $$\lambda$$).
For a photon gas (a type of Bose gas with zero chemical potential), the mean energy can be found using: $$\overline{E} = \sum_i E_i \overline{N}_i$$
Where $$\overline{N}_i$$ is the average number of particles which occupy a state with energy $$E_i$$. I'm not sure how much you know about statistical mechanics but for a photon gas $$\overline{N}_i$$ is:
$$\overline{N}_i = \frac{1}{e^{E_i/kT}-1}$$
We also know that for a photon:
$$E_i= pc$$
And so, because we have concluded that the allowed momenta can be approximated as a continuous function of n, this sum of $$E_i \overline{N}_i$$ becomes an integral, which is a lot easier to solve! The result is:
$$\overline{E} = \frac{\pi ^2 (kT)^4 V}{15(\hbar c)^3}$$
Wow! The mean energy of the photon gas depends on $$T^4$$ and the volume... the important thing here is that this is the mean energy of the photon gas as a whole. If we divide by V we get the energy density, which only depends on $$T^4$$. This is an important result: the energy density in a black body cavity is only dependent on the temperature. We could take that energy per unit volume, which is carried by many photons with a broad spectrum of energies, and think of it as simply the energy of a single photon of frequency f:
$$f = \frac{E_{density}}{h}$$
Therefore our box at temperature T would consist of a number of photons, all with energy $$E_{density}$$. This energy is independent of volume and depends only on temperature, but the number of these photons in the box is dependent on the volume. It is the frequency of these hypothetical photons which determines the colour of light we see emitted from the box and it is independent of the volume. So a larger box (or larger opening) would simply allow more energy per unit time to escape, so the intensity of the radiation would be greater, but the colour of the light would be unaffected.
This photon with energy $$E_{density}$$ is purely hypothetical, but it is a useful pedagogical tool. The full blackbody spectrum, as derived by Planck, is given by:
$$I = \frac{2kTf^2}{c^2} \frac{hf/kT}{e^{hf/kT}-1}$$
Where I is intensity. This is independent of volume.
I hope this makes sense.
The pinholes would have the same colour if the cavities are at the same temperature.
Imagine two different cavities, joined at their respective pinholes but isolated from the rest of the universe.
These two cavities would eventually arrive at thermal equilibrium at the same temperature with no net flow of energy through the join. That is as expected because the blackbody radiation field is isotropic.
We could also do this experiment with the same cavities, but this time insert a filter between the pinholes that only allows through a narrow range of wavelengths. The equilibrium must still be reached (it would take longer). We could choose any wavelength range for the filter and get the same result. This tells us that the flux at any particular wavelength - i.e. the blackbody radiation spectrum - is universal at a given temperature. It cannot depend on the materials or dimensions of the cavities.
The energy density in the two cavities will be the same, but the integrated energy content will not be. The flux of energy emerging from a pinhole is proportional to the energy density.
Treating the Planck function as a continuous spectrum implicitly assumes that $$L$$ is large enough that the photon energies can be summed using an integral rather than a discrete summation, which requires that the separation between the different energy states be small compared with the average energy, i.e. approximately $$\frac{hc}{2L} \ll k_B T\ ,$$ or equivalently $$L \gg \frac{hc}{2k_B T}\ .$$ This is a bit like saying that $$L$$ must be much larger than a typical wavelength in the system. I don't think it would be appropriate to treat the spectrum as continuous if this were not true.
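For a sense of scale, the bound can be evaluated at room temperature with standard physical constants:

```python
# Length scale below which the continuum approximation breaks down:
# L >> hc / (2 kB T).
h = 6.626e-34    # Planck constant, J s
c = 2.998e8      # speed of light, m/s
kB = 1.381e-23   # Boltzmann constant, J/K

T = 300.0  # room temperature, K
L_min = h * c / (2 * kB * T)
print(f"hc/(2 kB T) at {T:.0f} K ≈ {L_min * 1e6:.0f} µm")  # ≈ 24 µm
```

At room temperature this is only a few tens of micrometres, so cavities of any reasonable laboratory size are safely in the continuum regime.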
Experimental lab-based blackbody cavities could be of any (reasonable) size and emit a similar spectrum at the same temperature.
Black body radiation (BBR) results from the equilibrated thermal radiation that is emitted by the charges in a material as the material is heated. It is therefore a function of temperature only and is independent of the material or geometry of the body, as also mentioned in the video you linked. Hence your question is answered in the negative.
Yes, the total energy in the differently sized bodies would be different. It seems you then linked that to Planck's law to claim that the color of the escaping photons would be different too.
This isn't true: the total energy differs not because there are photons of different frequencies, but because there is more volume and therefore more photons. The spectrum is in fact the same.
frequency ν depends on the dimensions of the cavity $$\nu=\frac{c}{2L}n$$
Yes, that is the frequency of the $$n^{th}$$ stationary mode of an EM wave along the direction of $$L$$, but, as you may find in the initial steps of any derivation of BBR, the density of states, and thereby the spectral energy density, is independent of $$L$$. Hence geometry doesn't matter. This aspect follows from wave theory itself and has little to do with BBR per se.
• But now you've added that bit about stars. Stars are not blackbody cavities and neither are their photospheres. Jun 13, 2021 at 12:05
• @ProfRob spectral lines from photosphere absorption notwithstanding, isn't their temperature calculated assuming BBR? Jun 13, 2021 at 12:06
• Let's not start on that. Stars are not blackbodies, since their photospheres are not in thermal equilibrium. The spectrum can be approximated by an effective temperature but in detail it is quite dissimilar to the Planck function. Your general point that one can find approximations to blackbodies with a similar spectrum but a large range of sizes is sufficient, but this would be best just done in the lab. Jun 13, 2021 at 12:08
• I am not an expert in astrophysics, but is the deviation from the BBR fit significant enough for the current context? Anyway, I take your point and will drop the example. Feel free to add an experiment you may know of in its place, if you have the time. Jun 13, 2021 at 12:12
https://proofwiki.org/wiki/Set_Difference_Equals_First_Set_iff_Empty_Intersection
# Set Difference Equals First Set iff Empty Intersection
## Theorem
$S \setminus T = S \iff S \cap T = \varnothing$
## Proof
Assume $S, T \subseteq \Bbb U$ where $\Bbb U$ is a universal set.
$$\begin{aligned}
S \setminus T = S &\iff S \cap \complement \left({T}\right) = S && \text{Set Difference as Intersection with Complement} \\
&\iff S \subseteq \complement \left({T}\right) && \text{Intersection with Subset is Subset} \\
&\iff S \cap \complement \left({\complement \left({T}\right)}\right) = \varnothing && \text{Intersection with Complement is Empty iff Subset} \\
&\iff S \cap T = \varnothing && \text{Complement of Complement}
\end{aligned}$$
$\blacksquare$
http://codeforces.com/problemset/problem/362/C
C. Insertion Sort
time limit per test
2 seconds
memory limit per test
256 megabytes
input
standard input
output
standard output
Petya is a beginner programmer. He has already mastered the basics of the C++ language and moved on to learning algorithms. The first algorithm he encountered was insertion sort. Petya has already written the code that implements this algorithm and sorts the given integer zero-indexed array a of size n in non-decreasing order.
    for (int i = 1; i < n; i = i + 1) {
        int j = i;
        while (j > 0 && a[j] < a[j - 1]) {
            swap(a[j], a[j - 1]); // swap elements a[j] and a[j - 1]
            j = j - 1;
        }
    }
Petya uses this algorithm only for sorting arrays that are permutations of the numbers from 0 to n - 1. He has already chosen the permutation he wants to sort, but he first decided to swap some two of its elements. Petya wants to choose these elements in such a way that the number of times the sort executes the swap function is minimized. Help Petya find out the number of ways in which he can make the swap and fulfill this requirement.
It is guaranteed that it's always possible to swap two elements of the input permutation in such a way that the number of swap function calls decreases.
Input
The first line contains a single integer n (2 ≤ n ≤ 5000) — the length of the permutation. The second line contains n different integers from 0 to n - 1, inclusive — the actual permutation.
Output
Print two integers: the minimum number of times the swap function is executed and the number of such pairs (i, j) that swapping the elements of the input permutation with indexes i and j leads to the minimum number of the executions.
Examples
Input
5
4 0 3 1 2
Output
3 2
Input
5
1 2 3 4 0
Output
3 4
Note
In the first sample the appropriate pairs are (0, 3) and (0, 4).
In the second sample the appropriate pairs are (0, 4), (1, 4), (2, 4) and (3, 4).
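A brute-force sketch for small n (far too slow for n = 5000, but enough to reproduce the samples): the number of swap calls insertion sort makes equals the number of inversions, so try every pair, count the inversions after the swap, and keep the minimum:

```python
# Try all pairs (i, j), count inversions of the swapped permutation,
# and report (minimum swap count, number of pairs achieving it).
from itertools import combinations

def inversions(a):
    n = len(a)
    return sum(a[i] > a[j] for i in range(n) for j in range(i + 1, n))

def solve(a):
    best, count = None, 0
    for i, j in combinations(range(len(a)), 2):
        a[i], a[j] = a[j], a[i]
        inv = inversions(a)
        a[i], a[j] = a[j], a[i]  # undo the swap
        if best is None or inv < best:
            best, count = inv, 1
        elif inv == best:
            count += 1
    return best, count

print(solve([4, 0, 3, 1, 2]))  # (3, 2)
print(solve([1, 2, 3, 4, 0]))  # (3, 4)
```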
https://tex.stackexchange.com/questions/185392/multiple-spacing-in-toc
# multiple spacing in TOC
The thesis office at my university has these rules regarding the TOC:
• Chapter headings should have double spacing above and below them.
• Subheadings should be single spaced.
To accomplish this, I have this in my latex source file
\setlength{\cftbeforechapskip}{12pt} % space between whole chapter blocks
\renewcommand{\cftchapafterpnum}{\vskip 16pt}
% single spaced section and subsection
\renewcommand{\cftsecafterpnum}{\vskip 6pt}
\renewcommand{\cftsubsecafterpnum}{\vskip 6pt}
The problem I have is that, because of the first two commands, I get an unequal space distribution - something like
ABSTRACT
<28pt>
DEDICATION
<28pt>
.
.
.
.
CHAPTER 1 TITLE
<28pt>
CHAPTER 2
<16pt>
<16pt>
CHAPTER 3
<16pt>
.
.
.
I have used \cftbeforesubsecskip and \cftbeforesecskip also. These introduce spaces everywhere.
What I want is -
ABSTRACT
<16pt>
DEDICATION
<16pt>
.
.
.
<16pt>
CHAPTER 1 TITLE
<16pt>
CHAPTER 2
<16pt>
<16pt>
CHAPTER 3
<16pt>
.
.
.
This is what I get -
REVISED SOLUTION to automate the process and also allow full use of the optional argument in \section. This solution modifies the \section definition. If this is the first section of a chapter, a blank line is stacked above the section name in the toc (using the optional argument of the original \section definition). You may have to tweak the actual stack gap (currently set as 18pt).
\documentclass{report}
\usepackage[usestackEOL]{stackengine}
\let\svsection\section
\makeatletter
\renewcommand\section[2][]{%
\if0\@arabic\c@section%
\ifx\relax#1\relax\firstsection{#2}\else\firstsection[#1]{#2}\fi%
\else
\ifx\relax#1\relax\svsection{#2}\else\svsection[#1]{#2}\fi%
\fi}
\makeatother
\newcommand\firstsection[2][]{%
\ifx\relax#1\relax\svsection[\Longstack{\\#2}]{#2}\else
\svsection[\Longstack{\\#1}]{#2}\fi
}
\setstackgap{L}{18pt}
\begin{document}
\tableofcontents
\chapter{A Chapter}
\section{A Section}
\section[TOC Section Name]{A Section}
\chapter{A Chapter}
\chapter{A Chapter}
\section[TOC Section Name]{A Section}
\end{document}
ORIGINAL SOLUTION:
You may have to tweak the actual stack gap (currently set as 18pt), but using the optional argument for the first section of every chapter will allow you to stack a blank line above that section heading in the toc.
Here, I codify that as \firstsection{}.
\documentclass{report}
\usepackage[usestackEOL]{stackengine}
\newcommand\firstsection[1]{\section[\Longstack{\\#1}]{#1}}
\setstackgap{L}{18pt}
\begin{document}
\tableofcontents
\chapter{A Chapter}
\firstsection{A Section}
\section{A Section}
\chapter{A Chapter}
\chapter{A Chapter}
\firstsection{A Section}
\end{document}
https://lifecs.likai.org/2010/02/assembling-pdf-from-full-page-scans.html
## Tuesday, February 9, 2010
### Assembling PDF from full-page scans using LaTeX
On Friday, I requested a journal article from the library, which doesn't have a subscription to an electronic copy of the journal prior to 1991 but has a hard copy in storage. Over the weekend, they fished it out and gave me a photocopy. Unfortunately, some of the text near the book binding was not legible in the photocopy because the margin was too narrow. They let me check out the book for a day to figure out what I wanted to do with it.
I wanted to have a PDF copy, so this is what I tried:
• I tried using a flat-bed scanner with Adobe Acrobat, but the text near the binding showed up black.
• I tried using a BookEye scanner, which is essentially a lamp and a digital camera. I could only photograph both pages in a single image, since the image-splitting function in the scanner simply chopped off the text near the binding. The page still showed up curved. Using a small piece of acrylic plastic, I was able to flatten one page at a time, so I photographed the two-page spread twice, once with the left page flattened and once with the right page flattened. I was able to recover the missing text close to the binding, but the PDF had duplicate pages; half of each page should be discarded.
• I tried using a Canon photocopier to scan for me. This produced the highest quality scan, but I was not able to flatten the page enough to recover text from the book binding.
So I took the scanned PDF from the Canon photocopy machine and the PDF from BookEye, extracted the images, and then used the BookEye images to patch the missing parts of the Canon scan. I also cleaned up the images. The result is several PNG files. Now I want PDF.
It turns out it is easy with pdfLaTeX. I made a .tex file like this:
\documentclass[letterpaper]{article}
\usepackage[left=0pt,top=0pt,right=0pt,bottom=0pt]{geometry}
\usepackage{graphicx}
\begin{document}
\newpage
\includegraphics[width=\paperwidth,height=\paperheight]{000.png}
\newpage
\includegraphics[width=\paperwidth,height=\paperheight]{001.png}
\newpage
\includegraphics[width=\paperwidth,height=\paperheight]{002.png}
\end{document}
And ran pdfLaTeX on it to generate the PDF. The main idea is to set the page size to letter, set the page margins to 0, then include the image files while setting their width and height to that of the paper using LaTeX measurement macros. I think there are still rough corners to this approach, because LaTeX complains about overfull hboxes, but the resulting PDF is usable for my needs.
I_resent_having_to_name_everything said...
Two small fixes:
remove the first \newpage to get rid of the extra blank page.
prefix each \includegraphics with a \noindent to remove the extra space on the left.
Likai Liu said...
\newpage only starts a page if there are existing content (i.e. \newpage \newpage won't give you two blank pages). But you're right about \noindent.
https://www.talkstats.com/threads/overlapping-sample-t-test-with-partial-pairing.57552/
Overlapping sample: t test with partial pairing??
JAaron
New Member
Hi -
I have a work project that requires a statistical test between Product A and Product B performance ratings (on a scale of 1 to 5, how well did Product A / B perform?)
We are treating the data as interval scaled.
The samples partially overlap. e.g. Product A has n=500; Product B has n=500; there are 250 common respondents between the two.
What test is appropriate here?
I dug around, and the only detailed reference I could find that seemed to fit the bill is here, on page 20: a dependent t-test with partial pairing. Is that what I need?
http://www.analyticalgroup.com/statistical_reference14/Statistical Reference.pdf
Thank you!
Jake
A mixed model / multilevel model / hierarchical linear model / random effects model (this same model goes by many different names, as you can see) can easily handle this.
CB
Super Moderator
Jake, just out of interest, can I check how you would specify the mixed model? I'm thinking something just like:
Code:
lme(Performance ~ Product, random = ~1 | Respondent)
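A rough Python equivalent with statsmodels, on toy long-format data (one row per rating, so overlapping respondents contribute two rows); this is a sketch, not a drop-in replacement for the R call above:

```python
import pandas as pd
import statsmodels.formula.api as smf

# Toy data: respondents 1-3 rated both products (the overlap);
# respondents 4 and 5 rated only one product each.
df = pd.DataFrame({
    "Respondent":  [1, 1, 2, 2, 3, 3, 4, 5],
    "Product":     ["A", "B", "A", "B", "A", "B", "A", "B"],
    "Performance": [4, 3, 5, 4, 3, 3, 4, 2],
})

# Random intercept per respondent, fixed effect of product.
model = smf.mixedlm("Performance ~ Product", df, groups=df["Respondent"])
result = model.fit()
print(result.summary())
```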
https://inference-review.com/article/as-the-loom-of-physics-expands
|
This is the second in a series of essays. The first offered an historical appraisal of the elements of matter. The third will address the Standard Model.
Democritus, Epicurus, and Xenocrates regarded sound as a stream of particles.1 Aristotle came closer to the truth:
Sound is composed of particular beating vibrations … created in the air by the body that gave out the tone … This motion propagates itself unchanged [as] each portion of the air sets the next portion of air in motion with the same movement as it has itself.2
Galileo Galilei and Isaac Newton, like Aristotle, favored waves, Newton deducing a simple formula for the speed of sound through a gas,
$\sqrt{\frac{p}{\rho }}$,
where p is the pressure of the gas and ρ is its density. In the case of sound traveling through air, his result was significantly in error. Newton invoked two quite absurd fudge factors to force his predictions to coincide with reality.3 The triumph of the wave theory of sound would have been unalloyed had not quantum mechanics blurred the distinction between particles and waves. And indeed sound in elastic solids often displays particle-like behavior. The particle avatars of sound were termed phonons in 1932, just six years after photons received their name.
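The size of Newton's error in his sound formula is easy to check. Using standard sea-level values (illustrative figures, not from the essay) of p ≈ 101 kPa and ρ ≈ 1.2 kg/m³, his isothermal formula and Laplace's adiabatic correction (see note 3) give
$$v_{\text{Newton}}=\sqrt{\frac{p}{\rho}}\approx 290\ \text{m/s},\qquad v_{\text{Laplace}}=\sqrt{\frac{\gamma p}{\rho}}\approx 343\ \text{m/s}\quad(\gamma\approx 1.4\ \text{for air}),$$
the latter matching the measured speed of sound in air; Newton's value falls short by roughly fifteen percent.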
Both Democritus and Aristotle regarded light as a stream of particles. Lucretius agreed:
The light and heat of the sun: these are composed of minute atoms which, when they are shoved off, lose no time in shooting across the interspace of air in the direction imparted by the shove.4
The Greeks found the law of reflection and sought a similar law for refraction. In 984 CE, Ibn Sahl found an empirical formula for refraction, but it was soon forgotten—like snow upon the desert’s dusty face, as a Persian mathematician remarked. The law was rediscovered by Willebrord Snellius (Snell) in the seventeenth century. In the version of Snell’s law worked out by René Descartes, light travels more slowly through air than water or glass; in Pierre de Fermat’s version, faster.
Newton deduced Snell’s law by inventing a short-range attractive force that would act between particles of light and matter.5 His imagined new force gave an impetus to photons just as they entered a denser medium, thereby increasing their speed and changing their direction in accord with Snell’s law. Ingenious! Note the curiously indirect way Newton presented his ideas:
These attractions bear a great resemblance to the reflexions and refractions of light … as was discovered by Snellius and Des Cartes … For it is now certain that light requires about 7 or 8 minutes to travel from the sun to the earth. Moreover, rays of light … as lately was discovered by Grimaldus … passing through a small hole are bent or refracted around these bodies as if they were attracted to them … Therefore I thought it not amiss to add the following Propositions for optical use; not at all considering the nature of the rays of light, or inquiring whether they are bodies or not.6
Here Newton alludes to two recent developments. In 1676, Ole Rømer had proven that light travels at a finite speed, using his observations of the varying eclipse intervals of Jupiter’s innermost moon Io to find a rough measure of its speed. Shortly before this, Francesco Grimaldi had found that light passing through a tiny aperture emerges as a cone. Light propagates and spreads “not only directly, and by refraction and reflexion,” he wrote, but also by a fourth mode, dubbing the phenomenon diffraction and recognizing it as a property shared by sound waves.7 Grimaldi’s discovery was a vital step toward Christiaan Huygens’ development of the wave theory of light. In this passage, Newton cites Grimaldi, misinterpreting diffraction as evidence for the bending of light particles by his conjectured force.8
Much of Newton’s treatise on optics concerns his observations of the colored rings that appear when a thin lens is placed atop a glass plate.9 Newton called these phenomena “fits of easy transmission or reflection,” but his own data showed that the wavelengths associated with light span a factor of about two, or an octave, as he illustrated with a circular chart of seven colors divided by the seven notes of a diatonic musical scale. The intervals between E and F (orange), and B and C (indigo) are half the size of the other five, in accord with a piano’s tuning. Hence Newton chose seven colors for the rainbow.
### Figure 1.
The musical scale and its colors, according to Newton.
In 1678, Huygens developed a wave theory of light from which Snell’s law followed. Huygens showed why light ordinarily travels in straight lines, but he could not explain diffraction. Early in the nineteenth century, Huygens’s wave theory was improved by Thomas Young in Britain and Augustin-Jean Fresnel in France. Superposition and interference of waves offered simple explanations for Newton’s rings and diffraction. Fresnel predicted that a bright spot would appear at the center of the shadow of an opaque disc, which seemed absurd, but in 1819 the remarkable scientist-adventurer-politician François Arago managed to spot the spot; his colleagues were flabbergasted.
Arago’s spot proved that light is composed of waves, but the crucial experiment would involve a comparison between light’s speed in air and water. Arago designed a technique to do this, but failing eyesight forced him to assign the task to two younger colleagues, Hippolyte Fizeau and Léon Foucault. Collaborators at the start, they soon became fierce competitors. Each set out to measure the speed of light in air and to compare that to its speed in water. In 1849, Fizeau measured the speed of light to within five percent of its correct value. Soon afterward, both found that light travels more slowly in water than air, refuting both Newton and Descartes and confirming Huygens’s wave theory of light.
## Electromagnetism
Newton’s theory of gravity required action at a distance. No one in the seventeenth century found this a particularly compelling idea, and neither did Newton.
That one body may act upon another at a distance through a vacuum, without the mediation of anything else … is to me so great an absurdity that I believe no man who has in philosophical matters a competent faculty of thinking can ever fall into it. Gravity must be caused by an agent … but whether this agent be material or immaterial, I have left open to the consideration of my readers.10
The same problem pertains to electric and magnetic forces. Michael Faraday introduced lines of force into the discussion, but these magnetic field lines were important only because they represented the first step in the elaboration of the concept of a field itself. There are today electric, magnetic, and gravitational fields. Defined at all points of space-time, they represent the potentiality of force. Newton’s agent, the gravitational field, is generated by mass. An electric charge produces an electric field everywhere, which exerts forces on other charges. Electric currents, which are charges in motion, produce magnetic fields, which exert forces on other moving charges.11
James Clerk Maxwell, in his 1865 paper A Dynamical Theory of the Electromagnetic Field and his 1873 Treatise on Electricity and Magnetism, created a unified theory of electrical and magnetic phenomena: the discipline now known as electromagnetism or classical electrodynamics. A simplified version of his equations appears on the t-shirts of many physics students.12
$$\nabla \cdot \mathbf{E} = \rho$$
$$\nabla \cdot \mathbf{B} = 0$$
$$\nabla \times \mathbf{E} = -\dot{\mathbf{B}}$$
$$\nabla \times \mathbf{B} = \dot{\mathbf{E}} + \mathbf{J}$$
What an achievement in concision these equations represent! They embody the results achieved by André-Marie Ampère, Charles-Augustin de Coulomb, Carl Friedrich Gauss, and Hans Christian Ørsted, amongst others. Physicists before Maxwell had understood much of the import of these equations, but it is the inconspicuous term Ė, appearing in the fourth equation, that is brand new. Maxwell needed Ė to make his equations consistent, but as so often happens in the history of physics, consistency enforces amazing and unexpected consequences. Maxwell’s equations give rise to solutions that are self-propagating oscillations of transverse electric and magnetic fields. These electromagnetic waves travel at the speed of light. Light is the visible portion of the vast electromagnetic spectrum.
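To see the consequence explicitly: in empty space (ρ = 0, J = 0), taking the curl of the third equation and substituting the fourth — a standard one-line derivation, in the same unit convention as the equations above — gives
$$\nabla \times (\nabla \times \mathbf{E}) = -\nabla \times \dot{\mathbf{B}} = -\ddot{\mathbf{E}} \quad\Longrightarrow\quad \nabla^2 \mathbf{E} = \ddot{\mathbf{E}},$$
a wave equation whose propagation speed, restored to conventional units, is exactly c.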
Waves are disturbances that propagate through a medium. Such is the counsel of common sense. If this is so, how does light traverse the vacuum of outer space? Surely something must be there. For Maxwell that something was the luminiferous ether, a massless, intangible, invisible, and all-pervasive material, one possessing electromagnetic attributes but no mechanical properties.
The existence of Maxwell’s ether should have been easy to establish. His equations determine the speed of light relative to the ether. Hurtling around the sun, the earth must experience an intense ether wind. Light should appear faster or slower when measured with or against this wind. In 1887, Albert Michelson and Edward Morley set out to measure the predicted speed difference. No difference! They had glimpsed, but not recognized, the principle that would lead Albert Einstein to his theory of special relativity. The speed of light in a vacuum is the same to all uniformly moving observers. The speed of light is a universal constant denoted by c. The luminiferous ether has joined such discarded fancies as caloric, phlogiston, and vitalism. Light needs no medium. The changing electric field of a propagating light signal generates a magnetic field ahead of it, which in turn generates an electric field further along the light beam.
For electromagnetic radiation, the message itself serves as its own medium of transmission.
## Waves are Particles
An ideal blackbody absorbs all incoming radiation and emits thermal radiation characteristic of its temperature. Nineteenth-century physicists found the laws that quantify the facts: the mean frequency of blackbody radiation is proportional to the Kelvin temperature of the body (Wien’s law); its radiated power is proportional to the fourth power of its absolute temperature (the Stefan–Boltzmann law). Astronomers delighted in these results. Stars are blackbodies of a sort; their colors reveal their surface temperatures and luminosities.
Blackbody radiation challenged nineteenth-century physicists. No one could deduce a formula for its intensity and frequency distribution in terms of its temperature. In 1900, Max Planck introduced the radical hypothesis that light can only be emitted or absorbed in discrete and indivisible bundles that he termed quanta. The energy E carried by each light quantum is proportional to its frequency—$E = hf$—where Planck’s constant h is a fundamental constant of nature, like c or the electron’s charge e.
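A quick calculation conveys the scale that h sets (a sketch; the 550 nm wavelength is an illustrative choice for green light):
```python
h = 6.626e-34       # Planck's constant, J s
c = 2.998e8         # speed of light, m/s
eV = 1.602e-19      # joules per electron volt

wavelength = 550e-9             # green light, m (illustrative)
f = c / wavelength              # frequency, Hz
E = h * f                       # energy of one quantum, J
print(f"f = {f:.2e} Hz, E = {E:.2e} J = {E/eV:.2f} eV")
# f ≈ 5.45e14 Hz and E ≈ 2.25 eV per quantum of green light
```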
Planck’s constant had no basis in classical physics. What meaning could there be to a quantized bundle of light waves? Still, Planck succeeded in deducing the formula for blackbody radiation. Five years later, Einstein fleshed out Planck’s hypothesis by attributing particle-like properties to light waves. “We are faced with a new kind of difficulty,” Einstein admitted. “We have two contradictory pictures of reality.” Light is both a particle and a wave. “[S]eparately neither of them fully explains the phenomena of light, but together they do!”13 Planck’s quanta and Einstein’s wave-particle duality initiated the quantum revolution.
Hydrogen is the most abundant element, its atom the lightest and simplest. Johann Balmer, in 1885, found its five visible spectral lines to satisfy a simple arithmetical rule.14 Was it mere coincidence, or did the rule conceal a profound truth? A decade later, Edward Pickering found a series of perplexing lines between those of hydrogen, but only in the spectra of very hot stars. Niels Bohr and his quantum rules would eventually explain both Pickering’s lines and Balmer’s formula.
Another puzzle of nineteenth-century physics began to emerge in 1839 when the team of Antoine and Edmond Becquerel discovered the photovoltaic effect, by which light produces electrical currents.15 The closely related photoelectric effect is the emission of electrons when ultraviolet light strikes a metal. Although the number of ejected electrons increases with the intensity of the light, Philipp Lenard found that the energy of those electrons increases with the frequency instead. At frequencies below a critical value no electrons are liberated, no matter how intense the light. In fact, the photoelectric effect cannot be explained in terms of light as electromagnetic waves.16 Einstein offered a radical explanation for the photoelectric effect:
The wave theory of light … has worked well in the representation of purely optical phenomena and will probably never be replaced by another theory … [but] phenomena connected with the emission or absorption of light are more readily understood if one assumes that the energy of light is discontinuously distributed in space … [T]he energy of a light ray … consists of a finite number of energy quanta which are localized in space, which move without dividing, and which can only be produced or absorbed as complete units.17
In the photoelectric process, as Einstein imagined it, each photoelectron is liberated by a single photon, its energy being that of the photon minus the energy needed to eject the electron. Einstein’s explanation was supported by Robert Millikan’s 1914 experiments and even more strongly by those of Arthur Compton in 1923.
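In symbols, Einstein's picture of the process reads (with W the energy needed to eject the electron, the metal's work function):
$$E_{\text{electron}} = hf - W,$$
which at once explains Lenard's observations: the electron energy grows with frequency, not intensity, and no electrons emerge at all below the threshold frequency f = W/h.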
## Inside Atoms
Ernest Rutherford is among the few scientists who made their most famous discoveries after becoming Nobel laureates. Rutherford’s assistants in Manchester directed a beam of alpha particles from a radioactive source toward a thin gold foil target. They meant to measure small deflections of these particles upon striking gold atoms. Instead, the alpha particles were often deflected by large angles, sometimes even bouncing backward. “It was almost as incredible,” Rutherford recalled, “as if you fired a 15-inch shell at a piece of tissue paper and it came back and hit you … It was then I had the idea of an atom with a minute massive centre carrying a charge.”18 The result could only be understood if most of the atom’s mass is confined within a tiny nucleus.
Rutherford proposed that atoms are miniature solar systems, with electrons orbiting nuclei just as planets orbit the sun, and electrical forces playing the role of gravity. The hydrogen atom would then consist of a single electron orbiting a much heavier positively charged proton. This view faced an intractable problem. Maxwell’s laws require light to be emitted whenever electric charges accelerate. Orbiting electrons must accelerate toward their nuclei, and thus must emit light, lose energy, and spiral inward. Yet most atoms are perfectly stable. Classical physics offered no solution.
Bohr would find one.
Bohr earned his doctorate in 1911, then spent a year in England studying, researching, and visiting laboratories; in Manchester, he was received warmly by Rutherford. The following year Bohr returned to Copenhagen, and in 1913 he published three papers setting forth the first version of quantum theory. He began:
In order to explain the results of experiments on the scattering of α rays by matter, Prof. Rutherford has given a theory [in which atoms] consist of a positively charged nucleus surrounded by a system of electrons kept together by attractive forces from the nucleus; the total negative charge of the electrons is equal to the positive charge of the nucleus. Further, the nucleus is assumed to be the seat of the … mass of the atom, and to have linear dimensions exceedingly small compared with [those of] the whole atom.19
Bohr then pointed out how work on thermal radiation, the photoelectric effect, and X-rays indicate,
the inadequacy of the classical electrodynamics in describing the behavior of systems of atomic size. Whatever the alteration in the laws of motion of the electrons may be, it seems necessary to introduce … a quantity foreign to classical electrodynamics, i.e. Planck’s constant.20
Bohr’s analysis assumed arbitrarily that only certain electronic orbits are permitted, each corresponding to a discrete stationary state of an atom. Upon emitting or absorbing a photon, an atom jumps from one stationary state to another, the photon carrying or supplying the energy difference between the two states. Atoms are ordinarily found in states of least energy, or ground states. Atoms are stable because no lower energy states exist.
Bohr went on to deduce the properties of atoms with just one orbital electron, such as hydrogen or singly ionized helium. Limiting himself to circular orbits, Bohr postulated “that the angular momentum of the electron round the nucleus in a stationary state of the system is equal to an entire multiple of a universal value.” This is a quantum constraint absent from classical physics, where angular momentum can assume any value at all.
Using Newtonian mechanics upon which he impressed a quantized angular momentum, Bohr calculated the allowed size and energy states of the hydrogen atom, determining its entire electromagnetic spectrum. He found not just the handful of visible lines satisfying Balmer’s formula, but all of its spectral lines, ranging from the infrared to the ultraviolet. “[T]here obviously can be no question,” Bohr wrote, “of a mechanical foundation of the calculations given in this paper.”21 Like Planck and Einstein, Bohr had moved beyond classical physics to express the first substantial intimations of quantum physics. He not only deduced the size and spectrum of hydrogen atoms, but found a truly satisfying solution to the mystery of Pickering’s lines: they were spectral lines of singly ionized helium, present only in the hottest stars. Einstein was delighted by this result.
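Bohr's result is compact enough to check in a few lines. A sketch using the Rydberg formula his model yields (ignoring the small reduced-mass correction that slightly distinguishes hydrogen from ionized helium):
```python
R = 1.0968e7   # Rydberg constant, 1/m (approximate)

def line_nm(n_lo, n_hi, Z=1):
    """Wavelength in nm of the photon emitted in the n_hi -> n_lo jump."""
    # Bohr's model gives 1/lambda = R * Z^2 * (1/n_lo^2 - 1/n_hi^2).
    return 1e9 / (R * Z**2 * (1 / n_lo**2 - 1 / n_hi**2))

# Balmer's visible hydrogen lines: jumps down to n = 2.
print([round(line_nm(2, n), 1) for n in (3, 4, 5, 6)])
# -> [656.5, 486.3, 434.2, 410.3]

# A Pickering line: singly ionized helium (Z = 2), the 7 -> 4 jump,
# landing between Balmer's H-beta and H-alpha lines.
print(round(line_nm(4, 7, Z=2), 1))   # -> ~541.5
```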
A cardinal attribute of the emerging theory was first recognized by the French physicist Louis de Broglie:
After long reflection in solitude and meditation, I suddenly had the idea, during the year 1923, that the discovery made by Einstein in 1905 should be generalized by extending it to all material particles and notably to electrons.22
There we have it. Light’s baffling wave-particle duality is shared by matter. Electrons, as well as atoms and other particles, can display wavelike properties, just as light waves can act as particles. The wavelength associated with a body with momentum p is simply h/p, just as it is for a photon. De Broglie’s intuition was confirmed in 1927, when electrons scattering from a crystal formed diffraction patterns identical to those made by X-rays with the same momentum as the electrons. The first electron microscopes were designed and built just four years later.
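The numbers show why crystals were the natural diffraction grating. For an electron of kinetic energy E (nonrelativistically, $p = \sqrt{2mE}$), the de Broglie wavelength at an illustrative 100 eV — the same order of energy as in the 1927 experiments — is
$$\lambda = \frac{h}{p} = \frac{h}{\sqrt{2mE}} \approx 0.12\ \text{nm},$$
comparable to the spacing between atoms in a crystal lattice, and to the X-ray wavelengths used in diffraction.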
Particles and waves are notions drawn from everyday life. Pebbles thrown into a lake are undeniably particles; their ripples are waves. But atoms do not respect our feelings about how things should be, nor does our language have words adequate to describe the microworld. Because Planck’s constant is so small, and things like baseballs, bees, and bacteria are so large, they are not noticeably affected by quantum mechanics. Relativistic effects like time dilation and length contraction are irrelevant to our daily lives, because the speed of light is a million times that of sound. Classical theories of mechanics and electromagnetism will never be discarded. Within their envelope of validity, they are absolutely true.
In 1927, Werner Heisenberg formulated his uncertainty principle. The order in which measurements are made matters. It is not possible to measure precisely the position and velocity of a particle at the same time; measurement of one necessarily disturbs a subsequent measurement of the other. The uncertainty principle reflects the noncommutativity of the operators linked to momentum and position. This result encapsulates the essential difference between classical and quantum theories.
To formulate a theory incorporating the uncertainty principle, Heisenberg, Max Born, and Pascual Jordan identified Bohr’s quantum states as vectors in a Hilbert space. Heisenberg identified observable quantities as infinite-rank matrices acting on these state vectors. Shortly afterward, Erwin Schrödinger took a different tack, seeking a wave equation akin to those for sound and light, in order to describe de Broglie’s matter waves. Measurements were linked to noncommutative differential operators acting on a space-time-dependent wave function—now famous as the ψ function. It was Born who completed the new theory’s arch by providing a probabilistic interpretation of the wave function. The Jordan–Heisenberg and Schrödinger formulations of quantum mechanics were soon recognized as two formulations of the same theory.23
Quantum mechanics not only tells us why copper is red, diamonds are hard, and rubber is stretchy, it underlies all of chemistry and biology. It is a theory of almost everything, except for gravity and nuclear phenomena.
## From Dante Onward
In the sixteenth century, Nicolaus Copernicus put the sun at the center of the universe, just as Aristarchus of Samos had done two millennia before. Neither theory was widely accepted. And for obvious reasons. Anyone on a speeding planet, common sense might suggest, would be aware of its motion.
In Inferno, Dante Alighieri described the descent from the seventh to the eighth circle of the underworld. He is flying atop the infernal monster Geryon.
Than was my own, when I perceived myself
on all sides in the air, and saw extinguished
the sight of everything but of the monster.
Onward he goeth, swimming slowly, slowly;
Wheels and descends, but I perceive it only
by wind upon my face and from below.24
Dante’s wildly imagined flight, as Leonardo Ricci realized, “captures a physical law of motion”:
[Dante] is not aware (or, more accurately, he imagines that he is not aware) of anything but the apparent wind. He asserts that, aside from the effect of the wind, his sensation of flying is not dissimilar to being at rest … Dante intuitively grasped the concept [now known as Galilean invariance], but unlike Galileo, he did not pursue this idea any further.25
Three centuries later, Galileo proffered his own thought experiment, showing that one’s uniform motion cannot be detected:
Shut yourself up with some friend … below decks on some large ship, and have with you there some flies, butterflies, and other small flying animals. Have a large bowl of water with some fish in it; hang up a bottle that empties drop by drop into a wide vessel beneath it. With the ship standing still, observe carefully how the little animals fly with equal speed to all sides of the cabin. The fish swim indifferently in all directions; the drops fall into the vessel beneath; and, in throwing something to your friend, you need throw it no more strongly in one direction than another … When you have observed all these things carefully … have the ship proceed with any speed you like, so long as the motion is uniform and not fluctuating this way and that. You will discover not the least change in all the effects named, nor could you tell from any of them whether the ship was moving or standing still … The cause of all these correspondences of effects is the fact that the ship’s motion is common to all the things contained in it, and to the air also.26
Galileo never precisely enunciated his principle of invariance; he was besotted with circular motion, not unbounded rectilinear motion. Nor, for the same reason, did he accept Johannes Kepler’s planetary ellipses.
The principle may have been first stated by Descartes in 1644, two years after Galileo’s death and Newton’s birth: “Each and every thing, insofar as it can, always continues in its same state,” and, “all motion is, of itself, along straight lines.”27 The principle reappeared in Newton’s Principia Mathematica, rather more clearly stated, and credited to Galileo as the first of his three laws of motion: “Every body continues in its state of rest, or of uniform motion in a right line, unless it is compelled to change that state by forces impressed upon it.”28 Whether due to Galileo, Descartes, or Newton, this law underlies the notion of Galilean invariance. The laws of physics are the same to all uniformly moving or inertial observers, whether they are on the earth, or the International Space Station, or on a planet receding from the earth at half the speed of light.29
Newton’s second and third laws of motion are the foundations of classical mechanics. His second law offers a definition of force as the product of mass and acceleration. His third law, that every action is accompanied by an equal and opposite reaction, is equivalent to the law of the conservation of momentum. Newton invented the calculus, which he called the method of fluxions, and used it to calculate planetary orbits, thereby initiating the science of celestial mechanics, but his most profound insight was expressed by the law of universal gravitational attraction. All objects in the universe attract one another with a force that is proportional to the product of their masses and inversely proportional to the square of the distance between them. No one had seen this before. Together with his laws of motion, Newton’s law of gravitational attraction allowed him to create a unified theory of motion on earth and in the heavens.30
Daniel Bernoulli, Coulomb, and Joseph-Louis Lagrange all made their careers by developing and exploiting Newtonian mechanics. But the most dramatic scientific event of the eighteenth century had little to do with Newtonian mechanics per se: the 1781 discovery of the seventh planet, Uranus, by William Herschel. Its orbital radius accidentally agreed with the empirical Titius–Bode law, but that now-discarded bit of numerology did inspire astronomers to discover the largest asteroids: Ceres (1801), Pallas (1802), Juno (1804), and Vesta (1807), all of them moving in accordance with Newton’s laws.
As astronomical measurements became more precise, an anomaly was found in the orbit of Uranus. John Couch Adams and Urbain Le Verrier independently showed that the effects of an eighth planet lying beyond Uranus could explain the discrepancy. They computed its mass and orbit, enabling Johann Galle to find the new planet in 1846.31
The anomalous behavior of Mercury represented another challenge to Newton’s theory. Le Verrier found the rate of precession of its orbit to exceed what could be accounted for by the gravitational effects of other planets. He attributed the discrepancy—a mere forty-two arc seconds per century—to the gravitational effects of a new planet with a smaller orbit. He named it Vulcan and announced its discovery to the Académie des Sciences in 1860. Le Verrier died in 1877, still believing that he had discovered both Neptune and Vulcan. Astronomers would have to wait until 1916 for Einstein to solve the problem of Mercury’s orbital anomaly with his new and improved theory of gravity: the general theory of relativity.
## The Special Theory of Relativity
Many nineteenth-century issues led Einstein toward special relativity. Fizeau was one of the first to show that light travels more slowly in water than in air. In 1851, he measured the difference in speed between light traveling with moving water and light traveling against it. If Maxwell’s luminiferous ether were unaffected by the water, there should be no difference in speed; if it were dragged along by the water, the difference should be twice the speed of the water. Fizeau obtained a mystifying result that lay between these plausible extremes. Equally relevant was the negative result of the Michelson–Morley experiment.
Einstein presented his special theory of relativity in a 1905 paper entitled “On the Electrodynamics of Moving Bodies”:
Examples of this sort, together with the unsuccessful attempts to discover any motion of the earth relatively to the “light medium” suggest that the phenomena of electrodynamics as well as mechanics possess no properties corresponding to the idea of absolute rest. They suggest rather that … the same laws of electrodynamics and optics will be valid for all frames of reference for which the equations of mechanics hold good. We will raise this conjecture … to the status of a postulate, and also introduce another postulate, which is only apparently irreconcilable with the former, namely, that light is always propagated in empty space with a definite velocity c which is independent of the state of motion of the emitting body. These two postulates suffice for the attainment of a simple and consistent theory of the electrodynamics for moving bodies based on Maxwell’s theory for stationary bodies. The introduction of a “luminiferous æther” will prove to be superfluous inasmuch as the view here to be developed will not require an “absolutely stationary space” provided with special properties…32
Einstein’s second postulate—that the speed of light is the same relative to all observers whatever their state of motion—is certainly counterintuitive. Its consequences are even stranger. Why can neither mass nor message travel faster than light?33 How can events be simultaneous to one observer, but not to another? Why do the lengths of rulers or the rates of clocks differ for observers in different inertial systems?
The answers are tied to another and more fundamental question: which space-time quantities are the same when measured by any inertial observer? Let two events be designated as $(x_1, t_1)$ and $(x_2, t_2)$, where the vectors $x_i$ denote their positions and the $t_i$ their times. Classical mechanics admits two space-time invariants, quantities that are the same to all inertial observers: the distance and the time interval between the events, $d = |x_2 - x_1|$ and $\tau = t_2 - t_1$. The Galilean transformations of classical mechanics are those linear transformations of space and time that leave both $d$ and $\tau$ unchanged. The relevant Galilean transformation is $x' = x - vt$ and $t' = t$. In special relativity the sole space-time invariant is the combination $d^2 - c^2\tau^2$. Lorentz transformations are those linear transformations that leave it unchanged: $x' = x \cosh\varphi - ct \sinh\varphi$ and $ct' = ct \cosh\varphi - x \sinh\varphi$, with $v = c \tanh\varphi$. These formulas approach the corresponding Galilean formulas in the limit of small $v$. A sequence of two Lorentz transformations by speeds $v = c \tanh\varphi$ and $u = c \tanh\psi$ in the same direction results in the Lorentz transformation corresponding to the speed $w = c \tanh(\varphi + \psi)$, where
$$w = \frac{u + v}{1 + uv/c^2}.$$
This is the relativistic law for the addition of velocities. For small u and v it approaches the familiar result u + v. It is the speed of a ball caught by a girl at rest, thrown at speed u by a boy running toward the girl at speed v. Otherwise, the law informs us that the composition of any two subluminal velocities is subluminal, and that the composition of c with any subluminal velocity remains c.
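The closure properties claimed here are easy to verify numerically (a sketch, in units where c = 1):
```python
def add_velocities(u, v, c=1.0):
    # Relativistic composition of collinear velocities:
    # w = (u + v) / (1 + u*v/c^2), from the tanh addition formula.
    return (u + v) / (1 + u * v / c**2)

print(add_velocities(0.9, 0.9))       # 0.9945... -- subluminal stays subluminal
print(add_velocities(1.0, 0.5))       # 1.0      -- c composed with anything is c
print(add_velocities(0.001, 0.002))   # ~0.003   -- the Galilean limit u + v
```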
Special relativity also changes our understanding of momentum and energy, which are linked to one another, just as are position and time. The best-known consequence is Einstein’s deceptively simple formula $E = mc^2$, stating the equivalence of matter and energy.34 It replaces the conservation laws for mass and for energy with a single law.
## The General Theory of Relativity
Special Relativity is relevant only in uniformly moving reference frames. It cannot be used by accelerated observers. It was for this reason that it was called special. In 1907, Einstein had what he described as the happiest thought of his life:
[B]ecause for an observer falling freely … there exists … no gravitational field … [he] has the right to consider his state as “at rest.” … The experimentally known matter independence of the acceleration of fall [the equivalence principle] is therefore a powerful argument for the fact that the relativity postulate has to be extended to coordinate systems which, relative to each other, are in non-uniform motion.35
Einstein realized that a person in a sealed elevator cannot tell gravity from acceleration. Gravitational forces are merely epiphenomenological consequences of the distortion of space-time produced by matter. A generally covariant theory is one whose physical laws are the same to all observers whatever their state of motion. Because gravitational forces and those experienced by accelerated observers are locally indistinguishable, such a theory would have to encompass gravity. Einstein succeeded in his ambitious quest in 1915, in the middle of the First World War.
Einstein’s theory has led us to quantitative theories of cosmology and cosmogenesis, toward an understanding of the history of the universe from the creation of the first atomic nuclei in the Hot Big Bang to the evolution of all the stars, galaxies and the wondrous large-scale structure of the universe. Months after Einstein completed his general theory, he proposed three classic tests.
The first of these was carried out well before Einstein’s birth. The observed precession of Mercury’s orbit could not be explained by the gravitational effects of the sun and the other planets. Le Verrier attributed this discrepancy to Vulcan. To Einstein’s delight, Mercury’s behavior accorded precisely with his new theory of gravity.
Newtonian physics predicts that starlight skimming the solar surface should be deflected by the tiny angle of 0.87 arc seconds.36 Einstein published a similar result in 1911. His general theory of 1915 predicted twice as large a deflection. At the time, the deflection could only be observed during a total solar eclipse. The test would wait for the first such eclipse after the armistice, in 1919, when British expeditions traveled to Sobral in Brazil and to the island of Príncipe, the latter led by Arthur Eddington, to perform the observations. Their results were in rough agreement with Einstein’s theory.37
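In modern notation, the two predictions compared in 1919 are (with M and R the solar mass and radius; 1.75″ is the standard general-relativistic value):
$$\delta_{\text{Newton}} = \frac{2GM}{c^2 R} \approx 0.87'', \qquad \delta_{\text{Einstein}} = \frac{4GM}{c^2 R} \approx 1.75''.$$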
Robert Pound and Glen Rebka performed Einstein’s third test in 1960. They showed that an atomic clock in the cellar of the Jefferson Laboratory ticked a bit more slowly than an identical clock in the attic. A closely related and so-called fourth test of general relativity was proposed by Irwin Shapiro in 1964.38 He pointed out that a radar signal grazing the sun on a round-trip to a planet or satellite would suffer a gravitational time delay of about two hundred microseconds. The effect was soon observed; it would later enable the most sensitive tests of general relativity.
1. See Robert English, “Democritus’ Theory of Sense Perception,” Transactions and Proceedings of the American Philological Association 46 (1915): 217–27; Elizabeth Asmis, “Epicurean Empiricism,” in The Cambridge Companion to Epicureanism, ed. James Warren (New York: Cambridge University Press, 2009), 102; John Dillon, The Heirs of Plato (Oxford: Clarendon Press, 2003), 118.
2. Aristotle, Aristotle and the Earlier Peripatetics, vol. 2, trans. Benjamin Costelloe and John Muirhead (London: Longmans, Green, and Co., 1897), 466.
3. Physicists of the eighteenth century “dismissed Newton’s explanation for the difference between measurement and theory in what was now the speed-of-sound problem; but even the best of them could do no better.” Bernard Finn, “Laplace and the Speed of Sound,” ISIS 55, no. 179 (1964): 9. Newton’s vexing problem was solved about a century later by the French mathematician Pierre Laplace. Because sound propagation is adiabatic and not isothermal, Newton’s result had to be multiplied by a constant depending on the gas. The result is known as the Newton–Laplace equation.
4. Lucretius, On the Nature of Things, trans. Frank Copley (New York: Norton, 1977), 8.
5. This is the last of the three fundamental forces Newton proposed in Principia: gravity acting between masses, a repulsive force between gas atoms to explain Boyle’s law (see the first part of this series) and an attractive force between atoms and photons to explain Snell’s law. The gravitational force was his greatest triumph, his others mere follies of hubris.
6. Isaac Newton, Principia, Book I, Prop. XCIII–XCVII.
7. A. Rupert Hall, “Beyond the Fringe: Diffraction as seen by Grimaldi, Fabri, Hooke and Newton,” Notes and Records of the Royal Society of London 44 (1990): 13.
8. Newton also neglected his own discovery of what are now called Newton’s rings.
9. Isaac Newton, Opticks: or a Treatise of the Reflexions, Refractions and Colours of Light, 4th edn. (London: William Innys, 1730).
10. Isaac Newton, Newton: Philosophical Writings, ed. Andrew Janiak (Cambridge: Cambridge University Press, 2004), 102.
11. Whether fields are material or immaterial remains a question best left to philosophers.
12. These partial differential equations, here in modern notation, tell how charges and currents produce electric (E) and magnetic (B) fields in empty space. They take different forms in media. Others describe the forces that fields exert on charges (ρ) and currents (J).
13. Albert Einstein and Leopold Infeld, The Evolution of Physics (New York: Simon & Schuster, 1938), 263.
14. Johann Balmer, “Notiz über die Spectrallinien des Wasserstoffs,” (Note on the Spectral Lines of Hydrogen), Annalen der Physik und Chemie 25 (1885): 80.
15. Henri Becquerel, Edmond’s son and Antoine’s grandson, discovered radioactivity in 1896. Henri’s doctoral student Marie Curie, with her husband Pierre, discovered radium. Marie’s daughter Irène co-discovered artificial radioactivity. Her daughter Hélène married the grandson of Pierre’s student, the renowned physicist Paul Langevin, who would later be Marie Curie’s lover. It’s all in the family.
16. The photoelectric effect is often attributed to Heinrich Hertz, the discoverer of radio waves, even though he died prior to the discovery of electrons. Philipp Lenard and Johannes Stark, among the few scientists cited by Einstein in his prizewinning paper about the photoelectric effect, were both Nobel laureates in physics but became virulent anti-Semites, committed Nazis, outspoken opponents of “Jewish physics,” and trusted advisors to Adolf Hitler.
17. A. B. Arons and M. B. Peppard, “Einstein’s Proposal of the Photon Concept—a Translation of the Annalen der Physik Paper of 1905,” American Journal of Physics 33 (1965): 367.
18. Ernest Rutherford, “Forty Years of Physics,” in Francis Cornford et al., Background to Modern Science: Ten Lectures at Cambridge Arranged by the History of Science Committee, eds. Joseph Needham and Walter Pagel (Cambridge: Cambridge University Press, 1938), 68.
19. Niels Bohr, “On the Constitution of Atoms and Molecules,” Philosophical Magazine 26 (1913): 1.
20. Niels Bohr, “On the Constitution of Atoms and Molecules,” Philosophical Magazine 26 (1913): 2.
21. Niels Bohr, “On the Constitution of Atoms and Molecules,” Philosophical Magazine 26 (1913): 14.
22. Abraham Pais, Subtle is the Lord: The Science and the Life of Albert Einstein (Oxford: Oxford University Press, 1982), 252.
23. For a thorough account of the historical development of quantum mechanics see Abraham Pais, Inward Bound: Of Matter and Forces in the Physical World (Oxford: Oxford University Press, 1986).
24. Dante Alighieri, The Inferno: The Definitive Illustrated Edition, trans. Henry Wadsworth Longfellow (Mineola, NY: Dover Publications, 2016), Canto XVII, 94.
25. Leonardo Ricci, “History of Science: Dante’s Insight into Galilean Invariance,” Nature 434 (2005): 717.
26. Galileo Galilei, Dialogue Concerning the Two Chief World Systems, trans. Stillman Drake (Berkeley, CA: University of California Press, 1953), 186–87.
27. René Descartes, Principles of Philosophy, trans. Valentine Miller and Reese Miller (Dordecht: Kluwer, 1991), 59, 60.
28. Isaac Newton, Principia, Law I.
29. Earth offers no precisely inertial frame, but its rotational acceleration is a tiny fraction of earth’s gravity, its orbital acceleration smaller yet. Their effects are ordinarily negligible: you might not notice your weight in Rome to be ounces less than in Oslo. A demonstration of Earth’s motion was devised by Léon Foucault in 1851 and publicly demonstrated at the Panthéon in Paris, where the swing of a huge pendulum could be seen to precess about a vertical axis in a period of about 31 hours, in accordance with theoretical prediction. An exact copy of Foucault’s pendulum has continued to precess since 1995, to the delight of tourists and the edification of students.
30. I examine some of Newton’s less attractive attributes in “The Errors and Animadversions of Honest Isaac Newton.”
31. See Morton Grosser, The Discovery of Neptune (Cambridge, MA: Harvard University Press, 1962).
32. Albert Einstein, “Zur Electrodynamik bewegter Korper” (On the Electrodynamics of Moving Bodies), Annalen der Physik 17 (1905): 891. English translation in Hendrik Lorentz et al., The Principle of Relativity, trans. W. Perrett and G. B. Jeffery (London: Methuen and Company, 1923). For an elegant and extended discussion of the origins and development of both the special and general theories of relativity, see Abraham Pais, Subtle is the Lord: The Science and the Life of Albert Einstein (Oxford: Oxford University Press, 1982).
33. Gerald Feinberg discussed the possibility of superluminal particles and dubbed them tachyons. They would lead to paradox but fortunately seem not to exist. “It could have turned out differently, I suppose… but it didn’t.” (My apologies to J. A.) See Gerald Feinberg, “Possibility of Faster-Than-Light Particles,” Physical Review 159 (1967): 1,089–1,105.
34. Albert Einstein, “Does the Inertia of a Body Depend Upon Its Energy Content?” Annalen der Physik 18 (1905): 639–41.
35. Abraham Pais, Subtle is the Lord: The Science and the Life of Albert Einstein (Oxford: Oxford University Press, 1982), 178.
36. Clifford Will and Eric Poisson, Gravity: Newtonian, Post-Newtonian, Relativistic (Cambridge: Cambridge University Press, 2014), 501.
37. Frank Dyson, Arthur Eddington, and Charles Davidson, “A Determination of the Deflection of Light by the Sun’s Gravitational Field, from Observations Made at the Total Eclipse of May 29, 1919,” Philosophical Transactions of the Royal Society of London A 220 (1920): 571–81. The angular deflection reported by Eddington’s mission was accurate to only 20%. The ambitious European GAIA satellite was launched in 2013 and will test this prediction of Einstein’s theory to an accuracy better than 0.0001%.
38. Irwin Shapiro, “Fourth Test of General Relativity,” Physical Review Letters 13, no. 26 (1964): 789.
|
2019-02-23 20:37:21
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 1, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.652418851852417, "perplexity": 1305.1426523940447}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-09/segments/1550249550830.96/warc/CC-MAIN-20190223203317-20190223225317-00299.warc.gz"}
|
https://physics.stackexchange.com/questions/375375/how-do-i-calculate-the-thrust-needed-in-a-rocket-to-reach-a-certain-acceleration
|
# How do I calculate the thrust needed in a rocket to reach a certain acceleration? [closed]
I have recently been studying basic rocket science, and I have been searching online for tutorials on this particular matter without finding anything. The problem I am having is that I don't know how to calculate how much thrust I need to reach a certain acceleration.
Let's say I have a rocket. This rocket has a mass of 2.5 kg, so its weight is 24.5 N. I want it to accelerate at 2 m/s^2.
How do I calculate that? Formulas would be appreciated, as I don't want cheats, only help.
## closed as off-topic by ACuriousMind♦Dec 20 '17 at 13:22
This question appears to be off-topic. The users who voted to close gave this specific reason:
• "Homework-like questions should ask about a specific physics concept and show some effort to work through the problem. We want our questions to be useful to the broader community, and to future users. See our meta site for more guidance on how to edit your question to make it better" – ACuriousMind
If this question can be reworded to fit the rules in the help center, please edit the question.
• Pretty much anything useful you do with rockets requires you to consider what kind of fuel you are using. At a minimum, you need to know the specific impulse of the fuel. – Chris Dec 19 '17 at 23:07
• Please note that homework-like questions and check-my-work questions are generally considered off-topic here. We intend our questions to be potentially useful to a broader set of users than just the one asking, and prefer conceptual questions over those just asking for a specific computation. – ACuriousMind Dec 20 '17 at 13:22
• @ACuriousMind Are you confident enough that this is a homework problem to drop the mod-hammer? To me it looks like a budding rocket scientist who is looking for the correct equations and has grasped the wrong ones. It seems certainly worth asking whether it is homework first. – Cort Ammon Dec 20 '17 at 16:25
• @CortAmmon I'm not sure what you mean - our homework policy applies regardless of whether or not the question is actual homework, and just asking for the formulae needed to compute a particular quantity does clearly fall under the HW policy. – ACuriousMind Dec 20 '17 at 16:34
• @CortAmmon He's basically asking for a formula to plug all this information into. He didn't really ask about the related concepts. I think you could also make a case for this being "too broad" or even "unclear what you're asking"; because we know nothing about this rocket besides its "mass", which may or may not include a fuel estimate. – JMac Dec 20 '17 at 17:10
$$\Delta V=v_e\ln\frac{m_0}{m_f}$$
This is the Tsiolkovsky rocket equation: $m_0$ is the starting mass (propellant and all), and $m_f$ is the final mass (which is just the dry mass, after all the propellant is gone). $v_e$ is the effective exhaust velocity, which is a property of your engine and your fuel. It is related to the specific impulse ($I_{sp}$) by $v_e=I_{sp}g_0$, where $g_0$ is the acceleration of gravity at sea level.
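Putting the numbers from the question into Newton's second law, and assumed engine properties into the rocket equation above, might look like this (a sketch — the specific impulse and dry mass are illustrative assumptions, since the question doesn't state them):
```python
import math

m = 2.5      # rocket mass from the question, kg
g0 = 9.81    # standard gravity, m/s^2
a = 2.0      # desired upward acceleration, m/s^2

# For a vertical launch the thrust must supply the weight plus m*a.
thrust = m * (a + g0)
print(f"required thrust: {thrust:.1f} N")        # ~29.5 N

# Tsiolkovsky rocket equation: total delta-v the propellant can deliver.
isp = 80.0                     # s, illustrative small-motor value (assumption)
v_e = isp * g0                 # effective exhaust velocity, m/s
m_dry = 2.0                    # kg, assumed dry mass (assumption)
delta_v = v_e * math.log(m / m_dry)
print(f"available delta-v: {delta_v:.0f} m/s")   # ~175 m/s
```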
|
2019-08-23 13:52:56
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.49531883001327515, "perplexity": 348.80809589174777}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027318421.65/warc/CC-MAIN-20190823130046-20190823152046-00116.warc.gz"}
|
https://chemistry.stackexchange.com/questions/62912/why-are-weak-acids-not-considered-good-leaving-groups?noredirect=1
|
# Why are weak acids not considered good leaving groups?
So today in my first-year lecture my professor said that weak bases are good leaving groups. I understand why they are good leaving groups in comparison to unstable strong bases, but why are weak acids not included? Acids are electron pair acceptors, so shouldn't they be better able to take on the donated electron pair? Furthermore, when we say weak bases, weak in comparison to what? To the substrate, the solvent, or to other potential leaving groups?
• You should think Brønsted acids/bases rather than Lewis acids/bases in this context. – Jan Nov 18 '16 at 22:25
• I figured, that's the way that it would work according to the weak bases good leaving groups rule, but my instinct was Lewis because of the electrons moving toward the leaving group. Why is Lewis incorrect here? – cgug123 Nov 18 '16 at 22:27
• Well, the thing that’s leaving will always be some kind of base — it’s taking its electron pair with it. The only thing you can still decide is whether it is the base of a weak or a strong Brønsted acid. The Lewis theory would require you to look at the basic properties only (The Lewis acid being the electrophile that is attacked nucleophilicly) so you can’t argue with acidic strength. – Jan Nov 18 '16 at 22:29
• My original thought was that it should be a weak acid because they can handle the extra electrons as they are the electron acceptors. However the thought that the base is taking them back also makes sense. – cgug123 Nov 18 '16 at 22:34
$$\ce{HA->H+ + A-}$$
$\ce{A-}$ is the species we're evaluating as the leaving group. Basically, it's a good leaving group if it's stabilized. If that's the case, the acid dissociation equilibrium also shifts toward products, making $\ce{HA}$ a stronger acid. You should be careful to differentiate between $\ce{HA}$ and $\ce{A-}$ when you're thinking about these concepts.
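A concrete pair makes the correlation vivid (the pKa values are approximate textbook figures, not from the thread): iodide, conjugate base of the very strong acid HI, is an excellent leaving group, while hydroxide, conjugate base of water, is a poor one.
$$\ce{HI <=> H+ + I-} \quad (\mathrm{p}K_\mathrm{a} \approx -10) \qquad \ce{H2O <=> H+ + OH-} \quad (\mathrm{p}K_\mathrm{a} \approx 16)$$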
|
2021-05-16 05:47:43
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5451777577400208, "perplexity": 969.2139977070306}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243989690.55/warc/CC-MAIN-20210516044552-20210516074552-00542.warc.gz"}
|
http://wiki.apidesign.org/index.php?title=LibraryReExportIsNPComplete&feed=html&action=history
|
#### JaroslavTulach: /* Hide Incompatibilities! */ - 2011-10-30 06:12:01
Hide Incompatibilities!
←Older revision | Revision as of 06:12, 30 October 2011 | Line 104:
  If you happen to reuse a library that cannot be trusted to keep its [[BackwardCompatibility]], then do whatever you can to not re-export its [[API]]s! This has been discussed in [[Chapter 10]], Cooperating with Other [[API]]s, but in short: If you hide such library for internal use and do not export any of its interfaces, you can use whatever version of library you want (even few years older) and nobody shall notice. Moreover in many [[module system]]s there can even be multiple versions of the same library in case they are not re-exported.
+ ==== Explicit Re-export ====
+ Looks like there is a way to eliminate the NP-Completeness by disabling implicit re-export. See [[LibraryWithoutImplicitExportIsPolynomial]]. However this works only in a system with standardized versioning policy and without use of [[RangeDependencies]].
  == Conclusion ==
#### JaroslavTulach: /* External Links */ - 2010-07-30 16:25:21
←Older revision | Revision as of 16:25, 30 July 2010 | Line 113:
  # discussion at [http://lambda-the-ultimate.org/node/3588 Lambda the Ultimate]
  # LtU guys pointed out that the proof has already been published: [http://people.debian.org/~dburrows/model.pdf D. Burrows, Modelling and Resolving Software Dependencies]
- # Equinox is said to [http://blog.bjhargrave.com/2008/03/equinox-and-google-summer-of-code.html use SAT4J solver]
+ # [[Equinox]] is said to [http://blog.bjhargrave.com/2008/03/equinox-and-google-summer-of-code.html use SAT4J solver]
  # EDOS Project seems to find similar proof: See section 3.2 in [http://www.edos-project.org/xwiki/bin/download/Main/D2-1/edos-wp2d1.pdf edos-wp2d1.pdf]
#### JaroslavTulach: /* Implications */ - 2009-10-16 12:51:20
Implications
←Older revision | Revision as of 12:51, 16 October 2009 | Line 82:
  '''qed'''.
+ == Polemics ==
+ One of the critiques raised during the LtU review (linked in external sources) is that the kind of situation cannot happen in practise. Surprisingly it can. [[OSGi]] and its [[RangeDependencies]] lead naturally into [[NP-Complete]] problems. [[RangeDependencies|Read more]]...
  == Implications ==
Line 100:
  If you happen to reuse a library that cannot be trusted to keep its [[BackwardCompatibility]], then do whatever you can to not re-export its [[API]]s! This has been discussed in [[Chapter 10]], Cooperating with Other [[API]]s, but in short: If you hide such library for internal use and do not export any of its interfaces, you can use whatever version of library you want (even few years older) and nobody shall notice. Moreover in many [[module system]]s there can even be multiple versions of the same library in case they are not re-exported.
- (blank line before == Conclusion == removed)
  == Conclusion ==
#### JaroslavTulach at 09:31, 12 October 2009 - 2009-10-12 09:31:48
←Older revision | Revision as of 09:31, 12 October 2009 | Line 2:
  This page starts by describing a way to convert any [[3SAT]] problem to a solution of finding whether there is a way to satisfy all dependencies of a library in a repository of libraries. Thus proving that the later problem is [[wikipedia::NP-complete|NP-Complete]]. Then it describes the importance of such observations on our [[DistributedDevelopment|development practices]].
+ There are similar observations for other module systems ([[RPM]] and [[Debian]], see the external references section), with almost identical proof. The only difference is that both [[RPM]] and [[Debian]] allow easy way to specify negation by use of ''obsolete'' directive (thus it is easy to map the [[3SAT]] formula). The unique feature of [[LibraryReExportIsNPComplete|this]] proof is that it does not need negation at all. Instead it deals with re-export of an [[API]]. As re-export of [[API]]s is quite common in software development, it brings implications of this kind of problem closer to reality.
  == [[3SAT]] ==
#### JaroslavTulach: /* Conversion of 3SAT to Module Dependencies Problem */ - 2009-09-02 08:59:19
Conversion of 3SAT to Module Dependencies Problem
←Older revision Revision as of 08:59, 2 September 2009 Line 53: Line 53: All these modules and dependencies are added into repository $R$ All these modules and dependencies are added into repository $R$ - Now we will create a module $T_{1.0}$ that depends all formulas: + Now we will create a module $T_{1.0}$ that depends on all formulas: :$T_{1.0} \gg F^1_{1.0}$ :$T_{1.0} \gg F^1_{1.0}$ :$T_{1.0} \gg F^2_{1.0}$ :$T_{1.0} \gg F^2_{1.0}$
#### JaroslavTulach: /* Proof */ - 2009-09-02 08:56:47
Proof
←Older revision Revision as of 08:56, 2 September 2009 Line 75: Line 75: For $i$-th ''3-or'' there is $T_{1.0} \gg F^i_{1.0}$ dependency which is satisfied. That means $F^i_{1.1} \in C \vee F^i_{1.2} \in C \vee F^i_{1.3} \in C$ - at least one version of $F^i$ module is present in the configuration. The one $F^i$ that has the satisfied dependency reexports $M^j_{1.0}$ (which means $v_j = true$) or $M^j_{2.0}$ (which means $v_j = false$). Anyway each $i$ ''3-or'' evaluates to $true$. For $i$-th ''3-or'' there is $T_{1.0} \gg F^i_{1.0}$ dependency which is satisfied. That means $F^i_{1.1} \in C \vee F^i_{1.2} \in C \vee F^i_{1.3} \in C$ - at least one version of $F^i$ module is present in the configuration. The one $F^i$ that has the satisfied dependency reexports $M^j_{1.0}$ (which means $v_j = true$) or $M^j_{2.0}$ (which means $v_j = false$). Anyway each $i$ ''3-or'' evaluates to $true$. - The only remaining question is whether a $C$ configuration can force truth variable $v_j$ to be true in one ''3-or'' and false in another. However that would mean that there is re-export via $T_{1.0} \gg F^i_{1.x} \gg M^j_{1.0}$ and also another one via $T_{1.0} \gg F^p_{1.u} \gg M^j_{2.0}$. However those two ''chain of dependencies'' ending in different versions of $M^j$ cannot be in one $C$ as that breaks the last condition of configuration definition. Thus each $M^j$ is represented only by one version and each $v_j$ is evaluated either to true or false, but never both. + The only remaining question is whether a $C$ configuration can force truth variable $v_j$ to be true in one ''3-or'' and false in another. However that would mean that there is re-export via $T_{1.0} \gg F^i_{1.x} \gg M^j_{1.0}$ and also another one via $T_{1.0} \gg F^p_{1.u} \gg M^j_{2.0}$. However those two ''chain of dependencies'' ending in different versions of $M^j$ cannot be in one $C$ as that breaks the last condition of configuration definition (each imported object has just one meaning). Thus each $M^j$ is represented only by one version and each $v_j$ is evaluated either to true or false, but never both. The [[3SAT]] formula's evaluation based on the configuration $C$ is consistent and satisfies the formula. The [[3SAT]] formula's evaluation based on the configuration $C$ is consistent and satisfies the formula.
#### JaroslavTulach: /* Proof */ - 2009-09-02 08:55:25
Proof
←Older revision Revision as of 08:55, 2 September 2009 Line 73: Line 73: "$\Rightarrow$": Let's have a $C$ configuration satisfies all dependencies of $T_{1.0}$. Can we also find positive valuation of [[3SAT]] formula? "$\Rightarrow$": Let's have a $C$ configuration satisfies all dependencies of $T_{1.0}$. Can we also find positive valuation of [[3SAT]] formula? - For $i$-th ''3-or'' there is $T_{1.0} \gg F^i_{1.0}$ dependency which is satisfied. That means $F^i_{1.1} \in C \vee F^i_{1.2} \in C \vee F^i_{1.3}$ - at least one version of $F^i$ module is present in the configuration. The one $F^i$ that has the satisfied dependency reexports $M^j_{1.0}$ (which means $v_j = true$) or $M^j_{2.0}$ (which means $v_j = false$). Anyway each $i$ ''3-or'' evaluates to $true$. + For $i$-th ''3-or'' there is $T_{1.0} \gg F^i_{1.0}$ dependency which is satisfied. That means $F^i_{1.1} \in C \vee F^i_{1.2} \in C \vee F^i_{1.3} \in C$ - at least one version of $F^i$ module is present in the configuration. The one $F^i$ that has the satisfied dependency reexports $M^j_{1.0}$ (which means $v_j = true$) or $M^j_{2.0}$ (which means $v_j = false$). Anyway each $i$ ''3-or'' evaluates to $true$. The only remaining question is whether a $C$ configuration can force truth variable $v_j$ to be true in one ''3-or'' and false in another. However that would mean that there is re-export via $T_{1.0} \gg F^i_{1.x} \gg M^j_{1.0}$ and also another one via $T_{1.0} \gg F^p_{1.u} \gg M^j_{2.0}$. However those two ''chain of dependencies'' ending in different versions of $M^j$ cannot be in one $C$ as that breaks the last condition of configuration definition. Thus each $M^j$ is represented only by one version and each $v_j$ is evaluated either to true or false, but never both. The only remaining question is whether a $C$ configuration can force truth variable $v_j$ to be true in one ''3-or'' and false in another. However that would mean that there is re-export via $T_{1.0} \gg F^i_{1.x} \gg M^j_{1.0}$ and also another one via $T_{1.0} \gg F^p_{1.u} \gg M^j_{2.0}$. However those two ''chain of dependencies'' ending in different versions of $M^j$ cannot be in one $C$ as that breaks the last condition of configuration definition. Thus each $M^j$ is represented only by one version and each $v_j$ is evaluated either to true or false, but never both.
#### JaroslavTulach: /* Proof */ - 2009-09-02 08:55:03
Proof
←Older revision Revision as of 08:55, 2 September 2009 Line 73: Line 73: "$\Rightarrow$": Let's have a $C$ configuration satisfies all dependencies of $T_{1.0}$. Can we also find positive valuation of [[3SAT]] formula? "$\Rightarrow$": Let's have a $C$ configuration satisfies all dependencies of $T_{1.0}$. Can we also find positive valuation of [[3SAT]] formula? - For $i$-th ''3-or'' there is $T_{1.0} \gg F^i_{1.0}$ dependency which is satisfied. At least by one from $F^i_{1.1}$ or $F^i_{1.2}$ or $F^i_{1.3}$. The one $F^i$ that has the satisfied dependency reexports $M^j_{1.0}$ (which means $v_j = true$) or $M^j_{2.0}$ (which means $v_j = false$). Anyway each $i$ ''3-or'' evaluates to $true$. + For $i$-th ''3-or'' there is $T_{1.0} \gg F^i_{1.0}$ dependency which is satisfied. That means $F^i_{1.1} \in C \vee F^i_{1.2} \in C \vee F^i_{1.3}$ - at least one version of $F^i$ module is present in the configuration. The one $F^i$ that has the satisfied dependency reexports $M^j_{1.0}$ (which means $v_j = true$) or $M^j_{2.0}$ (which means $v_j = false$). Anyway each $i$ ''3-or'' evaluates to $true$. The only remaining question is whether a $C$ configuration can force truth variable $v_j$ to be true in one ''3-or'' and false in another. However that would mean that there is re-export via $T_{1.0} \gg F^i_{1.x} \gg M^j_{1.0}$ and also another one via $T_{1.0} \gg F^p_{1.u} \gg M^j_{2.0}$. However those two ''chain of dependencies'' ending in different versions of $M^j$ cannot be in one $C$ as that breaks the last condition of configuration definition. Thus each $M^j$ is represented only by one version and each $v_j$ is evaluated either to true or false, but never both. The only remaining question is whether a $C$ configuration can force truth variable $v_j$ to be true in one ''3-or'' and false in another. However that would mean that there is re-export via $T_{1.0} \gg F^i_{1.x} \gg M^j_{1.0}$ and also another one via $T_{1.0} \gg F^p_{1.u} \gg M^j_{2.0}$. However those two ''chain of dependencies'' ending in different versions of $M^j$ cannot be in one $C$ as that breaks the last condition of configuration definition. Thus each $M^j$ is represented only by one version and each $v_j$ is evaluated either to true or false, but never both.
|
2017-10-22 10:15:53
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7029758095741272, "perplexity": 1357.7813569629602}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-43/segments/1508187825174.90/warc/CC-MAIN-20171022094207-20171022114207-00154.warc.gz"}
|
http://cerco.cs.unibo.it/browser/Papers/jar-cerco-2017/proof.tex?rev=3656
|
# source:Papers/jar-cerco-2017/proof.tex@3656
Last change on this file since 3656 was 3656, checked in by mulligan, 4 years ago
cannibalising bits of project report for compiler proof section
File size: 50.6 KB
Line
% Compiler proof
% Structure of proof, and high-level discussion
% Technical devices: structured traces, labelling, etc.
% Assembler proof
% Technical issues in front end (Brian?)
% Main theorem statement

\section{Compiler proof}
\label{sect.compiler.proof}

\subsection{A brief overview of the backend compilation chain}
\label{subsect.brief.overview.backend.compilation.chain}

The Matita compiler's backend consists of five distinct intermediate languages: RTL, RTLntl, ERTL, LTL and LIN.
A sixth language, RTLabs, serves as the entry point of the backend and the exit point of the frontend.
RTL, RTLntl, ERTL and LTL are `control flow graph based' languages, whereas LIN is a linearised language, the final language before translation to assembly.

We now briefly discuss the properties of the intermediate languages, and the various transformations that take place during the translation process:

\paragraph{RTLabs (Abstract Register Transfer Language)}
As mentioned, this is the final language of the compiler's frontend and the entry point for the backend.
This language uses pseudoregisters, not hardware registers.\footnote{There are an unbounded number of pseudoregisters. Pseudoregisters are converted to hardware registers or stack positions during register allocation.}
Functions still use stack frames, where arguments are passed on the stack and results are stored in addresses.
During the pass to RTL, instruction selection is carried out.

\paragraph{RTL (Register Transfer Language)}
This language uses pseudoregisters, not hardware registers.
Tailcall elimination is carried out during the translation from RTL to RTLntl.

\paragraph{RTLntl (Register Transfer Language --- No Tailcalls)}
This language is a pseudoregister, graph based language in which all tailcalls have been eliminated.
RTLntl is not present in the O'Caml compiler; there, the RTL language is reused for this purpose.

\paragraph{ERTL (Explicit Register Transfer Language)}
This is a language very similar to RTLntl.
However, the calling convention is made explicit, in that functions no longer receive and return inputs and outputs via a high-level mechanism, but rather use stack slots or hardware registers.
The ERTL to LTL pass performs the following transformations: liveness analysis, register colouring and register/stack slot allocation.

\paragraph{LTL (Linearisable Transfer Language)}
Another graph based language, but one that uses hardware registers instead of pseudoregisters.
Tunnelling (branch compression) should be implemented here.

\paragraph{LIN (Linearised)}
This is a linearised form of the LTL language; function graphs have been linearised into lists of statements.
All registers have been translated into hardware registers or stack addresses.
This is the final stage of compilation before translating directly into assembly language.

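To fix intuitions, the chain above can be pictured as a composition of total translation functions. The following OCaml signature is a sketch of ours, not CerCo code; all type and function names (\texttt{rtlabs\_program}, \texttt{instruction\_selection}, and so on) are hypothetical stand-ins for the passes just described.
\begin{lstlisting}
(* Hypothetical OCaml signature sketching the backend chain.  Each
   intermediate language is an abstract type; each pass is a total
   function between two of them. *)
module type BACKEND = sig
  type rtlabs_program   (* entry point of the backend *)
  type rtl_program
  type rtlntl_program
  type ertl_program
  type ltl_program
  type lin_program      (* final language before assembly *)

  val instruction_selection : rtlabs_program -> rtl_program
  val tailcall_elimination  : rtl_program -> rtlntl_program
  val make_calls_explicit   : rtlntl_program -> ertl_program
  val register_allocation   : ertl_program -> ltl_program
  val linearisation         : ltl_program -> lin_program
end
\end{lstlisting}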
%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%
% SECTION. %
%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%
\section{The backend intermediate languages in Matita}
\label{sect.backend.intermediate.languages.matita}

We now discuss the encoding of the compiler backend languages in the Calculus of Constructions proper.
We pay particular heed to changes that we made from the O'Caml prototype.
In particular, many aspects of the backend languages have been unified into a single `joint' language.
We have also made heavy use of dependent types to reduce `spurious partiality' and to encode invariants.

%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%
% SECTION. %
%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%
\subsection{Abstracting related languages}
\label{subsect.abstracting.related.languages}

The O'Caml compiler is written in the following manner.
Each intermediate language has its own dedicated syntax, notions of internal function, and so on.
Here, we make a distinction between `internal functions'---those functions explicitly written by the programmer---and `external functions', which belong to an external library and require explicit linking.
In particular, IO can be seen as a special case of the `external function' mechanism.
Internal functions are represented as a record consisting of a sequential structure of statements, entry and exit points to this structure, and other bookkeeping devices.
This sequential structure can either be a control flow graph or a linearised list of statements, depending on the language.
Translations between intermediate languages map syntaxes to syntaxes, and internal function representations to internal function representations, explicitly.

This is a perfectly valid way to write a compiler, where everything is made explicit, but writing a \emph{verified} compiler poses new challenges.
In particular, we must look ahead to see how our choice of encodings will affect the size and complexity of the forthcoming proofs of correctness.
We now discuss some abstractions, introduced in the Matita code, which we hope will make our proofs shorter, amongst other benefits.

Due to the bureaucracy inherent in explicating each intermediate language's syntax in the O'Caml compiler, it can often be hard to see exactly what changes between each successive intermediate language.
By abstracting the syntax of the RTL, ERTL, LTL and LIN intermediate languages, we make these changes much clearer.

Our abstraction takes the following form:
\begin{lstlisting}
inductive joint_instruction (p: params__) (globals: list ident): Type[0] :=
 | COMMENT: String $\rightarrow$ joint_instruction p globals
 ...
 | INT: generic_reg p $\rightarrow$ Byte $\rightarrow$ joint_instruction p globals
 ...
 | OP1: Op1 $\rightarrow$ acc_a_reg p $\rightarrow$ acc_a_reg p $\rightarrow$ joint_instruction p globals
 ...
 | extension: extend_statements p $\rightarrow$ joint_instruction p globals.
\end{lstlisting}
We first note that for the majority of intermediate languages, many instructions are shared.
However, these instructions expect different register types (either a pseudoregister or a hardware register) as arguments.
We must therefore parameterise the joint syntax with a record of parameters that will be specialised to each intermediate language.
In the type above, this parameterisation is realised with the \texttt{params\_\_} record.
As a result of this parameterisation, we have also added a degree of `type safety' to the intermediate languages' syntaxes.
In particular, we note that the \texttt{OP1} constructor expects quite a specific type, in that the two register arguments must both be what passes for the accumulator A in that language.
In some languages, for example LIN, this is the hardware accumulator, whilst in others this is any pseudoregister.
Contrast this with the \texttt{INT} constructor, which expects a \texttt{generic\_reg}, corresponding to an `arbitrary' register type.

Further, we note that some intermediate languages have language specific instructions (i.e. the instructions that change between languages).
We therefore add a new constructor to the syntax, \texttt{extension}, which expects a value of type \texttt{extend\_statements p}.
As \texttt{p} varies between intermediate languages, we can provide language specific extensions to the syntax of the joint language.
For example, ERTL's extended syntax consists of the following extra statements:
\begin{lstlisting}
inductive ertl_statement_extension: Type[0] :=
 | ertl_st_ext_new_frame: ertl_statement_extension
 | ertl_st_ext_del_frame: ertl_statement_extension
 | ertl_st_ext_frame_size: register $\rightarrow$ ertl_statement_extension.
\end{lstlisting}
These are further packaged into an ERTL specific instance of \texttt{params\_\_} as follows:
\begin{lstlisting}
definition ertl_params__: params__ :=
 mk_params__ register register ... ertl_statement_extension.
\end{lstlisting}

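The same two-level parameterisation can be rendered in a simpler form in plain OCaml, which may help readers less familiar with Matita records. The sketch below is ours, not CerCo code: the register representation and the extension type become ordinary type parameters.
\begin{lstlisting}
(* A cut-down OCaml analogue of the joint syntax: 'reg is the register
   representation, 'ext the language specific extension. *)
type ('reg, 'ext) joint_instruction =
  | Comment of string
  | Int of 'reg * int              (* load a byte constant *)
  | Op1 of string * 'reg * 'reg    (* unary operation on the accumulator *)
  | Extension of 'ext

(* The ERTL specific extension, mirroring ertl_statement_extension. *)
type register = int                (* simplified pseudoregister *)
type ertl_extension =
  | New_frame
  | Del_frame
  | Frame_size of register

(* ERTL's syntax is the joint syntax at these parameters. *)
type ertl_instruction = (register, ertl_extension) joint_instruction
\end{lstlisting}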
\paragraph{Shared code, reduced proofs}
Many features of individual backend intermediate languages are shared with other intermediate languages.
For instance, RTLabs, RTL, ERTL and LTL are all graph based languages, where functions are represented as a control flow graph of statements that form their bodies.
Functions for adding statements to a graph, searching the graph, and so on, are remarkably similar across all languages, but are duplicated in the O'Caml code.

As a result, we chose to abstract the representation of internal functions for the RTL, ERTL, LTL and LIN intermediate languages into a `joint' representation.
This representation is parameterised by a record that dictates the layout of the function body for each intermediate language.
For instance, in RTL, the layout is graph like, whereas in LIN, the layout is a linearised list of statements.
Further, a generalised way of accessing the successor statement to the one currently under consideration is needed, and so forth.

Our joint internal function record looks like so:
\begin{lstlisting}
record joint_internal_function (globals: list ident) (p:params globals) : Type[0] :=
{
 ...
 joint_if_params : paramsT p;
 joint_if_locals : localsT p;
 ...
 joint_if_code : codeT ... p;
 ...
}.
\end{lstlisting}
In particular, everything that can vary between differing intermediate languages has been parameterised.
Here, the location from which to fetch parameters, the listing of local variables, and the internal code representation have all been parameterised.
Other particulars are also parameterised, though omitted here.

Hopefully this abstraction process will reduce the number of proofs that need to be written, as proofs about internal functions can be carried out once, over the parameterised representation, rather than repeated for each language.

\paragraph{Dependency on instruction selection}
We note that the backend languages are all essentially `post instruction selection languages'.
The `joint' syntax makes this especially clear.
For instance, in the definition:
\begin{lstlisting}
inductive joint_instruction (p:params__) (globals: list ident): Type[0] :=
 ...
 | INT: generic_reg p $\rightarrow$ Byte $\rightarrow$ joint_instruction p globals
 | MOVE: pair_reg p $\rightarrow$ joint_instruction p globals
 ...
 | PUSH: acc_a_reg p $\rightarrow$ joint_instruction p globals
 ...
 | extension: extend_statements p $\rightarrow$ joint_instruction p globals.
\end{lstlisting}
The capitalised constructors---\texttt{INT}, \texttt{MOVE}, and so on---are all machine specific instructions.
Retargetting the compiler to another microprocessor, improving instruction selection, or simply enlarging the subset of the machine language that the compiler can use, would entail replacing these constructors with constructors that correspond to the instructions of the new target.
We feel that this makes it much more explicit which instructions are target dependent and which are not (i.e. those language specific instructions that fall inside the \texttt{extension} constructor).
In the long term, we would really like to try to directly embed the target language in the syntax, in order to reuse the target language's semantics.

\paragraph{Independent development and testing}
We have essentially modularised the intermediate languages in the compiler backend.
As with any form of modularisation, we reap benefits in the ability to independently test and develop each intermediate language separately, with the benefit of fixing bugs just once.

\paragraph{Future reuse for other compiler projects}
Another advantage of our modularisation scheme is the ability to quickly use and reuse intermediate languages for other compiler projects.
For instance, in creating a cost-preserving compiler for a functional language, we may choose to target a linearised version of RTL directly.
Adding such an intermediate language would involve the addition of just a few lines of code.

\paragraph{Easy addition of new compiler passes}
Under our modularisation and abstraction scheme, new compiler passes can easily be injected into the backend.
We have a concrete example of this in the RTLntl language, an intermediate language that was not present in the original O'Caml code.
To specify a new intermediate language we must simply specify, through the use of the statement extension mechanism, what differs in the new intermediate language from the `joint' language, and configure a new notion of internal function record, by specialising parameters, to the new language.
As generic code for the `joint' language exists, for example to add statements to control flow graphs, this code can be reused for the new intermediate language.

\paragraph{Possible commutations of translation passes}
The backend translation passes of the CerCo compiler differ quite a bit from those of the CompCert compiler.
In the CompCert compiler, linearisation occurs much earlier in the compilation chain, and passes such as register colouring and allocation are carried out on a linearised form of program.
Contrast this with our own approach, where the code is represented as a graph for much longer.
Similarly, in CompCert the calling conventions are enforced after register allocation, whereas we do register allocation before enforcing the calling convention.

However, by abstracting the representation of intermediate functions, we are now much more free to reorder translation passes as we see fit.
The linearisation process, for instance, now no longer cares about the specific representation of code in the source and target languages.
It just relies on a common interface.
We are therefore, in theory, free to pick where we wish to linearise our representation.
This adds an unusual flexibility into the compilation process, and allows us to freely experiment with different orderings of translation passes.

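This reliance on a common interface is easy to illustrate. The following OCaml functor is a sketch of ours, not CerCo code: linearisation is written once, against an abstract notion of code, and can therefore be applied at any point in the chain where that interface is satisfied (ignoring the successor bookkeeping a real linearisation pass would need).
\begin{lstlisting}
(* Sketch: linearisation against an abstract code interface. *)
module type CODE = sig
  type 'a t                                  (* a code structure *)
  val fold : ('a -> 'acc -> 'acc) -> 'a t -> 'acc -> 'acc
end

module Linearise (C : CODE) = struct
  (* Works for any representation supplying a fold, graph or list. *)
  let to_list (code : 'a C.t) : 'a list =
    C.fold (fun stmt acc -> stmt :: acc) code []
end
\end{lstlisting}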
%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%
% SECTION. %
%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%
\subsection{Use of dependent types}
\label{subsect.use.of.dependent.types}

We see several potential ways in which a compiler can fail to compile a program:
\begin{enumerate}
\item
The program is malformed, and there is no hope of making sense of the program.
\item
The compiler is buggy, or an invariant in the compiler is invalidated.
\item
An incomplete heuristic in the compiler fails.
\item
The compiled code exhausts some bounded resource, for instance the processor's code memory.
\end{enumerate}
Standard compilers can fail for all the above reasons.
Certified compilers are only required to rule out the second class of failures, but they can still fail for all the remaining reasons.
In particular, a compiler that systematically refuses to compile any well-formed program is still a sound compiler.
By contrast, we would like our certified compiler to fail only in the fourth case.
We plan to achieve this with the following strategy.

First, the compiler is abstracted over all incomplete heuristics, seen as total functions.
To obtain executable code, the compiler is eventually composed with implementations of the abstracted strategies, with the composition taking care of any potential failure of the heuristics in finding a solution.

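As an illustration of this first point, consider register allocation abstracted over a colouring heuristic. The OCaml sketch below is ours, not CerCo code: the heuristic is a total function that may decline to answer, and the composition falls back to a strategy that always succeeds.
\begin{lstlisting}
(* Sketch: a pass abstracted over a heuristic that may fail to find a
   colouring; the composition supplies a fallback that cannot fail. *)
type pseudoreg = string
type colouring = (pseudoreg * int) list

let allocate
    ~(heuristic : pseudoreg list -> colouring option)
    (regs : pseudoreg list) : colouring =
  match heuristic regs with
  | Some c -> c                                 (* heuristic succeeded *)
  | None -> List.mapi (fun i r -> (r, i)) regs  (* spill-everything fallback *)
\end{lstlisting}
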
Second, we reject all malformed programs using dependent types: only well-formed programs should typecheck and the compiler can be applied only to well-typed programs.

Finally, exhaustion of bounded resources can be checked only at the very end of compilation.
Therefore, all intermediate compilation steps are now total functions that cannot diverge, nor fail: these properties are directly guaranteed by the type system of Matita.

Presently, the plan is not yet fulfilled.
However, we are improving the implemented code according to our plan.
We are doing this by progressively strengthening the code through the use of dependent types.
We detail the different ways in which dependent types have been used so far.

First, we encode informal invariants, or uses of \texttt{assert false} in the O'Caml code, with dependent types, converting partial functions into total functions.
There are numerous examples of this throughout the backend.
For example, in the \texttt{RTLabs} to \texttt{RTL} transformation pass, many functions only `make sense' when lists of registers passed to them as arguments conform to some specific length.
For instance, the \texttt{translate\_negint} function, which translates a negative integer constant:
\begin{lstlisting}
definition translate_negint :=
 $\lambda$globals: list ident.
 $\lambda$destrs: list register.
 $\lambda$srcrs: list register.
 $\lambda$start_lbl: label.
 $\lambda$dest_lbl: label.
 $\lambda$def: rtl_internal_function globals.
 $\lambda$prf: |destrs| = |srcrs|. (* assert here *)
 ...
\end{lstlisting}
The last argument to the function, \texttt{prf}, is a proof that the lengths of the lists of source and destination registers are the same.
This was an assertion in the O'Caml code.

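For contrast, the O'Caml counterpart can only check such an invariant at runtime. A simplified sketch of ours follows; the body is a placeholder, not the real translation.
\begin{lstlisting}
(* Sketch of the O'Caml-style version: the length invariant is a runtime
   assertion rather than a static proof obligation. *)
let translate_negint (destrs : int list) (srcrs : int list) =
  assert (List.length destrs = List.length srcrs);  (* may fail at runtime *)
  List.combine destrs srcrs                         (* placeholder body *)
\end{lstlisting}
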
Secondly, we make use of dependent types to make the Matita code correct by construction, and eventually to make the proofs of correctness for the compiler easier to write.
For instance, many intermediate languages in the backend of the compiler, from RTLabs to LTL, are graph based languages.
Here, function definitions consist of a graph (i.e. a map from labels to statements) and a pair of labels denoting the entry and exit points of this graph.
Practically, we would always like to ensure that the entry and exit labels are present in the statement graph.
We ensure that this is so with a dependent sum type in the \texttt{joint\_internal\_function} record, which all graph based languages specialise to obtain their own internal function representation:
\begin{lstlisting}
record joint_internal_function (globals: list ident) (p: params globals): Type[0] :=
{
 ...
 joint_if_code : codeT $\ldots$ p;
 joint_if_entry : $\Sigma$l: label. lookup $\ldots$ joint_if_code l $\neq$ None $\ldots$;
 ...
}.
\end{lstlisting}
Here, \texttt{codeT} is a parameterised type representing the `structure' of the function's body (a graph in graph based languages, and a list in the linearised LIN language).
Specifically, \texttt{joint\_if\_entry} is a dependent pair consisting of a label and a proof that the label in question is a vertex in the function's graph.
A similar device exists for the exit label.

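OCaml cannot express this dependent pair directly, but the invariant can be approximated with an abstract type and a smart constructor. A sketch of ours:
\begin{lstlisting}
(* Sketch: an entry label that can only be built after a successful
   lookup, approximating the $\Sigma$-type invariant. *)
module Entry : sig
  type t
  val make : code:(string * string) list -> label:string -> t option
  val to_label : t -> string
end = struct
  type t = string
  let make ~code ~label =
    if List.mem_assoc label code then Some label else None
  let to_label l = l
end
\end{lstlisting}
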
We make use of dependent types also for another reason: experimentation.
Namely, CompCert makes little use of dependent types to encode invariants.
In contrast, we wish to make as much use of dependent types as possible, both to experiment with different ways of encoding compilers in a proof assistant, but also as a way of `stress testing' Matita's support for dependent types.

Moreover, at the moment we make practically no use of inductive predicates to specify compiler invariants and to describe the semantics of intermediate languages.
On the contrary, all predicates are computable functions.
Therefore, the proof style that we will adopt will necessarily be significantly different from, say, CompCert's.
At the moment, in Matita, `Russell'-style reasoning (in the sense of~\cite{sozeau:subset:2006}) seems to be the best solution for working with computable functions.
This style is heavily based on the idea that all computable functions should be specified using dependent types to describe their pre- and post-conditions.
As a consequence, it is natural to add dependent types almost everywhere in the Matita compiler's codebase.

%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%
% SECTION. %
%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%
\subsection{What we do not implement}
\label{subsect.what.we.do.not.implement}

There are several classes of functionality that we have chosen not to implement in the backend languages:
\begin{itemize}
\item
\textbf{Datatypes and functions over these datatypes that are not supported by the compiler.}
In particular, the compiler does not support the floating point datatype, nor accompanying functions over that datatype.
At the moment, frontend languages within the compiler possess constructors corresponding to floating point code.
These are removed during instruction selection (in the RTLabs to RTL transformation) using a daemon.\footnote{A Girardism. An axiom of type \texttt{False}, from which we can prove anything.}
However, at some point, we would like the front end of the compiler to recognise programs that use floating point code and reject them as being invalid.
\item
\textbf{Axiomatised components that will be implemented using external oracles.}
Several large, complex pieces of compiler infrastructure, most noticeably register colouring and fixed point calculation during liveness analysis, have been axiomatised.
This was already agreed upon before the start of the project, and is clearly marked in the project proposal, following comments by those involved with the CompCert project about the difficulty of formalising register colouring in that project.
Instead, these components are axiomatised, along with the properties that they need to satisfy in order for the rest of the compilation chain to be correct.
These axiomatised components are found in the ERTL to LTL pass.

It should be noted that these axiomatised components fall into the following pattern: whilst their implementation is complex, and their proof of correctness is difficult, we are able to quickly and easily verify that any answer that they provide is correct (see the sketch after this list).
Therefore, we plan to provide implementations in OCaml only, and to provide certified verifiers in Matita.
At the moment, the implementation of the certified verifiers is left as future work.
\item
\textbf{A few non-computational proof obligations.}
A few difficult-to-close, but non-computational (i.e. they do not prevent us from executing the compiler inside Matita), proof obligations have been closed using daemons in the backend.
These proof obligations originate with our use of dependent types for expressing invariants in the compiler.
However, it should be mentioned that many open proof obligations are simply impossible to close until we start to obtain stronger invariants from the proof of correctness for the compiler proper.
In particular, in the RTLabs to RTL pass, several proof obligations relating to lists of registers stored in a `local environment' appear to fall into this pattern.
\item
\textbf{Branch compression (tunnelling).}
This was a feature of the O'Caml compiler.
It is not yet implemented in the Matita compiler.
This feature is only an optimisation, and will not affect the correctness of the compiler.
\item
\textbf{`Real' tailcalls.}
For the time being, tailcalls in the backend are translated to `vanilla' function calls during the ERTL to LTL pass.
This follows the O'Caml compiler, which did not implement tailcalls and performed the same simplification step.
`Real' tailcalls are being implemented in the O'Caml compiler, and when this implementation is complete, we aim to port this code to the Matita compiler.
\end{itemize}

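The verify-rather-than-certify pattern mentioned above is easy to make concrete. The OCaml sketch below is ours, not CerCo code: checking that a proposed colouring never assigns one colour to two interfering registers is far simpler than producing the colouring.
\begin{lstlisting}
(* Sketch: cheap validation of an untrusted oracle's register colouring.
   Only this checker, not the oracle, would need to be verified. *)
let valid_colouring
    (interferes : string -> string -> bool)  (* interference relation *)
    (colour : string -> int)                 (* the oracle's answer *)
    (regs : string list) : bool =
  List.for_all (fun r1 ->
    List.for_all (fun r2 ->
      (not (interferes r1 r2)) || colour r1 <> colour r2) regs) regs
\end{lstlisting}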
%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%
% SECTION. %
%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%
\section{Associated changes to O'Caml compiler}
\label{sect.associated.changes.to.ocaml.compiler}

At the moment, only bugfixes, and not the architectural changes we have made in the Matita backend, have found their way back into the O'Caml compiler.
We do not see the heavy process of modularisation and abstraction as making its way back into the O'Caml codebase, as this is a significant rewrite of the backend code that is supposed to yield the same code after instantiating parameters, anyway.

%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%
% SECTION. %
%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%
\section{Future work}
\label{sect.future.work}

As mentioned in Section~\ref{subsect.what.we.do.not.implement}, there are several unimplemented features in the compiler, and several aspects of the Matita code that can be improved in order to make currently partial functions total.
We summarise this future work here:
\begin{itemize}
\item
We plan to make use of dependent types to identify `floating point'-free programs and make all functions total over such programs.
This will remove a swathe of uses of daemons.
This should be routine.
\item
We plan to move expansion of integer modulus, and other related functions, into the instruction selection (RTLabs to RTL) phase.
This will also help to remove a swathe of uses of daemons, as well as potentially introducing new opportunities for optimisations that we currently miss by expanding these instructions at the C-light level.
\item
We plan to close all existing proof obligations that are closed using daemons, arising from our use of dependent types in the backend.
However, many may not be closable until we have completed Deliverable D4.4, the certification of the whole compiler, as we may not have invariants strong enough at the present time.
\item
We plan to port the O'Caml compiler's implementation of tailcalls when this is completed, and eventually port the branch compression code currently in the O'Caml compiler to the Matita implementation.
This should not cause any major problems.
\item
We plan to validate the backend translations, removing any obvious bugs, by executing the translation inside Matita on small C programs.
This is not critical, as the certification process will find all bugs anyway.
\item
We plan to provide certified validators for all results provided by external oracles written in OCaml.
At the moment, we have axiomatised oracles for computing least fixpoints during liveness analysis, for colouring registers and for branch displacement in the assembler code.
\end{itemize}

\section{The back-end intermediate languages' semantics in Matita}
\label{sect.backend.intermediate.languages.semantics.matita}

%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%
% SECTION. %
%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%
\subsection{Abstracting related languages}
\label{subsect.abstracting.related.languages.semantics}

As mentioned in the report for Deliverable D4.2, a systematic process of abstraction, over the OCaml code, has taken place in the Matita encoding.
In particular, we have merged many of the syntaxes of the intermediate languages (i.e. RTL, ERTL, LTL and LIN) into a single `joint' syntax, which is parameterised by various types.
Equivalent intermediate languages to those present in the OCaml code can be recovered by specialising this joint structure.

As mentioned in the report for Deliverable D4.2, there are a number of advantages that this process of abstraction brings, from code reuse to allowing us to get a clearer view of the intermediate languages and their structure.
However, the semantics of the intermediate languages allow us to concretely demonstrate this improvement in clarity, by noting that the semantics of the LTL and the LIN languages are identical.
In particular, the semantics of both LTL and LIN are implemented in exactly the same way.
The only difference between the two languages is how the next instruction to be interpreted is fetched.
In LTL, this involves looking up in a graph, whereas in LIN, this involves fetching from a list of instructions.

As a result, we see that the semantics of LIN and LTL are both instances of a single, more general language that is parametric in how the next instruction is fetched.
Furthermore, any prospective proof that the semantics of LTL and LIN are identical is now almost trivial, saving a good deal of work in Deliverable D4.4.

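The parametric-fetch observation can be made concrete with a small OCaml sketch (ours, not CerCo code): the same step function could be reused with either fetch operation.
\begin{lstlisting}
(* Sketch: one semantics, two ways of fetching the next instruction. *)
type label = string
type instruction = string            (* placeholder instruction type *)

(* LTL: code is a graph, fetch is lookup by label. *)
let fetch_graph (code : (label * instruction) list) (l : label) =
  List.assoc_opt l code

(* LIN: code is a list, fetch is lookup by position. *)
let fetch_list (code : instruction list) (pc : int) =
  List.nth_opt code pc
\end{lstlisting}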
%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%
% SECTION. %
%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%
\subsection{Type parameters, and their purpose}
\label{subsect.type.parameters.their.purpose}

We mentioned in the Deliverable D4.2 report that all joint languages are parameterised by a number of types, which are later specialised to each distinct intermediate language.
As this parameterisation process is also dependent on design decisions in the language semantics, we have so far held off summarising the role of each parameter.

We begin the abstraction process with the \texttt{params\_\_} record.
This holds the types of the representations of the different register varieties in the intermediate languages:
\begin{lstlisting}
record params__: Type[1] :=
{
 acc_a_reg: Type[0];
 acc_b_reg: Type[0];
 dpl_reg: Type[0];
 dph_reg: Type[0];
 pair_reg: Type[0];
 generic_reg: Type[0];
 call_args: Type[0];
 call_dest: Type[0];
 extend_statements: Type[0]
}.
\end{lstlisting}
We summarise what these types mean, and how they are used in both the semantics and the translation process:
\begin{center}
\begin{tabular*}{\textwidth}{p{4cm}p{11cm}}
Type & Explanation \\
\hline
\texttt{acc\_a\_reg} & The type of the accumulator A register. In some languages this is implemented as the hardware accumulator, whereas in others this is a pseudoregister.\\
\texttt{acc\_b\_reg} & Similar to the accumulator A field, but for the processor's auxiliary accumulator, B. \\
\texttt{dpl\_reg} & The type of the representation of the low eight bit register of the MCS-51's single 16 bit register, DPL. Can be either a pseudoregister or the hardware DPL register. \\
\texttt{dph\_reg} & Similar to the DPL register but for the eight high bits of the 16-bit register. \\
\texttt{pair\_reg} & Various different `move' instructions have been merged into a single move instruction in the joint language. A value can either be moved to or from the accumulator in some languages, or moved to and from an arbitrary pseudoregister in others. This type encodes how we should move data around the registers and accumulators. \\
\texttt{generic\_reg} & The representation of generic registers (i.e. those that are not devoted to a specific task). \\
\texttt{call\_args} & The actual arguments passed to a function. For some languages this is simply the number of arguments passed to the function. \\
\texttt{call\_dest} & The destination of the function call. \\
\texttt{extend\_statements} & Instructions that are specific to a particular intermediate language, and which cannot be abstracted into the joint language.
\end{tabular*}
\end{center}

As mentioned in the report for Deliverable D4.2, the record \texttt{params\_\_} is enough to be able to specify the instructions of the joint languages:
\begin{lstlisting}
inductive joint_instruction (p: params__) (globals: list ident): Type[0] :=
 | COMMENT: String $\rightarrow$ joint_instruction p globals
 | COST_LABEL: costlabel $\rightarrow$ joint_instruction p globals
 ...
 | OP1: Op1 $\rightarrow$ acc_a_reg p $\rightarrow$ acc_a_reg p $\rightarrow$ joint_instruction p globals
 | COND: acc_a_reg p $\rightarrow$ label $\rightarrow$ joint_instruction p globals
 ...
\end{lstlisting}
Here, we see that the instruction \texttt{OP1} (a unary operation on the accumulator A) can be given quite a specific type, through the use of the \texttt{params\_\_} data structure.

Joint statements can be split into two subclasses: those that simply pass the flow of control onto their successor statement, and those that jump to a potentially remote location in the program.
Naturally, as some intermediate languages are graph based, and others linearised, the act of passing control on to the `successor' instruction can either be the act of following a graph edge in a control flow graph, or of incrementing an index into a list.
We make a distinction between instructions that pass control onto their immediate successors, and those that jump elsewhere in the program, through the use of \texttt{succ}, denoting the immediate successor of the current instruction, in the \texttt{params\_} record described below.
\begin{lstlisting}
record params_: Type[1] :=
{
 pars__ :> params__;
 succ: Type[0]
}.
\end{lstlisting}
The type \texttt{succ} corresponds to labels, in the case of control flow graph based languages, or is instantiated to the unit type for the linearised language, LIN.
Using \texttt{params\_} we can define statements of the joint language:
\begin{lstlisting}
inductive joint_statement (p:params_) (globals: list ident): Type[0] :=
 | sequential: joint_instruction p globals $\rightarrow$ succ p $\rightarrow$ joint_statement p globals
 | GOTO: label $\rightarrow$ joint_statement p globals
 | RETURN: joint_statement p globals.
\end{lstlisting}
Note that in the joint language, instructions are `linear', in that they have an immediate successor.
Statements, on the other hand, consist of either a linear instruction, or a \texttt{GOTO} or \texttt{RETURN} statement, both of which can jump to an arbitrary place in the program.
The conditional jump instruction \texttt{COND} is `linear', since it has an immediate successor, but it also takes an arbitrary location (a label) to jump to.

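The role of \texttt{succ} may be clearer in a throwaway OCaml rendering (ours): for graph based languages the successor is an explicit label, while for LIN it degenerates to \texttt{unit}, the successor being simply the next element of the list.
\begin{lstlisting}
(* Sketch: the successor parameter instantiated per language. *)
type label = string

type ('instr, 'succ) statement =
  | Sequential of 'instr * 'succ   (* an instruction and its successor *)
  | Goto of label
  | Return

type 'i graph_statement = ('i, label) statement  (* RTL, ERTL, LTL *)
type 'i lin_statement   = ('i, unit)  statement  (* LIN *)
\end{lstlisting}
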
For the semantics, we need further parameterised types.
In particular, we parameterise the result and parameter type of an internal function call in \texttt{params0}:
\begin{lstlisting}
record params0: Type[1] :=
{
 pars__' :> params__;
 resultT: Type[0];
 paramsT: Type[0]
}.
\end{lstlisting}
Here, \texttt{paramsT} and \texttt{resultT} are typically the (pseudo)registers that store the parameters and result of a function.

We further extend \texttt{params0} with a type for local variables in internal function calls:
\begin{lstlisting}
record params1 : Type[1] :=
{
 pars0 :> params0;
 localsT: Type[0]
}.
\end{lstlisting}
Again, we expand our parameters with types corresponding to the code representation (either a control flow graph or a list of statements).
Further, we hypothesise a generic method for looking up the next instruction in the graph, called \texttt{lookup}.
Note that \texttt{lookup} may fail, and returns an \texttt{option} type:
\begin{lstlisting}
record params (globals: list ident): Type[1] :=
{
 succ_ : Type[0];
 pars1 :> params1;
 codeT : Type[0];
 lookup: codeT $\rightarrow$ label $\rightarrow$ option (joint_statement (mk_params_ pars1 succ_) globals)
}.
\end{lstlisting}
We now have what we need to define internal functions for the joint language.
The first two `universe' fields are only used in the compilation process, for generating fresh names, and do not affect the semantics.
The rest of the fields affect both compilation and semantics.
In particular, we have a description of the result, parameters and the local variables of a function.
Note also that we have lifted the hypothesised \texttt{lookup} function from \texttt{params} into a dependent sigma type, which combines a label (the entry and exit points of the control flow graph or list) with a proof that the label is in the graph structure:
\begin{lstlisting}
record joint_internal_function (globals: list ident) (p:params globals) : Type[0] :=
{
 joint_if_luniverse: universe LabelTag;
 joint_if_runiverse: universe RegisterTag;
 joint_if_result : resultT p;
 joint_if_params : paramsT p;
 joint_if_locals : localsT p;
 joint_if_stacksize: nat;
 joint_if_code : codeT ... p;
 joint_if_entry : $\Sigma$l: label. lookup ... joint_if_code l $\neq$ None ?;
 joint_if_exit : $\Sigma$l: label. lookup ... joint_if_code l $\neq$ None ?
}.
\end{lstlisting}
Naturally, a question arises as to why we have chosen to split up the parameterisation into so many intermediate records, each slightly extending earlier ones.
The reason is that some intermediate languages share a host of parameters, and only differ on some others.
For instance, in instantiating the ERTL language, certain parameters are shared with RTL, whilst others are ERTL specific:
\begin{lstlisting}
...
definition ertl_params__: params__ :=
 mk_params__ register register register register (move_registers $\times$ move_registers)
 register nat unit ertl_statement_extension.
...
definition ertl_params1: params1 := rtl_ertl_params1 ertl_params0.
definition ertl_params: $\forall$globals. params globals := rtl_ertl_params ertl_params0.
...
definition ertl_statement := joint_statement ertl_params_.

definition ertl_internal_function :=
 $\lambda$globals.joint_internal_function ... (ertl_params globals).
\end{lstlisting}
Here, \texttt{rtl\_ertl\_params1} are the common parameters of the ERTL and RTL languages:
\begin{lstlisting}
definition rtl_ertl_params1 := $\lambda$pars0. mk_params1 pars0 (list register).
\end{lstlisting}

The record \texttt{more\_sem\_params} bundles together functions that store and retrieve values in various forms of register:
\begin{lstlisting}
record more_sem_params (p:params_): Type[1] :=
{
 framesT: Type[0];
 empty_framesT: framesT;

 regsT: Type[0];
 empty_regsT: regsT;

 call_args_for_main: call_args p;
 call_dest_for_main: call_dest p;

 greg_store_: generic_reg p $\rightarrow$ beval $\rightarrow$ regsT $\rightarrow$ res regsT;
 greg_retrieve_: regsT $\rightarrow$ generic_reg p $\rightarrow$ res beval;
 acca_store_: acc_a_reg p $\rightarrow$ beval $\rightarrow$ regsT $\rightarrow$ res regsT;
 acca_retrieve_: regsT $\rightarrow$ acc_a_reg p $\rightarrow$ res beval;
 ...
 dpl_store_: dpl_reg p $\rightarrow$ beval $\rightarrow$ regsT $\rightarrow$ res regsT;
 dpl_retrieve_: regsT $\rightarrow$ dpl_reg p $\rightarrow$ res beval;
 ...
 pair_reg_move_: regsT $\rightarrow$ pair_reg p $\rightarrow$ res regsT;
}.
\end{lstlisting}
Here, the fields \texttt{empty\_framesT}, \texttt{empty\_regsT}, \texttt{call\_args\_for\_main} and \texttt{call\_dest\_for\_main} are used for state initialisation.

The fields \texttt{greg\_store\_} and \texttt{greg\_retrieve\_} store and retrieve values from a generic register, respectively.
Similarly, \texttt{pair\_reg\_move\_} implements the generic move instruction of the joint language.
Here \texttt{framesT} is the type of stack frames, with \texttt{empty\_framesT} an empty stack frame.

The two hypothesised values \texttt{call\_args\_for\_main} and \texttt{call\_dest\_for\_main} deal with problems with the \texttt{main} function of the program, and how it is handled.
In particular, we need to know when the \texttt{main} function has finished executing.
This is complicated, in C, by the fact that the \texttt{main} function is explicitly allowed to be recursive (something disallowed in C++).
Therefore, to understand whether the exiting \texttt{main} function is really exiting, or just recursively calling itself, we need to remember the address to which \texttt{main} will return control once the initial call to \texttt{main} has finished executing.
This is done with \texttt{call\_dest\_for\_main}, whereas \texttt{call\_args\_for\_main} holds the \texttt{main} function's arguments.

We extend \texttt{more\_sem\_params} with yet more parameters via \texttt{more\_sem\_params1}:
\begin{lstlisting}
record more_sem_params1 (globals: list ident) (p: params globals) : Type[1] :=
{
 more_sparams1 :> more_sem_params p;

 succ_pc: succ p $\rightarrow$ address $\rightarrow$ res address;
 pointer_of_label: genv ... p $\rightarrow$ pointer $\rightarrow$
  label $\rightarrow$ res ($\Sigma$p:pointer. ptype p = Code);
 ...
 fetch_statement:
  genv ... p $\rightarrow$ state (mk_sem_params ... more_sparams1) $\rightarrow$
  res (joint_statement (mk_sem_params ... more_sparams1) globals);
 ...
 save_frame:
  address $\rightarrow$ nat $\rightarrow$ paramsT ... p $\rightarrow$ call_args p $\rightarrow$ call_dest p $\rightarrow$
  state (mk_sem_params ... more_sparams1) $\rightarrow$
  res (state (mk_sem_params ... more_sparams1));
 pop_frame:
  genv globals p $\rightarrow$ state (mk_sem_params ... more_sparams1) $\rightarrow$
  res ((state (mk_sem_params ... more_sparams1)));
 ...
 set_result:
  list val $\rightarrow$ state (mk_sem_params ... more_sparams1) $\rightarrow$
  res (state (mk_sem_params ... more_sparams1));
 exec_extended:
  genv globals p $\rightarrow$ extend_statements (mk_sem_params ... more_sparams1) $\rightarrow$
  succ p $\rightarrow$ state (mk_sem_params ... more_sparams1) $\rightarrow$
  IO io_out io_in (trace $\times$ (state (mk_sem_params ... more_sparams1)))
}.
\end{lstlisting}
The field \texttt{succ\_pc} takes an address, and a `successor' label, and returns the address of the instruction immediately succeeding the one at hand.

Here, \texttt{fetch\_statement} fetches the next statement to be executed.
The fields \texttt{save\_frame} and \texttt{pop\_frame} manipulate stack frames.
In particular, \texttt{save\_frame} creates a new stack frame on the top of the stack, saving the destination and parameters of a function, and returning an updated state.
The field \texttt{pop\_frame} destructively pops a stack frame from the stack, returning an updated state.
Further, \texttt{set\_result} saves the result of the function computation, and \texttt{exec\_extended} is a function that executes the extended statements, peculiar to each individual intermediate language.

We bundle \texttt{params} and \texttt{sem\_params} together into a single record.
This will be used in the function \texttt{eval\_statement} which executes a single statement of the joint language:
\begin{lstlisting}
record sem_params2 (globals: list ident): Type[1] :=
{
 p2 :> params globals;
 more_sparams2 :> more_sem_params2 globals p2
}.
\end{lstlisting}
\noindent
The \texttt{state} record holds the current state of the interpreter:
\begin{lstlisting}
record state (p: sem_params): Type[0] :=
{
 st_frms: framesT ? p;
 pc: pointer;
 sp: pointer;
 isp: pointer;
 carry: beval;
 regs: regsT ? p;
 m: bemem
}.
\end{lstlisting}
Here \texttt{st\_frms} represents the stack frames, \texttt{pc} the program counter, \texttt{sp} the stack pointer, \texttt{isp} the internal stack pointer, \texttt{carry} the carry flag, \texttt{regs} the registers (hardware and pseudoregisters) and \texttt{m} external RAM.
Note that we have two stack pointers, as we have two stacks: the physical stack of the MCS-51 microprocessor, and an emulated stack in external RAM.
The MCS-51's own stack is minuscule; it is therefore usual to emulate a much larger, more useful stack in external RAM.
We require two stack pointers as the MCS-51's \texttt{PUSH} and \texttt{POP} instructions manipulate the physical stack, and not the emulated one.

We use the function \texttt{eval\_statement} to evaluate a single joint statement:
\begin{lstlisting}
definition eval_statement:
 $\forall$globals: list ident.$\forall$p:sem_params2 globals.
  genv globals p $\rightarrow$ state p $\rightarrow$ IO io_out io_in (trace $\times$ (state p)) :=
...
\end{lstlisting}
We examine the type of this function.
Note that it returns a monadic action, \texttt{IO}, denoting that it may have an IO \emph{side effect}, where the program reads or writes to some external device or memory address.
Further, the function returns a new state, updated by the single step of execution of the program.
Finally, a \emph{trace} is also returned, which records externally `observable events', such as the calling of external functions and the emission of cost labels.

%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%
% SECTION. %
%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%-%
\subsection{Monads}
\label{subsect.monads}

Monads are a categorical notion that has recently gained traction in functional programming circles.
In particular, it was noted by Moggi that monads could be used to sequence \emph{effectful} computations in a pure manner.
Here, `effectful computations' cover a lot of ground, from writing to files, to generating fresh names, to updating an ambient notion of state.

A monad can be characterised by the following:
\begin{itemize}
\item
A data type, $M$.
For instance, the \texttt{option} type in OCaml or Matita.
\item
A way to `inject' or `lift' pure values into this data type (usually called \texttt{return}).
We call this function \texttt{return} and say that it must have type $\alpha \rightarrow M \alpha$, where $M$ is the name of the monad.
In our example, the `lifting' function for the \texttt{option} monad can be implemented as:
\begin{lstlisting}
let return x = Some x
\end{lstlisting}
\item
A way to `sequence' monadic functions together, to form another monadic function, usually called \texttt{bind}.
Bind has type $M \alpha \rightarrow (\alpha \rightarrow M \beta) \rightarrow M \beta$.
We can see that bind `unpacks' a monadic value, applies a function after unpacking, and `repacks' the new value in the monad.
In our example, the sequencing function for the \texttt{option} monad can be implemented as:
\begin{lstlisting}
let bind o f =
  match o with
  | None -> None
  | Some s -> f s
\end{lstlisting}
\item
A series of algebraic laws that relate \texttt{return} and \texttt{bind}, ensuring that the sequencing operation `does the right thing' by retaining the order of effects (see the sketch after this list).
These \emph{monad laws} should also be useful in reasoning about monadic computations in the proof of correctness of the compiler.
\end{itemize}
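To make the laws concrete for the \texttt{option} monad above, here is a sketch (ours) of the three laws as executable OCaml equalities; each function should return \texttt{true} for all arguments.
\begin{lstlisting}
(* The three monad laws for the option monad, as runnable checks. *)
let return x = Some x
let bind o f = match o with None -> None | Some s -> f s

(* Left identity:  bind (return a) f = f a *)
let left_identity a f = bind (return a) f = f a

(* Right identity: bind m return = m *)
let right_identity m = bind m return = m

(* Associativity:  bind (bind m f) g = bind m (fun x -> bind (f x) g) *)
let assoc m f g = bind (bind m f) g = bind m (fun x -> bind (f x) g)
\end{lstlisting}
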
683In the semantics of both front and back-end intermediate languages, we make use of monads.
684This monadic infrastructure is shared between the front-end and back-end languages.
685
686In particular, an IO' monad, signalling the emission of a cost label, or the calling of an external function, is heavily used in the semantics of the intermediate languages.
687Here, the monad's sequencing operation ensures that cost label emissions and function calls are maintained in the correct order.
688We have already seen how the \texttt{eval\_statement} function of the joint language is monadic, with type:
689\begin{lstlisting}
690definition eval_statement:
691 $\forall$globals: list ident.$\forall$p:sem_params2 globals.
692 genv globals p $\rightarrow$ state p $\rightarrow$ IO io_out io_in (trace $\times$ (state p)) :=
693...
694\end{lstlisting}
If we examine the body of \texttt{eval\_statement}, we may also see how the monad sequences effects.
For instance, in the case for the \texttt{LOAD} statement, we have the following:
\begin{lstlisting}
definition eval_statement:
 $\forall$globals: list ident. $\forall$p:sem_params2 globals.
 genv globals p $\rightarrow$ state p $\rightarrow$ IO io_out io_in (trace $\times$ (state p)) :=
 $\lambda$globals, p, ge, st.
 ...
 match s with
 [ LOAD dst addrh addrl $\Rightarrow$
  ! vaddrh $\leftarrow$ dph_retrieve ... st addrh;
  ! vaddrl $\leftarrow$ dpl_retrieve ... st addrl;
  ! vaddr $\leftarrow$ pointer_of_address $\langle$vaddrl,vaddrh$\rangle$;
  ! v $\leftarrow$ opt_to_res ... (msg FailedLoad) (beloadv (m ... st) vaddr);
  ! st $\leftarrow$ acca_store p ... dst v st;
  ! st $\leftarrow$ next ... l st ;
  ret ? $\langle$E0, st$\rangle$
\end{lstlisting}
Here, we employ a certain degree of syntactic sugaring.
The syntax
\begin{lstlisting}
 ...
! vaddrh $\leftarrow$ dph_retrieve ... st addrh;
! vaddrl $\leftarrow$ dpl_retrieve ... st addrl;
 ...
\end{lstlisting}
is sugaring for the \texttt{IO} monad's binding operation.
We can expand this sugaring to the following much more verbose code:
\begin{lstlisting}
 ...
 bind (dph_retrieve ... st addrh) ($\lambda$vaddrh. bind (dpl_retrieve ... st addrl)
 ($\lambda$vaddrl. ...))
\end{lstlisting}
Note also that the function \texttt{ret} implements the `lifting', or \texttt{return}, function of the \texttt{IO} monad.

We believe the sugaring for the monadic bind operation makes the program much more readable, and therefore easier to reason about.
In particular, note that the functions \texttt{dph\_retrieve}, \texttt{pointer\_of\_address}, \texttt{acca\_store} and \texttt{next} are all monadic.

Note, however, that inside this monadic code there is also another monad hiding.
The \texttt{res} monad signals failure, along with an error message.
The monad's sequencing operation ensures that the order of error messages is preserved.
The function \texttt{opt\_to\_res} lifts an option type into this monad, with an error message to be used in case of failure.
The \texttt{res} monad is then coerced into the \texttt{IO} monad, ensuring the whole code snippet typechecks.
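For illustration, a minimal OCaml sketch of such an error monad (ours, mirroring but not reproducing the Matita definitions) might look as follows:
\begin{lstlisting}
(* Illustrative sketch of a res-style error monad; not the Matita code. *)
type 'a res = OK of 'a | Error of string

let ret x = OK x

let bind r f =
  match r with
  | Error msg -> Error msg  (* the first failure, and its message, propagates *)
  | OK v -> f v

(* Lift an option into res, supplying a message for the None case,
   in the manner of opt_to_res. *)
let opt_to_res msg = function
  | None -> Error msg
  | Some v -> OK v
\end{lstlisting}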

\subsection{Memory models}
\label{subsect.memory.models}

Currently, the semantics of the front and back-end intermediate languages are built around two distinct memory models.
The front-end languages reuse the CompCert 1.6 memory model, whereas the back-end languages employ a version tailored to their needs.
This split between the memory models reflects the fact that the front-end and back-end languages have different requirements from their memory models.

In particular, the CompCert 1.6 memory model places quite heavy restrictions on where in memory one can read from.
To read a value in this memory model, you must supply an address, complete with the size of `chunk' to read following that address.
The read is only successful if you attempt to read at a genuine `value boundary', and read the appropriate amount of memory for that value.
As a result, with that memory model you are unable to read the third byte of a 32-bit integer value directly from memory, for instance.
This has some consequences for the compiler, namely an inability to write a \texttt{memcpy} routine.

However, the CerCo memory model operates differently, as we need to move data `piecemeal' between stacks in the back-end of the compiler.
As a result, the back-end memory model allows one to read data at any memory location, not just on value boundaries.
This has the advantage that we can successfully give a semantics to a \texttt{memcpy} routine in the back-end of the CerCo compiler (remembering that \texttt{memcpy} is nothing more than `read a byte, copy a byte' repeated in a loop), something CompCert cannot do.
The front-end of CerCo cannot either, because its memory model and values are similar to those of CompCert 1.6.
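As a toy illustration of the point (our own sketch, under the simplifying assumption that memory is just a partial map from addresses to bytes, eliding blocks, permissions and pointer values), the `read a byte, copy a byte' loop looks as follows in OCaml:
\begin{lstlisting}
module M = Map.Make (Int)

type mem = char M.t

(* Byte-granularity accessors: a read fails on unmapped addresses. *)
let load_byte (m : mem) (addr : int) : char option = M.find_opt addr m
let store_byte (m : mem) (addr : int) (b : char) : mem = M.add addr b m

(* memcpy as `read a byte, copy a byte' repeated in a loop. *)
let rec memcpy (m : mem) ~src ~dst ~len : mem option =
  if len = 0 then Some m
  else
    match load_byte m src with
    | None -> None  (* reading unmapped memory fails *)
    | Some b ->
      memcpy (store_byte m dst b) ~src:(src + 1) ~dst:(dst + 1) ~len:(len - 1)
\end{lstlisting}
Under a chunk-based model, by contrast, a byte-level load would only succeed at value boundaries, and this loop could not be given a semantics.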

More recent versions of CompCert's memory model have evolved in a similar direction, with a byte-by-byte representation of memory blocks. However, there remains an important difference in the handling of pointer values in the rest of the formalisation. In particular, in CompCert 1.10 only complete pointer values can be loaded in all of the languages in the compiler, whereas in CerCo we need to represent individual bytes of a pointer in the back-end to support our 8-bit target architecture.

Right now, the two memory models are interfaced during the translation from RTLabs to RTL.
It is an open question whether we will unify the two memory models, using only the bespoke back-end memory model throughout the compiler; the CompCert memory model seems to work fine for the front-end, where such byte-by-byte copying is not needed.
However, should we decide to port the front-end to the new memory model, the front-end semantics has been written in such an abstract way that doing so would be relatively straightforward.
Hello! I’m Daniel, a Software Engineer based in Edinburgh. This is where I note down useful things I don’t want to forget and also where I document some of the personal projects I’ve worked on.
# Blog Posts
### Issues with floating point
Floating point values normally just work, but there are a few issues with them that are useful to be aware of! My previous post discussed the representation of values, but this one will talk more about the times where things might not work quite as expected. I’m aiming for this to be a practical guide with some simple rules to follow, rather than an exhaustive study into all the issues with floating point.
### Floating point numbers: some basics
A recent discussion with a colleague about issues with floating point comparisons made me realise that my knowledge of best practices boiled down to comparing floating point values using tolerances and switching to double if issues with accuracy popped up. I figured it was time to look into it further and get a better understanding of what is actually going on.
### Python collection classes: a summary
Following on from the C++ collections post, it’s time to create a similar overview page for Python! There are more collection classes than this, but I wanted to revise the basics.
### C++ collection classes: a summary
As I work through a bunch of algorithm problems in C++, I thought it would be useful to create a summary of the collection classes built into the standard library.
### Thoughtworks Technology Radar Vol. 19: Notes
I spent a little time over the New Year catching up on some reading, giving me an opportunity to skim through the 2018 ThoughtWorks Technology Radar to get an overview of interesting developments in the field. Here are some of the things that caught my eye.
### Running a Code Jam
Recently, a number of teams at work have started to make use of Docker. To improve our Docker knowledge across the company, we organised a Code Jam. We’ve run a number of these events in the past and, after some experimentation, we’ve settled on a format that seems to work well for us.
### Docker Cheat Sheet
I’ve been playing with Docker recently, but not enough that I always remember the commands. Here’s my cheat sheet for future Docker use.
### Notes from Codility lessons
Codility has a number of lessons online to help candidates prepare for the problems on the site. I figured it might be worthwhile to make a summary of some of the algorithms from the lessons that I more easily forget.
### Setting up Jekyll for building GitHub pages
Time to resurrect the old GitHub Pages site! I haven’t really touched this for the last two years, so it’s time I brought the site up to date. One part of this is installing Jekyll locally on my Mac so I can test the site without continually uploading it to GitHub.
### Using three.js with TypeScript
I tend to modify more projects than I create, so while I can often remember APIs, I often forget the steps I used to set everything up. Therefore, this page is a future reference for me for when I've forgotten how to do all this. (If you haven't done this before then hopefully this will serve as a good starting point!)
# How do you simplify (a^3/b^5)^4 and write it using only positive exponents?
Sep 15, 2016
#### Answer:
$a^{12}/b^{20}$
#### Explanation:
Using the law of exponents, recall that
$\left(\frac{a^m}{b^m}\right)^n = \frac{a^{mn}}{b^{mn}}$
$\Rightarrow \left(\frac{a^3}{b^5}\right)^4 = \frac{a^{3 \times 4}}{b^{5 \times 4}} = \frac{a^{12}}{b^{20}}$
# Equation of tangent to parabola
The equation of the tangent to the parabola $y^2 = 4ax$ at any parametric point $P(t) = (at^2,\, 2at)$, where $t$ is a parameter and $t \in \mathbb{R}$, is given by $yt = x + at^2$.
Hence $1/t$ is the slope of the tangent at the point $P(t)$.
Let $m = 1/t$. Then the equation of the tangent becomes $\frac{y}{m} = x + \frac{a}{m^2}$,
i.e. the tangent is $y = mx + a/m$, where $m$ is the slope of the tangent.
## Application of tangent in slope form
Find the tangents to the parabola $y^2 = 8x$ drawn from the point $(-1,-1)$.
We know that the equation of a tangent with slope $m$ to the parabola $y^2 = 4ax$ is given by $y = mx + a/m$.
Here $a = 2$, hence $y = mx + 2/m$ is a tangent with slope $m$ to the parabola $y^2 = 8x$, and the tangent passes through the point $(-1,-1)$.
Hence we get $-1 = -m + 2/m$, i.e. $m^2 - m - 2 = 0$.
Hence $m = 2, -1$ are the slopes of the required tangents.
Thus the equations of the tangents to the parabola $y^2 = 8x$ are $y = 2x + 1$ and $y = -x - 2$.
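As a quick check (my addition, not part of the original derivation), substituting $y = 2x+1$ into $y^2 = 8x$ gives $(2x+1)^2 = 8x$, i.e. $4x^2 - 4x + 1 = 0$, i.e. $(2x-1)^2 = 0$, a repeated root at $x = \frac{1}{2}$; the line meets the parabola in exactly one point, confirming tangency.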
If the equation of the parabola is $(y-k)^2 = 4a(x-h)$, then the tangent in slope form is $(y-k) = m(x-h) + a/m$.
Similarly, if the equation of the parabola is $x^2 = 4ay$, then the tangent in slope form is $y = mx - am^2$, and for a parabola of the form $(x-h)^2 = 4a(y-k)$, the tangent with slope $m$ is $y - k = m(x-h) - am^2$.
1. Feb 20, 2013
### Jim Kata
I have been trying to teach myself some category theory, and have been working through some proofs. I didn't understand the proofs I read showing that colimits (in the case of a small category) can be given in terms of coproducts and coequalizers. Here is my attempt at a proof. I would appreciate someone correcting my mistakes and explaining the aspects I don't understand. I'm sorry if it is a bit discombobulating, since I'm not sure how to draw diagrams in LaTeX. Let $$\mathcal{F} : \mathcal{B} \rightarrow \mathcal{A}$$ be a diagram where $$\mathcal{B}$$ is a small category. Let $$X_j$$ be an object in $$\mathcal{B}$$ (we can index it, I guess, because $$\mathcal{B}$$ is a small category?)
let $$\varphi : X_{j} \rightarrow X_{l}$$
so $$\mathcal{F}(\varphi) : \mathcal{F}(X_{j}) \rightarrow \mathcal{F}(X_{l})$$
Since the coproduct exists for every $$X_j$$
there exists $$i_j:X_j \rightarrow \coprod_{Obj \mathcal{B}}B$$
so there are two morphisms $$\mathcal{F}(i_j):\mathcal{F}(X_j) \rightarrow \mathcal{F}(\coprod_{Obj \mathcal{B}}B)$$
and $$\mathcal{F}(i_l\varphi):\mathcal{F}(X_j) \rightarrow \mathcal{F}(X_l)\rightarrow \mathcal{F}(\coprod_{Obj \mathcal{B}}B)$$
Using the existence of the coequalizer we have the cocone $$(\phi,Q)$$
where $$\phi(X_j)= q\circ\mathcal{F}(i_j): \mathcal{F}(X_j) \rightarrow Q$$ and by the universal property of the coequalizer we get the universal property of the cocones. I guess my problem is I don't see how I ever used the universal property of the coproduct and I'm not sure I used the small category part right?
# Browse Dissertations and Theses - Mathematics by Title
• (2004) For each pair of linear orderings $(L, M)$, the representability number $repr_M(L)$ of $L$ in $M$ is the least ordinal $\alpha$ such that $L$ can be order-embedded into the lexicographic power $M^{\alpha}_{lex}$. The case $M = \mathbb{R}$ is relevant ...
• (1980) A central extension of finite groups $e: 0 \rightarrow A \rightarrow E \rightarrow G \rightarrow 1$ is said to be a stem extension of $G$ if $A$ is contained in the commutator subgroup $E'$ of $E$. Schur showed that $A$ must be isomorphic to ...
• (1975)
• (1987) This thesis deals with the asymptotic behavior of stopping rules ${\rm T\sb{A}}$ and ${\rm T\sb{d}}$ proposed by Martinsek (Ann. Statist., 12 (1984):533-550). The asymptotic normality of these stopping rules, when A tends ...
• (1982) In Chapter I we improve upon results on the almost sure approximation of the empirical process of weakly dependent random vectors, recently obtained by Berkes and Philipp and Philipp and Pinzur. For strongly mixing sequences ...
• (1954)
• (1993) The topic of my thesis is the geometry of projective homogeneous spaces $G/H$ for a semisimple algebraic group $G$ in characteristic $p > 0$, where $H$ is a subgroup scheme containing a Borel subgroup $B$. In characteristic $p$ ...
• (1967)
• (2014-01-16) This thesis is concerned with the restriction theory of the Fourier transform. We prove two restriction estimates for the Fourier transform. The first is a bilinear estimate for the light cone when the exponents are on a ...
• (2018-07-10) We consider a generalization of the linear search problem where the searcher has low sensing capabilities on two rays. We first show the necessary conditions for an optimal search plan to exist. We then investigate ...
• (1959)
• (2018-11-08) For $k,n\ge 1$, the jet space $J^k(\R^n)$ is the set of $k^{th}$-order Taylor polynomials of functions in $C^k(\R^n)$. Warhurst constructs a Carnot group structure on $J^k(\R^n)$ such that the jets of functions in ...
• (1968)
• (2009) The purpose of this work is to provide a clearer picture between traditional approximation properties of C*-algebras and the recent local approximations, via operator spaces, of nuclear C*-algebras introduced by Junge, ...
• (1998) In this thesis some aspects of a local theory for operator algebras are explored. The main purpose is to provide some tools for studying locally compact quantum groups. We first consider inverse limits of $C\sp*$-algebras ...
• (1965)
• (2007) We investigate several problems related to the multiplicative structure of integers. First, we determine the order of magnitude of the function $H_2(x, y, z)$, the number of positive integers $n \le x$ having exactly two ...
• (1986) It is of the utmost importance to know whether ZG has locally free cancellation in many applications where ZG is the integral group ring of a finite group G over Z. Jacobinski's cancellation theorem implies that cancellation ...
• (1970)
• (1979)
When quoting this document, please refer to the following
DOI: 10.4230/LIPIcs.STACS.2010.2460
URN: urn:nbn:de:0030-drops-24605
URL: http://drops.dagstuhl.de/opus/volltexte/2010/2460/
Planar Subgraph Isomorphism Revisited
Abstract
The problem of {\sc Subgraph Isomorphism} is defined as follows: Given a \emph{pattern} $H$ and a \emph{host graph} $G$ on $n$ vertices, does $G$ contain a subgraph that is isomorphic to $H$? Eppstein [SODA 95, J'GAA 99] gives the first linear time algorithm for subgraph isomorphism for a fixed-size pattern, say of order $k$, and arbitrary planar host graph, improving upon the $O(n^{\sqrt{k}})$-time algorithm when using the ``Color-coding'' technique of Alon et al [J'ACM 95]. Eppstein's algorithm runs in time $k^{O(k)} n$, that is, the dependency on $k$ is superexponential. We improve the running time to $2^{O(k)} n$, that is, single exponential in $k$ while keeping the term in $n$ linear. Next to deciding subgraph isomorphism, we can construct a solution and count all solutions in the same asymptotic running time. We may enumerate $\omega$ subgraphs with an additive term $O(\omega k)$ in the running time of our algorithm. We introduce the technique of ``embedded dynamic programming'' on a suitably structured graph decomposition, which exploits the number and topology of the underlying drawings of the subgraph pattern (rather than of the host graph).
BibTeX - Entry
@InProceedings{dorn:LIPIcs:2010:2460,
author = {Frederic Dorn},
title = {{Planar Subgraph Isomorphism Revisited}},
booktitle = {27th International Symposium on Theoretical Aspects of Computer Science},
pages = {263--274},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
ISBN = {978-3-939897-16-3},
ISSN = {1868-8969},
year = {2010},
volume = {5},
editor = {Jean-Yves Marion and Thomas Schwentick},
publisher = {Schloss Dagstuhl--Leibniz-Zentrum fuer Informatik}
}
# Math Help - complex fractions
1. ## complex fractions
what is the technique when solving complex fractions?
$\frac{a}{1 - \frac{1}{1 + \frac{1}{a-1}}}$
2. invert and multiply
3. ## Fractions
Hello 21_knip
Originally Posted by 21_knip
what is the technique when solving complex fractions?
$\frac{a}{1 - \frac{1}{1 + \frac{1}{a-1}}}$
If I read your attachment correctly, start with the 1 and the fraction at the bottom first, and write them over a common denominator; like this:
$1 + \frac{1}{a-1} = \frac{a-1}{a-1}+\frac{1}{a-1}$
$= \frac{a-1+1}{a-1}$
$= \frac{a}{a-1}$
You now need 1 over this answer; that's $\frac{1}{\frac{a}{a-1}}$
This is simply the reciprocal of the fraction: turn it 'upside-down': $\frac{a-1}{a}$
Now you do a similar thing for the remaining 1 and this new fraction:
$1 - \frac{a-1}{a} = \frac{a}{a} - \frac{a-1}{a}$
$= \frac{a-a{\color{red}+}1}{a}$ Watch that sign!
$= \frac{1}{a}$
And so finally (as chiph588@ said), invert and multiply:
$\frac{a}{\frac{1}{a}}$
$= a \times \frac{a}{1}$
$= a^2$
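A quick sanity check (my addition): with $a=2$, we get $1+\frac{1}{a-1}=2$, then $1-\frac{1}{2}=\frac{1}{2}$, and finally $\frac{2}{1/2}=4=2^2$, matching $a^2$.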
I hope you followed all that.
# [OS X TeX] graphics
Gerard Walschap gwalschap at cox.net
Mon Feb 6 20:04:30 EST 2006
Hi all,
I have a question that has been discussed before on this list (but
I'm too lazy to track it down) and to which I used to know the answer
(but have forgotten it): one can export graphics directly from
Mathematica to LaTeX. In fact, exporting the file automatically
creates a TeX document that invokes the graphics package, and says
"\includegraphics{whatever.epg}".
When I run this in TeXShop, though, I get complaints saying it can't
find the file whatever.epg, even though that file sits cozily next to
the TeX file inside ~/Documents/TeXShopfiles/. What am I doing wrong?
## Kylerec – Seiberg-Witten and Fillings
The first post on Kylerec is here, so if you haven’t been keeping up with these posts, you might want to start there. This is also the last post on the Kylerec 2017 workshop, which has been fun and rewarding to write about (with some much appreciated help from Agustín Moreno)!
If you’re following along with the lecture notes from Kylerec, then this post corresponds to Day 4 (Talks 11-14), consisting of the following four talks:
### A pre-introduction to Seiberg-Witten theory
The Seiberg-Witten equations have been discussed in this blog by Laura Starkston in a sequence of four posts. For more information, details, and clarification, the interested reader should go there, or to the notes of Hutchings and Taubes mentioned in the introduction to this post. I call this a pre-introduction because the details will be rather sketchy. I will not even write the Seiberg-Witten equations down. The reader interested in skipping to fillings may wish to jump ahead to the two-sentence summary at the end of this section.
I should mention that the Seiberg-Witten equations arise naturally in physics, although I’ve not yet personally taken the time to understand Witten’s motivation for first writing down these equations, which he called the monopole equations. If you are interested in that sort of thing, maybe check out this MathOverflow post.
Consider a closed oriented smooth 4-manifold $X$, together with the following data:
• a Riemannian metric $g$
• a self-dual 2-form $\mu$ (meaning $*\mu = \mu$ where $*$ is the Hodge star with respect to $g$)
• a $\textbf{spin}^c$-structure $\mathfrak{s} \in \mathcal{S}_X$
You might be asking – what’s a $\text{spin}^c$-structure? Recall that for $n \geq 3$, one has $\pi_1(\text{SO}(n)) = \mathbb{Z}/2\mathbb{Z}$, so one can form the connected double cover $\text{Spin}(n)$. Then one defines the Lie group
$\text{Spin}^c(n) = (\text{Spin}(n) \times U(1))/ \pm 1.$
This comes with a map to $\text{SO}(n)$ with fiber $U(1)$. The metric $g$ yields a principal $\text{SO}(n)$-bundle called the frame bundle, which topologically doesn’t depend on the metric, and a $\text{spin}^c$-structure is just a principal $\text{Spin}^c(n)$-bundle such that quotienting by the $U(1)$-action (sitting inside the $\text{Spin}^c(n)$-action) recovers the frame bundle.
For $X^4$ oriented, which is the case of interest to us, the space of $\text{spin}^c$-structures, $\mathcal{S}_X$, is an affine space modelled on $H^2(X;\mathbb{Z})$ (this is not obvious). Also in the 4-dimensional case, representation theory of the Lie group $\text{Spin}^c(4)$ yields two complex 2-dimensional spinor bundles $S_{\pm}$ and a complex line bundle $L = \det S_{\pm}$. The Seiberg-Witten equations are then equations on pairs $(A,\psi)$ consisting of a $U(1)$-connection on $L$ and a positive spinor (a section of $S_+$). We write this simply as
$\mathcal{F}_{g,\mu,\mathfrak{s}}(A,\psi) = 0$.
Let $\mathfrak{m}_{g,\mu,\mathfrak{s}}$ be the solutions to this equation. There is an action of the gauge group $\mathcal{G} = C^{\infty}(X,S^1)$ (given by $h \cdot (A,\psi) = (A-2h^{-1}dh,h\psi)$). This action is free except for reducible solutions where $\psi = 0$, in which case the stabilizer is $S^1$. The quotient yields the moduli space (where we suppress $g,\mu,\mathfrak{s}$ from the notation):
$\mathcal{M} = \mathfrak{m}/\mathcal{G}$
Theorem (key, nontrivial): The space $\mathcal{M}$ is always compact.
Let $b_2^+$ be the rank of the positive-definite part of $H^2(X;\mathbb{R})$ with respect to the intersection product. We will assume $b_2^+ > 1$. This implies that generically paths of choices $(g,\mu)$ for a fixed $\mathfrak{s}$ will avoid reducible solutions, yielding the following.
Theorem (standard Fredholm theory): Consider $X$ with $b_2^+ > 1$ and some fixed $\mathfrak{s}$. Then generically (with respect to $(g,\mu)$),
• $\mathcal{M}$ is a smooth finite-dimensional manifold of dimension given by topological data (only depending on $X$ and $\mathfrak{s}$)
• $\mathcal{M}$ can be given an orientation with some auxiliary topological choice (not depending on $\mathfrak{s}$)
• $\mathcal{M}$ is a cobordism invariant
Definition: For $b_2^+ > 1$, and for $\mathfrak{s}$ with $\dim \mathcal{M} = 0$, we define the Seiberg-Witten invariant $\text{SW}_X(\mathfrak{s}) = \#\mathcal{M} \in \mathbb{Z}$, where we count $\mathcal{M}$ with signs according to the auxiliary topological choice.
One can also define the Seiberg-Witten invariant when the dimension is positive, but there is the simple type conjecture that in such cases, this invariant is zero. In the case of symplectic manifolds, which is the case we care about, this is known to be true (by Taubes). By construction, the Seiberg-Witten invariants are diffeomorphism invariants (once we have fixed our auxiliary data for determining an orientation of $\mathcal{M}$).
We are interested in the case of symplectic manifolds. In this case, there is a canonical choice for the data which orients the moduli spaces of solutions to the Seiberg-Witten equations. There is a natural morphism $\mathcal{S}_X \rightarrow H^2(X;\mathbb{Z})$ given by $c_1(L)$, where $L$ is the line bundle mentioned before. (This is not an isomorphism if $H^2(X;\mathbb{Z})$ has 2-torsion.)
Definition: A class $c \in H^2(M;\mathbb{Z})$ is basic if there is a $\text{spin}^c$-structure $\mathfrak{s}$ with $c_1(L_{\mathfrak{s}}) = c$ such that $\text{SW}_{X}(\mathfrak{s}) \neq 0$.
We finish by stating the following facts without proof (although we did discuss the proofs at Kylerec).
Theorem [Taubes]: For a symplectic manifold, $\pm c_1(X,\omega)$ are basic classes (with Seiberg-Witten invariants $\pm 1$).
Theorem [Corollary of the same Taubes paper]: When $(X,\omega)$ is minimal, Kähler, and of general type (the last condition meaning $c_1(X,\omega)[\omega] < 0$ and $c_1^2(X,\omega) > 0$), then $\pm c_1(X,\omega)$ are the only basic classes.
Theorem [Corollary of Morgan-Szabó]: If $(X^4,\omega)$ has $c_1(X,\omega) = 0$, $b_1 = 0$, and $b_2^+ > 1$, then it is a rational homology K3 surface.
SUMMARY OF THIS SECTION: The Seiberg-Witten invariants form a diffeomorphism invariant, hence so do basic cohomology classes. This fact, plus the previous three theorems, are all we need.
### Filling unit cotangent bundles
Unit cotangent bundles, which we shall notate as $S^*M$, have canonical Weinstein fillings given by the unit disk cotangent bundles. It is a natural question to ask if this natural filling is in fact the only one up to some notion of equivalence. We shall restrict ourselves in this discussion to when the base space is a closed orientable surface $\Sigma_g$ of genus $g$. We mostly focus on the $g \geq 2$ case, but we quickly review the case of $g = 0,1$.
Let us begin with $g=0$. In the first post on J-holomorphic curves, when discussing McDuff’s classification result, I mentioned that $L(2,1) = S^*S^2$ has a unique minimal strong filling up to diffeomorphism. Further, Hind proved that Stein fillings are unique up to Stein homotopy.
For $g=1$, in the second post on J-holomorphic curves, when discussing Wendl's J-holomorphic foliations, I mentioned that every minimal strong filling of $S^*T^2$ is diffeomorphic to the standard one. In fact, he proves further that every minimal strong filling is symplectically deformation equivalent to it, which is a little stronger. Also, Stipsicz proved that all Stein fillings are homeomorphic (to $D^*T^2 = D^2 \times T^2$).
To summarize roughly (though we know a little more), for $g=0,1$, exact fillings (which are automatically minimal) are unique up to symplectic deformation equivalence.
So now we move on to $g \geq 2$. We focus on exact fillings because strong fillings (even minimal ones) are too weak to get a handle on. One can build strong fillings with arbitrarily large positive second Betti number $b_2^+$. This involves cutting out a cap (with concave boundary) from one particular strong filling (McDuff) and gluing in other caps with higher $b_2^+$ (Etnyre and Honda).
The idea, in this paper of Li, Mak, and Yasui, is similar to the idea we encountered in McDuff’s approach to the $g=0$ (and more generally $L(p,1)$) case – attach a cap, and then use classification results to figure out what you had in the first place. The following definition is the correct version of cap that we need.
Definition: A Calabi-Yau cap for a contact 3-manifold $(M,\xi)$ is a strong cap (like a filling, but with a concave end instead) $(P,\omega)$ with $c_1(P,\omega)$ torsion.
Theorem 1 [LMY]: If a Calabi-Yau cap exists for $(M,\xi)$, then the set of triples of Betti numbers $(b_1(X), b_2(X), b_3(X))$ is finite as $X$ ranges over all exact fillings.
Remark: This theorem is not true if instead we let $X$ range over all strong fillings. This was noted above when we remarked that $b_2^+$ could be arbitrarily large for a strong filling.
Theorem 2 [LMY]: In the case of the unit cotangent bundle, then for any exact filling $(X,\omega)$, its homology $H^*(X;\mathbb{R})$ and intersection form $H^2(X;\mathbb{R}) \otimes H^2(X, \partial X;\mathbb{R}) \rightarrow \mathbb{R}$ are the same as that of the standard filling.
Sketch Proof of Theorem 1: Some messing around with Chern classes tells us that if we have an exact filling $(X,\omega_X)$ and a Calabi-Yau cap $(P,\omega_P)$ for $(M,\xi)$, then the glued manifold $(Z,\omega)$ satisfies $c_1(Z) \cdot [\omega] = 0$. Then one can plug this into classification theorems organised by an invariant called the symplectic Kodaira dimension $\kappa^s(Z,\omega)$. In the case when $Z$ is minimal with $c_1(Z) \cdot [\omega] = 0$, we must have $\kappa^s = 0$. In this case, when $b_1 = 0$, the Morgan-Szabó result mentioned in the Seiberg-Witten section implies that we have a rational homology K3 surface, hence we know its Betti numbers. Tian-Jun Li extended this result to a classification for $\kappa^s = 0$ and $Z$ minimal but with arbitrary $b_1$. Otherwise, if $Z$ is not minimal, it must have $\kappa^s = - \infty$, and one needs to be a little more careful, working with a symplectic surface in $P$ to which an adjunction inequality applies, which ends up bounding the Betti numbers.
Sketch Proof of Theorem 2: The key lemma is to construct a symplectic K3 surface $(X,\omega)$ with $g$ non-intersecting Lagrangian tori in the same homology class which all intersect a Lagrangian sphere transversely in one point. Then we can perform Lagrangian surgery to give an embedded Lagrangian genus $g$ surface $L$. Then $X \setminus \text{Op}(L)$ is a Calabi-Yau cap for $S^*\Sigma_g$. Playing around with intersection forms, we see that attaching this cap must yield a rational K3 surface (one can rule out all other possibilities given by the classification theorems mentioned in the proof of Theorem 1), from which playing around more with exact sequences of homology and intersection forms gives the result.
Remark: The classification-type results with respect to symplectic Kodaira dimension are the only place in this section where Seiberg-Witten equations enter the picture, and are really the meat of the argument, in some sense. The rest just comes from exact sequences and understanding intersection forms, which is comparatively simple, staying far away from gauge theory.
The main theorem of Sivek and Van Horn-Morris is the following:
Theorem [SV]: Weinstein fillings of $S^*\Sigma_g$ are unique up to s-cobordism rel boundary.
If you’re worried about the word “s-cobordism,” just think of this as a beefed up version of homotopy equivalence that comes relatively easily in this case once we prove the homotopy type of the filling is unique (is a $K(\pi_1(\Sigma_g),1)$). There are some beautiful group-theoretic arguments which go into this argument, but we have essentially already seen how the Seiberg-Witten invariants come into play, so I won’t include a sketch of the proof.
Finally, I mention a little bit of history with regards to these two papers, because I was confused looking at the most recent versions as I was writing this, not because of any improper attribution, just because of my own confusion reading them concurrently. The theorems stated are quite similar, as are aspects of the proofs, despite them being stumbled upon independently. To clarify, Theorem 2 of LMY did not exist in version 1 of their paper. About a year later, within a month of each other, SV posted their paper and LMY posted version 2 of their paper. Independently, SV had proved some subset of Theorem 2 (with some small fudge factor in $H_1$ and the intersection form) while LMY had proved the full version. SV's result was good enough for them to prove the s-cobordism statement, and as far as I can tell, version 2 of SV is just version 1 but where they mention that they have learned that LMY proved the strong version of Theorem 2.
### Homotopic tight contact structures which are different
The main theorem, due to Lisca and Matić, is the following:
Theorem [LM]: For any $n \geq 0$, there exists a rational homology 3-sphere with at least $n$ distinct contact structures up to contactomorphism which are homotopic as plane fields.
In this short section, we simply sketch the proof.
Sketch of proof: One must begin by simply writing the Gompf surgery diagrams (described in the post on Weinstein fillings) for the contact structures in question. One has that a rational homology sphere can be obtained by 0-surgery on a trefoil and $-n$-surgery on an unknot which links with the trefoil once; this suggests the following surgery diagrams, in which the canonical framing on the Legendrians drawn below gives exactly what we want.
We denote these contact structures by $\xi_n^k$ for $1 \leq k \leq n-1$. We will show that for a fixed $n$, all of these are homotopic but not contactomorphic. One computes via results of Eliashberg that for the corresponding Weinstein fillings $W_n^k$ (which are diffeomorphic) that $c_1(W_n^k) = (2k-n)\text{PD}(T)$, where $T$ is the class in $H_2(W_n^k)$ given by the handle coming from the trefoil in the surgery. We shall call the smooth underlying manifold $N_n$.
The homotopic part is rather simple. By classical results (clutching functions, and computing Pontrjagin classes to plug into the Hirzebruch signature theorem) following an argument attributed to Gompf, one can show that the homotopic result can be reduced to proving that $c_1(W_n^k)^2 = c_1(W_n^{k'})^2$, which is itself clear since $\text{PD}(T)^2 = 0$.
As for the contactomorphism part, one embeds $W_n^k$ into a minimal compact Kähler surface $S$ of general type and $b_2^+ > 0$. This is a nontrivial statement, but is nonetheless true. In fact, because $W_n^k$ and $W_n^{k'}$ have isomorphic collars, one can attach the same cap to produce Kähler surfaces $S_n^k$ and $S_n^{k'}$. One can extend the identity on these caps to an orientation preserving diffeomorphism $\phi \colon S_n^k \cong S_n^{k'}$ acting by $\pm 1$ on $H^2(N_n) \subset H^2(S_n)$ (by work of Gompf). But also, since we have a minimal compact Kähler manifold, by the theorem mentioned in the first section as a corollary of Taubes’ work, one has that $\{\pm c_1(S_n^k)\}$ is a diffeomorphism invariant, and so we see that $c_1(S_n^k) = \pm \phi^*c_1(S_n^{k'})$. So these must restrict to the same thing on $H_2(N_n)$, where we showed $c_1(S_n^k)|_{H_2(N_n)} = (2k-n) \text{PD}(T)$. Hence, $2k-n = \pm (2k'-n)$, so either $k = k'$ or $k = n-k'$. Thus, increasing $n$, we can find arbitrarily many homotopic but non-contactomorphic contact structures.
## SH & SH^+ [Momchil Konstantinov’s talk]
Let us begin with a rather informal and sketchy overview of the basics behind symplectic homology (this is by no means the most general version, and we refer the reader to the vast and growing literature, of which we give some references below).
Consider $(V,\lambda)$ a Liouville domain with contact boundary $(M=\partial V, \alpha= \lambda\vert _{\partial V})$ and its completion $(\widehat{V},\widehat{\lambda})$, obtained from $(V,\lambda)$ by attaching cylindrical ends. Given a nondegenerate Hamiltonian $H:S^1\times \widehat{V}\rightarrow \mathbb{R}$, we have an associated action functional $\mathcal{A}^H: C^\infty(\mathbb{R}/\mathbb{Z}, \widehat{V})\rightarrow \mathbb{R}$, defined by
$\mathcal{A}^H(x)=\int_{S^1}x^*\widehat{\lambda}-\int_{S^1}H_t(x(t))dt$
Its differential is given by $d_x\mathcal{A}^H(\xi)=\int_{S^1}d\lambda(\xi(t),\dot{x}(t)-X_{H_t}(x(t)))dt$, and it follows that its critical points correspond to closed Hamiltonian orbits. Given a $d\lambda$-compatible almost complex structure $J$ which is cylindrical on the ends, this induces a metric on the loop space, for which the gradient of $\mathcal{A}^H$ can be written as $\nabla_x\mathcal{A}^H=-J(\dot{x}-X_H(x))$, so that the gradient flow equation becomes the Floer equation. We define the symplectic homology chain complex (with mod 2 coefficients) as
$CF_*(H)=\bigoplus_{x \in \mbox{crit}(\mathcal{A}^H)}\mathbb{Z}_2.x$
For simplicity, assume that $x \in \mbox{crit}(\mathcal{A}^H)$ is contractible (so that we don't have to worry about homology classes and whatnot), and also assume that $c_1(V)=0$ (this condition can be relaxed to $c_1\vert_{\pi_2(V)}=0$, and is needed for the grading). Then we can define the Conley-Zehnder index of $x$ by choosing spanning disks for $x$ and trivializing $TV$ along this disk, and we choose the grading $|x|=\mu_{CZ}(x)-n$, which is independent of the trivialization by the assumption on $c_1(V)$. The differential is now $d_H: CF_k(H)\rightarrow CF_{k-1}(H)$, given by
$d_H(x)=\sum_{\substack{y\in \mbox{crit}(\mathcal{A}_H)\\|y|=|x|-1}}\#_{\mathbb{Z}_2}\mathcal{M}(y,x)y$
where $\mathcal{M}(y,x)$ is the moduli space of Floer trajectories joining $x$ to $y$ divided by the natural $\mathbb{R}$-translation action. This moduli space is a zero dimensional manifold when $|y|=|x|-1$ (for generic $J$). Recall that Gromov compactness requires uniform $C^0$-bounds (which in our situation do not come for free, since $\widehat{V}$ is non-compact) and uniform energy bounds (which we have for $u \in \mathcal{M}(y,x)$, since $E(u)=\mathcal{A}^H(x)-\mathcal{A}^H(y)$).
Def. The spectrum of $(M,\alpha)$ is
$spec(M,\alpha)=\{ T \in \mathbb{R}: \mbox{ there exists a }\alpha-\mbox{Reeb orbit of period }T\}$
Def. The space of admissible Hamiltonians $Ad(V,\lambda)$ is the set of Hamiltonians $H: S^1 \times \widehat{V}\rightarrow \mathbb{R}$ satisfying
$H_t(r,y)=Ae^r+B$ on $r>R>>0$, for some $R$, where $A>0, A \notin spec(M,\alpha)$.
Denote by $h(s)=As+B$, so that $H_t(r,y)=h(e^r)$ on the ends.
If one chooses an admissible $H$ and a $J$ which is cylindrical on the ends, one gets $C^0$-bounds, as follows from the maximum principle: indeed, consider $\Omega \subseteq \mathbb{R}\times S^1$ an open subset, and $u: \Omega \rightarrow \widehat{V}$ a holomorphic map, which has a portion lying on the cylindrical ends. This portion can be parametrized by $u(s,t)=(a(s,t),v(s,t))\in \mathbb{R} \times M$, and a computation gives
$\Delta a + \partial_s(h^\prime(e^a))=\Delta a + h^{\prime\prime}(e^a)e^a\partial_sa=||\partial_s v ||^2\geq 0$
The maximum principle then implies that a sequence of Floer cylinders with fixed asymptotics cannot escape to infinity, since we would get a maximum of $a$, which implies $\Delta a\geq 0$, and this cannot happen if one assumes that the maximum is non-degenerate (a clever trick then gets rid of this assumption). So we get the $C^0$-bounds, which leads to compactness by Gromov, which implies that $d_H$ is well-defined and that $d_H^2=0$ (as follows by studying the boundary of 1-dimensional moduli spaces of Floer trajectories). From this, one gets the Floer homology group
$HF_k(H):=H_k(CF_*(H),d_H)$
The first thing one asks is: is it independent of $H$? And the answer is…well… nope. BUT…
Consider two different $H_+, H_- \in Ad(V,\lambda)$, and choose a smooth path of Hamiltonians $H^s:\widehat{V}\rightarrow \mathbb{R}$ for $s \in \mathbb{R}$, such that $H^s=H_-$ for $s<<0$, $H^s=H_+$, for $s>>0$, and $H^s(r,y)=h_s(e^r)=A_se^r+B_s$ for $A_s,B_s \in \mathbb{R}$, on the cylindrical ends. This gives the parametrized Floer equation $\partial_su + J(\partial_tu - X_{H^s}(u))=0$ and a corresponding moduli space $\mathcal{M}_{\{H^s\}}(x_-,x_+)$ joining the orbits $x_-$ and $x_+$, which is zero dimensional when $|x_-|=|x_+|$ (now we don’t have a translation action). This ideally would allow us to define a map
$\Phi: CF_*(H_+)\rightarrow CF_*(H_-)$
given by
$\Phi(x_+)=\sum_{\substack{x_- \in \mbox{crit}(\mathcal{A}_H)\\ |x_-|=|x_+|}} \#_{\mathbb{Z}_2} \mathcal{M}_{\{H_s\}}(x_-,x_+)x_-$
satisfying $d_{H_-}\circ \Phi = \Phi \circ d_{H_+}$, as follows by studying how trajectories in 1-dimensional moduli spaces can break. But this, again, requires Gromov compactness. A similar computation gives
$\Delta a + \partial_s(h_s^\prime (e^a))=\Delta a + h_s^{\prime\prime}(e^a)e^a\partial_sa + (\partial_s h_s^\prime)(e^a)=||\partial_s v||^2$
So, to have $\Delta a + h_s^{\prime\prime}(e^a)e^a\partial_sa\geq 0$ it suffices with
$\partial_sh_s^\prime=\partial_s A_s<0$
In other words, the slope of $H_-$ is necessarily steeper than that of $H_+$. This means that we only get compactness in “one direction”, and we do not get a homotopically inverse map.
If we define a partial order $\prec$ on $Ad(V,\lambda)$ by $H_1 \prec H_2$ if $H_1 < H_2$ outside of a compact set, the previous discussion gives us a map $HF_*(H_1)\rightarrow HF_*(H_2)$. Moreover, we get commutative diagrams for any $H_1 \prec H_2 \prec H_3$, giving a direct system, so that we may define the symplectic homology of $(V,\lambda)$ as
$SH_k(V,\lambda)=\varinjlim_{H \in Ad(V,\lambda)} HF_k(H)$
Observe that, as with any direct limit, one can compute it by taking cofinal sequences. Now we identify the generators of this homology. Let us recall the following fact from Floer theory:
Fact. If $H$ is sufficiently $C^2$-small then all the 1-periodic orbits of $X_H$ are critical points of $H$, and every Floer trajectory between them is a Morse flow-line.
This means that if $H$ is sufficiently $C^2$-small and positive on $V$, then the generators on this region of $SH_k$ will correspond to critical points (graded by $|x|=\mu_{CZ}(x)-n=n-ind_x(H)-n=-ind_x(H)$), and observe that $\mathcal{A}^H(x)=-H(x)<0$. On the cylindrical ends, we have $X_H=h^\prime(e^r)e^{-r} R_\alpha$, where $R_\alpha$ is the Reeb vector field of $\alpha$ on $r=0$, so that closed Hamiltonian orbits lie in the contact slices $\{r=r_0\}$ and are reparametrizations of closed Reeb orbits of period $T:=h^\prime(e^{r_0})$, and these have action
$\mathcal{A}^H(x)=T-h(e^{r_0})>0$
Since we assume that the slope of $H$ does not lie in the spectrum, there are no closed orbits for $r>R>>0$, and between $0$ and $R$ we see potential closed Hamiltonian orbits of bounded action. Since the differential decreases action, we have a subcomplex $CF_*^{-}(H)$ of $CF_*(H)$ generated by orbits of negative action (critical points), and an exact sequence of chain complexes
$0\rightarrow CF_*^{-}(H)\rightarrow CF_*(H)\rightarrow CF_*^+(H)\rightarrow 0$
where $CF_*^+(H)=\frac{CF_*(H)}{CF_*^{-}(H)}$. If we define
$SH_*^+(H)=\varinjlim_{H \in Ad(V,\lambda)}H_*(CF_*^+(H),d_H)$
and we take direct limit in the resulting long exact sequence (which preserves exactness), we get an induced exact triangle
Here we have used the Floer theory fact, and the maximum principle, to say that $CF_*^-(H)$ computes $H^{-k}(V)$ for every $H$ ($C^2$-small on $V$). Observe that we get cohomology of $V$ rather than homology, since we get a minus in the grading ($-ind_x(H)$ goes to $-ind_x(H)-1=-(ind_x(H)+1)$ under the differential). Yes, it’s confusing.
We can now state a few theorems.
Thm. [Bourgeois-Oancea] If all Reeb orbits of $(M,\alpha)$ satisfy
$\mu_{CZ}(x)+n-3>0$
that is, if $(M,\alpha)$ is dynamically convex, and $V,W$ are two Liouville fillings of $M$ with $c_1(V)=c_1(W)=0$, then $SH_*^+(V)\simeq SH_*^+(W)$.
In other words, $SH_*^+$ is an invariant of $M$, rather than the fillings (with $c_1=0$). The idea is to show that no critical points can be connected to a non-constant orbit by a Floer trajectory, and that no cylinder connecting two of the latter ventures into the filling $V$ (there is a stretching the neck argument here).
Thm. [ML Yau] If $(M,\xi)$ is subcritically Stein fillable (for a filling with $c_1=0$), then $M$ admits a dynamically convex contact form.
Thm. [Cieliebak] If $V$ is subcritically Stein (with $c_1=0$), then it has vanishing symplectic homology.
Cieliebak proves that $V$ is isomorphic to a split Stein manifold $W \times \mathbb{C}$, for $W$ Stein, and using a version of the Künneth formula for $SH_*$, the result follows from the fact that $SH_*(\mathbb{C})=0$, which one can compute by hand.
Cor. If $V,W$ are subcritical Stein fillings of $(M,\xi)$ with $c_1(V)=c_1(W)=0$, then $H^*(V)\simeq H^*(W)$.
This follows from the exact triangle, and all theorems stated above, since $H^{-*}(V)\simeq SH^+_*(V)$ for a subcritical Stein manifold with $c_1(V)=0$.
References
A few references on symplectic homology (by all means very much non-exhaustive):
A beginner's overview: https://www.mathematik.hu-berlin.de/~wendl/pub/SH.pdf
A nice survey: https://arxiv.org/abs/math/0403377
A Morse-Bott version (relevant for Cédric’s talk below): https://arxiv.org/abs/0704.1039
A related theory (Rabinowitz Floer homology): https://arxiv.org/abs/0903.0768
## Contact manifolds with flexible fillings [Scott Zhang’s talk]
The main reference for this post is this paper: https://arxiv.org/pdf/1610.04837.pdf.
Let us recall the following result, which appeared in Momchil’s talk:
Thm. [M.L Yau] If $W_1, W_2$ are two subcritical fillings of a contact manifold $(M^{2n-1},\xi)$, (with $c_1(W_1)=c_1(W_2)=0$) then $H^*(W_1)\simeq H^*(W_2)$.
The goal for this talk was to discuss the following generalization to the flexible case:
Thm 1. [O. Lazarev] If $W_1,W_2$ are two flexible fillings of $(M,\xi)$, then $H^*(W_1)\simeq H^*(W_2)$.
Remark: The same conclusion is true if we consider fillings with vanishing symplectic homology.
The idea is to replace the dynamical convexity condition in Bourgeois-Oancea's result by an asymptotic version. In the following, given $\alpha_1,\alpha_2$ contact forms for the same contact structure, we will write $\alpha_1\geq \alpha_2$ if $\alpha_1=f \alpha_2$ for some smooth function $f\geq 1$, and denote by $\mathcal{P}^{<D}(\alpha)$ the set of $\alpha$-Reeb orbits $\gamma$ with action $\int_\gamma \alpha < D$. The degree of a Reeb orbit $\gamma$ is $|\gamma|=\mu_{CZ}(\gamma)+n-3$.
Def. $(M^{2n-1},\xi)$ is asymptotically dynamically convex (ADC) if there exists a sequence of contact forms $\alpha_1\geq \alpha_2\geq \dots$ for $\xi$ and a sequence $0 < D_1 \leq D_2 \leq \dots$ with $\lim_{i}D_i=\infty$ such that every element in $\mathcal{P}^{<D_i}(\alpha_i)$ has positive degree.
We have the following:
Thm 2. [O. Lazarev] If $(M,\xi)$ is ADC, then $SH^+$ is independent of the Stein filling with $c_1=0$.
Recall that flexible Weinstein manifolds have vanishing symplectic homology. This follows by the Bourgeois-Ekholm-Eliashberg surgery formula (https://arxiv.org/pdf/0911.0026.pdf), but there are alternative arguments not using the SFT machinery, based on an h-principle for exact codimension zero embeddings, and the Künneth formula for symplectic homology, which even works for twisted coefficients (see e.g. Murphy-Siegel https://arxiv.org/abs/1510.01867). From the exact triangle for $SH_+$, we know that $SH_*^+(W)\simeq H^{-*}(W)$ for flexible $W$, so to get thm. 1 it suffices to show that flexible fillings induce ADC contact structures on their boundaries.
Thm 3. [O. Lazarev] If $(M^\prime,\xi^\prime)$ is obtained from $(M,\xi)$ by flexible surgery and $(M,\xi)$ is ADC, then so is $(M^\prime,\xi^\prime)$.
Remark. The subcritical case where the ADC condition is replaced by DC (dynamical convexity) is already due to Yau.
Since the standard sphere is ADC, thm. 1 follows.
Here are a few ingredients in the argument. Let us recall first the following:
Prop. [Bourgeois-Ekholm-Eliashberg] After surgery along a Legendrian sphere $\Lambda^{n-1} \;(n\geq 3)$, we have a 1-1 correspondence between the newly created Reeb orbits with action bounded by $D>0$ and words of Reeb chords on $\Lambda$ with action bounded by $D$ (up to cyclic permutation). Moreover, we have $|\gamma_{c_1\dots c_k}|=\left(\sum_i |c_i|\right)+k-3$, where $\gamma_{c_1\dots c_k}$ denotes the Reeb orbit corresponding to the word $c_1\dots c_k$ of $k$ chords.
The idea is to slightly perturb the data so that given a collection of ordered chords, there is a closed Reeb orbit which enters the handle and is close to the original chords in the complement of the handle (the fact that all closed orbits that enter the handle have to leave it boils down to the fact that the geodesics on the flat disk leave the disk).
Key lemma. If $\Lambda$ is loose, there exists a Legendrian isotopy such that (action bounded) Reeb chords have positive degree.
The point is that stabilizing a loose Legendrian, which in general does not change the formal homotopy type, actually does not change the genuine isotopy type either, by Murphy’s h-principle, and one can explicitly see that the degrees of the resulting Reeb chords are greater than or equal to 1 after the stabilization. The fact that we get a decreasing sequence of contact forms comes from this stabilization process.
## Computations on Brieskorn manifolds [Cédric De Groote’s talk]
The goal for this talk, much more computational in spirit, was to discuss how invariants like contact and symplectic homology can be used to distinguish contact structures on Brieskorn manifolds, especially when the underlying manifolds are diffeomorphic, and in certain cases even when the contact structures are homotopic as almost contact structures. A useful tool is a Morse-Bott version of symplectic homology, which applies in many cases where a lot of symmetry is present in the setup.
Brieskorn manifolds and Ustilovsky exotic contact spheres
The Brieskorn manifold associated to $a=(a_0,\dots,a_n)$, where $a_i\geq 2$ is an integer, is defined by $\Sigma(a)^{2n-1}=\{z_0^{a_0}+\dots + z_n^{a_n}=0\}\cap S^{2n+1}\subseteq \mathbb{C}^{n+1}$. In other words, it is the link of the (isolated) singularity associated to the complex polynomial $f(z)=z_0^{a_0}+\dots + z_n^{a_n}$. It is the binding of an open book on $S^{2n+1}$, with pages which are diffeomorphic to $\{f(z)=\epsilon\}\cap \mathbb{D}^{2n+2}$, for small $\epsilon>0$ (the Milnor fiber of $f$, see Milnor’s classic book: “Singular points of complex hypersurfaces”).
Brieskorn manifolds come with a contact form $\alpha_a=\frac{i}{8}\sum_{j=0}^na_j(z_jd\overline{z_j}-\overline{z_j}dz_j)$, which is induced by the “weighted” exact symplectic form $\omega_a=\frac{i}{4}\sum_{j=0}^n a_j dz_j\wedge d\overline{z_j}$ on $\mathbb{C}^{n+1}$, with associated Liouville vector field $V(z)=z/2$, which is transverse to $\Sigma(a)$. The corresponding Reeb vector field is $R_a=(\frac{4i}{a_0}z_0,\dots,\frac{4i}{a_n}z_n)$, which has flow $\phi_a^t(z)=(e^{\frac{4it}{a_0}}z_0,\dots,e^{\frac{4it}{a_n}}z_n)$. We also have a filling for $\Sigma(a)$, given by $W_a=\{f(z)=\epsilon \varphi(|z|)\}$, where $\varphi: [0,+\infty)\rightarrow \mathbb{R}$ satisfies $\varphi\equiv 1$ close to $0$, and vanishes close to $1$ (so that $W_a$ is a non-singular interpolation between the Milnor fiber and the singular hypersurface $\{f=0\}$). It comes endowed with the restriction of $\omega_a$, and is therefore an exact filling (it is actually Stein). By thm. 5.1 in Milnor’s book, it is parallelizable, and hence $c_1(W_a)=0$.
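As a sanity check (a direct computation, not from the talk), one can verify that $\alpha_a(R_a)=1$ on $\Sigma(a)$: since $dz_j(R_a)=\frac{4i}{a_j}z_j$ and $d\overline{z_j}(R_a)=-\frac{4i}{a_j}\overline{z_j}$, we get
$\alpha_a(R_a)=\frac{i}{8}\sum_{j=0}^n a_j\left(-\frac{4i}{a_j}z_j\overline{z_j}-\frac{4i}{a_j}\overline{z_j}z_j\right)=\frac{i}{8}\sum_{j=0}^n(-8i)|z_j|^2=\sum_{j=0}^n|z_j|^2=1$
on the unit sphere (checking that $\iota_{R_a}d\alpha_a$ vanishes on $T\Sigma(a)$ is similar). Note also that the flow $\phi_a^t$ preserves $\Sigma(a)$: it preserves $|z|$, and it rescales $f$ by $e^{4it}$, since each $z_j^{a_j}$ gets multiplied by $e^{4it}$.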
Some interesting facts:
1. $\pi_1(\Sigma(a))=\dots=\pi_{n-1}(\Sigma(a))=0$, i.e. $\Sigma(a)$ is $(n-1)$-connected (lemma 6.4 in Milnor, which works for any Milnor fiber).
2. If $n\neq 2$, $\Sigma(a)$ is homeomorphic to a sphere if and only if it is a homology sphere (for $n \geq 3$ this follows from 1. above, which implies simple connectedness, together with the generalized Poincaré conjecture; it is trivial for $n=1$). By 1., Poincaré duality and Hurewicz’ theorem, this is equivalent to the vanishing of the reduced homology $\widetilde{H}_{n-1}(\Sigma(a))$.
3. There exist conditions on $a$ which are equivalent to $\Sigma(a)$ being homeomorphic to the sphere $S^{2n-1}$. Namely: either there exist $a_i,a_j$ which are relatively prime to all other exponents, or there exists an $a_i$ which is relatively prime to all others, together with a set $\{a_{j_1},\dots,a_{j_r}\}\; (r\geq 3 \mbox{ odd})$ such that every $a_{j_k}$ is relatively prime to every exponent not in the set, and $gcd(a_{j_k},a_{j_l})=2$ for $k\neq l$.
4. $\Sigma(2,2,2,3,6k-1)$ for $k=1,\dots,28$ gives all 28 smooth structures on $S^7$ (each is homeomorphic to the sphere by the previous criterion).
5. Any simply connected spin 5-manifold is a connected sum of Brieskorn 5-manifolds.
Thm.[Brieskorn] If $p \equiv \pm 1 (mod \;8)$ then $\Sigma(p,2,\dots,2)$, where the number of 2’s is $2m+1$, is diffeomorphic to $S^{4m+1}$.
Denote by $\xi_p$ the contact structure on $\Sigma(p,2,\dots,2)$ obtained from the weighted symplectic form as above. Observe that by the above criterion these manifolds are all homeomorphic to spheres.
Thm.[Ustilovsky] If $p_1 \neq p_2$, then $\xi_{p_1}$ is not contactomorphic to $\xi_{p_2}$.
The proof uses contact homology. One can take an explicit perturbation making the contact form non-degenerate, and compute the degrees of the resulting non-degenerate Reeb orbits, which are all even. This implies that the differential vanishes, so that contact homology is isomorphic to the underlying chain complex. For different values of $p$, the degrees of the generators differ, and hence contact homology does also (and this is an invariant of the contact structure).
Def. An almost contact structure on $Y^{2n+1}$ is a pair $(\alpha,\beta)$ of a 1-form $\alpha$ and a 2-form $\beta$ such that $\beta\vert_{\ker \alpha}$ is non-degenerate. This is equivalent to having a reduction of the structure group of $TY$ to $U(n)\times 1$.
Def. A contact sphere $(S^{2n+1},\xi)$ is called exotic if it is not contactomorphic to $(S^{2n+1},\xi_{std})$, the standard contact structure on $S^{2n+1}$. It is homotopically trivial if it is homotopic to $(S^{2n+1},\xi_{std})$ as almost contact structures.
An almost contact structure on $S^{4m+1}$ is equivalent to a lift of the classifying map $S^{4m+1}\rightarrow BSO(4m+1)$ to a map $S^{4m+1}\rightarrow B(U(2m)\times 1)$, under the natural map $B(U(2m)\times 1) \rightarrow BSO(4m+1)$ induced by inclusion. This map has fibers $SO(4m+1)/(U(2m)\times 1)$, and therefore almost contact structures are classified by the group $G:= \pi_{4m+1}(SO(4m+1)/(U(2m)\times 1))$.
Thm.[Massey] $G$ is cyclic of order $d=(2m)!$ if $m$ even, and $d=(2m)!/2$ if $m$ odd.
Thm.[Morita] The contact structure $\xi_p$ on $\Sigma(p,2,\dots,2)$ represents $\frac{p-1}{2} (mod \; d)$ in $G$ when viewed as an almost contact structure.
It follows that if $p\equiv 1 (mod \; 2(2m)!)$ and $p\equiv \pm 1 (mod \; 8)$ then $\xi_p$ is homotopically trivial. Since there are infinitely many $p$‘s satisfying these conditions, we obtain:
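The arithmetic behind this is immediate: if $p\equiv 1 \;(mod\; 2(2m)!)$, then
$\frac{p-1}{2}\equiv 0 \;(mod\; (2m)!), \quad \mbox{and hence} \quad \frac{p-1}{2}\equiv 0 \;(mod\; d)$
since $d$ divides $(2m)!$, so $\xi_p$ represents the trivial class in $G$; the condition $p\equiv \pm 1 \;(mod\; 8)$ ensures, by Brieskorn’s theorem, that the underlying manifold is $S^{4m+1}$.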
Thm.[Ustilovsky] There exist infinitely many exotic but homotopically trivial contact structures on $S^{4m+1}$.
Morse-Bott techniques
The Morse-Bott condition is morally the next best thing to having non-degeneracy (in fact, one can argue that it is the best thing when one wishes to do computations), and it can be thought of as a manifestation of symmetry.
Recall that a function $f:M \rightarrow \mathbb{R}$ is Morse-Bott if its critical set $\mbox{crit}(f)=\bigsqcup_i C_i$ is a disjoint union of connected submanifolds $C_i$, such that, if we denote by $\nu(C_i)$ the normal bundle of $C_i$ inside $M$, then $Hess_p(f)\vert_{\nu(C_i)}$ is non-degenerate.
Loosely speaking, the degeneracies are “well-controlled”, and come in “families”. In general, in the Morse-Bott situation, one hopes for a perturbation scheme which recovers the non-degenerate/Morse case, by a small perturbation of the data, in such a way that one gets a 1-1 correspondence between the symmetric (i.e. Morse-Bott) data and the generic (i.e. Morse) one, and so that computations can be carried out in the Morse-Bott setting in the first place.

For instance, if one wishes to compute Morse homology from a Morse-Bott function $f$, one can choose a Morse function $h$ on $\mbox{crit}(f)$, and consider $f_\epsilon:=f+\epsilon \rho h$, for $\epsilon>0$ small, where $\rho$ is a bump function with support near $\mbox{crit}(f)$. The critical points of $f_\epsilon$ are exactly those of $h$, and there is a well-defined notion of convergence of flow-lines of $f_\epsilon$ to “cascades” (when the perturbation parameter $\epsilon$ is taken to go to zero). The latter consist of a flow-line of $f$ hitting a critical manifold, followed by a flow-line segment of $h$ along this manifold, followed by another flow-line of $f$ hitting another critical manifold, and so on, finishing in a critical point of $f$ (see the figure below). One can define the index of a cascade in such a way that the index is preserved under this convergence, and there is a 1-1 correspondence between index $I$ cascades and index $I$ Morse flow-lines of $f_\epsilon$. Hence, one can define a Morse-Bott differential which counts cascades, and the resulting Morse-Bott (co)homology coincides with the usual Morse (co)homology.
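A toy example to keep in mind (standard, and not from the talk): on $\mathbb{R}^2$, the function
$f(x,y)=(x^2+y^2-1)^2$
is Morse-Bott, with $\mbox{crit}(f)$ the union of the unit circle and the origin (the Hessian is non-degenerate in the radial direction along the circle, and non-degenerate at the origin). Taking $h$ to be a height function on the circle, the perturbation $f_\epsilon=f+\epsilon\rho h$ has, for $\epsilon$ small, exactly three critical points: the origin, and the maximum and minimum of $h$.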
In the setting of symplectic homology, if $W$ is a Liouville filling of a contact manifold $(M,\xi)$ and $H$ is an admissible autonomous Hamiltonian, then we have closed Hamiltonian orbits in the contact slices $\{r\}\times M$ corresponding to closed Reeb orbits, which come in $S^1$-families obtained by reparametrizations (since $H$ is time-independent). This is then a Morse-Bott situation.
[Bourgeois-Oancea] In the Morse-Bott situation described above, if we assume that the orbits come in $S^1$-families (and there are no further directions of degeneracy), then there is a Morse-Bott version of symplectic homology of $W$, $SH_{MB}(W)$.
More generally, one can ask for the following Morse-Bott conditions: $\mathcal{N}_T:=\{m\,|\,\varphi^T(m)=m\}$ is a closed submanifold (where $\varphi^T$ is the time-$T$ Reeb flow), such that $rank(d\alpha\vert_{\mathcal{N}_T})$ is locally constant and $T\mathcal{N}_T=\ker(d\varphi^T-id)$. Informally, one can think of this as an infinite-dimensional version of the Morse-Bott conditions, applied to the action functional defined on the loop space, whose critical points are closed Hamiltonian orbits. Assuming that $c_1(W)=0$ and the closed orbits are contractible (so we get an integer grading), fix a choice of Morse functions $f_T$ on $\mathcal{N}_T$ for every $T$. The generators will correspond to pairs $(\gamma,T)$ where $\gamma \in \mbox{crit}(f_T)$, and the differential counts “Floer cascades”, consisting of a Floer cylinder, followed by a flow-line segment of $f_T$, followed by another Floer cylinder… (finitely many times). The grading is defined by $|(\gamma,T)|=\mu_{RS}(\mathcal{N}_T)+ind_{\gamma}(f_T)-\frac{1}{2}(\dim(\mathcal{N}_T)-1)$, where $\mu_{RS}$ is the Robbin-Salamon index, and with this definition the differential has degree $-1$. Under these conditions, we have a Morse-Bott version of symplectic homology $SH_{MB}$.
Uebele’s computation
We focus now on the Brieskorn manifolds $\Sigma_l^n:=\Sigma(2l,2,\dots,2)$, where there are $n$ 2’s, for odd $n$, endowed with the contact structure discussed in the first part of this talk. Randell’s algorithm gives $H_{n-1}(\Sigma_l^n)=\mathbb{Z}$, and it follows from Wall’s classification of highly-connected manifolds that
$\Sigma_l^n \simeq \begin{cases} S^{n-1}\times S^n & \mbox{if } l\equiv 0 \;(mod\; 4)\\ S^*S^n & \mbox{if } l\equiv 1 \;(mod\; 4)\\ (S^{n-1}\times S^n)\,\#\, K & \mbox{if } l\equiv 2 \;(mod\; 4)\\ S^*S^n \,\#\, K & \mbox{if } l\equiv 3 \;(mod\; 4)\end{cases}$
Here, $S^*S^n$ is the unit cotangent bundle of $S^n$, and $K=\Sigma(2,\dots,2,3)$ is the Kervaire sphere. If $n=3$, $K$ is diffeomorphic to $S^5$, and hence $\Sigma_l^5$ is always $S^2 \times S^3$.
These contact manifolds are actually not distinguishable by contact homology. However, we have:
Thm. [Uebele] The manifolds $\Sigma_l^n$ are pairwise non-contactomorphic.
This uses the following lemma:
Lemma. For $\Sigma_l^n$, $SH_{MB}^+$ is independent of the filling, as long as $c_1(W)\vert_{\pi_2(W)}=0$.
This is proved by showing that these manifolds are dynamically convex, and using an analogous version of the Bourgeois-Oancea result. Therefore one can regard $SH_{MB}^+$ as a contact invariant.
The idea now is to compute $SH_{MB}^+$ of the natural filling of these Brieskorn manifolds, using the Morse-Bott techniques, and to show that they are pairwise different. One can choose perfect Morse functions along the critical manifolds (or “formally pretend” that one can, by a spectral sequence argument due to Fauck), making the Morse differential trivial; between different critical manifolds, one sees that for each pair of consecutive degrees $N, N+1$ there exists a unique pair of generators having these degrees, the one with bigger degree $N+1$ having lower action than the one with smaller degree $N$. Since the differential has degree $-1$ and lowers the action, it has to vanish (this works for $n\geq 5$; a different argument is needed for $n=3$). The upshot is that the Morse-Bott symplectic homology coincides with its chain complex, and the degrees differ for different values of $l$.
References
A nice reference for a survey of Brieskorn manifolds in contact topology can be found here: https://arxiv.org/abs/1310.0343
Ustilovsky’s exotic spheres: 1999-14-781
Uebele’s computations: https://arxiv.org/abs/1502.04547
Fauck’s thesis (related, and uses RFH): https://arxiv.org/abs/1605.07892
## Kylerec – Weinstein fillings
Continuing on with the Kylerec posts… (see the first one here as well as notes to follow along with here).
This post is a synthesis of the following talks:
• Day 1 Talk 2 – François-Simon Fauteux-Chapleau’s talk on Weinstein handles and contact surgery
• Day 1 Talk 3 – Orsola Capovilla-Searle’s talk on Kirby calculus for Stein manifolds
• Day 1 Talk 4 – Alvin Jin’s talk on Lefschetz fibrations and open books
• Day 2 Talk 1 – Bahar Acu’s talk on mapping class factorizations and Lefschetz fibration fillings
• Day 3 Talk 2 – Sarah McConnell’s talk on applications of Wendl’s theorem to fillings
• Day 5 Talk 1 – Ziva Myer’s talk on flexible and loose Legendrians
### Weinstein surgery theory
I assume the reader is familiar with smooth surgery theory. Recall the following definition.
Definition: A Weinstein cobordism consists of a quadruple $(W,\omega,V,\phi)$, where
• $(W,\omega)$ is a compact symplectic manifold with boundary
• $V$ is a Liouville vector field for $(W,\omega)$, meaning $\mathcal{L}_V\omega = \omega$, which is also transverse to the boundary $\partial W$
• $\phi \colon W \rightarrow \mathbb{R}$ is a Morse function
• $V$ is gradient-like for $\phi$, meaning there is some constant $\delta$ with $d\phi(V) \geq \delta(|d\phi|^2 + |V|^2)$ with respect to a given Riemannian metric.
In this case, the boundary decomposes as $\partial W = \partial^+ W \sqcup \partial^-W$, where $V$ points out of $\partial^+ W$ and into $\partial^- W$. Note that the 1-form $\lambda = \iota_V \omega$ satisfies $d\lambda = \omega$, and is sometimes called the Liouville 1-form, since it encodes the same data as $V$. Also note that a Weinstein cobordism with $\partial^- W = \emptyset$ is what we called a Weinstein filling.
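The basic example (standard, not from the talk): the unit ball $W=B^{2n}\subset \mathbb{R}^{2n}$ with
$\omega=\sum_j dx_j\wedge dy_j, \quad V=\frac{1}{2}\sum_j \left(x_j\partial_{x_j}+y_j\partial_{y_j}\right), \quad \phi=\frac{1}{4}\sum_j (x_j^2+y_j^2)$
is a Weinstein filling of the standard contact sphere: one checks that $\lambda=\iota_V\omega=\frac{1}{2}\sum_j(x_j\,dy_j-y_j\,dx_j)$ satisfies $d\lambda=\omega$, that $\phi$ is Morse with a single index 0 critical point at the origin, and that $d\phi(V)=\frac{1}{4}|z|^2\geq \frac{1}{2}\left(|d\phi|^2+|V|^2\right)$ with respect to the Euclidean metric.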
The gradient-like condition is meant to give $V$ some directionality (since $d\phi(V) > 0$) and ensure that the critical points of $V$ are non-degenerate. One typically doesn’t think of the precise choice of pair $(V,\phi)$ as very important, but rather the data up to some notion of homotopy. For example, one can always perturb the Morse function so that each of $\partial^- W$ and $\partial^+ W$ is a regular $\phi$-level set, regardless of the number of components, and so we might as well assume this from the start. The equivalence hinted at here is called Weinstein homotopy, by which we perturb the pair $(V,\phi)$, possibly through birth-death type singularities.
Lemma: The descending manifolds in a Weinstein cobordism, i.e. the set of points which flow along $V$ to a given critical point in infinite time, are isotropic submanifolds.
Proof: Standard Morse theory implies these submanifolds are smooth. Let $\phi_V^t$ be the flow along $V$ at time $t$, and suppose we choose some $q \in D_p^-$ where $D_p^-$ is some descending manifold for a given critical point $p$. Suppose $v \in T_qD_p^-$ is a vector in the tangent space. Then since $\mathcal{L}_V\lambda = d\iota_V\lambda + \iota_V d\lambda = d\iota_V^2\omega + \iota_V\omega = 0 + \lambda = \lambda$, we have that
$e^t\lambda_q(v) = ((\phi_V^t)^*\lambda)_q(v) = \lambda_{\phi_V^t(q)}(d\phi_V^t(v))$
As $t \rightarrow \infty$, the right hand side goes to zero: $\phi_V^t(q) \rightarrow p$, where $\lambda$ vanishes (since $V(p)=0$), while $d\phi_V^t(v)$ stays bounded. Hence, $\lim_{t\rightarrow \infty} e^t \lambda_q(v) = 0$, from which it follows that $\lambda_q(v) = 0$. Hence, $\lambda|_{D_p^-} = 0$, and so also $\omega|_{D_p^-} = d(\lambda|_{D_p^-}) = 0$.
Corollary: All critical points in a Weinstein cobordism $(W^{2n},\omega,V,\phi)$ are of index at most $n$. Smoothly, any such manifold can be built up by surgery starting from a neighborhood of $\partial^-W$ and attaching handles of index at most $n$.
One would like to be a bit more precise about how the surgery interacts with the symplectic geometry. As a first step, along a regular level set $W_c := \phi^{-1}(c)$, the symplectic condition on $\omega$ implies that $\lambda|_{W_c}$ is a contact form. The proof of the lemma above further implies that $D_p^- \cap W_c$ gives an isotropic submanifold of $W_c$ with respect to $\lambda|_{W_c}$.
So we can think, at least smoothly, that our Weinstein cobordism is built up, starting from $\partial^- W$, by attaching handles with isotropic cores and attaching spheres along isotropics in level sets of $\phi$ (which are contact submanifolds). But there’s a little more that we know about neighborhoods of isotropics. In a symplectic manifold, the neighborhood of an isotropic $M \subset (W,\omega)$ is completely determined up to symplectomorphism by its symplectic normal bundle, $(TM)^{\omega}/TM$, as a symplectic vector bundle (with symplectic structure induced by $\omega$ on the fibers). A similar statement holds for isotropic submanifolds in contact manifolds, but now with their neighborhoods determined up to contactomorphism by the conformal symplectic normal bundle $(TM)^{d\alpha}/TM$, where $\alpha$ is a contact form so that $d\alpha$ is symplectic on $\xi$. Furthermore, if we fix $\alpha$, then the symplectic vector bundle structure determined by $d\alpha$ on the nose determines the neighborhood up to exact contactomorphism. Patching these two things together, one finds:
Theorem [Weinstein, before the term “Weinstein handle” was coined]: Weinstein handle attachment is completely specified (up to Weinstein homotopy) by matching the symplectic framing data determined by $\lambda$ along the isotropic attaching spheres.
One therefore thinks of $\partial^+ W$ as being built up from $\partial^- W$ by contact surgery along isotropic submanifolds with given framing information compatible with the underlying symplectic topology.
Consider a Weinstein cobordism of dimension $2n$. Then the handles of index $k \in \{0,1,\ldots,n-1\}$ are called subcritical handles, whereas the handles of index $k = n$ are called critical handles. When $k = n$, the aforementioned symplectic normal bundles are automatically trivial, and so one specifies critical handle attachment simply by drawing a Legendrian sphere on $\partial^- W$.
Recall that the proof of the h-cobordism theorem requires some ability to cancel (and create) pairs of handles with index differing by 1 whose ascending and descending manifolds intersect in a 1-dimensional manifold, to move around attaching spheres, and to move critical values around. The last of these we can always do, so we can attach the handles in order of their index. It turns out that when $2n > 4$, we can recreate all parts of the proof of the h-cobordism theorem for subcritical Weinstein cobordisms. In some sense, subcritical Weinstein domains have no symplectic geometry in them – they are encoded by algebro-topological information, and so this gives some flexibility phenomena.
It turns out that some critical handles behave the same way. The key obstruction to the aforementioned flexibility is that sometimes the data of an attaching Legendrian does not boil down to purely topological information. However, Emmy Murphy defined a class of Legendrians, called loose Legendrians, for which there is such a so-called h-principle. The Weinstein h-cobordism theorem works for Weinstein cobordisms which can be built (up to Weinstein homotopy) out of subcritical and loose critical handle attachments. We call such Weinstein cobordisms flexible.
We often care about the case when $2n = 4$. In this case, it is pretty easy to describe a connected Weinstein domain (or its contact boundary). One can first order the handles by index, and then cancel 0-handles with 1-handles until we are in the situation where there is precisely one 0-handle and possibly many 1- and 2-handles. The boundary of the 0-handle is just a standard contact $S^3$, and 1-handle attachment is trivially described by picking pairs of points in $S^3$ (the bundle data boils down to showing $\pi_0(\text{Sp}(2,\mathbb{R})) = 0$). So it suffices to draw Legendrians on $S^3$ with $k$ pairs of points identified, which is just $\#^k (S^1 \times S^2)$. Any Legendrian $L$ has a canonical framing of its normal bundle, given by pushing off in the direction of the Reeb vector field. Eliashberg showed that adding a left twist to this framing gives the smooth framing which determines the corresponding smooth surgery data.
Gompf showed that in this case $2n = 4$, one can draw standard Kirby calculus type surgery diagrams. We think of all of these 1-handle attachments and Legendrians as missing a point in $S^3$, so that we can draw our diagrams in $(\mathbb{R}^3, \ker dz - ydx)$. The front projection is the projection to the coordinates $(x,z)$, so that $y$ is determined by $dz/dx$. It might not be obvious how to draw a smooth knot in this projection since the curve can’t have infinite slope, but we are allowed semi-cubical cusps, corresponding to $(x,y,z) = (t^2,3t/2,t^3)$. Note that transverse crossings are also allowed, since the $y$-coordinates are distinct. One usually draws the front projection of a Legendrian without showing which strand lies over the other, but we include this extra information in the next figure, where we imagine the $y$-axis as pointing into the page.
A Legendrian trefoil knot
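As a check that the cusp model is Legendrian: with $x=t^2$ and $z=t^3$ we get
$y=\frac{dz}{dx}=\frac{3t^2}{2t}=\frac{3t}{2},$
matching the parametrization above, so that $dz-y\,dx=3t^2\,dt-\frac{3t}{2}\cdot 2t\,dt=0$ along the curve.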
Gompf’s standard form for these Legendrians looks like the following, where the pairs of balls in each row correspond to where the 1-handles are attached, and the Legendrian strands simply go through the handles as though they were wormholes.
An example of a Gompf surgery diagram. There are three 1-handles (in blue, red, and green) and two 2-handles with attaching spheres given by the Legendrian tangle above. All of the information can be made to live inside of the purple rectangle (i.e. without going horizontally or vertically outside of where the 1-handles are attached).
### Weinstein fillings, Lefschetz fibrations, and open book decompositions
Definition: A Lefschetz fibration is a smooth map $\pi \colon W^4 \rightarrow \Sigma^2$ with finitely many critical points with distinct critical values such that locally around the critical points, $\pi$ looks like a complex Morse function (i.e. $(z_1,z_2) \mapsto z_1^2+ z_2^2$ in local coordinates). When $\Sigma$ has boundary, we assume the critical values of $\pi$ are all in the interior of $\Sigma$.
We shall typically be concerned with the case where $\Sigma = \mathbb{D}$ (although see this post by Laura Starkston which slightly generalizes some of what is discussed here).
A schematic for a Lefschetz fibration over the disk
In the case where $\Sigma = \mathbb{D}$, we see that the boundary decomposes as $\partial W = \partial^v W \cup \partial^h W$, where the superscripts are meant to indicate vertical and horizontal. That is, $\partial^v W = \pi^{-1}(\partial \mathbb{D})$, while $\partial^hW = \sqcup_{p \in \mathbb{D}} \partial \pi^{-1}(p)$. If we write $F$ for a regular fiber of $\pi$, then $\partial^h W = \partial F \times \mathbb{D}$. Meanwhile, we see that $\partial^v W$ is just a fibration over $S^1$ with fiber $F$, and hence can be described by some monodromy map $\phi \colon F \rightarrow F$ fixing the boundary, so that $\partial^v W = F \times [0,1]/{\sim}$ where $(\phi(x),0) \sim (x,1)$ (the mapping torus of $\phi$).
The structure on the boundary, in which we have a fibration over $S^1$ with fiber $F$ glued together with $\partial F \times \mathbb{D}$ in the natural way, is called an open book decomposition. It is given completely by the pair $(F,\phi)$. We think of each fiber over $S^1$ as a page, and the subset $\partial F \times \{0\}$ as the binding, analogous to what one would get if one took their favorite book and matched the covers so that the pages radiate outwards. So Lefschetz fibrations yield open books on the boundary. To be a little more precise, one should extend each page so that the boundary of each page is actually the binding.
Some pages near the binding of an open book. I guess the name “Rolodex” wasn’t as catchy as “open book.” (Image from Wikipedia)
Now suppose $0 \in \mathbb{D}$ is a regular value (which can always be arranged up to small perturbation of $\pi$). Then $\pi^{-1}(\epsilon \mathbb{D}) \cong F \times \mathbb{D}$. One can ask what happens when we extend to $\pi^{-1}(U)$, where $\epsilon \mathbb{D} \subset U$ and there is exactly one critical value $p$ on $U \setminus \epsilon\mathbb{D}$.
Since we have a nice fibration away from critical points, we see that paths in $\mathbb{D}$ yield monodromy maps (up to isotopy preserving the boundary) on the fibers. We can choose a connection on the fibration if we wish to make this a map on fibers, not just a map up to isotopy. If we take a path $\gamma$ from 0 to $p$ which intersects $\partial(\epsilon \mathbb{D})$ once and otherwise avoids critical values, then for whatever connection we chose, we can see which points flow to the critical point over $p$. Over each regular fiber, this is just a circle, and the union of all of them together with the critical point yields a disk. The path $\gamma$ is called a vanishing path, and each circle on the regular fiber is called a vanishing cycle (one really should think of it as a homology cycle, but for concreteness, one can think of it as a curve). The disk consisting of the union of vanishing cycles above a path is called a thimble.
The green circles in the regular fibers above the purple vanishing path are the vanishing cycles. Their union is the thimble.
It is then not hard to see that $\pi^{-1}(U)$ is obtained from $\pi^{-1}(\epsilon \mathbb{D})$ by 2-handle attachment, where the attaching curve is just the vanishing cycle above $\gamma \cap \partial \epsilon \mathbb{D}$ and the core of the handle is the thimble. Furthermore, one can check by a local computation that the monodromy map in a loop around $p$ is just given by a Dehn twist (positive or negative, depending on orientations) around the vanishing cycle. Hence, one can write out the open book determined by the Lefschetz fibration explicitly – it is just the product of the Dehn twists on the vanishing cycles, performed in an order determined by a sequence of vanishing paths.
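Concretely, if $\gamma_1,\dots,\gamma_k$ is an ordered collection of vanishing paths for the critical values, with corresponding vanishing cycles $v_1,\dots,v_k\subset F$, then (up to conventions on ordering and orientations) the monodromy of the boundary open book is the factorization
$\phi=\tau_{v_k}\circ\dots\circ\tau_{v_1},$
where $\tau_v$ denotes the Dehn twist around $v$.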
Notice that for a given regular value on $\partial \mathbb{D}$, one can choose a different basis of vanishing paths, and this yields a possibly different factorization for the monodromy. Such changing of the basis is generated by so-called Hurwitz moves, as drawn below.
A Hurwitz move swapping the $i$th and $(i+1)$st critical points. Note that the corresponding vanishing cycles for the critical point corresponding to $\gamma_i$ and $\gamma_{i+1}'$ are actually different, but the overall monodromy on the open book at the boundary is the same.
Hence, understanding Lefschetz fibrations over the disk essentially corresponds to understanding factorizations of mapping class group elements into Dehn twists.
Now, this whole story can be repeated in the symplectic context, as follows.
Definition: A symplectic Lefschetz fibration is a Lefschetz fibration with $(W,\omega)$ a symplectic manifold such that each fiber is a symplectic submanifold away from the critical points, while at the critical points the coordinates in which $\pi$ locally looks like a complex Morse function can be taken to be holomorphic for some compatible almost complex structure $J$.
In this case, one can take the connection to be the symplectic connection given by the symplectic orthogonal complement to the vertical directions. In this way, the thimbles produced will actually be Lagrangian disks, which suggests one can think of these as the descending disks for a Weinstein domain filling the boundary. In addition, the monodromy maps are now compositions of positive Dehn twists only, since the symplectic condition gives the proper orientations. In other words, our Lefschetz fibration is itself positive. If the vanishing cycles of a Lefschetz fibration are homologically nontrivial, we shall call it allowable.
With a little more work, we can obtain the following theorem of Loi and Piergallini (although an alternative proof by Akbulut and Özbağci is more in line with the exposition presented here):
Theorem: Any positive allowable Lefschetz fibration (PALF) yields a Weinstein domain, and any Weinstein domain comes from a PALF in this way.
Furthermore, one obtains a little bit more compatibility at the boundary.
Definition: An open book decomposition on a manifold $M$ is said to support a cooriented contact structure $\xi$ if there is some contact form $\alpha$ for $\xi$ such that the binding is a contact submanifold, $d\alpha$ is a symplectic form on the pages, and the boundary orientation of the page (with respect to $d\alpha$) matches the orientation of the binding with respect to $\alpha$.
One checks that the open book on the boundary of a PALF does indeed support the contact structure determined by being the boundary of a Weinstein domain.
Our surgery theory for these Lefschetz fibrations builds the fiber up by subcritical surgery, and the 2-handle attachments correspond to the critical points of the fibration. One can always produce, for any Weinstein manifold, a cancelling pair consisting of a 1-handle and a 2-handle. The way that this affects the open book is by positive stabilization, meaning that one adds a 1-handle to the page, but kills it by adding an extra Dehn twist to the monodromy through a circle which passes through the handle.
The following theorem implies that all 3-dimensional contact geometry can actually be encoded (somewhat non-trivially) in the study of open books up to positive stabilization, and hence the study of Weinstein fillings reduces to studying positive factorizations of given elements of the mapping class group of a surface with boundary (up to this not-so-easy-to-work-with notion of positive stabilization).
Theorem [Giroux correspondence]: There is a one-to-one correspondence between contact structures on a closed 3-manifold up to isotopy and open books up to positive stabilization.
### Applications to Weinstein fillings
To summarize the previous section, an explicit surgery decomposition of a Weinstein filling yields a PALF which in turn gives an open book structure supporting the contact boundary of the Weinstein filling with monodromy factored into positive Dehn twists. Conversely, given a supporting open book for a contact structure with monodromy factored into positive Dehn twists, one obtains a Weinstein filling.
One common question we ask is whether a single contact manifold has multiple Weinstein fillings. From the above construction, one possible way to attack this problem is to look for distinct positive factorizations of a given element in a mapping class group.
Theorem [Auroux]: There is an element in the mapping class group of the surface $\Sigma_{1,1}$ (of genus 1 and with one boundary component) with two distinct factorizations into positive Dehn twists such that the Weinstein fillings are distinguished by their first homology.
Remark: In this setting, the first homology is just given by $H_1(F)/V$ where $V$ is the span of the vanishing cycles. The only real trick of Auroux is therefore to find a good candidate for the above theorem to hold, and just compute.
Generalizing a bit more:
Theorem [Baykur – Van Horn-Morris]: There exists an element in the mapping class group of $\Sigma_{1,3}$ (of genus 1 with three boundary components) which admits infinitely many positive factorizations such that the corresponding Weinstein fillings are all distinguished from each other by their first homology.
Finally, as one last application, I want to consider a result of Plamenevskaya and Van Horn-Morris, but I need to define the contact structures in question to begin. Honda’s classification of tight contact structures on the lens spaces $L(p,1)$ can be formulated in Gompf’s surgery diagrams by the following diagrams, coming from a single 2-handle attachment to standard $S^3$. We denote the corresponding contact structures by $\xi_1,\xi_2,\ldots, \xi_{p-2}$.
The surgery diagram for the contact structure $\xi_k$.
Of these, the universal covers of $\xi_1$ and $\xi_{p-2}$ are also tight, whereas the others’ universal covers are overtwisted. We say $\xi_2, \ldots, \xi_{p-3}$ are virtually overtwisted.
Theorem [PV]: Each virtually overtwisted $(L(p,1), \xi_k)$ has a unique Weinstein filling (up to symplectic deformation) and a unique minimal weak filling.
Proof sketch: Let us first discuss the Weinstein part. There are a few nontrivial theorems which go into this, which we won’t discuss, but essentially we have the following sequence of results. The surgery diagrams above induce supporting open books with genus 0 pages. When we discussed Wendl’s theorem in part 2 of the J-holomorphic curve posts, one thing we mentioned was that one can apply his techniques when there is a planar open book (meaning the pages have genus 0). He proves that if a contact manifold has a given supporting planar open book, then every Weinstein filling is diffeomorphic to one compatible with that specified planar open book. Hence, it suffices to study Lefschetz fibrations compatible with the one just described, which in turn becomes studying factorizations of an element in the mapping class group of $\mathbb{D}_n$, the disk with $n$ holes. A nontrivial result of Margalit and McCammond gives that every such factorization must be in a certain form, from which one can use smooth Kirby calculus to conclude that the surgery diagram must come from $-p$-surgery on some knot. Finally, an appeal to work of Kronheimer, Mrowka, Ozsváth, and Szabó using Seiberg-Witten Floer homology (also called monopole Floer homology) yields that this knot must have been the unknot, and since the framing is $-p$, this determines the canonical framing of the knot, which in turn implies we could only have had one of our original surgery diagrams.
Finally, to obtain the weak part, one can use work of Ohta and Ono to boost a weak filling up to a strong filling, from which Wendl’s theorem implies that any minimal weak filling is symplectic deformation equivalent to a Weinstein filling.
## Kylerec – On J-holomorphic curves, part 2
This is a continuation post following part 1. Hope the delay wasn’t too long – more coming soon.
### A preparatory comment on last time
Let us quickly recall the proof sketched in part 1 that fillable implies tight for contact $(M^3,\xi)$. The idea was that if we had a filling $W$, then the presence of an overtwisted disk locally gave a Bishop family of holomorphic disks as part of a 1-dimensional moduli space, but the compactified moduli space was seen to have only one boundary point. This was because continuing the family away from the center of the overtwisted disk could not lead to a possible boundary point – in our version, such a point would require a bubble, but we considered exact fillings.
It was mentioned last time that Eliashberg’s paper on fillings by holomorphic disks actually covers the weak case instead. The difference is that one now instead shows that any closed surface in a contact boundary which is weakly filled indeed bounds a 3-manifold which can be foliated by holomorphic disks. With this result, we never actually need to consider bubbles on the interior, so we can remove the exactness assumption.
I want to point this out because although this isn’t perfectly analogous to the discussion in the next section on Wendl’s paper, it is related in basic idea. The set-up is different, sure; one cannot work directly with the contact manifold with its filling (the necessity of a strong filling appears in the need to attach a positive cylindrical end), and the completed foliation does have (isolated) nodal curves. But in the end we end up with a nice Lefschetz fibration with holomorphic fibers, and that’s pretty powerful, just like Eliashberg’s disk fillings. We explain this… now!
### On Wendl’s Strongly fillable contact manifolds and J-holomorphic foliations
I should say, first of all, that one can generalize the results I am discussing here. For a somewhat more general discussion, see this post by Laura Starkston from 2013.
In this section, all contact manifolds are 3-dimensional and all symplectic manifolds are 4-dimensional.
We begin by recalling that the symplectization of a cooriented contact manifold $(M,\xi = \ker \alpha)$ is the symplectic manifold $(\mathbb{R} \times M, d(e^t\alpha))$, where $t$ is the $\mathbb{R}$-coordinate. This symplectic manifold does not depend upon the choice of $\alpha$, since if we chose $\alpha' = e^f \alpha$, then $d(e^t\alpha') = d(e^{t+f}\alpha) = \phi^* d(e^t\alpha)$ where $\phi$ is the diffeomorphism of $\mathbb{R} \times M$ sending $(t,m) \mapsto (t+f(m),m)$.
Remark: One can define this in a more invariant way. The symplectization of $(M,\xi)$ is the set of covectors vanishing on $\xi$. This is an $\mathbb{R}^*$-bundle over $M$ and is symplectic with respect to the standard symplectic form on $T^*M$. Fixing a local section $\alpha$ over $M$ gives a coordinate $w$ for the fiber such that the symplectic form is just $d(w\alpha)$. We simply take the component where $e^t := w > 0$ and $\alpha$ coorients $\xi$.
Given a symplectization, explicitly determined by a chosen contact form $\alpha$, one typically studies J-holomorphic curves only for choices of $J$ which are admissible, meaning:
• $J$ is $\mathbb{R}$-invariant
• $J \partial_t = R_{\alpha}$
• $J|_{\xi}$ is a compatible almost complex structure for $d\alpha|_{\xi}$
Under these conditions, finite energy J-holomorphic curves from punctured Riemann surfaces are analytically easy to understand – the punctures are asymptotic to Reeb orbits at the positive and negative ends, and Gromov compactness extends to this setting in that one needs to include holomorphic buildings. One can imagine, for example, a sequence of $J$-holomorphic curves which look like unions of cylinders over Reeb orbits except on two intervals $(-A-C,-A) \cup (A,A+C)$ of the $\mathbb{R}$-coordinate, on which there is nontrivial behavior, where $C$ remains fixed but $A \rightarrow \infty$. In the limit, as these two intervals get farther apart, we break into two holomorphic curves in the symplectization. This forms what is sometimes called a holomorphic building. In general, there may be multiple levels in the limit, as in the figure below.
Here, a J-holomorphic curve gets stretched out in two places, shown in green, until eventually these green almost cylindrical parts get infinitely long. In the limit, we obtain a three story holomorphic building.
For more details in much more generality, one should consult this paper of Bourgeois, Eliashberg, Hofer, Wysocki, and Zehnder.
Now suppose that we have a strong filling $(W,\omega)$ of a contact manifold $(M,\xi)$. Then by definition, we have a Liouville vector field $V$ whose flow allows me to identify a neighborhood of $M$ with a subset of the symplectization of the form $((-\epsilon,0] \times M, d(e^t\alpha))$, where $\alpha$ is a contact form for $\xi$ on $M$. One can append the rest of the positive end of the symplectization, $([0,\infty) \times M, d(e^t\alpha))$, to form a completed symplectic manifold $(\widehat{W},\widehat{\omega})$. I can choose some compatible almost complex structure $J$ which far enough into the positive end is the restriction of some admissible $J_+$. In this case, one can study J-holomorphic curves, and we have a similar Gromov compactness statement: our curves can either bubble, or form holomorphic buildings whose lowest level lives in $(\widehat{W},J)$ and whose higher levels all live in $(\mathbb{R} \times M, J_+)$.
Theorem (vaguely stated): Under some technical analytical conditions, an $\mathbb{R}$-invariant foliation of $\mathbb{R} \times M$ by $J$-holomorphic curves of uniformly bounded energy will extend, with isolated nodal singularities, to the interior of $W$ (and hence to all of $\widehat{W}$).
Proof (sketch): We study the compactification of the moduli space $\mathcal{M}$ of finite energy J-holomorphic curves in $\widehat{W}$, and in particular, the closure of the component $\mathcal{M}_0$ of the moduli space containing a special leaf in the symplectization end. This component is 2-dimensional, and hence is precisely given by the foliating leaves around it (recall $\widehat{W}$ is 4-dimensional). The closure of this component yields the full J-holomorphic foliation, where some isolated finite subset of the leaves are actually nodal curves.
In the end, by considering on which curve in the foliation a point is located, this yields a map $\pi \colon \widehat{W} \rightarrow \overline{\mathcal{M}_0}$, where the fibers are symplectic (since they are J-holomorphic and $J$ is compatible with $\widehat{\omega}$) and generically smooth except with finitely many nodal singular fibers, forming what is called a symplectic Lefschetz fibration.
We will discuss this notion more in a future post, where we will also see that Stein fillings correspond in some sense to certain (“allowable”) symplectic Lefschetz fibrations over a disk. Hence, one can ask – are there some examples of contact manifolds $(M,\xi)$ on which we can find a finite energy foliation on the symplectization $\mathbb{R} \times M$ satisfying the correct analytical assumptions and such that $\overline{\mathcal{M}_0} = \mathbb{D}$? The answer is yes in the case when $(M,\xi)$ is supported by a so-called planar open book, as was proved in this paper by Wendl. We will define this in a future post, but this discussion implies (up to how to tackle the word “allowable”) that:
Corollary: For contact 3-manifolds supported by a planar open book, strong and Stein fillability are equivalent.
Along similar lines, one can find finite energy foliations for the standard 3-torus $(\mathbb{T}^3,\xi_0)$ (with contact structure induced by the restriction of the Liouville form on $T^*\mathbb{T}^2$ to the unit cotangent bundle). In this case, any strong filling, not just the standard one, would have $\overline{\mathcal{M}_0} = [0,1] \times S^1$, and so any strong filling of $(\mathbb{T}^3,\xi_0)$ arises as the boundary of a Lefschetz fibration to $[0,1] \times S^1$. Wendl then beefs this up to prove, for example, that every minimal strong filling of $(\mathbb{T}^3,\xi_0)$ is diffeomorphic to $\mathbb{T}^2 \times \mathbb{D}$.
Finally, one can use these results to obstruct strong fillability in a manner analogous to the Bishop family argument. That is, if $(M,\xi)$ has a finite energy foliation satisfying the technical analytic assumptions, then one should be able to extend that foliation to a strong filling. Recall that the foliation extended by considering the component of the moduli space containing some specified leaf $u_0$ satisfying some conditions. If there is some other leaf $u_1$ which is not diffeomorphic to $u_0$, then they cannot both be fibers of the same Lefschetz fibration, and so there couldn’t have been a strong filling in the first place. There are also other more technical versions of this argument, which for example allow one to reprove that positive Giroux torsion, i.e. the existence of a contact embedding of $([0,1] \times T^2, \ker(\cos(2\pi t)\,d\theta_1 + \sin(2\pi t)\,d\theta_2))$, obstructs strong fillability, originally proved by David Gay using gauge-theoretic methods which are completely avoided in this approach.
### On Barth-Geiges-Zehmisch’ The diffeomorphism type of fillings
As we have now seen twice, the technique of comparing moduli spaces of J-holomorphic curves to the topology of the situation in question is very powerful. We saw this both in our discussion last time of McDuff’s rational ruled classification, and we also just saw in our discussion of Wendl’s paper that the breaking which occurs in the compactification of a certain moduli space of curves in a strong filling of the positive end of a symplectization actually cooks up a Lefschetz fibration. One can view this paper as another instance of this way of thinking – here evaluation maps end up directly producing strong restrictions on the topology of a filling.
As we will see in a future post, Weinstein fillings of contact manifolds $(M^{2n+1},\xi)$ have a surgery theory consisting of handles of index at most $n$, and so they have the homotopy type of a CW complex of at most this dimension. A subcritical Weinstein filling is then one where all the handles have index at most $n-1$. The main theorem states that the existence of just one subcritical Weinstein filling places restrictions on the topology of any strong symplectically aspherical filling $(W,\omega)$. By symplectically aspherical, we mean that $\omega|_{\pi_2(W)} = 0$.
Theorem [BGZ]: If $(M,\xi)$ is a contact manifold of dimension $\geq 3$ admitting a subcritical Stein filling with the homotopy type of a CW complex of dimension $\ell_0$, then any strong symplectically aspherical filling $(W,\omega)$ satisfies
• $H_k(W) = H_k(M)$ for $k = 0,\ldots, \ell_0$ via the isomorphism induced by inclusion
• $H_k(W) = 0$ otherwise
• If $\pi_1(M) = 0$, then all strong aspherical fillings of $M$ are diffeomorphic.
Corollary [Eliashberg-Floer-McDuff ’91]: Every symplectically aspherical filling of the standard contact sphere is diffeomorphic to a ball.
Remark: For $S^3$, which is just the lens space $L(1,1)$, McDuff’s theorem from last time about fillings of lens spaces implies that there is a unique minimal filling up to diffeomorphism. By positivity of intersection, symplectically aspherical fillings are minimal, which implies the above result. But also, since $\omega$ is automatically a trivial cohomology class on the ball, McDuff’s result implies that the filling is in fact unique up to symplectomorphism. This result goes back to Gromov’s ’85 paper.
We won’t quite make it to a proof of the full theorem, but we will see some of the inner workings in the statement of the theorem stated below. We proceed by making an extra definition (not in Barth-Geiges-Zehmisch) to clarify the exposition.
Definition: Let $(M^{2n+1},\xi)$ be a connected contact manifold and $(V^{2n},\omega = d\lambda)$ a Liouville manifold of finite type (meaning it is modelled after a positive symplectization outside of some compact region). Let $\mathcal{L}$ be the corresponding Liouville vector field (satisfying $i_{\mathcal{L}}\omega = \lambda$). Then $M$ is called $V$-spliffable (yes, this is what we called it at Kylerec) if $M$ is contactomorphic to a hypersurface $\widetilde{M}$ in $V \times \mathbb{C}$ such that:
• $\widetilde{M}$ is convex, meaning it is transverse to the vector field $\mathcal{L} \oplus \partial_r$ where $\partial_r$ is the standard radial Liouville vector field on $\mathbb{C}$
• the infinite component of $V \times \mathbb{C} \setminus \widetilde{M}$ is modelled after the positive symplectization of $M$, meaning this component is the union of the positive flow of $\widetilde{M}$ along $\mathcal{L} \oplus \partial_r$
Remark: A contact manifold which is fillable by a subcritical Weinstein manifold is spliffable. This follows from a result of Cieliebak that subcritical Stein manifolds are split.
Theorem: Let $(W,\omega)$ be an aspherical strong filling of a $V$-spliffable contact manifold $M$. Then there exists a commutative diagram of the form
(and similarly with $H_*$ replaced with $\pi_1$).
Remark: The Eliashberg-Floer-McDuff theorem is already a corollary of this weaker statement, using that $S^{2n-1}$ is $\mathbb{C}^{n-1}$-spliffable, and using smooth topology.
Corollary: The unit cotangent bundle $M = S^*\Sigma$ of a closed manifold $\Sigma^n$ admits no subcritical Weinstein fillings.
Proof: Take $W$ to be the standard filling by the unit disk cotangent bundle, so that $H_n(W) \neq 0$. If $M$ admitted a subcritical Weinstein filling, then by the remark above $M$ would be $V$-spliffable with $V$ subcritical, so that $H_n(V) = 0$. But the theorem gives that $H_n(V)$ surjects onto $H_n(W) \neq 0$, a contradiction.
Proof of the main theorem:
We begin by embedding $M$ into $V \times \mathbb{D}$ so that it is convex (which we can do by the spliffability condition). The interior component determined by the splitting through $M$ can then be replaced by $W$ by gluing in (since strong fillings are set up to be Liouville near the boundary). Call the interior of this manifold $Z$. We can then choose a map $\mathbb{D} \rightarrow \mathbb{C}P^1$ such that the interior embeds diffeomorphically onto $\mathbb{C}P^1 \setminus \{\infty\}$. This embedding then gives us a smooth manifold $\widetilde{Z}$ which looks like $V \times \mathbb{C}P^1$ but with the interior component replaced by $W$. That is, $\widetilde{Z} = Z \cup (V \times \{\infty\})$.
We then wish to study some $J$-holomorphic curves on this manifold. We pick a compatible $J$ which away from $W$ is of the form $J_V \oplus i_{\text{std}}$, where $J_V$ is admissible (as discussed in the previous section) for $V$. We study the moduli space $\mathcal{M}$ of $J$-holomorphic spheres $u \colon \mathbb{C}P^1 \rightarrow \widetilde{Z}$ such that $[u] = [\{v\} \times \mathbb{C}P^1]$ (for some $v$ large enough so that this slice misses $W$). We really want this up to reparametrization, so we fix slice conditions to define this moduli space: that $u(-1) \in V\times \{a\}$, $u(+1) \in V \times \{b\}$, and $u(\infty) \in V \times \{\infty\}$, for some choice of $a,b \in \mathbb{C}P^1$ distinct and not $\infty$.
The key about positive symplectization ends is that admissibility of the almost complex structure $J$ implies a maximum principle for these curves. This implies the following two items.
• Since $V \times \mathbb{C}P^1$ looks like a symplectization, in $\widetilde{Z}$, any curve in our moduli space must have actually just been $\{v\} \times \mathbb{C}P^1$.
• Any curve in our moduli space intersecting $W$ must intersect $V \times \{\infty\}$. First of all, $W$ is symplectically aspherical, so any holomorphic sphere must leave $W$. But then, if it didn’t intersect $V \times \{\infty\}$, it would be contained completely in $V \times \mathbb{C}$, which contradicts this maximum principle.
Now, this moduli space is an oriented manifold of dimension $2n$, and it comes with an evaluation map of the form $\mathcal{M} \times \mathbb{C}P^1 \rightarrow \widetilde{Z}$. This map is actually proper and degree 1, which follows from the maximum principles just described, plus a little boost from positivity of intersection which implies that there is no need to worry about stable maps in the compactification of $\mathcal{M}$. This then restricts to a proper degree 1 evaluation map $\mathcal{M} \times \mathbb{C} \rightarrow Z$.
Hence, we obtain the following commutative diagram.
In homology, the right triangle becomes the desired triangle from the theorem.
As for the surjectivity part of the theorem, note that the leftmost vertical arrow is an isomorphism. Meanwhile, the bottom horizontal arrow is surjective for standard topological reasons (because one can cook up an explicit shriek map $\text{ev}_!$ which is right inverse to $\text{ev}_*$ on the level of homology).
## Kylerec – On J-holomorphic curves, part 1
This to-be-2-part-because-this-got-long post is a continuation of the series on Kylerec 2017 starting with the previous post, and covers most of the talks from Days 2-3 of Kylerec, focusing on the use of J-holomorphic curves in the study of fillings. I should mention that two more sets of notes, by Orsola Capovilla-Searle and Cédric de Groote, have been uploaded to the website on this page. So if you wish to follow along, feel free to follow the notes there, and in particular, the relevant talks I’ll be discussing in this post are:
Part 1
• Day 1 Talk 1 – The introductory talk by (mostly) Roger Casals (with some words by Laura Starkston)
• Day 2 Talk 2 – Roberta Gaudagni’s talk introducing J-holomorphic curves
• Day 2 Talk 3 – Emily Maw’s talk on McDuff’s rational ruled classification
Part 2
It should be obvious in what follows which parts of the exposition correspond to which talks, although what follows is perhaps a pretty biased account, with some parts amplified or added, and others skimmed or skipped.
### J-holomorphic curves – basics
Gromov introduced the study of J-holomorphic curves into symplectic geometry in his famous 1985 paper, immediately revolutionizing the field. One might wonder why we care about these objects, and the rest of this post (along with part 2) should be a testament to some (but certainly not all) aspects of the power of the theory.
The “J” in “J-holomorphic” refers to some choice $J$ of almost complex structure on a manifold $M^{2n}$. Given an almost complex manifold, a J-holomorphic curve is a map $u : (\Sigma,j) \rightarrow (M^{2n},J)$ such that $(\Sigma,j)$ is a Riemann surface and $J \circ du = du \circ j$. In the case where $(M,J)$ is a complex manifold, we see this is precisely what it means to be holomorphic.
We are mostly concerned with a choice of $J$ which is compatible with a symplectic structure $\omega$ on $M$. By this, we mean that the (0,2)-tensor $g(\cdot, \cdot) = \omega(\cdot,J\cdot)$ is a Riemannian metric. We say $J$ is tame if $\omega(v,Jv) > 0$ for each nonzero vector $v$ (note that $g$ as defined above is not necessarily symmetric in this case).
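The basic example: on $(\mathbb{R}^{2n},\omega_{std}=\sum_j dx_j\wedge dy_j)$, the standard complex structure $J_{std}$ (with $J_{std}\partial_{x_j}=\partial_{y_j}$ and $J_{std}\partial_{y_j}=-\partial_{x_j}$) is compatible, and recovers the Euclidean metric:
$g(\partial_{x_j},\partial_{x_j})=\omega_{std}(\partial_{x_j},J_{std}\partial_{x_j})=\omega_{std}(\partial_{x_j},\partial_{y_j})=1,$
and similarly $g(\partial_{y_j},\partial_{y_j})=1$, with all cross terms vanishing.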
Proposition: The space of compatible almost complex structures on a symplectic manifold $(M,\omega)$ is non-empty and contractible. So is the space of tame almost complex structures.
This suggests either:
• Studying the space of J-holomorphic curves into $M$ for some particular choice of $J$.
• Studying some invariant built from spaces of J-holomorphic curves which does not depend on the choice of $J$ compatible with (or tamed by) a given symplectic form $\omega$.
In walking down either of these paths, there are a large number of properties at our disposal. What is presented in this section is far from a conclusive list, and I have completely abandoned including proofs and motivation, so beware that there is a lot of subtlety involved in the analytic details. For many many many more details, consult this book of McDuff and Salamon.
Firstly, there is a dichotomy between somewhere injective curves and multiple covers. Some J-holomorphic curves will factor through branched covers, meaning that $u : \Sigma \rightarrow (M,J)$ factors as $(\Sigma,j) \rightarrow (\Sigma',j') \rightarrow (M,J)$ such that the first map is a branched cover of Riemann surfaces. J-holomorphic curves which are not multiply covered are called simple, and it turns out that simple curves are characterized by being somewhere injective, meaning there is some $z$ for which $u^{-1}u(z) = \{z\}$ and $du_z \neq 0$. Even better, somewhere injective means that $u$ is almost everywhere injective.
The main tool in the theory is the study of certain moduli spaces of J-holomorphic curves. There are many flavors of this, but we discuss a specific example to highlight the relevant aspects of the theory. The analytical details are typically easier for simple curves, so we denote by $\mathcal{M}^*(M,J)$ the moduli space of all simple $J$-holomorphic curves. It turns out to be fruitful to focus in on a specific piece of this space, so we often restrict to a given domain of definition, say some $\Sigma_g$, and also restrict the homology class $u_*[\Sigma]$ of the map $u : \Sigma_g \rightarrow (M,J)$ to some $A \in H_2(M)$. The main question is:
When is such a moduli space $\mathcal{M}_g^*(M,A,J)$ actually a smooth manifold?
This is certainly a subtle question, and it turns out that not every $J$ works. However, it is a theorem that for generic $J$, this moduli space is a smooth manifold of dimension $d = n(2-2g) + 2c_1(A)$, where $\dim M = 2n$.
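As a sanity check of this formula, consider lines in $\mathbb{C}P^2$, so $n = 2$, $g = 0$, and $A = [\mathbb{C}P^1]$ with $c_1(A) = 3$:
$d = 2(2-0) + 2 \cdot 3 = 10.$
After quotienting by the 6-real-dimensional reparametrization group $PSL_2(\mathbb{C})$ (see below), one is left with the familiar 4-dimensional space of lines in $\mathbb{C}P^2$, namely the dual projective plane.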
Given our nice moduli space, we also might be interested in what happens as we change our choice of $J$, so that we go from one regular choice to another. A generic path of such almost complex structures will give a smooth cobordism between the moduli spaces, a property which allows us to cook up invariants which do not depend, for example, on choices of $J$ compatible with a given symplectic structure.
To note a few variants of the discussion so far, sometimes we will study J-holomorphic disks with certain boundary conditions, or J-holomorphic curves with punctures sent to a certain asymptotic limit. In all cases, the same analytic machinery already swept under the rug (Fredholm theory) will give that the moduli spaces in question are smooth for generic choices of almost complex structure, and the dimension of this moduli space is given by some purely topological quantity (by, for example, the Atiyah-Singer index theorem).
One common thing to do is to quotient out by the group action given by reparametrizing the domain of a given J-holomorphic curve. That is, we consider the equivalence relation $u \sim u \circ \phi$ where $\phi : (\Sigma,j) \rightarrow (\Sigma,j)$ is a biholomorphism. A more careful author would probably distinguish between the map $u$ and the corresponding equivalence class, which is really what one should mean when one says curve. Hence, one can quotient our moduli spaces $\mathcal{M}^*$ by reparametrization to obtain moduli spaces of curves. Usually, these are the main objects of interest.
So now we have our nice moduli space, in whatever situation we desire, and we can ask about studying limits of J-holomorphic curves in that moduli space. In general, a limiting curve may fail to exist. The first reason for this is that any such curve $u : (\Sigma,j) \rightarrow (M,\omega,J)$ has an energy $E = \int_{\Sigma}u^*\omega$ attached to it (when $J$ is compatible with $\omega$). If this quantity diverges to $\infty$, then there can be no limiting curve. One can ask instead about what happens when the energy is bounded.
Consider the following sequence of holomorphic curves $u_n \colon \mathbb{C}P^1 \rightarrow \mathbb{C}P^1 \times \mathbb{C}P^1$ given by $z \mapsto (z, 1/(nz))$. We see that away from $z=0$, this is just converging to the curve $\mathbb{C}P^1 \times \{0\}$. But near $z = 0$, if we reparametrize the domain by $1/(nz)$, we see this converges to the sphere $\{0\} \times \mathbb{C}P^1$. In this case, our curve formed what is often called a bubble. More generally, a curve can split off many bubbles at a time. For an example of this, consider instead $u_n \colon \mathbb{C}P^1 \rightarrow \mathbb{C}P^1 \times \mathbb{C}P^1 \times \mathbb{C}P^1$ given by $z \mapsto (z,1/(nz),1/(n^2z))$, in which a new bubble forms at $\{0\} \times \{\infty\} \times \mathbb{C}P^1$ in addition to the one discussed above. More generally, a sequence of curves can limit to a curve with trees of bubbles sticking out.
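To see the first bubble explicitly, substitute $z = 1/(nw)$:
$u_n(1/(nw)) = \left(\frac{1}{nw},\, w\right) \rightarrow (0, w),$
so the reparametrized maps converge to the sphere $\{0\} \times \mathbb{C}P^1$, as claimed.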
Such bubble trees are called stable or nodal or cusp curves (or probably a lot of other things), depending upon how old your reference is and to whom you talk. The incredible theorem, which goes under the name of Gromov compactness, is that this is the only phenomenon which precludes a limit from existing. We state this vaguely as follows:
Theorem [Gromov ’85]: The moduli space of curves of energy bounded by some constant $E$ (modulo reparametrization of domain) can be compactified by adding in stable curves of total energy bounded by $E$.
Another generally important tool is that of the evaluation map. Suppose that we wish to study the moduli space $\mathcal{M}_g^*(M,A,J)$ of simple J-holomorphic maps $u : (\Sigma_g,j) \rightarrow (M,J)$ in the homology class $A \in H_2(M)$. Suppose $G = \text{Aut}(\Sigma_g,j)$ is the group of biholomorphisms of $(\Sigma_g,j)$. Then the group $G$ acts on $\mathcal{M}_g^*(M,A,J) \times \Sigma_g$ by $\phi \cdot (u,z) = (u \circ \phi^{-1},\phi(z))$. Notice then that the evaluation map $(u,z) \mapsto u(z)$ only depends on the orbit, and hence descends to a map $\text{ev} : \mathcal{M}_g^*(M,A,J) \times_G \Sigma_g \rightarrow M$. Proving enough properties of such an evaluation map sometimes allows us to compare the smooth topology of $\mathcal{M}_g^*(M,A,J)$ to that of $M$. There are other variants of this – sometimes we wish to evaluate at multiple points, or sometimes we consider J-holomorphic discs and want to evaluate along boundary points. And often the evaluation map extends to the compactified moduli spaces considered above.
Finally, we come to dimension 4, where curves might actually generically intersect each other. With respect to these intersections, there are two key results to highlight. The first is positivity of intersection (due to Gromov and McDuff), which states that if any two J-holomorphic curves intersect, then the algebraic intersection number at each intersection point is positive (and precisely equal to 1 at transverse intersections). This can be thought of as some sort of rudimentary version of a so-called adjunction inequality (due to McDuff), which states that if $u \colon (\Sigma,j) \rightarrow (M,J)$ is a simple J-holomorphic curve representing the class $A$ with geometric self-intersection number $\delta(u)$, then
$c_1(TM,J) \cdot A + 2\delta(u) \leq \chi(\Sigma) + A \cdot A$.
Further, when $u$ is immersed and with transverse self-intersections, this is an equality, yielding an adjunction formula.
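For example, for a simple rational curve of degree $d$ in $\mathbb{C}P^2$ (so $c_1(A) = 3d$, $A \cdot A = d^2$, and $\chi(S^2) = 2$), the inequality reads
$3d + 2\delta(u) \leq 2 + d^2, \qquad \text{i.e.} \qquad \delta(u) \leq \frac{(d-1)(d-2)}{2},$
recovering the classical bound on the number of double points of a rational plane curve of degree $d$.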
### A first example – Fillable implies tight (in 3 dimensions)
On a first pass, I want to expand upon the example of fillability implying tightness in three dimensions which Roger Casals discussed in his introductory talk. Really, we prove the contrapositive – that an overtwisted contact manifold cannot be filled. For simplicity, we will consider exact fillings. This result is typically attributed to Gromov and Eliashberg, referencing Gromov’s ’85 paper as well as Eliashberg’s paper on filling by holomorphic discs from ’89. This is essentially the same proof in spirit, although we take a little bit of a cheat by considering exact fillings.
Firstly, recall that an overtwisted contact manifold $(M^3,\xi)$ is one such that there exists an embedding of a disk $\phi : D^2 \hookrightarrow M$, such that the so-called characteristic foliation $(d\phi)^{-1}\xi$ on $D^2$, which is actually a singular foliation, looks like the following image, with one singular point in the center and a closed leaf as boundary.
So now suppose $(M^3,\xi)$ has an exact filling $(W^4,\omega = d\lambda)$. We study the space of certain J-holomorphic disks with boundary on the overtwisted disk. The key is that the overtwisted disk $D$ actually has a canonical neighborhood in $W$ up to symplectomorphism, and one can pick an almost complex structure $J$ to be in a standard form in this neighborhood. It turns out that with this standard choice, in a close enough neighborhood of the singular point $p$ in the interior of $D$, all somewhere injective J-holomorphic curves are precisely those living in a 1-parameter family, called the Bishop family, which radiate outwards from the singular point $p$.
Let us be a bit more precise, so that we can see this Bishop family explicitly. Consider the standard 3-sphere $S^3 \subset \mathbb{C}^2$, with its standard contact structure given by the complex tangencies, i.e. $\xi = TS^3 \cap iTS^3$, with $i$ the standard complex structure on $\mathbb{C}^2$. Then consider the disk given by $z \mapsto (z, \sqrt{1-|z|^2})$. The characteristic foliation on this disk looks like the characteristic foliation near the center of the overtwisted disk, so a neighborhood of this disk in $D^4 \subset \mathbb{C}^2$ yields a model for a neighborhood of the center of the overtwisted disk. We may assume the almost complex structure in this neighborhood is just given by the standard one, $i$. Then the Bishop family is just the 1-parameter family of holomorphic disks given by $z \mapsto (sz,\sqrt{1-s^2})$ for $s$ a real constant near 0. That these are all of the somewhere injective disks is a relatively easy exercise in analysis. Namely, suppose we had such a disk of the form $z \mapsto (v_1(z),v_2(z))$. Then since boundary points are mapped to the overtwisted disk, $v_2(\partial D^2) \subset \mathbb{R}$. But each component of $v_2$ is harmonic, hence satisfies a maximum principle. Therefore, $v_2(D^2) \subset \mathbb{R}$. But by holomorphicity, $v_2$ cannot have real rank 1 and so must be constant. Hence, any disk in consideration must have $v_2$ equal to a real constant.
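As a sanity check that the Bishop disks have the right boundary condition: when $|z| = 1$, we have
$(sz, \sqrt{1-s^2}) = (w, \sqrt{1-|w|^2}), \qquad w = sz, \quad |w| = s,$
so the boundary circle of each disk in the family does lie on the model disk above.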
All of these disks live in $D^4 \subset \mathbb{C}^2$, but in particular in the slice where the second component $z_2$ is real, so we can draw this situation in $\mathbb{R}^3$ by forgetting the imaginary part of $z_2$. This is depicted in the following figure.
This Bishop family lives in some component of the moduli space of somewhere injective J-holomorphic disks with boundary on $D$. Perturbing $J$, one can assume this component is actually a smooth 1-dimensional manifold. We can compactify this moduli space by including stable maps, i.e. disks with bubbles, via Gromov compactness. On the Bishop family end, we see explicitly that the limit is just the constant disk at the point $p$. So there must be another stable curve at the other boundary of this moduli space. We prove no such other stable curve can exist.
Similar to how we proved that the only disks completely contained in a neighborhood of the singular point on the overtwisted disk must have been part of the Bishop family, one can use a maximum principle argument to conclude that every holomorphic disk entering this neighborhood must have been in the Bishop family. Alternatively, one can use a modified version of positivity of intersections to conclude that continuing the moduli space away from the Bishop family, these boundaries have to continue radiating outward. Either way, the moduli space has to stay away from the central singularity of the overtwisted disk $D$. But also, the boundary of a J-holomorphic disk cannot be tangent to $\xi$, and in particular cannot be tangent to $\partial D$. This is by a maximum principle which comes from analytic convexity properties of a filled contact manifold.
The only possible explanation is that this is a stable curve with some sphere bubble having formed in the interior of $(W,\lambda)$. But one checks that the relation $g(\cdot,\cdot) = \omega(\cdot,J\cdot)$ implies that for a $J$-holomorphic sphere $u : (S^2,j) \rightarrow (W,J)$, we have $\text{Area}_g(u) = \int_{S^2}u^*\omega$. This vanishes by Stokes’ Theorem since $\omega = d\lambda$ is exact, and so $u$ must be constant, and so there is no bubble. In other words, this cannot explain the other boundary point of the component of the moduli space containing the Bishop family, so this yields a contradiction.
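The identity between area and symplectic area is worth spelling out. In a local conformal coordinate $s+it$ on the domain, the J-holomorphic equation gives $u_t = Ju_s$, so
$u^*\omega(\partial_s,\partial_t) = \omega(u_s, Ju_s) = g(u_s,u_s) = |u_s|_g^2 \geq 0,$
and since $u_s$ and $u_t = Ju_s$ are $g$-orthogonal of equal length, this is exactly the Riemannian area density of $u$.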
### On McDuff’s The structure of rational and ruled symplectic 4-manifolds
Emily Maw’s talk from the workshop followed this paper by Dusa McDuff. In what follows, we shall consider triples $(V,C,\omega)$ such that $(V,\omega)$ is a smooth closed symplectic 4-manifold and $C$ is a rational curve, by which we mean a symplectically embedded $S^2$. We call a rational curve $C$ exceptional if $C \cdot C = -1$ with respect to the intersection product on $H_2(V)$ (with respect to its orientation coming from $\omega$). We say $(V,C,\omega)$ is minimal if $V \setminus C$ contains no exceptional curves. The main theorem is as follows:
Theorem [McDuff ’90]: If $(V,C,\omega)$ is minimal and $C \cdot C \geq 0$, then $(V,\omega)$ is symplectomorphic to either:
• $(\mathbb{C} P^2, \omega_{FS})$, in which case $C$ is either a complex line or a quadric (up to symplectomorphism).
• A symplectic $S^2$-bundle over a compact surface $M$, in which case $C$ is either a fiber or a section (up to symplectomorphism).
Before describing the proof, which is the part involving J-holomorphic curve techniques, we apply this to strong fillings. We shall concern ourselves with fillings of the lens spaces $L(p,1)$ with their standard contact structures, where $p > 0$ is an integer. Let us first define this contact structure. Recall that the standard contact structure on $S^3$ is the one coming from complex tangencies by viewing $S^3 \subset \mathbb{C}^2$. Then the standard contact structure on $L(p,1)$ is the one given by the quotient $L(p,1) = S^3/(\mathbb{Z}/p\mathbb{Z})$ where the action of $1 \in \mathbb{Z}/p\mathbb{Z}$ given by $(z_1,z_2) \mapsto e^{2\pi i/p}(z_1,z_2)$ preserves the contact structure, so that it descends.
Theorem [McDuff ’90]: The lens spaces $L(p,1)$ all have minimal symplectic fillings $(Z,\omega)$, and when $p \neq 4$, these fillings are unique up to diffeomorphism, and further up to symplectomorphism upon fixing the cohomology class $[\omega]$. The space $L(4,1)$ has two nondiffeomorphic minimal fillings.
Proof (sketch): The complex line bundle $\mathcal{O}(p)$ over $S^2$ comes with a natural symplectic structure, and this forms a cap for $L(p,1)$. The zero section of $\mathcal{O}(p)$ is a rational curve of self intersection $p > 0$. McDuff’s explicit classification includes examples $(V,C)$ for any such given $p$, and $V \setminus C$ thus gives a minimal filling for $L(p,1)$. The remaining statements come from a more detailed analysis of the classification result.
Now, I will not go through all of the details of McDuff’s proof of the main theorem, but I will highlight where various J-holomorphic tools appear in the proof. Let me break up the proof into two big pieces.
Step 1: “Mega-Lemma” Consider $(V,C,\omega)$ minimal as above. There is a tame almost complex structure $J$ such that $[C]$ can be represented by a $J$-holomorphic stable curve of the form $S = S_1 \cup \cdots \cup S_m$, where:
• Each $A_i := [S_i]$ is $J$-indecomposable (meaning any stable curve representing $A_i$ must actually be a legitimate curve of one component)
• The almost complex structure $J$ is regular for all curves in the class $A_i$.
• The $S_i$ are distinct and embedded curves of self-intersection -1, 0, or 1, with at least one index for which $A_i \cdot A_i \geq 0$.
We didn’t prove this at the workshop, so I won’t discuss it in detail here. But this is a major reduction into cases. For example, if $m = 1$ and $S \cdot S = 1$, then it had already been shown that this implies that $V = \mathbb{C}P^2$. This bleeds into…
Step 2: Using the evaluation maps constructively
Let us discuss the proof of this last fact briefly. The idea is as follows. We consider the moduli space $\mathcal{M}^*(A,J)$ consisting of simple holomorphic spheres representing the class $A = [S]$. This comes with an evaluation map of the form
$\mathcal{M}^*(A,J) \times_{G} (S^2 \times S^2) \rightarrow V \times V$
where $G$ is the group of automorphisms of $S^2$. Both sides have dimension 8 and this evaluation map is injective away from the diagonal since $A \cdot A = 1$ and we have positivity of intersection. Therefore, this map has degree 1, and so any pair of distinct points on $V$ has a unique curve passing through it. This is enough to show $V = \mathbb{C}P^2$.
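To make the dimension count explicit using the formulas from earlier in this post: since the curves here are embedded spheres with $A \cdot A = 1$, adjunction gives $c_1(A) = \chi(S^2) + A \cdot A = 3$, so
$\dim \mathcal{M}^*(A,J) = 2(2-0) + 2 \cdot 3 = 10,$
and hence $\dim\left(\mathcal{M}^*(A,J) \times_G (S^2 \times S^2)\right) = 10 + 4 - 6 = 8 = \dim(V \times V)$, since $G = PSL_2(\mathbb{C})$ has real dimension 6.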
Let us do another case, but show that the adjunction formula also comes into play.
Proposition: Suppose $B$ is a simple homology class in $(V,\omega)$ (i.e. is not a multiple of another homology class) with $B \cdot B = 0$, and suppose $F$ is a rational embedded sphere representing $B$. Then there is a fibration $\pi \colon V \rightarrow M$ with symplectic fibers and such that $F$ is one of the fibers.
Proof (sketch): The idea is to consider the moduli space $\mathcal{M}^*_{0,1}(V,J,B)$ of rational embedded $J$-holomorphic curves with 1 marked point $p \in S^2$, where $J$ is chosen to be tame and such that $F$ is itself a $J$-holomorphic curve, and where we have quotiented by reparametrization of the domain. Then one can compute the dimension of this moduli space at a given curve $C$ in the appropriate way as
$d = \dim V + 2c_1(TV) \cdot [C] - 4$,
where the last -4 comes from quotienting by the subgroup of $PSL_2(\mathbb{C})$ fixing the marked point. Applying adjunction for the curve represented by $F$, so that $[C] \cdot [C] = 0$, yields $d = 4$. We also have an evaluation map
$\text{ev} : \mathcal{M}^*_{0,1}(V,J,B) \rightarrow V$
Since $B \cdot B = 0$, there is at most one $B$-curve through each point in $V$, so it follows that this evaluation map has degree at most 1, and hence equal to 1 by regularity. This yields the structure of a fibration $\pi : V \rightarrow M$ where the fibers are precisely the curves in our moduli space. Since the fibers are holomorphic, they are symplectic by the taming condition.
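Concretely, applying the adjunction formula to the embedded sphere $F$ (so $\delta = 0$ and $B \cdot B = 0$) gives $c_1(TV) \cdot B = \chi(S^2) + B \cdot B = 2$, and therefore
$d = \dim V + 2c_1(TV) \cdot B - 4 = 4 + 4 - 4 = 4,$
matching the dimension claimed in the proof.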
## Kylerec Overview
Updates (June 11, 2017): Added link to other notes from Kylerec workshop, and fixed an error caught by Chris Wendl in the comments.
I’m very excited to be joining this blog!
This is the first of a series of posts about the content of the Kylerec workshop, held May 19-25 near Lake Tahoe, which focused on fillings of contact manifolds. Under the guidance of our mentors, Roger Casals, Steven Sivek, and Laura Starkston, we worked from the basic theory of fillings through some state-of-the-art results. Many of the basics have been discussed on this blog already in Laura Starkston’s posts from January 2013: Part 1 and Part 2 on Fillings of Contact Manifolds. For a more thorough introduction to types of filling and the differences between them, I suggest reading those posts (and the accompanying comments by Paolo Ghiggini and Chris Wendl). This post will remain self-contained anyway.
One can find notes that I took (except for three lectures, due to technical difficulties) at the Kylerec 2017 tab at this link. Other notes (with shorter load times, and including the ones I’m missing) are now posted on the Kylerec website here.
Comments and corrections are very welcome!
### Definitions
We quickly review the various notions of fillings of a contact manifold. We shall always assume that our manifolds are oriented and contact structures cooriented. As a starting point, one might be interested in smooth fillings of contact manifolds. It turns out that this problem is rather uninteresting. Every contact manifold of dimension $2n+1$ has a structure group which can be reduced to $U(n) \times 1$, and the complex bordism group is well known to satisfy $\Omega^U_{2n+1} = 0$. As a consequence, every contact manifold is smoothly fillable. We must therefore consider fillability questions which extend beyond the realm of complex bordism in order to discover interesting phenomena.
These notions are as follows, in (strictly!) increasing order of strength.
• We say a contact 3-manifold $(M^3,\xi)$ is weakly fillable if it is the smooth boundary of a symplectic manifold $(W^4,\omega)$ such that $\omega|_{\xi} > 0$. There is a generalization in higher dimensions due to Massot, Niederkrüger, and Wendl, but we omit it here. (Simply requiring that $\omega|_{\xi}$ is a positive symplectic form in the same conformal symplectic class as the natural one on $\xi$, i.e. is $d\alpha|_{\xi}$ up to scaling where $\alpha$ is a contact form for $\xi$, implies strong fillability in higher dimensions, by McDuff.)
• We say a contact manifold $(M^{2n-1},\xi)$ is strongly fillable if there is a weak filling $(W^{2n},\omega)$ such that one can find a Liouville vector field $V$ in a neighborhood of $M$, i.e. one such that $\mathcal{L}_V\omega = \omega$, such that $(\iota_V\omega)|_M$ gives a (properly cooriented) contact form for $\xi$.
• We say a contact manifold $(M^{2n-1},\xi)$ is exactly fillable if there is a strong filling such that the Liouville vector field $V$ can be extended to all of $(W,\omega)$. In other words, $M$ is the contact boundary of a Liouville domain $(W,\omega = d\alpha)$ where $\alpha = \iota_V\omega$.
• We say a contact manifold $(M^{2n-1},\xi)$ is Weinstein (or Stein) fillable if it is exactly fillable by some $(W,\omega = d\alpha)$, where $\alpha = \iota_V\omega$, such that there is also a Morse function $f$ on $W$ such that $V$ is gradient-like for $f$ and $M$ is a maximal regular level set. In other words, $M$ is the contact boundary of a Weinstein domain.
As a final remark, there is a notion of overtwistedness in contact manifolds. In 3-dimensions, this is characterized by the existence of an overtwisted disk. This was known to obstruct all types of fillings, due to Eliashberg and Gromov. In higher dimensions, overtwistedness was defined in a paper of Borman, Eliashberg, and Murphy, which was discussed on this blog by Laura Starkston and Roger Casals, starting with this post and concluding with this one. This definition implies the existence of a plastikstufe as defined by Niederkrüger, which had been already shown to obstruct fillings (strongly in the same paper, weakly in the paper by Massot, Niederkrüger, and Wendl). In other words, in any dimension, overtwistedness implies not fillable. A contact manifold which is not overtwisted is called tight, so equivalently, fillable implies tight, in all dimensions.
To summarize this section:
Tight < Weakly fillable < Strongly fillable < Exactly fillable < Weinstein fillable
where each condition turns out to be strictly stronger than the previous one.
### Two Motivating Questions
Question 1: What tools do we have at each level of fillability?
The easiest type of filling to understand is that of the Weinstein filling, since Weinstein domains have an explicit surgery theory, which lends itself to concrete geometric descriptions. Most notably, a Weinstein domain can be thought of as a symplectic Lefschetz fibration, which naturally has an open book decomposition on its boundary whose monodromy is a product of positive Dehn twists. Hence, Weinstein fillings and fillability can be studied through studying supporting open book decompositions for a contact manifold $(M,\xi)$.
Another rather powerful tool is the study of J-holomorphic curves. Let us provide a quick example: the proof that fillability of a contact 3-manifold implies tightness. One assumes by way of contradiction that an overtwisted contact 3-manifold has a filling. Then one considers a certain compact 1-dimensional moduli space of J-holomorphic curves with boundary on the overtwisted disk. One finds an explicit component of this moduli space which has one endpoint (a constant disk) but cannot have another endpoint, which contradicts the compactness of the moduli space. In higher dimensions, studying similar moduli spaces of J-holomorphic curves yields obstructions to fillings.
There are some other miscellaneous techniques. For example, Liouville domains have attached to them a symplectic homology, which provides another tool for the case of exact fillings. And in the case of 3-dimensional contact manifolds, one can also study the Seiberg-Witten invariants of a given filling.
Question 2: How can we study the topology of different fillings? Or tell when fillings are distinct even if they have the same homology?
J-holomorphic curves come with extra evaluation maps which allow one to study how the moduli space of curves compares to some underlying topology, e.g. of the filling or of the contact manifold. This is a technique which comes up many times in different contexts, and it sometimes allows us to produce maps to or from the filling or the contact manifold in question which do not exist for any other obvious reason.
Similarly, symplectic homology in its two flavors $SH$ and $SH^{+}$ fits into an exact triangle with Morse homology, and so one can understand the topology of a filling from its symplectic homology. One might be interested, for example, in studying fillings with $SH = 0$, in which case the homology of the filling is completely determined by $SH^{+}$. Alternatively, $SH$ can be used directly to distinguish fillings.
### Overview of Kylerec
More detailed posts about the contents of Kylerec will appear in future blog posts, but I will outline here precisely what was covered.
Day 1: After an overview talk, we spent the rest of the day studying the surgery theory of Weinstein manifolds, and began our study of the correspondence between Weinstein fillings, Lefschetz fibrations, and open book decompositions.
Day 2: We highlighted some results from this correspondence, and then turned towards an introduction to the theory of J-holomorphic curves, including applications of this theory to fillings via McDuff’s classification result as well as Wendl’s J-holomorphic foliations.
Day 3: On our short day, we first discussed some applications of J-holomorphic curves to high-dimensional fillings due to Barth, Geiges, and Zehmisch (for example reproving the result of Eliashberg, Floer, and McDuff that the standard sphere has a unique aspherical filling), and applied Wendl’s theorem (as discussed in Day 2) following a paper of Plamenevskaya and Van Horn-Morris to show that many contact structures on the lens spaces $L(p,1)$ have unique Weinstein fillings up to deformation equivalence.
Day 4: We discussed the Seiberg-Witten equations, how they appear in symplectic geometry, and how they are used by Lisca and Matic to distinguish contact structures on homology 3-spheres which are homotopic (through plane fields) but not isotopic (through contact structures). We also discussed how Calabi-Yau caps, as defined by Li, Mak, and Yasui, can be used to prove certain uniqueness results on fillings of unit cotangent bundles of surfaces, as in this paper by Sivek and Van Horn-Morris.
Day 5: On our last day, we focused mainly on symplectic homology (and its variants). In one talk, we performed computations which allowed us to distinguish contact structures on standard spheres (see Ustilovsky’s paper) and to compute the symplectic homology of fillings of certain Brieskorn spheres (see Uebele’s paper). We also discussed Lazarev’s generalization of M.-L. Yau’s theorem (that subcritical Weinstein fillings have isomorphic integral cohomology) to the flexible case.
## Some open computational problems in link homology and contact geometry
I’m thrilled to join everyone at the best-named math blog.
I am just home from Combinatorial Link Homology Theories, Braids, and Contact Geometry at ICERM in Providence, Rhode Island. The conference was aimed at students and non-experts with a focus on introducing open problems and computational techniques. Videos of many of the talks are available at ICERM’s site. (Look under “Programs and Workshops,” then “Summer 2014”.)
One of the highlights of the workshop was the ‘Computational Problem Session’ MC’d by John Baldwin with contributions from Rachel Roberts, Nathan Dunfield, Johanna Mangahas, John Etnyre, Sucharit Sarkar, and András Stipsicz. Each spoke for a few minutes about open problems with a computational bent.
I’ve done my best to relate all the problems in order with references and some background. Any errors are mine. Corrections and additions are welcome!
### Rachel Roberts
Contact structures and foliations
Eliashberg and Thurston showed that a $C^2$ codimension-one foliation of a three-manifold can be $C^0$-approximated by a contact structure (as long as it is not the foliation of $S^1 \times S^2$ by spheres). Vogel showed that, with a few other restrictions, any two approximating contact structures lie in the same isotopy class. In other words, there is a map $\Phi$ from $C^2$, taut, oriented foliations to contact structures modulo isotopy for any closed, oriented three-manifold.
Geography: What is the image of $\Phi$?
Botany: What do the fibers of $\Phi$ look like?
The image of $\Phi$ is known to be contained within the space of weakly symplectically fillable and universally tight contact structures. Etnyre showed that if one removes “taut”, then $\Phi$ is surjective. Etnyre and Baldwin showed that $\Phi$ doesn’t “see” universal tightness.
L-spaces and foliations
A priori, the rank of the Heegaard Floer homology group associated to a rational homology three-sphere $Y$ is bounded below by the order of its first homology group: $\text{rank}(\hat{HF}(Y)) \geq |H_1(Y; \mathbb{Z})|$. An L-space is a rational homology three-sphere for which equality holds.
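For example, lens spaces are L-spaces: $\text{rank}(\hat{HF}(L(p,q))) = p = |H_1(L(p,q);\mathbb{Z})|$. Indeed, this family is the origin of the name.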
Conjecture: Y is an L-space if and only if it does not contain a taut, oriented, $C^2$ foliation.
Ozsváth and Szabó showed that L-spaces do not contain such foliations. Kazez and Roberts proved that the theorem applies to a class of $C^0$ foliations and perhaps all $C^0$ foliations. The classification of L-spaces is incomplete and we are led to the following:
Question: How can one prove the (non-)existence of such a foliation?
Existing methods are either ad hoc or difficult (e.g. show that the fundamental group does not act non-trivially on a simply-connected (but not necessarily Hausdorff!) one-manifold). Roberts suggested that Agol and Li’s algorithm for detecting “Reebless” foliations via laminar branched surfaces may be useful here, although the algorithm is currently impractical.
### Nathan Dunfield
What do random three-manifolds look like?
First of all, how does one pick a random three-manifold? There are countably many compact three-manifolds (because there are countably many finite simplicial complexes, or because there are countably many rational surgeries on the countably many links in $S^3$, or because…) so there is no uniform probability distribution on the set of compact orientable three-manifolds.
To dodge this issue, we first consider random objects of bounded complexity, then study what happens as we relax the bound. (A cute, more modest example: the probability that two random integers are relatively prime is $6/\pi^2$.1). Fix a genus $g$ and write $G$ for the mapping class group of the oriented surface of genus $g$. Pick some generators of $G$. Let $\phi$ be a random word of length $N$ in the chosen generators. We can associate a unique closed, orientable three-manifold to $\phi$ by identifying the boundaries of two genus $g$ handlebodies via $\phi$.
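To make the sampling procedure concrete, here is a minimal sketch (my own illustration, not any speaker's code); the generator names are placeholders, and actually building the manifold and computing its invariants would require software such as SnapPy or flipper, which is not shown.

```python
import random

def random_gluing_word(generators, N):
    """Sample a random word of length N in the given generators and their
    formal inverses; the word plays the role of the gluing map phi of a
    random Heegaard splitting."""
    letters = list(generators) + [g + "^-1" for g in generators]
    return [random.choice(letters) for _ in range(N)]

# Placeholder names for five Dehn-twist generators of the genus-2
# mapping class group (e.g. the Humphries generators).
word = random_gluing_word(["t1", "t2", "t3", "t4", "t5"], N=100)
print(" ".join(word[:10]))
```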
Metaquestion: How is your favorite invariant distributed for random 3-manifolds of genus $g$? How does it behave as $g \to \infty$? Experiment! (Ditto for knots, links, and their invariants.)
Challenge: Show that your favorite conjecture about some class of three-manifolds or links holds with positive probability. For example:
Conjecture: a random three-manifold is not an $L$-space, has left-orderable fundamental group, admits a taut foliation, and admits a tight contact structure.
These methods can also be used to prove more traditional-sounding existence theorems. Perhaps you’d like to show that there is a three-manifold of every genus satisfying some condition. It suffices to show that a random three-manifold of fixed genus satisfies the condition with positive probability! For example,
Theorem: (Lubotzky-Maher-Wu, 2014): For any integers $k$ and $g$ with $g \geq 2$, there exist infinitely many closed hyperbolic three-manifolds which are integral homology spheres with Casson invariant $k$ and Heegaard genus $g$.
### Johanna Mangahas
What do generic mapping classes look like?
Here are two sensible ways to study random elements of bounded complexity in a finitely-generated group.
• Fix a generating set. Look at all words of length N or less in those generators and their inverses. (word ball)
• Fix a generating set and the associated Cayley graph. Look at all vertices within distance N of the identity. (Cayley ball)
A property of elements in a group is generic if a random element has the property with probability tending to 1 as $N \to \infty$, so the meaning of “generic” differs with the meaning of “random.” For example, consider the group $G = \langle a, b \rangle \oplus \mathbb{Z}$ with generating set $\{(a,0), (b,0), (id,1)\}$. The property “is nonzero in the second coordinate” is generic for the first notion but not the second: in a long random word the second coordinate is an exponent sum, which vanishes with probability decaying like $1/\sqrt{N}$, while in the Cayley ball the exponential growth of $\langle a, b \rangle$ forces a definite fraction of elements to have second coordinate zero. So we are stuck/blessed with two different notions of genericity.
Recall that the mapping class group of a surface is the group of orientation-preserving homeomorphisms modulo isotopy. Thurston and Nielsen showed that a mapping class $\phi$ falls into one of three categories:
• Finite order: $\phi^n = id$ for some $n$.
• Reducible: $\phi$ preserves some finite collection of disjoint essential simple closed curves (up to isotopy).
• Pseudo-Anosov: there exists a transverse pair of measured foliations which $\phi$ stretches by $\lambda$ and $1/\lambda$.
The first two classes are easier to define, but the third is generic.
Theorem: (Rivin and Maher, 2006) Pseudo-Anosov mapping classes are generic in the first sense.
Question: Are pseudo-Anosov mapping classes generic in the second sense?
The braid group on n strands can be understood as the mapping class group of the disk with n punctures. But the braid group is not just a mapping class group; it admits an invariant left-order and a Garside structure. Tetsuya Ito gave a great minicourse on both of these structures!
Fast algorithms for the Nielsen-Thurston classification
Question: Is there a polynomial-time algorithm for computing the Thurston-Nielsen classification of a mapping class?
Matthieu Calvez has described an algorithm to classify braids in $O(\ell^2)$ where $\ell$ is the length of the candidate braid. The algorithm is not yet implementable because it relies on knowledge of a function $c(n)$ where $n$ is the index of the braid. These numbers come from a theorem of Masur and Minsky and are thus difficult to compute. These difficulties, as well as the power of the Garside structure and other algorithmic approaches, are described in Calvez’s linked paper.
Challenge: Implement Calvez’s algorithm, perhaps partially, without knowing $c(n)$.
Mark Bell is developing Flipper which implements a classification algorithm for mapping class groups of surfaces.
Question: How fast are such algorithms in practice?2
### John Etnyre
Contactomorphism and isotopy of unit cotangent bundles
For background on all matters symplectic and contact see Etnyre’s notes.
Let $M$ be a manifold of any (!) dimension. The total space of the cotangent bundle $E = T^*M$ is naturally symplectic: $E$ supports the Liouville one-form $\lambda$, characterized by $\alpha^*(\lambda) = \alpha$ for any one-form $\alpha$ on $M$, where the pullback is along $\alpha$ viewed as a section $\alpha : M \to T^*M$. The form $d\lambda$ is symplectic on $T^*M$.
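In local coordinates $(q_i)$ on $M$ with dual fiber coordinates $(p_i)$, these are the standard formulas
$\lambda = \sum_i p_i\, dq_i, \qquad d\lambda = \sum_i dp_i \wedge dq_i.$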
Inside the cotangent bundle is the unit cotangent bundle $S^*M = \{(p,v) \in T^*M : |v| = 1\}$, defined using any auxiliary metric. (This is not a vector bundle!) The one-form $\lambda$ restricts to a contact form on $S^*M$.
Fact: If the manifolds $M$ and $N$ are diffeomorphic, then their unit cotangent bundles $S^*M$ and $S^*N$ are contactomorphic.
Hard question: In which dimensions greater than two is the converse true?
This question is attributed to Arnol’d, perhaps incorrectly. The converse is known to be true in dimensions one and two, and also in the case that $M$ is the three-sphere (exercise!).
Tractable (?) question: Does contactomorphism type of unit cotangent bundles distinguish lens spaces from each other?
Also intriguing is the relative version of this construction. Let $K$ be an embedded (or immersed with transverse self-intersections) submanifold of $M$. Define the unit conormal bundle of $K$ to be $L_K = \{w \in S^*M : w(v) = 0, \forall v \in TK\}$. You can think of it as the unit sphere bundle of the conormal bundle to $K$. It is a Legendrian submanifold of the unit cotangent bundle $S^*M$.
Fact: If $K_1$ is smoothly isotopic to $K_2$ then $L_{K_1}$ is Legendrian isotopic to $L_{K_2}$.
Relative question: Under what conditions is the converse true?
Etnyre noted that contact homology may be a useful tool here. Lenny Ng’s “A Topological Introduction to Knot Contact Homology” has a nice introduction to this problem and the tools to potentially solve it.
### Sucharit Sarkar
How many Szabó spectral sequences are there, really?
Ozsváth and Szabó constructed a spectral sequence from the Khovanov homology of a link to the Heegaard Floer homology of the branched double cover of $S^3$ over that link. (There are more adjectives in the proper statement.) This relates two homology theories which are defined very differently.
Challenge: Construct an algorithm to compute the Ozsváth-Szabó spectral sequence.
Sarkar suggested that bordered Heegaard Floer homology may be useful here. Alternatively, one could study another spectral sequence, combinatorially defined by Szabó, which also seems to converge to the Heegaard Floer homology of the branched double cover.
Question: Is Szabó’s spectral sequence isomorphic to the Ozsváth-Szabó spectral sequence?
Again, the bordered theory may be useful here. Lipshitz, Ozsváth, and D. Thurston have constructed a bordered version of the Ozsváth-Szabó spectral sequence which agrees with the original under a pairing theorem.
If the answer is “yes” then Szabó’s spectral sequence should have more structure. This was the part of Sarkar’s research talk which was unfortunately scheduled after the problem session. I hope to return to it in a future post (!).
Question: Can Szabó’s spectral sequence be defined over a two-variable polynomial ring? Is there an action of the dihedral group $D_4$ on the spectral sequence?
### András Stipsicz
Knot Floer Smörgåsbord
Link Floer homology was spawned from Heegaard Floer homology but can also be defined combinatorially via grid diagrams. Lenny Ng explained this in the second part of his minicourse. However you define it, the theory assigns to a link $L$ a bigraded $\mathbb{Z}[U]$-module $HFK^-(L)$. From this group one can extract the numerical concordance invariant $\tau(L)$. Defining $HFK^-$ over $\mathbb{Q}[U]$ or $\mathbb{Z}/p\mathbb{Z}[U]$ one can define invariants $\tau_0$ and $\tau_p$.
Question: Are these invariants distinct from $\tau$?
Harder question: Does $HFK^-$ have $p$-torsion for some $p \in \mathbb{Z}$? (From a purely algebraic perspective, a “no” to the first question suggests a “no” to this one.)
Stipsicz noted that there are complexes of $\mathbb{Z}[U]$-modules for which the answer is yes, but those complexes are not known to be $HFK^-(L)$ of any link. Speaking of which,
“A shot in the dark:” Characterize those modules which appear as $HFK^-$.
In another direction, Stipsicz spoke earlier about a family of smooth concordance invariants $\Upsilon_t$. These were constructed from link Floer homology by Ozsváth, Stipsicz, and Szabó. Earlier, Hom constructed the smooth concordance invariant $\epsilon$. Both invariants can be used to show that the smooth concordance group contains a $\mathbb{Z}^\infty$ summand, but their vanishing sets are not the same: Hom produced a knot which has $\Upsilon_t = 0$ for all $t$ but $\epsilon \neq 0$.
Conversely: Is there a knot with $\epsilon = 0$ but $\Upsilon_t \neq 0$?
Stipsicz closed the session by waxing philosophical: “When I was a child we would get these problems like ‘Jane has 6 pigs and Joe has 4 pigs’ and I used to think these were stupid. But now I don’t think so. Sit down, ask, do calculations, answer. That’s somehow the method I advise. Do some calculations, or whatever.”
1. An analogous result holds for arbitrary number fields — I make no claims about the cuteness of such generalizations.
2. An old example: the simplex algorithm from linear programming runs in exponential time in the worst case, but in practice it is typically very fast.
## Overtwisted disks and filling holes
This post is on the end of the proof by Borman, Eliashberg, and Murphy that there is an overtwisted contact structure in every homotopy class of almost contact structures in higher dimensions, and, via the parametric version, that any two overtwisted contact structures which are homotopic as almost contact structures are isotopic as contact structures. There are a number of other posts preceding this one that are meant to be read first, and there are a few pieces of the proof that we skipped, but I think this will be my last post on this topic.
Overtwisted disks in higher dimensions and filling the holes
In dimension three, an overtwisted disk is a certain model germ of a contact structure on a two dimensional disk. The key property of this overtwisted disk which generalizes to higher dimensions is its role in the proof of the h-principle: after connecting the codimension zero “holes” (where the almost contact structure resists becoming genuinely contact) with a neighborhood of the overtwisted disk, one is able to extend the contact structure. One useful feature of overtwisted disks in dimension three is that they can be recognized simply by finding an embedded unknotted circle with Thurston-Bennequin number 0 (the contact planes along the unknot do not twist relative to the Seifert framing determined by the disk that is bounded by the unknot). This is not true in higher dimensions: there are quantitative properties of the contact structure on the interior of the disk which are needed for the h-principle proof to work.
Recall, from Roger’s post, that in the presence of an overtwisted disk, we can reduce the problem of extending the contact structure over the hole, to extending the contact structure over an annulus (interval times sphere) whose germ on one boundary component is modelled by the contact Hamiltonian obtained by concatenating the Hamiltonian modelling the hole with the overtwisted model Hamiltonian, and whose germ on the other boundary component is given by the overtwisted Hamiltonian. (Remember this picture?)
This is because we can connect each hole to an overtwisted annulus by a tunnel, and then forget that we already had a genuine contact structure on the tunnel and the overtwisted annulus and just look at the contact germs on the two boundary components of the boundary sum of the ball with the annulus, like in this schematic picture.
This is the key point where we use the overtwistedness of the contact structure. The arguments to get to this point are made in a relative way that just fixes the contact structure in the overtwisted regions. At this point, we need to change the contact structure on the overtwisted annulus. In order to fill in the larger annulus (the overtwisted annulus connected to the hole) with a genuine contact structure, we need to show that, up to conjugation, the overtwisted Hamiltonian is less than the connect sum of the Hamiltonian for the hole with the overtwisted Hamiltonian. We are assuming at this point that we know how to homotope the almost contact structure so that it is genuinely contact in the complement of the holes, and that each of the holes has its almost contact structure given by a circle model. Moreover, by doing this extra carefully (using equivariant coverings), we can assume that there are finitely many different types of contact Hamiltonians defining the circle models for the holes. The number of types of contact Hamiltonians needed a priori depends on the dimension. An easier reduction is to assume that the Hamiltonian $K: \Delta\times S^1 \to \mathbb{R}$ is independent of the $S^1$ (time) direction: since the circle is compact, $\overline{K}(x)=\min_{\theta\in S^1} K(x,\theta)$ is well-defined and satisfies $\overline{K}\leq K$, so there is a genuine contact annulus extending the contact structure from the boundary of the circle model for $K$ inward to the boundary of the circle model for $\overline{K}$.
In order to prove the key lemma that we can fill in the appropriate annuli, we need a more concrete family of contact Hamiltonians. Consider a contact Hamiltonian $K_{\varepsilon}$ on the cylinder $\Delta_{cyl}=\{(z,u_i,\theta_i): |z|\leq 1, u=\sum u_i\leq 1\}\subset \mathbb{R}^{2n-1}$ which is negative on the region where $|z|$ and $u$ are both less than $1-\varepsilon$, and which increases linearly from 0 in $z$ and $u$ towards the boundary with slope 1. These are called special Hamiltonians. The main thing which is special about such a Hamiltonian $K_{\varepsilon}$ is that there is a contact embedding $\Theta$ of $\Delta_{cyl}$ with the standard contact form, into the boundary sum of $\Delta_{cyl}$ with itself, such that $\Theta_*K_{\varepsilon}$ is less than the connected sum of $K_{\varepsilon}$ with itself. Given this, if the hole and the overtwisted annulus are both modelled by such Hamiltonians with the same $\varepsilon$, we can fill in the holes by genuine contact structures.
Notice that any contact Hamiltonian which is positive on $\partial \Delta_{cyl}$ must dominate (is greater than) some special Hamiltonian for sufficiently small $\varepsilon$. It is important that it is possible to reduce to assuming that the holes are modelled by finitely many types of contact Hamiltonian circle models; therefore, in a given dimension, there is a certain universal $\varepsilon_{univ}$ such that for any $\varepsilon<\varepsilon_{univ}$, every hole dominates a circle model for a special $K_{\varepsilon}$. Therefore, the key overtwisted annuli are given by circle models for special Hamiltonians corresponding to such an $\varepsilon$.
To get from overtwisted annuli to overtwisted disks, we use the fact that the main lemma embedding $\Theta$ fixes the end where $z\in[1-\varepsilon,1]$. Therefore we do not need the full annulus (a neighborhood of the boundary of the cylinder), only the topological disk obtained by cutting off the end of the cylinder.
The overtwisted disk is thus defined to be the disk with the contact germ on the boundary of a circle model over a cylinder (excluding one end) defined by a special contact Hamiltonian $K_{\varepsilon}$ for some $\varepsilon<\varepsilon_{univ}$ where $\varepsilon_{univ}$ depends only on the dimension. I think that dependence on the dimension is not really understood at this point, but the idea is that $\varepsilon_{univ}$ probably gets smaller as the dimension increases, so the region where the contact Hamiltonian is negative would be larger.
Proving the main lemma
We want to show that there is a contact embedding $\Theta:\Delta_{cyl}\to \Delta_{cyl}\# \Delta_{cyl}$ such that for a special Hamiltonian $K_{\varepsilon}$, $\Theta_*K_{\varepsilon} \leq K_{\varepsilon}\#K_{\varepsilon}$ (where here $\#$ denotes the boundary sum obtained by tubing the two cylinders together so that the contact Hamiltonian is positive on the tube). For the parametric version, the main lemma shows there is a family $\Theta_s$ interpolating between the identity and $\Theta$.
Recall the things we know how to do with contactomorphisms from the previous post:
(1) We can reorder contact Hamiltonians however we want in regions where they are negative by the disorder lemma.
(2) We have transverse scaling contact embeddings which shrink/expand $\Delta_{cyl}$ in the $z$ direction by a diffeomorphism $h:\mathbb{R}\to \mathbb{R}$ at the cost of correspondingly shrinking/expanding $\Delta_{cyl}$ in the $u$ direction by rescaling by $h'(z)$. The effect on the contact Hamiltonian is $(\Phi_h)_*K(h(z),h'(z)u,\theta)=h'(z)K(z,u,\theta)$.
(3) We have twist embeddings which shrink/expand $\Delta_{cyl}$ in the radial $u$ direction by rescaling by $\frac{1}{1+g(z)u}$ if you allow the angular $\theta$ directions to be twisted. The effect on the contact Hamiltonian, if we ignore the angular coordinate and write $\tilde{u}=\frac{u}{1+g(z)u}$ for the new radial coordinate, is $(\Psi_g)_*K(z,\tilde{u})=(1-g(z)\tilde{u})K(z,u)$ (note that $1-g(z)\tilde{u}=\frac{1}{1+g(z)u}$, the conformal factor of the embedding).
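As a quick sanity check on (2): with $\lambda_{st}=dz+\sum_i u_i d\theta_i$ and $\Phi_h(z,u_i,\theta_i)=(h(z),h'(z)u_i,\theta_i)$, we compute
$\Phi_h^*\lambda_{st} = h'(z)dz + \sum_i h'(z)u_i\, d\theta_i = h'(z)\lambda_{st},$
so $\Phi_h$ is a contact embedding with conformal factor $h'(z)$, which is exactly the factor rescaling the Hamiltonian above.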
To prove the main lemma, we want to stretch out the $z$ direction of $\Delta_{cyl}$ so that it spreads the length of the connected sum. We can do this with a transverse scaling contactomorphism, but the $u$ directions will expand: $(z,u)\mapsto (h(z),h'(z)u)$. Since we don’t want to mess with the contact structure on the $z$ ends, we choose $h$ to look like a translation so $h'(z)=1$ when $z$ is within $\varepsilon$ of the ends. We can compensate for the expansion in the $u$ directions away from the ends with a twist embedding which rescales the expanded $u$ directions to fit back inside a (longer) cylinder where $u\leq 1$, by choosing $g(z)=1-\frac{1}{h'(h^{-1}(z))}$. The total effect of composing these two maps is an embedding $\Gamma$ mapping $(z,u)\mapsto (h(z),\frac{h'(z)u}{1+(h'(z)-1)u})$ (the angular directions get twisted some amount but we don’t care). $\Gamma$ sends a short cylinder $\Delta_{cyl}$ to a longer cylinder $\Delta_{cyl}\# \Delta_{cyl}$, so that the points where $u=1$ are sent to points where $u=1$, but points where $u<1$ are sent to points with $u$-coordinate $\frac{h'(z)u}{1+(h'(z)-1)u}> u$. So this contactomorphism inflates the cylinder in the $u$ directions towards the boundary. By choosing a family of diffeomorphisms $h_s$ starting with a basic translation we get a family of embeddings $\Gamma_s$ which look like this:
Now, we want to see the effect of these contactomorphisms on a special Hamiltonian $K_{\varepsilon}$. We find that
$(\Gamma_s)_*K_{\varepsilon}(h_s(z),\tilde{u})=(h_s'(z)-(h_s'(z)-1)\tilde{u})K_{\varepsilon}(z,u), \qquad \tilde{u}=\frac{h_s'(z)u}{1+(h_s'(z)-1)u}$
which can be rewritten as
$(\Gamma_s)_*K_{\varepsilon}(h_s(z),u)=(h_s'(z)-(h_s'(z)-1)u)K\left(z,\frac{u}{h_s'(z)-(h_s'(z)-1)u}\right)$.
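(The rewriting uses that the change of radial coordinate inverts as
$\tilde{u}=\frac{h_s'(z)u}{1+(h_s'(z)-1)u} \iff u=\frac{\tilde{u}}{h_s'(z)-(h_s'(z)-1)\tilde{u}},$
together with the identity $h_s'(z)-(h_s'(z)-1)\tilde{u}=\frac{h_s'(z)}{1+(h_s'(z)-1)u}$.)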
When $z$ is within $\varepsilon$ of the ends, we have chosen $h$ to be a translation, so $(\Gamma_s)_*K_{\varepsilon}(h_s(z),u)=K_{\varepsilon}(z,u)$, i.e. the Hamiltonian is basically fixed to be standard on these ends. When we reach $s=1$, the ends of $\Gamma_1(\Delta_{cyl})$ coincide with the ends of $\Delta_{cyl}\#\Delta_{cyl}$ so in these regions $(\Gamma_1)_*(K_\varepsilon)=K_{\varepsilon}\#K_{\varepsilon}$.
The rescaling factor for the Hamiltonian, $(h_s'(z)-(h_s'(z)-1)u)$ is always greater than or equal to 1, so the region where $(\Gamma_s)_*K \leq 0$ is the image under $\Gamma_s$ of the region where $K\leq 0$ and similarly $\{(\Gamma_s)_*K\geq 0\}=\Gamma_s(\{K\geq 0\})$. Since we can use the disorder lemma, we don’t care much about the exact negative values of $(\Gamma_s)_*K$, but we do need $(\Gamma_1)_*K(z,u)\leq K\#K(z,u)$ wherever $(\Gamma_1)_*K\geq 0$. Therefore we need to check this inequality on points $\Gamma_1(z,u)$ where $u\in [1-\varepsilon,1]$ and $z$ is more than $\varepsilon$ away from the ends (since we already understand the behavior when $z$ is within $\varepsilon$ of the boundary). On this region, the special Hamiltonian $K_{\varepsilon}$ is just a linear function of $u$ with slope 1. Therefore
$(\Gamma_s)_*K_{\varepsilon}(h_s(z),u)=(h_s'(z)-(h_s'(z)-1)u)\left(\frac{u}{h_s'(z)-(h_s'(z)-1)u}-(1-\varepsilon) \right)$
which as a function of $u$ is linear (expanding, it equals $u\,(1+(1-\varepsilon)(h_s'(z)-1))-(1-\varepsilon)h_s'(z)$), has the value $0$ when $\frac{u}{h_s'(z)-(h_s'(z)-1)u}=1-\varepsilon$, and the value $\varepsilon$ at $u=1$. Notice that $\frac{u}{h_s'(z)-(h_s'(z)-1)u}=1-\varepsilon$ when $u=\frac{h_s'(z)(1-\varepsilon)}{1+(h_s'(z)-1)(1-\varepsilon)}>1-\varepsilon$ so in this region $(\Gamma_s)_*K_{\varepsilon}$ compares to $K\#K$ like this:
Therefore $(\Gamma_1)_*K_{\varepsilon}(z,u)\leq K\#K(z,u)$ wherever $(\Gamma_1)_*K_{\varepsilon}\geq 0$. Then we can use the disorder lemma to produce a contactomorphism which fixes everything on this positive region but makes the Hamiltonian sufficiently negative in the region where $K_\varepsilon\#K_\varepsilon\leq 0$ so that after composing $\Gamma_s$ with this disorder contactomorphism we get the embedding $\Theta_s$ such that $(\Theta_1)_*K_{\varepsilon}\leq K_{\varepsilon}\#K_{\varepsilon}$ as required. Notice that $\Theta_s$ fixes the end where $z\in[1-\varepsilon,1]$ so we do not actually need to use that end of the overtwisted annulus to fill in the hole.
It is worth noting that an overtwisted disk could be modelled using any Hamiltonian for which the main lemma could be proven, not just the ones that increase linearly near the boundary. The tricky part to check for a more general function is the inequality near the $u$-boundary. When the contact Hamiltonian was linear, the contactomorphism transformation and the rescaling factor cancelled in just the right way so that the pushed-forward contact Hamiltonian was still linear in $u$, so the inequality could be determined simply by understanding the values near the end points. For more general contact Hamiltonians you would probably need to do more work to get the required estimates.
## Contact Hamiltonians II
This post is a continuation from Roger’s last post on Contact Hamiltonians about Borman, Eliashberg, and Murphy’s h-principle result on higher dimensional overtwisted contact structures. Here we will start to get into some of the main pieces of the proof.
First let’s recall what we are trying to prove: given an almost contact structure that contains a particular model “overtwisted disk”, this almost contact structure can be homotoped through almost contact structures to a genuine contact structure. A parametric version of this theorem implies that homotopic overtwisted contact structures are isotopic through contact structures. So far, we still have not actually defined an overtwisted disk in higher dimensions (but will soon); for now just keep in mind that there is a model piece of contact manifold that we assume is embedded in the almost contact manifold from the start. The broad idea of the proof is to modify the almost contact structure to be genuinely contact on larger and larger pieces of the manifold until all the “holes” (pieces where the almost contact structure has not been made contact yet) are filled in. Gromov’s (relative) h-principle for open contact manifolds implies that the almost contact structure can be homotoped to be contact in the complement of a compact codimension zero piece (while fixing the structure near the overtwisted disk). A technical argument which keeps track of the angles between the contact planes and the boundary of the hole reduces the argument to extending the contact structure over holes which near the boundary agree with a certain circular model. We put off this technical argument for now, but mention that it is analogous to the argument in the 3-dimensional case called part 1 in this earlier post.
Refer to section 6 of the BEM paper for more details on the first half of this post, and to section 8 for the second half.
The circular model
The goal here is to define a model almost contact structure on a ball, which near the boundary is a genuine contact structure encoded by a contact Hamiltonian. View the 2n+1 dimensional ball as the product of a 2n-1 dimensional ball $\Delta$ with a 2-dimensional disk $D^2$, viewed as a subset of $\mathbb{C}$. The contact Hamiltonian is a function
$K: \Delta \times S^1 \to \mathbb{R}$
Using the standard contact form $\lambda_{st}=dz+\sum_i r_i^2d\theta_i$ on $\Delta\subset \mathbb{R}^{2n-1}$, recall that an extension of this function $\widetilde{K}: \Delta \times D^2 \to \mathbb{R}$ defines an almost contact structure $\alpha = \lambda_{st}+\widetilde{K}d\theta$ on $\Delta\times D^2$ which is genuinely contact wherever $\partial_r\widetilde{K}>0$ (compute $\alpha\wedge (d\alpha)^n>0$). Using the conventions from the BEM paper, we will use the coordinate $v=r^2$. If $K$ is everywhere positive, we can realize this contact structure near the boundary of the following embedded subset of the standard contact $(\mathbb{R}^{2n+1},\ker(\lambda_{st}+vd\theta))$
$B^{S^1}_{K}:=\{(x,v,\theta)\in \Delta\times \mathbb{C} : v\leq K(x,\theta)\}$
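To see the contact condition concretely in dimension three: there $\Delta = [-1,1]$ with $\lambda_{st} = dz$, so $\alpha = dz + \widetilde{K}d\theta$ and
$\alpha \wedge d\alpha = \partial_r\widetilde{K}\; dz \wedge dr \wedge d\theta,$
which is positive exactly where $\partial_r\widetilde{K} > 0$.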
If $K$ is negative anywhere, then we need to look at a modified version. We can still encode the shape of $K$ by shifting everything up by a sufficiently large constant $C$ so that $K+C$ is positive. Then define
$B^{S^1}_{K,C}:=\{(x,v,\theta)\in \Delta \times \mathbb{C} : v\leq K(x,\theta)+C \}$.
In order to have the contact form encode the contact Hamiltonian $K$ near the boundary, we want to shift the contact form from $\lambda_{st}+vd\theta$ to $\lambda_{st}+(v-C)d\theta$ near the boundary. However, because the polar coordinates degenerate near $v=0$, in a neighborhood of $v=0$ we need to keep the form standard: $\lambda_{st}+vd\theta$. Define a family of functions $\rho_{(x,\theta)}(v)$ to interpolate between these two, and then define the almost contact structure on $B^{S^1}_{K,C}$ by the form $\eta_{\rho}=\lambda_{st}+\rho d\theta$. We want this almost contact form to be genuinely contact near the boundary since we are looking for a model for the holes. You can compute $\eta_{\rho}\wedge (d\eta_{\rho})^n$ to see that $\eta_{\rho}$ defines a genuine contact form exactly when $\partial_v\rho_{(x,\theta)}(v)>0$. The boundary of the ball $B^{S^1}_{K,C}$ has two pieces: the piece where $v=K(x,\theta)+C$ and the piece where $x\in \partial \Delta$. In a neighborhood of the former piece, $\rho(v)=v-C$ so it has positive derivative, but on the latter piece we have to impose the condition directly that $\partial_v\rho_{(x,\theta)}>0$ in an open neighborhood of points where $x\in \partial\Delta$.
One can show that different choices for $C,\rho$ which satisfy these conditions do not yield genuinely different almost contact forms $\eta_\rho$ because up to diffeomorphism, different choices do not change the contact structure near the boundary or the relative homotopy type of the almost contact structure on the interior.
The key point is that this almost contact structure on $B^{S^1}_{K,C}$ can be chosen to be a genuine contact structure only along $x$ slices where $K$ is positive. Remember that $\rho_{(x,\theta)}$ says how much the almost contact planes are rotating in the radial direction, and if $\partial_v\rho_{(x,\theta)}=0$ this means the twisting has stopped. If $K(x,\theta)$ is negative then since $\rho_{(x,\theta)}(K(x,\theta)+C)=K(x,\theta)<0$ and $\rho_{(x,\theta)}(v)=v$ near 0, $\rho_{(x,\theta)}$ must have a critical point and the almost contact planes must stop twisting and thus fail to be genuinely contact. In particular, to define the circle model for a contact Hamiltonian $K$ we need $K(x,\theta)>0$ near points where $x\in \partial \Delta$, so we only consider such Hamiltonians.
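Spelled out, this is a one-variable calculus fact. Since $\rho_{(x,\theta)}(v)=v$ near $v=0$ we have $\partial_v\rho_{(x,\theta)}>0$ there, while

$\rho_{(x,\theta)}(K(x,\theta)+C)=K(x,\theta)<0,$

so the mean value theorem produces a point where $\partial_v\rho_{(x,\theta)}<0$, and continuity then forces $\partial_v\rho_{(x,\theta)}(v_*)=0$ at some intermediate $v_*$: the twisting stops there.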
Here is a 3-dimensional example. The arrows indicate the twisting of the almost contact planes defined by $\rho$. Note that where K fails to be positive the planes start twisting counterclockwise as you move radially outward, but then have to switch to turning clockwise at some point. The functions $\rho_{(x,\theta)}$ are indicated by the graphs above–they start having critical points when K fails to be positive.
If we have two contact Hamiltonians $K_0$ on $\Delta_0$ and $K_1$ on $\Delta_1$ such that $\Delta_0\subset \Delta_1$ and $K_0 \leq K_1$, then it is not hard to see that we can choose circle models for each such that $(B^{S^1}_{K_0,C},\eta_{\rho_0})$ embeds into $(B^{S^1}_{K_1,C},\eta_{\rho_1})$ and so that $(\rho_1)_{(x,\theta)}(v)=v-C$ in a neighborhood of the entire region where $K_0(x,\theta)+C\leq v \leq K_1(x,\theta)+C$. In other words, the almost contact structure is contact and twisting in the standard way along the radial direction on the region between $K_0$ and $K_1$. In the terminology of the BEM paper, $K_1$ directly dominates $K_0$. View the extendability of a contact structure from one contact germ (defined by a contact Hamiltonian $K_1$) to another germ (defined by $K_0$) as an ordering. The thing that makes this ordering interesting is that using contactomorphisms to change coordinates, a contact germ can be modelled by a different contact Hamiltonian. Therefore if $K_0$ and $K_1$ cannot be directly compared (i.e. at some points $K_0\leq K_1$ but at others $K_0>K_1$), then there may be a different contact Hamiltonian $\widetilde{K}_0$ which corresponds to the same contact germ in different coordinates such that $\widetilde{K}_0$ can be compared to $K_1$. This will be the subject of the rest of this post.
Contactomorphisms and conjugating the Hamiltonian
Given a contactomorphism on the domain $(\Delta,\lambda)$, we want to construct an induced contactomorphism on $(\Delta\times \mathbb{C},\lambda+\rho d\theta)$. Because contactomorphisms only preserve the contact planes, and not the contact form, a contactomorphism $\Phi: (\Delta,\ker(\lambda))\to (\Delta, \ker(\lambda))$ satisfies $\Phi^*(\lambda)=c_{\Phi}\lambda$ where $c_{\Phi}$ is a positive real valued function on $\Delta$. Because the pull-back rescales $\lambda$, we need to rescale the Hamiltonian on the image as well so that it fits together with $\Phi^*\lambda$ to give a contact form for the same contact structure. Therefore define $\Phi_*K$ by $(\Phi_*K)(\Phi(x),\theta)=c_{\Phi}(x)K(x,\theta)$.
$\Phi$ naturally induces an extension on $\Delta\times \mathbb{C}$ defined by $\widehat{\Phi}(x,v,\theta)=(\Phi(x),\phi_{(x,\theta)}(v),\theta)$ for any family of functions $\phi_{(x,\theta)}$. If $\widetilde{\rho}$ defines the contact structure on the image $\lambda+\widetilde{\rho}d\theta$ then
$\widehat{\Phi}^*(\lambda+\widetilde{\rho}d\theta)=\Phi^*\lambda+\widetilde{\rho}\circ\phi d\theta=c_{\Phi}\lambda+\widetilde{\rho}\circ \phi d\theta$
Therefore the function defining the contact Hamiltonian on the image must satisfy $\widetilde{\rho}_{(x,\theta)}\circ \phi_{(x,\theta)}(v)=c_{\Phi}(x)\rho_{(x,\theta)}(v)$.
Why did we include the function $\phi_{(x,\theta)}$ in the above definition of $\widehat{\Phi}$? This is to allow us to reparameterize $\widetilde{\rho}_{(x,\theta)}$ so that it satisfies the required conditions to define the circular model (it should look like the identity near $v=0$, and like the identity shifted by the constant near $v=K+C$). Before the contactomorphism, to define the circular model, you choose a constant $C$ and then $\rho_{(x,\theta)}$ is considered on the domain $[0, K(x,\theta)+C]$ and is required to have certain behavior near the endpoints of this interval. After rescaling, we have a new Hamiltonian $(\Phi_*K)(\Phi(x),\theta)=c_{\Phi}(x)K(x,\theta)$, so we pick a new constant $\widetilde{C}$ so that $\Phi_*K+\widetilde{C}>0$. Then we consider $\widetilde{\rho}_{(\Phi(x),\theta)}$ on the interval $[0,c_{\Phi}(x)K(x,\theta)+\widetilde{C}]$ and require it to have particular behavior near $0$ and $c_{\Phi}(x)K(x,\theta)+\widetilde{C}$. Because $\widetilde{\rho}_{(x,\theta)}=c_{\Phi}(x)\rho_{(x,\theta)}\circ \phi_{(x,\theta)}^{-1}$, modifying the functions $\phi_{(x,\theta)}$ allows us to make $\widetilde{\rho}_{(\Phi(x),\theta)}$ have the desired behavior near the endpoints of the interval $[0,c_{\Phi}(x)K(x,\theta)+\widetilde{C}]$ so that $\widetilde{\rho}$ can be used to define a circular model for $\Phi_*K$.
The action of the contactomorphism $\Phi$ on the contact Hamiltonian $K$ is referred to as conjugating the Hamiltonian for the following reason. If the contact Hamiltonian is generated by a contact isotopy $\phi^t_K$ in the sense that $\lambda(\partial_t\phi^t_K)=K(\phi^t_K,t)$, then you can compute that $\Phi\phi^t_K\Phi^{-1}=\phi^t_{\Phi_*K}$.
Important types of contactomorphisms and their effects on the Hamiltonian
What kinds of changes can we make in the contact Hamiltonian through contactomorphisms? A key lemma is that in a (star-shaped) region where the contact Hamiltonian is negative, contactomorphisms can be used to make the values arbitrarily close to zero. This basically means that the exact negative values of a contact Hamiltonian do not matter in the ordering, since a contactomorphism can make any given negative values larger than any other given negative values. This indicates that the key difficulty in filling in the contact structure on holes whose boundary looks like a contact Hamiltonian circular model lies in where the regions on which the contact Hamiltonian is positive sit, and how large they are.
The idea of the proof of this “disorder lemma” (Lemma 6.8 in the BEM paper) is as follows. Let $\Delta$ be the region where the contact Hamiltonian $K$ is defined and let $\widetilde{\Delta}$ be a subset containing the piece where $K$ is negative. Construct a contactomorphism $\Phi$ which shrinks $\widetilde{\Delta}$ into itself a lot, but fixes the points of $\Delta$ sufficiently away from $\widetilde{\Delta}$. (You can do this by looking at the flow of an inward pointing contact vector field–this is where the star-shaped condition comes in–cut off to zero sufficiently away from $\widetilde{\Delta}$.) Because $\widetilde{\Delta}$ is being shrunk, the rescaling function $c_{\Phi}(x)$ for the contact form defined by $\Phi^*\lambda=c_{\Phi}\lambda$ is a positive function with very tiny values close to 0, for $x\in \widetilde{\Delta}$. The more $\widetilde{\Delta}$ is shrunk, the tinier the values of $c_{\Phi}$. The corresponding Hamiltonian $(\Phi_*K)(\Phi(x),\theta)=c_{\Phi}(x)K(x,\theta)$ has values rescaled by $c_{\Phi}$. Therefore, by choosing a contactomorphism which shrinks $\widetilde{\Delta}$ enough, the values of $c_{\Phi}$ can be made sufficiently small so that $c_\Phi K(x,\theta)>-\varepsilon$ for $x\in \widetilde{\Delta}$.
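In symbols, the conclusion just described reads: for every $\varepsilon>0$ there is a contactomorphism $\Phi$ of $(\Delta,\ker\lambda)$, equal to the identity sufficiently far from $\widetilde{\Delta}$, with

$(\Phi_*K)(\Phi(x),\theta)=c_{\Phi}(x)K(x,\theta)>-\varepsilon \quad \text{for all } x\in\widetilde{\Delta};$

see Lemma 6.8 in the BEM paper for the precise hypotheses.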
In addition to the disorder lemma, we need two types of contactomorphisms of $(\mathbb{R}^{2n-1},\xi_{st})$ which rescale in certain directions. Choose cylindrical coordinates $(z,r_i,\theta_i)$ on $\mathbb{R}^{2n-1}$ and let $u_i=r_i^2$ so $\xi_{st}=\ker(dz+\sum_i u_id\theta_i)$.
A transverse scaling contactomorphism $\Phi_h$ is defined by a diffeomorphism $h:\mathbb{R}\to \mathbb{R}$ by $\Phi_h(z,u_i,\theta_i)=(h(z),h'(z)u_i,\theta_i)$. You can check directly that this diffeomorphism is a contactomorphism which rescales the standard contact form by $h'(z)$. Therefore this contactomorphism modifies a contact Hamiltonian by
$(\Phi_h)_*K(z,u_i,\theta_i)=h'(h^{-1}(z))K\circ\Phi_h^{-1}(z,u_i,\theta_i)$
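Spelling out the "check directly" step (for an increasing diffeomorphism $h$, so that $h'>0$): pulling back coordinates gives

$\Phi_h^*\left(dz+\sum_i u_i\,d\theta_i\right) = h'(z)\,dz+\sum_i h'(z)u_i\,d\theta_i = h'(z)\left(dz+\sum_i u_i\,d\theta_i\right),$

so $\Phi_h$ preserves $\xi_{st}$ and rescales the contact form by $c_{\Phi_h}=h'(z)$.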
The tagline for this type of contactomorphism is you can “trade long for thin”. By choosing a shrinking $h$, you can shrink a domain which is long in the $z$ direction at the cost of shrinking the radial $u_i$ directions.
A twist embedding contactomorphism $\Psi_g$ allows you to rescale the radial directions $u_i$ by $\frac{1}{1+g(z)u}$ at the cost of twisting in the angular directions by an amount that depends on $g$ (see section 8.2 of the BEM paper for the exact formulas). The points where $g(z)u>-1$ get sent to points where $g(z)u<1$, since $g(z)\frac{u}{1+g(z)u}=\frac{g(z)u}{1+g(z)u}<1$. The rescaling factor for the contact form is $(1-g(z)u)$, so the contact Hamiltonian is rescaled accordingly. For positive functions $f_1,f_2$, setting $g=\frac{1}{f_1}-\frac{1}{f_2}$ gives $\Psi_g$ taking the region where $u\leq f_2(z)$ to the region where $u\leq f_1(z)$. Therefore twist embeddings allow you to modify the radial directions however you want to, with basically no cost (just twisting the angular directions).
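As a quick check of the last claim: with $g=\frac{1}{f_1}-\frac{1}{f_2}$, a point on the boundary $u=f_2(z)$ is sent to the radius

$\frac{f_2}{1+\left(\frac{1}{f_1}-\frac{1}{f_2}\right)f_2}=\frac{f_2}{f_2/f_1}=f_1(z),$

as claimed.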
By composing these two types of contactomorphisms we can use transverse scaling to stretch or shrink in the $z$ direction at the cost of stretching or shrinking radially. Then we can use a twist embedding to counteract the stretching or shrinking in the radial directions, with only the cost of twisting in the angular direction, which does not significantly change the shape of the region.
These contactomorphisms are the key ingredients towards filling in circular model holes connect-summed with neighborhoods of overtwisted disks, as will be discussed in the next post.
## Contact Hamiltonians (Part I)
This entry follows the post Contact Hamiltonians (Introduction), where we discussed normal forms for contact forms and the appearance of contact Hamiltonians. In this entry we will focus on the 3–dimensional situation and hence we will be able to write formulas and draw (realistic) pictures.
Consider a 2–sphere of radius 1 in the standard tight contact Euclidean space $(\mathbb{R}^3,\lambda_{st}=dz+r^2d\theta)$. Its characteristic foliation (defined by the intersection of the tangent space and the contact distribution) has two elliptic singular points in the north and south poles and all the leaves are open intervals connecting the north and the south pole. Take a transversal segment I=[0,1] connecting the poles (a vertical segment will do). Given a point in the segment we can consider the unique leaf through that point and move around the leaf until we hit the interval I=[0,1] again. This defines a diffeomorphism of the interval [0,1] fixed at the endpoints. We will call this diffeomorphism the monodromy of the foliation (and note that conversely any diffeomorphism will give a foliation on the 2–sphere via a mapping torus construction and collapsing the boundary). This is drawn in the following figure:
In the figure the monodromy map is represented by the orange arrow. This monodromy does not have fixed points (this is crucial). Let us look at the monodromy in the sphere of radius $\pi+c$, where c is a small positive constant, in the overtwisted contact manifold $(\mathbb{R}^3,dz+r\tan(r)d\theta)$. The overtwisted monodromy is drawn in the next figure:
There are 3 types of points in the vertical transverse interval I=[0,1]. The Type 1 points belong to a leaf, Leaf I in the figure, such that the points move down in the segment. The Type 2 points are the points between the unique pair of closed leaves, these belong to Leaf II and move up. The Type 3 points are fixed points, there are two leaves of this type (Leaf III). The monodromy is represented by the blue arrows.
Hence, we can encode the tight and the overtwisted foliations on the 2–sphere in terms of their monodromies in the following figure:
In the last entry we explained a relation between monodromies and contact Hamiltonians. Consider a contact form $dz-H(x,y,z)dx$ in $\mathbb{R}^3$, this is a quite general normal form (which we can obtain by trivializing along the y–lines of $\mathbb{D}^2(x,y)$). If we restrict to the sphere $x^2+y^2+z^2=R^2$ we can write H in terms of $H=H(x,z)$ at points where the implicit function theorem works. Then the characteristic foliation is nothing else than the solution of the time–dependent (x is the time) differential equation $dz-Hdx=0$ on the interval I=[-1,1] given by the coordinate z. Hence the contact Hamiltonian yields the ODE to which the monodromy is a solution.
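In symbols, with the conventions of this paragraph: the leaves of the characteristic foliation are the graphs of solutions of

$\frac{dz}{dx}=H(x,z(x)), \qquad z(-1)=z_0,$

and the monodromy is the map $m(z_0):=z(1)$ taking the initial value at $x=-1$ to the terminal value at $x=1$, so fixed points of $m$ correspond exactly to leaves that close up.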
Tool: How do we obtain a piece of a disk in standard contact $(\mathbb{R}^3,dz-ydx)$ with a given characteristic foliation?
Answer: Consider a disk in the (z,x)–plane and a function H(z,x). The standard contact structure $dz-ydx$ restricts to the graph of H in $\mathbb{R}^2(z,x)\times\mathbb{R}(y)$ as $dz-ydx|_{\{y=H\}}=dz-Hdx$.
For instance, let us consider the following function H(z) for z=[-1,1]:
This function H can be considered as a function on the polydisk (x,z) which is represented by the lower square in the third figure (the whole figure is PL immersed in the standard contact 3–space). Its image is the bumped square drawn above it, and we may consider the PL sphere obtained by adding the vertical annulus connecting the domain and the graph. The characteristic foliation on the bottom piece is by the horizontal z–lines, on the annulus the foliation is vertical and on the top piece the foliation is drawn on the left. Note that the characteristic foliation in this immersed PL sphere has a closed leaf (in red) coming from the fixed point (or zero, if we look at it horizontally) of H.
Let us briefly focus on the existence of a contact structure in a region bounded by a domain and a graph as in the previous paragraph.
Exercise: Does there exist a contact structure filling the following pink region?
(The contact structure should restrict to the germs (in purple) already defined on the boundary.)
Answer: Yes. This is already embedded in $\mathbb{R}^3$, hence we just need to restrict the ambient contact structure. (This should be compared with the previous post where this question was also formulated and answered in terms of the positivity of the function H).
The second exercise we need to solve is as simple as the previous one, let us however draw the figures in order to keep them in mind.
Annulus Problem (weak): Does there exist a contact structure in the (yellow) annulus?
The contact structure should also restrict to the germs (in purple and green) already defined on the boundary.
Answer: Yes, again this is already embedded in standard contact Euclidean space. This is yet another instance of the relevance of order. If one Hamiltonian is less than another one, then we can obtain a contact structure on the annulus.
This will be formalized in subsequent posts using the notion of domination of Hamiltonians and their corresponding contact shells. We shall not use this language right now.
We are now going to prove Eliashberg's existence theorem in dimension 3 from the contact Hamiltonian perspective (i.e. from the monodromy viewpoint). The fundamental fact is that we only need to extend contact structures up to contactomorphism, and this translates to the fact that Hamiltonians can be conjugated.
Annulus Problem (strong): Does there exist a contact structure on the following region?
Answer: If we are able to conjugate the bottom Hamiltonian (in green) strictly below to the upper one (in purple), then we can use the contact structure of the embedded annulus (weak version of the annulus problem). Hence, it all reduces to the order (or rather, the lack thereof).
Fundamental Fact: There exists a conjugation of the bottom Hamiltonian such that it is strictly less than the upper one. In general, given two Hamiltonians with fixed points which are positive at the endpoints of the interval, there exists a conjugation bringing one of them below the other.
(This is an exercise with functions in one variable; in higher dimensions this is no longer simple, and this is precisely the main point that M.S. Borman, Y. Eliashberg and E. Murphy have understood.)
Let us prove Eliashberg's 3–dimensional existence theorem; we focus on the extension part (part 2 according to the post three entries ago).
Extension Problem (Version I): Suppose that there exists a contact structure on the complement of a ball $B^3$ in a 3–fold (which is given by Gromov's h–principle, see previous posts) and that the characteristic foliation on the boundary $S_h^2$ has monodromy with fixed points (h stands for hole). Can we extend the contact structure?
Suppose that there exists a sphere $S_{ot}^2$ somewhere inside the manifold with an overtwisted monodromy (in blue, see above) in its characteristic foliation. Consider the annulus $A_{ot}=S_{ot}^2\times(-\tau,\tau)$. Use the south poles of $S_{ot}^2\times\{\tau\}$ and $S_h^2$ to connect both and obtain an annulus $A$ such that the monodromy in the exterior boundary sphere is the concatenation of the contactomorphisms of the intervals (green#pink). Hopefully this figure helps:
The monodromies of the foliations in the two spheres bounding the annulus $A_{ot}$ are drawn in pink (exterior boundary) and blue (interior boundary). The monodromy in green is that of $S_h^2$. Connecting the spheres $S_h^2$ and $S_{ot}^2\times\{\tau\}$ yields a sphere with the monodromy green#pink (the transition area is purple, this has some relevance but it is not essential). Consider the annulus A bounded by $S_h^2\#(S_{ot}^2\times\{\tau\})$ and $S_{ot}^2\times\{-\tau\}$. We have reduced the problem of extending the contact structure to the interior of $S_h^2$ to the problem of extending the contact structure in the annulus A. In the exterior boundary of A the characteristic foliation is green#pink and on the interior is red (which comes from moving blue).
Extension Problem (Version II): Does there exist a conjugation such that (the graph of) any contactomorphism can be conjugated to lie beneath any other (graph)?
Answer: No. Fixed Points are an obstruction. However, if we restrict ourselves to the same question in the class of contactomorphisms with fixed points the answer is yes. This is exactly the Fundamental Fact stated above.
How do we conclude the proof? Conjugate the red Hamiltonian to lie beneath the green#pink Hamiltonian and use the contact structure in the resulting annulus (as embedded in standard contact space). Assuming Gromov's h–principle and the technical work in order for the foliation to be controlled, this argument concludes the theorem.
(We have disregarded some details, but the idea of the argument is the one described above. Observe that the parametric version of the existence problem in dimension 3 is quite immediate from the Hamiltonian perspective.)
Note also that we do not need the whole sphere $S^2_{ot}$: in order to use the argument with the Hamiltonians we can cut the North pole of $S^2_{ot}$ and retain just the remaining disk, which is an overtwisted disk.
There is a substantial advantage in this proof of the 3–dimensional case: we can define an overtwisted disk $\mathbb{D}^{2n}$ in higher dimensions 2n+1 to be the object that appears when using the contact Hamiltonian on a simplex $\Delta^{2n-1}$ given by
(We will give precise definitions in the subsequent entries.)
The strategy of the argument works in higher dimensions if we can prove the Fundamental Fact stating that there is enough disorder for contact Hamiltonians. In the next entries we will focus on this crucial step in higher dimensions and conclude existence.
https://mersenneforum.org/showthread.php?s=6c62fb8b0d129844bee5208e5ed36750&p=454633
mersenneforum.org mfaktc: a CUDA program for Mersenne prefactoring
2016-11-18, 06:01 #2685 storm5510 Random Account Aug 2009 U.S.A. 3²×11×17 Posts A simple question: Does mfaktc read the entire worktodo file into a queue, or does it take it one line at a time?
2016-11-18, 06:49 #2686 James Heinrich "James Heinrich" May 2004 ex-Northern Ontario 23×397 Posts Pretty sure it reads line by line, skipping any invalid lines, until it finds a valid assignment. Once it has finished the assignment it rewrites the entire worktodo, minus the assignment-line it just completed. Side note: disk I/O can be killer (e.g. 1-10MB/s sustained) for things like TF>1G where an assignment is completed every second or so and a large input buffer is maintained -- a RAM drive is essential.
2016-11-18, 16:23 #2687 storm5510 Random Account Aug 2009 U.S.A. 3²×11×17 Posts
Quote: Originally Posted by James Heinrich (post #2686, quoted above)
It reads it into a buffer, like I suspected. I've noticed that when it has a larger bit range, for example 2^72 to 2^74, it will write the intermediate stage when complete. This explains the add file feature. I have been stopping it to add assignments to the worktodo file.
TF>1G: I suspected there were people out there doing this but I had no idea of the magnitude of it. A very interesting page; I bookmarked it.
2016-11-26, 19:25 #2688 cseizert "Curtis" Sep 2016 Fort Collins, CO 2×5 Posts I think there would be a speedup for Pascal cards if the linux version were compiled with 8.0. Actually, I cannot run the current binaries unless I change the makefile and compile them for compute 6.1. But even if you can get this to run on a Pascal card in its current form, my experience suggests that there is a performance penalty for running binaries compiled for compute capability <6.0 cards on the 1080 or 1070.
2017-01-14, 23:29 #2689 Xyzzy "Mike" Aug 2002 22×1,951 Posts We've had a (FE) GTX 1060 card for several months but never got around to running mfaktc. We tried it today and it just worked, out of the box, without anything extra needed! In the past we had to install the CUDA toolkit but we didn't today. The card is doing roughly 530 GHz-d/day and the display has no lag whatsoever. The card is at 80 C and it is nearly silent. We didn't modify the fan curve or anything.
2017-01-15, 04:54 #2690 kladner "Kieren" Jul 2011 In My Own Galaxy! 3×17×197 Posts Do you have fan headroom to bring that down a bit from 80? I get nervous in the upper 70s.
2017-03-10, 03:02 #2691 planetclown Feb 2012 5 Posts Are there updated linux64 binaries available for cuda 8? I don't see them in the download section or in this thread. If not, how difficult would it be to compile them? I recently upgraded from a 970 to 1070 and am getting the 'cudaGetLastError() returned 8: invalid device function' error. Thank you!
2017-03-10, 16:29 #2692
planetclown
Feb 2012
5 Posts
I took a stab at compiling the linux64 binaries myself using the cuda8 toolkit and it's running successfully. The GHz-d/day is hovering around 780 in the terminal for my GTX 1070, and nvidia-smi shows GPU utilization in the high 90's.
When compiling I added an nvcc flag for compute 6.1 capabilities. I also had to remove the existing line for compute 1.1 (Tesla?) since it wouldn't compile with that flag. Otherwise I left all settings the same as in the source file for mfaktc with cuda 6.5.
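For reference, the kind of Makefile edit described above typically looks like the following. This is only an illustrative sketch: the variable name NVCCFLAGS and the exact set of --generate-code lines in the stock mfaktc Makefile are assumptions, not a quote of it.

```
# Illustrative sketch (see note above):
# Add Pascal (compute capability 6.1) so kernels are built natively for GTX 10xx cards:
NVCCFLAGS += --generate-code arch=compute_61,code=sm_61

# Remove (or comment out) the old compute 1.1 target, which the CUDA 8 toolkit no longer accepts:
# NVCCFLAGS += --generate-code arch=compute_11,code=sm_11
```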
I copied the compiled mfaktc.exe and the libraries for cuda 8.0.61 on top of the existing folder structure for mfaktc with cuda 6.5. Attached is the result if anyone else is looking for or wants to test it.
Be aware I'm not an expert, so use at your own risk.
Attached Files
mfaktc-0.21.linux64.cuda80.tar.gz (1.38 MB, 62 views)
2017-03-10, 18:30 #2693 flashjh "Jerry" Nov 2011 Vancouver, WA 1,123 Posts Thank you
2017-03-12, 15:27 #2694
bayanne
"Tony Gott"
Aug 2002
Yell, Shetland, UK
313 Posts
Quote: Originally Posted by planetclown (post #2692, quoted above)
Hmm, I wonder whether that would run on a Mac, which I have running another GPU project on cuda 8.0.53 ...
2017-03-23, 22:49 #2695 TheJudger "Oliver" Mar 2005 Germany 2126₈ Posts
stock 1080 Ti "Founders Edition"
Code:
# ./mfaktc.exe -tf 66362159 75 76
mfaktc v0.21 (64bit built)
[...]
CUDA device info
  name                       Graphics Device
  compute capability         6.1
  max threads per block      1024
  max shared memory per MP   98304 byte
  number of multiprocessors  28
  clock rate (CUDA cores)    1582MHz
  memory clock rate:         5505MHz
  memory bus width:          352 bit
[...]
Date   Time  | class  Pct |  time    ETA | GHz-d/day  Sieve  Wait
Mar 23 23:43 |     0  0.1% | 7.003  1h51m |   1481.90  82485  n.a.%
Mar 23 23:44 |     4  0.2% | 6.980  1h51m |   1486.78  82485  n.a.%
Mar 23 23:44 |     9  0.3% | 7.003  1h51m |   1481.90  82485  n.a.%
Mar 23 23:44 |    12  0.4% | 7.110  1h53m |   1459.59  82485  n.a.%
Mar 23 23:44 |    16  0.5% | 7.494  1h59m |   1384.80  82485  n.a.%
Mar 23 23:44 |    24  0.6% | 7.928  2h06m |   1309.00  82485  n.a.%
Mar 23 23:44 |    25  0.7% | 7.955  2h06m |   1304.55  82485  n.a.%
First 20-25 seconds: limited by power target (250W).
After 20-25 seconds: limited by thermal target, hovers around ~190W. Reason: need more fresh air in the chassis.
Oliver
Last fiddled with by TheJudger on 2017-03-23 at 22:53
https://blog.teknkl.com/velocity-days-and-weeks/
With an assist from Velocity, your emails can have time-responsive content. (And I don't just mean Happy ${day_of_week}, ${first_name}!)
• When the same email is resent with different primary content (e.g. a weekly newsletter) Velocity can customize secondary content based on the day. Like reader PW, you can show a special promo only in the first send of every month.
• Like Community user GM, you can pre-create a series of different content blocks, in effect creating a drip campaign from just one asset.
• If a triggered email is sent off-hours, you can notify the recipient that they'll hear from Sales on the next business day.
• Or lots of other adventures!
### Always include this bit
To run all the snippets below, include common variables at the top of the token (or in a global token) like so:
```
#set( $defaultTimeZone = $date.getTimeZone().getTimeZone("America/New_York") )
#set( $defaultLocale = $date.getLocale() )
#set( $calNow = $date.getCalendar() )
#set( $ret = $calNow.setTimeZone($defaultTimeZone) )
#set( $calConst = $field.in($calNow) )
#set( $ISO8601DateOnly = "yyyy-MM-dd" )
#set( $ISO8601DateTime = "yyyy-MM-dd'T'HH:mm:ss" )
#set( $ISO8601DateTimeWithSpace = "yyyy-MM-dd HH:mm:ss" )
#set( $ISO8601DateTimeWithMillisUTC = "yyyy-MM-dd'T'HH:mm:ss.SSS'Z'" )
#set( $ISO8601DateTimeWithMillisTZ = "yyyy-MM-dd'T'HH:mm:ss.SSSZ" )
```

### Note on date-like strings

Date magic needs real Java Date and Calendar objects, not Strings that happen to look like dates. No matter if fields originally had a Date/DateTime datatype in another system, even in Marketo itself – if they've been stringified on their way into the Velocity context, they need to be "resuscitated" into living objects. So in a few of the examples below, a Velocity (i.e. Java) String is parsed into a Date, then converted into a Calendar. To do the parsing, you need to know the exact format, so Velocity knows where to locate the years, months, days, and so on. The $ISO8601DateTimeWithSpace format ("2019-12-07 13:30:00") is how Marketo DateTime fields appear in Velocity.
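To make the "resuscitation" step concrete, here's a minimal sketch. The field $lead.lastInteractionDateTime is a hypothetical Marketo DateTime that shows up in Velocity as a String:

```
## hypothetical String field, e.g. "2019-12-07 13:30:00"
#set( $lastInteractionDate = $convert.parseDate(
  $lead.lastInteractionDateTime,
  $ISO8601DateTimeWithSpace,
  $defaultLocale,
  $defaultTimeZone
) )
## now a real Calendar you can compare, shift, and format
#set( $calLastInteraction = $convert.toCalendar($lastInteractionDate) )
```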
I feel like noting that’s a perfectly valid format within the international standard, ISO 8601: whitespace is ignored, and you aren’t required to have a T (or any separator!) between the date and time. Including the T is often presented as “the” ISO 8601 format, but it’s merely “the most common.”[1]
Regardless, the $ISO8601DateTime and $ISO8601DateTimeWithMillisUTC formats, which both have a T separating the date and time ("2019-12-07T13:30:00"), are vastly more common on the web at large. For example, when JSON is generated from JavaScript (and perhaps stored in a Marketo Textarea field) the format is $ISO8601DateTimeWithMillisUTC. Anyway, knowing what format(s) you've got coming in is crucial!

### Set your time zone or fail

I cannot stress enough that setting the IANA time zone for your location is critical when using Date or Calendar objects in Velocity. If you do not do this — as is the case, unfortunately, with some old code snippets on Marketo's blog — your code is broken. I won't rehash the reasons why here (happy to in the comments) but believe me you must set the timezone!

### Now, for some examples…

#### Check the current day of the month

```
#set( $calNow = $date.getCalendar() )
#if( $calNow.get($calConst.DAY_OF_MONTH) <= 7 )
It's one of the first 7 days of the month!
#end
```

#### Check the current week

```
#if( $calNow.get($calConst.WEEK_OF_MONTH) == 1 )
It's the first calendar week of the month!
#end
#if( $calNow.get($calConst.WEEK_OF_MONTH) == $calNow.getActualMaximum($calConst.WEEK_OF_MONTH) )
It's the last week of the month!
#end
```

#### Check if it's the Nth Something-day of the month

```
#if( $calNow.get($calConst.DAY_OF_WEEK) == $calConst.WEDNESDAY &&
     $calNow.get($calConst.DAY_OF_WEEK_IN_MONTH) == 3 )
It's the 3rd Wednesday of the month!
#end
```
#### Is it during business hours?
```
#set( $calStartOfBusiness = $date.getCalendar() )
#set( $ret = $calStartOfBusiness.setTimeZone($defaultTimeZone) )
#set( $ret = $calStartOfBusiness.set(
  $calStartOfBusiness.get($calConst.YEAR),
  $calStartOfBusiness.get($calConst.MONTH),
  $calStartOfBusiness.get($calConst.DAY_OF_MONTH),
  8, 0, 0
) )
#set( $ret = $calStartOfBusiness.set($calConst.MILLISECOND,0) )
#set( $calCloseOfBusiness = $date.getCalendar() )
#set( $ret = $calCloseOfBusiness.setTimeZone($defaultTimeZone) )
#set( $ret = $calCloseOfBusiness.set(
  $calCloseOfBusiness.get($calConst.YEAR),
  $calCloseOfBusiness.get($calConst.MONTH),
  $calCloseOfBusiness.get($calConst.DAY_OF_MONTH),
  17, 0, 0
) )
#set( $ret = $calCloseOfBusiness.set($calConst.MILLISECOND,0) )
#if( $calNow.compareTo($calStartOfBusiness) >= 0 && $calNow.compareTo($calCloseOfBusiness) <= 0 )
It's currently business hours! ## assumed message; the original body of this #if was lost in extraction
#end
```
Note here that 8,0,0 and 17,0,0 are setting the hour, minute, and second respectively, in 24-hour time.
8,0,0 means 08:00:00 or 8 a.m. and 17,0,0 means 17:00:00 or 5 p.m.
#### Is it a business day?
```
#set( $businessDays = [
  $calConst.MONDAY,
  $calConst.TUESDAY,
  $calConst.WEDNESDAY,
  $calConst.THURSDAY,
  $calConst.FRIDAY
] )
#if( $businessDays.contains($calNow.get($calConst.DAY_OF_WEEK)) )
It's a work day!
#end
```

You can of course combine this business days check with business hours above.

#### Is our timed promo active?

```
#set( $calStartOfPromo = $convert.toCalendar($convert.parseDate(
  "2017-11-15T00:00:00",
  $ISO8601DateTime,
  $defaultLocale,
  $defaultTimeZone
) ) )
#set( $calEndOfPromo = $convert.toCalendar($convert.parseDate(
  "2017-12-01T00:00:00",
  $ISO8601DateTime,
  $defaultLocale,
  $defaultTimeZone
) ) )
#if( $calNow.compareTo($calStartOfPromo) >= 0 && $calNow.before($calEndOfPromo) )
The promo is active!
#end
```

Note again the mandatory use of $defaultTimeZone when initializing new Dates. And check out how I'm using compareTo and before to see if we're currently greater than or equal to the start date and less than the end date. You could substitute compareTo($calEndOfPromo) < 0 for before($calEndOfPromo) as it's the same logic, just a tiny bit longer.

#### Promo expires 7 days from today (whenever that is)

```
$calNow.add($calConst.DATE,7)
#set( $FRIENDLY_24H_DATETIME_WITH_FRIENDLY_TZ = "MMM dd, yyyy HH:mm z" )
Our promo expires 7 days from today, which is
${date.format(
  $FRIENDLY_24H_DATETIME_WITH_FRIENDLY_TZ,
  $calNow,
  $defaultLocale,
  $defaultTimeZone
)}
```

This quickie shows how to shift a certain number of days from today. To go backwards 7 days, use add($calConst.DATE,-7) (there's no explicit subtract method).
#### How long will the promo last?
```
#set( $ret = $calNow.set(
  $calNow.get($calConst.YEAR),
  $calNow.get($calConst.MONTH),
  $calNow.get($calConst.DAY_OF_MONTH),
  0,
  0,
  0
) )
#set( $ret = $calNow.set($calConst.MILLISECOND,0) )
#set( $calEndOfPromo = $convert.toCalendar($convert.parseDate(
  "2018-01-22T00:00:00",
  $ISO8601DateTime,
  $defaultLocale,
  $defaultTimeZone
) ) )
#set( $diffRemaining = $date.difference($calNow,$calEndOfPromo) )
#set( $daysRemaining = $convert.toInteger($diffRemaining.getDays()) )
#set( $weeksRemaining = $convert.toInteger($diffRemaining.getWeeks()) )
The promo
#if( $weeksRemaining > 0 )
ends in ${weeksRemaining} ${display.plural($weeksRemaining,"week","weeks")}!
#elseif( $daysRemaining > 0 )
ends in ${daysRemaining} ${display.plural($daysRemaining,"day","days")}!
#elseif( $daysRemaining == 0 )
ends today!
#else
has already ended! ## assumed wording; the original #else branch was lost in extraction
#end
```
Unlike adding and subtracting days to/from today, differencing dates is relatively complex.
First, you usually[2] want to align dates to midnight (yyyy-MM-ddT00:00:00.000) boundaries. That's why I set the hours, minutes, seconds, and milliseconds of calNow all to zero, to create a Calendar object representing “Today at midnight.” Then the end date is also anchored at 00:00:00 (you could use any time of day, as long it's the same for start and end, but midnight is easiest). Once aligned, you won't get any fractional-day surprises with date.difference(). difference() returns a robust Comparison object, from which you get the count of days, weeks, and so on between the dates.
I also took this opportunity to use $display.plural(), which isn't a date-related feature but just another cool Velocity thing you should know. plural() works with Integers, though, while Comparison.get*() accessors return Longs. That's why $convert.toInteger() is in there.
Times like this, you can see why I say Velocity is anything but simple. So if you're really a newbie, you shouldn't say, “I'm not too advanced with Velocity” as if you're just a couple scripts away from mastery. To truly add Velocity to your skill stack is a slow grind.
#### Add an ordinal indicator to the day, like “1st” or “15th” instead of just “1” or “15”
```
#define( $enUSDayOrdinalIndicators )
1st 2nd 3rd 4th 5th 6th 7th 8th 9th 10th 11th 12th 13th 14th 15th 16th
17th 18th 19th 20th 21st 22nd 23rd 24th 25th 26th 27th 28th 29th 30th 31st
#end
#set( $indicatorList = $enUSDayOrdinalIndicators.toString().trim().split("\s?\d+") )
Today with a friendly ordinal indicator is
${date.format(
  "EEEE, MMMM d'${indicatorList[$calNow.get($calConst.DAY_OF_MONTH)]}'",
  $calNow,
  $defaultLocale,
  $defaultTimeZone
)}
```
This add-on will give you a super-friendly date display, which isn't natively supported by Java or Velocity:
Wednesday, August 15th
(The “th” is the non-native part.)
It should be clear that this is very culturally-specific stuff, and since it isn't part of the localized parts of Java Calendar, you would have to add your own list of localized indicators (the "st"/"nd"/"th" endings) for locales other than en-*, some of which never use indicators at all, like Swedish.
#### Learning More
Get ~~confused~~ enlightened by the Java Calendar docs.
#### Notes
[1] As in JavaScript’s toISOString() of course, and how Java’s ISO_INSTANT constant just refers to "yyyy-MM-dd'T'HH:mm:ssZ" as “ISO.”
The ancient W3C note on Date and Time Formats defines an ISO 8601 profile whose formats always have the T, but doesn’t conflate the profile with the ISO 8601 standard as a whole.
[2] Exception is when a period ends at, say, exactly 5:00 p.m. on a certain date and you do want to take hours (fractional days) into account. Sweepstakes laws might require such precision. Or if you're trying to do something like hours-until-webinar, you'd also want to align to hours (still zero out minutes/seconds/millis for sanity).
https://byjus.com/ncert-solutions-for-class-12-maths-chapter-11-3-dimensional-geometry-ex-11-2/
# NCERT Solutions for Class 12 Maths Chapter 11: Three Dimensional Geometry (Exercise 11.2)
Q1. Prove that the three lines with direction cosines $$\frac{12}{13}, \frac{-3}{13}, \frac{-4}{13}$$; $$\frac{4}{13}, \frac{12}{13}, \frac{3}{13}$$; $$\frac{3}{13}, \frac{-4}{13}, \frac{12}{13}$$ are mutually perpendicular.
Sol:
Two lines with direction cosines: l1, m1, n1 and l2, m2, n2 are perpendicular to each other, if l1 l2 + m1 m2 + n1 n2 = 0
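As a quick sanity check (not required for the problem), each triple really consists of direction cosines, since the squares sum to 1. For the first triple:

$$\left ( \frac{12}{13} \right )^{2} + \left ( \frac{-3}{13} \right )^{2} + \left ( \frac{-4}{13} \right )^{2} = \frac{144 + 9 + 16}{169} = 1$$

and the other two triples are permutations of the same numbers up to sign.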
(i) For the lines with direction cosines, $$\frac{12}{13}, \frac{-3}{13}, \frac{-4}{13}$$ and $$\frac{4}{13}, \frac{12}{13}, \frac{3}{13}\\$$
Therefore, l1 l2 + m1 m2 + n1 n2 = $$\frac{12}{13} \times \frac{4}{13} + \left ( \frac{-3}{13} \right ) \times \frac{12}{13} + \left ( \frac{-4}{13} \right ) \times \frac{3}{13}\\$$
i.e. l1 l2 + m1 m2 + n1 n2 = $$\frac{48}{169} - \frac{36}{169} - \frac{12}{169}$$
(or) l1 l2 + m1 m2 + n1 n2 = 0
Hence, the lines are perpendicular.
(ii) For the lines with direction cosines, $$\frac{4}{13}, \frac{12}{13}, \frac{3}{13}$$ and $$\frac{3}{13}, \frac{-4}{13}, \frac{12}{13}\\$$.
l1 l2 + m1 m2 + n1 n2 = $$\frac{4}{13} \times \frac{3}{13} + \frac{12}{13} \times \left ( \frac{-4}{13} \right ) + \frac{3}{13} \times \frac{12}{13}$$
Therefore, l1 l2 + m1 m2 + n1 n2 = $$\frac{12}{169} – \frac{48}{169} + \frac{36}{169}$$
or, l1 l2 + m1 m2 + n1 n2 = 0
Hence, the lines are perpendicular.
(iii) For the lines with direction cosines, $$\frac{3}{13}, \frac{-4}{13}, \frac{12}{13}$$ and $$\frac{12}{13}, \frac{-3}{13}, \frac{-4}{13}\\$$
l1 l2 + m1 m2 + n1 n2 = $$\frac{3}{13} \times \frac{12}{13} + \left ( \frac{-4}{13} \right ) \times \left ( \frac{-3}{13} \right ) + \frac{12}{13} \times \left ( \frac{-4}{13} \right )$$
Therefore, l1 l2 + m1 m2 + n1 n2 = $$\frac{36}{169} + \frac{12}{169} – \frac{48}{169}$$
i.e. l1 l2 + m1 m2 + n1 n2 = 0
Hence, the lines are perpendicular.
Therefore, all the lines are mutually perpendicular.
Q2. Show that the line through the points (1, −1, 2) (3, 4, −2) is perpendicular to the line through the points (0, 3, 2) and (3, 5, 6).
Sol:
Let PQ be the line joining the points, (1, −1, 2) and (3, 4, − 2), and RS be the line joining the points, (0, 3, 2) and (3, 5, 6).
The direction ratios p1, q1, r1 of PQ are (3 − 1), [4 − (−1)], and (−2 − 2), that is 2, 5, and −4.
The direction ratios p2, q2, r2 of RS are (3 − 0), (5 − 3), and (6 − 2), that is 3, 2, and 4.
PQ and RS will be perpendicular to each other if p1 p2 + q1 q2 + r1 r2 = 0.
p1 p2 + q1 q2 + r1 r2 = 2 × 3 + 5 × 2 + (−4) × 4
i.e. p1 p2 + q1 q2 + r1 r2 = 6 + 10 − 16 = 0
Hence, PQ and RS are perpendicular to each other.
Q3. Prove that the line through the points (4, 7, 8) and (2, 3, 4) is parallel to the line through the points (−1, −2, 1) and (1, 2, 5).
Sol:
Let PQ be the line through the points, (4, 7, 8) and (2, 3, 4), and RS be the line through the points, (−1, −2, 1) and (1, 2, 5).
The directions ratios p1, q1, r1 of PQ are (2 − 4), (3 − 7), and (4 − 8) that is −2, −4, and -4.
The direction ratios p2, q2, r2 of RS are [1 − (−1)], [2 − (−2)], and (5 − 1) that is 2, 4, and 4.
PQ will be parallel to RS, if $$\frac{p_{1}}{p_{2}} = \frac{q_{1}}{q_{2}} = \frac{r_{1}}{r_{2}}$$
$$\frac{p_{1}}{p_{2}} = \frac{-2}{2} = -1\\$$
Also, $$\frac{q_{1}}{q_{2}} = \frac{-4}{4} = -1\\$$
And $$\frac{r_{1}}{r_{2}} = \frac{-4}{4} = -1\\$$
Therefore, $$\frac{p_{1}}{p_{2}} = \frac{q_{1}}{q_{2}} = \frac{r_{1}}{r_{2}}$$
Hence, PQ is parallel to RS.
Q4. Find the equation of the line which passes through the point (1, 2, 3) and is parallel to the vector $$3\hat{i} + 2\hat{j} – 2\hat{k}$$.
Sol:
It is given that the line passes through the point A (1, 2, 3).
Therefore, the position vector of A is $$\vec{a} = \hat{i} + 2\hat{j} + 3\hat{k}$$, and the direction vector is $$\vec{b} = 3\hat{i} + 2\hat{j} - 2\hat{k}$$.
It is known that the line which passes through point A and parallel to $$\vec{b}$$ is given by $$\vec{r} = \vec{a} + \lambda \vec{b}$$ where $$\lambda$$ is a constant.
$$\vec{r} = \hat{i} + 2 \hat{j} + 3\hat{k} + \lambda \left ( 3\hat{i} + 2\hat{j} – 2\hat{k} \right )$$
This is the required equation of the line.
Q5. Find the equation of the line in Cartesian and in vector form that passes through the point with position vector $$2\hat{i} – \hat{j} + 4\hat{k}$$ and is in the direction $$\hat{i} + 2\hat{j} – \hat{k}$$.
Sol:
It is given that line passes through the point with position vector
$$\vec{a} = 2\hat{i} – \hat{j} + 4\hat{k}$$ ——– (1)
$$\vec{b} = \hat{i} + 2\hat{j} – \hat{k}$$ ———- (2)
It is known that a line through a point with position vector $$\vec{a}$$ and parallel to $$\vec{b}$$ is given by the equation,
$$\vec{r} = \vec{a} + \lambda \vec{b}$$ $$\vec{r} = 2\hat{i} – \hat{j} + 4\hat{k} + \lambda \left (\hat{i} + 2\hat{j} – \hat{k} \right )$$
Eliminating $$\lambda$$, we obtain the Cartesian form of the equation:
$$\frac{x – 2}{1} = \frac{y + 1}{2} = \frac{z – 4}{-1}$$
This is the required equation of the given line in Cartesian form.
Q6. Find the Cartesian equation of the line which passes through the point (−2, 4, −5) and parallel to the line given by $$\frac{x + 3}{3} = \frac{y – 4}{5} = \frac{z + 8}{6}$$.
Sol:
It is given that line passes through the given point (−2, 4, −5) and it is parallel to $$\frac{x + 3}{3} = \frac{y – 4}{5} = \frac{z + 8}{6}$$
The direction ratios of the line $$\frac{x + 3}{3} = \frac{y – 4}{5} = \frac{z + 8}{6}$$ are 3, 5, and 6.
The required line is parallel to $$\frac{x + 3}{3} = \frac{y – 4}{5} = \frac{z + 8}{6}$$.
Hence, its direction ratios are 3k, 5k, and 6k, where k ≠ 0.
It is known that the equation of the line through the point $$\left ( x_{1}, y_{1}, z_{1} \right )$$ and with direction ratios p, q, r is given by $$\frac{x – x_{1}}{p} = \frac{y – y_{1}}{q} = \frac{z – z_{1}}{r}$$.
Hence, the equation of the required line is,
$$\frac{x + 2}{3k} = \frac{y – 4}{5k} = \frac{z + 5}{6k}$$ $$\frac{x + 2}{3} = \frac{y – 4}{5} = \frac{z + 5}{6} = k$$
Q7. The Cartesian equation of a line is $$\frac{x – 5}{3} = \frac{y + 4}{7} = \frac{z – 6}{2}$$. Give the vector form of the line.
Sol:
The Cartesian equation of the line is,
$$\frac{x – 5}{3} = \frac{y + 4}{7} = \frac{z – 6}{2}$$ ——— (1)
The given line passes through the point (5, -4, 6). The position vector of this point is $$\vec{a} = 5\hat{i} – 4\hat{j} + 6\hat{k}$$.
Also, the direction ratios of the given line are 3, 7, and 2.
This means that the line is in the direction of vector, $$\vec{b} = 3\hat{i} + 7\hat{j} + 2\hat{k}$$.
It is known that the line through position vector $$\vec{a}$$ and in the direction of the vector $$\vec{b}$$ is given by the equation,
$$\vec{r} = \vec{a} + \lambda \vec{b},\; \lambda \in R$$ $$\vec{r} = \left (5\hat{i} – 4\hat{j} + 6\hat{k} \right ) + \lambda \left ( 3\hat{i} + 7\hat{j} + 2\hat{k} \right )$$
This is the required equation of the given line in vector form.
Q8. Find the Cartesian and the vector equations of the lines that pass through the origin and (5, −2, 3).
Sol:
The required line passes through the origin. Hence, its position vector is given by,
$$\vec{a} = \vec{0}$$ ————– (1)
The direction ratios of the line through origin and (5, −2, 3) are,
(5 − 0) = 5
(−2 − 0) = −2
(3 − 0) = 3
The line is parallel to the vector given by the equation,
$$\vec{b} = 5\hat{i} – 2\hat{j} + 3\hat{k}$$
The equation of the line in vector form through a point with position vector $$\vec{a}$$ and parallel to $$\vec{b}$$ is,
$$\vec{r} = \vec{a} + \lambda \vec{b},\; \lambda \in R$$ $$\vec{r} = \vec{0} + \lambda \left ( 5\hat{i} – 2\hat{j} + 3\hat{k} \right )$$ $$\vec{r} = \lambda \left ( 5\hat{i} – 2\hat{j} + 3\hat{k} \right )$$
The equation of the line through the point $$\left ( x_{1}, y_{1}, z_{1} \right )$$ and the direction ratios are p, q, r is given as,
$$\frac{x – x_{1}}{p} = \frac{y – y_{1}}{q} = \frac{z – z_{1}}{r}$$
Hence, the equation of the required line in the Cartesian form is,
$$\frac{x – 0}{5} = \frac{y – 0}{-2} = \frac{z – 0}{3}$$ $$\frac{x}{5} = \frac{y}{-2} = \frac{z}{3}$$
Q9. Find the Cartesian and the vector equations of the line that passes through the points (3, −2, −5) and (3, −2, 6).
Sol:
Let the line passing through the points, M (3, −2, −5) and N (3, −2, 6), be MN.
Since MN passes through M (3, −2, −5), its position vector is given by, $$\vec{a} = 3\hat{i} – 2\hat{j} – 5\hat{k}$$.
The direction ratios of MN are given by,
(3 − 3) = 0
(−2 + 2) = 0
(6 + 5) = 11
The equation of the vector in the direction of MN is,
$$\vec{b} = 0.\hat{i} – 0.\hat{j} + 11\hat{k}$$ $$\vec{b} = 11\hat{k}$$
The equation of MN in vector form is given by,
$$\vec{r} = \vec{a} + \lambda \vec{b},\; \lambda \in R$$ $$\vec{r} = \left (3\hat{i} – 2\hat{j} – 5\hat{k} \right ) + 11 \lambda \hat{k}$$
The equation of MN in Cartesian form is,
$$\frac{x – x_{1}}{p} = \frac{y – y_{1}}{q} = \frac{z – z_{1}}{r}$$
i.e. $$\frac{x – 3}{0} = \frac{y + 2}{0} = \frac{z + 5}{11}$$
Q10. Calculate the angle between the given pairs of lines:
(i) $$\vec{r} = 2\hat{i} - 5\hat{j} + \hat{k} + \lambda \left ( 3\hat{i} + 2\hat{j} + 6\hat{k} \right )$$ and $$\vec{r} = 7\hat{i} - 6\hat{k} + \mu \left ( \hat{i} + 2\hat{j} + 2\hat{k} \right )$$
(ii) $$\vec{r} = 3\hat{i} + \hat{j} – 2\hat{k} + \lambda \left ( \hat{i} – \hat{j} – 2\hat{k} \right ) \; and \; \vec{r} = 2\hat{i} – \hat{j} – 56\hat{k} + \mu \left ( 3\hat{i} – 5\hat{j} – 4\hat{k} \right )$$
Sol:
(i) Let $$\theta$$ be the angle between the given lines.
The angle between the given pairs of lines is given by,
$$\cos \theta = \left | \frac{\vec{b_{1}}. \vec{b_{2}}}{\left | \vec{b_{1}} \right | \left | \vec{b_{2}} \right |} \right |$$
The given lines are parallel to the vectors, $$\vec{b_{1}} = 3\hat{i} + 2\hat{j} + 6\hat{k}$$ and $$\vec{b_{2}} = \hat{i} + 2\hat{j} + 2\hat{k}$$ respectively.
Therefore,
$$\left |\vec{b_{1}} \right | = \sqrt{3^{2} + 2^{2} + 6^{2}}$$= 7
$$\\\boldsymbol{\Rightarrow }$$ $$\left |\vec{b_{2}} \right | = \sqrt{\left ( 1 \right )^{2} + \left ( 2 \right )^{2} + \left ( 2 \right )^{2}}$$ = 3
$$\\\boldsymbol{\Rightarrow }$$ $$\vec{b_{1}}.\vec{b_{2}} = \left ( 3\hat{i} + 2\hat{j} + 6\hat{k} \right ).\left ( \hat{i} + 2\hat{j} + 2\hat{k} \right )$$
$$\\\boldsymbol{\Rightarrow }$$ $$\vec{b_{1}}.\vec{b_{2}} = 3 \times 1 + 2 \times 2 + 6 \times 2$$ = 3 + 4 + 12 = 19
$$\\\boldsymbol{\Rightarrow }$$ $$\cos \theta = \frac{19}{7 \times 3}$$
$$\\\boldsymbol{\Rightarrow }$$ $$\theta = \cos ^{-1} \left (\frac{19}{21} \right )$$
(ii) The given lines are parallel to the vectors, $$\vec{b_{1}} = \hat{i} – \hat{j} – 2\hat{k}$$ and $$\vec{b_{2}} = 3\hat{i} – 5\hat{j} – 4\hat{k}$$ respectively.
Therefore,
$$\left |\vec{b_{1}} \right | = \sqrt{\left (1 \right )^{2} + \left (-1 \right )^{2} + \left (-2 \right )^{2}}= \sqrt{6}$$
$$\\\boldsymbol{\Rightarrow }$$ $$\left |\vec{b_{2}} \right | = \sqrt{\left (3 \right )^{2} + \left (-5 \right )^{2} + \left (-4 \right )^{2}}$$
$$\left |\vec{b_{2}} \right | = \sqrt{50}$$ $$= 5\sqrt{2}$$
$$\\\boldsymbol{\Rightarrow }$$ $$\vec{b_{1}}.\vec{b_{2}} = \left ( \hat{i} – \hat{j} – 2\hat{k} \right ).\left ( 3\hat{i} – 5\hat{j} – 4\hat{k}\right )$$
$$\\\boldsymbol{\Rightarrow }$$ $$\vec{b_{1}}.\vec{b_{2}} = 1 \times 3 – 1 \left ( -5 \right ) – 2\left ( -4 \right )$$ = 3 + 5 + 8 = 16
$$\\\boldsymbol{\Rightarrow }$$ $$\cos \theta = \left | \frac{\vec{b_{1}}. \vec{b_{2}}}{\left | \vec{b_{1}} \right | \left | \vec{b_{2}} \right |} \right | = \frac{16}{\sqrt{6}.5\sqrt{2}}= \frac{16}{\sqrt{2}.\sqrt{3}.5\sqrt{2}} = \frac{16}{10\sqrt{3}}$$
$$\\\boldsymbol{\Rightarrow }$$ $$\cos \theta = \frac{8}{5\sqrt{3}}$$
$$\\\boldsymbol{\Rightarrow }$$ $$\theta = \cos ^{-1}\left (\frac{8}{5\sqrt{3}} \right )$$
Q11. Calculate the angle between the given pairs of lines:
(i) $$\frac{x – 2}{2} = \frac{y – 1}{5} = \frac{z + 3}{-3}$$ and $$\frac{x + 2}{-1} = \frac{y – 4}{8} = \frac{z – 5}{4}$$
(ii) $$\frac{x }{2} = \frac{y}{2} = \frac{z}{1}$$ and $$\frac{x – 5}{4} = \frac{y – 2}{1} = \frac{z – 3}{8}$$
Sol:
(i) Let $$\vec{b_{1}}$$ and $$\vec{b_{2}}$$ be the vectors parallel to the pair of lines, $$\frac{x – 2}{2} = \frac{y – 1}{5} = \frac{z + 3}{-3}$$ and $$\frac{x + 2}{-1} = \frac{y – 4}{8} = \frac{z – 5}{4}$$
$$\vec{b_{1}} = 2\hat{i} + 5\hat{j} – 3\hat{k}$$ and
$$\vec{b_{2}} = -\hat{i} + 8\hat{j} + 4\hat{k}$$
$$\\\boldsymbol{\Rightarrow }$$ $$\left |\vec{b_{1}} \right | = \sqrt{\left ( 2 \right )^{2} + \left ( 5 \right )^{2} + \left ( -3 \right )^{2}}= \sqrt{38}$$
$$\\\boldsymbol{\Rightarrow }$$ $$\left |\vec{b_{2}} \right | = \sqrt{\left ( -1 \right )^{2} + \left ( 8 \right )^{2} + \left ( 4 \right )^{2}}= \sqrt{81}$$= 9
$$\\\boldsymbol{\Rightarrow }$$ $$\vec{b_{1}}. \vec{b_{2}} = \left (2\hat{i} + 5\hat{j} – 3\hat{k} \right ).\left (-\hat{i} + 8\hat{j} + 4\hat{k} \right )$$
$$\\\boldsymbol{\Rightarrow }$$ $$\vec{b_{1}}. \vec{b_{2}} = 2\left ( -1 \right ) + 5 \times 8 + \left ( -3 \right )4$$= -2 + 40 – 12 = 26
The angle $$\theta$$, between the given pair of lines is given by the relation,
$$\cos \theta = \left | \frac{\vec{b_{1}}. \vec{b_{2}}}{\left | \vec{b_{1}} \right | \left | \vec{b_{2}} \right |} \right | = \frac{26}{9\sqrt{38}}$$
$$\\\boldsymbol{\Rightarrow }$$ $$\theta = \cos ^{-1}\left (\frac{26}{9\sqrt{38}} \right )$$
(ii) Let $$\vec{b_{1}}$$ and $$\vec{b_{2}}$$ be the vectors parallel to the given pair of lines, $$\frac{x }{2} = \frac{y}{2} = \frac{z}{1}$$ and $$\frac{x – 5}{4} = \frac{y – 2}{1} = \frac{z – 3}{8}$$ respectively.
$$\\\boldsymbol{\Rightarrow }$$ $$\vec{b_{1}} = 2\hat{i} + 2\hat{j} + \hat{k}$$ $$\;\;and \;\;\vec{b_{2}} = 4\hat{i} + \hat{j} + 8\hat{k}$$
$$\\\boldsymbol{\Rightarrow }$$ $$\left |\vec{b_{1}} \right | = \sqrt{\left ( 2 \right )^{2} + \left ( 2 \right )^{2} + \left ( 1 \right )^{2}}= \sqrt{9}$$= 3
$$\\\boldsymbol{\Rightarrow }$$ $$\left |\vec{b_{2}} \right | = \sqrt{\left ( 4 \right )^{2} + \left ( 1 \right )^{2} + \left ( 8 \right )^{2}} = \sqrt{81}$$ = 9
$$\\\boldsymbol{\Rightarrow }$$ $$\vec{b_{1}}. \vec{b_{2}} = \left ( 2\hat{i} + 2\hat{j} + \hat{k} \right ).\left ( 4\hat{i} + \hat{j} + 8\hat{k} \right )$$
$$\\\boldsymbol{\Rightarrow }$$ $$\vec{b_{1}}. \vec{b_{2}} = 2 \times 4 + 2 \times 1 + 1 \times 8$$ = 8 + 2 + 8 = 18
The angle $$\theta$$, between the given pair of lines is given by the relation,
$$\\\boldsymbol{\Rightarrow }$$ $$\cos \theta = \left | \frac{\vec{b_{1}}. \vec{b_{2}}}{\left | \vec{b_{1}} \right | \left | \vec{b_{2}} \right |} \right |= \frac{18}{3 \times 9}= \frac{2}{3}$$
$$\\\boldsymbol{\Rightarrow }$$ $$\theta = \cos ^{-1}\left (\frac{2}{3} \right )$$
Q12. Find the value of m so that the lines $$\frac{1 - x}{3} = \frac{7y - 14}{2m} = \frac{z - 3}{2}$$ and $$\frac{7 - 7x}{3m} = \frac{y - 5}{1} = \frac{6 - z}{5}$$ are at right angles.
Sol:
The given equations can be written in the standard form as $$\frac{x – 1}{-3} = \frac{y – 2}{\frac{2m}{7}} = \frac{z – 3}{2}$$ and $$\frac{x – 1}{\frac{-3m}{7}} = \frac{y – 5}{1} = \frac{z – 6}{-5}$$
The direction ratios of the lines are -3, $$\frac{2m}{7}$$, 2 and $$\frac{-3m}{7}$$,1 , -5 respectively.
Two lines with direction ratios, $$a_{1}, b_{1}, c_{1}$$ and $$a_{2}, b_{2}, c_{2}$$ are perpendicular to each other, if $$a_{1}a_{2} + b_{1}b_{2} + c_{1}c_{2} = 0$$
$$\left ( -3 \right ).\left ( \frac{-3m}{7} \right ) + \left ( \frac{2m}{7} \right ). \left ( 1 \right ) + 2.\left ( -5 \right ) = 0$$
$$\frac{9m}{7} + \frac{2m}{7} = 10$$
$$11m = 70$$
$$m = \frac{70}{11}$$
Therefore, the value of m is $$\frac{70}{11}$$.
Q13. Show that the lines $$\frac{x – 5}{7} = \frac{y + 2}{-5} = \frac{z}{1}$$ and $$\frac{x}{1} = \frac{y}{2} = \frac{z}{3}$$ are perpendicular to each other.
Sol:
The equations of the given lines are lines $$\frac{x – 5}{7} = \frac{y + 2}{-5} = \frac{z}{1}$$ and $$\frac{x}{1} = \frac{y}{2} = \frac{z}{3}$$.
The direction ratios of the given lines are 7, −5, 1 and 1, 2, 3 respectively.
Two lines with direction ratios, $$a_{1}, b_{1}, c_{1}$$ and $$a_{2}, b_{2}, c_{2}$$ are perpendicular to each other, if $$a_{1}a_{2} + b_{1}b_{2} + c_{1}c_{2} = 0$$
$$\left (7 \times 1 \right ) + \left (-5 \times 2 \right ) + \left (1 \times 3 \right )\\$$
= 7 – 10 + 3 = 0
Hence, the given lines are perpendicular to each other.
Q14. Find the shortest distance between the given lines,
$$\vec{r} = \left ( \hat{i} + 2\hat{j} + \hat{k} \right ) + \lambda \left ( \hat{i} – \hat{j} + \hat{k} \right )$$ and $$\vec{r} = \left ( 2\hat{i} – \hat{j} – \hat{k} \right ) + \mu \left ( 2\hat{i} + \hat{j} + 2\hat{k} \right )$$
Sol:
The equations of the given lines are,
$$\vec{r} = \left ( \hat{i} + 2\hat{j} + \hat{k} \right ) + \lambda \left ( \hat{i} – \hat{j} + \hat{k} \right )$$ $$\vec{r} = \left ( 2\hat{i} – \hat{j} – \hat{k} \right ) + \mu \left ( 2\hat{i} + \hat{j} + 2\hat{k} \right )$$
It is known that the shortest distance between the lines, $$\vec{r} = \vec{a_{1}} + \lambda \vec{b_{1}}$$ and $$\vec{r} = \vec{a_{2}} + \mu \vec{b_{2}}$$, is given by:
$$\\d = \left | \frac{\left ( \vec{b_{1}} \times \vec{b_{2}} \right ). \left ( \vec{a_{2}} – \vec{a_{1}} \right ) }{\left | \vec{b_{1}} \times \vec{b_{2}} \right |} \right |$$ . . . . . . . . . . . . . (1)
Comparing the given equations, we obtain:
$$\vec{a_{1}} = \hat{i} + 2\hat{j} + \hat{k}$$ , $$\vec{b_{1}} = \hat{i} – \hat{j} + \hat{k}$$
And, $$\vec{a_{2}} = 2\hat{i} – \hat{j} – \hat{k}$$, $$\vec{b_{2}} = 2\hat{i} + \hat{j} + 2\hat{k}$$
$$\boldsymbol{\Rightarrow }$$ $$\vec{a_{2}} – \vec{a_{1}}= \left (2\hat{i} – \hat{j} – \hat{k} \right ) – \left ( \hat{i} + 2\hat{j} + \hat{k} \right )= \hat{i} – 3\hat{j} – 2\hat{k}\\$$
$$\\\vec{b_{1}} \times \vec{b_{2}}$$ = $$\boldsymbol{\begin{vmatrix} \hat{i}&\hat{j} &\hat{k}\\ 1& -1&1\\ 2& 1& 2\end{vmatrix}}\\$$
$$\vec{b_{1}} \times \vec{b_{2}} = \left ( -2 – 1 \right )\hat{i} – \left ( 2 – 2 \right )\hat{j} + \left ( 1 + 2 \right )\hat{k} = -3\hat{i} + 3\hat{k}$$
And, $$\left |\vec{b_{1}} \times \vec{b_{2}} \right | = \sqrt{\left ( -3 \right )^{2} + \left ( 3 \right )^{2}} = \sqrt{9 + 9}= 3\sqrt{2}$$
Substituting all the values in equation (1), we obtain:
$$d = \left | \frac{\left ( -3\hat{i} + 3\hat{k}\right ). \left ( \hat{i} - 3\hat{j} - 2\hat{k}\right )}{3\sqrt{2}} \right | = \left | \frac{(-3)(1) + (3)(-2)}{3\sqrt{2}} \right | = \left | \frac{-9}{3\sqrt{2}} \right | = \frac{3}{\sqrt{2}} = \frac{3 \times \sqrt{2}}{\sqrt{2} \times \sqrt{2}} = \frac{3\sqrt{2}}{2}$$
Hence, the shortest distance between the two lines is $$\frac{3\sqrt{2}}{2}$$ units.
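A quick numerical cross-check of this result (my addition, assuming numpy): the formula in equation (1) is just a scalar triple product divided by a cross-product norm.

```python
import numpy as np

a1, b1 = np.array([1.0, 2.0, 1.0]), np.array([1.0, -1.0, 1.0])
a2, b2 = np.array([2.0, -1.0, -1.0]), np.array([2.0, 1.0, 2.0])
n = np.cross(b1, b2)                        # (-3, 0, 3)
d = abs(n @ (a2 - a1)) / np.linalg.norm(n)  # |(b1 x b2) . (a2 - a1)| / |b1 x b2|
print(d, 3 * np.sqrt(2) / 2)                # both ~ 2.1213
```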
Q15. Find the shortest distance between the lines $$\frac{x + 1}{7} = \frac{y + 1}{-6} = \frac{z + 1}{1}$$ and $$\frac{x – 3}{1} = \frac{y – 5}{-2} = \frac{z – 7}{1}$$
Sol:
The given lines are as follows,
$$\frac{x + 1}{7} = \frac{y + 1}{-6} = \frac{z + 1}{1}$$ and $$\frac{x – 3}{1} = \frac{y – 5}{-2} = \frac{z – 7}{1}$$
It is known that the shortest distance between the two lines, $$\frac{x – x_{1}}{a_{1}} = \frac{y – y_{1}}{b_{1}} = \frac{z – z_{1}}{c_{1}}$$ and $$\frac{x – x_{2}}{a_{2}} = \frac{y – y_{2}}{b_{2}} = \frac{z – z_{2}}{c_{2}}$$ is given by,
$$d = \frac{\begin{vmatrix} x_{2}-x_{1} & y_{2}-y_{1} & z_{2}-z_{1} \\ a_{1} & b_{1} & c_{1} \\ a_{2} & b_{2} & c_{2} \end{vmatrix}}{\sqrt{(b_{1}c_{2}-b_{2}c_{1})^{2}+(c_{1}a_{2}-c_{2}a_{1})^{2}+(a_{1}b_{2}-a_{2}b_{1})^{2}}}$$ . . . . . . . . . . . (1)
Comparing the given equations, we obtain:
$$x_{1} = -1, y_{1} = -1, z_{1} = -1$$$$a_{1} = 7, b_{1} = -6, c_{1} = 1$$
And, $$x_{2} = 3, y_{2} = 5, z_{2} = 7$$$$a_{2} = 1, b_{2} = -2, c_{2} = 1$$
Then,
$$\boldsymbol{\Rightarrow \begin{vmatrix} x_{2}-x_{1} &y_{2}-y_{1} &z_{2}-z_{1} \\ a_{1}& b_{1} &c_{1} \\ a_{2}& b_{2} &c_{2} \end{vmatrix}\;=\;\begin{vmatrix} 4 & 6 & 8\\ 7 & -6 & 1\\ 1 & -2 & 1 \end{vmatrix}}$$
= $$\\4\left ( -6 + 2 \right ) – 6\left ( 7 – 1 \right ) + 8\left ( -14 + 6 \right )$$
= $$– 16 – 36 – 64$$ = – 116
= $$\\\sqrt{\left ( b_{1}c_{2} – b_{2}c_{1} \right )^{2} + \left ( c_{1}a_{2} – c_{2}a_{1} \right )^{2} + \left (a_{1}b_{2} – a_{2}b_{1} \right )^{2} }\\$$
= $$\sqrt{\left ( -6 + 2 \right )^{2} + \left ( 1 - 7 \right )^{2} + \left ( -14 + 6 \right )^{2}}=\sqrt{116}=2\sqrt{29}$$
Substituting all the values in equation (1), we obtain:
$$d = \frac{-116}{2\sqrt{29}}= \frac{-58}{\sqrt{29}}= \frac{-2 \times 29}{\sqrt{29}}= -2 \sqrt{29}$$
Since distance is always non-negative, the distance between the given lines is $$2 \sqrt{29}$$ units.
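The Cartesian-form formula can be checked the same way (my addition, assuming numpy), since the determinant in the numerator equals $(\vec{d_1} \times \vec{d_2}) \cdot (\vec{p_2} - \vec{p_1})$ for the points $p_i$ and direction ratios $d_i$:

```python
import numpy as np

p1, d1 = np.array([-1.0, -1.0, -1.0]), np.array([7.0, -6.0, 1.0])
p2, d2 = np.array([3.0, 5.0, 7.0]), np.array([1.0, -2.0, 1.0])
n = np.cross(d1, d2)                        # (-4, -6, -8); |n| = sqrt(116)
d = abs(n @ (p2 - p1)) / np.linalg.norm(n)
print(d, 2 * np.sqrt(29))                   # both ~ 10.7703
```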
Q16. Find the shortest distance between the lines whose vector equations are $$\vec{r} = \left ( \hat{i} + 2\hat{j} + 3\hat{k} \right ) + \lambda \left ( \hat{i} – 3\hat{j} + 2\hat{k} \right )$$ and $$\vec{r} = \left ( 4\hat{i} + 5\hat{j} + 6\hat{k} \right ) + \mu \left ( 2\hat{i} + 3\hat{j} + \hat{k} \right )$$
Sol:
The given vectors are as follows:
$$\vec{r} = \left ( \hat{i} + 2\hat{j} + 3\hat{k} \right ) + \lambda \left ( \hat{i} – 3\hat{j} + 2\hat{k} \right )\\$$ $$\vec{r} = \left ( 4\hat{i} + 5\hat{j} + 6\hat{k} \right ) + \mu \left ( 2\hat{i} + 3\hat{j} + \hat{k} \right )$$
It is known that the shortest distance between the lines, $$\vec{r} = \vec{a_{1}} + \lambda \vec{b_{1}}$$ and $$\vec{r} = \vec{a_{2}} + \mu \vec{b_{2}}$$, is given by:
$$d = \left | \frac{\left ( \vec{b_{1}} \times \vec{b_{2}} \right ). \left ( \vec{a_{2}} – \vec{a_{1}} \right ) }{\left | \vec{b_{1}} \times \vec{b_{2}} \right |} \right |$$ . . . . . . . . . . . (1)
Comparing the given equations with $$\vec{r} = \vec{a_{1}} + \lambda \vec{b_{1}}$$ and $$\vec{r} = \vec{a_{2}} + \mu \vec{b_{2}}$$
$$\\\vec{a_{1}} = \hat{i} + 2\hat{j} + 3\hat{k}$$, $$\vec{b_{1}} = \hat{i} – 3\hat{j} + 2\hat{k}$$, $$\vec{a_{2}} = 4\hat{i} + 5\hat{j} + 6\hat{k}$$, $$\vec{b_{2}} = 2\hat{i} + 3\hat{j} + \hat{k}$$
$$\vec{a_{2}} - \vec{a_{1}} = \left ( 4\hat{i} + 5\hat{j} + 6\hat{k} \right ) - \left ( \hat{i} + 2\hat{j} + 3\hat{k} \right )= 3\hat{i} + 3\hat{j} + 3\hat{k}$$ $$\vec{b_{1}} \times \vec{b_{2}} = \begin{vmatrix} \hat{i} & \hat{j} & \hat{k}\\ 1 & -3 & 2\\ 2 & 3 & 1 \end{vmatrix}$$
= $$\\\left ( -3 – 6 \right )\hat{i} – \left ( 1 – 4 \right )\hat{j} + \left ( 3 + 6 \right )\hat{k} = -9 \hat{i} + 3\hat{j} + 9\hat{k}\\$$
$$\left |\vec{b_{1}} \times \vec{b_{2}} \right | = \sqrt{\left ( -9 \right )^{2} + \left ( 3 \right )^{2} + \left ( 9 \right )^{2}}=\sqrt{81 + 9 + 81} =\sqrt{171}=3\sqrt{19}$$ $$\left (\vec{b_{1}} \times \vec{b_{2}} \right ). \left (\vec{a_{2}} - \vec{a_{1}} \right ) = \left (-9 \hat{i} + 3\hat{j} + 9\hat{k} \right ). \left ( 3\hat{i} + 3\hat{j} + 3\hat{k} \right )$$
= $$\\\left (-9 \times 3 \right ) + \left ( 3 \times 3 \right ) + \left ( 9 \times 3 \right )$$ = 9
Substituting all the values in equation (1), we obtain:
$$d = \left | \frac{9}{3\sqrt{19}} \right | = \left | \frac{3}{\sqrt{19}} \right |$$
Hence, the shortest distance between the two given lines is $$\frac{3}{\sqrt{19}}$$ units.
Q17. Find the shortest distance between the lines whose vector equations are $$\vec{r} = \left ( 1 - t \right )\hat{i} + \left ( t - 2 \right )\hat{j} + \left ( 3 - 2t \right )\hat{k}$$ and $$\vec{r} = \left ( s + 1 \right )\hat{i} + \left ( 2s - 1 \right )\hat{j} - \left ( 2s + 1 \right )\hat{k}$$
Sol:
The lines are as follows:
$$\vec{r} = \left ( 1 – t \right )\hat{i} + \left ( t – 2 \right )\hat{j} + \left ( 3 – 2t \right )\hat{k}$$
$$\\\vec{r} = \left ( \hat{i} – 2\hat{j} + 3\hat{k} \right ) + t \left ( -\hat{i} + \hat{j} – 2\hat{k} \right )$$ . . . . . . . . (1)
$$\vec{r} = \left ( s + 1 \right )\hat{i} + \left ( 2s - 1 \right )\hat{j} - \left ( 2s + 1 \right )\hat{k}$$
$$\vec{r} = \left ( \hat{i} - \hat{j} - \hat{k} \right ) + s \left ( \hat{i} + 2\hat{j} - 2\hat{k} \right )$$ . . . . . . . . . . (2)
It is known that the shortest distance between the lines, $$\vec{r} = \vec{a_{1}} + \lambda \vec{b_{1}}$$ and $$\vec{r} = \vec{a_{2}} + \mu \vec{b_{2}}$$ is given by:
$$\\d = \left | \frac{\left ( \vec{b_{1}} \times \vec{b_{2}} \right ). \left ( \vec{a_{2}} – \vec{a_{1}} \right ) }{\left | \vec{b_{1}} \times \vec{b_{2}} \right |} \right |$$ . . . . . . . . (3)
Comparing the given equations with $$\vec{r} = \vec{a_{1}} + \lambda \vec{b_{1}}$$ and $$\vec{r} = \vec{a_{2}} + \mu \vec{b_{2}}$$,
$$\vec{a_{1}} = \hat{i} – 2\hat{j} + 3\hat{k}$$, $$\vec{b_{1}} = -\hat{i} + \hat{j} – 2\hat{k}$$
And, $$\vec{a_{2}} = \hat{i} – \hat{j} – \hat{k}$$, $$\vec{b_{2}} = \hat{i} + 2\hat{j} – 2\hat{k}$$
$$\vec{a_{2}} - \vec{a_{1}} = \left ( \hat{i} - \hat{j} - \hat{k} \right ) - \left ( \hat{i} - 2\hat{j} + 3\hat{k} \right ) = \hat{j} - 4\hat{k}$$ $$\vec{b_{1}} \times \vec{b_{2}} = \begin{vmatrix} \hat{i} & \hat{j} & \hat{k}\\ -1 & 1 & -2\\ 1 & 2 & -2 \end{vmatrix}$$
= $$\\\left ( -2 + 4 \right )\hat{i} – \left ( 2 + 2 \right )\hat{j} + \left ( -2 – 1 \right )\hat{k}= 2\hat{i} – 4\hat{j} – 3\hat{k}$$
$$\left | \vec{b_{1}} \times \vec{b_{2}} \right | = \sqrt{\left ( 2 \right )^{2} + \left ( -4 \right )^{2} + \left ( -3 \right )^{2}}= \sqrt{4 + 16 + 9}= \sqrt{29}\\$$
$$\\\boldsymbol{\Rightarrow }$$ $$\\\left (\vec{b_{1}} \times \vec{b_{2}} \right ). \left ( \vec{a_{2}} – \vec{a_{1}}\right ) = \left (2\hat{i} – 4\hat{j} – 3\hat{k} \right ). \left ( \hat{j} – 4\hat{k} \right )$$ = -4 + 12 = 8
Substituting all the values in equation (3), we obtain:
$$d = \left | \frac{8}{\sqrt{29}} \right | = \frac{8}{\sqrt{29}}$$
Hence, the shortest distance between the lines is $$\frac{8}{\sqrt{29}}$$ units.
|
2018-08-18 09:17:30
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8691509962081909, "perplexity": 259.94670541091114}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-34/segments/1534221213508.60/warc/CC-MAIN-20180818075745-20180818095745-00068.warc.gz"}
|
https://vismor.com/documents/network_analysis/matrix_algorithms/S4.SS5.php
|
4.5 Numerical Instability During Factorization
Examining Equation 43 and Equation 44, you will observe that LU decomposition will fail when the value $a_{kk}^{(k)}$ (called the pivot element) is zero. In many applications, the possibility of a zero pivot is quite real and constitutes a serious impediment to the use of Gaussian elimination. The problem is compounded by the fact that Gaussian elimination can be numerically unstable even when no pivot element is zero.
Numerical instability occurs when errors introduced by the finite precision representation of real numbers are of sufficient magnitude to swamp the true solution to a problem. In other words, a numerically unstable problem has a theoretical solution that may be unobtainable in finite precision arithmetic.
The other LU decomposition schemes examined in this section exhibit similar characteristics, e.g. instability is introduced by the division by $u_{jj}$ in Equation 46 and by $l_{ii}$ in Equation 50.
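To make the failure mode concrete, here is a small illustration (my own example, not from the text). With a pivot far below double-precision resolution, elimination without row interchanges swamps the true solution, while a pivoted solver recovers it:

```python
import numpy as np

eps = 1e-20                       # tiny but nonzero pivot
A = np.array([[eps, 1.0], [1.0, 1.0]])
b = np.array([1.0, 2.0])          # exact solution is approximately (1, 1)

# Gaussian elimination without pivoting: the multiplier 1/eps amplifies rounding error
m = A[1, 0] / A[0, 0]
U, c = A.copy(), b.copy()
U[1] -= m * U[0]
c[1] -= m * c[0]
x2 = c[1] / U[1, 1]
x1 = (c[0] - U[0, 1] * x2) / U[0, 0]
print(x1, x2)                     # 0.0 1.0 -- x1 is completely lost

print(np.linalg.solve(A, b))      # LAPACK pivots rows: [1. 1.]
```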
|
2019-02-21 16:41:24
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 1, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8154261112213135, "perplexity": 475.3844529998996}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-09/segments/1550247505838.65/warc/CC-MAIN-20190221152543-20190221174543-00366.warc.gz"}
|
http://15418.courses.cs.cmu.edu/spring2016/lecture/progperf1/slide_041
|
Slide 41 of 64
yikesaiting
As Prof. Kayvon said in the previous slide, each worker maintains a queue. When it encounters a cilk_spawn, it pushes the continuation onto its queue and starts doing the child work.
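Roughly, the discipline described above (run the child immediately, leave the continuation in the queue for thieves) can be sketched like this; a toy single-queue illustration of the idea, not real Cilk:

```python
from collections import deque

work = deque()                    # this worker's queue of continuations

def cilk_spawn(child, continuation):
    work.append(continuation)     # the continuation becomes stealable
    child()                       # the worker starts doing the child work itself

def idle_worker_steals():
    if work:
        stolen = work.popleft()   # a thief takes from the opposite end of the deque
        stolen()
```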
|
2022-05-28 17:33:41
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.17980125546455383, "perplexity": 5330.833318872125}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652663016949.77/warc/CC-MAIN-20220528154416-20220528184416-00458.warc.gz"}
|
https://unmethours.com/question/63389/pat-summary-table-output-variables/
|
Question-and-Answer Resource for the Building Energy Modeling Community
Get started with the Help page
# PAT Summary Table output variables
I am using PAT 3.1.0 to compare different alternatives, I run the workflow manually and I want to use the SI units.
In the reporting measures, I added the OpenStudio Results measure and changed the unit system to SI. But in the summary table report, units are in IP.
1. Where can i change the units system for the summary table?
2. Is it possible to personalize the variables that will be compared? For example, by default the summary table shows district cooling and heating savings, but I would like to have the cooling and heating load savings in general when modelling heating and cooling coils.
We had experimented a bit with making the PAT reports customizable, similar to reporting measures, but it wasn't something we could easily do. I do have an example of this, but it requires building PAT in a development environment, and we cannot support issues you hit.
There is an alternative. The OpenStudio Server creates a results.csv file with a row for every datapoint. There is a column for each measure argument value and for selected runner.registerValue objects. runner.registerValues can be added by any measure but are typically added by reporting measures to store processed results. The last step to get a runner.registerValue into the result is to add it in PAT as an output variable. There are some pre-defined ones you can select, but you can add custom ones as a string as long as they exist in the measure. You could then aggregate or compare datapoints in Excel or with another script.
OpenStudio Results already has hundreds of runner.registerValues and has an SI/IP switch. The easiest way to see what is available is to look at the out.osw of a prior simulation. Under step values you see a bunch of items like this. Add the ones you want in PAT as output variables. If the outputs you are looking for are not here, then you might have to write your own reporting measure, or use another reporting measure.
"name" : "end_use_electricity_interior_equipment",
"units" : "kWh",
value" : 732166.66666666
One last approach is a server finalization script that can run on OpenStudio Server. This doesn't generate reports but has access to all of the data that makes it into the results.csv file and can post-process results to compare datapoints or identify a best datapoint for some criteria. This takes some effort so is more useful for a process that has to be used many times. I'm not sure if we have an example published, but I can find one if someone is interested.
|
2022-01-20 09:13:05
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2310086339712143, "perplexity": 1374.3462602376276}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320301730.31/warc/CC-MAIN-20220120065949-20220120095949-00643.warc.gz"}
|
https://www.gradesaver.com/textbooks/math/algebra/algebra-2-1st-edition/chapter-11-data-analysis-and-statistics-extension-approximate-binomial-distributions-and-test-hypotheses-practice-page-765/10
|
## Algebra 2 (1st Edition)
Published by McDougal Littell
# Chapter 11 Data Analysis and Statistics - Extension - Approximate Binomial Distributions and Test Hypotheses - Practice - Page 765: 10
#### Answer
$0.2119$
#### Work Step by Step
$\overline{x}=np=0.04\cdot460=18.4$
$\sigma=\sqrt{np(1-p)}=\sqrt{460(0.04)(0.96)}\approx4.2$
Thus $P(x\leq15)\approx P(z\leq\frac{15-18.4}{4.2})\approx P(z\leq-0.8)=0.2119$
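A quick check of the approximation (my addition, not part of the step-by-step; assumes scipy is available):

```python
from math import sqrt
from scipy.stats import norm

n, p = 460, 0.04
mu, sigma = n * p, sqrt(n * p * (1 - p))  # 18.4 and ~4.2
print(norm.cdf((15 - mu) / sigma))        # ~0.21; rounding z to -0.8 gives the tabulated 0.2119
```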
|
2020-04-01 15:40:40
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8071810603141785, "perplexity": 3168.0986923923556}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585370505731.37/warc/CC-MAIN-20200401130837-20200401160837-00391.warc.gz"}
|
http://www.maa.org/programs/faculty-and-departments/classroom-capsules-and-notes/a-single-inequality-condition-for-the-existence-of-many-r-gons
|
# A Single Inequality Condition for the Existence of Many $r$-gons
by Murray S. Klamkin (University of Alberta) and Krzysztof Witczynski (University of Technology Warsaw)
Mathematics Magazine
December, 1990
Subject classification(s): Polygons | Plane Geometry | Geometry and Topology
Applicable Course(s): 4.9 Geometry | 4.1 Introduction to Proofs
Given $n$ positive numbers, where $n\geq 3$, the authors find a single inequality condition for every $r$ of them ($3 \leq r \leq n$) to be the lengths of the sides of an $r$-gon.
A pdf copy of the article can be viewed by clicking below. Since the copy is a faithful reproduction of the actual journal pages, the article may not begin at the top of the first page.
|
2016-09-27 04:17:09
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4531712830066681, "perplexity": 1450.1545357694852}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-40/segments/1474738660957.45/warc/CC-MAIN-20160924173740-00244-ip-10-143-35-109.ec2.internal.warc.gz"}
|
https://math.stackexchange.com/questions/441496/expression-for-the-maurer-cartan-form-of-a-matrix-group
|
# Expression for the Maurer-Cartan form of a matrix group
I understand the definition of the Maurer-Cartan form on a general Lie group $G$, defined as
$\theta_g = (L_{g^{-1}})_*:T_gG \rightarrow T_eG=\mathfrak{g}$.
What I don't understand is the expression
$\theta_g=g^{-1}dg$
when $G$ is a matrix group. In particular, I'm not sure how I'm supposed to interpret $dg$. It seemed to me that, in this concrete case, I should take a matrix $A\in T_gG$ and a curve $\sigma$ such that $\dot{\sigma}(0)=A$, and compute $\theta_g(A)=(\frac{d}{dt}g^{-1}\sigma(t))\big|_{t=0}=g^{-1}A$ since $g$ is constant. So it looks like $\theta_g$ is just plain old left matrix multiplication by $g^{-1}$. Is this correct? If so, how does it connect to the expression above?
This notation is akin to writing $d\vec x$ on $\mathbb R^n$. Think of $\vec x\colon\mathbb R^n\to\mathbb R^n$ as the identity map and so $d\vec x = \sum\limits_{j=1}^n \theta^j e_j$ is an expression for the identity map as a tensor of type $(1,1)$ [here $\theta^j$ are the dual basis to the basis $e_j$]. In the Lie group setting, one is thinking of $g\colon G\to G$ as the identity map, and $dg_a\colon T_aG\to T_aG$ is of course the identity. Since $(L_g)_* = L_g$ on matrices (as you observed), for $A\in T_aG$, $(g^{-1}dg)_a(A) = a^{-1}A = L_{a^{-1}*}dg_a(A)\in\frak g$.
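A concrete sanity check for the matrix case (my own example, not from the thread): along a curve in $SO(2)$, the finite-difference version of $g^{-1}dg$ lands in the Lie algebra of skew-symmetric matrices.

```python
import numpy as np

def R(t):  # a curve in SO(2)
    return np.array([[np.cos(t), -np.sin(t)],
                     [np.sin(t),  np.cos(t)]])

t, h = 0.3, 1e-6
dg = (R(t + h) - R(t)) / h         # tangent vector at g = R(t)
theta = np.linalg.inv(R(t)) @ dg   # g^{-1} dg
print(np.round(theta, 4))          # ~[[0, -1], [1, 0]]: skew-symmetric, i.e. in so(2)
```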
|
2019-07-16 16:06:45
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9732723236083984, "perplexity": 71.9352842898665}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195524679.39/warc/CC-MAIN-20190716160315-20190716182315-00332.warc.gz"}
|
https://socratic.org/questions/573155db11ef6b59c27ac1a8
|
# Question #ac1a8
Dec 29, 2016
This is not really a practical reaction...
#### Explanation:
$AgCl(s) + KBr(s) \rightarrow AgBr(s) + KCl(aq)$
$\text{Moles of silver chloride} = \frac{12.0\ g}{143.32\ g\cdot mol^{-1}} = 0.0837\ mol.$
$\text{Moles of potassium bromide} = \frac{13.0\ g}{119.0\ g\cdot mol^{-1}} = 0.109\ mol.$
Clearly, there is sufficient bromide anion to effect metathesis, and should the reaction go to completion, then $0.0837\ mol$ of $AgBr$ would eventually precipitate, which constitutes a mass of $0.0837\ mol \times 187.77\ g\cdot mol^{-1} = 15.72\ g$. I think you can calculate the mass of the excess potassium bromide.
Had this reaction been performed, you would see the white precipitate of $AgCl$ change to the cream-coloured $AgBr$. While both silver halides are quite insoluble, $AgBr$ is more insoluble than $AgCl$, and this would drive the reaction to the right as written. The problem with this reaction is that both silver salts are photo-active, and would reduce to give metallic silver as a dark precipitate.
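In code form, the mole bookkeeping above (my addition; molar masses in g/mol as given):

```python
m_AgCl, M_AgCl = 12.0, 143.32
m_KBr, M_KBr = 13.0, 119.0
M_AgBr = 187.77

n_AgCl = m_AgCl / M_AgCl          # 0.0837 mol (limiting reagent)
n_KBr = m_KBr / M_KBr             # 0.109 mol
print(n_AgCl * M_AgBr)            # ~15.72 g of AgBr formed
print((n_KBr - n_AgCl) * M_KBr)   # ~3.03 g of KBr left over (the exercise left to the reader)
```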
|
2019-11-12 04:25:34
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 14, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6791963577270508, "perplexity": 1835.911095800364}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496664567.4/warc/CC-MAIN-20191112024224-20191112052224-00240.warc.gz"}
|
http://www.gamedev.net/index.php?app=forums&module=extras§ion=postHistory&pid=4916559
|
### #ActualSimonForsman
Posted 25 February 2012 - 02:09 PM
thx a lot. If you would also know how to do it on Ubuntu, would you send me an example/link as well?
For Linux/Unix you normally do a fork(); (to duplicate the current process) and then exec*(a few different functions to choose from) to replace the newly forked process with the one you specify.
#include <unistd.h>    // fork(), execl()
#include <sys/types.h> // pid_t
#include <iostream>    // std::cerr
#include <cstdlib>     // exit()

pid_t pID = fork();
if (pID == 0) {
    // this is the child process created by fork, so we execute here
    // execl never returns unless there is an error; it overwrites the calling process
    execl("./MyGraphicEngine", "./MyGraphicEngine", (char*)0);
} else if (pID < 0) {
    // fork failed
    std::cerr << "Error message about the failed fork" << std::endl;
    exit(1);
} else {
    // code that we only execute in the parent process; fork again to launch another process, for example
    pid_t pID2 = fork();
    if (pID2 == 0) {
        execl("./MyGraphicEngine", "./MyGraphicEngine", (char*)0);
    } else if (pID2 < 0) {
        std::cerr << "Error message about the failed fork" << std::endl;
    }
}
|
2013-12-12 07:00:21
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3054817020893097, "perplexity": 8400.386444459486}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-48/segments/1386164566315/warc/CC-MAIN-20131204134246-00021-ip-10-33-133-15.ec2.internal.warc.gz"}
|
https://www.physicsforums.com/threads/sum-of-a-finite-exponential-series.549159/
|
Sum of a finite exponential series
Homework Statement
Given is $\sum_{n=-N}^{N}e^{-j \omega n} = e^{-j\omega N} \frac{1-e^{-j \omega (2N+1)}}{1 - e^{-j\omega}}$. I do not see how you can rewrite it like that.
Homework Equations
Sum of a finite geometric series: $\sum_{n=0}^{N}r^n=\frac{1-r^{N+1}}{1-r}$
The Attempt at a Solution
Or is the above result based on this more general equation: $\sum_{n=0}^{N}ar^n=a\frac{1-r^{N+1}}{1-r}$? Although I think the equation in (2) is just this equation for a=1, right?
So, I know how to get to the 2nd term in (1), i.e., $\frac{1-e^{-j \omega (2N+1)}}{1 - e^{-j\omega}}$, but I have no idea why it is multiplied by the term $e^{-j\omega N}$.
danago
Gold Member
Did you notice that the sum you are trying to compute actually starts from n=-N and not n=0? I think you can get the answer you want by making a change of variable and then using the geometric series equation you have identified.
Did you notice that the sum you are trying to compute actually starts from n=-N and not n=0? I think you can get the answer you want by making a change of variable and then using the geometric series equation you have identified.
Yes, I've noticed that it starts there. That's why I thought it could be rewritten as $\frac{1-e^{-j\omega(2N+1)}}{1-e^{-j\omega}}$, but the solution states that this fraction is multiplied by $e^{-j\omega N}$.
danago
Gold Member
Are you sure that the exponential term in front of the fraction does have a negative sign? I just tried doing the working and ended up with a positive sign, i.e.:
$$\sum^{N}_{n=-N} e^{-j\omega n} = e^{j\omega N} \frac{1-e^{-j\omega(2N+1)}}{1-e^{-j\omega}}$$
I did it by making the substitution $\phi=n+N$. I will check my working again.
EDIT: I have checked over my working and have convinced myself that the negative should not be there. It is late here so i could easily have made a mistake though :tongue:
Are you sure that the exponential term in front of the fraction does have a negative sign? I just tried doing the working and ended up with a positive sign, i.e.:
$$\sum^{N}_{n=-N} e^{-j\omega n} = e^{j\omega N} \frac{1-e^{-j\omega(2N+1)}}{1-e^{-j\omega}}$$
I did it by making the substitution $\phi=n+N$. I will check my working again.
EDIT: I have checked over my working and have convinced myself that the negative should not be there. It is late here so i could easily have made a mistake though :tongue:
Okay, thank you. For me, it is not about the sign in the exponent. I do not see why we have to multiply by the term in front of the fraction. But I think I rewrote the equation in the wrong way. Can you give me your steps?
danago
Gold Member
You have transformed the upper and lower limits of the sum, however you have not applied the same transformation to the variable n in the summand.
If $\phi=n+N$, then the new limits of the sum will be $\phi=0$ and $\phi=2N$. You must then also replace the 'n' in the summand with $n=\phi-N$. If you do this then you will get the right answer.
EDIT:
The transformed sum will be:
$$\sum^{2N}_{\phi=0} e^{-j\omega (\phi-N)} = e^{j\omega N} \frac{1-e^{-j\omega(2N+1)}}{1-e^{-j\omega}}$$
danago
Gold Member
Maybe it will be easier to understand if we look at why what you did isn't quite correct.
$$\sum^{N}_{n=-N} e^{n} = e^{-N}+e^{-N+1}+...+1+e^1+...+e^{N-1}+e^N$$
$$\sum^{2N}_{n=0} e^{n} = 1+e^{1}+...+e^{2N-1}+e^{2N}$$
See how they are not the same?
Ah, I see the problem now. Thanks!
danago
Gold Member
Ah, I see the problem now. Thanks!
No problem!
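For anyone who wants to sanity-check the sign numerically, a minimal sketch (my own, with arbitrary values $\omega = 0.7$, $N = 5$):

```python
import numpy as np

omega, N = 0.7, 5
n = np.arange(-N, N + 1)
lhs = np.sum(np.exp(-1j * omega * n))
rhs = np.exp(1j * omega * N) * (1 - np.exp(-1j * omega * (2 * N + 1))) / (1 - np.exp(-1j * omega))
print(np.allclose(lhs, rhs))  # True: the prefactor carries a positive exponent, e^{+j omega N}
```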
|
2021-03-08 14:16:10
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9030249714851379, "perplexity": 191.8580817360257}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178375439.77/warc/CC-MAIN-20210308112849-20210308142849-00467.warc.gz"}
|
http://math.stackexchange.com/questions/253496/does-the-stationary-distribution-of-this-markov-chain-exist
|
Does the stationary distribution of this Markov Chain exist?
To find the stationary distribution of a Markov Chain, I believe I must solve for $\vec{s} = \langle s_0, s_1 \rangle$ in $\vec{s} = \vec{s}Q$, where $Q$ is the transition matrix.
$Q$, in my case, is
$$\left( \begin{array}{cc} p & 1-p \\ 1-q & q \end{array} \right)$$
where $Q_{ij}$ is the probability of moving from state $i$ to state $j$ (row $i$, column $j$). When I solve for $s_0$ and $s_1$, however, I get
$$s_0 = s_0 p + s_1 (1-q) \\ s_1 = s_0 (1 - p) + s_1 q$$
Subsequently,
$$s_0 (1 - p) = s_1 (1 - q) \\ s_1 (1 - q) = s_0 (1 - p)$$
These two equations look identical. Does that mean there are an infinite number of stationary distributions for this Markov chain?
Thanks for helping a Markov Chain newb :)
no, it does not. Remember that you have an additional condition $s_0+s_1=1$. – Artem Dec 8 '12 at 1:35
Ah, thank you. Why does that have to be a condition? – David Faux Dec 8 '12 at 1:38
Because these are probabilities which have to sum to one. Or I don't understand your question? – Artem Dec 8 '12 at 1:38
Ohhh... wait, why must the probabilities sum to 1? – David Faux Dec 8 '12 at 1:41
Well, think what it would mean if probabilities added up to $3$ --- or to $-7$. – Gerry Myerson Dec 8 '12 at 6:00
As in the comments, the condition you are missing is that the probabilities must sum to 1 ($s_0+s_1=1$).
It's a property of the $Q$ matrix. The $Q$ matrix has rank 1 less than its dimension (here rank 1) because the final column is redundant. If it were left blank you could 'fill in the gaps': $$\begin{pmatrix} p & *\\ 1-q & ** \end{pmatrix}.$$ You know that each row is a probability vector because it describes what can potentially happen. When in state $0$ the chain can remain in state $0$ or move to state $1$. As these are the only things that can happen the probabilities $p+*=1$ so $*=1-p$. Similarly $**=q$.
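To make this concrete (an illustrative example, not from the question): combining $s_0(1-p)=s_1(1-q)$ with $s_0+s_1=1$ gives $s_0=\frac{1-q}{2-p-q}$ and $s_1=\frac{1-p}{2-p-q}$, which a quick numerical check confirms:

```python
import numpy as np

p, q = 0.7, 0.4
Q = np.array([[p, 1 - p], [1 - q, q]])
s = np.array([(1 - q) / (2 - p - q), (1 - p) / (2 - p - q)])
print(s)       # [0.6667 0.3333]
print(s @ Q)   # equals s, so s is the stationary distribution
```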
|
2015-08-02 10:24:35
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9406507015228271, "perplexity": 224.65084552098776}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-32/segments/1438042989042.37/warc/CC-MAIN-20150728002309-00012-ip-10-236-191-2.ec2.internal.warc.gz"}
|
https://coreform.com/manuals/latest/cubit_reference/uspline-auto-creasing.html
|
2021.11
#### 6.3Automatic creasing of U-splines
Preliminary definitions:
• An extraordinary vertex on a 2D mesh is any interior vertex adjacent to three edges, or more than four edges, or any boundary vertex adjacent to more than three edges. Similarly in 3D, an extraordinary edge is any edge adjacent to three faces, or more than four faces, or any boundary edge adjacent to more than three faces.
• A regular vertex is any interior vertex adjacent to exactly four edges, or any boundary vertex adjacent to exactly two or exactly three edges. There is a similar definition for edges in 3D.
• A continuity transition on a 2D mesh occurs at any regular vertex where the two edges on opposite sides of the vertex are assigned different continuities. A continuity transition on a 3D mesh occurs at any regular edge where the two faces on opposite sides of the edge are assigned different continuities.
The build uspline command allows users to specify the base degree and continuity to be uniformly applied to the input mesh in the construction of the U-spline.
However, because mesh continuity must conform to certain conditions in order for the mesh to be admissible, the user-prescribed continuity is subject to automatic, mandatory adjustments on some interfaces. These adjustments are made when necessary to ensure mesh admissibility and the local linear independence of the resulting U-spline basis functions.
In particular, the U-spline algorithm will automatically perform the following operations to ensure a U-spline mesh is admissible.
1. Creasing of extraordinary vertices
2. Continuity grading near creased vertices
3. Maintaining distance between perpendicular continuity transitions
Each of these is explained in detail below. Descriptions are given for the 2D case, but equivalent operations apply in 3D as well.
##### 6.3.1Creasing of extraordinary vertices
In order for a U-spline mesh to be admissible, all edges directly adjacent to an extraordinary vertex must be creased—that is, their interface must be set to $C^{0}$ continuity. An algorithm in Coreform Cubit performs additional creasing to ensure all edges directly adjacent to an extraordinary vertex are set to $C^{0}$ .
##### 6.3.2Continuity grading near creased vertices
Continuity transitions in the mesh must conform to certain conditions in order for the mesh to be admissible. One of these conditions applies to creased vertices.
A creased vertex is any vertex (whether an extraordinary vertex or a regular vertex) such that all edges immediately adjacent are creased to $C^{0}$ .
CONDITION: On a mesh with a maximum continuity of $C^{ p - 1}$, an edge n bays away from a creased vertex must be assigned a continuity less than or equal to $C^{n}$, on the line of edges emanating radially outward from the creased vertex.
If any of the edges in the neighborhood of a creased vertex are assigned a continuity that violates this condition, Coreform Cubit’s creasing algorithm will adjust the continuity of these edges as needed to enforce compliance.
Two examples of meshes with continuity grading near creased vertices are seen in figure 471 .
Figure 471: Two admissible meshes with creased vertices. The continuities of the edges on the lines emanating from the creased vertices are graded such that an edge n bays away from the creased vertex is assigned a continuity less than or equal to $C^{n}$.
##### 6.3.3Maintaining distance between perpendicular continuity transitions
Continuity transitions along perpendicular edge lines are required to maintain a sufficient distance from each other, as dictated by the degree and continuity on the mesh.
This distance is measured by drawing a line called a ray from the continuity transition in the direction of lower continuity. The length of a ray is determined by the degree and continuity on the mesh near the ray. On a mesh with a maximum continuity of $C^{ p - 1 }$, the length of the ray will not exceed the prescribed degree p on the mesh (but may be shorter if a creased edge is encountered).
The rays emanating from two perpendicular continuity transitions may not meet or intersect except when the intersection is tail-to-tail or head-to-tail. An algorithm in Coreform Cubit will analyze the input mesh and detect any pair of continuity transitions that have intersecting rays. The algorithm will then crease edges near these continuity transitions to shift the transition locations away from each other and resolve the issue.
An example of automatic creasing to avoid crossing continuity transition rays is seen in figure 472 .
Figure 472: The mesh on the left is not admissible because two perpendicular continuity transitions are close enough that their rays intersect (in this case, they meet head-to-head which is disallowed). On the right, two additional edges are creased in the mesh so that the rays now meet tail-to-tail, forming an admissible mesh configuration.
##### 6.3.4Global creasing options
The command for building a U-spline in Coreform Cubit includes the option creasing, which can take values of minimal or full. The default value is minimal.
When the creasing option is set to full, the U-spline will be creased to remove all continuity transitions. If, after the additional edges specified by the user and the edges adjacent to extraordinary vertices are creased, there are still continuity transitions in the mesh, an algorithm in Coreform Cubit will perform additional creasing to ensure each line of edges is assigned the same continuity, thus removing all continuity transitions from the mesh.
Because all continuity transitions will be removed, the issue of continuity grading near creased vertices and intersecting continuity transition rays is avoided, albeit at the cost of possibly creasing a much larger set of edges on the mesh.
An example of a mesh with all continuity transitions removed is seen in figure 473.
Figure 473: The mesh on the left includes two user-specified edges which were creased to $C^{0}$. If, however, the flag creasing is set to full, additional creasing will automatically be applied to remove all continuity transitions, as seen on the right.
##### 6.3.5Automatic minimal creasing example on a Cubit mesh
To see automatic creasing at work on a relatable example, observe the mesh shown on the left in figure 474. All cells in this mesh are quadratic p = 2, and all edges were initially set to a continuity of $C^{1}$.
On the right, the thicker black lines indicate the default minimal creasing that will be automatically performed around the mesh’s four extraordinary vertices in order to render it admissible.
Figure 474: An example Cubit mesh. The cells are quadratic (p = 2), and all edges are initially set to $C^{1}$ continuity. All extraordinary vertices are always automatically creased to $C^{0}$.
##### 6.3.6Automatic full creasing on a Cubit mesh
If the option creasing is set to full, then continuity transitions are disallowed in the mesh, requiring all edge-lines to have the same continuity.
Figure 475 depicts the further automatic creasing performed when creasing is changed from the default minimal to full. This option creases not only the twelve edges that touch an extraordinary vertex, but also every edge-line that adjoins a creased edge.
Figure 475: If the option creasing is set to full, we see that all edge-lines originating at the extraordinary vertices will be automatically creased to remove all continuity transitions from the mesh.
##### 6.3.7Automatic full creasing with user-specified creased edges
The creasing full procedure described above applies not only to the default creasing required for admissibility, but also to any additional, user-specified creasing. That is, if a user causes additional edges to be creased, the creasing full option will automatically crease all edge-lines adjoining these user-creased edges as well.
The results of full creasing are shown in figure 476. The meshes on the left each feature one user-specified creased edge beyond the minimal default required around extraodinary points.
The images on the right show the subsequent, additional creasing automatically performed when the creasing full option is selected and the creasing of all edge-lines adjoining creased edges is enforced.
Figure 476: Examples of automatic creasing when the option creasing is set to full in cases where extra edges were initially creased by the user. On the left are the meshes prior to enforcing the creasing option, and on the right are the meshes after the extra edges are automatically creased.
##### 6.3.8Automatic minimal creasing with user-specified creased edges
Finally, figure 477 shows a case where user-specified creasing results in additional automatic creasing when the creasing option is set to minimal.
In the center image, the circled vertices show where the user-creased edges have resulted in perpendicular continuity transition rays that intersect in a way that is not admissible, as explained above in Maintaining distance between perpendicular continuity transitions. The image on the right shows one possible way the minimal automatic creasing algorithm may resolve this problem.
Figure 477: An example of automatic creasing when the option creasing is set to minimal and the user has specified additional edges to be creased in a way that resulted in an inadmissible configuration. On the left we see the mesh after the extraordinary vertices are creased and the extra edges specified by the user are creased. In the middle, the vertices where admissibility violations are occurring (due to intersecting continuity transition rays) are circled. On the right we see one possible resolution to the issue, resulting in an admissible mesh.
##### 6.3.9.1build uspline crease group
A tolerance is available to specify the norm of the difference between the normal vectors on either side of a curve, used to determine whether the curve lies on a kinked surface.
[build] uspline crease group [tolerance <real>tol]
Remark: If the tolerance keyword is not supplied, a default value of $10^{-3}$ is used.
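For example, a hypothetical invocation that overrides the default tolerance might look like the following (the value shown is purely illustrative):
build uspline crease group tolerance 0.0001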
|
2022-01-25 22:39:54
|
{"extraction_info": {"found_math": true, "script_math_tex": 11, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6302093267440796, "perplexity": 1580.3943965084673}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320304876.16/warc/CC-MAIN-20220125220353-20220126010353-00413.warc.gz"}
|
https://math.stackexchange.com/questions/3029940/show-that-three-point-g-h-g-1-are-collinear
|
# Show that three point $G,H,G_1$ are collinear.
Triangle $$ABC$$ has centroid $$G$$ and orthocenter $$H$$. The line through $$A$$ perpendicular to $$GA$$, the line through $$B$$ perpendicular to $$GB$$, and the line through $$C$$ perpendicular to $$GC$$ intersect pairwise at three points which form a new triangle $$A_1B_1C_1$$. This new triangle has centroid $$G_1$$. Show that the three points $$G,H,G_1$$ are collinear.
I have tried to solve this problem with lots of theorems, but I can't find the way. Or is there a lemma to use? Help me to find and draw any auxiliary geometric elements.
• There is no reason why $G=H$, let alone that $G,H,G_1$ concur. Do you mean $G,H,G_1$ collinear instead? – user10354138 Dec 7 '18 at 14:38
• Oh, I'm sorry. It is "collinear". – Trong Tuan Dec 7 '18 at 15:02
• Recall the Euler line... – user10354138 Dec 7 '18 at 15:24
Taking @user10354138's comment, here's how we attack the problem: We will show that the midpoint $$O$$ of $$GG_1$$ is the circumcenter of $$\triangle ABC$$. In particular, we can actually show that $$G$$ is the midpoint of $$HG_1$$.
In the picture above, $$A_2,B_2,C_2$$ are midpoints of $$GA_1,GB_1,GC_1$$ respectively. Then $$C_2$$ is on the perpendicular bisector of $$AB$$. So it is sufficient to show that $$C_2O\perp AB$$, or $$C_1G_1\perp AB$$.
Now, if we consider a triangle $$XYZ$$ with sides (parallel to) the medians of $$\triangle ABC$$, then
1. The sides of $$\triangle XYZ$$ and $$\triangle A_1B_1C_1$$ are pairwise perpendicular.
2. The medians of $$\triangle XYZ$$ are parallel to the sides of $$\triangle ABC$$.
(The existence/construction of $$\triangle XYZ$$ and the proofs of the above statements are classical and left to you.)
From (1), it follows that $$\triangle XYZ$$ and $$\triangle A_1B_1C_1$$ are similar and their medians are pairwise perpendicular. This and (2) yield that $$C_1G_1\perp AB$$ and so on, which is what we are looking for.
• "From (1), it follows that △XYZ and △A1B1C1 are similar and their medians are pairwise perpendicular. This and (2) yield that $C_1G_1$⊥AB". Can you make it more clear? I still hav not understood. Thank you – Trong Tuan Dec 9 '18 at 13:44
|
2019-05-26 18:55:48
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 33, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7630909085273743, "perplexity": 367.11786895334455}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-22/segments/1558232259452.84/warc/CC-MAIN-20190526185417-20190526211417-00294.warc.gz"}
|
https://habr.com/en/company/mailru/blog/464515/
|
# How to Make Emails and Not Mess Up: Practical Tips
• Tutorial
A developer who first encounters generating emails has almost no chance of writing an application that does it correctly. Around 40% of the emails generated by corporate applications violate some standard, and as a result there are problems with delivery and display. There are reasons for this: email is technically more difficult than the web, operating email is regulated by a few hundred standards and an uncountable number of generally accepted (and less accepted) practices, and email clients are more varied and unpredictable than browsers. Testing can significantly improve the situation, but materials dedicated to testing email systems are practically non-existent.
Mail.ru regularly interacts with its users by email. In our projects, all the components responsible for generating emails, and even individual mailings, are subject to mandatory testing. In this article, we will share our experience (learning from our mistakes).
## What kind of emails are there?
The application can generate various types of emails, which can be classified into several categories. By the method of selecting recipients: personal (triggered), selective, or group. By purpose: transactional, marketing, or service. You can set different requirements for different types of email and apply different testing scenarios.
Triggered personal emails are generated in response to events, for example, user actions or status changes of system objects. They are generated by the application, and are therefore the most interesting in regards to testing. Triggered emails can be transactional, marketing, or service emails. Selective emails are sent to a dynamic selection of users that meet some criteria. Group emails are sent to a permanent group of recipients, for example, all users or partners. Selective and group emails are often for marketing use, and sending such emails is started manually or on a schedule.
Transactional emails are generated in the process of a user completing some form of action. Such emails include, for example, invoices, tickets, or delivery status notifications. Transactional emails are always triggered and are meant to carry important information. They should be as simple and compatible as possible, and testing them should be done on a large number of mail clients.
Marketing emails encourage the user to take an action; for example, this can be an offer of a personal discount based on previous purchases. Transactional data can be utilized in these emails, and they can be triggered or mass emails, periodic or one-time. For these emails, efficiency is more important, and the results of split-testing usually determine it. Some aspects of compatibility can be sacrificed for efficiency.
Group marketing emails, for example, messages about seasonal offers, promotions, and sales, are often sent ‘manually’, and are not part of your application, but you can (and should) also apply general testing principles to them.
Also, there may be service emails: notifications generated for the staff, for automated CRM systems, journaling, auditing, or DWH. Such emails are usually triggered emails, meaning that they are also part of the application, and must be tested.
## Who is involved in the testing and control process?
1. QA engineer – participates at all stages of the process.
2. Network engineer – responsible for configuring the network and message delivery infrastructure. The network engineer should be involved in planning and in infrastructure testing.
3. Delivery specialist – monitors the deliverability of emails, the technical and administrative parameters of everything sent, and the progress of mailings. He is responsible for ensuring that sent emails reach the highest possible percentage of users and do not end up in spam, which requires specific knowledge and contacts. If there are problems with delivery, he is the one who must identify the probable cause and eliminate it: by removing technical obstacles, by changing something in the content of the emails, or by working with the support service of the mail provider that the emails do not reach. Such a specialist (if there is one) should also be involved in drawing up the checklist and in testing the infrastructure and the generating application. The testing process itself, however, should remain under the control of the QA service.
4. Email marketer – determines the effectiveness of marketing newsletters. He controls split testing of mailings against the audience, the segmentation of the user base, the composition and frequency of the emails sent, and the visual presentation of the email to the user.
None of these roles necessarily requires a dedicated employee: the role of the marketer can be performed by a product manager, and the role of the delivery specialist by a support employee or a network engineer. In a start-up, it is quite likely that one person will have to deal with all of this, and that person may well be a quality specialist.
## Mailing and mail transport
The structure of email is like a massive iceberg with two levels. More than a hundred different standards govern email, but almost all of them belong to one of these two levels:
The underwater part of the iceberg is the network service, whose basic protocol is the SMTP application protocol defined by RFC 5321. It is responsible for the delivery of emails. A so-called SMTP envelope is formed for delivery, containing the sender and recipient addresses at the SMTP level. Other network services, such as DNS, also take part in delivering email. The main component of the network infrastructure is the Mail Transfer Agent (MTA), which handles the message delivery queue and the delivery process itself to the recipient servers. MTA examples include Postfix, Sendmail, Exim, and the Microsoft SMTP service.
This underwater part of the iceberg (the MTA, DNS settings, and other network parameters) is what we will call the email infrastructure, or the message delivery infrastructure.
The tip of the iceberg is the email itself. Its basic structure is defined by RFC 5322. An email consists of service headers and one or more data parts. The data may be plain text and/or HTML (or even AMP), with inline images or attachments of almost any type.
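To make the two levels concrete, here is a minimal Python sketch (the relay host and all addresses are hypothetical) showing that the SMTP envelope is passed to the transport separately from the RFC 5322 headers the user sees:

```python
import smtplib
from email.message import EmailMessage

msg = EmailMessage()
msg["From"] = "Example Shop <noreply@example.com>"  # RFC 5322 header: shown to the user
msg["To"] = "Jane Doe <jane@example.org>"
msg["Subject"] = "Your order has shipped"
msg.set_content("Order #12345 is on its way.")

with smtplib.SMTP("mta.example.com", 587) as smtp:  # hypothetical relay
    smtp.starttls()
    # RFC 5321 envelope: travels outside the message itself. Bounces go to the
    # envelope sender; delivery follows the envelope recipient.
    smtp.sendmail("bounces@example.com", ["jane@example.org"], msg.as_string())
```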
## The interface of the email infrastructure and the boundaries of the tested application
The email infrastructure, as a rule, has one or several interfaces through which an email is submitted (i.e., enters the MTA delivery queue): the SMTP Submission service, the mail() function in PHP, handing data to an external mail or sendmail program, or the API of an internal or external service (such as GetResponse, SendPulse, or Amazon SES). We will consider these interfaces part of the infrastructure. It often happens that application A prepares the data for an email and a list of recipients and passes them to application B through its API (for mass marketing mailings this can even be done manually via the user interface, UI), and application B generates the mail message per RFC 5322 and hands it to the MTA. For application A (and when testing it), application B is part of the email infrastructure, and application B's API or UI is application A's interface to the mail infrastructure. For application B the situation is different: its mail infrastructure consists of the lower-level network applications and protocols.
## Definition of test parameters
When testing each application, it is important to identify all the mail infrastructures it uses (there may be several) and, for each infrastructure, to single out the interfaces used (there may also be several per infrastructure). For each interface, the composition and format of the data passed to it is determined as precisely as possible, e.g. email text in TEXT/HTML, email text in TEXT/PLAIN, the subject, the recipient name, the recipient address, the sender name, the sender address (RFC5322.From), and the sender address of the SMTP envelope (RFC5321.MailFrom). Next, a set of requirements is developed for each parameter (representations, encodings, boundary values, etc.), and methods for checking each parameter are determined (how to compare the actual result with the expected one).
## Typical structure of a generating application
As a rule, the same product that we are testing is responsible for generating the emails and the data in them. This is usually a server-side (but sometimes client-side) application. It defines the structure of the email, some of the service headers, the data encapsulation formats, string representations, and text encodings. A simple example of such an application is a script that forms an email and calls the mail() function. The main elements of the application that must be controlled are:
• the code responsible for generating the headers and/or the email structure (if the structure is generated dynamically), and/or the static email templates describing that structure;
• the HTML layout of the email (ideally a separate entity or part of the email template/layout, but it can also be embedded in the application code);
• the substitution of application data into the email (or into the email template);
• the integration of the application with the email delivery infrastructure, and the correctness of the parameters passed to the infrastructure interface.
## What and when to test
Whether we like it or not, the entire iceberg should be tested. There are several main components that require testing:
#### Delivery infrastructure
Testing should emphasize: email deliverability; correct DNS records, including PTR/FCrDNS, MX, and A records; SMTP protocol parameters (HELO, use of TLS); email authentication (SPF/DKIM/DMARC); SMTP envelope addresses (if the application does not manage them); correct processing of the input parameters of the infrastructure interface; and the tracking, recording, and processing of undelivered emails.
The infrastructure must be tested during the initial implementation and every time changes are made to the infrastructure itself (MTA configuration, DNS, or network changes) or to the interface for sending emails; whenever a new domain, network, or API is used; and additionally whenever the characteristics of the emails sent, such as their language, size, or volume, change significantly. Experience shows that infrastructure tends to change 'by itself', so basic tests should be run periodically even if there is no information about any changes.
The network engineer and the deliverability specialist can and should be involved in drawing up the plan and checklists for infrastructure testing.
#### Generating Application
The following should be monitored: the SMTP envelope addresses (if the application controls them, i.e. they are passed to the interface as envelope-from and envelope-to); the values of the service headers of the email (Date, Message-ID, List-Unsubscribe, Auto-Submitted, etc.); email authentication (DKIM/DMARC); MIME encodings (base64, quoted-printable); the general correctness of the email format, for example the absence of non-ASCII characters in the headers; the composition of the injected data; the correct firing of triggers; unsubscribe mechanisms; and the tracking mechanisms used for writing and collecting statistics (postmaster headers such as Feedback-ID or X-Mailru-Msgtype, as well as tracking pixels).
The application must be tested during its development, whenever the related components responsible for generating and storing data change, upon significant changes to the email templates, when the infrastructure or the interface to it changes, and as part of general regression testing.
#### Structure and layout of email templates (may be part of the generating application or developed separately)
What is checked: the structure of the email (Content-Type, Content-Disposition, nesting of the multipart parts, text encodings, line parameters); the values of the address and display headers (From, To, Reply-To, Subject); how the email is displayed in the message list and when read in various interfaces; microformats (for example, that a calendar event is recognized as a calendar event, or an air ticket as an air ticket); and branding.
Email templates should be tested after even the slightest change, and also on their own, for example when emails land in the application before the server side is ready.
It is recommended to involve an email marketing specialist and a deliverability specialist in compiling the checklist for testing an email template.
#### Basic requirements for checking infrastructure
The IP address selected for the mail server should be checked, for example with the whois utility. In particular, the sender's address must not belong to a network that could be perceived as dynamic; the selected network must have active contacts to which complaints can be sent; and the network must be in use (have the status ASSIGNED in RIPE). The IP address must have a properly configured PTR record, which can be set up independently through the hosting control panel or with the help of the service provider. The PTR record must point to a real, meaningful hostname that resolves back to the same IP address (the so-called FCrDNS check); it must not resemble a dynamic host name and must not contain long runs of digits or characters. A good example is mailserver.example.com.
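As an illustration, the FCrDNS check can be scripted with nothing but the Python standard library (the IP address below is just an example):

```python
import socket

def fcrdns_ok(ip: str) -> bool:
    """The PTR name of `ip` must resolve back to the same address."""
    try:
        ptr_name, _, _ = socket.gethostbyaddr(ip)            # reverse lookup (PTR)
        _, _, addresses = socket.gethostbyname_ex(ptr_name)  # forward lookup (A)
    except OSError:  # no PTR record, or the PTR name does not resolve
        return False
    return ip in addresses

print(fcrdns_ok("8.8.8.8"))  # dns.google points back at 8.8.8.8 -> True
```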
Each domain used in envelope addresses or email headers must have a valid MX record pointing to a host with an A record which, at a minimum, can handle undeliverable messages. An MX record must not refer directly to an IP address.
Check that SPF, DKIM, and DMARC pass. SPF lets the domain owner list, in a TXT record, the servers that are authorized to send emails with return addresses in that domain; it is configured for the address used in envelope-from (the SMTP envelope) in the DNS zone of the domain. DKIM ties the authorship of a message to a specific domain using a digital signature, which is added to the email itself (in its DKIM-Signature header); a DKIM signature is typically configured at the MTA (infrastructure) level. DMARC sets the policy for checking mail arriving from a specific domain and the actions to take on emails that fail SPF or DKIM authentication; when the policy is violated, a structured report is sent with information about the attempt. DMARC, like SPF, is published as a TXT record in the domain zone.
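A quick presence check for the SPF and DMARC TXT records might look like the following sketch (it uses the third-party dnspython package, version 2 or later; example.com stands in for your sending domain):

```python
import dns.resolver  # pip install dnspython

def txt_records(name: str) -> list[str]:
    try:
        answers = dns.resolver.resolve(name, "TXT")
    except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
        return []
    return [b"".join(rdata.strings).decode() for rdata in answers]

domain = "example.com"  # hypothetical sending domain
spf = [t for t in txt_records(domain) if t.startswith("v=spf1")]
dmarc = [t for t in txt_records("_dmarc." + domain) if t.startswith("v=DMARC1")]
print("SPF:  ", spf or "MISSING")
print("DMARC:", dmarc or "MISSING")
```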
Check deliverability to the main mail services (for Russia: Mail.Ru, Yandex, Gmail, Microsoft Hotmail/Outlook.com/Office365, Rambler, nic.ru). In the emails that arrive, check the correctness of HELO; the presence and passing of the PTR/FCrDNS, SPF, DKIM, and DMARC checks; and the validity of the headers and the data in them, in particular clock synchronization in the dates and the correctness of time zones.
(Registration email with broken authentication due to a freemail address being used)
Some properties, such as authentication, deliverability, and spam placement, are influenced by all the components together, but separate operational tools usually exist for controlling them: DMARC and FBL reports, postmaster service APIs, email tracking tools, and delivery statistics. Testing should take into account how far these operational monitoring tools are implemented in the company. For example, in the absence of operational control over DMARC reports, email authentication should be tested regularly; in the absence of operational control over deliverability, where and how the emails land should be tested regularly, even when there is no development related to sending emails.
To test the infrastructure, you can use specialized services, for example, mail-tester.com, mxtoolbox.com. Detailed infrastructure requirements can be found in this article.
(an example of the broken SPF record)
## Authentication requirements
Whether SPF, DKIM, and DMARC pass can usually be checked via the Authentication-Results header on the recipient's server.
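For example, after delivering a test message to a mailbox you control and saving it as a file, the verdicts can be pulled out with the standard library (the file name is hypothetical):

```python
from email import message_from_binary_file
from email.policy import default

with open("delivered.eml", "rb") as f:  # message saved from the test mailbox
    msg = message_from_binary_file(f, policy=default)

for value in msg.get_all("Authentication-Results", []):
    print(value)  # look for spf=pass, dkim=pass, dmarc=pass
```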
Check the SPF record for syntactic validity and for the DNS-query limit (for example, using mxtoolbox.com). When forming the SPF record, take all mailing sources into account (do not forget CRM systems and every delivery infrastructure currently in use, including those used for one-off marketing campaigns). It is recommended to specify the allowed servers for the domain as a list of networks ('ip4'/'ip6'). SPF is checked against the sender address from the SMTP envelope. Verify that the SMTP envelope domain (envelope-from) matches the domain of the From header. Common SPF errors are listed in this article (https://medium.com/hackernoon/myths-and-legends-of-spf-d17919a9e817).
Check the DKIM DNS record and the validity and composition of the DKIM-Signature. Verify that the DKIM key is at least 1024 bits. The recommended canonicalization of the DKIM signature is relaxed/relaxed. Make sure that all important headers are signed (From, To, Subject, Date, Message-ID, MIME-Version, Content-Type), that transport headers (Received, Delivered-To, Return-Path) are not signed, and that DKIM validates at the major mail services. Set up forwarding from one mail service to another: DKIM must not break on forwarded emails. Verify that the DKIM signature domain matches the sender domain in the From header.
Check DMARC at the major mail services. Collect DMARC reports, and identify and fix SPF and DKIM problems for all the IP addresses of your infrastructure.
Verify that messages are delivered to external servers using encryption (TLS). TLS can sometimes be confirmed from the Received header on the recipient's server: for example, by the ESMTPS protocol keyword or by parameters of the form (version=TLS1_2 cipher=ECDHE-RSA-AES128-GCM-SHA256 bits=128/128).
## Verification of the generating application
We begin verification of the generating application with the addresses in the SMTP envelope.
Envelope addresses belong to the email infrastructure level. They are not visible to the user, but they are important for delivery, because the envelope address determines which mailbox the email actually goes to.
The recipient address in the envelope (envelope-to, aka RCPT TO:) is the address to which the email will actually be delivered.
• for all emails except registration emails, the address must have been obtained legally and validated for mailing in accordance with administrative requirements;
• for newsletters, the address must be 'live': addresses to which email regularly cannot be delivered should be marked inactive, and no mailings should be sent to them. However, some categories of emails (for example, access recovery) may still need to be sent to addresses previously marked inactive.
The sender address in the SMTP envelope (usually called envelope-from, smtp.mailfrom, MAIL FROM:, or Return-Path) is where non-delivery messages and automatic replies are sent. This address is not visible to the user. It is also the address used for SPF authentication. We verify that:
• It is not an employee's address and is not redirected to one, in order to exclude auto-replies, out-of-office messages, and the like;
• It is processed by a script that marks the addresses of unreachable users as inactive;
• It may be auto-generated, i.e. unique to each email;
• An email to this address must not trigger any response email, for example a mailbox-overflow message.
The following addresses, in contrast, are either directly visible to the user or are used when replying to an email.
The sender address (the From: header) is the address and sender name displayed in the message list and when reading an email. We verify that:
• It contains not only an email address but also the sender's name.
• noreply@ is possible, but only if we want to emphasize that we do not expect a reply and that replies will not be read. It is better to duplicate this idea in the text of the email.
• If there are non-ASCII characters (for example, Cyrillic), the sender name must be MIME-encoded, and a domain with non-ASCII characters must be encoded in Punycode (a minimal encoding sketch follows after this list).
• Emails of the same category should come from the same address, and the use of auto-generated addresses should be avoided. This is because From: is what people most often use to sort emails into folders with filters.
• The address should differ (preferably by subdomain) for transactional, marketing, and urgent emails (such as emails from the support service). This is because users may mark marketing emails as spam or filter them into a folder they never read.
• The From: address and the SMTP envelope address must be in the same domain, or in subdomains of the same organizational domain, for SPF to be aligned within DMARC.
• The address must be in the organization's own domain. It is unacceptable to use free mail services and other public mail domains in From:, since such mailings will not pass SPF and DKIM authentication within DMARC.
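As referenced in the list above, here is a minimal sketch of both encodings using the Python standard library (the name and domain are made-up examples):

```python
from email.header import Header

display_name = "Иван Петров"  # non-ASCII sender name
print(Header(display_name, "utf-8").encode())  # =?utf-8?...?= MIME encoded-word

idn_domain = "почта.рф"  # non-ASCII (Cyrillic) domain
print(idn_domain.encode("idna").decode("ascii"))  # xn--... Punycode form
```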
The Reply-To address is where 'manual' replies go when a user answers an email. It is optional; if absent, the From: address is used for replies. Check that Reply-To:
• Can be auto-generated, i.e. unique to the email (this lets you find out which email an answer came to).
• Does not land in an employee's mailbox (to avoid auto-replies); ideally it should be routed into the CRM.
• May trigger a standard CRM auto-reply, but must not trigger anything superfluous, such as mailbox-overflow or 'on vacation' messages. When generating auto-replies, measures must be taken to avoid mail loops; they are listed in RFC 3834.
• Can be in any domain, not necessarily the same as From:, although a mismatch sometimes scares users when they reply.
• May be absent, in which case the From: address performs its functions.
• Contains the sender's name in addition to the address.
The To: address:
• Must contain the recipient's email address (otherwise it scares the recipient and bothers the anti-spam systems).
• Ideally, it should also contain the recipient's name. But if the name is unknown or doubtful (for example, the address has not yet been confirmed), it is better to omit it: someone may register someone else's address under an offensive name, and the recipient may be offended.
The actual encoding of the text must match the one declared in the headers. It is advisable to use a single encoding in all headers and parts of the email; UTF-8 is recommended as the most widely supported. The encoding is declared in the Content-Type headers and in the meta tag of the HTML part.
The From:, Message-ID:, and Date: headers should be formed directly by the script that sends the email (by the standards, together with the text of the email) and always in the correct format. If they are missing or malformed, one of the transit servers may add them, which breaks the integrity of the DKIM signature.
There must be no raw 8-bit characters in the headers, including the subject line (Subject) and the names of attached files (Content-Type and Content-Disposition); all non-ASCII characters, including Cyrillic, must be encoded in quoted-printable or base64.
(registration confirmation in weird encoding)
### Requirements for the structure of the email
For the HTML part of the email, it is desirable to provide an alternative plain-text part. Check the correspondence and readability of the plain-text part (if there is one) against the HTML part, and the general structure of the email.
According to RFC 5322, a line in an email must not exceed 998 octets. Note that in UTF-8 a character can occupy several octets. The line terminator in email is the CRLF pair (ASCII 13, ASCII 10), which takes 2 octets. Check that the line terminators are correct: emails are often sent by Unix scripts, and in Unix the line terminator is a single LF character, which is an error in email. Also check whether the line terminators split UTF-8 encoded characters: a terminator must never fall between two octets of the same character, for example inside a Cyrillic letter. To avoid this, break the text into lines before forming the email, or encode the text in base64, which usually has a fixed line length.
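A minimal checker for these two requirements over the raw message bytes could look like this (the file name is hypothetical):

```python
MAX_LINE = 998  # octets per RFC 5322, not counting the CRLF itself

def line_problems(raw: bytes) -> list[str]:
    """Report overlong lines and bare CR/LF in a raw RFC 5322 message."""
    problems = []
    for i, line in enumerate(raw.split(b"\r\n"), start=1):
        if b"\n" in line or b"\r" in line:
            problems.append(f"line {i}: bare CR or LF (terminator must be CRLF)")
        if len(line) > MAX_LINE:
            problems.append(f"line {i}: {len(line)} octets (max {MAX_LINE})")
    return problems

with open("outgoing.eml", "rb") as f:  # hypothetical captured message
    print(line_problems(f.read()) or "OK")
```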
Check that attachments and inline parts are marked correctly, i.e. that the header is 'Content-Disposition: inline' for a picture displayed inside the email and 'Content-Disposition: attachment' for an attached file intended for download.
The structure of the email should be as simple as possible. In particular, there should be no more than one multipart part of each type (mixed, alternative, related): multipart/mixed is used only if the email contains file attachments, multipart/related only if the HTML comes with inline images, and multipart/alternative only when both plain-text and HTML parts are present. In the absence of attachments and inline pictures, the structure of the email should look like this:
multipart/alternative
— text/plain
— text/html
The order of the parts matters: text/plain must come BEFORE text/html or multipart/related. This ensures the HTML part is displayed by default, and the plain part only when the HTML cannot be displayed for some reason.
If there are inline pictures in the email, its structure should look like this:
multipart/alternative
— text/plain
— multipart/related
—— text/html
—— image/… (inline-picture)
—— image/… (inline-picture)
Inline images must have Content-Disposition: inline and be strictly inside the multipart/related part.
In the most difficult case, when there are both inline images and attached files, the email has the following structure:
multipart/mixed
— multipart/alternative
—— text/plain
—— multipart/related
——— text/html
——— image/png
——— image/png
— application/octet-string (content-disposition: attachment)
— application/octet-string (content-disposition: attachment)
(The multipart/related and multipart/alternative parts must be closed before the attachments; the attachments belong to the outer multipart/mixed part.)
(registration message with incorrect parts structure)
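Python's standard email.message.EmailMessage builds exactly this nesting when the parts are added in the right order; a sketch (the addresses and file name are made up):

```python
from email.message import EmailMessage
from email.utils import make_msgid

msg = EmailMessage()
msg["From"] = "Example <noreply@example.com>"
msg["To"] = "user@example.org"
msg["Subject"] = "Welcome"

msg.set_content("Plain-text part: must come first.")  # text/plain
logo_cid = make_msgid()
msg.add_alternative(                                  # text/html alternative
    f'<html><body><img src="cid:{logo_cid[1:-1]}"><p>Hello!</p></body></html>',
    subtype="html",
)
with open("logo.png", "rb") as f:                     # inline image turns the HTML
    msg.get_payload()[1].add_related(                 # part into multipart/related
        f.read(), maintype="image", subtype="png", cid=logo_cid
    )

print(msg.get_content_type())  # multipart/alternative
```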
## URI requirements
Any URI (in src or href attributes, in styles, etc.) must contain the protocol and hostname (https://example.com/somepath). Typical errors are relative links (/somepath) and protocol-less links (//example.com/somepath), which are unacceptable in emails, because there the default protocol may be file://.
• Any service and non-ASCII characters (in particular, Cyrillic) in a URI must be percent-encoded (see the sketch after this list).
• A link inserted as text (i.e. visible to the user as a URL rather than as a piece of text) should still be marked up with the <a> tag; otherwise the user will not be able to click it. Some webmails mark up such links on their own, but this is not standard behavior. The href inside the <a> must match the link text; otherwise a content filter may treat the link as an attempt to deceive the user. Pay special attention to this when click-trackers rewrite the user's transitions from the email.
• It is better to limit oneself to the use of the protocols http://, https:// and mailto:.
• With high-security requirements, you should completely abandon the use of http:// in favor of https://.
• Non-standard ports should not be used (for example, example.com:8080/somepath), as they may not be reachable for the user.
• Clicking a link inside the HTML part must not change the state of the application (subscribe, unsubscribe, order cancellation, etc.) without additional confirmation by the user on the page: content filtering systems may verify the safety of a link by fetching the page themselves; a mail application may show a preview of the page on hover; and modern browsers may prefetch the page before the user clicks it. (In web applications in general, it is not recommended to perform any modifying action on a GET request; all modifying requests should go through POST or PUT.)
• Clicking the link in the List-Unsubscribe header, by contrast, must not require any additional action from the user, because unsubscribing via this link is usually done by the email program or webmail on the user's behalf. There is also a newer List-Unsubscribe-Post header, introduced by RFC 8058.
• Do not expect the user to read the email and follow the link in the same browser in which they initiated the action that caused the email to be sent (for example, registered an account). The link must work in any other browser or on a mobile device. In particular, the user may open the link without being signed in, or while signed in to an account other than the one the email was sent to.
• Because URI length can be limited, do not use data: URIs for large objects; for the same reason, avoid overly long URIs in hyperlinks.
• Do not use external link shorteners: they hurt the deliverability of emails. It is better if all links point to your own domain; this reduces the potential negative impact of someone else's reputation on your delivery.
• Do not host external images on public services or free hosting; use reliable hosting or a CDN with good performance and reputation.
(invalid image and anchor URI due to missing protocol specification)
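As mentioned in the list above, percent-encoding is one standard-library call away; a sketch with a made-up path:

```python
from urllib.parse import quote

path = "/акции/скидка 10"  # Cyrillic characters and a space
print("https://example.com" + quote(path))
# https://example.com/%D0%B0%D0%BA%D1%86%D0%B8%D0%B8/%D1%81%D0%BA%D0%B8%D0%B4%D0%BA%D0%B0%2010
```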
## Email layout requirements
Why is it so difficult to lay out emails?
Email clients, one way or another, display user content within their own interface. Potentially, this can lead to various security problems: cross-site scripting (XSS), interface spoofing, DOM clobbering, user deanonymization and information leaks (for example, leaking the user's IP address or cookies through external requests), and so on. Therefore, every mail service and mail application has some form of protection against each class of attack. Unfortunately, there is no single approach to organizing this protection. It can be organized through:
• isolated limited frames,
• filtering tags and / or attributes,
• restrictions on absolute positioning,
• prohibition or restriction on the use of block styles (which is critical for adaptive layout),
• banning external elements by default (i.e. downloading external images requires user permission) or using a proxy to access them,
• converting HTML emails to another intermediate format (for example, Microsoft Exchange / Outlook uses RTF, which can make it extremely difficult to display elements properly in Outlook using conventional methods),
• prohibition or restriction on the use of forms or their individual elements.
Emails also use specific elements, such as inline images and cid: URIs, whose support may be limited. For example, Mozilla Thunderbird does not support cid: URIs for background images.
Even a correctly formed email can be displayed differently in different interfaces due to the peculiarities of their implementation and filtering of the contents of the email.
If there are errors in the email format, the behavior becomes completely unpredictable. For example, email clients handle incorrect URIs differently, treat malformed headers differently, and auto-detect the text encoding differently when it is missing or specified incorrectly. Therefore, the email must be viewed in different interfaces: correct display in one interface does not mean the email is composed correctly (in fact, even correct display in all interfaces does not guarantee that there will be no display problems in the future).
It is necessary to pay attention to the following points:
• Check the text of the email for semantic content, display, absence of typos, syntactic, spelling and lexical errors.
• Check the correctness of the substitution of application data in the template or layout of the email.
• Check the correctness of amounts, dates, numbers, items of goods, and other information, taking the permissible boundary conditions into account. Dates should include the year (some users check their mailbox very rarely). Times must include a time zone. Addresses must include the city and, in some cases, the country.
• Check the operational status of all links in the email, if any.
• In emails sent before the address is confirmed, including emails with a confirmation link, there must be no externally controlled text, not even the username; otherwise they can be used for spam (spam text is entered into a field displayed in the email, such as the name, and the victim's address is given as the recipient). For example, if an obscene text can be sent to the developer's address on behalf of your service, you have this problem.
• Check that there are no external images hosted on third-party services: they can affect delivery and leak information about your customers.
• Check the counters for sending, delivery, opens, and clicks. Some of them live in the email itself (for example, the open-tracking pixel), and some are tracked by the mailing system, but as a rule all of them are available in the mailer's admin panel.
• Check the correctness of the subscription category and the user's unsubscription for this category through the link in the email.
• Check how the email looks to:
• Popular web versions of mail for a targeted country: the «big three» for Russia are Mail.Ru, Yandex, Gmail. You can also add Rambler and Outlook.com;
• Mobile applications of the above email providers;
• Standard mobile applications using IMAP, taking into account popular mobile platforms, for at least the iPhone, Pixel (Android reference platform), Samsung (the most common for Android), MIUI (takes the second place in Russia for Android platforms);
• Various desktop browsers: Chrome, Firefox, Edge, Internet Explorer, Opera, etc.;
• Desktop applications (email programs), especially Thunderbird, Outlook and Apple Mail, optionally The Bat! and Opera Mail;
• Popular corporate solutions with a web interface (Exchange, maybe Roundcube, Communigate, Zimbra, SquirrelMail) — for B2B solutions;
• Do not forget to check the layout on both Retina monitors and monitors with lower resolution.
• During the check in each case, you need to pay attention to:
• Whether the authentication checks pass: SPF/DKIM/DMARC.
• How the email is displayed in the message list: avatar, sender name, the part of the subject that falls into the snippet, and whether its category was determined correctly (for example, that an order notification did not fall into the 'social networks' category).
• The layout of the email as a whole: whether it stays consistent, whether there are incorrect word breaks, etc., including when scaling and resizing the window.
• Fonts should not be small or hard to read.
• Background images and background colors.
• Matching the brand book.
• Ease of performing the actions the email implies. For example, if the email contains a confirmation code or other information that may need to be stored somewhere, it should not only be easy to read, it should also be convenient to select and copy, even in the mobile interface.
• Keep track of the overall size of the email (including external images) and make sure it does not exceed reasonable values. The longer the load time, the more likely a person is to react negatively.
• Even emails that have not been changed should be checked periodically, since changes can occur on the mail service side and, for example, 'reveal' a previously invisible problem.
• Some parameters must be controlled in all tests. For example, failures of DKIM authentication can stem from infrastructure problems (DNS or DKIM-signature issues, time synchronization errors), from errors in the generating application (a malformed sender address; invalid characters in the headers; missing or duplicated mandatory From, Date, or Message-ID headers), or from content errors (incorrect line terminators, overly long lines, incorrect addresses). The email may not visibly break, and the problem may not show up on every service.
## Conducting Split-Tests
Marketing research is beyond the scope of this article, but a few key points that significantly affect the quality of emails should be mentioned.
A newsletter has a purpose, and it should achieve it through quality, not quantity (the opposite of spammers). The mailing must be segmented. When running an advertising campaign, you need to know exactly who falls into the segment sample, why they need the product offered, and what you want to convey to them.
For each mailing, calculate the CTR of the message list: the ratio of the number of emails read to the total number sent. In postmaster.mail.ru you can see these indicators for unique users; if measurement is done through a counter pixel in the email, the absolute number of opens is estimated instead. A CTR below 10% is a very low indicator, and such a mailing is better not sent; aim for a CTR above 30%. For marketing emails, the click-through rate on the links and the percentage of completed actions ('sales') from those links also matter greatly. Be sure to monitor complaints (the email being marked as spam): typically, a tenth of a percent is a good indicator for one-time mailings, and a hundredth of a percent for regular ones. The critical values beyond which a mailing is always treated as spam are listed here: https://help.mail.ru/developers/mailing_rules/technical.
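A back-of-the-envelope check with hypothetical campaign numbers:

```python
sent, opened, complaints = 50_000, 16_500, 40  # hypothetical totals

open_rate = opened / sent            # 0.33 -> 33%, above the 30% target
complaint_rate = complaints / sent   # 0.0008 -> 0.08%, near the one-time limit
print(f"open rate {open_rate:.1%}, complaint rate {complaint_rate:.2%}")
```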
Split-test different variants of the mailing to obtain optimal performance. Merely changing the sender name and the subject of the email can increase the CTR several times over and significantly reduce the number of complaints. The number of emails must be statistically significant for evaluating the results (for large projects, usually a few thousand). The final version of the email is sent out in several stages, for additional measurement of the indicators and for 'warming up': starting with about 10,000 recipients and increasing by roughly an order of magnitude per day.
The main idea: emails are part of your application, perhaps one of the most complex and problem-prone parts, and at the same time often a blind spot in terms of testing. I hope I have managed to draw your attention to this issue.
I would like to thank Vladimir Dubrovin (z3apa3a) and Alena Likhacheva (s4ever) for helping me with this article. The article also draws on materials by Eduard Tyantov (edT) and Alexander Purtov (4Alexander).
|
2019-09-18 10:00:09
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.20536832511425018, "perplexity": 2368.1742764117353}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-39/segments/1568514573264.27/warc/CC-MAIN-20190918085827-20190918111827-00299.warc.gz"}
|
https://electronics.stackexchange.com/questions/34803/electrical-element-instead-of-diode
|
# Electrical element instead of diode?
What electrical element can replace a diode, other than a transistor, in an AC circuit?
• What are you trying to accomplish? – suha Jun 30 '12 at 8:37
• I am asking just for learning purpose. – Adban Jun 30 '12 at 8:39
• Assume that I have a simple circuit which I need to let the current pass one direction only..|(in general) – Adban Jun 30 '12 at 8:42
• A manually controlled switch? A tube? A FET intrinsic diode? A selenium rectifier? Honestly, anything that can replace a diode is functionally a diode, so one might as well argue that is IS a diode, which leads to the answer: nothing. – Wouter van Ooijen Jun 30 '12 at 9:03
There are many theoretical methods you could use to replace a diode. The complexity and cost mean it is not really feasible at any sort of scale, but it is certainly doable.
Take, for example, a buck converter circuit. Losses in the recovery diode limit the efficiency of the circuit, so some buck converter ICs offer synchronous rectification, which replaces the recovery diode with a MOSFET.
In this arrangement, the intrinsic body diode of the MOSFET initially carries the recovery current, and then the controller turns the MOSFET on, creating a lower-resistance path for the current (that is, Rds(on) versus the Vf of the body diode).
This could be extended into an active diode. A comparator across the two ends of the MOSFET, with its output connected to a gate driver, turns the MOSFET on: while one side is positive, the MOSFET conducts and current flows in parallel with the body diode; when the voltage reverses, the comparator detects it and switches off the gate, turning the MOSFET off. You effectively have an active diode in parallel with a regular diode.
Going back to your original question: there is no electrical element that can replace a diode (a p–n junction) other than another p–n junction (whether packaged as a diode, a transistor, or a MOSFET). This element can be improved upon with a MOSFET and associated circuitry to reduce losses, as is often done in switched-mode power supplies and motor controls.
• I am confused how this is accepted, as he gave the condition 'other than a diode'. Not to imply something is wrong with your answer; actually, I think the opposite is the issue: the question. – Kortuk Jun 30 '12 at 14:34
Power stations back in the day used a mechanical 'synchronous rectifier bridge': no electronics at all for AC-to-DC conversion. The idea is to use a mechanical contraption to commutate exactly as fast as the AC changes direction, so you get DC as a result.
If you have a lab with a good machine shop, vacuum chambers and pumps, and a custom glass oven, plus several little helpers, you can produce a very high-efficiency sample of a mechanical rectifier for 60 Hz industrial power.
The one in the picture can rectify about 16 V at 5 A (80 W). It looks kind of steampunk, with a neat industrial touch. Note the nice level of usability: the lids open so the user can access the contact panel to tie the wires to the AC source and the DC load. And the daily cleaning and replacement of the brush contacts is just as easy!
A foxhole radio uses a rusty razor blade and a pencil as the rectifier in a peak detector for an AM radio.
A vacuum tube rectifier is another element that serves the same function. Though technically still a 'diode', it is not a p–n junction.
___| <===
|
|
• I have no idea what your diagram is supposed to indicate, can you replace it with a picture? – Trygve Laugstøl Oct 24 '12 at 13:16
|
2020-09-21 03:16:46
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6838374137878418, "perplexity": 1371.4229452543766}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600400198887.3/warc/CC-MAIN-20200921014923-20200921044923-00007.warc.gz"}
|
http://physics.stackexchange.com/tags/visible-light/hot
|
# Tag Info
37
You're right that as the temperature increases, shorter wavelengths receive a higher proportion of thermally radiated power, and longer wavelengths a smaller proportion, because of the shifting Boltzmann distribution of your molecules' kinetic energy, and therefore the shifting power spectrum of the light they emit. However, most of the objects you see ...
13
Planck's Law gives us the intensity of black body radiation as a function of temperature: $$B(\lambda,T)=\frac{2hc^2}{\lambda^5}\cdot \frac{1}{e^{\frac{h c}{\lambda k_B T}}-1}$$ If we plot a normalized plot of this curve for different temperatures, you see the following: As you can see, it does look like the higher temperatures make the relative ...
5
For many materials the change in refractive index over the range of visible wavelengths isn't huge, so it's not a bad approximation to take a single value. The range of visible wavelengths is from about 400nm to 700nm, so the middle wavelength is 550nm. As it happens, the sodium D lines are not far from this, at 589nm, and since they are bright and easy to ...
4
The atoms in the lattice can be thought of as coherent re-radiators of the incident photons. This is not unlike the scenario we have in a double slit experiment, where a Huygens construction of the wave front considers each point in the slit as a radiation source. So it might be "opinion" but I think that diffraction is an appropriate word to use.
3
You need to keep well in mind that the sensation of color is a semantic meaning that the human mind's processing attaches to the spectral content of light. The mixing of "primary colors" was experimentally found (first by artists for red, yellow and blue with natural pigments, then later for, usually, red, green and blue by photography and color projection ...
3
Time is a continuous flux that goes from the past to the future; you can't stop it, you can only accelerate or decelerate it. From the theories of relativity and from quantum mechanics we know that, in most cases, the maximum speed of information is the speed of light. So we can see events from the past: we can observe the light of a star that, by now, has died. Only ...
2
Yes and no. Fusion inside the sun produces light - but the atoms are moving so fast that their electrons are not attached - it is a plasma. As such, you would be hard pushed to find emission lines in the sunlight. You will see some absorption lines - the colder hydrogen and helium further out will absorb little bits of the radiation. What you are left with ...
2
I think the answer is very simple if you ask another question. You are implying that both black holes generate the same gravity and that the light passes exactly through the middle between them, so the question is: if the light deviates, where would it deviate to? Since, given the conditions, it is not possible to answer this question, it means ...
2
I don't know how hot campfires get, but let's take 1000K as a nice round number. The Planck distribution for 1000K looks like: (Calculated, as the logo suggests, using this web site.) In my answer to Can a glass window protect from heat radiation? I post this graph showing the transmission of glass in the IR region: And this shows the transmission ...
2
Infrared lasers are much more dangerous to the human eye compared to a visible laser of the same power, because infrared lasers do not trigger a blink reflex, which means the laser has much more time to damage your retina. Your other questions can be answered by reading about the many differing ways that visible and infrared light interact with matter via ...
2
Individual photons are not considered rays. Because of the wave and particle nature of photons, they are much more complicated than what they are generally thought of as: a projectile of light. In fact, they do not have an exact measurable position, but they do travel in straight-line trajectories. What we consider rays are lines perpendicular to the wave front of ...
2
When a wave travels through a rope, the rope goes up and down, the position of all the 'rope-particles' changes, they oscillate and this makes up the wave. With light, it is indeed the electromagnetic field oscillating, but you shouldn't think of the arrows that represent that field in your first picture of light as 'extending into the rest of the space'. ...
2
The accepted airglow answer might be technically true, but it does not answer the question! The existence of an additional and very faint source of green light in the atmosphere does not explain the absence of the green light in the sunset sky gradient. I wasn't satisfied with other answers either. The only satisfactory answer I could find is this one. ...
2
The ray theory of light is equivalent to the Eikonal Equation, which in turn is essentially a slowly varying envelope approximation to Maxwell's equations. If we write the electric and magnetic field vectors as $\mathbf{E}\left(\mathbf{r}\right) = \mathbf{e}\left(\mathbf{r}\right) e^{i\,\varphi\left(\mathbf{r}\right)}$, $\mathbf{H}\left(\mathbf{r}\right) = ...
2
The colour you see in the sky on cloudy nights is due to the reflection of city lights off the clouds. In rural areas, a cloudy night is, as you expected, significantly darker. However, the massive amount of light given off in urban areas reflects back to Earth when there is cloud cover. And so, you see a red-orange hue, similar to the overall colour ...
1
Short answer: use your technique, but use scale factors of 0.690 for L, and 0.348 for M (instead of 0.542 and 0.575), and you will reproduce the luminosity function. Long answer: You're on the right track. I tried to find 'official' scale values for the LMS curves online to combine into the luminosity function, but couldn't quickly get them. You are ...
1
This question is too broad. It involves ALL the objects in the universe which have a surface, i.e., everything. I'm going to avoid giving a lecture here. In some liquids and most gases the electronic structure of each individual atom or molecule is enough to describe their spectra. The "property" you are looking for in the case of solids is the band ...
1
Parallel rays reflecting on a concave mirror do intersect at one point, the focus, if the mirror is a parabola (in 2d plane geometry) or paraboloid (in 3d space geometry).
1
Although glass is an amorphous material, it behaves surprisingly similar to crystalline materials in some respects. In this case, you can imagine glass to be a semiconductor with a large bandgap, at least large enough to be beyond the visible wavelengths. Therefore, all visible light passes through, which makes glass transparent. Obviously, there will be ...
1
Although the shortfalls of focusing more light on the array have been described, a similar question is why you would not mount mirrors to reflect sunlight toward the array only when the incident angle is well off normal. This might provide some of the advantage of tracking the angle of the sun during the day. I think in this case the placement and size of ...
1
1) A stationary charge that has always been stationary is associated with an electric field and only an electric field. The electric field points towards the charge, every point that is equally far away has an equally strong field, and the field gets four times as weak if you go twice as far away. 2) A uniformly moving charge that has always been ...
1
What if I say that the mirror doesn't flip left and right? You heard right: the mirror doesn't do the flipping. As the above answers say, the mirror shows what is right in front of it. It's you (we humans) who think it is flipping. Let me go into detail. Before we begin, tell me: 'What makes you think that the mirror flips your left and right?' Or ...
1
I'm going to assume you mean that the light travels on the precise center line between the holes, as iharob did. This sort of symmetry question is very common in physics. Here's a similar question in classical electrodynamics. "If I place a positive charge at the center of a perfect equilateral triangle of equal negative charges, will it move?" Let's say it ...
1
What would happen to light passing through a narrow space between the event horizons of two equal-mass black holes? Would it deviate or follow a straight path? Like iharob and JohnnyMo1 said, the light goes straight. But something else happens to it. See this screenshot of Irwin Shapiro's seminal paper: See where he said the speed of light depends on ...
1
"It is known" that each atom has a characteristic atomic emission spectrum, as long as the atoms are isolated from one another. Emission spectra are usually observed in gases at low pressure. But when the atoms are compressed into solids or liquids, the close proximity of the atoms distorts the environment in which the emission takes place, and shifts the ...
1
White light is a mixture of all wavelengths in the visible spectrum. Blue glass has the property of absorbing all colors except blue, hence only blue is transmitted and the light appears blue. Similarly, that yellow light might not be pure and must contain some amount of red light, which gets transmitted single-handedly. I hope this helps!
1
What you learned is correct. More simply, it's a consequence of the "time reversal symmetry" of most of fundamental physics. This symmetry is still present in general relativity. But, it's obscured by the standard system of coordinates. When you transform these coordinates into the Kruskal coordinate system, you not only have a black hole, you also have ...
1
I imagine this effect has to do with the fact that velocity is relative. When you're on the shore, you gauge the velocity of the waves with respect to the shore. When you're in a plane, you're likely gauging the velocity with respect to the other wave crests, which are moving at the same velocity and so there is no apparent movement.
Only top voted, non community-wiki answers of a minimum length are eligible
|
2015-07-05 13:34:25
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6393725872039795, "perplexity": 417.74992359356673}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-27/segments/1435375097475.46/warc/CC-MAIN-20150627031817-00002-ip-10-179-60-89.ec2.internal.warc.gz"}
|
https://mathematica.stackexchange.com/questions/238141/how-to-drop-extract-elements-of-a-list-which-starts-with-plus
|
# How to drop/extract elements of a list which starts with Plus
I have the following list :
list={a^2, b^2, a^-1, a^2+b^2, (a+b+c)^-1, (a+b-c)^-2}
Do[Print[list[[i]]//FullForm],{i,1,Length[list]}]
I want to break this list into two lists: one containing elements starting with Power and another starting with Plus. In this case it will be
list1 = {a^2, b^2, a^-1, (a+b+c)^-1, (a+b-c)^-2}
list2 = {a^2+b^2}
A priori the position of elements starting with Power or Plus is not fixed and also the elements could be complicated. Is it possible to do this?
• Try: Cases[list, HoldPattern[Power[__]]] and Cases[list, HoldPattern[Plus[__]]] – Daniel Huber Jan 13 at 13:50
grouped = GroupBy[list, Head]
<|Power -> {a^2, b^2, 1/a, 1/(a + b + c), 1/(a + b - c)^2},
Plus -> {a^2 + b^2}|>
grouped /@ {Power, Plus}
{{a^2, b^2, 1/a, 1/(a + b + c), 1/(a + b - c)^2},
{a^2 + b^2}}
Lookup[{Plus, Power}] @ grouped
{{a^2 + b^2},
{a^2, b^2, 1/a, 1/(a + b + c), 1/(a + b - c)^2}}
KeyTake[Plus] @ grouped
<|Plus -> {a^2 + b^2}|>
KeyDrop[Plus] @ grouped
<|Power -> {a^2, b^2, 1/a, 1/(a + b + c), 1/(a + b - c)^2}|>
list = {a^2, b^2, a^-1, a^2+b^2, (a+b+c)^-1, (a+b-c)^-2};
GatherBy[list, Head]
(* Out:
{
 {a^2, b^2, a^(-1), (a + b + c)^(-1), (a + b - c)^(-2)},
 {a^2 + b^2}
}
*)
If you want to specify in which order the heads should be extracted, then you could use multiple Cases statements:
{powerList, plusList} = Cases[list, Blank[#]]& /@ {Power, Plus}
(* Out:
{
 {a^2, b^2, 1/a, 1/(a + b + c), 1/(a + b - c)^2},
 {a^2 + b^2}
}
*)
• Nice solution, except you don't know in advance which head will be the first in the pair – Roma Lee Jan 13 at 19:20
• @RomaLee Good point, although I am not sure that it matters to OP, but it can be easily and automatically addressed. See edit. – MarcoB Jan 13 at 19:26
• I always suffer from the absence of TakeDrop analog of Cases/DeleteCases and the necessity to call Cases twice. But because of arbitrary order in Gather, it is almost unavoidable. Another way is to add dummy element with head Plus to the beginning, but then you have to delete it. Not nice either. – Roma Lee Jan 13 at 19:31
• @RomaLee That's why GroupBy is almost always preferable to GatherBy. – Sjoerd Smit Jan 14 at 14:39
{l1, l2} = Function[x, Select[#, #[[0]] === x &]]&[list]/@{Power, Plus}
Alternatively, using Pick:
{lone,ltwo} = Pick[#1, #1[[All,0]], #2]&[list,#]&/@{Power, Plus}
l1
l2
$$\left\{a^2,b^2,\frac{1}{a},\frac{1}{a+b+c},\frac{1}{(a+b-c)^2}\right\}$$
$$\left\{a^2+b^2\right\}$$
Cases do the job.
{a^2, b^2, a^-1, a^2 + b^2, (a + b + c)^-1, (a + b - c)^-2} // {Cases[_Power]@#, Cases[_Plus]@#} &
http://radar.oreilly.com/tag/3d/page/2
"3D" entries
Four short links: 14 November 2013
IP Woe, Deep Learning Intro, Rapid Prototyping Bots, 3D Display
1. TPPA Trades Away Internet Freedoms (EFF) — commentary on the wikileaked text of the trade agreement.
2. Deep Learning 101 — introduction to the machine learning trend of choice.
3. Large Scale Rapid Prototyping Robots — an informal list of large rapid prototyping systems […] including: big 3-axis systems that print plastic, sand, or cement; large robot arms with extruders and milling bits; and large industrial arms for bending metal and assembling modular structures.
4. Dynamic Shape Display (MIT) — a Dynamic Shape Display that can render 3D content physically, so users can interact with digital information in a tangible way. inFORM can also interact with the physical world around it, for example moving objects on the table’s surface. (via Fast Company)
Four short links: 2 August 2013
Algorithmic Optimisation, 3D Scanners, Corporate Open Source, and Data Dives
1. Unhappy Truckers and Other Algorithmic Problems — Even the insides of vans are subjected to a kind of routing algorithm; the next time you get a package, look for a three-letter code, like “RDL.” That means “rear door left,” and it is so the driver has to take as few steps as possible to locate the package. (via Sam Minnee)
https://chemistry.stackexchange.com/questions/87121/is-it-possible-to-freeze-water-by-dissolving-a-salt
# Is it possible to freeze water by dissolving a salt?
Theoretically, dissolving a salt in water lowers the melting point, by approximately 1.86 K kg/mol (water's cryoscopic constant), making it more difficult to freeze the water. However, the dissolution of certain salts is endothermic, lowering the temperature of the water. Is it possible that the water freezes due to that temperature change even though the melting point is also lowered, or is it impossible?
For instance, consider the following case. Into 1 kg of liquid water at 0 °C we put one mole of KI. The melting point of the water should now be about −1.86 °C. However, the enthalpy of dissolution of KI is about +20 kJ/mol, meaning that the temperature of the water should drop to about −4.8 °C. So, theoretically, it should freeze, but does this happen in reality?
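For concreteness, the back-of-the-envelope arithmetic behind those numbers, taking water's specific heat capacity as roughly $4.18\ \mathrm{kJ\,kg^{-1}\,K^{-1}}$ and ignoring the heat capacity of the dissolved salt, is:
$$\Delta T_\text{fus} = -K_f\, b = -\left(1.86\ \mathrm{K\,kg\,mol^{-1}}\right)\left(1\ \mathrm{mol\,kg^{-1}}\right) = -1.86\ \mathrm{K}$$
$$\Delta T_\text{diss} \approx -\frac{\Delta H_\text{soln}}{m\, c_p} = -\frac{20\ \mathrm{kJ}}{(1\ \mathrm{kg})\,(4.18\ \mathrm{kJ\,kg^{-1}\,K^{-1}})} \approx -4.8\ \mathrm{K}$$
so on paper the solution ends up near −4.8 °C, below its depressed melting point of −1.86 °C.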
• Should the last sentence read -- So, theoretically, it should freeze, but does this happen in reality? – MaxW Dec 9 '17 at 17:40
• Welcome to chemistry.SE! If you have any questions about the policies of our community, please visit the help center. Best of luck with your question. – airhuff Dec 9 '17 at 17:47
• Yes, you are right – maxbp Dec 9 '17 at 18:20
• – Mithoron Dec 9 '17 at 21:44
• – Mithoron Dec 9 '17 at 21:48
It is possible, though in a significantly different way than you envision. Freezing point depression via the cryoscopic constant is an example of a colligative property, which holds only for relatively dilute solutions. Once the solutions get very concentrated, intermolecular interactions become more complicated and do not generalize, meaning different compounds will affect the solvent differently.
Therefore, with enough of the right solute, it is possible to actually increase the freezing point of water. Oscar Lanzi's example of clathrates is interesting, but if you want to use salts, then I present you tetrabutylammonium hydroxide. It is known to form stable hydrates with a defined composition and containing a large amount of water, such as $\ce{(C4H9)4N+OH^-.30 H2O}$. The 30-hydrate is a solid which melts at approximately 30 °C, containing 67.6% water by weight.
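As a quick check of that figure, using approximate molar masses of $259.5\ \mathrm{g\,mol^{-1}}$ for $\ce{(C4H9)4N+OH-}$ and $18.02\ \mathrm{g\,mol^{-1}}$ for water:
$$\frac{30 \times 18.02}{259.5 + 30 \times 18.02} = \frac{540.6}{800.1} \approx 0.676,$$
which reproduces the quoted 67.6% water by weight.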
This specific compound, at this specific concentration, forms a particularly stable network of solvating water molecules. You can think of it as stabilizing the structure of ice, allowing it to occur above the normal melting point of pure water. Thus, in principle, if you add enough of the anhydrous salt to water, when you reach the right ratio, the solution will "freeze".
Many compounds actually display this behaviour of forming solid hydrates at certain defined compositions, but tetrabutylammonium hydroxide is unusual in that it forms solid hydrates with a very large amount of water.
Not a salt, but methane works by forming a different phase, https://en.wikipedia.org/wiki/Methane_clathrate. More than just a gee-whiz curiosity, methane clathrate was a big complicating factor in the Deepwater Horizon explosion in the Gulf of Mexico several years ago.
• Although this could be an interesting comment, I don't see how it answers the question. – airhuff Dec 9 '17 at 21:17
My answer is theory-based and consists mainly of text.
Your thought is correct: some salts do require heat to dissolve. This is observed when you add a large amount of table salt to a typical cup of water: the temperature drops (only slightly, though).
The heat of solution affects the solubility of the salt, so the cooler your water, the less salt you are able to dissolve in it.
And there is still no known salt with a molar solubility high enough to cool the solution down to freezing temperatures.
• I don't see it at all. For example, in the case given (potassium iodide), the solubility of KI at 0 °C is 7 mol/(kg of water). If we put one mole in a kg of water, its temperature should be around −5 °C. I am not very sure, but considering that the solubility of KI at 20 °C is about 8 mol/kg, chances are that a mole of KI can be dissolved in a kg of water at −5 °C – maxbp Dec 9 '17 at 18:23
I don't think it is possible. My reason being that the world has many oceans, seas, and even saltwater lakes, all of which vary in density and temperature; some form ice packs, icebergs, and glaciers, yet others don't freeze at all. Again, surely it comes down to the type of salt, its density, and its composition?
http://www.aldersgatechrysalis.org/valentino-bedroom-yfudsq/fifth-rate-meaning-042179
The second comprised the "post ships" of between 20 and 24 guns. For instance, when Pitt Burnaby Greene, the commanding officer of Bonne Citoyenne in 1811, received his promotion to post-captain, the Navy reclassed the sloop as a post ship. Only the larger sixth-rates (those mounting 28 carriage guns or more) were technically frigates. A: 5G is based on OFDM (Orthogonal frequency-division multiplexing), a method of modulating a digital signal across several different channels to reduce interference. These were too small to be formally counted as frigates (although colloquially often grouped with them), but still required a post-captain (i.e. ^* The smaller fourth-rates, primarily the 50-gun ships, were, from 1756 on, no longer classified as ships of the line. Demographic transition is a model used to represent the movement of high birth and death rates to low birth and death rates as a country develops from a pre-industrial to an industrialized economic system. The remainder were simply "unrated". Rating was not the only system of classification used. At this time the combatant ships of the "Navy Royal"[Note 2] were divided up according to the number of men required to man them at sea (i.e. The results of their study were published in a technical report entitledOral Reading Fluency: 90 Years of Measurement, archived in The Reading Teacher: Oral reading fluency norms: A valuable assessment tool for reading teachers. Fifth Third Bank, National Association, provides access to investments and investment services through various subsidiaries, including Fifth Third Securities. Henry's Navy consisted of 58 ships, and in 1546 the Anthony Roll divided them into four groups: 'ships, galliasses,[Note 1] pinnaces, and row barges.' A first-, second- or third-rate ship was regarded as a "ship-of-the-line". [dubious – discuss] The royal ships were now graded as first rank, the great ships as second rank, the middling ships as third rank, and the small ships as fourth rank. Fifth definition is - one that is number five in a series. In the rating system of the Royal Navy used to categorise sailing warships, a fifth rate was the second smallest class of warships in a hierarchical system of six "ratings" based on size and firepower. ^* The larger fifth-rates were generally two-decked ships of 40 or 44 guns, and thus not "frigates", although the 40-gun frigates built during the Napoleonic War also fell into this category. No specific connection with the size of the ship or number of armaments aboard was given in this 1626 table, and as far as is known, this was related exclusively to seaman pay grades. A sixth rate's range went from 4–18 to 20–28 (after 1714 any ship with fewer than 20 guns was unrated).[1]. As of 1905, ships of the United States Navy were by law divided into classes called rates. Herein are distinctly specified six, Learn how and when to remove this template message, https://en.wikipedia.org/w/index.php?title=Rating_system_of_the_Royal_Navy&oldid=998942307, Wikipedia articles incorporating a citation from the New International Encyclopedia, Short description is different from Wikidata, Articles needing additional references from October 2013, All articles needing additional references, Articles with disputed statements from April 2018, Articles with unsourced statements from April 2018, Creative Commons Attribution-ShareAlike License. ‘The most absurd thing of all is how much thought you devoted to this haphazard, fifth-rate hunk of junk.’. 
One therefore needs to distinguish between the established armament of a vessel (which rarely altered) and the actual guns carried, which might happen quite frequently for a variety of reasons; guns might be lost overboard during a storm, or "burst" in service and thus useless, or jettisoned to speed the ship during a chase, or indeed removed down into the hold in order to use the ship (temporarily) as a troop transport, or for a small vessel, such as the schooner HMS Ballahoo, to lower the centre of gravity and thus improve stability in bad weather. If the number in the grid says "(1.250)" or "101.250," this would indicate that the rate pays the lender 1.25 percent of the loan amount. In 2006, Jan Hasbrouck and Gerald Tindal completed an extensive study of oral reading fluency. For instance, Pepys allowed a first rate 90–100 guns, but on the 1801 scheme a first rate had 100–120. The rating system of the Royal Navy and its predecessors was used by the Royal Navy between the beginning of the 17th century and the middle of the 19th century to categorise sailing warships, initially classing them according to their assigned complement of men, and later according to the number of their carriage-mounted guns. Switch to new thesaurus. The number and weight of guns determined the size of crew needed, and hence the amount of pay and rations needed. a. the interval between one note and another five notes away from it counting inclusively along the diatonic scale. December 4, 2007 From February 1817 all carronades were included in the established number of guns. When making projections for a firm’s free cash flowFree Cash Flow (FCF)Free Cash Flow (FCF) measures a company’s ability to produce what investors care most about: cash that's available be distributed in a discretionary way, it is common practice to assume there will be different growth rates depending on which stage of the business life cycle the firm currently operates in. When carronades became part (or in some cases all) of a ship's main armament, they had to be included in the count of guns. The rated number of guns often differed from the number a vessel actually carried. common fraction, simple fraction - the quotient of two integers. The larger category comprised the sixth-rate frigates of 28 guns, carrying a main battery of twenty-four 9-pounder guns, as well as four smaller guns on their superstructures. From c.1650 the burthen of a vessel was calculated using the formula Pepys's original classification was updated by further definitions in 1714, 1721, 1760, 1782, 1801 and 1817 (the last being the most severe, as it provided for including in the count of guns the carronades that had previously been excluded). Historical category for Royal Navy vessels, based on number of guns, https://en.wikipedia.org/w/index.php?title=Fifth-rate&oldid=970251179, Short description is different from Wikidata, Creative Commons Attribution-ShareAlike License, This page was last edited on 30 July 2020, at 05:44. Torpedo-boat destroyers, torpedo boats, and similar vessels were not rated. 94 The majority of fifth metatarsal fractures are treated without surgery. For example, 30 miles in 1 hour, or 30 miles per hour, is a unit rate. The rating system of the Royal Navy formally came to an end in the late 19th century by declaration of the Admiralty. All owners of your savings account must also be listed together as owners on your Fifth Third checking account. XGPON’s maximum rate is 10 Gbits/s (9.95328) downstream and 2.5 Gbits/s (2.48832) upstream. 
× From about 1660 the classification moved from one based on the number of men to one based on the number of carriage guns a ship carried. Vessels with fewer than three masts were unrated sloops, generally two-masted vessels rigged as snows or ketches (in the first half of the 18th century), or brigs in succeeding eras. For example, if there are 70 students in 5 classes, find the number of students per class. Another list, dated 1612, divides them into... 'ships royal, measuring from twelve hundred to eight hundred tons; middling ships, from eight hundred to six hundred tons; small ships, three hundred and fifty tons; and pinnaces, from two hundred and fifty to eight tons. [2]:128[q 3], This classification scheme was substantially altered in late 1653 as the complements of individual ships were raised. b The larger of the unrated vessels were generally all called sloops, but that nomenclature is quite confusing for unrated vessels, especially when dealing with the finer points of "ship-sloop", "brig-sloop", "sloop-of-war" (which really just meant the same in naval parlance as "sloop") or even "corvette" (the last a French term that the British Navy did not use until the 1840s). You have a Fifth Third checking account (Does not include Fifth Third Express Banking.). British authors might still use "first rate" when referring to the largest ships of other nations or "third rate" to speak of a French seventy-four. The country's overall population is about 58 million (2019). For instance, HMS Cynthia was rated for 18 guns but during construction her rating was reduced to 16 guns (6-pounders), and she also carried 14 half-pound swivels. Determine the actual rate requested and the lock-in period. [citation needed] Soon afterwards, the structure was again modified, with the term rank now being replaced by rate, and the former small ships now being sub-divided into fourth, fifth and sixth rates. The fifth rates at the start of the 18th century were generally "demi-batterie" ships, carrying a few heavy guns on their lower deck (which often used the rest of the lower deck for row ports) and a full battery of lesser guns on the upper deck. From 1778, however, the most important exception was the carronade. Chapter 2 develops the vital topic of control valve performance. [4], The smaller two deckers originally blurred the distinction between a fourth rate and a fifth rate. The fifth metatarsal is the last bone at the outside of the foot, and most breaks of the fifth metatarsal occur at the base. The first movement towards a rating system may be seen in the 15th century and the first half of the 16th century, when the largest carracks in the Navy (such as the Mary Rose, the Peter Pomegranate and the Henri Grâce à Dieu) were denoted "great ships". Reuters.com brings you the latest news from around the world, covering breaking news in markets, business, politics, entertainment, technology, video and pictures. adjective. The term first-rate has passed into general usage, as an adjective used to mean something of the best or highest quality available. The rating of a ship was of administrative and military use. [1], Samuel Pepys, then Secretary to the Admiralty, revised the structure in 1677 and laid it down as a "solemn, universal and unalterable" classification. More example sentences. Rates may be higher for loans to purchase an RV from a private party, smaller loan amounts, longer terms, used RVs, and a lower credit score. 
By the end of the 18th century, the rating system had mostly fallen out of common use (although technically it remained in existence for nearly another century), ships of the line usually being characterized directly by their nominal number of guns, the numbers even being used as the name of the type, as in "a squadron of three seventy-fours". Great Ships (the rest of the ships in the previous "great ships" grouping) mounting 38–40 guns; This page was last edited on 7 January 2021, at 19:12. ^* The ton in this instance is the burthen tonnage (bm). Rates as low as 5.24% APR (Annual Percentage Rate) are available for 4-year RV loans \$25,000 and higher at 100% loan-to-value (LTV) or less on a new RV. Vessels were sometimes classified according to the substantive rank of her commanding officer. 4. Mortgage rates can either be fixed at a specific interest rate, or variable, fluctuating with a benchmark interest rate. Second-rate and third-rate are also used as adjectives to mean that something is of inferior quality. The larger fourth rates of 60 guns continued to be counted as ships-of-the-line, but few new ships of this rate were added, the 60-gun fourth rate being superseded over the next few decades by the 64-gun third rate. Check out our selection of fifth grade reading worksheets. if you are buying for 100k the annual interest payable is £4500 if you divide that by 365 days a year then £12.33 is … Nevertheless, during the Anglo-Dutch Wars of the 17th century, fifth rates often found themselves involved among the battle fleet in major actions. Examples of such weapons would include mortars, howitzers or boat guns, the boat guns being small guns intended for mounting on the bow of a vessel's boats to provide fire support during landings, cutting out expeditions, and the like. Fifth-raters were often assigned to interdict enemy shipping--meaning the prospect of prize money for the crew. Fifth rates were often assigned to interdict enemy shipping, offering the prospect of prize money for the crew. The 80% rule states that the selection rate of the protected group should be at least 80% of the selection rate of the non-protected group. [4], The rating system did not handle vessels smaller than the sixth rate. A 'base date' is a reference date from which changes in conditions can be assessed. Sixth-rate ships were generally useful as convoy escorts, for blockade duties and the carrying of dispatches; their small size made them less suited for the general cruising tasks the fifth-rate frigates did so well. Lieutenant-commanders, lieutenants, ensigns, or warrant officers might command unrated vessels, depending on the size of the vessel.[6]. 1965. Auxiliary vessels such as colliers, supply vessels, repair ships, etc., if over 4000 tons, were of the third rate. Also some of the guns were removed from a ship during peacetime service, to reduce the stress on the ship's structure, which is why there was actually a distinction between the wartime complement of guns (and men) and the lower peacetime complement—the figure normally quoted for any vessel is the highest (wartime) establishment. All the other third rates, with 74 guns or less, were likewise two-deckers, with just two continuous decks of guns (on the lower deck and upper deck), as well as smaller weapons on the quarterdeck, forecastle and (if they had one) poop. 5G also uses wider bandwidth technologies such as sub-6 GHz and mmWave.. Like 4G LTE, 5G OFDM operates based on the same mobile networking principles. 
Base date in construction contracts - Designing Buildings Wiki - Share your construction industry knowledge. The division of the navy into 'rates' appears for the first time in a table drawn up by Charles I., in 1626, and entitled,—'The New Rates for Seaman's monthly wages, confirmed by the Commissioners of His Majesty's Navy, according to His Majesty's several rates of ships, and degrees of officers.' an officer holding the substantive rank of captain) as their commander. However, certain situations may require surgical treatment. The count did not include smaller (and basically anti-personnel) weapons such as swivel-mounted guns ("swivels"), which fired half-pound projectiles, or small arms. From mid-century, a new fifth-rate type was introduced: the classic frigate, with no gun ports on the lower deck, and the main battery of from 26 to 30 guns disposed solely on the upper deck, although smaller guns were mounted on the quarterdeck and forecastle. The fifth rates at the start of the 18th century were small two-deckers, generally either 40-gun ships with a full battery on two decks, or "demi-batterie" ships, carrying a few heavy guns on their lower deck (which often used the rest of the lower deck for row ports) and a full battery of lesser guns on the upper deck. 2 There was a further major change in the rating system in 1856. Pulse rate. Vessels of the first rate had a displacement tonnage in excess of 8000 tons; second rate, from 4000 to 8000 tons; third rate, from 1000 to 4000 tons; and fourth rate, of less than 1000 tons. On the whole the trend was for each rate to have a greater number of guns. Historical category for Royal Navy vessels, based on number of guns, First, second and third rates (ships of the line), Royal Navy rating system in force during the Napoleonic Wars, Galliasses, not to be confused with the Mediterranean vessel, The term Royal Navy was only introduced after the Restoration of King. Of the poorest possible quality. × fifth part, twenty percent, fifth. Fifth-rate ships acted as fast scouts or independent cruisers and included a variety of gun arrangements; they had crews of 215 to 294 men. the maximum breadth of the vessel. At the low end of the fourth rate one might find the two-decker 50-gun ships from about 1756. Introduction. Respiration rate (rate of breathing) Blood pressure (Blood pressure is not considered a vital sign, but is often measured along with the vital signs.) Feb 16, 2020 Hana Gartner’s most memorable Fifth Estate moments Feb 16, 2020 John Connelly’s death: A family fights for answers Newsletter. The formal system of dividing up the Navy's combatant warships into a number or groups or "rates", however, only originated in the very early part of the Stuart era, with the first lists of such categorisation appearing around 1604. It also indicated whether a ship was powerful enough to stand in the line of battle. The guns that determined a ship's rating were the carriage-mounted cannon, long-barreled, muzzle-loading guns that moved on 'trucks'—wooden wheels. Quintiles are representative of 20% of a given population. [1], The earliest categorisation of Royal Navy ships dates to the reign of King Henry VIII. The rating system in the Royal Navy as originally devised had just four rates, but early in the reign of Charles I the original fourth rate (derived from the "Small Ships" category under his father, James I) was divided into new classifications of fourth, fifth, and sixth rates. 
1 ‘First-class dancers were on show last week in a fifth-rate setting.’. b Therefore, one should not change a measurement in "tons burthen" into a displacement in "tons" or "tonnes". Vital signs can be measured in a medical setting, at home, at the site of a medical emergency, or elsewhere. Through the early modern period, the term "ship" referred to a vessel that carried square sails on three masts. The rate is calculated as a percentage of the purchase price so in this contract it is 4.5% (assuming Natwest is same as BoE base rate) eg. Until that date, carronades only "counted" if they were in place of long guns; when the carronades replaced "long" guns (e.g. on the upper deck of a sloop or post ship, thus providing its main battery), such carronades were counted. {\displaystyle k} The Navy did retain some fourth rates for convoy escort, or as flagships on far-flung stations; it also converted some East Indiamen to that role. Henry definition, the standard unit of inductance in the International System of Units (SI), formally defined to be the inductance of a closed circuit in which an electromotive force of one volt is produced when the electric current in the circuit varies uniformly at a rate of one ampere per … Students learn that a unit rate is a rate in which the second rate is 1 unit. [1], The earliest rating was based not on the number of guns, but on the established complement (number of men). In 1626, a table drawn up by Charles I used the term rates for the first time in a classification scheme connected with the Navy. The order of a rate law is the sum of the exponents of its concentration terms. Technically the category of "sloop-of-war" included any unrated combatant vessel—in theory, the term even extended to bomb vessels and fire ships. [4], The smaller fourth rates, of about 50 or 60 guns on two decks, were ships-of-the-line until 1756, when it was felt that such 50-gun ships were now too small for pitched battles. Converted merchant vessels that were armed and equipped as cruisers were of the second rate if over 6000 tons, and of the third rate if over 1000 and less than 6000 tons. For example, the French Navy used a system of five rates ("rangs") which had a similar purpose. This was only on the basis of their roughly-estimated size and not on their weight, crew or number of guns. Different WDM wavelengths are used, 1577 nm downstream and … The nonunion rate of these fractures may still be as high as 15 to 20 percent. {\displaystyle {\frac {k\times b\times {\frac {1}{2}}b}{94}}} Also of note in this passage, the restitution was made to the owner of the property (not to the government or any other third party), and the compensation was to be accompanied by a guilt offering to the Lord. The 40-gun (or later 44-gun) fifth rates continued to be built until the later half of the 18th century (a large group were built during the American Revolutionary War). Since 49.5% is less than four-fifths (80%), this group has adverse impact against minority applicants. {\displaystyle b} k Captains commanded ships of the first rate; captains or commanders commanded ships of the second rate; commanders or lieutenant-commanders commanded ships of the third rate; lieutenant-commanders or lieutenants commanded ships of the fourth rate. 
During the Napoleonic Wars, the Royal Navy increased the number of sloops in service by some 400% as it found that it needed vast numbers of these small vessels for escorting convoys (as in any war, the introduction of convoys created a huge need for escort vessels), combating privateers, and themselves taking prizes.[4]. Notable exceptions to this rule were ships such as the Santisima Trinidad of Spain, which had 140 guns and four gun decks (the Spanish and French had different rating systems from those of Britain). Although the rating system described was only used by the Royal Navy, other major navies used similar means of grading their warships. The first and second rates were three-deckers; that is, they had three continuous decks of guns (on the lower deck, middle deck and upper deck), usually as well as smaller weapons on the quarterdeck, forecastle and poop. Leviticus 6:2-5 covers other situations in which the stolen property is restored, plus one fifth of the value. k Vital signs are useful in detecting or monitoring medical problems. Fifth-rate ships served as fast scouts or independent cruisers and included a variety of gun arrangements. [5] The recommendation from the Board of Admiralty to the Prince Regent was dated 25 November 1816, but the Order in Council establishing the new ratings was issued in February 1817. This fifth edition presents vital information on control valve performance and the latest technologies. Personally he may not have known enough about painting to be more than a fifth-rate painter, or enough about the organ to be more than a sixth-rate organist. [2]:128[q 2], By the early years of King Charles I's reign, these four groups had been renamed to a numerical sequence. the size of the crew) into four groups: A 1612 list referred to four groups: royal, middling, small and pinnaces; but defined them by tonnage instead of by guns, starting from 800 to 1200 tons for the ships royal, down to below 250 tons for the pinnaces. Fifth and sixth rates were never included among ships-of-the-line. Larger fifth rates introduced during the late 1770s carried a main battery of twenty-six or twenty-eight 18-pounders, also with smaller guns (6-pounders or 9-pounders) on the quarterdeck and forecastle. They were generally classified, like all smaller warships used primarily in the role of escort and patrol, as "cruisers", a term that covered everything from the smaller two-deckers down to the small gun-brigs and cutters. In October 2015, H.E John Pombe Magufuli was elected the fifth president of the United Republic of Tanzania. Sailing vessels with only two masts or a single mast were technically not "ships", and were not described as such at the time. However, the latter were gradually phased out, as the low freeboard (the height of the lower deck gunport sills above the waterline) meant that it was often impossible to open the lower deck gunports in rough weather. In February 1817 the rating system changed. Noun. Political Context. The smaller two deckers originally blurred the distinction between a fourth rate and a fifth rate. In this example, 4.8% of 9.7% is 49.5%. Once the rate law of a reaction has been determined, that same law can be used to understand more fully the composition of the reaction mixture. The main cause behind this declaration focused on new types of gun, the introduction of steam propulsion and the use of iron and steel armour which made rating ships by the number of guns obsolete. 1. one-fifth - one part in five equal parts. 
To be posted aboard a Fifth-rate ship was considered an attractive assignment. The fifth rates of the 1750s generally carried a main battery of twenty-six 12-pounders on the upper deck, with six 6-pounders on the quarterdeck and forecastle (a few carried extra 6-pounders on the quarterdeck) to give a total rating of 32-guns. b. one of two notes constituting such an interval in relation to the other. ‘a fifth-rate TV journalist shooting a documentary’. In the problems in this lesson, students are given a rate, and are asked to find the corresponding unit rate. By the Napoleonic Wars there was no exact correlation between formal gun rating and the actual number of cannons any individual vessel might carry. 5G uses 5G NR air interface alongside OFDM principles. It works on the premise that birth and death rates are connected to and correlate with stages of industrial development. When these carracks were superseded by the new-style galleons later in the 16th century, the term "great ship" was used to formally delineate the Navy's largest ships from all the rest. Auxiliary vessels of less than 4000 tons—except tugs, sailing ships, and receiving ships which were not rated—were of the fourth rate. The table specified the amount of monthly wages a seaman or officer would earn, in an ordered scheme of six rates, from "first-rate" to "sixth-rate", with each rate divided into two classes, with differing numbers of men assigned to each class. To be posted aboard a fifth-rate ship was considered an attractive assignment. Structurally, these were two-deckers with a complete battery on the lower deck, and fewer guns on the upper deck (below the forecastle and quarter decks, usually with no guns in the waist on this deck). The largest third rates, those of 80 guns, were likewise three-deckers from the 1690s until the early 1750s, but both before this period and subsequent to it, 80-gun ships were built as two-deckers. Vessels might also carry other guns that did not contribute to the rating. However some sloops were three-masted or "ship-rigged", and these were known as "ship sloops". When the carronades replaced or were in lieu of carriage-mounted cannon they generally counted in arriving at the rating, but not all were, and so may or may not have been included in the count of guns, though rated vessels might carry up to twelve 18-, 24- or 32-pounder carronades. The new carronades were generally housed on a vessel's upperworks—quarterdeck and forecastle—some as additions to its existing ordnance and some as replacements. Volume, not displacement owners on your fifth Third Securities equal parts its concentration terms military use benchmark interest.., is a rate, or 30 miles per hour, is a unit rate the... Charged on a vessel actually carried therefore, one should not change a measurement in tons burthen '' a... And military use major actions important exception was the carronade used a system of classification used this haphazard fifth-rate. Non certifications and retirements in their turnover calculations 4, 2007 a. the interval between one fifth rate meaning and five. Trend was for each rate to have a greater number of guns account must also listed! And retirements in their turnover calculations with crews of 215 to 294 men instance! Were included in the late 19th century by declaration of the fourth rate one might the. Four-Fifths ( 80 % ), such carronades were counted of classification used of gun.. 
Of a given population as fast scouts or independent cruisers and included a variety of gun arrangements and. The sum of the Third rate is less than 4000 tons—except tugs, sailing ships and! 1 hour, is a rate in which the second rate is reference! Services through various subsidiaries, including definitions for common control valve performance and the two. Jan Hasbrouck and Gerald Tindal completed an extensive study of oral reading fluency formal rating. 2006, Jan Hasbrouck and Gerald Tindal completed an extensive study of oral reading fluency theory. Turnover calculations with stages of industrial development and these were known as frigates '' the! '' included any unrated combatant vessel—in theory, the French Navy used a system of rates... Three-Masted or ship-rigged '', and these were known as frigates '' by the Napoleonic Wars there no. Are also used as adjectives to mean something of the value still be as as! Corresponding unit rate '' into a displacement in tons '' or tonnes.! The reign of King Henry VIII gun rating and the actual number of guns often from. Two integers, thus providing its main battery ), such carronades were housed. Constituting such an interval in relation to the reign of King Henry VIII rating were the cannon! Have a fifth Third checking account Wars of the fourth rate and a fifth Third Securities ships. Elected the fifth president of the 17th century, fifth rates were never included among ships-of-the-line of your account... The other - the quotient of two integers used to mean that something is inferior... On your fifth Third checking account your savings account must also be together. And the latest technologies as a ship-of-the-line '' exponents of its concentration terms in 1 hour, or,... To 1450 tons, were of the fourth rate one might find two-decker... Fifth-Rate ship was of administrative and military use the amount of pay and rations needed 1450! Powerful enough to stand in the problems in this language and vocabulary worksheet benchmark interest rate identify the meaning new... There are 70 students in 5 classes, find the corresponding unit rate the distinction a. Last fifth of the Royal Navy, other major navies used similar means of grading their warships Buildings... Moved on 'trucks'—wooden wheels trend was for each rate to have a greater number of guns representative of %... Came to an end in the established number of students per class from the a... Fifth-Rate ships served as fast scouts or independent cruisers and included a variety of gun arrangements Share your construction knowledge... Xgpon ’ s maximum rate is 1 unit Wiki - Share your construction industry knowledge ' is a reference from! And rations needed see also perfect 9, diminished 2, interval 5 were three-masted ! Hour, or elsewhere provides access to investments and investment services through subsidiaries!... Learners practice using context clues to identify the meaning of new words in this language vocabulary... Sloop-Of-War '' included any unrated combatant vessel—in theory, the most fifth rate meaning thing of all how! Between formal gun rating and the latest technologies extensive study of oral fluency! As their commander the distinction between a fourth rate and a fifth Third Express Banking. ) ’ s rate. The latest technologies indicated whether a ship of the fourth rate one might find the two-decker 50-gun ships about! Country 's overall population is about 58 million ( 2019 ) home, at home, the... 
Of oral reading fluency Third Bank, National Association, provides access to investments investment! Prize fifth rate meaning for the crew two deckers originally blurred the distinction between a fourth rate and a fifth rate first-. The larger sixth-rates ( those mounting 28 carriage guns or more ) fifth rate meaning frigates. Fifth-Rate TV journalist shooting a documentary ’ asked to find the number of students per class chapter 2 the! Sixth-Rates were never included among ships-of-the-line fifth-rate TV journalist shooting a documentary ’ five rates ( rangs '' which... The established number of guns often differed from the number of guns often differed from the number a that. Thing of all is how much thought you devoted to this haphazard, fifth-rate hunk junk.... '', and these were known as ship sloops '' are given a rate, or variable fluctuating. Of King Henry VIII rangs '' ) which had a similar purpose declaration... Tonnage ranged from 700 to 1450 tons, with crews of 215 to 294 men the sixth rate of! Was defined as a ship of the line of battle change a measurement in tons. Holding the substantive rank of captain ) as their commander control valves, including fifth Third checking (! Be listed together as owners on your fifth Third Express Banking. ) Third Bank, National,! Century by declaration of the Admiralty the interest rate, or 30 miles per,... To the rating system did not contribute to the rating system in 1856, diminished,! Considered an attractive assignment be posted aboard a fifth-rate ship was regarded as a ship of best... Blurred the distinction between a fourth rate one might find the two-decker 50-gun ships from 1756. Owners on your fifth Third checking account ( Does not include fifth Third checking account frigates '' the! Fifth president of the 17th century, fifth rates often found themselves involved among the battle fleet in major.!, H.E John Pombe Magufuli was elected the fifth president of the exponents of its concentration terms per,... Are asked to find the corresponding unit rate the upper deck of a medical emergency, variable... Students are given a rate, and hence the amount of pay and rations needed or letter shows... Combatant vessel—in theory, the French Navy used a system of the Third rate and not their! Is 10 Gbits/s ( 9.95328 ) downstream and 2.5 Gbits/s ( 9.95328 ) downstream and 2.5 Gbits/s 9.95328. And 2.5 Gbits/s ( 9.95328 ) downstream and 2.5 Gbits/s ( 9.95328 ) downstream and Gbits/s... On your fifth Third Bank, National Association, provides access to investments and investment through... Quintiles are representative of 20 % of 9.7 % is less than four-fifths ( 80 % ), carronades. Classes, find the corresponding unit rate control valve performance and the latest technologies are used.... ). ) carry other guns that determined a ship 's rating were the carriage-mounted cannon, long-barreled muzzle-loading... And another five notes away from it counting inclusively along the diatonic scale fourth rate contracts - Designing Buildings -... Another five notes away from it counting inclusively along the diatonic scale into general usage, an! One of two integers, if there are 70 students in 5 classes find! A medical emergency, or elsewhere rate, and similar vessels were sometimes classified according to the of... Setting. ’ tugs, sailing ships, etc., if there are 70 students in 5 classes, find number. Century by declaration of the fourth rate and a fifth rate called rates [ 2 ]:128 q... Including fifth Third Express Banking. 
) instance is the interest rate number or letter that shows how someone's…. frigates '' by the Royal Navy ships dates to the reign of King Henry.! On your fifth Third checking account defined as a ship-of-the-line '' ‘ a ship. Hunk of junk. ’ early modern period, the most absurd thing of is! Or last fifth of a ship 's rating were the carriage-mounted cannon, long-barreled, muzzle-loading guns that not. A fourth rate and a fifth rate, Pepys allowed a first rate had 100–120, one not! The Napoleonic Wars there was no exact correlation between formal gun rating and the actual rate requested the! The majority of fifth metatarsal fractures are treated without surgery home, at home, at,... Your construction industry knowledge ship sloops '' three masts other situations in which the comprised... The early modern period, the French Navy used a system of the fourth rate one find! The lock-in period, 2007 a. the interval between one note and another five notes away from it counting along. Per class 4.8 % of 9.7 % is less than 4000 tons—except tugs, sailing ships, are. Nevertheless, during the Anglo-Dutch Wars of the fourth rate one might the!, muzzle-loading guns that did not handle vessels smaller than the sixth rate 's rating the. Known as ship sloops '' substantive rank of captain ) as commander... Used as adjectives to mean that something is of inferior quality ship, thus providing main... Construction industry knowledge vital information on control valve performance and the smaller sixth-rates were often popularly called frigates though. Ships of the United Republic of Tanzania 4, 2007 a. the interval between one and!, Jan Hasbrouck and Gerald Tindal completed an extensive study of oral reading fluency were frigates! Vessels of less than 4000 tons—except tugs, sailing ships, and similar vessels were not rated—were of the rate.
https://hsm.stackexchange.com/questions/1926/who-named-the-fugacity-who-coined-the-variable-name-and-did-it-already-relate-t
# Who named the fugacity, who coined the variable name and did it already relate to complex analysis?
In Riemann's monumental paper, he expresses a prime-counting function as an inverse Mellin transform of the logarithm of the zeta function, which he analytically continued into the complex plane:
$$\Pi(x) = \frac{1}{2\pi i} \int_{a-i\infty}^{a+i\infty} \log \zeta(s)\ x^s \frac{\mathrm{d}s}{s}$$
and the zeros of $\zeta$ are consequently of interest (Riemann hypothesis).
Associated quantities relate closely to concepts in statistical physics. Planck's law for the spectral radiance ($B_\nu(\nu, T) = 2 h c^{-2}\frac{\nu^3}{e^{h\nu/k_\mathrm{B}T} - 1}$) is tied to the Riemann zeta function by a Mellin transform ($\zeta(s) = \frac{1}{\Gamma(s)} \int_{0}^{\infty} \frac{x ^ {s-1}}{e ^ x - 1} \mathrm{d}x$), and physicists have their own polylogarithm (mind the $z$) in the Fermi–Dirac integral.
Now the grand partition function $\mathcal{Z}$ relates microscopic statistical physics to thermodynamics via
$$-k_B T \ln \mathcal{Z} = \langle E \rangle - TS - \mu \langle N\rangle$$
and is often expressed as a power series in the fugacity $$z=\exp\left(\frac{\mu}{k_\mathrm{B} T}\right)$$ as $\mathcal{Z}(z) = \sum_{N_i} z^{N_i} Z(N_i)$.
The zeros of $\mathcal{Z}$ in $z$ make the logarithm $\ln \mathcal{Z}$ blow up; this phenomenon is associated with phase transitions, and people study it and prove results like the Lee–Yang theorem. The fugacity as a parameter in $\mathbb C$, which controls the zeros, is actually on the cover of one of my favorite books. But I'd think the fugacity was defined without that context in mind.
What I wanted to know is: when did "$z$" become a standard name for a *complex* variable and, more importantly, is it a coincidence that the names fit? Who named it, and were there other names for the fugacity?
https://www.nature.com/articles/s41467-021-26488-1?utm_source=pocket_mylist&error=cookies_not_supported&code=4287db30-d9ea-46c9-a0e9-090c23be2428
Introduction
Over half the world’s population now live in cities1. Rapid urbanisation, along with increasingly sedentary lifestyles associated with a rise in electronic media, changing social norms, and shifting perceptions around outside play2,3,4, are reducing people’s opportunities for direct contact with the natural environment. This so-called extinction of experience5 is driving a growing human-nature disconnect, with negative impacts on physical health, cognitive ability and psychological well-being6,7,8,9,10. The COVID-19 pandemic has highlighted this issue, both in terms of the detrimental impacts on mental health due to local and national lockdowns imposed by governments and the widespread recognition of the benefits of engaging with nature during this period11,12. Global biodiversity loss13 is also likely to be driving a dilution of experience, whereby the quality of those interactions with nature which do still occur is also being reduced14 but we do not yet know the extent of such changes.
Sound confers a sense of place and is a key pathway for engaging with, and benefitting from, nature15. Indeed, since Rachel Carson’s (1962) classic book “Silent Spring”, nature’s sounds have been inextricably linked to perceptions of environmental quality16, and the maintenance of natural soundscape integrity is increasingly being incorporated into conservation policy and action17. Birds are a major contributor to natural soundscapes18 and bird song, and song diversity in particular, plays an important role in defining the quality of nature experiences15,19,20,21. Widespread reductions in both avian abundance22 and species richness23, alongside increased biotic homogenisation24, are therefore likely to be impacting the acoustic properties of natural soundscapes and potentially reducing the quality of nature contact experiences25. Indeed, given that people predominantly hear, rather than see, birds26,27, reductions in the quality of natural soundscapes are likely to be the mechanism through which the impact of ongoing population declines is most keenly felt by the general public. However, the relationship between changes in avian community structure and the acoustic properties of natural soundscapes is nuanced and non-linear28—the loss of a warbler species with a rich, complex song is likely to have a greater impact on soundscape characteristics than the loss of a raucous corvid or gull species, but this will depend on how many, and which, other species are present. The implications of biodiversity loss for local soundscape characteristics therefore cannot be directly predicted from count data alone.
Here we combine annual systematic bird count data from North American Breeding Bird Survey (NA-BBS) and Pan-European Common Bird Monitoring Scheme (PECBMS) sites with recordings of individual bird species, downloaded from an online database (www.xeno-canto.org), to reconstruct historical soundscapes at over 200,000 locations across the two continents over the past 25 years. Taking the first species listed in a site-year count data file, a 25 s sound file for that species was inserted at a random time point in an initially empty 5 min sound file. Playback volume was randomly sampled from a uniform distribution to represent varying proximity of individual birds to the surveyor. This process was repeated as many times as there were individuals of the first species counted, and then for all individuals of all other species in that site-year count data file, to build a single, composite representation of the local soundscape for the year when those count data were collected. This process was repeated for all site-year count data files, so that separate soundscapes were constructed for every site in every year it was surveyed. We employed a systematic protocol for soundscape construction, applying the same rules for translating survey data into soundscape contribution across all species, because data on vocalisation frequency (how often an individual vocalises) and duration (how long each vocalisation event lasts) are not available for most species included in our analyses. However, while standardised in length, the 25 s sound files used to represent an individual of a given species did comprise interspersed periods of vocalisation and silence, and therefore captured the inherent variation in song or call structure and pattern of delivery between species to some extent.
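This construction rule is compact enough to sketch in code. The following is a minimal Python sketch under stated assumptions (a 22.05 kHz sample rate, synthetic tones standing in for real xeno-canto clips, and a uniform volume distribution between 0.05 and 1.0); names such as `build_soundscape` and `fake_clip` are illustrative, not the authors' code.

```python
# Minimal sketch of the soundscape-reconstruction rule described above.
# Assumptions (not from the paper): 22.05 kHz sample rate, synthetic tones
# standing in for real xeno-canto clips, and a uniform(0.05, 1.0) volume range.
import numpy as np

SR = 22_050                 # sample rate in Hz (assumed)
CLIP_S, SCAPE_S = 25, 300   # 25 s clips inserted into an empty 5 min file

rng = np.random.default_rng(42)

def fake_clip(freq_hz: float) -> np.ndarray:
    """Stand-in for a downloaded recording: a tone with silent gaps,
    mimicking interspersed periods of vocalisation and silence."""
    t = np.arange(CLIP_S * SR) / SR
    clip = np.sin(2 * np.pi * freq_hz * t)
    clip[(t % 5.0) > 2.5] = 0.0
    return clip

def build_soundscape(counts: dict[str, int],
                     clip_library: dict[str, np.ndarray],
                     vol_range: tuple[float, float] = (0.05, 1.0)) -> np.ndarray:
    """Insert one clip per counted individual at a random time point,
    with volume drawn from a uniform distribution (random proximity)."""
    scape = np.zeros(SCAPE_S * SR)
    for species, n in counts.items():
        clip = clip_library[species]
        for _ in range(n):
            start = rng.integers(0, len(scape) - len(clip))
            scape[start:start + len(clip)] += rng.uniform(*vol_range) * clip
    return scape

clip_library = {"warbler": fake_clip(4000.0), "corvid": fake_clip(1200.0)}
scape = build_soundscape({"warbler": 3, "corvid": 2}, clip_library)
print(scape.shape, round(float(np.abs(scape).max()), 3))
```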
The acoustic characteristics of these reconstructed soundscapes were then quantified using four indices designed to capture the distribution of acoustic energy across frequencies and time29 and to reflect the richness (Acoustic Diversity Index: ADI30), evenness (Acoustic Evenness Index: AEI30), amplitude (Bioacoustic Index: BI31) and heterogeneity (Acoustic Entropy: H32) of each soundscape. These acoustic indices are broadly correlated with avian species richness and abundance30,31,32,33 but are fundamentally driven by song complexity and diversity across contributing species. They therefore describe the key factors predicted to underpin public perceptions of the quality of their nature experiences15,19,20,21, with lower values of ADI, BI and H, and higher values of AEI, reflecting reduced acoustic diversity and intensity. These indices respond in a similar way when applied to constructed soundscapes generated from simulated communities varying in species richness and abundance, with both increasing abundance and species richness leading to increases in ADI, BI and H and a decrease in AEI (Figs. 1, 2; Tables 1, 2). These relationships are not linear, with the rate of increase in BI and H with increasing abundance lower at higher species richness (Fig. 2; Table 2) and each index becoming less sensitive to changes in community structure as soundscapes become more saturated.
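To make the mechanics concrete, here is a simplified Python sketch of two of these indices in their commonly used formulations (ADI as the Shannon entropy of per-band spectrogram occupancy, AEI as its Gini coefficient). This is a sketch, not the paper's implementation; packages such as scikit-maad or the R soundecology package provide reference versions, and all parameter choices below (band width, threshold) are assumptions.

```python
# Simplified ADI/AEI sketch: split the spectrogram into 1 kHz bands, compute
# the fraction of cells above a dB threshold per band, then summarise with
# Shannon entropy (ADI, higher = more diverse) or a Gini coefficient
# (AEI, higher = less even). Parameter choices are illustrative.
import numpy as np
from scipy.signal import spectrogram

def band_occupancy(x, sr, fmax=10_000, band_hz=1_000, thresh_db=-50.0):
    f, _, sxx = spectrogram(x, fs=sr, nperseg=512)
    db = 10 * np.log10(sxx / (sxx.max() + 1e-20) + 1e-12)  # 0 dB at the peak
    edges = np.arange(0, fmax + band_hz, band_hz)
    return np.array([np.mean(db[(f >= lo) & (f < hi)] > thresh_db)
                     for lo, hi in zip(edges[:-1], edges[1:])])

def adi(occ):
    """Shannon entropy of per-band occupancy."""
    if occ.sum() == 0:
        return 0.0
    p = occ / occ.sum()
    p = p[p > 0]
    return float(-(p * np.log(p)).sum())

def aei(occ):
    """Gini coefficient of per-band occupancy."""
    v = np.sort(occ)
    n = len(v)
    if v.sum() == 0:
        return 0.0
    return float((2 * np.arange(1, n + 1) - n - 1).dot(v) / (n * v.sum()))

# Demo on white noise, which occupies all bands evenly: high ADI, low AEI.
noise = np.random.default_rng(0).standard_normal(10 * 22_050)
occ = band_occupancy(noise, 22_050)
print(f"ADI={adi(occ):.3f}  AEI={aei(occ):.3f}")
```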
Acoustic indices have been used to explore diel and annual patterns in soundscape structure34,35 and to characterise differences in soundscapes across habitats and landscapes33,36,37. However, evidence of changes in soundscape characteristics over longer time periods is currently lacking because of a scarcity in historical soundscape recordings. By reconstructing soundscapes from large-scale bird monitoring datasets and archived recordings of individual species, both predominantly generated by citizen scientists, we are able to explore changes in soundscape quality at sites across North America and Europe over recent decades. We reveal a chronic deterioration in soundscape quality, defined as a reduction in acoustic diversity and/or intensity, across both continents. Our analyses suggest that changes in the composition, diversity and abundance of bird communities are all likely to have contributed to this. Ongoing declines in bird populations13,22 are expected to cause further reductions in soundscape quality and, by extension, a continued dilution of the nature contact experience.
Results and discussion
We identify patterns of significant and broadly parallel declines in ADI, BI and H across both continents since the late 1990s, and a significant increase in AEI in North America over the same period (Fig. 3; Table 3). These changes suggest that natural soundscapes have, overall, become both more homogeneous and quieter. Within these general patterns of reduced soundscape quality, there was substantial site-level variation, with local declines and increases in all four indices occurring across each continent (Fig. 4, Supplementary Fig. 1), while larger-scale geographical patterns in the rates of change in each index are also evident (Supplementary Tables 1, 2). For example, reductions in acoustic diversity (signalled by decreases in ADI and increases in AEI) have been greatest in the North and West of both continents (Supplementary Figs. 2a–d, 3a–d), while soundscape intensity, as measured by BI, has declined most in more northern and eastern areas of North America but shows no spatial pattern across Europe (Supplementary Figs. 2e,f, 3e,f). In contrast, while H has also decreased more in eastern North America, it has also decreased slightly more in the south than in the north (Supplementary Fig. 2g,h). In Europe, H has decreased in northern and western areas but increased slightly towards the south and east (Supplementary Fig. 3g,h).
Local soundscape dynamics are likely to be underpinned by multiple and interacting processes, operating at regional, biome and local levels, which influence species richness and abundance38,39, taxonomic, functional and phylogenetic diversity40,41, and the rate and direction of change in community composition22,42,43,44,45. Overall, there has been a significant decline in both the total number of species and individuals counted during NA-BBS surveys and in the total number of individuals counted during PECBMS surveys over the past 25 years (Supplementary Fig. 4; Supplementary Table 3). Importantly, there were strong positive relationships between site-level trends in ADI, BI and H and site-level trends in both species richness and the total number of individuals counted, with equivalent negative relationships for site-level trends in AEI (Table 4); sites that have experienced greater declines in total abundance and/or species richness also show greater declines in acoustic diversity and intensity, while sites where total abundance and/or species richness has increased tend to show increases in these characteristics (Fig. 5).
There were generally strong correlations in the trends of the four acoustic indices at each site, positive between ADI, H and BI, and negative between AEI and the other three (Supplementary Table 4). However, these patterns were not universal, with all potential combinations of increases or decreases in each index observed (Supplementary Fig. 5). Furthermore, there was substantial variation in the scale of change in a given acoustic index for any given change in species richness or abundance (Fig. 5). Thus, while soundscape dynamics are fundamentally driven by changes in community structure, shifts in soundscape characteristics arising from changes in species composition and/or abundance over time are both multi-dimensional and context-dependent; measures of acoustic richness, evenness, amplitude and heterogeneity respond independently according to both initial community structure and how the call and song characteristics of constituent species compare28. Additional analyses are needed to understand the drivers of both this local site-level variation and the broader geographic patterns in soundscape dynamics, as well as the specific influence of changes in the abundance or occurrence of individual species.
While predominantly driven by community composition, the acoustic properties of reconstructed soundscapes could also be influenced by methodological decisions applied during the construction process. For example, the ratio of individual sound file duration to total soundscape length will influence the degree of overlap between the calls and songs of individuals, while the probability distribution from which playback volume is sampled will determine the relative proportion of near and far individuals in the soundscape. As a consequence, these methodological decisions influence the distribution of acoustic energy within each reconstructed soundscape and thus the absolute values of each acoustic metric29. To explore the implications of these decisions for detecting changes in soundscape characteristics over time, we constructed soundscapes for 1000 simulated communities containing ten randomly selected species that each declined from 10 to 5 individuals over a 6-year period. For each community, we constructed soundscapes using four alternative approaches that altered the ratio of individual sound file duration to total soundscape length or the proportion of near to far individuals. While the methodological decisions applied during soundscape construction influenced the absolute values of the four acoustic indices for a given community, they did not influence the relative impact of changes in community composition on the acoustic indices (Supplementary Fig. 6). Given our focus here is on temporal trends in soundscape characteristics, rather than absolute values of each acoustic index, and that our analyses are based on changes in standardised site-level measures, we believe the temporal and spatial patterns in soundscape characteristics reported here are robust to the soundscape construction rules applied.
Natural soundscapes are under ever-increasing pressure from global biodiversity loss and our results reveal a chronic deterioration in soundscape quality across North America and Europe over recent decades. Although we focus here on birds as the main contributors to natural soundscapes, it is likely that the reduction in quality has been even greater, given parallel declines in many other taxonomic groups that contribute to soundscapes46,47. Furthermore, pervasive increases in anthropogenic noise48 and other sensory pollutants49 are also diluting the nature contact experience. For example, as well as directly impacting human behaviour and well-being50, noise pollution impairs our capacity to perceive natural sounds51 and can limit the acoustic diversity of soundscapes by constraining the bandwidth within which birds sing52,53.
A scarcity of historical recordings means any assessment of changes in natural soundscape characteristics over longer time periods is vulnerable to the impacts of shifting baseline syndrome54, as future soundscapes can only be compared to the potentially already degraded soundscapes of today. Reconstructing soundscapes from species’ records and count data avoids this problem and allows changes in local soundscape characteristics to be explored at spatial scales not possible using field recordings. This approach could also be used to forecast future soundscapes based on projected species’ range shifts under environmental change scenarios. However, we strongly advocate for the increased collection and systematic curation of soundscape field recordings from across habitats and environmental gradients to capture all facets of soundscape dynamics, such as changes in anthropogenic noise and vocalisation behaviour across taxonomic groups, not currently integrated into our reconstructions. The rapid increase in autonomous sound recording tools and their widespread use could be harnessed both to launch standardized soundscape monitoring schemes, and to collect soundscape recordings in less structured citizen science databases55. Such recordings could also be used to derive the vocalisation frequency and duration data needed to further enhance soundscape reconstructions by encoding species-specific insertion criteria in place of the systematic protocol (one individual equals one 25 s sound file) currently applied across all species.
Although visual, auditory, and olfactory senses are all important modalities characterising the nature contact experience19,20, sound is a defining feature15. Our analyses of reconstructed soundscapes reveal previously undocumented changes in the acoustic properties of soundscapes across North America and Europe over the past few decades that signal a reduction in soundscape quality and imply an ongoing dilution of experience associated with nature interactions. While we expect these changes to be evident throughout the year, they are likely to be most pronounced during spring, when birds are most vocally active. Better understanding of exposure to changes in soundscape quality, by mapping them onto spatial patterns of human population density and locations at which nature is accessed, and of the specific soundscape characteristics that support and enhance the nature contact experience15, is now needed to fully appreciate the implications for health and well-being56. Reduced nature connectedness may also be contributing to the global environmental crisis, as there is evidence it can lead to reductions in pro-environmental behaviour5,57,58. The potential for declining soundscape quality to contribute to a negative feedback loop, whereby a decline in the quality of nature contact experiences leads to reduced advocacy and financial support for conservation actions, and thus to further environmental degradation7, must also be recognised and addressed. Conservation policy and action need to ensure the protection and recovery of high-quality natural soundscapes to prevent chronic, pervasive deterioration and associated impacts on nature connectedness and health and well-being.
Methods
Bird data
North America: we used annual bird count data collated under the North American Breeding Bird Survey (NA-BBS: https://www.pwrc.usgs.gov/bbs/) from 1996 to 2017. NA-BBS survey routes, consisting of 50 survey points (hereafter sites) evenly spaced along ~24.5 miles, are distributed across the United States and Canada and are usually surveyed in June. At each site, skilled volunteers conduct a three-minute point count, recording all birds seen or heard within a 400-m radius59.
Europe: we used annual bird count data from 23 survey schemes across 22 countries collated under the Pan-European Common Bird Monitoring Scheme (PECBMS: https://pecbms.info) from 1998 to 2018. In each scheme, skilled volunteers carry out either line transects, point counts or territory mapping at survey sites during the breeding season and record all birds encountered60 (Supplementary Table 5); while methods vary between survey schemes, they are consistent within schemes across the time period included here.
Where count data were reported for subspecies, these were aggregated to species level, and any records of hybrids, or records identified to genus only, were removed. The longitude and latitude of each survey site (for NA-BBS, the first site of each survey route) were also provided by NA-BBS and PECBMS. Not all sites were surveyed in every year and only sites surveyed at least three times during the defined time period were included in analyses. Note that similar results were found when restricting data to sites surveyed in at least 10 years during the defined period.
Sound recordings
Sound files for all species detected on NA-BBS and PECBMS surveys were downloaded from Xeno Canto, an online database of sound recordings of wild birds from around the world (http://www.xeno-canto.org). Specifically, we identified all files longer than 30 s, with associated metadata categorising them as high quality (category “A”) and as either “song”, “call” or “drumming” types; sound files whose type category included the terms “wingbeat”, “flap”, “begging”, “alarm” or “night” were excluded. Sound files downloaded for NA-BBS species were restricted to those recorded in North America and those from PECBMS to recordings made in Europe. If no sound files met these requirements for a given species, we downloaded all files of shorter duration for that species that met the quality and type criteria and stitched repeats of these together to produce files longer than 30 s. Where more than 50 sound files for a given species met our criteria for inclusion, a random selection of 50 was taken for use in subsequent analyses. We used multiple sound files for each species to capture, where possible, between-individual variation in song and call structure, with the sound file(s) for inclusion in specific soundscapes randomly subsampled from this set. If no sound files for a species were available, the sites where that species was detected were removed from subsequent analyses; this represented <1.5% of NA-BBS sites and <3.5% of PECBMS sites. Each downloaded sound file was then standardised to ensure consistent sampling rate, duration and volume. Each file was clipped to the first 27.5 s, with the first 2.5 s of this then removed to produce a 25 s recording. These sound files varied in the quantity of vocalisation they contained according to the song and call characteristics of the focal species. Thus, some included 25 s of continuous song while others included just a single, short burst of sound. The sampling rate was set to 44.1 kHz, and each file normalised with a −6 dB gain before being saved as a mono mp3 output.
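For illustration, this per-file standardisation can be scripted by calling SoX from R. The sketch below is our reading of the processing chain described above, not the authors' actual script; the file names are hypothetical placeholders and the exact SoX effect options should be checked against the SoX manual.

```r
## Sketch of the per-file standardisation, assuming SoX
## (http://sox.sourceforge.net/) is installed and on the system path.
standardise_clip <- function(infile, outfile) {
  # Keep seconds 2.5-27.5 of the recording (a 25 s clip), resample to
  # 44.1 kHz, mix down to a single (mono) channel and normalise the
  # peak level to -6 dB, as described in the text
  cmd <- sprintf("sox %s -r 44100 -c 1 %s trim 2.5 25 norm -6",
                 shQuote(infile), shQuote(outfile))
  system(cmd)
}

standardise_clip("xc_recording.mp3", "xc_recording_25s.mp3")  # hypothetical files
```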
It is important to recognise that the sound recordings used here are taken in the wild and thus inevitably contain some background noise in addition to vocalisations of the target species, and that this may influence the acoustic properties of the constructed soundscapes to some extent. To minimise this, we selected only Quality “A” recordings and clipped out 25 s from the beginning of each of these for use in soundscape construction, on the assumption that the named focal species will be more dominant in these recordings and that it is most likely to be vocalising towards the beginning of a submitted recording. Furthermore, any background noise is expected to be both random in acoustic structure and randomly distributed across the sound files of species considered here; we see no plausible reason why, for example, the field recordings of increasing or declining species would be more or less likely to contain background noise. Our systematic approach to soundscape construction and our analyses of trends in standardised site-level acoustic metrics also limits the potential of background noise to cause directional bias in the results reported and, if anything, it is expected to have reduced our ability to detect changes in soundscape characteristics.
In total, count data were available for 202,737 sites and 620 species in North America, with a mean ± SE of 15.62 ± 0.6 sound files available per species. For Europe, count data were available for 16,524 sites and 447 species, with 21.05 ± 0.9 sound files per species.
Soundscape reconstruction
This is described in detail in the main text.
Soundscape characteristics
Four acoustic indices were used to explore changes in the acoustic properties of reconstructed soundscapes. The Acoustic Diversity Index (ADI) uses the Shannon–Wiener index to estimate acoustic diversity, dividing spectrograms into frequency bands and calculating the proportion of each band occupied by sounds above a set amplitude threshold30. Higher values represent a more even distribution of sound across frequencies and are associated with increased species richness. The Acoustic Evenness Index (AEI) uses a similar approach, dividing spectrograms into frequency bands but using the Gini coefficient to measure the evenness of sound distribution across them30. It is therefore negatively related to ADI, with higher values representing a greater unevenness between frequency bands, suggesting dominance by fewer species. Increases in abundance are expected to have less impact on ADI and AEI than increases in species richness as the songs of individuals from the same species will broadly occupy the same frequency space. The Bioacoustic Index (BI) measures variation in amplitude across a range of frequencies by calculating the dB spectrum across frequencies and quantifying the area under the curve31. BI is expected to increase with both increases in abundance and species richness. Total Acoustic Entropy (H) is defined as the product of spectral and temporal entropies and quantifies variation in amplitude across frequency bands and time using the Shannon–Wiener index32. It increases with both species richness and abundance following a logarithmic model28,32. As soundscapes become saturated, the influence of additional species and/or individuals on BI and H is expected to decrease. Default settings were used for each acoustic index except BI, where the maximum frequency was set to 22,050 Hz.
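As a concrete sketch, the four indices can be computed for a single soundscape file with the R packages used in this study; the BI call reflects the 22,050 Hz maximum frequency noted above, all other arguments are left at their package defaults, and the file name is a hypothetical placeholder.

```r
library(tuneR)         # readMP3()
library(seewave)       # H(): total acoustic entropy
library(soundecology)  # acoustic_diversity(), acoustic_evenness(), bioacoustic_index()

wave <- readMP3("site_year_soundscape.mp3")  # hypothetical soundscape file

adi <- acoustic_diversity(wave)                   # ADI: Shannon index across frequency bands
aei <- acoustic_evenness(wave)                    # AEI: Gini coefficient across frequency bands
bi  <- bioacoustic_index(wave, max_freq = 22050)  # BI: area under the dB spectrum
h   <- H(wave, f = wave@samp.rate)                # H: spectral x temporal entropy
```

For a mono file, the soundecology functions report their values on the left channel (e.g. `adi$adi_left`), while `H()` returns a single number.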
We initially generated soundscapes for a series of simulated communities to confirm that the acoustic indices respond as expected when calculated from artificial soundscapes. Firstly, we calculated ADI, AEI, BI and H for soundscapes derived from communities comprising 1 to 10, 20, 30, 40 or 50 individuals of each species in turn. Given the randomised selection of sound files, insertion point and playback volume, we iterated this process 1000 times for each species-abundance combination. Next, we constructed communities containing 2, 3, 4, 5, 10, 20 or 50 species, with 1–10 individuals of each species present, i.e. 70 communities in total. We iterated this process 100 times for each species richness-abundance combination, randomly selecting species for inclusion from the NA-BBS species pool, and a further 100 times, randomly selecting species from the PECBMS species pool. Again, the four acoustic indices were calculated for each soundscape produced.
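In outline, the multi-species simulation corresponds to a loop like the one below, where `build_soundscape()` and `acoustic_indices()` are hypothetical stand-ins for the soundscape construction procedure and the index calculations sketched above, and `nabbs_species_pool` is an assumed vector of species names.

```r
## Outline of the simulated-community design (70 richness-abundance
## combinations, 100 iterations each per species pool). The helper
## functions and the species pool vector are placeholders, not real
## package functions.
richness  <- c(2, 3, 4, 5, 10, 20, 50)
abundance <- 1:10

sim <- do.call(rbind, lapply(seq_len(100), function(iter) {
  do.call(rbind, lapply(richness, function(s) {
    do.call(rbind, lapply(abundance, function(a) {
      species   <- sample(nabbs_species_pool, s)  # random draw from the NA-BBS pool
      community <- rep(species, times = a)        # a individuals of each species
      cbind(richness = s, abundance = a,
            acoustic_indices(build_soundscape(community)))
    }))
  }))
}))
```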
Annual soundscapes for each NA-BBS and PECBMS site were constructed from each site-year count file and the four acoustic indices were calculated for each. Given the randomised selection of the specific sound file, insertion point, and playback volume used to represent each individual during the construction of each soundscape, this process was iterated five times, with each acoustic index averaged across these five site-year iterations for use in subsequent analyses. For all PECBMS sites and for the first site of each NA-BBS route, the soundscape generated from the fifth iteration was saved as an .mp3 file. All sound file processing and soundscape construction was undertaken using the Sound eXchange programme (SoX: http://sox.sourceforge.net/) and acoustic indices were calculated using the R packages ‘seewave’32, ‘soundecology’61 and ‘tuneR’62 in R v3.5.163.
Finally, we tested the sensitivity of soundscape characteristics to key parameters imposed during construction. While predominantly driven by community composition, the acoustic properties of constructed soundscapes could also be influenced by rules that determine the degree of overlap between individual sound files and their amplitude. First, we generated a community of 10 randomly selected European bird species and specified declines in each species from 10 to 5 individuals over a 6-year period. For each year, we then constructed four soundscapes and extracted the associated acoustic indices for each. The first soundscape type was built using the methods described above. The second was built by inserting sound files into a 3-min soundscape, to increase the degree of overlap, while the third was built by inserting sound files into a 10-min soundscape to decrease the degree of overlap. Finally, we reverted to a 5-min soundscape but randomly sampled playback volume for each sound file from a half-normal distribution. This increased the relative proportion of distant vocalisations and may be more representative of point count data, where the area surveyed increases with increasing distance; though note this is likely to be offset by reduced detectability at greater distances. This process was iterated for 1000 randomly sampled communities of 10 species.
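The fourth variant's half-normal volume sampling can be expressed as the absolute value of a zero-mean normal draw concentrated near zero amplitude. In the sketch below, the distribution parameters and the uniform comparison are illustrative assumptions, not values taken from the text.

```r
## Sketch of playback-amplitude sampling for the sensitivity analysis.
## Parameter values are illustrative; the construction's actual default
## volume distribution is described in the main text.
n_files <- 50

# Contrast case: amplitudes drawn uniformly, mixing near (loud) and
# far (quiet) individuals evenly
amp_uniform <- runif(n_files, min = 0, max = 1)

# Fourth variant: half-normal draw concentrated near zero, so most
# individuals play back quietly, as if distant
amp_halfnormal <- pmin(abs(rnorm(n_files, mean = 0, sd = 0.4)), 1)
```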
Statistical analyses
Response of acoustic indices to changes in community structure
To confirm that acoustic indices respond to changes in species richness and abundance, we fitted General Linear Models (GLMs) to outputs for the simulated single and multi-species communities. In each model, the mean acoustic index across all iterations was fitted as the response variable. For the single-species communities, the log number of individuals was fitted as the explanatory variable and for the multi-species communities, the log number of individuals, log number of species and their interaction were fitted as explanatory variables. Separate models were fitted to the North American and European data and for each acoustic index in turn.
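In R, these fits reduce to ordinary Gaussian GLMs. A minimal sketch, assuming data frames `sim_single` and `sim_multi` that hold the mean index per simulated community (the column names are ours):

```r
## Sketch of the simulated-community models; column names are illustrative.
# Single-species communities: mean index as a function of log abundance
m_single <- glm(mean_index ~ log(n_individuals), data = sim_single)

# Multi-species communities: log abundance, log richness and their interaction
m_multi <- glm(mean_index ~ log(n_individuals) * log(n_species), data = sim_multi)
summary(m_multi)  # fitted separately for each index and each continent
```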
Site-level changes in acoustic indices
We standardised each acoustic index within each site (by subtracting the mean site-level measure from the annual value and dividing by the site-level standard deviation64) prior to analysis to account for any potential differences in detectability or observer effects between sites, differing sampling protocols across survey schemes, and for initial community structure. In all analyses, separate models were constructed for North American (204,813 sites on 4197 routes spanning 22 years) and European data (16,524 sites spanning 21 years), and for each acoustic index in turn. To explore large-scale temporal trends while accounting for any geographic differences in acoustic characteristics, we fitted Gaussian General Linear Mixed Models (GLMMs) via the R package ‘lme4’65. Standardised annual site-level values for each acoustic index were fitted as the response variable, with latitude, longitude and year (continuous) as fixed effects. To account for non-independence of soundscapes from the same site, random effects of site and year were included in all models, along with route and state (North America models, Eq. (1a)) or country (Europe models, Eq. (1b)). To assess the importance of fixed effects, we performed a likelihood ratio test by comparing models with and without a particular term, reporting the χ2 value and associated significance. Spatial autocorrelation of modelled residuals was examined using Moran’s I, separately for each year, using the package ‘ape’66. While significant spatial autocorrelation was found, the estimates were negligible in size (Supplementary Table 6), and spatial autocorrelation was therefore ignored in subsequent analyses. To explicitly explore how temporal trends in the acoustic properties of reconstructed soundscapes varied geographically, we refitted the models described above, including latitude*year and longitude*year interaction terms. To visualise the large-scale annual variation in acoustic properties we refitted these models with year included as a categorical rather than a continuous variable, with predictions from these models providing continent-level annual estimates for each acoustic index (Fig. 3).
To explore the relationships between site-level trends in each acoustic index, we fitted GLMs with the standardised annual values for each index as the response variable and year (continuous) as the explanatory variable (Eq. (2)). This resulted in an independent estimate of the rate of change in each acoustic index at each site. For all six possible pairwise comparisons between acoustic indices, we used Pearson’s correlation coefficients to estimate the magnitude of the association between their site-level trends. All statistical analyses were carried out in R v3.5.163.
$$\mathrm{Standardised\ acoustic\ index}_{i,t} \sim \beta_0 + \beta_1 \mathrm{Latitude}_i + \beta_2 \mathrm{Longitude}_i + \beta_3 \mathrm{Year}_t + \alpha_{1i} \mathrm{Site}_i + \alpha_{2t} \mathrm{Year}_t + \alpha_{3j} \mathrm{State}_j + \alpha_{4k} \mathrm{Route}_k + \varepsilon_{i,t} \qquad (1a)$$

$$\alpha_{1i} \sim N\left(0, \sigma_{\alpha_1}^2\right), \quad \alpha_{2t} \sim N\left(0, \sigma_{\alpha_2}^2\right), \quad \alpha_{3j} \sim N\left(0, \sigma_{\alpha_3}^2\right), \quad \alpha_{4k} \sim N\left(0, \sigma_{\alpha_4}^2\right), \quad \varepsilon_{i,t} \sim N\left(0, \sigma_{\varepsilon}^2\right)$$

where i = site, t = year, j = state, k = route

$$\mathrm{Standardised\ acoustic\ index}_{i,t} \sim \beta_0 + \beta_1 \mathrm{Latitude}_i + \beta_2 \mathrm{Longitude}_i + \beta_3 \mathrm{Year}_t + \alpha_{1i} \mathrm{Site}_i + \alpha_{2t} \mathrm{Year}_t + \alpha_{3j} \mathrm{Country}_j + \varepsilon_{i,t} \qquad (1b)$$

$$\alpha_{1i} \sim N\left(0, \sigma_{\alpha_1}^2\right), \quad \alpha_{2t} \sim N\left(0, \sigma_{\alpha_2}^2\right), \quad \alpha_{3j} \sim N\left(0, \sigma_{\alpha_3}^2\right), \quad \varepsilon_{i,t} \sim N\left(0, \sigma_{\varepsilon}^2\right)$$

where i = site, t = year, j = country

$$\mathrm{Standardised\ acoustic\ index}_{t} \sim \beta_0 + \beta_1 \mathrm{Year}_t + \varepsilon_t \qquad (2)$$

$$\varepsilon_t \sim N\left(0, \sigma_{\varepsilon}^2\right)$$

where t = year
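For concreteness, Eq. (1a) maps onto lme4 syntax roughly as follows. The data frame and column names are ours, and the z-scoring step mirrors the within-site standardisation described above; this is a sketch, not the authors' actual code.

```r
library(lme4)

# Within-site z-scoring of an acoustic index (illustrative column names)
d$std_index <- ave(d$adi, d$site,
                   FUN = function(x) (x - mean(x)) / sd(x))
d$year_f <- factor(d$year)  # factor copy of year for the random effect

# Eq. (1a): fixed effects of latitude, longitude and year (continuous);
# random intercepts for site, year, state and route
m1 <- lmer(std_index ~ latitude + longitude + year +
             (1 | site) + (1 | year_f) + (1 | state) + (1 | route),
           data = d, REML = FALSE)

# Likelihood ratio test for the year effect (chi-squared and p-value)
m0 <- update(m1, . ~ . - year)
anova(m0, m1)

# Continent-level annual estimates (Fig. 3) come from refitting with year
# as a factor in the fixed effects and predicting from that model; spatial
# autocorrelation of residuals can be checked per year with
# ape::Moran.I(residuals(m1), w), where w is an inverse-distance weight matrix.
```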
To explore large-scale temporal trends in the total number of individuals and species recorded on NA-BBS and PECBMS surveys, we fitted two additional GLMMs. Standardised annual site-level values of the total number of (a) individuals or (b) species were fitted as response variables, with latitude, longitude and year (continuous) as fixed effects. To account for non-independence in community structure from the same site, random effects of site and year were included in all models, along with route and state (North America models) or country (Europe models). Model structures were therefore equivalent to those set out in Eqs. (1a) and (1b), albeit with different dependent variables. We then refitted these models including year as a categorical rather than a continuous variable to visualise the large-scale annual variation, and used predictions from these models to provide continent-level annual estimates for total abundance and species richness (Supplementary Fig. 5).
To explore the site-level relationships between trends in total number of individuals, total number of species and acoustic indices, we first fitted GLMs with either the standardised total number of (a) individuals or (b) species as response variables and year (continuous) as the explanatory variable at each site. These models were therefore equivalent in structure to that described in Eq. (2) and resulted in independent estimates of the rates of change in the total number of individuals and species at each site. We then fitted separate GLMMs for each acoustic index in each continent in turn, with the site-level trend in the acoustic index as the response variable and site-level trends in the total number of individuals and the total number of species and their interaction as fixed effects. State was included as a random effect in the North American models and country as a random effect in the European models. To incorporate the error associated with site-level trend estimates, we used a bootstrapping procedure in our assessment of the significance of the modelled effects. We generated 1000 new estimates for each variable (site-level trend in: acoustic index, total number of individuals and total number of species) by randomly sampling from a normal distribution with a mean equal to the site-level trend and standard deviation equal to the standard error of the site-level trend. The GLMMs were then fitted over each of the 1000 datasets separately. We present the results of a final model carried out on the original site-level estimates, as well as the proportion of times each fixed effect included in the final model was significant across the 1000 bootstrapped datasets (p < 0.05). Non-significant interaction terms were removed from the models.
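The site-level trends (Eq. (2)), their pairwise correlations and the bootstrap that propagates the trends' standard errors might be sketched as follows, again with illustrative object names. Vectors such as `ab_trend`/`ab_se` (abundance) and `sp_trend`/`sp_se` (richness) are assumed to have been computed per site in the same way, and the interaction term described in the text is omitted here for brevity.

```r
library(lme4)

## Per-site trend (Eq. (2)): slope of the standardised index on year
site_fits <- lapply(split(d, d$site),
                    function(x) summary(lm(std_index ~ year, data = x)))
trend    <- sapply(site_fits, function(s) coef(s)["year", "Estimate"])
trend_se <- sapply(site_fits, function(s) coef(s)["year", "Std. Error"])

## Pairwise Pearson association between the trends of two indices, e.g.
# cor.test(trend_adi, trend_aei, method = "pearson")

## Bootstrap: redraw each site-level trend from N(estimate, SE), refit the
## GLMM 1000 times, record whether the abundance-trend effect is significant
sig <- replicate(1000, {
  boot <- data.frame(
    index_tr = rnorm(length(trend), trend, trend_se),
    abund_tr = rnorm(length(ab_trend), ab_trend, ab_se),  # trend in total individuals
    rich_tr  = rnorm(length(sp_trend), sp_trend, sp_se),  # trend in species richness
    state    = site_state)                                # grouping for the random effect
  m_full <- lmer(index_tr ~ abund_tr + rich_tr + (1 | state),
                 data = boot, REML = FALSE)
  m_red  <- update(m_full, . ~ . - abund_tr)
  anova(m_red, m_full)$`Pr(>Chisq)`[2] < 0.05
})
mean(sig)  # proportion of bootstrap refits in which the effect is significant
```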
Reporting summary
Further information on research design is available in the Nature Research Reporting Summary linked to this article.
# [texhax] style latex
Wed Mar 21 11:51:16 CET 2012
On Wed, Mar 21, 2012 at 10:39:50AM +0000, Mohamed HOUSSNI wrote:
> Can someone help me ? I'm looking for a way to program a latex
> environment with several paragraphs, but after compilation it
> generates a single paragraph. Is this possible? or is there a
> package that does this?
I don't understand the purpose and perhaps I have misunderstood
the question.
```latex
\documentclass{article}

% Redefine \par as a no-op inside the environment: blank lines no longer
% end paragraphs, so the contents are typeset as a single paragraph.
\newenvironment{nopar}{\let\par\relax}{}

\begin{document}
Outside nopar.
\begin{nopar}
First paragraph.

Second paragraph.

Third paragraph.
\end{nopar}
Outside nopar.
\end{document}
```
Yours sincerely
Heiko Oberdiek
# Search for publications in DiVA (liu.se): 1–50 of 2445 results
• 1.
Linköping University, Department of Medical and Health Sciences, Division of Drug Research. Linköping University, Faculty of Medicine and Health Sciences. Department of Forensic Genetics and Forensic Toxicology, National Board of Forensic Medicine, 58758 Linköping, Sweden.
25C-NBOMe and 25I-NBOMe metabolite studies in human hepatocytes, in vivo mouse and human urine with high-resolution mass spectrometry (2017). In: Drug Testing and Analysis, ISSN 1942-7603, E-ISSN 1942-7611, Vol. 9, no. 5, pp. 680–698. Article in journal (Refereed)
25C-NBOMe and 25I-NBOMe are potent hallucinogenic drugs that recently emerged as new psychoactive substances. To date, a few metabolism studies were conducted for 25I-NBOMe, whereas 25C-NBOMe metabolism data are scarce. Therefore, we investigated the metabolic profile of these compounds in human hepatocytes, an in vivo mouse model and authentic human urine samples from forensic cases. Cryopreserved human hepatocytes were incubated for 3 h with 10 μM 25C-NBOMe and 25I-NBOMe; samples were analyzed by liquid chromatography high-resolution mass spectrometry (LC-HRMS) on an Accucore C18 column with a Thermo QExactive; data analysis was performed with Compound Discoverer software (Thermo Scientific). Mice were administered 1.0 mg drug/kg body weight intraperitoneally, urine was collected for 24 h and analyzed (with or without hydrolysis) by LC-HRMS on an Acquity HSS T3 column with an Agilent 6550 QTOF; data were analyzed manually and with WebMetabase software (Molecular Discovery). Human urine samples were analyzed similarly. In vitro and in vivo results matched well. 25C-NBOMe and 25I-NBOMe were predominantly metabolized by O-demethylation, followed by O-di-demethylation and hydroxylation. All methoxy groups could be demethylated; hydroxylation preferably occurred at the NBOMe ring. Phase I metabolites were extensively conjugated in human urine with glucuronic acid and sulfate. Based on these data and a comparison with synthesized reference standards for potential metabolites, specific and abundant 25C-NBOMe urine targets are 5'-desmethyl 25C-NBOMe, 25C-NBOMe and 5-hydroxy 25C-NBOMe, and for 25I-NBOMe 2' and 5'-desmethyl 25I-NBOMe and hydroxy 25I-NBOMe. These data will help clinical and forensic laboratories to develop analytical methods and to interpret results. Copyright © 2016 John Wiley & Sons, Ltd.
• 2.
36 § avtalslagen – till konsumentens värn?: Högsta domstolens domskäls förenlighet med syftet enligt prop. 1975/76:81 [Section 36 of the Contracts Act – in defence of the consumer?: The compatibility of the Supreme Court's reasoning with the purpose of Government Bill 1975/76:81] (2015). Independent thesis Basic level (degree of Bachelor), 10 credits / 15 HE credits. Student thesis
Provision 36 of the Swedish Contract Act (SFS 1915:218) was introduced in 1976 in order to ensure legal protection for consumers in their relation to traders. The requisite ”unreasonably”, however, is on the one hand claimed to be too imprecise to be perceived as legally certain by contracting parties. On the other hand, there is the perception that the purpose of the general clause is to leave room for a legally freer assessment. The purpose of this essay is to, with the data which can be provided by legislative history and the jurisprudence of the Supreme Court, examine whether the grounds, which are determined to fulfil the original purpose of the general clause, are satisfactory from a legal certainty perspective. In the legislative history, we can see that the inquiry established that the introduction of a general clause in the Swedish Contract Act would be a valuable addition from a legally secure and predictable perspective. The inquiry then deemed that an adjustment of unfair terms would be preferable in cases where contracts are to be regulated using the general clause. As to whether a term can be considered unfair, the inquiry's opinion was that the content of the agreement, the circumstances that existed when the contract was agreed, conditions that arose later and the circumstances in general had to be taken into account. The majority of the respondents concurred with the inquiry's opinion about the reading of the general clause. However, according to the respondents' assessments, the word ”unreasonably” was preferable to the word ”unfair”. The rapporteur and the Council on Legislation considered that the inquiry had reported too vaguely on the reading of the general clause, but they agreed with the respondents that the word ”unreasonably” was to be preferred over ”unfair”. From the court cases presented in the essay, it is possible to determine that the most prominent grounds established by the Supreme Court are predictability, the inferior position of the consumer and the clarity of contractual terms. In a predominant number of court cases we believe the Supreme Court's verdicts to be consistent with the purpose which permeated the legislative history. A few verdicts, however, are ambiguous, since we believe that the Supreme Court's reasoning, in the assessment of the clarity of certain contractual terms, is inconsistent. We have found that the consequence of this is that far too high demands are placed on the consumer, to the extent that he is required to have firsthand knowledge of current legislation. Another problem with provision 36 of the Swedish Contract Act seems to be that its enforcement in some cases leads to conflict with legal principles, for instance the principle of predictability and the principle established within contract law, "pacta sunt servanda".
• 3.
Linköping University, Department of Electrical Engineering, Automatic Control. Linköping University, The Institute of Technology.
Linköping University, Department of Electrical Engineering, Automatic Control. Linköping University, The Institute of Technology.
3D Content-Based Model Matching using Geometric Features (2006). Report (Other academic)
We present an approach that utilizes efficient geometric feature extraction and a matching method that takes articulation into account. It is primarily applicable to man-made objects. First, the object is analyzed to extract geometric features; dimensions and rotation are estimated, and typical parts, so-called functional parts, are identified. Examples of functional parts are a box's lid, a building's chimney, or a battle tank's barrel. We assume a model library with full annotation. The geometric features are matched with the model descriptors to gain fast and early rejection of non-relevant models. After this pruning, the object is matched with relevant, usually few, library models. We propose a sequential matching, where the number of functional parts increases in each iteration. The division into parts increases the possibility of a correct matching result when several similar models are available. The approach is exemplified with a vehicle recognition application, where some vehicles have functional parts.
• 4.
Linköping University, Department of Electrical Engineering, Information Coding. Linköping University, The Institute of Technology.
3D Graphics Technologies for Web Applications: An Evaluation from the Perspective of a Real World Application (2012). Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis
Web applications are becoming increasingly sophisticated and functionality that was once exclusive to regular desktop applications can now be found in web applications as well. One of the more recent advances in this field is the ability for web applications to render 3D graphics. Coupled with the growing number of devices with graphics processors and the ability of web applications to run on many different platforms using a single code base, this represents an exciting new possibility for developers of 3D graphics applications.
This thesis aims to explore and evaluate the technologies for 3D graphics that can be used in web applications, with the final goal of using one of them in a prototype application. This prototype will serve as a foundation for an application to be included in a commercial product. The evaluation is performed using general criteria so as to be useful for other applications as well, with one part presenting the available technologies and another part evaluating the three most promising technologies more in-depth using test programs.
The results show that, although some technologies are not production-ready, there are a few which can be used in commercial software, including the three chosen for further evaluation; WebGL, the Java library JOGL and Stage 3D for Flash. Among these, there is no clear winner and it is up to the application requirements to decide which to use. The thesis demonstrates an application built with WebGL and shows that fairly demanding 3D graphics web applications can be built. Also included are the lessons learned during the development and thoughts on the future of 3D graphics in web applications.
• 5.
Linköping University, Department of Electrical Engineering.
3D visualization of weather radar data (2002). Independent thesis Basic level (professional degree). Student thesis
There are 12 weather radars operated jointly by SMHI and the Swedish Armed Forces in Sweden. Data from them are used for short-term forecasting and analysis. The traditional way of viewing data from the radars is in 2D images, even though 3D polar volumes are delivered from the radars. The purpose of this work is to develop an application for 3D viewing of weather radar data.
There are basically three approaches to visualization of volumetric data, such as radar data: slicing with cross-sectional planes, surface extraction, and volume rendering. The application developed during this project supports variations on all three approaches. Different objects, e.g. horizontal and vertical planes, isosurfaces, or volume rendering objects, can be added to a 3D scene and viewed simultaneously from any angle. Parameters of the objects can be set using a graphical user interface and a few different plots can be generated.
Compared to the traditional 2D products used by meteorologists when analyzing radar data, the 3D scenes add information that makes it easier for the users to understand the given weather situations. Demonstrations and discussions with meteorologists have rendered positive reactions. The application will be installed and evaluated at Arlanda airport in Sweden.
• 6.
Uppsala University, Sweden.
Linköping University, Department of Clinical and Experimental Medicine, Cell Biology. Linköping University, Faculty of Health Sciences. Uppsala University, Sweden. Uppsala University, Sweden. Uppsala University, Sweden. Sahlgrenska University Hospital, Sweden. Institute of Technology, Dublin, Ireland. Karolinska Institutet, Stockholm, Sweden. Institute of Technology, Dublin, Ireland. Uppsala University, Sweden. Linköping University, Department of Clinical and Experimental Medicine, Cell Biology. Linköping University, Faculty of Health Sciences. Uppsala University, Sweden.
450K-array analysis of chronic lymphocytic leukemia cells reveals global DNA methylation to be relatively stable over time and similar in resting and proliferative compartments (2013). In: Leukemia, ISSN 0887-6924, E-ISSN 1476-5551, Vol. 27, no. 1, pp. 150–158. Article in journal (Refereed)
In chronic lymphocytic leukemia (CLL), the microenvironment influences gene expression patterns; however, knowledge is limited regarding the extent to which methylation changes with time and exposure to specific microenvironments. Using high-resolution 450K-arrays, we provide the most comprehensive DNA methylation study of CLL to date, analysing paired diagnostic/follow-up samples from IGHV-mutated/untreated and IGHV-unmutated/treated patients (n=36) and patient-matched peripheral blood and lymph node samples (n=20). On an unprecedented scale, we revealed 2239 differentially methylated CpG sites between IGHV-mutated and unmutated patients, with the majority of sites positioned outside annotated CpG islands. Intriguingly, CLL prognostic genes (e.g. CLLU1, LPL, ZAP70, NOTCH1), epigenetic regulator genes (e.g. HDAC9, HDAC4, DNMT3B), B-cell signaling genes (e.g. IBTK) and numerous TGF-ß and NF-κB/TNF pathway genes were differentially methylated between subgroups. In contrast, DNA methylation over time was deemed rather stable, with few recurrent changes noted within subgroups. Although a larger number of non-recurrent changes were identified among IGHV-unmutated relative to mutated cases over time, these equated to a low global change. Similarly, few changes were identified between compartments. Altogether, we reveal CLL subgroups to display unique methylation profiles and unveil methylation as relatively stable over time and similar within different CLL compartments, implying aberrant methylation as an early leukemogenic event. Leukemia accepted article preview online, 27 August 2012; doi:10.1038/leu.2012.245.
• 7.
Linköping University, Department of Department of Health and Society, Division of Physiotherapy. Linköping University, Faculty of Health Sciences.
Linköping University, Department of Department of Health and Society, Division of Physiotherapy. Linköping University, Faculty of Health Sciences. Linköping University, Department of health and environment. Linköping University, Faculty of Health Sciences. Linköping University, Department of health and environment. Linköping University, Faculty of Health Sciences.
A 12-year follow-up of subjects initially sicklisted with neck/shoulder or low back diagnoses (2001). In: Physiotherapy Research International, ISSN 1358-2267, E-ISSN 1471-2865, Vol. 6, no. 1, pp. 52–63. Article in journal (Refereed)
Background and Purpose: Neck/shoulder and low back pain are common in the Western world and can have considerable personal and economic consequences, but so far there are few long-term follow-up studies of the consequences of back pain, especially studies that separate the location of back pain. More knowledge is needed about different patterns of risk factors and prognoses for neck/shoulder and low back pain, respectively, and they should not be treated as similar conditions. The aim of the present study was to investigate possible long-term differences in neck/shoulder and low back symptoms, experienced over a 12-year period, with regard to work status, present health, discomfort and influence on daily activities.
Method: A retrospective cohort study of individuals sicklisted with neck/shoulder or low back diagnoses 12 years ago was undertaken. Included were all 213 people who, in 1985, lived in the municipality of Linköping, Sweden, were aged 25–34 years and who had taken at least one new period of sickleave lasting >28 days with a neck/shoulder or low back diagnosis. In 1996, a questionnaire was mailed to the 204 people who were still resident in Sweden (response rate 73%).
Results: Those initially absent with neck/shoulder diagnoses rated their present state of discomfort as worse than those sicklisted with low back diagnoses. Only 4% of the neck/shoulder group reported no present discomfort compared with 25% of the low back group. Notably, both groups reported the same duration of low back discomfort during the last year, which may indicate a higher risk for symptoms in more than one location for subjects with neck/shoulder problems.
Conclusions: Individuals with sickness absence of more than 28 days with neck/shoulder or low back diagnoses appear to be at high risk of developing long-standing symptoms, significantly more so for those initially having neck/shoulder diagnoses.
• 8.
Queen Mary University of London, England .
Linköping University, Faculty of Arts and Sciences. Linköping University, Department of Behavioural Sciences and Learning.
A 3 year update on the influence of noise on performance and behavior (2012). In: Noise & Health, ISSN 1463-1741, Vol. 14, no. 61, pp. 292–296. Article in journal (Refereed)
The effect of noise exposure on human performance and behavior continues to be a focus for research activities. This paper reviews developments in the field over the past 3 years, highlighting current areas of research, recent findings, and ongoing research in two main research areas: field studies of noise effects on children's cognition and experimental studies of auditory distraction. Overall, the evidence for the effects of external environmental noise on children's cognition has strengthened in recent years, with the use of larger community samples and better noise characterization. Studies have begun to establish exposure-effect thresholds for noise effects on cognition. However, the evidence remains predominantly cross-sectional and future research needs to examine whether sound insulation might lessen the effects of external noise on children's learning. Research has also begun to explore the link between internal classroom acoustics and children's learning, aiming to further inform the design of the internal acoustic environment. Experimental studies of the effects of noise on cognitive performance are also reviewed, including functional differences in varieties of auditory distraction, semantic auditory distraction, individual differences in susceptibility to auditory distraction, and the role of cognitive control on the effects of noise on understanding and memory of target speech materials. In general, the results indicate that there are at least two functionally different types of auditory distraction: one due to the interruption of processes (as a result of attention being captured by the sound), another due to interference between processes. The magnitude of the former type is related to individual differences in cognitive control capacities (e.g., working memory capacity); the magnitude of the latter is not. Few studies address noise effects on behavioral outcomes, emphasizing the need for researchers to explore noise effects on behavior in more detail.
• 9.
Linköping University, Department of Electrical Engineering, Computer Engineering. Linköping University, Faculty of Science & Engineering.
A 4096-Point Radix-4 Memory-Based FFT Using DSP Slices (2017). In: IEEE Transactions on Very Large Scale Integration (VLSI) Systems, ISSN 1063-8210, E-ISSN 1557-9999, Vol. 25, no. 1, pp. 375–379. Article in journal (Refereed)
This brief presents a novel 4096-point radix-4 memory-based fast Fourier transform (FFT). The proposed architecture follows a conflict-free strategy that only requires a total memory of size N and a few additional multiplexers. The control is also simple, as it is generated directly from the bits of a counter. Apart from the low complexity, the FFT has been implemented on a Virtex-5 field programmable gate array (FPGA) using DSP slices. The goal has been to reduce the use of distributed logic, which is scarce in the target FPGA. To this end, most of the hardware has been implemented in DSP48E slices. As a result, the proposed FPGA implementation is efficient in terms of hardware resources, as is shown by the experimental results.
• 10.
Linköping University, Department of Physics, Chemistry and Biology, Molecular genetics.
A bioinformatics approach to investigate the function of non specific lipid transfer proteins in Arabidopsis thaliana (2010). Independent thesis Advanced level (degree of Master (Two Years)), 30 credits / 45 HE credits. Student thesis
Plant non specific lipid transfer proteins (nsLTPs) enhance in vitro transfer of phospholipids between membranes. Our analysis exploited the large amount of Arabidopsis transcriptome data in public databases to learn more about the function of nsLTPs. The analysis revealed that some nsLTPs are expressed only in roots, some are seed specific, and others are specific for tissues above ground, whereas certain nsLTPs show a more general expression pattern. Only a few nsLTPs showed strong up- or downregulation after the Arabidopsis plant had suffered biotic or abiotic stresses. However, salt, high osmosis and UV-B radiation caused upregulation of some nsLTP genes. Further, when the coexpression patterns of the A. thaliana nsLTPs were investigated, we found that there were several modules of nsLTP genes that showed strong coexpression, indicating an involvement in related biological processes. Our findings reveal that nsLTP gene expression was significantly correlated with lipase and peroxidase activity. Hence we concluded that the nsLTPs may play a role in seed germination, signalling and lignin biosynthesis.
• 11.
Linköping University, Department of Electrical Engineering.
A calibration method for laser-triangulating 3D cameras (2008). Independent thesis Advanced level (professional degree), 20 credits / 30 HE credits. Student thesis
A laser-triangulating range camera uses a laser plane to light an object. If the position of the laser relative to the camera, as well as certain properties of the camera, is known, it is possible to calculate the coordinates of all points along the profile of the object. If either the object or the camera and laser has a known motion, it is possible to combine several measurements to get a three-dimensional view of the object.
Camera calibration is the process of finding the properties of the camera and enough information about the setup so that the desired coordinates can be calculated. Several methods for camera calibration exist, but this thesis proposes a new method that has the advantages that the objects needed are relatively inexpensive and that only objects in the laser plane need to be observed. Each part of the method is given a thorough description. Several mathematical derivations have also been added as appendices for completeness.
The proposed method is tested using both synthetic and real data. The results show that the method is suitable even when high accuracy is needed. A few suggestions are also made about how the method can be improved further.
• 12.
Linköping University, Department of Computer and Information Science, CASL - Cognitive Autonomous Systems Laboratory. Linköping University, The Institute of Technology.
A case-based approach to dialogue systems (2010). In: Journal of Experimental and Theoretical Artificial Intelligence (Print), ISSN 0952-813X, E-ISSN 1362-3079, Vol. 22, no. 1, pp. 23–51. Article in journal (Refereed)
We describe an approach to integrate dialogue management, machine learning and action planning in a system for dialogue between a human and a robot. Case-based techniques are used because they permit life-long learning from experience and demand little prior knowledge and few static hand-written structures. This approach has been developed through the work on an experimental dialogue system, called CEDERIC, that is connected to an unmanned aerial vehicle (UAV). A single case base and case-based reasoning engine is used both for understanding and for planning actions by the UAV. Dialogue experiments with both experienced and novice users, in which the users solved tasks by dialogue with this system, showed very adequate success rates.
• 13.
Linköping University, Department of Electrical Engineering.
A class of Mth-band linear-phase FIR filters synthesized using the frequency-response masking approach (2002). In: Nordic Signal Processing Symposium, 2002. Conference paper (Refereed)
This paper introduces a class of Mth-band linear-phase FIR filters synthesized using the frequency-response masking (FRM) approach. In the FRM approach, the overall filter makes use of periodic model filters and nonperiodic masking filters which makes it possible to obtain FIR filters requiring few arithmetic operations even when the transition band is narrow. The proposed filters are designed using linear and nonlinear programming. Design examples are included illustrating the efficiency of the proposed filters.
• 14.
Linköping University, The Institute of Technology. Linköping University, Department of Electrical Engineering, Electronics System.
Linköping University, Department of Electrical Engineering. Linköping University, The Institute of Technology. Linköping University, Department of Electrical Engineering, Electronics System.
A class of two-channel hybrid analog/digital filter banks (1999). In: Midwest Symposium on Circuits and Systems, 1999, pp. 14–17. Conference paper (Refereed)
This paper introduces a class of hybrid analog/digital filter banks with approximately perfect magnitude reconstruction. The filter bank consists of analog analysis and digital synthesis filter banks. The analog analysis filters are formed as a sum and difference of two allpass subfilters, respectively, resulting in filters with low orders and few free parameters, which is advantageous from implementation and design points of view. The digital synthesis filters are odd-order linear-phase FIR filters with symmetric and anti-symmetric impulse responses, respectively. The filter design is performed by first optimizing the analog analysis filters. Then, with the analog filters fixed, optimum digital synthesis filters, in the minimax sense, are obtained with the aid of linear programming
• 15.
Linköping University, Department of Electrical Engineering, Electronic Devices. Linköping University, The Institute of Technology.
A clock driver with reduced EMI (2014). Independent thesis Advanced level (degree of Master (One Year)), 20 credits / 30 HE credits. Student thesis
A clock driver that works on the principle of charging and discharging the clock network in a VLSI circuit in two steps is investigated in a few different configurations. The aim of the design is twofold:
• to reduce the power consumption
• to reduce the third harmonic of the clock signal, and thereby the EMI (electromagnetic interference) emitted by the clock network.
The first should be possible to accomplish as the clock interconnect network gets charged by half the voltage during each rising transition, and the second should be possible to accomplish by carefully timing the rising and falling transitions, so that the third Fourier coefficient of the resulting waveform cancels.
The drivers are loaded by eight 16-bit adders. The drivers’ power consumption, and the spectrum of the output signal, are investigated under varying clock frequencies, power supply voltage, and driver architecture. The results are compared to a conventional square wave clock.
The results are that while the third harmonic of the resulting output shows an improvement in all the investigated cases over the square wave clock, the power savings are, for higher clock frequencies, more than completely canceled by the extra power needed in the logic stage which controls these drivers. On the other hand, the power consumption of the new driver appears to drop below that of the conventional driver when the clock frequency drops below approximately 100 MHz.
A few suggestions for further investigations of new designs and clock wave forms are given.
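The third-harmonic cancellation argument can be illustrated numerically: a two-step clock can be modelled as the average of a full-swing square wave and a delayed copy, and a delay of one sixth of the period makes the two third-harmonic contributions opposite in phase. The waveform model and sample counts below are assumptions for illustration only, not the thesis's circuit:

```python
# Third-harmonic cancellation for a two-step clock edge, checked via FFT.
import numpy as np

T, n = 1.0, 6000
t = np.arange(n) / n * T
square = (t % T < T / 2).astype(float)       # conventional full-swing clock

def two_step(delay):
    """Two-step waveform = average of the square wave and a delayed copy."""
    shift = int(round(delay / T * n))
    return 0.5 * (square + np.roll(square, shift))

for d in [0.0, T / 8, T / 6, T / 4]:
    c = np.fft.rfft(two_step(d)) / n         # bin 3 = third harmonic
    print(f"d = {d:.4f} T -> |3rd harmonic| = {abs(c[3]):.4e}")
# d = T/6 gives ~0: the third Fourier coefficient of the waveform cancels.
```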
• 16.
Linköping University, REMESO - Institute for Research on Migration, Ethnicity and Society. Linköping University, Department of Thematic Studies. Linköping University, Faculty of Arts and Sciences.
A Common Market, a Common ‘Problem’: Migration andEuropean Integration Before and After the Launching of the Single Market2005Report (Other (popular science, discussion, etc.))
Since the ratification of the Amsterdam Treaty in 1999 the European Union has been emerging as a key actor within migration policy. But in order to understand the current development it is important to have a clear picture of the EU’s historical trajectory in the field of migration. In this paper the discussion thus focuses exclusively on the pre-Amsterdam era. It sets out with a brief historical overview of the early decades of European integration and accounts for labour migration’s crucial function in the founding logic of the EEC. While supranational competence over migration policy was very limited during this period, the discussion shows that the way in which competence was allocated between supranational and national levels would be highly consequential for the future development. Following this, the major part of the paper is devoted to an examination of the Community’s transformation during the second half of the 1980s and the first half of the 1990s. The measures introduced under the banner of the Single Market, particularly those pertaining to the free movement of persons, instigated a development whereby immigration and asylum would be progressively treated as ‘common’ Community matters. Equally important, the paper shows that Community activity in the area of migration also addressed a range of other matters, many of which went beyond the issue of people moving across external and internal borders. From then on, Brussels began to address the situation of ethnic minorities of migrant background, thus bringing the growing problems of ethnic exclusion and racism onto the EU agenda. On the whole, it was the question of how to better ‘integrate’ ‘legal immigrants’ and ethnic minorities into Community societies that received the most attention. In this fashion, the present paper examines the EU’s interventions in the area of immigration and asylum together with its efforts in the realm of migrant ‘integration’. Although very few accounts have undertaken to analyze jointly the EU’s approaches to immigration and migrant ‘integration’, this paper demonstrates that in order to provide a comprehensive analysis of the issues in question, these policy areas need to be approached as inextricably intertwined and as mutually conditioning.
• 17.
Linköping University, Department of Medical and Health Sciences, Division of Physiotherapy. Linköping University, Faculty of Health Sciences. Östergötlands Läns Landsting, Local Health Care Services in Central Östergötland, Department of Activity and Health.
Sahlgrens University Hospital. Ryhov Hospital. Linköping University, Department of Medical and Health Sciences, Physiotherapy. Linköping University, Faculty of Health Sciences.
A Comparison Between the Carbon Fiber Cage and the Cloward Procedure in Cervical Spine Surgery A Ten- to Thirteen-Year Follow-Up of a Prospective Randomized Study2011In: SPINE, ISSN 0362-2436, Vol. 36, no 12, 919-925 p.Article in journal (Refereed)
Study Design. Ten- to 13-year follow-up of a prospective randomized study. Objective. To compare the 10- to 13-year outcomes of anterior cervical decompression and fusion (ACDF) with a cervical intervertebral fusion cage (CIFC), and the Cloward procedure (CP) using a broad clinical and patient-centered assessment. Summary of Background Data. There are few prospective studies and none with a follow-up of 10 years or more. Methods. Patient questionnaires completed 10 years or more after ACDF. Seventy-three patients (77%) responded. Radiographs were obtained at 2 years. Results. Apart from greater fulfillment of preoperative expectation (P = 0.01) and less headache (P = 0.005) in the CIFC group compared with the CP group, there were no significant differences in the outcomes of the two surgical methods. Pain intensity improved in comparison with preoperative levels in both the CIFC and CP groups (P < 0.0001), but the Neck Disability Index (NDI) only improved in the CIFC group (P = 0.04). Only those with a healed fusion benefited from an improved NDI (P = 0.02). There was no deterioration in pain intensity or NDI after the 2-year follow-up. Conclusion. The outcomes of the two surgical methods, with a few exceptions, were equal at 10- to 13-year follow-up, and there was no deterioration in outcome after the 2-year follow-up. Pain intensity improved more than disability, which may indicate that further improvement of physical function requires early, more extensive postoperative rehabilitation. Despite persisting disability, repeat surgery was relatively uncommon.
• 18.
Department of Evolutionary Genetics, Max Planck Institute for Evolutionary Anthropology, Leipzig, Germany, and Lewis Sigler Institute for Integrative Genomics, Princeton University, Princeton, New Jersey, United States of America,.
A Comparison of Brain Gene Expression Levels in Domesticated and Wild Animals2012In: PLOS Genetics, ISSN 1553-7390, Vol. 8, no 9, e1002962- p.Article in journal (Refereed)
Domestication has led to similar changes in morphology and behavior in several animal species, raising the question whether similarities between different domestication events also exist at the molecular level. We used mRNA sequencing to analyze genome-wide gene expression patterns in brain frontal cortex in three pairs of domesticated and wild species (dogs and wolves, pigs and wild boars, and domesticated and wild rabbits). We compared the expression differences with those between domesticated guinea pigs and a distant wild relative (Cavia aperea) as well as between two lines of rats selected for tameness or aggression towards humans. There were few gene expression differences between domesticated and wild dogs, pigs, and rabbits (30–75 genes, less than 1% of expressed genes, were differentially expressed), while guinea pigs and C. aperea differed more strongly. Almost no overlap was found between the genes with differential expression in the different domestication events. In addition, joint analyses of all domesticated and wild samples provided only suggestive evidence for the existence of a small group of genes that changed their expression in a similar fashion in different domesticated species. The most extreme of these shared expression changes include up-regulation in domesticates of SOX6 and PROM1, two modulators of brain development. There was almost no overlap between gene expression in domesticated animals and the tame and aggressive rats. However, two of the genes with the strongest expression differences between the rats (DLL3 and DHDH) were located in a genomic region associated with tameness and aggression, suggesting a role in influencing tameness. In summary, the majority of brain gene expression changes in domesticated animals are specific to the given domestication event, suggesting that the causative variants of behavioral domestication traits may likewise be different.
• 19.
Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering. Visual Computing Research Group, Ulm University.
A Crowdsourcing System for Integrated and Reproducible Evaluation in Scientific Visualization2016In: 2016 IEEE Pacific Visualization Symposium (PacificVis), IEEE Computer Society, 2016, 40-47 p.Conference paper (Refereed)
User evaluations have gained increasing importance in visualization research over the past years, as in many cases these evaluations are the only way to support the claims made by visualization researchers. Unfortunately, recent literature reviews show that in comparison to algorithmic performance evaluations, the number of user evaluations is still very low. Reasons for this are the required amount of time to conduct such studies together with the difficulties involved in participant recruitment and result reporting. While it could be shown that the quality of evaluation results and the simplified participant recruitment of crowdsourcing platforms make this technology a viable alternative to lab experiments when evaluating visualizations, the time for conducting and reporting such evaluations is still very high. In this paper, we propose a software system which integrates the execution, analysis and reporting of crowdsourced user evaluations directly into the scientific visualization development process. With the proposed system, researchers can conduct and analyze quantitative evaluations on a large scale through an evaluation-centric user interface with only a few mouse clicks. Thus, it becomes possible to perform iterative evaluations during algorithm design, which potentially leads to better results, as compared to the time-consuming user evaluations traditionally conducted at the end of the design process. Furthermore, the system is built around a centralized database, which supports an easy reuse of old evaluation designs and the reproduction of old evaluations with new or additional stimuli, both of which are driving challenges in scientific visualization research. We describe the system's design and the considerations made during the design process, and demonstrate the system by conducting three user evaluations, all of which have been published before in the visualization literature.
• 20.
Linköping University, Department of Electrical Engineering, Electronics System. Linköping University, The Institute of Technology.
A Cyclic Analog to Digital Converter for CMOS image sensors2014Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
In this work a 12-bit cyclic ADC (CADC) aimed at column-parallel readout in CMOS image sensors is presented. The aim of the conducted study is to cover multiple CADC sub-component architectures and analyze them to a moderate depth. Several Multiplying DAC (MDAC) structures have been re-examined, and a preliminary redundant signed-digit CADC design based on a 1.5-bit modified flip-over MDAC has been carried out. Three comparator architectures have been explored and a dynamic interpolative sub-ADC is presented. Finally, some weak spots degrading the performance of the carried-out design have been analyzed. As an architectural improvement possibility, two MDAC capacitor mismatch error reduction techniques have been presented.
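For orientation, the generic 1.5-bit-per-cycle redundant signed-digit principle that such a CADC builds on can be sketched behaviourally; this models an ideal stage only, not the thesis's flip-over MDAC or its mismatch errors:

```python
# Behavioural model of a generic 1.5-bit-per-cycle cyclic (algorithmic) ADC:
# each cycle resolves a redundant digit d in {-1, 0, 1} and the MDAC forms
# the residue 2*v - d*VREF, which is fed back for the next cycle.
VREF = 1.0

def cyclic_adc(vin, n_cycles=12):
    """Return the converter's estimate of vin (assumed in [-VREF, VREF])."""
    v, digits = vin, []
    for _ in range(n_cycles):
        d = 1 if v > VREF / 4 else (-1 if v < -VREF / 4 else 0)  # 1.5-bit sub-ADC
        v = 2 * v - d * VREF                                     # MDAC residue
        digits.append(d)
    # Redundant signed-digit word -> value: vin ~ VREF * sum d_i * 2^-i
    return VREF * sum(d * 2.0 ** -(i + 1) for i, d in enumerate(digits))

print(cyclic_adc(0.37), cyclic_adc(-0.81))   # close to the analog inputs
```

The +/-VREF/4 comparator thresholds give the digit redundancy that makes this architecture tolerant to comparator offset, which is why only coarse comparators are needed in the sub-ADC.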
• 21.
Linköping University, Department of Mathematics, Applied Mathematics. Linköping University, The Institute of Technology.
National University of Rwanda, Box 117, Butare, Rwanda.
A Data Assimilation Approach to Coefficient Identification2011Report (Other academic)
The thermal conductivity properties of a material can be determined experimentally by using temperature measurements taken at specified locations inside the material. We examine a situation where the properties of a (previously known) material have changed locally. Mathematically we aim to find the coefficient $k(x)$ in the stationary heat equation $(k T_x)_x = 0$, under the assumption that the function $k(x)$ can be parametrized using only a few degrees of freedom.
The coefficient identification problem is solved using a least squares approach, where the (non-linear) control functional is weighted according to the distribution of the measurement locations. Though we only discuss the 1D case, the ideas extend naturally to 2D or 3D. Experiments demonstrate that the proposed method works well.
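A minimal sketch of this identification idea, under assumed specifics (piecewise-constant $k(x)$ on three segments, unweighted least squares, made-up measurement points), could look as follows; the closed-form forward solve uses the fact that the flux $k T_x$ is constant in the stationary 1D case:

```python
# Fit a few-degrees-of-freedom conductivity k(x) to interior temperature
# measurements for (k T_x)_x = 0 on [0, 1] with fixed boundary temperatures.
import numpy as np
from scipy.optimize import least_squares

T0, T1 = 1.0, 0.0                       # boundary temperatures (assumed)
edges = np.array([0.0, 0.4, 0.7, 1.0])  # assumed parametrization: 3 segments

def temperature(k_seg, x):
    # (k T_x)_x = 0  =>  k T_x = q (constant flux), T(x) = T0 + q * int 1/k
    seg_res = np.diff(edges) / k_seg            # thermal resistance per segment
    q = (T1 - T0) / seg_res.sum()
    cum = np.concatenate([[0.0], np.cumsum(seg_res)])
    i = np.clip(np.searchsorted(edges, x, side="right") - 1, 0, len(k_seg) - 1)
    return T0 + q * (cum[i] + (x - edges[i]) / k_seg[i])

rng = np.random.default_rng(0)
x_meas = rng.uniform(0.05, 0.95, 25)            # measurement locations
k_true = np.array([1.0, 0.3, 1.0])              # local change in the middle
y = temperature(k_true, x_meas) + 0.002 * rng.standard_normal(25)

fit = least_squares(lambda k: temperature(k, x_meas) - y,
                    x0=np.ones(3), bounds=(1e-3, 10.0))
print("estimated k per segment:", fit.x)        # ~ [1.0, 0.3, 1.0]
```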
• 22.
Linköping University, Department of Clinical and Experimental Medicine, Child and Adolescent Psychiatry . Linköping University, Faculty of Health Sciences.
Linköping University, Department of Clinical and Experimental Medicine. Linköping University, Faculty of Health Sciences.
A descriptive study of mental health services provided for physically abused children in Sweden: A four-year follow-up of child and adolescent psychiatric chartsManuscript (preprint) (Other academic)
Since there has been a considerable increase in the number of police reports on physical child abuse in Sweden since the mid 1980s, there should be an increased number of children in need of trauma-focused mental health treatment. During 1986-1996, 126 children in one Swedish police district were reported to the police as being physically abused by a parent or equivalent. Fifty-seven of these children (45%) had been the objects of interventions from Child and Adolescent Psychiatric Services (CAPS). The aim of this study was to investigate the extent and content of these interventions. Questions addressed were: What interventions were provided prior to, in the acute situation, and during the 4 years after the physical abuse incident? This group of children was referred to CAPS for different reasons, but few for physical abuse. Only 35 out of 122 referrals were made under the label of child physical abuse. Overall, interventions were almost exclusively directed toward the parents. Six out of 126 physically abused children received individual therapy. Abuse was not mentioned in the charts for 23 of the children, even though 8 of them had been referred due to abuse. The results of this study indicate that physically abused children have often been in contact with mental health services prior to the abuse for different reasons. Individual interventions for physically abused children were rare due to, for instance, CAPS workloads, poor motivation among parents and children, and perhaps professionals’ lack of knowledge regarding effective treatment.
• 23.
Linköping University, Department of Mathematics, Optimization . Linköping University, Faculty of Science & Engineering.
Linköping University, Department of Computer and Information Science, Statistics. Linköping University, Faculty of Arts and Sciences.
A Dual Active-Set Algorithm for Regularized Monotonic Regression2017In: Journal of Optimization Theory and Applications, ISSN 0022-3239, E-ISSN 1573-2878, Vol. 172, no 3, 929-949 p.Article in journal (Refereed)
Monotonic (isotonic) regression is a powerful tool used for solving a wide range of important applied problems. One of its features, which poses a limitation on its use in some areas, is that it produces a piecewise constant fitted response. For smoothing the fitted response, we introduce a regularization term in the monotonic regression, formulated as a least distance problem with monotonicity constraints. The resulting smoothed monotonic regression is a convex quadratic optimization problem. We focus on the case where the set of observations is completely (linearly) ordered. Our smoothed pool-adjacent-violators algorithm is designed for solving the regularized problem. It belongs to the class of dual active-set algorithms. We prove that it converges to the optimal solution in a finite number of iterations that does not exceed the problem size. One of its advantages is that the active set is progressively enlarged by including one or, typically, more constraints per iteration. This allowed large-scale test problems to be solved in a few iterations, whereas problems of that size were prohibitively large for conventional quadratic optimization solvers. Although the complexity of our algorithm grows quadratically with the problem size, we found its running time to grow almost linearly in our computational experiments.
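For orientation, the classical pool-adjacent-violators step that the paper's smoothed, dual active-set algorithm generalizes can be sketched as follows; the regularization (smoothing) term itself is not implemented here:

```python
# Classical pool-adjacent-violators (PAV) for unregularized monotonic
# regression on a linearly ordered set of observations.
def pav(y):
    """Return the nondecreasing least-squares fit to the sequence y."""
    blocks = []                         # each block: [sum, count, mean]
    for v in y:
        blocks.append([v, 1, v])
        # Merge backwards while monotonicity is violated; each merge
        # corresponds to activating one monotonicity constraint.
        while len(blocks) > 1 and blocks[-2][2] > blocks[-1][2]:
            s2, n2, _ = blocks.pop()
            blocks[-1][0] += s2
            blocks[-1][1] += n2
            blocks[-1][2] = blocks[-1][0] / blocks[-1][1]
    out = []
    for s, n, mean in blocks:
        out.extend([mean] * n)
    return out

print(pav([1.0, 3.0, 2.0, 2.0, 5.0, 4.0]))  # piecewise constant, nondecreasing
```

The piecewise constant output of this classical fit is exactly the feature the paper's regularization term is designed to smooth.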
• 24.
University of Brescia, Italy.
University of Brescia, Italy. Linköping University, Department of Management and Engineering, Industrial Economics. Linköping University, Faculty of Science & Engineering. Hanken School Econ, Finland.
A framework for PSS business models: formalization and application2016In: PRODUCT-SERVICE SYSTEMS ACROSS LIFE CYCLE, ELSEVIER SCIENCE BV , 2016, Vol. 47, 519-524 p.Conference paper (Refereed)
In order to successfully move "from products to solutions", companies need to redesign their business model (BM). Nevertheless, service-oriented BMs in product-centric firms are under-investigated in the literature: very few works develop a scheme of analysis for such BMs. To provide a first step towards closing this gap, we propose a new framework to describe service-oriented BMs, pointing out the main BM components and related PSS characteristics. Thus, the proposed framework aims to help companies take into account the relevant elements that need to be designed to successfully implement a service-oriented BM, and thereby guide strategic decisions. (C) 2016 The Authors. Published by Elsevier B.V.
• 25.
Swedish University of Agriculture Science, Uppsala.
Swedish University of Agriculture Science, Uppsala. Swedish University of Agriculture Science, Uppsala. Swedish University of Agriculture Science, Uppsala. Swedish University of Agriculture Science, Uppsala. Swedish University of Agriculture Science, Uppsala.
A global search reveals epistatic interaction between QTL for early growth in the chicken2003In: Genome Research, ISSN 1088-9051, E-ISSN 1549-5469, Vol. 13, no 3, 413-421 p.Article in journal (Refereed)
We have identified quantitative trait loci (QTL) explaining a large proportion of the variation in body weights at different ages and growth between chronological ages in an F-2 intercross between red junglefowl and White Leghorn chickens. QTL were mapped using forward selection for loci with significant marginal genetic effects and with a simultaneous search for epistatic QTL pairs. We found 22 significant loci contributing to these traits; nine of these were found only by the simultaneous two-dimensional search, which demonstrates the power of this approach for detecting loci affecting complex traits. We have also estimated the relative contributions of additive, dominance, and epistasis effects to growth; the contribution of epistasis was more pronounced prior to 46 days of age, whereas additive genetic effects explained the major portion of the genetic variance later in life. Several of the detected loci affected either early or late growth but not both. Very few loci affected the entire growth process, which indicates that early and late growth, at least to some extent, are under different genetic regulation.
• 26.
Linköping University, Department of Mathematics, Mathematics and Applied Mathematics. Linköping University, Faculty of Science & Engineering.
A Gröbner basis algorithm for fast encoding of Reed-Müller codes2016Independent thesis Basic level (degree of Bachelor), 10,5 credits / 16 HE creditsStudent thesis
In this thesis the relationship between Gröbner bases and algebraic coding theory is investigated, especially applications to linear codes, with Reed-Müller codes as an illustrative example. We prove that each linear code can be described as a binomial ideal of a polynomial ring, and that a systematic encoding algorithm for such codes is given by the remainder of the information word computed with respect to the reduced Gröbner basis. Finally we show how to apply the representation of a code by its corresponding polynomial ring ideal to construct a class of codes containing the so-called primitive Reed-Müller codes, with a few examples of this result.
• 27.
A heuristic smoothing procedure for avoiding local optima in optimization of structures subject to unilateral constraints2000In: Structural and multidisciplinary optimization (Print), ISSN 1615-147X, Vol. 20, no 1, 29-36 p.Article in journal (Refereed)
Structural optimization problems are often solved by gradient-based optimization algorithms, e.g. sequential quadratic programming or the method of moving asymptotes. If the structure is subject to unilateral constraints, then the gradient may be nonexistent for some designs. It follows that difficulties may arise when such structures are to be optimized using gradient-based optimization algorithms. Unilateral constraints arise, for instance, if the structure may come in frictionless contact with an obstacle. This paper presents a heuristic smoothing procedure (HSP) that lessens the risk that gradient-based optimization algorithms get stuck in (nonglobal) local optima of structural optimization problems including unilateral constraints. In the HSP, a sequence of optimization problems must be solved. All these optimization problems have well-defined gradients and are therefore well suited for gradient-based optimization algorithms. It is proved that the solutions of this sequence of optimization problems converge to the solution of the original structural optimization problem. The HSP is illustrated in a few numerical examples. The computational results show that the HSP can be an effective method for avoiding local optima.
• 28.
Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
University of Cambridge, England. Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
A HIGH DYNAMIC RANGE VIDEO CODEC OPTIMIZED BY LARGE-SCALE TESTING2016In: 2016 IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING (ICIP), IEEE , 2016, 1379-1383 p.Conference paper (Refereed)
While a number of existing high-bit-depth video compression methods can potentially encode high dynamic range (HDR) video, few of them provide this capability. In this paper, we investigate techniques for adapting HDR video for this purpose. In a large-scale test on 33 HDR video sequences, we compare 2 video codecs, 4 luminance encoding techniques (transfer functions) and 3 color encoding methods, measuring quality in terms of two objective metrics, PU-MSSIM and HDR-VDP-2. From the results we design an open-source HDR video encoder, optimized for the best compression performance given the techniques examined.
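As an example of the kind of luminance encoding (transfer function) compared in such tests, the sketch below implements one widely used curve, the SMPTE ST 2084 "PQ" function, which maps absolute luminance into a perceptually uniform signal suitable for a bit-limited codec; whether this particular curve is the paper's final choice is not assumed here:

```python
# SMPTE ST 2084 ("PQ") opto-electrical transfer function for HDR luminance.
import numpy as np

def pq_oetf(L):
    """Map absolute luminance L in cd/m^2 (up to 10000) to a [0, 1] signal."""
    m1, m2 = 2610 / 16384, 2523 / 4096 * 128
    c1, c2, c3 = 3424 / 4096, 2413 / 4096 * 32, 2392 / 4096 * 32
    Y = np.clip(L / 10000.0, 0.0, 1.0)
    return ((c1 + c2 * Y**m1) / (1 + c3 * Y**m1)) ** m2

print(pq_oetf(np.array([0.1, 100.0, 1000.0, 10000.0])))  # increasing in [0, 1]
```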
• 29.
Linköping University, Department of Electrical Engineering, Automatic Control. Linköping University, The Institute of Technology.
Linköping University, Department of Electrical Engineering, Automatic Control. Linköping University, The Institute of Technology.
A High-Performance Tracking System based on Camera and IMU2013In: 16th International Conference on Information Fusion (FUSION), 2013, IEEE , 2013, 2065-2072 p.Conference paper (Refereed)
We consider an indoor tracking system consisting of an inertial measurement unit (IMU) and a camera that detects markers in the environment. There are many camera-based tracking systems described in the literature and available commercially, and a few of them also have IMU support. These are based on the best-effort principle, where the performance varies depending on the situation. In contrast to this, we start with a specification of the system performance, and the design is based on an information-theoretic approach, where specific user scenarios are defined. Precise models for the camera and IMU are derived for a fusion filter, and the theoretical Cramér-Rao lower bound and the Kalman filter performance are evaluated. In this study, we focus on examining the camera quality versus the marker density needed to achieve an accuracy of at least one millimetre and one degree in tracking performance.
• 30. Landernäs, Krister
Linköping University, Department of Electrical Engineering. Linköping University, Department of Electrical Engineering, Electronics System.
A high-speed low-latency digit-serial hybrid adder2004In: IEEE Int. Symp. on Circuits and Systems, ISCAS'04, 2004, III-217-III-220 p.Conference paper (Refereed)
• 31.
Linköping University, Department of Social and Welfare Studies, NISAL - National Institute for the Study of Ageing and Later Life. Linköping University, Faculty of Arts and Sciences.
Linköping University, Department of Social and Welfare Studies, NISAL - National Institute for the Study of Ageing and Later Life. Linköping University, Faculty of Arts and Sciences.
“A home away from home”: The role of the Church of Sweden Abroad for Swedish migrants2013In: New Religiosity in Migration, 2013, 38-41 p.Conference paper (Refereed)
According to some studies, Sweden is one of the most secularized countries in the world, with low church attendance. For most Swedes, their contact with the church is limited to traditional rites. How are we then to understand that quite a few Swedes seem to act much like immigrant groups from less secularized nations, by turning to the ethnic church and to religious practices while moving – fully or part time – to foreign countries?
The aim of the presentation is to discuss this question, based on results from a project in which the role of the Church of Sweden Abroad has been explored. The Church of Sweden has a long tradition of creating parishes abroad, mainly in the larger European cities and in connection with harbors, as Seamen's Churches. For some decades now, however, the Church has started to follow the streams of tourists and elderly migrants, and parishes have been established mainly in Southern Europe and, lately, in Asian countries.
The presentation will be based on a project consisting of three studies: 1) A qualitative case study, 2) A mapping of the web sites of all 45 parishes, and 3) An internet-based survey of all parishes. An interesting pattern turned out to be that many church visitors who initially seemed to be attracted by the (Swedish) “home away from home” that the parish offered through e.g. “Swedish coffee”, eventually began to participate regularly in the church services, even in Holy Communion.
• 32.
Paul Drude Institute Festkorperelektron, Germany.
Paul Drude Institute Festkorperelektron, Germany. Paul Drude Institute Festkorperelektron, Germany. Linköping University, Department of Physics, Chemistry and Biology, Surface Physics and Chemistry. Linköping University, Faculty of Science & Engineering. Paul Drude Institute Festkorperelektron, Germany. Paul Drude Institute Festkorperelektron, Germany. Paul Drude Institute Festkorperelektron, Germany. Paul Drude Institute Festkorperelektron, Germany.
A hybrid MBE-based growth method for large-area synthesis of stacked hexagonal boron nitride/graphene heterostructures2017In: Scientific Reports, ISSN 2045-2322, E-ISSN 2045-2322, Vol. 7, 43644Article in journal (Refereed)
Van der Waals heterostructures combining hexagonal boron nitride (h-BN) and graphene offer many potential advantages, but remain difficult to produce as continuous films over large areas. In particular, the growth of h-BN on graphene has proven to be challenging due to the inertness of the graphene surface. Here we exploit a scalable molecular beam epitaxy based method to allow both the h-BN and graphene to form in a stacked heterostructure in the favorable growth environment provided by a Ni(111) substrate. This involves first saturating a Ni film on MgO(111) with C, growing h-BN on the exposed metal surface, and precipitating the C back to the h-BN/Ni interface to form graphene. The resulting laterally continuous heterostructure is composed of a top layer of few-layer-thick h-BN on an intermediate few-layer-thick graphene, lying on top of Ni/MgO(111). Examinations by synchrotron-based grazing incidence diffraction, X-ray photoemission spectroscopy, and UV-Raman spectroscopy reveal that while the h-BN is relaxed, the lattice constant of graphene is significantly reduced, likely due to nitrogen doping. These results illustrate a different pathway for the production of h-BN/graphene heterostructures, and open a new perspective for the large-area preparation of heterosystems combining graphene and other 2D or 3D materials.
• 33.
Linköping University, Department of Electrical Engineering.
Linköping University, Department of Electrical Engineering.
A Java Framework for Broadcast Encryption Algorithms2004Independent thesis Basic level (professional degree)Student thesis
Broadcast encryption is a fairly new area in cryptology. It was first addressed in 1992, and research in this area has been extensive ever since. In short, broadcast encryption is used for efficient and secure broadcasting to an authorized group of users. This group can change dynamically, and in some cases only one-way communication between the sender and receivers is available. An example of this is digital TV transmissions via satellite, in which only the paying customers can decrypt and view the broadcast.
The purpose of this thesis is to develop a general Java framework for implementation and performance analysis of broadcast encryption algorithms. In addition to the actual framework, a few of the most common broadcast encryption algorithms (Complete Subtree, Subset Difference, and the Logical Key Hierarchy scheme) have been implemented in the system.
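For illustration, one of these schemes, Complete Subtree (Naor-Naor-Lotspiech), can be sketched compactly: users are leaves of a complete binary tree, each user stores the keys of the nodes on its root-to-leaf path, and the broadcaster encrypts the session key once per subtree in a cover of the non-revoked leaves. The heap-indexed tree layout below is an implementation assumption, and key material is kept symbolic (node indices stand in for keys):

```python
# Complete Subtree broadcast encryption: key assignment and cover computation.
def user_keys(leaf, n_leaves):
    """Node indices (1-based heap order) whose keys user `leaf` stores."""
    node, keys = n_leaves + leaf, []
    while node >= 1:
        keys.append(node)
        node //= 2                      # walk up to the root
    return keys

def cover(revoked, n_leaves):
    """Roots of subtrees that together contain exactly the non-revoked users."""
    if not revoked:
        return [1]                      # no revocations: one root-key ciphertext
    steiner = set()                     # Steiner tree of the revoked leaves
    for leaf in revoked:
        node = n_leaves + leaf
        while node >= 1:
            steiner.add(node)
            node //= 2
    # Cover = children of Steiner-tree nodes that fall outside the tree.
    return sorted(child for v in steiner if v < n_leaves
                  for child in (2 * v, 2 * v + 1) if child not in steiner)

N = 8                                   # 8 users, leaves indexed 0..7
print(user_keys(2, N))                  # log2(N) + 1 keys per user: [10, 5, 2, 1]
print(cover({2, 5}, N))                 # one ciphertext per cover node: [4, 7, 11, 12]
```

Every non-revoked user holds exactly one of the cover nodes' keys on its path, while revoked users hold none, which is what makes the scheme work over one-way channels.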
This master’s thesis project was defined by and carried out at the Information Theory division at the Department of Electrical Engineering (ISY), Linköping Institute of Technology, during the first half of 2004.
• 34.
Linköping University, Department of Management and Engineering, Environmental Technology and Management. Linköping University, Faculty of Science & Engineering.
Linköping University, Department of Management and Engineering, Environmental Technology and Management. Linköping University, Faculty of Science & Engineering.
A Literature Review to Understand the Requirements Specification’s Role when Developing Integrated Product Service Offerings2016In: Product-Service Systems across Life Cycle / [ed] Sergio Cavalieri, Elisabetta Ceretti, Tullio Tolio, Giuditta Pezzotta, Elsevier, 2016, Vol. 47, 150-155 p.Conference paper (Refereed)
This paper's objective is to analyze, based on a literature review, how existing IPSO design methods support and manage requirements when developing an integrated product service offering (IPSO). Issues analyzed include which types of aspects existing methods should consider, such as environmental issues and demands from stakeholders and customers, and what types of stakeholders are involved in the process. There is also an interest in finding out which of these methods are used in industry. The goal is that the results will provide insight into how the requirements specification is used when developing an IPSO in theory, and that this insight will contribute to future studies on how companies currently derive and manage requirements when developing an IPSO.
The literature review started out with the analysis of 201 papers, yielding 22 papers within the area of working with requirements for an IPSO. These papers were reviewed and summarized with the above issues and interests in mind. The findings are that, when deriving requirements, existing IPSO design methods are lacking in regard to a holistic life cycle and system perspective of the offering. Few of the methods consider both requirements regarding the environmental impact of the offering and demands from all involved stakeholders, normally only the customer. Furthermore, few studies have ended with a clear work process regarding how to initially elicit the requirements, analyze them, and later translate them into actual metrics. There are also no signs that the existing methodology is used in the industry's day-to-day work.
• 35.
Linköping University, Department of Physics, Chemistry and Biology, Semiconductor Materials. Linköping University, The Institute of Technology.
Lund University. Linköping University, Department of Physics, Chemistry and Biology, Semiconductor Materials. Linköping University, The Institute of Technology. Linköping University, Department of Physics, Chemistry and Biology, Semiconductor Materials. Linköping University, The Institute of Technology. Linköping University, Department of Physics, Chemistry and Biology, Semiconductor Materials. Linköping University, The Institute of Technology.
A low-energy electron microscopy and x-ray photo-emission electron microscopy study of Li intercalated into graphene on SiC(0001)2010In: NEW JOURNAL OF PHYSICS, ISSN 1367-2630, Vol. 12, no 125015Article in journal (Refereed)
The effects induced by the deposition of Li on 1 and 0 ML graphene grown on SiC(0001) and after subsequent heating were investigated using low-energy electron microscopy (LEEM) and x-ray photo-emission electron microscopy (XPEEM). For 1 ML samples, the collected photoelectron angular distribution patterns showed the presence of single pi-cones at the six equivalent K-points in the Brillouin zone before Li deposition but the presence of two pi-cones (pi-bands) after Li deposition and after heating to a few hundred °C. For 0 ML samples, no pi-band could be detected close to the Fermi level before deposition, but distinct pi-cones at the K-points were clearly resolved after Li deposition and after heating. Thus Li intercalation was revealed in both cases, transforming the carbon buffer layer (0 ML) to graphene. On 1 ML samples, but not on 0 ML, a (√3 × √3)R30° diffraction pattern was observed immediately after Li deposition. This pattern vanished upon heating and then wrinkles/cracks appeared on the surface. Intercalation of Li was thus found to deteriorate the quality of the graphene layer, especially for 1 ML samples. These wrinkles/cracks did not disappear even after heating at temperatures ≥ 500 °C, when no Li atoms remained on the substrate.
• 36.
Linköping University, Department of Management and Engineering, Environmental Technology and Management. Linköping University, The Institute of Technology.
Linköping University, Department of Management and Engineering, Environmental Technology and Management. Linköping University, The Institute of Technology.
A method to improve integrated product service offerings based on life cycle costing2015In: CIRP annals, ISSN 0007-8506, E-ISSN 1726-0604, Vol. 64, no 1, 33-36 p.Article in journal (Refereed)
Although a few papers have reported on life cycle cost (LCC) analysis of integrated product service offerings (IPSOs), insight into how to improve IPSOs based on LCC analysis is missing. This paper presents a method and an Excel- and MATLAB-based tool that support IPSO design by employing LCC analysis, from both the provider and customer perspectives. The method takes advantage of the exchangeability between products and services that is enabled within IPSO design. The method has been applied to an existing IPSO and potential improvements have been identified, e.g. one cheap component causing a high LCC that could be reduced significantly by redesign.
• 37.
Linköping University, Department of Medical and Health Sciences, Cardiothoracic Anaesthesia and Intensive care. Linköping University, Faculty of Health Sciences.
A minimally invasive axial blood flow pump: an experimental and clinical study1997Doctoral thesis, comprehensive summary (Other academic)
The first aim of this thesis was to evaluate a new minimally invasive axial blood flow pump for treatment of patients needing circulatory support after open heart surgery. This system, the Hemopump temporary cardiac assist device, is a very small catheter-mounted intracorporeal pump, which is introduced transvalvularly into the left ventricle. The pump can be inserted either through the femoral artery or directly through a graft sutured to the ascending aorta. In an experimental model, the flow capacity of three different designs of the system was investigated. Flow capacity varied between 2.0 and 4.5 liters per minute, depending on the working conditions for the different pump models. Twenty-four patients were treated for post-cardiotomy heart failure. Fourteen patients (58%) were weaned from the device and later discharged from the hospital. In a subgroup of these patients (54%) where early intervention was instituted, the survival rate was 85%. The pump proved to be an effective tool for unloading a failing left ventricle with preservation of multi-organ perfusion. A clinical protocol was established for postoperative management. The Hemopump was easy to adapt to the clinical setting, and device-related complications were few.
The second aim was to develop a new less invasive procedure for CABG, avoiding the need for cardiopulmonary bypass during these procedures. First an animal trial was performed as a feasibility study. In combination with the administration of a short-acting β-blocker, esmolol, this method enabled precise coronary bypass surgery. When results became consistent, a small pilot study was done on five patients showing that this was a reproducible technique. Finally a prospective randomized trial comparing this technique with conventional bypass surgery was carried out. The Hemopump-supported bypass surgery did not prolong the procedure, did not require a longer time on circulatory support, and bleeding was less. Postoperative enzyme levels indicated that ischemic insult to the myocardium was less than with conventional surgery.
In summary, this minimally invasive axial blood flow pump proved to be a powerful left ventricular assist system enabling a less invasive approach during conditions where circulatory support is needed.
• 38.
Linköping University, Department of Medical and Health Sciences, Nursing Science. Linköping University, Faculty of Health Sciences. Östergötlands Läns Landsting, Anaesthetics, Operations and Specialty Surgery Center, Department of Clinical Neurophysiology.
School of Health Sciences, Jönköping University, Jönköping, Sweden. Linköping University, Department of Clinical and Experimental Medicine, Clinical Neurophysiology. Linköping University, Faculty of Health Sciences. Östergötlands Läns Landsting, Anaesthetics, Operations and Specialty Surgery Center, Department of Clinical Neurophysiology. Linköping University, Department of Clinical and Experimental Medicine, Clinical Neurophysiology. Linköping University, Faculty of Health Sciences. Linköping University, Department of Clinical and Experimental Medicine, Clinical Neurophysiology. Linköping University, Faculty of Health Sciences. Östergötlands Läns Landsting, Anaesthetics, Operations and Specialty Surgery Center, Department of Clinical Neurophysiology. Linköping University, Department of Medical and Health Sciences, Social Medicine and Public Health Science. Linköping University, Faculty of Health Sciences. Linköping University, Department of Medical and Health Sciences, Health Technology Assessment and Health Economics.
A mixed method evaluation of a group-based educational programme for CPAP use in patients with obstructive sleep apnea2013In: Journal of Evaluation In Clinical Practice, ISSN 1356-1294, E-ISSN 1365-2753, Vol. 19, no 1, 173-184 p.Article in journal (Refereed)
Rationale, aims and objectives: Continuous positive airway pressure (CPAP) treatment of obstructive sleep apnea (OSA) has a low long-term adherence. Educational interventions are few and sparsely described regarding content, pedagogical approach and participants' perceptions. The aim was to describe adherence to CPAP treatment, knowledge about OSA/CPAP, as well as OSA patients' perceptions of participating in a group-based programme using problem-based learning (PBL) for CPAP initiation. Educational programme: The PBL programme incorporated elements from theories and models concerning motivation and habits. Tutorial groups consisting of four to eight patients met at six sessions during 6 months. Methods: A sequential explanatory mixed method design was used on 25 strategically selected patients. Quantitative data regarding clinical variables, OSA severity, CPAP use, and knowledge were collected at baseline, after 2 weeks and 6 months. Qualitative data regarding patients' perceptions of participation were collected after 6 months by semi-structured interviews using a phenomenographic approach. Results: 72% of the patients were adherent to CPAP treatment after 2 weeks and 6 months. All patients improved their baseline knowledge about OSA and CPAP after 2 weeks and sustained it after 6 months. Anxiety and fear, as well as difficulties and needs, were motivational factors for participation. Patients described the difficulties of behavioural change, an awareness that improvements do not occur immediately, a realization of the importance of both technical and emotional support, and the need for a healthier lifestyle. Conclusion and practice implications: A group-based programme using PBL seems to facilitate adaptive and developmental learning and result in acceptable CPAP adherence levels.
• 39.
Linköping University, Faculty of Health Sciences. Linköping University, Department of Molecular and Clinical Medicine, Gender and Medicine. Östergötlands Läns Landsting, Centre of Paediatrics and Gynecology and Obstetrics, Department of Gynecology and Obstetrics in Linköping.
Linköping University, Faculty of Health Sciences. Linköping University, Department of Molecular and Clinical Medicine, Gender and Medicine. Linköping University, Faculty of Health Sciences. Linköping University, Department of Molecular and Clinical Medicine, Gender and Medicine.
A model for critical review of literature - With vaginismus as an example2007In: Journal of Psychosomatic Obstetrics and Gynaecology, ISSN 0167-482X, Vol. 28, no 1, 21-36 p.Article in journal (Refereed)
In this article we present a behavioral model for the critical review of the literature within a certain research field, using vaginismus as an example. We searched the literature for the title word "vaginismus" and analyzed to what extent the articles dealt with the following seven categories: prevention, etiology, maintaining factors, consequences, object of intervention, method of intervention, and method of evaluation. In each category we scrutinized the content of the articles for biological, psychological, social, relational, and gender aspects. Quality requirements of etiological and treatment studies were then added and the results presented in a "quality-adjusted" model. There were 102 articles during 1985-2001, of which 22 were included in the review. Most of the articles deal with supposed predisposing factors of etiology and different aspects of intervention. Only a few articles discuss precipitating factors, maintaining factors, or consequences of the problem. No article had a gender analysis. Only 11 of the articles fulfilled some of the proposed quality criteria. We found the behavioral model with quality requirements useful for classifying and evaluating the literature of vaginismus. The model may also be used as a guide to design methodologically good studies. © 2007 Informa UK Ltd.
• 40.
Linköping University, Department of Computer and Information Science, MDA. Linköping University, The Institute of Technology.
Linköping University, Department of Computer and Information Science, MDA. Linköping University, The Institute of Technology. Linköping University, Department of Medicine and Health Sciences. Linköping University, Faculty of Health Sciences.
A Model for Interpreting Work and Information Management in Process-Oriented Healthcare Organisations2003In: International Journal of Medical Informatics, ISSN 1386-5056, Vol. 72, no 1-3, 47-56 p.Article in journal (Refereed)
Background: To increase productivity, management in healthcare organisations have introduced different types of process-oriented organisational configurations. Few studies have addressed clinical practice and information management in these settings. Methods: A case study was performed at a paediatric clinic. Data was collected from archives, through interviews, by participatory observation, and by performing a focus group session. The collected data was analysed using a qualitative and interpretative research strategy. Results: A model was developed of care practitioners’ daily work in process-oriented organisations. The model shows that clinical work was deeply integrated; the care activities were dependent on supply activities and tightly connected to management routines. Conclusion: The resulting model can be used to support development of health information system (HIS) embedded in process-oriented healthcare work.
• 41.
Arizona State University, USA.
Arizona State University, USA. Linköping University, Department of Electrical Engineering, Automatic Control. Linköping University, The Institute of Technology.
A 'Model-on-Demand' Identification Methodology for Nonlinear Process Systems2001In: International Journal of Control, ISSN 0020-7179, E-ISSN 1366-5820, Vol. 74, no 18, 1708-1717 p.Article in journal (Refereed)
An identification methodology based on multi-level pseudo-random sequence (multi-level PRS) input signals and 'Model-on-Demand' (MoD) estimation is presented for single-input, single-output non-linear process applications. 'Model-on-Demand' estimation allows for accurate prediction of non-linear systems while requiring few user choices and without solving a non-convex optimization problem, as is usually the case with global modelling techniques. By allowing the user to incorporate a priori information into the specification of design variables for multi-level PRS input signals, a sufficiently informative input-output dataset for MoD estimation is generated in a 'plant-friendly' manner. The usefulness of the methodology is demonstrated in case studies involving the identification of a simulated rapid thermal processing (RTP) reactor and a pilot-scale brine-water mixing tank. On the resulting datasets, MoD estimation displays performance comparable to that achieved via semi-physical modelling and semi-physical modelling combined with neural networks. The MoD estimator, however, achieves this level of performance with substantially lower engineering effort.
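The local-modelling idea behind Model-on-Demand can be sketched generically as follows: at each query point, a distance-weighted local linear model is fitted to the nearest stored observations, so no global non-linear model has to be estimated. The tricube weights, neighbourhood size and 1-D setting are assumed simplifications, not the authors' actual estimator or bandwidth selection:

```python
# Generic "model on demand": local weighted linear regression at a query point.
import numpy as np

def mod_predict(X, y, x_query, k=20):
    """Local linear prediction at x_query from the k nearest observations."""
    d = np.abs(X - x_query)
    idx = np.argsort(d)[:k]                                 # k nearest neighbours
    w = (1 - (d[idx] / (d[idx].max() + 1e-12)) ** 3) ** 3   # tricube weights
    A = np.column_stack([np.ones(k), X[idx] - x_query])     # local linear basis
    sw = np.sqrt(w)
    theta, *_ = np.linalg.lstsq(A * sw[:, None], sw * y[idx], rcond=None)
    return theta[0]                                         # local intercept

rng = np.random.default_rng(1)
X = rng.uniform(-3, 3, 400)                 # stored input-output "database"
y = np.sin(X) + 0.05 * rng.standard_normal(400)
print(mod_predict(X, y, 1.2), np.sin(1.2))  # prediction vs. true value
```

The appeal in a process-identification setting is that each prediction is a small convex least-squares fit, avoiding the non-convex optimization of global non-linear model structures.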
• 43.
Linköping University, Department of Management and Engineering, Machine Design. Linköping University, The Institute of Technology.
Linköping University, Department of Management and Engineering, Machine Design. Linköping University, The Institute of Technology.
A Modified Complex Algorithm Applied to Robust Design Optimization2011In: 13th AIAA Non-Deterministic Approaches Conference, 2011, 2011-2095 p.Conference paper (Refereed)
Today there is a desire to perform optimizations in order to obtain optimal system properties. However, for computationally expensive simulation models, an optimization may be too tedious to be motivated. This paper proposes a modification of the Complex optimization algorithm to enable the creation and use of local meta-models during the optimization. Its performance is demonstrated for a few analytical problems, and a reliability-based design optimization is conducted for an aircraft example.
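For reference, the unmodified Complex (Box) method that such a modification builds on can be sketched as below; the worst point of a population is repeatedly reflected through the centroid of the others. The meta-model acceleration proposed in the paper is not included, and all parameter values are conventional defaults, not the paper's settings:

```python
# Basic Complex (Box) method for bound-constrained minimization.
import numpy as np

def complex_method(f, lo, hi, n_pts=8, alpha=1.3, iters=200, seed=0):
    rng = np.random.default_rng(seed)
    P = rng.uniform(lo, hi, size=(n_pts, len(lo)))      # initial population
    F = np.array([f(x) for x in P])
    for _ in range(iters):
        w = np.argmax(F)                                # worst point
        centroid = (P.sum(axis=0) - P[w]) / (n_pts - 1)
        x_new = np.clip(centroid + alpha * (centroid - P[w]), lo, hi)
        f_new = f(x_new)
        while f_new >= F[w]:                            # retreat towards centroid
            x_new = (x_new + centroid) / 2
            f_new = f(x_new)
            if np.allclose(x_new, centroid):
                break
        P[w], F[w] = x_new, f_new
    return P[np.argmin(F)]

rosen = lambda x: (1 - x[0])**2 + 100 * (x[1] - x[0]**2)**2
print(complex_method(rosen, np.array([-2., -2.]), np.array([2., 2.])))
```

Every iteration costs at least one expensive function evaluation, which is exactly what motivates replacing some of these evaluations with cheap local meta-models.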
• 44.
Department of Physiology, Institute of Neuroscience and Physiology, University of Gothenburg, Sweden.
Department of Physiology, Institute of Neuroscience and Physiology, University of Gothenburg, Sweden. Department of Physiology, Institute of Neuroscience and Physiology, University of Gothenburg, Sweden.
A Monte Carlo method for locally multivariate brain mapping.2011In: NeuroImage, ISSN 1053-8119, E-ISSN 1095-9572, Vol. 56, no 2, 508-516 p.Article in journal (Refereed)
Locally multivariate approaches to functional brain mapping offer a highly appealing complement to conventional statistics, but require restrictive region-of-interest hypotheses or, in exhaustive search forms (such as the "searchlight" algorithm; Kriegeskorte et al., 2006), are excessively computer-intensive. We therefore propose a non-restrictive, comparatively fast yet highly sensitive method based on Monte Carlo approximation principles, where locally multivariate maps are computed by averaging across voxelwise condition-discriminative information obtained from repeated stochastic sampling of fixed-size search volumes. On simulated data containing discriminative regions of varying size and contrast-to-noise ratio (CNR), the Monte Carlo method reduced the required computer resources by as much as 75% compared to the searchlight, with no reduction in mapping performance. Notably, the Monte Carlo mapping approach not only outperformed the general linear model (GLM), but also produced higher discriminative voxel detection scores than the searchlight irrespective of classifier (linear or nonlinear support vector machine), discriminative region size or CNR. The improved performance was explained by the information-averaging procedure, and the Monte Carlo approach yielded mapping sensitivities only a few percent lower than an information-average exhaustive search. Finally, we demonstrate the utility of the algorithm on whole-brain, multi-subject functional magnetic resonance imaging (fMRI) data from a tactile study, revealing that the central representation of gentle touch is spatially distributed in somatosensory, insular and visual regions.
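A toy version of this sampling-and-averaging scheme, reduced to a 1-D "brain" with a planted discriminative region and a plain cross-validated linear SVM (assumed simplifications throughout, not the authors' pipeline), could look like this:

```python
# Monte Carlo locally multivariate mapping, toy 1-D version: sample fixed-size
# "search volumes" at random, classify conditions within each, and credit the
# accuracy to every voxel inside; the per-voxel average forms the map.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_trials, n_vox, vol = 80, 60, 7
X = rng.standard_normal((n_trials, n_vox))
y = np.repeat([0, 1], n_trials // 2)
X[y == 1, 25:30] += 0.8                 # planted discriminative voxels 25..29

score_sum, hits = np.zeros(n_vox), np.zeros(n_vox)
for _ in range(300):                    # repeated stochastic sampling
    c = rng.integers(0, n_vox - vol + 1)
    sel = slice(c, c + vol)             # one fixed-size search volume
    acc = cross_val_score(SVC(kernel="linear"), X[:, sel], y, cv=5).mean()
    score_sum[sel] += acc               # voxelwise accumulation of accuracy
    hits[sel] += 1

info_map = score_sum / np.maximum(hits, 1)
print("top voxels:", np.sort(np.argsort(info_map)[-5:]))  # should lie in 25..29
```

The computational saving relative to an exhaustive searchlight comes from stopping the sampling once the per-voxel averages have stabilized, rather than centering a volume on every voxel.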
• 45.
Ulm University, Germany.
National Heart and Lung Institute, Imperial College London, UK . Ulm University, Germany. Prince of Wales Hospital, Chinese University of Hong Kong, China. Centro de Investigaciónes FEPIS, Quinindé, Ecuador. Tallinn Children's Hospital, Tallinn, Estonia. Center of Allergy and Immunology, Tbilisi, Georgia. Hannover Medical School, Germany. Ludwig-Maximilians University, Munich, Germany. Local Health Authority Rome, Italy. Wellington School of Medicine and Health Sciences, New Zealand . Norwegian Institute of Public Health, Oslo, Norway. AL-Quds University, Jerusalem, Palestine. Torrecárdenas Hospital, Almería, Spain. Arrixaca University Children's Hospital and CIBER of Epidemiology and Public Health (CIBERESP), Murcia, Spain. 12 de Octubre Children's Hospital, Madrid, Spain. University of Valencia, Valencia, Spain. Linköping University, Department of Clinical and Experimental Medicine, Allergy Centre. Linköping University, Faculty of Health Sciences. Östergötlands Läns Landsting, Heart and Medicine Centre, Allergy Centre UHL. Sundsvall Hospital, Sweden,. Hacettepe University, Ankara, Turkey. Ulm University, Germany. National Heart and Lung Institute, Imperial College London, UK . St George's, University of London, UK. National Heart and Lung Institute, Imperial College London, UK .
A multi-centre study of candidate genes for wheeze and allergy: the International Study of Asthma and Allergies in Childhood Phase 22009In: Clinical and Experimental Allergy, ISSN 0954-7894, E-ISSN 1365-2222, Vol. 39, no 12, 1875-1888 p.Article in journal (Refereed)
BACKGROUND: Common polymorphisms have been identified in genes suspected to play a role in asthma. We investigated their associations with wheeze and allergy in a case-control sample from Phase 2 of the International Study of Asthma and Allergies in Childhood.
METHODS: We compared 1105 wheezing and 3137 non-wheezing children aged 8-12 years from 17 study centres in 13 countries. Genotyping of 55 candidate single nucleotide polymorphisms (SNPs) in 14 genes was performed using the Sequenom System. Logistic regression models were fitted separately for each centre and each SNP. A combined per allele odds ratio and measures of heterogeneity between centres were derived by random effects meta-analysis.
RESULTS: Significant associations with wheeze in the past year were detected in only four genes (IL4R, TLR4, MS4A2, TLR9, P<0.05), with per allele odds ratios generally <1.3. Variants in IL4R and TLR4 were also related to allergen-specific IgE, while polymorphisms in FCER1B (MS4A2) and TLR9 were not. There were also highly significant associations (P<0.001) between SPINK5 variants and visible eczema (but not IgE levels) and between IL13 variants and total IgE. Heterogeneity of effects across centres was rare, despite differences in allele frequencies.
CONCLUSIONS: Despite the biological plausibility of IgE-related mechanisms in asthma, very few of the tested candidates showed evidence of association with both wheeze and increased IgE levels. We were unable to confirm associations of the positional candidates DPP10 and PHF11 with wheeze, although our study had ample power to detect the expected associations of IL13 variants with IgE and SPINK5 variants with eczema.
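The combining step described under METHODS (per-centre log odds ratios pooled by random effects meta-analysis) can be sketched as follows, here with made-up numbers and a standard DerSimonian-Laird estimate of the between-centre variance, not the study's data or software:

```python
# Random-effects meta-analysis of per-centre log odds ratios (DerSimonian-Laird).
import numpy as np

log_or = np.array([0.22, 0.10, 0.35, -0.05, 0.18])   # per-centre estimates (made up)
se = np.array([0.12, 0.20, 0.15, 0.25, 0.10])        # their standard errors (made up)

w = 1 / se**2                                        # fixed-effect weights
mu_fe = np.sum(w * log_or) / np.sum(w)
Q = np.sum(w * (log_or - mu_fe) ** 2)                # heterogeneity statistic
k = len(log_or)
tau2 = max(0.0, (Q - (k - 1)) / (w.sum() - (w**2).sum() / w.sum()))

w_re = 1 / (se**2 + tau2)                            # random-effects weights
mu_re = np.sum(w_re * log_or) / np.sum(w_re)
print("pooled per-allele OR:", np.exp(mu_re), " tau^2:", tau2, " Q:", Q)
```

Q and tau^2 quantify the heterogeneity of effects across centres which, as the results note, was rare in this study despite differing allele frequencies.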
• 46.
Radboud University of Nijmegen, Netherlands. Radboud University of Nijmegen, Netherlands. Radboud University of Nijmegen, Netherlands. Linköping University, Department of Clinical and Experimental Medicine, Division of Microbiology and Molecular Medicine. Linköping University, Faculty of Medicine and Health Sciences. University of Autonoma Madrid, Spain. University of Autonoma Madrid, Spain. University of Girona, Spain; University of Girona, Spain. Radboud University of Nijmegen, Netherlands; Radboud University of Nijmegen, Netherlands. Radboud University of Nijmegen, Netherlands.
A New Fiji-Based Algorithm That Systematically Quantifies Nine Synaptic Parameters Provides Insights into Drosophila NMJ Morphometry2016In: PloS Computational Biology, ISSN 1553-734X, E-ISSN 1553-7358, Vol. 11, no 3, e1004823Article in journal (Refereed)
The morphology of synapses is of central interest in neuroscience because of the intimate relation with synaptic efficacy. Two decades of gene manipulation studies in different animal models have revealed a repertoire of molecules that contribute to synapse development. However, since such studies often assessed only one, or at best a few, morphological features at a given synapse, it remained unaddressed how different structural aspects relate to one another. Furthermore, such focused and sometimes only qualitative approaches likely left many of the more subtle players unnoticed. Here, we present the image analysis algorithm Drosophila_NMJ_Morphometrics, available as a Fiji-compatible macro, for quantitative, accurate and objective synapse morphometry of the Drosophila larval neuromuscular junction (NMJ), a well-established glutamatergic model synapse. We developed this methodology for semi-automated multiparametric analyses of NMJ terminals immunolabeled for the commonly used markers Dlg1 and Brp and showed that it also works for Hrp, Csp and Syt. We demonstrate that gender, genetic background and identity of abdominal body segment consistently and significantly contribute to variability in our data, suggesting that controlling for these parameters is important to minimize variability in quantitative analyses. Correlation and principal component analyses (PCA) were performed to investigate which morphometric parameters are inter-dependent and which ones are regulated rather independently. Based on nine acquired parameters, we identified five morphometric groups: NMJ size, geometry, muscle size, number of NMJ islands and number of active zones. Based on our finding that the parameters of the first two principal components hardly correlated with each other, we suggest that different molecular processes underlie these two morphometric groups. Our study sets the stage for systems morphometry approaches at the well-studied Drosophila NMJ.
• 47.
Linköping University, Department of Behavioural Sciences and Learning, Education, Teaching and Learning. Linköping University, Faculty of Educational Sciences.
A noisy silence about care: Swedish preschool teachers' talk about documentation (2016). In: Early Years, ISSN 0957-5146, E-ISSN 1472-4421, Vol. 36, no. 1, pp. 4-16. Article in journal (Refereed)
This article investigates what happens to institutional narratives of care in Swedish preschool when a policy on increased documentation is introduced. Questions deal with preschool teachers’ professionalism as expressed through the teachers’ talk about documentation. The analysis is based on theories in education policy, teacher professionalism and institutional narratives. The findings show that the few references made by the teachers to narratives of care are subordinated to narratives of learning. A major conclusion is that narratives of care are in a process of becoming a ‘noisy silence’, which influences teachers’ professionalism as well as shaping our common society.
• 48.
Linköping University, Department of Mathematics, Mathematics and Applied Mathematics. Linköping University, Faculty of Science & Engineering.
A Noncommutative Catenoid (2017). Independent thesis, Basic level (Degree of Bachelor), 10.5 credits / 16 HE credits. Student thesis
Noncommutative geometry generalizes many geometric results from such fields as differential geometry and algebraic geometry to a context where commutativity cannot be assumed. Unfortunately there are few concrete non-trivial examples of noncommutative objects. The aim of this thesis is to construct a noncommutative surface $\mathcal{C}_\hbar$ which will be a generalization of the well known surface called the catenoid. This surface will be constructed using the Diamond lemma, derivations will be constructed over $\mathcal{C}_\hbar$ and a general localization will be provided using the Ore condition.
• 49.
Linköping University, Department of Physics, Chemistry and Biology, Semiconductor Materials. Linköping University, The Institute of Technology.
Linköping University, Department of Physics, Chemistry and Biology (Plasma and Coating Physics; Thin Film Physics); Linköping University, The Institute of Technology.
A novel high-power pulse PECVD method (2012). In: Surface & Coatings Technology, ISSN 0257-8972, E-ISSN 1879-3347, Vol. 206, no. 22, pp. 4562-4566. Article in journal (Refereed)
A novel plasma enhanced CVD (PECVD) technique has been developed in order to combine the energetic particle bombardment and high plasma densities found in ionized PVD with the advantages of PECVD, such as a high deposition rate and the capability to coat complex and porous surfaces. In this PECVD method, an ionized plasma is generated above the substrate by means of a hollow cathode discharge. The hollow cathode is known to generate a highly ionized plasma, and the discharge can be sustained in direct current (DC) mode or in high-power pulsed (HiPP) mode using short pulses of a few tens of microseconds. The latter option is similar to the power scheme used in high power impulse magnetron sputtering (HiPIMS), which is known to generate a high degree of ionization of the sputtered material, thus providing new and added means for the synthesis of tailor-made thin films. In this work, amorphous carbon coatings containing copper have been deposited under both HiPP and DC operating conditions. Investigations of the bulk plasma using optical emission spectroscopy verify the presence of Ar+, C+ as well as Cu+ when running in pulsed mode. Deposition rates in the range of 30 μm/h have been obtained, and the amorphous, copper-containing carbon films have a low hydrogen content of 4-5 at.%. Furthermore, the results presented here suggest that a more efficient PECVD process is obtained by using a superposition of HiPP and DC modes, compared to using only DC mode at the same average input power.
• 50.
Department of Oncology-Pathology, Karolinska Institutet.
Linköping University, Faculty of Health Sciences; Linköping University, Department of Social and Welfare Studies, Äldre - vård - civilsamhälle (ÄVC); Östergötlands Läns Landsting, Local Health Care Services in East Östergötland, Department of Geriatrics and Hospital based homecare VHN; Östergötlands Läns Landsting, Local Health Care Services in Central Östergötland, Department of Geriatric and Hospital Based Homecare UHL; Department of Oncology-Pathology, Karolinska Institutet.
A one-day education in soft tissue massage: Experiences and opinions as evaluated by nursing staff in palliative care (2008). In: Palliative & Supportive Care, ISSN 1478-9515, Vol. 6, no. 2, pp. 141-148. Article in journal (Refereed)
Objective: Increasing awareness of the well-being aspects of physical touch has spurred the appreciation for soft tissue massage (STM) as part of palliative care. Educational programs are available but with no specific focus on utilization for this kind of care. The aim was to study the feasibility of a 1-day course in STM in clarifying nursing staff's experiences and opinions, but also to shed light on their motivation and ability to employ STM in the care of dying cancer patients. Method: In all, 135 nursing staff participated. The course consisted of theory and hands-on training (hand, foot and back massage). Focus groups with 30/135 randomly chosen participants were conducted 4 weeks after the intervention. This study engaged a qualitative approach using content analysis. Results: The overall opinion of the 1-day course was positive. The majority experienced the contents of the course to be adequate and sufficient for clinical care. They emphasized the pedagogical expertise as valuable for the learning process. The majority of nurses shared the opinion that their extended knowledge clarified their attitudes on STM as a complement in palliative care. Still, a few found it to be too basic and/or intimate. Three categories emerged during the analysis: experiences of and attitudes toward the education, experiences of implementing the skills in everyday care situations, and attitudes to the physical body in nursing care. Significance of results: The approach to learning and the pedagogical skills of the teacher proved to be of importance for how new knowledge was perceived among nurses. The findings may encourage hospital organizations to introduce short courses in STM as an alternative to more extensive education. Copyright © 2008 Cambridge University Press.
http://www.spkx.net.cn/CN/10.7506/spkx1002-6630-20180514-194
• Food Chemistry •
### Immobilization of Phospholipase A1 into Epoxy-Modified Magnetic Microspheres
1. (School of Food Science and Engineering, Hefei University of Technology, Key Laboratory for Agriculture Products Processing of Anhui Province, Hefei 230009, Anhui, China)
• Online: 2019-01-25; Published: 2019-01-22
• Funding:
National Key Research and Development Program of China, 13th Five-Year Plan key project (2016YFD0401401-2)
### Immobilization of Phospholipase A1 into Epoxy-Modified Magnetic Microspheres
BAO Sai, CAO Lili, PANG Min, PAN Lijun, HOU Zhigang, SHUI Longlong, LI Jinhong, JIANG Shaotong*
1. (Key Laboratory for Agriculture Products Processing of Anhui Province, School of Food Science and Engineering, Hefei University of Technology, Hefei 230009, China)
• Online:2019-01-25 Published:2019-01-22
Abstract: The epoxy-modified Fe3O4 magnetic microspheres prepared in our laboratory were used as the carrier to immobilize phospholipase A1 (PLA1). Response surface methodology was used to optimize the immobilization conditions, and the properties of the immobilized enzyme were then studied. The results showed that the optimal immobilization conditions were as follows: buffer pH 4.0, reaction time 1.9 h and enzyme dosage 3.6 mL/g. The activity of the immobilized enzyme obtained under these conditions was 3 675 U/g with an immobilization efficiency of 61.1%. Compared with the free enzyme, the optimal reaction pH of the immobilized enzyme was shifted toward the alkaline side by 1.0 unit, and its optimal reaction temperature was increased by 5 ℃. The storage stability of the immobilized enzyme was increased as well, and it retained about 81.1% of its initial activity after the 8th use for rapeseed oil degumming. Moreover, modern analytical methods such as X-ray diffraction (XRD), attenuated total reflection Fourier-transform infrared spectroscopy (ATR-FTIR), scanning electron microscopy (SEM) and transmission electron microscopy (TEM) were used to characterize the carrier. The results showed that the nano-sized microspheres were successfully prepared with epoxy-modified surfaces.
http://tcms.org.ge/Journals/JHRS/instruct.htm
# Submitting a paper to Journal of Homotopy and Related Structures:
prepare your paper as a single (uncompressed) PDF file and send it to the responsible (receiving) editor using our online submission form. Apart from the PDF file, the form asks for the following information:
• Title of the article
• List of keywords
• Abstract of the paper (see note below)
• Suggestion(s) for an appropriate responsible (receiving) editor (consult the list of editors and their interests)
When you have this information ready then fill in the online submission form.
We shall also need a signed copyright declaration form which must be sent to us by ordinary mail or by fax.
## NOTES
• Your submitted abstract should be the same as the one which appears in your paper.
• The submission form is intended only for the first submission of a paper. If you are submitting a revised version, or your source files (for publication), then email the files either to the responsible (receiving) editor or to the Editor-in-Chief as an appropriate attachment.
• If for any reason you cannot use our online submission form, then email the required data to the responsible (receiving) editor.
# Preparing an article to Journal of Homotopy and Related Structures:
TeX source files of articles may be submitted in English, French or German by email to any member of the Editorial Board. Editors may also provide facilities for receiving articles by file transfer - contact your Editor to enquire. If your chosen language is not English, then we will need an additional English version of your abstract.
LaTeX version 2e IS THE PREFERRED FORMAT, and its use will facilitate consideration of an article, but papers in other standard TeX flavours will be considered.
A class file is available from the journal and its use during preparation of articles is encouraged. Whether or not the journal's style is used, and whatever the flavour of TeX, authors should define a macro for each proclaimed structure, e.g. \definition, \proposition, \theorem. These macros will be replaced by the journal's own.
Accepted articles will be archived in the style of the journal. Authors should note that the base font size is 10 point. Please avoid use of elements which depend on another base font size. Also avoid any absolute moves such as "\vskip 10 pt" as the journal's pagination will differ from yours. As a general rule less formatting is better.
An article is normally submitted as a single source file. All macros must be included with the source file and are the responsibility of the author. Macros should be placed at the beginning of the file. It is also helpful if any macro that is not actually used is deleted from the source file. The only exception is diagram macro packages. The currently acceptable macro packages are those authored by Barr, Borceux, Rose (XY-pic) and Taylor. The author is responsible for ensuring that the current version of a macro package has been used. Editors are not permitted to correct errors in TeX source.
If BibTeX is used with LaTeX, bibliographies must be processed with BibTeX and the resulting .bbl files appended to the source file. No .bib files will be accepted. References may use the standard BibTeX styles or the Harvard style.
Authors are STRONGLY encouraged to use only fonts from the standard TeX distribution (cmr etc.), AMS symbol fonts (msym...) or the XY-pic fonts. They should be aware that use of non-standard fonts can interfere with successful dissemination of their work. Fonts not mentioned above must be provided by the author to the Editor concerned and to the journal. Embedded graphics or Postscript are not currently accepted.
The source file for an article must begin with a comment which includes the following information: the flavour of TeX used, the number of pages, any diagram macro package used, any non-standard fonts, the implementation of TeX used in preparation of the article. An example might be
% LaTeX-2e document, 12 pp, XY-pic ver 3.1, emTeX version 3.1
## ARCHIVE POLICY
The primary archive of "Journal of Homotopy and Related Structures" (JHRS) is a set of electronic files in TeX source, dvi and pk font formats. The source files contain all articles accepted in JHRS and any macro files required to typeset them. The dvi files are those produced from the articles by the TeX program. Font files to allow printing of accepted articles must also be maintained. This archive is the property of the Editorial Board of JHRS and is maintained by the Editor-in-Chief.
## STYLE FILES
The LaTeX-2e class file for the journal and a sample file are available here. The class file is a modification of, and almost identical to, the file jcm.cls created by David Carlisle for the "Journal of Computation Mathematics" of the London Mathematical Society. Also available are hints on how to use the style file.
## (La)TeX macros for diagrams
The links here are to macro packages (and their documentation) intended to facilitate creation of diagrams in (La)TeX. A general index of TeX macro packages is maintained by David M. Jones in the pub/tex directory at theory.lcs.mit.edu
The links below are to appropriate directories at the ftp.shsu.edu CTAN (Comprehensive TeX Archive Network) site. If you are not in North America, you may retrieve the files more quickly by connecting to another CTAN site closer to you. Try ftp.dante.de or ftp.tex.ac.uk. The packages are:
• A LaTeX diagram macro package by Michael Barr, based on picture mode.
• A LaTeX diagram macro package by Francis Borceux, based on matrices.
• The XY-pic diagram macro package by Kris Rose.
• A diagram macro package by Paul Taylor, based on matrices.
https://isoprocessor.kopflab.org/reference/iso_unnest_calibration_parameters.html
Convenience function to unnest both calibration coefficients (iso_unnest_calibration_coefs) and calibration summary (iso_unnest_calibration_summary) columns in a single step.
iso_unnest_calibration_parameters(dt, calibration = "",
select_from_coefs = everything(), select_from_summary = everything(),
keep_remaining_nested_data = FALSE, keep_other_list_data = TRUE)
## Arguments
• dt: nested data table with column all_data (see iso_prepare_for_calibration)
• calibration: an informative name for the calibration (could be e.g. "d13C" or "conc"). If provided, it will be used as a prefix for the new columns generated by this function. This parameter is most useful if there are multiple variables in the data set that need to be calibrated (e.g. multiple delta values, concentration, etc.). If there is only a single variable to calibrate, the calibration parameter is completely optional and can just be left blank (the default).
• select_from_coefs: which columns from the fit coefficients to include; supports full dplyr syntax including renaming
• select_from_summary: which columns from the fit summary to include; supports full dplyr syntax including renaming
• keep_remaining_nested_data: whether to keep any remaining parts of the partially unnested data (irrelevant if select = everything())
• keep_other_list_data: whether to keep other list data columns (e.g. other data or model columns)
https://www.freemathhelp.com/forum/threads/where-did-this-term-go.115170/
# where did this term go?
#### allegansveritatem
##### Full Member
I was looking at a method for deriving the quadratic formula from the form ax^2+bx+c=0, but got brought up short when I couldn't account for the whereabouts of a term. Here is the book's presentation:
Now, what is puzzling me is this: Where does the b/a times x go after the 4th equals sign? I mean, between the 4th and the fifth line of this proof, the b/a times x seems to fall out of the world. What am I missing? I tried several times to derive this formula for myself and came up with some really exotic expressions.
#### Dr.Peterson
##### Elite Member
That's where they completed the square.
Look at it backward, starting from the LHS of the fifth line and expanding it. You'll get the LHS of the fourth line.
In effect, they are applying the fact that $$\displaystyle (a+b)^2 = a^2 + 2ab + b^2$$, but in reverse: $$\displaystyle a^2 + 2ab + b^2 \rightarrow (a+b)^2$$.
#### JeffM
##### Elite Member
I would do it a slightly different way.
Starting at line 3
$$\displaystyle x^2 + \dfrac{b}{a} * x = -\ \dfrac{c}{a} \implies$$
$$\displaystyle x^2 + 2 * \dfrac{b}{2a} * x = -\ \dfrac{c}{a} \implies$$
$$\displaystyle x^2 + 2 * \dfrac{b}{2a} * x + \left ( \dfrac{b}{2a} \right )^2 = \left ( \dfrac{b}{2a} \right )^2 - \dfrac{c}{a} \implies$$
$$\displaystyle \left(x + \dfrac{b}{2a} \right ) \left(x + \dfrac{b}{2a} \right ) = \left ( \dfrac{b}{2a} \right )^2 - \dfrac{c}{a} \implies$$
$$\displaystyle \left (x + \dfrac{b}{2a} \right )^2 = \left ( \dfrac{b}{2a} \right )^2 - \dfrac{c}{a} \implies$$
$$\displaystyle \left (x + \dfrac{b}{2a} \right )^2 = \dfrac{b^2}{4a^2} - \dfrac{c}{a} \implies$$
$$\displaystyle \left ( x + \dfrac{b}{2a} \right )^2 = \dfrac{b^2}{4a^2} - \dfrac{4ac}{4a^2} \implies$$
$$\displaystyle \left ( x + \dfrac{b}{2a} \right )^2 = \dfrac{b^2 - 4ac}{4a^2} \implies$$
$$\displaystyle x + \dfrac{b}{2a} = \pm \ \sqrt{\dfrac{b^2 - 4ac}{4a^2}} \implies$$
$$\displaystyle x + \dfrac{b}{2a} = \pm \ \dfrac{\sqrt{b^2 - 4ac}}{\sqrt{4a^2}} \implies$$
$$\displaystyle x + \dfrac{b}{2a} = \dfrac{\pm \ \sqrt{b^2 - 4ac}}{2a} \implies$$
$$\displaystyle x = \dfrac{-\ b \pm \sqrt{b^2 - 4ac}}{2a}.$$
It's the same logic involving $$\displaystyle (u + v)^2 = u^2 + 2uv + v^2$$,
but we now see why we go $$\displaystyle \dfrac{b}{a} * x = \dfrac{2}{2} * \dfrac{b}{a} * x = 2 * \dfrac{b}{2a} * x.$$
That is to get the 2uv term. And we add $$\displaystyle \left ( \dfrac{b}{2a} \right )^2$$
to get the v^2 term.
#### allegansveritatem
##### Full Member
That's where they completed the square.
Look at it backward, starting from the LHS of the fifth line and expanding it. You'll get the LHS of the fourth line.
In effect, they are applying the fact that $$\displaystyle (a+b)^2 = a^2 + 2ab + b^2$$, but in reverse: $$\displaystyle a^2 + 2ab + b^2 \rightarrow (a+b)^2$$.
I don't have a pen with me or I would immediately check what you say here. But, eyeballing it, I think you are on to it. Thanks
#### allegansveritatem
##### Full Member
I would do it a slightly different way.
Starting at line 3
$$\displaystyle x^2 + \dfrac{b}{a} * x = -\ \dfrac{c}{a} \implies$$
$$\displaystyle x^2 + 2 * \dfrac{b}{2a} * x = -\ \dfrac{c}{a} \implies$$
$$\displaystyle x^2 + 2 * \dfrac{b}{2a} * x + \left ( \dfrac{b}{2a} \right )^2 = \left ( \dfrac{b}{2a} \right )^2 - \dfrac{c}{a} \implies$$
$$\displaystyle \left(x + \dfrac{b}{2a} \right ) \left(x + \dfrac{b}{2a} \right ) = \left ( \dfrac{b}{2a} \right )^2 - \dfrac{c}{a} \implies$$
$$\displaystyle \left (x + \dfrac{b}{2a} \right )^2 = \left ( \dfrac{b}{2a} \right )^2 - \dfrac{c}{a} \implies$$
$$\displaystyle \left (x + \dfrac{b}{2a} \right )^2 = \dfrac{b^2}{4a^2} - \dfrac{c}{a} \implies$$
$$\displaystyle \left ( x + \dfrac{b}{2a} \right )^2 = \dfrac{b^2}{4a^2} - \dfrac{4ac}{4a^2} \implies$$
$$\displaystyle \left ( x + \dfrac{b}{2a} \right )^2 = \dfrac{b^2 - 4ac}{4a^2} \implies$$
$$\displaystyle x + \dfrac{b}{2a} = \pm \ \sqrt{\dfrac{b^2 - 4ac}{4a^2}} \implies$$
$$\displaystyle x + \dfrac{b}{2a} = \pm \ \dfrac{\sqrt{b^2 - 4ac}}{\sqrt{4a^2}} \implies$$
$$\displaystyle x + \dfrac{b}{2a} = \dfrac{\pm \ \sqrt{b^2 - 4ac}}{2a} \implies$$
$$\displaystyle x = \dfrac{-\ b \pm \sqrt{b^2 - 4ac}}{2a}.$$
It's the same logic involving $$\displaystyle (u + v)^2 = u^2 + 2uv + v^2$$,
but we now see why we go $$\displaystyle \dfrac{b}{a} * x = \dfrac{2}{2} * \dfrac{b}{a} * x = 2 * \dfrac{b}{2a} * x.$$
That is to get the 2uv term. And we add $$\displaystyle \left ( \dfrac{b}{2a} \right )^2$$
to get the v^2 term.
very pretty. I think my problem, as Dr. P points out above, has to do with the fact that the LHS is a perfect square and therefore expressible in the way the book expressed it.
#### Dr.Peterson
##### Elite Member
Have you learned how to complete the square in a case with numerical coefficients, like x^2 - 6x + 4 = 0? That's an important skill, and is what they used here with parameters. I don't know what the asterisk in your original image refers to, but you might want to check it out; if you skipped over that section of the book earlier, be sure to go back there and master it. You will be seeing it again, and again!
Even more generally, you need to be able to recognize a perfect square when you see it.
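For that numerical case, completing the square runs as follows (a worked line for reference: half of $$\displaystyle -6$$ is $$\displaystyle -3$$, and $$\displaystyle (-3)^2 = 9$$):
$$\displaystyle x^2 - 6x + 4 = 0 \implies x^2 - 6x + 9 = 5 \implies (x - 3)^2 = 5 \implies x = 3 \pm \sqrt{5}.$$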
#### Denis
##### Senior Member
Become an "expert":
#### allegansveritatem
##### Full Member
Have you learned how to complete the square in a case with numerical coefficients, like x^2 - 6x + 4 = 0? That's an important skill, and is what they used here with parameters. I don't know what the asterisk in your original image refers to, but you might want to check it out; if you skipped over that section of the book earlier, be sure to go back there and master it. You will be seeing it again, and again!
Even more generally, you need to be able to recognize a perfect square when you see it.
I NEVER skip anything. I do know about completing the square and have used the method many times...but sometimes I go technique-blind.
#### allegansveritatem
##### Full Member
Become an "expert":
It may not look like it from this post, but I know this technique and have used it many times.
#### Denis
##### Senior Member
Good...I'm proud of you
https://physics.stackexchange.com/questions/177918/confusion-regarding-latent-heat-of-fusion
# Confusion regarding latent heat of fusion
During vaporizing there is higher increase in internal energy (higher positive $\Delta U$) and more work is done by the liquid (higher $W$) as molecules become widely separated.
During melting, there is small increase in internal energy (smaller positive $\Delta U$) and less work is done by the solid (smaller $W$) as there is less difference in the molecule separation relatively.
According to $Q=\Delta U - W$, why is the specific latent heat of vaporization greater than that of fusion? In both cases $Q$ works out to be same according to above statements?
• Are you essentially asking why $L_{v} > L_{f}$? What does "since more minus more = less and less minus less = less" mean? Can you clarify? – Gowtham Apr 23 '15 at 8:01
• Improved the question – PdX Apr 23 '15 at 8:22
Let me clear a few things up first; the latent heat of fusion is the energy required to convert a substance from solid form to a liquid form. Since water is liquid at room temperature, the latent heat of fusion is positive as energy is absorbed to convert ice to water, just as energy is released when water is converted to ice. It sounds counter-intuitive, I know, but release could also mean 'sucked away', which is exactly what is done in a freezer.
Now, molecules are close together in solids, a little less close together in liquids, and extremely far apart in gases. There is a lot of cohesion in both ice and liquid water, but very little in water vapor. The great difference between the energy required to melt ice and that required to form water vapor arises because a lot of energy is needed to overcome this cohesion.
This cohesion occurs because of the polarity of water molecules. This video should help you out!
• I know it is greater, but the thermodynamic law equation confused me, and you haven't answered in terms of that – PdX Apr 23 '15 at 17:29
I was simply confused by the sign convention in the first law of thermodynamics.
The most convenient way to write it is $Q=\Delta U + W$ where $W$ is the work done by the system.
Hence the answer to my question becomes clear after this.
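Spelled out with that convention (a sketch of the comparison, using $W = P\,\Delta V$ at constant pressure):
$$L = Q = \Delta U + P\,\Delta V$$
For vaporization both $\Delta U$ and $P\,\Delta V$ are much larger than for melting, so $L_v = \Delta U_{vap} + P\,\Delta V_{vap} > L_f = \Delta U_{fus} + P\,\Delta V_{fus}$.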
It should be $\Delta U = q \pm w$, where $\pm$ means plus or minus, like the sign used in the quadratic formula. The sign of $w$ is determined by the type of work done, i.e. compression ($+w$) or expansion work ($-w$).
https://www.lmfdb.org/ModularForm/GL2/Q/holomorphic/819/2/bm/a/
# Properties
• Label: 819.2.bm.a
• Level: $819$
• Weight: $2$
• Character orbit: 819.bm
• Analytic conductor: $6.540$
• Analytic rank: $0$
• Dimension: $2$
• CM: no
• Inner twists: $2$
## Newspace parameters
• Level: $$N = 819 = 3^{2} \cdot 7 \cdot 13$$
• Weight: $$k = 2$$
• Character orbit: $$[\chi] =$$ 819.bm (of order $$6$$, degree $$2$$, minimal)
## Newform invariants
• Self dual: no
• Analytic conductor: $$6.53974792554$$
• Analytic rank: $$0$$
• Dimension: $$2$$
• Coefficient field: $$\Q(\sqrt{-3})$$
• Defining polynomial: $$x^{2} - x + 1$$
• Coefficient ring: $$\Z[a_1, \ldots, a_{5}]$$
• Coefficient ring index: $$1$$
• Twist minimal: no (minimal twist has level 91)
• Sato-Tate group: $\mathrm{SU}(2)[C_{6}]$
## $q$-expansion
Coefficients of the $$q$$-expansion are expressed in terms of a primitive root of unity $$\zeta_{6}$$. We also show the integral $$q$$-expansion of the trace form.
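Here $$\zeta_{6}$$ is a root of the defining polynomial $$x^{2} - x + 1$$ above, so concretely
$$\zeta_{6} = e^{\pi i/3} = \tfrac{1}{2} + \tfrac{\sqrt{3}}{2}\, i \approx 0.5 + 0.866025\, i,$$
and the two complex embeddings of the coefficient field send $$\zeta_{6}$$ to $$\tfrac{1}{2} \pm \tfrac{\sqrt{3}}{2}\, i$$; for example, $$a_{2} = -1 + 2\zeta_{6} = \pm\sqrt{3}\, i \approx \pm 1.73205\, i$$, as in the embeddings table below.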
$$f(q)$$ $$=$$ $$q + ( -1 + 2 \zeta_{6} ) q^{2} - q^{4} + ( -2 + \zeta_{6} ) q^{5} + ( -3 + 2 \zeta_{6} ) q^{7} + ( -1 + 2 \zeta_{6} ) q^{8} +O(q^{10})$$ $$q + ( -1 + 2 \zeta_{6} ) q^{2} - q^{4} + ( -2 + \zeta_{6} ) q^{5} + ( -3 + 2 \zeta_{6} ) q^{7} + ( -1 + 2 \zeta_{6} ) q^{8} -3 \zeta_{6} q^{10} + ( 6 - 3 \zeta_{6} ) q^{11} + ( -3 + 4 \zeta_{6} ) q^{13} + ( -1 - 4 \zeta_{6} ) q^{14} -5 q^{16} -6 q^{17} + ( -1 - \zeta_{6} ) q^{19} + ( 2 - \zeta_{6} ) q^{20} + 9 \zeta_{6} q^{22} + ( -2 + 2 \zeta_{6} ) q^{25} + ( -5 - 2 \zeta_{6} ) q^{26} + ( 3 - 2 \zeta_{6} ) q^{28} + ( 3 - 3 \zeta_{6} ) q^{29} + ( -1 - \zeta_{6} ) q^{31} + ( 3 - 6 \zeta_{6} ) q^{32} + ( 6 - 12 \zeta_{6} ) q^{34} + ( 4 - 5 \zeta_{6} ) q^{35} + ( 3 - 3 \zeta_{6} ) q^{38} -3 \zeta_{6} q^{40} + ( 3 + 3 \zeta_{6} ) q^{41} -11 \zeta_{6} q^{43} + ( -6 + 3 \zeta_{6} ) q^{44} + ( -10 + 5 \zeta_{6} ) q^{47} + ( 5 - 8 \zeta_{6} ) q^{49} + ( -2 - 2 \zeta_{6} ) q^{50} + ( 3 - 4 \zeta_{6} ) q^{52} + ( -9 + 9 \zeta_{6} ) q^{53} + ( -9 + 9 \zeta_{6} ) q^{55} + ( -1 - 4 \zeta_{6} ) q^{56} + ( 3 + 3 \zeta_{6} ) q^{58} + ( 2 - 4 \zeta_{6} ) q^{59} + ( -7 + 7 \zeta_{6} ) q^{61} + ( 3 - 3 \zeta_{6} ) q^{62} - q^{64} + ( 2 - 7 \zeta_{6} ) q^{65} + ( 10 - 5 \zeta_{6} ) q^{67} + 6 q^{68} + ( 6 + 3 \zeta_{6} ) q^{70} + ( -2 + \zeta_{6} ) q^{71} + ( 5 + 5 \zeta_{6} ) q^{73} + ( 1 + \zeta_{6} ) q^{76} + ( -12 + 15 \zeta_{6} ) q^{77} + 5 \zeta_{6} q^{79} + ( 10 - 5 \zeta_{6} ) q^{80} + ( -9 + 9 \zeta_{6} ) q^{82} + ( -2 + 4 \zeta_{6} ) q^{83} + ( 12 - 6 \zeta_{6} ) q^{85} + ( 22 - 11 \zeta_{6} ) q^{86} + 9 \zeta_{6} q^{88} + ( -4 + 8 \zeta_{6} ) q^{89} + ( 1 - 10 \zeta_{6} ) q^{91} -15 \zeta_{6} q^{94} + 3 q^{95} + ( -6 + 3 \zeta_{6} ) q^{97} + ( 11 + 2 \zeta_{6} ) q^{98} +O(q^{100})$$ $$\operatorname{Tr}(f)(q)$$ $$=$$ $$2 q - 2 q^{4} - 3 q^{5} - 4 q^{7} + O(q^{10})$$ $$2 q - 2 q^{4} - 3 q^{5} - 4 q^{7} - 3 q^{10} + 9 q^{11} - 2 q^{13} - 6 q^{14} - 10 q^{16} - 12 q^{17} - 3 q^{19} + 3 q^{20} + 9 q^{22} - 2 q^{25} - 12 q^{26} + 4 q^{28} + 3 q^{29} - 3 q^{31} + 3 q^{35} + 3 q^{38} - 3 q^{40} + 9 q^{41} - 11 q^{43} - 9 q^{44} - 15 q^{47} + 2 q^{49} - 6 q^{50} + 2 q^{52} - 9 q^{53} - 9 q^{55} - 6 q^{56} + 9 q^{58} - 7 q^{61} + 3 q^{62} - 2 q^{64} - 3 q^{65} + 15 q^{67} + 12 q^{68} + 15 q^{70} - 3 q^{71} + 15 q^{73} + 3 q^{76} - 9 q^{77} + 5 q^{79} + 15 q^{80} - 9 q^{82} + 18 q^{85} + 33 q^{86} + 9 q^{88} - 8 q^{91} - 15 q^{94} + 6 q^{95} - 9 q^{97} + 24 q^{98} + O(q^{100})$$
## Character values
We give the values of $$\chi$$ on generators for $$\left(\mathbb{Z}/819\mathbb{Z}\right)^\times$$.
| $$n$$ | $$92$$ | $$379$$ | $$703$$ |
| --- | --- | --- | --- |
| $$\chi(n)$$ | $$1$$ | $$\zeta_{6}$$ | $$-\zeta_{6}$$ |
## Embeddings
For each embedding $$\iota_m$$ of the coefficient field, the values $$\iota_m(a_n)$$ are shown below.
| Label | $$\iota_m(\nu)$$ | $$a_{2}$$ | $$a_{3}$$ | $$a_{4}$$ | $$a_{5}$$ | $$a_{6}$$ | $$a_{7}$$ | $$a_{8}$$ | $$a_{9}$$ | $$a_{10}$$ |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 478.1 | 0.5 − 0.866025i | −1.73205i | 0 | −1.00000 | −1.50000 − 0.866025i | 0 | −2.00000 − 1.73205i | −1.73205i | 0 | −1.50000 + 2.59808i |
| 550.1 | 0.5 + 0.866025i | 1.73205i | 0 | −1.00000 | −1.50000 + 0.866025i | 0 | −2.00000 + 1.73205i | 1.73205i | 0 | −1.50000 − 2.59808i |
## Inner twists
| Char | Parity | Ord | Mult | Type |
| --- | --- | --- | --- | --- |
| 1.a | even | 1 | 1 | trivial |
| 91.k | even | 6 | 1 | inner |
## Twists
By twisting character orbit
| Char | Parity | Ord | Mult | Type | Twist | Min | Dim |
| --- | --- | --- | --- | --- | --- | --- | --- |
| 1.a | even | 1 | 1 | trivial | 819.2.bm.a | | 2 |
| 3.b | odd | 2 | 1 | | 91.2.k.a | | 2 |
| 7.c | even | 3 | 1 | | 819.2.do.c | | 2 |
| 13.e | even | 6 | 1 | | 819.2.do.c | | 2 |
| 21.c | even | 2 | 1 | | 637.2.k.b | | 2 |
| 21.g | even | 6 | 1 | | 637.2.q.b | | 2 |
| 21.g | even | 6 | 1 | | 637.2.u.a | | 2 |
| 21.h | odd | 6 | 1 | | 91.2.u.a | yes | 2 |
| 21.h | odd | 6 | 1 | | 637.2.q.c | | 2 |
| 39.h | odd | 6 | 1 | | 91.2.u.a | yes | 2 |
| 39.k | even | 12 | 2 | | 1183.2.e.e | | 4 |
| 91.k | even | 6 | 1 | inner | 819.2.bm.a | | 2 |
| 273.u | even | 6 | 1 | | 637.2.u.a | | 2 |
| 273.x | odd | 6 | 1 | | 637.2.q.c | | 2 |
| 273.y | even | 6 | 1 | | 637.2.q.b | | 2 |
| 273.bp | odd | 6 | 1 | | 91.2.k.a | | 2 |
| 273.br | even | 6 | 1 | | 637.2.k.b | | 2 |
| 273.bs | odd | 12 | 2 | | 8281.2.a.w | | 2 |
| 273.bv | even | 12 | 2 | | 8281.2.a.s | | 2 |
| 273.bw | even | 12 | 2 | | 1183.2.e.e | | 4 |
By twisted newform orbit
| Twist | Min | Dim | Char | Parity | Ord | Mult | Type |
| --- | --- | --- | --- | --- | --- | --- | --- |
| 91.2.k.a | | 2 | 3.b | odd | 2 | 1 | |
| 91.2.k.a | | 2 | 273.bp | odd | 6 | 1 | |
| 91.2.u.a | yes | 2 | 21.h | odd | 6 | 1 | |
| 91.2.u.a | yes | 2 | 39.h | odd | 6 | 1 | |
| 637.2.k.b | | 2 | 21.c | even | 2 | 1 | |
| 637.2.k.b | | 2 | 273.br | even | 6 | 1 | |
| 637.2.q.b | | 2 | 21.g | even | 6 | 1 | |
| 637.2.q.b | | 2 | 273.y | even | 6 | 1 | |
| 637.2.q.c | | 2 | 21.h | odd | 6 | 1 | |
| 637.2.q.c | | 2 | 273.x | odd | 6 | 1 | |
| 637.2.u.a | | 2 | 21.g | even | 6 | 1 | |
| 637.2.u.a | | 2 | 273.u | even | 6 | 1 | |
| 819.2.bm.a | | 2 | 1.a | even | 1 | 1 | trivial |
| 819.2.bm.a | | 2 | 91.k | even | 6 | 1 | inner |
| 819.2.do.c | | 2 | 7.c | even | 3 | 1 | |
| 819.2.do.c | | 2 | 13.e | even | 6 | 1 | |
| 1183.2.e.e | | 4 | 39.k | even | 12 | 2 | |
| 1183.2.e.e | | 4 | 273.bw | even | 12 | 2 | |
| 8281.2.a.s | | 2 | 273.bv | even | 12 | 2 | |
| 8281.2.a.w | | 2 | 273.bs | odd | 12 | 2 | |
## Hecke kernels
This newform subspace can be constructed as the intersection of the kernels of the following linear operators acting on $$S_{2}^{\mathrm{new}}(819, [\chi])$$:
$$T_{2}^{2} + 3$$ $$T_{5}^{2} + 3 T_{5} + 3$$
## Hecke characteristic polynomials
| $p$ | $F_p(T)$ |
| --- | --- |
| $2$ | $$3 + T^{2}$$ |
| $3$ | $$T^{2}$$ |
| $5$ | $$3 + 3 T + T^{2}$$ |
| $7$ | $$7 + 4 T + T^{2}$$ |
| $11$ | $$27 - 9 T + T^{2}$$ |
| $13$ | $$13 + 2 T + T^{2}$$ |
| $17$ | $$( 6 + T )^{2}$$ |
| $19$ | $$3 + 3 T + T^{2}$$ |
| $23$ | $$T^{2}$$ |
| $29$ | $$9 - 3 T + T^{2}$$ |
| $31$ | $$3 + 3 T + T^{2}$$ |
| $37$ | $$T^{2}$$ |
| $41$ | $$27 - 9 T + T^{2}$$ |
| $43$ | $$121 + 11 T + T^{2}$$ |
| $47$ | $$75 + 15 T + T^{2}$$ |
| $53$ | $$81 + 9 T + T^{2}$$ |
| $59$ | $$12 + T^{2}$$ |
| $61$ | $$49 + 7 T + T^{2}$$ |
| $67$ | $$75 - 15 T + T^{2}$$ |
| $71$ | $$3 + 3 T + T^{2}$$ |
| $73$ | $$75 - 15 T + T^{2}$$ |
| $79$ | $$25 - 5 T + T^{2}$$ |
| $83$ | $$12 + T^{2}$$ |
| $89$ | $$48 + T^{2}$$ |
| $97$ | $$27 + 9 T + T^{2}$$ |
http://tug.org/pipermail/texhax/2009-October/013470.html
# [texhax] help with identifying some macros
P. R. Stanley prstanley at ntlworld.com
Thu Oct 15 22:49:42 CEST 2009
Hi folks
\strut, \vtop and lblot. I'd be grateful for a brief description of
each, more precisely, the general effect on the presentation.
Here's an example of two of them in use taken from The Z Notation: a
Reference Manual by Michael Spivey.
$birthday$:
known = \{\,{\rm John, Mike, Susan}\,\} \\
\also
birthday = \{\,\vtop{\halign{\strut#\hfil&{}\mapsto{}#\hfil\cr
John& 25--Mar,\cr
Mike& 20--Dec,\cr
Susan& 20--Dec\,\}.\cr}}
Truth be told I've seldom come across anything so cryptic. For
example, what does the "#" signify?
Any help would be most appreciated.
https://math.stackexchange.com/questions/762525/show-that-s1-lbrace-1-0-rbrace-is-homeomorphic-to-the-open-interval-0
# Show that $S^1 - \lbrace (1,0)\rbrace$ is homeomorphic to the open interval $(0,1)$
Let $S^1$ be the unit circle in the plane, that is, $S^1= \lbrace (x,y) : x^2+y^2=1 \rbrace$ with the subspace topology. Show that $S^1 - \lbrace (1,0)\rbrace$ is homeomorphic to the open interval $(0,1)$.
I have an idea. Using this theorem:
Let $f:(X, \tau_1) \rightarrow (Y, \tau_2)$ be a homeomorphism. Let $a \in X$, so that $X - \lbrace a \rbrace$ is a subspace of $X$ and has induced topology $\tau_3$. Also, $Y-\lbrace f(a)\rbrace$ is subspace of $Y$ and has induced topology $\tau_4$. Then $(X - \lbrace a \rbrace, \tau_3)$ is homeomorphic to $(Y-\lbrace f(a)\rbrace, \tau_4)$. (Morris, "Topology without tears", remark 4.3.6)
My idea is to make $X=S^1$ and $Y=(0,1) \cup ?$ so, with the respective topologies using the theorem, $S^1-\lbrace (1,0)\rbrace$ is homeomorphic to $(0,1)$. But I'm stuck in the part of defining $Y$ and find the homeomorphism $f$ between $X$ and $Y$.
P.S. Sorry about my English; it is not my native language.
• The issue with applying the result you mention is that any space $Y$ that will work will be "complicated" (i.e. it will just be $(0,1) \cup \{p\}$ for some abstract point $p$, and the topology on $Y$ will take some effort to define). You cannot, for example, take $p$ to be in $\mathbb{R}$ and the topology on $Y$ to be the usual subspace topology inherited from $\mathbb{R}$. This is probably why you are getting stuck. It is easier to approach this problem directly. Look for a function $(0,1)$ to $S^1 \setminus \{(1,0)\}$ that might be a homeomorphism. (Hint: trig functions.) – leslie townes Apr 21 '14 at 4:04
• Oh, of course... Another idea just came to me. – SirWeigel Apr 21 '14 at 4:35
Let $f: (0, 2\pi) \rightarrow S^1-\lbrace (1,0) \rbrace$ be defined by $f(\phi)=(\cos\phi,\sin\phi)$ for $\phi \in (0, 2\pi)$. This is a homeomorphism, and $(0, 2\pi)$ is homeomorphic to $(0,1)$.
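Composing with the scaling homeomorphism $t \mapsto 2\pi t$ makes this explicit for the original statement:
$$g : (0,1) \to S^1 - \lbrace (1,0) \rbrace, \qquad g(t) = (\cos 2\pi t, \sin 2\pi t),$$
a continuous bijection whose inverse $(x,y) \mapsto \frac{1}{2\pi}\arg(x+iy)$, with the argument taken in $(0,2\pi)$, is also continuous; hence $g$ is a homeomorphism.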
https://www.electro-tech-online.com/threads/how-to-make-home-automation-project-for-controlling-lights-of-our-home.158000/
# How to make Home Automation project for controlling Lights of our home?
#### Tryin
##### New Member
Can someone please help me with the basics of home automation? I saw various videos, but none of them made their connections very clear, and none had a good explanation. I don't want a whole ready-made project; all I need is how to start with it. What could be my basic first step to understand my project well?
I have a very basic idea about Arduino boards but don't know their proper usage and configuration.
#### Pommie
##### Well-Known Member
If you google "Wemos IOT project" you'll find lots of info. For example.
Mike.
#### be80be
##### Well-Known Member
This is a lot more fun
#### rjenkinsgb
##### Well-Known Member
Most home automation devices use a dedicated communication system that is totally separate from WiFi etc. so there is no interference.
(The 2.4GHz WiFi band is massively overloaded in most cities and not a reliable communications link).
The earliest ones use X10, a basic "mains carrier" system using ultrasonic pulses. UPB (Universal powerline bus) was an advancement of that with better capabilities.
The best present devices use "Z-Wave", a UHF radio link system running at around 868 or 915 MHz (Europe or USA, respectively).
Every device is two-way, reporting back to whatever controls it as well as receiving, plus every device can act as a repeater and pass on data either way to other Z-Wave devices on the same network that are out of direct range of the controller, a mesh type system - up to five "hops" of extra range.
Zigbee is another UHF mesh system used for some home control, but somewhat less standardised than Z-Wave, with different devices using different radio bands - mixing 2.4 GHz (the congested WiFi band) with 868 / 915 MHz.
You can get USB stick style Z-Wave controllers that you can use with PCs or a Raspberry Pi etc., so you could build your own main controller while being able to use ready-made sensors and power controllers etc.
You can even get lamps with internal Z-Wave on/off and dimming or colour control - eg.
Most things you would want to control in a home operate on 115 or 230/240V (depending where in the world you are), which is not something you want to mess with as a beginner in electronics - a mistake can have lethal consequences to you or someone else.
Or, you can use "hard wiring" at eg. 12V, with switches and relays connected to input and output boards on an arduino, pic or pi etc.
That's the simplest and cheapest way to start learning about control systems and automation - and how most "serious" (eg. factory, process control, machine tool) automation works - no radio links, everything is wired and generally working on 24V DC, with wired network links between devices if there is more than one intelligent controller in a machine. A minimal sketch of that wired approach follows.
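For illustration, here is a minimal Arduino-style sketch of the hard-wired idea: a low-voltage switch drives a relay module whose contacts do the actual load switching. The pin numbers are purely illustrative assumptions, not taken from any product mentioned above.

```cpp
// Minimal hard-wired control sketch (pin numbers are illustrative).
// A low-voltage switch on pin 2 drives a relay module on pin 7;
// the relay's contacts switch the load, so the Arduino side
// stays at safe logic levels throughout.
const int SWITCH_PIN = 2;
const int RELAY_PIN  = 7;

void setup() {
  pinMode(SWITCH_PIN, INPUT_PULLUP); // switch closes to ground
  pinMode(RELAY_PIN, OUTPUT);
  digitalWrite(RELAY_PIN, LOW);      // start with the load off
}

void loop() {
  // Active-low input: a closed switch reads LOW.
  bool on = (digitalRead(SWITCH_PIN) == LOW);
  digitalWrite(RELAY_PIN, on ? HIGH : LOW);
  delay(20); // crude debounce
}
```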
#### DrG
##### Active Member
Can someone please help me with the basics of home automation? I saw various videos, but none of them made their connections very clear, and none had a good explanation. I don't want a whole ready-made project; all I need is how to start with it. What could be my basic first step to understand my project well?
I have a very basic idea about Arduino boards but don't know their proper usage and configuration.
The key to getting some help is in refining what you mean by "stuff" (and also in returning to the board to see the responses).
I read your post and the title carefully, and from what I can gather, you want to automate lights in your home using an Arduino. You have not stated that you want to automate your lights over the Internet using your phone, for example. With that in mind, I think post #3 has some good leads for investigating several techniques.
Several other posts have discussed IoT based approaches.
Let me talk about another very simple, yet effective, approach that you may be able to understand as a good starting point.
Start with the endpoint - the lightbulbs and/or the sockets that the lamps connect to. Here I am assuming the line or mains connection. As a beginner, I would advise that you not try to build that part. Instead, purchase those parts. For example:
Those are remote-controlled light bulb sockets. They receive 433MHz RF to turn them on and off. Your remote controller functions as the transmitter and the receiver is built into the socket.
Here is another example:
Those operate in the same way, but instead of turning on the light bulb socket, they turn on whatever is plugged into the socket (which, of course, is plugged into the wall socket). I think those are also 433MHz, but you need to check as some are 315MHz. These are all pretty low-end.
So, now, the question becomes: how do you get the Arduino to take the place of the remote control transmitters that they come with? Fortunately, a lot of work has been done on that.
First, you can get some 433 MHz transmitters and receivers that can be hooked up to an Arduino:
Then, you first have the Arduino read the RF communication from the remote. Once read, that information is stored and can be transmitted by the Arduino from within a program.
Here is a good multi-part tutorial on that process.
There are many such tutorials.
Here is a good library for use with the Arduino:
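Assuming the library linked above is the widely used rc-switch (an assumption on my part, since the link itself is not shown), a minimal transmit sketch might look like the following. The code words, bit length, protocol and pin number are placeholders you would capture from your own remote using the library's receive example first.

```cpp
// Minimal 433 MHz transmit sketch using the rc-switch library
// (assumed to be the library linked above). Code words, bit
// length, protocol and pin are placeholders captured beforehand
// from the original remote with the library's receive example.
#include <RCSwitch.h>

RCSwitch mySwitch = RCSwitch();

void setup() {
  mySwitch.enableTransmit(10);   // transmitter data pin (illustrative)
  mySwitch.setProtocol(1);       // protocol reported by the receive example
  mySwitch.setRepeatTransmit(8); // repeat each code to improve reliability
}

void loop() {
  mySwitch.send(5393, 24);       // "socket on" code word (placeholder)
  delay(5000);
  mySwitch.send(5396, 24);       // "socket off" code word (placeholder)
  delay(5000);
}
```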
So, armed with those basic tools, there are a couple of points to consider. First is that these will have some limited range - maybe 80ft, maybe less, maybe a little more.
Second is that you will likely want to add a real time clock to your Arduino so that the automation is accomplished at certain times and under certain conditions.
This approach, I think, gives you a good, low cost, foundation for the "stuff" that you mentioned.
It is only one, relatively simple, and relatively hands-on, approach. Go back to post #4 as well as the IoT methods that were mentioned.
Fill in the "stuff"
https://math.stackexchange.com/questions/463229/integrate-int-frac-sin-x-cos-x-sin-x-cos-x1-3-mathrm-dx
# Integrate $\int\frac{\sin x+\cos x }{(\sin x-\cos x)^{1/3}}\,\mathrm dx$.
Integrate $$\int\frac{\sin x +\cos x}{(\sin x-\cos x)^{1/3}}\,\mathrm dx.$$
What should I make $u$ and $\mathrm du$ equal? What should I do with this integration problem?
Integration by parts? I don't see how it's possible.
$u$-substitution? I don't know what to make $u$ or $\mathrm du$ equal to.
Let $u=\sin x-\cos x$, so $du=(\sin x+\cos x)\ dx$.
$$\int\frac{\sin x+\cos x}{(\sin x-\cos x)^{1/3}}dx=\int\frac1{u^{1/3}}du=\frac{3u^{2/3}}2+C$$
Now substitute $u$ back, you should get:
$$\frac{3u^{2/3}}2+C=\boxed{\frac32(\sin x-\cos x)^{2/3}+C}$$
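A quick check by differentiation confirms the boxed antiderivative:
$$\frac{d}{dx}\left[\frac32(\sin x-\cos x)^{2/3}\right]=\frac32\cdot\frac23(\sin x-\cos x)^{-1/3}(\cos x+\sin x)=\frac{\sin x+\cos x}{(\sin x-\cos x)^{1/3}}$$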
Let $u=\sin x-\cos x$. Then $du=(\cos x+\sin x)\,dx$.
https://www.physicsforums.com/threads/what-type-of-functions-can-satisfy-f-x-f-a-x.542279/
# Homework Help: What type of functions can satisfy f(x)=f(a-x)?
1. Oct 20, 2011
### McLaren Rulez
1. The problem statement, all variables and given/known data
Given a function such that $f(x)=f(a-x)$ where a is some constant, what can we say about this function? This isn't actually homework, so please forgive the fact that I'm being a little vague.
2. Relevant equations
None.
3. The attempt at a solution
I think that a periodic function (with period a) which is also even satisfies this but is there something else that can also satisfy this condition? Not really sure how to proceed in proving either that this is the only answer or to show an alternative.
2. Oct 20, 2011
### lurflurf
All you can say about the function is it is symmetric about x=a/2.
What are the domain and range?
If the domain were the real numbers, for example, we could define f to be an arbitrary function when x<=a/2 and then define f when x>a/2 using the given functional equation.
or just take an arbitrary function g and define
f=(1/2)g(x)+(1/2)g(a-x)=(1/2)g(a/2+(x-a/2))+(1/2)g(a/2-(x-a/2))
Last edited: Oct 20, 2011
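Writing x = a/2 + t in the functional equation makes the symmetry about x = a/2 that lurflurf describes explicit:
$$f\left(\tfrac{a}{2}+t\right) = f\left(a-\left(\tfrac{a}{2}+t\right)\right) = f\left(\tfrac{a}{2}-t\right) \quad \text{for all } t.$$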
3. Oct 20, 2011
### McLaren Rulez
The domain and range are the set of real numbers.
Thank you for the reply lurflurf. But a function which is symmetric about a should be of the form f(a+x) = f(a-x). Not quite the same as mine, right?
The second part looks fine but what is that function g(x)+g(a-x) for an arbitrary g?
Last edited: Oct 20, 2011
4. Oct 20, 2011
### lurflurf
That should have been symmetric about x=a/2
If
f(x)=(1/2)g(x)+(1/2)g(a-x)
then
f(a-x)=(1/2)g(a-x)+(1/2)g(x)=f(x)
in fact f=g for functions of the type desired
or write any function f
f(x)=(1/2)[f(x)+f(a-x)]+(1/2)[f(x)-f(a-x)]
if
(1/2)[f(x)-f(a-x)]=0
regardless
(1/2)[f(x)+f(a-x)] is a function of your type and in some sense the function of your type most like f
5. Oct 20, 2011
### Ray Vickson
There are many functions that are NOT symmetric about x = a/2, but satisfy the OPs equation. All you can say is that the function is periodic, with period a.
RGV
6. Oct 20, 2011
### McLaren Rulez
Ray, could you post an example? Thank you both for replying
7. Oct 20, 2011
### Ray Vickson
Sorry: I mis-read the original as f(x-a) instead of your f(a-x). So, the statement regarding symmetry about x = a/2 is correct: the function _is_ symmetric. And: it is not (necessarily) periodic. Mea culpa.
RGV
https://byjus.com/question-answer/what-is-valency-write-valency-of-hydrogen/
Question
What is valency? Write valency of hydrogen.
Solution
The combining capacity of an atom is known as its valency. The number of bonds that an atom can form as part of a compound is expressed by the valency of the element. The valency of hydrogen is $$1$$: for example, in H2, HCl and H2O, each hydrogen atom forms exactly one bond.
https://mathoverflow.net/questions/379165/which-random-variables-can-be-written-as-the-difference-of-two-independent-posit
|
# Which random variables can be written as the difference of two independent positive random variables?
Can we characterize random variables $$X$$ that satisfy $$X\sim Y - Z$$ for two independent positive random variables $$Y$$ and $$Z$$?
Are $$Y$$ and $$Z$$ unique in some sense?
Can (one possible choice of) $$Y$$ and $$Z$$ be constructed (e.g. formulas for probability density or characteristic function, or sampling algorithms) when they exist?
Possibly simpler question: Which random variables $$X$$ satisfy $$X\sim Y_1-Y_2$$ for i.i.d. positive random variables $$Y_1\sim Y_2\sim Y$$? Since the characteristic function satisfies $$\phi_X = \phi_{Y}\overline{\phi_{Y}}$$ we must have $$\phi_{X}\geq 0$$ -- is that sufficient? Is $$Y$$ unique in some sense? Can it be constructed?
For example, Laplace random variables satisfy $$\phi_{X} = (1+x^2)^{-1}=(1+ix)^{-1}\overline{(1+ix)^{-1}}=\phi_{Y}\overline{\phi_{Y}}$$ where $$Y$$ is exponential. Exponentials are positive of course, so we got lucky with this particular decomposition and can write $$X=Y_1-Y_2$$ as desired. Had we picked $$(1+x^2)^{-1}=(1+x^2)^{-1/2}\overline{(1+x^2)^{-1/2}}$$ this wouldn't have worked out.
This approach slightly generalizes to $$\phi_{X}$$ that are rational in $$x^2$$, but not at all (at least not obviously to me) to only slightly different examples like Linnik random variables, where $$\phi_{X} = (1+|x|^{\alpha})^{-1}$$, or to limits such as normal random variables, where $$\phi_{X}=e^{-\sigma^2x^2}$$.
The only result I found that goes remotely in this direction was a theorem by Boas and Kac that positive definite functions with compact support have a convolution square root with half-length compact support. This has a support flavor, but a different one than I'm looking for.
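For what it's worth, the Laplace example above is easy to check numerically (a minimal sketch, assuming standard rate-1 exponentials; the comparison of empirical characteristic functions is my own choice of diagnostic):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000

# X ~ Laplace(0, 1) directly, and as the difference of two iid Exp(1) variables
x_laplace = rng.laplace(0.0, 1.0, n)
x_diff = rng.exponential(1.0, n) - rng.exponential(1.0, n)

# Both empirical characteristic functions should approximate 1/(1 + t^2)
for t in [0.5, 1.0, 2.0]:
    cf_laplace = np.mean(np.exp(1j * t * x_laplace))
    cf_diff = np.mean(np.exp(1j * t * x_diff))
    print(t, cf_laplace.real, cf_diff.real, 1 / (1 + t**2))
```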
• $Y$ and $Z$ are not unique. Consider $X$ which has a 1/3 chance of being $1$ and a 2/3 chance of being $2$; you can write it as $Y-Z$ where $Y$ is certain or as $Y'-Z'$ where $Z'$ is certain instead. Dec 17, 2020 at 17:39
• @MattF. you're right, and I should have figured that out myself. Added the uniqueness question hastily while writing up the question. I'm really more interested in (as explicit as possible) existence. Might still be possible to define better constraints or equivalence relation to get back uniqueness? I'll leave it in and see what people come up with Dec 17, 2020 at 17:43
• @ChristianRemling I thought that's (better) expressed by symmetry of $\phi_{X}$, which is implied by it being real Dec 17, 2020 at 17:48
## 2 Answers
"we must have $$\phi_{X}\geq 0$$ -- is that sufficient?"
No. For instance, the function $$\mathbb R\ni t\mapsto(1-t^2+t^4/4)e^{-3t^2/8}$$ is a nonnegative characteristic function -- see the second display after formula (6.3.4) in the book by Lukacs.
However, as stated there in the book, this characteristic function is indecomposable, that is, it is not the characteristic function of the sum of any two independent nondegenerate random variables.
• Thanks -- I was thinking, but not writing, about this as a question of convolutions and didn't appreciate the difference between a convolution square root and sum of random variables that lies in the positivity requirement on the convolution square root in the latter case. Interested in both Dec 17, 2020 at 20:53
• @Bananach : Sorry, I don't quite understand your comment. Dec 17, 2020 at 21:29
• I meant that even though the random variable associated to your characteristic function is not the sum of two random variables, its density $p_X$ is still the convolution $f\star f$ for some $f$ that is not a probability density. Similarly, there could be a random variable X that cannot be written as the difference $Y_1-Y_2$ but whose density $p_X$ can be written as $f\star f(-\cdot)$ for some non-density $f$. Dec 17, 2020 at 23:03
As soon as $$\phi(x)$$ decays too rapidly, you are doomed. For example, if $$\phi(x)=e^{-x^2}$$, then $$\psi(x)=Ee^{itY}$$ would have to satisfy $$|\psi(x)|=e^{-x^2/2}$$, but now the rapid decay will make the distribution of $$Y$$ absolutely continuous with holomorphic density $$f_Y$$ (this is well known and easily proved since $$f_Y(z)=\int_{-\infty}^{\infty}\psi(t) e^{-itz}\, dt$$ converges for all $$z\in\mathbb C$$), so it's not possible to have $$f_Y(y)=0$$ for $$y<0$$.
To state this more formally, this argument shows that if $$|\phi_X(x)|\lesssim e^{-c|x|}$$ for some $$c>0$$ and $$X= Y_1-Y_2$$, with $$Y_j$$ independent and $$Y_1\sim Y_2$$, then the distribution of $$Y_j$$ is of the form $$d\mu(y)= f(y)\, dy$$ with a real analytic $$f$$. In particular $$P(Y\in A)>0$$ for any $$A\subseteq\mathbb R$$ of positive Lebesgue measure.
• Thanks for clarfiying! Dec 18, 2020 at 15:51
• Glad to see such an easily checked obstruction, which answers in particular the Gaussian case. Although I admit I had secretly hoped for more positive results. Dec 20, 2020 at 14:47
• @ChristianRemling Your result seems to be a weaker version of the "Paley-Wiener condition" mentioned in spinlab.wpi.edu/courses/ece531_2009/12linearestimation.pdf (once you map the unit circle from those notes to the real line, you get, in your notation, the condition $\int \log |\phi_X| / (1 + x^2) dx > -\infty$, i.e. you get a condition that is slightly stronger than yours). Are you familiar with the "Spectral factorization theorem" mentioned in these notes and would you happen to know of more rigorous mathematical books / references that discuss this result? Jun 14 at 13:14
• In particular, I'm hoping for an explicit construction of $\phi_Y$ and I can't pin down exactly what I'm supposed to do. For example, I keep running into discussions of Hardy spaces and Beuerling factorization but I'm not sure, for example, how the positive definiteness of $\phi_X$ translates into the language of Hardy spaces and I'm not sure how much I really have to dig into complex analysis, inner, and outer functions, etc., to find what I hope is a simple analytic (allowing Fourier transforms and the like) formula that allows me to write $\phi_Y$ in terms of $\phi_Y$. Jun 14 at 13:30
• For example, I believe I the "outer function" (first displayed equation in proof of Theorem 2.2) in www1.maths.leeds.ac.uk/nbfas/chalendar.pdf might give me the solution I'm looking for (up to taking a square root first), but I don't find that stated clearly anywhere, and in fact these notes assume that everything lives in the Hardy space to begin with whereas I'm starting with something that has Fourier support everywhere and want the "square root procedure" to move things into the Hardy space where the support is halved. Jun 14 at 13:41
|
2022-09-25 05:06:15
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 44, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9287604689598083, "perplexity": 230.95956965827818}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334514.38/warc/CC-MAIN-20220925035541-20220925065541-00365.warc.gz"}
|
https://electronics.stackexchange.com/questions/325723/how-can-i-get-1-10mb-s-of-debugging-data-to-and-from-a-dev-board
|
How can I get 1-10MB/s of debugging data to and from a dev board?
Here's the fundamental problem I have: I'm looking at a dev board for, say, a TI/AD/Microchip microprocessor or DSP. This is long before a full product design; we're talking proof-of-concept work here. I want to send data from a PC to be processed by my firmware, or have my firmware send data to my PC. And I want the rate to be of the order of 1-10MB/s.
15+ years ago, 1-10MB/s would have been quite niche. kB/s would have been more usual, and the dev board would have had a DB9 connector on it. I could have just plugged a serial cable between the dev board and my PC, perhaps with a USB-to-serial converter, and read and write to COMX or /dev/ttySX.
However, this scheme has a number of limitations that are now starting to show up:
• I want MB/s, not kB/s.
• The absolute max UART rate a 100MHz device could manage is still only 0.7 MB/s. SPI on the other hand allows up to 3-4 MB/s.
• It's incredibly rare for PCs to come with serial ports now, so specialised adaptors are needed to interface with a PC eg. USB to serial cable.
But I'm stumped for what I could use to replace the old serial scheme. The main context for this question is sending serialised debugging data between vendor dev boards and a PC before design is finalised, so anything requiring secondary devices eg. Bluetooth isn't super useful. The µPs I typically work with are in the realm of TI's MSP430, Microchip's PIC32M*, and low power DSPs like the TI C55x or C674. Their dev boards might typically come with headers connected to SPI/I2C/UART peripherals.
Ethernet would require full implementation of a networking stack, which isn't really practical on the constrained DSPs or µPs I often work with. Furthermore, most PCs have at most one Ethernet port, so tying it up for debug data would mean giving up wired networking.
Ground-up USB requires getting a vendor ID from a 3rd party, and reinventing all sorts of wheels at the driver and software level just to get data from one device to another.
Ideally I'd like to just be able to dump bytes in a peripheral register on my µP and drain it using Python on the PC or vice versa, using a cable most IT departments would have lying around, and get 1-10 MB/s. A bonus would be not having to poll every available port of whatever kind to find the device on a PC. Is this possible?
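As a sanity check on the rates quoted above, here is the back-of-the-envelope arithmetic (my own numbers; the 16x UART oversampling divisor and 8-N-1 framing are assumptions, and the 30 MHz SPI clock is just a representative figure):

```python
clock_hz = 100e6      # assumed 100 MHz device clock
uart_divisor = 16     # assumed minimum UART baud-rate divisor
bits_per_byte = 10    # 8-N-1 framing: start + 8 data + stop

uart_rate = clock_hz / uart_divisor / bits_per_byte
print(f"UART: {uart_rate / 1e6:.2f} MB/s")  # ~0.63 MB/s, matching the ~0.7 MB/s figure

spi_hz = 30e6         # representative SPI clock (assumption)
print(f"SPI:  {spi_hz / 8 / 1e6:.2f} MB/s")  # ~3.75 MB/s, matching the 3-4 MB/s figure
```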
• Is this for: development debugging, or field diagnostics by a programmer, or field diagnostic by a technician with limited training? – Nick Alexeev Aug 25 '17 at 4:11
• @NickAlexeev Let's say development debugging. I'm looking for something that's easy to do on a "typical" dev board; field diagnostics would involve the final product where we could eg. design in a high speed FTDI chip connected via I2C. Make sense? – detly Aug 25 '17 at 4:14
• Serial to USB are cheap and common with many available comm drivers. What better universal host solution do you need ? The FT602 is a FIFO interface to SuperSpeed USB (USB 3.1 Gen 1) USB Video Class (UVC) bridge chip – Sunnyskyguy EE75 Aug 25 '17 at 4:36
• @TonyStewart.EEsince'75 Ideally I want something where I don't have to design a separate PCB and get it manufactured. All the FTDI USB-to-header cables I could find use the FT234XD, which only goes up to 3 Mbaud or ~3-400kB/s. – detly Aug 25 '17 at 4:43
• what's wrong with buying a USB2 or USB3 adapter? – Sunnyskyguy EE75 Aug 25 '17 at 5:43
The modern successor to the RS232 communication port is supposed to be USB. However, people need to realize that if it is to be used for debug purposes, it needs serious architecture support at the system level: the port must have internal bridges to the µP's registers and memories.
Starting in 2002, Intel made an attempt to implement mandatory debug functions in the open-standard EHCI controller. There was even some support for it from the Linux and Microsoft side, although, as I understand it, the effort had limited success.
With the advent of USB 3, the effort was resurrected; see the USB 3.1 Debug Class Specifications.
The most recent Type-C connector also defines a "Debug Accessory Mode" (see pp. 59-60). I believe there are several auxiliary versions in the works for using the pins in alternate ways, including JTAG functionality.
However, the old RS232 is not going away and refuses to die. The reason is that most µPs and mini-OSes rely on venerable COM ports for many functions, and all Linux/Android flavors have built-in debug support in the kernel. The UART circuitry/port is also fairly simple to implement in silicon with very few resources. However, instead of placing bulky DB-9 connectors (which are sometimes bigger than the device itself), people embed a UART-USB converter on-board (typically of FTDI type) and use the miniature micro-A/B USB connectors to attach to the debug host, emulating a COM port and using standard terminal software to access it.
• I'm sorry, I should have been clearer with what I meant by "debugging". I wasn't talking about break-and-step level debugging, but higher level serialisation of state and data eg. dumping samples of audio, states of a state machine, etc. – detly Aug 27 '17 at 4:36
I assume you are talking about a UART or serial interface, RS232 is an electrical and pin standard (simply defines what voltage levels a one and a zero are but not the state changes, protocols, speeds, etc).
Not sure where you got the 160000, you can easily go much faster than that with a cheap ftdi part. As well as fast jtag, swd, spi, i2c, etc with the same part (one with mpsse). (or roll your own protocol).
Depends on what your definition of PC is, desktops are dead, laptops are dying (all being replaced by tablets and phones), the primary PCs remaining are servers and they typically have at least one serial port as that is the primary display interface for booting/debugging, etc. But I know what you mean.
With the general lack of a need for RS232, usb to uart works just fine for a primary debug from a host/development pc against a target embedded board/system/chip.
For the ones you mentioned you can out run at least some of them with a usb uart solution so the MCU is the bottleneck there. Of the parts available you are more likely to find ones with uarts than with usb, certainly not ethernet. Your number one interface is uart, the ftdi parts and probably others make it easy to access any vendor specific or standardized protocols, or wiggle strap pins so you can use the uart to program the part in circuit. These vary from custom protocols (avr xmega), to stock spi while in reset (avr if I remember right), uart (msp430, a number of other brands/models), jtag, swd for the cortex-ms, all with a simple ftdi breakout board.
uart still dominates as generally having the lightest footprint in the bootloader. ethernet, as you pointed out, requires a stack; even cheating with UDP, you still need far more code than checking the receive-buffer-not-empty bit and reading an already processed byte out of a uart peripheral. usb, if the mcu has it, may be mostly done in hardware, so on some parts it is a lightweight thing, but you often still have to enumerate yourself and respond to the host when requested, possibly more work than a dumbed down udp stack (of the ones I have used, only the EZ-USB 8051 avoided this).
You can get up to a MB/s with uart on some devices, but faster than that you are going to have to build a stack usb or ethernet or custom.
• I'm going to address things in separate comments here, hope you don't mind. Good catch on the RS232 thing, I do that a lot because they used to be so synonymous. Fixed in the question. – detly Aug 29 '17 at 3:11
• The 16kB/s figure was an error, it should have been 0.69 MB/s (for a 100MHz DSP with a min. 16 divisor for UART baud and 8-N-1 protocol). – detly Aug 29 '17 at 3:17
• Re. FTDI parts - I have used such parts in products before, and I agree they're very useful. But my problem concerns those times when I'm nowhere near that stage yet... when I'm looking at different dev boards, trying to establish proof of concept, etc. I don't really want to spend time designing Yet Another Breakout PCB that'll fall apart before the next project starts! I mean, if I have to, I have to... but if there's a cable or ready-made board, I want that! – detly Aug 29 '17 at 3:21
• Re. the dying PC - sure they're not driving sales like they used to, but I think they're far from dead, especially in the context of development and design. However, even engineers need our laptops, and the trend there is fewer connectors of fewer different types. – detly Aug 29 '17 at 3:22
• the bus pirate is basically an ftdi breakout board. you dont have to re-design anything, though one breakout board can work for all projects just like a screwdriver move it from one to another. – old_timer Aug 29 '17 at 4:03
If the MCU is a Cortex-M you could look at Segger's J-Trace or J-Link and use it with RTT. RTT is a kind of terminal-over-JTAG: the MCU places data in a RAM buffer and the debugger polls it. They advertise a speed of 3 MB/s.
|
2019-07-16 06:30:42
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.27629512548446655, "perplexity": 3539.415984159652}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195524503.7/warc/CC-MAIN-20190716055158-20190716081158-00426.warc.gz"}
|
http://love2d.org/forums/viewtopic.php?p=201752
|
## sock.lua - A simple networking library for LÖVE
Ikroth
Citizen
Posts: 70
Joined: Thu Jul 18, 2013 4:44 am
### sock.lua - A simple networking library for LÖVE
sock.lua
sock.lua is a networking library for LÖVE games. Its goal is to make getting started with networking as easy as possible.
Source Code: https://github.com/camchenry/sock.lua
Documentation: http://camchenry.com/sock.lua/
Examples: https://github.com/camchenry/sock.lua/t ... r/examples
Features
• Event trigger system makes it easy to add behavior to network events.
• Can send images and files over the network.
• Can use a custom serialization library.
• Logs events, errors, and warnings that occur.
Example
Code: Select all
-- client.lua
sock = require "sock"

function love.load()
    -- Creating a new client on localhost:22122
    client = sock.newClient("localhost", 22122)

    -- Creating a client to connect to some ip address
    client = sock.newClient("198.51.100.0", 22122)

    -- Called when a connection is made to the server
    client:on("connect", function(data)
        print("Client connected to the server.")
    end)

    -- Called when the client disconnects from the server
    client:on("disconnect", function(data)
        print("Client disconnected from the server.")
    end)

    -- Custom callback, called whenever you send the event from the server
    client:on("hello", function(msg)
        print("The server replied: " .. msg)
    end)

    client:connect()

    -- You can send different types of data
    client:send("greeting", "Hello, my name is Inigo Montoya.")
    client:send("isShooting", true)
    client:send("bulletsLeft", 1)
    client:send("position", {
        x = 465.3,
        y = 50,
    })
end

function love.update(dt)
    client:update()
end
Code: Select all
-- server.lua
sock = require "sock"

function love.load()
    -- Creating a server on any IP, port 22122
    server = sock.newServer("*", 22122)

    -- Called when someone connects to the server
    server:on("connect", function(data, client)
        -- Send a message back to the connected client
        local msg = "Hello from the server!"
        client:send("hello", msg)
    end)
end

function love.update(dt)
    server:update()
end
Motivation
Writing code in lua-enet introduces a lot of boilerplate code that obfuscates your game code. Your game code becomes tied to your networking code, and it becomes difficult to reuse code between the client and server. sock tries to remedy this by taking on the task of serializing and sending data for you. It enables you to focus on the network code that matters.
Feedback needed
sock.lua is still a work in progress, but is already able to be used in games. However, there is a long way to go before it could be considered "complete."
• How can this library be made more useful to you?
• How can the documentation be improved?
Last edited by Ikroth on Sat Dec 17, 2016 10:13 pm, edited 1 time in total.
Jack5500
Party member
Posts: 149
Joined: Wed Dec 07, 2011 8:38 pm
Location: Hamburg, Germany
### Re: sock.lua - A simple networking library for LÖVE
Wow, this looks really well put together and like a valid alternative to lube.
Shame that I can't give any deep feedback, but from what I see on the surface and the documentation you did a really good job!
Ortimh
Citizen
Posts: 90
Joined: Tue Sep 09, 2014 5:07 am
Location: Indonesia
### Re: sock.lua - A simple networking library for LÖVE
Hats off. Definitely of use. This could solve my networking problems. It follows my naming conventions and is tidy as heck (especially the docs), just the way I like it. No feedback from a networking newbie, but it's surely a great job you've done!
Ulydev
Party member
Posts: 431
Joined: Mon Nov 10, 2014 10:46 pm
Location: Paris
Contact:
### Re: sock.lua - A simple networking library for LÖVE
Looks like an amazing lib, can't wait to try it!
Ikroth
Citizen
Posts: 70
Joined: Thu Jul 18, 2013 4:44 am
### Re: sock.lua - A simple networking library for LÖVE
Thanks for the feedback.
Right now, I'm working on support for custom serialization functions, so you're not forced to use bitser if you don't want to. You can probably expect to see it in the docs within a week or so.
The documentation still lacks some information, e.g. examples and usage info. There's also no information on how to use "data formats" (my own term, might change). It's a cool feature that potentially saves a lot of bandwidth. Now, I just need to write my own Lua doc tool so I don't have to use LDoc anymore.
whitebear
Citizen
Posts: 86
Joined: Sun Mar 15, 2009 1:50 am
### Re: sock.lua - A simple networking library for LÖVE
I am curious, what license is this under.
Ikroth
Citizen
Posts: 70
Joined: Thu Jul 18, 2013 4:44 am
### Re: sock.lua - A simple networking library for LÖVE
whitebear wrote:I am curious, what license is this under.
LordSeaworth
Prole
Posts: 22
Joined: Tue Jun 07, 2016 10:29 pm
### Re: sock.lua - A simple networking library for LÖVE
Hmm, seems very useful.
I was first thinking of using enet, but if you keep this library alive through upcoming LÖVE updates I'll definitely use this.
Keep up the good work
Sulunia
Party member
Posts: 198
Joined: Tue Mar 22, 2016 1:10 pm
Location: SRS, Brazil
### Re: sock.lua - A simple networking library for LÖVE
So, i just took this awesome library out for a spin.
One thing that I didn't know and that took me some time to figure out: the server's event callbacks always receive the event, the data AND the peer that sent it.
So, if you want to send information to a specific peer, you must use the peer the server receives in the event callback. It is pretty obvious now, but complete beginners to networking may have trouble with it.
I'll see if i can fix the ugly code around my simple "box moving" example with this lib and then share it here if people so wish.
Other than that, i suppose all "security" checks are done by the lib already no? So we only have to determine if the data received is valid or not to prevent cheating and so on...
Not sure if i was clear, say so if not. Otherwise, amazing library! Will definitely use it.
Don't check my github! It contains thousands of lines of spaghetti code in many different languages... I mean, cool software!
https://github.com/Sulunia
LordSeaworth
Prole
Posts: 22
Joined: Tue Jun 07, 2016 10:29 pm
### Re: sock.lua - A simple networking library for LÖVE
*snip*
Silly me, found the answer to my question in the example.
|
2019-02-15 18:40:00
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2041725218296051, "perplexity": 5579.7640616408435}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-09/segments/1550247479101.30/warc/CC-MAIN-20190215183319-20190215205319-00210.warc.gz"}
|
https://socratic.org/questions/if-a-x-2-b-x-3-2x-11-x-2-x-6-then-a-b
|
# If a/(x-2)+b/(x+3)=(2x+11)/(x^2+x-6) then (a,b) = ?
Oct 18, 2017
Let's see.
#### Explanation:
Given,
$\frac{a}{x - 2} + \frac{b}{x + 3} = \frac{2 x + 11}{{x}^{2} + x - 6}$
Now find out the LCM of the LHS terms.
$\frac{a \left(x + 3\right) + b \left(x - 2\right)}{\left(x - 2\right) \left(x + 3\right)} = \frac{2 x + 11}{{x}^{2} + x - 6}$
$\frac{a \left(x + 3\right) + b \left(x - 2\right)}{\left(x - 2\right) \left(x + 3\right)} = \frac{2 x + 11}{\left(x - 2\right) \left(x + 3\right)}$
Now, simplify the equation by multiplying both sides by the common denominator, then expand and collect terms in the numerator:
$a \left(x + 3\right) + b \left(x - 2\right) = 2 x + 11$
$a x + b x + 3 a - 2 b = 2 x + 11$
$\textcolor{red}{\left(a + b\right) x + \left(3 a - 2 b\right) = 2 x + 11}$.
Now, comparing the coefficients of $x$ & ${x}^{0}$ on both sides, we get two equations:
$\textcolor{red}{a + b = 2}$..........(1).
$\textcolor{red}{3 a - 2 b = 11}$..........(2).
Now, solve the respective equations to get the values of $a$ & $b$.
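Carrying that last step through (added for completeness): substituting $b = 2 - a$ from (1) into (2) gives $3 a - 2 \left(2 - a\right) = 11$, so $5 a = 15$, hence $a = 3$ and $b = - 1$, i.e. $\left(a , b\right) = \left(3 , - 1\right)$.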
Hope it Helps:)
|
2021-10-16 00:02:23
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 13, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9711025357246399, "perplexity": 949.413670478963}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323583087.95/warc/CC-MAIN-20211015222918-20211016012918-00699.warc.gz"}
|
http://www.chegg.com/homework-help/questions-and-answers/disc-radius-10-cm-carries-uniform-surface-charge-density--electric-field-axis-disc-distanc-q3880071
|
A disc of radius 10 cm carries a uniform surface charge density of $10.0 \ \mu\text{C} \cdot \text{m}^{-2}$. The electric field on the axis of the disc at a distance 0.1 cm from the disc is approximately
$(i) \ 68 \ \text{kN} \cdot \text{C}^{-1}$
$(ii) \ 0.34 \ \text{MN} \cdot \text{C}^{-1}$
$(iii) \ 99 \ \text{kN} \cdot \text{C}^{-1}$
$(iv) \ 0.56 \ \text{MN} \cdot \text{C}^{-1}$
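A worked check (my own, not part of the original problem), using the standard on-axis field of a uniformly charged disc, $E = \frac{\sigma}{2 \epsilon_0} \left(1 - \frac{x}{\sqrt{x^2 + R^2}}\right)$:

```python
import math

eps0 = 8.854e-12   # vacuum permittivity, F/m
sigma = 10.0e-6    # surface charge density, C/m^2
R = 0.10           # disc radius, m
x = 0.001          # axial distance, m (0.1 cm)

E = sigma / (2 * eps0) * (1 - x / math.sqrt(x**2 + R**2))
print(f"E = {E / 1e6:.2f} MN/C")  # ~0.56 MN/C, i.e. option (iv)
```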
|
2014-08-29 18:54:50
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 5, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9800618886947632, "perplexity": 138.05351792586345}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-35/segments/1408500832738.80/warc/CC-MAIN-20140820021352-00307-ip-10-180-136-8.ec2.internal.warc.gz"}
|
https://socratic.org/questions/what-happens-when-a-sodium-atom-and-a-chlorine-atom-exchange-an-electron
|
# What happens when a sodium atom and a chlorine atom exchange an electron?
Sep 16, 2016
A redox reaction that results in salt formation.
#### Explanation:
$\text{Oxidation:}$
$\text{Na} \left(g\right) \rightarrow \text{Na}^{+} + {e}^{-}$
$\text{Reduction:}$
$\frac{1}{2} \text{Cl}_{2} \left(g\right) + {e}^{-} \rightarrow \text{Cl}^{-}$
$\text{Na} \left(g\right) + \frac{1}{2} \text{Cl}_{2} \left(g\right) \rightarrow \text{Na}^{+} \text{Cl}^{-} \left(s\right)$
|
2020-01-20 21:29:18
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 5, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.42866280674934387, "perplexity": 6449.156552549512}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250599789.45/warc/CC-MAIN-20200120195035-20200120224035-00388.warc.gz"}
|
https://www.physicsforums.com/threads/differential-eq-inverse-laplace-transforms.402299/
|
# Differential Eq: Inverse Laplace Transforms
1. May 10, 2010
### Gogeta007
1. The problem statement, all variables and given/known data
I have a couple of problems that im stuck on. The following:
y'' - 6y' + 9y = t
y(0) = 0
y'(0) = 1
and
y'' - 6y' + 13y = 0
y(0) = 0
y'(0) = -3
2. Relevant equations
y'' = s^2 Y(s) - s f(0) - f'(0)
y' = sY(s) - f(0)
y = Y(s)
3. The attempt at a solution
===================================
for the first one:
once I substitute into the original equation, I can move things around and I came out with the following:
Y(s)(s^2 - 6s + 9) = (1 + s^2)/s^2
So I get Y(s) = (1 + s^2)/(s^2 (s-3)^2)
IIRC for partial fractions it should be the following: A/(s-3) + B/(s-3)^2 + (Cs + D)/s^2
I dont know if this is where i messed up but I got:
A=-2/27
B=-1/9
C=2/27
D=1/9
as a final result I get:
-(2/27)e^(3t) - (1/9)t e^(3t), and the other part, (Cs + D)/s^2 . . . I can't find any way to transform that
the back of the book says:
(1/9)t + 2/27 - (2/27)e^(3t) + (10/9)t e^(3t)
=============================
The second one:
Starting by substitution, plugging in the values and solving for Y(s)
Y(s) = -3/(s^2 - 6s + 13)
and well... I'm lost from this point onwards. I don't remember how to do partial fractions if you can't factorize that denominator, and the quadratic formula (a = 1, b = -6, c = 13) gives complex roots.
=============================
Thank you!
Last edited: May 10, 2010
2. May 10, 2010
### vela
Staff Emeritus
You just have to separate it into individual terms.
$$\frac{Cs+D}{s^2} = \frac{C}{s} + \frac{D}{s^2}$$
Also, according to Mathematica, you should have B=10/9. The other coefficients you found are correct.
Last edited: May 10, 2010
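For reference, assembling the inverse transform term by term with the corrected B = 10/9 (added here for completeness, not part of the original thread), using the standard pairs $1/s \to 1$, $1/s^2 \to t$, $1/(s-3) \to e^{3t}$, and $1/(s-3)^2 \to te^{3t}$:
$$y(t) = \frac{1}{9}t + \frac{2}{27} - \frac{2}{27}e^{3t} + \frac{10}{9}te^{3t}$$
which matches the answer in the back of the book.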
3. May 10, 2010
### vela
Staff Emeritus
Complete the square in the bottom so you get something that looks like
$$Y(s) = -\frac{3}{(s-a)^2+b^2}$$
You should be able to find the inverse of that using the tables and the properties of Laplace transforms.
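Carrying that hint through (a worked sketch, not part of the original thread): completing the square gives $s^2 - 6s + 13 = (s-3)^2 + 2^2$, so
$$Y(s) = -\frac{3}{2}\cdot\frac{2}{(s-3)^2 + 2^2}$$
and the table entry $\mathcal{L}\{e^{at}\sin bt\} = b/((s-a)^2 + b^2)$ yields
$$y(t) = -\frac{3}{2}e^{3t}\sin 2t$$
which indeed satisfies y(0) = 0 and y'(0) = -3.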
4. May 10, 2010
### Gogeta007
wow. . .I cant believe I missed that. . .how does really old math comes back and haunt you huh?. . .thanks a lot
|
2017-08-23 16:33:46
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8720794916152954, "perplexity": 2231.209048238356}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-34/segments/1502886120573.75/warc/CC-MAIN-20170823152006-20170823172006-00643.warc.gz"}
|
https://gateway.ipfs.io/ipfs/QmXoypizjW3WknFiJnKLwHCnL72vedxjQkDDP1mXWo6uco/wiki/Atlas_(topology).html
|
# Atlas (topology)
For other uses, see Fiber bundle and Atlas (disambiguation).
In mathematics, particularly topology, one describes a manifold using an atlas. An atlas consists of individual charts that, roughly speaking, describe individual regions of the manifold. If the manifold is the surface of the Earth, then an atlas has its more common meaning. In general, the notion of atlas underlies the formal definition of a manifold and related structures such as vector bundles and other fibre bundles.
## Charts
The definition of an atlas depends on the notion of a chart. A chart for a topological space M (also called a coordinate chart, coordinate patch, coordinate map, or local frame) is a homeomorphism φ from an open subset U of M to an open subset of a Euclidean space. The chart is traditionally recorded as the ordered pair (U, φ).
## Formal definition of atlas
An atlas for a topological space M is a collection {(U_α, φ_α)} of charts on M whose domains cover M, i.e. ∪_α U_α = M. If the codomain of each chart is the n-dimensional Euclidean space and the atlas is connected, then M is said to be an n-dimensional manifold.
### Maximal atlas
The atlas containing all possible charts consistent with a given atlas is called the maximal atlas: i.e., an equivalence class containing that given atlas (under the natural equivalence relation on atlases). Unlike an ordinary atlas, the maximal atlas of a given manifold is unique. Though it is useful for definitions, it is an abstract object and not used directly (e.g. in calculations). The completion of an atlas consists of the union of the atlas and all charts which yield an atlas of the manifold. That is, if we have an atlas {(U_α, φ_α)} on a manifold M, then the completion of the atlas consists of all those charts (V, ψ) such that {(U_α, φ_α)} ∪ {(V, ψ)} is also an atlas of M. An atlas which is the same as its completion is a complete atlas. A complete atlas is a maximal atlas.
## Transition maps
Two charts on a manifold
A transition map provides a way of comparing two charts of an atlas. To make this comparison, we consider the composition of one chart with the inverse of the other. This composition is not well-defined unless we restrict both charts to the intersection of their domains of definition. (For example, if we have a chart of Europe and a chart of Russia, then we can compare these two charts on their overlap, namely the European part of Russia.)
To be more precise, suppose that (U_1, φ_1) and (U_2, φ_2) are two charts for a manifold M such that U_1 ∩ U_2 is non-empty. The transition map τ_{1,2} : φ_1(U_1 ∩ U_2) → φ_2(U_1 ∩ U_2) is the map defined by τ_{1,2} = φ_2 ∘ φ_1^{-1}.
Note that since φ_1 and φ_2 are both homeomorphisms, the transition map τ_{1,2} is also a homeomorphism.
## More structure
One often desires more structure on a manifold than simply the topological structure. For example, if one would like an unambiguous notion of differentiation of functions on a manifold, then it is necessary to construct an atlas whose transition functions are differentiable. Such a manifold is called differentiable. Given a differentiable manifold, one can unambiguously define the notion of tangent vectors and then directional derivatives.
If each transition function is a smooth map, then the atlas is called a smooth atlas, and the manifold itself is called smooth. Alternatively, one could require that the transition maps have only k continuous derivatives, in which case the atlas is said to be C^k.
Very generally, if each transition function belongs to a pseudo-group G of homeomorphisms of Euclidean space, then the atlas is called a G-atlas. If the transition maps between charts of an atlas preserve a local trivialization, then the atlas defines the structure of a fibre bundle.
|
2022-07-05 05:49:24
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8841197490692139, "perplexity": 268.88301186932813}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656104514861.81/warc/CC-MAIN-20220705053147-20220705083147-00693.warc.gz"}
|
https://electronics.stackexchange.com/questions/405367/use-irf540-mosfet-to-control-led-strip
|
# Use IRF540 MOSFET to control LED strip
I was hoping to use this mosfet shield to control a "Standard 3528 12V LED strip". The shield looks like this:
However, it doesn't come with a datasheet (I guess that's my fault for being cheap) and now I'm confused about how to connect this to my Arduino Nano / Wemos D1. The black connector has +, - and s on the bottom but both blue connectors have no labeling whatsoever. So I'm not sure how to connect it to the 12V and Arduino. I've googled but haven't been able to find any documentation on this shield other than lots of sites offering it and with the same (or similar) pictures but never with any labeling. Maybe these things are so common that 'everybody knows' how to connect them?
The MOSFET is an IRF540. Also, I've read somewhere that I may need a transistor or "driver" to get the MOSFET to switch?
I have relays lying around and they work fine for my purposes but I was hoping to switch the LED strip without the noticeable click of a relay, so that's why I was looking into this MOSFET. (I also have some solid state relays laying around, but they're for AC).
• Buying EE stuff that doesn't have a data sheet isn't being cheap because it's likely you won't get the best performance without reverse engineering it to understand it and this costs more (time is money etc.) than buying the right goods in the first place and saving everyone's time who reads this post. – Andy aka Nov 6 '18 at 16:15
• The sloppy placement of the components is not making me optimistic about this board. – Elliot Alderson Nov 6 '18 at 16:24
• That power mosfet is literally touching the connector to the microcontroller and defeating the protection of the optocoupler... I would advise not using this board, it looks like something designed by a beginner who didn't know what they were doing. – Hearth Nov 6 '18 at 17:17
• Get in touch with the supplier and ask for documentation rather than just complain about the lack. I'm sure the supplier has had to deal with question on the connections if they are not marked. The board is however simple enough you could trace the circuit easily. – Jack Creasey Nov 6 '18 at 17:19
• I pointed out myself I probably shouldn't have so cheap; no need to point it out again and it's not helpful in any way. I know the board doesn't look great in the photo; my actual board(s, I got 4...) look much better. I can try to contact the seller but I have a feeling that's not going to get me far (again: shouldn't have been so cheap). HOWEVER; this is all I currently got lying around. I don't have drawers and drawers full of this stuff. Is there anyone that can help me figure this thing out (bootleg or not), even if it's only to see if I can get it to work? @Felthry: I got 2 of those too… – RobIII Nov 6 '18 at 17:25
|
2021-06-20 04:09:09
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.1957458257675171, "perplexity": 1032.5913469814284}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623487655418.58/warc/CC-MAIN-20210620024206-20210620054206-00164.warc.gz"}
|
https://s7097.gridserver.com/pawntakespawn-color-ptcw/how-to-multiply-fractions-with-mixed-numbers-b22778
|
A mixed number has a whole number part and a fractional part; 1 1/2, 3 5/8, and 2 7/9 are examples. An improper fraction is one whose numerator is greater than its denominator, and every mixed number can be rewritten as one.

To multiply two fractions, multiply the numerators and then the denominators. For example, 2/9 x 3/4 = (2 x 3)/(9 x 4) = 6/36, which simplifies to 1/6.

To multiply mixed numbers:
1. Change each mixed number to an improper fraction: multiply the denominator by the whole number, add the numerator, and keep the same denominator.
2. Multiply the numerators, and multiply the denominators. (Cancelling common factors shared between a numerator and a denominator first keeps the numbers small.)
3. Simplify if possible, and change an improper-fraction result back to a mixed number.

Example: 1 1/2 x 2 1/5 = 3/2 x 11/5 = 33/10 = 3 3/10.

One more example: 3 1/4 x 3 1/3 = 13/4 x 10/3 = 130/12 = 65/6 = 10 5/6.

A word problem: Sadie worked 10 2/3 hours at time-and-a-half. How many hours will she get paid for? Converting both mixed numbers to improper fractions gives 10 2/3 = 32/3 and 1 1/2 = 3/2, so 32/3 x 3/2 = 96/6 = 16. Sadie gets paid for 16 hours.

The same steps work for multiplying a mixed number by a whole number (write the whole number over a denominator of 1) and for products of three or more mixed numbers, for example when applying one discount after another to an original list price.
– Methods & examples how to multiply fractions using mixed numbers you need to multiply fractions using mixed.! One more example: what is 3 14 × 3 13 in basic?... Example: what is 3 14 = 134 at the end, you should the! = 8 / 3 = 8 / 3 is an improper fraction as 8 is bigger than 3 become fractions... 3 = 6 /36 in impropoer fraction this to 7/3 before you multiply Grade... Convert the mixed fractions worksheet pdfs found for - multiply fraction with a.! S how to multiply the remaining factors to find the quotient of the fraction part of 1.. The form a/b where b is not uncommon to end up with 4 x 2 / 3 an! Each step below to see an example the rules for multiplying fractions multiplying fractions. once you have your. Actually compute this numbers that we write with a whole number and a denominator without,... Sadie ’ s no direct method for multiplying and dividing mixed numbers, you need. Cancelling, multiply the denominators: 9 x 4 = 6 /36 ÷ 6 /36 change!, follow the rules for multiplying fractions & mixed numbers - there are two of! Can also write the answer for each mathematical number, because daily mixed! You ’ ll learn how to multiply or divide mixed numbers numerator into it minds have belonged to.. 1 of the world 's best and brightest mathematical minds have belonged to autodidacts going be. /6 ( see Reducing fractions. out common factors shared between a numerator and a fractional part of... Multiplying two mixed numbers - there are 10 in-depth lessons in this section, you distributing. We multiply the denominator but you can write improper fractions just like a proper fraction associated with.! Mathematical minds have belonged to autodidacts whole into 2 equal parts you want to do rewrite... Ideas about multiplying mixed numbers '' on Pinterest its simplest form have their! Interpret a Correlation Coefficient r. Knowing how to multiply fractions will help them develop in form... The top numbers of the mixed numbers or whole numbers allowed who took math... Divide with mixed numbers, math fractions, multiply the denominators together follow the for... Fractions from easy three steps she needed to reduce her fractions very difficult, reduce. 1 /6 ( see Reducing fractions. examples of dividing fractions with numbers. Into mixed numbers multiply fractions using mixed numbers of proper fractions, mixed numbers quite! Easy three steps it 's going to be this way change it to mixed! Are 10 in-depth lessons in this unit, aligned to the fitness center because she needed to her!, simplify, or 2/3, times 4/5 Coefficient r. Knowing how to multiply 9/2 and 32/5 you... With mixed numbers nov 26, 2018 - Explore Patricia King 's board multiplying mixed multiply..., simplify, or reduce, the denominator with the whole number part and a fraction, follow rules. Directly multiply mixed numbers in basic arithmetic multiplying mixed numbers, and reduce the results, if.. Fractions often confuses students, however, it is not uncommon to end up with improper. Each mathematical number, because daily math mixed fraction simplify the fractions mixed. Hours at time-and-a-half and gets paid for 16 hours how to multiply fractions with mixed numbers what a payday the way... In multiplying fractions by cancelling multiplying fractions by a mixed number when you.. You need to be written in the form a/b where b is not uncommon to end up with x! The second step is to simplify and multiply the numerators together and multiply or divide as.! Often confuses students, however, it is not uncommon to end up an! 
’ s no direct method for multiplying fractions & mixed numbers as improper fractions first we 've already seen we... Numbers and fractions form the numbers, you ’ ll learn how to multiply fractions will help when comes... This video, activity and KS2 Maths Bitesize Guide an elite fraction … multiply 1 and 3/4 times and! Is 7 2 and divide with mixed numbers in basic arithmetic operation which two... - there are 7 shaded parts and the fraction and whole number and a fraction me, to multiply. To multiply mixed numbers, math fractions, divide mixed numbers multiplying by! - there are two types of numbers multiplying simple fractions. fractions you... Fractions – Methods & examples how to multiply fractions to simplest form then multiply denominators Block 2 Resources you... Ll learn how to multiply fractions with a numerator and the fraction is! Write the answer has a whole number and a fractional part consists of whole! Cancelling, multiply 3 how to multiply fractions with mixed numbers 32 to get 96, and so on multiply 9/2 32/5... Out and simplification if required is done 7 and 1/5 number calculator allows you to real-world. The fractional part consists of a whole number into an improper fraction 8! And 32 numbers - there are two types of fractions. fraction … multiply and. Need this and some wo n't, but it 's handy to have in their to! Is quite similar to multiplying simple fractions. next, simplify, or 2/3, times 4/5 and paid! Associated with it will need this and some wo n't, but it 's easy! We call this form improper fraction before you multiply of 1 5/6 required done! 'S board multiplying mixed numbers, first change them to improper fractions., multiply 3 x 32 get. Ks2 Maths Bitesize Guide the example in section 5, where we up... There ’ s how to Interpret a Correlation Coefficient r. Knowing how to mixed... Number and a denominator haunts multiply by multiplying the numerators together and the. We this form improper fraction before you multiply a number with a numerator of and. Homework helps Alabama haunts multiply by multiplying the numerators and denominators, and to solve real-world problems is because multiplying. Fraction before you multiply a factor over two or more fractions together is! 32 to get 96 numbers and fractions form represented is 7 2 different... 3 13 of numbers skills within the unit numerator of 6 and a denominator hours what! Factor 2 over 3, or reduce, the fractions and multiply their numerators and,... Simplify the fractions by removing common factors shared between a numerator and denominator to your! Into it x 4 = 36 = 2 x 3 /4 = 2 x 3 = 6 cancelling. Numbers '' on Pinterest be 2 times 4 KS2 Maths Bitesize Guide more ideas multiplying. 2/3 hours at time-and-a-half and gets paid for 16 hours — what a payday 32. Fractions step 18 Spring Block 2 Resources step of this process is to convert the mixed number fractions how to multiply fractions with mixed numbers! Examples will have a mixed number they also learn how to multiply fractions and multiply numerators... S no direct method for multiplying and dividing mixed numbers nov 26, 2018 - Explore Patricia 's...: 3 14 = 134 method for multiplying fractions how to multiply fractions with mixed numbers Methods & examples how to Interpret a Correlation r.. More ideas about multiplying mixed numbers helps Alabama haunts multiply by multiplying the numerators their binders to reference the! 1-3/5 by 2-1/3 help assignments help reduce the score fractions will help them develop in the form a/b where is! 
Step how to multiply fractions to simplest form - multiply fraction with mixed numbers, first them... The answer has a numerator and the reciprocal to find the quotient of the number. Who took her math homework to the fitness center because she needed to reduce her fractions Methods & how... '' on Pinterest below to see an example solve real-world problems × 3 13 difficult. Lessons in this video, we divide each whole into 2 equal parts calculator shows...
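For readers who like to check their work, here is a minimal Python sketch (an assumption on our part; the lesson itself needs no software) using the standard fractions module, which reduces results to lowest terms automatically:

```python
from fractions import Fraction

def mixed_to_improper(whole, num, den):
    # 10 2/3 -> (10 * 3 + 2) / 3 = 32/3
    return Fraction(whole * den + num, den)

hours = mixed_to_improper(10, 2, 3)  # 32/3
rate = mixed_to_improper(1, 1, 2)    # 3/2
paid = hours * rate                  # multiplies numerators and denominators,
print(paid)                          # then reduces to lowest terms: 16
```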
http://astrospheres.tp4.rub.de/stroemgren_radius.php
# Astrospheres
## Strömgren sphere
The photons from a star with energies higher than the ionization energy (13.6 eV) of a hydrogen atom can ionize the latter. The ionization is balanced by the recombination between protons (H+) and electrons. The region where all hydrogen atoms are ionized is called the "Strömgren sphere". Its radius can be estimated by balancing the recombination and ionization rates. The Strömgren radius $$R_{S}$$ is given by:
\begin{eqnarray} R_S = \left(\frac{3}{4\pi} \frac{Q_\star}{n^{2}_{H} \beta_{2}(T)} \right)^{(1/3)} \end{eqnarray}
where $$Q_\star$$ is the total number of photons per second with frequencies $$\nu \ge \nu_{1} = 13.6\,\mathrm{eV}/h$$ (where $$h \approx 6.626\cdot 10^{-34}$$ [J s] is the Planck constant), $$n_{H} [cm^{-3}]$$ is the hydrogen number density, and $$\beta_{2}(T) [cm^{3}s^{-1}]$$ is the total volume recombination rate.
We only take the recombination into the second energy level, because when recombining to the ground state (first energy level) a photon with the same energy is released and ionizes again. The total volume recombination rate can be approximated by
\begin{eqnarray} \beta_{2}(T) \approx 2.06\cdot 10^{-11} Z^2 T_{e}^{-0.8} = 1.29\cdot 10^{-14} Z^2 T_{e,4}^{-0.8} = 8.20\cdot 10^{-14} Z^2 T_{e,3}^{-0.8}[cm^{3} s^{-1}] \end{eqnarray} where $$Z$$ is the atomic number ($$Z=1$$ for hydrogen) and $$T_{e,4}=10 T_{e,3}$$ is the electron temperature measured in units of $$10^{4}$$ or $$10^{3}$$ K, respectively (Spitzer 1968).
The total number of emitted photons ($$\nu \ge \nu_{1}$$) of a star can be calculated with the help of radiative transfer theory (Hubeny & Mihalas 2014): At a frequency $$\nu$$ the intensity is denoted by $$I_{\nu}$$, the energy density by $$J_{\nu}$$, and the Planck function by $$B_{\nu}$$. For a homogeneous radiation field the following holds \begin{eqnarray} I_{\nu} = J_{\nu} = B_{\nu} = \frac{2 h \nu^{3}}{c^{2}}\ \dfrac{1}{e^{\frac{h\nu}{k T}} - 1} \end{eqnarray} The specific luminosity can be expressed with $$J_{\nu}$$ as \begin{eqnarray} L_{\nu} = 4 \pi^{2} r^{2}J_{\nu} \end{eqnarray} from which we get the number of photons with frequency $$\nu$$ as $$L_{\nu}/(h\nu)$$, and the total number of ionizing photons is then \begin{eqnarray} Q_{\star}=\int\limits^{\infty}_{\nu_{1}} \frac{L_{\nu}}{h \nu} d\nu = \frac{8\pi^{2} r^{2}}{c^{2}} \int\limits^{\infty}_{\nu_{1}} \dfrac{\nu^{2}}{e^{\frac{h\nu}{k T}} - 1} d\nu \end{eqnarray} Replacing $$\nu$$ by $$x = \frac{h\nu}{kT}$$ gives \begin{eqnarray} \frac{8\pi^{2} r^{2}}{c^{2}}\ \left(\frac{k T}{h} \right)^{3} \int\limits^{\infty}_{x_{1}}\frac{x^2}{e^{x}-1} dx \end{eqnarray} The lower boundary $$x_{1}$$ of the integral depends only on the temperature, and for $$h\nu_{1}=13.6\,$$eV and the Boltzmann constant $$k\approx 8.617\cdot 10^{-5}\,$$[eV/K] we have \begin{eqnarray} x_{1}=\frac{h\nu_{1}}{k T} = \frac{13.6}{8.617\cdot 10^{-2}\, T_{3}} = \frac{1.57 \cdot 10^{2}}{T_{3}} \end{eqnarray} Thus for a solar type star with $$T_{3} = 5.78$$ we get $$x_{1}=27.3$$, and for a hot star with $$T_3 = 50$$ it follows that $$x_{1} = 3.14$$. Therefore, the lower boundary of the integral is clearly greater than 0, so the factor $$1/(e^{x}-1)$$ in the integrand can be approximated by $$e^{-x}$$, which then allows us to solve the integral. \begin{eqnarray} \int\limits^{\infty}_{x_{1}}\frac{x^2}{e^{x}-1} dx \approx \int\limits^{\infty}_{x_{1}} x^2 e^{-x} dx = \left. -(x^{2} + 2 x +2) e^{-x} \right|^{\infty}_{x_{1}} \end{eqnarray} Collecting all the factors leads for the total photon rate $$Q_{\star}$$ to \begin{eqnarray} Q_{\star} &=& \frac{8 \pi^{2} r^2}{c^{2}} \left(\frac{k T}{h} \right)^{3} \left(\frac{h^{2}\nu^{2}_{1}}{k^{2}T^{2}} + 2 \frac{h \nu_{1}}{k T} + 2 \right) e^{-\frac{h\nu_{1}}{k T}}\\ &=& 3.85\cdot 10^{41} r_{\odot}^{2} \left(2.46\cdot 10^4 T_{3} + 1.57\cdot 10^{2} T_{3}^{2} + 2 T_{3}^{3} \right) e^{-1.57\cdot 10^2 T_{3}^{-1}} \end{eqnarray} where $$r_{\odot}$$ is the radius of the star given in solar radii. Inserting $$T_{3}=5.78$$ for the Sun and $$T_{3} = 50, r_{\odot}=20$$ for a hot star, we get \begin{eqnarray} Q_{\star,\odot} = 7.92\cdot 10^{34} [s^{-1}] \qquad Q_{\star,50}= 1.25\cdot 10^{49} [s^{-1}] \end{eqnarray} To calculate the Strömgren radius, the number density of the local interstellar medium (LISM) and its electron temperature are needed. The latter is usually not directly accessible, but we can assume that the protons, the neutral hydrogen and the electrons are in thermal equilibrium.
For $$n_{H} = n_{e} = n_{0} = 1 [cm^{-3}]$$ and $$T_{e} = T_{p} = T_{H} = 10^4 [K]$$ (i.e. $$T_{e,4}=1$$) we get with the above formulas ($$Z=1$$): \begin{eqnarray} R_S &=& \left(\frac{3}{4\pi} \frac{Q_\star}{n^{2}_{0} \beta_{2}(T_{e})} \right)^{1/3}\\ &=& 0.62 \left(2.46\cdot 10^4 T_{3} + 1.57\cdot 10^{2} T_{3}^{2} + 2 T_{3}^{3} \right)^{1/3}e^{-52.33\, T_{3}^{-1}} r_{\odot}^{2/3}\,n_{0}^{-2/3}\,T_{e,3}^{4/15}\ [pc] \end{eqnarray} where $$T_{3}$$ is the temperature at the stellar surface in units of $$10^{3}$$ K, $$T_{e,3}$$ that of the LISM, and $$n_{0}$$ is given in $$cm^{-3}$$. With the above numbers we get for the Strömgren radius $$R_{s,\odot}$$ of the Sun ($$T_{3}=5.78, r_{\odot}=1, n_{0} = 1, T_{e,3} = 10$$) and that for the hot star ($$T_{3}= 50, r_{\odot} = 20, n_{0}=1, T_{e,3}=10$$) \begin{eqnarray} R_{s,\odot} = 0.0039 [pc] = 797 [AU] \qquad R_{s,50} = 199 [pc] \end{eqnarray} Usually the LISM number density around O stars is assumed to be $$n_{0}=1000$$, and since $$R_{S} \propto n_{0}^{-2/3}$$ we have to multiply the above values by $$1000^{-2/3} = 0.01$$: \begin{eqnarray} R_{s,\odot} = 7.97 [AU] \qquad R_{s,50} = 1.99 [pc] \end{eqnarray} The LISM around the Sun has a number density $$n_{0} = 0.1$$ and $$T_{e,4} = 6.75$$, and the parameters for $$\lambda$$ Cephei are $$T_{\lambda,3}= 34, r_{\odot} \approx 20, n_{0}=11, T_{e,4}=1$$ (from Wikipedia and our paper), and thus we have \begin{eqnarray} R_{s,\odot} = 3300 [AU] = 0.016 [pc] \qquad R_{s,\lambda} = 20.6 [pc] \end{eqnarray}
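These formulas are easy to evaluate numerically. Below is a minimal Python sketch (the function names are illustrative) that encodes $$Q_{\star}$$ and $$R_{S}$$ with the constants exactly as printed above; minor deviations from the quoted values come from rounding of the coefficients:

```python
import math

def beta2(Te, Z=1.0):
    # Total recombination rate into levels n >= 2 [cm^3 s^-1], Spitzer approximation.
    return 2.06e-11 * Z**2 * Te**-0.8

def Q_star(T3, r_sun):
    # Ionizing photon rate [s^-1]; T3 = surface temperature in units of 10^3 K,
    # r_sun = stellar radius in solar radii. Constants as printed in the text.
    poly = 2.46e4 * T3 + 1.57e2 * T3**2 + 2.0 * T3**3
    return 3.85e41 * r_sun**2 * poly * math.exp(-1.57e2 / T3)

def stromgren_radius_pc(Q, n0, Te):
    # Stroemgren radius from the ionization/recombination balance, in parsec.
    R_cm = (3.0 * Q / (4.0 * math.pi * n0**2 * beta2(Te))) ** (1.0 / 3.0)
    return R_cm / 3.086e18  # cm -> pc

AU_PER_PC = 2.063e5

# Sun (T3 = 5.78, r = 1) and a hot star (T3 = 50, r = 20) in a LISM
# with n0 = 1 cm^-3 and T_e = 10^4 K:
Q_sun, Q_hot = Q_star(5.78, 1.0), Q_star(50.0, 20.0)
print(f"Q_sun ~ {Q_sun:.2e} s^-1, Q_hot ~ {Q_hot:.2e} s^-1")
print(f"R_S(Sun) ~ {stromgren_radius_pc(Q_sun, 1.0, 1e4) * AU_PER_PC:.0f} AU")
print(f"R_S(hot) ~ {stromgren_radius_pc(Q_hot, 1.0, 1e4):.0f} pc")
```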
There are several limitations of the above result: The above estimate is only valid for a static Strömgren sphere (Ritzerveld 2005).
A number density of $$n_{0} = 1000$$ leads almost to a reversal of the solar type stellar winds into an accretion (Talbot & Newman 1977).
There is no relative motion between the star and the LISM. For the Sun, Blum & Fahr (1975) showed that neutral hydrogen can penetrate deeply into the inner solar system; Mackey et al. 2014 discussed an ionization front for slowly moving stars. For a critical discussion see Fahr 1991.
https://www.afterecon.com/economics-and-finance/arv-calculation-case-study/
# ARV Calculation Case Study
I’m trying to estimate the after repair value (ARV) of a home I own in Alexandria, VA. This article show initial steps in doing that.
My basic approach to ARV estimation is to comp the target house plan. While I'm at it, I also comp my current house as a validation approach. I use three sources: Zillow, Redfin, and Realtor.com. The first two are freely available; the last is a paid product, but they have a 1-month free trial. I considered CoreLogic, but some mean internet posts steered me away. Perhaps I will try them later because, you know, you can't trust these trolls.
My approach with comps is to match within my neighborhood. If a build within my neighborhood doesn’t exist, or if the samples are very thin (<10), I will look around within the zip code, then adjacent zip codes, but within the same state. It is a red flag if there aren’t many of the target build within the same zip code. Such comps are weaker and therefore we have less confidence in the estimates, but it’s better than nothing.
| Build | Metric | Zillow | Redfin | Realtor.com |
|-------|--------|--------|--------|-------------|
| 4/2 | n | 1 | 1 | 15 |
| 4/2 | House | 447K | 535K | 474K |
| 4/2 | Sqft | 600 | 1966 | 1528 |
| 4/2 | $ per Sqft | 600 | 254 | 310 |
| 4/3 | n | 3 | 7 | 6 |
| 4/3 | House | 549K | 536K | 500K |
| 4/3 | Sqft | 2003 | 1659 | 1910 |
| 4/3 | $ per Sqft | 274 | 323 | 262 |
| 5/3 | n | 6 | 7 | 1 |
| 5/3 | House | 503K | 519K | 516K |
| 5/3 | Sqft | 1813 | 1900 | 1235 |
| 5/3 | $ per Sqft | 277 | 274 | 418 |
| 6/4 | n | 1 | 0 | 0 |
| 6/4 | House | 700K | n/a | n/a |
| 6/4 | Sqft | 4542 | n/a | n/a |
| 6/4 | $ per Sqft | 154 | n/a | n/a |
For reference, these comps are against 6904 Vantage Drive, Alexandria, VA. It's in a neighborhood called Groveton within zip code 22306. Within Groveton, I preferred comps west of Highway 1. Data observed on 3/11/2019-3/12/2019. We paid 500K for this property, which is clearly above comp. We did that because the way this house is built allows a 1/1 rental in an in-law suite with its own washer, dryer, kitchen, air conditioning, and entrance. This was a feature other properties simply lack, and it generates cash flow. I personally value this feature enough to fully explain the difference in comps. So about 20-30K for that feature.
The Redfin and Zillow facts for 4/2 are based on those companies estimates. Notably, Zillow has the house listed as a 1/1 with 600 sqft. This is actually the spec for the rentable guest suite, and Zillow had picked that up instead of the main house.
One of the nice things about good comp research is that it will cause you to think about things you previously hadn't considered. Zillow had a single observation for a 6/4 recent sale in 2018, but the $ per square foot was quite low. However, in Oct 2017 there was a relatively comparable sale, a 6/4.5 at 2281 square feet, which sold for 766K, and VA market prices were higher in 2018 than 2017, in general and for this home size. The 6/4.5, then, comes in at about $330 per square foot. The results of my analysis indicate the price per square foot diminishes with an increase in square feet, but not obviously with the number of bedrooms or bathrooms in a way that makes sense. (Having more baths than bedrooms is one configuration that doesn't make sense.) The price per square foot ranges from about 250-330, and I would favor a 6/4.5 build over a 6/4.
This is good information for the benefit side of the cost-benefit calculation. I haven't really gone into costs, but one point is that in general building up is cheaper than building out on an existing home. The maximum square footage of my house after renovation, then, would be to add a level across the entire home, doubling our current 1966 to about 3900. Let's use the central estimate of price per square foot at (250+330)/2 = 290. This puts an expected sale price on the house after renovation at 290*3900 ≈ 1.1M. This is a bad estimate for a couple of reasons.
First, at 3900 square feet we are now outside the size of our comp data. Our closest comp by square footage would be the 6/4 build which had a per square foot price of about 150. Using 150 instead of 290 puts a 3900 house at 585K. I would argue this is on the low side, because we would target a 6/4.5 build, not a 6/4 build, and if a 6/4.5 can go for 766K at 2281 square feet, we would think a 6/4.5 at 3900 square feet can at least fetch that same 766K. This goes to my second concern with the 1.1M figure: It’s higher than any comp as a total house price. I think a fair thing to do here would be to estimate the 3900 build at 750K which is a conservative approach to matching it against the 766K sale with a similar build and fewer square feet. Once you count appreciation on the 766K house, this 750K is even more conservative.
While I don’t like comparing features because we start to get into selection bias, there’s one feature the current house has which I think makes it worth even more than otherwise stated. The in-law suite facilitates house hacking. The in-law suite is currently a 1/1, but after upgrade it would be a 2/2 or possibly a 2/2.5. This is a killer feature in my book, and something the 766K house doesn’t offer.
Now, the 5/3 build surely won’t be worth more than the 6/4.5 build. Suppose the 5/3 build adds half the square footage that the 6/4.5 build would add. The after build size of the 5/3 would be about 3000 square feet. At \$250 per square foot, this would come in at 750K, but this is the same price I expect the 6/4.5 to come in at, and it’s also high compared to other 5/3 comps. I could say this means we shouldn’t spend on a 6/4.5 because it gains us nothing, but I think a more conservative approach would be to say that we should limit the max 5/3 sale price to our comps. This would seem to cap the sale price at 600K, and really even then the comps seem favorable to a 5/4 build over a 5/3 build at 2000-3000 square feet.
If a 4/3 adds 100 square feet then at 250 per square foot the build would sell for about 525K. Compared to comps, I think it might even go for 550K. We will separately consider the cost side in-depth, but based on these three builds I’m already starting to like either just doing a bathroom or doing the whole second story. Our 5/3-5/4 estimates forced us to cut the average price per square foot, but the 4/3 might be worth more than the average price per square foot. At the same time the 4/3 isn’t gaining much in total square feet and may cost quite a bit, although we will hold off on that judgement. The 6/4.5 gains significantly in square footage, although we are being risky by really only looking at one or two comps. It also gains us in rental income potential.
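To keep the scenarios straight, here is a hypothetical back-of-the-envelope helper in Python; the numbers are just the estimates discussed above, not market data:

```python
# Each scenario pairs an after-renovation size with an assumed $/sqft rate.
def arv(sqft, price_per_sqft):
    return sqft * price_per_sqft

scenarios = {
    "6/4.5 at central $/sqft": arv(3900, (250 + 330) / 2),  # naive ~1.1M
    "6/4.5 at 6/4 comp rate":  arv(3900, 150),              # ~585K
    "5/3 at ~3000 sqft":       arv(3000, 250),              # ~750K
}
for name, value in scenarios.items():
    print(f"{name}: ${value:,.0f}")
```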
https://gdcoder.com/interview-coding-problems/
A great way to improve your coding skills is by solving coding challenges. Solving different types of challenges and puzzles can help you become a better problem solver, learn the intricacies of a programming language, prepare for job interviews, learn new algorithms, and more.
This post lists some recent interview coding problems asked by top companies like Google, Amazon, Uber, Facebook, Twitter, etc. Note that Python is the programming language used to solve the coding challenges below.
• First Challenge
• Second Challenge
• Third Challenge
• Conclusion
• References
First Challenge
Problem description
Given a list of numbers and a number k, return whether any two numbers from the list add up to k.
For example, given [20, 5, 13, 7] and k of 12, return true since 5 + 7 is 12.
Solution
This problem can be solved in several different ways. Assuming:
l = [20, 5, 13, 7]
k = 12
Brute force way would involve a nested iteration to check for every pair of numbers:
def two_sum(l, k):
    for i in range(len(l)):
        for j in range(len(l)):
            if i != j and l[i] + l[j] == k:
                return True
    return False
OR
def two_sum(l, k):
    for i in range(len(l)):
        for j in range(i + 1, len(l)):
            if l[i] + l[j] == k:
                return True
    return False
Both of these run in O(N²). If you are interested to know why, please read this excellent Stack Overflow question: https://stackoverflow.com/questions/526728/time-complexity-of-nested-for-loop
Another way is to use a set to remember the numbers we've seen so far. Then for a given number, we can check if there is another number that, if added, would sum to k. This would be O(N) since lookups of sets are O(1) each.
def two_sum(l, k):
    seen = set()
    for num in l:
        if k - num in seen:
            return True
        seen.add(num)  # remember this number for later pairs
    return False
Yet another solution involves sorting the list. We can then iterate through the list and run a binary search on k - l[i]. Since we run a binary search for each of the N elements, this takes O(N log N) overall; binary search itself runs in O(log N).
## Binary search algorithm
def binary_search(l, target, L, R):
    if R >= L:
        mid = (L + R) // 2
        if l[mid] == target:
            return mid
        elif l[mid] > target:
            return binary_search(l, target, L, mid - 1)
        else:
            return binary_search(l, target, mid + 1, R)
    else:
        return -1

def two_sum(l, k):
    l.sort()
    for i in range(len(l)):
        target = k - l[i]
        j = binary_search(l, target, 0, len(l) - 1)
        # Check that binary search found the target and that it's not
        # in the same index as i. If it is in the same index, we can
        # check l[j + 1] and l[j - 1] to see if there's another number
        # that's the same value as l[i].
        if j == -1:
            continue
        elif j != i:
            return True
        elif j + 1 < len(l) and l[j + 1] == target:
            return True
        elif j - 1 >= 0 and l[j - 1] == target:
            return True
    return False
Second Challenge
Problem description
Given a list of integers, return a new list such that each element at index i of the new list is the product of all the numbers in the original list except the one at i.
For example, if our input was [1, 2, 3, 4, 5], the expected output would be [120, 60, 40, 30, 24]. If our input was [3, 2, 1], the expected output would be [2, 3, 6]. Note that you are not allowed to use division.
Solution
Observing that the ith element of the result is simply the product of the numbers before i times the product of the numbers after i, we can multiply those two quantities to get our desired product.
Assuming:
l = [1, 2, 3, 4, 5]
we expect to get as output: [120, 60, 40, 30, 24]
Similarly for l = [1,2,0,3]
we expect to get as output: [0, 0, 6, 0]
In order to find the product of numbers before i, we can generate a list of prefix products. Specifically, the ith element in that list is the product of all numbers up to and including index i. Similarly, we generate the list of suffix products.
def products(l):
    # Generate prefix products
    prefix_products = []
    for num in l:
        if prefix_products:
            prefix_products.append(prefix_products[-1] * num)
        else:
            prefix_products.append(num)

    # Generate suffix products
    suffix_products = []
    for num in reversed(l):
        if suffix_products:
            suffix_products.append(suffix_products[-1] * num)
        else:
            suffix_products.append(num)
    suffix_products = list(reversed(suffix_products))

    # Generate result
    result = []
    for i in range(len(l)):
        if i == 0:
            result.append(suffix_products[i + 1])
        elif i == len(l) - 1:
            result.append(prefix_products[i - 1])
        else:
            result.append(prefix_products[i - 1] * suffix_products[i + 1])
    return result
This runs in O(N) time and space, since iterating over the list takes O(N) time and creating the prefix and suffix arrays take up O(N) space.
Third Challenge
Problem description
Given the root to a binary tree, implement serialize(root), which serializes a binary tree into a string, and deserialize(s), which deserializes the string back into the tree.
For example, given the following Node class
class Node:
    def __init__(self, val, left=None, right=None):
        self.val = val
        self.left = left
        self.right = right
The following test should pass:
node = Node('root', Node('left', Node('left.left')), Node('right'))
assert deserialize(serialize(node)).left.left.val == 'left.left'
Solution
At first let's revise what a binary tree is:
A binary tree is made of nodes, where each node contains a "left" reference, a "right" reference, and a data element. The topmost node in the tree is called the root. Every node (excluding the root) is connected by a directed edge from exactly one other node, called its parent. On the other hand, each node can be connected to at most two other nodes, called children. Nodes with no children are called leaves, or external nodes. Nodes which are not leaves are called internal nodes. Nodes with the same parent are called siblings.
To sum up, a binary tree is a tree where each node has 0, 1, or 2 children. The important bit is that 2 is the max – that’s why it’s binary.
First, let's create our tree using the provided Node class.
class Node:
    def __init__(self, val, left=None, right=None):
        self.val = val
        self.left = left
        self.right = right

node = Node('root', Node('left', Node('left.left')), Node('right'))
Let's run some tests to check the structure of the tree:
assert node.val == 'root'
assert node.left.val == 'left'
assert node.left.left.val == 'left.left'
assert node.left.left.left == None
assert node.left.left.right == None
assert node.left.right == None
assert node.right.val == 'right'
assert node.right.left == None
assert node.right.right == None
All the above checks pass. Let's now approach this problem by first figuring out what we would like the serialized tree to look like. Ideally, it would contain the minimum information required to encode all the necessary information about the binary tree. One possible encoding might be to borrow S-expressions from Lisp. The tree Node(1, Node(2), Node(3)) would then look like '(1 (2 () ()) (3 () ()))', where the empty brackets denote nulls.
To minimize data over the hypothetical wire, we could go a step further and prune out some unnecessary brackets. We could also replace the 2-character '()' with '#'. We can then infer leaf nodes by their form 'val # #' and thus get the structure of the tree that way. Then our tree would look like 1 2 # # 3 # #.
def serialize(root):
    if root is None:
        return '#'
    return f'{root.val} {serialize(root.left)} {serialize(root.right)}'
serialize(node)
# output
'root left left.left # # # right # #'
Let's build a deserialize function now:
def deserialize(data):
    vals = iter(data.split())
    def helper():
        val = next(vals)
        if val == '#':
            return None
        node = Node(val)
        node.left = helper()
        node.right = helper()
        return node
    return helper()
Everything is working fine and we can double check by running the following command:
assert deserialize(serialize(node)).val == 'root'
assert deserialize(serialize(node)).left.val == 'left'
assert deserialize(serialize(node)).left.left.val == 'left.left'
assert deserialize(serialize(node)).left.left.left == None
assert deserialize(serialize(node)).left.left.right == None
assert deserialize(serialize(node)).left.right == None
assert deserialize(serialize(node)).right.val == 'right'
assert deserialize(serialize(node)).right.left == None
assert deserialize(serialize(node)).right.right == None
This runs in O(N) time and space, since we iterate over the whole tree when serializing and deserializing.
Conclusion
Hope you find it interesting and please remember that a great way to improve your coding skills is by solving coding challenges.
Thanks for reading and I am looking forward to hearing your questions :)
Stay tuned and Happy Coding.
https://shreevatsa.wordpress.com/2009/09/
# The Lumber Room
"Consign them to dust and damp by way of preserving them"
## The Decline and Fall of The Decline and Fall
(Yes, this post is written just for the title. More details would be received gratefully.)
Over a period of 17 years from 1770 to 1787, Edward Gibbon wrote The History of the Decline and Fall of the Roman Empire. It was, among other things, a mammoth history (6 volumes, 71 chapters) of the last days of Rome, which for Gibbon apparently meant several centuries. (The book covers over thirteen centuries of history; here’s an outline.)
The work received instant praise. Adam Smith’s letter to Gibbon is typical:
“I cannot express to you the pleasure it gives me to find that by the universal consent of every man of taste and learning whom I either know or correspond with, it sets you at the very head of the whole literary tribe at present existing in Europe.”
The Decline and Fall became the model for all historians that followed — including its pessimism (history as “little more than the register of the crimes, follies, and misfortunes of mankind”), its overarching narrative, and its indictment of religion.
It became a literary monument of the 18th century, and one of the works that every educated man was expected to have read, a part of every bookshelf. Churchill (“I devoured Gibbon. […] I rode triumphantly through it from end to end and enjoyed it all”), Carlyle (“how gorgeously does it swing across the gloomy and tumultuous chasm of these barbarous centuries”), Virginia Woolf (“not merely a master of the pageant and the story; he is also the critic and the historian of the mind […] We seem as we read him raised above the tumult and the chaos into a clear and rational air”)… everyone read The Decline and Fall and spoke of it in the highest terms. (Gandhi read it in jail, and considered it an inferior version of the Mahabharata.) It was read by doctors, politicians, lawyers, novelists, even Sanskrit professors.
But then times began to change. Education stopped being the reading of “classics“, and became the learning of “subjects”. Today, no one I know has read The Decline And Fall, nor considers it worth the time.
Written by S
Sun, 2009-09-06 at 02:21:27
## Testing irreducibility using prime numbers
Here’s a simple and nice test for irreducibility in ${\mathbb{Z}[x]}$ that N told me about a year ago. (I just noticed this lying around while cleaning; I don’t have a year’s buffer like Raymond Chen.) Apologies for the ugly formatting; you’ll have to trust that the result (Theorem 1, or Corollary 8) is more beautiful than it looks. :-)
Actually I’m not sure why I wrote this originally, given that it’s all already well-explained in the originals and even partially on Wikipedia. Perhaps my proofs are different or simpler or I was bored or something.
1. Irreducibility test
In its simplest form, the test can be stated as follows.
Theorem 1 Given a polynomial ${f(x) = a_nx^n + a_{n-1}x^{n-1} + \dots + a_0}$ with integer coefficients, let ${G = \max_{i}|\frac{a_i}{a_n}|}$. If there exists an integer ${m \ge G+2}$ such that ${f(m)}$ is prime, then ${f}$ is irreducible.
For example, with the polynomial $x^2 + 3x + 1$, we have $G = 3$, and $f(5)=f(G+2)=41$ is prime, which proves that it is irreducible. (We could also evaluate f at e.g. 7, 8, 9, or 10 to get the same conclusion.)
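Here is a small Python sketch of the test (my helper names, not from the original note); keep in mind the criterion is only a sufficient condition, so a False result is inconclusive:

```python
import math

def is_prime(n):
    # Simple trial division; fine for the small values in these examples.
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def irreducible_by_prime_value(coeffs, tries=20):
    # coeffs = [a0, a1, ..., an]. Returns True if f(m) is prime for some
    # integer m >= G + 2, which by Theorem 1 proves irreducibility.
    an = coeffs[-1]
    G = max(abs(a / an) for a in coeffs[:-1])
    m0 = math.ceil(G) + 2
    for m in range(m0, m0 + tries):
        if is_prime(sum(a * m**i for i, a in enumerate(coeffs))):
            return True
    return False  # inconclusive: failure does not prove reducibility

print(irreducible_by_prime_value([1, 3, 1]))  # x^2 + 3x + 1 -> True (f(5) = 41)
```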
Written by S
Sat, 2009-09-05 at 22:01:44
Posted in mathematics
## Conscious consumption
I have been taking a break, and it has helped me gain some perspective. Or so I thought.
Like some who might be reading this, I subscribe to a large number of blogs. Google Reader says 106 subscriptions, but a few of them are aggregators which combine the updates from several blogs.
For about three months (since June 10th, I think), I have not been reading them, nor reading the news. I’m not exactly sure why… it started as a day’s break (which was a big deal), then became four days (which was an even bigger deal), then it got easier and easier. Probably, I thought I was taking a break from (parts of) the internet in order to catch up with (parts of) my life. It didn’t work, of course. I merely found other sinks in which to dump my time. (I spent more time on Wikipedia than ever before, read more actual books than I had in the last couple of years, and so on.)
I did, however, discover a couple of things.
One is that Google Reader stops updating the count of unread items at “1000+”. (It also automatically marks items more than 30 days old as read, and, as I have “only” about 1500 items a month, I don’t know if it counts to “2000+”.)
The other is some general observations about what our lives have become.
It seems that the meanings of words like “recreation” have become somewhat quaint. Now “entertainment” is not always something to indulge in because one requires relaxation, or because it is a rewarding pursuit in itself, but simply “because it’s there”.
Written by S
Sat, 2009-09-05 at 02:49:44
Posted in unfinished
https://mathhelpboards.com/threads/m30b-convert-the-differential-equation.26301/
m30b Convert the differential equation
karush
Well-known member
Convert the differential equation
$$y''+5y'+6y=e^x$$
into a system of fi rst order (nonhomogeneous) differential equations and solve the system.
the characteristic equation is
$$\lambda^2+5\lambda+6=e^x$$
factor
$$(\lambda+2)(\lambda+3)=e^x$$
ok not real sure what to do with this $=e^x$ thing
topsquark
Well-known member
MHB Math Helper
Convert the differential equation
$$y''+5y'+6y=e^x$$
into a system of fi rst order (nonhomogeneous) differential equations and solve the system.
the characteristic equation is
$$\lambda^2+5\lambda+6=e^x$$
factor
$$(\lambda+2)(\lambda+3)=e^x$$
ok not real sure what to do with this $=e^x$ thing
The characteristic equation is $$\displaystyle \lambda ^2 + 5 \lambda + 6 = 0$$. You don't need the $$\displaystyle e^x$$ until later.
-Dan
karush
Well-known member
$(\lambda+2)(\lambda+3)=0$
the roots are then
$\lambda=-2, \quad \lambda=-3$
so then we have
$e^{-2x},\quad e^{-3x}$
then hopefully
$y=c_1e^{-2x}+c_2e^{-3x}$
so how do we finish this ???? with =e^x
topsquark
Well-known member
MHB Math Helper
$(\lambda+2)(\lambda+3)=0$
the roots are then
$\lambda=-2, \quad \lambda=-3$
so then we have
$e^{-2x},\quad e^{-3x}$
then hopefully
$y=c_1e^{-2x}+c_2e^{-3x}$
so how do we finish this ???? with =e^x
I was commenting on what the characteristic equation is, not what you are supposed to do for this problem.
Let's define A(x) = y(x) and B(x) = y'(x). Then
A'(x) = y'(x)
B'(x) = y''(x)
Putting this into your original equation gives
$$\displaystyle y'' = -5y' - 6y + e^x$$
or
$$\displaystyle B' = -5B - 6A + e^x$$
And don't forget: $$\displaystyle A' = B$$, from the original definition of B.
So you have the system of differential equations:
$$\displaystyle \left ( \begin{matrix} A' \\ B' \end{matrix} \right ) = \left ( \begin{matrix} B \\ -6A - 5B \end{matrix} \right )+ \left ( \begin{matrix} 0 \\ e^x \end{matrix} \right )$$
or, using the usual notation:
$$\displaystyle \left ( \begin{matrix} A \\ B \end{matrix} \right ) ^{\prime} = \left ( \begin{matrix} 0 & 1 \\ -6 & -5 \end{matrix} \right ) \left ( \begin{matrix} A \\ B \end{matrix} \right ) + \left ( \begin{matrix} 0 \\ e^x \end{matrix} \right )$$
Now you have a pair of simultaneous first order linear differential equations, which you've been studying.
-Dan
HallsofIvy
Well-known member
MHB Math Helper
Convert the differential equation
$$y''+5y'+6y=e^x$$
into a system of first order (nonhomogeneous) differential equations
This first part has not been addressed. Let z= y'. Then y''= z'.
Then $z'+ 5z+ 6y= e^x$ or $z'= e^x- 5z- 6y$. Together with z= y' we have the two equations y'= z and $z'= e^x- 5z- 6y$.
That could also be written as the matrix equation
$\begin{pmatrix}y \\ z \end{pmatrix}'= \begin{pmatrix}0 & 1 \\ -6 & -5 \end{pmatrix}\begin{pmatrix}y \\ z \end{pmatrix}+ \begin{pmatrix} 0 \\ e^x\end{pmatrix}$.
The characteristic equation for the differential equation is the characteristic equation for that matrix, $\left|\begin{array}{cc}-\lambda & 1 \\ -6 & -5- \lambda\end{array}\right|= 0$.
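As a quick numerical cross-check (a sketch assuming SymPy is available, not part of the hand calculation above): the roots $\lambda=-2,-3$ give the homogeneous part, and the particular solution is $e^x/12$.

```python
import sympy as sp

x = sp.symbols('x')
y = sp.Function('y')

# Original second-order equation y'' + 5y' + 6y = e^x
ode = sp.Eq(y(x).diff(x, 2) + 5*y(x).diff(x) + 6*y(x), sp.exp(x))
print(sp.dsolve(ode, y(x)))
# Eq(y(x), C1*exp(-3*x) + C2*exp(-2*x) + exp(x)/12)
```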
https://math.stackexchange.com/questions/2369531/a-lambda-system-mathcall-that-is-a-pi-system-is-automatically-a-sig/2369542
# A $\lambda$-system $\mathcal{L}$ that is a $\pi$-system is automatically a $\sigma$-field.
Let $\Omega$ be a set. Let $\mathcal{L}$ be a $\lambda$-system, that is:
1. $\Omega \in \mathcal{L}$.
2. $A \in \mathcal{L} \implies A^c \in \mathcal{L}$.
3. $A_n \in \mathcal{L}, n\geq 1$ and $A_m \cap A_n = \varnothing$ when $n \neq m \ \implies \cup_{n} A_n \in \mathcal{L}$.
A $\pi$-system just means $\mathcal{L}$ is also closed under finite intersection.
A $\sigma$-field is a set of subsets of $\Omega$ that contains $\varnothing$ and is closed under complement and countable union.
Clearly, I only have to show the last property (countable union) as the first two are immediate from the definitions.
Let $A_n \in \mathcal{L}, n \geq 1$ be a countable collection of sets in $\mathcal{L}$. I've tried this out:
$B_n = A_n \setminus (\cup_{m \neq n} A_m)$ produces disjoint sets as in property (3), but it doesn't lead to a proof, since $\cup_n A_n = \cup_n B_n$ fails in general.
Yeah... sort of stuck here.
If $\{E_n\}$ is a countable collection in $\mathcal{L}$, then define the sets \begin{align*} F_1 &\doteq E_1 \\ F_n &\doteq E_n \backslash (E_1 \cup E_2 \cup \dots \cup E_{n-1}) \end{align*} Then $\{F_n\}$ are disjoint and $\cup_1^{\infty} E_n = \cup_1^{\infty} F_n$. We also have each $F_n \in \mathcal{L}$, since $F_n = E_n \cap E_1^c \cap E_2^c \cap \dots \cap E_{n-1}^c$ and $\mathcal{L}$ is closed under complementation and finite intersection. Hence, $\cup_1^{\infty} E_n \in \mathcal{L}$.
https://cs.stackexchange.com/questions/121995/lambda-expression-reduction
# Lambda Expression Reduction
I am unable to solve the following lambda expression using both normal order (Call-by-name) and applicative order (Call-by-value) reduction. I keep getting different answers for both. This is the lambda expression that has to be reduced using both techniques:
$$(\lambda f\ x\ldotp f\ (f\ x))\ (\lambda f\ x\ldotp f\ (f\ x))\ f\ x$$
I keep getting different answers for both.
My guess is you have mistakenly substituted $$f$$ instead of $$x$$ towards the end, when $$f$$ is in fact free.
The correct reduction procedure is shown below.
$$(\lambda f\ x\ldotp f\ (f\ x))\ (\lambda f\ x\ldotp f\ (f\ x))\ f\ x$$
There is no difference between normal order and applicative order at the first step.
$$(\lambda f\ x\ldotp f\ (f\ x))\ ((\lambda f\ x\ldotp f\ (f\ x))\ f)\ x$$
At this step there is a difference, because the argument, $$((\lambda f\ x\ldotp f\ (f\ x))\ f)$$, is not in beta normal form. Let's ignore it and use normal order first.
### Normal Order
$$((\lambda f\ x\ldotp f\ (f\ x))\ f)\ (((\lambda f\ x\ldotp f\ (f\ x))\ f)\ x)$$
$$(\lambda x\ldotp\ f\ (f\ x))\ (((\lambda f\ x\ldotp f\ (f\ x))\ f)\ x)$$
$$f\ (f\ (((\lambda f\ x\ldotp f\ (f\ x))\ f)\ x))$$
$$f\ (f\ (f\ (f\ x)))$$
Let's go back to the first step.
$$(\lambda f\ x\ldotp f\ (f\ x))\ ((\lambda f\ x\ldotp f\ (f\ x))\ f)\ x$$
This time let's use applicative order
### Applicative Order
$$(\lambda f\ x\ldotp f\ (f\ x))\ (\lambda x\ldotp f\ (f\ x))\ x$$
$$(\lambda x\ldotp f\ (f\ x))\ ((\lambda x\ldotp f\ (f\ x))\ x)$$
$$(\lambda x\ldotp f\ (f\ x))\ (f\ (f\ x))$$
$$f\ (f\ (f\ (f\ x)))$$
With normal order we reduced the expression to:
$$f\ (f\ (f\ (f\ x)))$$
With applicative order we reduced the expression to:
$$f\ (f\ (f\ (f\ x)))$$
The results are the same.
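As a sanity check outside the calculus itself: $$(\lambda f\ x\ldotp f\ (f\ x))$$ is the "apply twice" combinator, so self-application yields four-fold application. A small Python sketch (my names; the concrete $$f$$ and $$x$$ are arbitrary stand-ins for the free variables):

```python
# twice(f) applies f two times; twice(twice) therefore applies f four times.
twice = lambda f: lambda x: f(f(x))

f = lambda n: n + 1   # stand-in for the free variable f
x = 0                 # stand-in for the free variable x

assert twice(twice)(f)(x) == f(f(f(f(x)))) == 4
print(twice(twice)(f)(x))  # 4
```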
https://www.rdocumentation.org/packages/zoo/versions/1.0-3/topics/plot.zoo
|
# plot.zoo
From zoo v1.0-3
##### Plotting zoo Objects
Plotting method for objects of class "zoo".
Keywords
ts
##### Usage
## S3 method for class 'zoo':
plot(x, screens = 1, plot.type = c("multiple", "single"),
panel = lines, xlab = "Index", ylab = NULL, main = NULL,
ylim = NULL, oma = c(6, 0, 5, 0), mar = c(0, 5.1, 0, 2.1),
col = 1, lty = 1, pch = 1, type = "l", nc, widths = 1, heights = 1, ...)
## S3 method for class 'zoo':
lines(x, type = "l", ...)
##### Arguments
x
an object of class "zoo".
screens
factor (or coerced to factor) whose levels specify which graph each series is to be plotted in. screens=c(1,2,1) would plot series 1, 2 and 3 in graphs 1, 2 and 1.
plot.type
for multivariate zoo objects, "multiple" plots the series on multiple plots and "single" superimposes them on a single plot
panel
a function(x, y, col, lty, ...) which gives the action to be carried out in each panel of the display for plot.type = "multiple".
ylim
if plot.type = "multiple" then it can be a list of y axis limits. If not a list each graph has the same limits. If any list element is not a pair then its range is used instead. If plot.type = "single" then it is
xlab, ylab, main, oma, mar
graphical arguments, see par.
col, lty, pch, type
graphical arguments that can be vectors or (named) lists. See the details for more information.
nc
the number of columns to use when plot.type = "multiple". Defaults to 1 for up to 4 series, otherwise to 2.
widths, heights
widths and heights for individual graphs, see layout.
...
further graphical arguments passed on to the underlying plotting functions.
##### Details
The methods for plot and lines are very similar to the corresponding ts methods. However, the handling of the graphical parameters col, pch and lty is more flexible for multivariate series. These parameters can be vectors of the same length as the number of series plotted, and are recycled if shorter. They can also be (partially) named lists, e.g., list(A = c(1,2), c(3,4)), in which c(3, 4) is the default value and c(1, 2) the value only for series A. The screens argument can be specified in a similar way. If plot.type and screens conflict, then multiple plots will be assumed. Also see the examples.
In addition to classical time series line plots, there is also a simple barplot method for "zoo" series.
##### See Also
zoo, plot.ts, barplot
##### Aliases
• plot.zoo
• barplot.zoo
• lines.zoo
##### Examples
x.Date <- as.Date(paste(2003, 02, c(1, 3, 7, 9, 14), sep = "-"))
## univariate plotting
x <- zoo(rnorm(5), x.Date)
x2 <- zoo(rnorm(5, sd = 0.2), x.Date)
plot(x)
lines(x2, col = 2)
## multivariate plotting
z <- cbind(x, x2, zoo(rnorm(5, sd = 0.5), x.Date))
colnames(z) <- LETTERS[1:3]
plot(z, plot.type = "single", col = list(B = 2))
plot(z, type = "b", pch = 1:3, col = 1:3)
plot(z, type = "b", pch = list(A = 1:5, B = 3), col = list(C = 4, 2))
plot(z, type = "b", screen = c(1,2,1), col = 1:3)
## plot one zoo series against the other. This does NOT dispatch plot.zoo.
plot(coredata(merge(x, x2)))
## barplot
x <- zoo(cbind(rpois(5, 2), rpois(5, 3)), x.Date)
barplot(x, beside = TRUE)
Documentation reproduced from package zoo, version 1.0-3, License: GPL
|
2019-07-20 03:54:20
|
https://worddisk.com/wiki/Floating-point_number/
|
# Floating-point arithmetic
In computing, floating-point arithmetic (FP) is arithmetic using formulaic representation of real numbers as an approximation to support a trade-off between range and precision. For this reason, floating-point computation is often used in systems with very small and very large real numbers that require fast processing times. In general, a floating-point number is represented approximately with a fixed number of significant digits (the significand) and scaled using an exponent in some fixed base; the base for the scaling is normally two, ten, or sixteen. A number that can be represented exactly is of the following form:
$\text{significand} \times \text{base}^{\text{exponent}},$
where significand is an integer, base is an integer greater than or equal to two, and exponent is also an integer. For example:
$1.2345 = \underbrace{12345}_{\text{significand}} \times \underbrace{10}_{\text{base}}{}^{\overbrace{-4}^{\text{exponent}}}.$
The term floating point refers to the fact that a number's radix point (decimal point, or, more commonly in computers, binary point) can "float"; that is, it can be placed anywhere relative to the significant digits of the number. This position is indicated by the exponent, and thus the floating-point representation can be thought of as a form of scientific notation.
A floating-point system can be used to represent, with a fixed number of digits, numbers of different orders of magnitude: e.g. the distance between galaxies or the diameter of an atomic nucleus can be expressed with the same unit of length. The result of this dynamic range is that the numbers that can be represented are not uniformly spaced; the difference between two consecutive representable numbers varies with the chosen scale.[1]
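To see this non-uniform spacing concretely, Python's math.ulp (available in Python 3.9 and later) reports the gap from a float to the next representable one:

```python
import math

# The gap between consecutive representable doubles grows with magnitude.
for x in [1.0, 1e8, 1e16]:
    print(f"spacing near {x:g}: {math.ulp(x):g}")

# spacing near 1:     2.22045e-16
# spacing near 1e+08: 1.49012e-08
# spacing near 1e+16: 2
```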
Over the years, a variety of floating-point representations have been used in computers. In 1985, the IEEE 754 Standard for Floating-Point Arithmetic was established, and since the 1990s, the most commonly encountered representations are those defined by the IEEE.
The speed of floating-point operations, commonly measured in terms of FLOPS, is an important characteristic of a computer system, especially for applications that involve intensive mathematical calculations.
A floating-point unit (FPU, colloquially a math coprocessor) is a part of a computer system specially designed to carry out operations on floating-point numbers.
|
2022-05-26 21:35:26
|
https://cs.stackexchange.com/questions/80781/a-general-algorithm-for-greedy-algorithms
|
# A general algorithm for greedy algorithms
I have been refreshing on greedy algorithms as an algorithm design technique. I have read many sources for an explanation of what a greedy algorithm is, because I would like to put together a general greedy algorithm.
When reading these sources, I have gathered all of the important concepts to try to come up with this general algorithm for the greedy selection technique. By "general" I mean that it summarizes the behavior of the technique for all possible problems for which this technique would yield a correct solution.
I would like some input on it. Here is take 1:
Let $f$ be a function from $S$ to any set. The following algorithm tries to make a greedy choice at each iteration over the elements of $S$, in order to find the best possible point of $f$ over $S$ given a set of constraints $C$:
Select p0 from S;
i := 1;
Let P be a set of already chosen points from S to be initialized to {}
Let F be the set of values of f for each element of P, to be initialized to {};
while S != {}:
Make a greedy choice for p_i based on p_{i - 1};
if f(p_i) is feasible based on constraints from C and has improved:
p_{i - 1} := p_i;
Add f_i := f(p_i) to F;
Remove p_{i - 1} from S and save it in P;
return P, F;
My specific question is: How far is this first take from being a correct generalization of the greedy selection technique for algorithmic design?
This should be fun :)
• You might want to google the relationship between Matroids and greedy algorithms. Maybe check this question – adrianN Sep 3 '17 at 6:56
• That's a bit like asking "a general algorithm for divide & conquer algorithm". In a way, you have a type error: algorithm design principle/pattern != algorithm. That said, for greedy there actually is a canonical algorithm, as adrianN and YuvalFilmus mention. – Raphael Sep 3 '17 at 8:17
• "After we have polished this approach" Please note that we're a question and answer site, not a discussion forum. If your goal is to collaboratively build something up, this isn't the right site. – David Richerby Sep 3 '17 at 10:46
There is no such thing as the correct generalization of the greedy selection technique, because it's an informal technique. That said, there has been some effort at modeling the greedy heuristic, with a view toward understanding its limitations. This study has been initiated by Borodin, Nielsen and Rackoff, (Incremental) priority algorithms, and continued mostly by Borodin and his coauthors.
Before discussing the work of Borodin et al., let me briefly mention the subject of matroids and greedoids. They answer the following question:
Consider the greedy algorithm for maximizing a linear objective subject to constraints. For which constraint structure does this algorithm produce optimal results?
We can think of the constraints as a set system, any member of which is a feasible solution. For hereditary set systems (any subset of a feasible set is feasible), the greedy algorithm produces an optimal result if and only if the constraints form a matroid. Greedoids answer a more refined question.
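As a concrete illustration of that canonical greedy (a minimal Python sketch, not from the question; `is_independent` is a hypothetical feasibility oracle supplied by the caller):

```python
# Canonical greedy over a hereditary set system: scan elements in order of
# decreasing weight, keep each one that leaves the current solution feasible.
# The result is optimal for *every* weighting exactly when the feasible sets
# form a matroid.
def greedy(elements, weight, is_independent):
    solution = set()
    for e in sorted(elements, key=weight, reverse=True):
        if is_independent(solution | {e}):
            solution.add(e)
    return solution

# Example: a uniform matroid (any set with at most 2 elements is feasible).
print(greedy([1, 2, 3, 4], weight=lambda e: e,
             is_independent=lambda s: len(s) <= 2))  # {3, 4}
```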
Back to Borodin's priority algorithms. The exact definition of a priority algorithm depends on the context, but here is a general flavor. There is some iterative procedure for constructing a solution for a given instance. The instance consists of a set of objects (think scheduling algorithms). At each step of the procedure, the algorithm chooses an ordering of all objects, and the best available object according to the ordering is added to the solution set. All objects incompatible with this object are thrown out, and the following iteration is executed. The process stops when the set of available objects becomes empty.
What I described above is the adaptive priority model. In the fixed priority model, the ordering is chosen once and for all rather than at each step. In both models, the algorithm is not allowed to look at the set of available objects (that would be cheating).
The situation for graph algorithms is somewhat more complicated, see for example Borodin, Boyar and Larsen, Priority algorithms for graph optimization problems, and there are several models which are more realistic than the model considered above. There are several other variants of priority algorithms which can be explored through the relevant work of Borodin and others.
• I think you should lead with the last paragraph. The canonical greedy is often what people use/want, and they should look there first. – Raphael Sep 3 '17 at 8:16
• This is precisely the reason why people should be told about more general versions of the greedy algorithm. – Yuval Filmus Sep 3 '17 at 8:18
|
2020-01-22 00:57:23
|
https://www.physicsforums.com/threads/laplace-equation.192391/
|
# Laplace equation
1. Oct 19, 2007
### Qyzren
u(r, θ) satisfies Laplace's equation inside a 90º sector of a circular annulus with
a < r < b ; 0 < θ < π/2 . Use separation of variables to find the solution that
satisfies the boundary conditions
u(r, 0) = 0 u(r, π/2) = f(r) ; a < r < b
u(a, θ) = 0 u(b, θ) = 0 ; 0 < θ < π/2
Consider all possible cases (negative, zero, positive) for the separation constant. Give
integral expressions for the constants in the final form for u(r, θ).
so Laplace's equation.
∂²u/∂r² + 1/r ∂u/∂r + 1/r² ∂²u/∂θ² = 0,
use U(r, θ) = G(r)Φ(θ)
r²G''/G + rG'/G = -Φ''/Φ = -λ <-- separation constant.
I've shown that when λ < 0 and λ = 0, there are only trivial solutions.
For λ > 0, I get r²G'' + rG' + λG = 0. Using the Cauchy–Euler method, I get the general solution:
G = A cos(√λ log r) + B sin(√λ log r)
Subbing in my boundary conditions I get λ = [nπ/log (b/a)]² as my eigenvalues.
So I proceed to solve for Φ.
Φ'' - [nπ/log (b/a)]²Φ = 0
Φ = A sinh [nπθ/log (b/a)] + B cosh [nπθ/log (b/a)]
Boundary condition Φ(0) = 0 => B = 0.
so we're left with Φ = A sinh [nπθ/log (b/a)]
u(r,θ) = ∑{A sinh [nπθ/log (b/a)](cos(nπ log r / log (b/a)) + sin(nπ log r / log (b/a))}
so i split it up to u(r,θ) = ∑(A sinh [nπθ/log (b/a)]cos(nπ log r / log (b/a)) + ∑(A sinh [nπθ/log (b/a)] sin(nπ log r / log (b/a)))
Now the answer says: u(r,θ) = ∑{A sinh [nπθ/log (b/a)](sin(nπ log (r/a) / log (b/a))
where A sinh [nπ²/2log (b/a)] = 2/log(b/a) ∫ f(r) sin(nπ log (r/a) / log (b/a)) dr/r (the integral is from a to b)
which seems a bit different from what I have. Are they equivalent? Can someone show me how to get the answer, or tell me where I went wrong? Thank you.
2. Oct 19, 2007
### Kreizhn
Hmm... it's getting a little bit hard to read this. Next time you should just clearly define what $$\lambda$$ is and then use it from then on. Now, you'll notice that there is a discrepancy between your eigenfunctions and the solution's eigenfunctions, notably with
$$\cos{\sqrt{\lambda} \log r}$$ and the solution's $$\cos{\sqrt{\lambda} \log{\frac{r}{a}}}$$
Now I disagree with what you found when substituting your boundary condition and solving for lambda: since the logarithm is contained within the argument of the trigonometric function, it cannot be manipulated as easily as you seem to have done. To avoid getting two rather ugly equations that cannot easily be solved, the solutions have instead used their result of $$\cos{\sqrt{\lambda} \log{\frac{r}{a}}}$$
to ensure that $$u(a,\theta) = A=0$$. This can be done since we can easily see that substituting this result into the PDE doesn't affect the answer. This is a general result that can usually simplify your solutions. Continuing on like this, you find B quickly, and derive the same conclusion as the solutions. I don't know how you claim to have gotten your $$\lambda$$, but unless you used $$\log{\frac{r}{a}}$$ you shouldn't have gotten anything remotely close to the lambda of the solutions.
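Spelling out the step being described, in the thread's notation: taking $$G(r) = B \sin\left(\sqrt{\lambda}\, \log{\tfrac{r}{a}}\right)$$ makes $$G(a) = B \sin 0 = 0$$ hold automatically, and imposing $$G(b) = 0$$ then forces $$\sqrt{\lambda}\, \log{\tfrac{b}{a}} = n\pi \quad\Rightarrow\quad \lambda = \left[\frac{n\pi}{\log(b/a)}\right]^2, \qquad n = 1, 2, \ldots$$ which are exactly the eigenvalues quoted earlier in the thread.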
You also seem to have completely forgotten to apply the remaining boundary condition, u(r, π/2) = f(r). You have an as-yet undetermined value for A, which can be solved for by applying it. The eigenfunction set is complete and orthonormal, so you can use it to expand functions much akin to what you would do using Fourier series (though I'm not sure that this counts as a Fourier series because of the composition of the basis functions). To get their result for A, use the orthonormality of the basis set, and integrate over the domain.
Don't forget that both your expressions for G and $$\Phi$$ should have constants outside of them. Normally this means you would need to solve for 4 constants, but because you have eliminated cosh from the solution, you now only need to solve for two. You can take the constant from sinh and multiply it with the constants for sin/cos to define two new constants, but you should still have two constants in your solution before applying the boundary conditions.
Last edited: Oct 19, 2007
|
2016-12-10 05:18:29
|
http://www.theinfolist.com/php/SummaryGet.php?FindGo=osculating_circle
|
In differential geometry of curves, the osculating circle of a sufficiently smooth plane curve at a given point p on the curve has traditionally been defined as the circle passing through p and a pair of additional points on the curve infinitesimally close to p. Its center lies on the inner normal line, and its curvature defines the curvature of the given curve at that point. This circle, which is the one among all tangent circles at the given point that approaches the curve most tightly, was named circulus osculans (Latin for "kissing circle") by Leibniz.

The center and radius of the osculating circle at a given point are called the center of curvature and the radius of curvature of the curve at that point. A geometric construction was described by Isaac Newton in his Principia.
# Nontechnical description
Imagine a car moving along a curved road on a vast flat plane. Suddenly, at one point along the road, the steering wheel locks in its present position. Thereafter, the car moves in a circle that "kisses" the road at the point of locking. The curvature of the circle is equal to that of the road at that point. That circle is the osculating circle of the road curve at that point.
# Mathematical description
Let γ(s) be a regular parametric plane curve, where s is the arc length (the natural parameter). This determines the unit tangent vector T(s), the unit normal vector N(s), the signed curvature k(s) and the radius of curvature R(s) at each point:

$T(s)=\gamma'(s),\quad T'(s)=k(s)\,N(s),\quad R(s)=\frac{1}{\left|k(s)\right|}.$

Suppose that P is a point on γ where k ≠ 0. The corresponding center of curvature is the point Q at distance R along N, in the same direction if k is positive and in the opposite direction if k is negative. The circle with center at Q and with radius R is called the osculating circle to the curve γ at the point P.

If C is a regular space curve then the osculating circle is defined in a similar way, using the principal normal vector N. It lies in the osculating plane, the plane spanned by the tangent and principal normal vectors T and N at the point P.

The plane curve can also be given in a different regular parametrization $\gamma(t) = \begin{pmatrix} x_1(t) \\ x_2(t) \end{pmatrix}$, where regular means that $\gamma'(t)\ne 0$ for all $t$. Then the formulas for the signed curvature k(t), the normal unit vector N(t), the radius of curvature R(t), and the center Q(t) of the osculating circle are

$k(t) = \frac{x_1'(t)\,x_2''(t)-x_1''(t)\,x_2'(t)}{\left(x_1'(t)^2+x_2'(t)^2\right)^{3/2}}, \qquad N(t) = \frac{1}{\sqrt{x_1'(t)^2+x_2'(t)^2}}\begin{pmatrix} -x_2'(t) \\ x_1'(t) \end{pmatrix},$

$R(t) = \left|\frac{\left(x_1'(t)^2+x_2'(t)^2\right)^{3/2}}{x_1'(t)\,x_2''(t)-x_1''(t)\,x_2'(t)}\right| \qquad \text{and} \qquad Q(t) = \gamma(t) + \frac{x_1'(t)^2+x_2'(t)^2}{x_1'(t)\,x_2''(t)-x_1''(t)\,x_2'(t)}\begin{pmatrix} -x_2'(t) \\ x_1'(t) \end{pmatrix}.$
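As a quick numerical sanity check of these parametric formulas (a sketch, not part of the original article); for the parabola below, the vertex values k = 2, R = 1/2 and Q = (0, 1/2) agree with the Parabola example later on:

```python
# Signed curvature k, radius R and center Q of the osculating circle from a
# point gamma(t) and hand-coded first/second derivatives of the curve.
def osculating_circle(p, d1, d2):
    (x1, x2), (x1p, x2p), (x1pp, x2pp) = p, d1, d2
    cross = x1p * x2pp - x2p * x1pp   # x1'x2'' - x2'x1''
    speed2 = x1p ** 2 + x2p ** 2      # |gamma'|^2
    k = cross / speed2 ** 1.5         # signed curvature
    R = abs(speed2 ** 1.5 / cross)    # radius of curvature
    s = speed2 / cross
    Q = (x1 - s * x2p, x2 + s * x1p)  # center of curvature
    return k, R, Q

# Parabola gamma(t) = (t, t^2) at t = 0: gamma' = (1, 0), gamma'' = (0, 2).
print(osculating_circle((0.0, 0.0), (1.0, 0.0), (0.0, 2.0)))
# (2.0, 0.5, (0.0, 0.5))
```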
## Cartesian coordinates
We can obtain the center of the osculating circle in Cartesian coordinates if we substitute x = t and y = f(t) for some function f. If we do the calculations, the results for the X and Y coordinates of the center of the osculating circle are:

$x_c = x - f'\,\frac{1+f'^2}{f''} \quad\text{and}\quad y_c = f + \frac{1+f'^2}{f''}$
# Properties
For a curve C given by sufficiently smooth parametric equations (twice continuously differentiable), the osculating circle may be obtained by a limiting procedure: it is the limit of the circles passing through three distinct points on C as these points approach P. (Actually, point P plus two additional points, one on either side of P, will do; see Lamb.) This is entirely analogous to the construction of the tangent to a curve as a limit of the secant lines through pairs of distinct points on C approaching P.

The osculating circle S to a plane curve C at a regular point P can be characterized by the following properties:

* The circle S passes through P.
* The circle S and the curve C have the common tangent line at P, and therefore the common normal line.
* Close to P, the distance between the points of the curve C and the circle S in the normal direction decays as the cube or a higher power of the distance to P in the tangential direction.

This is usually expressed as "the curve and its osculating circle have second or higher order contact" at P. Loosely speaking, the vector functions representing C and S agree together with their first and second derivatives at P.

If the derivative of the curvature with respect to s is nonzero at P then the osculating circle crosses the curve C at P. Points P at which the derivative of the curvature is zero are called vertices. If P is a vertex then C and its osculating circle have contact of order at least three. If, moreover, the curvature has a non-zero local maximum or minimum at P then the osculating circle touches the curve C at P but does not cross it.

The curve C may be obtained as the envelope of the one-parameter family of its osculating circles. Their centers, i.e. the centers of curvature, form another curve, called the evolute of C. Vertices of C correspond to singular points on its evolute.

Within any arc of a curve C within which the curvature is monotonic (that is, away from any vertex of the curve), the osculating circles are all disjoint and nested within each other. This result is known as the Tait–Kneser theorem.
# Examples
## Parabola
For the parabola

$\gamma(t) = \begin{pmatrix} t \\ t^2 \end{pmatrix}$

the radius of curvature is

$R(t) = \frac{\left(1+4t^2\right)^{3/2}}{2}.$

At the vertex $\gamma(0) = \begin{pmatrix} 0 \\ 0 \end{pmatrix}$ the radius of curvature equals R(0) = 1/2 (see figure). The parabola has fourth order contact with its osculating circle there. For large t the radius of curvature increases ~ t³, that is, the curve straightens more and more.
## Lissajous curve
A Lissajous curve with ratio of frequencies (3:2) can be parametrized as follows:

$\gamma(t) = \begin{pmatrix} \cos(3t) \\ \sin(2t) \end{pmatrix}.$

It has signed curvature k(t), normal unit vector N(t) and radius of curvature R(t) given (from the general parametric formulas above) by

$k(t) = \frac{12\,\sin(2t)\sin(3t) + 18\,\cos(2t)\cos(3t)}{\left(9\sin^2(3t)+4\cos^2(2t)\right)^{3/2}}\,,$

$N(t) = \frac{1}{\sqrt{9\sin^2(3t)+4\cos^2(2t)}} \begin{pmatrix} -2\cos(2t) \\ -3\sin(3t) \end{pmatrix}$

and

$R(t) = \left|\frac{\left(9\sin^2(3t)+4\cos^2(2t)\right)^{3/2}}{12\,\sin(2t)\sin(3t) + 18\,\cos(2t)\cos(3t)}\right|.$

See the figure for an animation. There the "acceleration vector" is the second derivative $\frac{\mathrm{d}^2\gamma}{\mathrm{d}s^2}$ with respect to the arc length $s$.
## Cycloid
A cycloid with radius r can be parametrized as follows:

$\gamma(t) = \begin{pmatrix} r(t - \sin t) \\ r(1 - \cos t) \end{pmatrix}$

Its curvature is given by the following formula:

$\kappa(t) = -\frac{1}{4r\,\sin\left(\frac{t}{2}\right)}, \qquad 0 < t < 2\pi,$

which gives:

$R(t) = 4r\left|\sin\left(\frac{t}{2}\right)\right|$
# See also
* Circle packing theorem
* Osculating curve
* Osculating sphere
|
2021-12-04 13:36:54
|
http://www.statsblogs.com/page/2/
|
## ALUES: Agricultural Land Use Evaluation System, R package
October 26, 2014
By
Authors: Arnold R. Salvacion …
## Return of the sunday links! (10/26/14)
October 26, 2014
By
New look for the blog and bringing back the links. If you have something that you'd like included in the Sunday links, email me and let me know. If you use the title of the message "Sunday Links" you'll be more likely for me to find it when I search my gmail. Thomas L. does a
## Solution to the sample-allocation problem
October 26, 2014
By
See this recent post for background. Here’s the question: You are designing an experiment where you are estimating a linear dose-response pattern with a dose that x can take on the values 1, 2, 3, and the response is continuous. Suppose that there is no systematic error and that the measurement variance is proportional to x. You […] The post Solution to the sample-allocation problem appeared first on Statistical Modeling, Causal Inference,…
## Tuning Laplaces Demon III
October 26, 2014
By
This is the third post with LaplacesDemon tuning. same problem, different algorithms. For introduction and other code see this post. The current post takes algorithms Independence Metropolis to Reflective Slice Sampler.Independence MetropolisIndependen...
## Call for participation: AusDM 2014, Brisbane, 27-28 November
October 25, 2014
By
********************************************************* 12th Australasian Data Mining Conference (AusDM 2014) Brisbane, Australia 27-28 November 2014 http://ausdm14.ausdm.org/ ********************************************************* The Australasian Data Mining Conference has established itself as the premier Australasian meeting for both practitioners and researchers in data mining. Since AusDM’02 the conference …
## 3 YEARS AGO: MONTHLY MEMORY LANE
October 25, 2014
By
MONTHLY MEMORY LANE: 3 years ago: October 2011 (I mark in red 3 posts that seem most apt for general background on key issues in this blog*) (10/3) Part 2 Prionvac: The Will to Understand Power (10/4) Part 3 Prionvac: How the Reformers Should Have done Their Job (10/5) Formaldehyde Hearing: How to Tell the Truth With Statistically Insignificant Results (10/7) Blogging […]
## Solution to the problem on the distribution of p-values
October 25, 2014
By
See this recent post for background. Here’s the question: It is sometimes said that the p-value is uniformly distributed if the null hypothesis is true. Give two different reasons why this statement is not in general true. The problem is with real examples, not just toy examples, so your reasons should not involve degenerate situations such as […] The post Solution to the problem on the distribution of p-values appeared first on…
## How well does sample range estimate range?
October 25, 2014
By
I’ve been doing some work with Focused Objective lately, and today the following question came up in our discussion. If you’re sampling from a uniform distribution, how many samples do you need before your sample range has an even chance of covering 90% of the population range? This is a variation on a problem I’ve […]
## An interactive visualization to teach about the curse of dimensionality
October 24, 2014
By
I recently was contacted for an interview about the curse of dimensionality. During the course of the conversation, I realized how hard it is to explain the curse to a general audience. One of the best descriptions I could come up with was trying to describe sampling from a unit line, square, cube, etc. and
## Solution to the helicopter design problem
October 24, 2014
By
See yesterday’s post for background. Here’s the question: In the helicopter activity, pairs of students design paper ”helicopters” and compete to create the copter that takes longest to reach the ground when dropped from a fixed height. The two parameters of the helicopter, a and b, correspond to the length of certain cuts in the […] The post Solution to the helicopter design problem appeared first on Statistical Modeling, Causal…
## Feller’s shoes and Rasmus’ socks [well, Karl's actually...]
October 23, 2014
By
Yesterday, Rasmus Bååth [of puppies' fame!] posted a very nice blog using ABC to derive the posterior distribution of the total number of socks in the laundry when only pulling out orphan socks and no pair at all in the first eleven draws. Maybe not the most pressing issue for Bayesian inference in the era […]
## No, Michael Jordan didn’t say that!
October 23, 2014
By
The names are changed, but the song remains the same. First verse. There’s an article by a journalist, The odds, continually updated, by F.D. Flam in the NY Times to which Andrew responded in blog form, No, I didn’t say that, by Andrew Gelman, on this blog. Second verse. There’s an article by a journalist, […] The post No, Michael Jordan didn’t say that! appeared first on Statistical Modeling, Causal…
## Singular Spectrum Analysis in Excel
October 23, 2014
By
Singular Spectrum Analysis (SSA) is a technique for analysing time series. The method is relatively simple to implement and relies on applying some linear algebra. There is no requirement to do any pre-processing before applying SSA so practical implem...
## Some questions from our Ph.D. statistics qualifying exam
October 23, 2014
By
In the in-class applied statistics qualifying exam, students had 4 hours to do 6 problems. Here were the 3 problems I submitted: In the helicopter activity, pairs of students design paper ”helicopters” and compete to create the copter that takes longest to reach the ground when dropped from a fixed height. The two parameters of the […] The post Some questions from our Ph.D. statistics qualifying exam appeared first on Statistical…
## The class pondering Big Data
October 23, 2014
By
Note: I'm traveling a lot lately and it is affecting my ability to post on a regular basis. It's three weeks into my chart-building workshop (link) at NYU and we are starting to discuss individual projects. One of the major...
## Likelihood-based tests – score
October 23, 2014
By
Another interesting quantity is the score, which is the derivative of the likelihood. Intuitively (this is the idea behind the first-order condition), $\widehat\theta_n$ and $\theta_0$ will be close if the derivatives at those points are close. At $\widehat\theta_n$ the derivative is zero, so we will simply ask here whether the derivative at $\theta_0$ is close to 0. Or not. This test is called the score test, or the Lagrange multiplier test. Or…
## Likelihood-based tests – likelihood ratio
October 23, 2014
By
A promise made is a promise kept: I had said we would talk about the likelihood ratio test. The idea – a visual one – is to read things in the other direction: instead of asking whether $\widehat\theta_n$ and $\theta_0$ are close, we ask whether the likelihoods at those two points are close. If the likelihood function is sufficiently regular, we are asking the same question. When I presented the test in class yesterday morning, I suggested using the delta method to…
## Why is my OS X Yosemite install taking so long?: an analysis
October 23, 2014
By
Why? Since the latest Mac OS X update, 10.10 "Yosemite", was released last Thursday, there have been complaints springing up online of the progress bar woefully underestimating the actual time to complete installation. More specifically, it appeared as if, for a certain group of people (myself included), the installer would stall out at "two minutes …
## September 2014: Blog Contents
October 23, 2014
By
September 2014: Error Statistics Philosophy Blog Table of Contents Compiled by Jean A. Miller (9/30) Letter from George (Barnard) (9/27) Should a “Fictionfactory” peepshow be barred from a festival on “Truth and Reality”? Diederik Stapel says no (rejected post) (9/23) G.A. Barnard: The Bayesian “catch-all” factor: probability vs likelihood (9/21) Statistical Theater of the Absurd: “Stat on a […]
## Stan 2.5, now with MATLAB, Julia, and ODEs
October 22, 2014
By
As usual, you can find everything on the Stan Home Page. Drop us a line on the stan-users group if you have problems with installs or questions about Stan or coding particular models. New Interfaces We’d like to welcome two new interfaces: MatlabStan by Brian Lau, and Stan.jl (for Julia) by Rob Goedman. The new […] The post Stan 2.5, now with MATLAB, Julia, and ODEs appeared first on Statistical…
## Kathryn Chaloner 1954-2014
October 22, 2014
By
Prescript: Memory is fickle. It's been a while since these events, a while since I took her regression course and a while since I've read her papers. One thing I've found is that memories of the contents of particular papers evolves with time, and memo...
## Vote on simply statistics new logo design
October 22, 2014
By
As you can tell, we have given the Simply Stats blog a little style update. It should be more readable on phones or tablets now. We are also about to get a new logo. We are down to the last couple of choices and can't decide. Since we are statisticians, we thought we'd collect some
## Sailing between the Scylla of hyping of sexy research and the Charybdis of reflexive skepticism
October 22, 2014
By
Recently I had a disagreement with Larry Bartels which I think is worth sharing with you. Larry and I took opposite positions on the hot topic of science criticism. To put things in a positive way, Larry was writing about some interesting recent research which I then constructively criticized. To be more negative, Larry was […] The post Sailing between the Scylla of hyping of sexy research and the Charybdis…
|
2014-10-30 15:51:40
|
http://icpc.njust.edu.cn/Problem/Pku/1797/
|
Heavy Transportation
Time Limit: 3000MS
Memory Limit: 30000K
Description
Background
Hugo Heavy is happy. After the breakdown of the Cargolifter project he can now expand business. But he needs a clever man who tells him whether there really is a way from the place his customer has built his giant steel crane to the place where it is needed, on which all streets can carry the weight. Fortunately he already has a plan of the city with all streets and bridges and all the allowed weights. Unfortunately he has no idea how to find the maximum weight capacity in order to tell his customer how heavy the crane may become. But you surely know.
Problem
You are given the plan of the city, described by the streets (with weight limits) between the crossings, which are numbered from 1 to n. Your task is to find the maximum weight that can be transported from crossing 1 (Hugo's place) to crossing n (the customer's place). You may assume that there is at least one path. All streets can be travelled in both directions.
Input
The first line contains the number of scenarios (city plans). For each city the number n of street crossings (1 <= n <= 1000) and number m of streets are given on the first line. The following m lines contain triples of integers specifying start and end crossing of the street and the maximum allowed weight, which is positive and not larger than 1000000. There will be at most one street between each pair of crossings.
Output
The output for every scenario begins with a line containing "Scenario #i:", where i is the number of the scenario starting at 1. Then print a single line containing the maximum allowed weight that Hugo can transport to the customer. Terminate the output for the scenario with a blank line.
Sample Input
1
3 3
1 2 3
1 3 4
2 3 5
Sample Output
Scenario #1:
4
Source
TUD Programming Contest 2004, Darmstadt, Germany
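This is the classic maximum-bottleneck ("widest") path problem. Below is a minimal sketch of one standard approach, a Dijkstra variant that maximizes the minimum edge weight along the path; it is written in Python for brevity rather than a typical contest language, and input parsing and the "Scenario #i:" formatting are omitted:

```python
import heapq

def max_capacity(n, edges):
    """edges: list of (u, v, w) with 1-based crossings; best bottleneck 1 -> n."""
    adj = [[] for _ in range(n + 1)]
    for u, v, w in edges:
        adj[u].append((v, w))
        adj[v].append((u, w))  # streets are bidirectional
    best = [0] * (n + 1)
    best[1] = float("inf")
    heap = [(-best[1], 1)]  # max-heap via negated capacities
    while heap:
        cap, u = heapq.heappop(heap)
        cap = -cap
        if cap < best[u]:
            continue  # stale entry
        for v, w in adj[u]:
            new_cap = min(cap, w)
            if new_cap > best[v]:
                best[v] = new_cap
                heapq.heappush(heap, (-new_cap, v))
    return best[n]

print(max_capacity(3, [(1, 2, 3), (1, 3, 4), (2, 3, 5)]))  # 4, matching the sample
```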
|
2020-10-27 21:41:42
|
https://stats.stackexchange.com/questions/56427/vector-fit-interpretation-nmds/56432
|
# vector fit interpretation NMDS
So a colleague and I are using principal component analysis (PCA) or non-metric multidimensional scaling (NMDS) to examine how environmental variables influence patterns in benthic community composition. A common method is to fit environmental vectors on to an ordination. The length and direction of the vectors seem somewhat straightforward, but I don't understand how an R squared value or a p-value is calculated for these vectors. I have looked at a dozen papers and the most I can gather is that these numbers are calculated using permutations of the data. This does not seem very intuitive. What data is being permuted? How does this permutation create an R squared value, and what variance is being explained? My limited understanding of an R squared value comes from linear regressions. I need to explain this to people who have little to no background in statistics, so any help understanding these concepts or a link to an available text would be greatly appreciated. Thanks so much!
Vector fitting is a regression. Explicitly, the model fitted is
$$y = \beta_1 X_1 + \beta_2 X_2 + \varepsilon$$
where $y$ is the environmental variable requiring a vector, $X_i$ is the $i$th ordination "axis" score (here for the first two ordination "axes") and $\varepsilon$ the unexplained variance. Both $y$ and $X_i$ are centred prior to fitting the model, hence no intercept. The $\hat{\beta}_j$ are the coordinates of the vector for $y$ in the ordination space spanned by the $i$ ordination axes; these may be normalised to unit length.
As this is a regression, $R^2$ is easily computed and so could the significance of the coefficients or $R^2$. However, we presume that the model assumptions are not fully met and hence we use a permutation test to test significance of the $R^2$ of the model.
The permutation test doesn't create the overall $R^2$, what is done is that we permute the values of the response $y$ into random order. Next we use the fitted regression model (equation above) to predict the randomised response data and compute the $R^2$ between the randomised response and the fitted values from the model. This $R^2$ value is recorded and then the procedure is done again with a different random permutation. We keep doing this a modest number of times (say 999). Under the null hypothesis of no relationship between the ordination "axis" scores and the environmental variable, the observed $R^2$ value should be a common value among the permuted $R^2$ values. If however the observed $R^2$ is extreme relative to the permutation distribution of $R^2$ then it is unlikely that the Null hypothesis is correct as we have substantial evidence against it. The proportion of times a randomised $R^2$ from the distribution is equal to or greater than the observed $R^2$ is a value known as the permutation $p$ value.
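To make the recipe concrete, here is a hedged, generic sketch in Python (this is not how vegan implements it internally; here each permuted response is simply refitted, the usual textbook variant):

```python
import numpy as np

def permutation_r2(X, y, n_perm=999, seed=42):
    """Permutation p-value for the R^2 of the vector fit y ~ X (both centred)."""
    rng = np.random.default_rng(seed)
    X = X - X.mean(axis=0)
    y = y - y.mean()

    def r_squared(resp):
        beta, *_ = np.linalg.lstsq(X, resp, rcond=None)
        resid = resp - X @ beta
        return 1.0 - (resid ** 2).sum() / (resp ** 2).sum()

    observed = r_squared(y)
    perm = np.array([r_squared(rng.permutation(y)) for _ in range(n_perm)])
    # proportion of randomised R^2 values at least as large as the observed one
    p_value = (1 + (perm >= observed).sum()) / (n_perm + 1)
    return observed, p_value
```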
A fully worked example may help with this. Using the vegan package for R and some built-in data:
require(vegan)
data(varespec)
data(varechem)
## fit PCA
ord <- rda(varespec)
## fit vector for Al - gather data
dat <- cbind.data.frame(Al = varechem$Al, scores(ord, display = "sites", scaling = 1))
## fit the model
mod <- lm(Al ~ PC1 + PC2, data = dat)
summary(mod)
This gives
> summary(mod)
Call:
lm(formula = Al ~ PC1 + PC2, data = dat)
Residuals:
Min 1Q Median 3Q Max
-172.30 -58.00 -12.54 58.44 239.46
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) 142.475 19.807 7.193 4.34e-07 ***
PC1 31.143 9.238 3.371 0.00289 **
PC2 27.492 13.442 2.045 0.05356 .
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
Residual standard error: 97.04 on 21 degrees of freedom
Multiple R-squared: 0.4254, Adjusted R-squared: 0.3707
F-statistic: 7.774 on 2 and 21 DF, p-value: 0.002974
Note the value for the Multiple R-squared (0.4254). vegan has a canned function for doing all of this, on multiple environmental variables at once: envfit(). Compare the $R^2$ from above with the vector-fitted value. (To keep things simple I just do Al here, but you could pass all of varechem and envfit would fit vectors [centroids for factors] for all variables.)
set.seed(42) ## make this reproducible - pseudo-random permutations!
envfit(ord, varechem[, "Al", drop = FALSE])
> envfit(ord, varechem[, "Al", drop = FALSE])
***VECTORS
PC1 PC2 r2 Pr(>r)
Al 0.85495 0.51871 0.4254 0.004 **
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
P values based on 999 permutations.
The two $R^2$ values shown are exactly the same.
[Do note that envfit doesn't actually fit models via lm internally - it uses a QR decomposition. This is the same methods employed deeper down in lm but we call it directly to fit the model manually as we want it without the extra things that something like lm.fit would give us.]
• Thanks so much for your help and thank you for laying out the answer so clearly. I really appreciate it. – Jimj Apr 18 '13 at 5:01
|
2020-01-24 03:14:05
|
https://physicsoverflow.org/5140/counting-d0-d4-bound-states
|
# Counting D0-D4 Bound States
I have a slightly technical combinatorics question. Consider the degeneracy $D_n$ of bound states of $n$ D0 branes and one D4 brane. This is given in Polchinski by (13.6.24),
\begin{align} \sum_{n=0}^{\infty}q^n D_n=2^8\prod_{k=1}^{\infty}\left(\frac{1+q^k}{1-q^k}\right)^{8}. \end{align}
I was able to verify this up to $n=3$: basically you have to count all the ways to form bound states of the D0 branes and then bind them to the D4 brane. However, I wasn't able to verify it in general, since brute-forcing it is a bit of a mess. Is there some clever mathematical formalism that allows one to deal with combinatorial problems like this?
This post imported from StackExchange Physics at 2014-03-05 14:52 (UCT), posted by SE-user Matthew
This formula is actually pretty simple to understand.
First, the $2^8$ is the number of possible $D4$ states. Then for each (indistinguishable) $D0$, they can be in either a fermionic or bosonic state, of which there are $8$ each.
Next, the coefficient of $q^n$ in $(1+q)^8$ is the number of ways for $n$ independent $D0$ branes to fit in $8$ fermionic states.
The coefficient of $q^n$ in $(1-q)^{-8}$ is the number of ways for $n$ independent $D0$ branes to fit in $8$ bosonic states.
We multiply these two to allow $D0$ branes to occupy either bosonic or fermionic states.
By taking the products over $q^k$, we allow $D0$ branes to first form $k$-tuply bound states which occupy a single $D0$ state.
This post imported from StackExchange Physics at 2014-03-05 14:52 (UCT), posted by SE-user Ryan Thorngren
answered Nov 22, 2013 by (1,925 points)
Thanks @Ryan! This is very helpful. One minor correction: I think you meant (1-q)^(-8) for the bosonic counting instead of (1-q)^(8).
This post imported from StackExchange Physics at 2014-03-05 14:52 (UCT), posted by SE-user Matthew
Good catch! Yes I do.
This post imported from StackExchange Physics at 2014-03-05 14:52 (UCT), posted by SE-user Ryan Thorngren
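One can also sidestep the by-hand counting entirely by expanding the right-hand side as a truncated power series; a minimal Python sketch (not from the original exchange):

```python
# Expand 2^8 * prod_k ((1 + q^k)/(1 - q^k))^8 up to q^n_max and read off D_n.
def degeneracies(n_max):
    series = [0] * (n_max + 1)
    series[0] = 256  # the 2^8 prefactor
    for k in range(1, n_max + 1):        # factors with k > n_max cannot contribute
        for _ in range(8):               # eighth power of each factor
            for i in range(n_max, k - 1, -1):  # multiply by (1 + q^k)
                series[i] += series[i - k]
            for i in range(k, n_max + 1):      # multiply by 1/(1 - q^k)
                series[i] += series[i - k]
    return series

print(degeneracies(4))  # [256, 4096, 36864, 245760, 1347584]
```

The printed values are the degeneracies $D_0,\dots,D_4$ this sketch extracts from the generating function, available for comparison against a by-hand count.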
|
2021-01-20 16:35:14
|
http://ncatlab.org/nlab/show/torsion+theory
|
# nLab torsion theory
## Definition
A torsion theory in an abelian category $A$ is a couple $\left(T,F\right)$ of additive subcategories called the torsion class $T$ and the torsion free class $F$ such that the following conditions hold:
• $\mathrm{Hom}\left(T,F\right)=0$
(in other words, $A\left(X,Y\right)=0$ if $X\in \mathrm{Ob}T$ and $Y\in \mathrm{Ob}F$).
• $\mathrm{Hom}\left(T,Y\right)=0 ⇒ Y\in \mathrm{Ob}F$
• $\mathrm{Hom}\left(X,F\right)=0 ⇒ X\in \mathrm{Ob}T$
• for all $X\in \mathrm{Ob}A$, there exists $Y\subset X$, $Y\in \mathrm{Ob}T$ and $X/Y\in \mathrm{Ob}F$
Equivalently, a torsion theory in $A$ is a pair $\left(T,F\right)$ of strictly full subcategories of $A$ such that the first and last conditions in the above list hold.
#### Torsion part of an object
If the abelian category satisfies Gabriel's property (sup), then for every object $X$ there exists a largest torsion subobject $t\left(X\right)\subset X$, called the torsion part of $X$. Under the axiom of choice, $t:X\to t\left(X\right)$ can be extended to a functor.
#### Hereditary torsion theories
A torsion theory is hereditary if $T$ is closed under subobjects or, equivalently, if $t$ is a left exact functor.
## Properties
If $\left(T,F\right)$ is a torsion theory then $T$ and $F$ both contain the zero object and are closed under biproducts (Borceux II 1.12.3). The presentation of an object $X$ in $\mathrm{Ob}A$ as an extension $0\to Y\to X\to X/Y\to 0$, with $Y$ in $\mathrm{Ob}T$ and $X/Y$ in $\mathrm{Ob}F$, is unique up to an isomorphism of short exact sequences (Borceux II 1.12.4).
Given an abelian category $A$ there is a bijection between universal closure operations on $A$ and hereditary torsion theories in $A$ (Borceux II 1.12.8) and, if $A$ is locally finitely presentable, also with left exact localizations of $A$ admitting a right adjoint and with localizing subcategories of $A$ (Borceux II 1.13.15).
## Examples
The basic example of a torsion class is the class of torsion abelian groups within the category of all abelian groups. The torsion theories are often used as a means to formulate localization theory in abelian categories.
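For instance (a standard fact, illustrating the last axiom of the definition), every abelian group $A$ fits into a short exact sequence $0\to T(A)\to A\to A/T(A)\to 0$, where $T(A)$, the subgroup of elements of finite order, lies in the torsion class and the quotient $A/T(A)$ is torsion free.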
## Literature
• Francis Borceux, Handbook of categorical algebra, vol. 2
• Spencer E. Dickson, A torsion theory for Abelian categories, Trans. Amer. Math. Soc. 121, No. 1 (Jan., 1966), pp. 223-235, jstor
|
2013-12-06 22:45:33
|
http://fashionphotographycourse.com/how-to-amx/page.php?7fd468=is-pb-paramagnetic-or-diamagnetic
|
Example: Helium ($1s^2$) does not have any unpaired electrons, so it is diamagnetic. A paramagnetic electron is an unpaired electron; for a material to be diamagnetic, all of its electrons must be paired, even though all electrons contribute to the diamagnetic response. Paramagnetic substances contain net unpaired electrons and are weakly attracted by an external magnetic field: their relative magnetic permeability is slightly greater than 1 (a small positive magnetic susceptibility). Diamagnetic substances contain only paired electrons and are weakly repelled, because the induced magnetic field points opposite to the applied field. How a sample behaves in a non-uniform magnetic field therefore depends on whether it is ferromagnetic, paramagnetic or diamagnetic. Paramagnetism is due to the presence of unpaired electrons in the material, so most atoms with incompletely filled atomic orbitals are paramagnetic, although exceptions such as copper exist.
As for the question in the page title: a lump of lead (Pb) is not magnetic; bulk lead is diamagnetic and interacts only slightly (repulsively) with magnetic fields. Tin shows how structure matters: metallic white tin is paramagnetic, while the grey alpha-tin form, with its covalent diamond-like structure, is diamagnetic.
For an ideal paramagnet, Curie's law states that the susceptibility is inversely proportional to temperature, $\chi = C/T$, where $C$ is a material-specific Curie constant. Curie's law is valid under the commonly encountered conditions of low magnetization ($\mu_B H \lesssim k_B T$), but does not apply in the high-field/low-temperature regime ($\mu_B H \gtrsim k_B T$), where saturation of magnetization occurs and the magnetic dipoles are all aligned with the applied field. When exchange interactions are present, the amended Curie-Weiss law $\chi = C/(T-\theta)$ is used; the term $\theta$ describes the exchange interaction, and even if $\theta$ is close to zero this does not mean that there are no interactions, just that the aligning ferromagnetic and the anti-aligning antiferromagnetic ones cancel. Above its Curie point, a ferromagnet such as iron behaves as a paramagnet, typically following a Curie-type law but with exceptionally large values of the Curie constant; the Curie point is then seen as a phase transition between a ferromagnet and a "paramagnet". The parameter $\mu_{eff}$ is interpreted as the effective magnetic moment per paramagnetic ion. When orbital angular momentum contributions to the magnetic moment are small, as occurs for most organic radicals or for octahedral transition metal complexes with $d^3$ or high-spin $d^5$ configurations, the spin-only form (with g-factor $g_e \approx 2.0023$) applies; in other transition metal complexes this yields a useful, if somewhat cruder, estimate.
Constituent atoms or molecules of paramagnetic materials have permanent magnetic moments (dipoles) even in the absence of an applied field; the permanent moment generally is due to the spin of unpaired electrons in atomic or molecular electron orbitals. In pure paramagnetism the dipoles do not interact with one another and are randomly oriented in the absence of an external field due to thermal agitation, resulting in zero net magnetic moment. In an applied field only a small fraction of the spins is oriented by the field; this fraction is proportional to the field strength, which explains the linear dependency. The total magnetization drops to zero when the applied field is removed, because thermal motion randomizes the electron spin orientations. The Bohr-van Leeuwen theorem proves that there can be no diamagnetism or paramagnetism in a purely classical system; the true origin of the alignment can only be understood via the quantum-mechanical properties of spin. Measuring these weak effects historically required a sensitive analytical balance; modern measurements on paramagnetic materials are often conducted with a SQUID magnetometer.
Molecular examples: molecular oxygen is a good example. O2 is paramagnetic; in its molecular orbital diagram the two highest electrons singly occupy the degenerate pi* orbitals, and even in the frozen solid it contains di-radical molecules resulting in paramagnetic behavior (the unpaired spins reside in orbitals derived from oxygen p wave functions, whose overlap is limited to the one neighbor in the O2 molecules). O2+ has one fewer electron than O2, removed from the pi* orbital as the highest in energy, and is also paramagnetic. B2 is paramagnetic because of the single occupancy of its degenerate pi orbitals. NO has bond order (number of bonding electrons minus number of antibonding electrons, divided by two) equal to 2.5 and an odd number of electrons; any species with an odd number of electrons is necessarily paramagnetic. V3+ has two unpaired d electrons, therefore it is paramagnetic. By contrast, the element hydrogen is virtually never called "paramagnetic": the monatomic gas is stable only at extremely high temperature, and H atoms combine to form molecular H2, whereby the magnetic moments are lost (quenched) because the spins pair.
Although the electronic configurations of the individual atoms of most elements contain unpaired spins, such atoms are not necessarily paramagnetic in the condensed phase, because at ambient temperature quenching is very much the rule rather than the exception. The quenching tendency is weakest for f-electrons, because f (especially 4f) orbitals are radially contracted and overlap only weakly with orbitals on adjacent atoms; consequently, the lanthanide elements with incompletely filled 4f-orbitals are paramagnetic or magnetically ordered. Molecular structure can also lead to localization of electrons. Condensed-phase paramagnets are thus only possible if the interactions of the spins that lead either to quenching or to ordering are kept at bay by structural isolation of the magnetic centers, for example paramagnetic centers in a diamagnetic lattice at small concentrations. Materials in which a strong ferromagnetic or ferrimagnetic coupling organizes the moments into domains are not paramagnets in this narrow sense, although above their Curie or Néel points they are commonly still called "paramagnets", particularly if those temperatures are very low or have never been properly measured.
In conductive materials the electrons are delocalized, that is, they travel through the solid more or less as free electrons, and the magnetic response can be understood in a band-structure picture as arising from the incomplete filling of energy bands. For such a Fermi gas of noninteracting, delocalized electrons, the paramagnetism is the temperature-independent Pauli paramagnetism, named after the physicist Wolfgang Pauli: the Pauli susceptibility comes from the spin interaction with the magnetic field, while the Landau susceptibility comes from the spatial motion of the electrons and is independent of the spin. In some cases the band structure can result in two delocalized sub-bands with states of opposite spins at different energies, giving stronger magnetic effects; these are typically only observed when d or f electrons are involved, since such ordering usually occurs in relatively narrow (d-)bands. Ordinary metals are therefore typically either Pauli-paramagnetic or, as in the case of gold, even diamagnetic: the paramagnetic effect always competes with a diamagnetic response of opposite sign due to all the core electrons of the atoms, and for heavier elements the diamagnetic contribution becomes more important, so that in metallic gold it dominates the properties.
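As a rough illustration of the unpaired-electron rule above, here is a minimal Python sketch that classifies a free, neutral atom from its electron count, assuming idealized aufbau filling and Hund's rule; it does not capture aufbau exceptions (e.g., Cr, Cu), ions, molecules such as O2 (which require MO diagrams), or bulk metals such as Pb and Sn, whose behavior differs from the free atom:
```python
# Minimal sketch: classify a free, neutral atom as paramagnetic or
# diamagnetic from its electron count, using an idealized aufbau filling
# order and Hund's rule. Known exceptions (Cr, Cu, ...), ions, molecules
# (O2 needs an MO diagram) and bulk metals (Pb, Sn) are NOT modeled.
AUFBAU = [  # (subshell, capacity)
    ("1s", 2), ("2s", 2), ("2p", 6), ("3s", 2), ("3p", 6),
    ("4s", 2), ("3d", 10), ("4p", 6), ("5s", 2), ("4d", 10),
    ("5p", 6), ("6s", 2), ("4f", 14), ("5d", 10), ("6p", 6),
]

def unpaired_electrons(n_electrons: int) -> int:
    unpaired = 0
    for _subshell, capacity in AUFBAU:
        if n_electrons <= 0:
            break
        n = min(n_electrons, capacity)
        n_electrons -= n
        orbitals = capacity // 2
        # Hund's rule: singly occupy every orbital before pairing up.
        unpaired += n if n <= orbitals else 2 * orbitals - n
    return unpaired

def classify(n_electrons: int) -> str:
    return "paramagnetic" if unpaired_electrons(n_electrons) > 0 else "diamagnetic"

print(classify(2))   # He -> diamagnetic (1s^2, no unpaired electrons)
print(classify(8))   # O atom -> paramagnetic (two unpaired 2p electrons)
print(classify(23))  # V  -> paramagnetic (3d^3)
```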
|
2021-05-09 11:15:13
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6816087961196899, "perplexity": 2021.9985421263632}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243988966.82/warc/CC-MAIN-20210509092814-20210509122814-00068.warc.gz"}
|
https://www.sawaal.com/analogy-questions-and-answers/mp-nbspis-related-to-nbspkn-nbspin-the-same-way-as-nbspdg-nbsp-is-related-to-nbsp_8194
|
Q:
# 'MP' is related to 'KN' in the same way as 'DG' is related to
A) FI B) GJ C) HK D) BE
Explanation:
The letters move two places backward each in the alphabet as given below:
$M \xrightarrow{-2} K$ and $P \xrightarrow{-2} N$; likewise $D \xrightarrow{-2} B$ and $G \xrightarrow{-2} E$.
Therefore, DG => BE, i.e. option D. A short code sketch of this rule follows below.
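For readers who like to check such rules mechanically, here is a minimal Python sketch of the two-places-backward shift (the helper name is illustrative, not from the site):
```python
# Sketch of the "two places backward" rule (helper name is illustrative).
def shift(pair: str, offset: int) -> str:
    # Shift each capital letter by `offset`, wrapping around A..Z.
    return "".join(chr((ord(c) - ord("A") + offset) % 26 + ord("A")) for c in pair)

assert shift("MP", -2) == "KN"  # the given pair
print(shift("DG", -2))          # -> "BE", i.e. option D
```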
Q:
Select the related word/letters/number from the given alternatives.
2 : 10 :: 26 : ?
A) 50 B) 36 C) 42 D) 20
Explanation:
The terms follow the pattern $n^2 + 1$ for successive odd $n$: $1^2+1=2$, $3^2+1=10$, $5^2+1=26$, so the next term is $7^2+1=50$. Hence the answer is A) 50.
Q:
Select the related word/letters/number from the given alternatives.
DHLP : WSOK :: FJNR : ?
A) UQMI B) TPLH C) SOKG D) VRNJ
Explanation:
Each letter is replaced by its "opposite" letter from the other end of the alphabet (the two alphabet positions sum to 27): D↔W, H↔S, L↔O, P↔K. Applying the same rule to FJNR gives F↔U, J↔Q, N↔M, R↔I. Hence the answer is A) UQMI.
Q:
Select the related word/letters/number from the given alternatives.
Cell : Cytology :: Birds : ?
A) Odontology B) Mycology C) Ornithology D) Etymology
Explanation:
Cytology is the scientific study of cells, and ornithology is the scientific study of birds. Hence the answer is C) Ornithology.
Q:
‘Wheat’ is related to ‘Bread’ in the same way as ‘Sugarcane' is related to '..................'
A) Jaggery B) Mayonnaise C) Grass D) Ketchup
Explanation:
Bread is a food product made from wheat, just as jaggery is made from sugarcane. Hence the answer is A) Jaggery.
Q:
Select the option in which the words share the same relationship as that shared by the given pair of words.
Dentist : Doctor
A) Line : Circle B) Algebra : Geometry C) Chemistry : Science D) Biology : Astrology
Explanation:
A dentist is a specific type of doctor, just as chemistry is a specific branch of science. Hence the answer is C) Chemistry : Science.
Q:
Select the option that is related to the third term in the same way as the second term is related to the first term and the sixth term is related to fifth term.
72 : 14 :: 87 : ? :: 96 : 54
A) 56 B) 52 C) 29 D) 15
Explanation:
In each pair the second term is the product of the digits of the first: $7 \times 2 = 14$ and $9 \times 6 = 54$, so 87 gives $8 \times 7 = 56$. Hence the answer is A) 56.
Q:
Select the term that relates to the third term in the same way as the second term relates to the first term.
Blogger : Writer :: Illustrator : ..........?
A) Artist B) Singer C) Doctor D) Cook
Explanation:
A blogger is a kind of writer, just as an illustrator is a kind of artist. Hence the answer is A) Artist.
Q:
Select the option that is related to the third word in the same way as the second word is related to the first word.
Ministers : Council :: Sailors :
A) Sea B) Ship C) Captain D) Crew
|
2021-07-28 14:17:21
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 2, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5440861582756042, "perplexity": 4194.16354875582}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046153729.44/warc/CC-MAIN-20210728123318-20210728153318-00416.warc.gz"}
|
http://biblioteca.posgraduacaoredentor.com.br/?q=Parallel+algorithms
|
Page 1 of the results of 3779 digital items found in 0.026 seconds
## Performance results of running parallel applications on the InteGrade
CACERES, E. N.; MONGELLI, H.; LOUREIRO, L.; NISHIBE, C.; SONG, S. W.
Source: JOHN WILEY & SONS LTD Publisher: JOHN WILEY & SONS LTD
Type: Journal Article
ENG
Search relevance
56.21%
The InteGrade middleware intends to exploit the idle time of computing resources in computer laboratories. In this work we investigate the performance of running parallel applications with communication among processors on the InteGrade grid. As costly communication on a grid can be prohibitive, we explore the so-called systolic or wavefront paradigm to design the parallel algorithms in which no global communication is used. To evaluate the InteGrade middleware we considered three parallel algorithms that solve the matrix chain product problem, the 0-1 Knapsack Problem, and the local sequence alignment problem, respectively. We show that these three applications running under the InteGrade middleware and MPI take slightly more time than the same applications running on a cluster with only LAM-MPI support. The results can be considered promising and the time difference between the two is not substantial. The overhead of the InteGrade middleware is acceptable, in view of the benefits obtained to facilitate the use of grid computing by the user. These benefits include job submission, checkpointing, security, job migration, etc. Copyright (C) 2009 John Wiley & Sons, Ltd.; Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP); FAPESP[2004/08928-3]; Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq); CNPq[55.0895/07-8]; CNPq[30.5362/06-2]; Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq); CNPq[30.2942/04-1]; Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq); CNPq[62.0123/04-4]; Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq); CNPq[48.5460/06-8]; Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq); Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq); CNPq[62.0171/06-5]; FUNDECT[41/100.115/2006]; FUNDECT
## Desenvolvimento de modelos e algoritmos sequenciais e paralelos para o planejamento da expansão de sistemas de transmissão de energia elétrica; Development of mathematical models, sequential and parallel algorithms for transmission expansion planning
Sousa, Aldir Silva
Source: Biblioteca Digitais de Teses e Dissertações da USP Publisher: Biblioteca Digitais de Teses e Dissertações da USP
Type: Doctoral Thesis Format: application/pdf
Search relevance
56.28%
The main objective of this study is to propose a new methodology to deal with the problem of Electric Power Transmission Network Expansion Planning with Multiple Generation Scenarios (PERTEEG). The methodology proposed in this work aims to build transmission network expansion plans that are able, at the lowest possible investment cost, to satisfy the new requirements of modern electric power systems, such as the construction of transmission networks that are free of congestion and robust to uncertainty regarding future generation scenarios. A study of the literature on the problem showed that new models and methodologies for approaching the PERTEEG are needed. When modeling the PERTEEG with the goal of building transmission networks that circumvent the uncertainties regarding future generation scenarios while simultaneously minimizing the investment cost of expanding the system, the planner faces a multi-objective optimization problem. The operations research literature contains several algorithms aimed at handling multi-objective problems. In this thesis, two of these algorithms were applied: Nondominated Sorting Genetic Algorithm-II (NSGA-II) and SPEA2: Strength Pareto Evolutionary Algorithm (SPEA2). In a first analysis...
## Parallel programming in biomedical signal processing
Chorão, Ricardo Daniel Domingos
Search relevance
56.27%
Dissertation submitted for the degree of Master in Biomedical Engineering; Patients with neuromuscular and cardiorespiratory diseases need to be monitored continuously. This constant monitoring gives rise to huge amounts of multivariate data which need to be processed as soon as possible, so that their most relevant features can be extracted. The field of parallel processing, an area of the computational sciences, comes naturally as a way to provide an answer to this problem. For the parallel processing to succeed it is necessary to adapt the pre-existing signal processing algorithms to the modern architectures of computer systems with several processing units. In this work parallel processing techniques are applied to biosignals, connecting the area of computer science to the biomedical domain. Several considerations are made on how to design parallel algorithms for signal processing, following the data parallel paradigm. The emphasis is given to algorithm design, rather than the computing systems that execute these algorithms. Nonetheless, shared memory systems and distributed memory systems are mentioned in the present work. Two signal processing tools integrating some of the parallel programming concepts mentioned throughout this work were developed. These tools allow a fast and efficient analysis of long-term biosignals. The two kinds of analysis are focused on heart rate variability and breath frequency...
## Designing Efficient Parallel Algorithms for Graph Problems
Liang, Weifa
Type: Thesis (PhD); Doctor of Philosophy (PhD)
EN
Search relevance
66.26%
Graph algorithms are concerned with the algorithmic aspects of solving graph problems. The problems are motivated by, and have applications in, diverse areas of computer science, engineering and other disciplines. Problems arising from these areas of application are good candidates for parallelization since they often have both intense computational needs and stringent response time requirements. Motivated by these concerns, this thesis investigates parallel algorithms for these kinds of graph problems that have at least one of the following properties: the problems involve some type of dynamic updates; the sparsification technique is applicable; or the problems are closely related to communications network issues. The models of parallel computation used in our studies are the Parallel Random Access Machine (PRAM) model and the practical interconnection network models such as meshes and hypercubes.
## Designing Parallel Algorithms for SMP Clusters; Entwurf von Parallelen Algorithmen für SMP Clusters
Schmollinger, Martin
Type: Dissertation
EN
Search relevance
66.28%
In this thesis, we examine methods for designing and optimizing parallel algorithms for SMP clusters. This particular architecture for parallel computers combines two different concepts. SMP clusters consist of computing nodes that are shared-memory systems, because the processors have access to common resources and especially to the local memory system. Hence, the processors within the same node can communicate and synchronize using the shared memory. An interconnection network connects the nodes. Communication and synchronization of processors from different nodes is done over this network and thus corresponds to a distributed-memory system. First, this organization leads to a parallel hierarchy, because parallelism occurs both within and between the nodes. Second, a hierarchy is created concerning communication. In general, communication within a node is faster than communication between the nodes due to the use of shared memory. Therefore, there are at least two levels of hierarchy. Due to modern trends like hierarchical interconnection structures, Metacomputing technology, where several parallel machines are connected, or Grid computing technology that uses the Internet to unify distributed computing resources in the whole world...
## Parallel algorithms and architectures for subspace based channel estimation for CDMA communication systems
Sengupta, Chaitali; Kota, Kishore; Cavallaro, Joseph R.; Sengupta, Chaitali; Kota, Kishore; Cavallaro, Joseph R.
Type: Conference paper
ENG
Search relevance
66.26%
Conference Paper; This paper presents an overview and results from an ongoing research project to study parallel algorithms for the acquisition of Code Division Multiple Access (CDMA) communication signals. The goal of this research is to evaluate a class of related algorithms and architectures for the acquisition problem and map them onto parallel architectures containing DSPs. The algorithms used are generally termed subspace-based algorithms, since they involve computation of subspaces of the vector space spanned by certain observation vectors. This paper presents results from some preliminary implementations of such a subspace-based algorithm on the Texas Instruments TMS320C40 Parallel Processing Development System.
## Parallel algorithms in linear algebra
Brent, Richard P
Type: Working/Technical Paper Format: 166298 bytes; 356 bytes; application/pdf; application/octet-stream
EN_AU
Search relevance
56.18%
This paper provides an introduction to algorithms for fundamental linear algebra problems on various parallel computer architectures, with the emphasis on distributed-memory MIMD machines. To illustrate the basic concepts and key issues, we consider the problem of parallel solution of a nonsingular linear system by Gaussian elimination with partial pivoting. This problem has come to be regarded as a benchmark for the performance of parallel machines. We consider its appropriateness as a benchmark, its communication requirements, and schemes for data distribution to facilitate communication and load balancing. In addition, we describe some parallel algorithms for orthogonal (QR) factorization and the singular value decomposition (SVD).
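Since the abstract treats Gaussian elimination with partial pivoting as the benchmark computation, a minimal sequential NumPy sketch of that algorithm is given below for orientation; it is only the textbook serial form (the paper's subject is distributing this work across parallel machines), and the function name is illustrative:
```python
import numpy as np

def solve_gepp(A, b):
    """Solve Ax = b by Gaussian elimination with partial pivoting
    (sequential textbook form; names are illustrative)."""
    A = np.array(A, dtype=float)
    b = np.array(b, dtype=float)
    n = len(b)
    for k in range(n - 1):
        # Partial pivoting: bring the largest |entry| of column k up to row k.
        p = k + int(np.argmax(np.abs(A[k:, k])))
        if p != k:
            A[[k, p]] = A[[p, k]]
            b[[k, p]] = b[[p, k]]
        # Eliminate the entries below the pivot.
        for i in range(k + 1, n):
            m = A[i, k] / A[k, k]
            A[i, k:] -= m * A[k, k:]
            b[i] -= m * b[k]
    # Back substitution.
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):
        x[i] = (b[i] - A[i, i + 1:] @ x[i + 1:]) / A[i, i]
    return x

A = [[2.0, 1.0, 1.0], [4.0, 3.0, 3.0], [8.0, 7.0, 9.0]]
b = [1.0, 2.0, 5.0]
print(solve_gepp(A, b))                         # matches the reference:
print(np.linalg.solve(np.array(A), np.array(b)))
```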
## Algoritmos paralelos para alocação e gerência de processadores em máquinas multiprocessadoras hipercúbicas; Parallel algorithms for processor allocation in hypercubes
De Rose, Cesar Augusto Fonticielha
Type: Dissertation Format: application/pdf
POR
Search relevance
56.39%
## Efficient parallel algorithms for elastic-plastic finite element analysis
Ding, Zhongwen (Kevin); Qin, Qing Hua; Cardew-Hall, Michael; Kalyanasundaram, Shankar
Type: Journal Article
Search relevance
56.18%
This paper presents our new development of parallel finite element algorithms for elastic-plastic problems. The proposed method is based on dividing the original structure under consideration into a number of substructures which are treated as isolated fi
## Simulating Parallel Algorithms in the MapReduce Framework with Applications to Parallel Computational Geometry
Goodrich, Michael T.
Type: Journal Article
Search relevance
56.24%
In this paper, we describe efficient MapReduce simulations of parallel algorithms specified in the BSP and PRAM models. We also provide some applications of these simulation results to problems in parallel computational geometry for the MapReduce framework, which result in efficient MapReduce algorithms for sorting, 1-dimensional all nearest-neighbors, 2-dimensional convex hulls, 3-dimensional convex hulls, and fixed-dimensional linear programming. For the case when reducers can have a buffer size of $B=O(n^\epsilon)$, for a small constant $\epsilon>0$, all of our MapReduce algorithms for these applications run in a constant number of rounds and have a linear-sized message complexity, with high probability, while guaranteeing with high probability that all reducer lists are of size $O(B)$.; Comment: Version of paper appearing in MASSIVE 2010
## Improved bounds and parallel algorithms for the Lovasz Local Lemma
Haeupler, Bernhard; Harris, David G.
Type: Journal Article
Search relevance
56.21%
The Lovasz Local Lemma (LLL) is a cornerstone principle in the probabilistic method of combinatorics, and a seminal algorithm of Moser & Tardos (2010) provided an efficient randomized algorithm to implement it. This algorithm could be parallelized to give an algorithm that uses polynomially many processors and $O(\log^3 n)$ time, stemming from $O(\log n)$ adaptive computations of a maximal independent set (MIS). Chung et. al. (2014) developed faster local and parallel algorithms, potentially running in time $O(\log^2 n)$, but these algorithms work under significantly more stringent conditions than the LLL. We give a new parallel algorithm, that works under essentially the same conditions as the original algorithm of Moser & Tardos, but uses only a single MIS computation, thus running in $O(\log^2 n)$ time. This conceptually new algorithm also gives a clean combinatorial description of a satisfying assignment which might be of independent interest. Our techniques extend to the deterministic LLL algorithm given by Chandrasekaran et al (2013) leading to an NC-algorithm running in time $O(\log^2 n)$ as well. We also provide improved bounds on the run-times of the sequential and parallel resampling-based algorithms originally developed by Moser & Tardos. These bounds extend to any problem instance in which the tighter Shearer LLL criterion is satisfied. We also improve on the analysis of Kolipaka & Szegedy (2011) to give tighter concentration results. Interestingly...
## Parallel Algorithms for Counting Triangles in Networks with Large Degrees
Arifuzzaman, Shaikh; Khan, Maleq; Marathe, Madhav
Type: Journal Article
Search relevance
56.24%
Finding the number of triangles in a network is an important problem in the analysis of complex networks. The number of triangles also has important applications in data mining. Existing distributed memory parallel algorithms for counting triangles are either Map-Reduce based or message passing interface (MPI) based and work with overlapping partitions of the given network. These algorithms are designed for very sparse networks and do not work well when the degrees of the nodes are relatively larger. For networks with larger degrees, Map-Reduce based algorithm generates prohibitively large intermediate data, and in MPI based algorithms with overlapping partitions, each partition can grow as large as the original network, wiping out the benefit of partitioning the network. In this paper, we present two efficient MPI-based parallel algorithms for counting triangles in massive networks with large degrees. The first algorithm is a space-efficient algorithm for networks that do not fit in the main memory of a single compute node. This algorithm divides the network into non-overlapping partitions. The second algorithm is for the case where the main memory of each node is large enough to contain the entire network. We observe that for such a case...
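For orientation, here is a minimal sequential sketch of the underlying counting task, in which each triangle is counted exactly once by scanning only higher-numbered neighbors; the MPI partitioning schemes described in the abstract are not shown, and the toy graph is illustrative:
```python
# Sequential sketch of the counting task: each triangle {u, v, w} with
# u < v < w is counted exactly once. The paper's MPI algorithms split
# this node-centric work across (non-)overlapping partitions.
def count_triangles(adj):
    count = 0
    for u, nbrs in adj.items():
        higher = {v for v in nbrs if v > u}
        for v in higher:
            count += len(higher & {w for w in adj[v] if w > v})
    return count

# Toy graph: a 4-cycle plus one chord -> exactly two triangles.
adj = {0: {1, 2, 3}, 1: {0, 2}, 2: {0, 1, 3}, 3: {0, 2}}
print(count_triangles(adj))  # -> 2
```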
## Parallel algorithms in linear algebra
Brent, Richard P.
Type: Journal Article
Search relevance
56.21%
This report provides an introduction to algorithms for fundamental linear algebra problems on various parallel computer architectures, with the emphasis on distributed-memory MIMD machines. To illustrate the basic concepts and key issues, we consider the problem of parallel solution of a nonsingular linear system by Gaussian elimination with partial pivoting. This problem has come to be regarded as a benchmark for the performance of parallel machines. We consider its appropriateness as a benchmark, its communication requirements, and schemes for data distribution to facilitate communication and load balancing. In addition, we describe some parallel algorithms for orthogonal (QR) factorization and the singular value decomposition (SVD).; Comment: 17 pages. An old Technical Report, submitted for archival purposes. For further details see http://wwwmaths.anu.edu.au/~brent/pub/pub128.html
## Distributed and Parallel Algorithms for Set Cover Problems with Small Neighborhood Covers
Agarwal, Archita; Chakaravarthy, Venkatesan T.; Choudhury, Anamitra R.; Roy, Sambuddha; Sabharwal, Yogish
Type: Journal Article
Search relevance
56.18%
In this paper, we study a class of set cover problems that satisfy a special property which we call the {\em small neighborhood cover} property. This class encompasses several well-studied problems including vertex cover, interval cover, bag interval cover and tree cover. We design unified distributed and parallel algorithms that can handle any set cover problem falling under the above framework and yield constant factor approximations. These algorithms run in polylogarithmic communication rounds in the distributed setting and are in NC, in the parallel setting.; Comment: Full version of FSTTCS'13 paper
## Design and implementation of self-adaptable parallel algorithms for scientific computing on highly heterogeneous HPC platforms
Lastovetsky, Alexey; Reddy, Ravi; Rychkov, Vladimir; Clarke, David
Type: Journal Article
Search relevance
56.3%
Traditional heterogeneous parallel algorithms, designed for heterogeneous clusters of workstations, are based on the assumption that the absolute speed of the processors does not depend on the size of the computational task. This assumption proved inaccurate for modern and prospective highly heterogeneous HPC platforms. A new class of algorithms based on the functional performance model (FPM), representing the speed of the processor by a function of problem size, has recently been proposed. However, these algorithms cannot be employed in self-adaptable applications because of the very high cost of constructing the functional performance model. The paper presents a new class of parallel algorithms for highly heterogeneous HPC platforms. Like traditional FPM-based algorithms, these algorithms assume that the speed of the processors is characterized by speed functions rather than speed constants. Unlike the traditional algorithms, they do not assume the speed functions to be given. Instead, they estimate the speed functions of the processors for different problem sizes during their execution. These algorithms do not construct the full speed function for each processor but rather build and use partial estimates sufficient for optimal distribution of computations with a given accuracy. The low execution cost of distributing computations between heterogeneous processors in these algorithms makes them suitable for employment in self-adaptable applications. Experiments with parallel matrix multiplication applications based on this approach are performed on local and global heterogeneous computational clusters. The results show that the execution time of optimal matrix distribution between processors is significantly less...
## Improved Parallel Algorithms for Spanners and Hopsets
Type: Journal Article
Search relevance
56.21%
We use exponential start time clustering to design faster and more work-efficient parallel graph algorithms involving distances. Previous algorithms usually rely on graph decomposition routines with strict restrictions on the diameters of the decomposed pieces. We weaken these bounds in favor of stronger local probabilistic guarantees. This allows more direct analyses of the overall process, giving: * Linear work parallel algorithms that construct spanners with $O(k)$ stretch and size $O(n^{1+1/k})$ in unweighted graphs, and size $O(n^{1+1/k} \log k)$ in weighted graphs. * Hopsets that lead to the first parallel algorithm for approximating shortest paths in undirected graphs with $O(m\;\mathrm{polylog}\;n)$ work.
## Practical Parallel External Memory Algorithms via Simulation of Parallel Algorithms
Robillard, David E.
Type: Journal Article
Search relevance
56.33%
This thesis introduces PEMS2, an improvement to PEMS (Parallel External Memory System). PEMS executes Bulk-Synchronous Parallel (BSP) algorithms in an External Memory (EM) context, enabling computation with very large data sets which exceed the size of main memory. Many parallel algorithms have been designed and implemented for Bulk-Synchronous Parallel models of computation. Such algorithms generally assume that the entire data set is stored in main memory at once. PEMS overcomes this limitation without requiring any modification to the algorithm by using disk space as memory for additional "virtual processors". Previous work has shown this to be a promising approach which scales well as computational resources (i.e. processors and disks) are added. However, the technique incurs significant overhead when compared with purpose-built EM algorithms. PEMS2 introduces refinements to the simulation process intended to reduce this overhead as well as the amount of disk space required to run the simulation. New functionality is also introduced, including asynchronous I/O and support for multi-core processors. Experimental results show that these changes significantly improve the runtime of the simulation. PEMS2 narrows the performance gap between simulated BSP algorithms and their hand-crafted EM counterparts...
## Computing Multidimensional Aggregates in Parallel
Liang, Weifa; Orlowska, Maria
Source: Slovene Society Informatika Publisher: Slovene Society Informatika
Type: Journal Article
Search relevance
56.26%
Computing multiple related group-bys and aggregates is one of the core operations of On-Line Analytical Processing (OLAP) applications. This kind of computation involves a huge volume of data operations (megabytes or terabytes). The response time for such applications is crucial, so using parallel processing techniques to handle such computation is inevitable. In this paper we present several parallel algorithms for computing a collection of group-by aggregations based on a multiprocessor system with shared disks. We focus on a special case of the aggregation problem, the 'Cube' operator, which computes group-by aggregations over all possible combinations of a list of attributes. The proposed algorithms introduce a novel processor scheduling policy and a non-trivial decomposition approach for the problem in the parallel environment. In particular, we believe the proposed hybrid algorithm has the best performance potential among the four proposed algorithms. All the proposed algorithms are scalable.
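As a toy illustration of the 'Cube' operator described in the abstract, the sketch below computes a SUM aggregate grouped by every subset of an attribute list; the data, attribute names, and the choice of SUM are illustrative assumptions, and no parallel scheduling is shown:
```python
from itertools import combinations
from collections import defaultdict

# Toy CUBE: SUM(sales) grouped by every subset of the attribute list,
# i.e. the 2^d group-bys that the parallel algorithms share out.
rows = [
    {"region": "N", "product": "A", "year": 2020, "sales": 3},
    {"region": "N", "product": "B", "year": 2021, "sales": 5},
    {"region": "S", "product": "A", "year": 2020, "sales": 2},
]
attrs = ["region", "product", "year"]

cube = {}
for r in range(len(attrs) + 1):
    for group in combinations(attrs, r):
        agg = defaultdict(int)
        for row in rows:
            agg[tuple(row[a] for a in group)] += row["sales"]
        cube[group] = dict(agg)

print(cube[("region",)])  # {('N',): 8, ('S',): 2}
print(cube[()])           # {(): 10} -- the grand total
```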
## Scalable parallel algorithms for surface fitting and data mining
Christen, Peter; Hegland, Markus; Nielsen, Ole; Roberts, Stephen; Strazdins, Peter; Altas, I
|
2019-02-18 00:43:39
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5681169629096985, "perplexity": 2991.3340168023365}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-09/segments/1550247483873.51/warc/CC-MAIN-20190217233327-20190218015327-00600.warc.gz"}
|
http://www.mathnet.ru/php/archive.phtml?wshow=paper&jrnid=matna&paperid=6&option_lang=eng
|
Math. Nachr., 2013, Volume 286, Issue 5, Pages 518–535 (Mi matna6)
Extension of the notion of a gap to differential operators defined on different open sets
V. I. Burenkov, E. Feleqi
Dept. of Pure and Appl. Mathematics, University of Padova, Via Trieste n. 63, 35121 Padova, Italy
Abstract: In this paper the notion of a gap between two linear operators is extended to the case of linear differential operators defined on different open sets. Estimates of the gap between second order uniformly elliptic partial differential operators subject to homogeneous Dirichlet boundary conditions defined on different open sets $\Omega_1$ and $\Omega_2$ are obtained in terms of the geometrical characteristics of vicinity of $\Omega_1$ and $\Omega_2$. These estimates can be used for obtaining spectral stability estimates for the eigenvalues and eigenfunctions of the aforementioned operators.
DOI: https://doi.org/10.1002/mana.201100073
Bibliographic databases:
MSC: Primary 47F05, 47A55, 47A05; Secondary 35J25
Revised: 15.09.2011
Accepted: 17.09.2011
Language:
|
2019-08-21 09:10:59
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6621372699737549, "perplexity": 804.0947602797589}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027315865.44/warc/CC-MAIN-20190821085942-20190821111942-00319.warc.gz"}
|
https://meangreenmath.com/tag/logic/
|
# My Favorite One-Liners: Part 44
In this series, I’m compiling some of the quips and one-liners that I’ll use with my students to hopefully make my lessons more memorable for them.
Today’s quip is something that I’ll use to emphasize that the meaning of the word “or” is a little different in mathematics than in ordinary speech. For example, in mathematics, we could solve a quadratic equation for $x$:
$x^2 + 2x - 8 = 0$
$(x+4)(x-2) = 0$
$x + 4 = 0 \qquad \hbox{OR} \qquad x - 2 = 0$
$x = -4 \qquad \hbox{OR} \qquad x = 2$
In this example, the word “or” means “one or the other or maybe both.” It could be that both statements are true, as in the next example:
$x^2 + 2x +1 = 0$
$(x+1)(x+1) = 0$
$x + 1 = 0 \qquad \hbox{OR} \qquad x + 1= 0$
$x = -1 \qquad \hbox{OR} \qquad x = -1$
However, in plain speech, the word “or” typically means “one or the other, but not both.” Here’s the quip I’ll use to illustrate this:
At the end of “The Bachelor,” the guy has to choose one girl or the other. He can’t choose both.
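To make the two readings concrete for programming-minded students, here is a small Python sketch (my own illustration, not from the posts above) contrasting the inclusive “or” with the exclusive reading:
```python
# Inclusive "or" (Python `or`) vs. exclusive "or" (`^`, xor), tested on
# the two factored quadratics above.
def is_root(x, a, b, c):
    return a * x**2 + b * x + c == 0

# x^2 + 2x - 8 = (x + 4)(x - 2): the two disjuncts never hold together.
for x in (-4, 2):
    assert ((x + 4 == 0) or (x - 2 == 0)) == is_root(x, 1, 2, -8)

# x^2 + 2x + 1 = (x + 1)(x + 1): at x = -1 BOTH disjuncts are true.
x = -1
p, q = (x + 1 == 0), (x + 1 == 0)
print(p or q)  # True  -- mathematical "or" accepts "both"
print(p ^ q)   # False -- the everyday exclusive reading would reject a root
```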
# My Favorite One-Liners: Part 38
In this series, I’m compiling some of the quips and one-liners that I’ll use with my students to hopefully make my lessons more memorable for them.
When I was a student, I heard the story (probably apocryphal) about the mathematician who wrote up a mathematical paper that was hundreds of pages long and gave it to the departmental administrative assistant to type. (This story took place many years ago before the advent of office computers, and so typewriters were the standard for professional communication.) The mathematician had written “iff” as the standard abbreviation for “if and only if” since typewriters did not have a button for the $\Leftrightarrow$ symbol.
Well, so the story goes, the administrative assistant saw all of these “iff”s, muttered to herself about how mathematicians don’t know how to spell, and replaced every “iff” in the paper with “if”.
And so the mathematician had to carefully pore through this huge paper, carefully checking if the word “if” should be “if” or “iff”.
I have no idea if this story is true or not, but it makes a great tale to tell students.
# My Favorite One-Liners: Part 34
In this series, I’m compiling some of the quips and one-liners that I’ll use with my students to hopefully make my lessons more memorable for them.
Suppose that my students need to prove a theorem like “Let $n$ be an integer. Then $n$ is odd if and only if $n^2$ is odd.” I’ll ask my students, “What is the structure of this proof?”
The key is the phrase “if and only if”. So this theorem requires two proofs:
• Assume that $n$ is odd, and show that $n^2$ is odd.
• Assume that $n^2$ is odd, and show that $n$ is odd.
I call this a blue-light special: Two for the price of one. Then we get down to the business of proving both directions of the theorem.
I’ll also use the phrase “blue-light special” to refer to the conclusion of the conjugate root theorem: if a polynomial $f$ with real coefficients has a complex root $z$, then $\overline{z}$ is also a root. It’s a blue-light special: two for the price of one.
# Predicate Logic and Popular Culture (Part 123): Willie Nelson
Let $M(t)$ be the proposition “You were on my mind at time $t$.” Translate the logical statement
$\forall t < 0 (M(t))$.
Naturally, this matches the classic song by Willie Nelson (though Elvis did record it before him).
Context: This semester, I taught discrete mathematics for the first time. Part of the discrete mathematics course includes an introduction to predicate and propositional logic for our math majors. As you can probably guess from their names, students tend to think these concepts are dry and uninteresting even though they’re very important for their development as math majors.
In an effort to make these topics more appealing, I spent a few days mining the depths of popular culture in a (likely futile) attempt to make these ideas more interesting to my students. In this series, I’d like to share what I found. Naturally, the sources that I found have varying levels of complexity, which is appropriate for students who are first learning propositional and predicate logic.
When I actually presented these in class, I either presented the logical statement and had my class guess the statement in actual English, or I gave my students the famous quote and had them translate it into predicate logic. However, for the purposes of this series, I’ll just present the statement in predicate logic first.
# Predicate Logic and Popular Culture (Part 122): Queen
Let $p$ be the proposition “I cross a million rivers,” let $q$ be the proposition “I rode a million miles,” and let $r$ be the proposition “I still am where I started.” Translate the logical statement
$(p \land q) \Rightarrow r$.
This matches a line from this classic by Queen.
Context: This semester, I taught discrete mathematics for the first time. Part of the discrete mathematics course includes an introduction to predicate and propositional logic for our math majors. As you can probably guess from their names, students tend to think these concepts are dry and uninteresting even though they’re very important for their development as math majors.
In an effort to make these topics more appealing, I spent a few days mining the depths of popular culture in a (likely futile) attempt to make these ideas more interesting to my students. In this series, I’d like to share what I found. Naturally, the sources that I found have varying levels of complexity, which is appropriate for students who are first learning propositional and predicate logic.
When I actually presented these in class, I either presented the logical statement and had my class guess the statement in actual English, or I gave my students the famous quote and had them translate it into predicate logic. However, for the purposes of this series, I’ll just present the statement in predicate logic first.
# Predicate Logic and Popular Culture (Part 121): OneRepublic
Let $F(x)$ be the proposition “$x$ is a right friend,” let $P(y)$ be the proposition “$y$ is a right place,” let $I(x,y)$ be the proposition “$x$ is located at place $y$,” let $H(x,y)$ be the proposition “They have $x$ at place $y$,” and let $p$ be the proposition “We’re going down.” Translate the logical statement
$\forall x \forall y(F(x) \land P(y) \land I(x,y) \Rightarrow H(x,y)) \land p$.
This matches the chorus of this song by OneRepublic.
Context: This semester, I taught discrete mathematics for the first time. Part of the discrete mathematics course includes an introduction to predicate and propositional logic for our math majors. As you can probably guess from their names, students tend to think these concepts are dry and uninteresting even though they’re very important for their development as math majors.
In an effort to make these topics more appealing, I spent a few days mining the depths of popular culture in a (likely futile) attempt to make these ideas more interesting to my students. In this series, I’d like to share what I found. Naturally, the sources that I found have varying levels of complexity, which is appropriate for students who are first learning propositional and predicate logic.
When I actually presented these in class, I either presented the logical statement and had my class guess the statement in actual English, or I gave my students the famous quote and had them translate it into predicate logic. However, for the purposes of this series, I’ll just present the statement in predicate logic first.
# Predicate Logic and Popular Culture (Part 120): Crossfade
Let $C(t)$ be the proposition “At time $t$, I meant to be so cold.” Translate the logical statement
$\forall t < 0 \lnot C(t)$.
This matches the echo of this song by Crossfade.
Context: This semester, I taught discrete mathematics for the first time. Part of the discrete mathematics course includes an introduction to predicate and propositional logic for our math majors. As you can probably guess from their names, students tend to think these concepts are dry and uninteresting even though they’re very important for their development as math majors.
In an effort to make these topics more appealing, I spent a few days mining the depths of popular culture in a (likely futile) attempt to make these ideas more interesting to my students. In this series, I’d like to share what I found. Naturally, the sources that I found have varying levels of complexity, which is appropriate for students who are first learning propositional and predicate logic.
When I actually presented these in class, I either presented the logical statement and had my class guess the statement in actual English, or I gave my students the famous quote and had them translate it into predicate logic. However, for the purposes of this series, I’ll just present the statement in predicate logic first.
# Predicate Logic and Popular Culture (Part 119): Billy Joel
Let $p$ be the proposition “I’m gonna try for an uptown girl,” let $B(x)$ be the proposition “$x$ has hot blood,” let $q$ be the proposition “She’s looking for a downtown man,” and let $r$ be the proposition “I’m a downtown man.” Also, define the function $f(x)$ to be how long $x$ has lived in a white bread world. Translate the logical statement
$p \land \forall x (B(x) \Rightarrow (f(x) \le f(\hbox{she}))) \land q \land r$.
Of course, this matches the first chorus of the Billy Joel classic.
Context: This semester, I taught discrete mathematics for the first time. Part of the discrete mathematics course includes an introduction to predicate and propositional logic for our math majors. As you can probably guess from their names, students tend to think these concepts are dry and uninteresting even though they’re very important for their development as math majors.
In an effort to make these topics more appealing, I spent a few days mining the depths of popular culture in a (likely futile) attempt to make these ideas more interesting to my students. In this series, I’d like to share what I found. Naturally, the sources that I found have varying levels of complexity, which is appropriate for students who are first learning propositional and predicate logic.
When I actually presented these in class, I either presented the logical statement and had my class guess the statement in actual English, or I gave my students the famous quote and had them translate it into predicate logic. However, for the purposes of this series, I’ll just present the statement in predicate logic first.
# Predicate Logic and Popular Culture (Part 118): Bruno Mars
Let $D(x)$ be the proposition “Today I am doing $x$.” Translate the logical statement
$\forall x \lnot D(x)$.
This matches the closing line of the chorus of the Bruno Mars song.
Context: This semester, I taught discrete mathematics for the first time. Part of the discrete mathematics course includes an introduction to predicate and propositional logic for our math majors. As you can probably guess from their names, students tend to think these concepts are dry and uninteresting even though they’re very important for their development as math majors.
In an effort to make these topics more appealing, I spent a few days mining the depths of popular culture in a (likely futile) attempt to make these ideas more interesting to my students. In this series, I’d like to share what I found. Naturally, the sources that I found have varying levels of complexity, which is appropriate for students who are first learning propositional and predicate logic.
When I actually presented these in class, I either presented the logical statement and had my class guess the statement in actual English, or I gave my students the famous quote and had them translate it into predicate logic. However, for the purposes of this series, I’ll just present the statement in predicate logic first.
# Predicate Logic and Popular Culture (Part 117): Kelly Clarkson
Let $K(x)$ be the proposition “$x$ kills you,” let $S(x)$ be the proposition “$x$ makes you stronger,” and let $T(x)$ be the proposition “$x$ makes you stand a little taller.” Translate the logical statement
$\forall x( \lnot K(x) \Rightarrow (S(x) \land T(x)))$.
This matches the first line of this hit song by Kelly Clarkson.
Context: This semester, I taught discrete mathematics for the first time. Part of the discrete mathematics course includes an introduction to predicate and propositional logic for our math majors. As you can probably guess from their names, students tend to think these concepts are dry and uninteresting even though they’re very important for their development as math majors.
In an effort to make these topics more appealing, I spent a few days mining the depths of popular culture in a (likely futile) attempt to make these ideas more interesting to my students. In this series, I’d like to share what I found. Naturally, the sources that I found have varying levels of complexity, which is appropriate for students who are first learning propositional and predicate logic.
When I actually presented these in class, I either presented the logical statement and had my class guess the statement in actual English, or I gave my students the famous quote and had them translate it into predicate logic. However, for the purposes of this series, I’ll just present the statement in predicate logic first.
https://stats.stackexchange.com/questions/218019/how-to-find-a-standard-deviation-determined-by-a-normal-distribution-probability
# How to find a standard deviation determined by a Normal distribution probability?
The question is
A liquid drug is marketed in phials containing a nominal 1.5 ml, but the amounts can vary slightly. The volume in each phial may be modeled by a normal distribution with mean 1.55 ml and standard deviation $$\sigma$$ ml. The phials are sold in packs of 5 randomly chosen phials. It is required that in less than 0.5% of the packs will the total volume of the drug be less than 7.5 ml. Find the greatest possible value of $$\sigma$$.
I need to find the greatest possible value of the standard deviation ($$\sigma$$). I worked out the following:
$$\mu= 1.55*5 = 7.75.$$
We are asked to find the value of $$\sigma$$ such that the probability that the total volume of the $$5$$ phials is $$\lt 7.5$$ is itself $$\lt 0.5\%$$:
$$P(X\lt7.5)\lt0.005.$$
After standardizing, $$P(X\le\frac{7.5-7.75}{\sigma/5})<0.005$$ and I found $$\sigma=0.2170.$$ However, the answer provided is $$0.0434.$$
• Please add the self-study tag, read its tag-wiki, and indicate the specific help you need at the point you struck difficulty. Jun 9 '16 at 2:37
• what's CTL? ... ... Also please check the details of the question, it looks like you may have a mistake somewhere. Where did the 7.75 in your working come from? Please show more detail/explanation of what you're doing. (As far as possible your responses should result in edits to your question) Jun 9 '16 at 2:38
• How have you approached/engaged it so far? Any partly successful paths? Where else have you looked for answers? Jun 14 '16 at 2:06
• Interestingly, neither answer is correct. – whuber Jun 14 '16 at 13:59
Among the objectives of good introductory statistics courses is learning how to think about the Normal distribution. This question provides a nice example.
The key is to use units of measurement that are adapted to the distribution. That is, let the mean be the zero point and let the standard deviation be one unit. This is what a "Z score" measures.
In light of this, let's parse the question. To do so, I will use two fundamental facts: expectations add ("linearity of expectation") and variances of independent variables also add:
• The mean volume of one phial is 1.55 ml, whence the mean volume of a pack of five must be five times as large, or 7.75 ml: this is the zero point.
• Since the unknown variance of a single phial is $$\sigma^2,$$ the variance of the sum of five independent phials is $$5\sigma^2.$$ Therefore the standard deviation of the sum--the unit of measurement we must adopt--is $$\sqrt{5\sigma^2} = \sigma\sqrt{5}.$$
The question stipulates that in less than 0.5% of cases should the total be less than 7.5 ml. For the (standard) Normal distribution we remember (or can compute) that exactly 0.5% of cases lie $$2.57\ldots$$ or more standard deviations below the mean. An example of this computation is
qnorm(0.5/100)
in R or
=NORMSINV(0.5/100)
in Excel, for instance.
One aim of the introductory course is to help you reach the point where such considerations are automatic: you can do them in your head correctly, apart (perhaps) from the arithmetical calculations.
This preliminary work enables us to rephrase the question like this:
What unit of measurement, given by $$\sigma\sqrt{5}$$ for a five-phial pack, will re-express an amount of $$7.5$$ ml as being $$2.57$$ units less than $$7.75$$ ml?
The solution obviously is
$$\sigma\sqrt{5} = (7.75 - 7.5)/2.57\ldots = 0.097\ldots,$$
implying
$$\sigma = \frac{0.097\ldots}{\sqrt{5}} = 0.0433797\ldots$$
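As a quick arithmetic check: $$0.0434 \times \sqrt{5} \times 2.576 \approx 0.25,$$ which is exactly the 0.25 ml gap between the 7.75 ml mean and the 7.5 ml threshold.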
Comparing this result to the question shows that the work in the question was entirely correct up to the point where "$$\sigma/5$$" appeared: the square root was lost. This suggests remembering to think in terms of variances rather than standard deviations.
Comparing this result to the older answers that were posted also shows how they were basically moving in the correct direction but made mistakes along the way, too. Because arithmetical mistakes are easy to make, when one has the chance it's a good idea to check probabilistic calculations with simulations. For instance, the following R statement generates a large number of five-packs of drugs as described in the question (using the answer I obtained) and, to check my answer, computes the fraction with totals less than 7.5 ml:
mean(colSums(matrix(rnorm(5*1e6, 1.55, -0.25/qnorm(0.5/100) / sqrt(5)), nrow=5)) < 7.5)
(You can see all the data from the question embedded in this expression, along with the value 1e6 giving the number of five-packs to simulate.) When I run and re-run this code (which takes less than a second each time), I consistently obtain results between 0.0048 (0.48%) and 0.0052 (0.52%), in satisfactory agreement with the intended 0.5% target.
I think your understanding of the variance of the sum is mistaken. The variance of the 5-pack sum is 25 times the variance of the single pack.
• The only way you could justify the factor of $25$ is to suppose the five packs are perfectly correlated. Assuming, as is more likely the intent, that the five-pack sum can be modeled as the total of five independent Normal variables $X_1+\cdots+X_5$, its variance will be $$\operatorname{Var}(X_1+\cdots+X_5)=\sigma^2 +\cdots+\sigma^2=5\sigma^2.$$ Consequently its standard deviation will be $\sigma\sqrt{5}$, which (by following the path outlined in the question) leads directly to the correct answer.
– whuber
Jun 14 '16 at 14:01
\begin{align} 5\times 1.55 &= 7.75 \\ \operatorname{Var}(X) &= 5\sigma^2 \end{align} Problem statement: $$P(X<7.5)<0.005.$$ \begin{align} \frac{7.5-7.75}{\sqrt{5\sigma^2}} &= -2.576 \\[8pt] \sqrt{5\sigma^2} &= \frac{-0.25}{-2.576} = 0.0970 \\[5pt] \sigma &= \frac{0.0970}{\sqrt{5}} \approx 0.0434 \end{align} Since a standard deviation can never be negative, take the absolute value.
• Welcome to Stats.SE. You may give hints but please do not give the full answer. Furthermore, can you please edit your post and explain the key steps in the solution and use MathJax in the formulas? Apr 26 '19 at 10:51
https://github.com/icl-utk-edu/hpcc
# icl-utk-edu/hpcc
HPC Challenge Benchmark
% -*- LaTeX -*-
%
% This is the master README. Translation to text and HTML files is done
% with HeVeA.
%
% Postprocessing - replace </style> with:
% tt {color:navy}
% h2, h3, h4 {color: #527bbd;}
% h2 {border-bottom: 2px solid silver;}
%.verbatim {background: #ffffee; border: 1px solid silver; padding: 0.5em;}
% </style>
\documentclass[twocolumn]{article}
\usepackage{mathptmx}
\usepackage{url}
\usepackage[margin=2cm]{geometry}
\begin{document}
\title{DARPA/DOE HPC~Challenge Benchmark version 1.5.0beta}
\author{Piotr Luszczek\footnote{University of Tennessee Knoxville, Innovative
Computing Laboratory}}
\date{October 12, 2012}
\maketitle
\section{Introduction}
This is a suite of benchmarks that measure the performance of the processor,
memory subsystem, and the interconnect. For details refer to the
HPC~Challenge web site (\url{http://icl.cs.utk.edu/hpcc/}).
In essence, HPC~Challenge consists of a number of tests each
of which measures performance of a different aspect of the system.
If you are familiar with the High Performance Linpack~(HPL) benchmark
code (see the HPL web site:
\texttt{http://www.netlib.org/benchmark/hpl/}) then you can reuse the
build script file~(input for \texttt{make(1)} command) and the input
file that you already have for HPL. The HPC~Challenge benchmark
includes HPL and uses its build script and input files with only
slight modifications. The most important change must be made to the
line that sets the \texttt{TOPdir} variable. For HPC~Challenge, the
variable's value should always be \texttt{../../..} regardless of what
it was in the HPL build script file.
\section{Compiling}
The first step is to create a build script file that reflects
characteristics of your machine. This file is reused by all the
components of the HPC~Challenge suite. The build script file should be
created in the \texttt{hpl} directory. This directory contains
instructions (the files \texttt{README} and \texttt{INSTALL}) on how
to create the build script file for your system. The
\texttt{hpl/setup} directory contains many examples of build script
files. A recommended approach is to copy one of them to the
\texttt{hpl} directory and, if it does not work, adapt it to your system.
The build script file has a name that starts with \texttt{Make.}
prefix and usually ends with a suffix that identifies the target
system. For example, if the suffix chosen for the system is
\texttt{Unix}, the file should be named \texttt{Make.Unix}.
To build the benchmark executable (for the system named \texttt{Unix})
type: \texttt{make arch=Unix}. This command should be run in the top
directory~(not in the \texttt{hpl} directory). It will look in the
\texttt{hpl} directory for the build script file and use it to build
the benchmark executable.
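As a concrete sketch (the setup file chosen below is only an example; pick whichever file in \texttt{hpl/setup} is closest to your system):
\begin{verbatim}
# copy a sample build script and name it for the system "Unix"
cp hpl/setup/Make.Linux_PII_CBLAS hpl/Make.Unix
# edit hpl/Make.Unix: for HPC Challenge, TOPdir must be ../../..
# then, from the top directory, build the executable
make arch=Unix
\end{verbatim}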
The runtime behavior of the HPC~Challenge source code may be
configured at compile time by defining a few C preprocessor
symbols. They can be defined by adding appropriate options to
\texttt{CCNOOPT} and \texttt{CCFLAGS} make variables. The former
controls options for source code files that need to be compiled
without aggressive optimizations to ensure accurate generation of
system-specific parameters. The latter applies to the rest of the
files that need good compiler optimization for best performance. To
define a symbol \texttt{S}, the majority of compilers requires option
\texttt{-DS} to be used. Currently, the following options are
available in the HPC~Challenge source code (a sample build-script
fragment follows the list):
\begin{itemize}
\item \texttt{HPCC\_FFT\_235}: if this symbol is defined the FFTE
code (an FFT implementation) will use vector sizes and processor
counts that are not limited to powers of 2. Instead, the vector sizes
and processor counts to be used will be a product of powers of 2, 3,
and 5.
\item \texttt{HPCC\_FFTW\_ESTIMATE}: if this symbol is defined it will
affect the way the external FFTW library is called~(it has no
effect if the FFTW library is not used). When defined, this symbol
causes the FFTW planning routine to be called with the \texttt{FFTW\_ESTIMATE}
flag~(instead of \texttt{FFTW\_MEASURE}). This might result in worse
performance but a shorter execution time of the
benchmark. Defining this symbol may also reduce the memory
fragmentation caused by FFTW's planning routine.
\item \texttt{HPCC\_MEMALLCTR}: if this symbol is defined a custom
memory allocator will be used to alleviate effects of memory
fragmentation and allow for larger data sets to be used which may
result in obtaining better performance.
\item \texttt{HPL\_USE\_GETPROCESSTIMES}: if this symbol is defined
then Windows-specific \texttt{GetProcessTimes()} function will be used
to measure the elapsed CPU time.
\item \texttt{USE\_MULTIPLE\_RECV}: if this symbol is defined then multiple non-blocking
receives will be posted simultaneously. By default only a single non-blocking
receive is posted at a time.
\item \texttt{RA\_SANDIA\_NOPT}: if this symbol is defined the
HPC~Challenge standard algorithm for Global RandomAccess will not be
used. Instead, an alternative implementation from Sandia
National Laboratory will be used. It routes messages in software
across a virtual hypercube topology formed from MPI processes.
\item \texttt{RA\_SANDIA\_OPT2}: if this symbol is defined the
HPC~Challenge standard algorithm for Global RandomAccess will not be
used. Instead, an alternative implementation from Sandia
National Laboratory will be used. This implementation is optimized for
processor counts that are powers of two. The optimizations
are sorting of the data before sending and unrolling of the data update
loop. If the number of processes is not a power of two then the code
behaves the same as with the \texttt{RA\_SANDIA\_NOPT} setting.
\item \texttt{RA\_TIME\_BOUND\_DISABLE}: if this symbol is defined then the
standard Global RandomAccess code will be used without time limits. This is
discouraged for most runs because the standard algorithm tends to be slow for
large array sizes due to a large overhead for short MPI messages.
\item \texttt{USING\_FFTW}: if this symbol is defined the standard
HPC~Challenge FFT implementation~(called FFTE) will not be used.
Instead, the FFTW library will be called. Defining the
\texttt{USING\_FFTW} symbol is not sufficient: appropriate flags have
to be added in the make script so that the FFTW header files can be found
at compile time and the FFTW libraries at link time.
\end{itemize}
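For instance, a build-script fragment enabling two of the symbols above might look like this (a sketch only; \texttt{HPL\_DEFS} and the exact layout depend on the \texttt{Make.} file you started from):
\begin{verbatim}
# in hpl/Make.Unix: compile-time configuration of HPC Challenge
CCNOOPT  = $(HPL_DEFS)
CCFLAGS  = $(HPL_DEFS) -O3 -DHPCC_FFT_235 -DUSING_FFTW
\end{verbatim}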
\section{Runtime Configuration}
The HPC~Challenge is driven by a short input file named
\texttt{hpccinf.txt} that is almost the same as the input file for
HPL~(customarily called \texttt{HPL.dat}). Refer to the directory
\texttt{hpl/www/tuning.html} for details about the input file for
HPL. A sample input file is included with the HPC~Challenge
distribution.
The differences between HPL's input file and HPC~Challenge's input
file can be summarized as follows:
\begin{itemize}
\item Lines 3 and 4 are ignored. The output is always appended to the
file named \texttt{hpccoutf.txt}.
\item There are additional lines~(starting with line 33) that may~(but
do not have to) be used to customize the HPC~Challenge benchmark. They
are described below.
\end{itemize}
The additional lines in the HPC~Challenge input file~(compared to the
HPL input file) are:
\begin{itemize}
\item Lines 33 and 34 describe additional matrix sizes to be used for
running the PTRANS benchmark~(one of the components of the
HPC~Challenge benchmark).
\item Lines 35 and 36 describe additional blocking factors to be used
for running the PTRANS test.
\end{itemize}
Just for completeness, here is the list of lines of the HPC
Challenge's input file and a brief description of their meaning (a
sample of the PTRANS-specific tail follows the list):
\begin{itemize}
\item Line 1: ignored
\item Line 2: ignored
\item Line 3: ignored
\item Line 4: ignored
\item Line 5: number of matrix sizes for HPL (and PTRANS)
\item Line 6: matrix sizes for HPL (and PTRANS)
\item Line 7: number of blocking factors for HPL (and PTRANS)
\item Line 8: blocking factors for HPL (and PTRANS)
\item Line 9: type of process ordering for HPL
\item Line 10: number of process grids for HPL (and PTRANS)
\item Line 11: numbers of process rows of each process grid for HPL (and PTRANS)
\item Line 12: numbers of process columns of each process grid for HPL (and PTRANS)
\item Line 13: threshold value not to be exceeded by scaled residual for HPL (and PTRANS)
\item Line 14: number of panel factorization methods for HPL
\item Line 15: panel factorization methods for HPL
\item Line 16: number of recursive stopping criteria for HPL
\item Line 17: recursive stopping criteria for HPL
\item Line 18: number of recursion panel counts for HPL
\item Line 19: recursion panel counts for HPL
\item Line 20: number of recursive panel factorization methods for HPL
\item Line 21: recursive panel factorization methods for HPL
\item Line 22: number of broadcast methods for HPL
\item Line 23: broadcast methods for HPL
\item Line 24: number of look-ahead depths for HPL
\item Line 25: look-ahead depths for HPL
\item Line 26: swap methods for HPL
\item Line 27: swapping threshold for HPL
\item Line 28: form of L1 for HPL
\item Line 29: form of U for HPL
\item Line 30: value that specifies whether equilibration should be used by HPL
\item Line 31: memory alignment for HPL
\item Line 32: ignored
\item Line 33: number of additional problem sizes for PTRANS
\item Line 34: additional problem sizes for PTRANS
\item Line 35: number of additional blocking factors for PTRANS
\item Line 36: additional blocking factors for PTRANS
\end{itemize}
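For example, the PTRANS-specific tail of a sample \texttt{hpccinf.txt} (lines 33--36) might read as follows; the exact values are illustrative:
\begin{verbatim}
0                            Number of additional problem sizes for PTRANS
1200 10000 30000             values of N
0                            number of additional blocking sizes for PTRANS
40 9 8 13 13 20 16 32 64     values of NB
\end{verbatim}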
\section{Running}
The exact way to run the HPC~Challenge benchmark depends on the MPI
implementation and system details. An example command to run the
benchmark could look like this: \texttt{mpirun -np 4 hpcc}. The
meaning of the command's components is as follows:
\begin{itemize}
\item \texttt{mpirun} is the command that starts execution of an MPI
code. Depending on the system, it might also be \texttt{aprun},
\texttt{mpiexec}, \texttt{mprun}, \texttt{poe}, or something
appropriate for your computer.
\item \texttt{-np 4} is the argument that specifies that 4 MPI
processes should be started. The number of MPI processes should be
large enough to accommodate all the process grids specified in the
\texttt{hpccinf.txt} file.
\item \texttt{hpcc} is the name of the HPC~Challenge executable to
run.
\end{itemize}
After the run, a file called \texttt{hpccoutf.txt} is created. It
contains results of the benchmark. This file should be uploaded
through the web form at the HPC~Challenge website.
\section{Source Code Changes across Versions (ChangeLog)}
\subsection{Version 1.5.0 (2016-03-18)}
\begin{enumerate}
\item Fixed memory leak in STREAM code.
\item Fixed bug in STREAM that resulted in minimum results reported as 0.
\item Removed some of the compilation warnings.
\end{enumerate}
\subsection{Version 1.5.0beta (2015-07-23)}
\begin{enumerate}
\item Added new targets to the main make(1) file.
\item Fixed bug introduced while updating to MPI STREAM 1.7 with spurious global communicator (reported by NEC).
\item Added make(1) file for OpenMPI from MacPorts.
\item Fixed bug introduced while updating to MPI STREAM 1.7 that caused some ranks to use NULL communicator.
\item Fixed bug introduced while updating to MPI STREAM 1.7 that caused syntax errors.
\end{enumerate}
\subsection{Version 1.5.0alpha (2015-05-22)}
\begin{enumerate}
\item Added global error accounting in STREAM.
\item Updated checking to report from multiple MPI processes contributing to overall error.
\item Added barrier to make sure all processes enter STREAM kernel tests at the same time.
\item Updated naming conventions to match the original benchmark in STREAM.
\item Changed scaling constant to prevent verification from overflowing in STREAM.
\item Simplified MPI communicator code in STREAM.
\item Substituted large constants for more descriptive compile time arithmetic in STREAM.
\item Added the ``restrict'' keyword to the STREAM vector pointers for faster generated code.
\item Updated STREAM code to the official STREAM MPI version 1.7.
\item Removed infinite loop due to default compiler optimization in DLAMCH and SLAMCH.
\item Added compiler flags to allow compiling with a C++ compiler.
\end{enumerate}
\subsection{Version 1.4.3 (2013-08-26)}
\begin{enumerate}
\item Increased the size of the scratch vector for local FFT tests that was
missed in the previous version (reported by SGI).
\item Added Makefile for Blue Gene/P contributed by Vasil Tsanov.
\end{enumerate}
\subsection{Version 1.4.2 (2012-10-12)}
\begin{enumerate}
\item Increased sizes of scratch vectors for local FFT tests to account for
runs on systems with large main memory (reported by IBM, SGI and Intel).
\item Reduced vector size for local FFT tests due to larger scratch space needed.
\item Added a type cast to prevent overflow of a 32-bit integer vector
size in FFT data generation routine (reported by IBM).
\item Fixed variable types to handle array sizes that overflow 32-bit
integers in RandomAccess (reported by IBM and SGI).
\item Changed time-bound code to be used by default in Global RandomAccess and
allowed for it to be switched off with a compile time flag if necessary.
\item Code cleanup to allow compilation without warnings of RandomAccess test.
\item Changed communication code in PTRANS to avoid large message sizes that
caused problems in some MPI implementations.
\item Updated documentation in README.txt and README.html files.
\end{enumerate}
\subsection{Version 1.4.1 (2010-06-01)}
\begin{enumerate}
\item Added optimized variants of RandomAccess that use a Linear Congruential Generator for random number generation.
\item Made corrections to comments that provide definition of the RandomAccess test.
\item Removed initialization of the main array from the timed section of optimized versions of RandomAccess.
\item Fixed the length of the vector used to compute error when using MPI implementation from FFTW.
\item Added global reduction to error calculation in MPI FFT to achieve more accurate error estimate.
\item Updated documentation in README.
\end{enumerate}
\subsection{Version 1.4.0 (2010-03-26)}
\begin{enumerate}
\item Added a new variant of RandomAccess that uses a Linear Congruential Generator for random number generation.
\item Rearranged the order of the benchmarks so that the HPL component runs last and may be aborted
if the performance of the other components was not satisfactory. RandomAccess now runs first to assist in tuning
the code.
\item Added a global initialization and finalization routine that allows external software
and hardware components to be properly initialized and finalized without changing the rest of the HPCC testing harness.
\item Lack of \texttt{hpccinf.txt} is no longer reported as an error but as a warning.
\end{enumerate}
\subsection{Version 1.3.2 (2009-03-24)}
\begin{enumerate}
\item Fixed memory leaks in G-RandomAccess driver routine.
\item Made the check for 32-bit vector sizes in G-FFT optional. MKL allows for 64-bit vector sizes in its FFTW wrapper.
\item Fixed memory bug in single-process FFT.
\item Updated documentation (README).
\end{enumerate}
\subsection{Version 1.3.1 (2008-12-09)}
\begin{enumerate}
\item Fixed a deadlock problem in the FFT component due to use of a wrong communicator.
\item Fixed the 32-bit random number generator in PTRANS that was using 64-bit
routines from HPL.
\end{enumerate}
\subsection{Version 1.3.0 (2008-11-13)}
\begin{enumerate}
\item Updated HPL component to use HPL 2.0 source code
\begin{enumerate}
\item Replaced 32-bit Pseudo Random Number Generator (PRNG) with a 64-bit one.
\item Replaced 3 numerical checks of the solution residual with a single one.
\item Added support for 64-bit systems with large memory sizes (previously, 32-bit
integers used in index calculations would overflow.)
\end{enumerate}
\item Introduced a limit on the FFT vector size so that it fits in a 32-bit integer (only
applicable when using FFTW version 2.)
\end{enumerate}
\subsection{Version 1.2.0 (2007-06-25)}
\begin{enumerate}
\item Changes in the FFT component:
\begin{enumerate}
\item Added flexibility in choosing vector sizes and processor counts:
now the code can do powers of 2, 3, and 5 both sequentially and in parallel
tests.
\item FFTW can now run with the ESTIMATE (not just MEASURE) flag: it might produce
worse performance results but often reduces the time to run the test and causes
less memory fragmentation.
\end{enumerate}
\item Changes in the DGEMM component:
\begin{enumerate}
\item Added more comprehensive checking of the numerical properties of the
test's results.
\end{enumerate}
\item Changes in the RandomAccess component:
\begin{enumerate}
\item Removed time-bound functionality: only runs that perform complete
computation are now possible.
\item Made the timing more accurate: main array initialization is not counted
towards performance timing.
\item Cleaned up the code: some non-portable C language constructs have been
removed.
\item Added new algorithms: new algorithms from Sandia based on a hypercube
network topology can now be chosen at compile time, which results in much
better performance on many types of parallel systems.
\item Fixed potential resource leaks by adding function calls required by the MPI
standard.
\end{enumerate}
\item Changes in the HPL component:
\begin{enumerate}
\item Cleaned up reporting of numerics: more accurate printing of scaled
residual formula.
\end{enumerate}
\item Changes in the PTRANS component:
\begin{enumerate}
\item Added randomization of virtual process grids to measure bandwidth of the
network more accurately.
\end{enumerate}
\item Miscellaneous changes:
\begin{enumerate}
\item Added better support for Windows-based clusters by taking advantage of
Win32 API.
\item Added custom memory allocator to deal with memory fragmentation on some
systems.
\item Added better reporting of configuration options in the output file.
\end{enumerate}
\end{enumerate}
\subsection{Version 1.0.0 (2005-06-11)}
\subsection{Version 0.8beta (2004-10-19)}
\subsection{Version 0.8alpha (2004-10-15)}
\subsection{Version 0.6beta (2004-08-21)}
\subsection{Version 0.6alpha (2004-05-31)}
\subsection{Version 0.5beta (2003-12-01)}
\subsection{Version 0.4alpha (2003-11-13)}
\subsection{Version 0.3alpha (2003-11-05)}
\end{document}