https://socratic.org/questions/how-do-you-factor-3n-3-4n-2-6n-8 | # How do you factor 3n^3-4n^2-6n+8?
Group the terms in pairs and factor out the common binomial:
$f \left(n\right) = 3 {n}^{3} - 4 {n}^{2} - 6 n + 8 = 3 n \left({n}^{2} - 2\right) - 4 \left({n}^{2} - 2\right) = \left({n}^{2} - 2\right) \left(3 n - 4\right)$
Over the reals, ${n}^{2} - 2$ factors further as $\left(n - \sqrt{2}\right) \left(n + \sqrt{2}\right)$.
https://www.varsitytutors.com/ap_calculus_ab-help/basic-properties-of-definite-integrals-additivity-and-linearity?page=3 | # AP Calculus AB: Basic Properties of Definite Integrals (Additivity and Linearity)
## Example Questions
### Example Question #26 : Basic Properties Of Definite Integrals (Additivity And Linearity)
Given that and , find the value of the following expression:
Explanation:
First, simplifying the givens gives us
And
Our goal is to write the given expression in terms of these two integrals. Our first step will be to try to get a from our expression.
First note,
And for the third term,
Putting these facts together, we can rewrite the original expression as
Rearranging,
The three terms in parentheses can all be brought together, as the top limit of the previous integral matches the bottom limit of the next integral. Thus, we now have
Substituting in our givens, this simplifies to
### Example Question #27 : Basic Properties Of Definite Integrals (Additivity And Linearity)
Evaluate the definite integral
Explanation:
Here we are using several basic properties of definite integrals as well as the fundamental theorem of calculus.
First, you can pull coefficients out to the front of integrals.
Second, we notice that our lower bound is bigger than our upper bound. You can switch the upper and lower bounds if you also switch the sign.
Lastly, our integral "distributes" over addition and subtraction. That means you can split the integral by each term and integrate each term separately.
Now we integrate and calculate using the Fundamental Theorem of Calculus.
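These properties are easy to sanity-check numerically. The sketch below uses a hand-rolled midpoint-rule approximation (the `integrate` helper is ours, not a library routine) to confirm the coefficient, bound-swap, and additivity rules described above:

```python
def integrate(f, a, b, n=100_000):
    """Approximate the definite integral of f from a to b (midpoint rule)."""
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

f = lambda x: x**2 + 1
g = lambda x: 2 * x

# 1. Coefficients pull out to the front of the integral.
assert abs(integrate(lambda x: 3 * f(x), 0, 2) - 3 * integrate(f, 0, 2)) < 1e-6

# 2. Swapping the upper and lower bounds flips the sign.
assert abs(integrate(f, 2, 0) + integrate(f, 0, 2)) < 1e-6

# 3. The integral distributes over addition, so terms can be split.
assert abs(integrate(lambda x: f(x) + g(x), 0, 2)
           - (integrate(f, 0, 2) + integrate(g, 0, 2))) < 1e-6
```

The midpoint rule is only an approximation, so the checks use a small tolerance rather than exact equality.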
https://brilliant.org/problems/dominoes-and-more/ | # Dominoes Covering
There are many ways to completely cover a chessboard with $$2 \times 1$$ dominoes. In 1961, the British physicist M. E. Fisher proved that it can be done in 12,988,816 ways.
Now let us cut out two diagonally opposite corner squares. In how many ways can you cover the 62 squares of the mutilated chessboard?
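Fisher's count can be reproduced with a short profile (bitmask) dynamic program; the sketch below is an illustrative implementation of our own (not Fisher's original method, which used Pfaffians), and it also accepts removed squares, so the same function can be queried for the mutilated board:

```python
from functools import lru_cache

def count_tilings(rows, cols, blocked=frozenset()):
    """Count 2x1 domino tilings of a rows x cols board, skipping `blocked` cells."""
    total = rows * cols

    @lru_cache(maxsize=None)
    def go(pos, mask):
        # pos: next cell in row-major order; bit j of mask means cell pos+j
        # is already covered by a previously placed domino.
        if pos == total:
            return 1 if mask == 0 else 0
        r, c = divmod(pos, cols)
        if (r, c) in blocked:
            # A removed square must not be covered by any domino.
            return 0 if mask & 1 else go(pos + 1, mask >> 1)
        if mask & 1:  # already covered: move on
            return go(pos + 1, mask >> 1)
        ways = 0
        if r + 1 < rows and (r + 1, c) not in blocked:  # vertical domino
            ways += go(pos + 1, (mask >> 1) | (1 << (cols - 1)))
        if c + 1 < cols and not (mask & 2) and (r, c + 1) not in blocked:
            ways += go(pos + 1, (mask >> 1) | 1)        # horizontal domino
        return ways

    return go(0, 0)

assert count_tilings(8, 8) == 12_988_816  # Fisher's 1961 count
mutilated = count_tilings(8, 8, blocked=frozenset({(0, 0), (7, 7)}))
```

The value of `mutilated` is left for the reader; a colouring argument gives it without any computation.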
https://datascience.quantecon.org/scientific/plotting.html | # Plotting
QuantEcon DataScience
Introduction to Economic Modeling and Data Science
Outcomes
• Understand components of matplotlib plots
• Make basic plots
```python
# Uncomment following line to install on colab
#! pip install qeds
```
## Visualization
One of the most important outputs of your analysis will be the visualizations that you choose to communicate what you’ve discovered.
Here are what some people – whom we think have earned the right to an opinion on this material – have said with respect to data visualizations.
Above all else, show the data – Edward Tufte
By visualizing information, we turn it into a landscape that you can explore with your eyes. A sort of information map. And when you’re lost in information, an information map is kind of useful – David McCandless
I spend hours thinking about how to get the story across in my visualizations. I don’t mind taking that long because it’s that five minutes of presenting it or someone getting it that can make or break a deal – Goldman Sachs executive
We won’t have time to cover “how to make a compelling data visualization” in this lecture.
Instead, we will focus on the basics of creating visualizations in Python.
This will be a fast introduction, but this material appears in almost every lecture going forward, which will help the concepts sink in.
In almost any profession that you pursue, much of what you do involves communicating ideas to others.
We include some references that we have found useful below.
## matplotlib
The most widely used plotting package in Python is matplotlib.
The standard import alias is
```python
import matplotlib.pyplot as plt
import numpy as np
```
Note above that we are using matplotlib.pyplot rather than just matplotlib.
pyplot is a sub-module found in some large packages to further organize functions and types. We are able to give the plt alias to this sub-module.
Additionally, when we are working in the notebook, we need to tell matplotlib to display our images inside of the notebook itself instead of creating new windows with the image.
This is done by
```python
%matplotlib inline

# activate plot theme
import qeds
qeds.themes.mpl_style();
```
The commands with % before them are called Magics.
### First Plot
Let’s create our first plot!
After creating it, we will walk through the steps one-by-one to understand what they do.
```python
# Step 1
fig, ax = plt.subplots()

# Step 2
x = np.linspace(0, 2*np.pi, 100)
y = np.sin(x)

# Step 3
ax.plot(x, y)
```
1. Create a figure and axis object which stores the information from our graph.
2. Generate data that we will plot.
3. Use the x and y data, and make a line plot on our axis, ax, by calling the plot method.
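The same `fig, ax` pattern extends naturally to labels, titles, and legends. Below is a minimal sketch of our own (the `Agg` backend line just allows off-screen rendering outside a notebook; the label text is arbitrary):

```python
import matplotlib
matplotlib.use("Agg")            # render off-screen; not needed in a notebook
import matplotlib.pyplot as plt
import numpy as np

fig, ax = plt.subplots()
x = np.linspace(0, 2 * np.pi, 100)
ax.plot(x, np.sin(x), label="sin(x)")
ax.plot(x, np.cos(x), label="cos(x)")
ax.set_xlabel("x")               # axis labels and the title live on the axis
ax.set_ylabel("value")
ax.set_title("Two curves on one axis")
ax.legend()                      # uses the label= passed to each plot call
```

Because labels and the legend belong to the axis, each axis in a multi-axis figure can be decorated independently.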
### Difference between Figure and Axis
We’ve found that the easiest way for us to distinguish between the figure and axis objects is to think about them as a framed painting.
The axis is the canvas; it is where we “draw” our plots.
The figure is the entire framed painting (which includes the axis itself!).
We can also see this by setting certain elements of the figure to different colors.
```python
fig, ax = plt.subplots()
fig.set_facecolor("red")
ax.set_facecolor("blue")
```
This difference also means that you can place more than one axis on a figure.
```python
# We specified the shape of the axes -- it means we will have two rows
# and three columns of axes on our figure
fig, axes = plt.subplots(2, 3)
fig.set_facecolor("gray")

# Can choose hex colors
colors = ["#065535", "#89ecda", "#ffd1dc", "#ff0000", "#6897bb", "#9400d3"]

# axes is a numpy array and we want to iterate over a flat version of it
for (ax, c) in zip(axes.flat, colors):
    ax.set_facecolor(c)

fig.tight_layout()
```
### Functionality
The matplotlib library is versatile and very flexible.
You can see various examples of what it can do on the matplotlib example gallery.
We work through a few examples to quickly introduce some possibilities.
Bar

```python
countries = ["CAN", "MEX", "USA"]
populations = [36.7, 129.2, 325.7]
land_area = [3.850, 0.761, 3.790]

fig, ax = plt.subplots(2)

ax[0].bar(countries, populations, align="center")
ax[0].set_title("Populations (in millions)")

ax[1].bar(countries, land_area, align="center")
ax[1].set_title("Land area (in millions of square miles)")

fig.tight_layout()
```
Scatter and annotation

```python
N = 50
np.random.seed(42)

x = np.random.rand(N)
y = np.random.rand(N)
colors = np.random.rand(N)
area = np.pi * (15 * np.random.rand(N))**2  # 0 to 15 point radii

fig, ax = plt.subplots()
ax.scatter(x, y, s=area, c=colors, alpha=0.5)
ax.annotate(
    "First point", xy=(x[0], y[0]), xycoords="data",
    xytext=(25, -25), textcoords="offset points",
)
```
Fill between

```python
x = np.linspace(0, 1, 500)
y = np.sin(4 * np.pi * x) * np.exp(-5 * x)

fig, ax = plt.subplots()
ax.grid(True)
ax.fill(x, y)
```
https://courses.lumenlearning.com/suny-dutchess-precalculus/chapter/2-2-section-exercises/ | ## 2.2 Section Exercises
#### Verbal
1. Explain why the values of an increasing exponential function will eventually overtake the values of an increasing linear function.
2. Given a formula for an exponential function, is it possible to determine whether the function grows or decays exponentially just by looking at the formula? Explain.
3. The Oxford Dictionary defines the word nominal as a value that is “stated or expressed but not necessarily corresponding exactly to the real value.”[1] Develop a reasonable argument for why the term nominal rate is used to describe the annual percentage rate of an investment account that compounds interest.
#### Algebraic
For the following exercises, identify whether the statement represents an exponential function. Explain.
4. The average annual population increase of a pack of wolves is 25.
5. A population of bacteria decreases by a factor of $\frac{1}{8}$ every 24 hours.
6. The value of a coin collection has increased by 3.25% annually over the last 20 years.
7. For each training session, a personal trainer charges his clients \$5 less than the previous training session.
8. The height of a projectile at time $t$ is represented by the function $h(t)=-4.9t^2+18t+40$.
For the following exercises, consider this scenario: For each year $t$, the population of a forest of trees is represented by the function $A(t)=115(1.025)^t$. In a neighboring forest, the population of the same type of tree is represented by the function $B(t)=82(1.029)^t$. (Round answers to the nearest whole number.)
9. Which forest’s population is growing at a faster rate?
10. Which forest had a greater number of trees initially? By how many?
11. Assuming the population growth models continue to represent the growth of the forests, which forest will have a greater number of trees after 20 years? By how many?
12. Assuming the population growth models continue to represent the growth of the forests, which forest will have a greater number of trees after 100 years? By how many?
13. Discuss the above results from the previous four exercises. Assuming the population growth models continue to represent the growth of the forests, which forest will have the greater number of trees in the long run? Why? What are some factors that might influence the long-term validity of the exponential growth model?
For the following exercises, determine whether the equation represents exponential growth, exponential decay, or neither. Explain.
14. $y=300(1-t)^5$
15. $y=220(1.06)^x$
16. $y=16.5(1.025)^{\frac{1}{x}}$
17. $y=11{,}701(0.97)^t$
For the following exercises, find the formula for an exponential function that passes through the two points given.
18. $(0,6)$ and $(3,750)$
19. $(0,2000)$ and $(2,20)$
20. $\left(-1,\frac{3}{2}\right)$ and $(3,24)$
21. $(-2,6)$ and $(3,1)$
22. $(3,1)$ and $(5,4)$
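Exercises of this type can be checked programmatically: given points $(x_1,y_1)$ and $(x_2,y_2)$, solve $b=\left(\frac{y_2}{y_1}\right)^{\frac{1}{x_2-x_1}}$ and then $a=\frac{y_1}{b^{x_1}}$. A sketch (the helper name is ours):

```python
def exp_through(p1, p2):
    """Return (a, b) with f(x) = a * b**x passing through points p1 and p2."""
    (x1, y1), (x2, y2) = p1, p2
    b = (y2 / y1) ** (1 / (x2 - x1))   # requires y1, y2 to have the same sign
    a = y1 / b**x1
    return a, b

a, b = exp_through((0, 6), (3, 750))       # exercise 18
assert (a, round(b, 10)) == (6.0, 5.0)     # f(x) = 6 * 5**x
```

The rounding in the check only absorbs floating-point error; the exact answer is $b=\sqrt[3]{125}=5$.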
For the following exercises, determine whether the table could represent a function that is linear, exponential, or neither. If it appears to be exponential, find a function that passes through the points.
23.

| $x$ | 1 | 2 | 3 | 4 |
|---|---|---|---|---|
| $f(x)$ | 70 | 40 | 10 | -20 |

24.

| $x$ | 1 | 2 | 3 | 4 |
|---|---|---|---|---|
| $h(x)$ | 70 | 49 | 34.3 | 24.01 |

25.

| $x$ | 1 | 2 | 3 | 4 |
|---|---|---|---|---|
| $m(x)$ | 80 | 61 | 42.9 | 25.61 |

26.

| $x$ | 1 | 2 | 3 | 4 |
|---|---|---|---|---|
| $f(x)$ | 10 | 20 | 40 | 80 |

27.

| $x$ | 1 | 2 | 3 | 4 |
|---|---|---|---|---|
| $g(x)$ | -3.25 | 2 | 7.25 | 12.5 |
For the following exercises, use the compound interest formula, $A(t)=P\left(1+\frac{r}{n}\right)^{nt}$.
28. After a certain number of years, the value of an investment account is represented by the expression $10{,}250\left(1+\frac{0.04}{12}\right)^{120}$. What is the value of the account?
29. What was the initial deposit made to the account in the previous exercise?
30. How many years had the account from the previous exercise been accumulating interest?
31. An account is opened with an initial deposit of \$6,500 and earns 3.6% interest compounded semi-annually. What will the account be worth in 20 years?
32. How much more would the account in the previous exercise have been worth if the interest were compounding weekly?
33. Solve the compound interest formula for the principal, $P$.
34. Use the formula found in the previous exercise to calculate the initial deposit of an account that is worth \$14,472.74 after earning 5.5% interest compounded monthly for 5 years. (Round to the nearest dollar.)
35. How much more would the account in the previous two exercises be worth if it were earning interest for 5 more years?
36. Use properties of rational exponents to solve the compound interest formula for the interest rate, $r$.
37. Use the formula found in the previous exercise to calculate the interest rate for an account that was compounded semi-annually, had an initial deposit of \$9,000 and was worth \$13,373.53 after 10 years.
38. Use the formula found in the previous exercise to calculate the interest rate for an account that was compounded monthly, had an initial deposit of \$5,500, and was worth \$38,455 after 30 years.
For the following exercises, determine whether the equation represents continuous growth, continuous decay, or neither. Explain.
39. $y=3742e^{0.75t}$
40. $y=150e^{\frac{3.25}{t}}$
41. $y=2.25e^{-2t}$
42. Suppose an investment account is opened with an initial deposit of \$12,000 earning 7.2% interest compounded continuously. How much will the account be worth after 30 years?
43. How much less would the account from Exercise 42 be worth after 30 years if it were compounded monthly instead?
#### Numeric
For the following exercises, evaluate each function. Round answers to four decimal places, if necessary.
44. $f(x)=2(5)^x$, for $f(-3)$
45. $f(x)=-4^{2x+3}$, for $f(-1)$
46. $f(x)=e^x$, for $f(3)$
$f(x)=-2e^{x-1}$, for $f(-1)$
47. $f(x)=2.7(4)^{-x+1}+1.5$, for $f(-2)$
48. $f(x)=1.2e^{2x}-0.3$, for $f(3)$
49. $f(x)=-\frac{3}{2}(3)^{-x}+\frac{3}{2}$, for $f(2)$
#### Technology
For the following exercises, use a graphing calculator to find the equation of an exponential function given the points on the curve.
50. $(0,3)$ and $(3,375)$
51. $(3,222.62)$ and $(10,77.456)$
52. $(20,29.495)$ and $(150,730.89)$
53. $(5,2.909)$ and $(13,0.005)$
54. $(11,310.035)$ and $(25,356.3652)$
#### Extensions
55. The annual percentage yield (APY) of an investment account is a representation of the actual interest rate earned on a compounding account. It is based on a compounding period of one year. Show that the APY of an account that compounds monthly can be found with the formula $\text{APY}=\left(1+\frac{r}{12}\right)^{12}-1$.
56. Repeat the previous exercise to find the formula for the APY of an account that compounds daily. Use the results from this and the previous exercise to develop a function $I(n)$ for the APY of any account that compounds $n$ times per year.
57. Recall that an exponential function is any equation written in the form $f(x)=a\cdot b^x$ such that $a$ and $b$ are positive numbers and $b\ne 1$. Any positive number $b$ can be written as $b=e^n$ for some value of $n$. Use this fact to rewrite the formula for an exponential function that uses the number $e$ as a base.
58. In an exponential decay function, the base of the exponent is a value between 0 and 1. Thus, for some number $b>1$, the exponential decay function can be written as $f(x)=a\cdot\left(\frac{1}{b}\right)^x$. Use this formula, along with the fact that $b=e^n$, to show that an exponential decay function takes the form $f(x)=a(e)^{-nx}$ for some positive number $n$.
59. The formula for the amount $A$ in an investment account with a nominal interest rate $r$ at any time $t$ is given by $A(t)=a(e)^{rt}$, where $a$ is the amount of principal initially deposited into an account that compounds continuously. Prove that the percentage of interest earned to principal at any time $t$ can be calculated with the formula $I(t)=e^{rt}-1$.
#### Real-World Applications
60. The fox population in a certain region has an annual growth rate of 9% per year. In the year 2012, there were 23,900 foxes counted in the area. What is the fox population predicted to be in the year 2020?
61. A scientist begins with 100 milligrams of a radioactive substance that decays exponentially. After 35 hours, 50 mg of the substance remains. How many milligrams will remain after 54 hours?
62. In the year 1985, a house was valued at \$110,000. By the year 2005, the value had appreciated to \$145,000. What was the annual growth rate between 1985 and 2005? Assume that the value continued to grow by the same percentage. What was the value of the house in the year 2010?
63. A car was valued at \$38,000 in the year 2007. By 2013, the value had depreciated to \$11,000. If the car's value continues to drop by the same percentage, what will it be worth by 2017?
64. Jamal wants to save \$54,000 for a down payment on a home. How much will he need to invest in an account with 8.2% APR, compounding daily, in order to reach his goal in 5 years?
65. Kyoko has \$10,000 that she wants to invest. Her bank has several investment accounts to choose from, all compounding daily. Her goal is to have \$15,000 by the time she finishes graduate school in 6 years. To the nearest hundredth of a percent, what should her minimum annual interest rate be in order to reach her goal? (Hint: solve the compound interest formula for the interest rate.)
66. Alyssa opened a retirement account with 7.25% APR in the year 2000. Her initial deposit was \$13,500. How much will the account be worth in 2025 if interest compounds monthly? How much more would she make if interest compounded continuously?
67. An investment account with an annual interest rate of 7% was opened with an initial deposit of \$4,000. Compare the values of the account after 9 years when the interest is compounded annually, quarterly, monthly, and continuously.
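The compound-interest exercises above all reduce to evaluating or inverting $A(t)=P\left(1+\frac{r}{n}\right)^{nt}$. A short sketch (the function names are ours):

```python
def amount(P, r, n, t):
    """Value of principal P at nominal annual rate r, compounded n times per year, after t years."""
    return P * (1 + r / n) ** (n * t)

def principal(A, r, n, t):
    """Invert the formula for P, as in exercise 33."""
    return A / (1 + r / n) ** (n * t)

# Exercise 31: $6,500 at 3.6% compounded semi-annually for 20 years.
value = amount(6500, 0.036, 2, 20)

# Round trip: recovering the principal from the final amount.
assert abs(principal(value, 0.036, 2, 20) - 6500) < 1e-9
```

Solving for the rate $r$ (exercise 36) follows the same pattern, using the $nt$-th root instead of division.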
https://libraryguides.centennialcollege.ca/c.php?g=645085&p=5113178&preview=f6397ba153284f6524db9781a79e7d64 | # Math help from the Learning Centre
This guide provides useful resources for a wide variety of math topics. It is targeted at students enrolled in a math course or any other Centennial course that requires math knowledge and skills.
## Forms of Fractions
A proper fraction is a fraction whose numerator is smaller than its denominator. For example,
$\frac12, \frac45, \frac37$
The proper fraction $$\frac45$$ can be visualised as:
An improper fraction is a fraction whose numerator is greater than its denominator. For example,
$\frac32, \frac74,\frac95$
The improper fraction $$\frac74$$ can be visualised as:
A mixed fraction is a combination of a whole number and a proper fraction. For example,
$3\frac12, 5\frac25, 1\frac34$
The mixed fraction $$1\frac34$$ means there is one whole part plus $$\frac34$$ of a whole part. In fact, $$1\frac34 = 1 + \frac34$$ and can be visualised as follows:
Notice that the shaded region in the circles representing $$\frac74$$ and $$1\frac34$$ are the same! In fact, $$\frac74 = 1\frac34$$. They are different ways of representing the same thing!
## Switching Between Forms: Mixed and Improper Fractions
Switching from mixed to improper:
To switch from mixed to improper, multiply the whole number by the denominator of the proper fraction and add to the numerator of the proper fraction:
For example:
• $$\displaystyle 4\tfrac23 = \frac{4\times3 + 2}{3} = \frac{12+2}{3} = \frac{14}{3}$$
• $$\displaystyle 5\tfrac14 = \frac{5\times4 + 1}{4} = \frac{20+1}{4} = \frac{21}{4}$$
• $$\displaystyle 2\tfrac37 = \frac{2\times7 + 3}{7} = \frac{14+3}{7} = \frac{17}{7}$$
Switching from improper to mixed:
To switch from improper to mixed, use long division to divide the denominator into the numerator, finding the quotient and the remainder. The improper fraction in mixed form is
$\text{Quotient}\frac{\text{Remainder}}{\text{Denominator}}$
For example, suppose we want to change $$\frac{21}{4}$$ to a mixed fraction. $$21$$ divided by $$4$$ is $$5$$ with remainder $$1$$, so
$\frac{21}{4} = 5\frac14$
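Both conversions can be checked with Python's `fractions` module (`Fraction` always stores the improper form; the `to_mixed` helper is ours):

```python
from fractions import Fraction

def to_mixed(fr):
    """Split an improper fraction into (whole, proper-fraction) parts."""
    whole, rem = divmod(fr.numerator, fr.denominator)
    return whole, Fraction(rem, fr.denominator)

# Mixed -> improper: 5 1/4 = (5*4 + 1)/4 = 21/4
assert 5 + Fraction(1, 4) == Fraction(21, 4)

# Improper -> mixed: 21 divided by 4 is 5 with remainder 1, so 5 1/4
assert to_mixed(Fraction(21, 4)) == (5, Fraction(1, 4))
```

`divmod` performs exactly the long division described above, returning the quotient and the remainder in one call.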
## Adding/Subtracting Fractions
To add fractions, take the following steps:
1. Make sure the denominators are the same.
2. Add the numerators.
3. Simplify the fraction if necessary.
For example, suppose we want to add
$\frac{3}{12} + \frac{5}{12}$
The denominators are the same, so we add the numerators and simplify.
$\frac{3}{12} + \frac{5}{12} = \frac{8}{12} = \frac23$
The calculation above can be visualised as follows:
If the denominators are different, you must take the following steps to make them the same:
1. Find the least common multiple of the denominators (see Prime Factorisation and Least Common Multiple)
2. Multiply each fraction above and below so that each denominator becomes the least common multiple.
3. Add the fractions as shown above.
For example, suppose we want to compute
$\frac25 - \frac14$
The least common multiple of $$5$$ and $$4$$ is $$20$$, so we need to multiply each fraction above and below so that their denominator becomes $$20$$.
$\frac25 = \frac{2\times 4}{5\times 4} = \frac{8}{20}$
$\frac14 = \frac{1\times 5}{4\times5} = \frac{5}{20}$
Now that the denominators are the same, the fractions can be added as before:
$\frac25 - \frac14 = \frac{8}{20} - \frac{5}{20} = \frac{3}{20}$
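The same steps can be traced numerically: `math.lcm` (Python 3.9+) gives the common denominator, and `fractions.Fraction` confirms the result automatically:

```python
from math import lcm
from fractions import Fraction

common = lcm(5, 4)              # least common multiple of the denominators
assert common == 20

# Scale each fraction so its denominator is 20, then subtract numerators.
a = (2 * (common // 5), common)  # 2/5 -> 8/20
b = (1 * (common // 4), common)  # 1/4 -> 5/20
assert (a[0] - b[0], common) == (3, 20)

# Fraction performs all of these steps internally:
assert Fraction(2, 5) - Fraction(1, 4) == Fraction(3, 20)
```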
## Multiplying/Dividing Fractions
To multiply fractions, just remember the following rule:
Top by top, bottom by bottom
For example,
$\frac35 \times \frac27 = \frac{3 \times 2}{5\times 7} = \frac{6}{35}$
To divide fractions, you just need to flip the dividing fraction and then multiply:
Flip and multiply
For example,
$\frac29 \div \frac34 = \frac29 \times \frac43 = \frac{2 \times 4}{9 \times 3} = \frac{8}{27}$
Notice that the $$\dfrac34$$ gets flipped to $$\dfrac43$$ and multiplied instead.
https://www.nature.com/articles/s41588-021-00954-4?error=cookies_not_supported&code=49c44ecf-46cb-4765-851a-032997b44096 | Thank you for visiting nature.com. You are using a browser version with limited support for CSS. To obtain the best experience, we recommend you use a more up to date browser (or turn off compatibility mode in Internet Explorer). In the meantime, to ensure continued support, we are displaying the site without styles and JavaScript.
# A generalized linear mixed model association tool for biobank-scale data
## Abstract
Compared with linear mixed model-based genome-wide association (GWA) methods, generalized linear mixed model (GLMM)-based methods have better statistical properties when applied to binary traits but are computationally much slower. In the present study, leveraging efficient sparse matrix-based algorithms, we developed a GLMM-based GWA tool, fastGWA-GLMM, that is severalfold to orders of magnitude faster than the state-of-the-art tools when applied to the UK Biobank (UKB) data and scalable to cohorts with millions of individuals. We show by simulation that the fastGWA-GLMM test statistics of both common and rare variants are well calibrated under the null, even for traits with extreme case–control ratios. We applied fastGWA-GLMM to the UKB data of 456,348 individuals, 11,842,647 variants and 2,989 binary traits (full summary statistics available at http://fastgwa.info/ukbimpbin), and identified 259 rare variants associated with 75 traits, demonstrating the use of imputed genotype data in a large cohort to discover rare variants for binary complex traits.
## Data availability
The individual-level genotype and phenotype data are available through formal application to the UKB (http://www.ukbiobank.ac.uk). GWAS summary statistics for the 2,989 binary traits from our analysis of the UKB data are fully available at http://fastgwa.info/ukbimpbin and the GWAS Catalog (GCP ID: GCP000224). Source data are provided with this paper.
## Code availability
FastGWA-GLMM, fastGWA-BB and ACAT-V are integrated in the GCTA software package (v.1.93.3), available at https://yanglab.westlake.edu.cn/software/gcta. The source code of GCTA v.1.93.3 is available at https://doi.org/10.5281/zenodo.5226943, and the analysis code to produce the major results presented in the paper is available at https://doi.org/10.5281/zenodo.5501110.
## References
1. Bycroft, C. et al. The UK Biobank resource with deep phenotyping and genomic data. Nature 562, 203–209 (2018).
2. Astle, W. J. et al. The allelic landscape of human blood cell trait variation and links to common complex disease. Cell 167, 1415–1429.e19 (2016).
3. Kemp, J. P. et al. Identification of 153 new loci associated with heel bone mineral density and functional involvement of GPC6 in osteoporosis. Nat. Genet. 49, 1468 (2017).
4. Wray, N. R. et al. Genome-wide association analyses identify 44 risk variants and refine the genetic architecture of major depression. Nat. Genet. 50, 668–681 (2018).
5. Tin, A. et al. Target genes, variants, tissues and transcriptional pathways influencing human serum urate levels. Nat. Genet. 51, 1459–1474 (2019).
6. Craig, J. E. et al. Multitrait analysis of glaucoma identifies new risk loci and enables polygenic prediction of disease susceptibility and progression. Nat. Genet. 52, 160–166 (2020).
7. Chang, C. C. et al. Second-generation PLINK: rising to the challenge of larger and richer datasets. GigaScience 4, 7 (2015).
8. Canela-Xandri, O., Law, A., Gray, A., Woolliams, J. A. & Tenesa, A. A new tool called DISSECT for analysing large genomic data sets using a Big Data approach. Nat. Commun. 6, 10162 (2015).
9. Loh, P. R., Kichaev, G., Gazal, S., Schoech, A. P. & Price, A. L. Mixed-model association for biobank-scale datasets. Nat. Genet. 50, 906–908 (2018).
10. Jiang, L. et al. A resource-efficient tool for mixed model association analysis of large-scale data. Nat. Genet. 51, 1749–1755 (2019).
11. Pirinen, M., Donnelly, P. & Spencer, C. C. Efficient computation with a linear mixed model on large-scale data sets with applications to genetic studies. Ann. Appl. Stat. 7, 369–390 (2013).
12. Van Rheenen, W. et al. Genome-wide association analyses identify new risk variants and the genetic architecture of amyotrophic lateral sclerosis. Nat. Genet. 48, 1043–1048 (2016).
13. Howson, J. M. et al. Fifteen new risk loci for coronary artery disease highlight arterial-wall-specific mechanisms. Nat. Genet. 49, 1113 (2017).
14. Zhou, W. et al. Efficiently controlling for case–control imbalance and sample relatedness in large-scale genetic association studies. Nat. Genet. 50, 1335–1341 (2018).
15. Yang, J., Lee, S. H., Goddard, M. E. & Visscher, P. M. GCTA: a tool for genome-wide complex trait analysis. Am. J. Hum. Genet. 88, 76–82 (2011).
16. Liu, Y. et al. ACAT: a fast and powerful P value combination method for rare-variant analysis in sequencing studies. Am. J. Hum. Genet. 104, 410–421 (2019).
17. Band, G. & Marchini, J. BGEN: a binary file format for imputed genotype and haplotype data. Preprint at bioRxiv https://doi.org/10.1101/308296 (2018).
18. Mbatchou, J. et al. Computationally efficient whole-genome regression for quantitative and binary traits. Nat. Genet. https://doi.org/10.1038/s41588-021-00870-7 (2021).
19. Zhou, W. et al. Scalable generalized linear mixed model for region-based association tests in large biobanks and cohorts. Nat. Genet. 52, 634–639 (2020).
20. Wu, P. et al. Mapping ICD-10 and ICD-10-CM codes to phecodes: workflow development and initial evaluation. JMIR Med. Inform. 7, e14325 (2019).
21. Chatila, T. A. Interleukin-4 receptor signaling pathways in asthma pathogenesis. Trends Mol. Med. 10, 493–499 (2004).
22. Wenzel, S. E. et al. IL4Rα mutations are associated with asthma exacerbations and mast cell/IgE expression. Am. J. Respir. Crit. Care Med. 175, 570–576 (2007).
23. Hirota, T. et al. Genome-wide association study identifies three new susceptibility loci for adult asthma in the Japanese population. Nat. Genet. 43, 893–896 (2011).
24. Lloyd-Jones, L. R. et al. Improved polygenic prediction by Bayesian multiple regression on summary statistics. Nat. Commun. 10, 5086 (2019).
25. Ni, G. et al. A comparison of ten polygenic score methods for psychiatric disorders applied across multiple cohorts. Biol. Psychiatry https://doi.org/10.1016/j.biopsych.2021.04.018 (2021).
26. Lloyd-Jones, L. R., Robinson, M. R., Yang, J. & Visscher, P. M. Transformation of summary statistics from linear mixed model association on all-or-none traits to odds ratio. Genetics 208, 1397–1408 (2018).
27. Dey, R., Schmidt, E. M., Abecasis, G. R. & Lee, S. A fast and accurate algorithm to test for binary phenotypes and its application to PheWAS. Am. J. Hum. Genet. 101, 37–49 (2017).
28. Breyer, J. P., Avritt, T. G., McReynolds, K. M., Dupont, W. D. & Smith, J. R. Confirmation of the HOXB13 G84E germline mutation in familial prostate cancer. Cancer Epidemiol. Prev. Biomark. 21, 1348–1353 (2012).
29. Ewing, C. M. et al. Germline mutations in HOXB13 and prostate-cancer risk. N. Engl. J. Med. 366, 141–149 (2012).
30. Karlsson, R. et al. A population-based assessment of germline HOXB13 G84E mutation and prostate cancer risk. Eur. Urol. 65, 169–176 (2014).
31. Yang, J., Zaitlen, N. A., Goddard, M. E., Visscher, P. M. & Price, A. L. Advantages and pitfalls in the application of mixed-model association methods. Nat. Genet. 46, 100–106 (2014).
32. Pulit, S. L., de With, S. A. & de Bakker, P. I. Resetting the bar: statistical significance in whole‐genome sequencing‐based association studies of global populations. Genet. Epidemiol. 41, 145–151 (2017).
33. Wu, Y., Zheng, Z., Visscher, P. M. & Yang, J. Quantifying the mapping precision of genome-wide association studies using whole-genome sequencing data. Genome Biol. 18, 86 (2017).
34. Yu, J. et al. A unified mixed-model method for association mapping that accounts for multiple levels of relatedness. Nat. Genet. 38, 203–208 (2006).
35. Kang, H. M. et al. Efficient control of population structure in model organism association mapping. Genetics 178, 1709–1723 (2008).
36. Kang, H. M. et al. Variance component model to account for sample structure in genome-wide association studies. Nat. Genet. 42, 348–354 (2010).
37. Zhang, Z. et al. Mixed linear model approach adapted for genome-wide association studies. Nat. Genet. 42, 355–360 (2010).
38. Zhou, X. & Stephens, M. Genome-wide efficient mixed-model analysis for association studies. Nat. Genet. 44, 821–824 (2012).
39. Svishcheva, G. R., Axenovich, T. I., Belonogova, N. M., van Duijn, C. M. & Aulchenko, Y. S. Rapid variance components-based method for whole-genome association analysis. Nat. Genet. 44, 1166–1170 (2012).
40. Loh, P. R. et al. Efficient Bayesian mixed-model analysis increases association power in large cohorts. Nat. Genet. 47, 284–290 (2015).
41. Chen, H. et al. Control for population structure and relatedness for binary traits in genetic association studies via logistic mixed models. Am. J. Hum. Genet. 98, 653–666 (2016).
42. Gilmour, A. R., Thompson, R. & Cullis, B. R. Average information REML: an efficient algorithm for variance parameter estimation in linear mixed models. Biometrics 51, 1440–1450 (1995).
43. Breslow, N. E. & Lin, X. Bias correction in generalised linear mixed models with a single component of dispersion. Biometrika 82, 81–91 (1995).
44. Kuonen, D. Miscellanea. Saddlepoint approximations for distributions of quadratic forms in normal variables. Biometrika 86, 929–935 (1999).
45. McCarthy, S. et al. A reference panel of 64,976 haplotypes for genotype imputation. Nat. Genet. 48, 1279–1283 (2016).
46. UK10K consortium. The UK10K project identifies rare variants in health and disease. Nature 526, 82–90 (2015).
47. Abraham, G., Qiu, Y. & Inouye, M. FlashPCA2: principal component analysis of biobank-scale genotype datasets. Bioinformatics 33, 2776–2778 (2017).
48. Millard, L. A. C., Davies, N. M., Gaunt, T. R., Davey Smith, G. & Tilling, K. Software application profile: PHESANT: a tool for performing automated phenome scans in UK Biobank. Int. J. Epidemiol. 47, 29–35 (2017).
49. World Health Organization. International Statistical Classification of Diseases and Related Health Problems 10th revision (ICD-10) (World Health Organization, 2016).
50. Lubin, J. H. & Gail, M. H. Biased selection of controls for case–control analyses of cohort studies. Biometrics 40, 63–75 (1984).
51. Yang, J. et al. jianyangqt/gcta: GCTA (v1.93.3beta2). Zenodo https://doi.org/10.5281/zenodo.5226943 (2021).
52. Jiang, L., Zheng, Z., Fang, H. & Yang, J. A generalized linear mixed model association tool for biobank-scale data—code. Zenodo https://doi.org/10.5281/zenodo.5501110 (2021).
## Acknowledgements
We thank T. Qi for helpful discussion about the gene-based test. We thank J. Sidorenko for assistance in preparation of the UK Biobank data, and Alibaba Cloud—Australia and New Zealand for hosting the online tool. We thank the University of Queensland Research Computing Centre and the Westlake University High-Performance Computing Center for assistance in computing. J.Y. was supported by the Australian Research Council (grant no. FT180100186), the Australian National Health and Medical Research Council (grant no. 1113400) and the Westlake Education Foundation (grant no. 101566022001). The present study makes use of data from the UKB (applications: 12505 and 66982). UKB was established by the Wellcome Trust medical charity, Medical Research Council, Department of Health, Scottish Government and the Northwest Regional Development Agency. It has also had funding from the Welsh Assembly Government, British Heart Foundation and Diabetes UK. The funders had no role in study design, data collection and analysis, decision to publish or preparation of the manuscript.
## Author information
### Contributions
J.Y. conceived and supervised the study. J.Y., L.J. and Z.Z. designed the experiment and developed the methods. Z.Z. developed the software tools with input from L.J., H.F. and J.Y. L.J. and Z.Z. performed the simulations and data analyses under the assistance and guidance of J.Y. L.J. and J.Y. wrote the manuscript with the participation of Z.Z. All the authors reviewed and approved the final manuscript.
### Corresponding author
Correspondence to Jian Yang.
## Ethics declarations
### Competing interests
The authors declare no competing interests.
Peer review information Nature Genetics thanks Bjarni Vilhjálmsson and the other, anonymous, reviewer(s) for their contribution to the peer review of this work. Peer review reports are available.
Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
## Extended data
### Extended Data Fig. 1 Runtime of fastGWA-GLMM for 8 traits with different prevalence levels.
The x-axis represents the sample size, and the y-axis represents the total runtime in hours. Different traits are labelled with different colours. The data used in this test consisted of 11,842,647 variants. All tests were performed in the same computing environment: 80 GB memory and 8 CPUs (Intel Xeon Gold 6148). Each test was repeated 5 times and the average runtime is reported.
### Extended Data Fig. 2 FPR for SAIGE, fastGWA-GLMM and REGENIE quantified using the null common variants in simulations.
Three methods, SAIGE, fastGWA-GLMM, and REGENIE, are compared. The y-axis represents the FPR computed from the null common variants (that is, all the common variants on the even chromosomes), and the x-axis represents different levels of prevalence of the simulated binary phenotypes (prevalence $$= n_{case}/(n_{case} + n_{control})$$). FPR is evaluated at five different alpha levels (α=0.05, 0.005, 5×10−4, 5×10−5, and 5×10−6), as shown in panels from a) to e), respectively. The dashed lines indicate the expected FPRs (that is, the alpha levels). Each boxplot represents the distribution of FPR across 100 simulation replicates. The line inside each box indicates the median value, notches indicate the 95% confidence interval, central box indicates the interquartile range (IQR), whiskers indicate data up to 1.5 times the IQR, and outliers are shown as separate dots. In all the analyses, we used a one-sided $$\chi _{\mathrm{d.f.} = 1}^2$$ statistic to test against the null hypothesis of no association.
### Extended Data Fig. 3 FPR for SAIGE, fastGWA-GLMM and REGENIE quantified using the rare null variants in simulations.
Three methods, SAIGE, fastGWA-GLMM, and REGENIE, are compared. The y-axis represents the FPR computed from the null rare variants (that is, all the rare variants on the even chromosomes), and the x-axis represents different levels of prevalence of the simulated binary phenotypes (prevalence $$= n_{case}/(n_{case} + n_{control})$$). FPR is evaluated at five different alpha levels (α=0.05, 0.005, 5×10−4, 5×10−5, and 5×10−6), as shown in panels from a) to e), respectively. The dashed lines indicate the expected FPRs (that is, the alpha levels). Each boxplot represents the distribution of FPR across 100 simulation replicates. The line inside each box indicates the median value, notches indicate the 95% confidence interval, central box indicates the interquartile range (IQR), whiskers indicate data up to 1.5 times the IQR, and outliers are shown as separate dots. In all the analyses, we used a one-sided $$\chi _{\mathrm{d.f.} = 1}^2$$ statistic to test against the null hypothesis of no association.
### Extended Data Fig. 4 Comparison of power (as measured by the mean χ2 value of the causal variants) between SAIGE, fastGWA-GLMM and REGENIE.
The y-axis represents the mean χ2 value of the causal variants (10,000 common and 1,000 rare causal variants on the odd chromosomes), and the x-axis represents different levels of prevalence of the simulated binary phenotypes (prevalence $$= n_{case}/(n_{case} + n_{control})$$). Apart from being evaluated for the 11,000 variants altogether in panel (a), the mean χ2 value is also evaluated for common (MAF ≥ 0.01) and rare (MAF < 0.01) causal variants separately, as shown in panels b) and c), respectively. Each boxplot represents the distribution of mean χ2 across 100 simulation replicates. The line inside each box indicates the median value, notches indicate the 95% confidence interval, central box indicates the interquartile range (IQR), whiskers indicate data up to 1.5 times the IQR, and outliers are shown as separate dots. In all the analyses, we used a one-sided $$\chi _{\mathrm{d.f.} = 1}^2$$ statistic to test against the null hypothesis of no association.
### Extended Data Fig. 5 FPR for fastGWA-GLMM and other methods quantified using all the null variants in simulations.
The y-axis represents the FPR computed from the null variants (that is, all the variants on the even chromosomes), and the x-axis represents different levels of prevalence of the simulated binary phenotypes (prevalence $$= n_{case}/(n_{case} + n_{control})$$). FPR is evaluated at five different alpha (P value threshold) levels (α=0.05, 0.005, 5×10−4, 5×10−5, and 5×10−6), as shown in panels from a) to e), respectively. The dashed lines indicate the expected FPRs (that is, the alpha levels). Each boxplot represents the distribution of FPR across 100 simulation replicates. The line inside each box indicates the median value, notches indicate the 95% confidence interval, central box indicates the interquartile range (IQR), whiskers indicate data up to 1.5 times the IQR, and outliers are shown as separate dots. In all the analyses, we used a one-sided $$\chi _{\mathrm{d.f.} = 1}^2$$ statistic to test against the null hypothesis of no association.
### Extended Data Fig. 6 FPR for fastGWA-GLMM and fastGWA-GLMM-Ped quantified using all the null variants in simulations.
FastGWA-GLMM-Ped: fastGWA-GLMM using the pedigree relatedness matrix. fastGWA-GLMM: fastGWA-GLMM using the sparse GRM. The y-axis represents the FPR computed from the null variants (that is, all the variants on the even chromosomes), and the x-axis represents different levels of prevalence of the simulated binary phenotypes (prevalence $$= n_{case}/(n_{case} + n_{control})$$). FPR is evaluated at five different alpha levels (α=0.05, 0.005, 5×10−4, 5×10−5, and 5×10−6), as shown in panels from a) to e), respectively. The dashed lines indicate the expected FPRs (that is, the alpha levels). Each boxplot represents the distribution of FPR across 100 simulation replicates. The line inside each box indicates the median value, notches indicate the 95% confidence interval, central box indicates the interquartile range (IQR), whiskers indicate data up to 1.5 times the IQR, and outliers are shown as separate dots. In all the analyses, we used a one-sided $$\chi _{\mathrm{d.f.} = 1}^2$$ statistic to test against the null hypothesis of no association.
### Extended Data Fig. 7 Mean χ2 value of the causal variants for fastGWA-GLMM and fastGWA-GLMM-Ped under different simulation scenarios.
FastGWA-GLMM-Ped: fastGWA-GLMM using the pedigree relatedness matrix. fastGWA-GLMM: fastGWA-GLMM using the sparse GRM. The y-axis represents the mean χ2 value of the causal variants (10,000 common and 1,000 rare causal variants on the odd chromosomes), and the x-axis represents different levels of prevalence of the simulated binary phenotypes (prevalence $$= n_{case}/(n_{case} + n_{control})$$). Apart from being evaluated for the 11,000 variants altogether in panel a), the mean χ2 value is also evaluated for common (MAF ≥ 0.01) and rare (MAF < 0.01) causal variants separately, as shown in panels b) and c) respectively. Each boxplot represents the distribution of mean χ2 across 100 simulation replicates. The line inside each box indicates the median value, notches indicate the 95% confidence interval, central box indicates the interquartile range (IQR), whiskers indicate data up to 1.5 times the IQR, and outliers are shown as separate dots. In all the analyses, we used a one-sided $$\chi _{\mathrm{d.f.} = 1}^2$$ statistic to test against the null hypothesis of no association.
### Extended Data Fig. 8 False positive rate (FPR) for ACAT-V, fastGWA-BB, and REGENIE-Burden under different simulation scenarios.
Three gene-based test methods are compared in this analysis, that is, ACAT-V (implemented in GCTA), fastGWA-BB, and REGENIE-Burden. The y-axis represents the FPR computed from the null genes (that is, all the 1,224 genes on chromosome 1 under the null simulation scenarios), and “Prev” on the x-axis represents different levels of simulated prevalence of the binary trait. The prevalence is defined as $$n_{case}/(n_{case} + n_{control})$$. FPR is evaluated at five different alpha levels (α=0.05, 0.01, 0.005, 0.001, and 5×10−4), as shown in panels from a) to e), respectively. The dashed lines indicate the expected FPRs (that is, the alpha levels). Each boxplot represents the distribution of FPR across 100 simulation replicates. The line inside each box indicates the median value, notches indicate the 95% confidence interval, central box indicates the interquartile range (IQR), whiskers indicate data up to 1.5 times the IQR, and outliers are shown as separate dots. In all the analyses, we used a one-sided $$\chi _{\mathrm{d.f.} = 1}^2$$ statistic to test against the null hypothesis of no association.
### Extended Data Fig. 9 Statistical power for ACAT-V, fastGWA-BB, and REGENIE-Burden under different simulation scenarios.
Three gene-based test methods are compared in this analysis, that is, ACAT-V (implemented in GCTA), fastGWA-BB, and REGENIE-Burden. The y-axis represents the power, defined as the proportion of the 100 simulated causal genes on chromosome 1 with P values less than the significance threshold after Bonferroni correction (that is, 0.05/1224=4.1×10−5, where 1,224 is the number of genes used in the simulation), and “Prev” on the x-axis represents different levels of simulated prevalence of the binary trait. The prevalence is defined as $$n_{case}/(n_{case} + n_{control})$$. We varied the proportion of variants being causal in a gene (5%, 20%, or 50%) and the directions of variant effects (random or consistent), as labelled in the title of each panel. Each boxplot represents the distribution of power across 100 simulation replicates. The line inside each box indicates the median value, notches indicate the 95% confidence interval, central box indicates the interquartile range (IQR), whiskers indicate data up to 1.5 times the IQR, and outliers are shown as separate dots. In all the analyses, we used a one-sided $$\chi _{\mathrm{d.f.} = 1}^2$$ statistic to test against the null hypothesis of no association.
### Extended Data Fig. 10 Area under the curve (AUC) for ACAT-V, fastGWA-BB, and REGENIE-Burden under different simulation scenarios.
Three gene-based test methods are compared in this analysis, that is, ACAT-V (implemented in GCTA), fastGWA-BB, and REGENIE-Burden. The y-axis represents the AUC (that is, the area under the receiver operating characteristic (ROC) curve), and “Prev” on the x-axis represents different levels of simulated prevalence of the binary trait. The prevalence is defined as $$n_{case}/(n_{case} + n_{control})$$. We varied the proportion of variants being causal in a gene (5%, 20% or 50%) and the directions of variant effects (random vs. consistent), as labelled in the title of each panel. Each boxplot represents the distribution of AUC across 100 simulation replicates. The line inside each box indicates the median value, notches indicate the 95% confidence interval, central box indicates the interquartile range (IQR), whiskers indicate data up to 1.5 times the IQR, and outliers are shown as separate dots. In all the analyses, we used a one-sided $$\chi _{\mathrm{d.f.} = 1}^2$$ statistic to test against the null hypothesis of no association.
## Supplementary information
### Supplementary Information
Supplementary Notes 1–14, Tables 1–11 and 13–14, Figs. 1–17 and References.
### Supplementary Table
Supplementary Table 12 Quasi-independent association signals identified by fastGWA-GLMM for the 2,989 binary traits in the UK Biobank.
## Source data
Statistical source data are provided for Fig. 3 and for Extended Data Figs. 2–10.
## Rights and permissions
Jiang, L., Zheng, Z., Fang, H. et al. A generalized linear mixed model association tool for biobank-scale data. Nat Genet 53, 1616–1621 (2021). https://doi.org/10.1038/s41588-021-00954-4
https://tex.stackexchange.com/questions/416441/setting-chapter-for-header-line-for-one-page | # Setting chapter for header line for one page
I am using \pagestyle{scrheadings} with the current chapter name in the header. One chapter should not appear in the table of contents, which is why I am using \chapter*{MyChapter}. The problem is that the header does not show the name of MyChapter but the name of the previous chapter. How can I change the chapter name in the header for a single page?
Thank you
• Try \chaptermark{MyChapter} right after the \chapter*. – Tiuri Feb 21 '18 at 12:51
• \markboth{title wombat}{title wombat} – Johannes_B Feb 21 '18 at 13:12
• @Johannes_B this worked like a charm, thanks – mrk Feb 21 '18 at 14:27
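A minimal sketch of the fix suggested in the comments (the document class and chapter names are assumed here, not taken from the question):

```latex
\documentclass{scrreprt}
\usepackage{scrlayer-scrpage}
\pagestyle{scrheadings}

\begin{document}

\chapter{Ordinary Chapter} % numbered: sets the running head automatically
Some text.

\chapter*{MyChapter}            % starred: no ToC entry and no header mark, so...
\markboth{MyChapter}{MyChapter} % ...set the running-head marks by hand
More text.

\end{document}
```

\markboth sets both the left and right marks, so it works regardless of whether the page style uses the chapter mark on even or odd pages.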
With a KOMA-Script class you could set class option
headings=optiontoheadandtoc
Then you can use
\addchap[tocentry={}]{Chapter without tocentry}
Example:
\documentclass[
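The example above is cut off mid-line; a sketch of what such a minimal document could look like (the class and chapter titles are assumptions, not the original answer's code):

```latex
\documentclass[headings=optiontoheadandtoc]{scrbook}
\usepackage{scrlayer-scrpage}
\pagestyle{scrheadings}

\begin{document}

\chapter{Ordinary Chapter}
Some text.

% Unnumbered chapter with an empty ToC entry; because of the
% headings=optiontoheadandtoc class option, the title still
% reaches the running head.
\addchap[tocentry={}]{Chapter without tocentry}
More text.

\end{document}
```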
https://linearalgebras.com/tag/conjugacy-class
## Characterize the center of a group ring
Solution to Abstract Algebra by Dummit & Foote, 3rd edition, Chapter 7.2, Exercise 7.2.13. Let $\mathcal{K} = \{k_1, \ldots, k_m \}$ be a conjugacy class in the finite group $G$.…
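The post above is truncated. For context, the standard statement of this exercise (a reconstruction, not the original post's text) is that the conjugacy class sums span the center of the group ring $RG$. In brief:

```latex
% Class sum of the conjugacy class K = {k_1, ..., k_m}:
\[
  K \;=\; \sum_{i=1}^{m} k_i \in RG,
  \qquad
  g K g^{-1} \;=\; \sum_{i=1}^{m} g k_i g^{-1} \;=\; K
  \quad \text{for all } g \in G,
\]
% since conjugation by g permutes the class; hence K commutes with every
% g \in G and therefore with all of RG.  Conversely, if
% z = \sum_{h \in G} a_h h is central, then g z g^{-1} = z forces
% a_{g h g^{-1}} = a_h for all g, so the coefficients of z are constant on
% each conjugacy class and z is an R-linear combination of class sums.
```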
https://joelmoreira.wordpress.com/2018/04/26/optimal-intersectivity/ | ## Optimal intersectivity
In ergodic Ramsey theory, one often wants to prove that certain dynamically defined sets in a probability space intersect (or “recur”) in non-trivial ways. Typically, this is achieved by studying the long term behavior of the sets as the dynamics flow. However, in certain situations, one can establish the desired intersection (or recurrence) using purely combinatorial arguments, and without using the fact that the sets are dynamically defined. In such cases, one ends up obtaining a “static” (as opposed to dynamical) statement. An instance of this situation is the following intersectivity result of Bergelson, first used in this paper, and which I have mentioned before in this blog.
Lemma 1 Let ${A_1,A_2,\dots}$ be sets in a probability space ${(X,\mu)}$ with ${\inf_n\mu(A_n)>\delta}$. Then there exists an infinite set ${I\subset{\mathbb N}}$ with density ${d(I)>\delta}$, such that for every non-empty finite set ${F\subset I}$ we have
$\displaystyle \mu\left(\bigcap_{n\in F}A_n\right)>0$
A different kind of intersection property is the following static modification of Poincaré’s recurrence theorem.
Lemma 2 Let ${(X,\mu)}$ be a probability space and let ${A_1, A_2,\dots}$ be sets with ${\inf_n\mu(A_n)>\delta}$. Then there exists an infinite subset ${I\subset{\mathbb N}}$ such that for every ${n,m\in I}$,
$\displaystyle \mu\left(A_n\cap A_m\right)>\delta^2$
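The averaging mechanism behind Lemma 2 can be sanity-checked on a finite sample space. In the sketch below (uniform measure on an assumed finite set), each set has measure 1/2 and $\delta = 0.4$; the $k=2$ averaging bound then guarantees that some pair intersects in measure at least $(M\cdot(1/2)^2 - 1)/(M-1) \approx 0.21 > \delta^2$:

```python
import random

random.seed(0)
N, M = 1_000, 20      # size of the uniform sample space X, number of sets
delta = 0.4           # each A_n below has measure 1/2 > delta

sets = [frozenset(random.sample(range(N), N // 2)) for _ in range(M)]

# By the k = 2 averaging argument, the best pair (n, m), n != m, satisfies
# mu(A_n ∩ A_m) >= (M * (1/2)**2 - 1) / (M - 1) ≈ 0.21 > delta**2 = 0.16,
# no matter how the 20 sets are chosen.
best = max(
    len(sets[n] & sets[m]) / N
    for n in range(M) for m in range(n + 1, M)
)
print(best > delta**2)  # True
```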
Observe that if the events ${A_1,A_2,\dots}$ are independent, then the lower bound ${\delta^2}$ is essentially achieved, and so this lemma is in that sense optimal. The purpose of this post is to present a proof of the following common strengthening of Lemmas 1 and 2.
Theorem 3 Let ${A_1,A_2,\dots}$ be sets in a probability space ${(X,\mu)}$ with ${\inf_n\mu(A_n)>\delta}$. Then there exists an infinite set ${I\subset{\mathbb N}}$ such that for every non-empty finite set ${F\subset I}$ we have
$\displaystyle \mu\left(\bigcap_{n\in F}A_n\right)>\delta^{|F|}$
Observe that the bound ${\delta^{|F|}}$ is optimal, again by considering the case of independent sets.
I learned this strengthening, as well as its proof from Konstantinos Tyros last December, when we were in Lyon, France attending the conference Ultrafilters, Ramsey Theory and Dynamics.
One could ask whether more can be said about the set ${I}$ in Theorem 3 other than being infinite. One thing one cannot hope for is that the set ${I}$ has positive density, as shown in my previous post, using Forrest's theorem that not all sets of recurrence are sets of nice recurrence. On the other hand, one can indeed obtain certain combinatorial structures inside ${I}$. For instance, assuming the index set of the sets ${(A_n)}$ is given the structure of a homogeneous tree, one can choose ${I}$ to be a strong subtree; this is Theorem 3 in this paper of Dodos, Kanellopoulos and Tyros. Results of this kind are related to density versions of the Hales-Jewett and the Carlson-Simpson theorems due respectively to Furstenberg and Katznelson, and to Dodos, Kanellopoulos and Tyros.
— 1. A truncated version —
We start with a proof of Lemma 2 which we will need to prove Theorem 3. In fact, we prove the following strengthening of Lemma 2 which I have mentioned before in this blog (but without a proof) and can be seen as a “truncated” version of Theorem 3.
Lemma 4 Let ${(X,\mu)}$ be a probability space and let ${A_1, A_2,\dots}$ be sets with ${\inf_n\mu(A_n)>\delta}$. Then for every ${k\in{\mathbb N}}$ there exists an infinite subset ${I\subset{\mathbb N}}$ such that for every ${F\subset I}$ with ${|F|= k}$ we have
$\displaystyle \mu\left(\bigcap_{n\in F}A_n\right)>\delta^k$
Proof: We partition the collection ${\binom{\mathbb N} k}$ of all subsets ${F}$ of ${{\mathbb N}}$ with size ${|F|=k}$ into two pieces, according to whether ${\mu\big(\bigcap_{n\in F}A_n\big)}$ is larger than ${\delta^k}$ or not. We then use the infinite Ramsey theorem to find an infinite set ${I\subset{\mathbb N}}$ such that every ${F\subset I}$ with ${|F|=k}$ is in the same cell of the partition. We now only need to show that it is impossible to have ${\mu\big(\bigcap_{n\in F}A_n\big)\leq\delta^k}$ for every ${F\subset I}$ with ${|F|=k}$.
In order to do this, let ${G\subset I}$ be the set of the first ${M}$ elements of ${I}$, where ${M}$ is very large and will be determined later. Also let ${f=f_G=\sum_{n\in G}1_{A_n}}$. Then by Jensen’s inequality
$\displaystyle \begin{array}{rcl} \left(\int_Xfd\mu\right)^k &\leq& \int_Xf^kd\mu = \sum_{(n_1,\dots,n_k)\in G^k}\int_X\prod_{i=1}^k1_{A_{n_i}}d\mu \\&=& \sum_{(n_1,\dots,n_k)\in G^k}\mu\left(\bigcap_{i=1}^kA_{n_i}\right) \end{array}$
It is clear that ${G^k}$ contains ${\binom Mk}$ tuples of distinct elements, each appearing ${k!}$ times, and hence ${G^k}$ contains ${M^k-k!\binom{M}k}$ tuples of elements with some repetition. Thus we get
$\displaystyle \left(\int_Xfd\mu\right)^k\leq M^k-k!\binom{M}k+k!\sum_{F\subset G, |F|=k}\mu\left(\bigcap_{n\in F}A_n\right)$
and so, using the fact that ${\int_Xfd\mu>M\delta}$, for some ${F\subset G}$ with ${|F|=k}$ we have
$\displaystyle \begin{array}{rcl} \mu\left(\bigcap_{n\in F}A_n\right) &\geq& \frac1{k!\binom{M}k}\left(\left(\int_Xfd\mu\right)^k- M^k+k!\binom{M}k\right) \\& >& \frac{M^k}{k!\binom{M}k}\left(\delta^k-1\right)+1 \end{array}$
Since ${M^k/k!\binom Mk\rightarrow1}$ as ${M\rightarrow\infty}$, it follows that if ${M}$ is large enough, then
$\displaystyle \mu\left(\bigcap_{n\in F}A_n\right)>\delta^k$
as claimed, and this finishes the proof. $\Box$
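As a quick numerical sanity check of the last step — an addition to the post, not part of the original argument — one can tabulate the factor ${M^k/k!\binom Mk}$ for a fixed ${k}$ and watch it decrease to ${1}$ as ${M}$ grows:

```python
from math import comb, factorial

def correction_factor(M, k):
    """The factor M**k / (k! * C(M, k)) from the end of the proof; it equals
    M**k / (M*(M-1)*...*(M-k+1)) and tends to 1 as M grows."""
    return M ** k / (factorial(k) * comb(M, k))

# for k = 3: 1000/720 ≈ 1.389, then steadily down towards 1
values = [correction_factor(M, 3) for M in (10, 100, 1000, 10000)]
```

Already at ${M=10000}$ the factor is within ${10^{-3}}$ of ${1}$, which is what makes the final inequality kick in.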
Observe that this proof would give the full Theorem 3 if the following extension of Ramsey’s theorem were true.
Statement Let ${{\mathcal F}}$ denote the collection of all finite non-empty subsets of ${{\mathbb N}}$. For every finite coloring of ${{\mathcal F}}$ there exists an infinite set ${I\subset{\mathbb N}}$ such that for every ${k\in{\mathbb N}}$, the collection
$\displaystyle \big\{F\subset I:|F|=k\big\}$
is monochromatic.
Unfortunately, this statement is false, as seen by the following example. In this sense it is perhaps surprising that Theorem 3 is true.
Example 1 Let ${\chi:{\mathcal F}\rightarrow\{1,0\}}$ be the coloring given by
$\displaystyle \chi(F)=\begin{cases} 1&\text{ if }~ \min(F)<|F|\\ 0&\text{otherwise} \end{cases}$
Then given any infinite set ${I\subset{\mathbb N}}$, let ${x\in I}$ and find a finite subset ${F\subset I}$ with ${|F|=x+1}$ and ${x\in F}$ (which then satisfies ${\chi(F)=1}$) and another finite subset ${G\subset I}$ with ${|G|=x+1}$ but ${x\notin G}$ (which thus satisfies ${\chi(G)=0}$).
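A small computational illustration of this example (an addition to the post; the infinite set ${I}$ is simulated by a long initial segment):

```python
def chi(F):
    """The coloring of Example 1: chi(F) = 1 if min(F) < |F|, else 0."""
    return 1 if min(F) < len(F) else 0

# "Infinite" set I: the multiples of 3 (a long initial segment suffices here).
I = [3 * (i + 1) for i in range(100)]   # 3, 6, 9, ...
x = I[0]                                 # x = 3
F = I[:x + 1]                            # |F| = x + 1 and x in F,     so chi(F) = 1
G = [n for n in I if n > x][:x + 1]      # |G| = x + 1 and x not in G, so chi(G) = 0
```

Since both colors occur among the subsets of ${I}$ of size ${x+1}$, the collection of subsets of ${I}$ of that size is not monochromatic, as claimed.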
— 2. Proof of Theorem 3 —
Since the proof of Theorem 3 involves some annoying parameters, we first outline the steps in the proof without full details.
By refining the sequence of sets ${(A_n)}$ we can assume that they all have similar measure ${\mu(A_n)\approx\delta}$ for all ${n}$. Applying Lemma 2 we find a set ${I_1\subset{\mathbb N}}$ with ${\mu(A_n\cap A_m)>\delta^2}$ for every ${n,m\in I_1}$. After refining ${I_1}$, if needed, we can assume that, in fact, for every distinct ${n,m\in I_1}$ we have ${\mu(A_n\cap A_m)\approx\delta^2}$. Let ${n_1\in I_1}$ be arbitrary.
Now comes the tricky part: we condition the measure on ${A_{n_1}}$. In other words, we consider the probability measure ${\mu_1}$ on ${X}$ defined by ${\mu_1(B)=\mu(B\cap A_{n_1})/\mu(A_{n_1})\approx\mu(B\cap A_{n_1})/\delta}$. Observe that ${\mu_1(A_n)\approx\delta}$ for every ${n\in I_1\setminus\{n_1\}}$. Now use Lemma 2 to find an infinite set ${I_{2}\subset I_1\setminus\{n_1\}}$ such that ${\mu_1(A_n\cap A_m)\approx\delta^2}$ for every distinct ${n,m\in I_2}$.
Let ${n_2\in I_2}$ be arbitrary. Now we condition the measure on ${A_{n_2}}$, letting ${\mu_2(B):=\mu(A_{n_2}\cap B)/\mu(A_{n_2})}$ and noting that, since ${n_2\in I_1}$, for every ${m\in I_2\setminus\{n_2\}}$ we have ${\mu_2(A_m)\approx\delta}$. Therefore by applying Lemma 2 we can find an infinite ${I_{3,1}\subset I_2\setminus\{n_2\}}$ such that for every distinct ${n,m\in I_{3,1}}$ we have ${\mu_2(A_n\cap A_m)\approx\delta^2}$. Before we can choose ${n_3}$, we need to also consider the situation conditional on ${A_{n_1}\cap A_{n_2}}$ (which, we recall, has measure ${\approx\delta^2}$ because both ${n_1}$ and ${n_2}$ are in ${I_1}$). Thus, letting ${\mu_{1,2}(B):=\mu(A_{n_1}\cap A_{n_2}\cap B)/\delta^2}$ we obtain that, for every ${n\in I_2}$, ${\mu_{1,2}(A_n)\approx\delta}$. Therefore, applying Lemma 2 again, we can further refine ${I_{3,1}}$ to an infinite subset ${I_3}$ such that for every distinct ${n,m\in I_3}$ also ${\mu_{1,2}(A_n\cap A_m)\approx\delta^2}$. We can now choose ${n_3\in I_3}$ arbitrarily.
We continue building the elements ${n_1,n_2,n_3,...}$ of the eventually infinite set ${I}$, at each step making sure we have an infinite set ${I_k}$ such that for every non-empty subset ${F\subset\{n_1,\dots,n_k\}}$ we have
$\displaystyle \mu\left(\bigcap_{i\in F}A_i\right)\approx\delta^{|F|},$
and that for every ${n\in I_k}$ we have
$\displaystyle \mu_F(A_n):=\frac{\mu\left(A_n\cap\bigcap_{i\in F}A_i\right)}{\mu\left(\bigcap_{i\in F}A_i\right)}\approx\delta.$
At each stage we can keep going by conditioning ${\mu}$ on each new subset of ${\{n_1,\dots,n_k\}}$ and applying Lemma 2.
To make everything work out, we need to introduce a refinement step at each stage, to make sure all the sets in ${I_k}$ have similar measure, for all the conditional measures ${\mu_F}$. To this end we make use of the following version of the pigeonhole principle.
Lemma 6 Let ${(X,\mu)}$ be a probability space, let ${\delta>0}$ and let ${A_1, A_2,\dots}$ be sets with ${\mu(A_n)>\delta}$ for every ${n\in{\mathbb N}}$. Then for every ${\rho>1}$ there exists ${\theta\geq\delta}$ and an infinite subset ${I\subset{\mathbb N}}$ such that for every ${n\in I}$ we have
$\displaystyle \mu\left(A_n\right)\in[\theta,\theta\rho).$
Proof: Let ${N}$ be large enough that the intervals ${\big[\delta\rho^{k-1},\delta\rho^k\big)}$, with ${k=1,2,\dots,N}$ cover the interval ${(\delta,1]}$. Since ${\mu(A_n)\in(\delta,1]}$ for every ${n\in{\mathbb N}}$, the pigeonhole principle implies that there exists ${k\in\{1,\dots,N\}}$ for which the result holds with ${\theta=\delta\rho^{k-1}}$. $\Box$
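Computationally, the pigeonhole step of Lemma 6 can be sketched as follows (an illustrative addition, not from the post; the infinite index set is replaced by a finite toy example, and ${\delta}$, ${\rho}$ are chosen exactly representable in floating point):

```python
import random

def refine_by_measure(measures, delta, rho):
    """Pigeonhole step of Lemma 6: bucket each index n according to which
    interval [delta*rho**k, delta*rho**(k+1)) contains mu(A_n), then return
    theta = delta*rho**k for the most popular bucket, together with that bucket."""
    buckets = {}
    for n, m in enumerate(measures):
        k = 0
        while delta * rho ** (k + 1) <= m:  # find k with delta*rho**k <= m < delta*rho**(k+1)
            k += 1
        buckets.setdefault(k, []).append(n)
    k_best = max(buckets, key=lambda k: len(buckets[k]))
    return delta * rho ** k_best, buckets[k_best]

# toy "measures" mu(A_n), all strictly above delta
random.seed(0)
delta, rho = 0.25, 2.0
measures = [random.uniform(0.26, 1.0) for _ in range(200)]
theta, I = refine_by_measure(measures, delta, rho)
# every chosen index n satisfies mu(A_n) in [theta, theta*rho)
```

Since only finitely many intervals are needed to cover ${(\delta,1]}$, one bucket must receive infinitely many indices in the actual lemma; in the finite toy above it simply receives a large share of them.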
We are now ready to prove Theorem 3.
Proof of Theorem 3: For each ${F\subset{\mathbb N}}$, denote by ${A_F}$ the intersection ${\bigcap_{n\in F}A_n}$, with the convention that ${A_\emptyset=X}$, and let ${\mu_F}$ denote the measure on ${X}$ defined by ${\mu_F(B)=\mu(B\cap A_F)/\mu(A_F)}$ whenever ${\mu(A_F)\neq0}$. Let ${\sigma>1}$ be such that ${\mu(A_n)>\delta\sigma}$ for every ${n\in{\mathbb N}}$.
We will construct, for each ${k=0,1,\dots}$, a set ${F_k}$ with ${|F_k|=k}$, such that ${F_{k-1}\subset F_k}$ and
$\displaystyle \forall\ \emptyset\neq F\subset F_k\quad\exists\theta_F>\delta:\quad\mu\left(A_F\right)\in\left[\theta_F^{|F|},\theta_F^{|F|} \sigma^{1-2^{-k}}\right) \ \ \ \ \ (1)$
and an infinite set ${I_k\subset{\mathbb N}}$ such that
$\displaystyle \forall\ F\subset F_k\quad\exists\lambda_F>\delta\sigma^{2^{1-k}}\quad\forall n\in I_k,\quad\mu_F\left(A_n\right)\in\big[\lambda_F,\lambda_F\sigma^{2^{-k-1}}\big) \ \ \ \ \ (2)$
If we can construct such sequences, then taking ${I=\bigcup F_k}$ we obtain the desired conclusion from (1). For ${k=0}$ we set ${F_0=\emptyset}$. Apply Lemma 6 with ${\rho=\sqrt{\sigma}}$ to find ${\lambda_\emptyset\geq\delta\sigma}$ such that (2) holds for all ${n}$ in an infinite set ${I_0}$.
Suppose now that ${k\geq1}$ and that we have found ${F_{k-1}}$, ${I_{k-1}}$ satisfying (1) and (2). Enumerate the subsets of ${F_{k-1}}$ as ${\{S_1,\dots,S_{2^{k-1}}\}}$. Let ${J_0=I_{k-1}}$ and successively apply Lemma 2 for each ${i}$ in ${\{1,\dots,2^{k-1}\}}$ to obtain an infinite set ${J_{i}\subset J_{i-1}}$ such that for every distinct ${n,m\in J_{i}}$ we have
$\displaystyle \mu_{S_i}(A_n\cap A_m)\geq\lambda_{S_i}^2\sigma^{-2^{-k}}, \ \ \ \ \ (3)$
Let ${J=J_{2^{k-1}}}$, take ${x\in J}$ arbitrary and let ${F_{k}=F_{k-1}\cup\{x\}}$. Observe that for every ${i\in\{1,\dots,2^{k-1}\}}$ and ${n\in J\setminus\{x\}}$, combining (2) and (3), we have
$\displaystyle \mu_{S_i\cup x}(A_n)=\frac{\mu(A_{S_i}\cap A_x\cap A_n)}{\mu(A_{S_i}\cap A_x)}=\frac{\mu_{S_i}(A_x\cap A_n)}{\mu_{S_i}(A_x)}\geq \frac{\lambda_{S_i}^2\sigma^{-2^{-k}}}{\lambda_{S_i}\sigma^{2^{-k}}}= \lambda_{S_i}\sigma^{-2^{1-k}}>\delta\sigma^{2^{1-k}} \ \ \ \ \ (4)$
We run another refinement cycle, setting ${K_0:=J\setminus\{x\}}$ and successively using Lemma 6 for each ${i\in\{1,\dots,2^{k-1}\}}$ to find ${\lambda_F\geq\delta\sigma^{2^{1-k}}}$, where ${F=S_i\cup\{x\}}$, and an infinite set ${K_i\subset K_{i-1}}$ such that for every ${n\in K_i}$,
$\displaystyle \mu_{F}(A_n)\in\big[\lambda_F,\lambda_F\sigma^{2^{-k-1}}\big) \ \ \ \ \ (5)$
Finally, let ${I_k=K_{2^{k-1}}}$. Observe that (2) follows immediately from (5) and induction (for those ${F\subset F_k}$ which do not contain ${x}$).
To verify (1), let ${\emptyset\neq F\subset F_k}$. If ${x\notin F}$, then ${F\subset F_{k-1}}$ and the result follows by induction. If ${x\in F}$, let ${S:=F\setminus\{x\}}$ and notice that ${\mu(A_F)=\mu_S(A_x)\mu(A_S)}$. The fact that ${x\in I_{k-1}}$ together with (2) for ${S}$ (which is a subset of ${F_{k-1}}$) implies that
$\displaystyle \mu(A_F)\in \left[\theta_S^{|S|},\theta_S^{|S|} \sigma^{1-2^{1-k}}\right) \cdot \big[\lambda_S,\lambda_S\sigma^{2^{-k}}\big) =\left[\theta_S^{|S|}\lambda_S,\theta_S^{|S|}\lambda_S\sigma^{1-2^{-k}}\right)$
and (1) follows by setting ${\theta_F=\big(\theta_S^{|S|}\lambda_S\big)^{1/|F|}}$, which is greater than ${\delta}$ since both ${\theta_S}$ and ${\lambda_S}$ are.
$\Box$
https://www.inferentialthinking.com/chapters/11/1/Assessing_Models | ### Assessing Models
In data science, a “model” is a set of assumptions about data. Often, models include assumptions about chance processes used to generate data.
Sometimes, data scientists have to decide whether or not their models are good. In this section we will discuss two examples of making such decisions. In later sections we will use the methods developed here as the building blocks of a general framework for testing hypotheses.
### U.S. Supreme Court, 1965: Swain vs. Alabama
In the early 1960’s, in Talladega County in Alabama, a black man called Robert Swain was convicted of raping a white woman and was sentenced to death. He appealed his sentence, citing among other factors the all-white jury. At the time, only men aged 21 or older were allowed to serve on juries in Talladega County. In the county, 26% of the eligible jurors were black, but there were only 8 black men among the 100 selected for the jury panel in Swain’s trial. No black man was selected for the trial jury.
In 1965, the Supreme Court of the United States denied Swain’s appeal. In its ruling, the Court wrote “… the overall percentage disparity has been small and reflects no studied attempt to include or exclude a specified number of Negroes.”
Jury panels are supposed to be selected at random from the eligible population. Because 26% of the eligible population was black, 8 black men on a panel of 100 might seem low.
### A Model
But one view of the data – a model, in other words – is that the panel was selected at random and ended up with a small number of black men just due to chance. This model is consistent with what the Supreme Court wrote in its ruling.
The model specifies the details of a chance process. It says the data are like a random sample from a population in which 26% of the people are black. We are in a good position to assess this model, because:
• We can simulate data based on the model. That is, we can simulate drawing at random from a population of whom 26% are black.
• Our simulation will show what a panel would look like if it were selected at random.
• We can then compare the results of the simulation with the composition of Robert Swain’s panel.
• If the results of our simulation are not consistent with the composition of Swain’s panel, that will be evidence against the model of random selection.
Let’s go through the process in detail.
### The Statistic
First, we have to choose a statistic to simulate. The statistic has to be able to help us decide between the model and alternative views about the data. The model says the panel was drawn at random. The alternative viewpoint, suggested by Robert Swain’s appeal, is that the panel was not drawn at random because it contained too few black men. A natural statistic, then, is the number of black men in our simulated sample of 100 men representing the panel. Small values of the statistic will favor the alternative viewpoint.
### Predicting the Statistic Under the Model
If the model were true, how big would the statistic typically be? To answer that, we have to start by working out the details of the simulation.
#### Generating One Value of the Statistic
First let’s figure out how to simulate one value of the statistic. For this, we have to sample 100 times at random from the population of eligible jurors and count the number of black men we get.
One way is to set up a table representing the eligible population and use sample as we did in the previous chapter. But there is also a quicker way, using a datascience function tailored for sampling at random from categorical distributions. We will use it several times in this chapter.
The sample_proportions function in the datascience library takes two arguments:
• the sample size
• the distribution of the categories in the population, as a list or array of proportions that add up to 1
It returns an array containing the distribution of the categories in a random sample of the given size taken from the population. That’s an array consisting of the sample proportions in all the different categories.
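If the datascience library is not at hand, the behavior described above can be approximated in pure Python (this sketch is an addition, not part of the book; unlike the real function it returns a plain list rather than a numpy array):

```python
import random

def sample_proportions_py(sample_size, distribution):
    """Pure-Python stand-in for datascience's sample_proportions: draw
    sample_size times from the categorical distribution and return the
    proportion of draws landing in each category."""
    categories = range(len(distribution))
    draws = random.choices(categories, weights=distribution, k=sample_size)
    return [draws.count(c) / sample_size for c in categories]

props = sample_proportions_py(100, [0.26, 0.74])  # e.g. [0.27, 0.73]
```

The proportions always add up to 1, just like the output of sample_proportions.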
To see how to use this, remember that according to our model, the panel is selected at random from a population of men among whom 26% were black and 74% were not. Thus the distribution of the two categories can be represented as the list [0.26, 0.74], which we have assigned to the name eligible_population. Now let’s sample at random 100 times from this distribution, and see what proportions of the two categories we get in our sample.
eligible_population = [0.26, 0.74]
sample_proportions(100, eligible_population)
array([0.27, 0.73])
That was easy! The proportion of black men in the random sample is item(0) of the output array.
Because there are 100 men in the sample, the number of men in each category is 100 times the proportion. So we can just as easily simulate counts instead of proportions, and access the count of black men only.
Run the cell a few times to see how the output varies.
# count of black men in a simulated panel
(100 * sample_proportions(100, eligible_population)).item(0)
27.0
#### Running the Simulation
To get a sense of the variability without running the cell over and over, let’s generate 10,000 simulated values of the count.
The code follows the same steps that we have used in every simulation. First, we define a function to simulate one value of the count, using the code we wrote above.
def one_simulated_count():
    return (100 * sample_proportions(100, eligible_population)).item(0)
Next, we create an array of 10,000 simulated counts by using a for loop.
counts = make_array()
repetitions = 10000
for i in np.arange(repetitions):
    counts = np.append(counts, one_simulated_count())
### The Prediction
To interpret the results of our simulation, we start as usual by visualizing the results by an empirical histogram.
Table().with_column(
    'Count in a Random Sample', counts
).hist(bins = np.arange(5.5, 46.6, 1))
The histogram tells us what the model of random selection predicts about our statistic, the count of black men in the sample.
To generate each simulated count, we drew 100 times at random from a population in which 26% were black. So, as you would expect, most of the simulated counts are around 26. They are not exactly 26: there is some variation. The counts range from about 10 to about 45.
### Comparing the Prediction and the Data
Though the simulated counts are quite varied, very few of them came out to be eight or less. The value eight is far out in the left hand tail of the histogram. It’s the red dot on the horizontal axis of the histogram.
Table().with_column(
    'Count in a Random Sample', counts
).hist(bins = np.arange(5.5, 46.6, 1))
plots.scatter(8, 0, color='red', s=30);
The simulation shows that if we select a panel of 100 jurors at random from the eligible population, we are very unlikely to get counts of black men as low as the eight that were in Swain’s jury panel. This is evidence that the model of random selection of the jurors in the panel is not consistent with the data from the panel.
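To put a number on "very unlikely" — a computation added here, not part of the original text, using only the standard library (random.choices stands in for sample_proportions) — we can compute the fraction of 10,000 simulated counts that are eight or fewer:

```python
import random

random.seed(0)
eligible_population = [0.26, 0.74]

def one_simulated_count():
    # category 0 = black men, category 1 = everyone else
    draws = random.choices([0, 1], weights=eligible_population, k=100)
    return draws.count(0)

repetitions = 10000
counts = [one_simulated_count() for _ in range(repetitions)]
fraction_at_most_8 = sum(c <= 8 for c in counts) / repetitions
# typically 0 out of 10,000 simulations land at eight or fewer
```

Counts as small as eight essentially never occur in the simulation, matching the far-left tail of the histogram.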
When the data and a model are inconsistent, the model is hard to justify. After all, the data are real. The model is just a set of assumptions. When assumptions are at odds with reality, we have to question those assumptions.
While it is possible that a panel like Robert Swain’s could have been generated by chance, our simulation demonstrates that it is very unlikely. Thus our assessment is that the model of random draws is not supported by the evidence. Swain’s jury panel does not look like the result of random sampling from the population of eligible jurors.
This method of assessing models is very general. Here is an example in which we use it to assess a model in a completely different setting.
### Mendel’s Pea Flowers
Gregor Mendel (1822-1884) was an Austrian monk who is widely recognized as the founder of the modern field of genetics. Mendel performed careful and large-scale experiments on plants to come up with fundamental laws of genetics.
Many of his experiments were on varieties of pea plants. He formulated sets of assumptions about each variety; these were his models. He then tested the validity of his models by growing the plants and gathering data.
Let’s analyze the data from one such experiment to see if Mendel’s model was good.
In a particular variety, each plant has either purple flowers or white. The color in each plant is unaffected by the colors in other plants. Mendel hypothesized that the plants should bear purple or white flowers at random, in the ratio 3:1.
### Mendel’s Model
For every plant, there is a 75% chance that it will have purple flowers, and a 25% chance that the flowers will be white, regardless of the colors in all the other plants.
#### Approach to Assessment
To go about assessing Mendel’s model, we can simulate plants under the assumptions of the model and see what it predicts. Then we will be able to compare the predictions with the data that Mendel recorded.
### The Statistic
Our goal is to see whether or not Mendel’s model is good. We need to simulate a statistic that will help us make this decision.
If the model is good, the percent of purple-flowering plants in the sample should be close to 75%. If the model is not good, the percent purple-flowering will be away from 75%. It may be higher, or lower; the direction doesn’t matter.
The key for us is the distance between 75% and the percent of purple-flowering plants in the sample. Big distances are evidence that the model isn’t good.
Our statistic, therefore, is the distance between the sample percent and 75%:

distance = |sample percent of purple-flowering plants - 75|
### Predicting the Statistic Under the Model
To see how big the distance would be if Mendel’s model were true, we can use sample_proportions to simulate the distance under the assumptions of the model.
First, we have to figure out how many times to sample. To do this, remember that we are going to compare our simulation with Mendel’s plants. So we should simulate the same number of plants that he had.
Mendel grew a lot of plants. There were 929 plants of the variety corresponding to this model. So we have to sample 929 times.
#### Generating One Value of the Statistic
The steps in the calculation:
• Sample 929 times at random from the distribution specified by the model and find the sample proportion in the purple-flowering category.
• Multiply the proportion by 100 to get a percent.
• Subtract 75 and take the absolute value of the difference.
That’s the statistic: the distance between the sample percent and 75.
We will start by defining a function that takes a proportion and returns the absolute difference between the corresponding percent and 75.
def distance_from_75(p):
    return abs(100*p - 75)
To simulate one value of the distance between the sample percent of purple-flowering plants and 75%, under the assumptions of Mendel’s model, we have to first simulate the proportion of purple-flowering plants among 929 plants under the assumption of the model, and then calculate the discrepancy from 75%.
model_proportions = [0.75, 0.25]
proportion_purple_in_sample = sample_proportions(929, model_proportions).item(0)
distance_from_75(proportion_purple_in_sample)
1.7491926803014053
That’s one simulated value of the distance between the sample percent of purple-flowering plants and 75% as predicted by Mendel’s model.
#### Running the Simulation
To get a sense of how variable the distance could be, we have to simulate it many more times.
We will generate 10,000 values of the distance. As before, we will first use the code we developed above to define a function that returns one simulated value of the distance under Mendel’s hypothesis.
def one_simulated_distance():
    proportion_purple_in_sample = sample_proportions(929, model_proportions).item(0)
    return distance_from_75(proportion_purple_in_sample)
Next, we will use a for loop to create 10,000 such simulated distances.
distances = make_array()
repetitions = 10000
for i in np.arange(repetitions):
    distances = np.append(distances, one_simulated_distance())
### The Prediction
The empirical histogram of the simulated values shows the distribution of the distance as predicted by Mendel’s model.
Table().with_column(
    'Distance between Sample % and 75%', distances
).hist()
Look on the horizontal axis to see the typical values of the distance, as predicted by the model. They are rather small. For example, a high proportion of the distances are in the range 0 to 1, meaning that for a high proportion of the samples, the percent of purple-flowering plants is within 1% of 75%, that is, the sample percent is in the range 74% to 76%.
### Comparing the Prediction and the Data
To assess the model, we have to compare this prediction with the data. Mendel recorded the number of purple and white flowering plants. Among the 929 plants that he grew, 705 were purple flowering. That’s just about 75.89%.
705 / 929
0.7588805166846071
So the observed value of our statistic – the distance between Mendel’s sample percent and 75 – is about 0.89:
observed_statistic = distance_from_75(705/929)
observed_statistic
0.8880516684607045
Just by eye, locate roughly where 0.89 is on the horizontal axis of the histogram. You will see that it is clearly in the heart of the distribution predicted by Mendel’s model.
The cell below redraws the histogram with the observed value plotted on the horizontal axis.
Table().with_column(
    'Distance between Sample % and 75%', distances
).hist()
plots.scatter(observed_statistic, 0, color='red', s=30);
The observed statistic is like a typical distance predicted by the model. By this measure, the data are consistent with the histogram that we generated under the assumptions of Mendel’s model. This is evidence in favor of the model.
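To make "typical" quantitative — again an addition to the text, using the standard library in place of sample_proportions — we can compute the fraction of simulated distances that are at least as large as the observed statistic:

```python
import random

random.seed(0)
model_proportions = [0.75, 0.25]

def one_simulated_distance():
    # category 0 = purple-flowering, category 1 = white-flowering
    draws = random.choices([0, 1], weights=model_proportions, k=929)
    proportion_purple = draws.count(0) / 929
    return abs(100 * proportion_purple - 75)

repetitions = 10000
distances = [one_simulated_distance() for _ in range(repetitions)]

observed_statistic = abs(100 * 705 / 929 - 75)   # about 0.888
fraction_at_least = sum(d >= observed_statistic for d in distances) / repetitions
```

Roughly half of the simulated distances come out at least as large as the observed one, confirming that a distance of about 0.89 is utterly unremarkable under Mendel's model.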
https://flatearth.ws/t/3d

Simulation and Observation
To determine if an observation is consistent with the spherical Earth model, we can create simulations to understand the expected result, and then see if they match the actual observation.
Flat-Earthers like to reject the results of simulation as being unreal, not real-world observation. In reality, the simulations are presented not to dispute their observation, but to demonstrate that their observation is consistent with expectation if Earth is a sphere.
https://codereview.stackexchange.com/questions/30227/what-is-the-more-effective-way-to-get-array-of-data-out-of-xml-for-d3-chart

# What is the more effective way to get array of data out of xml for d3 chart?
I am writing a small application that would allow a user to interactively manipulate an XML file with interactive d3 charts.

My XML file has the following hierarchy:
<?xml version="1.0" encoding="UTF-8" ?>
<testcase>
    <measurement>
        <type>M8015</type>
        <interval>15</interval>
    </measurement>
    <measurement>
        <type>M8016</type>
        <interval>15</interval>
    </measurement>
    <measurement>
        <type>M8020</type>
        <interval>15</interval>
    </measurement>
    ...
</testcase>
What would be the most effective way to get the data out of the XML into two arrays (one for the x-axis type, and one for the y-axis interval)?
I have tried the following but don't know if that's a good approach.
$(function() {
    var $mydata = new Array(1);
    $.get("./testcase.xml", function(xml) {
        var $chart = d3.select("body").append("div")
            .attr("class", "chart")
            .attr("id", "chart");
        $(xml).find('measurement').each(function() {
            var $meas = $(this);
            var $type = $meas.find('type').text();
            var $interval = $meas.find('interval').text();
            $mydata.push({'type': $type, 'interval': $interval});
        });
    });
});
Any suggestions on whether that's the correct way of handling this problem?
I suggest you use JSON when manipulating the data, because the equivalent code is shorter and much easier to work with than its XML counterpart.
I can think of two ways you can do this:
• Convert your files to JSON, if you can.
• If you can't convert to JSON files, stay with what you are doing, and read the XML but convert them to JS objects before handing them over to the chart. There are XML-to-JSON libraries out there which you can use.
A similar data structure in JSON would look like
{
    "testcase" : {
        "measurement" : [
            {
                "type" : "M8015",
                "interval" : 15
            },{
                "type" : "M8016",
                "interval" : 15
            },{
                "type" : "M8020",
                "interval" : 15
            }
        ]
    }
}
And it's a matter of plucking out the data:
var myData = data.testcase.measurement;
https://optimization-online.org/2023/02/a-classification-method-based-on-a-cloud-of-spheres/

# A classification method based on a cloud of spheres
In this article we propose a binary classification model to distinguish a specific class that corresponds to a characteristic that we intend to identify (fraud, spam, disease).
The classification model is based on a cloud of spheres that circumscribe the points of the class to be identified. The aim is to build a model based on a single cloud rather than a disjoint set of clouds, a condition expressed through the connectivity of a graph induced by the spheres. To solve the problem, called the Cloud of Connected Spheres, we propose a quadratic model with continuous and binary variables (MINLP) that minimizes the number of spheres. In many models, imposing connectivity requires an exponential number of constraints. However, due to the particular conditions of the problem under study, connectivity is imposed with K-1 linear constraints, where K is the total number of spheres. This classification model is effective when the structure of the class to be identified is highly non-linear and non-convex, and it also adapts to the case of linear separation. Unlike neural networks, the classification model is transparent, with its structure perfectly identified. No meta-parameters are needed unless one also intends to maximize the separation margin, as is done in SVM. Finding the global optimum for large instances is quite difficult, and a heuristic that presents good results is proposed.
https://tex.stackexchange.com/questions/15981/reusing-an-image-without-duplicating-it | # Reusing an image without duplicating it
PGF has a nice feature: declaring an image once and re-using it again and again. With a PDF backend this reduces the size of the output, as the image is embedded in the file only once. However, the PGF manual clearly states that LaTeX users should prefer `\includegraphics` to this mechanism.
Is it possible to mimic this feature, i.e., reusing an image without increasing the file size, using LaTeX's native graphics packages? Would putting the image in a box have this effect?
`\includegraphics` does the trick itself.
• For PDF output I'm sure of that, but not for DVI/PS. There it might depend on the `dvips` tool used. Also AFAIK the `pgf` graphic commands are using `\includegraphics` internally for PDF output. – Martin Scharrer Apr 17 '11 at 7:48
• DVI does not contain images. pdfTeX, dvipdfm(x), XeTeX and LuaTeX reuse the images for PDF output. However, dvips indeed copies the code repeatedly and produces a huge `.ps` file. It seems limited by the backend graphics driver. I've no idea about that. – Leo Liu Apr 17 '11 at 9:08
• @Martin The `pgf` graphic commands directly use pdftex commands if the `pdftex` output driver is used. For other drivers (including ps unless I am mistaken), the fallback is to use `\includegraphics`. I believe that there might be a difference between `pgf` and `\includegraphics` when using the `pdftex` engine (I have related remarks of the pgf manual in mind... but perhaps `\includegraphics` has the related features now as well). – Christian Feuersänger Apr 22 '11 at 22:19 | 2019-06-20 03:21:49 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9374451041221619, "perplexity": 2451.77487838425}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560627999130.98/warc/CC-MAIN-20190620024754-20190620050754-00528.warc.gz"} |
https://physics.stackexchange.com/questions/434758/electric-potential-of-non-uniformly-charged-infinite-plane/702337#702337 | Electric Potential of Non-Uniformly Charged Infinite Plane
A little background: I was tutoring an undergrad upperclassman when we came to a problem that he had been assigned which I couldn't make heads or tails of - at least in terms of what was being expected of him.
The problem asks to find the electric potential above an infinite sheet lying in the $$xy$$-plane and carrying a surface charge density of $$\sigma=\sigma_0 \sin(\kappa \ x)$$. The answer must be in terms of $$\sigma_0$$ and $$\kappa$$.
From the statement of the problem and the context of the class, it is clear that the solution is expected to be analytical, which immediately rules out a numerical or series solution.
The current topic of the class is solving Laplace's Equation using separation of variables, but the associated Poisson's Equation for this problem (viz. $$\frac{\partial^2 V}{\partial x^2}+\frac{\partial^2 V}{\partial z^2}=-\frac{\sigma_0}{\epsilon_0}\sin(\kappa \ x)\delta(z)$$) is clearly not separable.
On the other hand, a more straightforward approach such as integrating for the potential over the entire sheet leads to an intractable integral. Like this: $$V(x',z')=\frac{\sigma_0}{4\pi\epsilon_0}\int_{-\infty}^{\infty}dy\int_{-\infty}^{\infty}dx \ \Big[ \frac{\sin(\kappa \ x)}{\sqrt{(x'-x)^2+(y)^2+(z')^2}} \Big]$$ I also tried cutting the sheet into infinitesimally wide, infinitely long strips along the $$y$$-direction and then integrating over the potential of an infinite wire, but of course, this results in the same sorts of weird integrals involving the natural log.
Is there an analytical method for solving this problem? Am I forgetting a technique, or is there perhaps a trick to evaluating one of these weird integrals?
• There’s a bit of a problem with your question: as there are charges at infinity where do you define the reference potential? Oct 16, 2018 at 4:49
• @ZeroTheHero It's a good question. I would assume that since the sheet has no net monopole-moment we may choose the potential at $z=\infty$ to be zero without any problems. Oct 16, 2018 at 5:28
The final solution should be something like $$V\left (x, y, z \right) = \frac{\sigma_0}{2\kappa \epsilon_0}\sin\left(\kappa x \right)e^{-\kappa |z|}$$.
The main idea is to solve the Poisson equation outside the plane, where it reduces to Laplace's equation $$\nabla ^2 V = 0$$, and to apply boundary conditions only after having found the general form of the potential.
Since the solution is unique, we can guess a form and if it solves the equation we have solved the problem.
To do this, we will study the symmetries of the problem and write down the consequences of these:
• Translation invariance along $$y$$
This means that the potential is a function of $$x$$ and $$z$$ only.
$$V\left (x,y,z\right) = V\left(x,z\right)$$
• Translation invariance along $$x$$ by $$\frac{2\pi}{\kappa}$$
$$V\left(x, z \right) = V\left(x+\frac{2\pi}{\kappa}, z\right)$$
• Symmetry under spatial reflection with respect to the $$x-y$$ plane.
$$V\left(x, z \right) = V \left(x, -z \right)$$
Notice that the latter condition can be used to study the problem in the half space $$z>0$$. Indeed, once we find $$V$$ in the upper half space, we also get it in the $$z<0$$ half space by making the substitution $$z \rightarrow |z|$$ in $$V\left( x,z\right)$$.
Thanks to this consideration, we will focus on the $$z>0$$ half space from now on.
We will look for a separable solution (we are in the vacuum outside the plate!) $$V\left( x,z\right) = A\left(x \right)B\left( z\right)$$. Plugging into Laplace's equation and dividing by $$V$$ we get $$\frac{A''(x)}{A(x)} + \frac{B''(z)}{B(z)} = 0$$
A possible solution which respects the symmetries we want the solution to have is given by $$V\left(x,z\right)=V_0\sin(\kappa x) e ^{-\kappa z}$$
We only need to check that the electric field (calculated by taking the derivative along the normal direction on the plate) equals $$\frac{\sigma}{2\epsilon_0}$$, the electric field near a charged sheet. This can be done by choosing $$V_0 = \frac{\sigma_0}{2\kappa\epsilon_0}$$. This is the potential in the upper half space ($$z>0$$).
In order to find $$V$$ in the whole space, we can do the substitution $$z\rightarrow |z|$$ as we discussed earlier.
• You know, I actually think that this answer is on the right track. I just think that you played a little too fast and loose with the numbers and missed a few minor factors. I actually think that the answer is $V(x,z)=-\frac{\sigma_0}{2\epsilon_0\kappa}\sin(\kappa x)e^{-\kappa|z|}$. I may write up a break down of this solution later when I have clarified my thoughts. Oct 16, 2018 at 5:45
• Oh yeah right, I thought that the problem was a metallic "half space" and that you had to find the $V$ only on one side of the plate, since on the other side $\vec{E}$ is $0$. I guess that this also solves the parallel $\vec{E}$ problem, since we shouldn't impose any additional condition (in the plate case I think it is necessary if we want to have $\nabla \times \vec{E} = 0$) Oct 16, 2018 at 6:17
• Right. Also, it's worthy pointing out that this problem actually can't take place on a conductor since the charge distribution varies dramatically between points with the same curvature. Some of the surface is even negatively charged while other parts are positive, which would be impossible on a conductor. Oct 16, 2018 at 6:28
• The only thing I don't agree with in your solution is the sign. There are 2 signs when we calculate the field given $V$ in our case. One comes from taking the gradient along $z$ (negative exponent), and the other from the definition $\vec{E} = -\nabla V$. In the end we should get a positive $E_z$ right above the plate if the charge is positive at a given point of the plate. Oct 16, 2018 at 7:06
• You're right. I forgot to do the negative gradient. Oct 16, 2018 at 16:01
Here's my thoughts, hopefully closer to what the expected approach is (though I haven't gone through the solution yet):
This problem is equivalent to solving for the potential $$V(x,z)$$ in the $$z>0$$ half-space with the boundary conditions $$V(z\rightarrow \infty)=0$$, and the electric field specified on the $$z=0$$ plane. The latter is a Neumann boundary condition, which we can obtain by using Gauss's law in integral form over a cylinder of infinitesimal height along the $$z=0$$ surface; this shows that, as with a uniformly charged infinite plane, the electric field on the $$z=0$$ boundary must be $$\mathbf{E}=\frac{\sigma}{2\epsilon}\hat z$$. Thus, we have

\begin{align} V\bigg|_{z\rightarrow \infty} &= 0\\[1em] \frac{\partial V}{\partial z}\bigg|_{z=0} &= \frac{-\sigma}{2\epsilon}\\[1em] \frac{\partial V}{\partial x}\bigg|_{z=0} &= 0 \end{align}

Now the problem is finding a solution to Laplace's equation in 2D, $$\frac{\partial^2 V}{\partial x^2} + \frac{\partial^2 V}{\partial z^2} = 0$$ subject to the above boundary conditions, such that $$V$$ is periodic in $$x$$. I don't have time to go through it now, I'll try to revisit, but it should (hopefully) be a straightforward Laplace equation problem. 
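The proposed closed form can be checked symbolically (a quick Python/SymPy verification, not part of the original thread): the candidate $$V=\frac{\sigma_0}{2\kappa\epsilon_0}\sin(\kappa x)e^{-\kappa z}$$ is harmonic for $$z>0$$ and its normal derivative at $$z=0$$ reproduces the Neumann condition $$\partial V/\partial z|_{z=0} = -\sigma(x)/(2\epsilon_0)$$:

```python
import sympy as sp

x, z = sp.symbols("x z", real=True)
sigma0, kappa, eps0 = sp.symbols("sigma0 kappa epsilon0", positive=True)

# Candidate potential in the upper half space z > 0
V = sigma0 / (2 * kappa * eps0) * sp.sin(kappa * x) * sp.exp(-kappa * z)

# Laplace's equation holds away from the sheet
laplacian = sp.diff(V, x, 2) + sp.diff(V, z, 2)
print(sp.simplify(laplacian))  # 0

# Neumann condition on the sheet: dV/dz at z = 0 equals -sigma(x)/(2*eps0)
dVdz = sp.diff(V, z).subs(z, 0)
print(sp.simplify(dVdz + sigma0 * sp.sin(kappa * x) / (2 * eps0)))  # 0
```

Both residuals vanish identically, consistent with the uniqueness argument in the accepted answer.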
| 2022-12-09 15:56:06 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 44, "wp-katex-eq": 0, "align": 1, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9573306441307068, "perplexity": 167.55985312129843}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446711417.46/warc/CC-MAIN-20221209144722-20221209174722-00743.warc.gz"} |
https://mathoverflow.net/questions/115457/existence-of-cut-based-pseudorandom-graphs-beating-the-random-graph | # Existence of (Cut-Based) pseudorandom graphs beating the random graph
The question is simply this: Does there exist a (family of) graphs $G=(V,E)$ such that $\max_{S\subset V} |E(S,S^c)- \frac{|S||S^c|}{2}|\leq o(n^{3/2})$? Such graphs would be very pseudorandom, as the edge density of all their cuts would be extremely close to the expected value if each edge had been picked with probability one half.
Background: As discussed in the MathOverflow question Max cut value in a random graph, with high probability a random $G(n,\frac{1}{2})$ graph has a cut $(S,S^c)$ with more than $\frac{n^2}{8}+\Omega(n^{3/2})$ edges. This implies the following max-deviation lower bound: For almost every $G=(V,E)$ we have,
$\max_{S\subset V} |E(S,S^c)- \frac{|S||S^C|}{2}| = \Omega(n^{3/2})$
This lower bound means that simply by taking a random graph you cannot solve the above problem.
It could be true that the above almost-every-graph result actually holds for every graph, i.e. that this quantity is always $\Omega(n^{3/2})$. A proof of this would be quite interesting and would give evidence for a conjecture that I have in mind.
If you look at a random partition into two parts of any graph with vertex degrees close to n/2, the variance of the number of edges across the cut is $\Theta(n^2)$, which is within a constant factor of what it is for random graphs. This proves nothing really, but I strongly suspect the worst of all $2^n$ cuts will still be $\Theta(n^{3/2})$ above expectation. – Brendan McKay Dec 5 '12 at 14:18
The approach of taking a random partition and analyzing higher moments would probably not be sufficiently strong to prove this. But one approach could be to use the fact that we know this for random graphs, and hence for graphs $O(n^{3/2})$-close to random graphs. Given a graph $G$, we then have to see what this non-randomness can give us. Maybe in graphs far from random one can use a random partition to achieve this (indeed, if the relative density is bounded away from 1/2 this works). Also, some have suggested that maybe *discrepancy theory* is the keyword for this problem. – Nick B. Dec 5 '12 at 18:20
## 1 Answer
This isn't a full solution, just a couple of observations too long to fit into a comment.
First of all, it's worth noting that both of the "one-sided" versions of this statement are false. In the complete bipartite graph $K_{n/2,n/2}$, every cut satisfies $E(S,S^C) \geq \frac{1}{2}|S||S^C|$, and in the union of two disjoint copies of $K_{n/2}$ every cut satisfies $E(S,S^C) \leq \frac{1}{2}|S||S^C|$ (these examples can be modified by adding $O(n)$ edges so that their density is exactly $1/2$ if $n$ is a multiple of $4$). This rules out a number of arguments that attempt to construct (randomly or otherwise) an unusually dense cut.
Secondly, the following weaker statement IS true: If $G$ is an $n/2$-regular graph, then there are disjoint subsets $S$ and $T$ such that $E(S,T) \geq \frac{1}{2}|S| |T| + \Omega(n^{3/2})$. A rough sketch of the argument is to take $S$ to be a random subset of the vertices where each vertex is in $S$ with probability $0.1$, then take $T$ to be all the vertices outside of $S$ having at least $|S|/2+0.01 \sqrt{n}$ neighbors in $S$. This automatically forces $E(S,T) \geq |S||T|/2+0.01 |T| \sqrt{n}$, so we just need to show that $|T|$ is large for some $S$.
Each vertex is in $T$ with positive probability (this is where we need regularity), so the expected number of vertices in $T$ is at least $cn$. This means that some $S$ must have a $T$ at least this large.
Unfortunately, this probably does not extend to an argument giving a cut across the whole graph, since the conclusion is one-sided.
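The $K_{n/2,n/2}$ example from the first paragraph can be confirmed by brute force for small sizes (a throwaway Python check, not part of the answer): enumerating all cuts of $K_{m,m}$ shows that the slack $E(S,S^C)-\frac{1}{2}|S||S^C|$ is never negative, and a short calculation gives it as $(a-b)^2/2$ when $S$ takes $a$ vertices from one side and $b$ from the other, so the minimum is $0$.

```python
from itertools import combinations, product

def min_cut_slack(m):
    """Smallest value of E(S, S^c) - |S||S^c|/2 over all nonempty
    proper subsets S of K_{m,m} (sides A = 0..m-1, B = m..2m-1)."""
    n = 2 * m
    edges = list(product(range(m), range(m, n)))  # complete bipartite
    best = float("inf")
    for k in range(1, n):
        for subset in combinations(range(n), k):
            S = set(subset)
            cut = sum((u in S) != (v in S) for u, v in edges)
            slack = cut - len(S) * (n - len(S)) / 2
            best = min(best, slack)
    return best

print(min_cut_slack(4))  # 0.0 — no cut in K_{4,4} dips below the random density
```

This only checks the one-sided counterexample; it says nothing about the $\Omega(n^{3/2})$ question itself.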
This is really interesting. I have two comments: First, can't you deduce the general statement from your weak form by a reduction: Let $G=(V,E)$ be our graph and take $G_1$ and $G_2$ to be two copies of $G$. Take $G'=G_1\cup G_2$ and now connect $v\in G_1$ with $u\in G_2$ if their preimages in $G$ were not adjacent. Now the relative degree of each vertex in $G'$ would be 1/2. Any cut, or weak-cut, deviation in $G'$ will manifest itself as a deviation in $G$, losing a factor of 1/4 in the reduction. – Nick B. Dec 6 '12 at 22:22
We wouldn't be done with the reduction yet because the "cut" that we get in $G$ might be in the form $(S,T)$ such that $S\cap T\neq \emptyset$. But I think in that case if $S\cap T$ is large enough to be annoying, you should be able to still get the desired result by taking the intersection $S\cap T$ and taking a random cut across it. You basically reduce to the case that if a graph has relative density bounded away from 1/2 the above result is easy by taking random cuts. I hope the hand-wavy argument above actually goes through – Nick B. Dec 6 '12 at 22:53
Second comment: I don't see why your argument resolves the general case. I might be making a silly mistake but consider the following: Let $G$ be (a family) of counterexample to above,i.e. $|E(S,S^c)-|S||S^c|/2|\leq o(n^{3/2})$ .Then use your above construction to take $E(S,T)\geq 1/2 |S||T|+ \Omega(n^{3/2})$ .Let $U=(S\cup T)^c$. Apply the assumptions above to the cuts $(S, U\cup T)$ and $(T,S \cup U)$. This will imply that $E(S,U)≤\frac{|S||U|}{2}−\Omega(n^{3/2})$ and similarly for $(S,T)$. But this would imply a large deviation in the cut $(S\cup T,U)$. Doesn't it? – Nick B. Dec 6 '12 at 23:50 | 2016-02-08 12:41:48 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8928940296173096, "perplexity": 192.0914737895568}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-07/segments/1454701153323.32/warc/CC-MAIN-20160205193913-00161-ip-10-236-182-209.ec2.internal.warc.gz"} |
https://www.zbmath.org/authors/?q=ai%3Aantoine.xavier | ## Antoine, Xavier
Author ID: antoine.xavier Published as: Antoine, Xavier; Antoine, X. External Links: MGP · ORCID
Documents Indexed: 100 Publications since 1998 · Reviewing Activity: 97 Reviews · Co-Authors: 67 Co-Authors with 95 Joint Publications · 1,187 Co-Co-Authors
### Co-Authors
4 single-authored · 19 Besse, Christophe · 16 Lorin, Emmanuel · 15 Geuzaine, Christophe A. · 8 Darbas, Marion · 7 Barucq, Hélène · 7 Klein, Pauline · 7 Thierry, Bertrand · 6 Tang, Qinglin · 5 Duboscq, Romain · 4 Bendali, Abderrahmane · 4 Lu, Ya Yan · 4 Ramdani, Karim · 3 Bériot, Hadrien · 3 Chniti, Chokri · 3 Ehrhardt, Matthias · 3 El Bouajaji, Mohamed · 3 Marchner, Philippe · 3 Vernhet, Laurent · 3 Zhang, Jiwei · 3 Zhang, Yong · 2 Alzubaidi, Hasan · 2 Bandrauk, André D. · 2 Boubendir, Yassine · 2 Fillion-Gourdeau, François · 2 Gasperini, David · 2 Kechroud, Riyad · 2 Khajah, Tahsin · 2 Modave, Axel · 2 Pang, Gang · 2 Rispoli, Vittorio · 2 Schröder, Udo · 2 Soulaimani, Azzeddine · 1 Amalberti, Julien · 1 Arnold, Anton · 1 Bao, Weizhu · 1 Beise, Hans-Peter · 1 Biese, Hans-Peter · 1 Bordas, Stéphane Pierre Alain · 1 Burnard, Pete · 1 Caudron, B. · 1 Colignon, D. · 1 Descombes, Stéphane · 1 Dreyfuss, Pierre · 1 Dsouza, Shaima M. · 1 Gabard, Gwénaël · 1 Hou, Fengji · 1 Huang, Yuexia · 1 Ji, Songsong · 1 Lemou, Mohammed · 1 Levitt, Antoine · 1 Li, Dongfang · 1 Lieu, Alice · 1 MacLean, Steve · 1 Marsic, N. · 1 Mouysset, Vincent · 1 Natarajan, Sundararajan · 1 Petrov, Pavel S. · 1 Pinçon, Bruno · 1 Sater, J. · 1 Schädle, Achim · 1 Szeftel, Jérémie · 1 Tang, Shaoqiang · 1 Tournier, Simon · 1 Vion, Alexandre · 1 Yang, Xu · 1 Yang, Yibo · 1 Yuan, Jianhua
### Serials
24 Journal of Computational Physics · 7 Computer Physics Communications · 6 Communications in Computational Physics · 5 Computer Methods in Applied Mechanics and Engineering · 3 International Journal for Numerical Methods in Engineering · 3 SIAM Journal on Applied Mathematics · 3 European Series in Applied and Industrial Mathematics (ESAIM): Mathematical Modelling and Numerical Analysis · 2 Mathematics of Computation · 2 Applied Mathematics and Computation · 2 Journal of Computational and Applied Mathematics · 2 Numerische Mathematik · 2 Journal of Scientific Computing · 2 International Journal of Computer Mathematics · 2 SIAM Journal on Scientific Computing · 2 Comptes Rendus. Mathématique. Académie des Sciences, Paris · 2 Journal of Algorithms & Computational Technology · 1 IMA Journal of Applied Mathematics · 1 Journal of Mathematical Analysis and Applications · 1 Quarterly Journal of Mechanics and Applied Mathematics · 1 Wave Motion · 1 SIAM Journal on Numerical Analysis · 1 Applied Mathematics Letters · 1 Asymptotic Analysis · 1 M$$^3$$AS. Mathematical Models & Methods in Applied Sciences · 1 Numerical Algorithms · 1 Journal de Mathématiques Pures et Appliquées. Neuvième Série · 1 Advances in Engineering Software · 1 Numerical Linear Algebra with Applications · 1 Comptes Rendus de l’Académie des Sciences. Série I. Mathématique · 1 Mathematical and Computer Modelling of Dynamical Systems · 1 Communications in Nonlinear Science and Numerical Simulation · 1 Journal of Computational Acoustics · 1 Cubo · 1 Mathematical Geosciences
### Fields
65 Numerical analysis (65-XX) · 62 Partial differential equations (35-XX) · 25 Optics, electromagnetic theory (78-XX) · 17 Fluid mechanics (76-XX) · 13 Quantum theory (81-XX) · 9 Operator theory (47-XX) · 5 Statistical mechanics, structure of matter (82-XX) · 2 Real functions (26-XX) · 2 Integral equations (45-XX) · 2 Global analysis, analysis on manifolds (58-XX) · 2 Mechanics of deformable solids (74-XX) · 2 Geophysics (86-XX) · 1 History and biography (01-XX) · 1 Potential theory (31-XX) · 1 Ordinary differential equations (34-XX) · 1 Harmonic analysis on Euclidean spaces (42-XX) · 1 Calculus of variations and optimal control; optimization (49-XX)
### Citations contained in zbMATH Open
80 Publications have been cited 1,165 times in 561 Documents
Computational methods for the dynamics of the nonlinear Schrödinger/Gross-Pitaevskii equations. Zbl 1344.35130
Antoine, Xavier; Bao, Weizhu; Besse, Christophe
2013
A review of transparent and artificial boundary conditions techniques for linear and nonlinear Schrödinger equations. Zbl 1364.65178
Antoine, Xavier; Arnold, Anton; Besse, Christophe; Ehrhardt, Matthias; Schadle, Achim
2008
Bayliss-Turkel-like radiation conditions on surfaces of arbitrary shape. Zbl 0923.35179
Antoine, X.; Barucq, H.; Bendali, A.
1999
A quasi-optimal non-overlapping domain decomposition algorithm for the Helmholtz equation. Zbl 1243.65144
Boubendir, Y.; Antoine, X.; Geuzaine, C.
2012
Unconditionally stable discretization schemes of non-reflecting boundary conditions for the one-dimensional Schrödinger equation. Zbl 1037.65097
Antoine, X.; Besse, C.
2003
Numerical schemes for the simulation of the two-dimensional Schrödinger equation using non-reflecting boundary conditions. Zbl 1053.65072
Antoine, Xavier; Besse, Christophe; Mouysset, Vincent
2004
Alternative integral equations for the iterative solution of acoustic scattering problems. Zbl 1064.76095
Antoine, X.; Darbas, M.
2005
Generalized combined field integral equations for the iterative solution of the three-dimensional Helmholtz equation. Zbl 1123.65117
Antoine, Xavier; Darbas, Marion
2007
Artificial boundary conditions for one-dimensional cubic nonlinear Schrödinger equations. Zbl 1109.35102
Antoine, Xavier; Besse, Christophe; Descombes, Stephane
2006
Robust and efficient preconditioned Krylov spectral solvers for computing the ground states of fast rotating and strongly interacting Bose-Einstein condensates. Zbl 1349.82027
Antoine, Xavier; Duboscq, Romain
2014
An improved surface radiation condition for high-frequency acoustic scattering problems. Zbl 1120.76058
Antoine, Xavier; Darbas, Marion; Lu, Ya Yan
2006
Absorbing boundary conditions for general nonlinear Schrödinger equations. Zbl 1231.35223
Antoine, Xavier; Besse, Christophe; Klein, Pauline
2011
Fast approximate computation of a time-harmonic scattered field using the on-surface radiation condition method. Zbl 1001.78008
Antoine, Xavier
2001
GPELab, a Matlab toolbox to solve Gross-Pitaevskii equations. I: Computation of stationary solutions. Zbl 1348.35003
Antoine, Xavier; Duboscq, Romain
2014
Efficient spectral computation of the stationary states of rotating Bose-Einstein condensates by preconditioned nonlinear conjugate gradient methods. Zbl 1380.81496
Antoine, Xavier; Levitt, Antoine; Tang, Qinglin
2017
Absorbing boundary conditions for the one-dimensional Schrödinger equation with an exterior repulsive potential. Zbl 1161.65074
Antoine, Xavier; Besse, Christophe; Klein, Pauline
2009
Construction, structure and asymptotic approximations of a microdifferential transparent boundary condition for the linear Schrödinger equation. Zbl 1129.35324
Antoine, Xavier; Besse, Christophe
2001
On the numerical approximation of high-frequency acoustic multiple scattering problems by circular cylinders. Zbl 1135.65403
Antoine, Xavier; Chniti, Chokri; Ramdani, Karim
2008
Modeling and computation of Bose-Einstein condensates: stationary states, nucleation, dynamics, stochasticity. Zbl 1344.35114
Antoine, Xavier; Duboscq, Romain
2015
A quasi-optimal domain decomposition algorithm for the time-harmonic Maxwell’s equations. Zbl 1349.78066
El Bouajaji, M.; Thierry, B.; Antoine, X.; Geuzaine, C.
2015
On the ground states and dynamics of space fractional nonlinear Schrödinger/Gross-Pitaevskii equations with rotation term and nonlocal nonlinear interactions. Zbl 1380.65296
Antoine, Xavier; Tang, Qinglin; Zhang, Yong
2016
Analytic preconditioners for the boundary integral solution of the scattering of acoustic waves by open surfaces. Zbl 1189.76356
Antoine, Xavier; Bendali, Abderrahmane; Darbas, Marion
2005
Numerical accuracy of a Padé-type non-reflecting boundary condition for the finite element solution of acoustic scattering problems at high-frequency. Zbl 1113.76051
Kechroud, R.; Antoine, X.; Soulaïmani, A.
2005
Approximate local magnetic-to-electric surface operators for time-harmonic Maxwell’s equations. Zbl 1351.78010
El Bouajaji, M.; Antoine, X.; Geuzaine, C.
2014
GPELab, a Matlab toolbox to solve Gross-Pitaevskii equations. II: Dynamics and stochastic simulations. Zbl 1344.82004
Antoine, Xavier; Duboscq, Romain
2015
Absorbing boundary conditions for the two-dimensional Schrödinger equation with an exterior potential. I: Construction and a priori estimates. Zbl 1251.35096
Antoine, Xavier; Besse, Christophe; Klein, Pauline
2012
Microlocal diagonalization of strictly hyperbolic pseudodifferential systems and application to the design of radiation conditions in electromagnetism. Zbl 0983.35138
Antoine, Xavier
2001
Analytic preconditioners for the electric field integral equation. Zbl 1210.65193
Antoine, X.; Bendali, A.; Darbas, M.
2004
Absorbing boundary conditions for relativistic quantum mechanics equations. Zbl 1349.81080
Antoine, X.; Lorin, E.; Sater, J.; Fillion-Gourdeau, F.; Bandrauk, A. D.
2014
Computational performance of simple and efficient sequential and parallel Dirac equation solvers. Zbl 1411.35234
Antoine, X.; Lorin, E.
2017
Corner treatments for high-order local absorbing boundary conditions in high-frequency acoustic scattering. Zbl 1453.65340
Modave, A.; Geuzaine, C.; Antoine, X.
2020
An integral preconditioner for solving the two-dimensional scattering transmission problem using integral equations. Zbl 1168.78001
Antoine, X.; Boubendir, Y.
2008
Domain decomposition method and high-order absorbing boundary conditions for the numerical simulation of the time dependent Schrödinger equation with ionization and recombination by intense electric field. Zbl 1332.78026
Antoine, X.; Lorin, E.; Bandrauk, A. D.
2015
Absorbing boundary conditions for the two-dimensional Schrödinger equation with an exterior potential. II: Discretization and numerical results. Zbl 1457.65093
Antoine, Xavier; Besse, Christophe; Klein, Pauline
2013
High-order IMEX-spectral schemes for computing the dynamics of systems of nonlinear Schrödinger/Gross-Pitaevskii equations. Zbl 1422.65277
Antoine, Xavier; Besse, Christophe; Rispoli, Vittorio
2016
GetDDM: an open framework for testing optimized Schwarz methods for time-harmonic wave problems. Zbl 1380.65477
Thierry, B.; Vion, A.; Tournier, S.; El Bouajaji, M.; Colignon, D.; Marsic, N.; Antoine, X.; Geuzaine, C.
2016
Towards accurate artificial boundary conditions for nonlinear PDEs through examples. Zbl 1184.35014
Antoine, Xavier; Besse, Christophe; Szeftel, Jérémie
2009
A non-overlapping domain decomposition method with high-order transmission conditions and cross-point treatment for Helmholtz problems. Zbl 07337986
Modave, A.; Royer, A.; Antoine, X.; Geuzaine, C.
2020
High-frequency asymptotic analysis of a dissipative transmission problem resulting in generalized impedance boundary conditions. Zbl 0986.76080
Antoine, X.; Barucq, H.; Vernhet, L.
2001
Wide frequency band numerical approaches for multiple scattering problems by disks. Zbl 1280.78002
Antoine, Xavier; Ramdani, Karim; Thierry, Bertrand
2012
Approximation by generalized impedance boundary conditions of a transmission problem in acoustic scattering. Zbl 1074.78004
Antoine, Xavier; Barucq, Hélène
2005
Formulation and accuracy of on-surface radiation conditions for acoustic multiple scattering problems. Zbl 1410.78010
Alzubaidi, Hasan; Antoine, Xavier; Chniti, Chokri
2016
$$\mu$$-diff: an open-source Matlab toolbox for computing multiple scattering problems by disks. Zbl 1380.65478
Thierry, Bertrand; Antoine, Xavier; Chniti, Chokri; Alzubaidi, Hasan
2015
An analysis of Schwarz waveform relaxation domain decomposition methods for the imaginary-time linear Schrödinger and Gross-Pitaevskii equations. Zbl 1383.65122
Antoine, X.; Lorin, E.
2017
On the construction of approximate boundary conditions for solving the interior problem of the acoustic scattering transmission problem. Zbl 1152.76466
Antoine, X.; Barucq, H.
2005
Computing high-frequency scattered fields by beam propagation methods: a prospective study. Zbl 1210.78015
Antoine, Xavier; Huang, Yuexia; Lu, Ya Yan
2010
Modeling photonic crystals by boundary integral equations and Dirichlet-to-Neumann maps. Zbl 1143.78012
Yuan, Jianhua; Lu, Ya Yan; Antoine, Xavier
2008
Frozen Gaussian approximation based domain decomposition methods for the linear Schrödinger equation beyond the semi-classical regime. Zbl 1349.65430
Lorin, E.; Yang, X.; Antoine, X.
2016
Derivation and analysis of computational methods for fractional Laplacian equations with absorbing layers. Zbl 1468.65159
Antoine, Xavier; Lorin, Emmanuel; Zhang, Y.
2021
A frequency domain method for scattering problems with moving boundaries. Zbl 07425570
Gasperini, D.; Beise, H. P.; Schroeder, U.; Antoine, X.; Geuzaine, C.
2021
ODE-based double-preconditioning for solving linear systems $$A^{\alpha}x=b$$ and $$f(A)x=b$$. Zbl 07478625
Antoine, Xavier; Lorin, Emmanuel
2021
Stable perfectly matched layers with Lorentz transformation for the convected Helmholtz equation. Zbl 07501589
Marchner, Philippe; Bériot, Hadrien; Antoine, Xavier; Geuzaine, Christophe
2021
Corner treatments for high-order local absorbing boundary conditions in high-frequency acoustic scattering. Zbl 1453.65340
Modave, A.; Geuzaine, C.; Antoine, X.
2020
A non-overlapping domain decomposition method with high-order transmission conditions and cross-point treatment for Helmholtz problems. Zbl 07337986
Modave, A.; Royer, A.; Antoine, X.; Geuzaine, C.
2020
Perfectly matched layer for computing the dynamics of nonlinear Schrödinger equations by pseudospectral methods. Application to rotating Bose-Einstein condensates. Zbl 1453.65353
Antoine, Xavier; Geuzaine, Christophe; Tang, Qinglin
2020
Pseudospectral computational methods for the time-dependent Dirac equation in static curved spaces. Zbl 1436.65127
Antoine, Xavier; Fillion-Gourdeau, François; Lorin, Emmanuel; MacLean, Steve
2020
A non-overlapping Schwarz domain decomposition method with high-order finite elements for flow acoustics. Zbl 07413173
Lieu, Alice; Marchner, Philippe; Gabard, Gwénaël; Bériot, Hadrien; Antoine, Xavier; Geuzaine, Christophe
2020
Towards perfectly matched layers for time-dependent space fractional PDEs. Zbl 1452.65266
Antoine, Xavier; Lorin, Emmanuel
2019
Efficient numerical computation of time-fractional nonlinear Schrödinger equations in unbounded domain. Zbl 07416715
Zhang, Jiwei; Li, Dongfang; Antoine, Xavier
2019
On the rate of convergence of Schwarz waveform relaxation methods for the time-dependent Schrödinger equation. Zbl 1419.65049
Antoine, X.; Lorin, E.
2019
A simple pseudospectral method for the computation of the time-dependent Dirac equation with perfectly matched layers. Zbl 1452.65267
Antoine, Xavier; Lorin, Emmanuel
2019
Asymptotic estimates of the convergence of classical Schwarz waveform relaxation domain decomposition methods for two-dimensional stationary quantum waves. Zbl 1407.65152
Antoine, Xavier; Hou, Fengji; Lorin, Emmanuel
2018
A preconditioned conjugated gradient method for computing ground states of rotating dipolar Bose-Einstein condensates via kernel truncation method for dipole-dipole interaction evaluation. Zbl 1475.65142
Antoine, Xavier; Tang, Qinglin; Zhang, Yong
2018
On the numerical solution and dynamical laws of nonlinear fractional Schrödinger/Gross-Pitaevskii equations. Zbl 07470677
Antoine, Xavier; Tang, Qinglin; Zhang, Jiwei
2018
Multilevel preconditioning technique for Schwarz waveform relaxation domain decomposition method for real- and imaginary-time nonlinear Schrödinger equation. Zbl 1427.65233
Antoine, X.; Lorin, E.
2018
Efficient spectral computation of the stationary states of rotating Bose-Einstein condensates by preconditioned nonlinear conjugate gradient methods. Zbl 1380.81496
Antoine, Xavier; Levitt, Antoine; Tang, Qinglin
2017
Computational performance of simple and efficient sequential and parallel Dirac equation solvers. Zbl 1411.35234
Antoine, X.; Lorin, E.
2017
An analysis of Schwarz waveform relaxation domain decomposition methods for the imaginary-time linear Schrödinger and Gross-Pitaevskii equations. Zbl 1383.65122
Antoine, X.; Lorin, E.
2017
Optimized Schwarz domain decomposition methods for scalar and vector Helmholtz equations. Zbl 1366.65110
Antoine, X.; Geuzaine, C.
2017
Acceleration of the imaginary time method for spectrally computing the stationary states of Gross-Pitaevskii equations. Zbl 1411.81026
Antoine, Xavier; Besse, Christophe; Duboscq, Romain; Rispoli, Vittorio
2017
On the ground states and dynamics of space fractional nonlinear Schrödinger/Gross-Pitaevskii equations with rotation term and nonlocal nonlinear interactions. Zbl 1380.65296
Antoine, Xavier; Tang, Qinglin; Zhang, Yong
2016
High-order IMEX-spectral schemes for computing the dynamics of systems of nonlinear Schrödinger/Gross-Pitaevskii equations. Zbl 1422.65277
Antoine, Xavier; Besse, Christophe; Rispoli, Vittorio
2016
GetDDM: an open framework for testing optimized Schwarz methods for time-harmonic wave problems. Zbl 1380.65477
Thierry, B.; Vion, A.; Tournier, S.; El Bouajaji, M.; Colignon, D.; Marsic, N.; Antoine, X.; Geuzaine, C.
2016
Formulation and accuracy of on-surface radiation conditions for acoustic multiple scattering problems. Zbl 1410.78010
Alzubaidi, Hasan; Antoine, Xavier; Chniti, Chokri
2016
Frozen Gaussian approximation based domain decomposition methods for the linear Schrödinger equation beyond the semi-classical regime. Zbl 1349.65430
Lorin, E.; Yang, X.; Antoine, X.
2016
Lagrange-Schwarz waveform relaxation domain decomposition methods for linear and nonlinear quantum wave problems. Zbl 1334.65152
Antoine, X.; Lorin, E.
2016
Modeling and computation of Bose-Einstein condensates: stationary states, nucleation, dynamics, stochasticity. Zbl 1344.35114
Antoine, Xavier; Duboscq, Romain
2015
A quasi-optimal domain decomposition algorithm for the time-harmonic Maxwell’s equations. Zbl 1349.78066
El Bouajaji, M.; Thierry, B.; Antoine, X.; Geuzaine, C.
2015
GPELab, a Matlab toolbox to solve Gross-Pitaevskii equations. II: Dynamics and stochastic simulations. Zbl 1344.82004
Antoine, Xavier; Duboscq, Romain
2015
Domain decomposition method and high-order absorbing boundary conditions for the numerical simulation of the time dependent Schrödinger equation with ionization and recombination by intense electric field. Zbl 1332.78026
Antoine, X.; Lorin, E.; Bandrauk, A. D.
2015
$$\mu$$-diff: an open-source Matlab toolbox for computing multiple scattering problems by disks. Zbl 1380.65478
Thierry, Bertrand; Antoine, Xavier; Chniti, Chokri; Alzubaidi, Hasan
2015
Robust and efficient preconditioned Krylov spectral solvers for computing the ground states of fast rotating and strongly interacting Bose-Einstein condensates. Zbl 1349.82027
Antoine, Xavier; Duboscq, Romain
2014
GPELab, a Matlab toolbox to solve Gross-Pitaevskii equations. I: Computation of stationary solutions. Zbl 1348.35003
Antoine, Xavier; Duboscq, Romain
2014
Approximate local magnetic-to-electric surface operators for time-harmonic Maxwell’s equations. Zbl 1351.78010
El Bouajaji, M.; Antoine, X.; Geuzaine, C.
2014
Absorbing boundary conditions for relativistic quantum mechanics equations. Zbl 1349.81080
Antoine, X.; Lorin, E.; Sater, J.; Fillion-Gourdeau, F.; Bandrauk, A. D.
2014
Computational methods for the dynamics of the nonlinear Schrödinger/Gross-Pitaevskii equations. Zbl 1344.35130
Antoine, Xavier; Bao, Weizhu; Besse, Christophe
2013
Absorbing boundary conditions for the two-dimensional Schrödinger equation with an exterior potential. II: Discretization and numerical results. Zbl 1457.65093
Antoine, Xavier; Besse, Christophe; Klein, Pauline
2013
Spectral and condition number estimates of the acoustic single-layer operator for low-frequency multiple scattering in dense media. Zbl 1258.65100
Thierry, Bertrand; Antoine, Xavier
2013
Spectral and condition number estimates of the acoustic single-layer operator for low-frequency multiple scattering in dilute media. Zbl 1286.74054
Antoine, Xavier; Thierry, Bertrand
2013
A quasi-optimal non-overlapping domain decomposition algorithm for the Helmholtz equation. Zbl 1243.65144
Boubendir, Y.; Antoine, X.; Geuzaine, C.
2012
Absorbing boundary conditions for the two-dimensional Schrödinger equation with an exterior potential. I: Construction and a priori estimates. Zbl 1251.35096
Antoine, Xavier; Besse, Christophe; Klein, Pauline
2012
Wide frequency band numerical approaches for multiple scattering problems by disks. Zbl 1280.78002
Antoine, Xavier; Ramdani, Karim; Thierry, Bertrand
2012
Absorbing boundary conditions for general nonlinear Schrödinger equations. Zbl 1231.35223
Antoine, Xavier; Besse, Christophe; Klein, Pauline
2011
Absorbing boundary conditions for solving $$N$$-dimensional stationary Schrödinger equations with unbounded potentials and nonlinearities. Zbl 1373.81202
Klein, Pauline; Antoine, Xavier; Besse, Christophe; Ehrhardt, Matthias
2011
Computing high-frequency scattered fields by beam propagation methods: a prospective study. Zbl 1210.78015
Antoine, Xavier; Huang, Yuexia; Lu, Ya Yan
2010
Open boundary conditions and computational schemes for Schrödinger equations with general potentials and nonlinearities. Zbl 1218.35221
Antoine, Xavier; Klein, Pauline; Besse, Christophe
2010
Absorbing boundary conditions for the one-dimensional Schrödinger equation with an exterior repulsive potential. Zbl 1161.65074
Antoine, Xavier; Besse, Christophe; Klein, Pauline
2009
Towards accurate artificial boundary conditions for nonlinear PDEs through examples. Zbl 1184.35014
Antoine, Xavier; Besse, Christophe; Szeftel, Jérémie
2009
Phase reduction models for improving the accuracy of the finite element solution of time-harmonic scattering problems. I: General approach and low-order models. Zbl 1161.65082
Antoine, Xavier; Geuzaine, Christophe
2009
A performance study of plane wave finite element methods with a Padé-type artificial boundary condition in acoustic scattering. Zbl 1165.76025
Kechroud, R.; Soulaimani, A.; Antoine, X.
2009
A construction of beam propagation methods for optical waveguides. Zbl 1364.78022
Antoine, Xavier; Dreyfuss, Pierre; Ramdani, Karim
2009
A review of transparent and artificial boundary conditions techniques for linear and nonlinear Schrödinger equations. Zbl 1364.65178
Antoine, Xavier; Arnold, Anton; Besse, Christophe; Ehrhardt, Matthias; Schadle, Achim
2008
On the numerical approximation of high-frequency acoustic multiple scattering problems by circular cylinders. Zbl 1135.65403
Antoine, Xavier; Chniti, Chokri; Ramdani, Karim
2008
An integral preconditioner for solving the two-dimensional scattering transmission problem using integral equations. Zbl 1168.78001
Antoine, X.; Boubendir, Y.
2008
Modeling photonic crystals by boundary integral equations and Dirichlet-to-Neumann maps. Zbl 1143.78012
Yuan, Jianhua; Lu, Ya Yan; Antoine, Xavier
2008
Far field modeling of electromagnetic time reversal and application to selective focusing on small scatterers. Zbl 1287.35054
Antoine, X.; Pinçon, B.; Ramdani, K.; Thierry, B.
2008
Generalized combined field integral equations for the iterative solution of the three-dimensional Helmholtz equation. Zbl 1123.65117
Antoine, Xavier; Darbas, Marion
2007
Artificial boundary conditions for one-dimensional cubic nonlinear Schrödinger equations. Zbl 1109.35102
Antoine, Xavier; Besse, Christophe; Descombes, Stephane
2006
An improved surface radiation condition for high-frequency acoustic scattering problems. Zbl 1120.76058
Antoine, Xavier; Darbas, Marion; Lu, Ya Yan
2006
Alternative integral equations for the iterative solution of acoustic scattering problems. Zbl 1064.76095
Antoine, X.; Darbas, M.
2005
Analytic preconditioners for the boundary integral solution of the scattering of acoustic waves by open surfaces. Zbl 1189.76356
Antoine, Xavier; Bendali, Abderrahmane; Darbas, Marion
2005
Numerical accuracy of a Padé-type non-reflecting boundary condition for the finite element solution of acoustic scattering problems at high-frequency. Zbl 1113.76051
Kechroud, R.; Antoine, X.; Soulaïmani, A.
2005
Approximation by generalized impedance boundary conditions of a transmission problem in acoustic scattering. Zbl 1074.78004
Antoine, Xavier; Barucq, Hélène
2005
On the construction of approximate boundary conditions for solving the interior problem of the acoustic scattering transmission problem. Zbl 1152.76466
Antoine, X.; Barucq, H.
2005
An improved on-surface radiation condition for acoustic scattering problems in the high-frequency spectrum. Zbl 1073.76061
Antoine, Xavier; Darbas, Marion; Lu, Ya Yan
2005
Numerical schemes for the simulation of the two-dimensional Schrödinger equation using non-reflecting boundary conditions. Zbl 1053.65072
Antoine, Xavier; Besse, Christophe; Mouysset, Vincent
2004
Analytic preconditioners for the electric field integral equation. Zbl 1210.65193
Antoine, X.; Bendali, A.; Darbas, M.
2004
Unconditionally stable discretization schemes of non-reflecting boundary conditions for the one-dimensional Schrödinger equation. Zbl 1037.65097
Antoine, X.; Besse, C.
2003
Wavelet approximations of a collision operator in kinetic theory. Zbl 1028.76059
Antoine, Xavier; Lemou, M.
2003
Generalized Brakhage-Werner integral formulations for the iterative solution of acoustics scattering problems. Zbl 1046.78005
Antoine, Xavier; Darbas, Marion
2003
An algorithm coupling the OSRC and FEM for the computation of an approximate scattered acoustic field by a non-convex body. Zbl 1098.76566
Antoine, X.
2002
Fast approximate computation of a time-harmonic scattered field using the on-surface radiation condition method. Zbl 1001.78008
Antoine, Xavier
2001
Construction, structure and asymptotic approximations of a microdifferential transparent boundary condition for the linear Schrödinger equation. Zbl 1129.35324
Antoine, Xavier; Besse, Christophe
2001
Microlocal diagonalization of strictly hyperbolic pseudodifferential systems and application to the design of radiation conditions in electromagnetism. Zbl 0983.35138
Antoine, Xavier
2001
High-frequency asymptotic analysis of a dissipative transmission problem resulting in generalized impedance boundary conditions. Zbl 0986.76080
Antoine, X.; Barucq, H.; Vernhet, L.
2001
Approximate numerical solution of the acoustic scattering by a penetrable object using impedance boundary conditions. Zbl 0978.76081
Antoine, X.; Barucq, H.; Vernhet, L.
2000
Bayliss-Turkel-like radiation conditions on surfaces of arbitrary shape. Zbl 0923.35179
Antoine, X.; Barucq, H.; Bendali, A.
1999
A numerical study of a scattering problem involving a generalized impedance boundary condition using the on-surface radiation condition method. Zbl 0943.78014
Antoine, Xavier
1998
### Cited by 778 Authors
62 Antoine, Xavier 21 Geuzaine, Christophe A. 21 Lorin, Emmanuel 17 Besse, Christophe 16 Barucq, Hélène 15 Bao, Weizhu 13 Zheng, Chunxiong 12 Cai, Yongyong 12 Djellouli, Rabia 12 Tang, Qinglin 12 Zhang, Jiwei 11 Darbas, Marion 8 Turc, Catalin 8 Wang, Tingchun 7 Boubendir, Yassine 7 Dehghan Takht Fooladi, Mehdi 7 Henning, Patrick 7 Tang, Shaoqiang 7 Thierry, Bertrand 7 Yin, Jia 7 Zlotnik, Alexander A. 6 Gander, Martin Jakob 6 Modave, Axel 6 Pang, Gang 6 Petrov, Pavel S. 6 Zhang, Yong 5 Acosta, Sebastian 5 Chaillat, Stéphanie 5 Čiegis, Raimondas 5 Claeys, Xavier 5 Duboscq, Romain 5 Han, Houde 5 Le Louër, Frédérique 5 Turkel, Eli L. 5 Wang, Pengde 5 Wu, Xiaonan 4 Abbaszadeh, Mostafa 4 Ashyralyev, Allaberen 4 Bandrauk, André D. 4 Bruno, Oscar P. 4 Chniti, Chokri 4 Collino, Francis 4 Diaz, Julien 4 Dolean, Victorita 4 Ehrhardt, Matthias 4 Farhat, Charbel H. 4 Huang, Chengming 4 Jiang, Chaolong 4 Joly, Patrick 4 Khajah, Tahsin 4 Levadoux, David P. 4 Li, Xianggui 4 Liang, Dong 4 Noble, Pascal 4 Pham, Ha Thanh 4 Sun, Zhizhong 4 Vaibhav, Vishal 4 Wang, Jilu 4 Zhang, Hui 4 Zhang, Rongpei 3 Abounouh, Mostafa 3 Al Moatassime, Hassan 3 Betcke, Timo 3 Birk, Carolin 3 Cassier, Maxence 3 Chabassier, Juliette 3 Coulombel, Jean-François 3 Cui, Jin 3 Duruflé, Marc 3 El Bouajaji, Mohamed 3 Guddati, Murthy N. 3 Guo, Boling 3 Harari, Isaac 3 He, Dongdong 3 Huang, Zhongyi 3 Jerez-Hanckes, Carlos 3 Jin, Jicheng 3 Ju, Lili 3 Klein, Pauline 3 Lakoba, Taras I. 3 Leng, Wei 3 Li, Buyang 3 Liang, Xiao 3 Liu, Wen-Jie 3 Luo, Songting 3 Ma, Ying 3 Mattesi, Vanessa 3 Medvinsky, Michael 3 Mei, Liquan 3 Pan, Kejia 3 Pérez-Arancibia, Carlos 3 Peterseim, Daniel 3 Pinaud, Olivier 3 Radziunas, Mindaugas 3 Ruan, Xinran 3 Saint-Guirons, A.-G. 3 Sirma, Ali 3 Song, Chongmin 3 Szeftel, Jérémie 3 Tezaur, Radek ...and 678 more Authors
### Cited in 105 Serials
99 Journal of Computational Physics 28 Applied Numerical Mathematics 28 Journal of Scientific Computing 27 Computers & Mathematics with Applications 24 Journal of Computational and Applied Mathematics 21 Computer Methods in Applied Mechanics and Engineering 18 European Series in Applied and Industrial Mathematics (ESAIM): Mathematical Modelling and Numerical Analysis 15 SIAM Journal on Numerical Analysis 14 Wave Motion 14 Applied Mathematics and Computation 14 Numerische Mathematik 14 International Journal of Computer Mathematics 13 SIAM Journal on Scientific Computing 11 Numerical Algorithms 10 Communications in Computational Physics 9 Engineering Analysis with Boundary Elements 8 Journal of Mathematical Analysis and Applications 8 International Journal for Numerical Methods in Engineering 7 Computer Physics Communications 7 Mathematics of Computation 7 Numerical Methods for Partial Differential Equations 6 M$$^3$$AS. Mathematical Models & Methods in Applied Sciences 6 SIAM Journal on Applied Mathematics 5 Mathematics and Computers in Simulation 5 Applied Mathematics Letters 4 Advances in Computational Mathematics 4 Communications in Nonlinear Science and Numerical Simulation 4 Kinetic and Related Models 4 East Asian Journal on Applied Mathematics 3 Journal of Mathematical Physics 3 Mathematical Methods in the Applied Sciences 3 Physica D 3 Applied Mathematical Modelling 3 Mathematical Problems in Engineering 3 Mathematical Modelling and Analysis 3 Comptes Rendus. Mathématique. Académie des Sciences, Paris 3 Advances in Difference Equations 3 Advances in Applied Mathematics and Mechanics 3 European Series in Applied and Industrial Mathematics (ESAIM): Proceedings and Surveys 2 Applicable Analysis 2 Inverse Problems 2 Physics Letters. A 2 ZAMP. Zeitschrift für angewandte Mathematik und Physik 2 BIT 2 Computational Mechanics 2 Communications in Partial Differential Equations 2 Journal de Mathématiques Pures et Appliquées. Neuvième Série 2 SIAM Review 2 Annales de la Faculté des Sciences de Toulouse. Mathématiques. Série VI 2 Journal of Mathematical Sciences (New York) 2 Doklady Mathematics 2 Journal of Inequalities and Applications 2 Computational Methods in Applied Mathematics 2 Multiscale Modeling & Simulation 2 International Journal of Computational Methods 2 Journal of Computational Acoustics 2 Boundary Value Problems 2 Acta Mechanica Sinica 2 SIAM Journal on Imaging Sciences 2 Science China. Mathematics 2 SMAI Journal of Computational Mathematics 2 Communications on Applied Mathematics and Computation 2 SN Partial Differential Equations and Applications 1 Archive for Rational Mechanics and Analysis 1 Communications in Mathematical Physics 1 Communications on Pure and Applied Mathematics 1 Journal of Engineering Mathematics 1 Physica A 1 Chaos, Solitons and Fractals 1 Annales de l'Institut Fourier 1 Calcolo 1 Journal of Functional Analysis 1 Nonlinear Analysis. Theory, Methods & Applications. Series A: Theory and Methods 1 Numerical Functional Analysis and Optimization 1 Journal of Computational Mathematics 1 Mathematical and Computer Modelling 1 SIAM Journal on Matrix Analysis and Applications 1 Journal of Integral Equations and Applications 1 Journal of Elasticity 1 Advances in Engineering Software 1 Computational and Applied Mathematics 1 Filomat 1 Russian Journal of Numerical Analysis and Mathematical Modelling 1 Journal of Difference Equations and Applications 1 Discrete and Continuous Dynamical Systems 1 Mathematical and Computer Modelling of Dynamical Systems 1 Discrete Dynamics in Nature and Society 1 International Journal of Applied Mathematics and Computer Science 1 The ANZIAM Journal 1 Archives of Computational Methods in Engineering 1 Foundations of Computational Mathematics 1 Discrete and Continuous Dynamical Systems. Series B 1 Acta Numerica 1 Science in China. Series G 1 Frontiers of Mathematics in China 1 Discrete and Continuous Dynamical Systems. Series S 1 Analysis and Mathematical Physics 1 Journal of Applied Analysis and Computation 1 Journal of Computational Dynamics 1 EMS Surveys in Mathematical Sciences ...and 5 more Serials
### Cited in 38 Fields
446 Numerical analysis (65-XX) 377 Partial differential equations (35-XX) 80 Optics, electromagnetic theory (78-XX) 79 Fluid mechanics (76-XX) 63 Quantum theory (81-XX) 45 Mechanics of deformable solids (74-XX) 42 Statistical mechanics, structure of matter (82-XX) 14 Integral equations (45-XX) 12 Calculus of variations and optimal control; optimization (49-XX) 11 Dynamical systems and ergodic theory (37-XX) 10 Operator theory (47-XX) 8 Real functions (26-XX) 7 Approximations and expansions (41-XX) 7 Operations research, mathematical programming (90-XX) 6 Integral transforms, operational calculus (44-XX) 6 Functional analysis (46-XX) 5 Potential theory (31-XX) 5 Special functions (33-XX) 5 Ordinary differential equations (34-XX) 5 Computer science (68-XX) 4 Geophysics (86-XX) 4 Biology and other natural sciences (92-XX) 3 Difference and functional equations (39-XX) 3 Probability theory and stochastic processes (60-XX) 3 Astronomy and astrophysics (85-XX) 3 Systems theory; control (93-XX) 3 Information and communication theory, circuits (94-XX) 2 Linear and multilinear algebra; matrix theory (15-XX) 2 Harmonic analysis on Euclidean spaces (42-XX) 2 Differential geometry (53-XX) 2 Global analysis, analysis on manifolds (58-XX) 2 Mechanics of particles and systems (70-XX) 1 History and biography (01-XX) 1 Topological groups, Lie groups (22-XX) 1 Functions of a complex variable (30-XX) 1 Manifolds and cell complexes (57-XX) 1 Classical thermodynamics, heat transfer (80-XX) 1 Game theory, economics, finance, and other social and behavioral sciences (91-XX) | 2022-05-29 01:56:25 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6561740040779114, "perplexity": 7792.183534907184}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652663035797.93/warc/CC-MAIN-20220529011010-20220529041010-00744.warc.gz"}
https://roualdes.us/teaching/math314/homework/sumdice.html | https://classroom.github.com/a/hjDKOuXG
Due: 2020-02-12 by 11:59pm
1. Simulate the sum of two rolled (random) dice for a sufficiently large sample size, $$N$$.
2. Estimate the probability of observing a $$7$$. Explain in a complete English sentence why this has the highest probability of all the outcomes. Hint: think about how many ways there are to observe each outcome of this experiment.
3. Make a table of the proportions of each outcome.
4. Make a plot that displays the estimated proportions of each outcome. | 2020-02-21 11:31:42 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9264979362487793, "perplexity": 569.5621493653792}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875145529.37/warc/CC-MAIN-20200221111140-20200221141140-00485.warc.gz"} |
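The four steps above can be done in a few lines. The assignment doesn't fix a language, so here is a minimal Python sketch of steps 1–3 (the helper name `simulate_dice_sums`, the seed, and the sample size are my own choices); step 4 would just hand `proportions` to any plotting library:

```python
import random
from collections import Counter

def simulate_dice_sums(n, seed=0):
    """Roll two fair dice n times and return a Counter of the sums (2..12)."""
    rng = random.Random(seed)
    return Counter(rng.randint(1, 6) + rng.randint(1, 6) for _ in range(n))

N = 100_000
counts = simulate_dice_sums(N)
proportions = {s: counts[s] / N for s in range(2, 13)}

# A sum of 7 can occur 6 ways out of 36 (1+6, 2+5, ..., 6+1), more than any
# other outcome, so its estimated proportion should sit near 1/6.
print(proportions[7])
```

With a large `N` the estimate for 7 lands close to 1/6 ≈ 0.167, while the extreme sums 2 and 12 (one combination each) sit near 1/36.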
https://www.erlang.com/reply/33495/ |
# Reply To: Model Traffic Analysis
#33495
Rommel
Guest
In the following you will see that Erlang C was used for data traffic, the key factor being delay rather than blocking.
If you use the same concepts of Erlang C, but redefine the erlang to mean the number of bits it takes to keep a traffic-sensitive facility busy for one second, you can easily work it out. Unfortunately, there are no Erlang C tables included here, so you will need to find a set elsewhere.
http://www.cisco.com/univercd/cc/td/doc/cisintwk/intsolns/voipsol/ta_isd.pdf | 2021-04-22 22:55:33 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8731721043586731, "perplexity": 1853.1799387983658}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618039563095.86/warc/CC-MAIN-20210422221531-20210423011531-00286.warc.gz"} |
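For readers without tables at hand, the Erlang C delay probability that those tables encode can also be computed directly from the offered load and the number of servers. A minimal Python sketch (the function name is mine; `a` is the offered traffic in erlangs, or "bit-erlangs" under the redefinition above):

```python
from math import factorial

def erlang_c(a, n):
    """Erlang C: probability that an arrival must wait, for offered
    traffic of a erlangs on n servers (requires a < n for a stable queue)."""
    if a >= n:
        return 1.0  # overloaded: every arrival waits
    top = (a ** n / factorial(n)) * (n / (n - a))
    bottom = sum(a ** k / factorial(k) for k in range(n)) + top
    return top / bottom

print(erlang_c(0.5, 1))  # half an erlang on one server -> prints 0.5
```

Adding servers for a fixed load drives the waiting probability toward zero, which is what scanning across a printed Erlang C table shows.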
http://server3.wikisky.org/starview?object=NGC+5198 | WIKISKY.ORG
# NGC 5198
### Related articles
**The SAURON project - VI. Line strength maps of 48 elliptical and lenticular galaxies**

We present absorption line strength maps of 48 representative elliptical and lenticular galaxies obtained as part of a survey of nearby galaxies using our custom-built integral-field spectrograph, SAURON, operating on the William Herschel Telescope. Using high-quality spectra, spatially binned to a constant signal-to-noise ratio, we measure four key age-, metallicity- and abundance-ratio-sensitive indices from the Lick/IDS system over a two-dimensional field extending up to approximately one effective radius. A discussion of calibrations and offsets is given, along with a description of error estimation and nebular emission correction. We modify the classical Fe5270 index to define a new index, Fe5270_S, which maximizes the useable spatial coverage of SAURON. Maps of Hβ, Fe5015, Mgb and Fe5270_S are presented for each galaxy. We use the maps to compute average line strengths integrated over circular apertures of one-eighth effective radius, and compare the resulting relations of index versus velocity dispersion with previous long-slit work. The metal line strength maps show generally negative gradients with increasing radius, roughly consistent with the morphology of the light profiles. Remarkable deviations from this general trend exist: in particular, the Mgb isoindex contours appear to be flatter than the isophotes of the surface brightness for about 40 per cent of our galaxies without significant dust features. Generally, these galaxies exhibit significant rotation. We infer from this that the fast-rotating component features a higher metallicity and/or an increased Mg/Fe ratio as compared to the galaxy as a whole. The Hβ maps are typically flat or show a mild positive outwards radial gradient, while a few galaxies show strong central peaks and/or elevated overall Hβ strength likely connected to recent star formation activity. For the most prominent post-starburst galaxies, even the metal line strength maps show a reversed gradient.

**The SAURON project - V. Integral-field emission-line kinematics of 48 elliptical and lenticular galaxies**

We present the emission-line fluxes and kinematics of 48 representative elliptical and lenticular galaxies obtained with our custom-built integral-field spectrograph, SAURON, operating on the William Herschel Telescope. Hβ, [OIII]λλ4959,5007 and [NI]λλ5198,5200 emission lines were measured using a new procedure that simultaneously fits both the stellar spectrum and the emission lines. Using this technique we can detect emission lines down to an equivalent width of 0.1 Å, set by the current limitations in describing galaxy spectra with synthetic and real stellar templates rather than by the quality of our spectra. Gas velocities and velocity dispersions are typically accurate to within 14 and 20 km s^-1, respectively, and at worst to within 25 and 40 km s^-1. The errors on the flux of the [OIII] and Hβ lines are on average 10 and 20 per cent, respectively, and never exceed 30 per cent. Emission is clearly detected in 75 per cent of our sample galaxies, and comes in a variety of resolved spatial distributions and kinematic behaviours. A mild dependence on the Hubble type and galactic environment is observed, with higher detection rates in lenticular galaxies and field objects. More significant is the fact that only 55 per cent of the galaxies in the Virgo cluster exhibit clearly detected emission. The ionized-gas kinematics is rarely consistent with simple coplanar circular motions. However, the gas almost never displays completely irregular kinematics, generally showing coherent motions with smooth variations in angular momentum. In the majority of the cases, the gas kinematics is decoupled from the stellar kinematics, and in half of the objects this decoupling implies a recent acquisition of gaseous material.
Over the entire sample, however, the distribution of the mean misalignment values between stellar and gaseous angular momenta is inconsistent with a purely external origin. The distribution of kinematic misalignment values is found to be strongly dependent on the apparent flattening and the level of rotational support of galaxies, with flatter, fast-rotating objects hosting preferentially corotating gaseous and stellar systems. In a third of the cases, the distribution and kinematics of the gas underscore the presence of non-axisymmetric perturbations of the gravitational potential. Consistent with previous studies, the presence of dust features is always accompanied by gas emission, while the converse is not always true. A considerable range of values for the [OIII]/Hβ ratio is found both across the sample and within single galaxies. Despite the limitations of this ratio as an emission-line diagnostic, this finding suggests either that a variety of mechanisms is responsible for the gas excitation in E and S0 galaxies or that the metallicity of the interstellar material is quite heterogeneous.

**The host galaxy/AGN connection in nearby early-type galaxies. A new view of the origin of the radio-quiet/radio-loud dichotomy?**

This is the third in a series of three papers exploring the connection between the multiwavelength properties of AGN in nearby early-type galaxies and the characteristics of their hosts. Starting from an initial sample of 332 galaxies, we selected 116 AGN candidates requiring the detection of a radio source with a flux limit of ~1 mJy, as measured from 5 GHz VLA observations. In Paper I we classified the objects with available archival HST images into "core" and "power-law" galaxies, discriminating on the basis of the nuclear slope of their brightness profiles. We used HST and Chandra data to isolate the nuclear emission of these galaxies in the optical and X-ray bands, thus enabling us (once combined with the radio data) to study the multiwavelength behaviour of their nuclei. The properties of the nuclei hosted by the 29 core galaxies were presented in Paper II. Core galaxies invariably host a radio-loud nucleus, with a median radio-loudness of log R = 3.6 and an X-ray-based radio-loudness parameter of log R_X = -1.3. Here we discuss the properties of the nuclei of the 22 "power-law" galaxies. They show a substantial excess of optical and X-ray emission with respect to core galaxies at the same level of radio luminosity. Conversely, their radio-loudness parameters, log R ~ 1.6 and log R_X ~ -3.3, are similar to those measured in Seyfert galaxies. Thus the radio-loudness of AGN hosted by early-type galaxies appears to be univocally related to the host's brightness profile: radio-loud AGN are only hosted by core galaxies, while radio-quiet AGN are found only in power-law galaxies. The brightness profile is determined by the galaxy's evolution, through its merger history; our results suggest that the same process sets the AGN flavour. In this scenario, the black holes hosted by the merging galaxies rapidly sink toward the centre of the newly formed object, setting its nuclear configuration, described by e.g. the total mass, spin, mass ratio, or separation of the SMBHs. These parameters are most likely at the origin of the different levels of AGN radio-loudness. This connection might open a new path toward understanding the origin of the radio-loud/radio-quiet AGN dichotomy and provide us with a further tool for exploring the co-evolution of galaxies and supermassive black holes.

**The X-ray emission properties and the dichotomy in the central stellar cusp shapes of early-type galaxies**

The Hubble Space Telescope has revealed a dichotomy in the central surface brightness profiles of early-type galaxies, which have subsequently been grouped into two families: core, boxy, anisotropic systems; and cuspy ('power-law'), discy, rotating ones. Here we investigate whether a dichotomy is also present in the X-ray properties of the two families.
We consider both their total soft emission (L_SX,tot), which is a measure of the galactic hot gas content, and their nuclear hard emission (L_HX,nuc), mostly coming from Chandra observations, which is a measure of the nuclear activity. At any optical luminosity, the highest L_SX,tot values are reached by core galaxies; this is explained by their being the central dominant galaxies of groups, subclusters or clusters in many of the log L_SX,tot (erg s^-1) >~ 41.5 cases. The highest L_HX,nuc values, similar to those of classical active galactic nuclei (AGNs), are in this sample hosted only by core or intermediate galaxies; at low-luminosity AGN levels, L_HX,nuc is independent of the central stellar profile shape. The presence of optical nuclei (also found by HST) is unrelated to the level of L_HX,nuc, even though the highest L_HX,nuc are all associated with optical nuclei. The implications of these findings for galaxy evolution and accretion modalities at the present epoch are discussed.

**The host galaxy/AGN connection in nearby early-type galaxies. Sample selection and hosts brightness profiles**

This is the first of a series of three papers exploring the connection between the multiwavelength properties of AGNs in nearby early-type galaxies and the characteristics of their hosts. We selected two samples, both with high-resolution 5 GHz VLA observations available and providing measurements down to the 1 mJy level, reaching radio luminosities as low as 10^19 W Hz^-1. We focus on the 116 radio-detected galaxies so as to boost the fraction of AGN with respect to a purely optically selected sample. Here we present the analysis of the optical brightness profiles based on archival HST images, available for 65 objects. We separate early-type galaxies, on the basis of the slope of their nuclear brightness profiles, into core and power-law galaxies following the Nuker scheme, rather than on the traditional morphological classification (i.e. into E and S0 galaxies). Our sample of AGN candidates is indistinguishable, where their brightness profiles are concerned, from galaxies of similar optical luminosity but hosting weaker (or no) radio sources. We confirm previous findings that relatively bright radio sources (L_r > 10^21.5 W Hz^-1) are uniquely associated with core galaxies. However, below this threshold in radio luminosity, core and power-law galaxies coexist and do not show any apparent difference in their radio properties. Not surprisingly, since our sample is deliberately biased to favour the inclusion of active galaxies, we found a higher fraction of optically nucleated galaxies. Addressing the multiwavelength properties of these nuclei will be the aim of the two forthcoming papers.

**The SAURON project - III. Integral-field absorption-line kinematics of 48 elliptical and lenticular galaxies**

We present the stellar kinematics of 48 representative elliptical and lenticular galaxies obtained with our custom-built integral-field spectrograph SAURON operating on the William Herschel Telescope. The data were homogeneously processed through a dedicated reduction and analysis pipeline. All resulting SAURON data cubes were spatially binned to a constant minimum signal-to-noise ratio. We have measured the stellar kinematics with an optimized (penalized pixel-fitting) routine which fits the spectra in pixel space, via the use of optimal templates, and prevents the presence of emission lines from affecting the measurements. We have thus generated maps of the mean stellar velocity V, the velocity dispersion σ, and the Gauss-Hermite moments h3 and h4 of the line-of-sight velocity distributions. The maps extend to approximately one effective radius. Many objects display kinematic twists, kinematically decoupled components, central stellar discs, and other peculiarities, the nature of which will be discussed in future papers of this series.

**Peculiarities and populations in elliptical galaxies. I. An old question revisited**

Morphological peculiarities, as defined from isophote asymmetries and the number of detected shells, jets or similar features, have been estimated in a sample of 117 E-classified galaxies, and qualified by an ad hoc Σ2 index. The overall frequency of 'peculiar' objects (Pec subsample) is 32.5%. It decreases with the cosmic density of the environment, being minimal for the Virgo cluster, the densest environment in the sampled volume. This environmental effect is stronger for galaxies with relatively large Σ2. The Pec subsample objects are compared with 'normal' objects (Nop subsample) as regards their basic properties. Firstly, they systematically deviate from the Fundamental Plane and the Faber-Jackson relation derived for the Nop subsample, being too bright for their mass. Secondly, the dust content of galaxies, as estimated from IRAS fluxes, is similar in both subsamples. Third, the same is true of the frequency of Kinematically Distinct Cores (KDC), suggesting that KDC and morphological peculiarities do not result from the same events in the history of E galaxies. Using the Nop sample alone, we obtain very tight reference relations between stellar population indicators (U-B, B-V, B-R, V-I, Mg2, Hβ, ⟨Fe⟩, Mgb) and the central velocity dispersion σ0. The discussion of the residuals of these relations allows us to classify the Pec galaxies in two families, i.e. the YP or NGC 2865 family, and the NP or NGC 3923 one. Galaxies in the first group show consistent evidence for a younger stellar population mixed with the old one, in agreement with classical results (Schweizer et al. 1990; Schweizer & Seitzer 1992). The second group, however, has normal, or reddish, populations.
It is remarkable that a fraction (circa 40%) of morphologically perturbed objects do not display any signature of a young population, either because the event responsible for the peculiarity is too ancient, or because it did not produce significant star formation (or, eventually, because the young sub-population has high metallicity). A preliminary attempt is made to interpret the populations of Pec objects by combining a young Single Stellar Population with a Nop galaxy, with only limited success, perhaps largely due to uncertainties in the SSP indices used. Based in part on observations collected at the Observatoire de Haute-Provence. Figures 1-3 are only available in electronic form at http://www.edpsciences.org. Table 10 is only available in electronic form at the CDS via anonymous ftp to cdsarc.u-strasbg.fr (130.79.128.5) or via http://cdsweb.u-strasbg.fr/cgi-bin/qcat?J/A+A/423/833

**Formalism and quality of a proper motion link with extragalactic objects for astrometric satellite missions**

The accuracy of the link of the proper motion system of astrometric satellite missions like AMEX and GAIA is discussed. Monte Carlo methods were used to simulate catalogues of positions and proper motions of quasars and galaxies to test the link. The main conclusion is that future satellite missions like GAIA may be 'self-calibrated' by their measurements of QSOs, while additional measurements from radio stars or HST data are needed to calibrate the less deep-reaching astrometric satellite missions of AMEX type.

**A new catalogue of ISM content of normal galaxies**

We have compiled a catalogue of the gas content for a sample of 1916 galaxies, considered to be a fair representation of 'normality'.
The definition of a 'normal' galaxy adopted in this work implies that we have purposely excluded from the catalogue galaxies having distorted morphology (such as interaction bridges, tails or lopsidedness) and/or any signature of peculiar kinematics (such as polar rings, counterrotating disks or other decoupled components). In contrast, we have included systems hosting active galactic nuclei (AGN) in the catalogue. This catalogue revises previous compendia on the ISM content of galaxies published by Bregman et al. and Casoli et al., and compiles data available in the literature from several small samples of galaxies. Masses for warm dust, atomic and molecular gas, as well as X-ray luminosities, have been converted to a uniform distance scale taken from the Catalogue of Principal Galaxies (PGC). We have used two different normalization factors to explore the variation of the gas content along the Hubble sequence: the blue luminosity (L_B) and the square of the linear diameter (D^2_25). Our catalogue significantly improves the statistics of previous reference catalogues and can be used in future studies to define a template ISM content for 'normal' galaxies along the Hubble sequence. The catalogue can be accessed on-line and is also available at the Centre des Données Stellaires (CDS). The catalogue is available in electronic form at http://dipastro.pd.astro.it/galletta/ismcat and at the CDS via anonymous ftp to cdsarc.u-strasbg.fr (130.79.128.5) or via http://cdsweb.u-strasbg.fr/cgi-bin/qcat?J/A+A/405/5

**Galaxy cores as relics of black hole mergers**

We investigate the hypothesis that the cores of elliptical galaxies and bulges are created from the binding energy liberated by the coalescence of supermassive binary black holes during galaxy mergers. Assuming that the central density profiles of galaxies were initially steep power laws, ρ ~ r^-2, we define the 'mass deficit' as the mass in stars that had to be removed from the nucleus in order to produce the observed core. We use non-parametric deprojection to compute the mass deficit in a sample of 35 early-type galaxies with high-resolution imaging data. We find that the mass deficit correlates well with the mass of the nuclear black hole, consistent with the predictions of merger models. We argue that cores in haloes of non-interacting dark matter particles should be comparable in size to those observed in the stars.

**The SAURON project - II. Sample and early results**

Early results are reported from the SAURON survey of the kinematics and stellar populations of a representative sample of nearby E, S0 and Sa galaxies. The survey is aimed at determining the intrinsic shape of the galaxies, their orbital structure, the mass-to-light ratio as a function of radius, the age and metallicity of the stellar populations, and the frequency of kinematically decoupled cores and nuclear black holes. The construction of the representative sample is described, and its properties are illustrated. A comparison with long-slit spectroscopic data establishes that the SAURON measurements are comparable to, or better than, the highest-quality determinations. Comparisons are presented for NGC 3384 and 4365, where stellar velocities and velocity dispersions are determined to a precision of 6 km s^-1, and the h3 and h4 parameters of the line-of-sight velocity distribution to a precision of better than 0.02. Extraction of accurate gas emission-line intensities, velocities and linewidths from the data cubes is illustrated for NGC 5813. Comparisons with published line strengths for NGC 3384 and 5813 reveal uncertainties of <~0.1 Å on the measurements of the Hβ, Mg b and Fe5270 indices. Integral-field mapping uniquely connects measurements of the kinematics and stellar populations to the galaxy morphology. The maps presented here illustrate the rich stellar kinematics, gaseous kinematics, and line-strength distributions of early-type galaxies.
The results include the discovery of a thin, edge-on disc in NGC 3623, confirm the axisymmetric shape of the central region of M32, illustrate the LINER nucleus and surrounding counter-rotating star-forming ring in NGC 7742, and suggest a uniform stellar population in the decoupled-core galaxy NGC 5813.

**Bar Galaxies and Their Environments**

The prints of the Palomar Sky Survey, luminosity classifications, and radial velocities were used to assign all northern Shapley-Ames galaxies to either (1) field, (2) group, or (3) cluster environments. This information for 930 galaxies shows no evidence for a dependence of bar frequency on galaxy environment. This suggests that the formation of a bar in a disk galaxy is mainly determined by the properties of the parent galaxy, rather than by the characteristics of its environment.

**The UZC-SSRS2 Group Catalog**

We apply a friends-of-friends algorithm to the combined Updated Zwicky Catalog and Southern Sky Redshift Survey to construct a catalog of 1168 groups of galaxies; 411 of these groups have five or more members within the redshift survey. The group catalog covers 4.69 sr, and all groups exceed the number density contrast threshold δρ/ρ = 80. We demonstrate that the group catalog is homogeneous across the two underlying redshift surveys; the catalog of groups and their members thus provides a basis for other statistical studies of the large-scale distribution of groups and their physical properties. The median physical properties of the groups are similar to those for groups derived from independent surveys, including the ESO Key Programme and the Las Campanas Redshift Survey. We include tables of groups and their members.

**Compact groups in the UZC galaxy sample**

Applying an automatic neighbour-search algorithm to the 3D UZC galaxy catalogue (Falco et al. 1999), we have identified 291 compact groups (CGs) with radial velocity between 1000 and 10 000 km s^-1. The sample is analysed to investigate whether Triplets display kinematical and morphological characteristics similar to higher-order CGs (Multiplets). It is found that Triplets constitute low velocity dispersion structures, have a gas-rich galaxy population and are typically retrieved in sparse environments. Conversely, Multiplets show higher velocity dispersion, include few gas-rich members and are generally embedded structures. Evidence hence emerges indicating that Triplets and Multiplets, though sharing a common scale, correspond to different galaxy systems. Triplets are typically field structures whilst Multiplets are mainly subclumps (either temporarily projected or collapsing) within larger structures. Simulations show that selection effects can only partially account for the differences, but significant contamination of Triplets by field-galaxy interlopers could eventually induce the observed dependences on multiplicity. Tables 1 and 2 are only available in electronic form at the CDS via anonymous ftp to cdsarc.u-strasbg.fr (130.79.125.5) or via http://cdsweb.u-strasbg.fr/cgi-bin/qcat?J/A+A/391/35

**Relation between dust and radio luminosity in optically selected early type galaxies**

We have surveyed an optical/IR-selected sample of nearby E/S0 galaxies with and without nuclear dust structures with the VLA at 3.6 cm to a sensitivity of 100 μJy. We can construct a Radio Luminosity Function (RLF) of these galaxies to ~10^19 W Hz^-1 and find that ~50% of these galaxies have AGNs at this level. The space density of these AGNs equals that of starburst galaxies at this luminosity. Several dust-free galaxies have low-luminosity radio cores, and their RLF is not significantly less than that of the dusty galaxies.

**A catalogue and analysis of X-ray luminosities of early-type galaxies**

We present a catalogue of X-ray luminosities for 401 early-type galaxies, of which 136 are based on newly analysed ROSAT PSPC pointed observations.
The remaining luminosities are taken from the literature and converted to a common energy band, spectral model and distance scale. Using this sample we fit the L_X:L_B relation for early-type galaxies and find a best-fit slope for the catalogue of ~2.2. We demonstrate the influence of group-dominant galaxies on the fit and present evidence that the relation is not well modelled by a single power-law fit. We also derive estimates of the contribution to galaxy X-ray luminosities from discrete sources and conclude that they provide L_dscr/L_B ≈ 29.5 erg s^-1 L_B,⊙^-1. We compare this result with luminosities from our catalogue. Lastly, we examine the influence of environment on galaxy X-ray luminosity and on the form of the L_X:L_B relation. We conclude that although environment undoubtedly affects the X-ray properties of individual galaxies, particularly those in the centres of groups and clusters, it does not change the nature of whole populations.

**Dusty Nuclear Disks and Filaments in Early-Type Galaxies**

We examine the dust properties of a nearby distance-limited sample of early-type galaxies using WFPC2 of the Hubble Space Telescope. Dust is detected in 29 out of 67 galaxies (43%), including 12 with small nuclear dusty disks. In a separate sample of 40 galaxies biased for the detection of dust by virtue of their detection in the IRAS 100 μm band, dust is found in ~78% of the galaxies, 15 of which contain dusty disks. In those galaxies with detectable dust, the apparent mass of the dust correlates with radio and far-infrared luminosity, becoming more significant for systems with filamentary dust. A majority of IRAS and radio detections are also associated with dusty galaxies rather than dustless galaxies. This indicates that thermal emission from clumpy, filamentary dust is the main source of the far-IR radiation in early-type galaxies. Dust in small disklike morphologies tends to be well aligned with the major axis of the host galaxies, while filamentary dust appears to be more randomly distributed, with no preference for alignment with any major galactic structure. This suggests that, if the dusty disks and filaments have a common origin, the dust originates externally and requires time to dynamically relax and settle in the galaxy potential in the form of compact disks. More galaxies with visible dust than without dust display emission lines, indicative of ionized gas, although such nuclear activity does not show a preference for dusty disks over filamentary dust. There appears to be a weak relationship between the mass of the dusty disks and the central velocity dispersion of the galaxy, suggesting a connection with a similar recently recognized relationship between the latter and the black hole mass. Based on observations with the NASA/ESA Hubble Space Telescope, obtained at the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS5-26555.

**WFPC2 Images of the Central Regions of Early-Type Galaxies. I. The Data**

We present high-resolution R-band images of the central regions of 67 early-type galaxies obtained with the Wide Field and Planetary Camera 2 (WFPC2) aboard the Hubble Space Telescope (HST). This homogeneously selected sample roughly doubles the number of early-type galaxies that have now been imaged at HST resolution and complements similar data on the central regions of radio galaxies and the bulges of spiral galaxies. Our sample strikingly confirms the complex morphologies of the central regions of early-type galaxies which have become apparent from previous studies with HST. In particular, we detect dust, either in the form of nuclear disks or with a filamentary distribution, in 43% of all galaxies, in good agreement with previous estimates.
In addition, we find evidence for embedded stellar disks in a remarkably large fraction of 51%. In 14 of those galaxies the disklike structures are misaligned with the main galaxy, suggesting that they correspond to stellar bars in S0 galaxies. We analyze the luminosity profiles of the galaxies in our sample and classify galaxies according to their central cusp slope. To a large extent we confirm the results from previous HST surveys in that early-type galaxies reveal a clear dichotomy: the bright ellipticals (M_B <~ -20.5) are generally boxy and have luminosity profiles that break from steep outer power laws to shallow inner cusps (referred to as 'core' galaxies). The fainter ellipticals, on the other hand, typically have disky isophotes and luminosity profiles that lack a clear break and have a steep central cusp (referred to as 'power-law' galaxies). The advantages and shortcomings of classification schemes utilizing the extrapolated central cusp slope γ are discussed, and it is shown that γ might be an inadequate representation for galaxies whose luminosity profile slope changes smoothly with radius rather than resembling a broken power law. Thus, we introduce a new, alternative parameter and show how this affects the classification. In fact, we find evidence for an 'intermediate' class of galaxies that cannot unambiguously be classified as either core or power-law galaxies and that have central cusp slopes and absolute magnitudes intermediate between those of core and power-law galaxies. It is unclear at present, however, whether these galaxies make up a physically distinct class or whether distance and/or resolution effects cause them to lose their distinct core or power-law characteristics.

**A NICMOS Survey of Early-Type Galaxy Centers: The Relation Between Core Properties, Gas and Dust Content, and Environment**

We present a NICMOS 1.6 μm imaging isophotal study of 27 early-type galaxies. Core galaxies have reduced ellipticity and boxiness near and within their core or break radius. This supports a core formation mechanism that mixes or scatters stars, such as scattering caused by a binary black hole. We find the same trends between central surface brightness and luminosities as the WFPC studies. We find no correlation between core properties and dust mass or X-ray luminosity, suggesting that processes determining the current gas content (such as minor mergers and cooling flows) are unrelated to processes occurring during core formation. Core galaxies exist in a variety of environments ranging from poor groups to large clusters. A combined sample suggests that galaxy groups may harbor more luminous power-law galaxies than clusters such as Virgo and Fornax. Based on observations with the NASA/ESA Hubble Space Telescope obtained at the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc. (AURA), under NASA contract NAS5-26555.

**Nearby Optical Galaxies: Selection of the Sample and Identification of Groups**

In this paper we describe the Nearby Optical Galaxy (NOG) sample, which is a complete, distance-limited (cz <= 6000 km s^-1) and magnitude-limited (B <= 14) sample of ~7000 optical galaxies. The sample covers 2/3 (8.27 sr) of the sky (|b| > 20°) and appears to have good completeness in redshift (97%). We select the sample on the basis of homogenized corrected total blue magnitudes in order to minimize systematic effects in galaxy sampling. We identify the groups in this sample by means of both the hierarchical and the percolation 'friends-of-friends' methods. The resulting catalogs of loose groups appear to be similar and are among the largest catalogs of groups currently available. Most of the NOG galaxies (~60%) are found to be members of galaxy pairs (~580 pairs, for a total of ~15% of objects) or groups with at least three members (~500 groups, for a total of ~45% of objects). About 40% of galaxies are left ungrouped (field galaxies). We illustrate the main features of the NOG galaxy distribution.
Compared to previous optical and IRAS galaxy samples, the NOG provides a denser sampling of the galaxy distribution in the nearby universe. Given its large sky coverage, the identification of groups, and its high-density sampling, the NOG is suited to the analysis of the galaxy density field of the nearby universe, especially on small scales.

**The Cold and Hot Gas Content of Fine-Structure E and S0 Galaxies**

We investigate trends of the cold and hot gas content of early-type galaxies with the presence of optical morphological peculiarities, as measured by the fine-structure index Σ. H I mapping observations from the literature are used to track the cold gas content, and archival ROSAT Position Sensitive Proportional Counter data are used to quantify the hot gas content. We find that E and S0 galaxies with a high incidence of optical peculiarities are exclusively X-ray underluminous and, therefore, deficient in hot gas. In contrast, more relaxed galaxies with little or no signs of optical peculiarities span a wide range of X-ray luminosities. That is, the X-ray excess anticorrelates with Σ. There appears to be no similar trend of cold gas content with either fine-structure index or X-ray content. The fact that only apparently relaxed E and S0 galaxies are strong X-ray emitters is consistent with the hypothesis that after strong disturbances, such as a merger, hot gas halos build up over a timescale of several gigayears. This is consistent with the expected mass loss from stars.

**A Test for Large-Scale Systematic Errors in Maps of Galactic Reddening**

Accurate maps of Galactic reddening are important for a number of applications, such as mapping the peculiar velocity field in the nearby universe. Of particular concern are systematic errors which vary slowly as a function of position on the sky, as these would induce spurious bulk flow. We have compared the reddenings of Burstein & Heiles (BH) and those of Schlegel, Finkbeiner, & Davis (SFD) to independent estimates of the reddening, for Galactic latitudes |b| > 10°. Our primary source of Galactic reddening estimates comes from comparing the difference between the observed B-V colors of early-type galaxies and the predicted B-V color determined from the B-V-Mg_2 relation. We have fitted a dipole to the residuals in order to look for large-scale systematic deviations. There is marginal evidence for a dipolar residual in the comparison between the SFD maps and the observed early-type galaxy reddenings. If this is due to an error in the SFD maps, then it can be corrected with a small (13%) multiplicative dipole term. We argue, however, that this difference is more likely to be due to a small (0.01 mag) systematic error in the measured B-V colors of the early-type galaxies. This interpretation is supported by a smaller, independent data set (globular cluster and RR Lyrae stars), which yields a result inconsistent with the early-type galaxy residual dipole. BH reddenings are found to have no significant systematic residuals, apart from the known problem in the region 230°

**X-ray luminosities for a magnitude-limited sample of early-type galaxies from the ROSAT All-Sky Survey**

For a magnitude-limited optical sample (B_T <= 13.5 mag) of early-type galaxies, we have derived X-ray luminosities from the ROSAT All-Sky Survey. The results are 101 detections and 192 useful upper limits in the range from 10^36 to 10^44 erg s^-1. For most of the galaxies no X-ray data have been available until now. On the basis of this sample with its full sky coverage, we find no galaxy with an unusually low flux from discrete emitters. Below log(L_B) ~ 9.2 L_⊙ the X-ray emission is compatible with being entirely due to discrete sources. Above log(L_B) ~ 11.2 L_⊙ no galaxy with only discrete emission is found. We further confirm earlier findings that L_x is strongly correlated with L_B.
Over the entire data range the slope is found to be 2.23 (+/- 0.12). We also find a luminosity dependence of this correlation. Below log L_x = 40.5 erg s^-1 it is consistent with a slope of 1, as expected from discrete emission. Above this value the slope is close to 2, as expected from gaseous emission. Comparing the distribution of X-ray luminosities with the models of Ciotti et al. leads to the conclusion that the vast majority of early-type galaxies are in the wind or outflow phase. Some of the galaxies may have already experienced the transition to the inflow phase. They show X-ray luminosities in excess of the value predicted by cooling flow models with the largest plausible standard supernova rates. A possible explanation for these super X-ray-luminous galaxies is suggested by the smooth transition in the L_x-L_B plane from galaxies to clusters of galaxies. Gas connected to the group environment might cause the X-ray overluminosity.

Arcsecond Positions of UGC Galaxies
We present accurate B1950 and J2000 positions for all confirmed galaxies in the Uppsala General Catalog (UGC). The positions were measured visually from Digitized Sky Survey images with rms uncertainties σ <= [(1.2")^2 + (θ/100)^2]^(1/2), where θ is the major-axis diameter. We compared each galaxy measured with the original UGC description to ensure high reliability. The full position list is available in the electronic version only.

The Mass-to-Light Ratio of Binary Galaxies
We report on the mass-to-light ratio determination based on a newly selected binary galaxy sample, which includes a large number of pairs whose separations exceed a few hundred kpc. The probability distributions of the projected separation and the velocity difference have been calculated considering the contamination of optical pairs, and the mass-to-light (M/L) ratio has been determined based on the maximum likelihood method.
The best estimate of the M/L in the B band for 57 pairs is found to be 28-36 depending on the orbital parameters and the distribution of optical pairs (solar unit: H_0 = 50 km s^-1 Mpc^-1). The best estimate of the M/L for 30 pure spiral pairs is found to be 12-16. These results are relatively smaller than those obtained in previous studies but are consistent with each other within the errors. Although the number of pairs with large separation is significantly increased compared with previous samples, the M/L does not show any tendency of increase but is found to be almost independent of the separation of pairs beyond 100 kpc. The constancy of the M/L beyond 100 kpc may indicate that the typical halo size of spiral galaxies is less than ~100 kpc.

Global X-ray emission and central properties of early type galaxies
Hubble Space Telescope observations have revealed that the central surface brightness profiles of early type galaxies can be divided into two types: "core" profiles and featureless power law profiles, without cores. On the basis of this and previous results, early type galaxies have been grouped into two families. One consists of coreless galaxies, which are also rapidly rotating, nearly isotropic spheroids, and with disky isophotes. The other is made of core galaxies, which are slowly rotating and boxy-distorted. Here I investigate the relationship between global X-ray emission and shape of the inner surface brightness profile, for a sample of 59 early type galaxies. I find a clear dichotomy also in the X-ray properties, in the sense that core galaxies span the whole observed range of L_X values (roughly two orders of magnitude in L_X), while power law galaxies are confined to log L_X (erg s^-1) < 41. Moreover, the relation between L_X and the shape of the central profile seems to be the strongest among the relations of L_X with the basic properties characterizing the two families of early type galaxies.
As an example, L_X is more deeply connected with the shape of the central profile than with the isophotal shape distortion, or the importance of galactic rotation. So, a global property such as L_X, that measures the hot gas content on a galactic scale, turns out to be surprisingly well linked to a nuclear property. Various possible reasons are explored for the origin of the different L_X behavior of core and power law galaxies. While a few explanations can be imagined for the large spread in the X-ray luminosities of core galaxies, an open problem is why power law ones never become very X-ray bright. It is likely that the presence of a central massive black hole, and possibly also the environment, play an important role in determining L_X (i.e., the hot gas content). Therefore the problem of interpreting the X-ray properties of early type galaxies turns out to be more complex than thought so far.

Groups of galaxies. III. Some empirical characteristics.
Not Available

A catalogue of Mg_2 indices of galaxies and globular clusters
We present a catalogue of published absorption-line Mg_2 indices of galaxies and globular clusters. The catalogue is maintained up-to-date in the HYPERCAT database. The measurements are listed together with the references to the articles where the data were published. A coded description of the observations is provided. The catalogue gathers 3541 measurements for 1491 objects (galaxies or globular clusters) from 55 datasets. Compiled raw data for 1060 galaxies are zero-point corrected and transformed to a homogeneous system. Tables 1, 3, and 4 are available in electronic form only at the CDS, Strasbourg, via anonymous ftp 130.79.128.5. Table 2 is available both in text and electronic form.

Total magnitude, radius, colour indices, colour gradients and photometric type of galaxies
We present a catalogue of aperture photometry of galaxies, in UBVRI, assembled from three different origins: (i) an update of the catalogue of Buta et al.
(1995); (ii) published photometric profiles; and (iii) aperture photometry performed on CCD images. We explored different sets of growth curves to fit these data: (i) the Sersic law, (ii) the net of growth curves used for the preparation of the RC3, and (iii) a linear interpolation between the de Vaucouleurs (r^(1/4)) and exponential laws. Finally we adopted the latter solution. Fitting these growth curves, we derive (1) the total magnitude, (2) the effective radius, (3) the colour indices and (4) gradients and (5) the photometric type of 5169 galaxies. The photometric type is defined to statistically match the revised morphologic type and parametrizes the shape of the growth curve. It is coded from -9, for very concentrated galaxies, to +10, for diffuse galaxies. Based in part on observations collected at the Haute-Provence Observatory.

A catalogue of spatially resolved kinematics of galaxies: Bibliography
We present a catalogue of galaxies for which spatially resolved data on their internal kinematics have been published; there is no a priori restriction regarding their morphological type. The catalogue lists the references to the articles where the data are published, as well as a coded description of these data: observed emission or absorption lines, velocity or velocity dispersion, radial profile or 2D field, position angle. Tables 1, 2, and 3 are proposed in electronic form only, and are available from the CDS, via anonymous ftp to cdsarc.u-strasbg.fr (to 130.79.128.5) or via http://cdsweb.u-strasbg.fr/Abstract.html
https://www.gradesaver.com/textbooks/math/algebra/elementary-algebra/chapter-5-exponents-and-polynomials-5-2-multiplying-monomials-problem-set-5-2-page-201/86

# Chapter 5 - Exponents and Polynomials - 5.2 - Multiplying Monomials - Problem Set 5.2 - Page 201: 86
#### Work Step by Step
Consider what this operation tells you to do. Raising a power to a power, $(a^m)^n$, means multiplying $a^m$ by itself $n$ times. Each of those $n$ factors contributes $m$ copies of the base $a$, so the total exponent is $m$ added to itself $n$ times, which is $m\cdot n$. Since repeated addition of the exponent amounts to multiplication, you multiply the exponents: $(a^m)^n = a^{m\cdot n}$.
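The rule can be spot-checked numerically; here is a minimal Python sketch (the particular bases and exponents are arbitrary choices, not from the text):

```python
# Spot-check of the power-to-a-power rule (a^m)^n == a^(m*n)
# over a handful of integer cases.

for a in (2, 3, 10):
    for m in (2, 3):
        for n in (2, 4):
            assert (a**m)**n == a**(m*n)

print("rule holds:", (2**3)**4 == 2**12)  # rule holds: True
```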
https://winstonswaikikicondos.com/north-bay/separable-differential-equations-examples-with-answers-pdf.php

# Separable Differential Equations Examples With Answers Pdf
Calculus Maximus WS 7.3 Separable Diff EQ basd.net. solved separable differential equations. Such equations arise when investigating exponential growth or decay, for example. Examples: Solve the (separable) differential equation. Solve the (separable) differential equation. Solve the following differential equation: sketch the family of solution curves. Videos: See short videos of worked problems for this section. Quiz: Take a quiz. Exercises: See Exercises for 3.3 Separable Differential Equations (PDF). Work online to solve the exercises for this section, or for any.
### Section 9.3 Separable Equations University of Portland
Separable equations introduction Differential equations. Separable Differential Equations: We start with the definition of a separable differential equation. Definition 1.1. A separable equation is a first order differential equation in which the expression for dy/dx can be factored as a function of x times a function of y. In other words, it is an equation of the form dy/dx = g(x) f(y) (we write it as a fraction for convenience). To solve, An example of a linear equation is because, for , it can be written in the form Notice that this differential equation is not separable because it’s impossible to factor the
Basics and Separable Solutions: We now turn our attention to differential equations in which the “unknown function to be determined” — which we will usually denote by u … Example 1: Solve and find a general solution to the differential equation y' = 3 e^y x^2. Solution to Example 1: We first rewrite the given equation in differential form and with variables separated, the y's on one side and the x's on the other side, as follows.
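Assuming the flattened equation in Example 1 reads y' = 3e^y x^2 (a reconstruction of "y ' = 3 e y x 2"), the separation would run:

```latex
\begin{aligned}
\frac{dy}{dx} &= 3e^{y}x^{2} \\
e^{-y}\,dy &= 3x^{2}\,dx \\
\int e^{-y}\,dy &= \int 3x^{2}\,dx \\
-e^{-y} &= x^{3} + C \\
y &= -\ln\left(K - x^{3}\right), \qquad K = -C .
\end{aligned}
```

The constant K must keep the logarithm's argument positive on the interval of interest.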
Suppose we have the first order differential equation P(y) dy/dx = Q(x), where Q(x) and P(y) are functions involving x and y only, respectively. For example y^2 dy/dx = 1/x^3 or (1/y^2) dy/dx = (x - 3)/x^3. We can solve these differential equations using the technique of separating variables. General Solution: By taking the original differential equation P(y) dy/dx = Q(x) we can solve this by

A separable differential equation is a common kind of differential equation that is especially straightforward to solve. Separable equations have the form $$\frac{dy}{dx}=f(x)g(y)$$, and are called separable because the variables $$x$$ and $$y$$ can be brought to opposite sides of the equation.
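As a numerical sanity check on the first example above (reading the flattened fraction as y^2 dy/dx = 1/x^3), here is a sketch; the constant C is an arbitrary choice:

```python
# Check the separated solution of  y^2 dy/dx = 1/x^3.
# Separating gives  y^2 dy = x^-3 dx, hence  y^3/3 = -1/(2 x^2) + C,
# i.e.  y(x) = (3C - 3/(2 x^2))**(1/3).

C = 2.0  # arbitrary constant of integration for the check

def y(x):
    return (3*C - 3.0/(2*x*x)) ** (1.0/3.0)

def check(x, h=1e-6):
    dydx = (y(x + h) - y(x - h)) / (2*h)   # central difference
    return y(x)**2 * dydx, 1.0/x**3        # both sides of the ODE

lhs, rhs = check(1.5)
print(abs(lhs - rhs) < 1e-6)  # True: the solution satisfies the ODE
```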
An example of a linear equation is because, for , it can be written in the form Notice that this differential equation is not separable because it’s impossible to factor the A first-order differential equation is called separable if it can be written in the form p(y) dy dx = q(x). (1.4.1) The solution technique for a separable differential equation is given in Theorem 1.4.2. Theorem 1.4.2 If p(y)and q(x)are continuous, then Equation (1.4.1) has the general solution p(y)dy= q(x)dx+c, (1.4.2) where c is an arbitrary constant. Proof
A first order differential equation $$y' = f\left( {x,y} \right)$$ is called a separable equation if the function $$f\left( {x,y} \right)$$ can be factored into the product of two functions of $$x$$ and $$y:$$

Exact Differential Equations • Integrating Factors. Exact Differential Equations: In Section 5.6, you studied applications of differential equations to growth and decay problems. In Section 5.7, you learned more about the basic ideas of differential equations and studied the solution technique known as separation of variables. In this chapter, you will learn more about solving differential equations.
For similar discussion and examples, see David Lomen’s article “Solving Separable Differential Equations: Antidifferentiation and Domain Are Both Needed” in the Course Home Pages section of AP Calculus at the AP Central website.
Section 2-3 : Exact Equations. The next type of first order differential equations that we’ll be looking at is exact differential equations. Before we get into the full details behind solving exact differential equations it’s probably best to work an example that will help to show us just what an exact differential equation …
View, download and print Worksheet 5.1 - Separable Differential Equations With Answers - Calculus Maximus pdf template or form online. 392 Equation Worksheet … Advanced Math Solutions – Ordinary Differential Equations Calculator, Bernoulli ODE Last post, we learned about separable differential equations. In this post, we will learn about Bernoulli differential...
Separable differential equations can be described as first-order first-degree differential equations where the expression for the derivative in terms of the variables is a multiplicatively separable function of the two variables.
Linear Differential Equations web.stanford.edu. S. Ghorai, Lecture III: Solution of first order equations. 1. Separable equations. These are equations of the form y' = f(x)g(y). Assuming g is nonzero, we divide by g and integrate to find.
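As an illustration of the divide-by-g-and-integrate recipe, here is a sketch; the particular equation y' = x·y and the tolerance are my own choices, not from the lecture notes:

```python
import math

# y' = x*y is separable (f(x) = x, g(y) = y). Dividing by g and
# integrating gives ln y = x^2/2 + c, i.e. y = y0 * exp(x^2/2) for y(0) = y0.
# A forward-Euler integration should converge to that closed form.

def euler(y0, x_end, n):
    h = x_end / n
    x, y = 0.0, y0
    for _ in range(n):
        y += h * (x * y)   # slope is f(x)*g(y) = x*y
        x += h
    return y

exact = math.exp(1.0**2 / 2)          # y(1) with y0 = 1
approx = euler(1.0, 1.0, 100_000)
print(abs(approx - exact) < 1e-3)     # True: Euler matches the separated solution
```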
### Separable Differential Equations Calculator - Symbolab
Separable Differential Equations Scribd. 4. DIFFERENTIAL EQUATIONS 4.1: CONSTRUCT THE DIFFERENTIAL EQUATIONS 4.1.1: Identify Type Of Differential Equations Order → The number of the highest derivative in a differential equation.
### Separable Differential Equations Calcworkshop
Separable Differential Equations University of British. Separable Differential Equations A differential equation is an equation for an unknown function that involves the derivative of the unknown function. https://en.wikipedia.org/wiki/Separable_ordinary_differential_equation Answer interactive questions on separable differential equations. See what you know about specifics like how to solve a differential equation with 0 as a variable and how to identify a separable differential equation.
• Separable Equations First Order Equations Differential
• Separable First Order Differential Equations Basic
Separable means that we can keep those two separately and do an integral of f and an integral of g and we're in business. OK. Examples. Suppose that f of y is 1. Then we have this simplest differential equation of all, dy/dt is some function of t. That's what calculus is for. y is the integral of g. Suppose there was no t. Just a 1 over f of y, with g of t equal one. Then I bring the f of y up
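The transcript's simplest case, dy/dt = g(t), gives y as the integral of g, which is easy to check by numerical quadrature (the choice g(t) = 3t^2 is an arbitrary example, not from the transcript):

```python
# For dy/dt = g(t) with y(0) = 0, the solution is y(T) = ∫_0^T g(t) dt.
# With g(t) = 3t^2 the antiderivative is t^3, so y(2) should be 8.

def integrate(g, T, n=10_000):
    h = T / n
    # midpoint rule
    return sum(g((k + 0.5) * h) for k in range(n)) * h

y1 = integrate(lambda t: 3*t*t, 2.0)
print(abs(y1 - 8.0) < 1e-4)  # True: quadrature matches the antiderivative
```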
Separable equations are the class of differential equations that can be solved using this method. "Separation of variables" allows us to rewrite differential equations so we obtain an equality between two integrals we can evaluate. We now examine a solution technique for finding exact solutions to a class of differential equations known as separable differential equations. These equations are common in a wide variety of disciplines, including physics, chemistry, and engineering. We illustrate a few applications at …
Mixing Tank Separable Differential Equations Examples When studying separable differential equations, one classic class of examples is the mixing tank problems. Here we will consider a few variations on this classic. Example 1. A tank has pure water flowing into it at 10 l/min. The contents of the tank are kept thoroughly mixed, and the contents flow out at 10 l/min. Initially, the tank
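A sketch of the setup in Example 1, assuming the tank initially holds dissolved salt; the tank volume and initial salt content below are hypothetical, since the excerpt states only the 10 l/min flow rates:

```python
import math

# Hypothetical numbers: a 100 l tank holding 5 kg of dissolved salt.
# Pure water enters at 10 l/min and the well-mixed contents leave at
# 10 l/min, so the volume is constant and dS/dt = -(10/100) * S —
# a separable ODE with solution S(t) = S0 * exp(-0.1 t).

V, rate, S0 = 100.0, 10.0, 5.0

def salt(t):
    return S0 * math.exp(-(rate / V) * t)

# After one time constant V/rate = 10 min the salt has dropped to S0/e.
print(round(salt(10.0), 4))  # 1.8394
```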
25/08/2011 · A basic lesson on how to solve separable differential equations. Such equations have important applications in the modelling of dynamic phenomena.
Paul’s Online Notes emphasizes this fact when stating that for a differential equation to be separable, all the y’s in the differential equation must be multiplied by the derivative, and all the x’s in the differential equation must be on the other side of the equal sign.
Separable Differential Equation. Sanjay is a microbiologist, and he's trying to come up with a mathematical model to describe the population growth of a certain type of bacteria.
Determine whether each of the following differential equations is separable or not. A constant of integration is always present. We will use the general solutions from the previous examples.
## Separable First Order Differential Equations Basic
### Exact Differential Equations Cengage
Differential equations arise in many problems in physics, engineering, and other sciences. The following examples show how to solve differential equations in …
1/02/2017 · This calculus video tutorial explains how to solve first order differential equations using separation of variables. It explains how to integrate the function to find the general solution and how …
…differential equation at the twelve points indicated. b) Let y = f(x) be the particular solution to the differential equation with the initial condition f(1) = 1.
### Separable Differential Equations analyzemath.com
### Linear Differential Equations web.stanford.edu
Separable differential equation Calculus. Separable Differential Equations A differential equation is an equation for an unknown function that involves the derivative of the unknown function. https://en.wikipedia.org/wiki/Inseparable_differential_equation Separable equations are the class of differential equations that can be solved using this method. "Separation of variables" allows us to rewrite differential equations so we obtain an equality between two integrals we can evaluate..
Separable Differential Equation. Sanjay is a microbiologist, and he's trying to come up with a mathematical model to describe the population growth of a certain type of bacteria.
Examples: Solve the (separable) differential equation. Solve the following differential equation; sketch the family of solution curves. Videos: See short videos of worked problems for this section. Quiz: Take a quiz. Exercises: See Exercises for 3.3 Separable Differential Equations (PDF). Work online to solve the exercises for this section.
Basics and Separable Solutions. We now turn our attention to differential equations in which the “unknown function to be determined” — which we will usually denote by u … We now examine a solution technique for finding exact solutions to a class of differential equations known as separable differential equations. These equations are common in a wide variety of disciplines, including physics, chemistry, and engineering. We illustrate a few applications at …
Mixing Tank Separable Differential Equations Examples. When studying separable differential equations, one classic class of examples is the mixing tank problems. Here we will consider a few variations on this classic. Example 1. A tank has pure water flowing into it at 10 l/min. The contents of the tank are kept thoroughly mixed, and the contents flow out at 10 l/min. Initially, the tank …
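The setup above is cut off mid-sentence, but in the standard version of this problem the tank volume V stays constant (inflow rate equals outflow rate r), and the salt mass S(t) satisfies the separable equation dS/dt = -(r/V) S, giving S(t) = S0 e^(-rt/V). A quick numerical check of that closed form against a naive Euler integration (the values r = 10 l/min, V = 100 l and S0 = 5 kg are made-up illustration numbers):

```python
# Mixing tank: pure water in at r l/min, mixed outflow at r l/min,
# constant volume V. The salt mass S obeys dS/dt = -(r/V) * S.
import math

r, V, S0 = 10.0, 100.0, 5.0   # flow rate, tank volume, initial salt (illustrative)

def exact(t):
    # Closed-form solution of the separable ODE
    return S0 * math.exp(-r * t / V)

def euler(t, steps=100_000):
    # Forward Euler integration of dS/dt = -(r/V) S
    dt, S = t / steps, S0
    for _ in range(steps):
        S += dt * (-(r / V) * S)
    return S

t = 30.0  # minutes
print(exact(t), euler(t))  # the two agree to about 4 decimal places
```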
An example of a linear equation is … because, for …, it can be written in the form …. Notice that this differential equation is not separable because it’s impossible to factor the expression for dy/dx as a function of x times a function of y. A first-order differential equation is called separable if it can be written in the form p(y) dy/dx = q(x). (1.4.1) The solution technique for a separable differential equation is given in Theorem 1.4.2. Theorem 1.4.2: If p(y) and q(x) are continuous, then Equation (1.4.1) has the general solution ∫ p(y) dy = ∫ q(x) dx + c, (1.4.2) where c is an arbitrary constant.
… differential equation at the twelve points indicated. b) Let y = f(x) be the particular solution to the differential equation with the initial condition f(1) = 1.
Separable differential equations can be described as first-order first-degree differential equations where the expression for the derivative in terms of the variables is a multiplicatively separable function of the two variables. A first order differential equation $$y’ = f\left( {x,y} \right)$$ is called a separable equation if the function $$f\left( {x,y} \right)$$ can be factored into the product of two functions of $$x$$ and $$y:$$
The method of separation of variables is also used to solve a wide range of linear partial differential equations with boundary and initial conditions, such as the heat equation, wave equation, Laplace equation, Helmholtz equation and biharmonic equation.
Separable Differential Equations. We start with the definition of a separable differential equation. Definition 1.1. A separable equation is a first order differential equation in which the expression for dy/dx can be factored as a function of x times a function of y. In other words, it is an equation of the form dy/dx = g(x) f(y) (we write it as a fraction for convenience). To solve …
A separable differential equation is a common kind of differential equation that is especially straightforward to solve. Separable equations have the form $$\frac{dy}{dx}=f(x)g(y)$$, and are called separable because the variables $$x$$ and $$y$$ can be brought to opposite sides of the equation.
Answer interactive questions on separable differential equations. See what you know about specifics like how to solve a differential equation with 0 as a variable and how to identify a separable equation.
Example 1: Solve and find a general solution to the differential equation y' = 3e^y x^2. Solution to Example 1: We first rewrite the given equation in differential form and with variables separated, the y's on one side and the x's on the other side, as follows.
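Carrying Example 1 through: separating variables gives e^(-y) dy = 3x^2 dx, integrating gives -e^(-y) = x^3 + c, and solving for y gives the general solution y = -ln(K - x^3) with K = -c. A small numerical sanity check of that solution (the choice K = 2 is arbitrary):

```python
# Check that y(x) = -ln(K - x**3) satisfies y' = 3 * exp(y) * x**2
import math

K = 2.0  # arbitrary constant of integration, chosen only for the check

def y(x):
    return -math.log(K - x**3)

x, h = 0.5, 1e-6
deriv = (y(x + h) - y(x - h)) / (2 * h)   # central difference for y'(x)
rhs = 3.0 * math.exp(y(x)) * x**2         # right-hand side of the ODE
print(deriv, rhs)  # should agree to several decimal places
```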
Differential equations arise in many problems in physics, engineering, and other sciences. The following examples show how to solve differential equations in … DIFFERENTIAL EQUATIONS PRACTICE PROBLEMS: ANSWERS. 1. Find the solution of y' + 2xy = x, with y(0) = −2. This is a linear equation. The integrating factor is e …
25/08/2011 · A basic lesson on how to solve separable differential equations. Such equations have important applications in the modelling of dynamic phenomena. | 2021-04-14 07:53:32 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6875289082527161, "perplexity": 639.2494537912892}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038077336.28/warc/CC-MAIN-20210414064832-20210414094832-00588.warc.gz"}
https://brilliant.org/problems/wait-thats-not-integrable/ | # Wait, That's Not Integrable!
Calculus Level 2
$\large\int_{0}^{1}\sin x^2 \, dx$
Which of the following series can be expressed as the value of the integral above?
Hint: Take its Maclaurin series.
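Following the hint: sin(x^2) = Σ_{n≥0} (-1)^n x^(4n+2)/(2n+1)!, and integrating term by term over [0, 1] gives ∫₀¹ sin(x²) dx = Σ_{n≥0} (-1)^n / ((4n+3)(2n+1)!). A few lines of code confirm that this alternating series converges rapidly and agrees with direct numerical integration:

```python
# Evaluate the integral of sin(x^2) over [0, 1] via the integrated
# Maclaurin series: sum over n of (-1)**n / ((4n+3) * (2n+1)!)
import math

def series(terms):
    return sum((-1)**n / ((4*n + 3) * math.factorial(2*n + 1))
               for n in range(terms))

# Compare with a crude midpoint-rule quadrature of sin(x^2) on [0, 1]
N = 200_000
quad = sum(math.sin(((k + 0.5) / N) ** 2) for k in range(N)) / N

print(series(10), quad)  # both ≈ 0.3102683
```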
× | 2019-06-18 22:06:15 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 5, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.925692617893219, "perplexity": 2294.9585251621966}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560627998817.58/warc/CC-MAIN-20190618203528-20190618225528-00330.warc.gz"} |
https://earthscience.stackexchange.com/questions/19020/how-is-global-greenhouse-gas-emission-calculated/19060 | # How is global greenhouse gas emission calculated?
I am sure there are different methodologies to arrive at such a number. Can someone in layman terms describe some of the most widely used and trusted methods and what data are used? Sorry I am outside the field of climate modeling so the simpler the better.
I can imagine a satellite observation-based model that calculates emissions on a spatial basis but I am not sure if our technology is advanced enough to do that accurately. However, this method would not allow segregation of GHG by source/sectors (electricity generation, cement production, agriculture etc.)
• Welcome to EarthScience.SE. Emission modeling could be considered as a field of research of its own (at least from my experience). The emissions are made together from different sources. In German I would say "Man bastelt sie zusammen" ("you cobble them together") :-) . For the CO2 emissions from power production, the emission modelers take the power production/consumption in specific regions, look up the shares of coal and gas power plants, take emission factors for these power plants (CO2 emissions emitted per kWh electricity produced) and calculate the power-production-related CO2 emissions for this region. – daniel.heydebreck Jan 20 at 10:56
• For biomass burning emissions, satellite data (for area and time of fires) combined with emission factors might be used. In general, the emission modelers take basic data of one sector (power consumption, car density, area of arable land, need for heating, ...), some usage statistics (types of power generation facilities, car usage per capita, ...) and CO2/CH4/N2O/... emission factors. From this information they calculate CO2 (or CO2-equivalent) emissions per sector. The sector emissions are, then, merged to total emissions. – daniel.heydebreck Jan 20 at 11:10
• You might look into Appendix II (p. 1288) of the Working Group III contribution to the IPCC’s Fifth Assessment Report (AR5). Looking at the pictures on pages 1294 to 1297 and scanning a bit through the text might give you some more infos (it is not too technical). – daniel.heydebreck Jan 20 at 11:12
• Thank you so much! – RicardoR Jan 21 at 4:58
• Please also note the recent publication Friedlingstein et al. (2019): "Global Carbon Budget 2019". – daniel.heydebreck Feb 5 at 12:48
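The bottom-up recipe sketched in the comments (activity data per sector multiplied by an emission factor, then summed) can be illustrated with a toy inventory; every number below is invented purely for illustration:

```python
# Bottom-up emission inventory: per-sector activity data multiplied by an
# emission factor, then summed. All figures are made-up illustration values.
sectors = {
    "coal power":   (9_000.0, 0.95),  # TWh generated, Mt CO2 per TWh
    "gas power":    (6_000.0, 0.45),  # TWh generated, Mt CO2 per TWh
    "road traffic": (3_500.0, 0.25),  # billion vehicle-km, Mt CO2 per bn vkm
}

per_sector = {name: activity * ef for name, (activity, ef) in sectors.items()}
total_mt = sum(per_sector.values())
print(per_sector)
print(round(total_mt), "Mt CO2 (illustrative total)")
```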
Your question is simple enough, but the answer depends on what exactly you're looking for.
## Who is emitting where right now?
Real-time global monitoring of greenhouse gas emissions with a high spatial resolution is an emerging technology. We have very useful satellites (see Jean-Marie Prival's answer), but they have limitations:
• All existing public satellites with instruments for high-resolution GHG measurements are in low Earth orbit, so they only see a particular place when they pass over, not all the time. So far only the Geostationary Interferometric Infrared Sounder (GIIRS) on the Chinese FengYun (FY)-4A satellite can in theory monitor CO₂ from a geostationary viewpoint, which allows for "continuous" monitoring (probably meaning hourly; it takes a while to scan the area of interest), but from what I've heard GIIRS is not performing very well. Europe plans to launch the Infra-Red Sounder (IRS) in 2023. Either way, their spatial resolution will be much worse than for the low Earth orbit satellites, because geostationary orbit is so far away.
• They rely on visible radiation. Although attempts to retrieve from infrared radiation exist, their information content is quite poor (for an Arctic methane example, see Holl et al. (2016)). In Jean-Marie Prival's answer you can see an illustration of how reflected sunlight is used in a retrieval.
• They usually need clear skies. We can't look below the clouds, but even retrieving above the cloud would require a very accurate characterisation of the cloud, and clouds are tricky. So usually we just assume that the GHG concentration is the same with or without clouds, even when it probably isn't.
• Some private satellites exist with a very high spatial resolution, but I can't find much verifiable information about them. GHGSat claims a spatial resolution of less than 50 metre. Inevitably, that comes at the cost of field of view (12×12 km² claimed), it will only view a spot when actively pointing, so although it may be able to point anywhere on Earth, it will only view very specific areas and is in that sense not global. It appears many similar commercial instruments are planned in the near future.
I can imagine a satellite observation-based model that calculates emissions on a spatial basis but I am not sure if our technology is advanced enough to do that accurately. However, this method would not allow segregation of GHG by source/sectors (electricity generation, cement production, agriculture etc.)
The spatial resolution of about 2 km may be good enough for that, unless the electricity plant is next to the cement producer, the emissions occur at night, the factory is switched off when the satellite happens to pass over, or it's cloudy (there are attempts to retrieve in the presence of clouds, but it's harder).
So while satellites are certainly very useful in GHG monitoring, it's difficult to get everything from satellites alone.
## Where were emissions last month?
We can average daytime GHG concentration measurements over the period of a month. Combined with chemistry and circulation models, we can then try to estimate in what regions of the Earth those emissions may have occurred, but not with a precision high enough to tell "electricity or cement". This is an average of CO₂ measurements for July 2009:
The limitations are less serious now: the satellite has multiple attempts to capture a particular scene, and will usually see at least one clear-sky overpass per month, probably multiple. In the image above, there was probably also some form of data fusion to combine with other sources or fill gaps using neighbouring pixels. The longer the time period we average over, the smoother the distribution will look.
## What were global emissions last year?
However, if you are looking for global emissions averaged over a long time period, we can make use of the observation that CO₂, and to a lesser degree CH₄, is a well-mixed gas. Here, well-mixed means that it stays long enough in the atmosphere to reach pretty much everywhere given enough time. That means that ultimately, it doesn't matter where you emit. That's why "global" CO₂ concentrations may be measured at Mauna Loa (Hawaii, USA), even though this is far away from any emissions. However, that also means that it doesn't tell us whether the CO₂ was emitted in India, Italy, or Idaho. And much of the emitted CO₂ gets absorbed by the oceans, so the delta between this year and last is not enough to determine global emissions.
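The well-mixed-gas bookkeeping can be illustrated with a back-of-envelope calculation. The conversion of roughly 2.13 GtC per ppm of CO₂ and an airborne fraction of about 45% are standard ballpark figures, used here only as assumptions:

```python
# Back-of-envelope: infer global CO2 emissions from the observed rise in the
# well-mixed concentration, correcting for land/ocean uptake.
GTC_PER_PPM = 2.13        # ~2.13 gigatonnes of carbon per ppm CO2 (ballpark)
CO2_PER_C = 44.0 / 12.0   # molar-mass ratio CO2 : C

delta_ppm = 2.5           # illustrative annual rise in ppm
airborne_fraction = 0.45  # share of emitted CO2 that stays airborne (ballpark)

emitted_gtc = delta_ppm * GTC_PER_PPM / airborne_fraction
print(round(emitted_gtc * CO2_PER_C, 1), "Gt CO2 emitted (rough estimate)")
```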
• About spatial resolution: I just found out there is another satellite called GHGSat-D (ghgsat.com/who-we-are/our-satellites/claire) which claims to have a 50m pixel. They have some pretty cool images of CH$_4$ plumes above coal mines or hydroelectric dams (see their "Case studies" section). Unfortunately it's private, so the data is not available... – Jean-Marie Prival Jan 20 at 15:10
• @Jean-MariePrival Huh, interesting. I've added a bullet point. They're not even listed in WMO Oscar, which many commercial Earth observation satellites are... – gerrit Jan 20 at 15:26
• Another way to measure emissions is to use economic data. Sources like the CIA World Factbook cia.gov/library/publications/the-world-factbook give figures for fossil fuel production. Assume that what's dug up or pumped up gets burned fairly soon, do a bit of chemistry, and you get a reasonable ballpark figure. – jamesqf Jan 20 at 18:39
• Interesting, I didn't know these satellites operate by visible lights. I guess there are a lot of interpolations going on! You said night emission is a problem, not sure if there is any diurnal variation of GHG emission that they need to account for before just "filling the gaps"? – RicardoR Jan 21 at 5:00
• @RicardoR Not really, because any CO₂ emitted at night is still going to be there the following day (it's a long lived gas, staying in the atmosphere for hundreds of years). It's just harder to attribute who emitted it, because it will have dispersed. Moonlight or city lights could work as well in principle, but the instrument would need to be very sensitive and the signal to noise ratio would be much poorer. – gerrit Jan 21 at 7:45
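The "bit of chemistry" mentioned in the comment about economic data is mostly combustion stoichiometry: carbon (12 g/mol) burns to CO₂ (44 g/mol), so each tonne of carbon yields 44/12 ≈ 3.67 tonnes of CO₂. A toy version of that ballpark estimate (the production figures and carbon fractions are invented, not real statistics):

```python
# Ballpark CO2 from fossil-fuel production figures via combustion stoichiometry.
CO2_PER_C = 44.0 / 12.0  # mass ratio: one C (12 g/mol) burns to one CO2 (44 g/mol)

# (fuel, production in Mt, approximate carbon mass fraction) -- illustrative only
fuels = [("coal", 8000.0, 0.70), ("oil", 4500.0, 0.85), ("gas", 2800.0, 0.75)]

total_co2_mt = sum(prod * c_frac * CO2_PER_C for _, prod, c_frac in fuels)
print(round(total_co2_mt), "Mt CO2, assuming everything produced is burned")
```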
I can imagine a satellite observation-based model that calculates emissions on a spatial basis but I am not sure if our technology is advanced enough to do that accurately.
It is. The first satellite designed to measure GHG is GOSAT, from the Japanese space agency, launched in 2009 and still active today. It was followed by NASA's OCO-2 in 2014. GOSAT measures CO₂ and CH₄, while OCO-2 measures only CO₂. There is also an OCO-3, which has been sent to the ISS last year, but I'm not sure if it's already active.
So, how does it work? Here is an image from the OCO-3 mission's website, labelled "Artist interpretation of OCO-3 measurement".
OCO-2 mission's website explains the basic principle better than I could do, so I will just copy an excerpt here:
To get the representative values of XCO₂, or the amount of CO₂ in the measured space, the OCO-2 instrument will measure at a given location, the intensity of reflected sunlight off the Earth's surface at specific wavelengths. Gas molecules in the atmosphere absorb the sunlight at specific wavelengths. So when light passes through the Earth's atmosphere, the gases that are present leave a distinguishing fingerprint that can be captured. The OCO-2 spectrometers, working like cameras, will detect these molecular fingerprints. Then the absorption levels shown in these spectra, like a captured image, will tell us how many molecules were in the region where the instrument measured.
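The "molecular fingerprint" idea reduces, for a single absorption line, to the Beer-Lambert law: transmitted intensity I = I₀ e^(-σN) for cross-section σ and column amount N, so N = -ln(I/I₀)/σ. A toy one-wavelength retrieval (the σ and N values are invented; real retrievals fit full spectra and account for scattering, pressure broadening, clouds, etc.):

```python
# Toy Beer-Lambert "retrieval": recover a column amount N from the ratio of
# transmitted to incident intensity at one absorbing wavelength.
import math

sigma = 2.0e-23   # absorption cross-section per molecule, cm^2 (invented)
N_true = 8.0e21   # true column density, molecules per cm^2 (invented)

I0 = 1.0                                 # incident (top-of-atmosphere) intensity
I = I0 * math.exp(-sigma * N_true)       # forward model: attenuated intensity

N_retrieved = -math.log(I / I0) / sigma  # invert Beer-Lambert
print(N_retrieved)  # recovers N_true
```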
There are also ground-based measurements; they are more precise but local to a single site, while satellite measurements have global coverage. Ground-based measurements are actually used to calibrate the satellites. Also, ground-based measurements give concentrations at the surface, while satellite measurements give "column-averaged concentrations" through the atmosphere and are not able (yet) to do vertical profiles, i.e. to know at what altitude the gas contributing to the signal is located.
If you want to dive more in-depth into this, there is a nice "Guidebook on the use of satellite greenhouse gases observation data..." (Matsunaga & Maksyutov, 2018).
• The information content in night-time satellite-based methane retrievals is very poor, in particular in the far north. See this article I did in 2016. – gerrit Jan 20 at 14:09
• Well, you are certainly more qualified than me to answer, but since the OP asked for "layman terms" I figured I would give it a shot! :) – Jean-Marie Prival Jan 20 at 14:28 | 2020-11-29 17:34:32 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 5, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5578320622444153, "perplexity": 1766.4822825847632}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-50/segments/1606141201836.36/warc/CC-MAIN-20201129153900-20201129183900-00311.warc.gz"} |
https://forum.uipath.com/t/could-any-one-let-me-know-how-to-add-a-text-in-image-file-using-uipath/87304 | # Could any one let me know how to add a text in image file using UIPath
Could any one let me know how to add a text in image file using UIPath. I have a scenario where I have to modify text in an existing image (.JPG) file and replace it with my own text. Eg: Send birthday emails to employees on daily basis.
This is a sample C# snippet for writing text on an image.
For drawing on the bitmap we use the System.Drawing Graphics class.
string firstText = "Hello";
string secondText = "World";
PointF firstLocation = new PointF(10f, 10f);
PointF secondLocation = new PointF(10f, 50f);
string imageFilePath = @"path\picture.bmp";
Bitmap bitmap = (Bitmap)Image.FromFile(imageFilePath);//load the image file
using(Graphics graphics = Graphics.FromImage(bitmap))
{
using (Font arialFont = new Font("Arial", 10))
{
graphics.DrawString(firstText, arialFont, Brushes.Blue, firstLocation);
graphics.DrawString(secondText, arialFont, Brushes.Red, secondLocation);
}
}
bitmap.Save(imageFilePath);//save the image file
More details on Graphics.DrawString -
Regards,
Karthik Byggari
2 Likes
Thanks Karthik.
Is it possible using UIPath.
Yes. You have to create variables of correct data types and with Assign Activities you can achieve that.
I don’t have sample workflow now. I will send you the sample workflow today or tomorrow at the latest.
Regards,
Karthik Byggari
Thanks much Karthik. I will try but plz provide me the sample workflow at your convenient time.
1 Like
Main.xaml (7.0 KB)
try this
Hi Karthik,
When I tried I am getting the below error. Kindly check and correct if anything missing.
import this dll from imports
Regards,
hello should be “hello”
Arial, 10 - should be of font type
Hi Karthik,
I even tried that.
Sry for bothering you.
Click on the variables pane; there I have defined a variable called font
Just change that as per your need
Regards
Please update the datatypes of the variables defined.
@Srinivasch
Thanks Aditya… now no errors, but while running I am getting the below error.
Hi
please set input file path and new file path different
input file C:\path\to\image\file.jpg
new file path C:\path\to\image\file12324.jpg
Thanks Much…its working
@Srinivasch
Is there any possibility to get the data from excel file instead of hardcoding in FirstText.
@Srinivasch
make that firstText and newfilepath variables as arguments
and pass necessary excel data.
regards, | 2020-10-23 00:56:59 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3679894804954529, "perplexity": 10218.476794345845}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107880401.35/warc/CC-MAIN-20201022225046-20201023015046-00201.warc.gz"} |
https://www.ncatlab.org/nlab/show/hom-functor+preserves+limits | # nLab hom-functor preserves limits
# Contents
## Idea
One of the basic facts of category theory is that the hom-functor on a category $\mathcal{C}$ preserves limits in both variables (remembering that a limit in the first variable, due to contravariance, is actually a colimit in $\mathcal{C}$).
## Statement
### Ordinary hom-functor
###### Proposition
(hom-functor preserves limits)
Let $\mathcal{C}$ be a category and write
$Hom_{\mathcal{C}} \;\colon\; \mathcal{C}^{op} \times \mathcal{C} \longrightarrow Set$
for its hom-functor. This preserves limits in both its arguments (recalling that a limit in the opposite category $\mathcal{C}^{op}$ is a colimit in $\mathcal{C}$).
More in detail, let $X_\bullet \colon \mathcal{I} \longrightarrow \mathcal{C}$ be a diagram. Then:
1. If the limit $\underset{\longleftarrow}{\lim}_i X_i$ exists in $\mathcal{C}$ then for all $Y \in \mathcal{C}$ there is a natural isomorphism
$Hom_{\mathcal{C}}\left(Y, \underset{\longleftarrow}{\lim}_i X_i \right) \simeq \underset{\longleftarrow}{\lim}_i \left( Hom_{\mathcal{C}}\left( Y, X_i \right) \right) \,,$
where on the right we have the limit over the diagram of hom-sets given by
$Hom_{\mathcal{C}}(Y,-) \circ X \;\colon\; \mathcal{I} \overset{X}{\longrightarrow} \mathcal{C} \overset{Hom_{\mathcal{C}}(Y,-) }{\longrightarrow} Set\,.$
2. If the colimit $\underset{\longrightarrow}{\lim}_i X_i$ exists in $\mathcal{C}$ then for all $Y \in \mathcal{C}$ there is a natural isomorphism
$Hom_{\mathcal{C}}\left(\underset{\longrightarrow}{\lim}_i X_i ,Y\right) \simeq \underset{\longleftarrow}{\lim}_i \left( Hom_{\mathcal{C}}\left( X_i , Y\right) \right) \,,$
where on the right we have the limit over the diagram of hom-sets given by
$Hom_{\mathcal{C}}(-,Y) \circ X \;\colon\; \mathcal{I}^{op} \overset{X}{\longrightarrow} \mathcal{C}^{op} \overset{Hom_{\mathcal{C}}(-,Y) }{\longrightarrow} Set\,.$
###### Proof
We give the proof of the first statement. The proof of the second statement is formally dual.
First observe that, by the very definition of limiting cones, maps out of some $Y$ into them are in natural bijection with the set $Cones\left(Y, X_\bullet \right)$ of cones over the diagram $X_\bullet$ with tip $Y$:
$Hom\left( Y, \underset{\longleftarrow}{\lim}_{i} X_i \right) \;\simeq\; Cones\left( Y, X_\bullet \right) \,.$
Hence it remains to show that there is also a natural bijection like so:
$Cones\left( Y, X_\bullet \right) \;\simeq\; \underset{\longleftarrow}{\lim}_{i} \left( Hom(Y,X_i) \right) \,.$
Now, again by the very definition of limiting cones, a single element in the limit on the right is equivalently a cone of the form
$\left\{ \array{ && \ast \\ & {}^{\mathllap{const_{p_i}}}\swarrow && \searrow^{\mathrlap{const_{p_j}}} \\ Hom(Y,X_i) && \underset{X_\alpha \circ (-)}{\longrightarrow} && Hom(Y,X_j) } \right\}_{i, j \in Obj(\mathcal{I}), \alpha \in Hom_{\mathcal{I}}(i,j) } \,.$
This is equivalently for each object $i \in \mathcal{I}$ a choice of morphism $p_i \colon Y \to X_i$, such that for each pair of objects $i,j \in \mathcal{I}$ and each $\alpha \in Hom_{\mathcal{I}}(i,j)$ we have $X_\alpha \circ p_i = p_j$. And indeed, this is precisely the characterization of an element in the set $Cones\left( Y, X_\bullet \right)$.
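As an elementary illustration of the first statement in the category of finite sets: morphisms into a binary product $A \times B$ correspond to pairs of morphisms, so $|Hom(Y, A \times B)| = |Hom(Y,A)| \cdot |Hom(Y,B)|$. Brute-force enumeration confirms the count (a toy check, not part of the proof):

```python
# Count maps between finite sets to check Hom(Y, A x B) ~ Hom(Y, A) x Hom(Y, B).
from itertools import product

def homs(src, tgt):
    # All functions src -> tgt, encoded as tuples of images of src's elements
    return list(product(tgt, repeat=len(src)))

Y, A, B = range(3), range(2), range(4)
AxB = list(product(A, B))

lhs = len(homs(Y, AxB))                  # maps Y -> A x B
rhs = len(homs(Y, A)) * len(homs(Y, B))  # pairs of maps (Y -> A, Y -> B)
print(lhs, rhs)  # 512 512
```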
### Internal hom-functor
###### Proposition
(internal hom-functor preserves limits)
Let $\mathcal{C}$ be a symmetric closed monoidal category with internal hom-bifunctor $[-,-]$. Then this bifunctor preserves limits in the second variable, and sends colimits in the first variable to limits:
$[X, \underset{\underset{j \in \mathcal{J}}{\longleftarrow}}{\lim} Y(j)] \;\simeq\; \underset{\underset{j \in \mathcal{J}}{\longleftarrow}}{\lim} [X, Y(j)]$
and
$[\underset{\underset{j \in \mathcal{J}}{\longrightarrow}}{\lim} Y(j),X] \;\simeq\; \underset{\underset{j \in \mathcal{J}}{\longleftarrow}}{\lim} [Y(j),X]$
###### Proof
For $X \in \mathcal{C}$ any object, $[X,-]$ is a right adjoint by definition, and hence preserves limits by adjoints preserve (co-)limits.
For the other case, let $Y \;\colon\; \mathcal{J} \to \mathcal{C}$ be a diagram in $\mathcal{C}$, and let $C \in \mathcal{C}$ be any object. Then there are isomorphisms
\begin{aligned} Hom_{\mathcal{C}}(C, [ \underset{\underset{j \in \mathcal{J}}{\longrightarrow}}{\lim} Y(j), X ] ) & \simeq Hom_{\mathcal{C}}( C \otimes \underset{\underset{j \in \mathcal{J}}{\longrightarrow}}{\lim} Y(j), X ) \\ & \simeq Hom_{\mathcal{C}}( \underset{\underset{j \in \mathcal{J}}{\longrightarrow}}{\lim} (C \otimes Y(j)), X ) \\ & \simeq \underset{\underset{j \in \mathcal{J}}{\longleftarrow}}{\lim} Hom_{\mathcal{C}}( (C \otimes Y(j)), X ) \\ & \simeq \underset{\underset{j \in \mathcal{J}}{\longleftarrow}}{\lim} Hom_{\mathcal{C}}( C, [Y(j), X] ) \\ & \simeq Hom_{\mathcal{C}}( C, \underset{\underset{j \in \mathcal{J}}{\longleftarrow}}{\lim} [Y(j), X] ) \end{aligned}
which are natural in $C \in \mathcal{C}$, where we used that the ordinary hom-functor respects (co)limits as shown (see at hom-functor preserves limits), and that the left adjoint $C \otimes (-)$ preserves colimits (see at adjoints preserve (co-)limits).
Hence by the fully faithfulness of the Yoneda embedding, there is an isomorphism
$\left[ \underset{\underset{j \in \mathcal{J}}{\longrightarrow}}{\lim} Y(j), X \right] \overset{\simeq}{\longrightarrow} \underset{\underset{j \in \mathcal{J}}{\longleftarrow}}{\lim} [Y(j), X] \,.$
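A concrete special case (not spelled out on this page, but an immediate instance of the two displayed isomorphisms): taking the diagram to be a binary product, respectively a binary coproduct, specializes them to

```latex
% Internal hom into a product, and out of a coproduct:
[X, A \times B] \;\simeq\; [X, A] \times [X, B]
\qquad\qquad
[A \sqcup B, X] \;\simeq\; [A, X] \times [B, X]
```

For $\mathcal{C} = Set$ with $[-,-]$ the function set, these are the usual bijections between maps into a product and pairs of maps, and between maps out of a disjoint union and pairs of maps.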
Last revised on August 1, 2018 at 08:27:24. See the history of this page for a list of all contributions to it.
https://tex.stackexchange.com/questions/131350/long-frames-in-mdframed/131360 | # Long frames in mdframed
I need to use the mdframed package with frames extending over many pages, e.g. more than 20 pages. Consider the following code:
\documentclass{article}
\usepackage[framemethod=tikz]{mdframed}
\usepackage{expl3}
\ExplSyntaxOn
\cs_new_eq:NN \Repeat \prg_replicate:nn
\ExplSyntaxOff
\begin{document}
\begin{mdframed}
\Repeat{1500}{xxx\\}
\end{mdframed}
\end{document}
produces (with pdflatex) the following error
Underfull \hbox (badness 10000) in paragraph at lines 9--10
! Dimension too large.
<argument> \dimexpr \ht \mdf@splitbox@one
+\dp \mdf@splitbox@one \relax
l.10 \end{mdframed}
What may be the problem here?
• TeX is happy to build boxes whose dimensions exceed \maxdimen, as long as you don't try to use those dimensions. The maximum dimension is a bit less than six meters; assuming a text height of 25cm, no more than 23/24 pages can fit. – egreg Sep 3 '13 at 10:48
• I see :) It would be nice if this restriction could be circumvented somehow? – Håkon Hægland Sep 3 '13 at 10:52
• Just saw @egreg's comment after I posted my answer:-) As I comment below you can actually just ignore it in this case, circumventing it would probably involve re-writing mdframed to use the output routine rather than \vsplit so that the content is collected on the main vertical list rather than in a box, but that might be tricky.... – David Carlisle Sep 3 '13 at 11:21
It's actually a tricky error to trap: a box containing more content than \maxdimen isn't itself an error, and you can typeset, split, or unbox its contents, but any reference to the \ht of the box, even just to test it with \ifdim\ht\mybox>..., results in an error.
\batchmode
• Thanks! This seems to work. I wonder, is there any drawbacks of using \batchmode ? ( In "TeX by topic" page 232, I find: "\batchmode TEX fixes errors itself and performs an emergency stop on serious errors such as missing input files, but no terminal output is generated." – Håkon Hægland Sep 3 '13 at 12:00
• @HåkonHægland well the drawback is that you don't get informed (on the terminal) of errors, but if you don't make errors that isn't a concern (you can just do it locally for mdframed and put it back with \errorstopmode) – David Carlisle Sep 3 '13 at 12:27
• @MarcoDaniel one solution would be to literally do as here, insert \batchmode before you do the tests, making sure that you order them such that the automatic recovery always takes the "large" branch. It is a global setting but you can use the etex \interactionmode to find out what the setting was initially and restore it to that after the test. A more complete fix would be to accumulate the body in bits and once you have more than a page worth split it off and ship it out so you never get this big, but that is probably a much bigger change to the code? – David Carlisle Sep 3 '13 at 18:00
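Putting the accepted workaround together with the comments above, a minimal sketch applied to the question's own example (hedged: which branch mdframed's silent error recovery takes can depend on the package version, as discussed in the comments):

```latex
\documentclass{article}
\usepackage[framemethod=tikz]{mdframed}
\usepackage{expl3}
\ExplSyntaxOn
\cs_new_eq:NN \Repeat \prg_replicate:nn
\ExplSyntaxOff
\begin{document}
% \batchmode lets TeX recover from the harmless "Dimension too large"
% raised by mdframed's internal \ifdim test without stopping;
% \errorstopmode restores normal error reporting afterwards.
\batchmode
\begin{mdframed}
\Repeat{1500}{xxx\\}
\end{mdframed}
\errorstopmode
\end{document}
```

The log file is still written in \batchmode, so genuine errors occurring inside the frame can be diagnosed after the fact.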
https://www.bloombergprep.com/gmat/practice-question/1/665/quantitative-section-quant-fundamentals-percents-percent-translation/
# Percents: Percent Translation
Thirty percent of forty percent of fifty is sixty percent of what percent of two hundred?
Incorrect. [[snippet]] You must have missed a zero somewhere along the way.
**Solve the resulting equation**: The 50 on the left side cancels with the factors of 10 and 5 in the denominator. That results in: > $$\displaystyle \frac{3}{10}\cdot\frac{2}{5}\cdot50 = \frac{3}{5}\cdot 2x$$ > $$\displaystyle 6 = \frac{6x}{5}$$ Multiplying by 5 and dividing by 6 gives $$5 = x$$. That is, of course 5%.
Incorrect. [[snippet]] Your equation should look like this: $$\displaystyle\frac{30}{100}\cdot\frac{40}{100}\cdot 50 = \frac{60}{100}\cdot\frac{x}{100} \cdot 200$$ The rest is reducing and combining fractions.
Correct. [[snippet]] Translate: Use the percent language to translate the question, "Thirty percent of forty percent of fifty is sixty percent of what percent of two hundred". > $$\displaystyle 30\% \mbox{ of } 40\% \mbox{ of } 50 = 60\% \mbox{ of } x\% \mbox{ of }200$$ > $$\displaystyle\frac{30}{100}\cdot\frac{40}{100}\cdot 50 = \frac{60}{100}\cdot\frac{x}{100} \cdot 200$$ Reduce the fractions: When you reduce the fraction, you get $$\frac{30}{100} = \frac{3}{10}$$, $$\frac{40}{100} = \frac{2}{5}$$, $$\frac{60}{100} = \frac{3}{5}$$. On the right side, the 200 reduces with the 100 in the denominator to give 2. > $$\displaystyle \frac{3}{10}\cdot\frac{2}{5}\cdot50 = \frac{3}{5}\cdot 2x$$
0.5%
1%
5%
50%
500%
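The translation above can be sanity-checked numerically; a quick sketch:

```python
# "Thirty percent of forty percent of fifty" on the left;
# "sixty percent of x percent of two hundred" on the right.
lhs = 30 * 40 * 50 / (100 * 100)    # thirty percent of forty percent of fifty
x = 5                               # candidate answer, in percent
rhs = 60 * x * 200 / (100 * 100)    # sixty percent of x percent of two hundred
print(lhs, rhs)                     # 6.0 6.0
x_solved = lhs * 100 * 100 / (60 * 200)
print(x_solved)                     # 5.0, i.e. answer choice 5%
```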
http://openstudy.com/updates/4ebbbdfde4b021de86cef9b5 | ## King 4 years ago: Help! For how many positive integers n is n^3 - 8n^2 + 20n - 13 a prime number?
1. anonymous
factorize it
2. anonymous
how watever i get is not a prime number
3. anonymous
why?
4. anonymous
oh
5. anonymous
but then ii have to plug in millions of values
6. anonymous
what u want to calculate
7. anonymous
(n^3 -8n^2+20n-13) a prime number
8. anonymous
or n is a prime number
9. anonymous
for n i will have to put millions of values to find how may n positive integers satisfy n^3 -8n^2+20n-13 to be a prime number
10. anonymous
ok
11. anonymous
so,what to do??
12. anonymous
anything i can do to narrow down the possobilities
13. anonymous
14. anonymous
ok see for 2 it is prime number
15. anonymous
wait i took wrong concept
16. anonymous
$(x-1) \times [(x-7)x + 13]$
17. anonymous
where x is n
18. anonymous
ok
19. anonymous
now see the first term is (x-1). and other than 2 no other even number is a prime number right
20. anonymous
21. anonymous
no n does not have to be a prime number n^3 -8n^2+20n-13 has to be a prime number
22. anonymous
yeah i know (x−1)×[(x−7)x+13] it has to be prime
23. anonymous
yup so x can be anything greater or equal to 2
24. anonymous
first term is (x-1) and two is the only even prime number.
25. anonymous
but if u will subtract 1 from any other prime number the resultant will be divisible by 2
26. anonymous
getting my point or not
27. anonymous
x can be 4
28. anonymous
yeah u r right
29. anonymous
it means that for any odd integer other than 3 u will not get a prime number
30. anonymous
yeah
31. anonymous
Superb sheggy!
32. anonymous
also the number cannot be greater than 7
33. anonymous
oh yeah
34. anonymous
thanks fool bhai
35. anonymous
so u are having a limited space to work out that is from 1 to 7
36. anonymous
yes sheg u got it so answer is n=2,3,4
37. anonymous
no actually 2 to 7
38. anonymous
hahahaa right 2 to 7
39. anonymous
but for 7 it will be 0
40. anonymous
wait why cant it be greater than 7?
41. anonymous
try any number u will get the answer
42. anonymous
odd numbers cannot be included ok thats true
43. anonymous
as i had told u any odd number subtracted by one will result in even number so it will be divisible by 2.........do u accept it or not
44. anonymous
is 323 a prime number?
45. anonymous
yeah i acept
46. anonymous
its not div by 17
47. anonymous
323 is divisible by 17
48. anonymous
so 323 is not a prime number
49. anonymous
4387 ?
50. anonymous
what u have put x = ??
51. anonymous
n=20
52. anonymous
1443?
53. anonymous
gtg shall come bak in 15 min
54. anonymous
for n = 20 it should be equal to 5187
55. anonymous
oh yeah 1443?
56. anonymous
no
57. anonymous
20^3 - 8*20^2 + 20*20 -13 =8000 - 3200 + 400 -13 = 8400 - 3213 = 5187
58. anonymous
gtg cum bak in 15 mins
59. anonymous
dear i had told u for no value greater than 7 it would fit
60. anonymous
and both 1443 and 5187 is divisible by 3
61. anonymous
i am bak so what to do now?
62. anonymous
for no value greater than 7 that too even number it will not be a prime number
63. anonymous
http://www.wolframalpha.com/input/?i=x^3-8x^2%2B20x-13+where+x+%3D+20 visit this website and plug in different value of x u will get the answer
64. anonymous
king u there
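The conclusion the thread converges on (only n = 2, 3, 4 work) is easy to brute-force check; a short sketch:

```python
def is_prime(m):
    """Trial-division primality test; fine for these small values."""
    if m < 2:
        return False
    d = 2
    while d * d <= m:
        if m % d == 0:
            return False
        d += 1
    return True

# n^3 - 8n^2 + 20n - 13 = (n - 1)(n^2 - 7n + 13).  For n >= 5 both
# factors are at least 2, so the product is composite; a finite search
# therefore suffices.
hits = [n for n in range(1, 1001) if is_prime(n**3 - 8*n**2 + 20*n - 13)]
print(hits)  # [2, 3, 4]
```

(For n = 2 the value is 3, for n = 3 it is 2, and for n = 4 it is 3; all prime.)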
https://www.semanticscholar.org/paper/Sharp-estimates-for-the-integrated-density-of-in-Desforges-Mayboroda/4852792e0ec7b0ad8a24a9856927f07f548df0d6 | # Sharp estimates for the integrated density of states in Anderson tight-binding models
@article{Desforges2020SharpEF,
title={Sharp estimates for the integrated density of states in Anderson tight-binding models},
author={Perceval Desforges and Svitlana Mayboroda and Shiwen Zhang and Guy David and Douglas N. Arnold and Wei Wang and Marcel Filoche},
journal={Physical Review A},
year={2020}
}
• Published 19 October 2020
• Mathematics
• Physical Review A
Recent work [1] has proved the existence of bounds from above and below for the Integrated Density of States (IDOS) of the Schrodinger operator throughout the spectrum, called the \emph{Landscape Law}. These bounds involve dimensional constants whose optimal values are yet to be determined. Here, we investigate the accuracy of the Landscape Law in 1D and 2D tight-binding Anderson models, with binary or uniform random distributions. We show, in particular, that in 1D, the IDOS can be…
## 4 Citations
• Physics
• 2022
While the properties and the shape of the ground state of a gas of ultracold bosons are well understood in harmonic potentials, they remain for a large part unknown in the case of random potentials.
• Mathematics
Communications in Mathematical Physics
• 2022
The present paper extends the landscape theory pioneered in Filoche and Mayboroda (Proc Natl Acad Sci USA 109(37):14761–14766, 2012), Arnold et al. (Commun Partial Differ Equ 44(11):1186–1216, 2019)
We resolve both a conjecture and a problem of Z. Shen from the 90’s regarding non-asymptotic bounds on the eigenvalue counting function of the magnetic Schrödinger operator $L_{a,V} = -(\nabla - ia)^2 + V$ with
## References
SHOWING 1-10 OF 48 REFERENCES
• F. Klopp
• Mathematics, Computer Science
• 2002
It is proved that, in the weak disorder regime, the spectrum in a neighborhood of size $C \cdot \lambda$ of a non-degenerate simple band edge is exponentially and dynamically localized.
• Physics
• 2015
The localization subregions of stationary waves in continuous disordered media have been recently demonstrated to be governed by a hidden landscape that is the solution of a Dirichlet problem
• Mathematics, Computer Science
• 1989
The precise form of the Lifshitz tail is derived here, by means of a field-theoretic description, and of instanton calculus, which provides new results for an arbitrary distribution of potentials, in arbitrary dimension.
Abstract: This paper is devoted to the study of localization of discrete random Schrödinger Hamiltonians in the weak disorder regime. Consider an i.i.d. Anderson model and assume that its left
• Mathematics
• 2016
This is a survey on the intermittent behavior of the parabolic Anderson model, which is the Cauchy problem for the heat equation with random potential on the lattice ℤd. We first introduce the model
• Physics
Proceedings of the National Academy of Sciences
• 2012
It is demonstrated that both Anderson and weak localizations originate from the same universal mechanism, acting on any type of vibration, in any dimension, and for any domain shape, which partitions the system into weakly coupled subregions.
• Mathematics
Communications in Partial Differential Equations
• 2019
Abstract We consider the localization of eigenfunctions for the operator on a Lipschitz domain Ω and, more generally, on manifolds with and without boundary. In earlier work, two authors of the
This is a comprehensive survey on the research on the parabolic Anderson model the heat equation with random potential or the random walk in random potential of the years 1990 2015. The investigation
Abstract (by Editor) A detailed report is given of the theoretical work carried out by the author during recent years on problems connected with the energy spectrum of a disordered solid. The
https://phys.libretexts.org/Bookshelves/University_Physics/Book%3A_Introductory_Physics_-_Building_Models_to_Describe_Our_World_(Martin_Neary_Rinaldo_and_Woodman)/10%3A_Linear_Momentum_and_the_Center_of_Mass
# 10: Linear Momentum and the Center of Mass
Learning Objectives
• Understand how to calculate linear momentum.
• Understand how to calculate impulse and that it corresponds to a change in momentum.
• Understand when and how to apply conservation of linear momentum to model situations.
• Understand the difference between elastic and inelastic collisions, and when mechanical energy is conserved.
• Understand how to calculate the center of mass of an object.
In this chapter, we introduce the concepts of linear momentum and of center of mass. Momentum is a quantity that, like energy, can be defined from Newton’s Second Law, to facilitate building models. Since momentum is often a conserved quantity within a system, it can make calculations much easier than using forces. The concepts of momentum and of center of mass will also allow us to apply Newton’s Second Law to systems comprised of multiple particles including solid objects.
Prelude
You hit a pool ball square on with the cue ball. If both balls have the same mass, and you can neglect any “english” on the cue ball, what happens to the cue ball?
1. It stops.
2. It continues, with half of its original speed.
3. It continues, with its original speed.
4. It rebounds, with its original speed.
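The prelude resolves with the standard 1D elastic-collision formulas (both momentum and kinetic energy conserved); a small sketch, where the masses and speed are illustrative values, not from the chapter:

```python
def elastic_1d(m1, m2, u1, u2=0.0):
    """Final velocities of two particles after a 1D elastic collision."""
    v1 = ((m1 - m2) * u1 + 2 * m2 * u2) / (m1 + m2)
    v2 = ((m2 - m1) * u2 + 2 * m1 * u1) / (m1 + m2)
    return v1, v2

# Equal masses, object ball at rest: the cue ball stops and the object
# ball leaves with the cue ball's original speed -- answer (a).
v1, v2 = elastic_1d(0.17, 0.17, 2.0)
print(v1, v2)  # 0.0 2.0
```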
http://www.oxfordscholarship.com/view/10.1093/acprof:oso/9780198527596.001.0001/acprof-9780198527596-chapter-7 | ## Alexander A. Ivanov
Print publication date: 2004
Print ISBN-13: 9780198527596
Published to Oxford Scholarship Online: September 2007
DOI: 10.1093/acprof:oso/9780198527596.001.0001
# GETTING THE PARABOLICS TOGETHER
Chapter:
(p. 131 ) 7 GETTING THE PARABOLICS TOGETHER
Source:
The Fourth Janko Group
Publisher:
Oxford University Press
DOI:10.1093/acprof:oso/9780198527596.003.0007
# Abstract and Keywords
This chapter assumes that G is the completion group of 𝒢 which is constrained at level 2. It identifies the third geometric subgroup with the famous involution centralizer 2^{1+12}_+ · 3 · Aut(M22) in J4. Another important subgroup in G is also recovered, which is 2^11 : M24. This enables the association with G of a coset geometry 𝒟(G) which eventually will be identified with the Ronan–Smith geometry for J4.
Keywords: parabolic geometry, M22, residues, maximal parabolics
From now on we assume that G is the completion group of 𝒢 which is constrained at level 2. We identify the third geometric subgroup with the famous involution centralizer $Display mathematics$ in J4. We also recover another important subgroup in G which is $Display mathematics$ This enables us to associate with G a coset geometry 𝒟(G) which eventually will be identified with the Ronan–Smith geometry for J4.
# 7.1 Encircling $2^{1+12}_+ \cdot 3 \cdot \mathrm{Aut}(M_{22})$
Let ϕ : 𝒢 → G be a faithful generating completion of the amalgam 𝒢 which is constrained at level 2. The existence of such a completion is guaranteed by (6.13.9). First we assume that ϕ : 𝒢 → G is universal among the completions which are constrained at level 2. Since the centre of N [2] = K [2] is trivial we can define such a completion in the following way (compare Section 2.5).
Let ϕ̃ : 𝒢 → G̃ be the universal completion of 𝒢, ϕ : 𝒢 → G be an arbitrary completion which is constrained at level 2, ψ : G̃ → G be the corresponding homomorphism of completions and Y be the kernel of ψ. Then the restriction of ψ to $Display mathematics$ is a homomorphism onto $Display mathematics$ with kernel Y [2] = Y ∩ G̃[2]. Since ϕ : 𝒢 → G is constrained at level 2, we have $Display mathematics$ On the other hand, the restriction of ψ to ϕ̃(N [2]) is an isomorphism onto ϕ(N [2]) and therefore, $Display mathematics$ If we want G to be the ‘largest’ completion group subject to the property that it is constrained at level 2 we must take Y to be the smallest normal subgroup (p. 132 ) in G̃ which intersects G̃[2] in Y [2]. This means that Y should be taken to be the normal closure in G̃ of C G̃[2] (ϕ̃(N [2])).
Alternatively we can define G to be the universal completion of the rank 3 amalgam $Display mathematics$ It is worth mentioning that the existence of the amalgam 𝒥 is independent of the existence of completions of 𝒢 which are constrained at level 2. In fact, 𝒥 is the amalgam $Display mathematics$ factorised over C G̃[2] (ϕ̃(N [2])). On the other hand, 𝒥 possesses a faithful completion if and only if 𝒢 possesses a completion constrained at level 2.
From now on (unless explicitly stated otherwise) $Display mathematics$ is assumed to be an arbitrary faithful completion of 𝒢 which is constrained at level 2. The amalgam 𝒢 will be identified with its image in G under ϕ, so that we can plainly write $Display mathematics$ By (4.2.6) and (4.2.8) N [2] and N [3] are non-trivial, so by (4.2.1 (iv)) G [2] and G [3] are proper subgroups in G. On the other hand by (5.4.1) N [4] = 1 and in Section 7.4 we will show that G [4] is in fact the whole of G.
Let Γ = Λ(𝒢, ϕ, G) be the coset graph corresponding to the completion ϕ: 𝒢 → G. Let x and {x, y} be defined as in the paragraph before (4.1.1) so that $Display mathematics$ Let Γ[2] and Γ[3] be the geometric subgraphs in Γ induced by the images of x under G [2] and G [3], respectively (compare (4.2.1 (iii))). Since Γ[2] is of valency 3 and G [2] induces on the vertex set of Γ[2] an action of G [2]/N [2] ≅ Sym5 on the cosets of G [02]/N [2] ≅ Sym3 × Sym2 the following statement is an immediate consequence of the definition of the Petersen graph.
Lemma 7.1.1 Γ[2] is isomorphic to the Petersen graph.
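As a concrete check on Lemma 7.1.1, the Petersen graph can be realized as the Kneser graph K(5,2), which makes the vertex count 120/12 = 10 (cosets of Sym3 × Sym2 in Sym5) and the valency 3 directly visible; a short sketch:

```python
from itertools import combinations

# Petersen graph as the Kneser graph K(5,2): vertices are the 2-subsets
# of a 5-element set, adjacent exactly when the subsets are disjoint.
verts = [frozenset(c) for c in combinations(range(5), 2)]
edges = [(u, v) for u, v in combinations(verts, 2) if not (u & v)]
degrees = {v: sum(v in e for e in edges) for v in verts}

print(len(verts))             # 10
print(len(edges))             # 15
print(set(degrees.values()))  # {3}: the graph is cubic
```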
Since the action of G [3] on Γ[3] is locally projective of type (3, 2) and Γ[2] is a geometric cubic subgraph in Γ[3], (7.1.1), (5.2.3), and (11.4.3) imply the following.
Lemma 7.1.2 One of the following two possibilities takes place:
1. (i) Γ[3] is the octet graph Γ(M22), C G [3] (N [3]) = Z(N [3]) = Z [3] ≅ 2, G [3]/N [3] ≅ Aut (M22) and $Display mathematics$
2. (p. 133 )
3. (ii) Γ[3] is the Ivanov–Ivanov–Faradjev graph Γ(3 · M22), C G [3] (N [3]) ≅ 2 × 3, G [3]/N [3] ≅ 3 · Aut (M22) and $Display mathematics$
It will be proved in Section 7.4 that the possibility (7.1.2 (i)) takes place. Clearly G [3] is a completion of the amalgam $Display mathematics$ Since the completion ϕ : 𝒢 → G is constrained at level 2, it is rather straightforward to check that $Display mathematics$ and therefore the amalgam 𝒢̄[3] = {G [03]/N [3], G [13]/N [3], G [23]/N [3]} is isomorphic to the amalgam $\hat{𝒜}^{[3]}$ defined before (5.2.1). Hence by (5.2.1) 𝒢̄[3] is isomorphic to the amalgam 𝒵 as in (11.4.1).
Lemma 7.1.3 Let C [3] be the universal completion of 𝒢[3]. Then C [3]/N [3] is the universal completion of 𝒢̄[3] ≅ 𝒵, therefore C [3]/N [3] ≅ 3 · Aut (M22).
Proof Let K̃ ≅ 3 · Aut (M 22) be the universal completion of 𝒵 and let us identify 𝒵 with its image in K̃. Let α be a homomorphism of 𝒢[3] onto 𝒵 which is the composition of the canonical homomorphism ggN [3] of 𝒢[3] onto 𝒢̄[3] and an isomorphism of 𝒢̄[3] onto 𝒵. Let $Display mathematics$ be the direct product of C [3] and K̃ and let χ be the subset of C [3] × K̃ consisting of the pairs (c, k), such that α(c) = k. Then χ is isomorphic to 𝒢[3]. Furthermore if X is the subgroup in C [3] × K̃ generated by χ then the restriction to X of the canonical homomorphism of C [3] × K̃ onto K̃ is surjective and the claim follows. ■
By (7.1.3) if G [3] is the universal completion of 𝒢[3] then the possibility (ii) in (7.1.2) takes place. Therefore there is no way we can get down to the possibility (i) looking at the amalgam 𝒢 only and some further subgroups of G should be brought into play.
# 7.2 Tracking 2^11 : M24
In Sections 3.7 and 3.8 we have seen that G [1] is a semidirect product of Q [m] ≅ 2^11 and A [1] L [1] ≅ 2^4 : L4(2). The relevant action is isomorphic to the action of the octad stabilizer in M24 on the irreducible Todd module 𝒞11. In Section 7.4 we will prove the following.
Proposition 7.2.1 Let G [m] be the subgroup in G generated by the normalisers of Q [m] in G [1], G [2], and G [3]. Then $Display mathematics$ (p. 134 ) more specifically G [m] is the semidirect product of Q [m] ≅ 𝒞̄11 and M24 with respect to the natural action.
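The numerology around Q [m] ≅ 2^11 and the octad stabilizer can be sanity-checked against standard facts (the orders below are well-known values for M24 and Alt8, not anything computed in the text): the octad stabilizer 2^4 : A8 has index 759 in M24, the number of octads of the Steiner system S(5, 8, 24). A sketch:

```python
from math import factorial

order_M24 = 244823040               # |M24|, standard value
order_A8 = factorial(8) // 2        # |Alt8| = 20160
octad_stabilizer = 2**4 * order_A8  # |2^4 : A8| = 322560

print(order_M24 % octad_stabilizer)   # 0: the stabilizer order divides |M24|
print(order_M24 // octad_stabilizer)  # 759 octads
print(2**11 * order_M24)              # |2^11 : M24| = 501397585920
```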
For i = 1, 2 and 3 put G [mi] = N G [i] (Q [m]), 𝒢[m] = {G [mi] | 1 ≤ i ≤ 3} and 𝒢̄[m] = {G [mi]/Q [m] | 1 ≤ i ≤ 3}, so that 𝒢̄[m] is the quotient of 𝒢[m] over Q [m]. Notice that G [m1] = G [1].
Lemma 7.2.2 The following assertions hold:
1. (i) G [m1]/Q [m] ≅ 2^4 : L4(2);
2. (ii) G [m2]/Q [m] ≅ 2^6 : (L3(2) × Sym3);
3. (iii) either
1. (a) (7.1.2 (i)) takes place and G [m3]/Q [m] ≅ 2^6 : 3 · Sym6, or
2. (b) (7.1.2 (ii)) takes place and G [m3]/Q [m] ≅ (2^6 × 3) : 3 · Sym6.
Proof Statement (i) follows directly from (3.8.1). In order to establish (ii) we locate Q [m] inside G [2]. It is clear that Q [m] is contained in G [2] (for instance because [G [1] : G [12]] = 15 is odd and Q [m] is a normal 2-subgroup in G [1]). By (4.2.7) |N [2]Q [m]| = 2^9 and the image of Q [m] in G [2]/N [2] ≅ Sym5 is an elementary abelian subgroup of order 4, which stabilizes an edge of Γ[2] as a whole but not vertexwisely. This means that Q [m] N [2]/N [2] is contained in the commutator subgroup of G [2]/N [2], isomorphic to Alt5.
Let S7 be a Sylow 7-subgroup in G [2], C ≅ Sym5 be the complement to S7 in C G [2] (S7) (compare (4.9.1)) and R be the elementary abelian subgroup of order 4 in C, such that R N [2]/N [2] = Q [m] N [2]/N [2]. We claim that R is contained in Q [m]. In fact, by (3.7.1) G [1]/Q [m] ≅ 2^4 : L4(2) and since [G [1] : G [12]] = 15 is not divisible by 7, Q [m] is normalized by a Sylow 7-subgroup in G [2]. By Sylow's theorem without loss we assume that this subgroup is S7. By (4.9.1 (ii)) C Q [2] (S7) = 1. Therefore $Display mathematics$ and the claim follows. Next we claim that Q [m] = R C Q [2] (R). Since R ≤ Q [m] and Q [m] is abelian, Q [m] is obviously in the centralizer of R in Q [2] and hence we only have to show that |C Q [2] (R)| is at most 2^9. The subgroup C G [2] (R) contains S7, therefore C Q [2] (R) is normalized by S7. Clearly C Q [2] (R) contains Z [2] and therefore every dent of Q [2] is either completely contained in C Q [2] (R) or intersects C Q [2] (R) in Z [2]. In addition, since R commutes with S7 and every dent is the direct sum of two non-isomorphic S7-modules, whenever R normalizes a dent, it necessarily centralizes it. Now it only remains to recall that by (4.7.4) C acts on the set of dents as it acts on the edge-set of the Petersen graph Γ[2]. Finally, R stabilizes exactly three edges of Γ[2] (these edges form the antipodal triple containing {x, y}).
By the above paragraph the number of conjugates of Q [m] in G [2] is equal to the number of conjugates of R in C (which is five). Since $Display mathematics$ (ii) follows.
(p. 135 ) The stabilizer in G of an edge e = {u, v} of Γ is a conjugate of G [1] and by (3.7.1) this stabilizer contains a unique normal elementary abelian subgroup Qe of order 2^11, which is of course a conjugate of Q [m]. By the above paragraph whenever two edges e and f are contained in a common geometric cubic subgraph Σ ≅ Γ[2] and are antipodal in the line graph of Σ, the equality $Display mathematics$ holds (notice that there are 15 edges in Σ and only 5 different conjugates of Q [m] in G [2]). Let Φ = ΦΓ be the local antipodality graph of Γ, so that Φ is a graph on the edge-set of Γ in which two edges are adjacent if they are contained in a common geometric cubic Petersen subgraph Σ and are antipodal in the line graph of Σ. Then Qe = Qf whenever e and f are in the same connected component of Φ.
Let us turn to (iii). It is clear that Q [m] ≤ G [3]. On the other hand, since $Q^{[3]} = O_2(G^{[3]}) \cong 2^{1+12}_+$ is extraspecial, while Q [m] is elementary abelian, |Q [m] ∩ Q [3]| ≤ 2^7 by (1.6.7). Let Ψ = ΦΓ[3] be the local antipodality graph of Γ[3] and Ψc be the connected component of Ψ containing {x, y}. Since Γ[3] is either the octet graph or the Ivanov–Ivanov–Faradjev graph, by (11.4.4) and the paragraph after that lemma Ψc contains 15 or 45 edges of Γ[3] depending on whether we are in case (a) or (b). By the above paragraph the stabilizer S of Ψc in G [3] is contained in G [m3]. Furthermore, S contains N [3] and S/N [3] is 2^4 : Sym6 and (2^4 × 3) · Sym6 in the respective cases (a) and (b). Since Q [m] N [3]/N [3] = O2(S/N [3]), using the well-known fact that Kh = NK (O2(Kh)) for the stabilizer Kh ≅ 2^4 : Sym6 of a hexad in K ≅ Aut (M22), we conclude that S is the whole of G [m3], which completes the proof of (iii). ■
Lemma 7.2.3 Suppose that (7.1.2. (i)) takes place. Then
1. (i) the coset geometry corresponding to the embedding into G [m]/Q [m] of the amalgam $Display mathematics$ is described by the locally truncated diagram
2. (ii) G [m]/Q [m] ≅ M 24;
3. (iii) G [m] splits over Q [m].
Proof First notice that the assertion (7.2.2 (iii) (a)) holds. Calculating the intersections of the G [mi]'s we obtain (i). Now (ii) is by (i) and (11.2.1), while (iii) is by (ii), (3.7.1) and Gaschütz's theorem. ■
# (p. 136 ) 7.3 P-geometry of G[4]
In this section for a subsequence α of 0123 we denote the subgroup G [α4] by F [α]. This convention also applies when α is empty, so that F = G [4]. Let ℱ = {F [0], F [1]} be the corresponding subamalgam in F, let Ξ = Λ[4] = Λ(𝒢[4], ϕ[4], F) be the coset graph associated with the completion $Display mathematics$ (which is the restriction of ϕ to 𝒢[4]). At this stage we do not know yet that F is the whole of G, but at any event the action of F on Ξ is faithful (since N [4] is trivial by (5.4.1)) and locally projective of type (4, 2). Let {u, v} be the edge of Ξ such that $Display mathematics$ where ℱ is identified with its image in F under ϕ[4]. For i = 2 and 3 let Ξ[i] be the geometric subgraph in Ξ induced by the images of u under F [i] and let I [i] be the vertexwise stabilizer of Ξ[i] in F
Lemma 7.3.1 The following assertions hold:
1. (i) F [0] = G [04] ≅ 2^{4+4} : 2^6 : L_4(2);
2. (ii) F [1] = 〈G [014], t 1〉 ≅ 2^{6+5+6} . (L_3(2) × 2) ≅ 2^11 : 2^{1+6}_+ : L_3(2);
3. (iii) F [2] ≅ 2^{3+12} · (Sym4 × Sym5), Ξ[2] is the Petersen graph and I [2] ≅ 2^{2+12} × Sym4;
4. (iv) F [3] ≅ 2^{1+12}_+ · 3 · Aut (M 22), Ξ[3] is the Ivanov–Ivanov–Faradjev graph and I [3] ≅ 2^{1+12}_+.
Proof Since F [0] and F [1] are the stabilizers of U 1 in G [0] and G [1], respectively, (i) and (ii) are quite clear. In terms of Section 3.8 F [1] is a semidirect product of Q [m] ≅ 2^11 and the stabilizer of U 1 in A [1] L [1] ≅ 2^4 : L_4(2). The latter stabilizer coincides with the centralizer of a central involution in L [0] ≅ L_5(2), isomorphic to 2^{1+6}_+ : L_3(2). We know that F [2] is the subgroup in G [2] generated by G [024] and G [124]. The set 𝒫 of geometric subgraphs of valency 7 in Γ containing Γ[2] is of size 7 (of course Γ[3] ∈ 𝒫). The action (isomorphic to L_3(2)) of N [2] on 𝒫 induces a structure of the projective plane of order 2. Then F [2] is the stabilizer in G [2] of a line in that projective plane structure, which gives (iii).
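The plane structure invoked here can be made concrete with a standard model of the projective plane of order 2 (the Fano plane). The snippet below is an illustrative editorial check, not part of the proof: it verifies the defining incidence property and that a line stabilizer in L_3(2) has order 168/7 = 24, the order of the Sym4 factor appearing in (iii):

```python
from itertools import combinations

# Fano plane: points 0..6, lines = cyclic translates of the perfect
# difference set {1, 2, 4} modulo 7.
points = list(range(7))
fano_lines = [frozenset((i + s) % 7 for s in (1, 2, 4)) for i in points]

# Defining property of the projective plane of order 2: 7 points,
# 7 lines, 3 points per line, every pair of points on exactly one line.
assert len(set(fano_lines)) == 7
assert all(len(ln) == 3 for ln in fano_lines)
for p, q in combinations(points, 2):
    assert sum(1 for ln in fano_lines if p in ln and q in ln) == 1

# |L3(2)| = 168 acts transitively on the 7 lines, so a line stabilizer
# has order 168 / 7 = 24 — the order of Sym4.
print(len(fano_lines), 168 // 7)  # 7 24
```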
The subgroup F [3] is generated by G [034] and G [134]. Since Q [3] = O_2(N [3]) = N [3] ∩ G [014] we immediately conclude that $Display mathematics$ On the other hand, the whole of N [3] could not be in I [3] since it is not even in G [014]. The action of F [3] on Ξ[3] is locally projective of type (3, 2) and by (ii) Ξ[2] is a geometric cubic subgraph in Ξ[3] isomorphic to the Petersen graph. By (11.4.3) this implies that Ξ[3] is either the octet graph Γ(M 22) or the (p. 137 ) Ivanov–Ivanov–Faradjev graph Γ(3 · M 22). Let $Display mathematics$ be the natural homomorphism. By (5.2.4) the image of χ is isomorphic to 3 · Aut (M 22). For α = 0 and 1 the subgroups G [α34] and N [3] ≅ 2^{1+12}_+ : 3 factorize G [α3] and hence by (5.2.1 (i), (ii)) we have $Display mathematics$ Since χ(G [3]) does not split over O_3(χ(G [3])), $Display mathematics$ and therefore F [3]/I [3] possesses a homomorphism onto 3 · Aut (M 22). Thus (iv) follows. ■
Let 𝒢(G [4]) be the geometry, whose elements of type 1 are the vertices of Ξ, the elements of type 2 are the edges of Ξ, the elements of type 3 are the geometric cubic subgraphs in Ξ and the elements of type 4 are the geometric subgraphs of valency 7 in Ξ; the incidence relation is via inclusion. As a direct consequence of (7.3.1) we obtain the following
Proposition 7.3.2 The geometry 𝒢(G [4]) is a P-geometry of rank 4 with the diagram
The group G [4] acts on 𝒢(G [4]) faithfully and flag-transitively. The residue in 𝒢(G [4]) of an element of type 4 is isomorphic to the geometry 𝒢(3 · M 22).
In Section 7.4 we will show that G [4] is the whole of G and by the Main Theorem the latter is J 4. Therefore the geometry in (7.3.2) is the P-geometry 𝒢(J 4) of J 4 first constructed in (Ivanov 1987).
For 1 ≤ i ≤ 3 put F [mi] = F [i]Q [m] and F [m] = 〈F [mi] | 1 ≤ i ≤ 3〉.
Lemma 7.3.3 The following assertions hold:
1. (i) F [m1]/Q [m] ≅ 2^{1+6}_+ : L_3(2) and F [m1] splits over Q [m];
2. (ii) F [m2]/Q [m] ≅ 2^6 : (Sym4 × Sym3);
3. (iii) F [m3]/Q [m] ≅ 2^6 : 3 · Sym6;
4. (iv) the coset geometry ℳ corresponding to the embedding of the amalgam {F [mi]/Q [m] | 1 ≤ i ≤ 3} into F [m]/Q [m] is described by the tilde diagram
5. (v) F [m]/Q [m] ≅ M 24 and ℳ ≅ 𝒢(M 24);
6. (vi) Q [m] is the irreducible Todd module 𝒞̄11;
7. (vii) F [m] splits over Q [m].
(p. 138 ) Proof A mere comparison of (7.2.2) and (7.3.1) gives (i) to (iii). The diagram of ℳ can be recovered by directly calculating the intersections of the F [mi]'s. Alternatively one can employ the following combinatorial realization of ℳ. Let ϒ = ΦΞ be the local antipodality graph of Ξ. Then, arguing as in the proof of (7.2.2), one can see that F [m] coincides with the stabilizer in F of the connected component ϒ c of ϒ containing {u, v}. The elements of ℳ are the vertices of ϒ c (which are edges of Ξ) and the intersections of the vertex set of ϒ c with the edge-sets of geometric subgraphs of valency 3 and 7. Then by (7.3.2) and the paragraph after (11.4.4) we obtain the desired diagram.
By (11.2.2) the assertions (i) to (iv) imply that F [m]/Q [m] is either M 24 or He. Since Q [m] is a non-trivial module in which F [m3] stabilizes the 1-dimensional subspace Z [3] the latter possibility is excluded, since the index of 2^6 : 3 · Sym6 in He is 29,155 (cf. Conway et al. (1985) and Section 11.2) and hence (v) follows. The subgroup F [m2] stabilizes in Q [m] the 2-dimensional subspace Z [2] which contains the 1-dimensional subspace Z [3] stabilized by F [m3]. In terms of Ivanov and Shpectorov (2002) this means that Q [m] is a quotient of the universal representation group of ℳ ≅ 𝒢(M 24), so that (vi) follows from Proposition 4.3.1 in Ivanov and Shpectorov (2002). Finally (vii) follows from (i) in view of Gaschütz's theorem. ■
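The index computation used here to rule out He can be verified directly from the group orders (taken from the Atlas of Conway et al. (1985)); the arithmetic check below is an editorial addition, not part of the original argument:

```python
from math import factorial

# Group orders from the Atlas (Conway et al. 1985).
order_He = 4_030_387_200      # sporadic Held group He
order_M24 = 244_823_040       # Mathieu group M24

# |2^6 : 3 . Sym6| — the shape of F[m3]/Q[m] in (7.3.3 (iii)).
h = 2**6 * 3 * factorial(6)   # 138240

# The index of such a subgroup in He is 29,155 ...
assert order_He % h == 0 and order_He // h == 29155
# ... while in M24 it is 1,771, the number of sextets of S(5, 8, 24).
assert order_M24 % h == 0 and order_M24 // h == 1771
print(order_He // h, order_M24 // h)  # 29155 1771
```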
It is worth mentioning that the proof of (7.3.3 (v)) is the only place in the present volume where we essentially make use of a result (which is (11.2.2)) whose proof relies on computer-aided calculations.
# 7.4 G[4] = G
First we show that G [m4] = G [m] (recall that G [m4] ≅ 2^11 : M 24 by (7.3.3 (v), (vi), (vii))).
Lemma 7.4.1 Suppose that G [m4] ≅ 2^11 : M 24 is a proper subgroup in G [m]. Then the coset geometry 𝒩 corresponding to the embedding into G [m]/Q [m] of the amalgam $Display mathematics$ is described by the rank 4 tilde diagram
Proof We claim that under the hypothesis (7.1.2 (ii)) takes place. In fact, otherwise $Display mathematics$ by (7.2.3), (7.3.3 (v), (vi), (vii)) and the order comparison. Then the structure of the G [mi]'s can be read from (7.2.2 (i), (ii), (iii) (b)) and (7.3.3 (v), (vi), (vii)). Calculating the intersections we get the diagram. ■
(p. 139 ) Lemma 7.4.2 G [m4] = G [m].
Proof If the claim fails then by (7.4.1) G [m]/Q [m] acts flag-transitively on a rank 4 tilde geometry 𝒩. In terms of Ivanov and Shpectorov (2002) this geometry is of truncated M 24-type and it does not exist by Proposition 12.4.6 and 12.5.1 in Ivanov and Shpectorov (2002). ■
Proof of Proposition 7.2.1 The result is now immediate by (7.3.3) and (7.4.2). ■
Lemma 7.4.3 The possibilities (7.1.2 (i)) and (7.2.2 (iii) (a)) take place, so that $Display mathematics$
Proof By (7.2.1) G [m]/Q [m] ≅ M 24 and the latter group just does not contain subgroups as in (7.2.2 (iii) (b)) already by Lagrange's theorem. ■
We are ready to prove the main result of the section.
Proposition 7.4.4 G [4] = G.
Proof By (7.4.2) G [m] = G [m4] ≤ G [4]. Also G [1] ≤ G [4] since G [1] ≤ G [m] (as remarked before (7.2.2)). But G [0] is generated by G [01] and G [04], and so also G [0] ≤ G [4]. This clearly implies G [4] = G. ■
We refer the reader to sections 9.5, 9.6 in Ivanov (1999) for general discussion about the existence/non-existence of geometric subgraphs.
Lemma 7.4.5 Let Y be a Sylow 3-subgroup in O 2,3(G [3]). Then
1. (i) C G [3] (Y) ≅ 6 · M 22 is a non-split central extension of a cyclic group of order 6 by M 22;
2. (ii) G [3] does not split over Q [3] = O_2(G [3]) ≅ 2^{1+12}_+.
Proof Since Y is a Sylow 3-subgroup of N [2] the result is by (4.9.1 (iii)). ■
By (7.4.5) the Schur multiplier of M 22 possesses the cyclic group of order 6 as a factor-group. In 1976, when (Janko 1976) was published, this cyclic group was believed to be the whole Schur multiplier of M 22. In (Mazet 1979) the multiplier of M 22 was proved to be the cyclic group of order 12.
# 7.5 Maximal parabolic geometry 𝒟
We start this section by summarizing the information about the action of G on Γ we have obtained so far.
(p. 140 ) Proposition 7.5.1 Let G be a completion of the amalgam 𝒢 which is constrained at level 2 and let Γ be the coset graph associated with this completion. Then
1. (i) Γ is connected of valency 31 and the action of G on Γ is locally projective of type (5, 2);
2. (ii) G(x) = G [0] ≅ 2^10 : L_5(2);
3. (iii) G{x, y} = G [1] ≅ 2^{6+4+4} · (L_4(2) × 2) ≅ 2^11 : 2^4 : L_4(2);
4. (iv) the geometric cubic subgraph Γ [2] is isomorphic to the Petersen graph and $Display mathematics$
5. (v) the geometric subgraph Γ[3] of valency 7 is isomorphic to the octet graph and $Display mathematics$
6. (vi) there are no geometric subgraphs of valency 15 and G [4] ≔ 〈G [04], G [14]〉 is the whole of G;
7. (vii) if Φ = ΦΓ is the local antipodality graph of Γ and Φ c is the connected component of Φ containing {x, y} then Φ c is isomorphic to the octad graph Γ(M 24) and $Display mathematics$
Proof (i) and (ii) are already in (4.1.1), (iii) is by (7.1.1), (iv) and (v) are by (7.4.3), and (vi) is by (7.4.4). Finally (vii) is by (7.2.3) since (7.1.2 (i)) takes place by (7.4.3). ■
Let ℱ(G) be a geometry such that
1. (0) the elements of type 0 are the vertices of Γ;
2. (1) the elements of type 1 are the edges of Γ;
3. (2) the elements of type 2 are the geometric cubic subgraphs;
4. (3) the elements of type 3 are the geometric subgraphs of valency 7;
5. the incidence relation is via inclusion.
Then it is immediate from (7.5.1) that ℱ(G) belongs to the locally truncated Petersen diagram
By the Main Theorem G ≅ J 4, so ℱ(G) is another geometry for J 4 constructed in (Ivanov 1987).
More fruitful for our current purposes is the geometry 𝒟 = 𝒟(G) whose elements are as in ℱ(G), only instead of the elements of type 1 (which are the edges of Γ) we take elements of type m which are the connected components of the local antipodality graph Φ of Γ. The incidence relation between the elements of type 0, 2, and 3 is as in ℱ(G). A connected component of Φ (an element of type m) is adjacent to an element f ∈ ℱ(G) if f is incident in ℱ(G) to an edge of Γ contained in that connected component.
(p. 141 ) Since G is generated by G [0] and G [1] it is a standard result that both ℱ(G) and 𝒟(G) are connected.
# 7.6 Residues in 𝒟
Let 𝒟 = 𝒟(G) be the geometry defined in Section 7.5. Recall that the set of types of 𝒟 is {m, 0, 2, 3}. For i ∈ {m, 0, 2, 3} the set of elements of type i on 𝒟 will be denoted by 𝒟[i]. Often we will write the type of an element above its name, for instance we write $d 3$ for an element d of type 3. The stabilizer in G of this element will be denoted by $G d [ 3 ]$. The residue in 𝒟 of an element a (whose type will be clear from the context) will be denoted by 𝒟 a .
Recall that a path in 𝒟 is a sequence π = (a 0, a 1, a 2,…, a s) of its elements such that a i is incident to a i+1 for every 0 ≤ i ≤ s − 1, while a i is neither equal nor incident to a i+2 for every 0 ≤ i ≤ s − 2. In this case s is the length of π.
With every element $a i ∈ 𝒟$ we associate a certain combinatorial/geometrical structure (whose isomorphism type depends on i only). Then the residue 𝒟 a of a in 𝒟 and the stabilizer $G a [ i ]$ of a in G possess natural descriptions in terms of this structure. This works in the following way:
Type m: If $a m$ is an element of type m then there is a Witt design $W a [ 24 ]$ of type S(5, 8, 24). If ℬ a , 𝒯 a and 𝒮 a are the octads, trios and sextets of $W a [ 24 ]$, then $Display mathematics$ The incidence relation in 𝒟 a is via the refinement relation on the corresponding partitions of the element set of $W a [ 24 ]$. For instance, suppose that $( B , α ) ∈ 𝒟 a [ 0 ]$ and $S ∈ 𝒟 a [ 3 ]$, where B is an octad of $W a [ 24 ]$ (identified with the partition of the set of 24 elements into the octad B and its complement), α ∈ GF(2) and S is a sextet. Then (B, α) and S are incident if and only if B is the union of two tetrads from S. In particular, α does not affect the incidence. The stabilizer $G a [ m ]$ is the semidirect product of the automorphism group $M a [ m ] ≅ M 24$ of $W a [ 24 ]$ and the irreducible Todd module $Q a [ m ] ≅ 𝒞 ¯ 11$. The module $Q a [ m ]$ is considered as a section of the GF(2)-permutation module of $M a [ m ]$ on the set of elements of $W a [ 24 ]$. In particular $M a [ m ]$ has two orbits on the set of non-zero vectors in $Q a [ m ]$; the elements in one of the orbits are indexed by pairs of elements of $W a [ 24 ]$, while those from the other orbit are indexed by the sextets from 𝒮 a . Dually, the hyperplanes in $Q a [ m ]$ are indexed by the octads from ℬ a and by the complementary pairs of dodecads. The subgroup $Q a [ m ] = O 2 ( G a [ m ] )$ is the kernel of the action of $G a [ m ]$ on $𝒟 a [ 2 ] ∪ 𝒟 a [ 3 ]$. Every orbit of $Q a [ m ]$ on $𝒟 a [ 0 ]$ is of the form {(B, 0), (B, 1)}, where B ∈ ℬ a . An element $q ∈ Q a [ m ]$ fixes this orbit elementwise if and only if q is in the hyperplane corresponding to B. For every α ∈ GF(2) the complement $M a [ m ]$ stabilizes {(B, α) | B ∈ ℬ a } as a whole and acts on it as it acts on ℬ a .
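For orientation, the sizes of the three classes ℬ a , 𝒯 a and 𝒮 a follow from standard counting in the Steiner system S(5, 8, 24). The sketch below is an illustrative editorial check; it uses the standard fact that each octad is disjoint from exactly 30 others:

```python
from math import comb

# Each 5-subset of the 24 points lies in a unique octad, and an octad
# contains C(8,5) five-subsets.
octads = comb(24, 5) // comb(8, 5)   # 759

# Each tetrad lies in a unique sextet, which consists of 6 tetrads.
sextets = comb(24, 4) // 6           # 1771

# Each octad is disjoint from exactly 30 octads; two disjoint octads
# complete uniquely to a trio, and each trio contains 3 unordered
# pairs of disjoint octads.
trios = (octads * 30 // 2) // 3      # 3795

assert (octads, trios, sextets) == (759, 3795, 1771)
print(octads, trios, sextets)  # 759 3795 1771
```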
(p. 142 ) Type 0: If $b 0$ is an element of type 0 then there is a 5-dimensional vector space Vb over GF(2) such that $Display mathematics$ (where $[ V b i ]$ stands for the set of i-dimensional subspaces in Vb ). The incidence in 𝒟 b is by inclusion. The subspace corresponding to an element x in 𝒟 b will be denoted by Vb (x). The stabilizer $G b [ 0 ]$ is the semidirect product with respect to the natural action of the general linear group $L b [ 0 ] ≅ L 5 ( 2 )$ and the exterior square $Q b [ 0 ] ≅ 2 10$. The latter is the kernel of the action of $G b [ 0 ]$ on 𝒟 b , while $G b [ 0 ] / Q b [ 0 ] ≅ L b [ 0 ]$ acts in the natural way. If b is the vertex x of Γ as in (7.5.1 (ii)) then Vb = U 5, $G b [ 0 ] = G [ 0 ]$ etc.
Type 2: If $c 2$ is an element of type 2 then there is a Petersen graph Θ c and a 3-dimensional GF(2)-vector space $Z c [ 2 ]$ such that $Display mathematics$ and $𝒟 c [ m ]$ is the set of antipodal triples of edges of Θ c (considered also as 6-element subsets of V c ). Every element from $𝒟 c [ 3 ]$ is incident to every element from $𝒟 c [ 0 ] ∪ 𝒟 c [ m ]$, while the incidence between the elements from $𝒟 c [ 0 ]$ and the elements from $𝒟 c [ m ]$ is via inclusion. The stabilizer $Display mathematics$ is isomorphic to the pentad group. Furthermore, $Q c [ 2 ] = O 2 ( G c [ 2 ] )$ is the kernel of the action of $G c [ 2 ]$ on 𝒟 c , while $N c [ 2 ] ≅ 2^{3+12} : L 3 ( 2 )$ is the kernel of the action of $G c [ 2 ]$ on Θ c . Let $L c [ 2 ] ≅ L 3 ( 2 )$ be a complement to $Q c [ 2 ]$ in $N c [ 2 ]$ and let $S c [ 2 ] ≅ Sym 5$ be a complement to $N c [ 2 ]$ in $G c [ 2 ]$ (recall that $Q c [ 2 ]$ is not complemented in $G c [ 2 ]$). If c is the geometric cubic subgraph Γ[2] in Γ as in (7.5.1 (iv)), then $G c [ 2 ] = G [ 2 ]$, $Q c [ 2 ] = Q [ 2 ]$ etc.
Type 3: If $d 3$ is an element of type 3 then there is a Witt design $W d [ 22 ]$ of type S(3, 6, 22) associated with d. If 𝒪 d is the set of octets, ℋ d is the set of hexads and 𝒫 d is the set of pairs in $W d [ 22 ]$ then $Display mathematics$ with the incidence relation as in the geometry ℋ(M 22). The stabilizer $G d [ 3 ]$ is of the form $Display mathematics$ (p. 143 ) and $N d [ 3 ] = O 2 , 3 ( G d [ 3 ] )$ is the kernel of the action of $G d [ 3 ]$ on the residue 𝒟 d . Let $M d [ 3 ] ≅ 6 ⋅ Aut ( M 22 )$ be the normalizer in $G d [ 3 ]$ of a Sylow 3-subgroup $Y d [ 3 ]$ in $N d [ 3 ]$ (compare (7.4.5)). Then $M d [ 3 ] ∩ Q d [ 3 ] = Z d [ 3 ]$ and $Q d [ 3 ] M d [ 3 ] = G d [ 3 ]$ (where $Q d [ 3 ] = O 2 ( G d [ 3 ] )$ and $Z d [ 3 ] = Z ( Q d [ 3 ] )$). If d is the geometric subgraph Γ[3] of valency 7 in Γ as in (7.5.1 (v)) then $G d [ 3 ] = G [ 3 ] , N d [ 3 ] = N [ 3 ]$ etc.
It is immediate from the above that 𝒟(G) belongs to the following diagram (cf. Section 10.4 for the definitions of the relevant rank 2 residues). Instead of types next to every node we indicate the structure of the corresponding stabilizer in G.
# 7.7 Intersections of maximal parabolics
Suppose that $x i$ and $y i$ are incident elements in 𝒟. We require a clear understanding of the structure of the intersection $G x [ i ] ∩ G y [ i ]$ in terms of the chief factors of $G x [ i ]$. This information, as summarized in lemmas below, is not so difficult to deduce, keeping in mind that $Display mathematics$ is the amalgam of maximal parabolic subgroups associated with the action of G on 𝒟.
The action of $G a [ m ]$ on 𝒟 a follows from the results in Sections 7.2 and 7.3, particularly from (7.2.2) and (7.3.3).
Lemma 7.7.1 Let a ∈ 𝒟[m] and let $G a [ m ] ≅ 2 11 : M 24$ be the stabilizer of a in G. Then
1. (i) if $b ∈ 𝒟 a [ 0 ]$ then
1. (1) b = (B, α), where B is an octad from ℬ a and α ∈ {0, 1};
2. (p. 144 )
3. (2) the subgroup $M a [ m ] ( b ) ≅ 2 4 : L 4 ( 2 )$ stabilizes a unique hyperplane Pa (B) in $Q a [ m ]$;
4. (3) $G a [ m ] ∩ G b [ 0 ] = P a ( B ) : M a [ m ] ( b )$;
5. (4) $Q b [ 0 ] = C P a ( B ) ( O 2 ( M a [ m ] ( b ) ) ) O 2 ( M a [ m ] ( b ) ) ≅ 2 10$;
2. (ii) if $c ∈ 𝒟 a [ 2 ]$ then
1. (5) c is a trio from 𝒯 a ;
2. (6) the subgroup $M a [ m ] ( c ) ≅ 2^6 : ( L 3 ( 2 ) × Sym 3 )$ stabilizes in $Q a [ m ]$ a unique subgroup Ra (c) of index 4;
3. (7) $G a [ m ] ∩ G c [ 2 ] = Q a [ m ] : M a [ m ] ( c )$;
4. (8) $Q c [ 2 ] = R a ( c ) O 2 ( M a [ m ] ( c ) ) ≅ 2 3 + 12$.
3. (iii) if $d ∈ 𝒟 a [ 3 ]$ then
1. (9) d is a sextet from 𝒮 a ;
2. (10) if Y is a Sylow 3-subgroup of $O 2 , 3 ( M a [ m ] ( d ) )$, where $M a [ m ] ( d ) ≅ 2 6 : 3 ⋅ Sym 6$, then $Q a [ m ] / [ Q a [ m ] , Y ] ≅ 2 5$;
3. (11) $G a [ m ] ∩ G d [ 3 ] = Q a [ m ] : M a [ m ] ( d )$;
4. (12) $Q d [ 3 ] = [ Q a [ m ] , Y ] O 2 ( M a [ m ] ( d ) ) ≅ 2 + 1 + 12$;
5. (13) Y is a Sylow 3-subgroup of $O 2 , 3 ( G d [ 3 ] )$.
An element b of type 0 in 𝒟(G) is a vertex of the locally projective graph Γ. The edges containing b are in the natural bijection with the elements of type m incident to b in 𝒟(G). Therefore the action of $G b [ 0 ]$ on 𝒟 b is isomorphic to the action of H [0] on the corresponding residue in the dual polar space 𝒪+(10,2) (cf. (2.1.2), (2.1.3)).
Lemma 7.7.2 Let b ∈ 𝒟[0] and $G b [ 0 ] ≅ 2 10 : L 5 ( 2 )$ be the stabilizer of b in G.
1. (i) If $x ∈ 𝒟 b [ i ]$ for i = m, 2, or 3 then
1. (1) x is a subspace in Vb of dimension 4, 3, or 2, respectively;
2. (2) $G b [ 0 ] ∩ G x [ i ] = Q b [ 0 ] : L b [ 0 ] ( x )$, where $L b [ 0 ] ( x )$ is isomorphic to $Display mathematics$ in the respective three cases;
3. (3) $Q x [ i ] ∩ G b [ 0 ] = [ Q b [ 0 ] , O 2 ( L b [ 0 ] ( x ) ) ] O 2 ( L b [ 0 ] ( x ) )$;
4. (4) If x is of type m then $O 2 ( G x [ m ] )$ intersects $G b [ 0 ]$ in a subgroup of index 2 in $O 2 ( G x [ m ] )$ and $Q b [ 0 ] / [ Q b [ 0 ] , O 2 ( L b [ 0 ] ( x ) ) ] ≅ 2 4$;
5. (5) the subgroup $O 2 ( G x [ i ] )$ is contained in $G b [ 0 ]$ for i = 2 and 3, while $Q b [ 0 ] / [ Q b [ 0 ] , O 2 ( L b [ 0 ] ( x ) ) ]$ is isomorphic to 2 and 23 in the respective cases.
The next result follows from the properties of the pentad group established in Sections 4.8 and 4.9.
(p. 145 ) Lemma 7.7.3 Let c ∈ 𝒟[2] and let $G c [ 2 ] ≅ 2 3 + 12 ⋅ ( L 3 ( 2 ) ⋅ Sym 5 )$ be the stabilizer of c in G. Then
1. (i) if $a ∈ 𝒟 c [ m ]$ then
1. (1) a is an antipodal triple in the Petersen graph Θ c ;
2. (2) $S c [ 2 ] ( a ) ≅ Sym 4$;
3. (3) $G c [ 2 ] ∩ G a [ m ] = N c [ 2 ] S c [ 2 ] ( a )$;
4. (4) $Q a [ m ] = C Q c [ 2 ] ( O 2 ( S c [ 2 ] ( a ) ) ) O 2 ( S c [ 2 ] ( a ) )$;
5. (5) $Z c [ 2 ] ≤ Q a [ m ]$;
2. (ii) if $b ∈ 𝒟 c [ 0 ]$ then
1. (6) b is a vertex of Θ c ;
2. (7) $S c [ 2 ] ( b ) ≅ Sym 3 × 2$;
3. (8) $G c [ 2 ] ∩ G b [ 0 ] = N c [ 2 ] S c [ 2 ] ( b )$;
4. (9) $Q b [ 0 ] = C Q c [ 2 ] ( O 2 ( S c [ 2 ] ( b ) ) ) O 2 ( S c [ 2 ] ( b ) )$;
3. (iii) if $d ∈ 𝒟 c [ 3 ]$ then
1. (10) d is a 1-dimensional subspace in $Z c [ 2 ]$;
2. (11) $L c [ 2 ] ( d ) ≅ Sym 4$;
3. (12) $G c [ 2 ] ∩ G d [ 3 ] = Q c [ 2 ] S c [ 2 ] L c [ 2 ] ( d )$;
4. (13) $Q d [ 3 ] = C Q c [ 2 ] ( O 2 ( L c [ 2 ] ( d ) ) ) O 2 ( L c [ 2 ] ( d ) )$.
The structure of $G d [ 3 ]$ and its action on 𝒟 d follows from results in Sections 5.2, 7.3, and 7.4.
Lemma 7.7.4 Let d ∈ 𝒟[3] and let $G d [ 3 ] ≅ 2 + 1 + 12 ⋅ 3 ⋅ Aut ( M 22 )$ be the stabilizer of d in G. Then
1. (i) if $a ∈ 𝒟 d [ m ]$ then
1. (1) a is a hexad from ℋ d ;
2. (2) $M d [ 3 ] ( a ) ≅ 2 5 : 3 ⋅ Sym 6$;
3. (3) $G d [ 3 ] ∩ G a [ m ] = Q d [ 3 ] M d [ 3 ] ( a )$;
4. (4) $Q a [ m ] = C Q d [ 3 ] ( O 2 ( M d [ 3 ] ( a ) ) ) O 2 ( M d [ 3 ] ( a ) )$.
2. (ii) if $b ∈ 𝒟 d [ 0 ]$ then
1. (5) b is an octet from 𝒪 d ;
2. (6) $M d [ 3 ] ( b ) ≅ 2 × Sym 3 × 2 3 : L 3 ( 2 )$;
3. (8) $G d [ 3 ] ∩ G b [ 0 ] = Q d [ 3 ] M d [ 3 ] ( b )$;
4. (9) $Q b [ 0 ] = C Q d [ 3 ] ( O 2 ( M d [ 3 ] ( b ) ) ) O 2 ( M d [ 3 ] ( b ) ) ≅ 2 10$;
3. (iii) if $c ∈ 𝒟 d [ 2 ]$ then
1. (10) c is a pair from 𝒫 d ;
2. (11) $M d [ 3 ] ( c ) ≅ 2 6 : 3 : Sym 5$;
3. (p. 146 )
4. (12) $G d [ 3 ] ∩ G c [ 2 ] = Q d [ 3 ] M d [ 3 ] ( c )$;
5. (13) $Q c [ 2 ] = [ O 2 ( G d [ 3 ] ) , O 2 ( M d [ 3 ] ( c ) ) ] O 2 ( M d [ 3 ] ( c ) ) ≅ 2 3 + 12$.
Exercises
1. 1. Let 𝒥 = {G [0], G [1], G [2]} be the amalgam defined in Section 7.1. Show that the actions of N G [1] (Q [m]) and N G [2] (Q [m]) on Q [m] generate the Mathieu group M 24.
2. 2. Show directly that the amalgam {F [m1]/Q [m], F [m2]/Q [m], F [m3]/Q [m]} as in (7.3.3) is isomorphic to 𝒜(M 24).
3. 3. Give a computer-free proof of the simple connectedness of the rank 3 tilde geometry 𝒢(M 24). | 2013-05-19 11:01:07 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 41, "mathml": 148, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8898013830184937, "perplexity": 1438.9839227817736}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368697420704/warc/CC-MAIN-20130516094340-00095-ip-10-60-113-184.ec2.internal.warc.gz"} |
https://tex.stackexchange.com/questions/499008/incorrect-indentation-for-the-end-word-of-a-for-loop | # Incorrect indentation for the “end” word of a for loop
I'm writing pseudocode in LaTeX using the algorithm package; however, I have a problem with the indentation of the last end keyword for the for-loop if this for-loop contains an if statement. If it doesn't contain the if statement, all looks fine.
Example with no if statement and correct indentation:
Example with an if statement and incorrect indentation:
The code used to generate the latter one is following (just remove the if part to get the code used for first example):
    \begin{algorithm}[H]
    \caption{Test algorithm}
    \SetAlgoLined
    \For{$t \in T$}{
      $t \gets$ 1
      \If{ (t) \in P} {
        t \gets 0
      }
    }
    \end{algorithm}
Do you know how can this be fixed? Thanks
• Are you sure about "using algorithm package"? The \SetAlgoLined seems to be introduced by the algorithm2e package. Could you please add a minimal working example (MWE) to your question? Also, do you receive any error messages? If so, please don't ignore them. Even if you get something that on first glance resembles a pdf file, there can still be issues with it. After an error, TeX only tries to recover enough to syntax check more of the file, it does not try to make sensible output after an error. – leandriis Jul 7 at 16:56
From your usage of \SetAlgoLined I assume that you are actually using the algorithm2e package. If I make a MWE with this package, I receive the following error message: "Missing $ inserted." It is caused by \in and \gets being used in text mode in lines 12 and 13. If I add the missing $s, the MWE compiles perfectly fine and gives the desired result:
    \documentclass{article}
    \usepackage[ruled]{algorithm2e}
    \begin{document}
    \begin{algorithm}[H]
    \caption{Test algorithm}
    \SetAlgoLined
    \For{$t \in T$}{
      $t \gets$ 1
      \If{ $(t) \in P$} {
        $t \gets 0$
      }
    }
    \end{algorithm}
    \end{document}
• Thanks a lot, that was exactly the problem and sorry for stating misleading package name! – leopik Jul 7 at 17:08 | 2019-08-19 20:50:07 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9403958916664124, "perplexity": 1662.9831736621961}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027314959.58/warc/CC-MAIN-20190819201207-20190819223207-00070.warc.gz"} |
https://www.nature.com/articles/s41598-022-10395-6?error=cookies_not_supported&code=b12a81d1-a005-4c26-8a52-f3af3d1ea0ea | Thank you for visiting nature.com. You are using a browser version with limited support for CSS. To obtain the best experience, we recommend you use a more up to date browser (or turn off compatibility mode in Internet Explorer). In the meantime, to ensure continued support, we are displaying the site without styles and JavaScript.
The effect of ambient temperature on in-hospital mortality: a study in Nanjing, China
Abstract
To reduce inpatient mortality and improve the quality of hospital management, we explore the relationship between temperatures and in-hospital mortality in a large sample across 10 years in Nanjing, Jiangsu. We collected 10 years' data on patient deaths from a large research hospital. A distributed lag non-linear model (DLNM) was used to find the association between daily mean temperatures and in-hospital mortality. A total of 6160 in-hospital deaths were documented. Overall, the peak relative risk (RR) appeared at 8 °C, with the range of 1 to 20 °C having a significantly high mortality risk. In the elderly (age ≥ 65 years), peak RR appeared at 5 °C, with the range of −3 to 21 °C having a significantly high mortality risk. In males, peak RR appeared at 8 °C, with the range of 0 to 24 °C having a significantly high mortality risk. Moderate cold (defined as the 2.5th percentile of daily mean temperatures to the moderate temperature, MT), not extreme temperatures (≤ 2.5th percentile or ≥ 97.5th percentile of daily mean temperatures), increased the risk of death in hospital patients, especially in elderly and male in-hospital patients.
Introduction
Climate as an influential factor in fluctuations of mortality has received more and more attention [1,2]. In recent years, many studies have suggested that extreme temperatures may negatively affect health and increase the mortality risk of many diseases [3]. Both rising and falling temperatures are related to the risk of temperature-related illness. Gasparrini et al. collected data from 384 locations to quantify the total mortality burden attributable to non-optimum ambient temperatures, and the relative contributions from moderate and extreme temperatures [1]. In their study, temperature was responsible for advancing a substantial fraction of deaths, corresponding to 7.71% of the mortality in the selected countries. Chen et al. reported that cold weather generally increased emergency hospital admissions, especially for respiratory diseases and the elderly population [4]. A similar result by Luo et al. is that cold temperatures impact stroke admissions in male and young subjects. Besides, exposure to extreme cold was associated with increased hospitalizations for ischemic and hemorrhagic strokes [5]. Similar studies in the Korean population showed that both high and low temperatures could increase the risk of hospitalization [6,7,8,9].
This temperature-disease correlation has also been reported in China [10,11,12]. The literature reports that injuries and nervous, circulatory, and respiratory diseases are sensitive to heat, with attributable fractions of 6.5%, 4.2%, 3.9%, and 1.85%, respectively. Respiratory and circulatory diseases are sensitive to cold temperatures, with attributable fractions of 13.3% and 11.8%, respectively [10]. In addition, it has been reported that both cold and hot temperatures increase mortality risk and that the relationship varies geographically and among groups of people [10,11,13]. Compared with North China, South China had a higher minimum mortality temperature, and there was a more pronounced cold effect in southern parts of China and a more pronounced hot effect in northern parts [11]. Deng et al. reported that the elderly (≥ 65 years) are more susceptible to daily mean temperature and diurnal temperature range (DTR), and females are more susceptible to the high-DTR effect than males [13].
However, as far as we know, most studies have looked only at the relationship between temperatures and mortality in the general population, not at specific groups such as hospitalized patients. If there is a relationship between temperature change and inpatient death, identifying its pattern can provide a scientific basis for hospitals to reduce in-hospital mortality and improve the quality of hospital management. In recent years, the distributed lag non-linear model (DLNM) has been used to study meteorological conditions and health effects; it is a modeling framework that flexibly describes associations showing potentially non-linear and delayed effects in time series data [1,12,14]. Thus, in this study, we investigated the impacts of ambient temperatures on in-hospital mortality over 10 years by DLNM.
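For readers unfamiliar with DLNMs: the "distributed lag" part starts from a matrix of lagged exposures, onto which basis functions are then applied in both the exposure and the lag dimension. The minimal Python sketch below illustrates only the lag-matrix step; function and variable names are illustrative and not taken from any particular DLNM software:

```python
def lag_matrix(series, max_lag):
    """Row t holds [x_t, x_(t-1), ..., x_(t-max_lag)].

    Rows are produced only once max_lag past values exist, mirroring
    how lagged exposures enter a DLNM cross-basis.
    """
    rows = []
    for t in range(max_lag, len(series)):
        rows.append([series[t - lag] for lag in range(max_lag + 1)])
    return rows

# Toy daily mean temperatures; this study uses a maximum lag of 21 days.
temps = [5.0, 6.5, 8.0, 7.5, 9.0, 10.5, 11.0]
M = lag_matrix(temps, max_lag=3)
assert len(M) == len(temps) - 3
assert M[0] == [7.5, 8.0, 6.5, 5.0]  # day 3 with lags 0..3
print(M[0])
```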
Methods
Data collection
Nanjing, the provincial capital of Jiangsu, is an international metropolis with a population of over 9 million residents [15]. It is located in the east of China, on the Yangtze River and near the sea (31°14′ to 32°37′ north latitude, 118°22′ to 119°14′ east longitude), with a humid north subtropical climate. It has four distinct seasons, abundant rain, short spring and autumn, and long winter and summer, and the difference between winter and summer temperatures is obvious [16].
In this study, we collected daily meteorological data for Nanjing for the period from January 1, 2010 to June 30, 2020 from the China Meteorological Data Service Center [17], including daily mean temperature (Temp, °C), average air pressure (Ap, kPa), and average daily relative humidity (Rh, %). Mortality data, including time of death, age, and gender of the patients who died from January 1, 2010 through June 30, 2020, were extracted from the medical records front page system of Jiangsu Province Hospital in Nanjing, Jiangsu. This hospital has a construction area of 410,000 m², 4600 beds, and more than 6500 employees; it receives 10,000 to 20,000 outpatients every day [18,19].
Statistical analysis
The relationship between meteorological factors and in-hospital mortality is nonlinear, so a distributed lag non-linear model (DLNM) was used to investigate the potential exposure-lag-response association between in-hospital mortality and daily mean temperature. A quasi-Poisson generalized linear model (GLM) was used to model the natural logarithm of the daily in-hospital mortality counts. The statistical model was as follows:
$$\log[{\text{E}}({\text{Y}}_{\text{t}})] = \alpha + cb(Tmean_{\text{t}},\, lag) + {\text{ns(time,\, df)}} + {\text{ns}}({\text{Ap}}_{\text{t}},\, {\text{df}}) + {\text{ns}}({\text{Rh}}_{\text{t}},\, {\text{df}}) + \beta \, {\text{DOW}}_{\text{t}}$$
In this model, E(Yt) denotes the expected daily count of in-hospital deaths on day t over January 1, 2010 to June 30, 2020, and α is the intercept. The cross-basis matrix of temperature, cb(Tmean), captures the cumulative and delayed effects of daily mean temperature1,20. A cross-basis is a bi-dimensional space of functions that simultaneously describes the shape of the relationship along the temperature dimension and its distributed lag structure; choosing a cross-basis amounts to choosing two sets of basis functions, which are combined to generate the cross-basis functions14,21. For the temperature dimension we used a natural cubic spline with 3 internal knots placed at equally spaced values, and we set the maximum lag to 21 days based on the literature22. ns(.) denotes a natural cubic spline: average air pressure (Ap) and average daily relative humidity (Rh) were modeled with natural cubic splines, while day of the week (DOW) was included as a categorical variable with regression coefficient β. We chose 1 df (degree of freedom) per year for the time spline, 3 df for Ap, and 3 df for Rh23. We then plotted the exposure-lag-response surface of temperature and in-hospital mortality by estimating the relative risk (RR), with its 95% confidence interval (CI), of dying in hospital on a given day. When fitting the model, the daily RR of in-hospital death was lower at higher temperatures; by comparison, 28 °C (the 90th percentile of daily mean temperatures) was chosen as the moderate temperature (MT) for the model. Temperature was divided into four grades: extreme cold (≤ 2.5th percentile of daily mean temperatures), moderate cold (2.5th percentile to the MT), moderate heat (MT to the 97.5th percentile), and extreme heat (≥ 97.5th percentile).
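The percentile-based temperature grading described above can be sketched in code. The following is an illustrative example only, not the study's actual analysis code (which used the dlnm package in R); the 28 °C moderate temperature and the 2.5th/97.5th percentile cut-offs are taken from the definitions above.

```python
import numpy as np

# Illustrative sketch of the four temperature grades defined in the text:
# extreme cold (<= 2.5th percentile), moderate cold (2.5th pct to MT),
# moderate heat (MT to 97.5th pct), extreme heat (>= 97.5th percentile).
def grade_temperatures(temps, mt=28.0):
    temps = np.asarray(temps, dtype=float)
    p025, p975 = np.percentile(temps, [2.5, 97.5])
    grades = []
    for t in temps:
        if t <= p025:
            grades.append("extreme cold")
        elif t < mt:
            grades.append("moderate cold")
        elif t < p975:
            grades.append("moderate heat")
        else:
            grades.append("extreme heat")
    return grades
```

Applied to a vector of daily mean temperatures, this reproduces the four exposure categories that the attributable-fraction comparisons in the Discussion refer to.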
The stratified analysis used in this study divided suspected confounding factors into levels and analyzed the strength of the exposure-outcome association within each level, so that the influence of confounding factors on the results could be controlled to some extent24,25. We stratified age into < 65 years and ≥ 65 years and gender into males and females, and used stratified analysis to compare the temperature effect between age groups and genders. DLNM analyses were performed with the "dlnm" package of R (version 4.0.2 for Windows)26.
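As a minimal illustration of this stratification step (a sketch only — the record fields and values here are hypothetical, and the actual analysis was carried out with the dlnm package in R as stated above):

```python
from collections import defaultdict

# Count deaths within each (age group, gender) stratum, mirroring the
# < 65 / >= 65 years and male/female strata used in the study.
def count_by_stratum(records):
    counts = defaultdict(int)
    for rec in records:
        age_group = ">=65" if rec["age"] >= 65 else "<65"
        counts[(age_group, rec["gender"])] += 1
    return dict(counts)

example = [{"age": 70, "gender": "male"},
           {"age": 50, "gender": "female"},
           {"age": 80, "gender": "male"}]
print(count_by_stratum(example))  # {('>=65', 'male'): 2, ('<65', 'female'): 1}
```

Each stratum's series of daily counts would then be fitted with its own DLNM, which is how the subgroup RR curves in the Results were obtained.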
Ethics approval and consent to participate
The study was conducted according to the Declaration of Helsinki. The study was approved by the Institutional Ethics Committee of Jiangsu Province Hospital (2020-QT-14) and individual consent for this retrospective analysis was waived.
Results
Table 1 shows the summary statistics of the daily in-hospital deaths, daily mean temperature, average air pressure, and average daily relative humidity. This study includes 6160 deaths; 70.9% (4368) of the deceased were ≥ 65 years old and 67.0% (4127) were male. The average daily mean temperature was 16.6 °C.
Figure 1 shows the trends of temperature, relative humidity, and air pressure by quarter. The seasonal trends of temperature and air pressure were consistent from year to year, while relative humidity showed no obvious seasonal trend.
Figure 2 shows the relationships (lag 0–21 days) between daily mean temperature and in-hospital death predicted by the DLNM, with 95% confidence intervals (95% CI). It indicates an inverted V-shaped temperature-mortality relationship: daily in-hospital deaths were first positively and then negatively associated with daily mean temperature. The peak RR appeared at 8 °C with a significant association (1.04, 95% CI 1.22–1.68), and the RR was higher than 1.0 between 1 and 20 °C (see Supplementary 1). Because moderate cold was related to higher RR values, we refer to this as the moderate cold effect in this study.
Stratified analyses by age (< 65 and ≥ 65 years) and gender (male and female) also showed the moderate cold effect (see Supplementary 1). The effect in patients ≥ 65 years (Fig. 3a) was more pronounced than in those < 65 years (Fig. 3b): an RR with 95% CI > 1.0 was found between − 3 and 21 °C in the ≥ 65 years group but not in the < 65 years group. The peak RR appeared at 5 °C with a significant association in the ≥ 65 years group (1.56, 95% CI 1.27–1.90), and at 11 °C with no significant association in the < 65 years group (1.27, 95% CI 0.94–1.62). The moderate cold effect on death also differed between genders: an RR with 95% CI > 1.0 was found between 0 and 24 °C in males but not in females (Fig. 3c,d). The peak RR appeared at 8 °C with a significant association in males (1.66, 95% CI 1.37–2.01) and at 8 °C with no significant association in females (1.06, 95% CI 0.81–1.40).
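Throughout these results, an association is called "significant" when the 95% confidence interval of the RR excludes 1.0. A trivial sketch of that check (illustrative only):

```python
# An RR estimate is treated as statistically significant at the 5% level
# when its 95% confidence interval does not contain 1.0.
def rr_significant(ci_low, ci_high):
    return ci_low > 1.0 or ci_high < 1.0

print(rr_significant(1.27, 1.90))  # True:  the >= 65 years peak (RR 1.56)
print(rr_significant(0.94, 1.62))  # False: the < 65 years peak (RR 1.27)
```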
Furthermore, we plotted the estimated lag-response curves at − 1 °C (Fig. 4). The RR of in-hospital mortality was lower than 1.0 on the first day (lag 0: RR (total) = 0.76, 95% CI 0.55–1.06; RR (≥ 65) = 0.80, 95% CI 0.54–1.19; RR (< 65) = 0.69, 95% CI 0.37–1.28; RR (male) = 0.84, 95% CI 0.56–1.26; RR (female) = 0.62, 95% CI 0.35–1.11). It then increased sharply over the next 2 days and finally levelled off. The peak RR of in-hospital mortality occurred at lag 1.6 (1.23, 95% CI 1.05–1.44) for total mortality, lag 1.7 (1.16, 95% CI 0.96–1.41) for ≥ 65 years, lag 1.5 (1.40, 95% CI 1.04–1.88) for < 65 years, lag 1.0 (1.72, 95% CI 1.16–2.55) for males, and lag 1.1 (1.79, 95% CI 1.02–3.19) for females.
Discussion
There have been many similar studies in the general population, but they are of limited use for guiding the management of a specific group such as hospitalized patients1,7,11. To our knowledge, there has been no study of the relationship between mortality risk and ambient temperature among in-hospital patients. This study is more targeted, taking inpatients as the research population, and provides a scientific basis for the management of inpatients. We explored the association between ambient temperature and in-hospital mortality in a large research hospital in Nanjing, Jiangsu, China, using 10 years of mortality data (2010–2020) analyzed with a DLNM.
Healthy bodies respond physiologically to moderately high and low temperatures by sensing changes in skin and core temperature. Heat causes blood vessel dilation (vasodilation), a significant increase in pumping rate (cardiac output), and sweating; blood pressure drops in warm conditions due to vasodilation and dehydration27,28,29. By contrast, in cold conditions blood vessels narrow (vasoconstriction), slowing the transfer of heat to the surface of the body30. Enhancement or impairment of any organ system (nervous, endocrine, renal, cardiovascular, or cutaneous) involved in thermoregulation alters sensitivity to high and low temperatures. Liu et al. reported that temperature-induced injury is thought to be associated with the sympathetic nervous system, an enhanced sympathetic response to renin-angiotensin system activation, dehydration, and systemic inflammatory responses31. It has been reported that high temperatures may disrupt sleep, with one study in Detroit finding that blood pressure rose in the morning after a hot night32. Blood viscosity, cholesterol, and platelets show a seasonal pattern peaking in winter, which may increase the risk of a heart attack or stroke33. However, Song et al. reported that cold or heat waves occurring early in the cool or warm season may be more dangerous because of the accumulation of susceptible pools or a lack of preparation for extreme temperatures, which is similar to our results34.
In this study, we found that moderate cold increased the risk of in-hospital mortality. The temperature with the lowest death rate was 28 °C, which lies at the 90th percentile of all temperatures. By contrast, previous studies have reported heat exposure as a health threat: a meta-analysis found that heat exposure was associated with increased risk of cardiovascular, cerebrovascular, and respiratory mortality35, and Lu et al. indicated that the burden of cardiovascular hospitalizations caused by high temperature could increase in the context of global warming36. Consistent with our findings, however, many studies indicate that while the burden of temperature-related mortality may shift toward higher temperatures in the future, cold temperature may be the bigger problem in temperate cities today27,37,38. A review reported that deaths and hospitalizations due to extreme heat increased sharply in the Detroit area, while deaths due to cold temperatures increased gradually27. A study conducted in Hong Kong noted that low temperatures had a greater impact on non-accidental, cardiovascular, respiratory, and cancer deaths than high temperatures38.
Furthermore, we found that moderate cold, not extreme temperatures, carried the highest mortality risk. These results are similar to several previous studies. Research in China reported that cold temperatures were responsible for a higher proportion of deaths than heat39, and Gasparrini et al. reported that, in China and other countries, the effect of days with extreme temperatures was smaller than that of milder but non-optimum weather1. It has also been demonstrated that moderate cold (2.5th percentile to the MMT) accounts for a higher percentage (6.66%) of mortality than extreme cold (0.63%) or extreme heat (0.23%)40. Another study in China had similar results: 1.14%, 10.49%, 2.08%, and 0.63% of mortality were attributable to extreme cold (− 6.4 to − 1.4 °C), moderate cold (− 1.4 to 22.8 °C), moderate heat (22.8 to 29 °C), and extreme heat (29.0 to 31.6 °C), respectively2.
This finding may be explained by three factors. First, compared with those places, Jiangsu Province has a temperate monsoon climate with less extreme weather16, so the effect of extreme temperatures on mortality may be reduced. This is similar to Antonio Gasparrini's finding that the temperature percentile of minimum mortality varies from roughly the 60th percentile in tropical areas to about the 80–90th percentile in temperate regions1. Second, air conditioning can weaken the effect of extreme heat and cold while highlighting the risk of moderate cold. All wards in the hospital are equipped with air conditioners; whether the outdoor temperature is high or low in summer and winter, the hospital turns the air conditioning on, so hospitalized patients experience almost no extreme temperatures. However, hospital air conditioners are often turned off in mild weather, which may negatively affect health because inpatients are less able to adapt. Although Alberini et al. reported that air conditioning ownership was not associated with self-reported heat illness in a Canadian study41, more reports suggest that air conditioning plays a positive role in the link between temperature and mortality42,43,44,45,46. Deng et al. provided evidence that daily mean temperature and DTR were significantly associated with non-accidental mortality and had delayed effects13. Third, Jiangsu Province has a high level of economic development, ranking among the top three Chinese provinces by GDP, and is therefore less susceptible to extreme temperatures47,48. Governments in high-GDP regions can build quality infrastructure and have greater capacity to cope with extreme temperatures; by contrast, governments in low-GDP areas have fewer resources for preventive and adaptive measures47.
Many factors influence whether an individual is susceptible to temperature, including medications and alcohol, homelessness, and age29. Anticholinergic, antihypertensive, and antipsychotic drugs, used to treat disorders of the nervous, endocrine, renal, or cardiovascular systems, may impair thermoregulation, reducing an individual's ability to sense heat or cold or inhibiting other temperature-regulating responses49. Qualitative studies in Detroit confirmed the findings of qualitative and survey studies in other cities that cost is a major barrier to the use of air conditioning50,51. Income or poverty has also been found to be associated with heat-related mortality at the community level in the United States, China, and Japan52,53,54. With age, even in the absence of obvious heart disease and heart failure, the amount of blood pumped per heartbeat (stroke volume) decreases, as does the ability of blood vessels to dilate and contract55,56. At the same time, the loss of muscle mass reduces internal heat production, although increased fat storage retains heat57.
To explore the relationship between in-hospital mortality and temperature in patients of different genders and ages, we performed stratified analyses for age and sex subgroups. We found that older patients (≥ 65 years) and males were more susceptible to moderate cold with respect to in-hospital mortality and had a longer range of risk temperatures. The current literature on the effect of temperature by age is largely consistent, indicating that elderly people are more likely to be affected by temperature58,59; Park et al. found that mortality rates of elderly outdoor workers increased consistently with temperature59. In addition, we found that elderly patients had a wider risk-temperature range (− 3 to 21 °C) than total patients (1 to 20 °C), which supports the above results.
Findings on the effect of gender on the temperature-mortality relationship are conflicting. Some researchers reported that males were more vulnerable to temperature than females60,61,62, which is consistent with this study. Junkka et al. reported that the OR of mortality at − 20 °C was 1.17 (0.88–1.54) among females and 1.94 (1.53–2.45) among males62. Zhai et al. reported that the cold-temperature effect was stronger in males than in females, with 118,186 male deaths and 111,002 female deaths63. But some studies found no difference between males and females: Basu et al. reported no significant difference in mortality between males (2.8%, 95% CI 1.1, 4.6) and females (2.6%, 95% CI 1.2, 3.9)64. In addition, other studies reported that a large diurnal temperature range threatens vulnerable groups, including females and the elderly65; Deng et al. also pointed out that females were more susceptible than males to a high diurnal temperature range effect, with 17.01 and 11.82 deaths per day, respectively13.
Several limitations of this study should also be acknowledged. The temperature used is the ambient temperature, not the actual temperature around the patient; the actual temperature of the environment in which patients live would better reflect the health effects of temperature, so more studies should examine the relationship between the temperature around patients and in-hospital mortality. We did not analyze cause of death because those data were unavailable. In addition, although this hospital is the largest general hospital in Jiangsu Province, the results are still limited to some extent; more data from more hospitals should be collected in the future.
Conclusion
This study found a correlation between mean temperature and inpatient deaths that was influenced by gender and age. Moderate cold temperatures were associated with an increased risk of in-hospital death, with elderly (≥ 65 years) and male patients being more sensitive to the effect of moderate cold. These results need to be confirmed in other hospitals, but they are still of reference value for hospital management and for reducing in-hospital mortality. They show the importance of moderate temperatures for health, and hospital managers should pay more attention to patients during moderate cold. During periods of moderate cold, patient protection should be increased, for example through personalized air conditioning and additional clothing and bedding, and particular attention should be paid to elderly and male patients. In terms of hospital environment and planning, attention should be paid to the use and management of air conditioning and to strengthening temperature monitoring in wards.
For public health departments, this evidence also has important implications for planning interventions to decrease the health risk of harmful temperatures in hospitals, and it suggests that health authorities cannot ignore the risk of moderate cold. In moderately cold weather, health administrative departments should strengthen temperature monitoring and guide hospital managers to adopt personalized temperature-management programs, such as air conditioning management, as the weather changes.
In addition, this study suggests that future research should pay more attention to the relationship between special populations and their environment.
Data availability
The datasets used and/or analyzed during the current study are available from the corresponding author on reasonable request.
References
1. Gasparrini, A. et al. Mortality risk attributable to high and low ambient temperature: A multicountry observational study. Lancet 386(9991), 369–375 (2015).
2. Chen, R. et al. Association between ambient temperature and mortality risk and burden: Time series study in 272 main Chinese cities. BMJ 363, k4306 (2018).
3. Sheridan, S.-C., Lee, C.-C. & Allen, M.-J. The mortality response to absolute and relative temperature extremes. Int. J. Environ. Res. Public Health 16(9), 1493 (2019).
4. Chen, T.-H. et al. Impacts of cold weather on emergency hospital admission in Texas, 2004–2013. Environ. Res. 169, 139–146 (2019).
5. Luo, Y. et al. The cold effect of ambient temperature on ischemic and hemorrhagic stroke hospital admissions: A large database study in Beijing, China between years 2013 and 2014—Utilizing a distributed lag non-linear analysis. Environ. Pollut. 23, 290–296 (2018).
6. Ha, J., Kim, H. & Hajat, S. Effect of previous-winter mortality on the association between summer temperature and mortality in South Korea. Environ. Health Perspect. 119(4), 542–546 (2011).
7. Kim, H., Ha, J.-S. & Park, J. High temperature, heat index, and mortality in 6 major cities in South Korea. Arch. Environ. Occup. Health 61(6), 265–270 (2006).
8. Lim, Y.-H. et al. Effect of diurnal temperature range on cardiovascular markers in the elderly in Seoul, Korea. Int. J. Biometeorol. 57(4), 597–603 (2013).
9. Son, J.-Y., Bell, M.-L. & Lee, J.-T. The impact of heat, cold, and heat waves on hospital admissions in eight cities in Korea. Int. J. Biometeorol. 58(9), 1893–1903 (2014).
10. Su, X. et al. Regional temperature-sensitive diseases and attributable fractions in China. Int. J. Environ. Res. Public Health 17(1), 184 (2019).
11. Ma, W. et al. The temperature-mortality relationship in China: An analysis from 66 Chinese communities. Environ. Res. 137, 72–77 (2015).
12. Dai, Q. et al. The effect of ambient temperature on the activity of influenza and influenza like illness in Jiangsu Province, China. Sci. Total Environ. 645, 684–691 (2018).
13. Deng, J. et al. Ambient temperature and non-accidental mortality: A time series study. Environ. Sci. Pollut. Res. Int. 27(4), 4190–4196 (2020).
14. Gasparrini, A. Distributed lag linear and non-linear models in R: The package dlnm. J. Stat. Softw. 43(8), 1–20 (2011).
15. Nanjing has a permanent population of 9.4234 million by the end of 2021. Nanjing Bureau of Statistics, 2022/2/16. http://tjj.nanjing.gov.cn/gzdt/202202/t20220215_3293790.html
16. National Meteorological Information Center—China Meteorological Data Network, 2021/10/15. http://data.cma.cn/
17. Exploration on high-quality development of Jiangsu Provincial People's Hospital, 2021/10/15. https://mp.weixin.qq.com/s/URaSzszbn7IG5PwlpH8H9A
18. Brief introduction of Jiangsu Provincial People's Hospital (The First Affiliated Hospital of Nanjing Medical University), 2021/10/15. http://www.jsph.org.cn/yiyuangaikuang/jinrishengyi/
19. Gasparrini, A. & Leone, M. Attributable risk from distributed lag models. BMC Med. Res. Methodol. 14, 1–8 (2014).
20. Gasparrini, A., Armstrong, B. & Kenward, M.-G. Distributed lag non-linear models. Stat. Med. 29(21), 2224–2234 (2010).
21. Guo, Y. et al. Global variation in the effects of ambient temperature on mortality: A systematic evaluation. Epidemiology 25(6), 781–789 (2014).
22. Bhaskaran, K. et al. Time series regression studies in environmental epidemiology. Int. J. Epidemiol. 42(4), 1187–1195 (2013).
23. Miller, K. et al. The phase 3 COU-AA-302 study of abiraterone acetate plus prednisone in men with chemotherapy-naive metastatic castration-resistant prostate cancer: Stratified analysis based on pain, prostate-specific antigen, and Gleason score. Eur. Urol. 74(1), 17–23 (2018).
24. Petersen, T., Christensen, R. & Juhl, C. Predicting a clinically important outcome in patients with low back pain following McKenzie therapy or spinal manipulation: A stratified analysis in a randomized controlled trial. BMC Musculoskelet. Disord. 16, 1–7 (2015).
25. R: The R Project for Statistical Computing, 2021/10/15. https://www.r-project.org/
26. Gronlund, C.-J. et al. Climate change and temperature extremes: A review of heat- and cold-related morbidity and mortality concerns of municipalities. Maturitas 114, 54–59 (2018).
27. Kenny, G.-P. et al. Heat stress in older individuals and patients with common chronic diseases. CMAJ 182(10), 1053–1060 (2010).
28. Crandall, C.-G. & Gonzalez-Alonso, J. Cardiovascular function in the heat-stressed human. Acta Physiol. (Oxf) 199(4), 407–423 (2010).
29. Castellani, J.-W. & Young, A.-J. Human physiological responses to cold exposure: Acute responses and acclimatization to prolonged exposure. Auton. Neurosci. 196, 63–74 (2016).
30. Liu, C., Yavar, Z. & Sun, Q. Cardiovascular response to thermoregulatory challenges. Am. J. Physiol. Heart Circ. Physiol. 309(11), H1793–H1812 (2015).
31. Brook, R.-D. et al. Can personal exposures to higher nighttime and early-morning temperatures increase blood pressure?. J. Clin. Hypertens. (Greenwich) 13(12), 881–888 (2011).
32. Hopstock, L.-A. et al. Seasonal variation in cardiovascular disease risk factors in a subarctic population: The Tromso Study 1979–2008. J. Epidemiol. Community Health 67(2), 113–118 (2013).
33. Barnett, A.-G. et al. Cold and heat waves in the United States. Environ. Res. 112, 218–224 (2012).
34. Song, X. et al. Impact of ambient temperature on morbidity and mortality: An overview of reviews. Sci. Total Environ. 586, 241–254 (2017).
35. Lu, P. et al. Temporal trends of the association between ambient temperature and hospitalisations for cardiovascular diseases in Queensland, Australia from 1995 to 2016: A time-stratified case-crossover study. PLoS Med. 17(7), e1003176 (2020).
36. Huber, V. et al. Temperature-related excess mortality in German cities at 2 °C and higher degrees of global warming. Environ. Res. 186, 109447 (2020).
37. Liu, S. et al. The mortality risk and socioeconomic vulnerability associated with high and low temperature in Hong Kong. Int. J. Environ. Res. Public Health 17(19), 7326 (2020).
38. Zhang, Y. et al. Association between moderately cold temperature and mortality in China. Environ. Sci. Pollut. Res. Int. 27(21), 26211–26220 (2020).
39. Gasparrini, A. et al. Temporal variation in heat-mortality associations: A multicountry study. Environ. Health Perspect. 123(11), 1200–1207 (2015).
40. Alberini, A., Gans, W. & Alhassan, M. Individual and public-program adaptation: Coping with heat waves in five cities in Canada. Int. J. Environ. Res. Public Health 8(12), 4679–4701 (2011).
41. Anderson, B.-G. & Bell, M.-L. Weather-related mortality: How heat, cold, and heat waves affect mortality in the United States. Epidemiology 20(2), 205–213 (2009).
42. O’Neill, M.-S., Zanobetti, A. & Schwartz, J. Disparities by race in heat-related mortality in four US cities: The role of air conditioning prevalence. J. Urban Health 82(2), 191–197 (2005).
43. Braga, A.-L., Zanobetti, A. & Schwartz, J. The time course of weather-related deaths. Epidemiology 12(6), 662–667 (2001).
44. Ostro, B. et al. The effects of temperature and use of air conditioning on hospitalizations. Am. J. Epidemiol. 172(9), 1053–1061 (2010).
45. Medina-Ramon, M. & Schwartz, J. Temperature, temperature extremes, and mortality: A study of acclimatisation and effect modification in 50 US cities. Occup. Environ. Med. 64(12), 827–833 (2007).
46. Yang, Z., Wang, Q. & Liu, P. Extreme temperature and mortality: Evidence from China. Int. J. Biometeorol. 63(1), 29–50 (2019).
47. China's provincial GDP ranking 2020 Complete edition (2020 Provincial GDP Ranking), 2021/10/15. http://www.cwtea.net/article/18137.html
48. Gronlund, C.-J. et al. Vulnerability to extreme heat by socio-demographic characteristics and area green space among the elderly in Michigan, 1990–2007. Environ. Res. 136, 449–461 (2015).
49. Gronlund, C.-J. Racial and socioeconomic disparities in heat-related health effects and their mechanisms: A review. Curr. Epidemiol. Rep. 1(3), 165–173 (2014).
50. Sampson, N.-R. et al. Staying cool in a changing climate: Reaching vulnerable populations during heat events. Glob. Environ. Change 23(2), 475–484 (2013).
51. Madrigano, J. et al. Temperature, myocardial infarction, and mortality: Effect modification by individual- and area-level characteristics. Epidemiology 24(3), 439–446 (2013).
52. Chan, E.-Y. et al. A study of intracity variation of temperature-related mortality and socioeconomic status among the Chinese population in Hong Kong. J. Epidemiol. Community Health 66(4), 322–327 (2012).
53. Ng, C.-F. et al. Sociogeographic variation in the effects of heat and cold on daily mortality in Japan. J. Epidemiol. 24(1), 15–24 (2014).
54. Charkoudian, N. Mechanisms and modifiers of reflex induced cutaneous vasodilation and vasoconstriction in humans. J. Appl. Physiol. 109(4), 1221–1228 (2010).
55. Kenney, W.-L., Craighead, D.-H. & Alexander, L.-M. Heat waves, aging, and human cardiovascular health. Med. Sci. Sports Exerc. 46(10), 1891–1899 (2014).
56. Kenney, W.-L. & Munce, T.-A. Invited review: Aging and human temperature regulation. J. Appl. Physiol. 95(6), 2598–2603 (2003).
57. Xing, Q. et al. Impacts of urbanization on the temperature-cardiovascular mortality relationship in Beijing, China. Environ. Res. 191, 110234 (2020).
58. Park, J., Chae, Y. & Choi, S.-H. Analysis of mortality change rate from temperature in summer by age, occupation, household type, and chronic diseases in 229 Korean municipalities from 2007–2016. Int. J. Environ. Res. Public Health 16(9), 1561 (2019).
59. Bell, M.-L. et al. Vulnerability to heat-related mortality in Latin America: A case-crossover study in Sao Paulo, Brazil, Santiago, Chile and Mexico City, Mexico. Int. J. Epidemiol. 37(4), 796–804 (2008).
60. Bai, L. et al. Temperature and mortality on the roof of the world: A time-series analysis in three Tibetan counties, China. Sci. Total Environ. 485–486, 41–48 (2014).
61. Junkka, J. et al. Climate vulnerability of Swedish newborns: Gender differences and time trends of temperature-related neonatal mortality, 1880–1950. Environ. Res. 192, 110400 (2021).
62. Zhai, L. et al. Effects of ambient temperature on cardiovascular disease: A time-series analysis of 229288 deaths during 2009–2017 in Qingdao, China. Int. J. Environ. Health Res. 32, 181–190 (2022).
63. Basu, R. & Ostro, B.-D. A multicounty analysis identifying the populations vulnerable to mortality associated with high ambient temperature in California. Am. J. Epidemiol. 168(6), 632–637 (2008).
64. Zhao, Y.-Q. et al. Lagged effects of diurnal temperature range on mortality in 66 cities in China: A time-series study. Zhonghua Liu Xing Bing Xue Za Zhi 38(3), 290–296 (2017).
Acknowledgements
We thank all participants of the study. We thank the National Natural Science Foundation of China and the Jiangsu Province's Key Provincial Talents Program.
Funding
This study was supported by the National Natural Science Foundation of China (81572262), the Jiangsu Province’s Key Provincial Talents Program (ZDRCA2016028).
Author information
Contributions
W.G. and W.M. conceptualized this study. H.P.Y. and W.S.Q. drafted the manuscript. T.T. and X.P. performed the statistical analysis. W.G. and W.M. critically revised the manuscript and approved the final version. All authors reviewed the manuscript.
Corresponding authors
Correspondence to Haiping Yu, Wang Ma or Wen Gao.
Ethics declarations
Competing interests
The authors declare no competing interests.
Publisher's note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Yu, H., Sheng, W., Tian, T. et al. The effect of ambient temperature on in-hospital mortality: a study in Nanjing, China. Sci Rep 12, 6304 (2022). https://doi.org/10.1038/s41598-022-10395-6
https://www.physicsforums.com/threads/borel-resummation.130049/ | # Borel resummation
1. Aug 29, 2006
### lokofer
Borel "resummation"...
Let the following be a divergent series:
$$\sum _{n=0}^{\infty} a(n)$$ (1)
then if you "had" that $$f(x)= \sum _{n=0}^{\infty} \frac{a(n)}{n!}x^{n}$$
You could obtain the "sum" of the series (1) as $$S= \int_{0}^{\infty}dte^{-t}f(t)$$ in case the integral converges...
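A concrete numerical check of this recipe (my own sketch, not part of the original post): take the classic Euler series with a(n) = (-1)^n n!. Its Borel transform sums to f(t) = 1/(1+t), so the Borel sum is the integral of e^{-t}/(1+t) over [0, ∞), which equals the Euler-Gompertz constant ≈ 0.5963.

```python
from math import exp

# Borel sum of the divergent Euler series a(n) = (-1)^n * n!:
# its Borel transform is f(t) = sum (-t)^n = 1/(1+t), so
# S = integral_0^inf e^(-t)/(1+t) dt, approximated here with the
# trapezoidal rule on [0, 50] (the tail beyond 50 is ~e^-50, negligible).
def borel_sum_euler(upper=50.0, n=200_000):
    h = upper / n
    total = 0.5 * (1.0 + exp(-upper) / (1.0 + upper))  # endpoint terms
    for i in range(1, n):
        t = i * h
        total += exp(-t) / (1.0 + t)
    return h * total

print(round(borel_sum_euler(), 4))  # 0.5963, the Euler-Gompertz constant
```

So even though the original series diverges wildly, the Borel integral assigns it a finite value.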
- Yes, that's "beautiful" — the problem is... what happens if the coefficients a(n) are complicated? Then how can you obtain the sum of the series?
- By the way, I think that Borel resummation can be applied if $$f(t)=O(e^{Mt})$$ with M>0, but what happens if f(t) grows faster than any positive exponential?
https://www.gradesaver.com/textbooks/math/other-math/thinking-mathematically-6th-edition/chapter-5-number-theory-and-the-real-number-system-5-3-the-rational-numbers-exercise-set-5-3-page-284/13 | ## Thinking Mathematically (6th Edition)
Published by Pearson
# Chapter 5 - Number Theory and the Real Number System - 5.3 The Rational Numbers - Exercise Set 5.3 - Page 284: 13
#### Answer
$\dfrac{19}{8}$
#### Work Step by Step
To convert the given mixed number to an improper fraction, perform the following steps: (1) Multiply the denominator by the whole number part. (2) Add the numerator to the result of Step (1). (3) Write the result of Step (2) as the numerator and put the original denominator in the denominator. (4) Keep the sign of the given mixed number. When we perform the steps above, we find: (1) $8(2)=16$; (2) $16 + 3=19$; (3) $\dfrac{19}{8}$.
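The rule from the steps above can also be checked mechanically; here is a small Python sketch using the standard-library Fraction type:

```python
from fractions import Fraction

def mixed_to_improper(whole, num, den):
    # Steps (1)-(3): denominator * whole + numerator, over the original denominator
    return Fraction(den * whole + num, den)

print(mixed_to_improper(2, 3, 8))  # 19/8
```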
https://www.wyzant.com/resources/answers/424765/calculate_the_volume_of_0_750_mol_l_sulfuric_acid_needed_to_neutralize_completely_20_g_if_sodium_hydroxide | DjeueuA B.
# calculate the volume of 0.750 mol/L sulfuric acid needed to neutralize completely 20 g of sodium hydroxide
How would you calculate the volume with the given information | 2021-06-23 17:45:29 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8961992263793945, "perplexity": 4003.0910332568806}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623488539764.83/warc/CC-MAIN-20210623165014-20210623195014-00079.warc.gz"} |
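A sketch of one way to set it up (assuming a molar mass of about 40.0 g/mol for NaOH and the balanced equation H2SO4 + 2 NaOH → Na2SO4 + 2 H2O):

```python
mass_naoh = 20.0        # g of NaOH
molar_mass_naoh = 40.0  # g/mol (approx.: 22.99 + 16.00 + 1.01)
conc_h2so4 = 0.750      # mol/L

mol_naoh = mass_naoh / molar_mass_naoh  # 0.500 mol of base
mol_h2so4 = mol_naoh / 2                # 1 mol acid neutralizes 2 mol base
volume_L = mol_h2so4 / conc_h2so4       # ≈ 0.333 L
print(round(volume_L * 1000), "mL")     # 333 mL
```

So roughly a third of a litre of the 0.750 mol/L sulfuric acid is needed.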
https://tex.stackexchange.com/questions/481537/modulo-2-binary-long-division-in-european-notation?noredirect=1 | # Modulo 2 binary long division in European notation [duplicate]
I need to represent binary modulo 2 long division in my tex document. Notation needed is same as
https://en.wikipedia.org/wiki/Long_division#Eurasia
under Austria, Germany, etc.
I know about longdiv package, but it doesn't seem to support this.
Is there any package to achieve this? If not, how can I manually do this.
Thanks in advance
• You can manually draw it using TikZ, but it is a bit painful, and very time-consuming if you have a lot of such divisions. – user156344 Mar 26 '19 at 14:49
## 2 Answers
The fresh new version of the longdivision package (v1.1.0) produces almost the output you want, with the new german style. As TeX Live 2018 is currently frozen, you cannot use the TeX Live update utility for this package; instead, simply download the longdivision.sty file from here and add it to your local texmf directory, or place it alongside your .tex file in the same directory.
\documentclass{article}
\usepackage{longdivision}
\begin{document}
\longdivision[style=german]{127}{4}
\end{document}
The differences from the Wikipedia output are:
• no negative sign displayed for the subtraction operation
• dots instead of comma for the decimal separator
The documentation shows a command \longdivdefinestyle for modifying the display of the output, but I'm not yet able to add a negative sign for the operation, nor to suppress the dots.
• I'm sure your answer will make many happy users. – Steven B. Segletes Mar 26 '19 at 16:29
The German style, as depicted here:
\documentclass[12pt]{article}
\usepackage{mathtools}
\usepackage[TABcline]{tabstackengine}
\TABstackMath
\begin{document}
\tabbedShortunderstack[r]{
&12&7& & &:\ 4\ =\ 31.75\\
-&12& & & &\\
\TABcline{2}
& 0&7& & &\\
& -&4& & &\\
\TABcline{3}
& &3&0& &\\
& -&2&8& &\\
\TABcline{3-4}
& & &2&0&\\
& &\mathllap{-}&2&0&\\
\TABcline{4-5}
& & & &0&
}
\end{document}
Here, I emulate the Cyprus/France version cited in the OP's link
\documentclass[12pt]{article}
\usepackage[TABcline]{tabstackengine}
\TABstackMath
\begin{document}
\begin{tabular}{r@{}|@{}l}
\tabbedShortunderstack[r]{
63&5&9\\
-51& &\\
\TABcline{1}
12&5&\\
-11&9&\\
\TABcline{1-2}
&6&9\\
-&6&8\\
\TABcline{2-3}
& &1
}
&
\tabbedShortunderstack[l]{
17&\\
\TABcline{1-2}
37&4
}
\end{tabular}
\end{document} | 2020-06-02 14:24:48 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8685873746871948, "perplexity": 3807.6144402539767}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590347425148.64/warc/CC-MAIN-20200602130925-20200602160925-00366.warc.gz"} |
https://newbedev.com/geodesics-of-anti-de-sitter-space | # Geodesics of anti-de Sitter space
"Every timelike geodesic will cross the same point after a time interval of $$\pi$$" will be true if the half-period is $$\pi$$. You found the general solution for $$x(\tau)$$, namely $$x(\tau)=A\sin\tau+B\cos\tau$$ or, alternately, $$x(\tau)=A\sin{(\tau-\tau_0)}.$$ When $$\tau$$ increases by $$\pi$$, $$x$$ does come back to what it was, after a half-period.
But we want to show that, when $$x$$ comes back, $$t$$, and not just $$\tau$$, has increased by $$\pi$$. So what is $$t$$ doing?
When you substitute $$x(\tau)=A\sin{(\tau-\tau_0)}$$ into $$\frac{\dot{x}^2}{1+x^2}-(1+x^2)\dot{t}^2=-1$$ and solve for $$t$$, you get $$t(\tau)=\tan^{-1}{[\sqrt{A^2+1}\tan{(\tau-\tau_0)}]}+t_0.$$
To see what is going on here, let's take $$\tau_0$$ and $$t_0$$ to be zero (since they just represent uninteresting time translations) and look at the function $$\tan^{-1}{(\sqrt{A^2+1}\tan{\tau})}$$. Here is a plot of it when $$A=\sqrt{3}$$ (just an arbitrary value as an example):
But $$t$$ isn't really discontinuous like this. The arctangent function is multivalued, and we have to take the appropriate branch of it so that t increases continuously with $$\tau$$. This means we move up the second blue curve by $$\pi$$, the third blue curve by $$2\pi$$, etc. to get a continuous function $$t(\tau)$$ that looks like this:
The result is that whenever $$\tau$$ increases by $$\pi$$, so does $$t$$!
So, to summarize, the timelike geodesics are
\begin{align} x&=A\sin\tau \\ t&=\tan^{-1}{[\sqrt{A^2+1}\tan{\tau}]} \end{align}
where we have dropped the uninteresting time-translation constants.
When $$\tau$$ increases by $$\pi$$, $$t$$ also increases by $$\pi$$, and $$x$$ comes back to what it was. This is what you were trying to show.
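This can also be checked numerically. Substituting $$x=A\sin\tau$$ into the normalization condition gives $$\dot t=\sqrt{A^2+1}/(1+A^2\sin^2\tau)$$, and integrating this over half a period $$\tau\in[0,\pi]$$ should give exactly $$\pi$$ for any $$A$$. A short sketch in Python:

```python
import math

def delta_t(A, steps=200_000):
    """Trapezoidal integral of dt/dtau = sqrt(A^2+1)/(1 + A^2 sin^2 tau) over [0, pi]."""
    h = math.pi / steps
    total = 0.0
    for k in range(steps + 1):
        tau = k * h
        w = 0.5 if k in (0, steps) else 1.0  # trapezoid endpoint weights
        total += w * math.sqrt(A * A + 1.0) / (1.0 + (A * math.sin(tau)) ** 2)
    return total * h

for A in (0.5, math.sqrt(3), 10.0):
    print(round(delta_t(A), 6))  # 3.141593 for every A
```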
First, the statement
will cross the same point after a time interval of $$\pi$$
is wrong. In the cited paper the actual statement is:
… each timelike geodesic which intersects the $$t$$ axis at the point $$t=t_0$$ intersects that axis again at $$t=t_0+\pi$$.
So the $$\pi$$ interval refers to passing through the $$x=0$$, the actual period for a massive particle moving along a geodesic (as in, not only position but also velocity of the particle is the same) is $$2 \pi$$.
To make the “focusing property” of AdS space intuitive let us recall the canonical embedding of AdS space into the ambient pseudo-Riemannian $$\mathbb{R}^{2,1}$$ space with two timelike and one spacelike coordinates: $$ds^2=-dU^2-dV^2+dX^2$$.
AdS2 is defined as a hyperboloid $$-U^2-V^2+X^2=-1$$. Internal static coordinates $$(t,x)$$ are connected with coordinates of ambient space via: $$(U,V,X) = (\sqrt{1+x^2}\cos(t),\sqrt{1+x^2}\sin(t),x) .$$ It is easy to see that the points with static coordinates $$(x_0,t_0)$$ and $$(x_0,t_0+2\pi)$$ are actually the one and the same. If we “unroll” the $$t$$ variable by making them distinct we actually go from AdS space proper to universal covering space of AdS. Timelike geodesics on AdS are the sections of hyperboloid by a timelike plane of an embedding space passing through the origin. To show that, one could start by showing that circle $$X=0$$, $$U^2+V^2=1$$ (or alternatively $$U=\cos \tau$$, $$V=\sin\tau$$, $$\tau$$ is proper time) is a geodesic and then use AdS isometries (which is a Lorentz group $$SO(2,1)$$ of an embedding space) to make this geodesic into all other timelike geodesics. Since these sections are closed curves (ellipses) (for the AdS space proper), or winding curves periodic in $$t$$ coordinate with a period $$2\pi$$ (for the covering space) we have proven the statement in question (with a correct period), without explicit calculations. Incidentally, the solution $$x(\tau) = A \sin(\tau) + B \cos(\tau)$$ becomes kind of obvious by way of embedding space, with $$A$$ and $$B$$ coming from Lorentzian transformations of $$U$$ and $$V$$.
The actual calculations in the OP's question for the geodesic equation are correct up until the last equation. One should remember that the condition $$g(u,u)=-1$$ gives a relation between the constants $$A$$ and $$B$$ of $$x(\tau)$$ and the energy constant $$E$$: namely, $$1+A^2+B^2=E^2$$. As a result, if we shift $$\tau\to \tau+\delta$$ to eliminate $$A$$, we can integrate $$\dot{t}=f(\tau)$$ to obtain $$\tan(t-t_0)=\frac{\tan(\tau)}{\sqrt{1+B^2}}.$$ We see that the phase difference between $$t$$ and $$\tau$$ is never large and becomes zero after every $$\pi$$. And so $$x(t)$$ is also periodic, with a period of $$2\pi$$.
https://quantiki.org/wiki/operational-measures | Operational measures
There are entanglement measures which are defined by a certain task that should be achieved optimally by means of local operations and classical communication. They are therefore called operational measures. The most common operational measures are Entanglement of distillation and Entanglement cost.
Typically an operational measure involves:
- an input state
- a class of allowed operations by means of which the input state is to be transformed, namely the class of local operations and classical communication (LOCC)
- an output state
The typical task is to obtain the greatest number of output states from a given number of input states. The operational measure then equals the optimal rate at which output states can be obtained from input states via LOCC, per input state (in the asymptotic limit).
Formally one defines an operational measure as follows. Let $\rho$ and $\sigma$ be the input and output states. Consider a protocol, i.e. a sequence of LOCC operations $P = \{P_n\}$ such that $P_n(\rho^{\otimes n}) = \tau_n$ for each $n$. If $\lim_{n \rightarrow \infty} \|\tau_n - \sigma^{\otimes m}\| = 0$, we say that the protocol $P$ achieves the rate given by
$R_P(\rho \rightarrow \sigma):=\limsup_{n,m \rightarrow \infty} {m\over n}$.
Then the operational measure $E_{op}$ is defined as $E_{op}(\rho) = \sup_P R_P(\rho \rightarrow \sigma)$.
In place of the input and output states in the above definition, one can consider a set of input states and a set of output states, respectively. In that case the supremum in the definition of $E_{op}$ is taken also over the input and output states.
Moreover, the task may be modified so that the output state maximises a certain function (see Distillable key).
https://noa.gwlb.de/receive/cop_mods_00054796 | # Improvement of the performance of a capacitive relative pressure sensor: case of large deflections
Capacitive pressure sensors are widely used in a variety of applications and are built using a variety of processes, including 3D printing technology. The use of this technology can lead to a situation of large deflections, depending on the mechanical properties of the materials and the resolution of the machines used. This aspect is rarely reported in previous research works, which focus on improving the performance of these sensors in terms of linearity and sensitivity. This paper describes the realization of relative pressure sensors designed as two different structures; the first one is the classical design composed of a single capacitor, while the second one is composed of two capacitors, designed in such a way that they both vary according to the applied pressure but in opposite senses to each other. The purpose is to study in particular the performance of the second structure in the case of large deflections, in the context of educational use. Polylactic acid (PLA) is used as the manufacturing material to print the sensors by means of a printer based on fused deposition modeling, while conductive materials are used to provide the electrical conductivity required for the printed sensors. The manufactured sensors were tested under pressure in the range [0, 9] kPa. Compared to the performance obtained with the first structure, simulation and experimental results show that the second structure improves linearity and allows the sensitivity to be increased from a minimum of $9.98\times 10^{-2}$ pF/hPa to a minimum of $3.4\times 10^{-1}$ pF/hPa.
### Citation
Citation format:
Achouch, Samia / Regragui, Fakhita / Gharbi, Mourad: Improvement of the performance of a capacitive relative pressure sensor: case of large deflections. 2020. Copernicus Publications.
### Rights
Rights holder: Samia Achouch et al.
https://tex.stackexchange.com/questions/385433/getting-a-partially-framed-mdframed | # Getting a partially framed mdframed
Using mdframed we can opt out of any of the four bounding lines. But what I want is this: both vertical lines should be present, but instead of full top and bottom lines, I need a partial line at the top left and another partial line at the bottom right. The following is the code I have tried so far:
\documentclass{article}
\usepackage[framemethod=tikz]{mdframed}
\usepackage{lipsum}
\begin{document}
\mdfdefinestyle{myboxstyle}{%
rightline=true,
innerleftmargin=10,
innerrightmargin=10,
linecolor=gray,
outerlinewidth=1.0mm,
topline=false,
bottomline=false,
skipabove=\topsep,
skipbelow=\topsep
}
\begin{mdframed}[style=myboxstyle]
\lipsum[1]
\end{mdframed}
\end{document}
The above produces the following image where the lines at top and bottom are the ones I failed to get.
• You would be better off using tcolorbox. Is that an option? – TeXnician Aug 8 '17 at 17:18
• @TeXnician Why not? Drawing is my weaker side in LaTeX. – Masroor Aug 8 '17 at 17:19
Here's a tcolorbox version. Just adjust the 2cm to your liking.
\documentclass{article}
\usepackage[skins]{tcolorbox}
\newtcolorbox{mybox}{enhanced,sharp corners=all,colback=white,colframe=gray,toprule=0pt,bottomrule=0pt,leftrule=1pt,rightrule=1pt,overlay={
\draw[gray,line width=1pt] (frame.north west) -- ++(2cm,0pt);
\draw[gray,line width=1pt] (frame.south east) -- ++(-2cm,0pt);
}}
\usepackage{lipsum}
\begin{document}
\begin{mybox}
\lipsum[2]
\end{mybox}
\end{document}
Update: Title addition. To put more distance between title and text just use bottom=1pt (or more) in boxed title style.
\documentclass{article}
\usepackage[skins]{tcolorbox}
\newtcolorbox{mybox}[1]{enhanced,sharp corners=all,colback=white,colframe=gray,toprule=0pt,bottomrule=0pt,leftrule=1pt,rightrule=1pt,overlay={
\draw[gray,line width=1pt] (frame.north west) -- ++(2cm,0pt);
\draw[gray,line width=1pt] (frame.south east) -- ++(-2cm,0pt);
},attach boxed title to top left,boxed title style={frame hidden,interior hidden},title={\color{black}#1}}
\usepackage{lipsum}
\begin{document}
\begin{mybox}{Test}
\lipsum[2]
\end{mybox}
\end{document}
Update 2: If you have a boxed title which is of only one line height you could be interested in this hack:
\documentclass{article}
\usepackage[skins]{tcolorbox}
\newtcolorbox{mybox}[1]{enhanced,sharp corners=all,colback=white,colframe=gray,toprule=0pt,bottomrule=0pt,leftrule=1pt,rightrule=1pt,overlay={
\draw[gray,line width=1pt] (frame.north west) -- ++(2cm,0pt);
\draw[gray,line width=1pt] (frame.south east) -- ++(-2cm,0pt);
},attach boxed title to top left={yshift=-20pt},boxed title style={frame hidden,interior hidden},top=.75cm,title={\bfseries\color{black}#1}}
\usepackage{lipsum}
\begin{document}
\begin{mybox}{Test}
\lipsum[2]
\end{mybox}
\end{document}
Update 3: Here's the "correct" way to change the title height.
\documentclass{article}
\usepackage[skins]{tcolorbox}
\newtcolorbox{mybox}[1]{enhanced,sharp corners=all,colback=white,colframe=gray,toprule=0pt,bottomrule=0pt,leftrule=1pt,rightrule=1pt,overlay={
\draw[gray,line width=1pt] (frame.north west) -- ++(2cm,0pt);
\draw[gray,line width=1pt] (frame.south east) -- ++(-2cm,0pt);
},
coltitle=black,colbacktitle=white,titlerule=0pt,
title={\vskip5pt\bfseries#1}
}
\usepackage{lipsum}
\begin{document}
\begin{mybox}{This is a very long title which seems pretty ridiculous, but is used, although it is nonsense}
\lipsum[2]
\end{mybox}
\end{document}
• Thanks. Is it possible to put a title (without any background color) without messing with the line at top-left? – Masroor Aug 8 '17 at 17:48
• @Masroor Added an example. – TeXnician Aug 8 '17 at 17:56
• Perhaps I am stretching the limit, but what if I want to put the title below the line at top-left? That means the title will be inside the whole frame. – Masroor Aug 8 '17 at 18:02
• I found a quick hack of putting the title as the upper text, and the body text after \tcblower. I need to use segmentation empty, so that \tcbline is not drawn. – Masroor Aug 8 '17 at 18:29
• Or we can use lower separated=false in the above hack. But I will wait for the final words from you. – Masroor Aug 8 '17 at 18:33 | 2019-10-17 15:49:51 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8736040592193604, "perplexity": 3603.8626429106625}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986675409.61/warc/CC-MAIN-20191017145741-20191017173241-00271.warc.gz"} |
https://pythoninformer.com/generative-art/generativepy-tutorial/regularpolygon/ | # Regular polygons in generativepy
Martin McBride, 2022-06-05
Tags generativepy tutorial fill stroke polygon regular polygon
Categories generativepy generativepy tutorial
This tutorial shows how to create regular polygons in generativepy, using the RegularPolygon class.
## RegularPolygon example code
Here is the code to show some of the features of RegularPolygon:
from generativepy.drawing import make_image, setup
from generativepy.color import Color
from generativepy.geometry import RegularPolygon, Circle
import math

def draw(ctx, pixel_width, pixel_height, frame_no, frame_count):
    setup(ctx, pixel_width, pixel_height, background=Color(0.8))

    red = Color('crimson')
    green = Color('darkgreen')
    blue = Color('dodgerblue')

    RegularPolygon(ctx).of_centre_sides_radius((150, 150), 5, 100)\
        .fill(blue)\
        .stroke(green, 5)

    RegularPolygon(ctx).of_centre_sides_radius((400, 150), 6, 100)\
        .fill(blue)\
        .stroke(green, 5)

    RegularPolygon(ctx).of_centre_sides_radius((650, 150), 6, 100, math.pi/12)\
        .fill(blue)\
        .stroke(green, 5)

    p = RegularPolygon(ctx).of_centre_sides_radius((150, 400), 5, 100)\
        .fill(blue)\
        .stroke(green, 5)
    Circle(ctx).of_center_radius((150, 400), p.inner_radius).stroke(red, 5)

    p = RegularPolygon(ctx).of_centre_sides_radius((400, 400), 5, 100)\
        .fill(blue)\
        .stroke(green, 5)
    Circle(ctx).of_center_radius((400, 400), p.outer_radius).stroke(red, 5)

    p = RegularPolygon(ctx).of_centre_sides_radius((650, 400), 5, 100)\
        .fill(blue)\
        .stroke(green, 5)
    for v in p.vertices:
        Circle(ctx).of_center_radius(v, 10).fill(red)  # dot at each vertex

make_image("regularpolygons-tutorial.png", draw, 800, 550)
This code is available on github in tutorial/shapes/regularpolygons.py.
Here is the resulting image:
We will examine this code in the sections below.
## Drawing basic regular polygons
This section of the code draws the two polygons in the top left of the image:
RegularPolygon(ctx).of_centre_sides_radius((150, 150), 5, 100)\
    .fill(blue)\
    .stroke(green, 5)

RegularPolygon(ctx).of_centre_sides_radius((400, 150), 6, 100)\
    .fill(blue)\
    .stroke(green, 5)
This code creates two polygons with:
• A suitable centre point to position them on the page.
• 5 sides (pentagon) and 6 sides (hexagon).
• Radius of 100 to set the size.
• Filled in blue and outlined in green.
Notice that both shapes have horizontal bases.
## Drawing a rotated polygon
This code draws the rotated hexagon in the top right of the image:
RegularPolygon(ctx).of_centre_sides_radius((650, 150), 6, 100, math.pi/12)\
.fill(blue)\
.stroke(green, 5)
This is drawn in the same way as the previous hexagon, but with an angle of math.pi/12 radians (15 degrees). This rotates the shape 15 degrees clockwise about its centre, so the base is no longer horizontal.
## Drawing an inner circle
This code draws the same pentagon as before, but with an inner circle (bottom left of the main image):
p = RegularPolygon(ctx).of_centre_sides_radius((150, 400), 5, 100)\
    .fill(blue)\
    .stroke(green, 5)
Circle(ctx).of_center_radius((150, 400), p.inner_radius).stroke(red, 5)
When we draw the pentagon, we also store the RegularPolygon object as p.
We then draw a circle. The circle has the same centre as the polygon, and a radius equal to the inner radius of the polygon. This is obtained from p.inner_radius.
The circle just fits inside the polygon.
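For reference (a property of regular polygons, not specific to generativepy), the inner radius is the apothem: the outer radius times cos(π/sides). For this pentagon that is about 100 × cos 36° ≈ 80.9:

```python
import math

def inner_radius(radius, sides):
    # apothem of a regular polygon with the given outer (circumscribed) radius
    return radius * math.cos(math.pi / sides)

print(round(inner_radius(100, 5), 4))  # 80.9017
```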
## Drawing an outer circle
This code draws the same pentagon as before, but with an outer circle (bottom centre of the main image):
p = RegularPolygon(ctx).of_centre_sides_radius((400, 400), 5, 100)\
    .fill(blue)\
    .stroke(green, 5)
Circle(ctx).of_center_radius((400, 400), p.outer_radius).stroke(red, 5)
This time, the circle has the same centre as the polygon, and a radius equal to the outer radius of the polygon. This is obtained from p.outer_radius.
The polygon just fits inside the circle.
## Drawing the vertices
This code draws the same pentagon as before, but marks each corner with a dot (bottom right of the main image):
p = RegularPolygon(ctx).of_centre_sides_radius((650, 400), 5, 100)\
    .fill(blue)\
    .stroke(green, 5)
for v in p.vertices:
    Circle(ctx).of_center_radius(v, 10).fill(red)  # dot at each vertex
Then we use p.vertices to get the corners of the polygon. This returns a tuple of five coordinates (x, y) corresponding to the five vertices of the pentagon. | 2022-08-13 06:12:09 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2523816227912903, "perplexity": 4353.818025604299}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882571909.51/warc/CC-MAIN-20220813051311-20220813081311-00389.warc.gz"} |
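If you want vertex positions without the library, the corners of a regular polygon are just equally spaced points on its outer circle (the starting angle below is arbitrary; generativepy itself orients the polygon with a horizontal base):

```python
import math

def regular_polygon_vertices(centre, sides, radius, start_angle=0.0):
    cx, cy = centre
    return [(cx + radius * math.cos(start_angle + 2 * math.pi * k / sides),
             cy + radius * math.sin(start_angle + 2 * math.pi * k / sides))
            for k in range(sides)]

verts = regular_polygon_vertices((650, 400), 5, 100)
# every vertex lies on the outer circle, and all side lengths agree
dists = [math.dist(v, (650, 400)) for v in verts]
side_lengths = [math.dist(verts[i], verts[(i + 1) % 5]) for i in range(5)]
print(all(abs(d - 100) < 1e-9 for d in dists),
      max(side_lengths) - min(side_lengths) < 1e-9)  # True True
```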
https://stats.stackexchange.com/questions/58357/econometrics-multiple-regression-fisher-and-student-statistics | # Econometrics : Multiple regression Fisher and Student statistics
I am trying to estimate a production function called Cobb-Douglas.
For the period 1958 to 1972 and for the agricultural sector in Taiwan, we observed:
An: Year of observation
Y: Real production, in millions of New Taiwan Dollars (NDT)
L: Days of labour, in millions
K: Real capital, in millions of NDT
We consider the following model M1: lm(formula = LY ~ LL + LK) for the years 1958, ..., 1972, where the variables have been log-transformed.
1. I try to test H0: $\beta_{1}+\beta_{2}=1$ versus H1: $\beta_{1}+\beta_{2}\neq 1$ with a Fisher statistic (bilateral test)
2. I try to test the same hypothesis with a Student statistic this time.
3. I try to test (unilateral test) H0: $\beta_{1}+\beta_{2}\leq 1$ versus H1: $\beta_{1}+\beta_{2}>1$ with a Student statistic
I'm not sure if this is what you're looking for, but you can use the delta method to approximate the standard error of $\beta_{1}+\beta_{2}$:
After the regression command in R, you can type vcov(model) which gives you the variance-covariance matrix of the coefficients. The values on the diagonal of the variance-covariance matrix are the variances of the respective coefficients while the values off-diagonal represent the covariances between the corresponding coefficients.
With that you can calculate the confidence interval and the $t$-value (for a Wald-test):
$$t_{\beta_{1}+\beta_{2}}=\frac{(\beta_{1}+\beta_{2}) - 1}{\operatorname{SE}(\beta_{1}+\beta_{2})}$$
And from that you can use the $t$-distribution to calculate a two- or one-sided $p$-value with 2*pt(-abs(t), df=n-1) for a two-sided $p$-value and pt(-abs(t), df=n-1)($\leq 1$) and 1-pt(-abs(t), df=n-1) ($\geq 1$) for the one-sided $p$-values. Please note that the $t$-distribution and the $F$-distribution are closely related: the square of the $t$-distribution with df degrees of freedom is the $F$-distribution with 1 numerator degree of freedom and df denominator degrees of freedom. It doesn't matter if you use the $t$-value and the $t$-distribution or the squared $t$-value and the $F$-distribution (pf(t^2, df1=1, df2=n-1)) to calculate the $p$-values. So I don't know what the difference between 1) and 2) is.
EDIT:
The R-code for the steps explained above are (it's not the fastest way but I concentrated on legibility):
1) Calculate the variance-covariance-matrix with
vcov.mat <- vcov(cobbdoug)
2) Calculate the approximate standard error of $\beta_{1}+\beta_{2}$:
se.b1b2 <- sqrt(vcov.mat[2,2] + vcov.mat[3,3] + 2*vcov.mat[2,3])
3) Calculate the $t$-value:
t.val <- ((coef(cobbdoug)[2] + coef(cobbdoug)[3]) - 1)/se.b1b2
4) Calculate the $p$-value for question 1):
2*pt(-abs(t.val), df=cobbdoug$df.residual) # assuming that you have 15 years and 3 coefficients, so df = 15 - 3 = 12
5) Calculate the $p$-value for question 3):
pt(-abs(t.val), df=cobbdoug$df.residual)
• Hi, I really thank you lots for your complete answer. But what I am looking for are the R codes. My starting R codes for the Cobb-Douglas function is : # Cobb-Douglas model LY=log(Y) LL=log(L) LK=log(K) mco=lm(Y~L+K) summary(mco) cobbdoug=lm(LY~LL+LK) What I would like to know is how to write the R codes for my 3 questions from my first mail. I already thanks you lots for your answers Looking forward to reading you – varin sacha May 7 '13 at 13:14
• I forgot to precise that I am "starting" with R. – varin sacha May 7 '13 at 13:28
• Do you have a question? – IMA May 7 '13 at 13:30
• Hi,Really thank you loads for your quick and very complete answers. Is it normal that when I write the codes in my R console I don't obtain any results except for the 2 p-value ? > vcov.mat <- vcov(cobbdoug) > > > se.b1b2 <- sqrt(vcov.mat[2,2] + vcov.mat[3,3] + 2*vcov.mat[2,3]) > > > t.val <- ((coef(cobbdoug)[2] + coef(cobbdoug)[3]) - 1)/se.b1b2 > > > 2*pt(-abs(t.val), df=12) LL 0.05915371 > > > pt(-abs(t.val), df=12) LL 0.02957686 – varin sacha May 7 '13 at 14:12
• Yes, that's normal. The other statements are assignements which means that the values are assigned to variable names with <-. I recommend that you consult an introduction to R to get familiar with R. – COOLSerdash May 7 '13 at 14:46 | 2019-06-19 13:08:45 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5810537338256836, "perplexity": 1857.2638205640947}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560627998986.11/warc/CC-MAIN-20190619123854-20190619145854-00183.warc.gz"} |
http://jwbales.us/precal/part9/part9.2.html | ## The Geometric Definition
Given two points $$F_1$$ and $$F_2$$ in the plane lying a distance $$2c$$ apart and given a distance $$2a > 2c$$, the set of all points $$P$$ whose distances to $$F_1$$ and $$F_2$$ respectively sum to $$2a$$ is an ellipse. $$F_1$$ and $$F_2$$ are called the foci of the ellipse.
## Exercise 9.2.1
Get a compass, the type for drawing circles, a straight edge and a blank sheet of paper.
On the sheet of paper, mark two points $$F_1$$ and $$F_2$$ as the foci and draw a dotted line through them.
Construct a dotted perpendicular line to $$F_1F_2$$ through the midpoint of that segment.
Label the midpoint as the center.
Place the point of the compass at the center and open the compass to a fixed radius greater than the distance from the center to either focus.
Mark points $$V_1$$ and $$V_2$$ where the pencil of the compass crosses the line that passes through $$F_1$$ and $$F_2$$.
Without changing the radius on the compass, move the point of the compass to $$F_1$$ and mark points $$V_3$$ and $$V_4$$ on the perpendicular that you constructed earlier.
Then the points $$V_1$$, $$V_2$$, $$V_3$$, $$V_4$$ are the vertices of an ellipse.
Draw a closed curve through the four points to represent the ellipse.
The line through the two foci is called the major axis, and the line through the center and perpendicular to the major axis is called the minor axis.
See Solution
## Exercise 9.2.2
Using the ellipse that you drew in Exercise 9.2.1, denote the distance from the center to $$V_1$$ and $$V_2$$ as $$a$$. Find the sum of the distances from $$V_1$$ to $$F_1$$ and $$V_1$$ to $$F_2$$ in terms of $$a$$. Do the same for the sum of the distances from $$V_2$$ to $$F_1$$ and $$V_2$$ to $$F_2$$. Recall how the points $$V_3$$ and $$V_4$$ were constructed. Find the sum of the distances from $$V_3$$ to $$F_1$$ and $$V_3$$ to $$F_2$$ in terms of $$a$$. Do the same for the sum of the distances from $$V_4$$ to $$F_1$$ and $$V_4$$ to $$F_2$$.
See Solution
## Exercise 9.2.3
Let $$a$$ denote the distance from the center to the major vertices $$V_1$$ or $$V_2$$ and let $$c$$ denote the distance from the center to the foci $$F_1$$ or $$F_2$$.
Let $$b$$ denote the distance from the center to the minor vertices $$V_3$$ or $$V_4$$.
Find an equation expressing the relationship between $$a, b$$ and $$c$$.
See Solution
## Horizontal and Vertical Ellipses
We will consider only ellipses with either a vertical or a horizontal major axis.
Let us begin by considering a horizontal ellipse with center at the origin and foci $$(\pm c,0 )$$ on the $$x$$-axis.
Let $$( \pm a, 0 )$$ denote the major vertices and $$( 0, \pm b )$$ the minor vertices.
The equation of this ‘horizontal’ ellipse is: $$\dfrac{x^2}{a^2}+\dfrac{y^2}{b^2}=1$$
If this ellipse is shifted $$h$$ units horizontally and $$k$$ units vertically, so that the center is at $$(h,k)$$ rather than $$(0,0)$$, the resulting ellipse will have equation: $$\dfrac{(x-h)^2}{a^2}+\dfrac{(y-k)^2}{b^2}=1$$
In either case $$c^2 = a^2 – b^2$$.
Next, consider an ellipse with a vertical major axis centered at the origin. Its major vertices are $$V_1=(0,a)$$ and $$V_2=(0,-a)$$ on the $$y$$-axis and its minor vertices are $$V_3=(b,0)$$ and $$V_4=(-b,0)$$ on the $$x$$-axis. Its foci are $$F_1=(0,c)$$ and $$F_2=(0,-c)$$ where $$c^2=a^2-b^2$$. The equation of this vertical ellipse is $$\dfrac{x^2}{b^2}+\dfrac{y^2}{a^2}=1$$.
Notice that in both cases $$a$$ is always greater than $$b$$. This allows one to determine at a glance from the equation whether the ellipse is horizontal or vertical: the graph of the equation $$\dfrac{x^2}{1}+\dfrac{y^2}{4}=1$$ is a vertical ellipse because the larger denominator belongs to the $$y^2$$ term.
## Exercise 9.2.4
Find the equation and sketch the graph of the ellipse with center ( 0, 0 ), focus ( -3, 0 ) and minor vertex ( 0, 4 ).
See Solution
## Exercise 9.2.5
Find the equation and sketch the graph of the ellipse with center $$(-2, 3)$$, focus $$(-3, 3)$$ and major vertex $$(5, 3)$$.
See Solution
## Exercise 9.2.6
Find the equation and sketch the graph of the ellipse with center $$( 0, 0 )$$, focus $$( 0, 2 )$$ and major vertex $$( 0, -4 )$$.
See Solution
## Exercise 9.2.7
Find the equation and sketch the graph of the ellipse with center $$( 1, 3 )$$, major vertex $$( 1, 0 )$$ and minor vertex $$( 2, 3 )$$.
See Solution
## Completing the square and Standard Form
Be able to complete the square to put the equation of an ellipse in standard form. Be able to find the center, foci and vertices of an ellipse, given its equation.
For example, consider the equation:
$$x^2 + 9 y^2 - 2 x + 36 y + 28 = 0$$
To put this in standard form, we first separate the variables
$$x^2 - 2 x + 9 y^2 + 36 y = - 28$$
Then we complete the squares on the two variables:
$$(x^2 - 2 x + 1) + 9 (y^2 + 4 y + 4) = - 28 + 1 + 9 (4)$$
$$(x - 1)^2 + 9 (y + 2)^2 = 9$$
Then we divide by $$9$$ to get
$$\dfrac{(x-1)^2}{9}+\dfrac{(y+2)^2}{1}=1$$
Thus, the ellipse is horizontal, $$a = 3$$, $$b = 1$$, and $$c=\sqrt{9-1}=2\sqrt{2}$$.
The center is $$( 1, -2 )$$, the major vertices are $$( 1 \pm 3, -2 )$$ or $$( 4, -2 )$$ and $$( -2, -2 )$$. The minor vertices are $$( 1, -2 \pm 1 )$$ or $$( 1, -3 )$$ and $$( 1, -1 )$$. The ellipse may be sketched through these four vertices. The foci are $$(1\pm2\sqrt{2},-2)$$.
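As a numeric sanity check (a Python sketch, not part of the original lesson), points generated from the standard form above should satisfy the original equation, and their distances to the two foci should sum to $$2a = 6$$:

```python
import math

a, b = 3.0, 1.0                    # from the standard form above
c = math.sqrt(a * a - b * b)       # c = 2*sqrt(2)
cx, cy = 1.0, -2.0                 # center
foci = [(cx - c, cy), (cx + c, cy)]

for k in range(12):
    t = 2 * math.pi * k / 12
    x = cx + a * math.cos(t)       # parametrize the ellipse
    y = cy + b * math.sin(t)
    # the point satisfies the original equation x^2 + 9y^2 - 2x + 36y + 28 = 0
    assert abs(x * x + 9 * y * y - 2 * x + 36 * y + 28) < 1e-9
    # geometric definition: distances to the foci sum to 2a
    assert abs(sum(math.dist((x, y), f) for f in foci) - 2 * a) < 1e-9
```

Every sample point passes both checks, confirming that the completed-square form and the geometric definition describe the same curve.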
## Exercise 9.2.8
Find the center, vertices and foci and sketch the graph of the ellipse with equation
$$25 x^2 + 9 y^2 – 50 x - 18 y - 191 = 0$$
See Solution | 2018-09-20 20:17:57 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9024575352668762, "perplexity": 199.23786465714326}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-39/segments/1537267156613.38/warc/CC-MAIN-20180920195131-20180920215531-00316.warc.gz"} |
https://hal-cea.archives-ouvertes.fr/cea-01180196 | # Habemus Superstratum! A constructive proof of the existence of superstrata
Abstract : We construct the first example of a superstratum: a class of smooth horizonless supergravity solutions that are parameterized by arbitrary continuous functions of (at least) two variables and have the same charges as the supersymmetric D1-D5-P black hole. We work in Type IIB string theory on $T^4$ or $K^3$ and our solutions involve a subset of fields that can be described by a six-dimensional supergravity with two tensor multiplets. The solutions can thus be constructed using a linear structure, and we give an explicit recipe to start from a superposition of modes specified by an arbitrary function of two variables and impose regularity to obtain the full horizonless solutions in closed form. We also give the precise CFT description of these solutions and show that they are not dual to descendants of chiral primaries. They are thus much more general than all the known solutions whose CFT dual is precisely understood. Hence our construction represents a substantial step toward the ultimate goal of constructing the fully generic superstratum that can account for a finite fraction of the entropy of the three-charge black hole in the regime of parameters where the classical black hole solution exists.
Document type :
Journal articles
Domain :
Cited literature [73 references]
https://hal-cea.archives-ouvertes.fr/cea-01180196
Contributor: Emmanuelle de Laborderie
Submitted on : Monday, February 11, 2019 - 4:19:09 PM
Last modification on : Wednesday, October 20, 2021 - 12:09:01 AM
Long-term archiving on: Sunday, May 12, 2019 - 2:48:34 PM
### File
Bena3.pdf
Files produced by the author(s)
### Citation
Iosif Bena, Stefano Giusto, Rodolfo Russo, Masaki Shigemori, Nicholas P. Warner. Habemus Superstratum! A constructive proof of the existence of superstrata. Journal of High Energy Physics, Springer Verlag (Germany), 2015, 05, pp.110. ⟨10.1007/JHEP05(2015)110⟩. ⟨cea-01180196⟩
https://en.wikipedia.org/wiki/Malloc | # C dynamic memory allocation
C dynamic memory allocation refers to performing manual memory management for dynamic memory allocation in the C programming language via a group of functions in the C standard library, namely malloc, realloc, calloc and free.[1][2][3]
The C++ programming language includes these functions for compatibility with C; however, the operators new and delete provide similar functionality and are recommended by that language's authors.[4]
Many different implementations of the actual memory allocation mechanism, used by malloc, are available. Their performance varies in both execution time and required memory.
## Rationale
The C programming language manages memory statically, automatically, or dynamically. Static-duration variables are allocated in main memory, usually along with the executable code of the program, and persist for the lifetime of the program; automatic-duration variables are allocated on the stack and come and go as functions are called and return. For static-duration and automatic-duration variables, the size of the allocation must be compile-time constant (except for the case of variable-length automatic arrays[5]). If the required size is not known until run-time (for example, if data of arbitrary size is being read from the user or from a disk file), then using fixed-size data objects is inadequate.
The lifetime of allocated memory can also cause concern. Neither static- nor automatic-duration memory is adequate for all situations. Automatic-allocated data cannot persist across multiple function calls, while static data persists for the life of the program whether it is needed or not. In many situations the programmer requires greater flexibility in managing the lifetime of allocated memory.
These limitations are avoided by using dynamic memory allocation in which memory is more explicitly (but more flexibly) managed, typically, by allocating it from the free store (informally called the "heap"), an area of memory structured for this purpose. In C, the library function `malloc` is used to allocate a block of memory on the heap. The program accesses this block of memory via a pointer that `malloc` returns. When the memory is no longer needed, the pointer is passed to `free` which deallocates the memory so that it can be used for other purposes.
Some platforms provide library calls which allow run-time dynamic allocation from the C stack rather than the heap (e.g. `alloca()`[6]). This memory is automatically freed when the calling function ends.
## Overview of functions
The C dynamic memory allocation functions are defined in `stdlib.h` header (`cstdlib` header in C++).[1]
| Function | Description |
| --- | --- |
| `malloc` | allocates the specified number of bytes |
| `realloc` | increases or decreases the size of the specified block of memory, reallocating it elsewhere if needed |
| `calloc` | allocates the specified number of bytes and initializes them to zero |
| `free` | releases the specified block of memory back to the system |
### Differences between `malloc()` and `calloc()`
• `malloc()` takes a single argument (the amount of memory to allocate in bytes), while `calloc()` needs two arguments (the number of variables to allocate in memory, and the size in bytes of a single variable).
• `malloc()` does not initialize the memory allocated, while `calloc()` guarantees that all bytes of the allocated memory block have been initialized to 0.
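The zero-initialization guarantee of `calloc()` can be observed directly. A small sketch calling the C library through Python's `ctypes` (this assumes a Unix-like system, where `CDLL(None)` exposes libc's symbols through the main program):

```python
import ctypes

libc = ctypes.CDLL(None)         # assumption: Unix-like system exposing libc
libc.calloc.restype = ctypes.c_void_p
libc.calloc.argtypes = [ctypes.c_size_t, ctypes.c_size_t]
libc.free.argtypes = [ctypes.c_void_p]

n = 16
p = libc.calloc(n, 1)            # calloc(nmemb, size) -> zero-initialized block
assert p is not None             # always check for allocation failure
data = ctypes.string_at(p, n)    # copy the n bytes out before freeing
libc.free(p)

assert data == b"\x00" * n       # every byte is guaranteed to be zero
```

A block obtained from `malloc` would offer no such guarantee: its contents are indeterminate, so the equivalent assertion could fail.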
## Usage example
Creating an array of ten integers with automatic scope is straightforward in C:
```int array[10];
```
However, the size of the array is fixed at compile time. If one wishes to allocate a similar array dynamically, the following code can be used:
```int * array = malloc(10 * sizeof(int));
```
This computes the number of bytes that ten integers occupy in memory, then requests that many bytes from `malloc` and assigns the result to a pointer named `array` (due to C syntax, pointers and arrays can be used interchangeably in some situations).
Because `malloc` might not be able to service the request, it might return a null pointer and it is good programming practice to check for this:
```int * array = malloc(10 * sizeof(int));
if (array == NULL) {
fprintf(stderr, "malloc failed\n");
return(-1);
}
```
When the program no longer needs the dynamic array, it must eventually call `free` to return the memory it occupies to the free store:
```free(array);
```
The memory set aside by `malloc` is not initialized and may contain cruft: the remnants of previously used and discarded data. After allocation with `malloc`, elements of the array are uninitialized variables. The command `calloc` will return an allocation that has already been cleared:
```int * array = calloc(10, sizeof (int));
```
With realloc we can resize the amount of memory a pointer points to. For example, if we have a pointer acting as an array of size `n` and we want to change it to an array of size `m`, we can use realloc.
```int * arr = malloc(2 * sizeof(int));
arr[0] = 1;
arr[1] = 2;
arr = realloc(arr, 3 * sizeof(int));
arr[2] = 3;
```
Note that realloc must be assumed to have changed the base address of the block (i.e. if it has failed to extend the size of the original block, and has therefore allocated a new larger block elsewhere and copied the old contents into it). Therefore, any pointers to addresses within the original block are also no longer valid.
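For the same reason, assigning `realloc`'s result straight back to the original pointer (as in `arr = realloc(arr, ...)` above) leaks the block on failure. The safe idiom is to keep the old pointer until `realloc` is known to have succeeded; in C the shape is `tmp = realloc(arr, n); if (tmp) arr = tmp;`. Sketched here by calling libc through Python's `ctypes` (assumes a Unix-like system where `CDLL(None)` exposes libc):

```python
import ctypes

libc = ctypes.CDLL(None)                 # assumption: Unix-like system exposing libc
libc.malloc.restype = ctypes.c_void_p
libc.malloc.argtypes = [ctypes.c_size_t]
libc.realloc.restype = ctypes.c_void_p
libc.realloc.argtypes = [ctypes.c_void_p, ctypes.c_size_t]
libc.free.argtypes = [ctypes.c_void_p]

arr = libc.malloc(8)
assert arr is not None
ctypes.memmove(arr, b"12345678", 8)      # fill the original block

# Safe idiom: never overwrite arr before checking realloc's result.
tmp = libc.realloc(arr, 1024)            # may move the block elsewhere
if tmp is None:
    libc.free(arr)                       # failure: the original block is still valid and still ours to free
    raise MemoryError("realloc failed")
arr = tmp                                # success: any old pointers into the block are now invalid

preserved = ctypes.string_at(arr, 8)     # realloc preserved the old contents
libc.free(arr)
assert preserved == b"12345678"
```

On success the old contents are carried over into the (possibly relocated) block; on failure the original block remains valid and must still be freed.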
## Type safety
`malloc` returns a void pointer (`void *`), which indicates that it is a pointer to a region of unknown data type. The use of casting is required in C++ due to the strong type system, whereas this is not the case in C. The lack of a specific pointer type returned from `malloc` is type-unsafe behaviour according to some programmers: `malloc` allocates based on byte count but not on type. This is different from the C++ new operator that returns a pointer whose type relies on the operand. (See C Type Safety.)
One may "cast" (see type conversion) this pointer to a specific type:
```int * ptr;
ptr = malloc(10 * sizeof(int)); /* without a cast */
ptr = (int *)malloc(10 * sizeof(int)); /* with a cast */
```
• Including the cast may allow a C program or function to compile as C++.
• The cast allows for pre-1989 versions of `malloc` that originally returned a `char *`.[7]
• Casting can help the developer identify inconsistencies in type sizing should the destination pointer type change, particularly if the pointer is declared far from the `malloc()` call (although modern compilers and static analysers can warn on such behaviour without requiring the cast[8]).
• Under the C standard, the cast is redundant.
• Adding the cast may mask failure to include the header `stdlib.h`, in which the function prototype for `malloc` is found.[7][9] In the absence of a prototype for `malloc`, the C90 standard requires that the C compiler assume `malloc` returns an `int`. If there is no cast, C90 requires a diagnostic when this integer is assigned to the pointer; however, with the cast, this diagnostic would not be produced, hiding a bug. On certain architectures and data models (such as LP64 on 64-bit systems, where `long` and pointers are 64-bit and `int` is 32-bit), this error can actually result in undefined behaviour, as the implicitly declared `malloc` returns a 32-bit value whereas the actually defined function returns a 64-bit value. Depending on calling conventions and memory layout, this may result in stack smashing. This issue is less likely to go unnoticed in modern compilers, as C99 does not permit implicit declarations, so the compiler must produce a diagnostic even if it does assume `int` return.
• If the type of the pointer is changed at its declaration, one may also need to change all lines where `malloc` is called and cast.
## Common errors
The improper use of dynamic memory allocation can frequently be a source of bugs. These can include security bugs or program crashes, most often due to segmentation faults.
Most common errors are as follows:[10]
Not checking for allocation failures
Memory allocation is not guaranteed to succeed, and may instead return a null pointer. Using the returned value, without checking if the allocation is successful, invokes undefined behavior. This usually leads to crash (due to the resulting segmentation fault on the null pointer dereference), but there is no guarantee that a crash will happen so relying on that can also lead to problems.
Memory leaks
Failure to deallocate memory using `free` leads to buildup of non-reusable memory, which is no longer used by the program. This wastes memory resources and can lead to allocation failures when these resources are exhausted.
Logical errors
All allocations must follow the same pattern: allocation using `malloc`, usage to store data, deallocation using `free`. Failures to adhere to this pattern, such as memory usage after a call to `free` (dangling pointer) or before a call to `malloc` (wild pointer), calling `free` twice ("double free"), etc., usually causes a segmentation fault and results in a crash of the program. These errors can be transient and hard to debug – for example, freed memory is usually not immediately reclaimed by the OS, and thus dangling pointers may persist for a while and appear to work.
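Two of these errors have a cheap partial defense: set a pointer to NULL immediately after freeing it, and rely on the standard's guarantee that `free(NULL)` is a no-op. A `ctypes` sketch of that pattern (assumes a Unix-like system where `CDLL(None)` exposes libc):

```python
import ctypes

libc = ctypes.CDLL(None)   # assumption: Unix-like system exposing libc
libc.malloc.restype = ctypes.c_void_p
libc.malloc.argtypes = [ctypes.c_size_t]
libc.free.argtypes = [ctypes.c_void_p]

p = libc.malloc(64)
assert p is not None       # always check the allocation
libc.free(p)               # p now holds a stale (dangling) address
p = None                   # ctypes passes None as NULL; dropping the value prevents accidental reuse
libc.free(p)               # free(NULL) is defined to do nothing, so no double free occurs
```

The same discipline in C (`free(p); p = NULL;`) turns a would-be double free or use-after-free into a harmless no-op or an immediately visible null dereference.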
## Implementations
The implementation of memory management depends greatly upon operating system and architecture. Some operating systems supply an allocator for malloc, while others supply functions to control certain regions of data. The same dynamic memory allocator is often used to implement both `malloc` and the operator `new` in C++.[11]
### Heap-based
Implementation of the allocator is commonly done using the heap, or data segment. The allocator will usually expand and contract the heap to fulfill allocation requests.
The heap method suffers from a few inherent flaws, stemming entirely from fragmentation. Like any method of memory allocation, the heap will become fragmented; that is, there will be sections of used and unused memory in the allocated space on the heap. A good allocator will attempt to find an unused area of already allocated memory to use before resorting to expanding the heap. The major problem with this method is that the heap has only two significant attributes: base, or the beginning of the heap in virtual memory space; and length, or its size. The heap requires enough system memory to fill its entire length, and its base can never change. Thus, any large areas of unused memory are wasted. The heap can get "stuck" in this position if a small used segment exists at the end of the heap, which could waste any amount of address space. On lazy memory allocation schemes, such as those often found in the Linux operating system, a large heap does not necessarily reserve the equivalent system memory; it will only do so at the first write time (reads of non-mapped memory pages return zero). The granularity of this depends on page size.
### dlmalloc
Doug Lea has developed dlmalloc ("Doug Lea's Malloc") as a general-purpose allocator, starting in 1987. The GNU C library (glibc) uses ptmalloc,[12] an allocator based on dlmalloc.[13]
Memory on the heap is allocated as "chunks", an 8-byte aligned data structure which contains a header, and usable memory. Allocated memory contains an 8 or 16 byte overhead for the size of the chunk and usage flags. Unallocated chunks also store pointers to other free chunks in the usable space area, making the minimum chunk size 24 bytes.[13]
Unallocated memory is grouped into "bins" of similar sizes, implemented by using a double-linked list of chunks (with pointers stored in the unallocated space inside the chunk).[13]
For requests below 256 bytes (a "smallbin" request), a simple power-of-two best-fit allocator is used. If there are no free blocks in that bin, a block from the next highest bin is split in two.
For requests of 256 bytes or above but below the mmap threshold, recent versions of dlmalloc use an in-place bitwise trie algorithm. If there is no free space left to satisfy the request, dlmalloc tries to increase the size of the heap, usually via the brk system call.
For requests above the mmap threshold (a "largebin" request), the memory is always allocated using the mmap system call. The threshold is usually 256 KB.[14] The mmap method averts problems with huge buffers trapping a small allocation at the end after their expiration, but always allocates an entire page of memory, which on many architectures is 4096 bytes in size.[15]
### FreeBSD's and NetBSD's jemalloc
Since FreeBSD 7.0 and NetBSD 5.0, the old `malloc` implementation (phkmalloc) was replaced by jemalloc, written by Jason Evans. The main reason for this was a lack of scalability of phkmalloc in terms of multithreading. In order to avoid lock contention, jemalloc uses separate "arenas" for each CPU. Experiments measuring number of allocations per second in multithreading application have shown that this makes it scale linearly with the number of threads, while for both phkmalloc and dlmalloc performance was inversely proportional to the number of threads.[16]
### OpenBSD's malloc
OpenBSD's implementation of the `malloc` function makes use of mmap. For requests greater in size than one page, the entire allocation is retrieved using `mmap`; smaller sizes are assigned from memory pools maintained by `malloc` within a number of "bucket pages," also allocated with `mmap`.[17][better source needed] On a call to `free`, memory is released and unmapped from the process address space using `munmap`. This system is designed to improve security by taking advantage of the address space layout randomization and gap page features implemented as part of OpenBSD's `mmap` system call, and to detect use-after-free bugs—as a large memory allocation is completely unmapped after it is freed, further use causes a segmentation fault and termination of the program.
### Hoard malloc
Hoard is an allocator whose goal is scalable memory allocation performance. Like OpenBSD's allocator, Hoard uses `mmap` exclusively, but manages memory in chunks of 64 kilobytes called superblocks. Hoard's heap is logically divided into a single global heap and a number of per-processor heaps. In addition, there is a thread-local cache that can hold a limited number of superblocks. By allocating only from superblocks on the local per-thread or per-processor heap, and moving mostly-empty superblocks to the global heap so they can be reused by other processors, Hoard keeps fragmentation low while achieving near linear scalability with the number of threads.[18]
### TCMalloc

Every thread has local storage for small allocations. For large allocations mmap or sbrk can be used. TCMalloc, a malloc developed by Google,[19] has garbage-collection for local storage of dead threads. TCMalloc is considered to be more than twice as fast as glibc's ptmalloc for multithreaded programs.[20][21]
### In-kernel
Operating system kernels need to allocate memory just as application programs do. The implementation of `malloc` within a kernel often differs significantly from the implementations used by C libraries, however. For example, memory buffers might need to conform to special restrictions imposed by DMA, or the memory allocation function might be called from interrupt context.[22] This necessitates a `malloc` implementation tightly integrated with the virtual memory subsystem of the operating system kernel.
## Overriding malloc
Because `malloc` and its relatives can have a strong impact on the performance of a program, it is not uncommon to override the functions for a specific application by custom implementations that are optimized for application's allocation patterns. The C standard provides no way of doing this, but operating systems have found various ways to do this by exploiting dynamic linking. One way is to simply link in a different library to override the symbols. Another, employed by Unix System V.3, is to make `malloc` and `free` function pointers that an application can reset to custom functions.[23]
## Allocation size limits
The largest possible memory block `malloc` can allocate depends on the host system, particularly the size of physical memory and the operating system implementation. Theoretically, the largest number should be the maximum value that can be held in a `size_t` type, which is an implementation-dependent unsigned integer representing the size of an area of memory. In the C99 standard and later, it is available as the `SIZE_MAX` constant from `<stdint.h>`. Although not guaranteed by ISO C, it is usually 2^(`CHAR_BIT` × `sizeof(size_t)`) − 1.
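That formula can be checked directly, and it also motivates the overflow check that a caller computing `n * sizeof(T)` (or an allocator implementing `calloc`) must perform before allocating. A Python sketch (the helper name `checked_mul` is illustrative, not a C library function; 8-bit bytes are assumed):

```python
import ctypes

# SIZE_MAX reconstructed from the formula above: 2^(CHAR_BIT * sizeof(size_t)) - 1
CHAR_BIT = 8                       # assumption: 8-bit bytes (true on all mainstream platforms)
SIZE_MAX = 2 ** (CHAR_BIT * ctypes.sizeof(ctypes.c_size_t)) - 1
assert SIZE_MAX == ctypes.c_size_t(-1).value   # -1 wraps around to the maximum size_t value

def checked_mul(nmemb, size):
    """Overflow-checked nmemb * size, as needed before malloc(nmemb * size)."""
    if size != 0 and nmemb > SIZE_MAX // size:
        return None                # the product would not fit in size_t
    return nmemb * size

assert checked_mul(10, ctypes.sizeof(ctypes.c_int)) == 10 * ctypes.sizeof(ctypes.c_int)
assert checked_mul(SIZE_MAX, 2) is None        # would overflow: refuse instead of wrapping
```

In C, silently wrapping here would make `malloc` succeed with a far smaller block than intended, a classic source of heap overflows.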
## Extensions and alternatives
The C library implementations shipping with various operating systems and compilers may come with alternatives and extensions to the standard `malloc` package. Notable among these is:
• `alloca`, which allocates a requested number of bytes on the call stack. No corresponding deallocation function exists, as typically the memory is deallocated as soon as the calling function returns. `alloca` was present on Unix systems as early as 32/V (1978), but its use can be problematic in some (e.g., embedded) contexts.[24] While supported by many compilers, it is not part of the ANSI-C standard and therefore may not always be portable. It may also cause minor performance problems: it leads to variable-size stack frames, so that both stack and frame pointers need to be managed (with fixed-size stack frames, one of these is redundant).[25] Larger allocations may also increase the risk of undefined behavior due to a stack overflow.[26] C99 offered variable-length arrays as an alternative stack allocation mechanism - however, this feature was relegated to optional in the later C11 standard.
• POSIX defines a function `posix_memalign` that allocates memory with caller-specified alignment. Its allocations are deallocated with `free`.[27]
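The `posix_memalign` contract (return 0 on success, fill in a suitably aligned pointer) can be exercised through `ctypes` as well; a sketch, assuming a POSIX system whose libc exposes `posix_memalign` via `CDLL(None)`:

```python
import ctypes

libc = ctypes.CDLL(None)   # assumption: POSIX system providing posix_memalign in libc
libc.posix_memalign.restype = ctypes.c_int
libc.posix_memalign.argtypes = [ctypes.POINTER(ctypes.c_void_p),
                                ctypes.c_size_t, ctypes.c_size_t]
libc.free.argtypes = [ctypes.c_void_p]

buf = ctypes.c_void_p()
# alignment must be a power of two and a multiple of sizeof(void *)
rc = libc.posix_memalign(ctypes.byref(buf), 64, 1024)
assert rc == 0                 # unlike malloc, failure is reported via the return code
assert buf.value % 64 == 0     # the returned block honours the requested alignment
libc.free(buf)                 # aligned blocks are released with plain free
```

Note the error-reporting convention differs from `malloc`: `posix_memalign` leaves `errno` alone and returns the error code directly.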
## References
1. ^ a b ISO/IEC 9899:1999 specification (PDF). p. 313, § 7.20.3 "Memory management functions".
2. ^ Godse, Atul P.; Godse, Deepali A. (2008). Advanced C Programming. p. 6-28: Technical Publications. p. 400. ISBN 978-81-8431-496-0.
3. ^ Summit, Steve. "C Programming Notes - Chapter 11: Memory Allocation". Retrieved 30 October 2011.
4. ^ Stroustrup, Bjarne (2008). Programming: Principles and Practice Using C++. 1009, §27.4 Free store: Addison Wesley. p. 1236. ISBN 978-0-321-54372-1.
5. ^ "gcc manual". gnu.org. Retrieved 14 December 2008.
6. ^ "alloca". Man.freebsd.org. 5 September 2006. Retrieved 18 September 2011.
7. ^ a b "Casting malloc". Cprogramming.com. Retrieved 9 March 2007.
8. ^ "clang: lib/StaticAnalyzer/Checkers/MallocSizeofChecker.cpp Source File". clang.llvm.org. Retrieved 1 April 2018.
9. ^ "comp.lang.c FAQ list · Question 7.7b". C-FAQ. Retrieved 9 March 2007.
10. ^ Reek, Kenneth (1997-08-04). Pointers on C (1 ed.). Pearson. ISBN 9780673999863.
11. ^ Alexandrescu, Andrei (2001). Modern C++ Design: Generic Programming and Design Patterns Applied. Addison-Wesley. p. 78.
12. ^ "Wolfram Gloger's malloc homepage". malloc.de. Retrieved 1 April 2018.
13. ^ a b c Kaempf, Michel (2001). "Vudo malloc tricks". Phrack (57): 8. Archived from the original on 22 January 2009. Retrieved 29 April 2009.
14. ^ "Malloc Tunable Parameters". GNU. Retrieved 2 May 2009.
15. ^ Sanderson, Bruce (12 December 2004). "RAM, Virtual Memory, Pagefile and all that stuff". Microsoft Help and Support.
16. ^ Evans, Jason (16 April 2006). "A Scalable Concurrent malloc(3) Implementation for FreeBSD" (PDF). Retrieved 18 March 2012.
17. ^ "libc/stdlib/malloc.c". BSD Cross Reference, OpenBSD src/lib/.
18. ^ Berger, E. D.; McKinley, K. S.; Blumofe, R. D.; Wilson, P. R. (November 2000). Hoard: A Scalable Memory Allocator for Multithreaded Applications (PDF). ASPLOS-IX. Proceedings of the ninth international conference on Architectural support for programming languages and operating systems. pp. 117–128. CiteSeerX . doi:10.1145/378993.379232. ISBN 1-58113-317-0.
19. ^ TCMalloc homepage
20. ^ Ghemawat, Sanjay; Menage, Paul; TCMalloc : Thread-Caching Malloc
21. ^ Callaghan, Mark (18 January 2009). "High Availability MySQL: Double sysbench throughput with TCMalloc". Mysqlha.blogspot.com. Retrieved 18 September 2011.
22. ^ "kmalloc()/kfree() include/linux/slab.h". People.netfilter.org. Retrieved 18 September 2011.
23. ^
24. ^ "Why is the use of alloca() not considered good practice?". stackoverflow.com. Retrieved 2016-01-05.
25. ^ Amarasinghe, Saman; Leiserson, Charles (2010). "6.172 Performance Engineering of Software Systems, Lecture 10". MIT OpenCourseWare. Massachusetts Institute of Technology. Retrieved 27 January 2015.
26. ^ "alloca(3) - Linux manual page". man7.org. Retrieved 2016-01-05.
27. ^ `posix_memalign` – System Interfaces Reference, The Single UNIX Specification, Issue 7 from The Open Group | 2018-09-25 13:20:18 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 2, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.32415151596069336, "perplexity": 3734.2106299973902}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-39/segments/1537267161638.66/warc/CC-MAIN-20180925123211-20180925143611-00064.warc.gz"} |
http://math.stackexchange.com/questions/413787/solving-the-equation-11x2-6000x-27500-0-preferably-without-the-quadratic-fo/413804 | # Solving the equation $11x^2-6000x-27500 =0$, preferably without the quadratic formula
I obtained this form while solving an aptitude question.
$$\frac{3000}{x-50} + \frac{3000}{x+50} = 11$$
I changed it into a quadratic equation
$$11x^2 -6000x - 27500 =0$$
but I don't know how to solve it.
I can't find two factors of 302500 that sum to 6000, and when I use the formula the numbers become huge... How can I solve it without using a calculator? Is there any other simple way to solve it [another method, or by finding factors]? I'm a beginner in math. Please explain your answer for me.
-
I hope the new title reflects what you hope to obtain in an answer. If not, please let me know. – Lord_Farin Jun 7 '13 at 13:14
Note that the two roots aren't factors of 27500 and don't sum to 6000. Here since the leading coefficient is not 1, their product and sum respectively have to be 27500/11 and 6000/11. – Milind Jun 7 '13 at 13:14
• The two roots are $x_1\approx270$ and $x_2\approx275$. I obtained this using the quadratic formula but without using a calculator. Does it count? – Andrea Mori Jun 7 '13 at 13:27
No, the roots are $x_1=550$ and $x_2=-50/11$. WolframAlpha confirms. – Milind Jun 7 '13 at 13:29
Right, I copied the equation wrong on a piece of paper! :) Roots are indeed those, and can be actually computed by hand, no calculator needed. – Andrea Mori Jun 7 '13 at 13:35
There isn't any standard, guaranteed method apart from the quadratic formula to solve a quadractic equation. However sometimes there are "ad-hoc tricks" which might help you get one root.
The RHS of the equation is an integer; you might suspect that an $x$ such that both the terms on the LHS are integers might be a root (this does not have to be true at all, but it's not bad to try).
Also since $x-50$ and $x+50$ differ by $100$, you want a number $y$ such that both $y$ and $y+100$ divide $3000$. Noticing that $500$ and $600$ satisfy this gives $x=550$ as a root.
Using this, you can find the other root quite easily to be $x=-\frac{50}{11}$ since the product of the roots is $-27500/11$.
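As a sanity check (my addition, not part of the original answer), both candidate roots can be plugged back into $11x^2-6000x-27500$. Working with a rational root $x=p/q$ and scaling by $q^2$ keeps everything in integer arithmetic:

```c
/* Evaluate 11*x^2 - 6000*x - 27500 at x = p/q, scaled by q^2 so that
   the value is an integer; a root makes the scaled value exactly 0. */
long long poly_scaled(long long p, long long q) {
    return 11 * p * p - 6000 * p * q - 27500 * q * q;
}
```

With $p/q = 550/1$ and $p/q = -50/11$ the scaled polynomial evaluates to 0, confirming both roots.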
-
awesome. but how u figured its 500 and 600 [i'm a beginner i said.] – Dineshkumar Jun 7 '13 at 13:25
Ah, that step is one of the reasons I called this an ad-hoc method. I personally noticed that 5 and 6 are consecutive factors of 30. – Milind Jun 7 '13 at 13:27
How about -50/11 ? – Dineshkumar Jun 7 '13 at 13:35
@MilindHegde : also $150$ and $250$ are two integers with difference $100$ dividing $3000$. Also $50$ and $150$. Also $-50$ and $50$ ... – Andrea Mori Jun 7 '13 at 13:47
That's true. My mind tried multiples of 100 first. It is also a good idea too see approximately how big $3000/y$ is to quickly exclude possibilities like 50 and 100. – Milind Jun 7 '13 at 13:51
## Hint
Do a substitution $x = 50y$. Then the equation becomes $$\frac {3000}{50(y-1)} + \frac {3000}{50(y+1)} = 11 \\ \frac {60}{y-1} + \frac {60}{y+1} = 11$$ which should be a bit easier to solve... I guess...
-
explanation for 50(y-1) and 50(y+1) please. – Dineshkumar Jun 7 '13 at 13:38
@Dineshkumar $x-50 = 50y-50 = 50(y-1)$. – Kaster Jun 7 '13 at 14:43
@JoelReyesNoche Thanks. Quick typo. – Kaster Jun 7 '13 at 14:44
Multiply by 11, and replace $y=11x$. Then you get
$$y^2-6000y-302500=0 \,.$$
Now complete the square:
$$y^2-6000y+3000^2=3000^2+302500$$
Last:
$$3000^2+302500=3000\times 3000+3025\times 100=600 \times 5 \times 6 \times 500+121\times25\times100$$ $$=2500 \times (3600+121)=2500 \times 3721=50^2 \times 61^2$$
Thus you get
$$(y-3000)^2=3050^2$$
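The square-completion arithmetic above can be verified mechanically; a quick check (my own, not part of the original answer):

```c
/* Square helper for checking the identities
   3000^2 + 302500 = 3050^2 and 2500 * 3721 = 50^2 * 61^2. */
long long sq(long long x) { return x * x; }
```

Both identities hold, so the only hand computation really needed is spotting that $3721 = 61^2$.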
-
Nice way to remove the annoying 11 ! – Frédéric Grosshans Jun 7 '13 at 15:51
By the way, on the line you call "last", I find the following development, which completes the square, more intuitive $3000^2+ 302500= 3000^2+ 3025\times100= 3000^2 + 3000\times100+25\times100 = 3000^2+2\times3000\times50+50^2=3050^2$ – Frédéric Grosshans Jun 7 '13 at 15:56
The last line should read $(y - 3000)^2 = 3050^2$ – Happy Green Kid Naps Jun 7 '13 at 21:39
use this formula $ax^2+bx+c=0\implies x=\dfrac{-b\pm\sqrt{b^2-4ac}}{2a}$
$$11x^2-6000x-27500=0$$
here $a=11,b=-6000,c=-27500$
just put these values in the above formula and you get the answer.
# second approach:
$$11x^2-6000x-27500=0$$ $$11x^2-6050x+50x-27500=0$$ $$11x(x-550)+50(x-550)=0$$ $$(x-550)(11x+50)=0$$ $$(x-550)=0\;\;,(11x+50)=0$$ $$x=550,-\dfrac{50}{11}$$
-
• I tried, but I am getting big values and some errors. – Dineshkumar Jun 7 '13 at 13:18
• @Dineshkumar I think the problem you face is taking the square root of $37210000$. – iostream007 Jun 7 '13 at 13:22
• How did you add and subtract 50, bringing it to $11x(x-550)+50(x-550)=0$? I'm a beginner! – Dineshkumar Jun 7 '13 at 13:27
• Using the second approach involves guessing for big numbers. I want to split 6000 into two parts such that the product of those parts is $-302500$. Since the sign of the product is minus, one part is bigger than 6000 and one is smaller. Just take some easy values and check their product; if it is near your desired value, then adjust the values a little. I solved it in 3 tries: first I took 6500 and 500, then 6100 and 100, then 6050 and 50 – iostream007 Jun 7 '13 at 13:32
@Beska I don't know either, but $60^2=3600$ obviously, and it's not far from $3721$ from below, so $61^2$ is a natural guess because of the final digit $1$. Now, $61^2=(60+1)^2=60^2+2\cdot60\cdot1+1^2=3600+120+1=3721$ ... Bingo! ... All of this can easily be done in your head. – Andrea Mori Jun 7 '13 at 16:23 | 2016-04-30 21:19:38 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8585865497589111, "perplexity": 884.6212184022454}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-18/segments/1461860112727.96/warc/CC-MAIN-20160428161512-00126-ip-10-239-7-51.ec2.internal.warc.gz"} |
https://www.zbmath.org/?q=an%3A1151.82316 | # zbMATH — the first resource for mathematics
Spanning forests on the Sierpinski gasket. (English) Zbl 1151.82316
Summary: We present the numbers of spanning forests on the Sierpinski gasket $$SG_d(n)$$ at stage $$n$$ with dimension $$d$$ equal to two, three and four, and determine the asymptotic behaviors. The corresponding results on the generalized Sierpinski gasket $$SG_{d,b}(n)$$ with $$d=2$$ and $$b=3,4$$ are obtained. We also derive the upper bounds of the asymptotic growth constants for both $$SG_d$$ and $$SG_{2,b}$$.
##### MSC:
82B20 Lattice systems (Ising, dimer, Potts, etc.) and systems on graphs arising in equilibrium statistical mechanics
82B10 Quantum equilibrium statistical mechanics (general)
http://math.stackexchange.com/questions/23681/comparing-symbolic-and-analog-descriptions | # Comparing symbolic and analog descriptions
I've never seen the following comparison before. Let me start with a specific example:
Given a finite structure with two symmetric binary relations, i.e. a graph $G$ with one vertex set $V$ and two edge sets $E_1$, $E_2$.
Giving an explicit description of $G$ in a formal language can be seen as a special case of defining a function $f$ from a set $T$ with a betweenness relation $B$ to the set $V\ \cup \big( E_1 \times \lbrace E_1 \rbrace \big) \cup \big( E_2 \times \lbrace E_2 \rbrace \big)$ such that $(v,w) \in E_i$ iff
$$(\exists x,y,z) f(x) = v \wedge f(y) = ((v,w),E_i) \wedge f(z) = w \wedge B(x,y,z)$$
Such a tuple $(x,y,z)$ represents the "sentence" that $(v,w) \in E_i$. The sentences may overlap and reuse "symbols", but this can be avoided. For example ($T = \mathbb{N}$):
|u|v|vwE|uwE|w| | | |...
represents the sentences $(u,w) \in E$ and $(v,w) \in E$. But also does:
|u|uwE|w|v|vwE|w| | | |...
In fact, it's enough to consider functions $f$ from $T$ to $V\ \cup \lbrace E_1 \rbrace \cup \lbrace E_2 \rbrace$ such that $(v,w) \in E_i$ iff
$$(\exists x,y,z) f(x) = v \wedge f(y)=E_i \wedge f(z) = w \wedge B(x,y,z)$$
This comes closer to the normal usage of formal languages. So
|u|E|w|v|E|w| | | |...
represents the sentences $(u,w) \in E$ and $(v,w) \in E$, which are often written as $uEw$ and $vEw$.
This is what "symbolic" description essentially is: a "structure preserving" function from a medium to the structure. (Using intermediate symbols from an alphabet isn't essential: the elements of the structure may symbolize themselves.)
In contrast, "analog" description essentially is a "structure preserving" function from the structure to a medium:
Consider a function $f$ from the set $V\ \cup \big( E_1 \times \lbrace E_1 \rbrace \big) \cup \big( E_2 \times \lbrace E_2 \rbrace \big)$ to a set $T$ with a betweenness relation $B$ such that $(v,w) \in E_i$ iff $f((v,w),E_i))$ is between $f(v)$ and $f(w)$, or:
$$(\exists x,y,z) f(v) = x \wedge f(((v,w),E_i)) = y \wedge f(w) = z \wedge B(x,y,z)$$
For example ($T = \mathbb{N}^2$):
|u|uwE|w|
| |vwE| |
|v| | |
Again, we can ignore the specific edges and write/draw for short:
|u|E|w|
| |E| |
|v| | |
We even can ignore the specific vertices and write/draw for short (note how this looks like an unlabelled graph!):
|V|E|V|
| |E| |
|V| | |
Analog description can be generalized for relations of arbitrary arity, but that's a bit tedious. In principle, it works.
Is this comparison in any sense enlightening, or is it just baublery?
As I said before, I've never seen this comparison made explicit. So can any references be given?
-
I can never really understand what your questions are asking, but it sounds like you are just talking about currying: en.wikipedia.org/wiki/Currying – Qiaochu Yuan Feb 25 '11 at 12:34
Below is what I understand, considering the first example. Your $T$ encodes bits of sentences and $f$ decodes them. What other data does $T$ contain? $B$ distinguishes a tuple of bits of some syntactically correct sentence. Here the question “what is a sentence, precisely?” arises. If it is a logical formula, like in first-order logic, then $B$ is too rigid to handle it. $B$ checks only 3 bits. On the contrary, carriers of term algebras are sets of terms. BTW, you can simplify the formula in the second example $B(f(v), f(((v,w),E_i)), f(w))$. – beroal Feb 25 '11 at 17:21
I wanted to show the similarity between the two formulas. – Hans Stricker Feb 25 '11 at 19:23 | 2014-10-21 12:43:28 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8352643251419067, "perplexity": 715.4949046531881}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-42/segments/1413507444385.33/warc/CC-MAIN-20141017005724-00090-ip-10-16-133-185.ec2.internal.warc.gz"} |
http://clay6.com/qa/14243/identify-the-correct-sequence-of-events-with-reference-to-conjugation-of-vo | Browse Questions
# Identify the correct sequence of events with reference to Conjugation of Vorticella
$\begin{array}{ll} (A)\;Amphimixis & \quad (B)\;Disappearance \: of \: macronucleus \\ (C)\;Attachments \: of \: the \: conjugants & \quad (D)\;Post\: conjugation\: fissions \\ (E)\: Prezygotic\: nuclear\: divisions & \quad (F)\: Postzygotic\: nuclear\: divisions \end{array}$
$\begin{array}{ll} (1)\;C \rightarrow B \rightarrow A \rightarrow E \rightarrow D \rightarrow F & \quad (2)\;C \rightarrow B \rightarrow E \rightarrow A \rightarrow F \rightarrow D \\ (3)\;F \rightarrow A \rightarrow D \rightarrow B \rightarrow C \rightarrow E & \quad (4)\;F \rightarrow D \rightarrow A \rightarrow E \rightarrow B \rightarrow C \end{array}$
(2) $C \rightarrow B \rightarrow E \rightarrow A \rightarrow F \rightarrow D$ | 2017-06-23 19:07:27 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.862644612789154, "perplexity": 3100.8046448638934}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-26/segments/1498128320130.7/warc/CC-MAIN-20170623184505-20170623204505-00050.warc.gz"} |
https://codegolf.stackexchange.com/questions/1775/additive-persistence/1779 | The shortest code to pass all possibilities wins.
In mathematics, the persistence of a number measures how many times a certain operation must be applied to its digits until some certain fixed condition is reached. You can determine the additive persistence of a positive integer by adding the digits of the integer and repeating. You would keep adding the digits of the sum until a single digit number is found. The number of repetitions it took to reach that single digit number is the additive persistence of that number.
Example using 84523:
84523
8 + 4 + 5 + 2 + 3 = 22
2 + 2 = 4
It took two repetitions to find the single digit number.
So the additive persistence of 84523 is 2.
You will be given a sequence of positive integers that you have to calculate the additive persistence of. Each line will contain a different integer to process. Input may be given via any standard I/O method.
For each integer, you must output the integer, followed by a single space, followed by its additive persistence. Each integer processed must be on its own line.
## Test Cases
Input Output
99999999999 3
10 1
8 0
19999999999999999999999 4
6234 2
74621 2
39 2
2677889 3
0 0
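For reference (this is my own ungolfed sketch, not one of the competing answers), the spec above reduces to repeatedly taking a digit sum. Using 64-bit integers, which covers every test case except the bignum one (`19999999999999999999999` would need a string-based digit sum):

```c
/* Sum of the decimal digits of n (n >= 0). */
long long digit_sum(long long n) {
    long long s = 0;
    for (; n > 0; n /= 10) s += n % 10;
    return s;
}

/* Additive persistence: number of digit-sum steps until one digit remains. */
int additive_persistence(long long n) {
    int steps = 0;
    while (n > 9) {
        n = digit_sum(n);
        steps++;
    }
    return steps;
}
```

A golfed solution is doing exactly this loop, just compressed; the output format then pairs each input with the step count.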
• Your test cases include some values which are over 2^64, and your spec says that the program only has to handle values up to 2^32. Might be worth clearing that up. – Peter Taylor Mar 25 '11 at 23:40
• @Peter Taylor, forgot to remove those limits. If a program can handle the input I have provided, it shouldn't have an issue with limits. – Kevin Brown Mar 25 '11 at 23:49
• Isn't 999999999999's persistence 2 instead of 3? – Eelvex Mar 26 '11 at 2:13
• @Evelex, that was an incorrect last minute change I guess. Fixed. – Kevin Brown Mar 26 '11 at 21:45
• Several answers here aren't doing output on stdout but rather use J's "interactive" output by returning results after taking command line input. (This includes 2 other J answers and, I'm guessing, the K answer.) Is this considered legit? Because I can shed 18-ish characters if so. – Jesse Millikan Mar 27 '11 at 1:29
## K - 29 Chars
Input is a filename passed as an argument, 29 chars not including filename.
0:{5:x,-1+#(+/10_vs)\x}'.:'0:"file"
• 35 -> 31: Remove outside function.
• 31 -> 29: Remove parens.
• -1+# => #1_ – streetster Aug 6 '19 at 11:24
Python 84 Chars
while 1:
m=n=int(raw_input());c=0
while n>9:c+=1;n=sum(map(int,str(n)))
print m,c
• Challenge case:06234 .. result successful challenge :-) – Quixotic Mar 26 '11 at 6:01
• @Debanjan Thanks. Corrected. – fR0DDY Mar 26 '11 at 10:08
p[d]=0
p d=1+(p.show.sum$map((-48+).fromEnum)d)
f n=n++' ':shows(p n)"\n"
main=interact$(f=<<).lines
• You can save 6 bytes by using read.pure instead of (-48+).fromEnum, try it online! – ბიმო Apr 25 '18 at 15:17
## Python (93 bytes)
f=lambda n,c:n>9and f(sum(map(int,str(n))),c+1)or c
while 1:n=int(raw_input());print n,f(n,0)
• i think you can remove the space between 9 and err...and – st0le Mar 26 '11 at 8:52
• @st0le:Thanks :-) – Quixotic Mar 28 '11 at 6:17
• and input() instead of int(raw_input()).... – st0le Mar 28 '11 at 6:48
• @st0le:Try this input with that modification:06234. – Quixotic Mar 28 '11 at 7:12
# Husk, 10 15 bytes
+5 bytes for horrible I/O requirement
m(wΓ·,LU¡oΣdr)¶
Try it online!
### Explanation
To support multiple inputs, we need to use m(₁r)¶ (where ₁ is the function doing the interesting computation):
m(₁r)¶ -- expects newline-separated inputs: "x₁x₂…xₙ"
¶ -- split on newlines: ["x₁","x₂",…,"xₙ"]
m( ) -- map over each string
( r) -- | read integer: [x₁,x₂,…,xₙ]
(₁ ) -- | apply the function described below
The function ₁ does the following:
wΓ·,LU¡(Σd) -- input is an integer, eg: 1234
¡( ) -- iterate the following forever and collect results in list:
( d) -- | digits: [1,2,3,4]
(Σ ) -- | sum: 10
-- : [1234,10,1,1,1,…
U -- keep longest prefix until repetition: [1234,10,1]
Γ -- pattern match (x = first element (1234), xs = tail ([10,1])) with:
· L -- | length of xs: 2
, -- | construct tuple: (1234,2)
w -- join with space: "1234 2"
### bash, 105 chars
while read x
do
for((i=0,z=x;x>9;i++))do
for((y=0;x>0;y+=x%10,x/=10))do :
done
x=$y
done
echo $z $i
done

Hardly any golfing actually involved, but I can't see how to improve it.

## Haskell - 114

s t n|n>9=s(t+1)$sum$map(read.(:[]))$show n|1>0=show t
f n=show n++" "++s 0n++"\n"
main=interact$(f.read=<<).lines • You can save 4 bytes by using pure over (:[]) and defining an operator instead of s, try it online! – ბიმო Apr 25 '18 at 15:22 ## Ruby, 85 Chars puts$<.map{|n|v=n.chop!;c=0;[c+=1,n="#{n.sum-n.size*48}"] while n[1];[v,c]*' '}*"\n"
I had to borrow the "sum-size*48" idea from Alex, because it's just too neat to miss (in Ruby at least).

## Golfscript, 40 chars
n%{.:${;${48-}%{+}*:$,}%.,1>\1?+' '\n}%

## J - 45 Chars

Reads from stdin

(,' ',[:":@<:@#+/&.:("."0)^:a:)&><;._2(1!:1)3

• I was trying to use ^:a: myself but I couldn't find some proper documentation... any hints? – Eelvex Mar 26 '11 at 16:12
• The dictionary entry for u^:n has info on its usage but it is a bit dense. ^:a: is like any other call to power but it collects the results and ends when the argument to consecutive calls is the same (converges). – isawdrones Mar 26 '11 at 17:21
• @Eelvex FWIW I discovered a: through the ^:a: trick in the J Reference Card[PDF] – J B Mar 27 '11 at 0:55
• @JB: That's the only reference on ^:a: that I knew :D – Eelvex Mar 27 '11 at 0:59
• @Eelvex Oh. I had the opposite experience then. I discovered the functionality in the dictionary, and used it as some variant of ^:(<'') at first (probably for Kaprekar), until I spotted it in the card, and learned about a: for the occasion. – J B Mar 27 '11 at 10:43

## c -- 519 (or 137 if you credit me for the framework...)

Rather than solving just this one operation, I decided to produce a framework for solving all persistence problems.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
typedef char*(*O)(char*);
char*b(char*s){long long int v=0,i,l=0;char*t=0;l=strlen(s);t=malloc(l+2);
for(i=0;i<l;i++)v+=s[i]-'0';snprintf(t,l+2,"%lld",v);return t;}
int a(char**s,O o){int r;char*n;n=o(*s);r=!strcmp(*s,n);free(*s);
*s=n;return r;}
int main(int c, char**v){size_t l, m=0;char *d,*n=0;O o=b;FILE*f=stdin;
while(((l=getline(&n,&m,f))>1)&&!feof(f)){int i=0;n=strsep(&n,"\n");
d=strdup(n);while(!a(&n,o))i++;printf("%s %d\n",d,i);free(d);free(n);n=0;m=0;}}

Only the two lines starting from char*b are unique to this problem. It treats the input as strings, meaning that leading "0"s are not stripped before the output stage.
The above has had comments, error checking and reporting, and file reading (input must come from the standard input) stripped out of:

/* persistence.c
 *
 * A general framework for finding the "persistence" of input strings
 * on operations.
 *
 * Persistence is defined as the number of times we must apply
 *
 *     value_n+1 <-- Operation(value_n)
 *
 * before we first reach a fixed point.
 */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include "../getline.h"

/* A function pointer type for operations */
typedef char*(op_func)(char*);
typedef op_func* op_ptr;

/* Op functions must
 * + Accept the signature above
 * + return a pointer to a newly allocated buffer containing the updated str
 */
char* addop(char*s){
  int i,l=0;
  long long int v=0;
  char *t=NULL;
  /* protect against bad input */
  if (NULL==s) return s;
  /* allocate the new buffer */
  l = strlen(s);
  t = malloc(l+2);
  if (NULL==t) return t;
  /* walk the characters of the original adding as we go */
  for (i=0; i<l; i++) v += s[i]-'0';
  //fprintf(stderr,"  '%s' (%d) yields %lld\n",s,l,v);
  snprintf(t,l+2,"%lld",v);
  //fprintf(stderr,"  %lld is converted to '%s'\n",v,t);
  return t;
}

/* Apply op(str), return true if the argument is a fixed point of op,
 * false otherwise.
 */
int apply(char**str, op_ptr op){
  int r;
  char*nstr;
  /* protect against bad input */
  if ( NULL==op ) exit(1);
  if ( NULL==*str ) exit(4);
  /* apply */
  nstr = op(*str);
  /* test for bad output */
  if ( NULL==nstr ) exit(2);
  r = !strcmp(*str,nstr);
  /* free previous buffer, and reassign the new one */
  free(*str);
  *str = nstr;
  return r;
}

int main(int argc, char**argv){
  size_t len, llen=0;
  char *c,*line=NULL;
  op_ptr op=addop;
  FILE *f=stdin;
  if (argc > 1) f = fopen(argv[1],"r");
  while( ((len=getline(&line,&llen,f))>1) && line!=NULL && !feof(f) ){
    int i=0;
    line=strsep(&line,"\n"); // Strip the ending newline
    /* keep a copy for later */
    c = strdup(line);
    /* count necessary applications */
    while(!apply(&line,op)) i++;
    printf("%s %d\n",c,i);
    /* memory management */
    free(c);
    free(line); line=NULL; llen=0;
  }
}

A little more could be saved if we were willing to leak memory like a sieve. Likewise by #defineing return and the like, but at this point I don't care to make it any uglier.

I think this is about the best I can come up with.

Ruby 101 Chars
puts "#{l} #{i}"}
• Actually, chop! instead of chomp! gives me a one character savings. 97 chars. – Alex Bartlow Mar 26 '11 at 2:20
• Just did some more golfing at it - 91 chars. – Alex Bartlow Mar 26 '11 at 2:26
PARI/GP 101 Chars
s(n)=r=0;while(n>0,r+=n%10;n\=10);r
f(n)=c=0;while(n>9,c++;n=s(n));c
while(n=input(),print(n," ",f(n)))
Unfortunately, there's no input function for GP, so i guess this lacks the IO part. :( Fixed: Thanks Eelvex! :)
• Sure there is: input() :) – Eelvex Mar 26 '11 at 5:23
• @Eelvex, done. :) – st0le Mar 26 '11 at 5:53
Javascript - 95
i=prompt();while(i>9){i=''+i;t=0;for(j=0;j<i.length;j++)t+=parseInt(i.charAt(j));i=t;}alert(t);
EDIT: Whoops does'nt do the multi-lines
• Just noticed this doesn't output it correctly. – Kevin Brown Mar 30 '11 at 19:04
# J, 78
f=:[:+/"."0&":
r=:>:@$:@f0:@.(=f) (4(1!:2)~LF,~[:":@([,r)".@,&'x');._2(1!:1)3 Recursive solution. Reads from stdin. Writes to stdout, so cut me some slack - it does take an extra 18-ish characters. ## Perl - 77 characters sub'_{split//,shift;@_<2?0:1+_(eval join'+',@_)}chop,print$_,$",(_$_),$/for<> # JavaScript, 57 47 bytes -10 bytes thanks to @l4m2! f=(s,c=0)=>s>9?f(eval([...s+""].join+),++c):c Try it online! • f=(s,c=0)=>s>9?f([...s+""].reduce((x,y)=>x*1+y*1),++c):c – l4m2 May 5 '18 at 11:10 • f=(s,c=0)=>s>9?f([...s+""].reduce((x,y)=>x- -y),++c):c – l4m2 May 5 '18 at 11:11 • f=(s,c=0)=>s>9?f(eval([...s+""].join+)),++c):c – l4m2 May 5 '18 at 11:11 • @l4m2 Thanks! s>9 and eval were great ideas. I think you had an extra paren in there, making it a total of 10 bytes you saved me :-) – Oliver May 7 '18 at 0:35 • Note the strict I/O ;) – Shaggy May 14 '18 at 14:28 # 05AB1E, 13 bytes ε.µΔSO¼}¾}<ø» Input as a list of integers. Try it online. Explanation: ε # Map each integer in the (implicit) input to: .µ # Reset the counter variable to 0 Δ # Loop until the integer no longer changes: S # Convert it to a list of digits O # And take the sum of those ¼ # Increase the counter variable by 1 }¾ # After the inner loop: Push the counter variable }< # After the map: decrease each value by 1 ø # Zip/transpose it with the (implicit) input to create a paired list » # Join each pair by a space, and then each string by newlines # (after which the result is output implicitly) # MathGolf, 11 bytes hÅ_Σ]▀£(k ? Try it online! Incredibly inefficient, but we don't care about that. Basically, using the fact that the additive persistence of a number is smaller than or equal to the number itself. Uses the fact that the additive persistence is less than or equal to the number of digits of the number. Passes all test cases with ease now. The input format, while suboptimal for some languages, is actually the standard method of taking multiple test cases as input in MathGolf. 
Each line of the input is processed as its own program execution, and output is separated by a single newline for each execution. ## Explanation (using n = 6234) h push length of number without popping (6234, 4) Å loop 4 times using next 2 operators _ duplicate TOS Σ get the digit sum ] wrap stack in array this gives the array [6234, 15, 6, 6, 6] ▀ unique elements of string/list ([6234, 15, 6]) £ length of array/string with pop (3) ( decrement (2) k ? push input, space, and rotate top 3 elements to produce output (6234 2) # K (ngn/k), 16 bytes Solution: {x,#1_(+/10\)\x} Try it online! Explanation: {x,#1_(+/10\)\x} / the solution { } / lambda taking implicit x ( )\x / iterate until convergence 10\ / split into base-10 (123 => 1 2 3) +/ / sum 1_ / drop first result (iterate returns input as first result) # / count length of result x, / prepend x (original input) # Stax, 8 11 bytes ªwæMε∞ö?îm⌐ Run and debug it +3 bytes thanks to @Khuldraeseth (the first answer didn't have compliant output) • I reached the same solution, but with i in place of u. Adhering to the draconian IO specifications, this becomes 11 bytes. – Khuldraeseth na'Barya Aug 6 '19 at 14:58 • Oops. I guess I didn't read the IO requirements very well. I'll update my answer. – recursive Aug 6 '19 at 15:04 ### scala 173: def s(n:BigInt):BigInt=if(n<=9)n else n%10+s(n/10) def d(n:BigInt):Int=if(n<10)0 else 1+d(s(n)) Iterator.continually(readInt).takeWhile(_>0).foreach(i=>println(i+" "+d(i))) # Perl 5, 65 bytes $b=0;$q=s/\n//r;$_=eval s/./+$&/gr while y///c>1&&++$b;say"$q$b"
Try it online!
# Java (OpenJDK 8), 79 bytes
a->{int d=0;while(a/10>0){int c=0;d++;while(a>0){c+=a%10;a/=10;}a=c;}return d;}
Try it online!
There's probable potential to golf it further, but I'll look into that in the future, but for now, I'm pretty happy with this little result.
• – Jonathan Frech Apr 15 '18 at 18:46
• Building on @JonathanFrech 67 bytes – ceilingcat Dec 8 '19 at 18:36
# Python 3, 82 bytes
while 1:f=lambda n:n//10and 1+f(sum(map(int,str(n))));i=input();print(i,f(int(i)))
# Tcl

proc P {v n\ 0} {set V $v
while \$v>9 {set v [expr [join [split $v ""] +]]
incr n}
puts $V\ $n}

Try it online!

• Because the next newest answer is a full 6 years old, which i think is before TIO existed – fəˈnɛtɪk Apr 15 '18 at 0:35

# Japt, 28 bytes

Ë+S+(@D=X©A<D©ì x ªD D<AÃa÷

Ë                           // Map over the inputs and return each, followed by
 +S+                        // a space, followed by the number's persistence.
      D=      ©ì x          // To find it, fold the number up
        X©A<D     ªD        // if we can (handles unfoldable cases),
    (@             D<AÃa    // until it can't be folded up any further.
                         ÷  // Then, join everything up with newlines.

Try it online!

# PHP, 72+1 bytes

+1 for -R flag.

for($i=0,$a=$argn;$a>9;$i++)$a=array_sum(str_split($a));echo"$argn $i
";
Run as pipe with -R.
• running PHP as pipe will execute the code once for every input line
• but it does not unset variables in between; so $i must be initialized. (Also, it would print nothing instead of 0 for single digits without the initialization.)

# Bash+coreutils, 83 bytes

[ $1 -le 9 ]&&exit $2
let x=$2+1
for z in `fold -w1<<<$1`
do let y+=$z
done
a $y $x
Try it online!
Should be saved to a script called a and placed in the system's PATH, as it calls itself recursively. Takes input from command line, like a 1999. Returns by exit code.
TIO has some limitations on what you can do with a script, so there's some boilerplate code to make this run in the header.
Prints an error to stderr for input larger than bash integers can handle, but since the actual computation is done with strings, it still gives the right result anyway.
http://blog.pgm-solutions.com/tag/statistics/ | ### Tag: Statistics
## Kolmogorov-Smirnov test: A usual error
The Kolmogorov-Smirnov test is a non-parametric test whose purpose is to test whether the true distribution of some i.i.d. sample is some specific distribution. However, the test is often used to test whether the true distribution belongs to some family of distributions (for example, the Gaussian family) where we estimate the parameters. …
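The pitfall described above can be seen numerically. Below is a stdlib-only sketch (the sample size, seed, and hand-rolled KS statistic are my choices, not the post's): the KS statistic computed against a normal whose parameters were estimated from the same sample is typically smaller than against a fully specified normal, so comparing it to the standard KS critical values makes the test reject too rarely — the issue that Lilliefors' test corrects.

```python
import math
import random
import statistics

def ks_statistic(sample, cdf):
    """Two-sided Kolmogorov-Smirnov statistic D_n = sup_x |F_n(x) - F(x)|."""
    xs = sorted(sample)
    n = len(xs)
    d = 0.0
    for i, x in enumerate(xs, start=1):
        fx = cdf(x)
        d = max(d, abs(i / n - fx), abs((i - 1) / n - fx))
    return d

def normal_cdf(x, mu=0.0, sigma=1.0):
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

random.seed(0)
sample = [random.gauss(0.0, 1.0) for _ in range(200)]

# Correct use: the null distribution is fully specified in advance.
d_specified = ks_statistic(sample, normal_cdf)

# The usual error: estimate mu and sigma from the very sample being tested.
mu_hat = sum(sample) / len(sample)
sigma_hat = statistics.stdev(sample)
d_fitted = ks_statistic(sample, lambda x: normal_cdf(x, mu_hat, sigma_hat))

# d_fitted is typically the smaller of the two, which is why the standard
# KS critical values are too lenient once parameters have been estimated.
print(round(d_specified, 3), round(d_fitted, 3))
```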
https://undergroundmathematics.org/exp-and-log/summing-to-one/problem-2 | Building blocks
## Problem 2
Can we choose integers $x$ and $y$ so that:
$\log_6 x +\log_6 y =1?$
How many different ways are there to do this?
How about using $\log_{12}$?
How about using $\log_{24}$?
How might your answers change if the equations were instead of the form $\log_n x - \log_n y = 1?$
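For positive $x$ and $y$, the first equation is equivalent to $xy = 6$, so counting solutions amounts to counting ordered factor pairs. A quick Python sketch for exploring the question (assuming $x$ and $y$ are restricted to positive integers, since the logarithms must be defined):

```python
def ordered_factor_pairs(n):
    """All ordered pairs (x, y) of positive integers with x * y == n."""
    return [(x, n // x) for x in range(1, n + 1) if n % x == 0]

for n in (6, 12, 24):  # the bases asked about above
    pairs = ordered_factor_pairs(n)
    print(n, len(pairs), pairs)
# n = 6 gives the 4 pairs (1, 6), (2, 3), (3, 2), (6, 1)
```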
http://sciforums.com/threads/sciforums-encyclopedia.63729/#post-1314838 | # SciForums Encyclopedia
Discussion in 'Site Feedback' started by Plazma Inferno!, Mar 3, 2007.
1. ### Plazma Inferno! (Ding Ding Ding Ding), Administrator
Dear SciForums members!
SciForums Encyclopedia is now available to you!
It uses the same form as Wikipedia, and it will be used for information about the forum and its members.
Access link for SF Encyclopedia is:
http://www.sciforums.com/encyclopedia
Also, you'll be able to access it via a link on the SF main page.
To access and to contribute to the SF Encyclopedia, you have to register. I suggest that you use your current forum name for the encyclopedia too.
Many of you have had the chance to work with a wiki system. For those who haven't, here is a useful link:
http://www.mediawiki.org/wiki/Help:FAQ
Also, ENC tags are now available when you want to put direct links to the Encyclopedia in your posts. You can link to existing topics in the Encyclopedia, or create new topics based on a particular word in your post.
This is an example of using ENC tags:
And that would look like this in the post:
But first, we must input some content in the Encyclopedia.
So, you could post here your suggestions about the Encyclopedia's Main Page and possible categories that could be added.
Cheers!
5. ### invert_nexus (Ze do caixao), Valued Senior Member
Wes should be proud to know that he's the subject of the very first (and at present only) article on the new wiki.
[enc]Wesdom[/enc]
7. ### Killjoy (Propelling The Farce!!), Valued Senior Member
hmmm...
That almost looks like a signpost on the road to...
Well...
I'm an old so-and-so, and a lazy one to boot.
Probably a good thing - the latter characteristic, anyway - as what the aforementioned article prompted me to think of was an entry on the "goat-humpers" thing which meandered thru these environs a bit ago.
Perhaps one on the "Lost / Golden Age" of sciforums...
"...a time before the arguments got truly styoopid, and the advent of Unrestricted Trolling...
A Halcyon Epoch before the Invasion of the Mod-Force, and the ceaseless "reform" oriented clamp-downs...
When Titans of pettiness rather than petty titans sauntered the now greytoned halls of the buzz-saw logo'd and "rebranded" for'm, lashing out with unbridled ('cos there wer'n't a battalion O' Mod-bods) ferocity at whomever dared utter a peep of contradiction to their Nigh-Divine proclaimations on the nature of the universe...
Now - the blood of Numenor is spent...
I mean- uhhh...
You get the picture...
:crazy:
8. ### spuriousmonkey, Banned
Biology: [ENC]evolution[/ENC]
(testing)
Last edited: Mar 3, 2007
9. ### spuriousmonkey, Banned
Hmmm...
[ENC]regeneration[/ENC]
10. ### Nickelodeon, Banned
Test. Who the hell is responsible for [ENC]phonetic[/ENC]?
12. ### spuriousmonkey, Banned
check the history: phonetic
13. ### vslayer, Registered Senior Member
hmm, does it have an index anywhere?
14. ### The Devil Inside, Banned
man..im seriously resisting the urge to vandalize that wiki.
*restrains himself*
there should be a feature that hides the encyclopedia from view until you have 100 posts (a reasonable benchmark of activity and dedication to the forum itself).
i know people that would search for links like that to vandalize beyond all repair.
15. ### vslayer, Registered Senior Member
i think we need to liven up the main page. add the logo or something
16. ### Killjoy (Propelling The Farce!!), Valued Senior Member
The Sciforums logo is on the main page.
17. ### James R (Just this guy, you know?), Staff Member
This should be interesting...
Not a bad idea. I just hope it doesn't turn into a flame-fest.
18. ### Killjoy (Propelling The Farce!!), Valued Senior Member
Forgone conclusion, in'nit ?
19. ### spuriousmonkey, Banned
You can always revert back to an old version of the page if I am not mistaken.
21. ### spuriousmonkey, Banned
We can't upload pictures to the wiki. That means that this might lead to gaps in the articles.
22. ### Athelwulf (Rest in peace Kurt...), Registered Senior Member
I didn't see this thread until now. Amazing. Thanks Plazma!
Also, that ENC thingy is really spiffy. I saw it before while making a post today and was all "What the fuck is that?" and didn't want to click it to find out. Way to be creative with the software!
I noticed earlier that there's no direct link to the upload page. But are you sure the required page is there, but hidden somewhere? I haven't checked, but seeing as many of the other pages don't have direct links laying around where they're easily found, it wouldn't surprise me.
EDIT: Oh, I see what you mean now — it's disabled. Plazma, please enable it!
Last edited: Mar 4, 2007
23. ### Athelwulf (Rest in peace Kurt...), Registered Senior Member
I have encountered an odd glitch in the software. When I tried making an article for the Art & Culture subforum, it made an article called "Art" that had no content whatsoever. Apparently the ampersand breaks something.
I've worked around this by making these pages redirects to articles with the titles modified ([ENC]Art and Culture[/ENC]). But it really annoys me. And it's not listed at Wikipedia as a known technical restriction in article names.
Can this be fixed?
https://www.esaral.com/q/if-7th-and-13th-terms-of-an-a-p-be-34-and-64-respectively-then-its-18th-term-is-17204 | # If 7th and 13th terms of an A.P. be 34 and 64 respectively, then its 18th term is
Question:
If 7th and 13th terms of an A.P. be 34 and 64 respectively, then its 18th term is
(a) 87
(b) 88
(c) 89
(d) 90
Solution:
(c) 89
$a_{7}=34$
$\Rightarrow a+6 d=34$ ....(1)
Also, $a_{13}=64$
$\Rightarrow a+12 d=64$ ....(2)
Solving equations $(1)$ and $(2)$, we get:
a = 4 and d = 5
$\therefore a_{18}=a+17 d$
$=4+17(5)$
$=89$
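The arithmetic can be double-checked in a few lines of Python (solving the two equations, then evaluating $a_{18}$):

```python
# a + 6d = 34 and a + 12d = 64, so subtracting gives 6d = 30.
a7, a13 = 34, 64
d = (a13 - a7) // (13 - 7)  # common difference d = 5
a = a7 - 6 * d              # first term a = 4
a18 = a + 17 * d
print(a, d, a18)  # → 4 5 89
```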
http://blog.samuelmh.com/2016/02/statistical-interactions-testing.html | # samucoder
Samuel M.H. 's technological blog
Notebook
# Statistical Interactions (Testing a Potential Moderator)¶
Author: Samuel M.H. <samuel.mh@gmail.com> Date: 31-01-2016
## Instructions
The final assignment deals with testing a potential moderator. When testing a potential moderator, we are asking the question whether there is an association between two constructs for different subgroups within the sample.
Run an ANOVA, Chi-Square Test or correlation coefficient that includes a moderator.
## What to submit:
Following completion of the steps described above, create a blog entry where you submit syntax used to test moderation (copied and pasted from your program) along with corresponding output and a few sentences of interpretation.
## Dataset
In [19]:
%matplotlib inline
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
import scipy
import seaborn as sns
## Test of correlation
I am testing if there is a linear relationship (pearson correlation) between:
• Number of drinks of any alcohol usually consumed on days when drank alcohol in last 12 months.
• How many drinks one can hold without feeling intoxicated.
And if the sex is a moderator.
### Ingesting and curating the data
In [25]:
#Load
#Select
df1 = pd.DataFrame()
df1['drinks_usually'] = pd.to_numeric(data['S2AQ8B'],errors='coerce').replace(99, np.nan)
df1['drinks_till_drunk'] = pd.to_numeric(data['S2AQ11'],errors='coerce').replace(99, np.nan)
df1['sex'] = data['SEX']
df1 = df1.dropna()
print(df1.shape)
(19740, 3)
In [32]:
def corr(x,y,df):
r,p = scipy.stats.pearsonr(df[x],df[y])
print('Correlation coefficient (r): {0}'.format(r))
print('p-value: {0}'.format(p))
sns.lmplot(x=x, y=y, data=df)
In [33]:
corr('drinks_usually','drinks_till_drunk',df1)
Correlation coefficient (r): 0.439569778654
p-value: 0.0
There is a medium (0.44) positive correlation, and the p-value is essentially 0, so the association is very unlikely to be due to chance.
## Moderator
Let's test whether sex is a moderator of the relationship.
In [34]:
#Split data by sex
df_male = df1[(df1['sex']==1)]
df_female = df1[(df1['sex']==2)]
### Males
In [35]:
corr('drinks_usually','drinks_till_drunk',df_male)
Correlation coefficient (r): 0.369791024379
p-value: 5.81135568843e-311
### Females
In [36]:
corr('drinks_usually','drinks_till_drunk',df_female)
Correlation coefficient (r): 0.484640967594
p-value: 0.0
## Summary
Both results are significant but the variables are medium-weakly correlated (they don't really fit a linear model).
It is easily seen that the correlation is stronger in women when talking about alcohol tolerance.
So in this case, sex is a moderator of the relationship between the number of drinks a person usually drinks and the number of drinks a person can take before feeling intoxicated, because it affects the strength of the relationship.
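A natural next step would be to test whether the two correlations differ significantly rather than comparing them by eye. Below is a stdlib-only sketch using Fisher's r-to-z transformation; note that the male/female group sizes are placeholders (the notebook only reports the combined n of 19,740), so the printed numbers are illustrative:

```python
import math

def fisher_z_compare(r1, n1, r2, n2):
    """Two-sided z-test for the difference between two independent Pearson r's."""
    z1, z2 = math.atanh(r1), math.atanh(r2)
    se = math.sqrt(1.0 / (n1 - 3) + 1.0 / (n2 - 3))
    z = (z1 - z2) / se
    p = math.erfc(abs(z) / math.sqrt(2.0))  # two-sided p-value, 2 * (1 - Phi(|z|))
    return z, p

# Placeholder split of the 19,740 respondents into males and females.
z, p = fisher_z_compare(0.370, 9000, 0.485, 10740)
print(round(z, 2), p)
```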
http://www.researchgate.net/publication/43074583_Region-based_diagnostic_performance_of_multidetector_CT_for_detecting_peritoneal_seeding_in_ovarian_cancer_patients | Article
# Region-based diagnostic performance of multidetector CT for detecting peritoneal seeding in ovarian cancer patients
Department of Radiology, Asan Medical Center, University of Ulsan, Ulsan, Korea.
(Impact Factor: 1.36). 04/2010; 283(2):353-60. DOI: 10.1007/s00404-010-1442-0
Source: PubMed
ABSTRACT
To determine the accuracy of multi-detector CT (MDCT) compared with the surgical findings, such as peritoneal seeding and metastatic lymph nodes, in ovarian cancer patients.
Fifty-seven FIGO stage IA-IV ovarian cancer patients, who underwent MDCT before primary surgery, were included in this study. Two radiologists evaluated the following imaging findings in consensus: the presence of nodular, plaque-like or infiltrative soft-tissue lesions in peritoneal fat or on the serosal surface; presence of ascites; parietal peritoneal thickening or enhancement; and small bowel wall thickening or distortion. We also evaluated the presence of lymph node metastases. To allow region-specific comparisons, the peritoneal cavity was divided into 13 regions and retroperitoneal lymph nodes were divided into 3 regions. Descriptive statistical data were thus obtained.
The MDCT sensitivity, specificity, positive predictive values, and negative predictive values were 45, 72, 46, and 72%, respectively, for detecting peritoneal seeding and 21, 90, 52, and 69%, respectively, for detecting lymph node metastasis.
MDCT is moderately accurate for detecting peritoneal metastasis and lymph node metastasis in ovarian cancer patients.
• ##### Article: Retroperitoneal lymphadenectomy and survival of patients treated for an advanced ovarian cancer: The CARACO trial
ABSTRACT: The standard management for advanced-stage epithelial ovarian cancer is optimum cytoreductive surgery followed by platinum based chemotherapy. However, retroperitoneal lymph node resection remains controversial. The multiple directions of the lymph drainage pathway in ovarian cancer have been recognized. The incidence and pattern of lymph node involvement depends on the extent of the disease and the histological type. Several published cohorts suggest the survival benefit of pelvic and para-aortic lymphadenectomy. A recent large randomized trial have demonstrated the potential benefit for surgical removal of bulky lymph nodes in term of progression-free survival but failed to show any overall survival benefit because of a critical methodology. Further randomised trials are needed to balance risks and benefits of systematic lymphadenectomy in advanced-stage disease. CARACO is a French ongoing trial, built to bring a reply to this important question. A huge effort for inclusion of the patients, and involving new teams, are mandatory.
Journal de Gynécologie Obstétrique et Biologie de la Reproduction 05/2011; 40(3):201-4. DOI:10.1016/j.jgyn.2011.02.009 · 0.56 Impact Factor
• ##### Article: State-of-the-Art Imaging of Peritoneal Carcinomatosis
ABSTRACT: Imaging studies are essential in the evaluation of patients with suspected or known peritoneal malignancy. Despite major advances in imaging technology in the last few years, the early and adequate detection of a peritoneal dissemination remains challenging because of the great variety in size, morphology and location of the peritoneal lesions. New therapeutic approaches in peritoneal-based neoplasms combining cytoreductive surgery and peritonectomy with hyperthermic intraoperative chemotherapy (HIPEC) suggest improved long-term survival, provided that a complete (macroscopic) cytoreduction is achieved. The preoperative radiological assessment of the extent and distribution of peritoneal involvement plays a vital role in the patient selection process. Despite its known limited accuracy in detecting small peritoneal lesions and the involvement of the small bowel/mesentery, contrast-enhanced MDCT remains the standard imaging modality in the assessment of peritoneal carcinomatosis. MRI, especially with diffusion-weighted images, and FDG-PET/CT are promising methods for the evaluation of peritoneal carcinomatosis with superior results in recent studies, but still have a limited role in selected cases because of high costs and limited availability. Generally, to obtain the most precise readings of peritoneal carcinomatosis, an optimized examination protocol and dedicated radiologists with a deep knowledge of peritoneal pathways and variable morphologies of peritoneal disease are required.
RöFo - Fortschritte auf dem Gebiet der R 12/2011; 184(3):205-13. DOI:10.1055/s-0031-1281979 · 1.40 Impact Factor
• ##### Article: In Vivo Dual-Modality Terahertz/Magnetic Resonance Imaging Using Superparamagnetic Iron Oxide Nanoparticles as a Dual Contrast Agent
ABSTRACT: Molecular imaging is one of the most promising tools for diagnosis of cancer. We assessed whether commercially available superparamagnetic iron oxide nanoparticles (SPIO; Feridex®) could be utilized as a dual modality contrast agent for terahertz (THz) imaging as well as magnetic resonance (MR) imaging. Feridex particles were transfected into SKOV3 cancer cells, at concentrations of 0, 0.35, 0.70, and 1.38 mM, and the magnetic and optical properties of the particles were examined by MR and THz reflection imaging. Mice were inoculated with Feridex-labeled SKOV3 cells, and in vivo MR and THz images were taken 1, 3, 7, and 14 days after inoculation. THz images and T2*-weighted MR images of Feridex-labeled SKOV3 tumors showed similar patterns; the signal intensities of both image sets increased with Feridex concentration. The signal intensity of in vivo MR and THz images from mice decreased over time. H&E and Prussian blue staining results correlated with imaging data. Dual-modality molecular MR and THz imaging of Feridex-labeled cells may be used to identify cancer cells both in vivo and in vitro. Such a noninvasive multimodal imaging method may be valuable in future cellular and molecular studies.
IEEE Transactions on Terahertz Science and Technology 01/2012; 2(1):93-98. DOI:10.1109/TTHZ.2011.2177174 · 2.18 Impact Factor
https://oatao.univ-toulouse.fr/19596/ | # Integrated design of flight control surfaces and laws for new aircraft configurations
Denieul, Yann and Bordeneuve-Guibé, Joël and Alazard, Daniel and Toussaint, Clément and Taquin, Gilles Integrated design of flight control surfaces and laws for new aircraft configurations. (2017) In: IFAC World Congress 2017, 9 July 2017 - 14 July 2017 (Toulouse, France).
Official URL: http://dx.doi.org/10.1016/j.ifacol.2017.08.2085
## Abstract
Control architecture sizing is a main challenge for new aircraft designs such as the blended wing-body. This aircraft configuration typically features redundant elevons located at the trailing edge of the wing, acting simultaneously on the pitch and roll axes. The problem of integrated design of control surface sizes and flight control laws for an unstable blended wing-body aircraft is addressed here. The latest tools for $H_\infty$ non-smooth optimization of structured controllers are used to optimize in a single step the gains for both longitudinal and lateral control laws, and a control allocation module, while minimizing total control surface span. The following constraints are ensured: maximal deflection angles and rates for 1) a pilot longitudinal pull-up, 2) a pilot bank angle order, and 3) longitudinal turbulence. Using this coupled approach, significant gains in terms of outer elevon span compared to the initial layout are demonstrated, while closed-loop handling qualities constraints are guaranteed.
Item Type: Conference or Workshop Item (Paper)
Volume: Vol. 50, n°1
HAL ID: hal-01738089
Audience: International conference proceedings
Partners: Airbus (FRANCE); Université de Toulouse > Institut Supérieur de l'Aéronautique et de l'Espace - ISAE-SUPAERO (FRANCE); Office National d'Etudes et Recherches Aérospatiales - ONERA (FRANCE)
Deposited: 20 Mar 2018 09:00
https://amarder.github.io/diamonds/ | How to Buy a Diamond
Looking to buy a diamond? This how-to guide describes three steps I took to identify great deals on Blue Nile. “Founded in 1999, Blue Nile has grown to become the largest online retailer of certified diamonds and fine jewelry.” The code used in this analysis is available on GitHub. This guide proceeds as follows:
1. download data from Blue Nile,
2. model price as a function of diamond characteristics, and
3. identify diamonds with extra low prices.
Downloading Data
I’ve written a Python script to make downloading data from Blue Nile easy. The script has been posted here. To download data on all round diamonds on Blue Nile use the following command:
python download.py --shape RD > my-diamonds.csv
For more information on the optional arguments the script accepts use:
python download.py --help
Most of the download script is pretty easy to follow. Blue Nile is using Apache Solr to serve JSON documents describing diamonds on the site. The trickiest part is that you can only get information on the first 1,000 diamonds for each query; Blue Nile has limited how far we can page through results. To work around this constraint, the download script pages through results based on price. I only mention this in case you want to dig deeper into the download script.
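The windowed-paging idea can be sketched in a few lines. This is an illustration of the strategy only, not the script's actual code, and the `fetch` signature (`min_price`, `max_price`, `limit`) is made up:

```python
def page_by_price(fetch, lo, hi, page_size=1000):
    """Walk a price range in windows so no single query exceeds the 1,000-result cap.

    Assumes integer prices and a fetch callable returning results sorted by price.
    """
    results = []
    while lo <= hi:
        batch = fetch(min_price=lo, max_price=hi, limit=page_size)
        results.extend(batch)
        if len(batch) < page_size:
            break  # final window: fewer results than the cap
        lo = max(item["price"] for item in batch) + 1  # advance past this window
    return results

# Tiny demo against a fake backend with one diamond per integer price.
def fake_fetch(min_price, max_price, limit):
    return [{"price": p} for p in range(min_price, max_price + 1)][:limit]

print(len(page_by_price(fake_fetch, 1, 2500)))  # → 2500
```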
On November 6, 2015, I downloaded data on all 98,886 round diamonds on Blue Nile. Below is a plot of diamond price versus carat weight (both on log scales).
Modeling Price
Blue Nile’s buying guide describes how the four C’s (cut, color, clarity, and carat weight) are the most important characteristics when buying a diamond. It seems reasonable to model price as a function of those four characteristics. Having played around with the data bit, a multiplicative model seems like a good choice. I model price as a product of carat weight raised to the power $\beta$ times multipliers for the cut, color, and clarity of the diamond $$price_i \propto carat_i^\beta \cdot cut_i \cdot color_i \cdot clarity_i.$$ Taking $\log$’s of both sides allows this model to be estimated using a linear regression $$\log(price_i) = \alpha + \beta \log(carat_i) + \delta_{cut_i} + \delta_{color_i} + \delta_{clarity_i} + \epsilon_i.$$ Focusing on diamonds weighing between 1.00 and 1.99 carats, we can see the relationship between $\log(price_i)$ and $\log(carat_i)$ is remarkably linear, with diamond color shifting the intercept but not the slope of the relationship.
Below is a summary of the fitted linear model. Generally, I put very little weight on R-squared values, but this model explains 91.5% of the observed variance in log price!
##
## Call:
## lm(formula = fstring, data = diamonds)
##
## Residuals:
## Min 1Q Median 3Q Max
## -0.58043 -0.09121 0.00130 0.08608 0.62971
##
## Coefficients:
## Estimate Std. Error t value Pr(>|t|)
## (Intercept) 7.904912 0.004815 1641.74 <2e-16 ***
## log(carat) 1.695256 0.005421 312.69 <2e-16 ***
## cut2 0.067069 0.003694 18.16 <2e-16 ***
## cut3 0.135940 0.003307 41.11 <2e-16 ***
## cut4 0.352969 0.005307 66.51 <2e-16 ***
## color2 0.161501 0.003914 41.26 <2e-16 ***
## color3 0.300675 0.003900 77.10 <2e-16 ***
## color4 0.409028 0.003890 105.16 <2e-16 ***
## color5 0.505657 0.003945 128.16 <2e-16 ***
## color6 0.587193 0.003949 148.71 <2e-16 ***
## color7 0.739886 0.004113 179.90 <2e-16 ***
## clarity2 0.155532 0.003710 41.92 <2e-16 ***
## clarity3 0.275004 0.003591 76.58 <2e-16 ***
## clarity4 0.341406 0.003642 93.74 <2e-16 ***
## clarity5 0.400936 0.003921 102.26 <2e-16 ***
## clarity6 0.498296 0.004064 122.61 <2e-16 ***
## clarity7 0.642261 0.005277 121.70 <2e-16 ***
## clarity8 0.952975 0.021096 45.17 <2e-16 ***
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Residual standard error: 0.1408 on 19997 degrees of freedom
## Multiple R-squared: 0.9153, Adjusted R-squared: 0.9152
## F-statistic: 1.27e+04 on 17 and 19997 DF, p-value: < 2.2e-16
Exponentiating the coefficients from the regression model gives estimates of the price multipliers associated with different diamond characteristics. These multipliers can help a shopper decide what type of diamond to consider. The omitted categories (cut = Good, color = J, and clarity = SI2) have implicit coefficients of 0 and price multipliers of 1. Is a G-color diamond worth 1.51 times the price of a J-color diamond with the same cut, clarity, and carat weight?
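For instance, exponentiating the color coefficients from the summary gives the multipliers directly (a small sketch; the mapping of color2 through color7 to the letter grades I through D is my assumption, anchored by the 1.51x figure for G quoted above):

```python
import math

# Coefficients copied from the regression summary above; the omitted
# category (color J) has an implicit coefficient of 0 and multiplier 1.0.
# Mapping color2..color7 -> I..D is an assumption based on the grade ordering.
color_coefs = {"I": 0.161501, "H": 0.300675, "G": 0.409028,
               "F": 0.505657, "E": 0.587193, "D": 0.739886}
multipliers = {grade: round(math.exp(b), 2) for grade, b in color_coefs.items()}

print(multipliers["G"])  # 1.51
```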
Identifying Deals
Having read Blue Nile’s buying guide a few times, they’ve convinced me to care about all four of the four C’s. When purchasing a diamond, the following cut, color, and clarity are my baseline:
• $cut_i \ge$ Ideal: Represents roughly the top 3% of diamond quality based on cut. Reflects nearly all light that enters the diamond. An exquisite and rare cut.
• $color_i \ge$ H: Near-colorless. Color difficult to detect unless compared side-by-side against diamonds of better grades. An excellent value.
• $clarity_i \ge$ VS1: Very Slightly Included: Imperfections are not typically visible to the unaided eye. Less expensive than the VVS1 or VVS2 grades.
Below I plot the diamonds that meet my baseline. Fitting a linear relationship between $\log(price_i)$ and $\log(carat_i)$, I highlight the best 1% of deals, the diamonds where the difference between expected and actual price is greatest $$\alpha + \beta \log(carat_i) - \log(p_i) = -\epsilon_i.$$
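The residual screen itself is a few lines of code. Below is a self-contained sketch on synthetic listings (with the real data, the fit would run on the filtered diamonds described above):

```python
import math
import random

random.seed(1)

# Synthetic listings: log price is linear in log carat plus idiosyncratic noise
carats = [random.uniform(1.00, 1.99) for _ in range(1000)]
log_prices = [8.0 + 1.7 * math.log(c) + random.gauss(0, 0.15) for c in carats]

# Fit alpha + beta * log(carat) by OLS
xs = [math.log(c) for c in carats]
n = len(xs)
mx, my = sum(xs) / n, sum(log_prices) / n
beta = sum((x - mx) * (y - my) for x, y in zip(xs, log_prices)) / sum((x - mx) ** 2 for x in xs)
alpha = my - beta * mx

# Residual = actual minus expected log price; the most negative ones are deals
residuals = [y - (alpha + beta * x) for x, y in zip(xs, log_prices)]
best_1pct = sorted(range(n), key=lambda i: residuals[i])[: max(1, n // 100)]

print(len(best_1pct), all(residuals[i] < 0 for i in best_1pct))  # 10 True
```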
The table below describes the top 10 diamonds found using my criteria. All of these diamonds have an ideal cut and H-color.
Residual Clarity Carat Price
-0.63 VS1 1.32 7,238
-0.61 VS1 1.15 5,830
-0.59 VS1 1.30 7,344
-0.59 VS1 1.30 7,346
-0.59 VVS2 1.62 10,866
-0.59 VS1 1.11 5,575
-0.58 VS1 1.22 6,623
-0.58 VS1 1.15 5,999
-0.58 VS1 1.21 6,566
-0.57 VVS2 1.32 7,691
Disclaimer: This is one way to identify deals. A more general solution would allow shoppers to enter preference parameters similar to the regression coefficients found above. Taking preference parameters $\beta$, the best deals would maximize the shopper’s utility $$u(X_i, p_i) = X_i \beta - p_i.$$ | 2017-08-18 20:01:21 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2976648509502411, "perplexity": 6333.019441570952}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-34/segments/1502886105108.31/warc/CC-MAIN-20170818194744-20170818214744-00104.warc.gz"} |
https://www.yaklass.by/p/english-language/5-9-klass/vocabulary-6486/appearance-24224/re-4c7fd4ad-6853-4256-929c-715013a8724a | ### Task:
3 points.
Fig. 1. Short haircut
What is appearance? Appearance is the way a person looks. People can look very different: skinny or plump, tall or short, well-built or fit, bald or curly-haired. People have wrinkles, moles and freckles on their faces.
Hair can be very different: some people prefer cutting it short, some let it grow long. Some men shave their heads and are bald. Some people dye their hair, and some like its natural colour — blonde, brown, red or black. Often when we talk about senior people, we call their hair grey or 'salt and pepper' hair.
There is a proverb that says that the eyes are the windows of the soul. Probably it's true. People's eyes can be brown, blue, black or green. Some people have different-coloured eyes, for example, one blue and one brown. It's called heterochromia.
Some people are pale and some are dark-skinned. If you are very pale, be careful in summer — you can get sunburned in no time.
We call the manner of walking the walk or pace. Some people limp, some shuffle, some stumble.
Write the missing words into the gaps:
1. People have wrinkles, moles and on their faces.
2. Often when we talk about senior people, we call their hair or 'salt and pepper' hair.
3. We call the manner of walking the walk or .
Sources:
Fig. 1. Short haircut. Pixabay License CC0. https://pixabay.com/images/id-919048/ (Accessed 7.10.2021) | 2023-03-30 05:21:29 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3189034163951874, "perplexity": 10330.308923138226}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296949097.61/warc/CC-MAIN-20230330035241-20230330065241-00190.warc.gz"}
https://metric2011.wordpress.com/2012/09/28/notes-of-itai-benjaminis-lecture-nr-3/ | ## Notes of Itai Benjamini’s lecture nr 2
Today we study a series of examples of random perturbations of homogeneous spaces. Typically, the perturbation depends on one real parameter. When the parameter reaches a critical value, the spaces obtained are more exotic.
1. Variants of percolation
1.1. First passage percolation
Theorem 1 (Richardson) For the ${{\mathbb Z}^d}$ grid, for exponentially distributed lengths, large balls converge to a deterministic convex centrally symmetric shape.
Simulation indicates, and Kesten proved in high dimensions, that the limiting shape is not round. It is distribution dependent. It can have flat parts.
Questions.
1. How do geodesics between two points behave? In high dimensions, they should be concentrated along a line.
2. Variance ${\sim n^{2/3}}$? Known: variance ${\leq \frac{n}{\log n}}$.
3. How do geodesic rays behave?
4. (Furstenberg) No two-sided infinite geodesics?
Very few results for ${{\mathbb Z}^d}$. Only trees are well understood.
1.2. Long range percolation
The random metric performs some averaging, so the perturbation is mild. Next we study stronger perturbation, where the underlying Euclidean geometry does not fully disappear, but fades away.
Definition 2 Start with ${n}$-cycle. Add an edge between ${i}$ and ${j}$ with probability ${\beta|i-j|^{-s}}$.
${s>2}$ does not change geometry much, diameter stays linear in ${n}$.
${s=2}$ is the critical case, see below.
${1<s<2}$ (Biskup), diameter is polylog.
${s=1}$ (Coppersmith, Gamarnik, Sviridenko) diameter is ${\frac{\log n}{\log\log n}}$.
${s<1}$ diameter is bounded.
Theorem 3 (Sly and Ding 2012) When ${s=2}$, diameter is ${\Theta(n^{f(\beta)})}$, where ${f}$ is strictly between 0 and 1.
Not even clear whether ${f}$ is increasing, or continuous.
These random graphs look very different from vertex transitive graphs. Even in the subcritical case, ${1<s<2}$, the mixing time of the simple random walk is ${n^{s-1}}$. Plenty of bottlenecks. Critical case even worse.
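These regimes are easy to explore numerically. The sketch below (not from the lecture) builds long-range percolation on the ${n}$-cycle and measures the diameter by breadth-first search; with a large exponent ${s}$ the long edges are rare and the diameter stays comparable to ${n}$, while a small ${s}$ collapses it:

```python
import random
from collections import deque

def lrp_diameter(n, s, beta=1.0, seed=0):
    """Long-range percolation on the n-cycle: keep all cycle edges, and add
    an edge between i and j with probability min(1, beta * d(i, j)^(-s))."""
    rng = random.Random(seed)
    adj = [set() for _ in range(n)]
    for i in range(n):                       # the cycle itself
        adj[i].add((i + 1) % n)
        adj[(i + 1) % n].add(i)
    for i in range(n):                       # random long-range edges
        for j in range(i + 2, n):
            d = min(j - i, n - (j - i))      # graph distance on the cycle
            if d >= 2 and rng.random() < min(1.0, beta * d ** (-s)):
                adj[i].add(j)
                adj[j].add(i)

    def eccentricity(src):                   # BFS from src
        dist = {src: 0}
        queue = deque([src])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    queue.append(v)
        return max(dist.values())

    return max(eccentricity(v) for v in range(n))

print(lrp_diameter(200, s=4.0), lrp_diameter(200, s=1.2))
```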
1.3. CCCP
CCCP means Contracting clusters in critical percolation.
One contracts each cluster in a bond percolation on ${{\mathbb Z}^d}$. For ${p<p_c}$, this is probably quasi-isometric to ${{\mathbb Z}^d}$. But for ${p=p_c}$, it looks different. Locally, vertices have high degrees. Volume growth is huge (Benjamini-Gurel-Gurevich-Kozma), but there are no non-constant bounded harmonic functions.
2. Random planar graphs
Motivated by quantum gravity.
2.1. Random subdivisions
Toy model: Start with the unit square, divide it into 4 squares, pick one at random and divide it, and iterate.
Conjecture. Diameter (minimal number of squares in a chain) grows at a deterministic speed.
This would imply that the limiting random planar length space is non trivial.
2.2. Stationary graphs
Analogue of stationary processes. Weakening of vertex transitivity.
Definition 4 A distribution on rooted graphs is stationary if invariant under re-rooting at the points of a simple random walk path.
Samples have polynomial volume growth; some examples have superquadratic volume growth, i.e. they are full of mushrooms. Nevertheless,
Theorem 5 (Benjamini-Papasoglu) Any doubling planar graph has linear size cuts at all scales, i.e. for all ${n}$, there is a plane domain, squeezed between ${B(o,n)}$ and ${B(o,6n)}$, with linear boundary length.
Conjecture. On such graphs, random walk is sub-diffusive. Indeed, it will probably get trapped in the deep mushrooms.
3. Local convergence
Definition 6 A random rooted graph ${G}$ is the local limit of a sequence of random finite graphs ${G_n}$ if, for every radius ${r}$, the law of the ball of radius ${r}$ around a uniformly chosen root of ${G_n}$ converges to the law of the ball of radius ${r}$ around the root of ${G}$.
Example 1 ${G_n=}$ binary tree of depth ${n}$.
Since a random vertex is close to the leaves, the limit is the canopy tree: a half-line with a bush of depth ${n}$ at vertex ${n}$, and a distribution on roots.
Theorem 7 (Benjamini-Schramm) Limits of bounded degree finite planar graphs are almost surely recurrent. | 2018-03-23 11:10:50 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 33, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8290891647338867, "perplexity": 2146.2007705998913}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-13/segments/1521257648207.96/warc/CC-MAIN-20180323102828-20180323122828-00755.warc.gz"} |
http://math.stackexchange.com/questions/176010/is-a-polynomial-solvable-by-roots-iff-every-irreducible-factor-is | # Is a polynomial solvable by roots iff every irreducible factor is?
Let $F$ be a field, I asked myself if $p(x)\in F[x]$ is solvable by radicals iff every irreducible factor is solvable by radicals.
My thoughts: If every irreducible factor is solvable by roots, then this implies that there are field extensions that are solvable by roots. I would like to use the simple fact that the composition field of solvable extensions is also solvable, but in my case the extensions I have are not subfields of one field.
I don't know about the other direction, and I have a feeling it is not true but due to lack of examples that I know I can't think of a counter example.
Is this statement correct ? If so, how can we prove it ?
-
Each individual root is by definition so expressible. Is anything more wanted? – André Nicolas Jul 27 '12 at 21:52
it is more clear in this way, but I was thinking of how to do this in terms of the definition (and not of existance of formula). i.e how can we show that the galois group is solvable ? (and I still don't know about the other direction...) – Belgi Jul 27 '12 at 21:56
## 2 Answers
Let $L$ be a splitting field for $fg$, let $K$ be a subfield of $L$ and a splitting field for $f$. Assume $G_{fg}$, the Galois group of $L$ over $F$, is solvable. Galois Theory says that the Galois group of $K$ over $F$ is the quotient of $G_{fg}$ by the Galois group of $L$ over $K$, and group theory says a quotient group of a solvable group is solvable, so the group of $K$ over $F$ is solvable.
-
For a polynomial $f \in \mathbf{Q}[x]$, we say $f$ is "solvable by radicals" if the Galois group $G_f$ of the splitting field of $f$ is a solvable group.
Now $G_{fg}$ is a subgroup of $G_f \times G_g$, by one of the corollaries to the Fundamental Theorem of Galois Theory. We also know that:
1. The direct product of solvable groups is solvable;
2. Subgroups of solvable groups are solvable.
Thus if $f,g$ are solvable by radicals, so is $fg$.
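To spell out the embedding used here (a sketch; write $L_f, L_g$ for the splitting fields of $f$ and $g$ inside a splitting field $L_{fg}$ of $fg$, so that $L_{fg} = L_f L_g$):

```latex
% Restriction to each subfield gives a group homomorphism
\varphi : G_{fg} \longrightarrow G_f \times G_g, \qquad
\varphi(\sigma) = \bigl(\sigma|_{L_f}, \; \sigma|_{L_g}\bigr).
% It is well defined: L_f and L_g are splitting fields, hence normal over
% the base field, so every \sigma \in G_{fg} maps each of them to itself.
% It is injective: if \sigma restricts to the identity on both L_f and L_g,
% then \sigma fixes the compositum L_f L_g = L_{fg}, so \sigma = \mathrm{id}.
```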
What about the converse?
-
Can you please explain why $G_{fg}$ is a subgroup of $G_f \times G_g$ ? I didn't learn this in class. – Belgi Jul 27 '12 at 21:59
The splitting field of $fg$ is the compositum of the splitting fields of $f$ and $g$. Do you know how to compute the Galois group of a compositum? – Bruno Joyal Jul 27 '12 at 22:05
No, I didn't learn that either (Although I do know why the composition is solvable) – Belgi Jul 27 '12 at 22:10
Well, if you know that the compositum of solvable extensions is solvable, then you're set! It's what I showed above. – Bruno Joyal Jul 27 '12 at 22:12
At first I didn't understand why we can take the composition, but than I figured that the both splitting fields ar subfields of the splitting fields of the product of the polynomials, so I this direction is ok. It is still interesting that $G_{fg}$ is a subgroup of $G_f \times G_g$, is it easy to prove ? do you have any idea about the other direction ? thank you for your time and help (+1) – Belgi Jul 27 '12 at 22:15 | 2015-08-02 02:46:57 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9015998244285583, "perplexity": 120.8223987398537}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-32/segments/1438042988930.94/warc/CC-MAIN-20150728002308-00166-ip-10-236-191-2.ec2.internal.warc.gz"} |
http://ksp-kos.github.io/KOS_DOC/structures/communication/message.html | # Message¶
Represents a single message stored in a CPU’s or vessel’s MessageQueue.
The main message content that the sender intended to send can be retrieved using Message:CONTENT attribute. Other suffixes are automatically added to every message by kOS.
Messages are serializable and thus can be passed along:
// if there is a message in the ship's message queue
// we can forward it to a different CPU
// cpu1
SET CPU2 TO PROCESSOR("cpu2").
CPU2:CONNECTION:SENDMESSAGE(SHIP:MESSAGES:POP).
// cpu2
PRINT "Original message sent at: " + RECEIVED:CONTENT:SENTAT.
## Structure¶
structure Message
Suffix Type Description
SENTAT TimeSpan date this message was sent at
RECEIVEDAT TimeSpan date this message was received at
SENDER Vessel or Boolean vessel which has sent this message, or Boolean false if sender vessel is now gone
HASSENDER Boolean Tests whether or not the sender vessel still exists.
CONTENT Structure message content
Note
This type is serializable.
Message:SENTAT
Type: TimeSpan
Date this message was sent at.
Message:RECEIVEDAT
Type: TimeSpan
Date this message was received at.
Message:SENDER
Type: Vessel or Boolean
Vessel which has sent this message, or a boolean false value if the sender vessel no longer exists.
If the sender of the message doesn’t exist anymore (see the explanation for HASSENDER), this suffix will return a different type altogether. It will be a Boolean (which is false).
You can check for this condition either by using the HASSENDER suffix, or by checking the :ISTYPE suffix of the sender to detect if it’s really a vessel or not.
Message:HASSENDER
Type: Boolean
Because there can be a delay between when the message was sent and when it was processed by the receiving script, it’s possible that the vessel that sent the message might not exist anymore. It could have either exploded, or been recovered, or been merged into another vessel via docking. You can check the value of the :HASSENDER suffix to find out if the sender of the message is still a valid vessel. If HASSENDER is false, then SENDER won’t give you an object of type Vessel and instead will give you just a Boolean false.
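A receiving script can guard against this case by testing HASSENDER before touching SENDER. A short kerboscript sketch (the message-handling logic is made up for illustration):

```
IF NOT CORE:MESSAGES:EMPTY {
    SET MSG TO CORE:MESSAGES:POP.
    IF MSG:HASSENDER {
        PRINT "From vessel: " + MSG:SENDER:NAME.
    } ELSE {
        PRINT "Sender vessel is gone, but the content is still readable:".
        PRINT MSG:CONTENT.
    }
}
```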
Message:CONTENT
Type: Structure
Content of this message. | 2019-08-23 13:11:44 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3985133469104767, "perplexity": 3452.9990225855076}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027318421.65/warc/CC-MAIN-20190823130046-20190823152046-00544.warc.gz"} |
https://discourse.mc-stan.org/t/initialization-failed-relation-between-priors-and-bounds/23275 | # Initialization failed: relation between priors and bounds
I am having problems in the initialization phase of a collection of models. I have a likelihood for my data (generalized extreme value, GEV) and I am modeling some parameters as time series -sometimes I consider the parameters constant, others autoregressive of order 1 and others I include a seasonality term-, which I believe to be where the problem lies.
The following model -an example of the set- is failing to initialize. The location parameter is an autoregressive model of order 1 (AR(1)) with a seasonality term of order 12. The scale parameter is an AR(1).
#include ../gev_vvv.stan
data {
int<lower=0> N;
vector<lower=0>[N] y;
int idxs[N];
real<lower=3> nu; // Number of sigmas to avoid zero valued parameters
}
parameters {
// Location
real<lower=0> locm;
vector<lower=0>[N] loct;
real<lower=0> sigmal;
real<lower=0, upper=1> Phil;
real<lower=0, upper=1> Laml;
// Scale
real<lower=0> escm;
vector<lower=0>[N] esct;
real<lower=0> sigmae;
real<lower=0, upper=1> Phie;
// Shape
real<lower=-.5, upper=.5> form;
}
transformed parameters {
vector<lower=-.5, upper=.5>[N] fort;
// LOCATION
real<lower=0, upper=1> sigmaol= nu * sigmal / sqrt((1 - Phil^2) * (1 - Laml^2)) / locm;
real<lower=0> sigmasl = sqrt(sigmal^2 / (1 - Laml^2));
// SCALE
real<lower=0, upper=1> sigmaoe = nu * sigmae / sqrt((1 - Phie^2)) / escm;
// SHAPE
real<lower=0, upper=1> formplushalf = form + .5;
for (i in 1:N) {
fort[i] = form;
}
}
model {
// LOCATION
locm ~ gamma(5, 1./20);
sigmaol ~ beta(2, 4);
Phil ~ beta(2, 4);
Laml ~ beta(2, 4);
// SCALE
escm ~ gamma(4, 1./10);
sigmaoe ~ beta(2, 4);
Phie ~ beta(2, 4);
// SHAPE
formplushalf ~ beta(4, 4);
// LOCATION
loct[1] ~ normal(locm, sigmaol * locm / nu);
loct[2:13] ~ normal(Phil * loct[1:12] + (1. - Phil) * locm, sigmasl);
loct[14:N] ~ normal(Phil * loct[13:(N-1)] + Laml * loct[2:(N-12)] - Phil * Laml * loct[1:(N-13)] + (1. - Phil - Laml + Phil * Laml) * locm, sigmal);
// SCALE
esct[1] ~ normal(escm, sigmaoe * escm / nu);
esct[2:N] ~ normal(Phie * esct[1:(N-1)] + (1. - Phie) * escm, sigmae);
// SHAPE
y[idxs] ~ gev(loct[idxs], esct[idxs], fort[idxs]);
// LOCATION
target += -log(sigmaol) + log(sigmal);
// SCALE
target += -log(sigmaoe) + log(sigmae);
// SHAPE
}
I am afraid that I may have severely misunderstood some of the concepts related to bounds, priors and changes of variables, and I cannot wrap my head around the problem, so I would really appreciate any helping hand.
My thinking goes as follows:
When I define a parameter as an AR(1), this implies that I have the following parameters:
(1) the average of the process (\mu_m, locm in the code)
(2) the autocorrelation coefficient (\Phi_l, Phil in the code)
(3) the standard deviation of the noise process (\sigma_l, sigmal in the code).
In the example I have an additional parameter introduced by the seasonal component, but its behavior is similar, so I exclude it from this explanation.
The standard deviation of the AR(1) is \sigma^* = \dfrac{\sigma_l}{\sqrt{1 - \Phi^2}} and assuming my AR(1) process is going to always be within 5 \sigma^* of the mean[^1], I can define \sigma_{ol} = \dfrac{5 \sigma_l}{\mu_{m}\sqrt{1 - \Phi^2}}, bound it to be between 0 and 1, and give it a beta prior distribution.
[^1]: I know this it not always true, but I believe it may be a good enough approximation -I may be wrong though-.
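The stationary standard deviation \sigma^* = \sigma_l / \sqrt{1 - \Phi^2} is easy to confirm numerically, which can help when debugging scalings like this one (a pure-Python check, separate from the Stan model):

```python
import math
import random

def ar1_sd(phi, sigma, n=200_000, burn=1_000, seed=0):
    """Empirical standard deviation of x_t = phi * x_{t-1} + eps_t,
    eps_t ~ Normal(0, sigma), after discarding a burn-in period."""
    rng = random.Random(seed)
    x, samples = 0.0, []
    for t in range(n + burn):
        x = phi * x + rng.gauss(0.0, sigma)
        if t >= burn:
            samples.append(x)
    mean = sum(samples) / len(samples)
    return math.sqrt(sum((v - mean) ** 2 for v in samples) / len(samples))

phi, sigma = 0.6, 0.5
print(round(ar1_sd(phi, sigma), 2), round(sigma / math.sqrt(1 - phi ** 2), 3))
```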
If I give a prior and bounds to \mu_m, \Phi and \sigma_{ol}, does Stan automatically sample \sigma_l in such a way that all bounds are respected? Or as \sigma_l is not upper bounded it will be able to get \infty values despite all the specified priors?
I do not understand why when Stan tries to initialize \sigma_{ol}, it is producing values larger than 1, which in my mind should be impossible as the variable is bounded between 0 and 1, and assigned a beta prior. What have I not understood here? [I know the answer would be _so many things_ , but I would really appreciate a minimal set here :D]
Also, as I am giving a prior to \sigma_{ol} instead of to \sigma_{l} I am including the determinant of the jacobian of the inverse transformation. Is that the correct procedure? Because in some cases, the models seem to initialize if I don’t include the determinant, which seems counterintuitive to me.
I hope to have exposed my case clearly, but if not, please feel free to require as many clarifications as needed.
I really appreciate any help and hint that you can give me.
If I understand correctly, \sigma_{ol} is one of the transformed parameters, so Stan isn’t explicitly initializing this at all. Stan only initializes variables declared in the parameters block. To clarify, the initialization procedure that Stan does automatically behind the scenes is as follows:
1. For every variable in the parameters block:
1a. Generate an initial value by sampling a uniform distribution from -2:2 (at least, that’s the default range if you don’t supply a value to cmdstan’s init argument).
1b. If the variable is unbounded, use that value; if the variable is bounded, impose the bounds by applying a transform that achieves the desired bounds, automatically adding the Jacobian associated with that transform so that the user can supply a prior for the output bounded variable.
2. For every variable in the transfomed parameters block:
2a. Compute the variable given its relation to the initialized parameters.
2b. If the variable is bounded check that the value of the variable as computed falls within the bounds; if the check fails, terminate the initialization attempt and start back at 1 (generating new random inits for all parameters).
So as a user you’re responsible for making sure that the way you derive transformed parameters from the parameters will yield a value that meets the bounds you desire for the TP. Since there’s a re-try loop, there’s a bit of wiggle-room here in that initialization can succeed even when the P->TP relationship doesn’t fully guarantee the TP satisfies its bounds so long as there’s a sufficiently high likelihood that it will, but that can be fragile workflow-wise and I find it easiest to only ever use transforms I know are guaranteed to yield the desired bounding behaviour.
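The retry loop can be caricatured in a few lines of pure Python (a sketch only: exp and inverse logit stand in for Stan's lower-bound and interval transforms, and the transformed parameter mimics the \sigma_{ol}-style ratio from the question):

```python
import math
import random

def try_initialize(make_tp, tp_ok, attempts=100, seed=0):
    """Mimic Stan's default init: draw unconstrained values in (-2, 2),
    map them to constrained parameters, compute transformed parameters,
    and retry with fresh draws until the bound checks pass."""
    rng = random.Random(seed)
    for attempt in range(1, attempts + 1):
        raw = [rng.uniform(-2.0, 2.0) for _ in range(2)]
        params = {
            "sigma": math.exp(raw[0]),                 # lower=0 via exp
            "phi": 1.0 / (1.0 + math.exp(-raw[1])),    # (0,1) via inverse logit
        }
        tps = make_tp(params)
        if tp_ok(tps):
            return attempt, params, tps
    raise RuntimeError("initialization failed after %d attempts" % attempts)

# Toy analogue of sigmaol = nu * sigma / (mu * sqrt(1 - phi^2)) with a (0,1)
# bound: nothing about the transform guarantees the bound, so some draws fail.
make_tp = lambda p: {"sigmaol": 5.0 * p["sigma"] / (10.0 * math.sqrt(1.0 - p["phi"] ** 2))}
tp_ok = lambda t: 0.0 < t["sigmaol"] < 1.0

attempt, params, tps = try_initialize(make_tp, tp_ok)
print(attempt, 0.0 < tps["sigmaol"] < 1.0)
```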
To discern where your transforms might be going awry, try removing the bounds on the TPs but add print statements immediately after their computation to show the parameter values and the TP values; that should show you which TPs are falling outside your expected bounds. From there you can check that you did your computations as intended and possibly motivate either different bounds or different computations.
Thank you very much @mike-lawrence. Indeed, the problem is that I thought that Stan also initialized transformed parameters. The transformation I was making did not ensure that the bounds were respected and from there the initialization problem arose.
If I understand the implications of the initialization procedure correctly, it only makes sense to impose priors on parameters, am I right? That is, it does not serve any purpose to impose a prior on a transformed parameter, or does it?
Finally, is the prior supposed to respect the bounds imposed on a given parameter? From your explanation of the initialization procedure, it seems that Stan initializes the chain to some values, irrespectively of the prior, which does not make much sense to me, so I imagine I am still missing some nuance here.
Again, thank you very much for your help and your prompt response.
Under the hood, the Stan syntax that we call “putting a prior on a parameter” just means “increment the target density by some specific function of the parameter”. Thus:
Imposing priors on transformed parameters has meaning in a Stan model, because incrementing the target by a function of a transformed parameter will modify the prior that is being placed on the untransformed parameters. The most straightforward example involves placing a prior on a transformed parameter while implementing the relevant Jacobian adjustment. This creates a prior distribution that behaves as if you had declared the transformed parameter to be a parameter and placed a prior on it (note, however, that the details of the sampling algorithm will not be the same–the sampling will always happen on the unconstrained scale over the parameters declared in the parameters block).
Without Jacobian adjustments, “putting priors” on transformed parameters (when the transform isn’t linear and one-to-one) increments the target in ways that don’t correspond to our notion of “sampling the transformed parameter from the probability distribution implied by the sampling statement”. In general this is unadvisable unless you really know what you’re doing.
If the prior doesn’t respect the bounds, then you just get a different prior. For example, if we have a parameter bounded below by zero, and we write a line of Stan code that looks as though we are placing a standard normal prior on the parameter, we have simply given the variable a half-normal prior. The prior gets truncated according to the constraint. Again, this is because under the hood, the syntax that we call “putting a prior on a parameter” just means “increment the target density by some specific function of the parameter”. The target density that respects the bounds gets incremented, and there’s no density elsewhere to increment.
Yes, this is what Stan does. As a consequence, while it is fine to declare a prior that doesn’t respect a constraint (see above), it is not a good idea to fail to declare a constraint that the prior respects. For example, if the prior is lognormal, make sure that the parameter is declared with a lower bound of zero.
There are reasons for ignoring the prior here, including the computational problems that can arise from sampling inits from the tails of over-weak priors, the fact that Stan supports improper priors, the fact that Stan supports non-generative priors that, depending on the geometry, can be difficult to simulate from, and probably a bunch more. Perhaps the best reason is that the default initialization has proven successful in practice across a relatively broad range of models.
Thank you very much @jsocolar. Your explanations have been very clarifying. With yours and @mike-lawrence’s help, I believe I have a better grasp at what Stan is doing and the implications it has for the model I am implementing.
In a more general setting, I still have the doubt of when should a change of variable be preferred over a reparameterization. In the cases that I can imagine, very similar to the ones shown in chapter 22 of the user manual, they seem to be equivalent procedures, mostly differing in their expressiveness, but this impression may be quite biased because of my ignorance on the subject. Are there situations where one or the other approach is more suitable?
Once again, thank you very much for your help. | 2022-05-23 04:30:36 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7014481425285339, "perplexity": 1568.5417066639598}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662555558.23/warc/CC-MAIN-20220523041156-20220523071156-00448.warc.gz"} |
https://rdrr.io/cran/ragt2ridges/man/CIGofVAR1.html | # CIGofVAR1: Conditional independence graphs of the VAR(1) model In ragt2ridges: Ridge Estimation of Vector Auto-Regressive (VAR) Processes
## Description
Constructs the global or contemporaneous conditional independence graph (CIG) of the VAR(1) model, as implied by the partial correlations.
## Usage
CIGofVAR1(sparseA, sparseP, type="global")
## Arguments
sparseA: A matrix \mathbf{A} of regression parameters, which is assumed to be sparse.
sparseP: Precision matrix \mathbf{Ω}_{\varepsilon} of the error, which is assumed to be sparse.
type: A character indicating whether the global or contemp (contemporaneous) CIG should be plotted.
## Author(s)
Wessel N. van Wieringen <w.vanwieringen@vumc.nl>
## References
Dahlhaus (2000), "Graphical interaction models for multivariate time series", Metrika, 51, 157-172.
Dahlhaus, Eichler (2003), "Causality and graphical models in time series analysis", Oxford Statistical Science Series, 115-137.
Miok, V., Wilting, S.M., Van Wieringen, W.N. (2017), "Ridge estimation of the VAR(1) model and its time series chain graph from multivariate time-course omics data", Biometrical Journal, 59(1), 172-191.
## See Also

graphVAR1, sparsify, sparsifyVAR1.
## Examples

# specify VAR(1) model parameters
A <- matrix(c(-0.1, -0.3, 0, 0.5, 0, 0, 0, 0, -0.4), byrow=TRUE, ncol=3)
P <- matrix(c(1, 0.5, 0, 0.5, 1, 0, 0, 0, 1), byrow=TRUE, ncol=3)
# adjacency matrix of (global) conditional independencies.
CIGofVAR1(A, P, type="global")
http://mathhelpforum.com/discrete-math/135454-recurrence-relation-binary-string-print.html | # Recurrence Relation for a binary string
• March 24th 2010, 12:01 PM
Math101
Recurrence Relation for a binary string
$a_n$ is the number of strings of length n in which every 0 is immediately followed by at least two consecutive 1's. Example: the string 101111 is allowed, but 01110 is not.
so what the problem asks for is to find a recurrence relation
and initial conditions for $a_n$. Now I had something going where:
if it starts with a 011, that would be $a_{n-3}$
if it starts with a 1, it could be 1011_________ = $a_{n-4}$, or 11011_____ = $a_{n-5}$, .....etc (where ____ could be 0 or 1)
I was just wondering if this is the right approach, and how to also figure in the possibility of another zero appearing somewhere along the string of _____.
• March 24th 2010, 01:12 PM
Plato
First find the first three.
$A_1:\{1\},~~a_1=1$
$A_2:\{11\},~~a_2=1$
$A_3:\{011,111\},~~a_3=2$
Note that no string in $A_n$ can end in a zero.
If you add a 1 to the right end of any string in $A_3$ you have a valid string in $A_4$.
What is the complete list for $A_4$?
That is a way to proceed.
• March 24th 2010, 02:00 PM
Math101
alright so $A_4$ = {1011,0111,1111}, so $a_4 = 3$; $A_5$ = {10111,01111,11111,11011}...etc
but how would I write this as a recurrence relation for $a_n$ = ?
• March 24th 2010, 02:13 PM
Plato
Quote:
Originally Posted by Math101
alright so $A_4$ = {1011,0111,1111}, so $a_4 = 3$; $A_5$ = {10111,01111,11111,11011}...etc
but how would I write this as a recurrence relation for $a_n$ = ?
Well that is for you to work on.
Here are more hints.
Any valid string must have as its last three digits $\cdots 011\text{ or }\cdots 111$.
It is safe to add a 1 to any string in $A_{n-1}$.
BUT that does not get all in $A_n$.
What else does? Be careful do not get repeats.
• March 25th 2010, 10:36 AM
Math101
alright so what I figured out for $a_n$ is the following:
$a_n$ = $a_{n-1}$ + (0.5 * $a_{n-1}$) + $a_{n-2}$ + (0.5 * $a_{n-2}$) + ... + $a_2$ + (0.5 * $a_2$) + $a_1$.
where each (0.5 * $a_{n-k}$) is rounded up. This is because to get, let's say, $a_5$, you can add 1 to the end of every string in $A_4$, and also add 1 to the start of the string in $A_4$ that starts with a 10.
so $A_4$ = {0111,1111,1011}, and $A_5$ = {01111,11111,10111,11011}
• March 25th 2010, 11:34 AM
Plato
Quote:
Originally Posted by Math101
alright so what I figured out for $a_n$ is the following:
$a_n$ = $a_{n-1}$ + (0.5 * $a_{n-1}$) + $a_{n-2}$ + (0.5 * $a_{n-2}$) + ... + $a_2$ + (0.5 * $a_2$) + $a_1$.
Actually it should be $a_n=a_{n-1}+a_{n-3}$.
We add 1 to the end of every string in $A_{n-1}$ and add 011 to the end each string in $A_{n-3}$.
You should try that to see why it works with the $A_n$’s we already have.
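A quick sanity check (an illustrative Python sketch, not part of the original thread) confirms the recurrence $a_n = a_{n-1} + a_{n-3}$ with $a_1 = 1$, $a_2 = 1$, $a_3 = 2$ against brute-force enumeration:

```python
from itertools import product

def is_valid(s):
    # every '0' must be immediately followed by "11"
    return all(s[i + 1:i + 3] == "11" for i, c in enumerate(s) if c == "0")

def brute_count(n):
    # count valid strings of length n by checking all 2^n binary strings
    return sum(is_valid("".join(bits)) for bits in product("01", repeat=n))

def recurrence(n):
    # a_1 = 1, a_2 = 1, a_3 = 2; append 1 to A_{n-1}, or append 011 to A_{n-3}
    a = {1: 1, 2: 1, 3: 2}
    for k in range(4, n + 1):
        a[k] = a[k - 1] + a[k - 3]
    return a[n]

for n in range(1, 10):
    assert brute_count(n) == recurrence(n)
print([recurrence(n) for n in range(1, 7)])  # [1, 1, 2, 3, 4, 6]
```

The two counts agree, matching the sets $A_1$ through $A_6$ listed in the thread.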
• March 25th 2010, 03:32 PM
jones357
"Actually it should be http://www.mathhelpforum.com/math-he...d70bd2fd-1.gif."
I don't think that is correct.
if a(5)=4, a(3)=2
a(6) = a(5) + a(3) = 6
but:
a(6): {111111,101111,110111,111011,011111} = 5
Isn't it something like:
a(n) = a(n-1) +1 with initial conditions a(0)=a(1)=a(2)=1
where n >=3. I am not sure if this is a valid way of stating a recurrence relation though...
• March 25th 2010, 03:44 PM
Plato
Quote:
Originally Posted by jones357
a(6) = a(5) + a(3) = 6
but: a(6): {111111,101111,110111,111011,011111} = 5
But you missed one in $A_6=\{111111,101111,110111,111011,011111,{\color{blue}011011}\}$
• March 25th 2010, 03:49 PM
jones357
I had misunderstood the original question to mean that after a 0, there must be all 1's of which there must be at least 2.
• March 25th 2010, 03:55 PM
Plato
Quote:
Originally Posted by jones357
there must be all 1's of which there must be at least 2.
"every 0 is immediately followed by at least two consecutive 1's." does not mean that. | 2014-12-25 21:23:48 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 15, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7685109972953796, "perplexity": 1048.7005588293598}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-52/segments/1419447548066.118/warc/CC-MAIN-20141224185908-00093-ip-10-231-17-201.ec2.internal.warc.gz"} |
https://www.ias.ac.in/listing/bibliography/joaa/P._C._Agrawal | • P. C. Agrawal
Articles written in Journal of Astrophysics and Astronomy
• Low frequency quasi-periodic oscillations in the hard x-ray emission from cygnus x-1
The observations of the black hole binary Cygnus X-1 were made in the energy band of 20–100 keV with a balloon-borne Xenon-filled multiwire proportional counter telescope on 5th April 1992. Timing analysis of the data revealed the presence of Quasi-Periodic Oscillations (QPO) in the hard X-ray emission from the source. The QPO feature in the power density spectrum is broad with a peak at a frequency of 0.06 Hz. This result is compared with similar reports of QPOs in Cyg X-1 in soft and hard X-rays. Short time scale random intensity variations in the X-ray light curve are described with a shot noise model.
• X-ray observation of XTE J2012+381 during the 1998 outburst
The outburst of X-ray transient source XTE J2012+381 was detected by the RXTE All-Sky Monitor on 1998 May 24th. Following the outburst, X-ray observations of the source were made in the 2–18 keV energy band with the Pointed Proportional Counters of the Indian X-ray Astronomy Experiment (IXAE) on-board the Indian satellite IRS-P3 during 1998 June 2nd–10th. The X-ray flux of the source in the main outburst decreased exponentially during the period of observation. No large amplitude short-term variability in the intensity is detected from the source. The power density spectrum obtained from the timing analysis of the data shows no indication of any quasi-periodic oscillations in 0.002–0.5 Hz band. The hardness ratio i.e. the ratio of counts in 6–18 keV to 2–6 keV band, indicates that the X-ray spectrum is soft with spectral index >2. From the similarities of the X-ray properties with those of other black hole transients, we conclude that the X-ray transient XTE J2012+381 is likely to be a black hole.
• AstroSat: From Inception to Realization and Launch
The origin of the idea of AstroSat multi wavelength satellite mission and how it evolved over the next 15 years from a concept to the successful development of instruments for giving concrete shape to this mission, is recounted in this article. AstroSat is the outcome of intense deliberations in the Indian astronomy community leading to a consensus for a multi wavelength Observatory having broad spectral coverage over five decades in energy covering near-UV, far-UV, soft X-ray and hard X-ray bands. The multi wavelength observation capability of AstroSat with a suite of 4 co-aligned instruments and an X-ray sky monitor on a single satellite platform, imparts a unique character to this mission. AstroSat owes its realization to the collaborative efforts of the various ISRO centres, several Indian institutions, and a few institutions abroad which developed the 5 instruments and various sub systems of the satellite. AstroSat was launched on September 28, 2015 from India in a near equatorial 650 km circular orbit. The instruments are by and large working as planned and in the past 14 months more than 200 X-ray and UV sources have been studied with it. The important characteristics of AstroSat satellite and scientific instruments will be highlighted.
• Large Area X-Ray Proportional Counter (LAXPC) Instrument on AstroSat and Some Preliminary Results from its Performance in the Orbit
The Large Area X-ray Proportional Counter (LAXPC) instrument on AstroSat is aimed at providing high time resolution X-ray observations in the 3–80 keV energy band with moderate energy resolution. To achieve a large collecting area, a cluster of three co-aligned identical LAXPC detectors is used to realize an effective area in excess of ∼6000 cm² at 15 keV. The large detection volume of the LAXPC detectors, filled with xenon gas at ∼2 atmosphere pressure, results in detection efficiency greater than 50% above 30 keV. In this article, we present salient features of the LAXPC detectors, their testing and characterization in the laboratory prior to launch, and calibration in the orbit. Some preliminary results on timing and spectral characteristics of a few X-ray binaries and other types of sources are briefly discussed to demonstrate that the LAXPC instrument is performing as planned in the orbit.
• Large Area X-ray Proportional Counter (LAXPC) in orbit performance: Calibration, background, analysis software
The Large Area X-ray Proportional Counter (LAXPC) instrument on-board AstroSat has three nominally identical detectors for timing and spectral studies in the energy range of 3–80 keV. The performance of these detectors during the five years after the launch of AstroSat is described. Currently, only one of the detectors is working nominally. The variations in pressure, energy resolution, gain and background with time are discussed. The capabilities and limitations of the instrument are described. A brief account of available analysis software is also provided.
• Detection of X-ray pulsations at the lowest observed luminosity of Be/X-ray binary pulsar EXO 2030+375 with AstroSat
We present the results obtained from timing and spectral studies of Be/X-ray binary pulsar EXO 2030$+$375 using observations with the Large Area Xenon Proportional Counters and Soft X-ray Telescope of AstroSat, at various phases of its Type-I outbursts in 2016, 2018, and 2020. The pulsar was faint during these observations as compared to earlier observations with other observatories. At the lowest luminosity of $2.5\times10^{35}$ erg s$^{-1}$ in the 0.5–30 keV energy range, $\approx$41.3 s pulsations were clearly detected in the X-ray light curves. This finding establishes the first firm detection of pulsations in EXO 2030$+$375 at an extremely low mass accretion rate to date. The shape of the pulse profiles is complex due to the presence of several narrow dips. Though pulsations were detected up to $\sim$80 keV when the source was brighter, pulsations were limited up to $\sim$25 keV during the third AstroSat observation at the lowest source luminosity. A search for quasi-periodic oscillations in the $2\times 10^{-4}$ Hz to 10 Hz band yielded a negative result. Spectral analysis of the AstroSat data showed that the spectrum of the pulsar was steep with a power-law index of $\sim$2. The values of photon-indices at observed low luminosities follow the known pattern in the sub-critical regime of the pulsar.
• LAXPC instrument onboard AstroSat: Five exciting years of new scientific results specially on X-ray binaries
With its large effective area at hard X-rays, high time resolution and co-aligned companion instruments, AstroSat/LAXPC was designed to usher in a new era in rapid variability studies and wide spectral band measurements of X-ray binaries. Over the last five years, the instrument has successfully achieved these science goals to a significant extent. In the coming years, it is poised to make more important discoveries. This paper highlights the primary achievements of AstroSat/LAXPC in unraveling the behavior of black hole and neutron star systems and discusses the exciting possibility of the instrument's contribution to future science.
• AstroSat observations of eclipsing high mass X-ray binary pulsar OAO 1657-415
We present the results obtained from analysis of two AstroSat observations of the high mass X-ray binary pulsar OAO 1657-415. The observations covered the 0.681–0.818 and 0.808–0.968 phases of the $\sim$10.4 day orbital period of the system, in March and July 2019, respectively. Despite being outside the eclipsing regime, the power density spectrum from the first observation lacks any signature of pulsation or quasi-periodic oscillations. However, during the July observation, X-ray pulsations at a period of 37.0375 s were clearly detected in the light curves. The pulse profiles from the second observation consist of a broad single peak with a dip-like structure in the middle across the observed energy range. We explored the evolution of the pulse profile in narrow time and energy segments. We detected pulsations in the light curves obtained from the 0.808–0.92 orbital phase range, which is absent in the remaining part of the observation. The spectrum of OAO 1657-415 can be described by an absorbed power-law model along with an iron fluorescent emission line and a blackbody component for the out-of-eclipse phase of the observation. Our findings are discussed in the frame of stellar wind accretion and accretion wake at late orbital phases of the binary.
https://socratic.org/questions/a-model-train-with-a-mass-of-4-kg-is-moving-along-a-track-at-9-cm-s-if-the-curva | # A model train with a mass of 4 kg is moving along a track at 9 (cm)/s. If the curvature of the track changes from a radius of 180 cm to 63 cm, by how much must the centripetal force applied by the tracks change?
##### 1 Answer
Feb 14, 2017
The centripetal force changes by $0.033 N$
#### Explanation:
The centripetal force is
$F = \frac{m {v}^{2}}{r}$
The change in centripetal force as the radius changes from ${r}_{1}$ to ${r}_{2}$ is
$\Delta F = \frac{m {v}^{2}}{{r}_{2}} - \frac{m {v}^{2}}{{r}_{1}} = m {v}^{2} \left(\frac{1}{{r}_{2}} - \frac{1}{{r}_{1}}\right)$
So, in SI units ($m = 4$ kg, $v = 0.09$ m/s, ${r}_{1} = 1.8$ m, ${r}_{2} = 0.63$ m),
$\Delta F = 4 \cdot {0.09}^{2} \cdot \left(\frac{1}{0.63} - \frac{1}{1.8}\right) = 0.0324 \cdot 1.032 N$
$= 0.033 N$
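As a direct numerical check (an illustrative Python sketch, not part of the original answer), one can compute the force at each radius and subtract:

```python
m = 4.0    # mass in kg
v = 0.09   # speed in m/s (9 cm/s)
r1 = 1.80  # initial radius in m (180 cm)
r2 = 0.63  # final radius in m (63 cm)

F1 = m * v**2 / r1  # centripetal force on the wider curve
F2 = m * v**2 / r2  # centripetal force on the tighter curve
dF = F2 - F1        # change in required centripetal force

print(round(F1, 4), round(F2, 4), round(dF, 4))  # 0.018 0.0514 0.0334
```

The magnitude of the change is about $0.033 N$.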
https://melczer.ca/files/TextbookCode/Chapter5/Example5-13-PermutationLCLT.html | ### Example 5.13 (A Family of Permutations with Restricted Cycles)¶
Illustrating a local central limit theorem for a family of permutations.
Requirements: None
https://byjus.com/question-answer/employment-elasticity-represents-a-convenient-way-of-summarising-the-employment-intensity-of-growth-or-sensitivity/ | Question
# Employment elasticity represents a convenient way of summarising the employment intensity of growth or sensitivity of employment to output growth. Elaborate.
Solution
## Approach: Write the definition of employment elasticity, then explain its significance for understanding employment scenarios in an economy or sector.

Employment elasticity is a measure of the percentage change in employment associated with a 1 percentage point change in economic growth. It indicates the ability of an economy to generate employment opportunities for its population as a percentage of its growth (development) process.

An employment elasticity of 1 denotes that employment grows at the same rate as economic growth. An elasticity of 0 denotes that employment does not grow at all, regardless of economic growth. Negative employment elasticity denotes that employment shrinks as the economy grows. This is crucial, as it is commonly believed that economic growth alone will increase employment.

Employment elasticity measurement generally faces two sets of criticisms: the relationship between employment and output need not be unidirectional, and the notion of employment elasticity is valid only for a given state of technology, wage rate and policies.

The negative employment elasticity in agriculture indicates movement of people out of agriculture to other sectors where wage rates are higher. This migration of surplus workers to other sectors for productive and gainful employment is necessary for inclusive growth. However, the negative employment elasticity in the manufacturing sector was a cause of concern, particularly when the sector has shown positive growth in output.

Employment elasticity represents a convenient way of summarising the employment intensity of growth or the sensitivity of employment to output growth. It is also commonly used to track sectoral potential for generating employment and in forecasting future growth in employment.
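The ratio itself is simple to compute; the following is a hypothetical illustration (the function name and growth figures below are made up, not from the original answer):

```python
def employment_elasticity(employment_growth_pct, output_growth_pct):
    # percentage change in employment per 1-point change in output growth
    return employment_growth_pct / output_growth_pct

# hypothetical economy: employment grows 2% while output grows 8%
e = employment_elasticity(2.0, 8.0)
print(e)  # 0.25 -> employment grows a quarter as fast as output
```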
https://buboflash.eu/bubo5/whats-new-on-day?day-number=42947 | # on 02-Aug-2017 (Wed)
#### Flashcard 1447858146572
Tags
#cfa-level-1 #economics #microeconomics #reading-15-demand-and-supply-analysis-the-firm #section-3-analysis-of-revenue-costs-and-profit #study-session-4
Question
imperfect competition is where an individual firm has [...] and is therefore able to exert some influence over price.
enough share of the market
(or can control a certain segment of the market)
#### Parent (intermediate) annotation
imperfect competition is where an individual firm has enough share of the market (or can control a certain segment of the market) and is therefore able to exert some influence over price.
#### Original toplevel document
3. ANALYSIS OF REVENUE, COSTS, AND PROFITS
zation requires that we examine both of those components. Revenue comes from the demand for the firm’s products, and cost comes from the acquisition and utilization of the firm’s inputs in the production of those products. 3.1.1. Total, Average, and Marginal Revenue This section briefly examines demand and revenue in preparation for addressing cost. Unless the firm is a pure monopolist (i.e., the only seller in its market), there is a difference between market demand and the demand facing an individual firm. A later reading will devote much more time to understanding the various competitive environments (perfect competition, monopolistic competition, oligopoly, and monopoly), known as market structure. To keep the analysis simple at this point, we will note that competition could be either perfect or imperfect. In perfect competition, the individual firm has virtually no impact on market price, because it is assumed to be a very small seller among a very large number of firms selling essentially identical products. Such a firm is called a price taker. In the second case, the firm does have at least some control over the price at which it sells its product because it must lower its price to sell more units. Exhibit 4 presents total, average, and marginal revenue data for a firm under the assumption that the firm is a price taker at each relevant level of quantity of goods sold. Consequently, the individual seller faces a horizontal demand curve over relevant output ranges at the price level established by the market (see Exhibit 5). The seller can offer any quantity at this set market price without affecting price. In contrast, imperfect competition is where an individual firm has enough share of the market (or can control a certain segment of the market) and is therefore able to exert some influence over price.
Instead of a large number of competing firms, imperfect competition involves a smaller number of firms in the market relative to perfect competition and in the extreme case only one firm (i.e., monopoly). Under any form of imperfect competition, the individual seller confronts a negatively sloped demand curve, where price and the quantity demanded by consumers are inversely related. In this case, price to the firm declines when a greater quantity is offered to the market; price to the firm increases when a lower quantity is offered to the market. This is shown in Exhibits 6 and 7. Exhibit 4. Total, Average, and Marginal Revenue under Perfect Competition Quantity Sold (Q) Price (P) Total Revenue (TR) Average Re
#### Flashcard 1598500244748
Tags
#cfa-level-1 #financial-reporting-and-analysis #non-recurring-non-operating-items #understanding-income-statement
Question
Extraordinary items are BOTH [...] in nature AND [...] in occurrence, and material in amount.
unusual
infrequent
#### Parent (intermediate) annotation
Extraordinary items are BOTH unusual in nature AND infrequent in occurrence, and material in amount. They must be reported separately (below the line) net of income tax.
#### Original toplevel document
Subject 6. Non-Recurring Items and Non-Operating Items
. Subsidiaries and investees also qualify as separate components. Disposal of a portion of a business component does not qualify as discontinued operations. Instead, this is recorded as an unusual or infrequent item. 2. Extraordinary items Extraordinary items are BOTH unusual in nature AND infrequent in occurrence, and material in amount. They must be reported separately (below the line) net of income tax. Common examples are: Expropriations by foreign governments. Uninsured losses from earthquakes, eruptions, and tornadoes. Note that gains and losses from the early retirement of debt used to be treated as extraordinary items; SFAS No. 145 now requires them to be treated as part of continuing operations. 3. Unusual or infrequent items These are either unusual in nature OR infrequent in occurrence but not both. They may be disclosed separately (as a single-line
#### Flashcard 1612594941196
Tags
Question
You can identify the types of accruals and valuation entries in the [...] section of MD&A and in the significant accounting policies footnote, both found in the annual report.
Critical accounting policies/estimates
#### Parent (intermediate) annotation
An important first step in analyzing financial statements is identifying the types of accruals and valuation entries in an entity’s financial statements. Most of these items will be noted in the critical accounting policies/estimates section of management’s discussion and analysi
#### Original toplevel document
7. USING FINANCIAL STATEMENTS IN SECURITY ANALYSIS
ions of balance sheet accounts. Accruals and valuation entries require considerable judgment and thus create many of the limitations of the accounting model. Judgments could prove wrong or, worse, be used for deliberate earnings manipulation. An important first step in analyzing financial statements is identifying the types of accruals and valuation entries in an entity’s financial statements. Most of these items will be noted in the critical accounting policies/estimates section of management’s discussion and analysis (MD&A) and in the significant accounting policies footnote, both found in the annual report. Analysts should use this disclosure to identify the key accruals and valuations for a company. The analyst needs to be aware, as Example 4 shows, that the manipulation of earnings and a
#### Flashcard 1622013775116
Tags
#tvm
Question
FV of an annuity factor
$$FV_n=A {(1+r)^n-1 \over r}$$
#### Flashcard 1622018231564
Tags
#tvm
Question
the present value factor formula (2)
$$PV =FV_n {1 \over (1+r)^n}$$
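The two time-value-of-money formulas on the cards above can be illustrated numerically (a hypothetical Python sketch; the $100 / 5% / 10-year figures are made up for illustration):

```python
def fv_annuity(A, r, n):
    # future value of an ordinary annuity paying A per period for n periods
    return A * ((1 + r) ** n - 1) / r

def pv_factor(FV, r, n):
    # present value of a single cash flow FV received n periods from now
    return FV / (1 + r) ** n

# $100 per year for 10 years at r = 5%
fv = fv_annuity(100, 0.05, 10)
print(round(fv, 2))                       # 1257.79
print(round(pv_factor(fv, 0.05, 10), 2))  # 772.17 (present value of that annuity)
```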
#### Flashcard 1626085395724
Tags
#discounted-cashflow-applications
Question
In investment management applications, the internal rate of return is called the money-weighted rate of return because it accounts for [...] and [...] of all cash flows into and out of the portfolio
the timing and amount
#### Flashcard 1644860935436
Tags
Question
A trade in two closely related stocks involving the short sale of one and the purchase of the other.
#### Annotation 1647687109900
In investments, the question of whether one event (or characteristic) provides information about another event arises in both time-series settings and cross-sectional settings.
#### Flashcard 1647708343564
Tags
Question
A rule explaining the unconditional probability of an event in terms of probabilities of the event conditional on mutually exclusive and exhaustive scenarios.
Total probability rule
#### Annotation 1647711489292
For readers familiar with mathematical treatments of probability, S, a notation usually reserved for a concept called the sample space, is being appropriated to stand for scenario.
#### Flashcard 1647713848588
Tags
Question
$$P(A) = P(AS) + P(AS^C) = P(A|S)P(S) + P(A|S^C)P(S^C)$$
Total probability rule
#### Flashcard 1647717780748
Tags
Question
$$P(A|S_1)P(S_1) + P(A|S_2)P(S_2) + \ldots + P(A|S_n)P(S_n)$$
What does this equation say?
The probability of any event [P(A)] can be expressed as a weighted average of the probabilities of the event, given scenarios [terms such as $P(A|S_1)$]
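A small numerical example of the total probability rule (with illustrative probabilities, not taken from the reading):

```python
# two mutually exclusive and exhaustive scenarios, S and its complement S^C
p_s = 0.6            # P(S)
p_a_given_s = 0.25   # P(A | S)
p_a_given_sc = 0.10  # P(A | S^C)

# total probability rule: P(A) = P(A|S)P(S) + P(A|S^C)P(S^C)
p_a = p_a_given_s * p_s + p_a_given_sc * (1 - p_s)
print(round(p_a, 2))  # 0.19
```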
#### Flashcard 1647723810060
Tags
Question
The scenarios for the total probability rule must be [...]
mutually exclusive and exhaustive
#### Flashcard 1647743470860
Tags
Question
The probability-weighted average of the possible outcomes of a random variable.
Expected value
#### Annotation 1647745830156
Expected value
Expected value (for example, expected stock return) looks either to the future, as a forecast, or to the “true” value of the mean (the population mean, discussed in the reading on statistical concepts and market returns). We should distinguish expected value from the concepts of historical or sample mean. The sample mean also summarizes in a single number a central value. However, the sample mean presents a central value for a particular set of observations as an equally weighted average of those observations. To summarize, the contrast is forecast versus historical, or population versus sample.
#### Flashcard 1647755267340
Tags
Question
E(X)=
$$E(X) = \sum_{i=1}^{n} P(X_i)\,X_i$$
#### Annotation 1647759461644
Expected value
For simplicity, we model all random variables in this reading as discrete random variables, which have a countable set of outcomes. For continuous random variables, which are discussed along with discrete random variables in the reading on common probability distributions, the operation corresponding to summation is integration.
#### Flashcard 1647761820940
Tags
Question
The [...] of a random variable is the expected value of squared deviations from the random variable’s expected value
Variance
#### Flashcard 1647763918092
Tags
Question
$$\sigma^2(X)=$$
$$\sigma^2(X) = E\left[(X - E(X))^2\right]$$
#### Flashcard 1647772044556
Tags
Question
The two notations for variance are [...]
σ2(X) and Var(X).
#### Annotation 1647775976716
Expected value
Variance is a number greater than or equal to 0 because it is the sum of squared terms. If variance is 0, there is no dispersion or risk. The outcome is certain, and the quantity X is not random at all. Variance greater than 0 indicates dispersion of outcomes. Increasing variance indicates increasing dispersion, all else equal.
#### Annotation 1647780433164
The unconditional variance of EPS is the sum of two terms:
1) the expected value (probability-weighted average) of the conditional variances (parallel to the total probability rules) and
2) the variance of conditional expected values of EPS.
The second term arises because the variability in conditional expected value is a source of risk. Term 1 is E[σ²(EPS | interest rate environment)] = P(declining interest rate environment) σ²(EPS | declining interest rate environment) + P(stable interest rate environment) σ²(EPS | stable interest rate environment) = 0.60(0.004219) + 0.40(0.0096) = 0.006371.
Term 2 is σ²[E(EPS | interest rate environment)] = 0.60(2.4875 − 2.34)² + 0.40(2.12 − 2.34)² = 0.032414. Summing the two terms, unconditional variance equals 0.006371 + 0.032414 = 0.038785.
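The arithmetic in this annotation can be checked directly. A small sketch (variable names are mine; the probabilities, conditional variances, and expected values are taken from the annotation):

```python
# Check of the EPS total-variance arithmetic above (values from the annotation).
p_decl, p_stab = 0.60, 0.40                    # scenario probabilities
var_decl, var_stab = 0.004219, 0.0096          # conditional variances of EPS
e_decl, e_stab, e_total = 2.4875, 2.12, 2.34   # conditional and total E(EPS)

# Term 1: expected value (probability-weighted average) of conditional variances
term1 = p_decl * var_decl + p_stab * var_stab
# Term 2: variance of the conditional expected values of EPS
term2 = p_decl * (e_decl - e_total) ** 2 + p_stab * (e_stab - e_total) ** 2

total_var = term1 + term2                      # unconditional variance
assert abs(term1 - 0.006371) < 1e-6
assert abs(term2 - 0.032414) < 1e-6
assert abs(total_var - 0.038785) < 1e-6
```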
#### Flashcard 1647782792460
Tags
Question
The order of calculation is always [...] , then [...] , then standard deviation.
expected value
variance
#### Flashcard 1647791181068
Tags
Question
Variance summarized procedure equation:
Var(X)=
$$\sigma^2(X) = \sum_{i=1}^{n} P(X_i)\left[X_i - E(X)\right]^2$$
#### Flashcard 1647803501836
Tags
Question
The expected value of a stated event given that another event has occurred.
Conditional expected value
#### Flashcard 1647821851916
Tags
Question
Total probability rule for expected value:
E(X)= [...]
E(X) = E(X|S)P(S) + E(X|Sᶜ)P(Sᶜ)
#### Flashcard 1647829191948
Tags
Question
so small or unimportant as to be not worth considering; insignificant.
Negligible
#### Flashcard 1647852522764
Tags
Question
A diagram with branches emanating from nodes representing either mutually exclusive chance events or mutually exclusive decisions.
Tree diagram
#### Flashcard 1647854882060
Tags
Question
Each value on a binomial tree from which successive moves or outcomes branch.
Nodes
#### Flashcard 1647861959948
Tags
Question
The variance of one variable, given the outcome of another.
Conditional variance
## Convolutional Neural Nets, Section 3

Source: https://machinelearning.technicacuriosa.com/convolutional-neural-nets-section-3/
### Convolution Demo
Below is a running demo of a CONV layer. Since 3D volumes are hard to visualize, all the volumes (the input volume (in blue), the weight volumes (in red), the output volume (in green)) are visualized with each depth slice stacked in rows. The input volume is of size W1=5,H1=5,D1=3, and the CONV layer parameters are K=2,F=3,S=2,P=1. That is, we have two filters of size 3×3, and they are applied with a stride of 2. Therefore, the output volume size has spatial size (5 – 3 + 2)/2 + 1 = 3. Moreover, notice that a padding of P=1 is applied to the input volume, making the outer border of the input volume zero. The visualization below iterates over the output activations (green), and shows that each element is computed by element-wise multiplying the highlighted input (blue) with the filter (red), summing it up, and then offsetting the result by the bias.
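As a sanity check on those numbers, here is a minimal NumPy sketch of the same CONV arithmetic. The array names and random values are illustrative, not the demo's actual weights:

```python
import numpy as np

# Demo hyperparameters: 5x5x3 input, two 3x3 filters, stride 2, pad 1.
W1, H1, D1 = 5, 5, 3
K, F, S, P = 2, 3, 2, 1

W2 = (W1 - F + 2 * P) // S + 1   # (5 - 3 + 2)/2 + 1 = 3
H2 = (H1 - F + 2 * P) // S + 1

rng = np.random.default_rng(0)
x = rng.standard_normal((H1, W1, D1))
w = rng.standard_normal((K, F, F, D1))   # K filters, each F x F x D1
b = rng.standard_normal(K)

x_pad = np.pad(x, ((P, P), (P, P), (0, 0)))   # zero border from P=1
out = np.zeros((H2, W2, K))
for k in range(K):                            # each filter -> one depth slice
    for i in range(H2):
        for j in range(W2):
            patch = x_pad[i*S:i*S+F, j*S:j*S+F, :]
            out[i, j, k] = np.sum(patch * w[k]) + b[k]   # dot product + bias

print(out.shape)  # (3, 3, 2)
```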
Implementation as Matrix Multiplication. Note that the convolution operation essentially performs dot products between the filters and local regions of the input. A common implementation pattern of the CONV layer is to take advantage of this fact and formulate the forward pass of a convolutional layer as one big matrix multiply as follows:
1. The local regions in the input image are stretched out into columns in an operation commonly called im2col. For example, if the input is [227x227x3] and it is to be convolved with 11x11x3 filters at stride 4, then we would take [11x11x3] blocks of pixels in the input and stretch each block into a column vector of size 11*11*3 = 363. Iterating this process in the input at stride of 4 gives (227-11)/4+1 = 55 locations along both width and height, leading to an output matrix X_col of im2col of size [363 x 3025], where every column is a stretched out receptive field and there are 55*55 = 3025 of them in total. Note that since the receptive fields overlap, every number in the input volume may be duplicated in multiple distinct columns.
2. The weights of the CONV layer are similarly stretched out into rows. For example, if there are 96 filters of size [11x11x3] this would give a matrix W_row of size [96 x 363].
3. The result of a convolution is now equivalent to performing one large matrix multiply np.dot(W_row, X_col), which evaluates the dot product between every filter and every receptive field location. In our example, the output of this operation would be [96 x 3025], giving the output of the dot product of each filter at each location.
4. The result must finally be reshaped back to its proper output dimension [55x55x96].
This approach has the downside that it can use a lot of memory, since some values in the input volume are replicated multiple times in X_col. However, the benefit is that there are many very efficient implementations of Matrix Multiplication that we can take advantage of (for example, in the commonly used BLAS API). Moreover, the same im2col idea can be reused to perform the pooling operation, which we discuss next.
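A toy version of the im2col-plus-matmul pattern, with small illustrative sizes rather than the [227x227x3] example:

```python
import numpy as np

def im2col(x, F, S):
    """Stretch each F x F x D receptive field of x (H x W x D) into a column."""
    H, W, D = x.shape
    H2, W2 = (H - F) // S + 1, (W - F) // S + 1
    cols = np.empty((F * F * D, H2 * W2))
    c = 0
    for i in range(H2):
        for j in range(W2):
            cols[:, c] = x[i*S:i*S+F, j*S:j*S+F, :].ravel()
            c += 1
    return cols, (H2, W2)

rng = np.random.default_rng(1)
x = rng.standard_normal((7, 7, 3))
w = rng.standard_normal((4, 3, 3, 3))       # 4 filters of size 3x3x3

X_col, (H2, W2) = im2col(x, F=3, S=2)       # [27 x 9]: one column per field
W_row = w.reshape(4, -1)                    # [4 x 27]: one row per filter
out = (W_row @ X_col).reshape(4, H2, W2)    # one matmul does every dot product

# Cross-check one location against the direct element-wise product.
direct = np.sum(x[0:3, 0:3, :] * w[0])
assert np.isclose(out[0, 0, 0], direct)
```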
Backpropagation. The backward pass for a convolution operation (for both the data and the weights) is also a convolution (but with spatially-flipped filters). This is easy to derive in the 1-dimensional case with a toy example (not expanded on for now).
1×1 convolution. As an aside, several papers use 1×1 convolutions, as first investigated by Network in Network. Some people are at first confused to see 1×1 convolutions especially when they come from signal processing background. Normally signals are 2-dimensional so 1×1 convolutions do not make sense (it’s just pointwise scaling). However, in ConvNets this is not the case because one must remember that we operate over 3-dimensional volumes, and that the filters always extend through the full depth of the input volume. For example, if the input is [32x32x3] then doing 1×1 convolutions would effectively be doing 3-dimensional dot products (since the input depth is 3 channels).
Dilated convolutions. A recent development (e.g. see paper by Fisher Yu and Vladlen Koltun) is to introduce one more hyperparameter to the CONV layer called the dilation. So far we’ve only discussed CONV filters that are contiguous. However, it’s possible to have filters that have spaces between each cell, called dilation. As an example, in one dimension a filter w of size 3 would compute over input x the following: w[0]*x[0] + w[1]*x[1] + w[2]*x[2]. This is dilation of 0. For dilation 1 the filter would instead compute w[0]*x[0] + w[1]*x[2] + w[2]*x[4]; In other words there is a gap of 1 between the applications. This can be very useful in some settings to use in conjunction with 0-dilated filters because it allows you to merge spatial information across the inputs much more aggressively with fewer layers. For example, if you stack two 3×3 CONV layers on top of each other then you can convince yourself that the neurons on the 2nd layer are a function of a 5×5 patch of the input (we would say that the effective receptive field of these neurons is 5×5). If we use dilated convolutions then this effective receptive field would grow much quicker.
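In one dimension, the dilation arithmetic above can be sketched as follows. `conv1d_dilated` is a hypothetical helper, and the dilation-0-means-contiguous convention follows the text:

```python
import numpy as np

def conv1d_dilated(x, w, dilation=0):
    step = dilation + 1                      # distance between filter taps
    span = (len(w) - 1) * step + 1           # input cells the filter covers
    return np.array([sum(w[k] * x[i + k * step] for k in range(len(w)))
                     for i in range(len(x) - span + 1)])

x = np.arange(8, dtype=float)
w = np.array([1.0, 2.0, 3.0])

y0 = conv1d_dilated(x, w, dilation=0)  # w[0]x[i] + w[1]x[i+1] + w[2]x[i+2]
y1 = conv1d_dilated(x, w, dilation=1)  # w[0]x[i] + w[1]x[i+2] + w[2]x[i+4]

print(y0[0], y1[0])  # 8.0 16.0
```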
#### Pooling Layer
It is common to periodically insert a Pooling layer in-between successive Conv layers in a ConvNet architecture. Its function is to progressively reduce the spatial size of the representation to reduce the amount of parameters and computation in the network, and hence to also control overfitting. The Pooling Layer operates independently on every depth slice of the input and resizes it spatially, using the MAX operation. The most common form is a pooling layer with filters of size 2×2 applied with a stride of 2, which downsamples every depth slice in the input by 2 along both width and height, discarding 75% of the activations. Every MAX operation would in this case be taking a max over 4 numbers (little 2×2 region in some depth slice). The depth dimension remains unchanged. More generally, the pooling layer:
• Accepts a volume of size W1×H1×D1
• Requires two hyperparameters:
• their spatial extent F
• the stride S,
• Produces a volume of size W2×H2×D2 where:
• W2 = (W1 − F)/S + 1
• H2 = (H1 − F)/S + 1
• D2=D1
• Introduces zero parameters since it computes a fixed function of the input
• Note that it is not common to use zero-padding for Pooling layers
It is worth noting that there are only two commonly seen variations of the max pooling layer found in practice: a pooling layer with F=3, S=2 (also called overlapping pooling), and more commonly F=2, S=2. Pooling sizes with larger receptive fields are too destructive.
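A minimal max-pooling forward pass with the common F=2, S=2 setting (illustrative code; input sizes chosen to match the numbers in the figure caption):

```python
import numpy as np

def max_pool(x, F=2, S=2):
    """Max-pool each depth slice of x (H x W x D) independently."""
    H, W, D = x.shape
    H2, W2 = (H - F) // S + 1, (W - F) // S + 1
    out = np.empty((H2, W2, D))
    for i in range(H2):
        for j in range(W2):
            # max over the F x F window, one value per depth slice
            out[i, j, :] = x[i*S:i*S+F, j*S:j*S+F, :].max(axis=(0, 1))
    return out

x = np.random.default_rng(2).standard_normal((224, 224, 64))
y = max_pool(x)
print(y.shape)  # (112, 112, 64): depth preserved, 75% of activations discarded
```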
General pooling. In addition to max pooling, the pooling units can also perform other functions, such as average pooling or even L2-norm pooling. Average pooling was often used historically but has recently fallen out of favor compared to the max pooling operation, which has been shown to work better in practice.
##### Pooling layer downsamples the volume spatially, independently in each depth slice of the input volume. Left: In this example, the input volume of size [224x224x64] is pooled with filter size 2, stride 2 into output volume of size [112x112x64]. Notice that the volume depth is preserved. Right: The most common downsampling operation is max, giving rise to max pooling, here shown with a stride of 2. That is, each max is taken over 4 numbers (little 2×2 square).
Backpropagation. Recall from the backpropagation chapter that the backward pass for a max(x, y) operation has a simple interpretation as only routing the gradient to the input that had the highest value in the forward pass. Hence, during the forward pass of a pooling layer it is common to keep track of the index of the max activation (sometimes also called the switches) so that gradient routing is efficient during backpropagation.
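A one-window sketch of the switch idea (hypothetical names): the forward pass records which input won the max, and the backward pass routes the whole upstream gradient there.

```python
import numpy as np

x = np.array([[1.0, 4.0],
              [3.0, 2.0]])        # one 2x2 pooling window
switch = np.unravel_index(np.argmax(x), x.shape)   # index of the max: (0, 1)

dout = 5.0                        # upstream gradient for this pooled output
dx = np.zeros_like(x)
dx[switch] = dout                 # only the winning input receives gradient

print(dx)  # [[0. 5.], [0. 0.]]
```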
Getting rid of pooling. Many people dislike the pooling operation and think that we can get away without it. For example, Striving for Simplicity: The All Convolutional Net proposes to discard the pooling layer in favor of architecture that only consists of repeated CONV layers. To reduce the size of the representation they suggest using larger stride in CONV layer once in a while. Discarding pooling layers has also been found to be important in training good generative models, such as variational autoencoders (VAEs) or generative adversarial networks (GANs). It seems likely that future architectures will feature very few to no pooling layers.
#### Normalization Layer
Many types of normalization layers have been proposed for use in ConvNet architectures, sometimes with the intentions of implementing inhibition schemes observed in the biological brain. However, these layers have since fallen out of favor because in practice their contribution has been shown to be minimal, if any. For various types of normalizations, see the discussion in Alex Krizhevsky’s cuda-convnet library API.
#### Fully-connected layer
Neurons in a fully connected layer have full connections to all activations in the previous layer, as seen in regular Neural Networks. Their activations can hence be computed with a matrix multiplication followed by a bias offset. See the Neural Network section of the notes for more information.
#### Converting FC layers to CONV layers
It is worth noting that the only difference between FC and CONV layers is that the neurons in the CONV layer are connected only to a local region in the input, and that many of the neurons in a CONV volume share parameters. However, the neurons in both layers still compute dot products, so their functional form is identical. Therefore, it turns out that it’s possible to convert between FC and CONV layers:
• For any CONV layer there is an FC layer that implements the same forward function. The weight matrix would be a large matrix that is mostly zero except for at certain blocks (due to local connectivity) where the weights in many of the blocks are equal (due to parameter sharing).
• Conversely, any FC layer can be converted to a CONV layer. For example, an FC layer with K=4096 that is looking at some input volume of size 7×7×512 can be equivalently expressed as a CONV layer with F=7,P=0,S=1,K=4096. In other words, we are setting the filter size to be exactly the size of the input volume, and hence the output will simply be 1×1×4096 since only a single depth column “fits” across the input volume, giving identical result as the initial FC layer.
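The FC→CONV equivalence is easy to verify numerically at toy scale (the sizes here are illustrative, not the 7×7×512 example): reshape each row of the FC weight matrix into a filter that spans the whole input volume, and the outputs match.

```python
import numpy as np

rng = np.random.default_rng(3)
x = rng.standard_normal((3, 3, 2))           # small input volume
W_fc = rng.standard_normal((4, 3 * 3 * 2))   # FC layer with K=4 outputs

fc_out = W_fc @ x.ravel()                    # ordinary FC forward pass

# CONV with F=3, P=0, S=1, K=4: filters exactly the size of the input volume.
w_conv = W_fc.reshape(4, 3, 3, 2)            # each FC row becomes one filter
conv_out = np.array([np.sum(x * w_conv[k]) for k in range(4)])  # 1x1xK output

assert np.allclose(fc_out, conv_out)         # identical result
```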
FC->CONV conversion. Of these two conversions, the ability to convert an FC layer to a CONV layer is particularly useful in practice. Consider a ConvNet architecture that takes a 224x224x3 image, and then uses a series of CONV layers and POOL layers to reduce the image to an activations volume of size 7x7x512 (in an AlexNet architecture that we’ll see later, this is done by use of 5 pooling layers that downsample the input spatially by a factor of two each time, making the final spatial size 224/2/2/2/2/2 = 7). From there, an AlexNet uses two FC layers of size 4096 and finally the last FC layers with 1000 neurons that compute the class scores. We can convert each of these three FC layers to CONV layers as described above:
• Replace the first FC layer that looks at [7x7x512] volume with a CONV layer that uses filter size F=7, giving output volume [1x1x4096].
• Replace the second FC layer with a CONV layer that uses filter size F=1, giving output volume [1x1x4096]
• Replace the last FC layer similarly, with F=1, giving final output [1x1x1000]
Each of these conversions could in practice involve manipulating (e.g. reshaping) the weight matrix W in each FC layer into CONV layer filters. It turns out that this conversion allows us to “slide” the original ConvNet very efficiently across many spatial positions in a larger image, in a single forward pass.
For example, if 224×224 image gives a volume of size [7x7x512] – i.e., a reduction by 32, then forwarding an image of size 384×384 through the converted architecture would give the equivalent volume in size [12x12x512], since 384/32 = 12. Following through with the next 3 CONV layers that we just converted from FC layers would now give the final volume of size [6x6x1000], since (12 – 7)/1 + 1 = 6. Note that instead of a single vector of class scores of size [1x1x1000], we’re now getting an entire 6×6 array of class scores across the 384×384 image.
Evaluating the original ConvNet (with FC layers) independently across 224×224 crops of the 384×384 image in strides of 32 pixels gives an identical result to forwarding the converted ConvNet one time.
Naturally, forwarding the converted ConvNet a single time is much more efficient than iterating the original ConvNet over all those 36 locations, since the 36 evaluations share computation. This trick is often used in practice to get better performance, where for example, it is common to resize an image to make it bigger, use a converted ConvNet to evaluate the class scores at many spatial positions and then average the class scores.
Lastly, what if we wanted to efficiently apply the original ConvNet over the image but at a stride smaller than 32 pixels? We could achieve this with multiple forward passes. For example, note that if we wanted to use a stride of 16 pixels we could do so by combining the volumes received by forwarding the converted ConvNet twice: First over the original image and second over the image but with the image shifted spatially by 16 pixels along both width and height.
• An IPython Notebook on Net Surgery shows how to perform the conversion in practice, in code (using Caffe).
### ConvNet Architectures
We have seen that Convolutional Networks are commonly made up of only three layer types: CONV, POOL (we assume Max pool unless stated otherwise) and FC (short for fully-connected). We will also explicitly write the RELU activation function as a layer, which applies elementwise non-linearity. In this section we discuss how these are commonly stacked together to form entire ConvNets.
#### Layer Patterns
The most common form of a ConvNet architecture stacks a few CONV-RELU layers, follows them with POOL layers, and repeats this pattern until the image has been merged spatially to a small size. At some point, it is common to transition to fully-connected layers. The last fully-connected layer holds the output, such as the class scores. In other words, the most common ConvNet architecture follows the pattern:
INPUT -> [[CONV -> RELU]*N -> POOL?]*M -> [FC -> RELU]*K -> FC
where the * indicates repetition, and the POOL? indicates an optional pooling layer. Moreover, N >= 0 (and usually N <= 3), M >= 0, K >= 0 (and usually K < 3). For example, here are some common ConvNet architectures you may see that follow this pattern:
• INPUT -> FC, implements a linear classifier. Here N = M = K = 0.
• INPUT -> CONV -> RELU -> FC
• INPUT -> [CONV -> RELU -> POOL]*2 -> FC -> RELU -> FC. Here we see that there is a single CONV layer between every POOL layer.
• INPUT -> [CONV -> RELU -> CONV -> RELU -> POOL]*3 -> [FC -> RELU]*2 -> FC Here we see two CONV layers stacked before every POOL layer. This is generally a good idea for larger and deeper networks, because multiple stacked CONV layers can develop more complex features of the input volume before the destructive pooling operation.
Prefer a stack of small filter CONV to one large receptive field CONV layer. Suppose that you stack three 3×3 CONV layers on top of each other (with non-linearities in between, of course). In this arrangement, each neuron on the first CONV layer has a 3×3 view of the input volume. A neuron on the second CONV layer has a 3×3 view of the first CONV layer, and hence by extension a 5×5 view of the input volume. Similarly, a neuron on the third CONV layer has a 3×3 view of the 2nd CONV layer, and hence a 7×7 view of the input volume. Suppose that instead of these three layers of 3×3 CONV, we only wanted to use a single CONV layer with 7×7 receptive fields. These neurons would have a receptive field size of the input volume that is identical in spatial extent (7×7), but with several disadvantages. First, the neurons would be computing a linear function over the input, while the three stacks of CONV layers contain non-linearities that make their features more expressive. Second, if we suppose that all the volumes have C channels, then it can be seen that the single 7×7 CONV layer would contain C×(7×7×C) = 49C² parameters, while the three 3×3 CONV layers would only contain 3×(C×(3×3×C)) = 27C² parameters. Intuitively, stacking CONV layers with tiny filters as opposed to having one CONV layer with big filters allows us to express more powerful features of the input, and with fewer parameters. As a practical disadvantage, we might need more memory to hold all the intermediate CONV layer results if we plan to do backpropagation.
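Both claims — the 7×7 effective receptive field and the 27C² vs 49C² parameter counts — can be checked with a few lines. `effective_rf` is a hypothetical helper that assumes stride 1 throughout (the general stride case is more involved):

```python
def effective_rf(filter_sizes):
    """Effective receptive field of stacked stride-1 CONV layers."""
    rf = 1
    for F in filter_sizes:
        rf += F - 1          # each stride-1 layer grows the view by F - 1
    return rf

assert effective_rf([3, 3]) == 5       # two 3x3 layers see a 5x5 patch
assert effective_rf([3, 3, 3]) == 7    # three see 7x7, same as one 7x7 layer

# Parameter counts with C channels in and out everywhere (biases ignored):
C = 64
params_three_3x3 = 3 * (C * 3 * 3 * C)   # 27 C^2
params_one_7x7 = C * 7 * 7 * C           # 49 C^2
assert params_three_3x3 < params_one_7x7
```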
Recent departures. It should be noted that the conventional paradigm of a linear list of layers has recently been challenged, in Google’s Inception architectures and also in current (state of the art) Residual Networks from Microsoft Research Asia. Both of these (see details below in case studies section) feature more intricate and different connectivity structures.
In practice: use whatever works best on ImageNet. If you’re feeling a bit of a fatigue in thinking about the architectural decisions, you’ll be pleased to know that in 90% or more of applications you should not have to worry about these. I like to summarize this point as “don’t be a hero”: Instead of rolling your own architecture for a problem, you should look at whatever architecture currently works best on ImageNet, download a pretrained model and finetune it on your data. You should rarely ever have to train a ConvNet from scratch or design one from scratch. I also made this point at the Deep Learning school.
#### Layer Sizing Patterns
Until now we’ve omitted mentions of common hyperparameters used in each of the layers in a ConvNet. We will first state the common rules of thumb for sizing the architectures and then follow the rules with a discussion of the notation:
The input layer (that contains the image) should be divisible by 2 many times. Common numbers include 32 (e.g. CIFAR-10), 64, 96 (e.g., STL-10), or 224 (e.g., common ImageNet ConvNets), 384, and 512.
The conv layers should be using small filters (e.g., 3×3 or at most 5×5), using a stride of S=1, and crucially, padding the input volume with zeros in such a way that the conv layer does not alter the spatial dimensions of the input. That is, when F=3, then using P=1 will retain the original size of the input. When F=5, use P=2. For a general F, it can be seen that P = (F − 1)/2 preserves the input size. If you must use bigger filter sizes (such as 7×7 or so), it is only common to see this on the very first conv layer that is looking at the input image.
The pool layers are in charge of downsampling the spatial dimensions of the input. The most common setting is to use max-pooling with 2×2 receptive fields (i.e., F=2), and with a stride of 2 (i.e., S=2). Note that this discards exactly 75% of the activations in an input volume (due to downsampling by 2 in both width and height). Another slightly less common setting is to use 3×3 receptive fields with a stride of 2. It is very uncommon to see receptive field sizes for max pooling that are larger than 3 because the pooling is then too lossy and aggressive. This usually leads to worse performance.
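These rules of thumb follow directly from the output-size formula, and can be verified in a couple of lines (illustrative sketch):

```python
def conv_out(W, F, S, P):
    """Spatial output size of a CONV/POOL layer: (W - F + 2P)/S + 1."""
    return (W - F + 2 * P) // S + 1

# P = (F - 1)/2 keeps a stride-1 CONV from changing the spatial size:
for F in (3, 5, 7):
    P = (F - 1) // 2
    assert conv_out(224, F, S=1, P=P) == 224

# A 2x2 stride-2 POOL (no padding) halves the spatial size:
assert conv_out(224, F=2, S=2, P=0) == 112
```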
Reducing sizing headaches. The scheme presented above is pleasing because all the CONV layers preserve the spatial size of their input, while the POOL layers alone are in charge of down-sampling the volumes spatially. In an alternative scheme where we use strides greater than 1 or don’t zero-pad the input in CONV layers, we would have to very carefully keep track of the input volumes throughout the CNN architecture and make sure that all strides and filters “work out”, and that the ConvNet architecture is nicely and symmetrically wired.
Why use stride of 1 in CONV? Smaller strides work better in practice. Additionally, as already mentioned stride 1 allows us to leave all spatial down-sampling to the POOL layers, with the CONV layers only transforming the input volume depth-wise.
Why use padding? In addition to the aforementioned benefit of keeping the spatial sizes constant after CONV, doing this actually improves performance. If the CONV layers were to not zero-pad the inputs and only perform valid convolutions, then the size of the volumes would reduce by a small amount after each CONV, and the information at the borders would be “washed away” too quickly.
Compromising based on memory constraints. In some cases (especially early in the ConvNet architectures), the amount of memory can build up very quickly with the rules of thumb presented above. For example, filtering a 224x224x3 image with three 3×3 CONV layers with 64 filters each and padding 1 would create three activation volumes of size [224x224x64]. This amounts to a total of about 10 million activations, or 72MB of memory (per image, for both activations and gradients). Since GPUs are often bottlenecked by memory, it may be necessary to compromise. In practice, people prefer to make the compromise at only the first CONV layer of the network. For example, one compromise might be to use a first CONV layer with filter sizes of 7×7 and stride of 2 (as seen in a ZF net). As another example, an AlexNet uses filter sizes of 11×11 and stride of 4.
#### Case Studies
There are several architectures in the field of convolutional networks that have a name. The most common are:
• LeNet. The first successful applications of Convolutional Networks were developed by Yann LeCun in 1990s. Of these, the best known is the LeNet architecture that was used to read zip codes, digits, etc.
• AlexNet. The first work that popularized Convolutional Networks in Computer Vision was the AlexNet, developed by Alex Krizhevsky, Ilya Sutskever and Geoff Hinton. The AlexNet was submitted to the ImageNet ILSVRC challenge in 2012 and significantly outperformed the second runner-up (top 5 error of 16% compared to runner-up with 26% error). The Network had a very similar architecture to LeNet, but was deeper, bigger, and featured Convolutional Layers stacked on top of each other (previously it was common to only have a single CONV layer always immediately followed by a POOL layer).
• ZF Net. The ILSVRC 2013 winner was a Convolutional Network from Matthew Zeiler and Rob Fergus. It became known as the ZFNet (short for Zeiler & Fergus Net). It was an improvement on AlexNet by tweaking the architecture hyperparameters, in particular by expanding the size of the middle convolutional layers and making the stride and filter size on the first layer smaller.
• GoogLeNet. The ILSVRC 2014 winner was a Convolutional Network from Szegedy et al. from Google. Its main contribution was the development of an Inception Module that dramatically reduced the number of parameters in the network (4M, compared to AlexNet with 60M). Additionally, this paper uses Average Pooling instead of Fully Connected layers at the top of the ConvNet, eliminating a large amount of parameters that do not seem to matter much. There are also several followup versions to the GoogLeNet, most recently Inception-v4.
• VGGNet. The runner-up in ILSVRC 2014 was the network from Karen Simonyan and Andrew Zisserman that became known as the VGGNet. Its main contribution was in showing that the depth of the network is a critical component for good performance. Their final best network contains 16 CONV/FC layers and, appealingly, features an extremely homogeneous architecture that only performs 3×3 convolutions and 2×2 pooling from the beginning to the end. Their pretrained model is available for plug and play use in Caffe. A downside of the VGGNet is that it is more expensive to evaluate and uses a lot more memory and parameters (140M). Most of these parameters are in the first fully connected layer, and it was since found that these FC layers can be removed with no performance downgrade, significantly reducing the number of necessary parameters.
• ResNet. The Residual Network developed by Kaiming He et al. was the winner of ILSVRC 2015. It features special skip connections and a heavy use of batch normalization. The architecture is also missing fully connected layers at the end of the network. The reader is also referred to Kaiming’s presentation (video, slides), and some recent experiments that reproduce these networks in Torch. ResNets are currently by far state of the art Convolutional Neural Network models and are the default choice for using ConvNets in practice (as of May 10, 2016). In particular, also see more recent developments that tweak the original architecture from Kaiming He et al. in Identity Mappings in Deep Residual Networks (published March 2016).
VGGNet in detail. Lets break down the VGGNet in more detail as a case study. The whole VGGNet is composed of CONV layers that perform 3×3 convolutions with stride 1 and pad 1, and of POOL layers that perform 2×2 max pooling with stride 2 (and no padding). We can write out the size of the representation at each step of the processing and keep track of both the representation size and the total number of weights:
INPUT: [224x224x3] memory: 224*224*3=150K weights: 0
CONV3-64: [224x224x64] memory: 224*224*64=3.2M weights: (3*3*3)*64 = 1,728
CONV3-64: [224x224x64] memory: 224*224*64=3.2M weights: (3*3*64)*64 = 36,864
POOL2: [112x112x64] memory: 112*112*64=800K weights: 0
CONV3-128: [112x112x128] memory: 112*112*128=1.6M weights: (3*3*64)*128 = 73,728
CONV3-128: [112x112x128] memory: 112*112*128=1.6M weights: (3*3*128)*128 = 147,456
POOL2: [56x56x128] memory: 56*56*128=400K weights: 0
CONV3-256: [56x56x256] memory: 56*56*256=800K weights: (3*3*128)*256 = 294,912
CONV3-256: [56x56x256] memory: 56*56*256=800K weights: (3*3*256)*256 = 589,824
CONV3-256: [56x56x256] memory: 56*56*256=800K weights: (3*3*256)*256 = 589,824
POOL2: [28x28x256] memory: 28*28*256=200K weights: 0
CONV3-512: [28x28x512] memory: 28*28*512=400K weights: (3*3*256)*512 = 1,179,648
CONV3-512: [28x28x512] memory: 28*28*512=400K weights: (3*3*512)*512 = 2,359,296
CONV3-512: [28x28x512] memory: 28*28*512=400K weights: (3*3*512)*512 = 2,359,296
POOL2: [14x14x512] memory: 14*14*512=100K weights: 0
CONV3-512: [14x14x512] memory: 14*14*512=100K weights: (3*3*512)*512 = 2,359,296
CONV3-512: [14x14x512] memory: 14*14*512=100K weights: (3*3*512)*512 = 2,359,296
CONV3-512: [14x14x512] memory: 14*14*512=100K weights: (3*3*512)*512 = 2,359,296
POOL2: [7x7x512] memory: 7*7*512=25K weights: 0
FC: [1x1x4096] memory: 4096 weights: 7*7*512*4096 = 102,760,448
FC: [1x1x4096] memory: 4096 weights: 4096*4096 = 16,777,216
FC: [1x1x1000] memory: 1000 weights: 4096*1000 = 4,096,000
TOTAL memory: 24M * 4 bytes ~= 93MB / image (only forward! ~*2 for bwd)
TOTAL params: 138M parameters
As is common with Convolutional Networks, notice that most of the memory (and also compute time) is used in the early CONV layers, and that most of the parameters are in the last FC layers. In this particular case, the first FC layer contains 100M weights, out of a total of 140M.
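The table's weight counts can be re-tallied in a few lines (biases omitted, as in the table above; the layer list is transcribed from it):

```python
# VGG-16 weight tally: CONV layers contribute (3*3*D_in)*D_out weights each,
# FC layers contribute fan_in * fan_out.
conv = [(3, 64), (64, 64), (64, 128), (128, 128),
        (128, 256), (256, 256), (256, 256),
        (256, 512), (512, 512), (512, 512),
        (512, 512), (512, 512), (512, 512)]
conv_params = sum(3 * 3 * d_in * d_out for d_in, d_out in conv)
fc_params = 7 * 7 * 512 * 4096 + 4096 * 4096 + 4096 * 1000

total = conv_params + fc_params
print(f"total: {total:,}")                    # 138,344,128 (~138M)
print(f"FC share: {fc_params / total:.0%}")   # ~89%: parameters live in the FC layers
```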
#### Computational Considerations
The largest bottleneck to be aware of when constructing ConvNet architectures is the memory bottleneck. Many modern GPUs have a limit of 3/4/6GB memory, with the best GPUs having about 12GB of memory. There are three major sources of memory to keep track of:
• From the intermediate volume sizes: These are the raw number of activations at every layer of the ConvNet, and also their gradients (of equal size). Usually, most of the activations are on the earlier layers of a ConvNet (i.e., first Conv Layers). These are kept around because they are needed for backpropagation, but a clever implementation that runs a ConvNet only at test time could in principle reduce this by a huge amount, by only storing the current activations at any layer and discarding the previous activations on layers below.
• From the parameter sizes: These are the numbers that hold the network parameters, their gradients during backpropagation, and commonly also a step cache if the optimization is using momentum, Adagrad, or RMSProp. Therefore, the memory to store the parameter vector alone must usually be multiplied by a factor of at least 3 or so.
• Every ConvNet implementation has to maintain miscellaneous memory, such as the image data batches, perhaps their augmented versions, etc.
Once you have a rough estimate of the total number of values (for activations, gradients, and misc), the number should be converted to size in GB. Take the number of values, multiply by 4 to get the raw number of bytes (since every floating point is 4 bytes, or maybe by 8 for double precision), and then divide by 1024 multiple times to get the amount of memory in KB, MB, and finally GB. If your network doesn’t fit, a common heuristic to “make it fit” is to decrease the batch size, since most of the memory is usually consumed by the activations. | 2019-02-18 07:44:28 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6155898571014404, "perplexity": 1404.6784168792851}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-09/segments/1550247484772.43/warc/CC-MAIN-20190218074121-20190218100121-00039.warc.gz"} |
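The conversion described above is mechanical enough to wrap in a helper (the workload numbers below are placeholders, not measurements):

```python
def mem_gb(n_values, bytes_per_value=4):
    # float32 -> 4 bytes per value, float64 -> 8; 1 GB = 1024**3 bytes.
    return n_values * bytes_per_value / 1024 ** 3

# Placeholder workload: ~15.2M activations per image, a batch of 64,
# doubled to account for the gradients kept around for backprop.
acts = 15_200_000 * 64 * 2
print(f"activations alone: {mem_gb(acts):.2f} GB")  # ~7.25 GB for this made-up workload
```

If that number exceeds the card's memory, the heuristic in the text applies: shrink the batch size, since the activation term scales linearly with it.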
https://codereview.stackexchange.com/questions/109963/validating-users-with-roles | # Validating users with Roles
I am using the following to validate my user, which works, but I wish to incorporate roles, so I used the ASP.NET registration tool to add the standard tables such as aspnet_UsersInRoles.
What I basically want is to have variables canView, canDelete, and canEdit that I can just access when my DAL is called.
protected void btnLogin_Click1(object sender, EventArgs e)
{
    Users user = VerifyPassword(txtUserName.Text, txtPassword.Text);
    if (user == null)
    {
        lblerror.Text = "Invalid username or password.";
    }
    else
    {
        UserData userData = new UserData
        {
        };
        FormsAuthenticationTicket ticket = new FormsAuthenticationTicket(
            1,                           // ticket version
            user.UserName,               // authenticated user name
            DateTime.Now,                // issueDate
            DateTime.Now.AddMinutes(30), // expiration
            isPersistent,                // true to persist across browser sessions
            userData.ToString());        // can be used to store additional user data
        // Encrypt the ticket using the machine key
        string encryptedTicket = FormsAuthentication.Encrypt(ticket);
        lblerror.Text = "Success!";
        Response.Redirect("default.aspx");
    }
}
My question is: how would I adapt the following routine to include the roles (the aspnet_UsersInRoles table) to check whether the user has administrative privileges, i.e. a role GUID of ED85788D-72DA-4D0A-8D5E-B5378FC00592?
public Users VerifyPassword(string userName, string password)
{
    //The ".FirstOrDefault()" method will return either the first matched
    //result or null
    Users myUser = SoccerEntities.Users
        .FirstOrDefault(u => u.UserName == userName && u.Password == password);
    if (myUser == null) //User was not found
    {
        //Do something to let them know that their credentials were not valid
        return myUser;
    }
    else //User was found
    {
        isPlayer = true;
        if (isAdmindb(userName))
        {
            isAdmin = true;
        }
        return myUser;
    }
}
Because at the minute I'm setting the isAdmin variable in another function, isAdmindb, so that's three hits to the db, which is not very efficient:
protected bool isAdmindb(string userName)
{
    try
    {
        Users adminUsers = (from users in SoccerEntities.Users
                            where users.UserName == userName
                               && users.roleId == new Guid("ED85788D-72DA-4D0A-8D5E-B5378FC00592")
                            select users).FirstOrDefault();
        if (adminUsers != null)
            return true;
        else
            return false;
    }
    catch (Exception ex)
    {
        return false;
    }
}
Edit: I had a typo in my return!!!
• VerifyPassword() always returns null? What's the point of that? – 200_success Nov 6 '15 at 8:22
• I had a typo, sorry; updated with the correct code – David Buckley Nov 6 '15 at 8:48
• Hi! Welcome to Code Review. Good job on your first post! – TheCoffeeCup Nov 15 '15 at 3:15
public Users VerifyPassword(string userName, string password)
{
    //The ".FirstOrDefault()" method will return either the first matched
    //result or null
    Users myUser = SoccerEntities.Users
        .FirstOrDefault(u => u.UserName == userName && u.Password == password);
    if (myUser == null) //User was not found
    {
        //Do something to let them know that their credentials were not valid
        return myUser;
    }
    else //User was found
    {
        isPlayer = true;
        if (isAdmindb(userName))
        {
            isAdmin = true;
        }
        return myUser;
    }
}
The comments in this method don't serve any purpose and don't add any value to the code, so get rid of them. Comments should describe why something is done, not state the obvious.
A class named Users implies that it holds a collection of User objects, but in your case it seems to be singular, so you should rename this class to User.
Not using braces {} will lead to error-prone code. You should always use them, even where they are optional.
The else part is redundant, because if myUser == null it won't be reached anyway; you should get rid of the else.
If, for instance, isAdmin had been true before this call, it would stay that way, which is bad. Just assign the returned value of isAdmindb to the variable. The same goes for isPlayer.
Applying these points (except for Users vs. User)
public Users VerifyPassword(string userName, string password)
{
Users user = SoccerEntities.Users
    .FirstOrDefault(u => u.UserName == userName && u.Password == password);
if (user == null) { return null; }
return user;
}
looks much cleaner, doesn't it? But hold on, we can do better.
If we take a look at isAdmindb() we notice first that the method name doesn't match the .NET naming guidelines regarding casing: method names should use PascalCase.
The next thing is that a user is considered an admin if its roleId matches a specific Guid. So let us take this Guid and use it in the VerifyPassword() method. But wait, this doesn't seem right, because the responsibility of that method should only be to verify that the supplied username and password are correct. So the isAdmin logic of the former method shouldn't be in there.
So let us change the isAdmindb() method to take a Users object as a method argument and compare the user's roleId with the said Guid, which we extract into a field of the class.
Note that a Guid cannot be declared const in C#, so we use a static readonly field instead:

private static readonly Guid adminRoleId = new Guid("ED85788D-72DA-4D0A-8D5E-B5378FC00592");

private bool IsAdmin(Users user)
{
    return user.roleId == adminRoleId;
}
Now that's short and readable and doesn't need to hit the db.
Now we should take VerifyPassword(), rename it to GetUser(), and remove all the isAdmin and isPlayer stuff, leaving only this
public Users GetUser(string userName, string password)
{
return SoccerEntities.Users
    .FirstOrDefault(u => u.UserName == userName && u.Password == password);
}
Now let us add a FormsAuthenticationTicket GetAuthenticationTicket(Users) method which will do some of the work formerly done by the event handler of that button.
private FormsAuthenticationTicket GetAuthenticationTicket(Users user)
{
    return new FormsAuthenticationTicket(
        version: 1,
        name: user.UserName,
        issueDate: DateTime.Now,
        expiration: DateTime.Now.AddMinutes(30),
        isPersistent: isPersistent,
        userData: user.ToString(),
        cookiePath: FormsAuthentication.FormsCookiePath);
}
As you can see, by using named arguments there is no need for any comments anymore.
Now the former event handler code will look like this
protected void btnLogin_Click1(object sender, EventArgs e)
{
    Users user = GetUser(txtUserName.Text, txtPassword.Text);
    if (user == null)
    {
        lblerror.Text = "Invalid username or password.";
        return;
    }
    FormsAuthenticationTicket ticket = GetAuthenticationTicket(user);
    isAdmin = IsAdmin(user);
    isPlayer = IsPlayer(user); // should be implemented by you in a similar way
    string encryptedTicket = FormsAuthentication.Encrypt(ticket);
    lblerror.Text = "Success!";
    Response.Redirect("default.aspx");
}
Naming
Users adminUsers = (from users in SoccerEntities.Users
select users).FirstOrDefault();
It's not clear why the type is called Users or why the variable is called adminUsers when both the type and the variable appear to refer to only a single instance. Consider User and adminUser instead.
Var
Use var when the type of the local variable is obvious from its assignment.
UserData userData = new UserData
{
};
Should be
var userData = new UserData
{
};
Design
You have a lot of logic in what looks to be view code behind. Consider a framework such as MVVM to separate your view and domain logic.
I am concerned that your DAL has an isPlayer field on it. What does this do? It sounds like it would be better to place this in your Users object.
isAdminDB seems like a useless function to me. Why go back to the database to get a field that should be on your object?
public Users VerifyPassword(string userName, string password)
{
    //The ".FirstOrDefault()" method will return either the first matched
    //result or null
    Users myUser = SoccerEntities.Users
        .FirstOrDefault(u => u.UserName == userName && u.Password == password);
    if (myUser == null) //User was not found
    {
        //Do something to let them know that their credentials were not valid
        return myUser;
    }
    else //User was found
    {
        isPlayer = true;
        if (myUser.roleId == new Guid("ED85788D-72DA-4D0A-8D5E-B5378FC00592"))
        {
            isAdmin = true;
        }
        return myUser;
    }
}
Presumably you can do the same with your isPlayerdb function.
Also, you have a massive magic string with your GUID. Make it a const, give it a name. No maintenance developer is going to know what "ED85788D-72DA-4D0A-8D5E-B5378FC00592" means off the top of their head.
Lastly, your use of GUIDs is concerning to me. I'm not convinced this is the best way to represent the role for a user. Instead, could you give the user a reference to the object from the role table? If you brought this over at the same time you wouldn't even need to check. Just go:
if(myUser.Role.Type == Admin)
or something like that.
• how do I add constants in C#, though? Is that not just a VB thing? – David Buckley Nov 15 '15 at 3:46
• well then how would you suggest I add the type into the role table? – David Buckley Nov 15 '15 at 3:52
• You very much can use constants in C#. Check out msdn.microsoft.com/en-us/library/… – Nick Udell Nov 17 '15 at 9:15
• As for how to add the "type" into the role table, I recommend adding a field to your role table called "type" with either "admin" or "player" entered as values. – Nick Udell Nov 17 '15 at 9:16 | 2020-01-17 23:17:18 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.30046191811561584, "perplexity": 3614.55292305817}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250591234.15/warc/CC-MAIN-20200117205732-20200117233732-00462.warc.gz"} |
https://tex.stackexchange.com/questions/515153/mini-table-of-contents-links-does-not-work | The little hand (link cursor) that appears in my screenshot does not work the same way in my document: it only appears when you hover over the page numbers, not over the entry text. How can I fix it?
MWE:
\documentclass{book}
\usepackage{hyperref}
\usepackage{xcolor}
\usepackage[titles]{tocloft}
\usepackage{minitoc}
\usepackage{theorem}
\newtheorem{theorem}{Theorem}
\hypersetup{
colorlinks=true,
linktoc=page,
filecolor=blue,
urlcolor=blue,
citecolor=red,
%pdfpagemode=FullScreen,
}
\begin{document}
\dominitoc
\frontmatter
\tableofcontents
\mainmatter
\chapter{This is the first chapter}
\minitoc
\section{section}
This is a citation~\cite{ref1}. Theorem~\ref{thm1} provides some interesting information.
\begin{theorem}\label{thm1}
Rain gets you wet.
\end{theorem}
\section{section}
\section{section}
\bibliographystyle{alpha}
\begin{thebibliography}{Smi19}
\bibitem[Smi19]{ref1}
John Smith.
\newblock Citing in red.
\newblock {\em Journal of Hyperlink Colors}, 2019.
\end{thebibliography}
\end{document}
• Try to comment linktoc=page – flav Nov 6 '19 at 5:38
As per your code, the link is applied to the page numbers in the mini TOC. If you want the link on the entry text, remove the option linktoc=page from your hypersetup (as flav already advised), or use linktoc=all to make both the text and the page numbers clickable.
http://math.ecnu.edu.cn/RCFOA/seminar_template.php?id=623 | A topological proof of K-triviality of flat bundles
Valerio Proietti (University of Copenhagen)
14:00-16:00, Nov 13, 2018, Room 206
Abstract:
Given a finitely generated Hilbert module bundle over a compact Hausdorff space, one gets a K-theory class for the algebra of continuous functions on the base with values in the ground C^*-algebra. A topological argument involving the Chern character shows that this class is trivial and this simple fact implies an index theorem in the style of Atiyah's celebrated result on the equality between the ordinary index and the L^2-index. | 2019-01-17 21:49:33 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8304097056388855, "perplexity": 466.9540906345639}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-04/segments/1547583659340.1/warc/CC-MAIN-20190117204652-20190117230652-00098.warc.gz"} |
http://mathoverflow.net/feeds/question/33426 | # String of integers puzzle

Asked by Erik on 2010-07-26.

I apologize for not having the math background to put this question in a more formal way. I'm looking to create a string of 796 letters (or integers) with certain properties.

Basically, the string is a variation on a De Bruijn sequence B(12,4), except order and repetition within each n-length subsequence are disregarded; i.e. ABBB, BABA, BBBA are each equivalent to {AB}. In other words, the main property of the string involves looking at consecutive groups of 4 letters within the larger string (i.e. the 1st through 4th letters, the 2nd through 5th letters, the 3rd through 6th letters, etc.) and then producing the set of letters that comprise each group (repetitions and order disregarded).

For example, in the string of 9 letters

A B B A C E B C D

the first 4-letter group is ABBA, which is comprised of the set {AB}; the second group is BBAC, which is comprised of the set {ABC}; the third group is BACE, which is comprised of the set {ABCE}; etc.

The goal is for every combination of 1-4 letters from a set of N letters to be represented by the 1-4-letter resultant sets of the 4-element groups once and only once in the original string.

For example, if there is a set of 5 letters {A, B, C, D, E}, then the possible 1-4 letter combinations are: A, B, C, D, E, AB, AC, AD, AE, BC, BD, BE, CD, CE, DE, ABC, ABD, ABE, ACD, ACE, ADE, BCD, BCE, BDE, CDE, ABCD, ABCE, ABDE, ACDE, BCDE.

Here is a working example that uses a set of 5 letters {A, B, C, D, E}:

D D D D E C B B B B A E C C C C D A E E E E B D A A A A C B D D B

The 1st through 4th elements form the set D; the 2nd through 5th elements form the set DE; the 3rd through 6th elements form the set CDE; the 4th through 7th elements form the set BCDE; the 5th through 8th elements form the set BCE; the 6th through 9th elements form the set BC; the 7th through 10th elements form the set B; etc.

**I am hoping to find a working example of a string that uses 12 different letters (a total of 793 4-letter groups within a 796-letter string) starting (and if possible ending) with 4 of the same letter.**

Here is a working solution for 7 letters:

AAAABCDBEAAACDECFAAADBFBACEAGAADEFBAGACDFBGCCCCDGEAFAGCBEEECGFFBFEGGGGFDEEEEFCBBBBGDCFFFFDAGBEGDDDDBE

## Answer by James Currie (2010-08-15)

An interesting variation would be to close the sequence off to form a "necklace" or "circular word", so that (for example) taking the last two letters and affixing the first two letters would give a set not generated from any of the length-4 substrings. An example for the set {A,B,C,D,E} is:

AAAABCDDDDABEEEEDACCCCEDBBBBCE

Note that whereas your sequence for {A,B,C,D,E} has length 33, the circular sequence has length 30. If we simply tack on the first 3 letters:

AAAABCDDDDABEEEEDACCCCEDBBBBCEAAA

we get a sequence of your type of length 33.

## Answer by ABh (2010-08-30)

You could, of course, use brute force but simplify the search tree by pruning with a backtracking algorithm. However, it intrigues me to think of it in terms of a more elegant approach, like the *necklace* listed above. Here are some of my working ideas on it thus far...

Start by breaking down the requirements into succinct groupings of requisite **necessary** sequences, and groupings of **forbidden** sequences.

Since your alphabet has the letters $S_{letters} = \{A, B, \dots, K, L\}$ in your specific case of 12 letters, the string must contain the sets {{A}, {B}, ..., {K}, {L}}. Each of these sets must be represented by a string of four repetitions of the letter, therefore your sequence must contain

$e^4$ for each $e \in S_{letters}$.

Thus the lower bound for the size of the target string is $|S_{letters}| \times 4 = 48$.

The sequence $x^4 y^4$, $x \in S$, $y \in S$, is forbidden, as the substrings of $xxxxyyyy$ lead to the set {xy} being replicated 3 times:

{x}, {xy}, {xy}, {xy}, {y}.

More strongly, the sequence $x^3 y^3$ is forbidden, as the substrings of $xxxyyy$ also lead to the set {xy} being replicated 3 times:

{xy}, {xy}, {xy}.

Even more strongly, $x^3 y^2$ is forbidden, as is $x^2 y^3$, as each leads to the duplicate set sequence {xy}, {xy}.

Thus each of the *4-repeats* $e_i^4$ must be surrounded by letters which are not duplicated even once, e.g. $e_a e_b e_i^4 e_x e_y$, where $e_i \notin \{e_a, e_b, e_x, e_y\}$.

In response to a comment below, let me explain that if $e_a = e_i$ or if $e_y = e_i$, then the sequence $e_a e_b e_i^4 e_x e_y$ will contain duplicates:

- If $e_a = e_i$, then $e_i e_b e_i^4 e_x e_y$ leads to $e_i e_b e_i e_i \dots$ and $\dots e_b e_i e_i e_i \dots$, both of which belong to the set $\{e_b e_i\}$.
- If $e_y = e_i$, then $e_a e_b e_i^4 e_x e_i$ leads to $\dots e_i e_i e_i e_x \dots$ and $\dots e_i e_i e_x e_i \dots$, both of which belong to the set $\{e_i e_x\}$.

Also in the necessary set are the $\binom{12}{4} = 495$ ways to pick 4 out of the 12 letters of your alphabet. These could be strung together and overlap, but they must be included in the sequence.

## Answer by fedja (2010-09-12)

This is something that doesn't solve your problem but may give you some idea.

Consider the 11-letter case and ask for each set of 4 *distinct* letters to appear exactly once. Put the letters on a circle. Look at every triple that is oriented counterclockwise. Now let $a$ and $b$ be the distances (along the circle, counterclockwise) between the first and the second and between the second and the third letters respectively. Suppose we have two triples that can be joined together into a quadruple with distances $a,b$ and $b,c$. We put an edge between them if, first, $a+b+c<11$ and, second, the number $d$ that complements the sum to 11 is the largest number in the set $\{a,b,c,d\}$ that does not repeat (we call such a triple $\{a,b,c\}$ admissible). Also, write the corresponding quadruple on that edge. Then every set ABCD of four letters appears once on some edge (the only way to get it is to have A, B, C, D arranged so that they go counterclockwise, and then the side of the corresponding quadrangle that is the largest (as measured along the circle) non-repeating one should not be there, which determines the triples). Also, the in-degree of each triple is equal to the out-degree. Thus, it remains to show that the graph is connected to get the cycle going over each edge once. The key is that if $a,b$ forms an admissible triple with anything at all, then it does so with 1. Thus we can go $a,b,1,1,a',b'$ to take care of the length switch and $a,b,1,a,b$ to take care of the position switch.

There are two drawbacks in this argument:

1) It doesn't allow one to do subsets of smaller size immediately. This, I believe, can be fixed with some effort.

2) One can spend eternity looking at the set $(3,3,3,3)$ and deciding which $3$ here is exceptional (needless to say, my "largest non-repeating number" rule was there just to define a unique exceptional element in any set of four positive integers adding up to 11. Almost any other rule would work just as well).
This is a real trouble that also tempts me to ask if it is a pure coincidence that you gave examples for 5 and 7 letters but conveniently skipped 6.</p> | 2013-05-20 03:30:48 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7952408790588379, "perplexity": 605.4126877074649}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 5, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368698222543/warc/CC-MAIN-20130516095702-00002-ip-10-60-113-184.ec2.internal.warc.gz"} |
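The 5-letter example in the question can be verified mechanically. A short checker (my own sketch, not from any of the answers; the string below is the question's example with spaces removed):

```python
from itertools import combinations

def window_sets(s, w=4):
    # The set of letters in each w-length sliding window of s.
    return [frozenset(s[i:i + w]) for i in range(len(s) - w + 1)]

def covers_once(s, alphabet, w=4):
    # True iff every non-empty subset of at most w letters of `alphabet`
    # occurs as a window set exactly once.
    wanted = {frozenset(c)
              for k in range(1, w + 1)
              for c in combinations(alphabet, k)}
    got = window_sets(s, w)
    return len(got) == len(wanted) and set(got) == wanted

example = "DDDDECBBBBAECCCCDAEEEEBDAAAACBDDB"   # the 5-letter example above
print(covers_once(example, "ABCDE"))            # True
```

The same function extends directly to a brute-force or backtracking search for the 12-letter case, although the search space there is far too large for naive enumeration.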
https://www.quizover.com/online/course/13-7-more-applications-of-magnetism-by-openstax?page=5&= | # 13.7 More applications of magnetism (Page 6/12)
[link] shows a long straight wire just touching a loop carrying a current $I_1$. Both lie in the same plane. (a) What direction must the current $I_2$ in the straight wire have to create a field at the center of the loop in the direction opposite to that created by the loop? (b) What is the ratio of $I_1/I_2$ that gives zero field strength at the center of the loop? (c) What is the direction of the field directly above the loop under this circumstance?
Find the magnitude and direction of the magnetic field at the point equidistant from the wires in [link] (a), using the rules of vector addition to sum the contributions from each wire.
$7.55 \times 10^{-5}\ \mathrm{T}$, $23.4^\circ$
Find the magnitude and direction of the magnetic field at the point equidistant from the wires in [link] (b), using the rules of vector addition to sum the contributions from each wire.
What current is needed in the top wire in [link] (a) to produce a field of zero at the point equidistant from the wires, if the currents in the bottom two wires are both 10.0 A into the page?
10.0 A
Calculate the size of the magnetic field 20 m below a high voltage power line. The line carries 450 MW at a voltage of 300,000 V.
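A sketch of the power-line estimate (my own working, not from the text): model the line as a single long straight wire carrying $I = P/V$ and apply $B = \mu_0 I / (2\pi r)$.

```python
from math import pi

MU_0 = 4 * pi * 1e-7                  # permeability of free space, T*m/A

def field_long_wire(current, distance):
    # B = mu_0 * I / (2 * pi * r) for a long straight wire.
    return MU_0 * current / (2 * pi * distance)

current = 450e6 / 300e3               # I = P / V = 1500 A
print(f"{field_long_wire(current, 20.0):.1e} T")   # 1.5e-05 T
```

About a third of Earth's field, which is why such estimates come up in power-line exposure discussions.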
Integrated Concepts
(a) A pendulum is set up so that its bob (a thin copper disk) swings between the poles of a permanent magnet as shown in [link]. What is the magnitude and direction of the magnetic force on the bob at the lowest point in its path, if it has a positive $0.250\ \mu\mathrm{C}$ charge and is released from a height of 30.0 cm above its lowest point? The magnetic field strength is 1.50 T. (b) What is the acceleration of the bob at the bottom of its swing if its mass is 30.0 grams and it is hung from a flexible string? Be certain to include a free-body diagram as part of your analysis.
(a) $9.09 \times 10^{-7}\ \mathrm{N}$ upward
(b) $3.03 \times 10^{-5}\ \mathrm{m/s^2}$
Integrated Concepts
(a) What voltage will accelerate electrons to a speed of $6.00 \times 10^{-7}\ \mathrm{m/s}$? (b) Find the radius of curvature of the path of a proton accelerated through this potential in a 0.500-T field and compare this with the radius of curvature of an electron accelerated through the same potential.
Integrated Concepts
Find the radius of curvature of the path of a 25.0-MeV proton moving perpendicularly to the 1.20-T field of a cyclotron.
60.2 cm
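The 60.2 cm answer can be reproduced non-relativistically with $r = mv/(qB)$, taking $v$ from $KE = \tfrac{1}{2}mv^2$ (at 25 MeV the relativistic correction is only a couple of percent, so treat this as a sketch):

```python
from math import sqrt

M_P = 1.673e-27          # proton mass, kg
Q_E = 1.602e-19          # elementary charge, C

def cyclotron_radius(ke_mev, b_field):
    ke_joules = ke_mev * 1e6 * Q_E
    v = sqrt(2 * ke_joules / M_P)          # non-relativistic speed
    return M_P * v / (Q_E * b_field)       # r = m v / (q B)

print(f"{cyclotron_radius(25.0, 1.20) * 100:.1f} cm")   # ~60.2 cm
```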
Integrated Concepts
To construct a nonmechanical water meter, a 0.500-T magnetic field is placed across the supply water pipe to a home and the Hall voltage is recorded. (a) Find the flow rate in liters per second through a 3.00-cm-diameter pipe if the Hall voltage is 60.0 mV. (b) What would the Hall voltage be for the same flow rate through a 10.0-cm-diameter pipe with the same field applied?
Integrated Concepts
(a) Using the values given for an MHD drive in [link], and assuming the force is uniformly applied to the fluid, calculate the pressure created in $\mathrm{N/m^2}$. (b) Is this a significant fraction of an atmosphere?
(a) $1.02 \times 10^{3}\ \mathrm{N/m^2}$
(b) Not a significant fraction of an atmosphere
Integrated Concepts
(a) Calculate the maximum torque on a 50-turn, 1.50 cm radius circular current loop carrying $50\ \mu\mathrm{A}$ in a 0.500-T field. (b) If this coil is to be used in a galvanometer that reads $50\ \mu\mathrm{A}$ full scale, what force constant spring must be used, if it is attached 1.00 cm from the axis of rotation and is stretched by the $60^\circ$ arc moved?
https://proofwiki.org/wiki/Book:William_E._Boyce/Elementary_Differential_Equations_and_Boundary_Value_Problems/Fourth_Edition | # Book:William E. Boyce/Elementary Differential Equations and Boundary Value Problems/Fourth Edition
## William E. Boyce and Richard C. DiPrima: Elementary Differential Equations and Boundary Value Problems (4th Edition)
Published $\text {1986}$ | 2023-04-02 05:25:02 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.17126713693141937, "perplexity": 7043.909094140658}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296950383.8/warc/CC-MAIN-20230402043600-20230402073600-00278.warc.gz"} |
https://www.physicsforums.com/threads/help-quantum-mechanics.184203/ | # Help! Quantum Mechanics
1. Sep 12, 2007
### Ming0407
my Q1 ans. when U=100v, wavelength=1.228 x 10^-10 m
when U=10000v, wavelength=1.228 x 10^-11 m
Q2 ans. help me........
Last edited: Sep 12, 2007
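As a cross-check on the Q1 numbers, the non-relativistic formula $\lambda = h/\sqrt{2 m_e e U}$ can be evaluated directly (CODATA constants; the poster's 1.228 Å agrees to within rounding of the constants used):

```python
import math

h = 6.62607015e-34     # Planck constant, J s
m_e = 9.1093837015e-31 # electron mass, kg
e = 1.602176634e-19    # elementary charge, C

def de_broglie(U):
    """Non-relativistic de Broglie wavelength (m) of an electron
    accelerated from rest through a potential difference of U volts."""
    return h / math.sqrt(2 * m_e * e * U)

lam_100 = de_broglie(100)      # ~1.23e-10 m
lam_10000 = de_broglie(10000)  # ~1.23e-11 m, i.e. 10x smaller
```
Since $\lambda \propto U^{-1/2}$, multiplying the voltage by 100 divides the wavelength by exactly 10.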
2. Sep 12, 2007
### Kurdt
Staff Emeritus
Welcome to PF! I'm afraid you will need to show your attempt at a solution before anybody can help you. It is the policy of this forum.
3. Sep 12, 2007
### Ming0407
Sorry, I didn't know the policy of this forum. Can anybody help me?
4. Sep 13, 2007
### Kurdt
Staff Emeritus
Question 1 seems to be fine. You've found the momentum and used the de Broglie wavelength equation. For question 2 you will have to use the plane wave $\Psi (x,t) = Ae^{i(kx-\omega t)}$ in the wave equation and perform the differentiations. Then you should be able to do part one and part two.
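(A purely numerical illustration of the hint above, for anyone following along — the constants are arbitrary: differentiating the plane wave just pulls factors of $-i\omega$ and $-k^2$ out of the exponent.)

```python
import cmath

A, k, w = 2.0, 3.0, 5.0   # arbitrary illustrative amplitude, wavenumber, frequency

def psi(x, t):
    """Plane wave A * e^{i(k x - omega t)}."""
    return A * cmath.exp(1j * (k * x - w * t))

# Finite-difference check that dPsi/dt = -i*omega*Psi and
# d2Psi/dx2 = -k**2 * Psi at an arbitrary point.
x0, t0, h = 0.7, 0.4, 1e-5
ddt = (psi(x0, t0 + h) - psi(x0, t0 - h)) / (2 * h)
d2dx2 = (psi(x0 + h, t0) - 2 * psi(x0, t0) + psi(x0 - h, t0)) / h ** 2

err_t = abs(ddt - (-1j * w) * psi(x0, t0))     # tiny, limited by step size
err_x = abs(d2dx2 - (-k ** 2) * psi(x0, t0))   # tiny, limited by step size
```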
5. Sep 13, 2007
### Ming0407
I don't know how to perform the differentiation; I haven't studied differentiation yet, so I don't know how to do question 2 either.
6. Sep 13, 2007
### malawi_glenn
Well why are you doing QM if you haven't done enough math before?
And there are tons of tutorials on the internet on calculus. Do you want us to show you some of those?
7. Sep 14, 2007
### TimNguyen
You're taking QM and have not taken Calculus I?
How is that possible?
8. Sep 14, 2007
### Kurdt
Staff Emeritus
I'm not sure we can help you other than giving you the answer in that case, which is against the spirit of the forum. I could spend 30 minutes typing out a tutorial on how to differentiate exponentials but frankly I don't think it would help when you have never taken any calculus and you're doing a QM course.
I must ask, are you self-studying or on a college or university course? | 2017-11-22 09:37:52 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.43471986055374146, "perplexity": 1541.9169927173832}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-47/segments/1510934806543.24/warc/CC-MAIN-20171122084446-20171122104446-00538.warc.gz"} |
http://arxiver.moonhats.com/2017/11/13/theoretical-investigation-on-the-mass-loss-impact-on-asteroseismic-grid-based-estimates-of-mass-radius-and-age-for-rgb-stars-ssa/ | # Theoretical investigation on the mass loss impact on asteroseismic grid-based estimates of mass, radius, and age for RGB stars [SSA]
We aim to perform a theoretical evaluation of the impact of the mass loss indetermination on asteroseismic grid based estimates of masses, radii, and ages of stars in the red giant branch phase (RGB). We adopted the SCEPtER pipeline on a grid spanning the mass range [0.8; 1.8] Msun. As observational constraints, we adopted the star effective temperatures, the metallicity [Fe/H], the average large frequency spacing $\Delta \nu,$ and the frequency of maximum oscillation power $\nu_{\rm max}$. The mass loss was modelled following a Reimers parametrization with the two different efficiencies $\eta = 0.4$ and $\eta = 0.8$. In the RGB phase, the average error owing only to observational uncertainty on mass and age estimates is about 8% and 30% respectively. The bias in mass and age estimates caused by the adoption of a wrong mass loss parameter in the recovery is minor for the vast majority of the RGB evolution. The biases get larger only after the RGB bump. In the last 2.5% of the RGB lifetime the error on the mass determination reaches 6.5% becoming larger than the random error component in this evolutionary phase. The error on the age estimate amounts to 9%, that is, equal to the random error uncertainty. These results are independent of the stellar metallicity [Fe/H] in the explored range. Asteroseismic-based estimates of stellar mass, radius, and age in the RGB phase can be considered mass loss independent within the range ($\eta \in [0.0, 0.8]$) as long as the target is in an evolutionary phase preceding the RGB bump.
G. Valle, M. DellOmodarme, P. Moroni, et. al.
Mon, 13 Nov 17
6/46
Comments: Accepted for publication in A&A | 2017-12-11 02:00:38 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7569220662117004, "perplexity": 1828.6601879091759}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-51/segments/1512948512054.0/warc/CC-MAIN-20171211014442-20171211034442-00103.warc.gz"} |
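For readers unfamiliar with how $\Delta\nu$ and $\nu_{\rm max}$ constrain mass and radius, a sketch of the standard asteroseismic scaling relations (the solar reference values below are common literature choices and an assumption of this sketch, not quantities taken from the paper):

```python
# Solar reference values commonly adopted in the literature (assumed here;
# exact choices vary by author).
NU_MAX_SUN = 3090.0   # muHz
DNU_SUN = 135.1       # muHz
TEFF_SUN = 5777.0     # K

def scaling_mass_radius(nu_max, dnu, teff):
    """Stellar mass and radius (solar units) from the scaling relations
    nu_max ~ g / sqrt(Teff) and dnu ~ sqrt(mean density)."""
    m = (nu_max / NU_MAX_SUN) ** 3 * (dnu / DNU_SUN) ** -4 * (teff / TEFF_SUN) ** 1.5
    r = (nu_max / NU_MAX_SUN) * (dnu / DNU_SUN) ** -2 * (teff / TEFF_SUN) ** 0.5
    return m, r

# Solar inputs recover 1 Msun, 1 Rsun by construction; an RGB-like set of
# observables (illustrative numbers) comes out near 0.9 Msun and 10 Rsun.
m_sun, r_sun = scaling_mass_radius(3090.0, 135.1, 5777.0)
m_rgb, r_rgb = scaling_mass_radius(30.0, 4.0, 4800.0)
```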
https://www.physicsforums.com/threads/products-of-embedded-submanifolds.718588/ | # Products of Embedded Submanifolds
1. Oct 24, 2013
### Arkuski
I'm trying to come up with a simple proof that if $M$ is an embedded submanifold of $N$, and $P$ is an embedded submanifold of $Q$, then $M×P$ is an embedded submanifold of $N×Q$. I'm thinking this could be done easily either by showing that $M×P$ satisfies the local $k$-slice condition, or by showing that the product of smooth embeddings (from the respective inclusion maps) is also a smooth embedding.
2. Oct 24, 2013
### R136a1
Yes, both the slice condition and the map condition work. But what did you try? Where are you stuck?
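For the record, a sketch of the slice-chart route (notation as in the local $k$-slice condition of Lee's book; this is a sketch of one approach, not the thread's own resolution):

```latex
% If (U,\varphi) is a k-slice chart for M in N (with \dim N = n) and
% (V,\psi) is an l-slice chart for P in Q (with \dim Q = q), then
% (U \times V, \varphi \times \psi) is a chart for N \times Q, and
(\varphi\times\psi)\bigl((U\times V)\cap(M\times P)\bigr)
  \;=\; \bigl\{(x,y) : x^{k+1}=\dots=x^{n}=0,\ \, y^{l+1}=\dots=y^{q}=0\bigr\},
% which becomes a (k+l)-slice after permuting the coordinates so the
% vanishing ones come last.  Hence M \times P is embedded in N \times Q.
```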
https://physics.paperswithcode.com/paper/gravitational-collapse-of-a-circularly | Gravitational collapse of a circularly symmetric star
26 Sep 2015 · Sharma Ranjan, Das Shyam, Rahaman Farook, Shit Gopal Chandra ·
We investigate the collapse of a circularly symmetric star with outgoing radiation in ($2+1$)-dimensional anti-de Sitter spacetime. The exterior spacetime of the collapsing star is assumed to be described by the non-static generalization of the Bañados, Teitelboim and Zanelli [Phys. Rev. Lett. 69 (1992) 1849] metric. Making use of the junction conditions joining smoothly the interior and the exterior spacetimes across the boundary, we analyze the impacts of various factors on the evolution of the star, which begins its collapse from an initial static configuration. In particular, depending on initial conditions, two possible outcomes of the collapse process are shown: (i) formation of a BTZ black hole, and (ii) evaporation of all mass-energy even before the singularity is reached.
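For reference, the static ($J=0$) BTZ metric whose non-static, radiating generalization serves as the exterior here is (sign and unit conventions may differ slightly from the paper's):

```latex
ds^2 \;=\; -\Bigl(\frac{r^2}{\ell^2} - M\Bigr) dt^2
  \;+\; \Bigl(\frac{r^2}{\ell^2} - M\Bigr)^{-1} dr^2
  \;+\; r^2\, d\phi^2 ,
\qquad \Lambda = -\frac{1}{\ell^2},
```

where $\ell$ is the AdS$_3$ curvature radius and $M$ the (dimensionless) mass parameter.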
https://cosmologyquestionoftheweek.blogspot.com/2014/10/eddington-temperature.html | ## Wednesday, October 22, 2014
### Eddington temperature
Astrophysical objects are limited in their luminosity by the Eddington luminosity, above which radiation pressure exceeds gravity. Would it be sensible to define a limiting Eddington temperature for Planckian spectra?
#### 1 comment:
1. Quite so: although the Eddington luminosity makes no assumption about the spectral distribution (apart from the fact that it uses the Thomson cross-section, which is only valid for small photon energies $\ll m_ec^2$), it's sensible to express the luminosity via the Stefan–Boltzmann relation in terms of a temperature, $L\propto \sigma_{SB}T^4$.
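Making the comment concrete: setting the blackbody luminosity $4\pi R^2\sigma_{SB}T^4$ equal to $L_{\rm Edd}=4\pi GMm_pc/\sigma_T$ defines such a limiting temperature. A quick numerical sketch (rounded constants; electron-scattering opacity for pure hydrogen assumed):

```python
G = 6.674e-11         # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8           # speed of light, m/s
m_p = 1.673e-27       # proton mass, kg
sigma_T = 6.652e-29   # Thomson cross-section, m^2
sigma_SB = 5.670e-8   # Stefan-Boltzmann constant, W m^-2 K^-4

def eddington_temperature(M, R):
    """Temperature (K) at which a blackbody of radius R radiates at L_Edd(M):
    solve 4*pi*R**2*sigma_SB*T**4 = 4*pi*G*M*m_p*c/sigma_T for T."""
    return (G * M * m_p * c / (sigma_T * sigma_SB * R ** 2)) ** 0.25

# For solar mass and radius this comes out near 8e4 K.
T_sun = eddington_temperature(1.989e30, 6.957e8)
```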
https://johncarlosbaez.wordpress.com/category/physics/ | ## Geometric Quantization (Part 1)
1 December, 2018
I can’t help thinking about geometric quantization. I feel it holds some lessons about the relation between classical and quantum mechanics that we haven’t fully absorbed yet. I want to play my cards fairly close to my chest, because there are some interesting ideas I haven’t fully explored yet… but still, there are also plenty of ‘well-known’ clues that I can afford to explain.
The first one is this. As beginners, we start by thinking of geometric quantization as a procedure for taking a symplectic manifold and constructing a Hilbert space: that is, taking a space of classical states and constructing the corresponding space of quantum states. We soon learn that this procedure requires additional data as its input: a symplectic manifold is not enough. We learn that it works much better to start with a Kähler manifold equipped with a holomorphic hermitian line bundle with a connection whose curvature is the imaginary part of the Kähler structure. Then the space of holomorphic sections of that line bundle gives the Hilbert space we seek.
That’s quite a mouthful—but it makes for such a nice story that I’d love to write a bunch of blog articles explaining it with lots of examples. Unfortunately I don’t have time, so try these:
• Matthias Blau, Symplectic geometry and geometric quantization.
• A. Echeverria-Enriquez, M.C. Munoz-Lecanda, N. Roman-Roy, C. Victoria-Monge, Mathematical foundations of geometric quantization.
But there’s a flip side to this story which indicates that something big and mysterious is going on. Geometric quantization is not just a procedure for converting a space of classical states into a space of quantum states. It also reveals that a space of quantum states can be seen as a space of classical states!
To reach this realization, we must admit that quantum states are not really vectors in a Hilbert space $H$; from a certain point of view they are really 1-dimensional subspaces of a Hilbert space, so the set of quantum states I'm talking about is the projective space $PH.$ But this projective space, at least when it's finite-dimensional, turns out to be the simplest example of that complicated thing I mentioned: a Kähler manifold equipped with a holomorphic hermitian line bundle whose curvature is the imaginary part of the Kähler structure!
So a space of quantum states is an example of a space of classical states—equipped with precisely all the complicated extra structure that lets us geometrically quantize it!
At this point, if you don’t already know the answer, you should be asking: and what do we get when we geometrically quantize it?
The answer is exciting only in that it’s surprisingly dull: when we geometrically quantize $PH,$ we get back the Hilbert space $H.$
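One concrete sanity check in the finite-dimensional case (these are standard facts about projective space; the notation $\mathcal{O}(1)$ for the dual tautological bundle is mine, not from the post):

```latex
% For H = \mathbb{C}^{n+1}, the space of rays is PH = \mathbb{CP}^n with its
% Fubini--Study Kaehler structure, and the prequantum line bundle is the dual
% of the tautological bundle, \mathcal{O}(1).  Its holomorphic sections are
% spanned by the homogeneous coordinates Z_0, \dots, Z_n, so
H^0\bigl(\mathbb{CP}^n, \mathcal{O}(1)\bigr) \;\cong\; \mathbb{C}^{n+1} \;=\; H .
```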
You may have heard of ‘second quantization’, where we take a quantum system, treat it as classical, and quantize it again. In the usual story of second quantization, the new quantum system we get is more complicated than the original one… and we can repeat this procedure again and again, and keep getting more interesting things:
• John Baez, Nth quantization.
The story I’m telling now is different. I’m saying that when we take a quantum system with Hilbert space $H,$ we can think of it as a classical system whose symplectic manifold of states is $PH,$ but then we can geometrically quantize this and get $H$ back.
The two stories are not in contradiction, because they rely on two different notions of what it means to ‘think of a quantum system as classical’. In today’s story that means getting a symplectic manifold $PH$ from a Hilbert space $H.$ In the other story we use the fact that $H$ itself is a symplectic manifold!
I should explain the relation of these two stories, but that would be a big digression from today’s intended blog article: indeed I’m already regretting having drifted off course. I only brought up this other story to heighten the mystery I’m talking about now: namely, that when we geometrically quantize the space $PH,$ we get $H$ back.
The math is not mysterious here; it’s the physical meaning of the math that’s mysterious. The math seems to be telling us that contrary to what they say in school, quantum systems are special classical systems, with the special property that when you quantize them nothing new happens!
This idea is not mine; it goes back at least to Kibble, the guy who with Higgs invented the method whereby the Higgs boson does its work:
• Tom W. B. Kibble, Geometrization of quantum mechanics, Comm. Math. Phys. 65 (1979), 189–201.
This led to a slow, quiet line of research that continues to this day. I find this particular paper especially clear and helpful:
• Abhay Ashtekar, Troy A. Schilling, Geometrical formulation of quantum mechanics, in On Einstein’s Path, Springer, Berlin, 1999, pp. 23–65.
so if you’re wondering what the hell I’m talking about, this is probably the best place to start. To whet your appetite, here’s the abstract:
Abstract. States of a quantum mechanical system are represented by rays in a complex Hilbert space. The space of rays has, naturally, the structure of a Kähler manifold. This leads to a geometrical formulation of the postulates of quantum mechanics which, although equivalent to the standard algebraic formulation, has a very different appearance. In particular, states are now represented by points of a symplectic manifold (which happens to have, in addition, a compatible Riemannian metric), observables are represented by certain real-valued functions on this space and the Schrödinger evolution is captured by the symplectic flow generated by a Hamiltonian function. There is thus a remarkable similarity with the standard symplectic formulation of classical mechanics. Features—such as uncertainties and state vector reductions—which are specific to quantum mechanics can also be formulated geometrically but now refer to the Riemannian metric—a structure which is absent in classical mechanics. The geometrical formulation sheds considerable light on a number of issues such as the second quantization procedure, the role of coherent states in semi-classical considerations and the WKB approximation. More importantly, it suggests generalizations of quantum mechanics. The simplest among these are equivalent to the dynamical generalizations that have appeared in the literature. The geometrical reformulation provides a unified framework to discuss these and to correct a misconception. Finally, it also suggests directions in which more radical generalizations may be found.
Personally I’m not interested in the generalizations of quantum mechanics: I’m more interested in what this circle of ideas means for quantum mechanics.
One rather cynical thought is this: when we start our studies with geometric quantization, we naively hope to extract a space of quantum states from a space of classical states, e.g. a symplectic manifold. But we then discover that to do this in a systematic way, we need to equip our symplectic manifold with lots of bells and whistles. Should it really be a surprise that when we’re done, the bells and whistles we need are exactly what a space of quantum states has?
I think this indeed dissolves some of the mystery. It’s a bit like the parable of ‘stone soup’: you can make a tasty soup out of just a stone… if you season it with some vegetables, some herbs, some salt and such.
However, perhaps because by nature I’m an optimist, I also think there are interesting things to be learned from the tight relation between quantum and classical mechanics that appears in geometric quantization. And I hope to talk more about those in future articles.
## Noether’s Theorem
12 September, 2018
I’ve been spending the last month at the Centre of Quantum Technologies, getting lots of work done. This Friday I’m giving a talk, and you can see the slides now:
• John Baez, Getting to the bottom of Noether’s theorem.
Abstract. In her paper of 1918, Noether's theorem relating symmetries and conserved quantities was formulated in terms of Lagrangian mechanics. But if we want to make the essence of this relation seem as self-evident as possible, we can turn to a formulation in terms of Poisson brackets, which generalizes easily to quantum mechanics using commutators. This approach also gives a version of Noether's theorem for Markov processes. The key question then becomes: when, and why, do observables generate one-parameter groups of transformations? This question sheds light on why complex numbers show up in quantum mechanics.
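A toy numerical illustration of the symmetry–conservation link (my example, not taken from the talk): for $H = p^2/2 + V(q)$, momentum is conserved exactly when $V$ is translation-invariant.

```python
def evolve(dVdq, q0=0.0, p0=1.0, dt=1e-3, steps=1000):
    """Leapfrog integration of Hamilton's equations for H = p**2/2 + V(q)."""
    q, p = q0, p0
    for _ in range(steps):
        p -= 0.5 * dt * dVdq(q)   # half kick
        q += dt * p               # drift
        p -= 0.5 * dt * dVdq(q)   # half kick
    return q, p

# Translation-invariant H (constant V): momentum is conserved exactly.
_, p_free = evolve(lambda q: 0.0)

# Breaking the symmetry with V(q) = q**2/2: momentum is no longer conserved.
_, p_osc = evolve(lambda q: q)
```

With the symmetry intact, `p_free` stays at its initial value; with the harmonic potential, `p_osc` drifts away from it — it is the symmetry, not the dynamics as such, that guarantees conservation.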
The Philosophy and Physics of Noether’s Theorems, 5-6 October 2018, Fischer Hall, 1-4 Suffolk Street, London, UK. Organized by Bryan W. Roberts (LSE) and Nicholas Teh (Notre Dame).
This workshop celebrates the 100th anniversary of Noether’s famous paper connecting symmetries to conserved quantities. Her paper actually contains two big theorems. My talk is only about the more famous one, Noether’s first theorem, and I’ll change my talk title to make that clear when I go to London, to avoid getting flak from experts. Her second theorem explains why it’s hard to define energy in general relativity! This is one reason Einstein admired Noether so much.
I’ll also give this talk at DAMTP—the Department of Applied Mathematics and Theoretical Physics, in Cambridge—on Thursday October 4th at 1 pm.
The organizers of London workshop on the philosophy and physics of Noether’s theorems have asked me to write a paper, so my talk can be seen as the first step toward that. My talk doesn’t contain any hard theorems, but the main point—that the complex numbers arise naturally from wanting a correspondence between observables and symmetry generators—can be expressed in some theorems, which I hope to explain in my paper.
## The Philosophy and Physics of Noether’s Theorems
11 August, 2018
I’ll be speaking at a conference celebrating the centenary of Emmy Noether’s work connecting symmetries and conservation laws:
The Philosophy and Physics of Noether’s Theorems, 5-6 October 2018, Fischer Hall, 1-4 Suffolk Street, London, UK. Organized by Bryan W. Roberts (LSE) and Nicholas Teh (Notre Dame).
They write:
2018 brings with it the centenary of a major milestone in mathematical physics: the publication of Amalie (“Emmy”) Noether’s theorems relating symmetry and physical quantities, which continue to be a font of inspiration for “symmetry arguments” in physics, and for the interpretation of symmetry within philosophy.
In order to celebrate Noether’s legacy, the University of Notre Dame and the LSE Centre for Philosophy of Natural and Social Sciences are co-organizing a conference that will bring together leading mathematicians, physicists, and philosophers of physics in order to discuss the enduring impact of Noether’s work.
There’s a registration fee, which you can see on the conference website, along with a map showing the conference location, a schedule of the talks, and other useful stuff.
Here are the speakers:
John Baez (UC Riverside)
Jeremy Butterfield (Cambridge)
Anne-Christine Davis (Cambridge)
Sebastian De Haro (Amsterdam and Cambridge)
Ruth Gregory (Durham)
Yvette Kosmann-Schwarzbach (Paris)
Peter Olver (UMN)
Sabrina Pasterski (Harvard)
Oliver Pooley (Oxford)
Tudor Ratiu (Shanghai Jiao Tong and Geneva)
Kasia Rejzner (York)
Robert Spekkens (Perimeter)
I’m looking forward to analyzing the basic assumptions behind various generalizations of Noether’s first theorem, the one that shows symmetries of a Lagrangian give conserved quantities. Having generalized it to Markov processes, I know there’s a lot more to what’s going on here than just the wonders of Lagrangian mechanics:
• John Baez and Brendan Fong, A Noether theorem for Markov processes, J. Math. Phys. 54 (2013), 013301. (Blog article here.)
I’ve been trying to get to the bottom of it ever since.
## The Behavioral Approach to Systems Theory
19 June, 2018
Two more students in the Applied Category Theory 2018 school wrote a blog article about something they read:
• Eliana Lorch and Joshua Tan, The behavioral approach to systems theory, The n-Category Café, 15 June 2018.
Eliana Lorch is a mathematician based in San Francisco. Joshua Tan is a grad student in computer science at the University of Oxford and one of the organizers of Applied Category Theory 2018.
They wrote a great summary of this paper, which has been an inspiration to me and many others:
• Jan Willems, The behavioral approach to open and interconnected systems, IEEE Control Systems 27 (2007), 46–99.
They also list many papers influenced by it, and raise a couple of interesting problems with Willems’ idea, which can probably be handled by generalizing it.
## Dynamical Systems and Their Steady States
17 June, 2018
As part of the Applied Category Theory 2018 school, Maru Sarazola wrote a blog article on open dynamical systems and their steady states. Check it out:
• Maru Sarazola, Dynamical systems and their steady states, The n-Category Café, 2 April 2018.
She compares two papers:
• John Baez and Blake Pollard, A compositional framework for reaction networks, Reviews in Mathematical Physics 29 (2017), 1750028.
(Blog article here.)
It’s great, because I’d never really gotten around to understanding the precise relationship between these two approaches. I wish I knew the answers to the questions she raises at the end!
## MiniBooNE
2 June, 2018
Big news! An experiment called MiniBooNE at Fermilab in Chicago has found more evidence that neutrinos are not acting as the Standard Model says they should:
• The MiniBooNE Collaboration, Observation of a significant excess of electron-like events in the MiniBooNE short-baseline neutrino experiment.
In brief, the experiment creates a beam of muon neutrinos (or antineutrinos—they can do either one). Then they check, with a detector 541 meters away, to see if any of these particles have turned into electron neutrinos (or antineutrinos). They’ve been doing this since 2002, and they’ve found a small tendency for this to happen.
This seems to confirm findings of the Liquid Scintillator Neutrino Detector or ‘LSND’ at Los Alamos, which did a similar experiment in the 1990s. People in the MiniBooNE collaboration claim that if you take both experiments into account, the results have a statistical significance of 6.1 σ.
This means that if the Standard Model is correct and there’s no experimental error or other mistake, the chance of seeing what these experiments saw is about 1 in 1,000,000,000.
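To translate the quoted significance into those odds (a sketch; I'm assuming a two-sided Gaussian tail here, which may not match the collaboration's exact convention):

```python
import math

def sigma_to_p(n_sigma):
    """Two-sided tail probability of a Gaussian at n_sigma standard deviations."""
    return math.erfc(n_sigma / math.sqrt(2))

p = sigma_to_p(6.1)   # about 1e-9, i.e. odds of order 1 in a billion
```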
There are 3 known kinds of neutrinos: electron, muon and tau neutrinos. Neutrinos of any kind are already known to turn into those of other kinds: these are called neutrino oscillations, and they were first discovered in the 1960’s, when it was found that 1/3 as many electron neutrinos were coming from the Sun as expected.
At the time this was a big surprise, because people thought neutrinos were massless, moved at the speed of light, and thus didn’t experience the passage of time. Back then, the Standard Model looked like this:
The neutrinos stood out as weird in two ways: we thought they were massless, and we thought they only come in a left-handed form—meaning roughly that they spin clockwise around the axis they’re moving along.
People did a bunch of experiments and wound up changing the Standard Model. Now we know neutrinos have nonzero mass. Their masses, and also neutrino oscillations, are described using a 3×3 matrix called the lepton mixing matrix. This is not a wacky idea: in fact, quarks are described using a similar 3×3 matrix called the quark mixing matrix. So, the current-day Standard Model is more symmetrical than the earlier version: leptons are more like quarks.
There is, however, still a big difference! We haven’t seen right-handed neutrinos.
MiniBooNE and LSND are seeing muon neutrinos turn into electron neutrinos much faster than the Standard Model theory of neutrino oscillations predicts. There seems to be no way to adjust the parameters of the lepton mixing matrix to fit the data from all the other experiments people have done, and also the MiniBooNE–LSND data. If this is really true, we need a new theory of physics.
And this is where things get interesting.
The most conservative change to the Standard Model would be to add three right-handed neutrinos to go along with the three left-handed ones. This would not be an ugly ad hoc trick: it would make the Standard Model more symmetrical, by making leptons even more like quarks.
If we do this in the most beautiful way—making leptons as similar to quarks as we can get away with, given their obvious differences—the three new right-handed neutrinos will be ‘sterile’. This means that they will interact only with the Higgs boson and gravity: not electromagnetism, the weak force or the strong force. This is great, because it would mean there’s a darned good reason we haven’t seen them yet!
Neutrinos are already very hard to detect, since they don’t interact with electromagnetism or the strong force. They only interact with the Higgs boson (that’s what creates their mass, and oscillations), gravity (because they have energy), and the weak force (which is how we create and detect them). A ‘sterile’ neutrino—one that also didn’t interact with the weak force—would be truly elusive!
In practice, the main way to detect sterile neutrinos would be via oscillations. We could create an ordinary neutrino, and it might turn into a sterile neutrino, and then back into an ordinary neutrino. This would create new kinds of oscillations.
And indeed, MiniBooNE and LSND seem to be seeing new oscillations, much more rapid than those predicted by the Standard Model and our usual best estimate of the lepton mixing matrix.
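For background, the standard two-flavor oscillation probability (a textbook formula, not specific to these experiments) is

```latex
P(\nu_\mu \to \nu_e) \;=\; \sin^2(2\theta)\,
  \sin^2\!\left(\frac{1.27\,\Delta m^2[\mathrm{eV}^2]\; L[\mathrm{km}]}{E[\mathrm{GeV}]}\right),
```

so an appreciable effect at a short baseline $L$ requires a $\Delta m^2$ much larger than the two known mass splittings — which is exactly what mixing with a fourth, sterile state could provide.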
So, people are getting excited! We may have found sterile neutrinos.
There’s a lot more to say. For example, the SO(10) grand unified theory predicts right-handed neutrinos in a very beautiful way, so I’m curious about what the new data implies about that. There are also questions about whether a sterile neutrino could explain dark matter… or what limits astronomical observations place on the properties of sterile neutrinos. One should also wonder about the possibility of experimental error!
I would enjoy questions that probe deeper into this subject, since they might force me to study and learn more. Right now I have to go to Joshua Tree! But I’ll come back and answer your questions tomorrow morning.
## Effective Thermodynamics for a Marginal Observer
8 May, 2018
guest post by Matteo Polettini
Suppose you receive an email from someone who claims “here is the project of a machine that runs forever and ever and produces energy for free!” Obviously he must be a crackpot. But he may be well-intentioned. You opt for not being rude, roll your sleeves, and put your hands into the dirt, holding the Second Law as lodestar.
Keep in mind that there are two fundamental sources of error: either he is not considering certain input currents (“hey, what about that tiny hidden cable entering your machine from the electrical power line?!”, “uh, ah, that’s just to power the “ON” LED”, “mmmhh, you sure?”), or else he is not measuring the energy input correctly (“hey, why are you using a Geiger counter to measure input voltages?!”, “well, sir, I ran out of voltmeters…”).
In other words, the observer might only have partial information about the setup, either in quantity or in quality. Because he has been marginalized by society (most crackpots believe they are misunderstood geniuses), we will call such an observer “marginal,” which incidentally is also the word that mathematicians use when they focus on the probability of a subset of stochastic variables.
In fact, our modern understanding of thermodynamics as embodied in statistical mechanics and stochastic processes is founded (and funded) on ignorance: we never really have “complete” information. If we actually did, all energy would look alike, it would not come in “more refined” and “less refined” forms, there would not be differentials of order/disorder (using Paul Valéry’s beautiful words), and that would end thermodynamic reasoning, the energy problem, and generous research grants altogether.
Even worse, within this statistical approach we might be missing chunks of information because some parts of the system are invisible to us. But then, what warrants that we are doing things right, and he (our correspondent) is the crackpot? Couldn’t it be the other way around? Here I would like to present some recent ideas I’ve been working on together with some collaborators on how to deal with incomplete information about the sources of dissipation of a thermodynamic system. I will do this in a quite theoretical manner, but somehow I will mimic the guidelines suggested above for debunking crackpots. My three buzzwords will be: marginal, effective, and operational.
### “Complete” thermodynamics: an out-of-the-box view
The laws of thermodynamics that I address are:
• The good ol’ Second Law (2nd)
• The Fluctuation-Dissipation Relation (FDR), and the Reciprocal Relation (RR) close to equilibrium.
• The more recent Fluctuation Relation (FR)¹ and its corollary the Integral Fluctuation Relation (IFR), which have been discussed on this blog in a remarkable post by Matteo Smerlak.
The list above is all in the “area of the second law”. How about the other laws? Well, thermodynamics has long been a phenomenological science, a patchwork. So-called stochastic thermodynamics is trying to put some order in it by systematically grounding thermodynamic claims in (mostly Markov) stochastic processes. But it’s not an easy task, because the different laws of thermodynamics live in somewhat different conceptual planes. And it’s not even clear if they are theorems, prescriptions, or habits (a bit like in jurisprudence²).
Within stochastic thermodynamics, the Zeroth Law is so easy nobody cares to formulate it (I do, so stay tuned…). The Third Law: no idea, let me know. As regards the First Law (or, better, “laws”, as many as there are conserved quantities across the system/environment interface…), we will assume that all related symmetries have been exploited from the offset to boil down the description to a minimum.
This minimum is as follows. We identify a system that is well separated from its environment. The system evolves in time; the environment is so large that its state does not evolve within the timescales of the system³. When tracing out the environment from the description, an uncertainty falls upon the system’s evolution. We assume the system’s dynamics to be described by a stochastic Markovian process.
How exactly the system evolves and what is the relationship between system and environment will be described in more detail below. Here let us take an “out of the box” view. We resolve the environment into several reservoirs labeled by index $\alpha$. Each of these reservoirs is “at equilibrium” on its own (whatever that means⁴). Now, the idea is that each reservoir tries to impose “its own equilibrium” on the system, and that their competition leads to a flow of currents across the system/environment interface. Each time an amount of the reservoir’s resource crosses the interface, a “thermodynamic cost” has to be paid or gained (be it a chemical potential difference for a molecule to go through a membrane, or a temperature gradient for photons to be emitted/absorbed, etc.).
The fundamental quantities of stochastic thermodynamic modeling thus are:
• On the “-dynamic” side: the time-integrated currents $\Phi^t_\alpha$, independent among themselves⁵. Currents are stochastic variables distributed with joint probability density
$P(\{\Phi_\alpha\}_\alpha)$
• On the “thermo-” side: The so-called thermodynamic forces or “affinities”⁶ $\mathcal{A}_\alpha$ (collectively denoted $\mathcal{A}$). These are tunable parameters that characterize reservoir-to-reservoir gradients, and they are not stochastic. For convenience, we conventionally take them all positive.
Dissipation is quantified by the entropy production:
$\sum \mathcal{A}_\alpha \Phi^t_\alpha$
We are finally in the position to state the main results. Be warned that in the following expressions the exact treatment of time and its scaling would require a lot of specifications, but keep in mind that all these relations hold true in the long-time limit, and that all cumulants scale linearly with time.
FR: The probability of observing positive currents is exponentially favoured with respect to negative currents according to
$P(\{\Phi_\alpha\}_\alpha) / P(\{-\Phi_\alpha\}_\alpha) = \exp \sum \mathcal{A}_\alpha \Phi^t_\alpha$
Comment: This is not trivial, it follows from the explicit expression of the path integral, see below.
IFR: The exponential of minus the entropy production is unity
$\big\langle \exp - \sum \mathcal{A}_\alpha \Phi^t_\alpha \big\rangle_{\mathcal{A}} =1$
Homework: Derive this relation from the FR in one line.
2nd Law: The average entropy production is not negative
$\sum \mathcal{A}_\alpha \left\langle \Phi^t_\alpha \right\rangle_{\mathcal{A}} \geq 0$
Homework: Derive this relation using Jensen’s inequality.
Equilibrium: Average currents vanish if and only if affinities vanish:
$\left\langle \Phi^t_\alpha \right\rangle_{\mathcal{A}} \equiv 0, \forall \alpha \iff \mathcal{A}_\alpha \equiv 0, \forall \alpha$
Homework: Derive this relation taking the first derivative w.r.t. ${\mathcal{A}_\alpha}$ of the IFR. Notice that also the average depends on the affinities.
S-FDR: At equilibrium, it is impossible to tell whether a current is due to a spontaneous fluctuation (quantified by its variance) or to an external perturbation (quantified by the response of its mean). In a symmetrized (S-) version:
$\left. \frac{\partial}{\partial \mathcal{A}_\alpha}\left\langle \Phi^t_{\alpha'} \right\rangle \right|_{0} + \left. \frac{\partial}{\partial \mathcal{A}_{\alpha'}}\left\langle \Phi^t_{\alpha} \right\rangle \right|_{0} = \left. \left\langle \Phi^t_{\alpha} \Phi^t_{\alpha'} \right\rangle \right|_{0}$
Homework: Derive this relation taking the mixed second derivatives w.r.t. ${\mathcal{A}_\alpha}$ of the IFR.
RR: The reciprocal response of two different currents to a perturbation of the reciprocal affinities close to equilibrium is symmetrical:
$\left. \frac{\partial}{\partial \mathcal{A}_\alpha}\left\langle \Phi^t_{\alpha'} \right\rangle \right|_{0} - \left. \frac{\partial}{\partial \mathcal{A}_{\alpha'}}\left\langle \Phi^t_{\alpha} \right\rangle \right|_{0} = 0$
Homework: Derive this relation taking the mixed second derivatives w.r.t. ${\mathcal{A}_\alpha}$ of the FR.
Notice the implication scheme: FR ⇒ IFR ⇒ 2nd, IFR ⇒ S-FDR, FR ⇒ RR.
### “Marginal” thermodynamics (still out-of-the-box)
Now we assume that we can only measure a marginal subset of currents $\{\Phi_\mu^t\}_\mu \subset \{\Phi_\alpha^t\}_\alpha$ (index $\mu$ always has a smaller range than $\alpha$), distributed with joint marginal probability
$P(\{\Phi_\mu\}_\mu) = \int \prod_{\alpha \neq \mu} d\Phi_\alpha \, P(\{\Phi_\alpha\}_\alpha)$
Notice that a state where these marginal currents vanish might not be an equilibrium, because other currents might still be whirling around. We call this a stalling state.
$\mathrm{stalling:} \qquad \langle \Phi_\mu \rangle \equiv 0, \quad \forall \mu$
My central question is: can we associate to these currents some effective affinity $\mathcal{Q}_\mu$ in such a way that at least some of the results above still hold true? And, are all definitions involved just a fancy mathematical construct, or are they operational?
First the bad news: In general the FR is violated for all choices of effective affinities:
$P(\{\Phi_\mu\}_\mu) / P(\{-\Phi_\mu\}_\mu) \neq \exp \sum \mathcal{Q}_\mu \Phi^t_\mu$
This is not surprising and nobody would expect that. How about the IFR?
Marginal IFR: There are effective affinities such that
$\left\langle \exp - \sum \mathcal{Q}_\mu \Phi^t_\mu \right\rangle_{\mathcal{A}} =1$
Mmmhh. Yeah. Take a closer look at this expression: can you see why there actually exists an infinite choice of “effective affinities” that would make that average cross 1? Which on the other hand is just a number, so who even cares? So this can’t be the point.
The fact is, the IFR per se is hardly of any practical interest, as are all “absolutes” in physics. What matters is “relatives”: in our case, response. But then we need to specify how the effective affinities depend on the “real” affinities. And here steps in a crucial technicality, whose precise argumentation is a pain. Based on reasonable assumptions⁷, we demonstrate that the IFR holds for the following choice of effective affinities:
$\mathcal{Q}_\mu = \mathcal{A}_\mu - \mathcal{A}^{\mathrm{stalling}}_\mu$,
where $\mathcal{A}^{\mathrm{stalling}}$ is the set of values of the affinities that make marginal currents stall. Notice that this latter formula gives an operational definition of the effective affinities that could in principle be reproduced in laboratory (just go out there and tune the tunable until everything stalls, and measure the difference). Obviously:
Stalling: Marginal currents vanish if and only if effective affinities vanish:
$\left\langle \Phi^t_\mu \right\rangle_{\mathcal{A}} \equiv 0, \forall \mu \iff \mathcal{Q}_\mu \equiv 0, \forall \mu$
Now, according to the inference scheme illustrated above, we can also prove that:
Effective 2nd Law: The average marginal entropy production is not negative
$\sum \mathcal{Q}_\mu \left\langle \Phi^t_\mu \right\rangle_{\mathcal{A}} \geq 0$
S-FDR at stalling:
$\left. \frac{\partial}{\partial \mathcal{A}_\mu}\left\langle \Phi^t_{\mu'} \right\rangle \right|_{\mathcal{A}^{\mathrm{stalling}}} + \left. \frac{\partial}{\partial \mathcal{A}_{\mu'}}\left\langle \Phi^t_{\mu} \right\rangle \right|_{\mathcal{A}^{\mathrm{stalling}}} = \left. \left\langle \Phi^t_{\mu} \Phi^t_{\mu'} \right\rangle \right|_{\mathcal{A}^{\mathrm{stalling}}}$
Notice instead that the RR is gone at stalling. This is a clear-cut prediction of the theory that can be tested with basically the same apparatus with which response theory has been experimentally studied so far (not that I actually know what these apparatus are…): at stalling states, unlike at equilibrium states, the S-FDR still holds, but the RR does not.
### Into the box
You’ve definitely gotten enough at this point, and you can give up here. Please exit through the gift shop.
If you’re stubborn, let me tell you what’s inside the box. The system’s dynamics is modeled as a continuous-time, discrete configuration-space Markov “jump” process. The state space can be described by a graph $G=(I, E)$ where $I$ is the set of configurations, $E$ is the set of possible transitions or “edges”, and there exists some incidence relation between edges and couples of configurations. The process is determined by the rates $w_{i \gets j}$ of jumping from one configuration to another.
We choose these processes because they allow some nice network analysis and because the path integral is well defined! A single realization of such a process is a trajectory
$\omega^t = (i_0,\tau_0) \to (i_1,\tau_1) \to \ldots \to (i_N,\tau_N)$
A “Markovian jumper” waits at some configuration $i_n$ for some time $\tau_n$ with an exponentially decaying probability $w_{i_n} \exp - w_{i_n} \tau_n$ with exit rate $w_i = \sum_k w_{k \gets i}$, then instantaneously jumps to a new configuration $i_{n+1}$ with transition probability $w_{i_{n+1} \gets {i_n}}/w_{i_n}$. The overall probability density of a single trajectory is given by
$P(\omega^t) = \delta \left(t - \sum_n \tau_n \right) e^{- w_{i_N}\tau_N} \prod_{n=0}^{N-1} w_{i_{n+1} \gets i_n} e^{- w_{i_n} \tau_n}$
One can in principle obtain the probability distribution function of any observable defined along the trajectory by taking the marginal of this measure (though in most cases this is technically impossible). Where does this expression come from? For a formal derivation, see the very beautiful review paper by Weber and Frey, but be aware that this is what one would intuitively come up with if one had to simulate with the Gillespie algorithm.
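The Gillespie recipe just mentioned can be sketched in a few lines of plain Python. This is an illustrative toy, not from the post: the three-state cycle and its rates are invented, with a clockwise bias so that a current flows at steady state. The trajectory also tallies the net number of jumps across the edge 0 → 1, i.e. a time-integrated current $\Phi^t$ supported on a single edge.

```python
import random

# Invented rates: w[(j, i)] is the rate of jumping i -> j on a 3-state
# cycle 0 -> 1 -> 2 -> 0, biased clockwise (forward 2.0, backward 1.0).
rates = {(1, 0): 2.0, (0, 1): 1.0,
         (2, 1): 2.0, (1, 2): 1.0,
         (0, 2): 2.0, (2, 0): 1.0}

def gillespie(t_max, x0=0, seed=0):
    """Simulate one trajectory; return the net number of 0 -> 1 jumps."""
    rng = random.Random(seed)
    t, x, phi = 0.0, x0, 0
    while True:
        out = [(j, w) for (j, i), w in rates.items() if i == x]
        w_x = sum(w for _, w in out)      # exit rate of configuration x
        t += rng.expovariate(w_x)         # exponential waiting time
        if t > t_max:
            return phi
        r, acc = rng.random() * w_x, 0.0  # next state with prob. w_{j<-x}/w_x
        for j, w in out:
            acc += w
            if r <= acc:
                phi += (x, j) == (0, 1)   # one forward jump on the edge
                phi -= (x, j) == (1, 0)   # one backward jump on the edge
                x = j
                break

print(gillespie(t_max=1000.0))  # positive: the bias drives a steady current
```

Since the affinity of this cycle is positive, the sampled current is positive on average, in line with the Second Law above.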
The dynamics of the Markov process can also be described by the probability of being at some configuration $i$ at time $t$, which evolves via the master equation
$\dot{p}_i(t) = \sum_j \left[ w_{i \gets j} p_j(t) - w_{j \gets i} p_i(t) \right]$.
We call such probability the system’s state, and we assume that the system relaxes to a uniquely defined steady state $p = \mathrm{lim}_{t \to \infty} p(t)$.
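As a toy check of the relaxation to the steady state (my own example, with invented rates), one can Euler-integrate the master equation for a two-state system, whose unique steady state satisfies detailed balance:

```python
# Invented two-state system: a is the 0 -> 1 rate, b is the 1 -> 0 rate.
a, b = 3.0, 1.0

def relax(p0=1.0, dt=1e-3, steps=20000):
    """Euler-integrate the master equation dp0/dt = b*p1 - a*p0."""
    p = [p0, 1.0 - p0]
    for _ in range(steps):
        flux = b * p[1] - a * p[0]   # net probability flux into state 0
        p[0] += dt * flux
        p[1] -= dt * flux            # total probability is conserved
    return p

p = relax()
# For two states the steady state obeys detailed balance, p1/p0 = a/b,
# so p0 = b/(a+b) = 0.25 and p1 = a/(a+b) = 0.75.
print(p)
```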
A time-integrated current along a single trajectory is a linear combination of the net number of jumps $\#^t$ between configurations in the network:
$\Phi^t_\alpha = \sum_{ij} C^{ij}_\alpha \left[ \#^t(i \gets j) - \#^t(j\gets i) \right]$
The idea here is that one or several transitions within the system occur because of the “absorption” or the “emission” of some environmental degrees of freedom, each with different intensity. However, for the moment let us simplify the picture and require that only one transition contributes to a current, that is that there exist $i_\alpha,j_\alpha$ such that
$C^{ij}_\alpha = \delta^i_{i_\alpha} \delta^j_{j_\alpha}$.
Now, what does it mean for such a set of currents to be “complete”? Here we get inspiration from Kirchhoff’s Current Law in electrical circuits: the continuity of the trajectory at each configuration of the network implies that after a sufficiently long time, cycle or loop or mesh currents completely describe the steady state. There is a standard procedure to identify a set of cycle currents: take a spanning tree $T$ of the network; then the currents flowing along the edges $E\setminus T$ left out from the spanning tree form a complete set.
The last ingredient you need to know are the affinities. They can be constructed as follows. Consider the Markov process on the network where the observable edges are removed $G' = (I,T)$. Calculate the steady state of its associated master equation $(p^{\mathrm{eq}}_i)_i$, which is necessarily an equilibrium (since there cannot be cycle currents in a tree…). Then the affinities are given by
$\mathcal{A}_\alpha = \log w_{i_\alpha j_\alpha} p^{\mathrm{eq}}_{j_\alpha} / w_{j_\alpha i_\alpha} p^{\mathrm{eq}}_{i_\alpha}$.
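Here is a small sanity check of this construction (my own, with invented rates): on a three-state cycle, remove the chord left out of the spanning tree, compute the equilibrium of the remaining tree by detailed balance, and verify that the affinity defined above equals the familiar log-ratio of rate products around the cycle.

```python
import math

# Invented rates w[(j, i)]: jump i -> j on a 3-state cycle 0-1-2-0.
w = {(1, 0): 2.0, (0, 1): 0.5,
     (2, 1): 3.0, (1, 2): 1.0,
     (0, 2): 4.0, (2, 0): 1.5}

# Spanning tree T = {0-1, 1-2}; the chord 2-0 carries the cycle current.
# Equilibrium on the tree satisfies detailed balance along its edges:
p = [1.0]
p.append(p[0] * w[(1, 0)] / w[(0, 1)])   # p1/p0 = w_{1<-0}/w_{0<-1}
p.append(p[1] * w[(2, 1)] / w[(1, 2)])   # p2/p1 = w_{2<-1}/w_{1<-2}
Z = sum(p)
p = [x / Z for x in p]

# Affinity of the chord with (i_a, j_a) = (0, 2):
A = math.log(w[(0, 2)] * p[2] / (w[(2, 0)] * p[0]))

# Standard identity: A equals the log-ratio of rate products around the cycle.
A_cycle = math.log((w[(1, 0)] * w[(2, 1)] * w[(0, 2)])
                   / (w[(0, 1)] * w[(1, 2)] * w[(2, 0)]))
print(A, A_cycle)  # these agree
```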
Now you have all that is needed to formulate the complete theory and prove the FR.
Homework: (Difficult!) With the above definitions, prove the FR.
How about the marginal theory? To define the effective affinities, take the set $E_{\mathrm{mar}} = \{i_\mu j_\mu, \forall \mu\}$ of edges where there run observable currents. Notice that now its complement obtained by removing the observable edges, the hidden edge set $E_{\mathrm{hid}} = E \setminus E_{\mathrm{mar}}$, is not in general a spanning tree: there might be cycles that are not accounted for by our observations. However, we can still consider the Markov process on the hidden space, and calculate its stalling steady state $p^{\mathrm{st}}_i$, and ta-taaa: The effective affinities are given by
$\mathcal{Q}_\mu = \log w_{i_\mu j_\mu} p^{\mathrm{st}}_{j_\mu} / w_{j_\mu i_\mu} p^{\mathrm{st}}_{i_\mu}$.
Proving the marginal IFR is far more complicated than proving the complete FR. In fact, very often in my field we do not work with the currents’ probability density itself; we prefer to take its bidirectional Laplace transform and work with the currents’ cumulant generating function. There things take a quite different and more elegant look.
Many other questions and possibilities open up now. The most important one left open is: can we generalize the theory to the (physically relevant) case where a current is supported on several edges? For example, a current defined like $\Phi^t = 5 \Phi^t_{12} + 7 \Phi^t_{34}$? Well, it depends: the theory holds provided that the stalling state is not “internally alive”, meaning that if the observable current vanishes on average, then $\Phi^t_{12}$ and $\Phi^t_{34}$ should also vanish separately. This turns out to be a physically meaningful but quite strict condition.
### Is all of thermodynamics “effective”?
Let me conclude with some more of those philosophical considerations that sadly I have to leave out of papers…
Stochastic thermodynamics strongly depends on the identification of physical and information-theoretic entropies — something that I did not openly talk about, but that lurks behind the whole construction. Throughout my short experience as a researcher I have been pursuing a program of “relativization” of thermodynamics, making the role of the observer more and more evident and movable. Inspired by Einstein’s Gedankenexperimenten, I have also tried to make the theory operational. This program may raise eyebrows here and there: many thermodynamicians embrace a naive materialistic world-view whereby only “real” physical quantities like temperature and pressure matter, and all the rest of the information-theoretic discourse is at best mathematical speculation or a fascinating analogy with no fundamental bearing. According to some, information as a physical concept lingers alarmingly close to certain extreme postmodern claims in the social sciences that “reality” does not exist unless observed, a position deemed dangerous at times when the authoritativeness of science is threatened by all sorts of anti-scientific waves.
I think, on the contrary, that making concepts relative and effective, and summoning the observer explicitly, is a secular and prudent position that serves as an antidote to radical subjectivity. The other way around—clinging to the objectivity of a preferred observer, which is implied in any materialistic interpretation of thermodynamics, e.g. by assuming that the most fundamental degrees of freedom are the positions and velocities of a gas’s molecules—is the dangerous position, especially when the role of such preferred observer is passed around from the scientist to the technician and eventually to the technocrat, who would be induced to believe there are simple technological fixes to complex social problems.
How do we reconcile observer-dependency and the laws of physics? The object and the subject? On the one hand, much like the position of an object depends on the reference frame, entropy and entropy production depend on the observer and on the particular apparatus he controls or experiment he is involved with. On the other hand, much like motion is ultimately independent of position and is agreed upon by all observers that share compatible measurement protocols, the laws of thermodynamics are independent of that particular observer’s quantification of entropy and entropy production (e.g., the effective Second Law holds independently of how much the marginal observer knows of the system, if he operates according to our phenomenological protocol…). This is the case even in the everyday thermodynamics practiced by energetic engineers et al., where there are lots of choices to gauge upon, and there is no external warrant that the amount of dissipation being quantified is the “true” one (whatever that means…)—there can only be trust in one’s own good practices and methodology.
So in this sense, I like to think that all observers are marginal, that this effective theory serves as a dictionary by which different observers practice and communicate thermodynamics, and that we should not revere the laws of thermodynamics as “true” idols, but rather as tools of good scientific practice.
### References
• M. Polettini and M. Esposito, Effective fluctuation and response theory, arXiv:1803.03552.
In this work we give the complete theory and numerous references to work of other people that was along the same lines. We employ a “spiral” approach to the presentation of the results, inspired by the pedagogical principle of Albert Baez.
• M. Polettini and M. Esposito, Effective thermodynamics for a marginal observer, Phys. Rev. Lett. 119 (2017), 240601, arXiv:1703.05715.
This is a shorter version of the story.
• B. Altaner, M. Polettini and M. Esposito, Fluctuation-dissipation relations far from equilibrium, Phys. Rev. Lett. 117 (2016), 180601, arXiv:1604.0883.
An early version of the story, containing the FDR results but not the full-fledged FR.
• G. Bisker, M. Polettini, T. R. Gingrich and J. M. Horowitz, Hierarchical bounds on entropy production inferred from partial information, J. Stat. Mech. (2017), 093210, arXiv:1708.06769.
Some extras.
• M. F. Weber and E. Frey, Master equations and the theory of stochastic path integrals, Rep. Progr. Phys. 80 (2017), 046601, arXiv:1609.02849.
Great reference if one wishes to learn about path integrals for master equation systems.
### Footnotes
1 There are as many so-called “Fluctuation Theorems” as there are authors working on them, so I decided not to call them by any name. Furthermore, notice I prefer to distinguish between a relation (a formula) and a theorem (a line of reasoning). I lingered more on this here.
2 “Just so you know, nobody knows what energy is.”—Richard Feynman.
I cannot help but mention here the beautiful book by Shapin and Schaffer, Leviathan and the Air-Pump, about the Boyle vs. Hobbes diatribe about what constitutes a “matter of fact,” and Bruno Latour’s interpretation of it in We Have Never Been Modern. Latour argues that “modernity” is a process of separation of the human and natural spheres, and within each of these spheres a process of purification of the unit facts of knowledge and the unit facts of politics, of the object and the subject. At the same time we live in a world where these two spheres are never truly separated, a world of “hybrids” that are at the same time necessary “for all practical purposes” and unconceivable according to the myths that sustain the narration of science, of the State, and even of religion. In fact, despite these myths, we cannot conceive a scientific fact out of the contextual “network” where this fact is produced and replicated, and neither we can conceive society out of the material needs that shape it: so in this sense “we have never been modern”, we are not quite different from all those societies that we take pleasure of studying with the tools of anthropology. Within the scientific community Latour is widely despised; probably he is also misread. While it is really difficult to see how his analysis applies to, say, high-energy physics, I find that thermodynamics and its ties to the industrial revolution perfectly embodies this tension between the natural and the artificial, the matter of fact and the matter of concern. Such great thinkers as Einstein and Ehrenfest thought of the Second Law as the only physical law that would never be replaced, and I believe this is revelatory. 
A second thought on the Second Law, i.e. a systematic and precise definition of all its terms and circumstances, reveals that the only formulations that make sense are phenomenological statements such as Kelvin–Planck’s or similar, which require a lot of contingent definitions regarding the operation of the engine, while fetishized and universal statements are nonsensical (such as that masterwork of confusion that is “the entropy of the Universe cannot decrease”). In this respect, it is neither a purely natural law—as the moderns argue—nor a purely social construct—as the postmoderns argue. One simply has to renounce this separation. While I do not have a definite answer to this problem, I like to think of the Second Law as a practice, a consistency check of the thermodynamic discourse.
3 This assumption really belongs to a time, the XIXth century, when resources were virtually infinite on planet Earth…
4 As we will see shortly, we define equilibrium as that state where there are no currents at the interface between the system and the environment, so what is the environment’s own definition of equilibrium?!
5 This because we have already exploited the First Law.
6 This nomenclature comes from alchemy, via chemistry (think of Goethe’s The elective affinities…), it propagated in the XXth century via De Donder and Prigogine, and eventually it is still present in language in Luxembourg because in some way we come from the “late Brussels school”.
7 Basically, we ask that the tunable parameters are environmental properties, such as temperatures, chemical potentials, etc. and not internal properties, such as the energy landscape or the activation barriers between configurations.
https://survive-python.readthedocs.io/generated/survive.Breslow.html

# survive.Breslow
class survive.Breslow(*, conf_type='log', conf_level=0.95, var_type='aalen', tie_break='discrete')
Breslow nonparametric survival function estimator.
Parameters:

- conf_type : {‘log’, ‘linear’}
  Type of confidence interval to report.
- conf_level : float
  Confidence level of the confidence intervals.
- var_type : {‘aalen’, ‘greenwood’}
  Type of variance estimate to compute.
- tie_break : {‘discrete’, ‘continuous’}
  Specify how to handle tied event times.
See also:

- survive.NelsonAalen : Nelson-Aalen cumulative hazard function estimator.
Notes
The Breslow estimator is a nonparametric estimator of the survival function of a time-to-event distribution defined as the exponential of the negative of the Nelson-Aalen cumulative hazard function estimator $$\widehat{A}(t)$$:
$\widehat{S}(t) = \exp(-\widehat{A}(t)).$
This estimator was introduced in a discussion [1] following [2]. It was later studied by Fleming and Harrington in [3], and it is sometimes called the Fleming-Harrington estimator.
The parameters of this class are identical to the parameters of survive.NelsonAalen. The Breslow survival function estimates and confidence interval bounds are transformations of the Nelson-Aalen cumulative hazard estimates and confidence interval bounds, respectively. The variance estimate for the Breslow estimator is computed using the variance estimate for the Nelson-Aalen estimator using the Nelson-Aalen estimator’s asymptotic normality and the delta method:
$\widehat{\mathrm{Var}}(\widehat{S}(t)) = \widehat{S}(t)^2 \widehat{\mathrm{Var}}(\widehat{A}(t))$
Comparisons of the Breslow estimator and the more popular Kaplan-Meier estimator (cf. survive.KaplanMeier) can be found in [3] and [4]. One takeaway is that the Breslow estimator was found to be more biased than the Kaplan-Meier estimator, but the Breslow estimator had a lower mean squared error.
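To make the estimator concrete, here is a minimal pure-Python sketch, independent of the survive package, that computes the Nelson-Aalen cumulative hazard $\widehat{A}$ and the Breslow estimate $\widehat{S} = \exp(-\widehat{A})$ from a small invented right-censored sample. Observations censored at an event time are kept in the risk set for that event, one common convention.

```python
import math

def breslow(times, events):
    """Nelson-Aalen cumulative hazard and Breslow survival estimate.

    times:  observed times; events: True if event, False if censored.
    Returns a list of (event_time, A_hat, S_hat) at distinct event times.
    """
    data = sorted(zip(times, events))
    n_at_risk = len(data)
    out, a_hat, i = [], 0.0, 0
    while i < len(data):
        t = data[i][0]
        d = sum(1 for s, e in data if s == t and e)   # events at t
        removed = sum(1 for s, _ in data if s == t)   # leave the risk set
        if d > 0:
            a_hat += d / n_at_risk                    # Nelson-Aalen increment
            out.append((t, a_hat, math.exp(-a_hat)))  # Breslow: exp(-A)
        n_at_risk -= removed
        i += removed
    return out

# Invented sample; False marks a censored observation.
times  = [1, 2, 2, 3, 5, 7]
events = [True, True, False, True, False, True]
for t, a, s in breslow(times, events):
    print(t, round(a, 4), round(s, 4))
```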
References
[1] N. E. Breslow. “Discussion of Professor Cox’s Paper”. Journal of the Royal Statistical Society. Series B (Methodological), Volume 34, Number 2 (1972), pp. 216–217.
[2] D. R. Cox. “Regression Models and Life-Tables”. Journal of the Royal Statistical Society. Series B (Methodological), Volume 34, Number 2 (1972), pp. 187–202.
[3] Thomas R. Fleming and David P. Harrington. “Nonparametric Estimation of the Survival Distribution in Censored Data”. Communications in Statistics - Theory and Methods, Volume 13, Number 20 (1984), pp. 2469–2486.
[4] Xuelin Huang and Robert L. Strawderman. “A Note on the Breslow Survival Estimator”. Journal of Nonparametric Statistics, Volume 18, Number 1 (2006), pp. 45–56.
Attributes:

- conf_level : Confidence level of the confidence intervals.
- conf_type : Type of confidence intervals to report.
- data_ : Survival data used to fit the estimator.
- random_state : Seed for this model’s random number generator.
- summary : Get a summary of this estimator.
- tie_break : How to handle tied event times.
- var_type : Type of variance estimate to compute.
Methods
- check_fitted() : Check whether this model is fitted.
- fit(time, **kwargs) : Fit the Breslow estimator to survival data.
- plot(*groups[, ci, ci_style, ci_kwargs, …]) : Plot the estimates.
- predict(time, *[, return_se, return_ci]) : Compute estimates.
- quantile(prob, *[, return_ci]) : Empirical quantile estimates for the time-to-event distribution.
- to_string([max_line_length]) : String representation of this model.
check_fitted()
Check whether this model is fitted. If not, raise an exception.
conf_level
Confidence level of the confidence intervals.
Returns:

- conf_level : float
  The confidence level.
conf_type
Type of confidence intervals to report.
Returns:

- conf_type : str
  The type of confidence interval.
data_
Survival data used to fit the estimator.
This property is only available after fitting.
Returns:

- data : SurvivalData
  The survive.SurvivalData instance used to fit the estimator.
fit(time, **kwargs)
Fit the Breslow estimator to survival data.
Parameters:

- time : one-dimensional array-like or str or SurvivalData
  The observed times, or all the survival data. If this is a survive.SurvivalData instance, then it is used to fit the estimator and any other parameters are ignored. Otherwise, time and the keyword arguments in kwargs are used to initialize a survive.SurvivalData object on which this estimator is fitted.
- **kwargs : keyword arguments
  Any additional keyword arguments used to initialize a survive.SurvivalData instance.

Returns:

- survive.nonparametric.NelsonAalen
  This estimator.
See also:

- survive.SurvivalData : Structure used to store survival data.
- survive.NelsonAalen : Nelson-Aalen cumulative hazard estimator.
plot(*groups, ci=True, ci_style='fill', ci_kwargs=None, mark_censor=True, mark_censor_kwargs=None, legend=True, legend_kwargs=None, colors=None, palette=None, ax=None, **kwargs)
Plot the estimates.
Parameters:

- *groups : list of group labels
  Specify the groups whose curves should be plotted. If none are given, the curves for all groups are plotted.
- ci : bool, optional
  If True, draw pointwise confidence intervals.
- ci_style : {“fill”, “lines”}, optional
  Specify how to draw the confidence intervals. If ci_style is “fill”, the region between the lower and upper confidence interval curves will be filled. If ci_style is “lines”, only the lower and upper curves will be drawn (this is inspired by the style of confidence intervals drawn by plot.survfit in the R package survival).
- ci_kwargs : dict, optional
  Additional keyword parameters to pass to fill_between() (if ci_style is “fill”) or step() (if ci_style is “lines”) when plotting the pointwise confidence intervals.
- mark_censor : bool, optional
  If True, indicate the censored times by markers on the plot.
- mark_censor_kwargs : dict, optional
  Additional keyword parameters to pass to scatter() when marking censored times.
- legend : bool, optional
  Indicates whether to display a legend for the plot.
- legend_kwargs : dict, optional
  Keyword parameters to pass to legend().
- colors : list or tuple or dict or str, optional
  Colors for each group. This is ignored if palette is provided. This can be a sequence of valid matplotlib colors to cycle through, or a dictionary mapping group labels to matplotlib colors, or the name of a matplotlib colormap.
- palette : str, optional
  Name of a seaborn color palette. Requires seaborn to be installed. Setting a color palette overrides the colors parameter.
- ax : matplotlib.axes.Axes, optional
  The axes on which to plot. If this is not specified, the current axes will be used.
- **kwargs : keyword arguments
  Additional keyword arguments to pass to step() when plotting the estimates.

Returns:

- matplotlib.axes.Axes
  The Axes on which the plot was drawn.
predict(time, *, return_se=False, return_ci=False)
Compute estimates.
Parameters:
    time : array-like
        One-dimensional array of times at which to make estimates.
    return_se : bool, optional
        If True, also return standard error estimates.
    return_ci : bool, optional
        If True, also return confidence intervals.

Returns:
    estimate : pandas.DataFrame
        DataFrame of estimates. Each column represents a group, and each row represents an entry of time.
    std_err : pandas.DataFrame, optional
        Standard errors of the estimates. Same shape as estimate. Returned only if return_se is True.
    lower : pandas.DataFrame, optional
        Lower confidence interval bounds. Same shape as estimate. Returned only if return_ci is True.
    upper : pandas.DataFrame, optional
        Upper confidence interval bounds. Same shape as estimate. Returned only if return_ci is True.
quantile(prob, *, return_ci=False)
Empirical quantile estimates for the time-to-event distribution.
Parameters:
    prob : array-like
        One-dimensional array of values between 0 and 1 representing the probability levels of the desired quantiles.
    return_ci : bool, optional
        Specify whether to return confidence intervals for the quantile estimates.

Returns:
    quantiles : pandas.DataFrame
        The quantile estimates. Rows are indexed by the entries of prob and columns are indexed by the model’s group labels. Entries for probability levels for which the quantile estimate is not defined are nan (not a number).
    lower : pandas.DataFrame, optional
        Lower confidence interval bounds for the quantile estimates. Returned only if return_ci is True. Same shape as quantiles.
    upper : pandas.DataFrame, optional
        Upper confidence interval bounds for the quantile estimates. Returned only if return_ci is True. Same shape as quantiles.
Notes
For a probability level $$p$$ between 0 and 1, the empirical $$p$$-quantile of the time-to-event distribution with estimated survival function $$\widehat{S}(t)$$ is defined to be the time at which the horizontal line at height $$1-p$$ intersects with the estimated survival curve. If such a time is not unique, then instead there is a time interval on which the estimated survival curve is flat and coincides with the horizontal line at height $$1-p$$. In this case the midpoint of this interval is taken to be the empirical $$p$$-quantile estimate (this is just one of many possible conventions, and the one used by the R package survival [1]). If the survival function estimate never gets as low as $$1-p$$, then the $$p$$-quantile cannot be estimated.
The confidence intervals computed here are based on finding the time at which the horizontal line at height $$1-p$$ intersects the upper and lower confidence interval for $$\widehat{S}(t)$$. This mimics the implementation in the R package survival [1], which is based on the confidence interval construction in [2].
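The midpoint convention described above can be sketched numerically. The step function below is a toy example (not this package's implementation), but it shows both cases: a flat stretch of the curve exactly at the level, and a jump past it.

```python
# Toy right-continuous step survival curve: S(t) = surv[i] on [times[i], times[i+1]).
times = [0.0, 1.0, 2.0, 4.0]
surv = [1.0, 0.8, 0.5, 0.2]

def empirical_quantile(p):
    """p-quantile of the toy curve under the midpoint convention described above."""
    level = 1.0 - p
    # Case 1: the curve is flat exactly at the level on [times[i], times[i+1]);
    # take the midpoint of that interval.
    for i, s in enumerate(surv):
        if s == level:
            if i + 1 < len(times):
                return (times[i] + times[i + 1]) / 2
            return times[i]
    # Case 2: the curve jumps past the level; the quantile is the jump time.
    for t, s in zip(times, surv):
        if s < level:
            return t
    # Case 3: the curve never gets as low as the level; the quantile is undefined.
    return float("nan")

print(empirical_quantile(0.5), empirical_quantile(0.3))  # 3.0 2.0
```

Here the median (p = 0.5) falls on a flat stretch of the curve, so the midpoint of [2, 4) is returned, while p = 0.3 hits a jump and returns the jump time.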
References
[1] Terry M. Therneau. A Package for Survival Analysis in S. version 2.38 (2015). CRAN.
[2] Ron Brookmeyer and John Crowley. “A Confidence Interval for the Median Survival Time.” Biometrics, Volume 38, Number 1 (1982), pp. 29–41. DOI.
random_state
Seed for this model’s random number generator. This may not be a numpy.random.RandomState instance. The internal RNG is not a public attribute and should not be used directly.
Returns:
    random_state : object
        The seed for this model’s RNG.
summary
Get a summary of this estimator.
Returns:
    summary : NonparametricEstimatorSummary
        The summary of this estimator.
tie_break
How to handle tied event times.
to_string(max_line_length=75)
String representation of this model.
Parameters:
    max_line_length : int, optional
        Specifies the maximum length of a line. If None, everything will be on one line.

Returns:
    model_string : str
        A string representation of this model which should be able to be used to instantiate a new identical model.
var_type
Type of variance estimate to compute. | 2022-09-24 22:41:13 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2821202874183655, "perplexity": 2781.82498975307}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030333541.98/warc/CC-MAIN-20220924213650-20220925003650-00105.warc.gz"} |
https://cosmocoffee.info/viewtopic.php?f=11&t=3411&p=9377&sid=0182d0f6dbdf58e02367659a952b313c | ## CosmoMC on cluster problem finding libclik_f90.so
Use of Cobaya. camb, CLASS, cosmomc, compilers, etc.
Ira Wolfson
Posts: 69
Joined: January 24 2013
Affiliation: MPA
Contact:
### CosmoMC on cluster problem finding libclik_f90.so
Hi,
I am trying to run CosmoMC on our cluster.
I have compiled CosmoMC with planck data and ACTPol data.
Everything runs fine on my local node albeit very slowly (not surprised).
I thus try to run this on our cluster (SGE).
Unfortunately I get an error message:
Code:
./cosmomc: error while loading shared libraries: libclik_f90.so: cannot open shared object file: No such file or directory
I checked that the PATH and LD_LIBRARY_PATH are identical on my local machine and the cluster.
I checked that the file is in its proper place within the cluster and it is found readily by the system.
Is there something I have missed?
Maybe in CosmoMC compilation? | 2021-06-21 01:29:33 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3119182884693146, "perplexity": 14298.250461852576}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623488259200.84/warc/CC-MAIN-20210620235118-20210621025118-00294.warc.gz"} |
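One way to narrow this down (the paths below are hypothetical — substitute your own clik install prefix) is to check how the dynamic loader resolves the binary, and to make sure the environment actually reaches the compute node:

```shell
# Hypothetical install prefix: adjust to wherever clik was built on the cluster.
CLIK_DIR="$HOME/planck/plc-3.01"

# Make libclik_f90.so visible to the dynamic loader for this shell and its children.
export LD_LIBRARY_PATH="$CLIK_DIR/lib:$LD_LIBRARY_PATH"

# List the shared libraries the binary resolves; any "not found" line is a problem.
ldd ./cosmomc || true

# SGE batch jobs often start with a clean environment, so a value that works in an
# interactive shell may never reach the compute node. Either export the variable
# inside the job script itself, or forward your environment with qsub's -V flag:
#$ -V
```

If `ldd` shows the library as "not found" only inside the job but not in an interactive shell, the environment is not being propagated to the node.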
http://rosalind.info/glossary/spectrum-graph/ | # Glossary
## Spectrum graph
A digraph constructed from a weighted alphabet and collection of positive real numbers by creating a node for each number and constructing a directed edge $(u, v)$ if $v > u$ and $v - u$ is equal to the weight of a symbol in the alphabet. | 2020-01-26 17:17:23 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5611270070075989, "perplexity": 251.69006620580268}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579251690095.81/warc/CC-MAIN-20200126165718-20200126195718-00366.warc.gz"} |
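As an illustration, the construction can be sketched in Python. The weighted alphabet and the number collection below are assumed for the example (rounded integer amino-acid masses):

```python
# Assumed weighted alphabet (rounded integer masses) and spectrum values.
weights = {"G": 57, "A": 71, "S": 87}
spectrum = [0, 57, 71, 128, 158]

# One node per number; a directed edge u -> v whenever v > u and v - u equals
# the weight of some symbol in the alphabet (the edge is labelled with it).
edges = []
for u in spectrum:
    for v in spectrum:
        if v > u:
            for symbol, w in weights.items():
                if v - u == w:
                    edges.append((u, v, symbol))

print(edges)
# [(0, 57, 'G'), (0, 71, 'A'), (57, 128, 'A'), (71, 128, 'G'), (71, 158, 'S')]
```

Reading a path through this digraph from the smallest to the largest node spells out a candidate peptide.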
https://discourse.julialang.org/t/any-other-method-to-calculate-indefinite-integral-besides-using-sympy/90707 | # Any Other Method to Calculate Indefinite Integral Besides Using SymPy?
Hi all,
I use this code to calculate the indefinite integral of a complex trigonometric function:
# To compute an indefinite integral
using SymPy
# we can also use @vars x y z
x = symbols("x")
integrate((sin((x^(2) + 1)^(4)))^(3)*(cos(x^(2) + 1)^(4))*((x^(2)+1)^(3))*x)
it took a long time, as if the computer was thinking and had forgotten the trigonometric formulas, making the wait very long. Maybe the problem is that my processor or RAM is not good enough. If there is any other method to calculate a complex indefinite integral like this, do tell me.
1 Like
You seem to have a typo in your integrand, though I can’t be sure. The following has an integral identified using SymbolicNumericIntegration:
sin((x^(2) + 1)^(4))^(3)*(cos((x^(2) + 1)^(4))*((x^(2)+1)^(3)))*x
With SymPy it seems to need a bit of help with the substitution. This somewhat excessive pattern mirrors what might be done in a textbook:
using SymPy
@syms x dx v dv
constant(ex) = prod(x for x in ex.as_ordered_factors() if x.is_constant())
ex = sin((x^(2) + 1)^(4))^(3)*(cos((x^(2) + 1)^(4))*((x^(2)+1)^(3)))*x * dx
u = (x^2 + 1)^4
du = diff(u, x) * dx
c = constant(du)
duₑ = du/c
ex1 = subs(ex, duₑ => dv/c, u=>v, dv=>1)
integrate(ex1, v)(v => u)
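For reference, the same substitution can be carried out by hand (a standard calculus computation, not taken from the thread). With $w = (x^{2}+1)^{4}$:

$$dw = 8x\,(x^{2}+1)^{3}\,dx,$$

$$\int \sin^{3}\!\big((x^{2}+1)^{4}\big)\,\cos\!\big((x^{2}+1)^{4}\big)\,(x^{2}+1)^{3}\,x\,dx
= \frac{1}{8}\int \sin^{3}w\,\cos w\,dw
= \frac{\sin^{4}w}{32} + C.$$

So in the code above, $v$ stands for $(x^{2}+1)^{4}$, and substituting it back gives $\sin^{4}\!\big((x^{2}+1)^{4}\big)/32 + C$.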
The compilation has not even finished yet… I will try your code now then. Thanks @j_verzani
This is the solution from the textbook:
Why does the code give a solution of:
2 \sin^{4} v
how to know what is v ?
Did you try Julia?
julia> using Symbolics
julia> using SymbolicNumericIntegration
julia> @variables x
1-element Vector{Num}:
x
julia> expr=sin((x^(2) + 1)^(4))^(3)*(cos((x^(2) + 1)^(4))*((x^(2)+1)^(3)))*x
x*((1 + x^2)^3)*(sin((1 + x^2)^4)^3)*cos((1 + x^2)^4)
julia> @time integrate(expr)
74.138733 seconds (323.74 M allocations: 16.519 GiB, 4.00% gc time, 77.46% compilation time: 0% of which was recompilation)
(0, x*(sin(1 + x^8 + 4(x^2) + 4(x^6) + 6(x^4))^3)*cos(1 + x^8 + 4(x^2) + 4(x^6) + 6(x^4)) + (x^7)*(sin(1 + x^8 + 4(x^2) + 4(x^6) + 6(x^4))^3)*cos(1 + x^8 + 4(x^2) + 4(x^6) + 6(x^4)) + 3(x^3)*(sin(1 + x^8 + 4(x^2) + 4(x^6) + 6(x^4))^3)*cos(1 + x^8 + 4(x^2) + 4(x^6) + 6(x^4)) + 3(x^5)*(sin(1 + x^8 + 4(x^2) + 4(x^6) + 6(x^4))^3)*cos(1 + x^8 + 4(x^2) + 4(x^6) + 6(x^4)), Inf)
I fixed my example to substitute back (and adjust the constant, which I had incorrect.)
Thanks @ufechner7, but isn’t @btime better to use than @time?
You changed this
ex1 = subs(ex, duₑ => c*dv, u=>v, dv=>1)
into this
ex1 = subs(ex, duₑ => dv/c, u=>v, dv=>1)
What is the logic of dv/c versus c*dv?
Just that subs didn’t work with that constant 8 that comes out of the derivative. So it is taken out then put back in. | 2022-11-29 15:42:02 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.744954526424408, "perplexity": 8060.215461827755}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710698.62/warc/CC-MAIN-20221129132340-20221129162340-00661.warc.gz"} |
https://forum.allaboutcircuits.com/threads/resolver-rotor-shape-resolver-rule-2.188134/ | # Resolver rotor shape (Resolver rule 2)
#### Sergio155
Joined Jul 12, 2022
8
Hi everyone! I have a resolver with 2 pole pairs, but I need one with 5 pole pairs. I would like to replace the 2 pp rotor with a 5 pp rotor. Does anyone know how to determine the correct shape of the rotor? Are there any articles or books about this? I have the equipment to make such a new rotor, and I don't want to buy a new resolver because I have many 2 pp rotors.
Joined Jul 18, 2013
26,000
What application is it used on? Why do you need the extra pairs?
#### Sergio155
Joined Jul 12, 2022
8
I need to control a PMSM electric motor, 10 kW. The motor has 5 pole pairs, so the resolver must have the same number of poles.
Joined Jul 18, 2013
26,000
Does the motor presently have an encoder, shaft or through-hole type?
I used to obtain encoders with the commutation track on them from Renco Encoder; unfortunately they have since been taken over by Heidenhain, so they may be harder to get and/or cost more.
There are other suppliers, It may be more effective and efficient to go this way if possible.
Resolvers are not so popular for this application now. | 2022-12-07 21:14:43 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6017310619354248, "perplexity": 2239.571271500393}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446711218.21/warc/CC-MAIN-20221207185519-20221207215519-00161.warc.gz"} |
https://stat.ethz.ch/R-manual/R-devel/library/survival/html/concordancefit.html | concordancefit {survival} R Documentation
## Compute the concordance
### Description
This is the working routine behind the concordance function. It is not meant to be called by users, but is available for other packages to use. Input arguments, for instance, are assumed to all be the correct length and type, and missing values are not allowed: the calling routine is responsible for these things.
### Usage
concordancefit(y, x, strata, weights, ymin = NULL, ymax = NULL,
timewt = c("n", "S", "S/G", "n/G2", "I"), cluster, influence =0,
ranks = FALSE, reverse = FALSE, timefix = TRUE, keepstrata=10,
std.err = TRUE)
### Arguments
y: the response. It can be numeric, factor, or a Surv object.
x: the predictor, a numeric vector.
strata: optional numeric vector that stratifies the data.
weights: optional vector of case weights.
ymin, ymax: restrict the comparison to response values in this range.
timewt: the time weighting to be used.
cluster, influence, ranks, reverse, timefix: see the help for the concordance function.
keepstrata: either TRUE, FALSE, or an integer value. Computations are always done within stratum, then added. If the total number of strata is greater than keepstrata, or keepstrata=FALSE, those subtotals are not kept in the output.
std.err: compute the standard error; not doing so saves some compute time.
### Details
This function is provided for those who want a “direct” call to the concordance calculations, without using the formula interface. A primary use has been other packages. The routine does minimal checking of its input arguments, under the assumption that this has already been taken care of by the calling routine.
### Value
a list containing the results
### Author(s)
Terry Therneau
### See Also

concordance
https://brilliant.org/problems/an-algebra-problem-by-rakshit-chaudhary/ | Algebra Level 4
To become a real survivor one has to kill:
1 male walker on 1st day.
10 male walkers, 9 female walkers, 3 enemies on the 2nd day.
215 male walkers , 148 female walkers on the 3rd day.
1276 male walkers, 4043 female walkers, 5 enemies on the 4th day.
And so on.
If you want to become a real survivor, you do this for 12 days. If the total number of killings $$K$$ can be expressed in the form $$e\cdot K = {a}^{b}\cdot c - d$$, then what is the value of $$a + b + c + d + e - 117$$?
https://structurescentre.com/assessing-the-stability-of-frames/ | # Assessing the Stability of Frames |Second Order Effects|
#### This article attempts to give a background on the subject of frame stability with a primary focus on the practicalities of structural steel-work design to Eurocode 3
For every structure and structural material, frame stability is an important consideration. Designers of structural steel-work were the first to recognize the importance of assessing the stability of frames. Even though the requirement to ensure that frames are made sufficiently stiff has always been included in earlier codes, no explicit guidance on assessing frame stability was given. The topic only became a subject of importance in BS 5950, now superseded by BS EN 1993.
This article attempts to give a background on the subject of frame stability with a primary focus on the practicalities of structural steel-work. The article does this using the guidance specified in Eurocode 3, however, there is no doubt that designers working to different codes of practices and with different materials will gain useful insights on the subject.
Frame stability concerns the effect of displaced vertical loads that are no longer concentric with their normal positions. This effect usually manifests as lateral displacement, which can be caused either by externally applied loads such as wind, or by the frame being out of plumb by some degree. The latter is most often the case and leads to the vertical loads applied on the frame being displaced. The displaced vertical loads in turn cause further lateral displacement. This behaviour is classified as a second-order effect, and most design codes require that its magnitude be assessed and allowed for within the design where necessary.
Although the second-order effects described above can be ignored in some frames where they are small enough, they are always present and must always be checked.
### Defining the Terminologies
Structural steel-work designers oftentimes refer to sway sensitive frames and non sway frames. The latter description is false since all frames under the application of loads will sway. The distinction between the two terminologies in real sense lies in the significance of the sway effects. Some designers also refer to second-order effects as P-delta effects. To be clear P-delta effects are due to the likely initial imperfections within the length of members. These are usually allowed for automatically in the actual design of structural members, hence would not be addressed in this article.
Other designers also refer to “sway frames” when a proper terminology would probably be “unbraced frame.” In an unbraced frame, resistance to lateral loads is provided by the continuity of structural elements (Moment resisting frames). Whereas, braced frames in contrast to unbraced frames derive their resistance to lateral forces from the disposition of steel bracings and diagonal steel members or perhaps by the provision of a concrete core. This distinction as well as correct understanding of these terminologies is very important and should be given utmost importance, because a braced frame can be “sway sensitive.” Same way an unbraced frame can be sufficiently stiff such that second order effects are then small enough to be ignored.
Before the stability of a steel frame can be assessed, one very important parameter must be determined: the elastic critical load, Pcr. The elastic critical load can be described as the load at which the entire frame will collapse under the application of vertical loads only. It is a function of the frame's properties and of the shape of loading. For example, consider a frame subjected to vertical loads only (Figure 2). If the vertical loads on this frame are gradually increased, at some point the frame will collapse. Now suppose the frame was initially out of plumb by some degree while the vertical loads were increased; the additional deformations due to the vertical loads just before reaching the elastic critical load would be significant. Hence the ratio between the elastic critical load and the applied load is an important pointer towards second-order effects.
In EC3 this ratio is known as Fcr/FEd. As FEd increases, the ratio reduces, indicating increased sensitivity to second-order effects.
### The Eurocode Approach to Frame Stability
The basic procedure involves estimating the applied vertical loads on the frame and the magnitude of the elastic critical load for the frame and shape of loading. Estimating Fcr is often a tedious task by manual analysis, so software is frequently used to determine its magnitude. As an alternative, however, design standards provide a simplified but conservative method of determining αcr. In Eurocode 3, the assessment of frame stability is dealt with explicitly in Section 5.2.
The underlying expression can be written as:
{ \alpha }_{ cr }=\left[ \frac { { H }_{ Ed } }{ { V }_{ Ed } } \right] \left( \frac { h }{ { \delta }_{ h,Ed } } \right)
Where:
h is the storey height
HEd is the horizontal shear at the base of the storey. This is equal to the sum of the lateral loads applied at all floor and roof levels above the storey under consideration. In general, these lateral loads will be the Equivalent Horizontal Forces (EHF) prescribed in Clause 5.3.2 together with any wind forces ( if the wind is part of the combination of actions being considered).
VEd is the total vertical load at the base of the storey. This is equal to the sum of the vertical loads from all the floors and roof, above the storey under consideration.
δH,Ed is the lateral displacement over the storey, i.e. the displacement between levels due to the lateral loads only.
This expression is evaluated for each storey, from the lowest storey to the top of the frame. In simple steel frames containing several similar bracings, the magnitude of δH,Ed can be determined by analysing just one braced bay. Where this is the case, the horizontal and vertical actions applied should be apportioned in proportion to the stiffnesses.
It is also important to state that the approximate formula given in Section 5.2 of EC3 has certain limitations: it cannot be applied to irregular frames or to portal frames with significant axial forces in the rafters.
### Second-Order Effects
The primary reason for assessing the stability of a frame is to determine its sensitivity to sway, in other words its propensity for second-order effects. The Eurocode defines when second-order effects are small enough to be ignored: for frames designed elastically, second-order effects may be neglected if αcr is greater than 10. If αcr is less than 10, the frame is sensitive to second-order effects and a second-order analysis needs to be carried out. However, the Eurocode again simplifies this through the use of an amplifier applied to the horizontal actions (wind, EHFs, etc.).
\quad load\quad amplifier\quad =\quad \frac { 1 }{ 1-\frac { 1 }{ { \alpha }_{ cr } } }
Note that the amplification factor can only be applied if αcr is greater than 3. Where this is not the case, a full second-order analysis must be undertaken. The simple amplification is only one way to allow for second-order effects; there are other approaches, including the use of software.
To conclude, steel frames are relatively lightweight when compared to concrete frames, so sensitivity to second order effects should always be expected. There is nothing absolutely wrong with a structure having αcr less than 10. It is indeed expected that many frames will fall into this category, hence the provisions in the codes. The use of software which will allow for these effects is one convenient approach. For straightforward frames, the Eurocode contains a simple method to assess the significance of second order effects and how to allow for them if necessary.
### Worked Example
Figure 2.0 shows a typical braced bay of an office building with one suspended floor, consisting of UKBs, UKCs and diagonal bracings. Assess the sway sensitivity of the structure. Are second-order effects significant? If so, what are the amplification factors?
The design actions have been apportioned according to the stiffnesses of the braced bays and are shown in Tables 1 and 2. Table 1 shows combination one, in which imposed load is the leading variable action and wind the accompanying variable action, while combination two (Table 2) has wind as the leading variable action with imposed load as the accompanying variable action. In each case, the EHFs have been included in the horizontal actions.
Table 1: Combination one

| Floor | Vertical Actions (kN) | Horizontal Actions (kN) | Total Deflection (mm) |
|---|---|---|---|
| Roof-1st | 4034 | 34.1 | 9.3 |
| 1st-Ground | 9176 | 73.6 | 5.1 |
Table 2: Combination two

| Floor | Vertical Actions (kN) | Horizontal Actions (kN) | Total Deflection (mm) |
|---|---|---|---|
| Roof-1st | 3519 | 45.3 | 11.9 |
| 1st-Ground | 7331 | 96.7 | 6.5 |
The last column in both tables shows the total displacement at each storey; this has been obtained by analysing the frame under the horizontal loads only, as the method requires.
{ \alpha }_{ cr }=\left[ \frac { { H }_{ Ed } }{ { V }_{ Ed } } \right] \left( \frac { h }{ { \delta }_{ h,Ed } } \right)
a) Roof – 1st Floor
{ H }_{ Ed }=34.1kN\quad { V }_{ Ed }=4034kN;\\h =3000mm\\ { \delta }_{ H,Ed }=9.3-5.1=4.2mm
{ \alpha }_{ cr }=\frac { 34.1 }{ 4034 } \times \frac { 3000 }{ 4.2 } =6.04<10
b) 1st – Ground Floor
{ H }_{ Ed }=34.1+73.6=107.7kN\\ { V }_{ Ed }=4034+9176=13210kN
h\quad =3500mm\quad { \delta }_{ H,Ed }=5.1mm
{ \alpha }_{ cr }=\frac { 107.7 }{ 13210 } \times \frac { 3500 }{ 5.1 } =5.6<10
Therefore the worst case is αcr = 5.6
\quad load\quad amplifier\quad =\quad \frac { 1 }{ 1-\frac { 1 }{ { \alpha }_{ cr } } }
=\frac { 1 }{ 1-\frac { 1 }{ 5.6 } } =1.22
Thus, all horizontal actions acting on the frame for combination one must be increased by 22% in order to allow for second-order effects.
a) Roof – 1st Floor
{ H }_{ Ed }=45.3kN\quad { V }_{ Ed }=3519kN;\\ h\quad =3000mm\\ { \delta }_{ H,Ed }=11.9-6.5=5.4mm
{ \alpha }_{ cr }=\frac { 45.3 }{ 3519 } \times \frac { 3000 }{ 5.4 } =7.15<10
b) 1st – Ground Floor
{ H }_{ Ed }=45.3+96.7=142.0kN\\{ V }_{ Ed }=3519+7331=10850kN
h\quad =3500mm\quad { \delta }_{ H,Ed }=6.5mm
{ \alpha }_{ cr }=\frac { 142.0 }{ 10850 } \times \frac { 3500 }{ 6.5 } =7.05<10
Therefore the worst case is αcr = 7.05
\quad load\quad amplifier\quad =\quad \frac { 1 }{ 1-\frac { 1 }{ { \alpha }_{ cr } } }
=\frac { 1 }{ 1-\frac { 1 }{ 7.05 } } =1.17
Thus, all horizontal actions acting on the frame for combination two must be increased by 17% in order to allow for second-order effects.
We can conclude that this frame is sensitive to second-order effects, and thus allowance must be made for them in its analysis and design.
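For checking the arithmetic, the storey-by-storey αcr evaluation and the amplifier can be scripted. The Python sketch below is illustrative only, reproducing the combination-one figures above:

```python
def alpha_cr(H_Ed, V_Ed, h, delta_H_Ed):
    """EC3 Clause 5.2 approximation: alpha_cr = (H_Ed / V_Ed) * (h / delta_H_Ed)."""
    return (H_Ed / V_Ed) * (h / delta_H_Ed)

def load_amplifier(a_cr):
    """Amplifier 1 / (1 - 1/alpha_cr), valid only when alpha_cr > 3."""
    if a_cr <= 3.0:
        raise ValueError("alpha_cr <= 3: a full second-order analysis is required")
    return 1.0 / (1.0 - 1.0 / a_cr)

# Combination one, storey by storey (units: kN, mm).
a1 = alpha_cr(34.1, 4034, 3000, 9.3 - 5.1)   # Roof - 1st:   ~6.04
a2 = alpha_cr(107.7, 13210, 3500, 5.1)       # 1st - Ground: ~5.60
worst = min(a1, a2)
print(round(worst, 2), round(load_amplifier(worst), 2))  # 5.6 1.22
```

Combination two follows the same pattern with its own table values.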
Thank You!!!
## 48 Replies to “Assessing the Stability of Frames |Second Order Effects|”
1. Basit Kareem says:
What a great and enlightening article. Thanks very much sir 💕😊
http://math.stackexchange.com/questions/5616/geometric-progression | # Geometric Progression
If S1, S2, and S are the sums of n terms, 2n terms, and infinitely many terms of a G.P., find the value of S1(S1-S).
PS:Nothing is given about the common ratio.
-
Of course not; this again comes from my test paper without any kind of explanation, except for the answer. – Quixotic Sep 28 '10 at 5:43
@Deb: You should state the source of the problem in the post. People are resistant to homework-like questions, as one is supposed to do one's own homework. – KennyTM Sep 28 '10 at 6:33
@Debanjan: I mean you can just copy your first comment into the post in your next question (if any). – KennyTM Sep 28 '10 at 8:24
@Debanjan: I suppose you have been asked if it is homework for earlier questions (hence your usage of word 'again'). Why don't you just mention that that is the case (from a test) and avoid getting questions like these ('is it homework')? In any case, why don't you also show some working? Test questions are like homework, in a way. – Aryabhata Sep 28 '10 at 16:24
@ Moron: The wiki definition (en.wikipedia.org/wiki/Homework), and my understanding, is that homework is something assigned by my teacher, and if I don't manage to do it he/she is there to help me do it, whereas I don't think test questions are, since in some cases the questions are not well defined and there are errors in the solutions, thanks to the problem-setters. Besides, you need a fast/tricky approach to get things done during a test. – Quixotic Sep 29 '10 at 4:58
I change your notation from S1, S2 and S to $S_{n},S_{2n}$ and $S$.
The sum of $n$ terms of a geometric progression of ratio $r$
$u_{1},u_{2},\ldots ,u_{n}$
is given by
$S_{n}=u_{1}\times \dfrac{1-r^{n}}{1-r}\qquad (1)$.
Therefore the sum of $2n$ terms of the same progression is
$S_{2n}=u_{1}\times \dfrac{1-r^{2n}}{1-r}\qquad (2)$.
Assuming that the sum $S$ exists, it is given by
$S=\lim S_{n}=u_{1}\times \dfrac{1}{1-r}\qquad (3)$.
Since the "answer is S(S1-S2)", i.e. $S(S_{n}-S_{2n})$ in the new notation, we have to prove this identity
$S_{n}(S_{n}-S)=S(S_{n}-S_{2n})\qquad (4).$
Plugging $(1)$, $(2)$ and $(3)$ into $(4)$ we have to prove the following equivalent algebraic identity:
$u_{1}\times \dfrac{1-r^{n}}{1-r}\left( u_{1}\times \dfrac{1-r^{n}}{1-r}-u_{1}\times \dfrac{1}{1-r}\right)$

$=u_{1}\times \dfrac{1}{1-r}\left( u_{1}\times \dfrac{1-r^{n}}{1-r}-u_{1}\times \dfrac{1-r^{2n}}{1-r}\right) \qquad (5)$,

which, after cancelling the common factor $u_{1}^{2}$ and the denominator $(1-r)^{2}$ on both sides, becomes:

$\left( 1-r^{n}\right) \left( \left( 1-r^{n}\right) -1\right) =\left( 1-r^{n}\right) -\left( 1-r^{2n}\right) \qquad (6)$.
This is equivalent to
$\left( 1-r^{n}\right) \left( -r^{n}\right) =-r^{n}+r^{2n}\iff 0=0\qquad (7)$.
Given that $(7)$ is true, $(5)$ and $(4)$ are also true.
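The identity can also be spot-checked numerically (an illustrative check, not part of the original answer; the values of $u_1$, $r$ and $n$ below are arbitrary choices with $|r|<1$):

```python
# Numeric sanity check of S_n*(S_n - S) = S*(S_n - S_2n)
# for a geometric progression with |r| < 1.

def partial_sum(u1, r, k):
    """Sum of the first k terms of the GP u1, u1*r, u1*r^2, ..."""
    return u1 * (1 - r**k) / (1 - r)

u1, r, n = 3.0, 0.4, 5
S_n  = partial_sum(u1, r, n)       # sum of n terms
S_2n = partial_sum(u1, r, 2 * n)   # sum of 2n terms
S    = u1 / (1 - r)                # sum to infinity, valid since |r| < 1

assert abs(S_n * (S_n - S) - S * (S_n - S_2n)) < 1e-12
```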
-
That's what I have done, but let me ask you: why are you assuming that r is less than 1? This satisfies the relation of course. But in real time I can only spare a minute or so on this problem, so I guess the problem is not well defined?! – Quixotic Sep 29 '10 at 4:52
I used formula (1) to evaluate the limit of $S_n$ as $n$ tends to $\infty$. This limits exists if and only if $|r|\lt 1$. en.wikipedia.org/wiki/Geometric_series – Américo Tavares Sep 29 '10 at 7:53
HINT $\quad\:$ In $\rm\ \ (1-X)\ (1-(1-X))\ =\ 1-X^2-(1-X)\ \ \$ put $\rm\ \ \ X = x^n\$
then multiply both sides by $\rm\ 1/(1-x)^2\ =\ S/(1-x)\:.\ \$ More generally one has
$\rm\ \ (1-x^a)\:(1-x^b)\ =\ (1-x^a) + (1-x^b) - (1-x^{a+b})$
$\rm\quad\quad\quad\ \Rightarrow\quad\quad S_a\ S_b\ =\ S\ (S_a + S_b - S_{a+b})\:,\quad S_n = \displaystyle\frac{1-x^n}{1-x},\quad S = S_\infty = \frac{1}{1-x}$
This generalizes to arbitrary products $\rm\: S_{a}\: S_b\: S_c\cdots S_k\:$ using the Inclusion–exclusion principle.
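The generalized identity is easy to verify numerically as well (an illustrative check, not part of the hint; $x$, $a$ and $b$ below are arbitrary test values):

```python
# Numeric check of S_a * S_b = S * (S_a + S_b - S_{a+b}),
# where S_k = (1 - x**k)/(1 - x) and S = 1/(1 - x), for |x| < 1.

def S_k(x, k):
    return (1 - x**k) / (1 - x)

x, a, b = 0.3, 4, 7
S = 1 / (1 - x)

lhs = S_k(x, a) * S_k(x, b)
rhs = S * (S_k(x, a) + S_k(x, b) - S_k(x, a + b))

assert abs(lhs - rhs) < 1e-12
```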
- | 2014-10-01 00:21:22 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9210085272789001, "perplexity": 614.0747605332126}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-41/segments/1412037663218.28/warc/CC-MAIN-20140930004103-00365-ip-10-234-18-248.ec2.internal.warc.gz"} |
https://discuss.mxnet.apache.org/t/environment/2347 | # Environment
“For this scheme to work, we need that each data point in the target (test time) distribution had nonzero probability of occurring at training time. If we find a point where q(x)>0 but p(x)=0, then the corresponding importance weight should be infinity.”
Isn’t it the other way around?
$q(x) = 0$ will cause $\beta \to \infty$
I think there is a typo in formula 4.9.2, where the denominator should be q as well.
$$\int p(\mathbf{x}) f(\mathbf{x})\, dx = \int p(\mathbf{x}) f(\mathbf{x}) \frac{q(\mathbf{x})}{q(\mathbf{x})}\, dx$$
Hi @Siyang, great catch! Thanks!
At the end of section 4.9.1.5 “Covariate Shift Correction” it is stated that the correction factor is infinity for p(x)=0 and q(x)>0. This conflicts with the definition of beta(x)=p(x)/q(x) (following equation 4.9.2). Should q(x) and p(x) be switched?
Can someone explain "When the distribution of labels shifts over time $p(y) \neq q(y)$ but the class-conditional distributions stay the same $p(\mathbf{x}) = q(\mathbf{x})$, our importance weights will correspond to the label likelihood ratios $q(y)/p(y)$."
what is the connection here?
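One way to see the connection concretely (a toy numeric sketch, not from the thread; the two label distributions below are made up): weighting each training example by $q(y)/p(y)$ turns the training label frequencies into the target ones.

```python
# Under label shift, reweighting by q(y)/p(y) matches the training
# label distribution to the target distribution.

p = {"cat": 0.8, "dog": 0.2}   # training label distribution p(y)
q = {"cat": 0.5, "dog": 0.5}   # target label distribution q(y)

weights = {y: q[y] / p[y] for y in p}           # importance weights
reweighted = {y: p[y] * weights[y] for y in p}  # p(y) * q(y)/p(y) = q(y)

assert all(abs(reweighted[y] - q[y]) < 1e-12 for y in p)
```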
Just found this video to clear the confusion https://www.youtube.com/watch?v=nAqQF-jU_YM | 2021-10-28 06:08:10 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9090755581855774, "perplexity": 2509.7468541709986}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323588257.34/warc/CC-MAIN-20211028034828-20211028064828-00140.warc.gz"} |
https://cs.stackexchange.com/questions/82784/universal-lossless-compression | # Universal Lossless Compression? [closed]
It is not possible to losslessly compress all files of size $n$ using a single algorithm, as there are more files of size $n$ (there are $2^n$ of them) than files of size $p < n$ (there are $2^n - 1$ of those in total). Via the pigeon hole principle, if we only tried to compress files of size $n$ with a single algorithm, there would be at least one file it was impossible to compress.
If we wanted to be able to compress files with differing lengths $n_k$, the number of files of length $n_k$ we can compress for each $n_k$ becomes even smaller.
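The counting behind the pigeonhole argument can be checked directly (an illustrative check for binary files, not part of the question):

```python
# There are 2**n files of length exactly n, but only 2**n - 1 files
# of length strictly less than n, so at least one file of length n
# has no shorter image under any fixed injective compressor.

n = 8
files_of_length_n = 2**n
shorter_files = sum(2**p for p in range(n))  # lengths 0 .. n-1

assert files_of_length_n == 256
assert shorter_files == files_of_length_n - 1
```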
Today when reading a story about how a file that was several gigabytes when compressed uncompressed to one gigabyte, I had an idea for a universal compression algorithm.
Let $a_i$ be a compression algorithm.
Let $g_j$ be a file.
$|g_j|$ denotes the length of $g_j$.
Let $f(a_i, g_j)$ be a function that returns $(|g_j| - |a_i(g_j)|)$.
Let $S_N = \{g_j : |g_j| \le N\}$.
Let $A = \{a_i : \, \forall \, g_j \in S_N \, \exists a_i \in A : f(a_i, g_j) \gt \lceil(\log_2{\#A})\rceil\}$.
$\#A$ denotes the number of elements in $A$.
Let $m$ be the length of the label of the compression algorithm chosen. The first $m$ bits of every compressed file denote the compression algorithm chosen.
$m = \lceil(\log_2{\#A})\rceil$.
Then you can compress all $g_j \in S_N$, by iterating through A until you find $a_i : f(a_i,g_j) - m \gt 0$.
Even better.
For each $g_j$, let $a_j$ be the corresponding compression algorithm.
Let $h(a_i, g_j) = f(a_i,g_j) - m$.
$$\forall \, g_j \in S_N, \quad a_j = \underset{a_i \in A}{\operatorname{argmax}} \, \left(h(a_i, g_j)\right)$$
Is there a reason why the above is not done?
While the above is an algorithm, and one could argue that the pigeon hole principle thus applies, this does not imply what it may at first seem to imply. The above algorithm call it $a^v$ is a little different.
Let $a_i: S_N \to Y_N^i$ denote that algorithm $a_i$ maps a family of files $(S_N = \{g_j : |g_j| \le N\})$ is mapped to another family of files $Y_N = \{y_j : y_j = a_i(g_j)\}$.
$\forall a_i \in A, a_i: S_N \to Y_N^i$.
However, $a^v : S_{N+m} \to Y_{N+m}^v$.
So $a^v$ compresses a different family of files from $a_i \in A$.
The pigeon hole principle merely states that $a^v$ cannot compress all files of length $N+m$; this is irrelevant, since $a^v$ only intends to compress a small subset of files of length $N+m$ (those whose first $m$ bits are the labels of some $a_i \in A$).
## closed as unclear what you're asking by David Richerby, Evil, Yuval Filmus, fade2black, Luke MathiesonOct 24 '17 at 23:02
• Please state upfront what you suppose your envisioned universal compression algorithm to achieve: can any input be thrown at it - and be reconstructed faithfully? – greybeard Oct 21 '17 at 11:54
• @greybeard no, only files in $S_N$. – Tobi Alafin Oct 21 '17 at 12:02
• @TobiAlafin You cannot compress all files of length at most $N$ for the same reason that you cannot compress all files. There are $2^{N+1}-1$ files that you need to compress. If you compress every file of length at most $N$, then every compressed file must have length strictly less than $N$. But there are only $2^N-1$ such files, which isn't enough. – David Richerby Oct 21 '17 at 15:22
• You say "Then we can compress all $g_j\in S_n$", which is a claim of universal lossless compression. The impossibility of universal lossless compression has nothing to do with algorithms: it is a simple matter of the cardinality of sets. "Choose an algorithm from a family of algorithms" is still an algorithm: if property_1 then compress_with_algo_1; else if property_2 then compress_with_algo_2; else... That doesn't help anything: it's still an algorithm and no algorithm can compress everything. – David Richerby Oct 21 '17 at 15:30
• You claim that you have an algorithm that can compress all files of length $N$. It doesn't matter what that algorithm does. You know that no such algorithm can exist. – David Richerby Oct 21 '17 at 15:44
Then you can compress all $g_j ∈ S_N$, by iterating through $A$ until you find $a_i|f(a_i,g_j)−m>0$.
I'm not sure what the notation means (the pipe here, and also $\#A$ elsewhere), but still: this is not a meaningful algorithm since the set
$\qquad A = \{a_i : \, \forall \, g_j \in S_N \, f(a_i, g_j) \gt \lceil(\log_2{\#A})\rceil\}$
is empty.
I think you got lost in notation. $S_N = \Sigma^{\leq N}$ and $A$ is the set of all algorithms that compress all strings in $S_N$ by at least some non-zero number of bits. As you cited, there are no such algorithms.
In the updated question, you write:
$\qquad A = \{a_i : \, \forall \, g_j \in S_N \, \exists a_i \in A : f(a_i, g_j) \gt \lceil(\log_2{\#A})\rceil\}$
This definition is circular, hence $A$ is not well-defined. Did you mean
$\qquad A = \{ a \mid \exists g \in S_N. f(a, g) > x \}$
with $x > 0$ something that does not depend on $A$?
Now $A$ is infinite (and undecidable as per Rice's theorem) and, arguably, completely useless: any string s can be compressed well by the trivial algorithm
compress_s(x) {
  if x == s
    return "0"
  else
    return "1" + x
  end
}
Note that this version of $A$ contains all these bogus algorithms. So, you have thrown all information while compressing, instead encoding the string in the algorithm's source code (mathematically and literally).
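A runnable Python version of the trivial scheme above (an illustrative addition, not part of the answer). Note that for the map to be invertible the else branch has to emit a header bit different from the one reserved for $s$, otherwise the empty string and $s$ would collide; the decoder here is likewise my addition:

```python
# The favoured string s compresses to "0"; every other string grows
# by one header bit, so the scheme compresses exactly one input.

def compress_s(s, x):
    return "0" if x == s else "1" + x

def decompress_s(s, y):
    return s if y == "0" else y[1:]

s = "101101"
for x in ["101101", "0", "", "11"]:
    assert decompress_s(s, compress_s(s, x)) == x
```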
And, as others note, the resulting algorithm would still be subject to the pigeon-hole principle.
Is there a reason why the above is not done?
Even if it were possible, it'd be horribly inefficient. In essence, the idea of your algorithm is:
Try all (compression) algorithms (I know); pick the smallest result and encode the result together with its code.
That's clearly neither a clever nor a useful algorithmic idea.
• Do you think that Kolmogorov complexity might be good to describe here? The last paragraph pictures that case ;) – Evil Oct 21 '17 at 16:50
• @TobiAlafin You need to make up your mind about what you want to define before posting questions about it, and make sure you know what you are defining. Otherwise you're wasting everbody's time. This is the last edit I'll do; you've been given enough explanations for why what you think you're doing is impossible. – Raphael Oct 21 '17 at 18:56
• @Evil I don't see an immediate connection; if you do, why not post an answer? – Raphael Oct 21 '17 at 18:56 | 2019-06-24 14:11:52 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7988917231559753, "perplexity": 562.6516123723379}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560627999539.60/warc/CC-MAIN-20190624130856-20190624152856-00350.warc.gz"} |
https://www.iitianacademy.com/ib-dp-maths-topic-8-1-de-morgans-laws-distributive-associative-and-commutative-laws-for-union-and-intersection-hl-paper-3/ | # IB DP Maths Topic 8.1 De Morgan’s laws: distributive, associative and commutative laws (for union and intersection) HL Paper 3
## Question
Given the sets $$A$$ and $$B$$, use the properties of sets to prove that $$A \cup (B’ \cup A)’ = A \cup B$$, justifying each step of the proof.
## Markscheme
$$A \cup (B’ \cup A)’ = A \cup (B \cap A’)$$ De Morgan M1A1
$$= (A \cup B) \cap (A \cup A’)$$ Distributive property M1A1
$$= (A \cup B) \cap U$$ (Union of set and its complement) A1
$$= A \cup B$$ (Intersection with the universal set) AG
Note: Do not accept proofs using Venn diagrams unless the properties are clearly stated.
Note: Accept double inclusion proofs: M1A1 for each inclusion, final A1 for conclusion of equality of sets.
[5 marks]
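The identity can also be spot-checked with concrete finite sets (an illustrative check, not an accepted proof method for the paper; the universe $U$ and the sets below are arbitrary choices, with complements taken relative to $U$):

```python
# Check A ∪ (B' ∪ A)' == A ∪ B on a small universe.

U = set(range(10))
A = {1, 2, 3}
B = {3, 4, 5}

lhs = A | (U - ((U - B) | A))   # A ∪ (B' ∪ A)'
rhs = A | B

assert lhs == rhs
```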
## Question
Consider the sets A = {1, 3, 5, 7, 9} , B = {2, 3, 5, 7, 11} and C = {1, 3, 7, 15, 31} .
Find $$\left( {A \cup B} \right) \cap \left( {A \cup C} \right)$$.
[3]
a.i.
Verify that A \ C ≠ A.
[2]
a.ii.
Let S be a set containing $$n$$ elements where $$n \in \mathbb{N}$$.
Show that S has $${2^n}$$ subsets.
[3]
b.
## Markscheme
EITHER
$$\left( {A \cup B} \right) \cap \left( {A \cup C} \right) = \left\{ {1,\,2,\,3,\,5,\,7,\,9,\,11} \right\} \cap \left\{ {1,\,3,\,5,\,7,\,9,\,15,\,31} \right\}$$ M1A1
OR
$$A \cup \left( {B \cap C} \right) = \left\{ {1,\,3,\,5,\,7,\,9} \right\} \cup \left\{ {3,\,7} \right\}$$ M1A1
OR
$${B \cap C}$$ is contained within A (M1)A1
THEN
= {1, 3, 5, 7, 9} (= A) A1
Note: Accept a Venn diagram representation.
[3 marks]
a.i.
A \ C = {5, 9} A1
C \ A = {15, 31} A1
so A \ C ≠ A AG
Note: Accept a Venn diagram representation.
[2 marks]
a.ii.
METHOD 1
if $$S = \emptyset$$ then $$n = 0$$ and the number of subsets of S is given by 20 = 1 A1
if $$n > 0$$
for every subset of S, there are 2 possibilities for each element $$x \in S$$ either $$x$$ will be in the subset or it will not R1
so for all $$n$$ elements there are $$\left( {2 \times 2 \times \ldots \times 2} \right){2^n}$$ different choices in forming a subset of S R1
so S has $${2^n}$$ subsets AG
Note: If candidates attempt induction, award A1 for case $$n = 0$$, R1 for setting up the induction method (assume $$P\left( k \right)$$ and consider $$P\left( {k + 1} \right)$$ and R1 for showing how the $$P\left( k \right)$$ true implies $$P\left( {k + 1} \right)$$ true).
METHOD 2
$\sum\limits_{k = 0}^n \binom{n}{k}$ is the number of subsets of S (of all possible sizes from 0 to $n$) R1

${\left( {1 + 1} \right)^n} = \sum\limits_{k = 0}^n \binom{n}{k} \left( {{1^k}} \right)\left( {{1^{n - k}}} \right)$ M1

${2^n} = \sum\limits_{k = 0}^n \binom{n}{k}$ (= number of subsets of S) A1
so S has $${2^n}$$ subsets AG
[3 marks]
b.
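The $2^n$ count can be confirmed by brute-force enumeration on a small set (an illustrative check, not an IB markscheme method):

```python
# Enumerate all subsets of S by size and count them.
from itertools import combinations

S = {1, 3, 5, 7}
subsets = [c for k in range(len(S) + 1)
             for c in combinations(sorted(S), k)]

assert len(subsets) == 2 ** len(S)   # 16 subsets for n = 4
```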
https://www.gradesaver.com/textbooks/math/other-math/CLONE-547b8018-14a8-4d02-afd6-6bc35a0864ed/chapter-6-percent-6-2-percents-and-fractions-6-2-exercises-page-399/72 | ## Basic College Mathematics (10th Edition)
$\frac{27}{100}$ 0.27 27%
27 out of 100 include a coloring activity. Fraction Write the ratio as a fraction and simplify. $\frac{27}{100}$ Decimal Write the ratio as a fraction and divide. To divide by 100 simply move the decimal two places to the left. $\frac{27}{100}$ = 0.27 Percent Write as a proportion and solve for x using cross products. $\frac{27}{100} = \frac{x}{100}$ $100x = 27\times100$ $\frac{100x}{100} = \frac{2700}{100}$ $x = 27$ | 2021-03-02 11:26:21 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.901733934879303, "perplexity": 879.720989527902}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178363809.24/warc/CC-MAIN-20210302095427-20210302125427-00557.warc.gz"} |
https://puzzling.stackexchange.com/questions/47479/absurd-equation | # Absurd Equation
$37$ $12$ $9$ $×$ $2.5$ $÷$ $+$ $=$ $X$
Solve for $X$
Hint 1
Uhyhuvh Srolvk Qrwdwlrq
Hint 2
Hint 1 needs to be decoded
• 2 straight hints? :-/ You should hold on before posting hints. – Techidiot Jan 3 '17 at 15:23
• I thought the spoilers would be enough, so that everyone can decide for themselves if they want to use the hints. – jrenk Jan 3 '17 at 15:25
On the face of it, this looks like postfix notation, if that's so, then
it can be rewritten as ((12*9)/2.5)+37=X, or X = 80.2 | 2019-07-23 12:06:48 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.773571252822876, "perplexity": 1614.649795867124}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195529276.65/warc/CC-MAIN-20190723105707-20190723131707-00178.warc.gz"} |
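The postfix reading can be checked with a tiny stack evaluator (a sketch I added, not from the original answer; the `×`/`÷` symbols are replaced by `*`/`/` tokens):

```python
# Evaluate a Reverse Polish Notation expression with a stack.

def eval_rpn(tokens):
    ops = {"*": lambda a, b: a * b, "/": lambda a, b: a / b,
           "+": lambda a, b: a + b, "-": lambda a, b: a - b}
    stack = []
    for t in tokens:
        if t in ops:
            b, a = stack.pop(), stack.pop()   # right operand popped first
            stack.append(ops[t](a, b))
        else:
            stack.append(float(t))
    return stack.pop()

# 37 12 9 * 2.5 / +  ->  37 + (12*9)/2.5 = 80.2
assert abs(eval_rpn("37 12 9 * 2.5 / +".split()) - 80.2) < 1e-9
```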