url: string, lengths 14 to 2.42k
text: string, lengths 100 to 1.02M
date: string, lengths 19 to 19
metadata: string, lengths 1.06k to 1.1k
http://www.reddit.com/r/Physics/comments/203xku/effects_of_gr_vs_sr/
all 4 comments [–] Soft matter physics, 11 points: Let's do a quick calculation: v << c, so if I define beta = v/c, the inverse of the time dilation factor dil_sr = 1/sqrt(1 - beta^2) =~ 1 + 0.5 beta^2 via the binomial theorem. In general relativity, an equivalent approximation applies when gh << c^2, with g nearly constant. The time dilation (really contraction) under this approximation becomes dil_gr = 1 - gh/c^2. Plugging in g = 10 m/s^2 and h = 10,000 m, dil_gr = 1 - 1e-12. Using the average cruising speed of a jet as 1000 km/hr =~ 300 m/s, dil_sr = 1 + 5e-13. Thus the effects of general relativity are more important. This was done lazily, so maybe someone can check my signs and units. EDIT: Corrected thanks to /u/Welsch_boyo [–] 5 points: I think the beta should be squared in the Lorentz factor (time dilation factor). This gives me dil_sr = 1 + 5e-13, so a clock on the jet will lose 43.2 ns per day relative to a clock on the Earth. Using your result for the general relativistic case, I get that a clock on the jet will gain 94.2 ns per day relative to a clock on the Earth, meaning the general relativistic effect is slightly larger. As an addendum, I wrote an essay on GPS satellites last year and found that the general relativistic correction of 45 us per day is quite a bit bigger than the special relativistic correction of 7 us per day. [–] 1 point: An interesting tidbit is that these effects actually get taken into account in GPS satellites.
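A quick numerical check of the arithmetic in this thread (a sketch in Python; the round numbers are the ones used above, with c = 3e8 m/s):

```python
# Sketch: compare SR and GR clock corrections for a cruising jet,
# using the round numbers from the thread (g = 10 m/s^2, h = 10 km, v ~ 300 m/s).
c = 3.0e8          # speed of light, m/s (rounded)
g = 10.0           # gravitational acceleration, m/s^2
h = 10_000.0       # cruising altitude, m
v = 300.0          # jet speed, m/s (~1000 km/h, rounded as in the thread)

beta = v / c
sr_shift = 0.5 * beta**2      # fractional SR slowdown: moving clock runs slow
gr_shift = g * h / c**2       # fractional GR speedup: higher clock runs fast

seconds_per_day = 86_400
print(f"SR: jet clock loses {sr_shift * seconds_per_day * 1e9:.1f} ns/day")  # ~43.2
print(f"GR: jet clock gains {gr_shift * seconds_per_day * 1e9:.1f} ns/day")  # ~96
# The GR effect dominates, as the thread concludes; the 94.2 ns/day figure
# quoted in the comments corresponds to using g = 9.81 m/s^2 instead of 10.
```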
2014-07-24 11:16:25
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8238444328308105, "perplexity": 1602.5195963185972}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1405997888236.74/warc/CC-MAIN-20140722025808-00201-ip-10-33-131-23.ec2.internal.warc.gz"}
https://www.gurobi.com/documentation/9.5/quickstart_mac/software_installation_guid.html
# Software Installation Guide Filter Content By Version Languages ## Software Installation Guide Before using the Gurobi Optimizer, you'll need to install the software on your computer. This section covers the installation of the entire Gurobi product. If you plan to use Gurobi from Python only, you can use our pip package or our Anaconda package instead. When installing the full Gurobi product, your first steps are to visit our download page, find your platform (we'll assume macOS in this document), and choose the corresponding file to download. Make a note of the name and location of the downloaded file. Your next step is to double-click on the appropriate Gurobi installer (e.g., gurobi9.5.2_macos_universal2.pkg for Gurobi 9.5.2) and follow the prompts. By default, the installer will place the Gurobi 9.5.2 files in /Library/gurobi952/macos_universal2 (note that this is the system /Library directory, not your personal ~/Library directory). Your <installdir> (which we'll refer to throughout this document) will be /Library/gurobi952/macos_universal2. You are now ready to proceed to the section on Retrieving Your Gurobi License. If you would like an overview of the files included in the Gurobi distribution, you can also view the File Overview section.
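Once the installation finishes and a license is in place, a quick smoke test of the Python bindings is to build and solve a tiny model with gurobipy (a minimal sketch, not part of the installation guide; it assumes the license step described below has already been completed):

```python
# Minimal smoke test for a Gurobi installation, via the gurobipy bindings.
# Assumes a valid Gurobi license has already been retrieved and installed.
import gurobipy as gp
from gurobipy import GRB

m = gp.Model("smoke_test")
x = m.addVar(ub=4.0, name="x")
y = m.addVar(ub=3.0, name="y")
m.setObjective(x + 2 * y, GRB.MAXIMIZE)
m.addConstr(x + y <= 5, name="budget")
m.optimize()

# Expect x = 2, y = 3, objective 8 if the solver is working.
print(f"status={m.Status}, objective={m.ObjVal}")
```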
2022-10-04 20:29:41
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.23628279566764832, "perplexity": 3404.33364671243}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337524.47/warc/CC-MAIN-20221004184523-20221004214523-00448.warc.gz"}
https://www.hepdata.net/record/ins1241422
Energy Dependence of the Transverse Momentum Distributions of Charged Particles in pp Collisions Measured by ALICE. The ALICE collaboration, Eur.Phys.J. C73 (2013) 2662, 2013. Abstract (data abstract): CERN-LHC. Measurements of the differential cross sections as a function of transverse momentum of primary charged particles produced in inelastic proton-proton collisions at centre-of-mass energies of 0.9, 2.76 and 7 TeV. The data cover the pseudorapidity range |eta| < 0.8, with pT from 0.15 GeV/c. As well as the measured cross sections and their ratios, reference cross sections are also constructed for comparison with Pb-Pb spectra at 2.76 TeV and p-Pb spectra at 5.02 TeV, increasing the pT range up to 50 GeV. • #### Table 1 Data from Fig. 1, 10.17182/hepdata.61787.v1/t1: The normalized differential primary charged particle cross sections measured at 0.9, 2.76 and 7 TeV centre-of-mass energies. Additional systematic... • #### Table 2 Data from Fig. 1, 10.17182/hepdata.61787.v1/t2: The ratios of differential cross sections of charged particles at different collision energies. • #### Table 3 Data from Fig. 5, 10.17182/hepdata.61787.v1/t3: The constructed reference pp spectra for comparison with Pb-Pb and p-Pb spectra.
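HEPData records like this one can also be pulled programmatically; the sketch below fetches the record as JSON (a hedged example: the format=json export is taken from HEPData's export options, and the exact response layout should be verified against the live API):

```python
# Sketch: fetch the HEPData record for this paper as JSON and list its tables.
# The response structure is assumed; check it against the live API before use.
import requests

url = "https://www.hepdata.net/record/ins1241422"
resp = requests.get(url, params={"format": "json"}, timeout=30)
resp.raise_for_status()
record = resp.json()

for table in record.get("data_tables", []):   # key name assumed
    print(table.get("name"), table.get("doi"))
```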
2020-02-23 02:43:53
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9108158349990845, "perplexity": 5671.471724093812}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875145742.20/warc/CC-MAIN-20200223001555-20200223031555-00540.warc.gz"}
https://doc.cgal.org/5.1/Kernel_23/structCGAL_1_1Cartesian.html
CGAL 5.1 - 2D and 3D Linear Geometry Kernel CGAL::Cartesian< FieldNumberType > Struct Template Reference #include <CGAL/Cartesian.h> ## Definition A model for Kernel that uses Cartesian coordinates to represent the geometric objects. In order for Cartesian to model Euclidean geometry in $$E^2$$ and/or $$E^3$$, for some mathematical field $$E$$ (e.g., the rationals $$\mathbb{Q}$$ or the reals $$\mathbb{R}$$), the template parameter FieldNumberType must model the mathematical field $$E$$. That is, the field operations on this number type must compute the mathematically correct results. If the number type provided as a model for FieldNumberType is only an approximation of a field (such as the built-in type double), then the geometry provided by the kernel is only an approximation of Euclidean geometry. Is Model Of: Kernel. Implementation: All geometric objects in Cartesian are reference counted. See also: CGAL::Simple_cartesian<FieldNumberType>, CGAL::Homogeneous<RingNumberType>, CGAL::Simple_homogeneous<RingNumberType>
2021-09-17 10:44:16
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.30640971660614014, "perplexity": 1162.2782164708162}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780055632.65/warc/CC-MAIN-20210917090202-20210917120202-00352.warc.gz"}
https://solvedlib.com/n/gel-x27-amsuuvuis-103-sujod-s-x27-mopaq-uoissajdxa-a41j0,19849583
# Find the exact value of the expression below. (5 points) [title recovered from upside-down OCR text; the expression itself, involving sin and cot, is not recoverable] ###### Question: Find the exact value of the expression below. (5 points) #### Similar Solved Questions ##### Let $A$ be an $n \times n$ matrix and let $\mathbf{x}$ and $\mathbf{y}$ be vectors in $\mathbb{R}^{n}.$ Show that if $A \mathbf{x}=A \mathbf{y}$ and $\mathbf{x} \neq \mathbf{y},$ then the matrix $A$ must be singular. ##### A $2.00 \times 10^{2}$-kg block is hoisted by pulleys that are massless and frictionless, as shown in Figure 4-39. If a force of $1.50 \times 10^{3}$ N is applied to the massless rope, what is the acceleration of the suspended mass? (Example 4-9) ##### 16. To generate a uniform 2 T magnetic field for an MRI machine with a coil of diameter 50 cm and 10 000 turns over the 1 m distance of the coil, (a) solve for the current through the coil needed to create this large magnetic field, and (b) approximate the total length of wire needed to produce 10 000 turns. ##### Let us consider the polar equation $r=\frac{2}{1+\cos \theta}.$ Explain why a graphing utility gives the following graphs with the specified window parameters: a. [-2,2] by [-4,4] with $\theta$ step $=\frac{\pi}{2}$ b. [-2,2] by [-4,4] with $\theta$ step $=\frac{\pi}{3}$ (FIGURE CAN'T COPY) ##### What are the differences between rights issues and bonus issues? Your answer should include the advantages and disadvantages of both rights issues and bonus issues. (8 marks) ##### Find the inverse of the matrix. Hint: consider block matrices. [matrix not captured by OCR] ##### A powder contains FeSO4·7H2O (molar mass 278.01 g/mol), among other components. A 4.55 g sample of the powder was dissolved in HNO3 and heated to convert all iron to Fe(III). The addition of NH3 precipitated Fe2O3·xH2O, which was subsequently ignited to produce 0.502 g of Fe2O3. What was the mass of FeSO4·7H2O in the 4.55 g sample?
##### Name the following (points): H3C-CH2-O-CH2CH3. Draw the structures for 2-ethoxybutane and oxirane (ethylene oxide) (points). Circle any which are "infrared inactive": CH2Br2, HCl (4 points). Draw the major fragment (positive ion, electron-impact mass spectroscopy) from [compound garbled by OCR]. 5. In a mass spectrum, a carbon- and hydrogen-containing compound has a molecular ion (parent ion) with atomic mass m/e 121. The compound contains only one of the following atoms, and one of that... [truncated in source] ##### [Garbled OCR, not recoverable: a question about tangent lines to $f(x)$ and $g(x)$ on $2 < x < 6$ and $0 < x$, apparently involving $4x^2$.] ##### 3. There are CDs on the shelves: 5 CDs with music, 2 CDs with movies, and 3 CDs with document texts. We have taken 2 CDs from the shelves. What are the probabilities that we have taken a) 2 CDs with movies b) at least one CD with movies c) one CD with music and one CD with documents d) no CD with documents? ##### 2. (20%) Find equations of (a) the tangent plane and (b) the normal line to the given surface at the specified point. [Surface and point garbled by OCR: "I+y+z = etye (0,0,4)", plausibly $x+y+z=e^{xyz}$ at $(0,0,1)$.] ##### 11. Suppose that $\int_{1}^{2} f(x) \, dx=5.$ Find a. $\int_{1}^{2} f(u) \, du$ b. $\int_{1}^{2} \sqrt{3} f(z) \, dz$ c. $\int_{2}^{1} f(t) \, dt$ d. $\int_{1}^{2}[-f(x)] \, dx$
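The first of the similar questions above (if Ax = Ay with x ≠ y then A is singular) has a two-line argument; a sketch in LaTeX:

```latex
% Sketch of the argument: Ax = Ay with x != y forces A to be singular.
\[
A\mathbf{x} = A\mathbf{y}
\;\Longrightarrow\;
A(\mathbf{x}-\mathbf{y}) = \mathbf{0}
\quad\text{with}\quad
\mathbf{x}-\mathbf{y} \neq \mathbf{0},
\]
so $A$ has a nontrivial null space; a matrix with a nonzero kernel cannot be
invertible, hence $A$ is singular.
```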
2023-02-01 15:05:53
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6211109757423401, "perplexity": 5508.195048415445}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764499946.80/warc/CC-MAIN-20230201144459-20230201174459-00416.warc.gz"}
http://mathhelpforum.com/calculus/167227-semiaxes-ellipse-3d.html
# Math Help - Semiaxes of an ellipse (3D) 1. ## Semiaxes of an ellipse (3D) Find the semiaxes of the ellipse in which the plane $z=x$ meets the cylinder $x^2+y^2=4$. 2. Have you visualised the plane and the cylinder? And their intersection? Then check semi axes at Ellipse - Wikipedia, the free encyclopedia and just find how far the intersection is from the origin, along the y axis, and along the line z = x in the x,z plane. If visualising is the problem, do say. 3. The parametric equations of the cylinder are: $C \equiv\begin{cases}x=2\cos t\\y=2\sin t\\z=2\cos t\end{cases}\quad (t\in [0,2\pi])$ The square of the distance from the center $(0,0,0)$ of the ellipse to any point of $C$ is: $d^2(x,y,z)=\ldots=4+4\cos^2 t$ You can easily find the absolute maximum $M$ and minimum $m$ of $d^2$. The semi-axes are $\sqrt{M}$ and $\sqrt{m}$. Fernando Revilla 4. Originally Posted by FernandoRevilla The parametric equations of the cylinder are: $C \equiv\begin{cases}x=2\cos t\\y=2\sin t\\z=2\cos t\end{cases}\quad (t\in [0,2\pi])$ The square of the distance from the center $(0,0,0)$ of the ellipse to any point of $C$ is: $d^2(x,y,z)=\ldots=4+4\cos^2 t$ You can easily find the absolute maximum $M$ and minimum $m$ of $d^2$. The semi-axes are $\sqrt{M}$ and $\sqrt{m}$. Fernando Revilla I think you meant 'parametric equations of the ellipse'. Using your method, I get the semi-major axis as $2\sqrt{2}$ and the semi-minor axis as $2$. 5. Originally Posted by alexmahone I think you meant 'parametric equations of the ellipse'. Yes, of course I meant ellipse. Using your method, I get the semi-major axis as $2\sqrt{2}$ and the semi-minor axis as $2$. Right. Fernando Revilla
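A quick numerical check of the answer in this thread (a sketch with numpy, using the parameterization from post 3):

```python
# Verify the semi-axes of the ellipse where the plane z = x meets the
# cylinder x^2 + y^2 = 4, using the parameterization from the thread.
import numpy as np

t = np.linspace(0.0, 2.0 * np.pi, 100_001)
x, y, z = 2 * np.cos(t), 2 * np.sin(t), 2 * np.cos(t)
d2 = x**2 + y**2 + z**2        # squared distance from the ellipse centre

print(np.sqrt(d2.max()))       # semi-major axis: 2*sqrt(2) ~ 2.8284
print(np.sqrt(d2.min()))       # semi-minor axis: 2.0
```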
2015-05-29 07:01:51
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 24, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8899108171463013, "perplexity": 680.9164723281576}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-22/segments/1432207929899.62/warc/CC-MAIN-20150521113209-00172-ip-10-180-206-219.ec2.internal.warc.gz"}
http://mathematica.stackexchange.com/questions/24369/findroot-gives-a-wrong-solution-which-obviously-should-not-be-there
# FindRoot gives a wrong solution which obviously should not be there I got stuck on FindRoot and I didn't see any similar problem posted, so let me explain what I am trying to do and what problem I ran into. I am trying to find roots of a particular function, which in the lowest order is just a second-order polynomial and hence can be solved by hand easily. However, I want to generalize my code to compute situations of higher orders, which will involve transcendental equations, so it is still necessary to see things work correctly from the very beginning. Here is the code and the output in the lowest order: a = FindRoot[(W -100 + I/2 (1 + 1/10))^2 + 1/4 == 0, {W, i I}, AccuracyGoal -> Infinity, PrecisionGoal -> MachinePrecision, WorkingPrecision -> MachinePrecision] // Hold c = Table[{Re[(a // ReleaseHold)[[1, 2]]-100], Im[(a // ReleaseHold)[[1, 2]]]}, {i, -2, 0, 0.001}]; c = DeleteDuplicates[c, (Abs[#1[[2]] - #2[[2]]] < 0.1 &)]; ListPlot[c, PlotRange -> {{-4.2, 4.2}, {-1.5, 0.5}}, PlotMarkers -> Automatic] So basically I create a table of different initial values, each one differing by 0.001, and use them to "scan" the position of roots on the complex plane by putting them into FindRoot one by one. Finally I get rid of duplicates by certain criteria and do a ListPlot. The result I expect is simply two complex numbers: 100-i/20 and 100-21i/20. However, no matter how I rewrite my code I always get three solutions (the middle one in the result of ListPlot, in which I shift the origin by 100). Only when I enlarge the grid size from 0.001 to 0.1 can I kill the extra solution, which is obviously wrong. But this is really bothering me because when I go to higher orders, I expect to have more points in the region I plot. By that time I would certainly need a smaller grid size in order to do the scanning. For example, the next order I want to consider is the following: a = FindRoot[(W -100 + I/2 (1 + 1/10))^2 + Exp[I W Pi/100]/4 == 0, {W, i}, AccuracyGoal -> Infinity, PrecisionGoal -> MachinePrecision, WorkingPrecision -> MachinePrecision] //Hold c = Table[{Re[(a // ReleaseHold)[[1, 2]]-100], Im[(a // ReleaseHold)[[1, 2]]]}, {i, 100-4, 100+4, 10^-3}]; c = DeleteDuplicates[c, (Abs[#1[[2]] - #2[[2]]] < 0.1 &) || Abs[#1[[1]] - #2[[1]]] < 10^-9 &] ListPlot[c, PlotRange -> {{-4.2, 4.2}, {-1.5, 0.5}}, PlotMarkers -> Automatic] This time there will be an infinite number of roots since now it's transcendental, but in the region I plot I still expect only two roots for physical reasons. However, once again you can see that there is a third one right on the imaginary axis which shouldn't be there. (One can easily show that no root can lie on the imaginary axis by setting W=100+yi and separating the real and imaginary parts, but let me omit the analysis here.) Has anyone ever met a similar situation? I really have no idea what's wrong here... I appreciate any suggestion/hint/answer! ps. this is my first time posting a question here, please let me know how I can improve my question to be more well-organized or well-defined. Thanks! - FindRoot is not guaranteed to find a valid solution, but it will always return a number. The numerical method may stop before it has found a solution and will warn you that the solution is not correct. – Szabolcs Apr 30 '13 at 16:41 Unrelated: you can improve your code by using a:=... (not a=...) and getting rid of the Hold/ReleaseHold. – Szabolcs Apr 30 '13 at 16:43 You might be interested in FindAllCrossings2D[] to assist you in searching for roots in the complex plane. – J. M.
Apr 30 '13 at 16:48 @Leo You need to understand the numerical method that's being used. I believe it uses Newton's method, which is a very simple method (look it up). If you give a starting point with -0.55 I as the imaginary part, the imaginary part will be stable under the Newton-type iteration: if you substitute -0.55 I into your equation, you'll see that the LHS will have 0 as its imaginary part. While this is an unstable fixed point, the numerical errors seem not to be big enough to let the method leave it, so it's stuck there forever. – Szabolcs Apr 30 '13 at 16:58 You can check the result by backsubstituting it into the equation, but beware of numerical errors! The warning you get is not a guarantee, just a warning that something may be wrong and you should investigate further. Generally, with numerical methods like this there are often no guarantees about the correctness or completeness of the solution. But there are ways to diagnose the solution and get hints about what's going on. – Szabolcs Apr 30 '13 at 17:00
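The scan-many-starting-points idea in the question can be reproduced outside Mathematica; below is a hedged Python sketch of the same strategy with mpmath.findroot (not the poster's code), including the back-substitution check the comments recommend. Starts on or near the line Im(W) = -0.55, where the derivative of the lowest-order equation vanishes, are exactly the pathological fixed point Szabolcs describes:

```python
# Sketch: scan starting points with Newton's method, keep only roots whose
# residual is small, and deduplicate. Same idea as the FindRoot loop above.
import mpmath as mp

def f(W):
    # Lowest-order equation from the question: (W - 100 + 0.55*I)^2 + 1/4
    return (W - 100 + 0.55j) ** 2 + 0.25

roots = []
for im in mp.arange(-2, 0, 0.05):
    try:
        r = mp.findroot(f, mp.mpc(100, im))
    except (ValueError, ZeroDivisionError):
        continue                     # no convergence from this start
    if abs(f(r)) < 1e-10 and all(abs(r - s) > 1e-6 for s in roots):
        roots.append(r)              # keep only verified, distinct roots

print(roots)   # expect 100 - 0.05j and 100 - 1.05j, i.e. 100-i/20 and 100-21i/20
```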
2016-07-24 02:56:53
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.609794557094574, "perplexity": 597.9713997879932}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-30/segments/1469257823935.18/warc/CC-MAIN-20160723071023-00256-ip-10-185-27-174.ec2.internal.warc.gz"}
https://iris.unibas.it/handle/11563/159220
Study of excited $\Lambda_\mathrm{b}^0$ states decaying to $\Lambda_\mathrm{b}^0\pi^+\pi^-$ in proton-proton collisions at $\sqrt{s}=$ 13 TeV. Abstract: A study of excited $\Lambda_\mathrm{b}^0$ baryons is reported, based on a data sample collected in 2016-2018 with the CMS detector at the LHC in proton-proton collisions at a center-of-mass energy of 13 TeV, corresponding to an integrated luminosity of up to 140 fb$^{-1}$. The existence of four excited $\Lambda_\mathrm{b}^0$ states, $\Lambda_\mathrm{b}(5912)^0$, $\Lambda_\mathrm{b}(5920)^0$, $\Lambda_\mathrm{b}(6146)^0$, and $\Lambda_\mathrm{b}(6152)^0$, in the $\Lambda_\mathrm{b}^0\pi^+\pi^-$ mass spectrum is confirmed, and their masses are measured. The $\Lambda_\mathrm{b}^0\pi^+\pi^-$ mass distribution exhibits a broad excess of events in the region of 6040-6100 MeV, whose origin cannot be discerned with the present data. (C) 2020 The Author(s). Published by Elsevier B.V. Year: 2020. Identifier for citing or linking to this record: https://hdl.handle.net/11563/159220
2023-03-26 02:10:05
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8341681361198425, "perplexity": 12223.424437917383}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296945381.91/warc/CC-MAIN-20230326013652-20230326043652-00058.warc.gz"}
http://jblindsay.github.io/Courses/F15/GEOG3480/Lectures/Lecture5/slides.html
## GEOG*3480 ### Statistical Analysis of Spatial Data John Lindsay Fall 2015 • Jensen and Jensen Chapter 8 ### Topics • Over the next two lectures, we'll discuss: • Descriptive Statistics • Descriptive Spatial Statistics • Spatial Autocorrelation • Point Pattern Analysis • Nearest-Neighbour Analysis • Directional Analysis ### Descriptive Statistics • Measures of central tendency • Mode, median, and mean ($$\overset{-}x$$) • $$\overset{-}x = \frac {\underset{i=1}{\overset{N}{\Sigma}} x_i} {N}$$ ### Descriptive Statistics • Measures of dispersion • Variance ($$s^2$$) • Standard deviation ($$s$$) • $$s^2 = \frac {\underset{i=1}{\overset{N}{\Sigma}} (x_i - \overset{-}x)^2} {N - 1}$$ • $$s = \sqrt \frac {\underset{i=1}{\overset{N}{\Sigma}} (x_i - \overset{-}x)^2} {N - 1}$$ ### Descriptive Statistics • Skewness • Measure of the asymmetry of a distribution • Kurtosis • Measure of the peakedness of a distribution (Figures from Jensen & Jensen 2013) ### Descriptive Spatial Statistics • Mean Centre • Measure of central tendency that can be used to determine the centre of a distribution plotted in geographic coordinates. • Standard Distance • Measure of dispersion of geographically distributed data. (Figure from Jensen & Jensen 2013) ### Tobler's first law • The first law of geography: "everything is related to everything else, but near things are more related than distant things." (Tobler, 1970) • This simple statement forms the basis for a great deal of geographical analysis and is the concept underlying the idea of spatial autocorrelation. • Synonymous with the concept of spatial dependence in geostatistics ### Spatial autocorrelation • Correlation of a variable with itself through space. • Correlation versus spatial autocorrelation • Actually bad news and good news • Good because, "if geography is worth studying at all, it must be because phenomena do not vary randomly through space" (O'Sullivan and Unwin, 2003, pg. 28) • Essential for spatial modelling through interpolation ### Spatial autocorrelation • Three possibilities: • Clustered (positive autocorrelation): nearby locations are likely to be similar to one another. • Random (autocorrelation near zero): no spatial effect is discernible, and observations seem to vary randomly through space • Dispersed (negative autocorrelation): observations from nearby locations are likely to be different from one another. ### Moran's $$I$$ • Moran's $$I$$ measures the interdependence in spatial distributions. • Used with interval/ratio level data • Used to detect spatial trends • -1 ≤ $$I$$ ≤ 1 • $$I$$ = -1 = dispersed • $$I$$ = 0 = random • $$I$$ = +1 = clustered ### Moran's $$I$$ $$I = \frac {N}{\underset{i=1}{\overset{N} \Sigma} \underset{j=1}{\overset{N} \Sigma} w_{ij}} \frac {\underset{i=1}{\overset{N} \Sigma} \underset{j=1}{\overset{N} \Sigma} w_{ij} (x_i - \overset{-} x) (x_j - \overset{-} x)}{\underset{i=1}{\overset{N} \Sigma} {(x_i - \overset{-} x)^2}}$$ Where $$\overset{-} x$$ is the mean of variable $$x$$; $$x_i$$ is the value at $$i$$; $$j$$ is a neighbour of $$i$$; $$w_{ij}$$ is the weight between neighbours $$i$$ and $$j$$. (Figures from Jensen & Jensen 2013) ### Point Pattern Analysis • Mapped point data often exhibit distinct patterning.
• Patterns result from the spatial component of a control on the phenomenon. • Understanding the pattern can help with understanding the controlling forces on the phenomenon. ### Point Pattern Analysis • The patterns that we're interested in with Point Pattern Analysis (PPA) result from the locations of individual points and not from their attributes, for which spatial autocorrelation is more relevant. • Quadrat Analysis and Nearest-Neighbour Analysis are the two most common methods for PPA • A quadrat is a user-defined geographic area, usually a square or rectangle, used to measure the distribution of a spatial phenomenon. • Quadrat analysis can be used to test whether the phenomenon is uniformly distributed. • The Chi Square test is used with quadrats. (Figures from Jensen & Jensen 2013) • The value of Chi Square is compared with a table of critical values to determine whether the points are statistically significantly different from a uniform distribution. • The size, shape, and number of quadrats will impact the results of the quadrat analysis. ### Nearest-neighbour Analysis • NNA is used in GIS to determine whether point sets are random or non-random. • If a point set is found to be non-random then we are left with the task of determining what controls the distribution. • For each point in the set, find the distance to the closest neighbour. (Figures from Jensen & Jensen 2013) ### Nearest-neighbour Analysis $$d_e = \frac 1 {2 \sqrt{N/A}} = \frac 1 {2 \sqrt{p}}$$ • where $$d_e$$ is the expected mean nearest-neighbour distance (assuming a random distribution); $$N$$ is the number of points; $$A$$ is the study area; $$p$$ is the point density. $$NNR = \frac {Dist_{Obs}} {Dist_{Ran}} = \frac {d_a} {d_e}$$ • where $$NNR$$ is the nearest-neighbour ratio; $$Dist_{Obs}$$ is the observed mean NN distance ($$d_a$$); $$Dist_{Ran}$$ is the expected distance for a random distribution. (Figure from Jensen & Jensen 2013) ### Nearest-neighbour Analysis • Warning: our estimate of the point density depends on our definition of the study area. • If we change the extent of the study area, we change the results. (Figures: not so clustered vs. very clustered) ### Nearest-neighbour Analysis • NNA is also sensitive to the non-uniformity of the underlying space. • NNA assumes that points are free to locate anywhere. • Consider the gap in stream channel heads below. It's the result of Lake Ontario. ### Circular Data • Geographers distinguish between directional (0°-360°) and axial (a.k.a. oriented; 0°-180°) data. • Wind is directional; a road is axial. • Directional and axial data can be plotted using Rose Diagrams, which are like circular histograms. ### Circular Data $$\overset{-}\theta = \tan^{-1}(\frac{\overset{N}{\underset{i=1}{\Sigma}}{\sin \theta_i}} {\overset{N}{\underset{i=1}{\Sigma}}{\cos \theta_i}})$$ • where $$\overset{-}\theta$$ is the mean direction, derived from the vector resultant. ### Circular Data $$\overset{-}R = \frac 1 N \sqrt{(\overset{N}{\underset{i=1}{\Sigma}}{\cos \theta_i})^2 + ({\overset{N}{\underset{i=1}{\Sigma}}{\sin \theta_i}})^2}$$ • where $$\overset{-}R$$ is the standardized length of the vector resultant, called the mean resultant length, and is a measure of dispersion. • 0 ≤ $$\overset{-}R$$ ≤ 1, where values near 1 indicate small angular dispersion and vice versa. ### Circular Data • Axial (oriented) data cannot easily be treated as vectors because there is nothing to distinguish one end of the line from the other. • An angle of 179° is very close to one of 1°.
• To solve this, double all the angles, calculate the statistics with the doubled data, and then halve the angles to get the mean direction, mean resultant length, etc. • 45° × 2 = 90° • 225° × 2 = 450° = 450° - 360° = 90°
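A small numerical illustration of the angle-doubling trick for axial data (a sketch with numpy; the sample angles are invented for illustration):

```python
# Sketch: mean axis of axial (0-180 degree) data via angle doubling,
# as described in the slides. The sample angles are made up.
import numpy as np

axial_deg = np.array([1.0, 179.0, 3.0, 177.0])    # all close to the 0/180 axis
doubled = np.deg2rad(2 * axial_deg)

C, S = np.cos(doubled).sum(), np.sin(doubled).sum()
mean_doubled = np.arctan2(S, C)                   # mean direction of doubled angles
R_bar = np.hypot(C, S) / len(axial_deg)           # mean resultant length

mean_axis = (np.rad2deg(mean_doubled) / 2) % 180  # halve to undo the doubling
print(f"mean axis = {mean_axis:.2f} deg, R_bar = {R_bar:.3f}")
# A naive arithmetic mean of the raw angles gives 90 deg, clearly wrong here;
# the doubled-angle method returns an axis near 0/180 deg with R_bar near 1.
```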
2017-06-27 22:43:48
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.42845892906188965, "perplexity": 3675.714805072365}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-26/segments/1498128321938.75/warc/CC-MAIN-20170627221726-20170628001726-00306.warc.gz"}
https://tex.stackexchange.com/questions/278968/my-figure-float-border-is-always-larger-than-the-textwidth
# My figure float border is always larger than the textwidth My float border, regardless of the image size, is always wider than the textwidth. Reducing the textwidth percentage simply makes the vertical size smaller (as it keeps the aspect ratio) but does not make the float border any smaller. See image; note the border extends beyond the textwidth: %DOCUMENT PREPPING% %Defines Document Class \documentclass[]{article} \usepackage[british]{babel} \usepackage{csquotes} \usepackage{hyperref} \usepackage[style=apa,sortcites=true,sorting=nyt,backend=biber]{biblatex} \usepackage{lipsum} \usepackage{amsmath} \usepackage{amsfonts} \usepackage{amssymb} \usepackage{graphicx} \usepackage{placeins} \usepackage{float} %defines floats (i.e. figures/pictures) as having a border \floatstyle{boxed} \restylefloat{figure} %Assigns image directory \graphicspath{ {images/}} %declares language mapping \DeclareLanguageMapping{british}{british-apa} %defines bib files %Defines command to hide @ for email spam \newcommand{\at}{\makeatletter @\makeatother} % %Figure Information. Note, DOCUMENT INFO OMITTED% \begin{figure}[!ht] \end{figure} \FloatBarrier Example document: %DOCUMENT PREPPING% %Defines Document Class \documentclass[]{article} \usepackage{amsmath} \usepackage{amsfonts} \usepackage{amssymb} \usepackage{graphicx} \usepackage{float} %defines floats (i.e. figures/pictures) as having a border \floatstyle{boxed} \restylefloat{figure} %DOCUMENT INFORMATION% %Title Information \title{Image Test} \author{Test} \date{Last Updated: November, 2015} %Begin document \begin{document} \maketitle %start first section \section{Introduction} \clearpage \begin{figure}[!htb] \includegraphics[width=0.8\textwidth]{example-image.jpg} \caption{The paper and pencil version of the NASA-TLX.} \end{figure} \end{document} • Welcome to the site! It would help if you made your example a document that people could test (removing presumably irrelevant packages such as lipsum, csquotes etc., and using example-image as the image name, which is a sample image that most people already have in their tex distribution) – David Carlisle Nov 19 '15 at 9:02 • Is there a suggested protocol for uploading an example document? – Michael Anderson Nov 19 '15 at 9:12 • Weirdly, in the example image the border is going wider than the textwidth. This perplexes me. – Michael Anderson Nov 19 '15 at 9:21 • In the complete document as posted in the edited question the border is textwidth exactly. – David Carlisle Nov 19 '15 at 9:28 The float boxed style is always slightly wider than \textwidth. If you don't want the box around the caption, just put an \fbox{..} around the image. \documentclass{article} \usepackage{showframe} \usepackage{mwe} \begin{document} \begin{figure} \fboxsep=1em \fboxrule=1pt \fbox{\begin{minipage}{\dimexpr \textwidth -2\fboxsep-2\fboxrule} \centering\includegraphics[width=0.8\textwidth]{example-image}% \caption{\blindtext} \end{minipage}} \end{figure} \end{document} Here I implemented everything as an environment. \documentclass{article} \usepackage{environ} \usepackage{showframe} \usepackage{mwe} \NewEnviron{boxedfigure}[1][]% {\begin{figure}[#1]% \fboxsep=1em \fboxrule=1pt \fbox{\begin{minipage}{\dimexpr \textwidth-2\fboxsep-2\fboxrule} \BODY \end{minipage}}% \end{figure}} \begin{document} \begin{boxedfigure} \centering\includegraphics[width=0.8\textwidth]{example-image}% \caption{\blindtext} \end{boxedfigure} \end{document}
2019-09-16 06:08:11
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8318787813186646, "perplexity": 8167.381073288013}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-39/segments/1568514572491.38/warc/CC-MAIN-20190916060046-20190916082046-00065.warc.gz"}
http://mathhelpforum.com/advanced-applied-math/38433-klein-gordon-equation.html
1. Klein-Gordon equation I am being really thick here. I have this wave equation, the massless Klein-Gordon equation $\displaystyle \partial_{\mu}\partial^{\mu}\phi(x)=0$ where the summation over $\displaystyle \mu$ is over 0,1,2,3. The general solution is a superposition of plane waves, yes? i.e. $\displaystyle \phi(x)=\int d^4 p \, \overline{\phi}(p)\exp(i p_{\mu}x^{\mu})$ where $\displaystyle \overline{\phi}$ is the weighting function. When you substitute this back into the Klein-Gordon equation you pull down two factors of p, i.e. $\displaystyle p_{\mu}p^{\mu}$, which equals zero (mass shell constraint). My question is, is $\displaystyle \overline{\phi}(p)$ arbitrary? I don't really understand why this is so, let alone believe it. Hope peeps understand the question. 2. Originally Posted by ppyvabw I am being really thick here. I have this wave equation, the massless Klein-Gordon equation $\displaystyle \partial_{\mu}\partial^{\mu}\phi(x)=0$ where the summation over $\displaystyle \mu$ is over 0,1,2,3. The general solution is a superposition of plane waves, yes? i.e. $\displaystyle \phi(x)=\int d^4 p \, \overline{\phi}(p)\exp(i p_{\mu}x^{\mu})$ where $\displaystyle \overline{\phi}$ is the weighting function. When you substitute this back into the Klein-Gordon equation you pull down two factors of p, i.e. $\displaystyle p_{\mu}p^{\mu}$, which equals zero (mass shell constraint). My question is, is $\displaystyle \overline{\phi}(p)$ arbitrary? I don't really understand why this is so, let alone believe it. Hope peeps understand the question. The K-G equation is linear and plane waves (with the mass shell constraint) are solutions to it. Therefore arbitrary linear combinations of plane waves are solutions to it. Therefore $\displaystyle \overline{\phi}(p)$ is arbitrary. More precisely, substituting the ansatz gives $\displaystyle p_{\mu}p^{\mu}\,\overline{\phi}(p)=0$, so $\displaystyle \overline{\phi}$ must vanish off the mass shell $\displaystyle p_{\mu}p^{\mu}=0$ but is unconstrained on it.
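A quick symbolic check of the claim (a sketch with sympy, assuming the metric signature (+, -, -, -) and units with c = 1): a plane wave solves the massless equation exactly when its momentum is on the mass shell.

```python
# Sketch: apply the wave operator to exp(i p.x) and confirm that the result
# vanishes precisely on the mass shell E^2 = p_x^2 + p_y^2 + p_z^2.
# Metric signature (+, -, -, -) assumed; units with c = 1.
import sympy as sp

t, x, y, z = sp.symbols("t x y z", real=True)
E, px, py, pz = sp.symbols("E p_x p_y p_z", real=True)

phase = E * t - px * x - py * y - pz * z      # p_mu x^mu
phi = sp.exp(sp.I * phase)

# Box phi = (d_t^2 - d_x^2 - d_y^2 - d_z^2) phi
box_phi = (sp.diff(phi, t, 2) - sp.diff(phi, x, 2)
           - sp.diff(phi, y, 2) - sp.diff(phi, z, 2))

print(sp.simplify(box_phi / phi))   # -> -E**2 + p_x**2 + p_y**2 + p_z**2
# The wave operator pulls down -p_mu p^mu, which vanishes on the mass shell.
```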
2018-04-26 02:52:59
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9266622066497803, "perplexity": 1076.348668183005}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-17/segments/1524125948047.85/warc/CC-MAIN-20180426012045-20180426032045-00284.warc.gz"}
https://dolo.readthedocs.io/en/latest/steady_state.html
The deterministic steady state of a model corresponds to steady-state values $$\overline{m}$$ of the exogenous process. States and controls satisfy: $$\overline{s} = g\left(\overline{m}, \overline{s}, \overline{x}, \overline{m} \right)$$ $$0 = \left[ f\left(\overline{m}, \overline{s}, \overline{x}, \overline{m}, \overline{s}, \overline{x} \right) \right]$$ where $$g$$ is the state transition function, and $$f$$ is the arbitrage equation. Note that the shocks, $$\epsilon$$, are held at their deterministic mean. Finding the steady state consists of solving the system of arbitrage equations for the steady-state values of the controls, $$\overline{x}$$, which can then be used along with the transition function to find the steady-state values of the state variables, $$\overline{s}$$. dolo.algos.steady_state.residuals(model: dolo.compiler.model.Model, calib=None) → Dict[str, List[float]]
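Based on the signature above, usage presumably looks like the sketch below (hedged: the model file name is a placeholder, and the import path should be checked against the installed dolo version):

```python
# Sketch: evaluate the steady-state residuals of a dolo model.
# "example_model.yaml" is a placeholder, not a file shipped with dolo.
from dolo import yaml_import
from dolo.algos.steady_state import residuals

model = yaml_import("example_model.yaml")

# Residuals of the transition and arbitrage equations at the calibrated
# values; every entry should be ~0 at a true steady state.
for eq_group, values in residuals(model).items():
    print(eq_group, values)
```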
2019-10-21 20:39:33
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7617369294166565, "perplexity": 481.2535142297453}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570987787444.85/warc/CC-MAIN-20191021194506-20191021222006-00450.warc.gz"}
http://dict.cnki.net/h_3394374000.html
Translation results for "(α" (CNKI translation assistant). Bilingual example sentences in which "(α" is translated as an undetermined term: NMR STUDY ON THE CONFORMATION CHANGE IN THE AGGREGATION PROCESSES OF 1,3-DI(α-NAPHTHYL)PROPANE AND DI(α-NAPHTHYLMETHYL)ETHER. EFFECT OF GLYCYRRHETINIC ACID ON DNA DAMAGE AND UNSCHEDULED DNA SYNTHESIS INDUCED BY BENZO(α)PYRENE. Study on the Synergistic Extraction of Ytterbium with 1-Phenyl-3-Methyl-4-(α-Furoyl)-5-Pyrazolone and Neutral Extractant. Crystal Structure of O,O-Diethyl-(α-p-Hydroxyphenyl-β-Nitroethyl)phosphonate. EPR Study of a New Crystal of the Binuclear Copper(II) Cluster Compound [Cu_2(α-C_(10)H_7CH_2CO_2)_4(DMF)_2]·(DMF)_2·H_2O. Similar matched sentence pairs: If R satisfies the condition <α>, R is commutative. The secretion of tumor necrosis factor-α (TNF-α) from cells stimulated by LPS was investigated. α-Congruences on Variants of S(X) (II). Example sentences containing "(alpha": Gabor time-frequency lattices are sets of functions of the form $g_{m \alpha , n \beta} (t) =e^{-2 \pi i \alpha m t}g(t-n \beta)$ generated from a given function $g(t)$ by discrete translations in time and frequency. It was recently observed that the behavior of a lattice $(m \alpha , n \beta )$ can be connected to that of a dual lattice $(m/ \beta , n /\alpha ).$ Here we establish this interesting relationship and study its properties. One outcome is a simple proof that for $g_{m \alpha , n \beta}$ to span $L^2,$ the lattice $(m \alpha , n \beta )$ must have at least unit density. It can be used to study the weighted average of the form $(T^\alpha (\ln T)^\beta)^{-1}\int _0^T h(t) \, dt.$ More precisely, we study the existence of $\lim_{\varepsilon \to 0} \int_{|x-y| > \varepsilon} f(y)\, K(x,y)\left( 1 - \frac{\varepsilon}{|x-y|} \right)^{\alpha} dy$ a.e. Related abstracts: Measurement of the intensity of total scattering of x-rays by a number of polyatomic gases was made for scattering angles between 15° and 130° using an ionization method of recording the scattered intensity. Balanced filters of ZrO2 and SrO were used to separate the MoKα rays and Soller slits were placed in front of the ionization chamber to obtain a definite scattering angle. The gases studied are Cl2, CO2, N2O, H2S, CCl4 and CHCl3. In each case the absolute values of the scattered intensity were determined by comparison with the scattering from oxygen, the results of Wollan for the latter gas being taken as correct.
The experimental results are compared with Woo's theory of the scattering of x-rays by polyatomic gases and the agreement seems to be satisfactory. The potential between two nucleons suggested by K. C. Wang, viz. V = V(r) = -Ae^(k_1 r) for r ≥ a and V = V(a) for r < a, is used to find: (1) the wave function of the deuteron in the normal state, (2) the neutron-proton scattering cross section, and (3) the cross section for the break-up of the deuteron by γ-rays. The calculated cross sections agree fairly closely with experiment. The anhydro ring in α-methyl 2:3-anhydro-4:6-benzylidene-D-allopyranoside has been opened by the action of sodium benzylate to give a 53% yield of α-methyl 2-benzyl-4:6-benzylidene-D-altropyranoside and 9% of α-methyl 3-benzyl-4:6-benzylidene-D-glucopyranoside. After the removal of the benzylidene residue by acid hydrolysis, the position of attachment of the benzyl group is determined by periodate oxidation. Catalytic hydrogenolysis of the benzyl group gives the known α-methyl D-altroside and α-methyl D-glucoside respectively, and their constitutions are thus proved. While an ethylene oxide ring in a sugar molecule can be opened by alkaline reagents, such as sodium hydroxide, sodium methoxide, ammonia, etc., the use of sodium benzylate has the advantage that one of the hydroxyl groups is protected after the scission by the benzyl group, which can in turn be removed by catalytic hydrogenation.
2020-01-27 21:38:43
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6957553625106812, "perplexity": 4720.56921610614}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579251728207.68/warc/CC-MAIN-20200127205148-20200127235148-00327.warc.gz"}
http://www.iucnredlistassessments.org/y21cih/page.php?id=ratio-problems-worksheet-bd6a9c
Ratios are used to make a comparison between numbers or units, and they help us understand how two or more things relate to each other and find out which is better than the other. A ratio can be written with the word 'to' (3 to 2), with a colon (3 : 2), or as a fraction; ratios compare two quantities with the same unit. Ratios and proportions are often confused: many people think they are one and the same, but they are different things, a proportion being a statement that two ratios are equal. When solving ratio problems, students can use ratio tables, although not many students know what ratio tables are and how they can help.

Worked example: suppose the width of a soccer field is 60 meters and the length is 100 meters. The area of the field is 60 × 100 = 6000, so the ratio of the length to the area is 100 to 6000, written 100:6000 or 100/6000. Since 100/6000 = 1/60, the ratio of the length to the area in simplest form is 1/60.

Worked example: in a ratio share of 8 parts to 1 part, the difference between the ratio shares is 7 parts (8 − 1 = 7). If the difference in the number of books read is 63, then one share of the ratio represents 63 ÷ 7 = 9 books.

Sample word problems: (1) Damian is making brownies to serve at the family picnic. If the recipe calls for 2½ cups of cocoa to serve 4 people, how many cups will he need if there will be 60 people at the picnic? (Answer: 37.5 cups.) (2) The ratio of boys to girls in a classroom is 7 to 11. If there are a total of 49 boys in the classroom, then how many boys and girls are there altogether? (3) The ratio of marbles to stones in a large pot is 6 to 7. If there are 42 marbles, then how many stones are there? (4) There are a total of 63 bikes. If the ratio of blue bikes to black bikes is 4 to 5, how many of the bikes are blue? (5) A math club has 25 members, of which 11 are males and the rest are females. What is the ratio of males to all club members? (6) A group of preschoolers has 8 boys and 24 girls. What is the ratio of boys to girls? (7) Paul can walk 15 steps in 5 minutes. How long does it take Paul …

These problems are drawn from printable ratio and proportion worksheets for math classes 5 and 6, available online for free in printable and downloadable (PDF and image) formats, with answers for the 6th-grade math curriculum. Typical sets include: Ratio Proportion Worksheet 3 (convert the currencies), Worksheet 4-5 (mixed questions on finding proportions and ratios), Worksheet 6-7 (solve the ratio and proportion word problems) and Worksheet 8-10 (work out the proportion in specified recipes). Worksheet instructions commonly read: use black ink or ball-point pen; diagrams are not accurately drawn unless otherwise indicated; answer all questions in the spaces provided, as there may be more space than you need. In the interactive versions, fill in all the gaps, then press "Check" to check your answers, or use the "Hint" button to get a free letter if an answer is giving you trouble. Related worksheet topics on the same site include complementary and supplementary angles, area and perimeter, the sum of the angles in a triangle, types of angles, writing and evaluating expressions, trigonometric ratios, time and work, ages, linear inequalities, sets and Venn diagrams, and determining whether a relationship is proportional.
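Where cross-multiplication is the intended method, a short script can check answers like the brownie problem above. This is a minimal illustrative sketch, not part of any worksheet; the helper name `solve_proportion` is invented for the example.

```python
from fractions import Fraction

# Hypothetical helper for checking proportion word problems by
# cross-multiplication: solve a/b = x/c for x, i.e. x = a*c/b.
def solve_proportion(a, b, c):
    return a * c / b

# Brownie recipe: 2 1/2 cups of cocoa serve 4 people; how much for 60 people?
cups = solve_proportion(Fraction(5, 2), 4, 60)
print(cups)  # 75/2, i.e. 37.5 cups, matching the worked answer above
```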
These free PDF worksheets cover key skills like ratio of part to part, equivalent ratios and writing ratios in three different ways, as well as related topics such as financial arithmetic, rates and unit rates, and the nature of the roots of a quadratic equation. One popular format is solving proportion word problems with ratio tables: a worksheet of this kind gives students practice using ratio tables to solve proportion word problems, with 8 questions, each with a template of a ratio table for students to use to solve the problem. Interesting ratio problems and activities are generally meant for grades 6 and higher; the online worksheets for these grades follow the school curriculum and promise to be fun and exciting along the way.
2021-09-18 02:31:07
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.24995633959770203, "perplexity": 2736.1130332039615}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780056120.36/warc/CC-MAIN-20210918002951-20210918032951-00676.warc.gz"}
http://watanabe-www.math.dis.titech.ac.jp/users/swatanab/essse.html
## Equations of States in Singular Statistical Estimation

Equations of States in Singular Statistical Estimation, arXiv:0712.0653

We obtained a new result, Equations of States in Singular Statistical Estimation. We proved that there is a formula which holds among the Bayes generalization, Bayes training, Gibbs generalization and Gibbs training errors.

Bg : Bayes generalization error, Bt : Bayes training error, Gg : Gibbs generalization error, Gt : Gibbs training error, b : inverse temperature of the a posteriori distribution.

Equations of states: E[Bg] - E[Bt] = E[Gg] - E[Gt] = 2b ( E[Gt] - E[Bt] ), where E[ ] shows the expectation value.

The formula holds for any true distribution, any learning machine, any a priori distribution and any singularities. Hence we can predict the Bayes and Gibbs generalization errors from the Bayes and Gibbs training errors without any knowledge of the true distribution:

E[Bg] = 2b ( E[Gt] - E[Bt] ) + E[Bt]
E[Gg] = 2b ( E[Gt] - E[Bt] ) + E[Gt]

which is said to be a widely applicable information criterion (WAIC). If a learning machine is regular (the Fisher information matrix is positive definite), then 2b ( E[Gt] - E[Bt] ) is equal to the dimension of the parameter space. The proofs of the theorems are based on singular learning theory. (5/Dec/2007).

Errata: p. 8, eq. (6), lim_{\beta\rightarrow\infty} should be removed. P. 9, Table 1, the theoretical Bayes generalization error should be 0.0135, 0.0150, 0.0160, 0.0170.

International Conference Paper: Sumio Watanabe, "A formula of equations of states in singular learning machines," Proceedings of IEEE World Congress on Computational Intelligence, (Hong Kong, China) 2008.
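As a rough illustration of how the formula can be used, the following sketch estimates Bt and Gt from posterior samples and applies the equations of states at b = 1. It is a minimal reading of the formulas above, not code from the paper: the array layout and function name are assumptions, and the errors are computed as log losses, so additive constants from the true distribution cancel in the 2b ( E[Gt] - E[Bt] ) correction.

```python
import numpy as np

def predict_generalization(logp, b=1.0):
    """logp[s, i] = log p(x_i | w_s) for S posterior samples and n data points.

    Returns (Bt, Gt, predicted Bg, predicted Gg) via the equations of states:
    E[Bg] = 2b(E[Gt] - E[Bt]) + E[Bt],  E[Gg] = 2b(E[Gt] - E[Bt]) + E[Gt].
    """
    # Bayes training error: minus the log of the posterior-averaged likelihood
    # (computed with a max-shift for numerical stability).
    m = logp.max(axis=0)
    bayes_t = -np.mean(np.log(np.mean(np.exp(logp - m), axis=0)) + m)
    # Gibbs training error: minus the posterior average of the log-likelihood.
    gibbs_t = -np.mean(logp)
    gap = 2.0 * b * (gibbs_t - bayes_t)
    return bayes_t, gibbs_t, bayes_t + gap, gibbs_t + gap
```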
2017-05-24 04:22:36
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6435816884040833, "perplexity": 3335.743530483444}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-22/segments/1495463607786.59/warc/CC-MAIN-20170524035700-20170524055700-00385.warc.gz"}
https://en.wikisource.org/wiki/1911_Encyclop%C3%A6dia_Britannica/Aeronautics
# 1911 Encyclopædia Britannica/Aeronautics

AERONAUTICS, the art of “navigating” the “air.” It is divisible into two main branches—aerostation, dealing properly with machines which like balloons are lighter than the air, and aviation, dealing with the problem of artificial flight by means of flying machines which, like birds, are heavier than the air, and also with attempts to fly made by human beings by the aid of artificial wings fitted to their limbs. Historically, aviation is the older of the two, and in the legends of gods or myths of men or animals which are supposed to have travelled through the air, such as Pegasus, Medea’s dragons and Daedalus, as well as in Egyptian bas-reliefs, wings appear as the means by which aerial locomotion is effected. In later times there are many stories of men who have attempted to fly in the same way. John Wilkins (1614–1672), one of the founders of the Royal Society and bishop of Chester, who in 1640 discussed the possibility of reaching the moon by volitation, says in his Mathematical Magick (1648) that it was related that “a certain English monk called Elmerus, about the Confessor’s time,” flew from a town in Spain for a distance of more than a furlong; and that other persons had flown from St Mark’s, Venice, and at Nuremberg. Giovanni Battista Dante, of Perugia, is said to have flown several times across Lake Trasimene. At the beginning of the 16th century an Italian alchemist who was collated to the abbacy of Tungland, in Galloway, Scotland, by James IV., undertook to fly from the walls of Stirling Castle through the air to France. He actually attempted the feat, but soon came to the ground and broke his thigh-bone in the fall—an accident which he explained by asserting that the wings he employed contained some fowls’ feathers, which had an “affinity” for the dung-hill, whereas if they had been composed solely of eagles’ feathers they would have been attracted to the air. This anecdote furnished Dunbar, the Scottish poet, with the subject of one of his rude satires. Leonardo da Vinci about the same time approached the problem in a more scientific spirit, and his notebooks contain several sketches of wings to be fitted to the arms and legs. In the following century a lecture on flying delivered in 1617 by Fleyder, rector of the grammar school at Tubingen, and published eleven years later, incited a poor monk to attempt to put the theory into practice, but his machinery broke down and he was killed. In Francis Bacon’s Natural History there are two passages which refer to flying, though they scarcely bear out the assertion made by some writers that he first published the true principles of aeronautics. The first is styled Experiment Solitary, touching Flying in the Air:—“Certainly many birds of good wing (as kites and the like) would bear up a good weight as they fly; and spreading feathers thin and close, and in great breadth, will likewise bear up a great weight, being even laid, without tilting up on the sides. The further extension of this experiment might be thought upon.” The second passage is more diffuse, but less intelligible; it is styled Experiment Solitary, touching the Flying of unequal Bodies in the Air:—“Let there be a body of unequal weight (as of wool and lead or bone and lead); if you throw it from you with the light end forward, it will turn, and the weightier end will recover to be forwards, unless the body be over long.
The cause is, for that the more dense body hath a more violent pressure of the parts from the first impulsion, which is the cause (though heretofore not found out, as hath been often said) of all violent motions; and when the hinder part moveth swifter (for that it less endureth pressure of parts) than the forward part can make way for it, it must needs be that the body turn over; for (turned) it can more easily draw forward the lighter part.” The fact here alluded to is the resistance that bodies experience in moving through the air, which, depending on the quantity of surface merely, must exert a proportionally greater effect on rare substances. The passage itself, however, after making every allowance for the period in which it was written, must be deemed confused, obscure and unphilosophical. In his posthumous work, De Motu Animalium, published at Rome in 1680–1681, G. A. Borelli gave calculations of the enormous strength of the pectoral muscles in birds; and his proposition cciv. (vol. i. pp. 322-326), entitled Est impossibile ut homines propriis viribus artificiose volare possint, points out the impossibility of man being able by his muscular strength to give motion to wings of sufficient extent to keep him suspended in the air. But during his lifetime two Frenchmen, Allard in 1660 and Besnier about 1678, are said to have succeeded in making short flights. An account of some of the modern attempts to construct flying machines will be found in the article Flight and flying; here we append a brief consideration of the mechanical aspects of the problem. The very first essential for success is safety, which will probably only be attained with automatic stability. The underlying principle is that the centre of gravity shall at all times be on the same vertical line as the centre of pressure. The latter varies with the angle of incidence. For square planes it moves approximately as expressed by Joessel’s formula, C = (0.2 + 0.3 sin α)L, in which C is the distance from the front edge, L the length fore and aft, and α the angle of incidence. The movement is different on concave surfaces. The term aeroplane is understood to apply to flat sustaining surfaces, but experiment indicates that arched surfaces are more efficient. S. P. Langley proposed the word aerodrome, which seems the preferable term for apparatus with wing-like surfaces. This is the type to which results point as the proper one for further experiments. With this it seems probable that, with well-designed apparatus, 40 to 50 ℔ can be sustained per indicated h.p., or about twice that quantity per resistance or “thrust” h.p., and that some 30 or 40 ℔ of the weight can be devoted to the machinery, thus requiring motors, with their propellers, shafting, supplies, &c., weighing less than 20 ℔ per h.p. It is evident that the apparatus must be designed to be as light as possible, and also to reduce to a minimum all resistances to propulsion. This being kept in view, the strength and consequent section required for each member may be calculated by the methods employed in proportioning bridges, with the difference that the support (from air pressure) will be considered as uniformly distributed, and the load as concentrated at one or more points. Smaller factors of safety may also have to be used. Knowing the sections required and unit weights of the materials to be employed, the weight of each part can be computed.
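The centre-of-pressure rule lends itself to a one-line check. The sketch below simply evaluates Joessel's formula as quoted above; the function name and sample figures are ours, not the article's.

```python
import math

# Joessel's formula for a flat square plane: the centre of pressure lies
# C = (0.2 + 0.3 sin a) * L behind the front edge, where L is the length
# fore and aft and a the angle of incidence.
def centre_of_pressure(length, incidence_deg):
    a = math.radians(incidence_deg)
    return (0.2 + 0.3 * math.sin(a)) * length

# On a 5 ft chord the centre of pressure sits 1.0 ft back at 0 degrees and
# moves aft as the angle of incidence grows.
for deg in (0, 3, 10, 30):
    print(deg, round(centre_of_pressure(5.0, deg), 3))  # 1.0, 1.078, 1.26, 1.75
```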
If a model has been made to absolutely exact scale, the weight of the full-sized apparatus may approximately be ascertained by the formula ${\displaystyle W'=W{\sqrt {\left({\frac {S'}{S}}\right)^{3}}},}$ in which W is the weight of the model, S its surface, and W′ and S′ the weight and surface of the intended apparatus. Thus if the model has been made one-quarter size in its homologous dimensions, the supporting surfaces will be sixteen times, and the total weight sixty-four times those of the model. The weight and the surface being determined, the three most important things to know are the angle of incidence, the “lift,” and the required speed. The fundamental formula for rectangular air pressure is well known: P=KV²S, in which P is the rectangular normal pressure, in pounds or kilograms, K a coefficient (0.0049 for British, and 0.11 for metric measures), V the velocity in miles per hour or in metres per second, and S the surface in square feet or in square metres. The normal on oblique surfaces, at various angles of incidence, is given by the formula P = KV²Sη, which latter factor is given both for planes and for arched surfaces in the subjoined table:—

Planes (Duchemin formula, verified by Langley): ${\displaystyle N=P{\frac {2\sin \alpha }{1+\sin ^{2}\alpha }}}$. Wings (Lilienthal): concavity 1 in 12.

| Angle α | Planes: normal η | Planes: lift η cos α | Planes: drift η sin α | Wings: normal η | Wings: lift η cos α | Wings: drift η sin α | Wings: tangential force |
| --- | --- | --- | --- | --- | --- | --- | --- |
| −9° | .. | .. | .. | 0.0 | 0.0 | 0.0 | +0.070 |
| −8° | .. | .. | .. | 0.040 | 0.0396 | −0.0055 | +0.067 |
| −7° | .. | .. | .. | 0.080 | 0.0741 | −0.0097 | +0.064 |
| −6° | .. | .. | .. | 0.120 | 0.1193 | −0.0125 | +0.060 |
| −5° | .. | .. | .. | 0.160 | 0.1594 | −0.0139 | +0.055 |
| −4° | .. | .. | .. | 0.200 | 0.1995 | −0.0139 | +0.049 |
| −3° | .. | .. | .. | 0.242 | 0.2416 | −0.0126 | +0.043 |
| −2° | .. | .. | .. | 0.286 | 0.2858 | −0.0100 | +0.037 |
| −1° | .. | .. | .. | 0.332 | 0.3318 | −0.0058 | +0.031 |
| 0° | 0.0 | 0.0 | 0.0 | 0.381 | 0.3810 | 0.0 | +0.024 |
| +1° | 0.035 | 0.035 | 0.000611 | 0.434 | 0.434 | +0.0075 | +0.016 |
| +2° | 0.070 | 0.070 | 0.00244 | 0.489 | 0.489 | +0.0170 | +0.008 |
| +3° | 0.104 | 0.104 | 0.00543 | 0.546 | 0.545 | +0.0285 | 0.0 |
| +4° | 0.139 | 0.139 | 0.0097 | 0.600 | 0.597 | +0.0418 | −0.007 |
| +5° | 0.174 | 0.173 | 0.0152 | 0.650 | 0.647 | +0.0566 | −0.014 |
| +6° | 0.207 | 0.206 | 0.0217 | 0.696 | 0.692 | +0.0727 | −0.021 |
| +7° | 0.240 | 0.238 | 0.0293 | 0.737 | 0.731 | +0.0898 | −0.028 |
| +8° | 0.273 | 0.270 | 0.0381 | 0.771 | 0.763 | +0.1072 | −0.035 |
| +9° | 0.305 | 0.300 | 0.0477 | 0.800 | 0.790 | +0.1251 | −0.042 |
| 10° | 0.337 | 0.332 | 0.0585 | 0.825 | 0.812 | +0.1432 | −0.050 |
| 11° | 0.369 | 0.362 | 0.0702 | 0.846 | 0.830 | +0.1614 | −0.058 |
| 12° | 0.398 | 0.390 | 0.0828 | 0.864 | 0.845 | +0.1803 | −0.064 |
| 13° | 0.431 | 0.419 | 0.0971 | 0.879 | 0.856 | +0.1976 | −0.070 |
| 14° | 0.457 | 0.443 | 0.1155 | 0.891 | 0.864 | +0.2156 | −0.074 |
| 15° | 0.486 | 0.468 | 0.1240 | 0.901 | 0.870 | +0.2332 | −0.076 |

The sustaining power, or “lift,” which in horizontal flight must be equal to the weight, can be calculated by the formula L=KV²Sηcosα, or the factor may be taken direct from the table, in which the “lift” and the “drift” have been obtained by multiplying the normal η by the cosine and sine of the angle. The last column shows the tangential pressure on concave surfaces, which O. Lilienthal found to possess a propelling component between 3° and 32° and therefore to be negative to the relative wind. Former modes of computation indicated angles of 10° to 15° as necessary for support with planes. These were prohibitory in consequence of the great “drift”; but the present data indicate that, with concave surfaces, angles of 2° to 5° will produce adequate “lift.” To compute the latter the angle at which the wings are to be set must first be assumed, and that of +3° will generally be found preferable.
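To make the use of the table concrete, the following sketch evaluates the lift and speed formulas with the coefficients just given; it anticipates the worked example below (W = 189 ℔ on 143.5 sq. ft. of concave wing at +3°, where the table gives a lift factor of 0.545). The rounded K = 0.005 follows the worked example rather than the stricter 0.0049, and the function names are ours.

```python
import math

K = 0.005  # rounded British coefficient, as in the worked example below

def lift(v_mph, surface_sqft, lift_factor):
    # L = K V^2 S (eta cos a); the table's "lift" column is eta * cos(a).
    return K * v_mph ** 2 * surface_sqft * lift_factor

def required_speed(weight_lb, surface_sqft, lift_factor):
    # For horizontal flight lift equals weight: V = sqrt(W / (K S eta cos a)).
    return math.sqrt(weight_lb / (K * surface_sqft * lift_factor))

v = required_speed(189, 143.5, 0.545)
print(round(v, 1))                      # ~22.0 miles per hour
print(round(lift(v, 143.5, 0.545), 1))  # 189.0 lb, recovering the weight
```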
Then the required velocity is next to be computed by the formula ${\displaystyle V={\sqrt {\frac {L}{KS\eta \cos {\alpha }}}};}$ or for concave wings at +3°: ${\displaystyle V={\sqrt {\frac {W}{0.545KS}}}.}$ Having thus determined the weight, the surface, the angle of incidence and the required speed for horizontal support, the next step is to calculate the power required. This is best accomplished by first obtaining the total resistances, which consist of the “drift” and of the head resistances due to the hull and framing. The latter are arrived at preferably by making a tabular statement showing all the spars and parts offering head resistance, and applying to each the coefficient appropriate to its “master section,” as ascertained by experiment. Thus is obtained an “equivalent area” of resistance, which is to be multiplied by the wind pressure due to the speed. Care must be taken to resolve all the resistances at their proper angle of application, and to subtract or add the tangential force, which consists in the surface S, multiplied by the wind pressure, and by the factor in the table, which is, however, 0 for 3° and 32°, but positive or negative at other angles. When the aggregate resistances are known, the “thrust h.p.” required is obtained by multiplying the resistance by the speed, and then allowing for mechanical losses in the motor and propeller, which losses will generally be 50% of indicated h.p. Close approximations are obtained by the above method when applied to full-sized apparatus. The following example will make the process clearer. The weight to be carried by an apparatus was 189 ℔ on concave wings of 143.5 sq. ft. area, set at a positive angle of 3°. There were in addition rear wings of 29.5 sq. ft., set at a negative angle of 3°; hence, ${\displaystyle \scriptstyle L=189=0.005\times V^{2}\times 143.5\times 0.545}$. Whence ${\displaystyle V={\sqrt {\frac {189}{0.005\times 143.5\times 0.545}}}=22{\hbox{ miles per hour}}}$, at which the air pressure would be 2.42 ℔ per sq. ft. The area of spars and man was 17.86 sq. ft., reduced by various coefficients to an “equivalent surface” of 11.70 sq. ft., so that the resistances were:—

Drift, front wings, ${\displaystyle \scriptstyle 143.5\times 0.0285\times 2.42}$ = 9.90 ℔
Drift, rear wings, ${\displaystyle \scriptstyle 29.5\times (0.043-0.242\times 0.05235)\times 2.42}$ = 2.17 ℔
Tangential force at 3° = 0.00 ℔
Head resistance, ${\displaystyle \scriptstyle 11.70\times 2.42}$ = 28.31 ℔
Total resistance = 40.38 ℔

Speed 22 miles per hour. Power = ${\displaystyle \scriptstyle {\frac {40.38\times 22}{375}}=2.36}$ h.p. for the “thrust,” or 4.72 h.p. for the motor. The weight being 189 ℔, and the resistance 40.38 ℔, the gliding angle of descent was ${\displaystyle \scriptstyle {\frac {40.38}{189}}}$ = tangent of 12°, which was verified by many experiments. The following expressions will be found useful in computing such projects, with the aid of the table above given:—

1. Wind force, ${\displaystyle \scriptstyle F=KV^{2}}$.
2. Pressure, ${\displaystyle \scriptstyle P=KV^{2}S}$.
3. Velocity, ${\displaystyle \scriptstyle V={\sqrt {\frac {W}{KS\eta \cos {a}}}}}$.
4. Surface, S varies as ${\displaystyle \scriptstyle {\frac {1}{V^{2}}}}$.
5. Normal, ${\displaystyle \scriptstyle N=KSV^{2}\eta }$.
6. Lift, ${\displaystyle \scriptstyle L=KSV^{2}\eta \cos {a}}$.
7. Weight, ${\displaystyle \scriptstyle W=L=N\cos {a}}$.
8. Drift, ${\displaystyle \scriptstyle D=KSV^{2}\eta \sin {a}}$.
9. Head area E, get an equivalent.
10. Head resistance, ${\displaystyle \scriptstyle H=EF}$.
11. Tangential force, ${\displaystyle \scriptstyle T=Pa}$.
12. Resistance, ${\displaystyle \scriptstyle R=D+H\pm T}$.
13. Ft. ℔, ${\displaystyle \scriptstyle M=RV}$.
14. Thrust, h.p., ${\displaystyle \scriptstyle ={\frac {RV}{\hbox{factor}}}}$.

Aerostation.—Possibly the flying dove of Archytas of Tarentum is the earliest suggestion of true aerostation. According to Aulus Gellius (Noctes Atticae) it was a “model of a dove or pigeon formed in wood and so contrived as by a certain mechanical art and power to fly: so nicely was it balanced by weights and put in motion by hidden and enclosed air.” This “hidden and enclosed air” may conceivably represent an anticipation of the hot-air balloon, but it is at least as probable that the apparent flight of the dove was a mere mechanical trick depending on the use of fine wires or strings invisible to the spectators. In the middle ages vague ideas appear of some ethereal substance so light that vessels containing it would remain suspended in the air. Roger Bacon (1214–1294) conceived of a large hollow globe made of very thin metal and filled with ethereal air or liquid fire, which would float on the atmosphere like a ship on water. Albert of Saxony, who was bishop of Halberstadt from 1366 to 1390, had a similar notion, and considered that a small portion of the principle of fire enclosed in a light sphere would raise it and keep it suspended. The same speculation was advanced by Francis Mendoza, a Portuguese Jesuit, who died in 1626 at the age of forty-six, and by Gaspar Schott (1608–1666), also a Jesuit and professor of mathematics at Wurzburg, though for fire he substituted the thin ethereal fluid which he believed to float above the atmosphere. So late as 1755 Joseph Galien (1699–1782), a Dominican friar and professor of philosophy and theology in the papal university of Avignon, proposed to collect the diffuse air of the upper regions and to enclose it in a huge vessel extending more than a mile every way, and intended to carry fifty-four times as much weight as did Noah’s ark. A somewhat different but equally fantastic method of making heavy bodies rise is quoted by Schott from Lauretus Laurus, according to whom swans’ eggs or leather balls filled with nitre, sulphur or mercury ascend when exposed to the sun. Laurus also stated that hens’ eggs filled with dew will ascend in the same circumstances, because dew is shed by the stars and drawn up again to heaven by the sun’s heat during the day. The same notion is utilized by Cyrano de Bergerac (1619–1655) in his romances describing journeys to the moon and sun, for his French traveller fastens round his body a multitude of very thin flasks filled with the morning’s dew, whereby through the attractive power of the sun’s heat on the dew he is raised to the middle regions of the atmosphere, to sink again, however, on the breaking of some of the flasks.

Fig. 1.—Lana’s Aeronautical Machine.

A distinct advance on Schott is marked by the scheme for aerial navigation proposed by the Jesuit, Francis Lana (1631–1687), in his book, published at Brescia in 1670, Prodromo ovvero Saggio di alcune invenzioni nuove promesso all’ Arte Maestra. His idea, though useless and unpractical in so far that it could never be carried out, is yet deserving of notice, as the principles involved are sound; and this can be said of no earlier attempt. His project was to procure four copper balls of very large dimensions (fig. 1),
yet so extremely thin that after the air was exhausted from them they would be lighter than the air they displaced and so would rise; and to those four balls he proposed to attach a boat, with sails, &c., which would carry up a man. He submitted the whole matter to calculation, and proposed that the globes should be about 25 ft. in diameter and 1/225th of an inch in thickness; this would give from all four balls a total ascensional force of about 1200 ℔, which would be quite enough to raise the boat, sails, passengers, &c. But the obvious objection to the whole scheme is, that it would be quite impossible to construct a globe of so large a size and of such small thickness which would even support its own weight without collapsing if placed on the ground, much less bear the external atmospheric pressure when the internal air was removed. Lana himself noticed this objection, but he thought that the spherical form of the copper shell would, notwithstanding its extreme thinness, enable it, after the exhaustion was effected, to sustain the enormous pressure, which, acting equally on every point of the surface, would tend to consolidate rather than to break the metal. His proposal to exhaust the air from the globes by attaching to each a tube 36 ft. long, fitted with a stopcock, and so producing a Torricellian vacuum, suggests that he was ignorant of the invention of the air-pump by Otto von Guericke about 1650.

Invention of the balloon. We now come to the invention of the balloon, which was due to Joseph Michel Montgolfier (1740–1810) and Jacques Etienne Montgolfier (1745–1799), sons of Pierre Montgolfier, a large and celebrated papermaker at Annonay, a town about 40 m. from Lyons. The brothers had observed the suspension of clouds in the atmosphere, and it occurred to them that if they could enclose any vapour of the nature of a cloud in a large and very light bag, it might rise and carry the bag with it into the air. Towards the end of 1782 they inflated bags with smoke from a fire placed underneath, and found that either the smoke or some vapour emitted from the fire did ascend and carry the bag with it. Being thus assured of the correctness of their views, they determined to have a public ascent of a balloon on a large scale. They accordingly invited the States of Vivarais, then assembled at Annonay, to witness their aerostatic experiment; and on the 5th of June 1783, in the presence of a considerable concourse of spectators, a linen globe of 105 ft. in circumference was inflated over a fire fed with small bundles of chopped straw. When released it rapidly rose to a great height, and descended, at the expiration of ten minutes, at the distance of about 1½ m. This was the discovery of the balloon. The brothers Montgolfier imagined that the bag rose because of the levity of the smoke or other vapour given forth by the burning straw; and it was not till some time later that it was recognized that the ascending power was due merely to the lightness of heated air compared to an equal volume of air at a lower temperature. In this balloon, no source of heat was taken up, so that the air inside rapidly cooled, and the balloon soon descended.

Fig. 2.—Charles’ and Robert’s Balloon.

On the 19th of September 1783 Joseph Montgolfier repeated the Annonay experiment at Versailles, in the presence of the king, the queen, the court and an immense number of spectators.
The inflation was begun at one o’clock, and completed in eleven minutes, when the balloon rose to the height of about 1500 ft., and descended after eight minutes, at a distance of about 2 m., in the wood of Vaucresson. Suspended below the balloon, in a cage, had been placed a sheep, a cock and a duck, which were thus the first aerial travellers. They were quite uninjured, except the cock, which had its right wing hurt in consequence of a kick it had received from the sheep; but this took place before the ascent. The balloon, which was painted with ornaments in oil colours, had a very showy appearance (fig. 3).

Fig. 3.—Montgolfier’s Balloon.

All the features of the modern balloon as now used are more or less due to Charles, who invented the valve at the top, suspended the car from a hoop, which was itself attached to the balloon by netting, &c. With regard to his use of hydrogen gas, there are anticipations that must be noticed. As early as 1766 Henry Cavendish showed that this gas was at least seven times lighter than ordinary air, and it immediately occurred to Dr Joseph Black, of Edinburgh, that a thin bag filled with hydrogen gas would rise to the ceiling of a room. He provided, accordingly, the allantois of a calf, with the view of showing at a public lecture such a curious experiment; but for some reason it seems to have failed, and Black did not repeat it, thus allowing a great discovery, almost within his reach, to escape him. Several years afterwards a similar idea occurred to Tiberius Cavallo, who found that bladders, even when carefully scraped, are too heavy, and that China paper is permeable to the gas. But in 1782, the year before the invention of the Montgolfiers, he succeeded in elevating soap-bubbles by inflating them with hydrogen gas. Researches on the use of gas for inflating balloons seem to have been carried on at Philadelphia nearly simultaneously with the experiments of the Montgolfiers; and when the news of the latter reached America, D. Rittenhouse and F. Hopkinson, members of the Philosophical Society at Philadelphia, constructed a machine consisting of forty-seven small hydrogen gas-balloons attached to a car or cage. After several preliminary experiments, in which animals were let up to a certain height by a rope, a carpenter, one James Wilcox, was induced to enter the car for a small sum of money; the ropes were cut, and he remained in the air about ten minutes, and only then effected his descent by making incisions in a number of the balloons, through fear of falling into the river, which he was approaching. Although the news of the Annonay and subsequent experiments in France rapidly spread all over Europe, and formed a topic of general discussion, still it was not till five months after the Montgolfiers had first publicly sent a balloon into the air that any aerostatic experiment was made in England.

First ascents in Great Britain. In November 1783 Count Francesco Zambeccari (1756–1812), an Italian who happened to be in London, made a balloon of oil-silk, 10 ft. in diameter, and weighing 11 ℔. It was publicly shown for several days, and on the 25th it was three-quarters filled with hydrogen gas and launched from the Artillery ground at one o’clock. It descended after two hours and a half near Petworth, in Sussex, 48 m. from London. This was the first balloon that ascended from English ground. On the 22nd of February 1784 a hydrogen gas balloon, 5 ft. in diameter, was let up from Sandwich, in Kent, and descended at Warneton, in French Flanders, 75 m. distant.
This was the first balloon that crossed the Channel. The first person who rose into the air from British ground appears to have been J. Tytler,[1] who ascended from the Comely Gardens, Edinburgh, on the 27th of August 1784, in a fire-balloon of his own construction. He descended on the road to Restalrig, about half a mile from the place where he rose.

Fig. 4.—Lunardi’s Balloon.

Fig. 5.—Blanchard’s Balloon. A, Balloon of taffeta, 26 ft. in diameter, covered with a net. B, Car suspended by cords from hoop C. D,D,D,D, Wings worked by rack-work E. F, Parachute to break the force of descent should the balloon burst. G, Tube communicating with inside of balloon.

Voyages across English Channel. The first balloon voyage across the English Channel was accomplished by Jean Pierre Blanchard (1753–1809) and Dr. J. Jeffries, an American physician, on the 7th of January 1785. In the preceding year, on the 2nd of March, Blanchard, who was one of the most celebrated of the earlier aeronauts, made his first voyage from Paris in a balloon 27 ft. in diameter (fig. 5), and descended at Billancourt near Sevres. Just as the balloon was about to start, a young man jumped into the car and drawing his sword declared his determination to ascend with Blanchard. He was ultimately removed by force. It has sometimes been incorrectly stated that he was Napoleon Bonaparte; his name in reality was Dupont de Chambon. In their Channel crossing Blanchard and his companion, who started from Dover, when about one-third across found themselves descending, and threw out every available thing from the boat or car. When about three-quarters across they were descending again, and had to throw out not only the anchor and cords, but also to strip and throw away their clothing, after which they found they were rising, and their last resource, viz. to cut away the car, was rendered unnecessary. As they approached the shore the balloon rose, describing a magnificent arch high over the land. They descended in the forest of Guinnes. On the 15th of June 1785, Pilâtre de Rozier made an attempt to repeat the exploit of Blanchard and Jeffries in the reverse direction, and cross from Boulogne to England. For this purpose he contrived a double balloon, which he expected would combine the advantages of both kinds—a fire-balloon, 10 ft. in diameter, being placed underneath a gas-balloon of 37 ft. in diameter, so that by increasing or diminishing the fire in the former it might be possible to ascend or descend without waste of gas. Rozier was accompanied by P. A. Romain, and for rather less than half an hour after the aerostat ascended all seemed to be going on well, when suddenly the whole apparatus was seen in flames, and the unfortunate adventurers came to the ground from the supposed height of more than 3000 ft. Rozier was killed on the spot, and Romain only survived about ten minutes. A monument was erected on the place where they fell, which was near the sea-shore, about 4 m. from the starting-point.

Early large balloons. The largest balloon on record (if the contemporary accounts are correct) ascended from Lyons on the 19th of January 1784. It was more than 100 ft. in diameter, about 130 ft. in height, and when distended had a capacity, it is said, of over half a million cubic feet.
It was called the “Flesselles” (from the name of its proprietor, we believe), and after having been inflated from a straw fire in seventeen minutes, it rose with seven persons in the car to the height of about 3000 ft., but descended again after the lapse of about a quarter of an hour from the time of starting, in consequence of a rent in the upper part. Another large fire-balloon, 68 ft. in diameter, was constructed by the chevalier Paul Andreani of Milan, and on the 25th of February he ascended in it from Milan, remaining in the air for about twenty minutes. This is usually regarded as the first ascent in Italy (but see Monck Mason’s Aeronautica, p. 247). On the 7th of November 1836, at half-past one o'clock, a large balloon containing about 85,000 cub. ft. of gas ascended from Vauxhall Gardens, London, carrying Robert Hollond, M.P., Monck Mason and Charles Green, and descended about two leagues from Weilburg, in the duchy of Nassau, at half-past seven the next morning, having thus traversed a distance of about 500 m. in 18 hours; Liege was passed in the course of the night, and Coblentz in the early morning. In consequence of this journey the balloon became famous as the “Nassau Balloon” (fig. 6). Charles Green (1785–1870), who constructed it and subsequently became its owner, was the most celebrated of English aeronauts, and made an extraordinary number of ascents. His first, made from the Green Park, London, on the 19th of July 1821 at the coronation of George IV., was distinguished for the fact that for the first time coal-gas was used instead of hydrogen for inflating the balloon. In 1828 he made an equestrian ascent from the Eagle Tavern, City Road, London, seated on his favourite pony. Such ascents have since been repeated; in 1852 Madame Poitevin made one from Cremorne Gardens, but was prevented from giving a second performance by police interference, the exhibition outraging public opinion. It was in descending from the “Nassau Balloon” in a parachute that Robert Cocking was killed in 1837 (see Parachute). Green was the inventor of the guide-rope, which consists of a long rope trailing below the car. Its function is to reduce the waste of gas and ballast required to keep the balloon at a proper altitude. When a balloon sinks so low that a good deal of the guide-rope rests on the ground, it is relieved of so much weight and therefore tends to rise; if on the other hand it rises so that most of the rope is lifted off the ground, it has to bear a greater weight and tends to sink. Fig. 6.—The Great Nassau Balloon. Directly after Nadar’s two ascents, Eugene Godard constructed a fire-balloon of nearly half a million cubic feet capacity—more than double that of Nadar’s and only slightly less than that attributed to the “Flesselles” of 1783. The air was heated by an 18-ft. stove, weighing, with the chimney, 980 ℔. This furnace was fed by straw; and the “car” consisted of a gallery surrounding it. Two ascents of this balloon, the first fire-balloon seen in London, were made from Cremorne Gardens in July 1864. After the first journey the balloon descended at Greenwich, and after the second at Walthamstow, where it was injured by being blown against a tree. Notwithstanding its enormous size, Godard asserted that it could be inflated in half an hour, and the inflation at Cremorne did not occupy more than an hour. 
In spite of the rapidity with which the inflation was effected, few who saw the ascent could fail to receive an impression unfavourable to the fire-balloon in the matter of safety, as a rough descent, with a heated furnace as it were in the car, could not be other than most dangerous.

Long balloon voyages. In the summer of 1873 the proprietors of the New York Daily Graphic, reviving a project discussed by Green in 1840, determined to construct a very large balloon, and enable the American aeronaut, John Wise, to realize his favourite scheme of crossing the Atlantic Ocean to Europe, by taking advantage of the current from west to east which was believed by many to exist constantly at heights above 10,000 ft. The project came to nothing owing to the quality of the material of which the balloon was made. When it was being inflated in September 1873 a rent was observed after 325,000 cub. ft. of gas had been put in, and the whole rapidly collapsed. The size was said to be such as to contain 400,000 cub. ft., so that it would lift a weight of 14,000 ℔. No balloon voyage has yet been made of a length comparable to the breadth of the Atlantic. In fact only two voyages exceeding 1000 m. are on record—that of John Wise from St Louis to Henderson, N.Y., 1120 m., in 1859, and that of Count Henry de la Vaulx from Paris to Korosticheff in Russia, 1193 m., in 1900. On the 11th of July 1897 Salomon Andrée, with two companions, Strindberg and Fraenkel, ascended from Spitzbergen in a daring attempt to reach the North Pole, about 600 m. distant. One carrier pigeon, apparently liberated 48 hours after the start, was shot, and two floating buoys with messages were found, but nothing more was heard of the explorers.

Scientific ascents. At an early date the balloon was applied to scientific purposes. As far back as 1784, Dr Jeffries made an ascent from London in which he carried out barometric, thermometric and hygrometric observations, also collecting samples of the air at different heights. In 1803 the St Petersburg Academy of Sciences, entertaining the opinion that the experiments made on mountain-sides by J. A. Deluc, H. B. de Saussure, A. von Humboldt and others must give results different from those made in free air at the same heights, resolved to arrange a balloon ascent. Accordingly, on the 30th of January 1804, Sacharof, a member of the academy, ascended in a gas balloon, in company with a French aeronaut, E. G. Robertson, who at one time gave conjuring entertainments in Paris. The ascent was made at a quarter past seven, and the descent effected at a quarter to eleven. The height reached was less than 1½ m. The experiments were not very systematically made, and the chief results were the filling and bringing down of several flasks of air collected at different elevations, and the supposed observation that the magnetic dip was altered. A telescope fixed in the bottom of the car and pointing vertically downwards enabled the travellers to ascertain exactly the spot over which they were floating at any moment. Sacharof found that, on shouting downwards through his speaking-trumpet, the echo from the earth was quite distinct, and at his height was audible after an interval of about ten seconds (Phil. Mag., 1805, 21, p. 193).
Some of the results reported by Robertson appearing doubtful, Laplace proposed to the members of the French Academy of Sciences that the funds placed by the government at their disposal for the prosecution of useful experiments should be utilized in sending up balloons to test their accuracy. The proposition was supported by J. A. C. Chaptal, the chemist, who was then minister of the interior, and accordingly the necessary arrangements were speedily effected, the charge of the experiments being given to L. J. Gay-Lussac and J. B. Biot. The principal object of this ascent was to determine whether the magnetic force experienced any appreciable diminution at heights above the earth’s surface. On the 24th of August 1804, Gay-Lussac and Biot ascended from the Conservatoire des Arts at ten o'clock in the morning. Their magnetic experiments were incommoded by the rotation of the balloon, but they found that, up to the height of 13,000 ft., the time of vibration of a magnet was appreciably the same as on the earth’s surface. They found also that the air became drier as they ascended. The height reached was about 13,000 ft., and the temperature declined from 63° to 51° F. The descent was effected about half-past one, at Meriville, 18 leagues from Paris. In a second experiment, which was made on the 16th of September 1804, Gay-Lussac ascended alone. The balloon left the Conservatoire des Arts at 9.40 a.m., and descended at 3.45 p.m. between Rouen and Dieppe. The chief result obtained was that the magnetic force, like gravitation, did not experience any sensible variation at heights from the earth’s surface which we can attain to. Gay-Lussac also brought down air collected at the height of nearly 23,000 ft., and on analysis it appeared that its composition was the same as that of air collected at the earth’s surface. At the time of leaving the earth the thermometer stood at 82° F., and at the highest point reached (23,000 ft.) it was 14.9° F. Gay-Lussac remarked that at his highest point there were still clouds above him. From 1804 to 1850 there is no record of any scientific ascents in balloons having been undertaken. In the latter year J. A. Bixio (1808–1865) and A. Barral (1819–1884) made two ascents of this kind. In the first they ascended from the Paris observatory on the 29th of June 1850, at 10:27 a.m., the balloon being inflated with hydrogen gas. The day was a rough one, and the ascent took place without any previous attempt having been made to test the ascensional force of the balloon. When liberated, it rose with great rapidity, and becoming fully inflated it pressed upon the network, bulging out at the top and bottom. The ropes by which the car was suspended being too short, the balloon soon covered the travellers like an immense hood. In endeavouring to secure the valve-rope, they made a rent in the balloon, and the gas escaped so close to their faces as almost to suffocate them. Finding that they were descending then too rapidly, they threw overboard everything available, including their coats and only excepting the instruments. The ground was reached at 10h. 45m., near Lagny. Of course no observations were made. Their second ascent was made on the 27th of July, and was remarkable on account of the extreme cold met with. At about 20,000 ft. the temperature was 15° F., the balloon being enveloped in cloud; but on emerging from the cloud, at 23,000 ft., the temperature sank to −38° F., no less than 53° F. below that experienced by Gay-Lussac at the same elevation. 
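The figures just quoted allow a quick back-of-envelope comparison, ours rather than the article's: Gay-Lussac's fall from 82° to 14.9° F. over about 23,000 ft. implies an average decline near 2.9° F. per 1000 ft., against which Bixio and Barral's −38° F. in cloud at the same height stands out.

```python
# Back-of-envelope figures taken from the text: Gay-Lussac's solo ascent of
# 1804 (82 F at the ground, 14.9 F at about 23,000 ft), and Bixio and
# Barral's reading of -38 F in cloud at the same height in 1850.
gl_ground_f, gl_top_f, height_ft = 82.0, 14.9, 23000

lapse = (gl_ground_f - gl_top_f) / (height_ft / 1000)
print(round(lapse, 2), "F per 1000 ft")  # ~2.92, Gay-Lussac's average fall
print(gl_top_f - (-38.0), "F")           # 52.9, the gap the text calls 53 F
```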
The existence of these very cold clouds served to explain certain meteorological phenomena that were observed on the earth both the day before and the day after the ascent. Some pigeons were taken up in this, as in most other high ascents; when liberated, they showed a reluctance to leave the car, and then fell heavily downwards. In July 1852 the committee of the Kew Observatory resolved to institute a series of balloon ascents, with the view of investigating such meteorological and physical phenomena as require the presence of an observer at a great height in the atmosphere. John Welsh (1824–1859) of the Kew Observatory was the observer, and the great "Nassau Balloon" was employed, with Green himself as the aeronaut. Four ascents were made in 1852, viz. on the 17th and 26th of August, the 31st of October and the 10th of November. The heights attained were 19,510, 19,100, 12,680 and 22,930 ft., and the lowest temperatures met with in the four ascents were 8.7° F. (19,380 ft.), 12.4° F. (18,370 ft.), 16.4° F. (12,640 ft.) and 10.5° F. (22,370 ft.). The decline of temperature was very regular. A siphon barometer, dry and wet bulb thermometers, aspirated and free, and a Regnault hygrometer were taken up. Some air collected at a considerable height was found on analysis not to differ appreciably in its composition from air collected near the ground. For the original observations see Phil. Trans., 1853, pp. 311-346. At the meeting of the British Association for the Advancement of Science held at Aberdeen in 1859, a committee was appointed for the purpose of making observations in the higher strata of the atmosphere by means of the balloon. Glaisher's ascents. For two years nothing was effected, owing to the want both of an observer and of a suitable balloon. After its reappointment at the Manchester meeting of 1861, the committee communicated with Henry Tracey Coxwell (1819–1900), an aeronaut who had made a good many ascents, and he agreed to construct a new balloon, of 90,000 cub. ft. capacity, on the condition that the committee would undertake to use it, and pay £25 for each high ascent made especially on its behalf, defraying also the cost of gas, &c., so that the expense of each high ascent amounted to nearly £50. An observer being still wanted, James Glaisher, a member of the committee, offered himself to take the observations, and accordingly the first ascent was made on the 17th of July 1862, from the gas-works at Wolverhampton, this town being chosen on account of its central position in the country. Altogether, Glaisher made twenty-eight ascents, the last being on the 26th of May 1866. Of these only seven were specially high ascents, although six others were undertaken for the objects of the committee alone. On the other occasions he availed himself of public ascents from the Crystal Palace and other places of entertainment, merely taking his place like the other passengers. In the last six ascents another aeronaut and a smaller balloon were employed. The dates, places of ascent and greatest heights (in feet) attained in the twenty-eight ascents were—1862: July 17, Wolverhampton, 26,177; July 30, Crystal Palace, 6937; August 18, Wolverhampton, 23,377; August 20, Crystal Palace, 5900; August 21, Hendon, 14,355; September 1, Crystal Palace, 4190; September 5, Wolverhampton, 37,000; September 8, Crystal Palace, 5428.
1863: March 31, Crystal Palace, 22,884; April 18, Crystal Palace, 24,163; June 26, Wolverton, 23,200; July 11, Crystal Palace, 6623; July 21, Crystal Palace, 3298; August 31, Newcastle-upon-Tyne, 8033; September 29, Wolverhampton, 16,590; October 9, Crystal Palace, 7310. 1864: January 12, Woolwich, 11,897; April 6, Woolwich, 11,075; June 13, Crystal Palace, 3543; June 20, Derby, 4280; June 27, Crystal Palace, 4898; August 29, Crystal Palace, 14,581; December 1, Woolwich, 5431; December 30, Woolwich, 3735. 1865: February 27, Woolwich, 4865; October 2, Woolwich, 1949; December 2, Woolwich, 4628. 1866: May 26, Windsor, 6325. The primary object of the ascents was to determine the temperature of the air, and its hygrometrical state at different elevations to as great a height as could be reached; and the secondary objects were: (1) to determine the temperature of the dew-point by Daniell's and Regnault's hygrometers, as well as by the dry and wet bulb thermometers, and to compare the results; (2) to compare the readings of an aneroid barometer with those of a mercurial barometer up to the height of 5 m.; (3) to determine the electrical state of the air, (4) the oxygenic condition of the atmosphere, and (5) the time of vibration of a magnet; (6) to collect air at different elevations; (7) to note the height and kind of clouds, their density and thickness; (8) to determine the rate and direction of different currents in the atmosphere; and (9) to make observations on sound. The instruments used were mercurial and aneroid barometers, dry and wet bulb thermometers, Daniell's dew-point hygrometer, Regnault's condensing hygrometer, maximum and minimum thermometers, a magnet for horizontal vibration, hermetically sealed glass tubes exhausted of air, and an electrometer. In one or two of the ascents a camera was taken up. The complete observations, both as made and after reduction, are printed in the British Association Reports, 1862–1866; here only a general account of the results can be given. It appeared that the rate of the decline of temperature with elevation near the earth was very different according as the sky was clear or cloudy; and the equality of temperature at sunset and increase with height after sunset were very remarkable facts which were not anticipated. Even at the height of 5 m., cirrus clouds were seen high in the air, apparently as far above as they seem when viewed from the earth. The results of the observations differed very much, and no doubt the atmospheric conditions depended not only on the time of day, but also on the season of the year, and were such that a vast number of ascents would be requisite to determine the true laws with anything approaching to certainty and completeness. It was also clear that England is a most unfit country for the pursuit of such investigations, as, from whatever place the balloon started, it was never safe to be more than an hour above the clouds for fear of reaching the sea. It appeared from the observations that an aneroid barometer could be trusted to read as accurately as a mercurial barometer to the heights reached. The time of vibration of a horizontal magnet was taken in very many of the ascents, and the results of ten different sets of observations indicated that the time of vibration was longer than on the earth. In almost all the ascents the balloon was under the influence of currents of air in different directions which varied greatly in thickness.
The direction of the wind on the earth was sometimes that of the whole mass of air up to 20,000 ft., whilst at other times the direction changed within 500 ft. of the earth. Sometimes directly opposite currents were met with at different heights in the same ascent, and three or four streams of air were encountered moving in different directions. The direct distances between the places of ascent and descent, apart from the movements of the balloon under the influence of these various currents, were always very much greater than the horizontal movement of the air as measured by anemometers. For example, on the 12th of January 1864, the balloon left Woolwich at 2h. 8m. P.M., and descended at Lakenheath, 70 m. distant from the place of ascent, at 4h. 19m. P.M. At the Greenwich Observatory, by a Robinson anemometer, during this time the motion of the air was 6 m. only. With regard to physiological observations, Glaisher found that the frequency of his pulse increased with elevation, as also did the number of inspirations. The number of his pulsations was generally 76 per minute before starting, about 90 at 10,000 ft., 100 at 20,000 ft., and 110 at higher elevations. But a good deal depended on the temperament of the individual. This was also the case in respect to colour; at 10,000 ft. the faces of some would be a glowing purple, whilst others would be scarcely affected; at 4 m. high Glaisher found the pulsations of his heart distinctly audible, and his breathing was very much affected, so that panting was produced by the slightest exertion; at 29,000 ft. he became insensible. In reference to the propagation of sound, it was at all times found that sounds from the earth were more or less audible according to the amount of moisture in the air. When in clouds at 4 m. high, a railway train was heard; but when clouds were far below, no sound ever reached the ear at this elevation. The discharge of a gun was heard at 10,000 ft. The barking of a dog was heard at the height of 2 m., while the shouting of a multitude of people was not audible at heights exceeding 4000 ft. In his ascent of the 5th of September 1862, Glaisher considered that he reached a height of 37,000 ft. But that figure was based, not on actual record, but on the circumstances that at 29,000 ft., when he became insensible, the balloon was rising 1000 ft. a minute, and that when he recovered consciousness thirteen minutes later it was falling 2000 ft. a minute, and the accuracy of his conclusions has been questioned. Few scientific men have imitated Glaisher in making high ascents for meteorological observations. In 1867 and 1868 Camille Flammarion made eight or nine ascents from Paris for scientific purposes. The heights attained were not great, but the general result was to confirm the observations of Glaisher; for an account see Voyages aériens, Paris, 1870, or Travels in the Air, London, 1871, in which also some ascents by W. de Fonvielle are noticed. On the 15th of April 1875, H. T. Sivel, J. E. Croce-Spinelli and Gaston Tissandier ascended from Paris in the balloon "Zenith," and reached a height of 27,950 ft.; but only Tissandier came down alive, his two companions being asphyxiated. This put an end to such attempts for a time. But Dr A. Berson and Lieut. Gross attained 25,840 ft. on the 11th of May 1894; Berson, ascending alone from Strassfurt on the 4th of December 1894, attained about 31,500 ft. and recorded a temperature of −54 deg. F.; and Berson and Stanley Spencer are stated by the latter to have attained 27,500 ft.
on the 15th of September 1898 when they ascended in a hydrogen balloon from the Crystal Palace, the thermometer registering −29 deg. F. On the 31st of July 1901, Berson and R. J. Süring, ascending at Berlin, actually noted a barometric reading corresponding to a height of 34,500 ft., and possibly rose 1000 or 1500 ft. higher, though in spite of oxygen inhalations they were unconscious during the highest portion of the ascent. The personal danger attending such ascents led Gustave Hermite and Besançon in November 1892 to inaugurate the sending up of unmanned balloons (ballons sondes) equipped with automatic recording instruments, and kites (q.v.) have also been employed for similar meteorological purposes. (See also Meteorology.) The balloon had not been discovered very long before it received a military status, and soon after the beginning of the French revolutionary war an aeronautic school was founded at Meudon, in charge of Guyton de Morveau, the chemist. Military balloons. The balloon proved itself very valuable during the siege of Paris (1870–71). It was by it alone that communication was kept up between the besieged city and the external world, as the balloons carried away from Paris the pigeons which afterwards brought back to it the news of the provinces. The total number of balloons that ascended from Paris during the siege, conveying persons and despatches, was sixty-four—the first having started on the 23rd of September 1870, and the last on the 28th of January 1871. Gambetta effected his escape from Paris, on the 7th of October, in the balloon "Armand-Barbès," an event which doubtless led to the prolongation of the war. Of the sixty-four balloons only two were never heard of; they were blown out to sea. One of the most remarkable voyages was that of the "Ville d'Orléans," which, leaving Paris at eleven o'clock on the 21st of November, descended fifteen hours afterwards near Christiania, having crossed the North Sea. Several of the balloons on their descent were taken by the Prussians, and a good many were fired at while in the air. The average size of the balloons was from 2000 to 2050 cubic metres, or from 70,000 to 72,000 cub. ft. The above facts are extracted from Les Ballons du siège de Paris, a sheet published by Buila and Sons, Paris, and compiled by the brothers Tissandier, well-known French aeronauts, which gives the name, size and times of ascent and descent of every balloon that left Paris, with the names of the aeronaut and generally also of the passengers, the weight of despatches, the number of pigeons, &c. Only those balloons, however, are noticed in which some person ascended. The balloons were manufactured and despatched (generally from the platforms of the Orléans or the Northern railway) under the direction of the Post Office. The aeronauts employed were mostly sailors, who did their work very well. No use whatever was made in the war of balloons for purposes of reconnaissance. Ballooning, however, as a recognized military science, only dates back to about the year 1883 or 1884, when most of the powers organized regular balloon establishments. In 1884–85 the French found balloons very useful during their campaign in Tongking; and the British government also despatched balloons with the Bechuanaland expedition, and also with that to Suakin in those years. During the latter campaign several ascents were made in the presence of the enemy, on whom it was said that a great moral effect was produced. The employment of balloons has been common in nearly all modern wars.
We may briefly describe the apparatus used in military operations. The French in the campaigns of the 19th century used varnished silk balloons of about 10,000 cub. ft. capacity. The Americans in the Civil War used much larger ones, those of 26,000 cub. ft. being found the most suitable. These were also of varnished silk. In the present day most nations use balloons of about 20,000 cub. ft., made of varnished cambric; but the British war balloons, made of goldbeater's skin, are usually of comparatively small size, the normal capacity being 10,000 cub. ft., though others of 7000 and 4500 cub. ft. have also been used, as at Suakin. The usual shape is spherical; but since 1896 the Germans, and now other nations, have adopted a long cylindrical-shaped balloon, so affixed to its cable as to present an inclined surface to the wind and thus act partly on the principle of a kite. Though coal-gas and even hot air may occasionally be used for inflation, hydrogen gas is on account of its lightness far preferable. In the early days of ballooning this had to be manufactured in the field, but nowadays it is almost universally carried compressed in steel tubes. About 100 such tubes, each weighing 75℔, are required to fill a 10,000-ft. balloon. Tubes of greater capacity have also been tried. The balloon is almost always used captive. If allowed to go free it will usually be rapidly carried away by the wind and the results of the observations cannot easily be transmitted back. Occasions may occur when such ascents will be of value, but the usual method is to send up a captive balloon to a height of somewhere about 1000 ft. With the standard British balloon two officers are sent up, one of whom has particularly to attend to the management of the balloon, while the other makes the observations. With regard to observations from captive balloons much depends on circumstances. In a thickly wooded country, such as that in which the balloons were used in the American Civil War, and in the war in Cuba (in which the balloon merely served to expose the troops to severe fire), no very valuable information is, as a rule, to be obtained; but in fairly open country all important movements of troops should be discernible by an experienced observer at any point within about four or five miles of the balloon. The circumstances, it may be mentioned, are such as would usually preclude one unaccustomed to ballooning from affording valuable reports. Not only is he liable to be disturbed by the novel and apparently hazardous situation, but troops and features of the ground often have so peculiar an appearance from that point of view that a novice will often have a difficulty in deciding whether an object be a column of troops or a ploughed field. Then again, much will depend on atmospheric conditions. Thus, in misty weather a balloon is well-nigh useless; and in strong winds, with a velocity of anything over 20 m. an hour, efficient observation becomes a matter of difficulty. When some special point has to be reported on, such as whether there is any large body of troops behind a certain hill or wood, a rapid ascent may still be made in winds up to 30 m. an hour, but the balloon would then be so unsteady that no careful scouting could be made. It is usually estimated that a successful captive ascent can only be made in England on half the days of the year.
As a general rule balloon ascents would be made for one of the following objects—to examine the country for an enemy; to reconnoitre the enemy's position; to ascertain the strength of his force, number of guns and exact situation of the various arms; also to note the plan of his earthworks or fortifications. During an action the aerial observer would be on the look-out for any movements of the enemy and give warning of flank attacks or surprises. Such an observer could also keep the general informed as to the progress of various detached parties of his own force, as to the advance of reinforcements, or as to the conduct of any fighting going on at a distance. Balloon observations are also of especial use to artillery in correcting their aim. The vulnerability of a captive balloon to the enemy's fire has been tested by many experiments with variable results.

[Plates (photos: Topical Press): Fig. 1, Clément-Bayard dirigible; Fig. 2, Zeppelin VII ("Deutschland"), wrecked June 28, 1910; Fig. 3, British army dirigible "Beta"; Fig. 4, Parseval dirigible.]

One established fact is that the range of a balloon in mid-air is extremely difficult to judge, and, as its altitude can be very rapidly altered, it becomes a very difficult mark for artillery to hit. A few bullet-holes in the fabric of a balloon make but little difference, since the size of the perforation is very minute as compared with the great surface of material; but, on the other hand, a shrapnel bursting just in front of it may cause a rapid fall. It is therefore considered prudent to keep the balloon well away from an enemy, and two miles are laid down as the nearest approach it should make habitually. Besides being of use on land for war purposes, balloons have been tried in connexion with the naval service. In France especially regular trials have been made of inflating balloons on board ships, and sending them aloft as a look-out; but it is now generally contended that the difficulties of storing the gas and of manoeuvring the balloon are so great on board ship as to be hardly worth the results to be gained. A very important development of military ballooning is the navigable balloon. If only a balloon could be sent up and driven in any required direction, and brought back to its starting-point, it is obvious that it would be of the very greatest use in war. From the very first invention of balloons the problem has been how to navigate them by propulsion. General J. B. M. C. Meusnier (1754–1793) proposed an elongated balloon in 1784. Dirigible balloons. It was experimented on by the brothers Robert, who made two ascensions and claimed to have obtained a deviation of 22° from the direction of a light wind by means of aerial oars worked by hand. The relative speed was probably about 3 m. an hour, and it was so evident that a very much more energetic light motor than any then known was required to stem ordinary winds that nothing more was attempted till 1852, when Henri Giffard (1825–1882) ascended with a steam-engine of then unprecedented lightness. The subjoined table exhibits some of the results subsequently obtained:—
Year | Inventor | Length (ft.) | Diameter (ft.) | Contents (cub. ft.) | Lifting capacity (℔) | Weight of balloon (℔) | Weight of motor (℔) | H.P. | Speed (miles per hour)
1852 | Giffard | 144 | 39 | 88,300 | 3,978 | 2,794 | 462 | 3.0 | 6.71
1872 | Dupuy de Lôme | 118 | 49 | 120,088 | 8,358 | 4,728 | 2000 | 0.8 | 6.26
1884 | Tissandier | 92 | 30 | 37,439 | 2,728 | 933 | 616 | 1.5 | 7.82
1885 | Renard and Krebs | 165 | 27 | 65,836 | 4,402 | 2,449 | 1174 | 9.0 | 14.00
1897 | Schwarz | 157 | 46/39 | 130,500 | 8,133 | 6,800 | 800? | 16.0 | 17.00
1900 | Zeppelin I. | 420 | 39 | 400,000 | 25,000 | 19,000 | 1500 | 32.0 | 18.00
1901 | Santos Dumont VI. | 108 | 20 | 22,200 | .. | .. | .. | 16.20 | 19.00
1908 | "République" | 195 | 35 | 130,000 | 3,100 | .. | .. | 80 | 30
1908 | Zeppelin IV. | 446 | 42½ | 450,000 | .. | .. | .. | 220 | ..

Giffard, the future inventor of the injector, devised a steam-engine weighing, with fuel and water for one hour, 154 ℔ per horse-power, and was bold enough to employ it in proximity to a balloon inflated with coal gas. He was not able to stem a medium wind, but attained some deviation. He repeated the experiment in 1855 with a more elongated spindle, which proved unstable and dangerous. During the siege of Paris the French Government decided to build a navigable balloon, and entrusted the work to the chief naval constructor, Dupuy de Lôme. He went into the subject very carefully, made estimates of all the strains, resistances and speeds, and tested the balloon in 1872. Deviations of 12° were obtained from the course of a wind blowing 27 to 37 m. per hour. The screw propeller was driven by eight labourers, a steam-engine being deemed too dangerous; but it was estimated that had one been used, weighing as much as the men, the speed would have been doubled. Tissandier and his brother applied an electric motor, lighter than any previously built, to a spindle-shaped balloon, and went up twice, in 1883 and 1884. On the latter occasion they stemmed a wind of 7 m. per hour. The brothers abandoned these experiments, which had been carried on at their own expense, when the French War Department took up the problem. Renard and Krebs, the officers in charge of the War Aeronautical Department at Meudon, built the fusiform balloon "La France," in which the "master" or maximum section was about one-quarter of the distance from the stem, and experimented with it in 1884 and 1885. The propelling screw was at the front of the car and driven by an electric motor of unprecedented lightness. Seven ascents were made on very calm days, a maximum speed of 14 m. an hour was obtained, and the balloon returned to its starting-point on five of the seven occasions. Subsequently another balloon was constructed, said to be capable of a speed of 22 to 28 m. per hour, with a different motor. After many years of experiment Dr Wolfert built a cigar-shaped balloon driven by a gasoline motor, and experimented with it in Berlin in 1897. An explosion took place in the air, the balloon fell and Dr Wolfert and his assistant were killed. It was also in 1897 that an aluminium balloon was built from the designs of D. Schwarz and tested in Berlin. It was driven by a Daimler benzine motor, and attained a greater speed than "La France"; but a driving belt slipped, and in coming down the balloon was injured beyond repair. From 1897 onwards Count Ferdinand von Zeppelin, of the German army, was engaged in constructing an immense balloon, truly an airship, of most careful and most intelligent design, to carry five men. It consisted of an aluminium framework containing sixteen gas bags with a total capacity of nearly 400,000 cub.
ft., and it had two cars, each containing a 16 h.p. motor. It was first tested in June 1900, when it attained a speed of 18 m. an hour and travelled a distance of 3 1/2 m. before an accident to the steering gear necessitated the discontinuance of the experiment. In 1905 Zeppelin built a second airship which had a slightly smaller capacity but much greater power, its two motors each developing 85 h.p. This, after making some successful trips, was wrecked in a violent gale, and was succeeded by a third airship, which, at its trial in October 1906, travelled round Lake Constance and showed itself able to execute numerous curves and traverses. At a second series of trials in September 1907, after some alterations had been effected, it attained a speed of 36 m. an hour, remaining in the air for many hours and carrying from nine to eleven passengers. A fourth vessel of similar design, but with more powerful motors, was tried in 1908, and succeeded in travelling 250 m. in 11 hours, but owing to a storm it was wrecked while on land and burnt at Echterdingen on the 5th of August. Subscriptions, headed by the emperor, were at once raised to enable Zeppelin to build another. Meanwhile in 1901 Alberto Santos Dumont had begun experiments with dirigible balloons in Paris, and on the 19th of October won the Deutsch prize by steering a balloon from St Cloud round the Eiffel tower and back in half an hour, encountering on his return journey a wind of nearly 5 metres a second. An airship constructed by Pierre and Paul Lebaudy in 1904 also made a number of successful trials in the vicinity of Paris; with a motor of 40 h.p., its speed was about 25 m. an hour, and it regularly carried three passengers. In October 1907 the "Nulli Secundus," an airship constructed for the British War Office, sailed from Farnborough round St Paul's Cathedral, London, to the Crystal Palace, Sydenham, a distance of about 50 m., in 3 hours 35 minutes. The weight carried, including two occupants, was 3400 ℔., and the maximum speed was 24 m. an hour, with a following wind of 8 m. an hour. Thus the principles which govern the design of the dirigible balloon may be said to have been evolved. As the lifting power grows as the cube of the dimensions, and the resistance approximately as the square, the advantage lies with the larger sizes of balloons, as of ocean steamers, up to the limits within which they may be found practicable. Count Zeppelin gained an advantage by attaching his propellers to the balloon, instead of to the car as heretofore; but this requires a rigid framework and a great increase of weight. Le Compagnon endeavoured, in 1892, to substitute flapping wings for rotary propellers, as the former can be suspended near the centre of resistance. C. Danilewsky followed him in 1898 and 1899, but without remarkable results. Dupuy de Lôme was the first to estimate in detail the resistances to balloon propulsion, but experiment showed that in the aggregate they were greater than he calculated. Renard and Krebs also found that their computed resistances were largely exceeded, and after revising the results they gave the formula R = 0.01685 D²V², R being the resistance in kilograms, D the diameter in metres and V the velocity in metres per second. Reduced to British measures, in pounds, feet and miles per hour, R = 0.0006876 D²V², which is somewhat in excess of the formula computed by Dr William Pole from Dupuy de Lôme's experiments.
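The reduction between the two forms of the formula is easy to verify; the following short Python check (my own, using standard conversion factors, and not part of the original article) confirms that the metric coefficient 0.01685 becomes about 0.00069 in pounds, feet and miles per hour, agreeing with the quoted 0.0006876 to within rounding of the constants:

```python
# Check the unit conversion of R = 0.01685 D^2 V^2 (kg, metres, m/s)
# into pounds, feet and miles per hour. My own verification.
KG_TO_LB = 2.20462
FT_TO_M = 0.3048
MPH_TO_MS = 0.44704

coeff_british = KG_TO_LB * 0.01685 * FT_TO_M**2 * MPH_TO_MS**2
print(coeff_british)  # ~0.00069, matching the quoted 0.0006876
```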
The coefficient just given applies only to the shape and rigging of the balloon "La France," and combines all resistances into one equivalent, which is equal to that of a flat plane 18% of the "master section." This coefficient may perhaps hereafter be reduced by one-half through a better form of hull and car, more like a fish than a spindle, by diminished sections of suspension lines and net, and by placing the propeller at the centre of resistance. To compute the results to be expected from new projects, it will be preferable to estimate the resistances in detail. The following table shows how this was done by Dupuy de Lôme, together with the probable corrections which should have been made by him. Areas are in square metres, air pressures and resistances in kilograms; the first group of columns gives the values computed by Dupuy de Lôme (V = 2.22 m. per sec., air pressure 0.665), the second the more probable values (V = 2.82 m. per sec., air pressure 0.875):—

Part | Area | Coefficient (computed) | Resistance (computed, kg) | Coefficient (probable) | Resistance (probable, kg)
Hull, without net | 172.96 | 1/30 | 3.830 | 1/15 | 10.091
Car | 3.25 | 1/5 | 0.432 | 1/5 | 0.569
Men's bodies | 3.00 | 1/5 | 0.400 | 1/2 | 1.312
Gas tubes | 6.40 | 1/5 | 0.850 | 1/2 | 2.750
Small cords | 10.00 | 1/2 | 3.325 | 1/2 | 4.375
Large cords | 9.90 | 1/3 | 2.194 | 1/3 | 2.887
Total | | | 11.031 | | 21.984

When the resistances have been reduced to the lowest minimum by careful design, the attainable speed must depend upon the efficiency of the propeller and the relative lightness of the motor. The commercial uses of dirigible balloons, however, will be small, as they must remain housed when the wind aloft is brisk. The sizes will be great and costly, the loads small, and the craft frail and short-lived; yet dirigible balloons constitute the obvious type for governments to evolve, until they are superseded by efficient flying machines. (See further, as to the latter, the article Flight and Flying.) Practice of aerostation. The chief danger attending ballooning lies in the descent; for if a strong wind be blowing, the grapnel will sometimes trail for miles over the ground at the rate of ten or twenty miles an hour, catching now and then in hedges, ditches, roots of trees, &c., and, after giving the balloon a terrible jerk, breaking loose again, till at length some obstruction, such as the wooded bank of a stream, affords a firm hold. This danger, however, has been much reduced by the use of the "ripping-cord," which enables a panel to be ripped open and the balloon to be completely deflated in a few seconds, just as it is reaching the earth. But even a very rough descent is usually not productive of any very serious consequences; as, although the occupants of the car generally receive many bruises and are perhaps cut by the ropes, it rarely happens that anything worse occurs. On a day when the wind is light (supposing that there is no want of ballast) nothing can be easier than the descent, and the aeronaut can decide several miles off on the field in which he will alight. It is very important to have a good supply of ballast, so as to be able to check the rapidity of the descent, as in passing downwards through a wet cloud the weight of the balloon is enormously increased by the water deposited on it; and if there is no ballast to throw out in compensation, the velocity is sometimes very great. It is also convenient, if the district upon which the balloon is descending appear unsuitable for landing, to be able to rise again. The ballast consists of fine baked sand, which becomes so scattered as to be inappreciable before it has fallen far below the balloon.
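Returning for a moment to Dupuy de Lôme's resistance table above: each entry is simply area × coefficient × air pressure, and the two totals can be re-derived in a few lines of Python (my own check, not part of the article):

```python
# Recompute the resistance columns of the table above: each entry is
# area * coefficient * air pressure. My own check, not from the article.
parts = [
    # (name, area m^2, coeff (computed), coeff (more probable))
    ("Hull, without net", 172.96, 1/30, 1/15),
    ("Car",                 3.25, 1/5,  1/5),
    ("Men's bodies",        3.00, 1/5,  1/2),
    ("Gas tubes",           6.40, 1/5,  1/2),
    ("Small cords",        10.00, 1/2,  1/2),
    ("Large cords",         9.90, 1/3,  1/3),
]
p_computed, p_probable = 0.665, 0.875  # air pressures at V = 2.22 and 2.82 m/s
total_computed = sum(a * c * p_computed for _, a, c, _ in parts)
total_probable = sum(a * c * p_probable for _, a, _, c in parts)
print(round(total_computed, 3), round(total_probable, 3))  # ~11.04, ~22.03
# (the article's totals, 11.031 and 21.984, differ only by its rounding)
```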
The ballast is taken up in bags containing about 1/2 cwt. each. The balloon at starting is liberated by a spring catch which the aeronaut releases, and the ballast should be so adjusted that there is nearly equilibrium before leaving, else the rapidity of ascent is too great, and has to be checked by parting with gas. It is almost impossible to liberate the balloon in such a way as to avoid giving it a rotary motion about a vertical axis, which continues during the whole time it is in the air. This rotation makes it difficult for those in the car to discover in what direction they are moving; and it is only by looking down along the rope to which the grapnel is suspended that the motion of the balloon over the country below can be traced. The upward and downward motion at any instant is at once known by merely dropping over the side of the car a small piece of paper: if the paper ascends or remains on the same level, the balloon is descending; while, if it descends, the balloon is ascending. This test is exceedingly delicate.

References.—Tiberius Cavallo, Treatise on the Nature and Properties of Air and other permanently Elastic Fluids (London, 1781); idem, History and Practice of Aerostation (London, 1785); Vincent Lunardi, Account of the First Aerial Voyage in England, in a Series of Letters to his Guardian (London, 1785); T. Forster, Annals of some Remarkable Aerial and Alpine Voyages (London, 1832); Monck Mason, Aeronautica (London, 1838); John Wise, A System of Aeronautics, comprehending its Earliest Investigations (Philadelphia, 1850); Hatton Turnor, Astra Castra: Experiments and Adventures in the Atmosphere (London, 1865); J. Glaisher, C. Flammarion, W. de Fonvielle and G. Tissandier, Voyages aériens (Paris, 1870), translated and edited by James Glaisher under the title Travels in the Air (London, 1871); O. Chanute, Progress in Flying Machines (New York, 1894); W. de Fonvielle, Les Ballons sondes (Paris, 1899); idem, Histoire de la navigation aérienne (Paris, 1907); F. Walker, Aerial Navigation (London, 1902); J. Lecornu, La Navigation aérienne (Paris, 1903); M. L. Marchis, Leçons sur la navigation aérienne (Paris, 1904), containing many references to books and periodicals on pp. 701-704; Navigating the Air (papers collected by the Aero Club of America) (New York, 1907); A. Hildebrandt, Airships Past and Present (London, 1908).

1. Mr Tytler contributed largely to, and, indeed, appears to have been virtually editor of, the second edition (1778–1783) of the Encyclopaedia Britannica.
2017-10-20 16:49:14
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 25, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5365142822265625, "perplexity": 2597.4678001320167}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-43/segments/1508187824226.31/warc/CC-MAIN-20171020154441-20171020174441-00058.warc.gz"}
https://www.transtutors.com/questions/hi-i-need-a-very-good-grade-in-this-topic-the-assignment-format-is-provided-with-the-2717345.htm
# Hi, I need a very good grade in this topic. The assignment format is provided with the attached...

Hi, I need a very good grade in this topic. The assignment format is provided with the attached question. Please follow the format and do the assignment according to the format. I need the references as well, please. Thanks.

Document Preview:

Required length: 500 words excluding the references. This only applies to Question 2.

Question 1 (EPS)

The following summarised information is available in relation to 'La Scan', a publicly listed company in Australia.

Statement of comprehensive income extracts for years ended 30th June (all figures $'000):

Profit after tax from | 2018 Continuing | 2018 Discontinued | 2017 Continuing | 2017 Discontinued
Existing operations | 2,000 | (750) | 1,750 | 600
Newly acquired operations* | 450 | .. | nil | ..

* Acquired on the 1st November 2017.

Analysts expect profits from the market sector in which La Scan's existing operations are based to increase by 6% in the year to 30th June 2019, and by 8% in the sector of its newly acquired operations.

On 1st July 2016 La Scan had:
$12 million of $1 ordinary shares in issue.
$5 million 8% convertible debentures 2023; the terms of conversion are 40 equity shares in exchange for each $100 of debenture.

On 1 January 2018 the directors of La Scan were granted options to buy 2 million shares in the company for $1 each. The average market price of La Scan's shares for the year ending 30th June 2018 was $2.50 each.

Assume an income tax rate of 30% for years 2016, 2017 and 2018.

Required:
(i) Calculate La Scan's estimated profit after tax for the year ending 30 June 2019, assuming the analysts' expectations prove correct;
(ii) Calculate the diluted earnings per share (EPS) on the continuing operations of La Scan for the year ended 30 June 2018, and the comparatives for 2017.

Note: There is no word limit for Question 1.

Question 2

Explain the concepts coercive isomorphism and normative isomorphism. How can these concepts be used to explain voluntary corporate reporting practices?

Marking criteria. Marks will be allocated to:
Clear and concise discussion of the key points
Relevance to the topic and evidence of wide reading/research
Presentation – format, spelling, vocabulary, readability
Appropriate referencing, including in-text referencing...

Attachments:
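For readers wanting to see the mechanics behind part (ii), here is a minimal Python sketch of a diluted-EPS calculation (if-converted method for the convertible debentures, treasury-stock method for the options). This is my own illustration under simplifying assumptions (in particular, it ignores period weightings such as the options being granted halfway through the year) and is not offered as the assignment answer.

```python
# Generic diluted-EPS mechanics. Illustrative only, not the assignment answer.
def diluted_eps(profit, shares, *, conv_debt=0.0, coupon=0.0, tax=0.30,
                shares_per_100=0, options=0.0, strike=0.0, avg_price=0.0):
    # if-converted: add back after-tax interest, add the conversion shares
    add_back = conv_debt * coupon * (1 - tax)
    conv_shares = (conv_debt / 100) * shares_per_100
    # treasury-stock: options are dilutive only when avg_price > strike
    opt_shares = 0.0
    if avg_price > strike > 0:
        opt_shares = options - options * strike / avg_price
    return (profit + add_back) / (shares + conv_shares + opt_shares)

# hypothetical inputs, in $'000 and thousands of shares
print(diluted_eps(2450, 12000, conv_debt=5000, coupon=0.08,
                  shares_per_100=40, options=2000, strike=1.0, avg_price=2.5))
```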
2018-10-22 00:25:40
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.24647743999958038, "perplexity": 5153.73222438369}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-43/segments/1539583514437.69/warc/CC-MAIN-20181021224001-20181022005501-00393.warc.gz"}
https://cstheory.stackexchange.com/questions/19193/how-to-find-the-set-of-edges-for-the-directed-graph-associated-with-a-partial-or
# How to find the set of edges for the directed graph associated with a partial order?

I have a set $S$ and a partial order relation $\preceq$ defined on $S$. The way this partial order is given to me is through a function $f:S\times S \to \{true, false\}$, where $f(a,b) = true$ if and only if $a\preceq b$. Given this setup, I can construct a directed graph $D = (S, E)$, where $E = \{(a,b) \in S\times S \mid f(a,b) = true\}$. I can find all the elements of $E$ in $O(|S|^2)$ time by examining all the pairs. I am looking for an algorithm that can take advantage of the properties of a partial order (in particular, transitivity) to reduce the expected time to find all the elements of $E$ to a function linear in $|S|$.

• The size of the output is $\Theta (|S|^2)$ in the worst case. So the algorithm needs time $\Theta (|S|^2)$ just to print the output. – Yury Sep 29 '13 at 16:56
• You're right. I should ask for $O(|S|)$ in the average case. – user765195 Sep 29 '13 at 16:59
• How do you define the "average case"? What is the output size in the average case? – Yury Sep 29 '13 at 17:23

As Yury already mentioned, the output size can be too large to hope for subquadratic time, when measured as a function of the input size $n$. But even when the output size is small, very little can be done. In particular, suppose that the input is a partial order with a single comparable pair, chosen uniformly at random among all such partial orders. Then the output size is $O(1)$ but nevertheless it takes $\Omega(n^2)$ queries to find the comparable pair. This is true regardless of whether you're considering only deterministic algorithms or whether you're doing expected-case analysis of randomized algorithms.
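For reference, the quadratic baseline the question starts from takes only a few lines; the following is a sketch of my own, assuming the order oracle $f$ is given as a Python callable:

```python
from itertools import product

def build_order_graph(S, f):
    # query the oracle on every ordered pair: Theta(|S|^2) queries,
    # which the answers above show is unavoidable in the worst case
    return [(a, b) for a, b in product(S, repeat=2) if a != b and f(a, b)]

# toy example: divisibility as a partial order on {1, ..., 6}
E = build_order_graph(range(1, 7), lambda a, b: b % a == 0)
print(E)  # contains (1, 2), (2, 4), (3, 6), ...
```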
2020-05-26 23:18:02
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8738328814506531, "perplexity": 92.85982909080332}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590347391923.3/warc/CC-MAIN-20200526222359-20200527012359-00288.warc.gz"}
https://socratic.org/questions/how-do-you-multiply-2-4i-3-5i
How do you multiply (2-4i)(3+5i)?

Nov 12, 2016

$(2-4i)(3+5i) = 26 - 2i$

Explanation:

$(2-4i)(3+5i) = (2)(3) + (2)(5i) + (-4i)(3) + (-4i)(5i)$
$\therefore (2-4i)(3+5i) = 6 + 10i - 12i - 20i^2$
$\therefore (2-4i)(3+5i) = 6 - 2i - 20i^2$

Now $i^2 = -1$, so

$\therefore (2-4i)(3+5i) = 6 - 2i + 20$
$\therefore (2-4i)(3+5i) = 26 - 2i$
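The product is also easy to spot-check in any language with built-in complex numbers; for instance, in Python, where the imaginary unit is written j:

```python
# quick numerical check of (2 - 4i)(3 + 5i)
print((2 - 4j) * (3 + 5j))  # (26-2j)
```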
2021-10-24 16:40:20
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 8, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.960300624370575, "perplexity": 11155.699203347625}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323586043.75/warc/CC-MAIN-20211024142824-20211024172824-00322.warc.gz"}
https://memming.wordpress.com/2011/12/08/bayesian-spike-triggered-covariance-analysis/
When the neuron's feature space is low-dimensional, but not one-dimensional, the STA is not sufficient, since it recovers only a 1-dimensional subspace. Spike-triggered covariance (STC) is an extension of STA that can consistently estimate the filters of a multi-dimensional LNP model [Paninski 2003]. Let us denote the zero-mean stimulus distribution as $p(x)$, and the spike-triggered distribution as $q(x)$. Then STA is the mean of $\hat{q}(x)$ (the empirical estimate of $q(x)$), and STC is given by the eigenvectors of the covariance matrix of $\hat{q}(x)$. STC is only a consistent estimator when the stimulus distribution is Gaussian [for details, see Paninski 2003].
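As a concrete illustration, here is a small NumPy sketch of my own (the filter, nonlinearity and sample sizes are invented for the demo): it simulates an LNP-style neuron under Gaussian white noise, forms the spike-triggered ensemble, and computes the STA and the eigendecomposition used in STC analysis.

```python
import numpy as np

rng = np.random.default_rng(0)
T, d = 50_000, 20
X = rng.standard_normal((T, d))            # zero-mean Gaussian stimulus p(x)
w = np.zeros(d); w[:3] = [1.0, -0.5, 0.5]  # hypothetical true filter
y = rng.poisson(np.exp(X @ w - 2.0))       # LNP-style spike counts

Xs = np.repeat(X, y, axis=0)   # spike-triggered ensemble, samples of q(x)
sta = Xs.mean(axis=0)          # spike-triggered average
C = np.cov(Xs, rowvar=False)   # covariance of the spike-triggered ensemble
evals, evecs = np.linalg.eigh(C)
# eigenvectors whose eigenvalues deviate from the stimulus variance (1 here)
# span the subspace recovered by STC analysis
print(np.round(sta[:5], 2), np.round(evals[-3:], 2))
```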
2015-12-01 20:04:26
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 5, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.911288857460022, "perplexity": 2024.4645299879305}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-48/segments/1448398471436.90/warc/CC-MAIN-20151124205431-00067-ip-10-71-132-137.ec2.internal.warc.gz"}
http://gps.ijl.univ-lorraine.fr/Groupe_Physique_Statistique/publi.php?theme=defaut-GPS&lang=fr_FR&titre=Griffiths%20phase%20and%20critical%20behavior%20of%20the%202D%20Potts%20models%20with%20long-range%20correlated%20disorder
# Groupe de Physique Statistique

## Equipe 106, Institut Jean Lamour

### Articles in peer-reviewed journals

Griffiths phase and critical behavior of the 2D Potts models with long-range correlated disorder
Chatelain C.
Phys. Rev. E 89 (2014) 032105
DOI: 10.1103/PhysRevE.89.032105 arXiv: arxiv:1308.0734 HAL: hal-00850126

The $q$-state Potts model with a long-range correlated disorder is studied by means of large-scale Monte Carlo simulations for $q=2,4,8$ and 16. Evidence is given of the existence of a Griffiths phase, where the thermodynamic quantities display algebraic finite-size scaling, in a finite range of temperatures around the self-dual point. The critical exponents are shown to depend on both the temperature and the exponent of the algebraic decay of the disorder correlations, but not on the number of states of the Potts model. The mechanism leading to the violation of hyperscaling relations is observed in the entire Griffiths phase.
2018-06-17 21:42:30
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.543224573135376, "perplexity": 1848.2220634109042}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-26/segments/1529267859817.15/warc/CC-MAIN-20180617213237-20180617233237-00546.warc.gz"}
http://mathoverflow.net/feeds/question/17157
Which finitely presented groups can be distinguished by decidable properties?

Asked by Joel David Hamkins, 2010-03-05 (http://mathoverflow.net/questions/17157).

This question continues the line of inquiry of these three questions: http://mathoverflow.net/questions/15957, http://mathoverflow.net/questions/16532 and http://mathoverflow.net/questions/16565.

Question. Which finitely presented groups can be distinguished by decidable properties?

To be precise, let us say that φ is a decidable property of finitely presented groups, if there is a class A of finitely presented groups, closed under group isomorphisms, such that { p | ⟨p⟩ ∈ A } is decidable, where ⟨p⟩ denotes the group presented by p. That is, we insist that the decision procedure give the same answer for presentations leading to the same group up to isomorphism.

One extreme case, perhaps unlikely, would be that any two non-isomorphic finitely presented groups can be distinguished by decidable properties, so that for any two finitely presented non-isomorphic groups ⟨p⟩ and ⟨q⟩, there is a decidable property φ where φ(p) holds and φ(q) fails. That would be quite remarkable.

If this is not the case, then there would be two finite group presentations p and q, such that the groups presented ⟨p⟩ and ⟨q⟩ are not isomorphic, but they have all the same decidable properties. This also would be remarkable.

Which is the case?

Another way to describe the question is in terms of the equivalence relation ≡, which I introduced in my previous question, where p ≡ q if φ(p) and φ(q) have the same answer for any decidable property φ of finitely presented groups. This is precisely the equivalence relation of "having all the same decidable properties". Of course, this includes the group-isomorphism relation, and the current question is asking: What is this relation ≡? In particular, is ≡ the same as the group isomorphism relation? If it is, then any two non-isomorphic finitely presented groups can be distinguished by decidable properties; if not, then there are two finitely presented non-isomorphic groups ⟨p⟩, ⟨q⟩ having all the same decidable properties.

Henry Wilton has emphasized several times that there are relatively few truly interesting decidable properties of finitely presented groups. This may very well be true. Nevertheless, the answers to the previous MO questions on this topic have provided at least some decidable properties, and my question here is asking the extent to which these properties are able to distinguish any two finitely presented groups.

In particular, in these previous MO questions, Chad Groft inquired whether there were any nontrivial decidable properties of finitely presented groups.
<a href="http://mathoverflow.net/questions/15957#16122" rel="nofollow">John Stillwell's answer</a> was that one could decide many questions about the abelianization of the group. In a <a href="http://mathoverflow.net/questions/16532" rel="nofollow">subsequent question</a>, I inquired whether all decidable properties were really about the abelianization, and <a href="http://mathoverflow.net/questions/15957#16122" rel="nofollow">David Speyer's answer</a> was that no, there were questions about other quotients, such as whether the group had a nontrivial homomorphism into a particular finite group, such as A<sub>5</sub>. In a <a href="http://mathoverflow.net/questions/16565" rel="nofollow">third question</a>, David generalized further and inquired whether all decidable properties depended on the profinitization, and the answer again was no (provided by David and Henry). So at least in these cases we have been increasingly able to separate groups by decidable properties.</p> <p>A generalization of the question would move beyond the decidable properties. For example, if we consider the computably enumerable (c.e.) properties, then we have quite a lot more ability to distinguish groups. A property is c.e. if there is a computable algorithm to determine the positive instances of &phi;(p), but without requiring the negative instances to ever converge on an answer. For example, the word problem for any finitely presented group, or indeed, for any computably presented group, is computably enumerable, since if a word is indeed trivial, we will eventually discover this. Using the same idea as David's answer to my question, it follows that the question of whether a finitely presented group &lang;p&rang; admits a nontrivial homomorphism into the integers Z, say, or many other groups, is computably enumerable. One may simply try out all possible maps of the generators. A generalization of this establishes:</p> <p><b>Theorem.</b> The question of whether one finitely presented group &lang;p&rang; maps homomorphically onto (or into) another &lang;q&rang; is computably enumerable.</p> <p>The proof is that given p and q, one can look for a map of the generators of p to the words of q, such that all relations of p are obeyed by the image in q, and such that all the generators of q are in the range of the resulting map. This is a c.e. property, since one can look for all possible candidates for the map of the generators of p into words of q, and check whether the relations are obeyed and the generators of q are in the range of the map and so on. If they are, eventually this will be observed, and at the point one can be confident that &lang;p&rang; maps onto &lang;q&rang;. More generally, is the isomorphism relation itself c.e.? It is surely computable from the halting problem 0', since we could ask 0' whether the kernel of the proposed map was trivial or not, and it would know the answer. </p> <ul> <li>Where does the isomorphism relation on finitely presented groups fit into the hierarchy of Turing degrees? Is it c.e.? Is it Turing equivalent to the Halting problem?</li> </ul> <p>Once one moves to the c.e. properties, it is similarly natural to move beyond the finitely presented groups to the computably presented groups (those having a computable presentation, not necessarily finitely generated). In this context, the proof above no longer works, and the natural generalization of the question asks:</p> <ul> <li>Which computably presented groups are distinguished by c.e. 
properties?</li> </ul> <p>The isomorphism relation on finitely generated computably presented groups (given the presentations) seems to be computable from the halting problem for the same reason as in the proof above, but now one doesn't know at a finite stage that the proposed map of the generators will definitely work, since one must still check all the relations-yet-to-be-enumerated. But 0' knows the answer, so we get it computably in 0'. In the infinitely generated case, however, things are more complicated.</p> http://mathoverflow.net/questions/17157/which-finitely-presented-groups-can-be-distinguished-by-decidable-properties/17161#17161 Answer by Bjorn Poonen for Which finitely presented groups can be distinguished by decidable properties? Bjorn Poonen 2010-03-05T04:51:31Z 2010-03-06T17:30:18Z <blockquote> <p>The isomorphism relation for finitely presented groups is c.e., and in fact is Turing equivalent to the halting problem. </p> </blockquote> <p><strong>Proof:</strong> To check whether two finitely presented groups $G$ and $H$ are isomorphic, search for data that might describe maps $G \to H$ and $H \to G$ and for words that show that it is a consequence of the relations that the composition in either order maps each generator to itself, and verify that the relations of $G$ map to $1$ in $H$ and vice versa so that the maps are well-defined and that the remaining data shows that they are inverse isomorphisms. Thus the isomorphism relation is c.e. It is also no <em>easier</em> than the halting problem, because an algorithm even for deciding whether a finitely presented group is trivial could be used to solve the halting problem. (That is how it is known that triviality is an undecidable property, by reductions passing through the word problem along the way.) $\square$</p> <p><strong>EDIT:</strong> As for your "main question" asking whether decidable properties can distinguish every pair of non-isomorphic finitely presented groups, I'll prove the following negative result:</p> <blockquote> <p>There is no c.e. set $\mathcal{F}$ of decidable properties such that any two non-isomorphic finitely presented groups can be distinguished by some $\phi \in \mathcal{F}$.</p> </blockquote> <p>(Call a set $\mathcal{F}$ of decidable properties <em>c.e.</em> if there is a Turing machine that produces a sequence of algorithms, each of which is guaranteed to compute a decidable property, such that these decidable properties are exactly the ones in $\mathcal{F}$.)</p> <p><strong>Proof:</strong> Suppose that $\mathcal{F}$ exists. Then we could decide whether an arbitrary finitely presented group $G$ is trivial as follows: By day, search for an isomorphism between $G$ and <code>$\{1\}$</code> (this search is possible since the isomorphism relation is c.e.) By night, search for a decidable property $\phi \in \mathcal{F}$ such that $\phi$ distinguishes $G$ and <code>$\{1\}$</code>. If $\mathcal{F}$ does what it claims to, then one of these processes will terminate and tell you whether <code>$G \simeq \{1\}$</code>. But triviality is known to be an undecidable property. $\square$</p> <p>This leaves open the question of whether there is a non-c.e. $\mathcal{F}$ that does the job, but even if there were one, it wouldn't be of much use from the practical point of view!</p>
2013-06-20 01:01:07
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8703359961509705, "perplexity": 718.5267354373734}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368709947846/warc/CC-MAIN-20130516131227-00035-ip-10-60-113-184.ec2.internal.warc.gz"}
http://academic.research.microsoft.com/Publication/27601907/fast-optical-source-for-quantum-key-distribution-based-on-semiconductor-optical-amplifiers
A novel integrated optical source capable of emitting faint pulses with different polarization states and with different intensity levels at 100 MHz has been developed. The source relies on a single laser diode followed by four semiconductor optical amplifiers and thin film polarizers, connected through a fiber network. The use of a single laser ensures a high level of indistinguishability in time and spectrum of the pulses for the four different polarizations and three different levels of intensity. The applicability of the source is demonstrated in the lab through a free space quantum key distribution experiment which makes use of the decoy state BB84 protocol. We achieved a lower bound secure key rate of the order of 3.64 Mbps and a quantum bit error ratio as low as $1.14\times 10^{-2}$, while the lower bound secure key rate became 187 bps for an equivalent attenuation of 35 dB. To our knowledge, this is the fastest polarization encoded QKD system which has been reported so far. The performance, reduced size, low power consumption and the fact that the components used can be space qualified make the source particularly suitable for secure satellite communication.
2014-03-12 08:05:34
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3861832916736603, "perplexity": 682.081234152071}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-10/segments/1394021512937/warc/CC-MAIN-20140305121152-00089-ip-10-183-142-35.ec2.internal.warc.gz"}
http://www.imprimerie-luthringer.fr/s355jr-steel/1d3b7f1f8a4786.html
## s420 c beam tensile strength

### "C" channel beam loads - Structural engineering general

Feb 10, 2003 · A36 tensile yield stress = 36 ksi. For bending stress, I assume that the 3" "C" channel is ok since 26431 < 36000. Or do I need to allow for a safety margin, say a maximum design stress of only 60% of yield stress? For horizontal shear stress, I don't know.

### "C" channel beam loads - Structural engineering general

Feb 10, 2003 · RE: "C" channel beam loads. Moment should be inch-lbs, not inches PER lb. The moment of inertia (1.66) is in units of in^4, and with the distance you end up with lbs/in^2, as you should. I'm not sure I understand your shear calculation ... of the beam (1.21 in.) to get a shear of 600 lbs/in^2.

### BS EN 10025 S275, S355, S420, S460 - Katalor Industry

The number quoted in the designation is the value of yield strength for material up to 16 mm thick (or 12 mm for steel to BS 7668). Designers should note that yield strength reduces with increasing plate or section thickness. An example for common bridge steels to BS EN 10025 is given below.

### Structural Steel - BS EN 10025 S420 | BS EN 10025-S420

S420 structural steel plate is a high-strength low-alloy grade and falls within the European EN 10025: 2004 standard. EN 10025: 2004 is the new European structural steel standard established by the European Committee for Iron and Steel Standardization. S420 structural steel plate is only produced as normalized or thermomechanically rolled material.

### BS EN 10025 S275 / S355 / S420 / S460 - Steel Exporter

The strength grades covered by the BS EN standards include S235, S275, S355, S420 and S460. (BS 7668 covers one strength grade, S345.) Yield strengths above 460 N/mm² are available to BS EN 10025: Part 6, but BS 5400 and other design codes do not yet cover the use of these strengths. Grade S235 steel is rarely used in bridge steelwork.

### Beam Calculator | strength deflection H-Beams I-Beams

Beam Strength and Deflection Calculator. A beam is any structural member significantly longer than it is wide or deep. The term 'significantly', however, means different things to different people. To some people, twice as long is sufficient; others would consider five times as long to be too short and would therefore consider such a member to be a plate, a frame or a structure.

### Beam Stress & Deflection | MechaniCalc

Many structures can be approximated as a straight beam or as a collection of straight beams. For this reason, the analysis of stresses and deflections in a beam is an important and useful topic. This section covers shear force and bending moment in beams, shear and moment diagrams, stresses in beams, and a table of common beam deflection formulas.

The shear diagram is horizontal for distances along the beam with no applied load. The shear at any point along the beam is equal to the slope of the moment at that same point: $$V = { dM \over dx }$$. The moment diagram is a straight, sloped line for distances along the beam with no applied load.
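The check discussed in the first thread above is the standard extreme-fiber formula $\sigma = Mc/I$ compared against an allowable stress. A minimal Python sketch (the 60% usage factor is the one the poster proposes; the fiber distance $c$ is my assumption for a 3 in deep channel, since the thread does not state it):

```python
def bending_stress_psi(moment_inlb, c_in, i_in4):
    """Extreme-fiber bending stress sigma = M * c / I."""
    return moment_inlb * c_in / i_in4

def within_allowable(stress_psi, yield_psi=36000.0, usage=0.60):
    """Check against an allowable stress, here 60% of A36 yield as proposed."""
    return stress_psi <= usage * yield_psi

i_in4 = 1.66                       # moment of inertia of the 3 in channel, from the thread
c_in = 1.5                         # assumed fiber distance: half of a 3 in depth
m_inlb = 26431.0 * i_in4 / c_in    # moment implied by the quoted 26,431 psi
print(bending_stress_psi(m_inlb, c_in, i_in4))  # 26431.0
print(within_allowable(26431.0))   # False: exceeds 0.6 * 36 ksi = 21,600 psi
```

So the quoted 26,431 psi is below raw yield but would fail the poster's own proposed 60%-of-yield design margin.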
### Properties of Reinforced Concrete Steel Rebars Exposed ...

The results show that the decrease of bending strength from normal temperature to 400°C is 10%, from normal temperature to 600°C is 22%, and from normal temperature to 800°C is 29%.

### S235, S275 and S355 Structural Steels - AZoM

Feb 13, 2018 · The tensile strength of structural steel relates to the point at which permanent deformation takes place when the material is stretched or pulled laterally along its length. Structural Steel Grade; Tensile Strength MPa at nominal thickness between 3 mm and 16 mm.

### Strength of Materials Basics and Equations | Mechanics of Materials

Strength / Mechanics of Material Menu. Strength of materials, also called mechanics of materials, is a subject which deals with the behavior of solid objects subject to stresses and strains. In materials science, the strength of a material is its ability to withstand an applied load without failure.

### Strength of Materials Basics and Equations | Mechanics of Materials

The constant, E, is the modulus of elasticity, Young's modulus or the tensile modulus, and is the material's stiffness. Young's modulus is in terms of 10^6 psi or 10^3 kg/mm². If a material obeys Hooke's Law it is elastic. The modulus is insensitive to a material's temper.

### Tensile Strength - an overview | ScienceDirect Topics

G.D. Quinn, in Encyclopedia of Materials: Science and Technology, 2001. 1 Uniaxial Tensile Strength. Tensile strength of brittle materials depends upon the size, severity, and spatial distribution of the flaws and the stress distribution. The larger the body or test piece, the weaker it is likely to be, since a large part has a greater chance of having a large flaw.

### The Strength of Concrete - Chapter 3 - iccsafe.org

Chapter 3: 3.1 The Importance of Strength; 3.2 Strength Level Required. KINDS OF STRENGTH: 3.3 Compressive Strength; 3.4 Flexural Strength; 3.5 Tensile Strength; 3.6 Shear, Torsion and Combined Stresses; 3.7 Relationship of Test Strength to the Structure. MEASUREMENT OF STRENGTH: 3.8 Job-Molded Specimens; 3.9 Testing of Hardened Concrete. FACTORS AFFECTING STRENGTH: 3.10

### What are the strength tests? - ACPA

Flexural Tests. The concrete strength used in the design of concrete pavements is based on AASHTO Test Method T-97 or ASTM C78, Flexural Strength of Concrete using a Simple Beam with Third-Point Loading (see Figure 1 below). These flexural tests (also called Modulus of Rupture tests or Third-Point Loading tests) are performed using concrete beams that have been cast and cured in the field, to ...

### s420 universal beam property - betonnejeugd.be

EN S420 universal beam dimension. China steel offer MS steel plate, shipbuilding steel; a variety of steel plates are now being produced and marketed in addition to ordinary carbon steel plates. These include high-tensile strength steel plates with ...

### Flexural Strength of Beams / Effect of Copper Slag on Split Tensile Strength of Concrete

Totally 15 cylindrical specimens were tested for finding split tensile strength in accordance with ASTM C 496-96.
The splitting tensile strength was determined by using the formula $F_t = 2P/(\pi L D)$. The results from the splitting tensile test at 28 days are presented in Table 6.

### High Strength Steel

Table 2 shows the steel strengths for S460. As the yield strength increases, members may move into a higher (more onerous) class. In Table 5.2 of BS EN 1993-1-1, the classification limits are based on $\varepsilon$, which itself is based on the yield strength of the steel. As the yield strength increases, $\varepsilon$ decreases, so each limit, based on $\varepsilon$, will decrease.

### I Beam Tensile Strength Chart - New Images Beam

Jul 21, 2019 · I Beam Tensile Strength Chart. July 21, 2019 - by Arfan.

### MECHANICAL PROPERTIES OF CARBON STEEL AND STAINLESS STEEL

C.1. Mechanical properties of carbon steel. C.1.1. Mechanical properties of carbon steel at room temperature (20 ºC). Before presenting the mechanical properties of carbon steel at elevated temperature, the properties at room temperature will be given.

### Properties of Reinforced Concrete Steel Rebars Exposed to ...

The yield strength losses of both S220 and S420 steel rebars were 46% and 84% for 800 °C exposure temperature, respectively. For a further increase of temperature to 950 °C, the yield strength decreases were 64% and 89%, respectively.

### S420MC / 1.0980 - SteelNumber - Chemical composition

Chemical composition of steel S420MC (1.0980), standards of steel S420MC (1.0980), mechanical properties of steel S420MC (1.0980), equivalent grades of steel S420MC (1.0980): tensile strength, elongation, proof strength, hardness.
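The classification parameter in the High Strength Steel passage above is the standard EN 1993-1-1 quantity $\varepsilon = \sqrt{235/f_y}$. A quick Python check of how it shrinks with grade (a sketch; yield strengths in MPa):

```python
import math

def epsilon(fy_mpa):
    """EN 1993-1-1 cross-section classification parameter, sqrt(235 / f_y)."""
    return math.sqrt(235.0 / fy_mpa)

for fy in (235, 355, 420, 460):
    print(fy, round(epsilon(fy), 3))  # 1.0, 0.814, 0.748, 0.715
```

Higher-strength grades thus face tighter (more onerous) classification limits, as the passage states.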
Their fields of application include the man-ufacture of : Longitudinal beams Frames Cold-pressed parts Cold-rolled sections and Structural pipes ### Table of material properties for structural steel S235 s420 c beam tensile strength Material Design Properties for Structural Steel S235, S275, S355, S420, S450, S460 according to EN1993-1-1 §3.2.6; Material Property Value; Density 7850 kg/m 3: Unit weight 78.5 kN/m 3: Modulus of elasticity E (Young's modulus) 210000 MPa: Shear modulus G: G = E / [2 (1 + ) ] 81000 MPa: Yield strength f y: see table below: Ultimate strength f uThe Big Difference between EN 10025-2 S235, S275, S355 I You are here : section steel--Katalor steel > News > The Big Difference between EN 10025-2 S235, S275, S355 I Beam The Big Difference between EN 10025-2 S235, S275, S355 I BeamThe Big Difference between EN 10025-2 S235, S275, S355 I BeamYou are here : section steel--Katalor steel > News > The Big Difference between EN 10025-2 S235, S275, S355 I Beam The Big Difference between EN 10025-2 S235, S275, S355 I Beam ### What is the formula for tensile strength? How is this s420 c beam tensile strength Feb 05, 2018 · Tensile strength is the stress at which a force applied causes the material to lengthen then break. For an axially load material the breaking strength in tension is s=P/a where s is the breaking strength , P is the force that can cause it to break and a is the cross sectional area.What is the Tensile Strength of Steel | All Metals & Forge s420 c beam tensile strengthTensile strength is an important measure of a materials ability to perform in an application, and the measurement is widely used when describing the properties of metals and alloys. The tensile strength of an alloy is most commonly measured by placing a test piece in the jaws of a tensile machine.en s420 i beam application - betonnejeugd.beEN S420 C beam application,China steel offer MS steel plate,shipbuilding steel,variety of steel plates are now being produced and marketed in addition to ordinary carbon steel plates. These include high-tensile strength steel plates with ### en s420 i beam application - betonnejeugd.be en s420 i beam application. EN S420 C beam application,China steel offer MS steel plate,shipbuilding steel,variety of steel plates are now being produced and marketed in addition to ordinary carbon steel plates. These include high-tensile strength steel plates withs420 hea beams mill - Carbon steels - A588 Corten Weather s420 c beam tensile strengthKeeps more than 10,000 tons hot rolled and cold rolled EN S420 HEA beams, pressure vessel steel plate grades in stock each month. European standard S420 HEB beams mill Do you want any logistics service, then send mail: [email protected]
2020-10-26 12:07:21
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.35324627161026, "perplexity": 5455.37522806535}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107891228.40/warc/CC-MAIN-20201026115814-20201026145814-00337.warc.gz"}
https://eduzip.com/ask/question/a-device-used-to-measure-volume-of-liquids-is-273201
Physics

# A device used to measure volume of liquids is

Burette
Pipette

##### SOLUTION

Both burettes and pipettes are used to measure the volume of liquids, so both options are correct.

Multiple Correct Medium Published on 18th 08, 2020

#### Related Questions

Q1 Single Correct Medium

Dimensional formula of power is: (West Bengal/NTSE Stage-I/2013)

• A. $[M^{2}L^{2}T^{-2}]$
• B. $[ML^{2}T^{-3}]$
• C. $[M^{2}LT^{-3}]$
• D. $[MLT^{-2}]$

Q2 Subjective Medium

Find the area enclosed by the curve $y=\sin {x}$ and the X-axis between $x=0$ and $x=\pi$.

Q3 Single Correct Medium

The measures of two quantities, along with the precision of the respective measuring instruments, are $A = 2.5\ ms^{-1} \pm 0.5\ ms^{-1}$, $B = 0.10\ s \pm 0.01\ s$. The value of $AB$ will be

• A. $(0.25 \pm 0.08)\ m$
• B. $(0.25 \pm 0.8)\ m$
• C. $(0.25 \pm 0.05)\ m$
• D. $(0.25 \pm 0.135)\ m$

Q4 Subjective Medium

State the number of significant figures in the following: $0.2370\,g\,cm^{-3}$

Q5 Subjective Medium

The distance of a galaxy is $56 \times10^{25}\,m$. Assume the speed of light to be $3\times10^8\,m\,s^{-1}$. Express the order of magnitude of the time taken by light to travel to the galaxy.

All questions asked in: Physics - Units and Measurement.
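Q3 above is a standard error-propagation exercise: for a product, the first-order absolute uncertainty is $\Delta(AB) \approx A\,\Delta B + B\,\Delta A$. A quick check (a sketch, not part of the original page):

```python
A, dA = 2.5, 0.5       # m/s, with instrument precision
B, dB = 0.10, 0.01     # s, with instrument precision
AB = A * B
dAB = A * dB + B * dA  # first-order propagation for a product
print(AB, dAB)         # 0.25 0.075 -> reported as (0.25 +/- 0.08) m, option A
```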
2021-12-05 15:11:30
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7097553610801697, "perplexity": 9892.724469308425}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964363189.92/warc/CC-MAIN-20211205130619-20211205160619-00037.warc.gz"}
https://paperswithcode.com/paper/hilbert-space-embeddings-and-metrics-on
# Hilbert space embeddings and metrics on probability measures

A Hilbert space embedding for probability measures has recently been proposed, with applications including dimensionality reduction, homogeneity testing, and independence testing. This embedding represents any probability measure as a mean element in a reproducing kernel Hilbert space (RKHS). A pseudometric on the space of probability measures can be defined as the distance between distribution embeddings: we denote this as $\gamma_k$, indexed by the kernel function $k$ that defines the inner product in the RKHS. We present three theoretical properties of $\gamma_k$. First, we consider the question of determining the conditions on the kernel $k$ for which $\gamma_k$ is a metric: such $k$ are denoted *characteristic kernels*. Unlike pseudometrics, a metric is zero only when two distributions coincide, thus ensuring the RKHS embedding maps all distributions uniquely (i.e., the embedding is injective). While previously published conditions may apply only in restricted circumstances (e.g. on compact domains), and are difficult to check, our conditions are straightforward and intuitive: bounded continuous strictly positive definite kernels are characteristic. Alternatively, if a bounded continuous kernel is translation-invariant on $\mathbb{R}^d$, then it is characteristic if and only if the support of its Fourier transform is the entire $\mathbb{R}^d$. Second, we show that there exist distinct distributions that are arbitrarily close in $\gamma_k$. Third, to understand the nature of the topology induced by $\gamma_k$, we relate $\gamma_k$ to other popular metrics on probability measures, and present conditions on the kernel $k$ under which $\gamma_k$ metrizes the weak topology.
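For intuition, $\gamma_k$ can be estimated from samples; below is a minimal numpy sketch (mine, not from the paper) of the biased empirical estimate of $\gamma_k^2$ with a Gaussian kernel, which is characteristic by the criteria above (bounded, continuous, translation-invariant on $\mathbb{R}^d$ with full-support Fourier transform):

```python
import numpy as np

def gaussian_kernel(x, y, sigma=1.0):
    """k(x, y) = exp(-||x - y||^2 / (2 sigma^2)) for row-wise sample matrices."""
    d2 = np.sum((x[:, None, :] - y[None, :, :]) ** 2, axis=-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def mmd2(x, y, sigma=1.0):
    """Biased estimate of gamma_k(P, Q)^2 = E k(X,X') + E k(Y,Y') - 2 E k(X,Y)."""
    return (gaussian_kernel(x, x, sigma).mean()
            + gaussian_kernel(y, y, sigma).mean()
            - 2.0 * gaussian_kernel(x, y, sigma).mean())

rng = np.random.default_rng(0)
x = rng.normal(0.0, 1.0, size=(500, 1))  # samples from P
y = rng.normal(0.5, 1.0, size=(500, 1))  # samples from Q
print(mmd2(x, y))  # clearly positive here; near zero when the distributions coincide
```

Because the kernel is characteristic, the population value is zero only when the two distributions coincide; the paper's second result cautions, however, that distinct distributions can still be arbitrarily close in $\gamma_k$.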
2022-08-17 23:23:09
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8466292023658752, "perplexity": 444.60032047006933}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882573118.26/warc/CC-MAIN-20220817213446-20220818003446-00023.warc.gz"}
https://tex.stackexchange.com/questions/132222/overlong-line-even-with-suggestions-for-hyphenation
# Overlong line even with suggestions for hyphenation

I have a paragraph which starts with

\documentclass[a4paper,11pt,twoside]{book}
\setlength{\textwidth}{14.6cm}
\hyphenation{MnTPPCl CoTPP}
\begin{document}
We studied surface doping with the following molecules: 2,3,5,6-tetrafluoro-7,7,8,8-tetra\-cyano\-quino\-dimethane (F$_4$TCNQ), manganese(III)-tetra\-phenyl\-porphyrin-chloride (MnTPPCl), cobalt(II)-tetra\-phenyl\-porphyrin (CoTPP), and fullerene (C$_{60}$).
\end{document}

which ends up giving an overlong line at "(MnTPPCl)". Even with possibilities for hyphenation given, it produces the overlong line with "(MnTPPCl)" sticking out. Why doesn't it just make a line break before "(MnTPPCl)"? How can I prevent it?

Without a full example, it is impossible to test, but presumably breaking the line before "(MnTPPCl)" would require stretching the white space more than the specified values. You can use \sloppy to tell LaTeX to allow white space to stretch more than its usual limits, or, as egreg suggested, set \emergencystretch, which is a less aggressive change to the typesetting quality:

\documentclass[a4paper,11pt,twoside]{book}
\setlength{\textwidth}{14.6cm}
\hyphenation{MnTPPCl CoTPP}
\begin{document}
\setlength\emergencystretch{2em}
We studied surface doping with the following molecules: 2,3,5,6-tetrafluoro-7,7,8,8-tetra\-cyano\-quino\-dimethane (F$_4$TCNQ), manganese(III)-tetra\-phenyl\-porphyrin-chloride (MnTPPCl), cobalt(II)-tetra\-phenyl\-porphyrin (CoTPP), and fullerene (C$_{60}$).
\end{document}

• I updated it with a minimal example producing the mentioned overlong line. – Tanja Sep 8 '13 at 13:48
• Rather than \sloppy, something like \setlength{\emergencystretch}{.3\textwidth} might give better results, reserving \sloppy or (better) the sloppypar environment to very tough cases. – egreg Sep 8 '13 at 14:01
• @egreg true but I was thinking chemical names with hyphenation patterns designed for natural language probably is a "tough case" so reached straight for the sledgehammer. – David Carlisle Sep 8 '13 at 14:39
• @user2758804 see update – David Carlisle Sep 8 '13 at 14:47
• @user2758804 -- the line you would like broken has only two spaces in it. tex can distribute the stretch needed for breaking a line only at the existing spaces, so there is almost no flexibility in this example. applying \emergencystretch as suggested by egreg really requires some attention to the particular circumstance. sloppypar is probably the most appropriate action here, as its effect is more "surgical" than global. – barbara beeton Sep 8 '13 at 17:06
2019-10-23 23:44:01
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9109143018722534, "perplexity": 5593.222962409996}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570987836368.96/warc/CC-MAIN-20191023225038-20191024012538-00382.warc.gz"}
https://www.physicsforums.com/threads/couplings-of-fermions-and-bosons-to-the-higgs.327271/
# Couplings of fermions and bosons to the Higgs

1. Jul 27, 2009

### arestes

1. The problem statement, all variables and given/known data

I have to show that the couplings to the Higgs ($W^+W^-h$, $ZZh$, $hhh$, and $e^+e^-h$) are proportional to the mass squared (for bosons) or the mass (for fermions) of the particles. But according to this problem I don't have to explicitly construct the interaction terms in the Lagrangian.

2. Relevant equations

According to Peskin & Schroeder, page 716, we can construct the interaction terms and explicitly read off the couplings, but that's not the method I'm looking for. I am supposed to only show that they have to be proportional to mass or mass squared, depending on the statistics of the particles involved.

3. The attempt at a solution

I tried to use the mass dimensions of the wave functions involved, using the Lagrangian dimension = 4, but that didn't help to relate it to the couplings. It might be necessary to use the properties of the Higgs (symmetry-breaking mass) but I don't know how to do it.

2. Jul 27, 2009

### Dick

What's wrong with your idea of considering the mass dimension of the fields? The mass terms have to be quadratic in the fields, with the Higgs terms providing the extra mass dimension to get it up to $m^4$.

3. Jul 27, 2009

### arestes

Well, the thing is that I only get the dimensions of the wavefunctions after I know what the interaction term in the Lagrangian looks like! Which is a pretty standard exercise in QFT, not specifically related to the Higgs. But I don't know what the interaction term looks like beforehand. According to Peskin & Schroeder, page 716, for fermions it is

$$L_f = -m_f \overline{f}f \left(1 + \frac{h}{v} \right)$$

where $f$ and $h$ are the wavefunctions of the fermion field and the Higgs, respectively. $v$ is the vacuum expectation value for a scalar field $\phi$ from which, after reparametrizing it with the $h$ field, we get a minimum at that point. (All this is in P&S pages 715-716.) I don't think it will help me find the couplings. By couplings I mean the vertex rules for Feynman diagrams, of course, which could be read off from the interaction terms. Besides, knowing the dimension of the wavefunction AND knowing that the whole term must be of mass dimension 4 doesn't account for other constants that may go with the masses in that term... so I think the approach must be different.

4. Jul 27, 2009

### Dick

You can get the dimensions of the fields by looking at the kinetic terms, the free part of the Lagrangian that defines the propagators. Which basically come from the Klein-Gordon and Dirac equations. You don't need to peek at the interaction terms first. That will tell you what the dimensions of the couplings must be, but won't tell you their exact form, naturally. But you are only asked for the dimensions, right?

Last edited: Jul 27, 2009

5. Jul 28, 2009

### arestes

Thanks for answering... but I actually need to show that the couplings must be proportional to the indicated masses, not only the dimensions... But even so, I found that the mass dimension of the fermion fields is 3/2 and for bosons it is 1... That means that for an interaction of the type $f \overline{f} h$ I have mass dimension 4 already, and the coupling that should be multiplying this should have mass dimension 0! How do I show it is supposed to be proportional to the mass of the fermions?? :S

Also, for bosons like $W$ in $W^+ W^- h$, the coupling that must multiply this has mass dimension 1 (and I must show that it should be proportional to the squared mass of the $W$). Working with mass dimensions only gives dimensions but doesn't tell me which masses go where... (for example, in $W^+ W^- h$ it is the mass squared of the $W$ but not of the $h$). That's why I dropped this idea.

6. Jul 28, 2009

### Dick

I've got to assume that the question they are asking is, e.g., that the boson mass term is $W \cdot W \cdot m(\ldots)$, where $m(\ldots)$ is some function of the other fields in the problem. They want to know the dimensions of $m(\ldots)$. There's no way you can figure out the exact form of (or even the fields involved in) $m(\ldots)$ without constructing a Lagrangian.

7. Jul 28, 2009

### arestes

Well, for the interaction vertices I need the term, e.g. for fermions $f$, like this: $h \overline{f} f \cdot m(\ldots)$. I actually don't think they want the exact form of $m(\ldots)$. What they want me to show is that $m(\ldots) = m_f \cdot m'(\ldots)$, where $m_f$ is the mass of the fermion (or antifermion) involved in the interaction $h f\overline{f}$, and $m'(\ldots)$ doesn't depend on this mass anymore. Similarly for the bosons $W$ and for $h$ itself in the cases $h W^-W^+$ and $hhh$, where the function $m(\ldots)$ would be of the form $m(\ldots) = (m_W)^2 \cdot m'(\ldots)$ or $= (m_h)^2 \cdot m'(\ldots)$, where again $m'(\ldots)$ doesn't depend on the masses factored out.

Maybe you could understand my context by taking a look at the lecture notes where this problem came from, which I'm trying to finish: http://www.nat.vu.nl/~mulders/QFT-0E.pdf [Broken] It's exercise 12.5 on page 127... Many thanks again for your time.

Last edited by a moderator: May 4, 2017

8. Jul 28, 2009

### Dick

I guess all I can think of here is that when the Higgs acquires a VEV, then whatever coupling is in front of the $ff$ terms (for fermions) and $BB$ terms (for bosons) is the effective mass. And it should be proportional to $m_f$ for the fermions and $m_B^2$ for the bosons, just by comparison with the Dirac Lagrangian for $f$ and the Klein-Gordon Lagrangian for $B$. I've got to admit the phrasing of the question confuses me as well.

Last edited: Jul 29, 2009

9. Jul 29, 2009

### turin

I am very confused as well. (I have been lurking in this thread to see if anyone can sort it out, and I finally decided to pipe in to agree with Dick that this is confusing.) I would assume that the context is the Standard Model, which has a very specific Lagrangian. I think that the problem itself is not confusing, but the comment that you don't need to know the interaction terms is confusing. Maybe we're making this problem way more difficult than it should be. Maybe we should simply ignore the comment about knowing the interaction terms. Still, I'm curious to see if anyone can figure out how to do it without knowing the interaction terms.

10. Jul 30, 2009

### Avodyne

The whole question is ambiguous. For example, the $hWW$ coupling is of the form $g^2 v$, where $g$ is the SU(2) gauge coupling and $v$ is the Higgs VEV. Now, $M_W = gv$, so we can write $g^2 v = g M_W = M_W^2/v$. Only the last form makes it proportional to $M_W^2$, but how are we supposed to know which form to use? There is an apparent implicit requirement that the gauge and Yukawa couplings be eliminated from the expressions for the couplings. I suspect the "correct" answer makes use of the property that the Higgs VEV $v$ and the physical Higgs field $h$ only appear in the combination $v+h$, but the question is too poorly phrased to spend any more time on it.

Last edited: Jul 31, 2009
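For reference, a short worked sketch of the scalings being discussed (standard textbook material in unitary gauge, using the $(v+h)$ combination Avodyne mentions; not part of the original thread):

```latex
% Fermions: the P&S mass term quoted in post #3 already contains the vertex,
% so the h f fbar coupling is m_f / v: linear in the fermion mass.
\mathcal{L}_f = -m_f\,\bar f f\left(1+\frac{h}{v}\right)
              = -m_f\,\bar f f \;-\; \frac{m_f}{v}\,h\,\bar f f .

% W bosons: the mass term carries (v+h)^2, so the piece linear in h
% scales as the mass squared, m_W^2 / v.
\mathcal{L}_W = m_W^2\,W^+_\mu W^{-\mu}\left(1+\frac{h}{v}\right)^{2}
              = m_W^2\,W^+_\mu W^{-\mu}
              + \frac{2m_W^2}{v}\,h\,W^+_\mu W^{-\mu}
              + \frac{m_W^2}{v^2}\,h^2\,W^+_\mu W^{-\mu} .
```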
2017-10-23 15:27:56
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8293291926383972, "perplexity": 408.93966741257003}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-43/segments/1508187826114.69/warc/CC-MAIN-20171023145244-20171023165244-00485.warc.gz"}
https://deepai.org/publication/warp-consistency-for-unsupervised-learning-of-dense-correspondences
# Warp Consistency for Unsupervised Learning of Dense Correspondences

The key challenge in learning dense correspondences lies in the lack of ground-truth matches for real image pairs. While photometric consistency losses provide unsupervised alternatives, they struggle with large appearance changes, which are ubiquitous in geometric and semantic matching tasks. Moreover, methods relying on synthetic training pairs often suffer from poor generalisation to real data. We propose Warp Consistency, an unsupervised learning objective for dense correspondence regression. Our objective is effective even in settings with large appearance and view-point changes. Given a pair of real images, we first construct an image triplet by applying a randomly sampled warp to one of the original images. We derive and analyze all flow-consistency constraints arising between the triplet. From our observations and empirical results, we design a general unsupervised objective employing two of the derived constraints. We validate our warp consistency loss by training three recent dense correspondence networks for the geometric and semantic matching tasks. Our approach sets a new state-of-the-art on several challenging benchmarks, including MegaDepth, RobotCar and TSS. Code and models will be released at https://github.com/PruneTruong/DenseMatching.

## 1 Introduction

Finding dense correspondences continues to be a fundamental vision problem, with many applications in video analysis [SimonyanZ14], image registration [GLAMpoint, shrivastava-sa11], image manipulation [HaCohenSGL11, LiuYT11], and style transfer [Kim2019, Liao2017]. While supervised deep learning methods have achieved impressive results, they are limited by the availability of ground-truth annotations. In fact, collecting dense ground-truth correspondence data of real scenes is extremely challenging and costly, if not impossible.
Current approaches therefore resort to artificially rendered datasets [Dosovitskiy2015, Ilg2017a, Sun2018, Hui2018], sparsely computed matches [DusmanuRPPSTS19, D2D], or sparse manual annotations [ArbiconNet, MinLPC20, SCNet]. These strategies lack realism, accuracy, or scalability. In contrast, there is a virtually endless source of unlabelled image and video data, which calls for the design of effective unsupervised learning approaches.

Photometric objectives, relying on the brightness constancy assumption, have prevailed in the context of unsupervised optical flow [RenYNLYZ17, BackToBasics, Meister2017]. However, in the more general case of geometric matching, the images often stem from radically different views, captured at different occasions, and under different conditions. This leads to large appearance transformations between the frames, which significantly undermine the brightness constancy assumption. It is further invalidated in the semantic matching task [LiuYT11], where the images depict different instances of the same object class. As a prominent alternative to photometric objectives, warp-supervision [GLUNet, GOCor, Rocco2017a, Melekhov2019], also known as self-supervised learning [Rocco2018a, SeoLJHC18, MinLPC20], trains the network on synthetically warped versions of an image. While benefiting from direct supervision, the lack of real image pairs often leads to poor generalization to real data.

We introduce Warp Consistency, an unsupervised learning objective for dense correspondence regression. Our loss leverages real image pairs without invoking the photometric consistency assumption. Unlike previous approaches, it is capable of handling large appearance and view-point changes, while also generalizing to unseen real data. From a real image pair $(I, J)$, we first construct an image triplet by warping $I$ with a known flow field $W$, which is created by randomly sampling e.g. homographies from a specified distribution. We then consider the consistency graph arising from the resulting image triplet $(I, I', J)$, visualized in Fig. 1. It is used to derive a family of new flow-consistency constraints. By carefully analyzing their properties, we propose an unsupervised loss based on predicting the known warp $W$ through the composition of flows along the path $I' \to J \to I$, i.e. via image $J$ (Fig. 1). Our final warp consistency objective is then obtained by combining it with the warp-supervision constraint, also derived from our consistency graph through the direct path $I' \to I$.

We perform comprehensive empirical analysis of the objectives derived from our warp consistency graph and compare them to existing unsupervised alternatives. In particular, our warp consistency loss outperforms approaches based on photometric consistency and warp-supervision on multiple geometric matching datasets. We further perform extensive experiments for two tasks by integrating our approach into three recent dense matching architectures, namely GLU-Net [GLUNet] and RANSAC-Flow [RANSAC-flow] for geometric matching, and SemanticGLU-Net [GLUNet] for semantic matching. Our unsupervised learning approach brings substantial gains in PCK-5 on MegaDepth [megadepth] for GLU-Net, in PCK-5 on RobotCar [RobotCar, RobotCarDatasetIJRR] for RANSAC-Flow, and in PCK-0.05 on PF-Pascal [PFPascal] and TSS [Taniai2016] for SemanticGLU-Net. This leads to a new state-of-the-art on all four datasets. Example predictions are shown in Fig. 2.
## 2 Related work

Unsupervised optical flow: While supervised optical flow networks need carefully designed synthetic datasets for their training [Dosovitskiy2015, Mayer2016ALD], unsupervised approaches do not require ground-truth annotations. Inspired by classical optimization-based methods [Horn1981], they instead learn deep models based on brightness constancy and spatial smoothness losses [RenYNLYZ17, BackToBasics]. The predominant technique relies on photometric losses, e.g. the Charbonnier penalty [BackToBasics], the census loss [Meister2017], or SSIM [WangBSS04, UnOS]. Such losses are often combined with forward-backward consistency [Meister2017] and edge-aware smoothness regularization [OccAwareFlow]. Occlusion estimation techniques [MFOccFlow, Meister2017, OccAwareFlow] are also employed to mask out occluded or outlier regions from the objective. Recently, several works [DDFlow, SefFlow, ARFlow] use a data distillation approach to improve the flow predictions in occluded regions. However, all aforementioned approaches rely on the assumption of limited appearance changes between two consecutive frames. While this assumption holds to a large degree in optical flow data, it is challenged by the drastic appearance changes encountered in geometric or semantic matching applications, as visualised in Fig. 2.

Unsupervised geometric matching: Geometric matching focuses on the more general case where the geometric transformations and appearance changes between two frames may be substantial. Methods either estimate a dense flow field [Melekhov2019, GLUNet, GOCor, RANSAC-flow] or output a cost volume [Rocco2018b, D2D], which can be further refined to increase accuracy [RoccoAS20, LiHLP20, abs-2012-09842]. The latter approaches train the feature embedding, which is then used to compute dense similarity scores. Recent works further leverage the temporal consistency in videos to learn a suitable representation for feature matching [DwibediATSZ19, JabriOE20, WangJE19]. Our work focuses on the first class of methods, which directly learn to regress a dense flow field. Recently, Shen et al. [RANSAC-flow] used classical photometric and forward-backward consistency losses to train RANSAC-Flow. They partially alleviate the sensitivity of photometric losses to large appearance changes by pre-aligning the images with RANSAC. Several methods [Melekhov2019, GLUNet, GOCor] instead use a warp-supervision loss. By tasking the network with regressing a randomly sampled warp during training, a direct supervisory signal is obtained, but at the cost of poorer generalization to real data.

Semantic correspondences: Semantic matching poses additional challenges due to intra-class appearance and shape variations. Manual annotations in this context are ill-defined and ambiguous, making it crucial to develop unsupervised objectives. Methods rely on warp-supervision strategies [Rocco2017a, Rocco2018a, ArbiconNet, SeoLJHC18, GLUNet], use proxy losses on the cost volume [DCCNet, Rocco2018b, Rocco2018a, MinLPC20], identify correct matches from forward-backward consistency of the cost volumes [Jeon], or jointly learn semantic correspondence with attribute transfer [Kim2019] or segmentation [SFNet]. Most related to our work are [Zhou2016, abs-2004-09061, ZhouLYE15]. Zhou et al. [Zhou2016] learn to align multiple images using 3D-guided cycle-consistency by leveraging the ground-truth matches between multiple CAD models. However, the need for 3D CAD models greatly limits its applicability in practice.
In FlowWeb [ZhouLYE15], the authors optimize online pre-existing pair-wise correspondences using the cycle consistency of flows between images in a collection. Unlike these approaches, we require pairs of images as the only supervision and propose a general loss formulation, learning to regress dense correspondences directly.

## 3 Method

### 3.1 Problem formulation and notation

We address the problem of finding pixel-wise correspondences between two images $I$ and $J$. Our goal is to estimate a dense displacement field $F_{I \to J}$, often referred to as flow, relating pixels in $I$ to $J$. The flow field represents the pixel-wise 2D motion vectors in the coordinate system of image $I$. It is directly related to the mapping $M_{I \to J}$, which encodes the absolute location in $J$ corresponding to the pixel location $x$ in image $I$. It is thus related to the flow through $M_{I \to J}(x) = x + F_{I \to J}(x)$. It is important to note that the flow and mapping representations are asymmetric: $M_{I \to J}$ parametrizes a mapping from each pixel in image $I$, which is not necessarily bijective. With a slight abuse of notation, we interchangeably view $F_{I \to J}$ and $M_{I \to J}$ as either elements of $\mathbb{R}^{H \times W \times 2}$ or as functions $\mathbb{R}^2 \to \mathbb{R}^2$. The latter is generally obtained by a bilinear interpolation of the former, and the interpretation will be clear from context when important. We define the warping of a function $G$ by the flow $F$ as $\mathrm{warp}_F(G)(x) = G(x + F(x))$. This is more compactly expressed as $\mathrm{warp}_F(G) = G \circ M_F$, where $M_F = \mathrm{id} + F$ is the mapping defined by $F$ and $\circ$ denotes function composition. Lastly, we let $\mathrm{id}$ be the identity map, $\mathrm{id}(x) = x$.

The goal of this work is to learn a neural network $\varphi_\theta$, with parameters $\theta$, that predicts an estimated flow $\hat F_{I \to J} = \varphi_\theta(I, J)$ relating $I$ to $J$. The straightforward approach to learn $\varphi_\theta$ is to minimize the discrepancy between the estimated flow $\hat F_{I \to J}$ and the ground-truth flow $F_{I \to J}$ over a collection of real training image pairs $(I, J)$. However, such supervised training requires large quantities of densely annotated data, which is extremely difficult to acquire for real scenes. This motivates the exploration of unsupervised alternatives for learning dense correspondences.

### 3.2 Unsupervised data losses

To develop our approach, we first briefly review relevant existing alternatives for unsupervised learning of flow. While there is no general agreement in the literature, we adopt a practical definition of unsupervised learning in our context. We call a learning formulation 'unsupervised' if it does not require any information (i.e. supervision) other than pairs of images depicting the same scene or object. Specifically, unsupervised methods do not require any annotations made by humans or other matching algorithms.

Photometric losses: Most unsupervised approaches train the network using a photometric loss [BackToBasics, Meister2017, OccAwareFlow, RANSAC-flow]. Under the photometric consistency assumption, it minimizes the difference between image $I$ and image $J$ warped according to the estimated flow field $\hat F_{I \to J}$ as,

$$L_{\mathrm{photo}} = d\big(I,\ \mathrm{warp}_{\hat F_{I \to J}}(J)\big). \quad (1)$$

Here, $d$ is a function measuring the difference between two images, e.g. $\ell_1$ [BackToBasics], SSIM [WangBSS04], or census [Meister2017].

Forward-backward consistency: By constraining the backward flow $\hat F_{J \to I}$ to yield the reverse displacement of its forward counterpart $\hat F_{I \to J}$, we achieve the forward-backward consistency loss [Meister2017],

$$L_{fb} = \big\| \hat F_{I \to J} + \mathrm{warp}_{\hat F_{I \to J}}(\hat F_{J \to I}) \big\|. \quad (2)$$

Here, $\|\cdot\|$ denotes a suitable norm. While well motivated, (2) is enforced by the trivial degenerate solution of always predicting zero flow, $\hat F = 0$. It therefore bears the risk of degrading performance by biasing the prediction towards zero, even if combined with a photometric loss (1).
Both aforementioned losses are most often used together with a visibility mask $V$ that filters out the influence of occluded regions from the objective.

Warp-supervision: Another approach relies on synthetically generated training pairs, where the ground-truth flow is obtained by construction [GLUNet, Rocco2017a, Melekhov2019]. Given only a single image $I$, a training pair $(I', I)$ is created by applying a randomly sampled transformation $W$, e.g. a homography, to $I$ as $I' = \mathrm{warp}_W(I)$. Here, $W$ is the synthetic flow field, which serves as direct supervision through a regression loss,

$$L_{\mathrm{warp}} = \big\| \hat F_{I' \to I} - W \big\|. \quad (3)$$

While this results in a strong and direct training signal, warp-supervision methods struggle to generalize to real image pairs $(I, J)$. This can lead to over-smooth predictions and instabilities in the presence of unseen appearance changes.

### 3.3 Warp consistency graph

We set out to find a new unsupervised objective suitable for scenarios with large appearance and view-point changes, where photometric based losses struggle. While the photometric consistency assumption is avoided in the forward-backward consistency (Fig. 3a) and warp-supervision (Fig. 3b) objectives, these methods suffer from severe drawbacks in terms of degenerate solutions and lack of realism, respectively. To address these issues, we consider all possible consistency relations obtained from the three images involved in both aforementioned objectives. Using this generalization, we not only retrieve forward-backward and warp-supervision as special cases, but also derive a family of new consistency relations.

From an image pair $(I, J)$, we first construct an image triplet $(I, I', J)$ by warping $I$ with a known flow field $W$ in order to generate the new image $I' = \mathrm{warp}_W(I)$. We now consider the full consistency graph, visualized in Fig. 3c, encompassing all flow-consistency constraints derived from the triplet of images $(I, I', J)$. Crucially, we exploit the fact that the transformation $W$ is known. The goal is to find consistency relations that translate to suitable learning objectives. Particularly, we wish to improve the network prediction between the real image pair $(I, J)$. We therefore first explore the possible consistency constraints that can be derived from the graph shown in Fig. 3c. For simplicity, we do not explicitly denote visible or valid regions of the stated consistency relations. They should be interpreted as an equality constraint for all pixel locations where both sides represent a valid, non-occluded mapping or flow.

Pair-wise constraints: We first consider the consistency constraints recovered from pairs of images, as visualized in Fig. 3e. From the pair $(I, J)$, and analogously $(I', J)$, we recover the standard forward-backward consistency constraint $F_{I \to J} + \mathrm{warp}_{F_{I \to J}}(F_{J \to I}) = 0$, from which we derive (2). Furthermore, from the pair $(I', I)$ we can derive the warp-supervision constraint (3), $F_{I' \to I} = W$. (Constraints in the reverse direction $I \to I'$ are also possible, but they offer no advantage over standard warp-supervision.)

Bipath constraints: The novel consistency relations stem from constraints that involve all three images in the triplet $(I, I', J)$. These appear in two distinct types, here termed bipath and cycle constraints, respectively. We first consider the former, which have the form $M_{1 \to 2} = M_{3 \to 2} \circ M_{1 \to 3}$. That is, we obtain the same mapping by either proceeding directly from image 1 to 2 or by taking the detour through image 3. We thus compute the same mapping by two different paths, $1 \to 2$ and $1 \to 3 \to 2$, from which we derive the name of the constraint. The images 1, 2, and 3 represent any enumeration of the triplet $(I, I', J)$ that respects the direction $I' \to I$, specified by the known warp $W$.
There thus exist three different bipath constraints, detailed in Sec. 3.4.

Cycle constraints: The last category of constraints is formulated by starting from any of the three images in Fig. 3d and composing the mappings in a full cycle. Since we return to the starting image, the resulting composition is equal to the identity map. This is expressed in a general form as $M_{3 \to 1} \circ M_{2 \to 3} \circ M_{1 \to 2} = \mathrm{id}$, where we have proceeded in the cycle $1 \to 2 \to 3 \to 1$. Again constraining the direction $I' \to I$, we obtain three different constraints, as visualized in Fig. 3d. Compared to the bipath constraints, the cycle variants require two consecutive warping operations, stemming from the additional mapping composition. Each warp reduces the valid region and introduces interpolation noise and artifacts in practice. Constraints involving fewer warping operations are thus desirable, which is an advantage of the class of bipath constraints. In the next parts, we therefore focus on the latter class to find a suitable unsupervised objective for dense correspondence estimation.

### 3.4 Bipath constraints

As mentioned in the previous section, there exist three different bipath constraints that preserve the direction of the known warp $W$. These are stated in terms of mappings as,

$$M_{I' \to J} = M_{I \to J} \circ M_W \quad (4a)$$
$$M_{J \to I} = M_W \circ M_{J \to I'} \quad (4b)$$
$$M_W = M_{J \to I} \circ M_{I' \to J} \quad (4c)$$

From (4), we can derive the equivalent flow constraints as,

$$F_{I' \to J} = W + \mathrm{warp}_W(F_{I \to J}) \quad (5a)$$
$$F_{J \to I} = F_{J \to I'} + \mathrm{warp}_{F_{J \to I'}}(W) \quad (5b)$$
$$W = F_{I' \to J} + \mathrm{warp}_{F_{I' \to J}}(F_{J \to I}) \quad (5c)$$

Each constraint is visualized in Fig. 4a, b and c respectively. At first glance, any one of the constraints in (5) could be used as an unsupervised loss by minimizing the error between the left and right hand side. However, by separately analyzing each constraint in (4)-(5), we will find them to have radically different properties, which impact their suitability as an unsupervised learning objective.

$I' \to J$-bipath: The constraint (4a), (5a) is derived from the two possible paths from $I'$ to $J$ (Fig. 4a). While not obvious from (5a), it can be directly verified from (4a) that this constraint has a degenerate trivial solution. In fact, (4a) is satisfied for any $W$ by simply mapping all inputs to a constant pixel location $c$, as $\hat M(x) = c$ (then both sides of (4a) map everything to $c$). In order to satisfy this constraint, the network can thus learn to predict the same flow for any input image pair.

$J \to I$-bipath: From the paths in Fig. 4b, we achieve the constraint (4b), (5b). The resulting unsupervised loss is formulated as

$$L_{J \to I} = \big\| \hat F_{J \to I'} + \mathrm{warp}_{\hat F_{J \to I'}}(W) - \hat F_{J \to I} \big\|. \quad (6)$$

Unfortunately, this objective suffers from another theoretical disadvantage. Due to the cancellation effect between the estimated flow terms $\hat F_{J \to I'}$ and $\hat F_{J \to I}$, the objective (6) is insensitive to a constant bias in the prediction. Specifically, if a small constant bias $b$ is added to all flow predictions in (6), it can be shown that the increase in the loss is approximately bounded by $\| \frac{\partial W}{\partial x} b \|$. Here, the bias error $b$ is scaled with the Jacobian $\frac{\partial W}{\partial x}$ of the warp $W$. Since a smooth and invertible warp implies a generally small Jacobian $\frac{\partial W}{\partial x}$, the change in the loss will be negligible. The resulting insensitivity of (6) to a prediction bias is further confirmed empirically by our experiments. We provide derivations in the suppl. A.

To further understand and compare the bipath constraints (5), it is also useful to consider the limiting case of reducing the magnitude of the warps, $W \to 0$. By setting $W = 0$ it can be observed that (6) becomes zero, i.e. no learning signal remains.

$W$-bipath: The third bipath constraint (4c), (5c) is derived from the path $I' \to J \to I$ and the direct path $I' \to I$, which is determined by $W$ (Fig. 4c).
It leads to the $W$-bipath consistency loss,

$$L_W = \big\| \hat F_{I' \to J} + \mathrm{warp}_{\hat F_{I' \to J}}(\hat F_{J \to I}) - W \big\|. \quad (7)$$

We first analyze the limiting case by setting $W = 0$, which leads to standard forward-backward consistency (2) since $I' = I$. The $W$-bipath is thus a direct generalization of the latter constraint. Importantly, by randomly sampling non-zero warps $W$, degenerate solutions are avoided, effectively solving the one fatal issue of forward-backward consistency objectives. In addition to avoiding degenerate solutions, the $W$-bipath does not experience cancellation of prediction bias, as in (6). Furthermore, compared to warp-supervision (3), it enables directly learning the flow prediction between the real pair $(I, J)$. In the next section, we therefore develop our final unsupervised objective based on the $W$-bipath consistency.

### 3.5 Warp consistency loss

In this section, we develop our warp consistency loss, an unsupervised learning objective for dense correspondence estimation, using the consistency constraints derived in Sec. 3.3 and 3.4. Specifically, following the observations in Sec. 3.4, we base our loss on the $W$-bipath constraint.

$W$-bipath consistency term: To formulate an objective based on the $W$-bipath consistency constraint (5c), we further integrate a visibility mask $V$. The mask takes a value $V(x) = 1$ for any pixel $x$ where both sides of (4c), (5c) represent a valid, non-occluded mapping, and $V(x) = 0$ otherwise. The loss (7) is then extended as,

$$L_{W\text{-bipath}} = \big\| V \cdot \big( \hat F_{I' \to J} + \mathrm{warp}_{\hat F_{I' \to J}}(\hat F_{J \to I}) - W \big) \big\|. \quad (8)$$

Since we do not know the true visibility $V$, we replace it with an estimate $\hat V$. While there are different techniques for estimating visibility masks in the literature [MFOccFlow, Meister2017, OccAwareFlow], we base our strategy on the approach used in [Meister2017]. Specifically, we compute our visibility mask as,

$$\hat V = \mathbb{1}\Big[\, \big\| \hat F_{I' \to J} + \mathrm{warp}_{\hat F_{I' \to J}}(\hat F_{J \to I}) - W \big\|_2^2 < \alpha_1 \big( \big\| \hat F_{I' \to J} + \mathrm{warp}_{\hat F_{I' \to J}}(\hat F_{J \to I}) \big\|_2^2 + \| W \|_2^2 \big) + \alpha_2 \,\Big]. \quad (9)$$

Here, $\mathbb{1}[\cdot]$ takes the value 1 or 0 if the input statement is true or false, respectively. The scalars $\alpha_1$ and $\alpha_2$ are hyperparameters controlling the sensitivity of the mask estimation. For the warp operation $\mathrm{warp}_{\hat F_{I' \to J}}(\hat F_{J \to I})$, we generally found it beneficial not to back-propagate gradients through the flow used for warping. We believe that this better encourages the network to directly adjust the flow predictions, rather than 'move' the flow vectors using the warp.

Warp-supervision term: In addition to our $W$-bipath objective (8), we use the warp-supervision (3), found as a pairwise constraint in our consistency graph (Fig. 3e). Benefiting from the strong and direct supervision provided by the synthetic flow $W$, the warp-supervision term increases convergence speed and helps in driving the network towards higher accuracy. Further, by the direct regression loss against the flow $W$, which is smooth by construction, it also acts as a smoothness constraint. On the other hand, through the $W$-bipath loss (8), the network learns the realistic motion patterns and appearance changes present between real images $(I, J)$. As a result, both loss terms are mutually beneficial. From a practical perspective, the warp-supervision loss can be integrated at a low computational and memory cost, since the backbone feature extraction for the three images can be shared between the two loss terms.

Adaptive loss balancing: Our final unsupervised objective combines the losses (8) and (3) as $L = L_{W\text{-bipath}} + \lambda L_{\mathrm{warp}}$. This raises the question of how to set the trade-off $\lambda$. Instead of resorting to manual tuning, we eliminate this hyper-parameter by automatically balancing the weights over each training batch as $\lambda = L_{W\text{-bipath}} / L_{\mathrm{warp}}$. Since $\lambda$ is a weighting factor, we do not backpropagate gradients through it.
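To make the composed warp in (7)-(8) concrete, here is a minimal PyTorch-style sketch (an illustration under assumed conventions — (B,2,H,W) flow tensors in xy order, bilinear grid_sample, an L1 norm — not the authors' released implementation):

```python
import torch
import torch.nn.functional as F

def warp(flow_src, flow_by):
    """Sample flow_src (B,2,H,W) at locations x + flow_by(x): warp_{flow_by}(flow_src)."""
    _, _, h, w = flow_by.shape
    ys, xs = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
    grid = torch.stack((xs, ys)).float().to(flow_by.device)   # (2,H,W), xy order
    pts = grid[None] + flow_by                                 # absolute sampling points
    # grid_sample expects (B,H,W,2) coordinates normalized to [-1,1]
    norm = torch.stack((2.0 * pts[:, 0] / (w - 1) - 1.0,
                        2.0 * pts[:, 1] / (h - 1) - 1.0), dim=-1)
    return F.grid_sample(flow_src, norm, align_corners=True)

def w_bipath_loss(f_ip_j, f_j_i, w_flow, mask=None):
    """|| V * (F_{I'->J} + warp(F_{J->I}) - W) ||_1, in the spirit of Eq. (8)."""
    # stop gradients through the flow used for warping, as described in the text
    composed = f_ip_j + warp(f_j_i, f_ip_j.detach())
    err = (composed - w_flow).abs().sum(dim=1)                 # per-pixel L1 residual
    if mask is not None:
        err = err * mask
    return err.mean()
```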
### 3.6 Sampling warps W

The key element of our warp consistency objective is the sampled warp $W$. During training, we randomly sample it from a distribution $W \sim p_W$, which we need to design. As discussed in Sec. 3.4, the $W$-bipath loss (8) approaches the forward-backward consistency loss (2) when the magnitude of the warps decreases, $W \to 0$. Exclusively sampling too small warps therefore risks biasing the prediction towards zero. On the other hand, too large warps would render the estimation of $\hat F_{I'\to J}$ challenging and introduce unnecessary invalid image regions. As a rough guide, the distribution $p_W$ should yield warps of similar magnitude as the real transformations $F_{I\to J}$, thus giving similar impact to all three terms in (8). Fortunately, as analyzed in the supplementary Sec. G, our approach is not sensitive to these settings as long as they are within reasonable bounds.

We construct $p_W$ by sampling homography, Thin-plate Spline (TPS) and affine-TPS transformations randomly, following a procedure similar to previous approaches using warp-supervision [Rocco2017a]. (i) Homographies are constructed by randomly translating the four image corner locations. The magnitudes of the translations are chosen independently through Gaussian or uniform sampling, with a chosen standard deviation or range. (ii) For TPS, we randomly jitter a grid of control points by independently translating each point. We use the same standard deviation or range as for our homographies. (iii) To generate larger scale and rotation changes, we also compose affine and TPS transformations. We first sample affine transformations by selecting scale, rotation, translation and shearing parameters according to a Gaussian or uniform sampling. The TPS transform is then sampled as explained above, and the final synthetic flow is a composition of both flows. To make the warps harder, we optionally also compose the flows obtained from (i), (ii) and (iii) with randomly sampled elastic transforms. Specifically, we generate an elastic deformation motion field, as described in [Simard2003], and apply it in multiple regions selected randomly. Elastic deformations drive the network to be more accurate on small details. Detailed settings are provided in the suppl. G.
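As a sketch of step (i), the following shows one way, in our own minimal NumPy/OpenCV formulation (the `sigma` default is a placeholder), to turn random corner translations into a dense homography flow field:

```python
import numpy as np
import cv2


def sample_homography_flow(h, w, sigma=32.0, rng=np.random.default_rng()):
    """Sample a random homography by jittering the four image corners
    (Gaussian, std `sigma` in pixels) and return the dense flow W of
    shape (h, w, 2) such that x' = x + W(x)."""
    corners = np.float32([[0, 0], [w - 1, 0], [w - 1, h - 1], [0, h - 1]])
    jittered = corners + rng.normal(scale=sigma, size=(4, 2)).astype(np.float32)
    H = cv2.getPerspectiveTransform(corners, jittered)
    # Apply H to every pixel of the base grid.
    xs, ys = np.meshgrid(np.arange(w), np.arange(h))
    pts = np.stack([xs, ys, np.ones_like(xs)], axis=-1).reshape(-1, 3).T
    mapped = H @ pts
    mapped = (mapped[:2] / mapped[2:]).T.reshape(h, w, 2)
    grid = np.stack([xs, ys], axis=-1).astype(np.float64)
    return mapped - grid  # dense flow field W
```

TPS and affine components would be sampled analogously, and the resulting flows composed with `compose_flows` from the earlier sketch.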
## 4 Experiments

We evaluate our unsupervised learning approach for three dense matching networks and two tasks, namely GLU-Net [GLUNet] and RANSAC-Flow [RANSAC-flow] for geometric matching, and SemanticGLU-Net [GLUNet] for semantic matching. We extensively analyze our method and compare it to earlier unsupervised objectives, setting a new state-of-the-art on multiple datasets. Further results, analysis, visualizations and implementation details are provided in the supplementary.

### 4.1 Method analysis

We first perform a comprehensive analysis of our approach. We adopt GLU-Net [GLUNet] as our base architecture. It is a 4-level pyramidal network operating at two image resolutions to estimate dense flow fields.

Experimental set-up for GLU-Net:  We slightly simplify the GLU-Net [GLUNet] architecture by replacing the dense decoder connections with standard residual blocks, which drastically reduces the number of network parameters with negligible impact on performance. As in [GLUNet], the feature extraction network is set to a VGG-16 [Chatfield14] with ImageNet pre-trained weights. We train the rest of the architecture from scratch in two stages. We first train GLU-Net using our unsupervised objective, described in Sec. 3.5, but without the visibility mask $\hat V$. As a second stage, we add the visibility mask and employ stronger warps $W$, with elastic transforms. For both stages, we use the training split of the MegaDepth dataset [megadepth], which comprises diverse internet images of 196 different world monuments.

Datasets and metrics:  We evaluate on standard datasets with sparse ground truth, namely RobotCar [RobotCarDatasetIJRR, RobotCar] and MegaDepth [megadepth]. For the latter, we use the test split of [RANSAC-flow], which consists of 19 scenes not seen during training. Images in RobotCar depict outdoor road scenes and are particularly challenging due to their many textureless regions. MegaDepth images show extreme viewpoint and appearance variations. In line with [RANSAC-flow], we use the Percentage of Correct Keypoints at a given pixel threshold (PCK-$T$) as the evaluation metric (in %). We also employ the 59 sequences of the homography dataset HPatches [Lenc]. We evaluate with the Average End-Point Error (AEPE) and PCK.

Warp consistency graph losses:  In Tab. 1 we empirically compare the constraints extracted from our warp consistency graph (Sec. 3.3). All networks are trained with only the first stage, on the same synthetic transformations $W$. Since we observed it to give a general improvement, we stop gradients through the flow used for warping (but not the flow that is warped). The $I'J$-bipath (II) and $JI$-bipath (III) losses lead to a degenerate solution and a large predicted bias respectively, which explains the very poor performance of these networks. The cycle loss (V) obtains much better results but does not reach the performance of the $W$-bipath constraint (IV). We only show the cycle starting from $J$ here (V), since it performs best among all cycle losses (see suppl. A). While the warp-supervision loss (I) results in better accuracy on all datasets (PCK-1, and PCK-5 for HPatches), it is significantly less robust to large viewpoint changes than the $W$-bipath objective (IV), as evidenced by the results in PCK-10 and AEPE. These two losses have complementary behaviors, and combining them (VIII) leads to a significant gain in both accuracy and robustness. Combining the warp-supervision loss (I) with the $I'J$-bipath (II) in (VI) or with the $JI$-bipath (III) in (VII) instead results in drastically lower performance than (VIII). The cycle loss (V) with the warp-supervision (I) in (IX) is also slightly worse.

Ablation study:  In Tab. 2 we analyze the key components of our approach. We first show the importance of not back-propagating gradients in the warp operation. Adding the warp-supervision objective with a constant weighting increases both the network's accuracy and robustness on all datasets. Further using adaptive loss balancing (Sec. 3.5) provides a significant improvement in accuracy (PCK-1) for MegaDepth, with only a minor loss at other thresholds. Including our visibility mask in the second training stage drastically improves all metrics for all datasets. Finally, further sampling harder transformations results in better accuracy, particularly for PCK-1 on MegaDepth. We therefore use this as our standard setting in the following experiments, where we denote it as WarpC.

Comparison to alternative losses:  Finally, in Tab. 3 we compare and combine our proposed objective with alternative losses. The census loss [Meister2017] (I), popular in optical flow, does not have sufficient invariance to appearance changes and thus leads to poor results on geometric matching datasets.
The SSIM loss [WangBSS04] (II) is more robust to the large appearance variations present in MegaDepth. Further combining SSIM with the forward-backward consistency loss (III) leads to a small improvement. Compared to (III) on MegaDepth, our WarpC approach (VI) achieves superior PCK-5 (+7.8%) and PCK-10 (+10.2%), at the cost of a slight reduction in sub-pixel accuracy. Furthermore, our approach demonstrates superior generalization capabilities by outperforming all other alternatives on the RobotCar and HPatches datasets. For completeness, we also evaluate the combination (VII) of our loss with the photometric SSIM loss. This leads to improved PCK-1 on MegaDepth but degrades other metrics compared to WarpC (VI). Nevertheless, adding WarpC significantly improves upon SSIM (II) for all thresholds and datasets. Moreover, combining the warp-supervision (IV) with the forward-backward loss in (V) leads to an improvement compared to (IV). It is however significantly worse than combining the warp-supervision with our $W$-bipath loss in (VI), which can be seen as a generalization of the forward-backward loss. Finally, we compare with using the sparse ground-truth supervision provided by the SfM reconstruction of the MegaDepth training images. Interestingly, training the dense prediction network from scratch with solely sparse annotations (VIII) leads to inferior performance compared to our unsupervised objective (VI). Lastly, we fine-tune (IX) our proposed network (VI) with sparse annotations. While this leads to a moderate gain on MegaDepth, it comes at the cost of worse generalization properties on RobotCar and HPatches.

### 4.2 Geometric matching

Here, we train the recent GLU-Net [GLUNet] and RANSAC-Flow [RANSAC-flow] architectures with our unsupervised learning approach and compare them against state-of-the-art dense geometric matching methods.

Experimental set-up for GLU-Net:  We follow the training procedure explained in Sec. 4.1 and refer to the resulting model as WarpC-GLU-Net. The original GLU-Net [GLUNet] is trained using solely the warp-supervision (3) on a different training set. For fair comparison, we also report results of our altered GLU-Net architecture when trained on MegaDepth with our warp distribution $p_W$. This corresponds to setting (IV) in Tab. 3, which we here call GLU-Net*.

Experimental set-up for RANSAC-Flow:  We additionally use our unsupervised strategy to train RANSAC-Flow [RANSAC-flow]. In the original work [RANSAC-flow], the network is trained on MegaDepth [megadepth] image pairs that are coarsely pre-aligned using feature matching and Ransac. Training is separated into three stages. First, the network is trained using the SSIM loss (1), which is further combined with the forward-backward consistency loss (2) in the second stage. In the last stage, a matchability mask is also trained, by weighting the previous losses with the predicted mask and including a mask regularization term. For our WarpC-RANSAC-Flow, we also follow a three-step training using the same training pairs. As for the WarpC-GLU-Net training, we add our visibility mask in the second training stage. In the third stage, we train the matchability mask by simply replacing $\hat V$ in (8) with the predicted mask, and adding the same mask regularizer as in RANSAC-Flow.

Results:  In Tab. 4, we report results on MegaDepth and RobotCar. Note that we only compare to methods that do not finetune on the test set.
Our approach WarpC-GLU-Net outperforms the original GLU-Net and the baseline GLU-Net* by a large margin at all PCK thresholds. Our proposed unsupervised objective enables the network to handle the large and complex 3D motions present in real image pairs, as evidenced in Fig. 5, top. Our unsupervised approach WarpC-RANSAC-Flow also achieves a substantial improvement compared to RANSAC-Flow. Importantly, WarpC-RANSAC-Flow shows much better generalization capabilities on RobotCar. The poorer generalization of photometric-based objectives, such as SSIM [WangBSS04] here, further supports our findings in Sec. 4.1. Interestingly, training the matchability branch of RANSAC-Flow with our objective results in drastically more accurate mask predictions. This is visualized in Fig. 5, middle, where our approach WarpC-RANSAC-Flow effectively identifies unreliable matching regions such as the sky (in red), whereas RANSAC-Flow, trained with the SSIM loss, is incapable of discarding the sky and field as unreliable.

### 4.3 Semantic matching

Finally, we evaluate our approach for the task of semantic matching by training SemanticGLU-Net [GLUNet], a version of GLU-Net specifically designed for semantic images, which includes multi-resolution features and NC-Net [Rocco2018b].

Experimental set-up:  Following [Rocco2018a, ArbiconNet], we only fine-tune a pre-trained network on semantic correspondence data. Specifically, we start from the SemanticGLU-Net weights provided by the authors, which are trained with warp-supervision, without using any correspondences from flow annotations. We finetune this network on the PF-PASCAL training set [PFPascal], which consists of 20 object categories, using our unsupervised loss (Sec. 3.5).

Datasets and metrics:  We first evaluate on the test set of PF-Pascal [PFPascal]. In line with [SCNet], we report the PCK with a pixel threshold equal to $\alpha \cdot \max(h_q, w_q)$, where $h_q$ and $w_q$ are the dimensions of the query image and $\alpha \in \{0.05, 0.1\}$. To demonstrate generalization capabilities, we also validate our trained model on the TSS dataset [Taniai2016], which provides dense flow field annotations for the foreground object in each pair. Following [Taniai2016], we report the PCK with respect to the query image size, for $\alpha = 0.05$.

Results:  Results are reported in Tab. 5. Our approach WarpC-SemanticGLU-Net sets a new state-of-the-art on TSS by obtaining a remarkable improvement compared to previous works. On the PF-Pascal dataset, our method ranks first for the small threshold $\alpha = 0.05$, with a substantial increase compared to the second-best method. It obtains marginally lower PCK than DCCNet [DCCNet] for $\alpha = 0.1$, but the latter approach employs a much deeper feature backbone, beneficial on semantic images. Nevertheless, our unsupervised fine-tuning provides 16% and 11.1% gains, for each threshold respectively, over the baseline, demonstrating that our objective effectively copes with the radical appearance changes encountered in the semantic matching task. A visual example applied to an image pair of PF-PASCAL [PFPascal] is shown in Fig. 5, bottom.

## 5 Conclusion

We propose an unsupervised learning objective for dense correspondences, particularly suitable for scenarios with large changes in appearance and geometry. From a real image pair, we construct an image triplet and design a regression loss based on the flow constraints existing between the triplet. When integrated into three recent dense correspondence networks, our approach outperforms the state-of-the-art on multiple geometric and semantic matching datasets.
Acknowledgements:  This work was supported by the ETH Zürich Fund (OK), a Huawei Gift, Huawei Technologies Oy (Finland), Amazon AWS, and an Nvidia GPU grant.

## Appendix A Warp consistency graph regression losses

In this section, we provide additional details about the possible flow constraints derived from our warp consistency graph (Sec. 3.3 of the main paper). We also show qualitative and quantitative comparisons between the networks trained using each possible regression loss.

### a.1 Details about the JI-bipath constraint

We here provide the detailed derivation of the bias insensitivity of the $JI$-bipath loss, which is given by (eq. (6) in the main paper) as,

$$L_{J\to I} = \big\| \hat F_{J\to I'} + \text{warp}_{\hat F_{J\to I'}}(W) - \hat F_{J\to I} \big\|\,. \quad (10)$$

We derive an upper bound for the change $\Delta L_{J\to I}$ in the loss when a constant bias $b$ is added to all flow predictions. We have,

$$\begin{aligned}
\Delta L_{J\to I} &= \big\| \hat F_{J\to I'} + b + \text{warp}_{\hat F_{J\to I'}+b}(W) - (\hat F_{J\to I} + b) \big\| - \big\| \hat F_{J\to I'} + \text{warp}_{\hat F_{J\to I'}}(W) - \hat F_{J\to I} \big\| \\
&= \big\| \hat F_{J\to I'} + \text{warp}_{\hat F_{J\to I'}}(W) - \hat F_{J\to I} + \text{warp}_{\hat F_{J\to I'}+b}(W) - \text{warp}_{\hat F_{J\to I'}}(W) \big\| - \big\| \hat F_{J\to I'} + \text{warp}_{\hat F_{J\to I'}}(W) - \hat F_{J\to I} \big\| \\
&\le \big\| \hat F_{J\to I'} + \text{warp}_{\hat F_{J\to I'}}(W) - \hat F_{J\to I} \big\| + \big\| \text{warp}_{\hat F_{J\to I'}+b}(W) - \text{warp}_{\hat F_{J\to I'}}(W) \big\| - \big\| \hat F_{J\to I'} + \text{warp}_{\hat F_{J\to I'}}(W) - \hat F_{J\to I} \big\| \\
&= \big\| \text{warp}_{\hat F_{J\to I'}+b}(W) - \text{warp}_{\hat F_{J\to I'}}(W) \big\|\,.
\end{aligned} \quad (11)$$

Here we have used the triangle inequality. From the bound above, we can already see that $\Delta L_{J\to I}$ will be small if $W$ is changing slowly. We can see this more clearly by assuming the bias $b$ to be small, and doing a first-order Taylor expansion,

$$\text{warp}_{\hat F_{J\to I'}+b}(W)(x) = W\big(x + \hat F_{J\to I'}(x) + b\big) \approx W\big(x + \hat F_{J\to I'}(x)\big) + D_W\big(x + \hat F_{J\to I'}(x)\big)\, b = \text{warp}_{\hat F_{J\to I'}}(W)(x) + \text{warp}_{\hat F_{J\to I'}}(D_W b)(x)\,. \quad (12)$$

Here, $D_W(x)$ is the Jacobian of $W$ at location $x$. Thus, $D_W b$ denotes the function obtained from the matrix-vector product between the Jacobian $D_W(x)$ and the bias $b$ at every location $x$. Inserting (12) into (11) gives an approximate bound valid for small $b$,

$$\Delta L_{J\to I} \lessapprox \big\| \text{warp}_{\hat F_{J\to I'}}(D_W b) \big\|\,. \quad (13)$$

A smooth and invertible warp implies a generally small Jacobian $D_W$. Since the bias $b$ is scaled with $D_W$, the resulting change in the loss will also be small. As a special case, it is immediately seen from (13) that the change in the loss is always zero if $W$ is a pure translation. The bias insensitivity of the $JI$-bipath constraint largely explains its poor performance. As visualized in Fig. 6, the predictions of a network trained with solely the $JI$-bipath loss (6) suffer from a large translation bias.
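To make the effect tangible, here is a small self-contained NumPy experiment, entirely our own illustration, that adds a constant bias to synthetic "predictions" in the $JI$-bipath loss (10); the loss barely changes, as predicted by (13):

```python
import numpy as np

h = w = 64
xs, ys = np.meshgrid(np.arange(w, dtype=float), np.arange(h, dtype=float))

def warp_field(flow, field):
    # Nearest-neighbour lookup field(x + flow(x)); crude but enough here.
    xi = np.clip(np.rint(xs + flow[..., 0]).astype(int), 0, w - 1)
    yi = np.clip(np.rint(ys + flow[..., 1]).astype(int), 0, h - 1)
    return field[yi, xi]

# Smooth synthetic warp W; "predictions" constructed to satisfy (5b) exactly.
W = np.stack([3 * np.sin(xs / 20), 2 * np.cos(ys / 25)], axis=-1)
f_j_ip = np.stack([np.cos(ys / 15), np.sin(xs / 18)], axis=-1)   # J -> I'
f_j_i = f_j_ip + warp_field(f_j_ip, W)                           # J -> I

def ji_loss(f1, f2):  # eq. (10)
    return np.abs(f1 + warp_field(f1, W) - f2).mean()

b = np.array([5.0, 5.0])  # constant bias added to all predictions
print(ji_loss(f_j_ip, f_j_i))          # 0 by construction
print(ji_loss(f_j_ip + b, f_j_i + b))  # small: the bias largely cancels
```

The residual change is bounded by the Jacobian of $W$ times $b$, which stays small for the smooth warp above, whereas the same bias shifts the $W$-bipath residual (7) by roughly its full magnitude.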
### a.2 Cycle constraints

Here, we provide additional details about the cycle constraints extracted from our warp consistency graph. As explained in Sec. 3.3 of the main paper, because of the fixed direction of the known flow $W$, which corresponds to $M_W = M_{I'\to I}$, three cycle constraints are possible, starting from either of the images $I$, $I'$ or $J$ and composing mappings so that the resulting composition is equal to the identity map. They are respectively formulated as follows,

$$\mathrm{Id} = M_W\circ M_{J\to I'}\circ M_{I\to J} \quad (14a)$$
$$\mathrm{Id} = M_{J\to I'}\circ M_{I\to J}\circ M_W \quad (14b)$$
$$\mathrm{Id} = M_{I\to J}\circ M_W\circ M_{J\to I'} \quad (14c)$$

The corresponding regression losses are obtained by converting the mapping constraints (14) to flow constraints and considering only the flow $W$ as known. We provide the expression for each of the three cycle losses in the following.

Cycle from $I$:  By starting from image $I$ and performing a full cycle, the resulting regression loss is expressed as,

$$L_{\text{cycle-}I} = \Big\| \hat F_{I\to J} + \text{warp}_{\hat F_{I\to J}}(\hat F_{J\to I'}) + \text{warp}_{\hat F_{I\to J} + \text{warp}_{\hat F_{I\to J}}(\hat F_{J\to I'})}(W) \Big\|\,. \quad (15)$$

Cycle from $I'$:  Starting from image $I'$ instead leads to the following regression loss,

$$L_{\text{cycle-}I'} = \Big\| W + \text{warp}_{W}(\hat F_{I\to J}) + \text{warp}_{W + \text{warp}_W(\hat F_{I\to J})}(\hat F_{J\to I'}) \Big\|\,. \quad (16)$$

Cycle from $J$:  Finally, using image $J$ as the starting point for the cycle constraint results in this regression loss,

$$L_{\text{cycle-}J} = \Big\| \hat F_{J\to I'} + \text{warp}_{\hat F_{J\to I'}}(W) + \text{warp}_{\hat F_{J\to I'} + \text{warp}_{\hat F_{J\to I'}}(W)}(\hat F_{I\to J}) \Big\|\,. \quad (17)$$
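Expressed with the `warp`/`compose_flows` helpers sketched earlier (our own notation), each cycle loss is a three-fold composition that should return the zero flow; e.g. for the cycle from $J$ in (17):

```python
def cycle_from_j_loss(f_j_ip, w_flow, f_i_j):
    """Cycle loss (17): composing J -> I' -> I -> J should give the zero
    flow. Note the two consecutive warping operations, which shrink the
    valid region and add interpolation noise in practice (Sec. 3.3)."""
    f_j_i = compose_flows(f_j_ip, w_flow)  # J -> I', then W (I' -> I)
    cycle = compose_flows(f_j_i, f_i_j)    # then I -> J; ideally zero
    return cycle.abs().mean()
```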
### a.3 Quantitative and qualitative analysis

Extension of quantitative analysis:  We first extend Tab. 1 of the main paper, by analyzing the remaining warp consistency graph losses. Results on MegaDepth, RobotCar and HPatches are presented in Tab. 6. As in Tab. 1 of the main paper, all networks are trained following the first training stage of WarpC-GLU-Net (see Sec. 4.1 of the main paper or Sec. C). We first provide evaluation results of networks trained using the cycle losses starting from images $I$ and $I'$. Of these, one cycle loss obtains very poor results, while the other achieves better performance, but still lower than the cycle loss from $J$. The $W$-bipath constraint obtains the best results overall. We then compare combinations of the derived losses with the warp-supervision objective (eq. (3) of the main paper). Between the cycle losses, the combination of the warp-supervision with the cycle loss from $J$ achieves the best results, compared to the combinations with the cycle losses from $I$ and $I'$. The combination of the warp-supervision and forward-backward losses (eqs. (3) and (2) of the main paper), which are both retrieved as pairwise constraints from the warp consistency graph (Sec. 3.3 and Fig. 4e of the main paper), leads to lower generalization abilities on the HPatches dataset than our warp consistency loss. It also achieves substantially lower PCK-1 on MegaDepth. Moreover, because the forward-backward consistency loss leads to a degenerate trivial solution when used alone, manual tuning of a weighting hyper-parameter is required to balance the warp-supervision and forward-backward loss terms. If it is too high, the forward-backward term gains too much importance and drives the network towards the degenerate zero prediction. If it is too small instead, its contribution becomes insignificant. On the contrary, our proposed unsupervised learning objective (Sec. 3.5 of the main paper) does not require expensive manual tuning of such hyperparameters.

Qualitative comparison:  In Fig. 6, we visually compare the flows estimated by GLU-Net networks trained using each of the flow-consistency losses retrieved from the warp consistency graph. Training using the warp-supervision loss alone results in an unstable estimated flow, and a correspondingly unstable warped query. It can directly be seen that the $I'J$-bipath loss results in the network learning a degenerate trivial solution, in the form of a constant predicted mapping independently of the input images. Training with the $JI$-bipath objective instead makes the network insensitive to an additive prediction bias. Indeed, in Fig. 6, third row, it is easily seen that the warped query is shifted towards the right and bottom, compared to the reference image. This is due to a constant bias predicted by the network. The $W$-bipath objective leads to a drastically better warped query. Also note that its estimated flow leads to a more accurate warped query than when trained with the $J$-cycle loss. Training with the cycle loss from one of the two remaining images leads to very poor results, while the cycle loss derived by starting from the other results in a reasonable warped query, but with more out-of-region artifacts compared to the prediction of the network trained with the $W$-bipath loss.

## Appendix B Triplet creation and sampling of warps W

### b.1 Triplet creation

Our introduced unsupervised learning approach requires constructing an image triplet $(I, I', J)$ from an original image pair $(I, J)$, where all three images must have the same training dimensions. We construct the triplet as follows. The original training image pairs are first resized to a fixed size, larger than the desired training image size. We then sample a dense flow $W$ of the same dimension, and create $I'$ by warping image $I$ with $W$, as $I'(x) = I(x + W(x))$. Each of the images of the resulting image triplet is then centrally cropped to the fixed training image size. The central cropping is necessary to remove most of the black areas in $I'$ introduced by the warping operation with large sampled flows, as well as possible warping artifacts arising at the image borders. We then additionally apply appearance transformations to the created image $I'$, such as brightness and contrast changes. This procedure is similar to [Rocco2017a], which employs solely the warp-supervision objective on the pair $(I, I')$.
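A minimal sketch of this triplet construction follows (PyTorch-style; the sizes, color-jitter settings and helper names are placeholder assumptions, and `warp` is the bilinear sampler sketched in Sec. 3.4):

```python
import torch
import torchvision.transforms as T


def make_triplet(img_i, img_j, sample_flow, resize=520, crop=400):
    """Build (I, I', J) from a pair: resize, warp I by a sampled flow W,
    then centrally crop all three images and the flow."""
    resizer = T.Resize((resize, resize))
    i, j = resizer(img_i), resizer(img_j)      # (3, resize, resize)
    w_flow = sample_flow(resize, resize)       # (2, resize, resize)
    # I'(x) = I(x + W(x)), using the bilinear `warp` helper.
    i_prime = warp(w_flow.unsqueeze(0), i.unsqueeze(0)).squeeze(0)
    # Appearance transformations applied to I' only.
    i_prime = T.ColorJitter(brightness=0.3, contrast=0.3)(i_prime)
    crop_fn = T.CenterCrop(crop)
    return crop_fn(i), crop_fn(i_prime), crop_fn(j), crop_fn(w_flow)
```

Central cropping leaves the flow vectors themselves unchanged, so the cropped $W$ remains a valid supervision signal for the cropped triplet.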
### b.2 Sampling of warps W

As mentioned in Sec. 3.6 of the main paper, a key question raised by our proposed loss formulation is how to sample the synthetic flows $W$. The analysis of the properties of the proposed $W$-bipath loss brought some insight into what magnitude of warps to sample during training. If the generated warps are too small, there is still a risk of biasing the prediction towards zero. Instead, using warps of roughly similar order of magnitude as the underlying transformations gives equal impact to all three terms in eq. (8) of the main paper. During training, we randomly sample $W$ from a distribution $p_W$, which we need to design.

Base transformation sampling:  We construct $p_W$ by sampling homography, Thin-plate Spline (TPS), or affine-TPS transformations with equal probability. The transformation parameters are then converted to dense flows at the sampled resolution. Specifically, for homographies and TPS, the four image corners and a grid of control points, respectively, are randomly translated in both horizontal and vertical directions, according to a desired sampling scheme. The translated and original points are then used to compute the corresponding homography and TPS parameters. Finally, the transformation parameters are converted to dense flows. For both transformation types, the magnitudes of the translations are sampled according to a uniform or Gaussian distribution, with a chosen range or standard deviation respectively. Note that for the uniform distribution, the sampling interval is centered at zero, or similarly centered at 1 for scale-like parameters. Importantly, the image point coordinates are previously normalized, so the sampling range is expressed in these normalized units. For the affine transformations, all parameters, i.e. scale, translations, shearing and rotation angles, are sampled from a uniform or Gaussian distribution. For the affine scale parameter, the corresponding Gaussian sampling is centered at one, whereas for all other parameters it is centered at zero. Similarly, for a uniform sampling instead, the affine scale parameter is sampled in an interval centered at 1, while for all other parameters the sampling interval is centered at zero.

Elastic transformations:  To make the synthetic flow $W$ harder for the network to estimate, we also optionally compose the base flow, resulting from sampling homography, TPS and affine-TPS transformations, with a dense elastic deformation grid. We generate the corresponding elastic residual flow by adding small local perturbations. More specifically, we create the residual flow by first generating an elastic deformation motion field on a dense grid, as described in [Simard2003]. Since we only want to include elastic perturbations in multiple small regions, we generate binary masks, each delimiting the area on which to apply one local perturbation. The final elastic residual flow thus takes the form of a sum of masked local perturbations. The final synthetic warp is achieved by composing the base flow with the elastic residual flow. In practice, for the elastic deformation field, we use the implementation of [info11020125]. The masks should be between 0 and 1 and offer a smooth transition between the two, so that the perturbations appear smoothly. To create each mask, we thus generate a 2D Gaussian centered at a random location and with a random standard deviation (up to a certain value) on a dense grid. It is then scaled to 2.0 and clipped to 1.0, to obtain smooth regions equal to 1.0 where the perturbations will be applied, and transition regions on all sides from 1.0 to 0.0.
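The mask construction admits a very short sketch (NumPy, with our own placeholder parameter bounds):

```python
import numpy as np


def elastic_region_mask(h, w, rng=np.random.default_rng()):
    """Smooth 0..1 mask: a 2D Gaussian scaled to 2 and clipped to 1,
    giving a plateau of 1.0 with soft borders, used to localize one
    elastic perturbation."""
    cy, cx = rng.uniform(0, h), rng.uniform(0, w)
    sigma = rng.uniform(10, min(h, w) / 4)  # placeholder bounds
    ys, xs = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    g = np.exp(-((xs - cx) ** 2 + (ys - cy) ** 2) / (2 * sigma ** 2))
    return np.clip(2.0 * g, 0.0, 1.0)

# residual = sum(mask_k * perturbation_k); final W composes base flow
# with this residual.
```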
### b.3 Hyper-parameters

In summary, to construct our image triplet $(I, I', J)$, the hyper-parameters are the following: (i) the resizing image size, to which the pair is resized and at which $W$ is applied to obtain $I'$ before cropping; (ii) the training image size, which corresponds to the size of the training images after cropping; (iii) the range or standard deviation used for sampling the homography and TPS transformations; (iv) the range or standard deviation used for sampling the scaling parameter of the affine transformations; (v) the range or standard deviation used for sampling the translation parameter of the affine transformations; (vi) the range or standard deviation used for sampling the rotation angle of the affine transformations, also used as the shearing angle; (vii) the range or standard deviation used for sampling the TPS transformations in the affine-TPS compositions.

For simplicity, in all experiments including elastic deformations, we use the same elastic transformation hyper-parameters. Moreover, for all experiments and networks, we apply the same appearance transformations to image $I'$. Specifically, we use color transformations, by adjusting contrast, saturation, brightness, and hue. With probability 0.2, we additionally apply a Gaussian blur with a kernel size between 3 and 7 and a randomly sampled standard deviation.

## Appendix C Training details for WarpC-GLU-Net

We first provide details about the original GLU-Net architecture and the modifications we made for this work. We also briefly review the training strategy of the original work. We then extensively explain our training approach and the corresponding implementation details.

Architecture:  We use GLU-Net as our base architecture. It is a 4-level pyramidal network, using a VGG-16 feature backbone [Chatfield14], initialized with pre-trained weights on ImageNet. It is composed of two sub-networks, L-Net and H-Net, which act at two different resolutions. The L-Net takes as input images rescaled to a fixed low resolution and processes them with a global feature correlation layer followed by a local feature correlation layer. The resulting flow is then upsampled to the lowest resolution of the H-Net to serve as initial flow, by warping the query features according to the estimated flow. The H-Net takes as input the original images at unconstrained resolution, and refines the estimated flow with two local feature correlation layers. We adopt the GLU-Net architecture and simply replace the DenseNet connections [Huang2017] of the flow decoders by residual connections. We also include residual blocks in the mapping decoder. This drastically reduces the number of weights while having limited impact on performance.

Training strategy in original work:  In the original GLU-Net [GLUNet], the network is trained with the warp-supervision loss (referred to as a type of self-supervised training strategy in the original publication), which corresponds to equation (3) of the main paper. As for the synthetic sampled transformations, Truong et al. [GLUNet] use the same 40k synthetic transformations (affine, thin-plate and homographies) as in DGC-Net [Melekhov2019], but apply them to images collected from the DPED [Ignatov2017], CityScapes and ADE-20K datasets.

### c.2 WarpC-GLU-Net: our training strategy

We here explain the different steps of our training strategy in more depth.

Training stages:  In the first training stage, we train GLU-Net using our warp consistency loss (Sec. 3.5 of the main paper) without the visibility mask. This is because the estimated flow field needs to reach a reasonable performance in order to compute the visibility mask (eq. 9 of the main paper). In the second training stage, we further introduce the visibility mask in the $W$-bipath loss term (eq. 8 of the main paper). To increase the difficulty in the second stage, we increase the transformation strengths and include additional elastic transformations in the sampled warps $W$. Note that the feature backbone is initialized to the ImageNet weights and not further trained.

Training dataset:  For training, we use the MegaDepth dataset, consisting of 196 different scenes reconstructed from 1,070,468 internet photos using COLMAP [COLMAP]. Specifically, we use 150 scenes of the dataset and sample up to 500 random images per scene, resulting in around 58k training pairs. Note that we use the same set of training pairs at each training epoch. For the validation dataset, we sample up to 100 image pairs from 25 different scenes, leading to approximately 1800 image pairs. Importantly, while we could obtain the corresponding sparse ground-truth correspondences from the SfM reconstructions, we do not use them during training in this work and only retrieve the image pairs.

Warps sampling:  We resize the image pairs to a fixed size, sample a dense flow $W$ of the same dimension and create $I'$. Each of the images of the resulting image triplet is then centrally cropped to the training size. In the following, we give the parameters used for the sampling of the flow $W$ in both training stages. In the first stage, the flows are created by sampling homographies, TPS and affine-TPS transformations with equal probability. For homographies and TPS, we use a uniform sampling scheme with a range corresponding to a displacement of up to 250 pixels at the resizing image size. For the affine transformations, we also sample all parameters, i.e. scale, translation, shear and rotation angles, from uniform distributions.
We compose the affine transformations with TPS transformations, for which we sample the translation magnitudes uniformly with a range corresponding to a displacement of up to 60 pixels. We chose a smaller range for the TPS compositions because we found empirically that large ranges led to very drastic resulting dense affine-TPS flows, which were not necessarily beneficial in the first training stage. In the second stage, we also sample homographies, TPS and affine-TPS transformations, but increase their strength. Specifically, for homography and TPS transformations, we use a range corresponding to displacements of up to 300 pixels. The affine parameters are sampled as in the first training stage, but we increase the range of the uniform sampling for the TPS transformations to displacements of up to 200 pixels. To make the flows even harder to estimate, we additionally include elastic transformations, sampled as explained in Sec. B.

Baseline comparison:  For fair comparison, we retrain GLU-Net using the original training strategy, which corresponds to the warp-supervision training loss, on the same MegaDepth training images. We also use the same altered GLU-Net architecture as for WarpC-GLU-Net. Moreover, we make use of the same synthetic transformations as in our first and second training stages. We call this version GLU-Net*.

### c.3 Implementation details

Since GLU-Net is a pyramidal architecture with $K$ levels, we employ a multi-scale training loss, where the losses at the different pyramid levels are given different weights,

$$L(\theta) = \sum_{l=1}^{K}\gamma_l L_l + \eta\,\|\theta\|\,, \quad (18)$$

where $\gamma_l$ are the weights applied to each pyramid level and $L_l$ is the corresponding loss computed at each level, which refers to the warp-supervision loss (eq. 3 of the main paper) for the baseline GLU-Net* and to our proposed warp consistency loss (Sec. 3.5 of the main paper) for WarpC-GLU-Net. The second term of the loss (18) regularizes the weights $\theta$ of the network, scaled by $\eta$. The hyper-parameters $\alpha_1$ and $\alpha_2$ used in the estimation of our visibility mask (eq. 9 of the main paper) are set to fixed values. During training, we down-sample and scale the sampled flow $W$ from the original resolution to the L-Net resolution, in order to obtain the flow field supervising the L-Net. For the loss computation, we down-sample the known flow field from the base resolution to the different pyramid resolutions without further scaling, so as to obtain the supervision signals at the different levels.

For training, we use similar training parameters as in [GLUNet]. Specifically, as a preprocessing step, the training images are mean-centered and normalized using the mean and standard deviation of the ImageNet dataset [Hinton2012]. For all local correlation layers, we employ a fixed search radius. For our network WarpC-GLU-Net and the baseline GLU-Net*, the weights $\gamma_l$ in the training loss (18) are set to fixed per-level values. During the first training stage, both networks are trained with a batch size of 6 for 400k iterations. The initial learning rate is halved after 250k and 325k iterations. For the second training stage, we train for 225k iterations with a lower initial learning rate, which is halved after 100k, 150k and 200k iterations. The networks are trained using the Adam optimizer [adam] with weight decay.
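A sketch of the multi-scale objective (18) follows (PyTorch-style; the per-level weights shown are placeholders, not the paper's values):

```python
def multiscale_loss(per_level_losses, params,
                    gamma=(0.32, 0.08, 0.02, 0.01), eta=1e-4):
    """Eq. (18): weighted sum of per-level losses plus weight
    regularization. `per_level_losses` is a list of scalar losses,
    one per pyramid level; `params` are the network parameters theta."""
    loss = sum(g * l for g, l in zip(gamma, per_level_losses))
    reg = sum(p.norm() for p in params)  # ||theta||
    return loss + eta * reg
```

In practice the regularization term is often realized through the optimizer's weight-decay setting rather than an explicit sum, which is equivalent up to the norm used.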
## Appendix D Training details for WarpC-RANSAC-Flow

In this section, we first review the RANSAC-Flow architecture as well as its original training strategy. We then explain in more depth the different steps of our training, leading to WarpC-RANSAC-Flow.

Architecture:  RANSAC-Flow inference is divided into two steps. First, the image pairs are pre-aligned by computing the homography relating them, using multi-scale feature matching based on off-the-shelf MOCO features [MOCO] and Ransac. As a second step, the pre-aligned image pairs are input to the trained RANSAC-Flow model, which predicts the flow and matchability mask relating them. The final flow field relating the original images is computed as a composition of the flow corresponding to the homography computed in the pre-alignment step, and the predicted flow field. RANSAC-Flow is a shallow architecture taking image pairs as input, which regresses the dense flow field and matchability mask relating one image to the other. It relies on a single local feature correlation layer computed at one eighth of the input image resolution. The local feature correlation layer is computed with a small search radius of 3. The flow decoder and matchability branch are both fully convolutional with three convolution blocks, while the feature backbone is a modified version of ResNet-18 [HeZRS15].

Training dataset:  As training dataset, RANSAC-Flow uses images of the MegaDepth dataset [megadepth], from which the authors selected a subset of image pairs. They pre-aligned the image pairs using their pre-processing multi-scale strategy, with off-the-shelf MOCO feature [MOCO] matching and homography estimation with Ransac. The resulting training dataset comprises 20k pre-aligned image pairs, for which the remaining geometric transformation between the frames is relatively small.

Training strategy in original work:  In the original work [RANSAC-flow], the training is separated into three stages. First, the network is trained using the SSIM loss [WangBSS04], which is further combined with the forward-backward consistency loss (eq. (2) of the main paper) in the second stage. During the first two stages, only the feature backbone and the flow decoder are trained, while the matchability branch remains unchanged and unused. In the last stage, the matchability branch is also trained, by weighting the previous losses with the predicted mask and including a regularization matchability loss. A disadvantage of this approach is that all losses need to be scaled with hyper-parameters, requiring expensive manual tuning.

### d.2 WarpC-RANSAC-Flow: our training strategy

Training stages:  In the first training stage, we apply our proposed loss (Sec. 3.5 of the main paper) without the visibility mask, as in the first stage of WarpC-GLU-Net. The visibility mask (eq. 8 of the main paper) is introduced in the second stage of training. As in the original RANSAC-Flow, the first two stages only train the feature backbone and the flow decoder, while keeping the matchability branch fixed (and unused). In the third stage, we jointly train the feature backbone, flow decoder and matchability branch. As training loss, we use the original matchability regularization loss and further replace our visibility mask $\hat V$ in the $W$-bipath loss (eq. 8 of the main paper) with the predicted mask, output of the matchability branch.

Warps sampling:  We resize the original images to a fixed size. Following the original RANSAC-Flow, the final training images have a fixed dimension. Because RANSAC-Flow uses a single local correlation layer with a search radius of 3 computed at one eighth of the original image resolution, the network can theoretically only estimate geometric transformations of up to roughly $3 \times 8 = 24$ pixels in all directions. This is very limited compared to GLU-Net or other matching networks.
This makes the RANSAC-Flow architecture very sensitive to the magnitude of the geometric transformations and limited in the range of displacements that it can actually estimate. It also implies that the RANSAC-Flow pre-alignment stage (with off-the-shelf feature matching and Ransac) is crucial for the success of the matching process in general. We thus need to sample transformations within the range of the network's capabilities. As a result, we construct the warps $W$ by sampling only homographies and TPS transformations from a Gaussian distribution. This is because the affine-TPS transformations lead to larger geometric transformations and are more difficult to parametrize for a network very sensitive to the strength of geometric transformations. The Gaussian sampling gives more importance to transformations of small magnitudes, as opposed to the uniform sampling used for WarpC-GLU-Net. The homography and TPS transforms are sampled from a Gaussian distribution with a standard deviation corresponding to a displacement of 24 pixels at the training image size. We further integrate additional elastic transformations, which were shown beneficial to boost the network accuracy. We use the above sampling scheme for all three training stages.

### d.3 Implementation details

RANSAC-Flow only estimates the flow at one eighth of the original image resolution. Loss computation is performed at image resolution, i.e. after upsampling the estimated flow field. Following the original work, we also compute the training losses at image resolution. The hyper-parameters $\alpha_1$ and $\alpha_2$ used in the estimation of our visibility mask (eq. 9 of the main paper) are set to fixed values. For training, we use similar training parameters as in [RANSAC-flow]. As pre-processing, we scale the input network images to a fixed size. During the first training stage, WarpC-RANSAC-Flow is trained with a batch size of 10 for 300k iterations. The initial learning rate is halved after 200k iterations. For the second training stage, we train for 140k iterations with a constant learning rate. Finally, the third training stage also uses an initial learning rate halved after 200k iterations, and comprises a total of 300k iterations. To weight the matchability regularization loss with respect to our warp consistency loss in the third stage, we use a constant factor applied to the matchability loss.
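As an illustration of the third stage, here is a hedged sketch of the described weighting; it is our own formulation, and the regularizer shown is a generic stand-in for the one used in RANSAC-Flow, not its exact form:

```python
def stage3_loss(f_ip_j, f_j_i, w_flow, match_mask, reg_weight=0.01):
    """Replace the estimated visibility mask by the predicted matchability
    mask `match_mask` (shape (B, 1, H, W), values in [0, 1]) in the
    W-bipath residual, and add a regularizer discouraging the all-zero
    mask; `reg_weight` is a placeholder constant."""
    composed = f_ip_j + warp(f_ip_j.detach(), f_j_i)  # eq. (5c)
    residual = (composed - w_flow).abs().sum(dim=1, keepdim=True)
    data_term = (match_mask * residual).mean()
    reg_term = (1.0 - match_mask).mean()  # generic stand-in regularizer
    return data_term + reg_weight * reg_term
```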
## Appendix E Training details for WarpC-SemanticGLU-Net

Here, we first review the SemanticGLU-Net architecture as well as its original training strategy. We then provide additional details about our training strategy, resulting in WarpC-SemanticGLU-Net.

Architecture:  SemanticGLU-Net is derived from GLU-Net [GLUNet], with two architectural modifications making it more suitable for semantic data. Specifically, the global feature correlation layer is followed by a consensus network [Rocco2018b]. The features from the different levels in the L-Net are also concatenated, similarly to [Jeon].

Training strategy in original work:  SemanticGLU-Net was originally trained using the same procedure as GLU-Net [GLUNet], as explained in Sec. C.

### e.2 WarpC-SemanticGLU-Net: our training strategy

Training procedure:  We only finetune on semantic data, starting from the original pretrained SemanticGLU-Net model, initialized with the weights provided by the authors. The VGG-16 feature backbone is initialized to the ImageNet weights and not further finetuned. We use our warp consistency loss (Sec. 3.5 of the main paper), where the visibility mask is directly included. Note that since SemanticGLU-Net is pretrained using solely the warp-supervision objective, the overall training of WarpC-SemanticGLU-Net does not use any flow annotations.

Training dataset:  We use the PF-Pascal [PFPascal] images as training dataset. Following the dataset split in [SCNet], we partition the total of 1351 image pairs into a training set of 735 pairs, a validation set of 308 pairs and a test set of 308 pairs. The 735 training pairs are augmented by mirroring, random cropping and exchanging the images in the pair, leading to a total of 2940 training image pairs.

Warps sampling:  We resize the image pairs to a fixed size, sample a dense flow $W$ of the same dimension and create $I'$. Each of the images of the resulting image triplet is then centrally cropped to the training size. The flows $W$ are created by sampling homographies, TPS and affine-TPS transformations with equal probability. For homographies and TPS, we use a uniform sampling scheme. For the affine transformations, we also sample all parameters, i.e. scale, translation, shear and rotation angles, from uniform distributions. We compose the affine transformations with TPS transformations, for which we sample the translation magnitudes uniformly with a range corresponding to a displacement of 100 pixels.

Implementation details:  For our network WarpC-SemanticGLU-Net, the weights $\gamma_l$ in the training loss (18) are set to fixed per-level values. We train with a batch size of 5, for a total of 7k iterations. The initial learning rate is halved after 4k, 5k and 6k iterations. The network is trained using the Adam optimizer [adam] with weight decay.

## Appendix F Training details for method analysis

For the method analysis corresponding to Sec. 4.1 of the main paper, we use GLU-Net [GLUNet] as the base network. The architecture description and implementation details are given in Sec. C. In this section, for completeness, we provide additional details about the training procedure used for each of the compared networks, where necessary.

Warp consistency graph analysis:  All networks are trained following the first WarpC-GLU-Net training stage, i.e. without including the visibility mask in the bipath or cycle losses. We employ the same warps $W$ for all networks, corresponding to the sampling distribution used in the first training stage, as detailed in Sec. C.

Ablation study:  Networks in the ablation study are trained according to the stages described in Sec. C.

Comparison to alternative losses:  We provide implementation details for the networks trained with alternative losses. For all unsupervised learning objectives, we train the network in two stages. First, we use solely the evaluated loss, without visibility or occlusion mask. In the second stage, we further finetune the resulting model, extending the evaluated loss with the visibility mask, estimated as in [Meister2017]. For the objectives including our warp consistency loss (WarpC) or the warp-supervision loss, we use the same synthetic warp distribution as introduced in Sec. C. In the following, we give details about each training using an alternative loss.

Warp-supervision + forward-backward: Selecting a hyper-parameter is necessary to weight the forward-backward loss with respect to the warp-supervision objective. After manual tuning, we weight the forward-backward term with a constant factor.
It ensures that the forward-backward term accounts for about half of the magnitude of the warp-supervision loss. For further implementation details, refer to Sec. C.

Census: The implementation details are the same as explained in Sec. C. In particular, we found that downsampling the images to the flow resolution at each level for the loss computation gave better results than upsampling the estimated flows to image resolution.

SSIM: To compute the loss, we upsample the estimated flow from each level to image resolution, i.e. to the original resolution for the H-Net and to the fixed low resolution for the L-Net. This strategy led to significantly better results than downsampling the images instead. As a result, because GLU-Net is a multi-scale architecture and the loss is computed using the flow from each resolution, the weights $\gamma_l$ of the final training loss (18) are set equal for all levels. This gives equal contribution to all levels, since the estimated flows at the levels of the L-Net and H-Net are upsampled to the corresponding image resolutions. SSIM is computed with a window size of 11 pixels, following RANSAC-Flow [RANSAC-flow].

SSIM + forward-backward: The model trained using the SSIM loss is further finetuned with the combination of the photometric SSIM and forward-backward consistency losses (eq. 2 of the main paper). Both loss terms are balanced with a constant factor applied to the forward-backward consistency term. It ensures that the forward-backward term accounts for about half of the magnitude of the SSIM loss. Implementation details are the same as when training with the SSIM loss only.

SSIM + WarpC: For the WarpC loss, we follow the training procedure and implementation details provided in Sec. C, i.e. we compute the loss at the estimated flow resolution. For the SSIM loss term, we instead follow the training strategy explained above, i.e. we compute the loss at image resolution. For the WarpC term, the per-level weights of the final training loss (18) are set as in Sec. C, while for the SSIM loss term they are set equal for all levels. Each loss term, i.e. WarpC and SSIM, is computed independently, and the final loss is the sum of both.

Sparse ground-truth data: Since the ground truth is sparse, it is inconvenient to down-sample it to different resolutions. We thus instead up-sample the estimated flow fields to the ground-truth resolution and compute the loss at this resolution. As for SSIM, we therefore use equal weights for all levels of the final training loss (18).

## Appendix G Analysis of transformations W

In this section, we analyze the impact of the sampled transformations' strength on the performance of the corresponding trained WarpC networks. As explained in Sec. B, the strength of the warps $W$ is mostly controlled by the standard deviation or range used to sample the base homography and TPS transformations. We thus analyze the effect of this sampling range on the evaluation results of the corresponding WarpC networks, particularly WarpC-GLU-Net and WarpC-SemanticGLU-Net. We do not provide such an analysis for WarpC-RANSAC-Flow because, as mentioned in Sec. D, the RANSAC-Flow architecture is limited to a small range of displacements that it can estimate, which also limits the range over which we can sample the warps $W$. While we choose a specific distribution to sample the transformation parameters used to construct the flow $W$, our experiments show that the performance of the networks trained with our proposed warp consistency loss (Sec. 3.5 of the main paper) is relatively insensitive to the strength of the transformations $W$, as long as they remain within reasonable bounds.
We present these experiments below.

WarpC-GLU-Net:  Specifically, we analyze the PCK curves obtained by GLU-Net based models, trained following our first training stage (Sec. C), for varying ranges used to sample the TPS and homography transformations. Note that for all networks, the sampling distributions of the affine-TPS transformations are the same. We plot the resulting curves, computed on the MegaDepth and RobotCar datasets, in Fig. 9. For completeness, we additionally plot the PCK values at fixed pixel thresholds versus the sampling range in Fig. 10. On MegaDepth, increasing the sampling range initially leads to an improvement of the resulting network's robustness to large geometric transformations, i.e. an increase in PCK-3, 5 and 10. Increasing it further leads to a decrease in these PCK values. For PCK-1 however, networks trained with sampling ranges within a broad interval obtain similar accuracy; the accuracy starts dropping only for larger sampling ranges. We select our final value because it obtains the best PCK-1 together with good PCK-3, 5 and 10. Nevertheless, note that networks trained using sampling ranges within this interval lead to relatively similar PCK metrics, within 2-3%. Moreover, on RobotCar, all networks obtain similar PCK metrics, independently of the sampling range.

WarpC-SemanticGLU-Net:  As for WarpC-GLU-Net, we show that the performance of WarpC-SemanticGLU-Net is relatively insensitive to the strength of the transformations $W$, as long as they remain within reasonable bounds. Specifically, we analyze the PCK curves obtained by WarpC-SemanticGLU-Net based models, for varying ranges used to sample the TPS and homography transformations of $W$ during training. Note that for all networks, the sampling distributions of the affine-TPS transformations are the same. We plot the resulting curves evaluated on the test set of PF-Pascal in Fig. 7, and the results for specific PCK values in Fig. 8. For sampling ranges within the considered interval, the results of the corresponding trained WarpC-SemanticGLU-Net models are all very similar overall. In particular, the gap between all networks at the larger threshold is very small, within 1%, while the differences at the smaller threshold are somewhat larger. We selected our final value because it led to a slightly better PCK at the low threshold.

## Appendix H Experimental setup and datasets

In this section, we first provide details about the evaluation datasets and metrics. We then explain the experimental set-up in more depth.

### h.1 Evaluation metrics

AEPE:  AEPE is defined as the Euclidean distance between the estimated and ground-truth flow fields, averaged over all valid pixels of the reference image.

PCK:  The Percentage of Correct Keypoints (PCK) is computed as the percentage of estimated correspondences with a Euclidean distance error, w.r.t. the ground truth, that is smaller than a threshold $T$.
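Both metrics reduce to a few lines; a NumPy sketch (ours), under the convention that flows are arrays of shape (N, 2) over the valid ground-truth points:

```python
import numpy as np


def aepe(flow_est, flow_gt):
    """Average End-Point Error: mean Euclidean distance between flows."""
    return np.linalg.norm(flow_est - flow_gt, axis=-1).mean()


def pck(flow_est, flow_gt, thresh):
    """Percentage of Correct Keypoints at pixel threshold `thresh` (in %)."""
    err = np.linalg.norm(flow_est - flow_gt, axis=-1)
    return 100.0 * (err <= thresh).mean()
```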
### h.2 Evaluation datasets and set-up

HPatches:  The HPatches dataset [Lenc] is a benchmark for geometric correspondence estimation. It depicts planar scenes, with transformations restricted to homographies. As in DGC-Net [Melekhov2019], we only employ the 59 sequences labelled with v_X, which have viewpoint changes, thus excluding the ones labelled i_X, which only have illumination changes. Each image sequence contains a query image and 5 reference images taken under increasingly larger viewpoint changes.

MegaDepth:  The MegaDepth dataset [megadepth] depicts real scenes with extreme viewpoint changes. No real ground-truth correspondences are available, so we use the result of SfM reconstructions to obtain sparse ground-truth correspondences. We follow the same procedure and test images as [RANSAC-flow], spanning 19 scenes. More precisely, 1600 pairs of images sharing more than 30 points were randomly sampled. The test pairs are from different scenes than the ones we used for training and validation. Correspondences were obtained by using 3D points from the SfM reconstructions and projecting them onto the pairs of matching images. This results in approximately 367K correspondences. During evaluation, following [RANSAC-flow], all images are resized to have a minimum dimension of 480 pixels.

RobotCar:  Images in RobotCar depict outdoor road scenes, taken under different weather and lighting conditions. While the image pairs show similar viewpoints, they are particularly challenging due to their many textureless regions. For evaluation, we use the correspondences originally introduced by [RobotCarDatasetIJRR]. Following [RANSAC-flow], all images are resized to have a minimum dimension of 480 pixels.

TSS:  The TSS dataset [Taniai2016] contains 400 image pairs, divided into three groups: FG3DCAR, JODS, and PASCAL, according to the origins of the images. The dense flow field annotation for the foreground object in each pair is provided along with a segmentation mask. Evaluation is done on 800 pairs, by also exchanging query and reference images.

PF-Pascal:  The PF-PASCAL [PFPascal] benchmark is built from the PASCAL 2011 keypoint annotation dataset [BourdevM09]. It consists of 20 diverse object categories, ranging from chairs to sheep. Sparse manual annotations are provided for 300 image pairs. Evaluation is done by computing the PCK for a pixel threshold computed with respect to the query image size.

PF-Willow:  The PF-WILLOW dataset consists of 900 image pairs selected from a total of 100 images [PFWillow]. It spans four object categories. Sparse annotations are provided for all pairs. For evaluation, we report the PCK scores at multiple thresholds ($\alpha$ = 0.05, 0.10, 0.15) with respect to bounding box size, in order to compare with prior methods.

## Appendix I Qualitative results

Finally, we provide extensive qualitative visual examples of the performance of our WarpC models. We first qualitatively compare the baseline GLU-Net* and our approach WarpC-GLU-Net on images of MegaDepth in Fig. 12 and 13, and of RobotCar in Fig. 11. WarpC-GLU-Net is significantly more accurate than GLU-Net*. It can also handle very drastic scale and viewpoint changes, where GLU-Net* often completely fails. This is thanks to our $W$-bipath objective, which provides supervision on the network predictions between the real image pairs, as opposed to the warp-supervision objective. Also note that in the dense estimation setting, the network must predict a match for every pixel in the reference, even in obviously occluded regions. Nevertheless, only correspondences found in overlapping regions are relevant. Moreover, occluded regions can be filtered out using e.g. a forward-backward consistency mask [Meister2017], or by letting the network predict a visibility mask as in [RANSAC-flow, Melekhov2019]. This is particularly important for MegaDepth images, in which some image pairs have very low overlapping ratios. On the RobotCar images in Fig. 11, our approach WarpC-GLU-Net better handles large appearance variations, such as seasonal or day-night changes. We then show the performance of WarpC-SemanticGLU-Net compared to SemanticGLU-Net on images of TSS in Fig. 14, 15 and 16.
Our unsupervised finetuning brings visible robustness to the large appearance changes and shape variations inherent to the semantic matching task. Finally, we also qualitatively compare both networks on images of the PF-Pascal dataset in Fig. 17, 18 and 19. The PF-Pascal dataset shows more diverse object categories than the TSS images. WarpC-SemanticGLU-Net manages to accurately align challenging image pairs, such as the chair examples, which are particularly cluttered.
http://www.gradesaver.com/textbooks/math/algebra/intermediate-algebra-connecting-concepts-through-application/chapter-4-quadratic-functions-4-1-quadratic-functions-and-parabolas-4-1-exercises-page-299/12
Intermediate Algebra: Connecting Concepts through Application $g(x)=2x^3+4x^2-2x-8$ Since the highest power of $x$ is 3, this function is neither a linear function (degree 1) nor a quadratic function (degree 2), so it is another type of function (a cubic polynomial).
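A quick programmatic way to check this classification by degree — a sketch of ours using sympy, not part of the textbook solution:

```python
import sympy as sp

x = sp.symbols('x')
g = 2*x**3 + 4*x**2 - 2*x - 8
deg = int(sp.degree(g, x))
# degree 1 -> linear, degree 2 -> quadratic, anything else -> other
label = {1: 'linear', 2: 'quadratic'}.get(deg, 'another type of function')
print(deg, label)  # 3 another type of function
```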
2018-04-20 15:03:18
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6900946497917175, "perplexity": 559.2242040565113}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-17/segments/1524125938462.12/warc/CC-MAIN-20180420135859-20180420155859-00576.warc.gz"}
https://phys.libretexts.org/Bookshelves/College_Physics/Supplemental_Modules_(College_Physics)/Introductory_Kinematics/03%3A_Circular_Motion/3.01%3A_Uniform_Circular_Motion_and_Analogy_to_Linear_Motion
# 3.1: Uniform Circular Motion and Analogy to Linear Motion

Uniform circular motion refers to a body moving in a circular path without angular acceleration (circular motion is impossible without centripetal acceleration). Angular acceleration is the rate of change of angular velocity, and angular velocity is the rate of change of angular displacement. In short, each angular quantity is the analog of a linear quantity, except that it describes the angle swept out about the axis of rotation rather than a distance-based quantity.

### Analogy between linear and angular motion

As described before, rotational motion can be understood by analogy to linear motion. This section will list the analogs and then explain why the analogy works. For accelerated linear motion, we had 3 quantities: velocity, acceleration and displacement. The angular analogs and their symbols are listed below:

| Linear/Planar quantity | Angular/Rotational quantity |
| --- | --- |
| Displacement ($$\overrightarrow{s}$$) | Angular displacement ($$\overrightarrow{\theta}$$) |
| Velocity ($$\overrightarrow{v}$$) | Angular velocity ($$\overrightarrow{\omega}$$) |
| Acceleration ($$\overrightarrow{a}$$) | Angular acceleration ($$\overrightarrow{\alpha}$$) |

Recall our three kinematic equations. For rotational motion, the same equations apply. The only difference is that we substitute in the angular analogs of the corresponding quantities. The equations are shown below:

| Linear/Planar equation | Angular/Rotational equation |
| --- | --- |
| $$\overrightarrow{s} = \overrightarrow{u} t + \tfrac{1}{2} \overrightarrow{a} t^2$$ | $$\overrightarrow{\theta} = \overrightarrow{\omega_0} t + \tfrac{1}{2} \overrightarrow{\alpha} t^2$$ |
| $$\overrightarrow{v} = \overrightarrow{u} + \overrightarrow{a}t$$ | $$\overrightarrow{\omega} = \overrightarrow{\omega_0} + \overrightarrow{\alpha}t$$ |
| $$v^2 - u^2 = 2as$$ | $$\omega^2 - \omega_0^2 = 2\alpha \theta$$ |

(Here $$\overrightarrow{u}$$ and $$\overrightarrow{\omega_0}$$ denote the initial velocity and the initial angular velocity, respectively.)

#### Why does this analogy work?

It might seem strange that we can swap out the values this way, but it is actually quite reasonable to do so. When we devised these equations, we didn't explicitly use any property of vectors. So, these equations just describe the relations between the magnitudes of these quantities, i.e. they are just relations between quantities changing in a certain way. However, note that we have used the arrow denoting a vector in the equations on both sides. This means that these equations hold for vector quantities as well. This is because the angular quantities too form a vector, similar to linear motion. This is actually a pseudovector and is the result of taking the cross-product (coming soon) of the radius of rotation and the angular quantity. Effectively, we can now treat rotational motion the same as linear motion. However, it is important to note that we cannot add/subtract vectors and pseudovectors. We first have to solve the problem as a problem of linear motion at a small scale, and then calculate the pseudovector. Pseudovectors are beyond the scope of this book, but we can utilize them for our problem-solving if we keep this point in mind.
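To make the analogy concrete, here is a small numerical sketch (ours, not from the LibreTexts page) that applies the rotational equations and cross-checks them against the time-free equation:

```python
import math

def angular_state(omega0, alpha, t):
    """Angular displacement and velocity after time t, from the
    rotational analogs of the constant-acceleration equations:
    theta = omega0*t + 0.5*alpha*t**2,  omega = omega0 + alpha*t."""
    theta = omega0 * t + 0.5 * alpha * t**2
    omega = omega0 + alpha * t
    return theta, omega

# A wheel starting at 2 rad/s with alpha = 0.5 rad/s^2, for 4 s:
theta, omega = angular_state(omega0=2.0, alpha=0.5, t=4.0)
print(theta, omega)  # 12.0 rad, 4.0 rad/s
# cross-check with omega^2 - omega0^2 = 2*alpha*theta
print(math.isclose(omega**2 - 2.0**2, 2 * 0.5 * theta))  # True
```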
2022-05-23 20:35:13
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9295732975006104, "perplexity": 394.93543024128115}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662561747.42/warc/CC-MAIN-20220523194013-20220523224013-00425.warc.gz"}
https://webdesign.tutsplus.com/tutorials/foundation-for-beginners-sticky-navigation-flexible-video-and-zepto--webdesign-13754
# Foundation for Beginners: Sticky Navigation, Flexible Video and Zepto

This post is part of a series called Foundation for Beginners. In this part of our examination of Zurb Foundation we'll talk about Magellan, which creates sticky navigation. We'll look at visibility classes, right-to-left support, keystrokes, thumbnails, flexible video and the ins and outs of zepto. I'll explain the use cases of these features and how best to implement them in your projects. In this session we've covered a large portion of the Foundation framework and there is still a fair way to go. Let's now look at a few features which you can quickly implement into your sites. We'll start with a JavaScript plugin this time; it's called Magellan and allows you to add sticky navigation to your projects.

## Magellan

It's not uncommon for a client to ask for navigation that stays fixed on the page when the user navigates away from the standard navigation, such as scrolling down the page. Magellan comes to the rescue, adding fixed navigation to the page once the user scrolls past a certain point. With simple markup it's easy to get started. Housed in a traditional unordered list structure with a div to contain everything, this should be quick and easy to get up and running. There are two special attributes for Magellan: data-magellan-expedition, which is used as the position type, and data-magellan-arrival, which can be paired with the likes of a page anchor. As this JavaScript plugin is so lightweight, the only other options you need to know are those which can be passed on initialization. So again, two options here: set yourself a custom magellan class and set the point on the page where magellan will spring into action.

## Visibility

Now you see it, now you don't! Quite often it can be tricky re-jigging all your content for different screen sizes. With Foundation's visibility classes you can simply display more or fewer elements depending on the breakpoint set. Let's start with the "show" classes, all fairly simple with a variety of breakpoints to play with. I find visibility classes most useful when I need to add/remove elements between smartphone and desktop, maybe for a device like an iPad. To hide elements at these same break points just use "hide" instead of "show". Perhaps you need a specific orientation-based class? These are also included. You'll also notice classes for touch too. Spoilt for choice here. Again, these will come in really handy if you're in a tight spot and need to hide some elements or, the polar opposite, if you find yourself in a vast open space you can throw some more elements in.

## RTL Support

I thought I'd cover this quickly as no-one really mentions right-to-left support. This is supremely handy if you ever have an international project with an arabic-centric (for example) user base. Take Persian, which uses a modified version of the Arabic alphabet and requires right-to-left text. Whereas sites like BBC News demonstrate this functionality, they have a team of developers capable of taking on this design challenge. In Foundation, it's all been done for you. Replace your html tag with something like this and you'll be away. Wait, I know what you're thinking, I don't want Arabic, right? No worries, Chinese zh, Farsi fa, Hebrew he/iw, Japanese ja, Urdu ur, Yiddish yi/ji are also supported.
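For Arabic, for example, the opening tag would look something like the following (a sketch; the exact class list depends on your own boilerplate):

```html
<!-- dir="rtl" flips the layout; swap the lang value for your target language -->
<html class="no-js" lang="ar" dir="rtl">
```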
Simply change the value of the "lang" attribute to your preferred language.

## Keystrokes

I bet many of you who have downloaded Foundation have skimmed over the keystrokes checkbox as you weren't sure what it was. Right? Well, if you haven't already looked at the docs, you're about to find out. The keystroke styles wrap around text and display it as a key. So, rather than saying press control, alt and delete in boring old text, you can spruce it up and give it some meaning. Simplistic but effective, especially if you have keys assigned to certain functions, such as making the directional buttons control the orbit slider.

## Thumbnails

Thumbnails are a quick way to style a small image with an anchor. This is one of those small details that adds to the magic that is Foundation. Grab yourself an image, slap it in an anchor tag with a class of "th". Boom. Job done. This is great for projects that have little or no design input, as the developer can make the site look great with the minimal styles included in the framework.

## Flex Video

Embedding videos can be a pain, especially with those old flash-based video players that have double sets of dimension attributes (ugh, what an inconvenience), and making videos responsive, too, is an effort. The goal is simple; you want to add your favorite video of lol cats to your site, but the darn thing gets squished, or cut off when you watch it on your phone every night before bed, or every morning on the iPad whilst in the bathroom. With flex-video you can release a sigh of relief and get back to all the lol cat action. Wrap any video from popular sites (like Youtube and Vimeo) in a div with the "flex-video" class, then Foundation will keep the aspect ratio in place, scaling the video and preventing horribly disfigured or chopped-in-half lol cats.

## Zepto

In every package of the Foundation framework lies a special JavaScript library called "zepto", which behaves in much the same way as jQuery. On page load Foundation looks to see if zepto will run; if it does, then jQuery is sent to the naughty step. The trouble is, though, zepto isn't a complete replacement for jQuery. In fact, it's roughly 7,100 lines of code lighter than jQuery, so why is this and why would Foundation drop jQuery for this infantile JavaScript library? Zepto duplicates enough of the jQuery feature set to cope with all the basic commands Foundation lays out. Using a library that's much lighter is a smart move now that the framework is mobile-first - it takes more than a few seconds to load jQuery on a mobile device over a cellular connection. Those additional 7,100 lines of code aren't redundant though, as they take care of all of the fancy animation and browser compatibility, amongst other stuff. I'm a pretty heavy jQuery animation user, so I tend to just kill zepto from the get go. However, I recently worked on a project which was mobile-only and which did not require any jQuery animation; using zepto gave users a load time that was a full 2.3 seconds faster. So if you can do without the bloat of jQuery, then zepto is a fantastic bare-bones replacement.

### Useful Tool

Found on the mass:werk website, this is a fun and potentially addictive way to get lost on your next site project. Use your arrow keys to control the gun, <kbd>A</kbd> & <kbd>S</kbd> to move, and <kbd>space</kbd> to fire (see what I did there?)
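To make the thumbnail and flex-video patterns described above concrete, here's a rough sketch (the file names and video URL are placeholders, not from the original article):

```html
<!-- Thumbnail: the "th" class styles the image inside the anchor -->
<a class="th" href="lolcat-large.jpg"><img src="lolcat-thumb.jpg" alt="Lol cat"></a>

<!-- Flex video: the wrapper keeps the embedded player's aspect ratio responsive -->
<div class="flex-video">
  <iframe src="https://www.youtube.com/embed/VIDEO_ID" frameborder="0" allowfullscreen></iframe>
</div>
```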
2021-10-27 04:49:08
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.19068466126918793, "perplexity": 2854.362317145954}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323588053.38/warc/CC-MAIN-20211027022823-20211027052823-00388.warc.gz"}
https://iwaponline.com/wst/article/79/2/240/65512/Simultaneous-photocatalytic-reduction-degradation
Toxic heavy metals and organic pollutants simultaneously exist in the wastewater of some industries. This study explores the reduction of toxic divalent nickel ions, from either nitrate or sulfate salts, coupled with naphthalene (NA) degradation using a titania photocatalyst in an efficient photo-sono reactor. A synergism appears when the reduction and degradation treatments occur simultaneously in the media. With initial concentrations of [Ni(II)]0 = 5 mg/L and [NA]0 = 10 mg/L, under the prevailing mild conditions, removal efficiencies of 54.5% and 56.6% were obtained for Ni(II) and NA, respectively, when nickel nitrate was used. These efficiencies were enhanced to 59.2% and 57.5%, respectively, with nickel sulfate, all after 90 min operation. For evaluating the mechanism of the reactions, reactive oxygen species analysis on the solutions, as well as Fourier transform infrared, scanning electron microscopy and Brunauer-Emmett-Teller analyses on the titania nanoparticles before and after usage, were performed. The reaction kinetics was also followed for the individual species in the mixed solution and, accordingly, the energy consumption was evaluated for a one order of magnitude decrease in pollutant concentration. The high performance of the used method was revealed in comparison to similar reported reduction/degradation processes. Pollution of water and wastewaters by toxic heavy metals and organic compounds has been a major concern during recent decades. Among heavy metals, nickel is one of the potentially toxic metals, and it has many applications in important industries such as stainless steel, nickel alloy, storage battery and electroplating (Joshi et al. 2011). Although Ni(II) has been identified as a participant in important metabolic reactions such as ureolysis, acidogenesis and methane biogenesis (Akhtar et al. 2004), human contact with excessive amounts of this ion can cause problems ranging from skin irritation to lung damage and adverse effects on the nervous system and mucous membranes (Caicedo 2008). In aquatic environments, nickel appears as Ni(II) and zero-valent Ni0; however, the potential toxicity of Ni(II) is reported as high whereas Ni0 is only slightly toxic (Chen & Ray 2001). The World Health Organization (WHO) recommends that the nickel concentration in waters not exceed 0.5 mg/L (WHO 2008). Thus, reduction of Ni(II) to less harmful species would be very beneficial. Further, wastewaters from industries like electroplating usually contain organic compounds, which are used in different stages of the manufacturing processes as balms, ductile materials, polishers and so on. Naphthalene (NA), for instance, is widely used in different metal industries as a brightener (Doan & Saidi 2008). Several methods have been reported to treat wastewaters containing heavy metals and other pollutants. Ion exchange, membrane separation, adsorption, coagulation, electrolysis, and chemical precipitation are noteworthy. At the same time, it has been pointed out that these methods come with some disadvantages, including secondary pollution, large quantities of chemical reagents, high cost, poor treatment efficiency at low pollutant concentration and high energy consumption (Akhtar et al. 2004; Alqadami et al. 2017; Naushad et al. 2017). A promising alternative is using advanced oxidation processes (AOPs) for pollutant removal due to their high efficiency, wide adaptability and potential for simultaneous treatment of different pollutants (Saravanan et al. 2018).
Among AOPs, heterogeneous photocatalysis using semiconductors has been demonstrated with great success for the reduction of toxic metals as well as the oxidation of organic compounds (Wang et al. 2004). Upon irradiation of the semiconductor–solution interface with light energies greater than the semiconductor band gap, electron–hole pairs are formed in the photocatalyst conduction and valence bands, respectively (Barakat 2004). The electron–hole pairs are then separated and the adsorbed species on the sites of the catalyst undergo photo-induced redox reactions. For this aim, titania has been known as a very efficient photocatalyst since its band gap is around 3.2 eV, with a conduction band (CB) energy of −0.3 eV and a valence band energy of +2.9 eV at pH 5.6 (Blake et al. 1991). Thus, any metal ion with a reduction potential less negative than −0.3 V will potentially be reduced by the photo-generated electrons of TiO2. Also, the photocatalytic process, under aerobic conditions, can generate several reactive oxygen species (ROS), including the superoxide anion radical, the hydroxyl radical and hydrogen peroxide, through reactions with dissolved oxygen (DO) (Nosaka & Nosaka 2017). The ROS contribute to the oxidation and reduction reactions. A variety of methods for simultaneous photocatalytic reduction of heavy metals and degradation of organic compounds have been proposed. For instance, the photo-reduction of Cu(II) and Se(IV) in the presence of formic acid and ethylene diamine tetra-acetic acid (EDTA) was investigated by Aman et al. (2011). Simultaneous Cr(VI) reduction and NA degradation in aqueous solutions by UV/TiO2 was investigated in another work by Gutierrez et al. (2008). A synergic effect between Cr(VI) reduction and NA degradation by the UV/TiO2 process was reported. In our recent study, simultaneous photocatalytic reduction of Cr(VI) and Ni(II) ions, coupled with degradation of sodium dodecyl benzene sulfonate, was reported (Saien & Azizi 2015). Enhanced photocatalytic treatment of pollutants is of interest due to the low dosage of titania nanoparticles and the mild operating conditions. Here, an attempt was made, for the first time, at the photocatalytic reduction of nickel ions originating from nitrate or sulfate salts in the presence of NA. The nitrate and sulfate salts are often present in the wastewaters of industries like petroleum refining, electroplating, petrochemicals, dyeing, textiles and steel manufacturing (Ashour et al. 2008; Besharat 2010). NA, on the other hand, is extensively used as a brightener and ductile agent in the nickel electroplating industries (Doan & Saidi 2008; Besharat 2010). Experiments were conducted in an efficient photo-sono reactor working with UV irradiation. For practical applications, the process performance was evaluated in terms of reaction kinetics, energy consumption and a specified process efficiency.

### Chemicals

All the used chemicals, including nickel nitrate hexa-aqua (98.5%), nickel sulfate hexa-aqua (99%), 1-(2-pyridylazo)-2-naphthol (PAN) (>99%), ethanol (99.9%) and Triton X-100, naphthalene (99%), sulfuric acid (98%), sodium nitrate (98.9%), sodium sulfate (99%), potassium iodide (99.8%), sodium acetate (99%), ammonium dimolybdate (>99%) and sodium hydroxide (>97%), were purchased from Merck. Nanosized titanium dioxide particles, P-25 (>99.5%), were an Evonik product (anatase to rutile weight ratio of about 80/20) with a Brunauer-Emmett-Teller (BET) surface area of 50 ± 15 m2/g and an average particle diameter of 21 nm.
All aqueous solutions were prepared using deionized water with a conductivity of less than 0.08 μS/cm.

### Photo reactor and experimental procedure

A 1.25 L cylindrical photo-sono reactor made of glossy stainless steel with 90 mm diameter and 200 mm height was used for the experiments. The light source was a 250 W mercury lamp (165 mm body length and 80 mm arc length) with a wavelength range of 280–400 nm and maximum emission at 365 nm. The lamp was installed centrally and immersed in the solutions. This configuration allows homogeneous irradiation and perfect reflection of the beams contacting the reactor wall. At the beginning of each experiment, the ultrasound source (28 kHz, 60 W), located outside the bottom of the reactor, was operated for 5 min, propagating waves in the reaction media in order to break up any particle clusters and provide homogeneous TiO2 nanoparticle suspensions. During experiments, the temperature was maintained constant by means of a stainless steel water-flow jacket connected to a thermostat bath. To run experiments, a solution (1 L) containing the specified concentrations of a Ni(II) salt and NA was prepared and transferred into the reactor after pH adjustment using either sulfuric acid or sodium hydroxide dilute solutions. The temperature was then set to the desired value. An amount of the catalyst particles (100 mg/L) was added, and prior to light irradiation, the suspension was sonicated and then mixed for 30 min in the dark to ensure adsorption–desorption equilibria of the pollutants on the TiO2 particles. All the experiments were performed while the content was continuously mixed with a simple agitator. The initial pollutant concentrations in the mixed solutions were 5 mg/L of Ni(II) and 10 mg/L of NA (unless mentioned otherwise). The nickel and NA initial concentrations were selected so that the feasible final concentrations at the end of the treatments would be close to the permissible limits for discharge into surface water: 2 and 0.06 mg/L for Ni(II) and NA, respectively (WHO 2008). According to a report, the NA concentration frequently found in electroplating, dyeing and textile wastewaters is within 0.1–300 mg/L (Nesterenko-Malkovskaya et al. 2012). To follow the reaction progress, 2 mL samples (at least two samples) were withdrawn each time, and after vigorous centrifuging to separate the nanoparticles, were divided into 1 mL samples for analyzing the Ni(II) and naphthalene content.

### Analytical method

The concentrations of Ni(II) and NA were determined from their absorption in the spectra obtained by a UV-vis spectrophotometer (Jasco, V-630, Japan). For the Ni(II) ion, colorimetry at λmax = 568 nm was performed using the PAN color reagent as a standard method (Clesceri et al. 1998). Accordingly, the best conditions for the formation of the Ni(II) complex were obtained by adding 2.5 mL of 1.0% Triton X-100 in water solution, 2 mL of buffer solution (pH 8.5) and 1.0 mL of 0.01% PAN solution in ethanol, together with 1 mL of the collected sample, all added into a 10 mL standard flask and made up to the marked level with deionized water. The NA analysis was performed from the change in the absorbance at its maximum wavelength of 275 nm (Mahmoodi & Sargolzaei 2014). Using these analysis procedures, the removal efficiency (RE) at any time was obtained from: $\mathrm{RE}\,(\%) = \frac{C_0 - C_t}{C_0} \times 100$ (1) where $C_0$ and $C_t$ denote the initial and time-$t$ concentrations of a pollutant species, Ni(II) or NA. As a representative of the ROS, the amount of H2O2 present was measured by the colorimetric method with iodide.
Briefly, 0.2 mL of the diluted solution was mixed with 1.6 mL deionized water, 0.1 mL 1 M potassium iodide, and 0.1 mL 1 M sodium acetate buffer containing a few drops of ammonium dimolybdate as catalyst for the oxidation of I− by H2O2 to I3−. The absorbance of the solutions was measured by the UV-vis spectrophotometer at λmax = 360 nm (Diesen & Jonsson 2014). The separated TiO2 particles, after process utilization, were washed with deionized water and then dried at room temperature in a dark chamber. These particles as well as pure TiO2 particles (before usage) were characterized in different ways.

### Reduction of Ni(II) ions

In preliminary experiments, the reduction of sole Ni(II) ions, originating from nitrate or sulfate salts, within a concentration range of 5–20 mg/L and the conventional pH range of 7.5–9.5 (Saien et al. 2014; Saien & Azizi 2015), was examined. The titania photocatalyst concentration was fixed at 100 mg/L and the temperature (T) was set within 20 to 40 °C. Under conditions of pH = 9.5 and T = 35 °C, removal efficiencies of 71.4% and 77.2% were achieved with 5 mg/L initial concentration of Ni(II), originating from nitrate and sulfate salts, respectively (Figure 1). Figure 1 Effect of the salt source on the removal efficiency of Ni(II) after 90 min; [Ni(II)]0 = 5 mg/L, pH = 9.5 and T = 35 °C. Counter-anions can play a role in scavenging the photo-generated hydroxyl radicals and the photo-generated holes, and consequently enhance the photocatalytic reduction of metal ions, according to the hole- and radical-scavenging reactions (2)–(5), or alternatively (6) and (7) (Burns et al. 1999). Thus, the capture of holes by counter-anions causes the photo-promoted electrons in the CB to become more readily available for nickel reduction, inhibiting electron–hole recombination. The interaction between counter-anions and titania particles leads to easier adsorption of nickel cations and therefore more ion reduction. It is noteworthy that the influence of ionic strength on the electrostatic interaction between ions and the catalyst surface follows the charge-density order presented in the Hofmeister series. Since the SO42− anion has a higher charge density compared to NO3−, higher ion reduction will correspond to nickel sulfate. In a previous study on the photocatalytic treatment of metal ions and organic pollution by sulfuric acid-modified titania particles, it was reported that sulfate ions improve the photocatalytic activity (Samantaray et al. 2003). Meanwhile, ions from the nitrate salt may convert to nitrite ions, NO2−, due to photolysis, and in turn generate H2O2 molecules, which is a disturbing factor in the reduction of Ni(II) ions (Tzou et al. 2008). Figure 2(a) shows a significant enhancement in the nickel RE with pH variation from 7.5 to 9.5. It is well known that the TiO2 surface is negatively charged at pHs above pHPZC (point of zero charge), which is within 6.7–7.5 for P-25 TiO2 (Wang et al. 2004). An increase in the solution pH causes the TiO2 surface to acquire negative charge, leading to higher Ni(II) ion adsorption. Thus, the nickel reduction increased from 23.6 to 69.8% for nickel sulfate and from 17.6 to 62.1% for nickel nitrate when the pH was increased from 7.5 to 9.5, respectively. In addition, Ni(OH)2 formation and precipitation occurs at highly alkaline pHs (Lin & Rajeshwar 1997; Shirzad-Siboni et al. 2012; Gutha et al. 2015).
Accordingly, for treatments with just the photocatalytic process, the pH of the solutions was limited to a maximum of 9.5. Figure 2 Removal efficiency of Ni(II), after 90 min, versus temperature with [Ni(II)]0 = 5 mg/L (a) and versus nickel initial concentration under the same conditions and T = 35 °C (b). Figure 2(a) also demonstrates that temperature enhances the nickel RE; however, temperatures above 35 °C do not provide much variation, which may be due to the low-level activation energy of the photocatalytic reactions (Saien & Azizi 2015). The influence of the nickel initial concentration is shown in Figure 2(b). Obviously, RE decreases with nickel concentration. This is because the catalyst surface becomes highly covered at high nickel concentrations and the light reaching the catalyst surface is diminished, so fewer electron–hole pairs are generated. Moreover, kinetic studies revealed pseudo-first-order kinetics for the photocatalytic reduction of Ni(II) under conditions of [Ni(II)]0 = 5 mg/L, pH = 9.5 and T = 35 °C, for both used salts. The rate constants were obtained as 1.81 × 10−3 and 1.63 × 10−3 1/min for the sulfate and nitrate salts, respectively.

### Simultaneous reduction of Ni(II) and degradation of NA

In this step, preliminary experiments on the adsorption of Ni(II) and NA on TiO2 particles (100 mg/L) were performed in darkness. Adsorption of Ni(II) ions, originating from both salts, on TiO2 nanoparticles was measured in the presence and absence of NA (10 mg/L). The percentage of adsorption after a typical 60 min time is shown in Figure 3. The surface charge of the TiO2 particles in the used solution is slightly negative at the natural solution pH of 7.5, which favors the electrostatic attraction between the oppositely charged TiO2 particles and Ni(II) cations. This adsorption is decreased in the presence of NA due to competition between the Ni(II) and NA species. NA is a rather neutral compound and its adsorption is favored around neutral pHs (Lair et al. 2008). On the other hand, NA adsorption is enhanced in the presence of Ni(II), so that the 13.2% NA adsorption increases to 15.4% when accompanied by either of the nickel salts. The 'salting-out' effect can be the major reason for this observation in the mixture (Lair et al. 2008). An increase in the ionic strength due to the presence of nickel salts causes a decrease in the solubility of NA and the resulting adsorption of the hydrophobic NA species on the surface of the TiO2 particles. Figure 3 Ni(II) and NA adsorption on TiO2 particles in darkness for different systems after 60 min; [Ni(II)]0 = 5 mg/L, [NA]0 = 10 mg/L, pH = 7.5 and T = 25 °C. In the photocatalytic process with light emission, Figure 4 shows that, in contrast with darkness, the RE for Ni(II) reduction, from either of the salts, is enhanced upon coupling with NA degradation, and reaches 52.7% and 47.8% at pH 7.5. The REs for Ni(II) reduction in the absence of NA and under the same conditions were 25.8% and 22.3% for the sulfate and nitrate salts, respectively. This positive performance can be attributed to the consumption of photo-generated holes in NA degradation and the inhibition of electron–hole recombination.
The photo-promoted CB electrons, generated by the TiO2 particles under UV irradiation, become more readily available for Ni(II) ions and hence an easier reduction will be established. The NA degradation is also enhanced in the presence of nickel ions. The RE of NA was raised from 36.5% to 50.6% with nickel sulfate and to 48.3% with nickel nitrate. The stronger NA adsorption in the presence of nickel salts can be considered to be due to the salting-out effect. The enhancement of both Ni(II) reduction and NA degradation implies a synergism in the process. Figure 4 Removal efficiencies for the different investigated processes; [Ni(II)]0 = 5 mg/L, [NA]0 = 10 mg/L, pH = 7.5 and T = 25 °C. It is worth noting that, similar to a previous report on the photocatalytic degradation of aniline with TiO2 particles (Wang et al. 2009), a surface charge transfer complex can form between the opposite charges of Ti4+ and O2− from the TiO2 structure and C and H+ from NA, respectively (Rei et al. 2002). Based on these assumptions, for which a simple scheme is illustrated in Figure 5, the formed NA-TiO2 surface species can be photo-excited upon light absorption and initiate a direct electron transfer into the CB of the TiO2 particles. The photo-induced electrons, injected into the CB, can easily be trapped by Ni(II) ions, which act as electron acceptors and accelerate the NA conversion to a radical cation (NA•+). Subsequently, intermediates such as 1-naphthol, 1,2-benzenedicarboxaldehyde, 1,4-naphthoquinone, alkyl phthalates and some others are formed by a series of electron transfer mineralization mechanisms (Wang et al. 2009). The in situ formation of Ni/TiO2 and NiO/TiO2 nanocomposites is one advantage of this process. These products can be used as efficient photocatalysts in the degradation of pollutants under UV or solar irradiation (Sharma & Lee 2015). Figure 5 Schematic presentation of the simultaneous photocatalytic reduction of Ni(II) and oxidation of NA with TiO2 particles. The Fourier transform infrared (FTIR, Perkin-Elmer, Spectrum 65, USA) spectra of the TiO2 powders, before and after usage, are shown in Figure 6(a). The broad peaks centered between 3,600 and 3,400 cm−1 and at 1,630 cm−1 correspond to the stretching vibrations of O-H groups and the bending vibrations of the adsorbed water molecules, respectively. The peak appearing below 1,200 cm−1 corresponds to Ti-O-Ti vibrations. Further, in the Ni/TiO2 composite, a weak band is observed at 2,925 cm−1, which can be due to the presence of Ni over the TiO2 surface (Begum et al. 2008). Other bands at 1,713.8 cm−1 and 1,387.5 cm−1 are assigned to N-O in nitrate ions (Hankare et al. 2011) and the band between 1,450 and 1,350 cm−1 is assigned to S=O in sulfate ions (Samantaray et al. 2003). The other peaks that appear correspond to the adsorption of trace amounts of sulfate or nitrate anions on the surface of the titania nanoparticles. Further, the peak at about 1,000 cm−1 can be assigned to the Ti-O-C stretching bond (Pavia et al. 2009). It is significant that there are no strong peaks, relevant to aromatic C-H and C-C stretching vibrations, within the 2,800–3,000 cm−1 and 1,672–1,696 cm−1 regions, respectively. This confirms that the adsorbed NA species on the TiO2 surface are completely oxidized.
Figure 6 FTIR spectra of NA and also of the TiO2 particles before and after usage in the simultaneous photocatalytic reduction of Ni(II) and degradation of NA (a), and SEM image of the TiO2 particles in the simultaneous photocatalytic treatment of NiSO4 and NA, before and after usage (b); [Ni(II)]0 = 5 mg/L, [NA]0 = 10 mg/L, pH = 7.5 and T = 25 °C. Scanning electron microscopy (SEM, Tescan, Mira 3, Czech Republic) was used to determine the morphology of the TiO2 nanoparticles before usage and of the collected and dried particles after usage. The SEM image (Figure 6(b)) reveals that the formed nanocomposite particles have a regular and spherical shape after TiO2 doping with nickel species (Ni0 and NiO). The larger observed particle size (25.6 nm compared to 20.2 nm) is similar to a report on synthesizing mesoporous (Ni2+/Ni3+)-TiO2 particles with combined sol-gel and thermal decomposition methods (Rajendran et al. 2018). In a previous study using X-ray photoelectron spectroscopy to analyze used photocatalyst particles (Saien et al. 2014), the presence of nickel species on the titania surface (Ni0 and NiO) was confirmed, because of the immediate oxidation of metallic nickel. In addition, the TiO2 particles, before and after usage, were characterized by N2 adsorption–desorption porosimetry using a BET analyzer (BELSORP, Mini II, Japan). The results, presented in Table 1, indicate that the average surface area and the average diameter of the TiO2 particles before usage agree with the manufacturer's reported values ('Materials and methods' section). Meanwhile, the TiO2 particles collected after usage consistently have a smaller surface area and larger pores compared with the initial particles. These kinds of changes have also been reported for nanocomposites such as Ag/TiO2 and NiO/TiO2, synthesized by methods other than photocatalytic reduction (Rajendran et al. 2018; Saravanan et al. 2018).

Table 1 BET surface analysis of TiO2 nanoparticles in the photocatalytic treatment of NiSO4 and NA

| Sample | Surface area (m2/g) | Average diameter BET (nm) | Pore volume (cm3/g) | Pore size (nm) |
| --- | --- | --- | --- | --- |
| TiO2 before usage | 50 ± 15 | 21 | 0.17 | 16.4 |
| TiO2 after usage | 43 ± 10 | 26 | 0.24 | 20.2 |

Figure 7 Removal efficiencies after 90 min treatments, [Ni(II)]0 = 5 mg/L, [NA]0 = 10 mg/L: effect of pH at T = 25 °C (a), effect of temperature at pH = 7.5 (b), effect of purging nitrogen gas on the REs (c), and comparison of H2O2 generation (d) under the given conditions. The respective average Ni(II) and NA REs show an ascending and then a flat variation with pH. The highest average RE was found at a pH of around 7.5 for both the sulfate and nitrate nickel salts.
Investigation of the temperature effect (Figure 7(b)) revealed that nickel reduction and NA degradation increase with temperature from 20 to about 35 °C and then remain almost constant, perhaps due to an unfavorable effect of temperature on their adsorption. An average enhancement of about 12.4% was achieved for treatments below pH 7.5 after 90 min. Temperature helps the photocatalytic reactions to compete with the unfavorable electron–hole pair recombination; however, the low-level activation energy of photocatalytic reactions yields only a small temperature effect (Saien & Azizi 2015). A temperature of 35 °C can suit operations, giving 58.4% average RE (59.2% for Ni(II) and 57.5% for NA) for the sulfate salt and 55.5% average RE (54.5% for Ni(II) and 56.6% for NA) for the nitrate salt. In addition to the reasons given in the section on sole Ni(II) reduction, more electron–hole recombination is expected under a low level of metal reduction, which in turn gives lower NA degradation. The role of ROS in the process was investigated by continuous nitrogen gas purging into the solutions. Figure 7(c) shows that 61.5% average RE (74.3% for Ni(II) and 48.8% for NA) and 54.2% average RE (65.9% for Ni(II) and 42.5% for NA) were obtained for the sulfate and nitrate salts, respectively; a higher reduction of the Ni(II) ion and a lower oxidation of NA occurred. It has been reported that the reduction of metals with low reduction potentials is difficult in the presence of DO, due to the competition between the metal ions and DO to attract electrons in the CB of TiO2 (Yang et al. 2012). Meanwhile, the efficiency of NA oxidation is diminished under these conditions. DO acts as an effective electron acceptor that extends the holes' lifetime and may also be involved in the formation of ROS, such as the H2O2 molecule, which contribute to the photocatalytic degradation of NA (Nosaka & Nosaka 2017). H2O2 is a rather stable ROS molecule, and the amount of this species was measured easily with the iodide method (Nosaka & Nosaka 2017), either with or without nitrogen purging. The results in Figure 7(d) show that the H2O2 formation increases sharply at early times and then tends to decrease due to its consumption in NA oxidation. As expected, H2O2 formation is very low under continuous nitrogen purging due to the absence of DO.

### Kinetic study

Owing to the practical applications, the kinetics of the photocatalytic reactions was investigated under the mild operating conditions mentioned above and during 90 min of treatment. The results show that Ni(II) photocatalytic reduction and NA degradation in the mixed solution are well described by a pseudo-first-order kinetic model, $\ln(C_0/C_t) = k t$, where $C_0$ and $C_t$ are the initial concentration and the concentration at any time $t$ of Ni(II) or NA, and $k$ is the kinetic rate constant. The rate constants as well as the coefficients of determination (R2) for the different investigated cases are listed in Table 2.
Table 2 The rate constants of Ni(II) reduction and NA oxidation; [Ni(II)]0 = 5 mg/L, [NA]0 = 10 mg/L, pH = 7.5, T = 35 °C and during 90 min treatments

| Process | k (1/min) | R2 |
| --- | --- | --- |
| Ni(NO3)2 | 0.0027 | 0.970 |
| NiSO4 | 0.0037 | 0.952 |
| NA | 0.0054 | 0.983 |
| Ni(NO3)2 in (Ni(NO3)2 + NA) | 0.0080 | 0.983 |
| NiSO4 in (NiSO4 + NA) | 0.0086 | 0.992 |
| NA in (Ni(NO3)2 + NA) | 0.0075 | 0.997 |
| NA in (NiSO4 + NA) | 0.0082 | 0.993 |

### Energy consumption analysis

Among the several factors in selecting a method for pollutant treatment, economics is vital. In this regard, the electrical energy consumption, EEC, makes the main contribution in photochemical processes. Here, EEC can be calculated according to the proposal of the Photochemistry Commission of the International Union of Pure and Applied Chemistry (IUPAC) as (Bolton et al. 2001): $\mathrm{EEC} = \frac{1000\, P\, t}{60\, V \log(C_0/C_t)}$ (10) where P is the electric power (kW) of the photochemical system, V is the solution volume (L) in the reactor and t is the reaction duration time (min). For the first-order reactions considered here, in which the influence of operating conditions such as temperature enters through the rate constant k (1/min), Equation (10) can be written as: $\mathrm{EEC} = \frac{38.4\, P}{V\, k}$ (11) Based on the obtained rate constants, the electrical energy related to the photocatalytic removal of each pollutant under the prevailing conditions was obtained as 1,116.3 and 1,199.8 kWh/m3 for Ni(II), and 1,170.7 and 1,280.1 kWh/m3 for NA, when the sulfate and nitrate nickel salts were used in the mixtures, respectively. The calculated values for simultaneous treatments are less than the electrical quantities required for the treatment of each pollutant individually (Table 3). Considering the current electrical energy (industrial sector) price in the US market as 0.0702 US$/kWh in 2018 (US EIA 2018), the electrical energy costs relating to the above four electrical energy amounts are estimated as 78.4, 84.2, 82.2 and 89.9 US$/m3, respectively.

Table 3 The performance of the used photocatalytic process for simultaneous Ni(II) reduction and NA oxidation in comparison with other reported processes

| Photocatalytic process | [Ni(II)] or [NA] (mg/L) | [TiO2] (mg/L) | pH | Lamp power (W) | t (min) | Accompanied reagent | RE (%) | EEC (kWh/m3) | PE (1/[(kWh/m3)(mg/L)]) | Reference |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Ni(NO3)2 + TiO2 | 15 | 1,000 | 7.0 | 125 | 120 | – | 55.0 | 720.9 | 7.63 × 10−5 | Karimi et al. (2013) |
| Ni(NO3)2 + ZnO |  | 1,000 | 7.0 | 125 | 120 | – | 41.0 | 1,091.1 | 3.76 × 10−5 | Kabra et al. (2007) |
| NiSO4 + TiO2 | 5 | 100 | 7.5 | 250 | 90 | – | 32.1 | 2,594.6 | 1.24 × 10−4 | This work |
| Ni(NO3)2 + TiO2 | 5 | 100 | 7.5 | 250 | 90 | – | 27.9 | 3,555.5 | 7.85 × 10−5 | This work |
| NiSO4 + TiO2 | 5 | 100 | 7.5 | 250 | 90 | NA, 10 mg/L | 59.2 | 1,116.3 | 5.30 × 10−4 | This work |
| Ni(NO3)2 + TiO2 | 5 | 100 | 7.5 | 250 | 90 | NA, 10 mg/L | 54.5 | 1,199.8 | 4.54 × 10−4 | This work |
| NA + TiO2 | 10 | 100 | 7.5 | 250 | 90 | Ni(NO3)2, 5 mg/L | 56.6 | 1,280.1 | 4.42 × 10−4 | This work |
| NA + TiO2 | 10 | 100 | 7.5 | 250 | 90 | NiSO4, 5 mg/L | 57.5 | 1,170.7 | 4.91 × 10−4 | This work |
| NA + TiO2 | 10 | 100 | 7.5 | 250 | 90 | – | 36.5 | 1,777.8 | 2.50 × 10−4 | This work |
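As a quick numerical check of Equation (11) — a sketch of ours using the paper's reported rate constants; the factor 38.4 assumes P in kW, V in L, k in 1/min and a one order of magnitude concentration decrease:

```python
# EEC = 38.4 * P / (V * k): electrical energy (kWh/m^3) needed for a
# one-order-of-magnitude concentration decrease in a batch, first-order system.
P = 0.25   # lamp power, kW (250 W)
V = 1.0    # treated solution volume, L
rate_constants = {          # k in 1/min, from Table 2 (mixed solutions)
    "Ni(II), sulfate": 0.0086,
    "Ni(II), nitrate": 0.0080,
    "NA with sulfate": 0.0082,
    "NA with nitrate": 0.0075,
}
for name, k in rate_constants.items():
    eec = 38.4 * P / (V * k)
    print(f"{name}: {eec:.1f} kWh/m^3")
# -> 1116.3, 1200.0, 1170.7, 1280.0, matching the reported values to rounding
```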
Finally, a valid criterion of process efficiency (PE), corresponding to the RE assigned to unit electrical energy consumption and unit photocatalyst dosage, can be introduced as: $\mathrm{PE} = \frac{\mathrm{RE}}{\mathrm{EEC} \times [\mathrm{TiO_2}]}$ (12) This criterion reflects the efficiency achieved with respect to the level of energy consumption as well as the catalyst dosage. Accordingly, the effect of temperature and the reaction progress are involved in PE via the EEC parameter. The removal efficiencies as well as the other described criteria are listed in Table 3 in comparison with other previously reported studies on the photocatalytic reduction of Ni(II). The pseudo-first-order reactions are relevant under the corresponding operating conditions. It is seen that, despite nearly the same order of EEC, the PE values for the nickel ion in this work are about one order of magnitude higher than those of the referenced works. The effective light in the used photoreactor and the synergistic action of the reduction/degradation process are the main reasons for this very significant preference. In this study the enhanced photocatalytic reduction of Ni(II) was demonstrated when simultaneously performed with naphthalene degradation and using a low dosage of titania nanoparticles. The photocatalyst hole consumption by naphthalene lowers the electron–hole recombination and accelerates the electron trapping by Ni(II) ions. Further, the formed naphthalene-TiO2 species can be photo-excited upon light absorption and provide a direct electron transfer to Ni(II) ions. Confirming the results, the TiO2 particles before and after usage were characterized by means of several techniques, and it was found that the average particle size after usage was increased due to nickel doping, while the pore size was also increased. The other important findings were: (i) the optimum mild operating conditions were pH 7.5 and 35 °C; (ii) under the pertinent conditions, the removal efficiency of Ni(II) ions, for both the used salts, was about twice that in the absence of naphthalene; (iii) there appears to be more opportunity for the reduction of Ni(II) ions when the sulfate salt is used, compared with the nitrate salt, individually or simultaneously; (iv) nitrogen purging of the samples confirmed the favorable role of DO in naphthalene degradation and its negative role in nickel reduction; (v) the Ni(II) ion reduction and naphthalene degradation obey pseudo-first-order kinetics; (vi) the electrical energy consumption analysis showed a relatively low required energy for the simultaneous treatment systems in comparison with individual treatments under the same conditions; and (vii) based on different criteria, including economic parameters, the significant preference of the process was revealed. The authors wish to thank the university authorities for providing the financial support to carry out this project. The authors also acknowledge Evonik Industries for supplying TiO2 (P-25) as a gift to our research group. Alqadami A. A., Naushad M., Alothman Z. A. & Ghfar A. A.
2017 Novel metal-organic framework (MOF) based composite material for the sequestration of U(VI) and Th(IV) metal ions from aqueous environment. ACS Applied Materials and Interfaces 9, 36026–36037.
Aman N., Mishra T., Hait J. & Jana R. K. 2011 Simultaneous photoreductive removal of copper (II) and selenium (IV) under visible light over spherical binary oxide photocatalyst. Journal of Hazardous Materials 186, 360–366.
Ashour I., Abu Al-Rub F. A., Sheikha D. & Volesky B. 2008 Biosorption of naphthalene from refinery simulated waste-water on blank alginate beads and immobilized dead algal cells. Separation Science and Technology 43, 2208–2224.
Barakat M. A. 2004 New trends in removing heavy metals from industrial wastewater. Journal of Membrane Science 4, 361–377.
Begum N. S., Ahmed H. M. F. & Gunashekar K. R. 2008 Effects of Ni doping on photocatalytic activity of TiO2 thin films prepared by liquid phase deposition technique. Bulletin of Materials Science 31, 747–751.
Besharat E. 2010 Electroplating Engineering. Designer Publisher, Tehran, Iran.
Blake D. M., Webb D. J., Turchi C. & Magrini K. 1991 Kinetic and mechanistic overview of titania photocatalyzed oxidation reactions in aqueous solution. Solar Energy Materials and Solar Cells 24, 584–593.
Bolton J. R., Bircher K. G., Tumas W. & Tolman C. A. 2001 Figures-of-merit for the technical development and application of advanced oxidation technologies for both electric- and solar-driven systems. Pure and Applied Chemistry 73, 627–637.
Burns R. A., Crittenden J. C., Hand D. W., Selzer V. H., Sutter L. L. & Salman S. M. 1999 Effect of inorganic ions in the heterogenous photocatalysis of trichloroethene. Journal of Environmental Engineering 125, 77–85.
Chen D. & Ray A. K. 2001 Removal of toxic metal ions from wastewater by semiconductor photocatalysis. Chemical Engineering Science 56, 1561–1570.
Clesceri L. S., Greenberg A. E. & Eaton A. D. 1998 Standard Methods for the Examination of Water and Wastewater. American Public Health Association/American Water Works Association/Water Environment Federation, Washington, DC, USA.
Gutha Y., Munagapati V. S., M. & Abburi K. 2015 Removal of Ni(II) from aqueous solution by Lycopersicum esculentum (Tomato) leaf powder as a low-cost biosorbent. Desalination and Water Treatment 54, 200–208.
Gutierrez R., Flores S., Rios O. & Valenzuela M. 2008 Simultaneous Cr(VI) reduction and naphthalene oxidation in aqueous solutions by UV/TiO2. Materials Research Society Symposium Proceedings 1045, 1–5.
Hankare P. P., Patil R. P., A. V., Pandav R. S., K. M., Sasikala R. & Tripathi A. K. 2011 Synthesis and characterization of nanocrystalline Ti-substituted Zn ferrite. Journal of Alloys and Compounds 509, 2160–2163.
Joshi K. M., Patil B. N., Shirsath D. S. & Shrivastava V. S. 2011 Photocatalytic removal of Ni(II) and Cu(II) by using different semiconducting materials. 2, 445–454.
Karimi B., Rajaei M. S., Habibi M. & Abdollahy M. 2013 Effect of UV/H2O2 advanced oxidation processes for the removal of naphthalene from the water solution. Arak Medical University Journal 16, 50–64.
Lair A., Ferronato C., Chovelon J. M. & Herrmann J. M. 2008 Naphthalene degradation in water by heterogeneous photocatalysis: an investigation of the influence of inorganic anions. Journal of Photochemistry and Photobiology A: Chemistry 193, 193–203.
Lin W. & Rajeshwar K. 1997 Photocatalytic removal of nickel from aqueous solutions using UV-irradiated TiO2.
Journal of Electrochemical Society 97, 2–20.
Mahmoodi V. & Sargolzaei J. 2014 Photocatalytic abatement of naphthalene catalyzed by nanosized TiO2 particles: assessment of operational parameters. Theoretical Foundations of Chemical Engineering 48, 656–666.
Naushad M., T., Al-Maswari B. M., A. & Alshehri S. M. 2017 Nickel ferrite bearing nitrogen-doped mesoporous carbon as efficient adsorbent for the removal of highly toxic metal ion from aqueous medium. Chemical Engineering Journal 330, 1351–1360.
Nesterenko-Malkovskaya A., Kirzhner F., Zimmels Y. & Armon R. 2012 Eichhornia crassipes capability to remove naphthalene from wastewater in the absence of bacteria. Chemosphere 125, 124–128.
Nosaka Y. & Nosaka A. Y. 2017 Generation and detection of reactive oxygen species in photocatalysis. Chemical Reviews 117, 11302–11336.
Pavia D. L., Lampman G. M., Kriz G. S. & Vyvyan J. A. 2009 Introduction to Spectroscopy. Saunders Golden, Western Washington University, Bellingham, Washington.
Rajendran S., Manoj D., Raju K., Dionysiou D. D., M., Gracia F., Cornejo L., Gracia-Pinilla M. A. & T. 2018 Influence of mesoporous defect induced mixed-valent NiO (Ni2+/Ni3+)-TiO2 nanocomposite for non-enzymatic glucose biosensors. Sensors and Actuators, B: Chemical 264, 27–37.
Rei S., Krumm H., Niklewski A. & Staemmler V. 2002 The adsorption of acenes on rutile TiO2: a multi-technique investigation. Journal of Chemical Physics 116, 7704–7713.
Saien J., Azizi A. & Soleymani A. R. 2014 Optimized photocatalytic conversion of Ni(II) ions with very low titania nanoparticles at different temperatures; kinetics and energy consumption. Separation and Purification Technology 134, 187–195.
Samantaray S. K., Mohapatra P. & Parida K. 2003 Physico-chemical characterisation and photocatalytic activity of nanosized SO42−/TiO2 towards degradation of 4-nitrophenol. Journal of Molecular Catalysis A: Chemical 198, 277–287.
Saravanan R., Manoj D., Qin J., M., Gracia F., Lee A. F., Khan M. M. & Gracia-Pinilla M. A. 2018 Mechanothermal synthesis of Ag/TiO2 for photocatalytic methyl orange degradation and hydrogen production. Process Safety and Environmental Protection 120, 339–347.
Shirzad-Siboni M., M. T., Yang J. K. & Lee S. M. 2012 Photocatalytic removal of Cr(VI) and Ni(II) by UV/TiO2: kinetic study. Desalination and Water Treatment 40, 77–83.
Tzou M. T., Hsu C. L., Chen C. C., Chen J. H., Wu J. J. & Tseng K. J. 2008 Influence of inorganic anion on Cr(VI) photoreduction in the presence of ferric ion. Journal of Hazardous Materials 156, 374–380.
US EIA 2018 Independent Statistics and Analysis.
Wang X., Pehkonen S. O. & Ray A. K. 2004 Removal of aqueous Cr(VI) by a combination of photocatalytic reduction and coprecipitation. Industrial & Engineering Chemistry Research 43, 1665–1672.
WHO 2008 Guidelines for Drinking-Water Quality. WHO, Geneva, Switzerland.
Yang J. K., Lee S. M., Farrokhi M., Giahi O. &
2022-05-22 05:00:13
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6290801167488098, "perplexity": 5865.040823199481}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662543797.61/warc/CC-MAIN-20220522032543-20220522062543-00638.warc.gz"}
https://www.khanacademy.org/test-prep/mcat/physical-sciences-practice/physical-sciences-practice-tut/e/fluids-at-rest---passage-1
# A scale under water

This passage will test your knowledge on fluids at rest.

### Problem

Normal bathroom scales only give the weight of a person. This does not give one enough information to determine the body fat percentage of a person. A technique called "hydrostatic weighing" or "underwater weighing" can determine a person's density, and therefore give information about the percent of body fat in a person's body. To perform hydrostatic weighing, a person is first weighed in the air while standing on a regular scale. Then the person is weighed while underwater with the air expelled from their lungs. The measured weight of a person underwater will be less than the measured weight of the person in the air because of the buoyant force acting on the person. Figure 1. The apparatus involved in hydrostatic weighing is depicted. Once the weights in air and water are determined, one can calculate the average density of a person. Knowing the average density of the person will allow an estimate of the body fat percentage, since fat is less dense than muscle and bone. People with a high average body density have more muscle per weight and so have a smaller body fat percentage. Consider the data below that was taken for someone using the technique of underwater weighing. (The density of the water used was 1000 kg/m³.) Assume the acceleration due to gravity is 10 m/s². All measurements were taken at a depth of 5 m.

| Person | Measured weight in air | Measured weight in water |
| --- | --- | --- |
| A | 652 Newtons | 36.0 Newtons |
| B | 713 Newtons | 98.0 Newtons |
| C | 684 Newtons | 54.0 Newtons |
| D | 597 Newtons | 89.0 Newtons |

What is the volume of person C?
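The passage gives everything needed: by Archimedes' principle, the buoyant force is the difference between the two measured weights and equals ρ·g·V. A small sketch of the calculation (ours, not part of the Khan Academy exercise):

```python
rho_water = 1000.0  # kg/m^3
g = 10.0            # m/s^2, as stated in the passage

def volume(weight_air, weight_water):
    """Archimedes: buoyant force = weight_air - weight_water = rho * g * V."""
    return (weight_air - weight_water) / (rho_water * g)

print(volume(684.0, 54.0))  # person C -> 0.063 m^3
```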
2016-07-26 21:54:37
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 6, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.719569206237793, "perplexity": 1337.452813083386}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-30/segments/1469257825124.55/warc/CC-MAIN-20160723071025-00126-ip-10-185-27-174.ec2.internal.warc.gz"}
https://plainmath.net/16600/if-there-are-bands-and-floats-in-how-many-different-ways-can-they-arranged
Question

# If there are 7 bands and 3 floats, in how many different ways can they be arranged?

Probability

There are 10 items in total, so they can be arranged in $$10!$$ ways. But the 7 bands are identical, so rearranging them among themselves changes nothing, and similarly for the 3 floats. Hence the total number of arrangements is
$$\frac{10!}{7!\times 3!} = \frac{8\times 9\times 10}{6} = 120$$ ways.
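Equivalently (a remark not in the original answer), this counts the ways to choose which 3 of the 10 positions in the parade hold floats:
$$\frac{10!}{7!\,3!} = \binom{10}{3} = 120.$$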
2021-09-28 09:54:40
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4926362931728363, "perplexity": 215.9673940555223}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780060677.55/warc/CC-MAIN-20210928092646-20210928122646-00655.warc.gz"}
http://book.imt-decal.org/Appendix/Series%20and%20Sequences.html
# Series and Sequences

In this article, we will review summation notation, as well as arithmetic and geometric series and sequences. Don't be afraid to ask for help if you haven't seen some of this content before, especially the proofs.

## Arithmetic Sequences

An arithmetic sequence is defined by a starting term $a$ and a common difference $d$. In the sequence, the difference between consecutive terms is constant (and equal to the common difference). For example:

$3, 5, 7, 9, 11, …$

is an arithmetic sequence with first term $a = 3$ and common difference $d = 2$. Notice that the second term is the first term with the common difference added once, and the third term is the first with the common difference added twice. Then:

$t_k = a + (k-1)d$

represents the $k$th term in the sequence, assuming that we start counting at 1. (Recall that when we studied the binomial theorem, our general term was 0-indexed.)

### Sum of an Arithmetic Sequence

**Sum of First $n$ Natural Numbers**

Before we derive the formula for an arithmetic series, i.e. the sum of an arithmetic sequence, we will look at the sum of perhaps the most naturally arithmetic series – the set of (positive) natural numbers! Suppose we want to find the sum of the first $n$ natural numbers, that is:

$$S = 1 + 2 + 3 + … + (n-2) + (n-1) + n$$

We can rewrite $S$ backwards:

$$S = 1 + 2 + 3 + … + (n-2) + (n-1) + n \\ S = n + (n-1) + (n-2) + … + 3 + 2 + 1$$

Notice that the first term in the top line and the first term in the bottom line add to $n+1$, as do the second terms, third terms, and so on and so forth. It should be noted that $n + 1$ is precisely the sum of the first and last terms of the sequence we are trying to sum.

$$2S = (n+1) + (n+1) + (n+1) + … + (n+1) + (n+1) + (n+1) \\ \implies 2S = n(n+1) \\ \implies S = \frac{n(n+1)}{2}$$

You'll want to remember this formula, as it shows up in several different fields. We'll also show several other ways to prove it in Chapter 6.

### Sum of a General Arithmetic Sequence

Let's repeat the analysis above, but for an arbitrary arithmetic series. Suppose we want to find the sum of the first $n$ terms of a series; we'll denote this as $S_n$.

$$S_n = a + (a + d) + … + (a + (n-2)d) + (a + (n-1)d) \\ S_n = (a + (n-1)d) + (a + (n-2)d) + … + (a + d) + a$$

In the case of the natural numbers, when we added the two different forms of $S$, the corresponding sum had $n$ terms, each of which was $n + 1$. Now, $2S_n$ will be the sum of $n$ terms, each of which is equal to $2a + (n-1)d$, which is the sum of the first and last terms of the arithmetic sequence we're summing.

$$2S_n = (2a + (n-1)d) + (2a + (n-1)d) + … + (2a + (n-1)d) + (2a + (n-1)d) \\ \implies 2S_n = (2a + (n-1)d)n \\ \implies S_n = \frac{(2a + (n-1)d)}{2}n$$

Therefore, the sum of the first $n$ terms of an arithmetic sequence with first term $a$ and common difference $d$ is $S_n = \frac{(2a + (n-1)d)}{2}n$.

There is an easier way to remember this: since $a$ is the first term and $a + (n-1)d$ is the last term, $2a + (n-1)d$ represents the sum of the first and last numbers in the series. We can then also remember the formula as

$$S_n = \frac{\text{first} + \text{last}}{2} \cdot (\text{# of numbers in sum})$$

Since there is a common difference between terms, the value $\frac{\text{first} + \text{last}}{2}$ represents the average value of an element in the series. The sum is the result of treating the series as if each value were replaced with the average.
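As a quick numeric check (an addition, not in the original article): with $a = 3$, $d = 2$, $n = 5$, the formula gives $S_5 = \frac{2\cdot 3 + (5-1)\cdot 2}{2}\cdot 5 = \frac{14}{2}\cdot 5 = 35$, which matches $3 + 5 + 7 + 9 + 11 = 35$.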
# Geometric Sequences and Series

A geometric sequence is a sequence of numbers defined by a starting term $a$ and a common ratio $r$. In the sequence, the ratio of consecutive terms (i.e. $\frac{t_n}{t_{n-1}}$) is constant, and is equal to $r$.

$$2, 6, 18, 54, 162, …$$

is a geometric sequence with first term $a = 2$ and common ratio $r = 3$. Notice that the second term is the first term multiplied by the common ratio once, and the third term is the first term multiplied by the common ratio twice. Then, in general:

$$t_k = ar^{k-1}$$

represents the $k$th term in the sequence, assuming that we start counting at 1.

## Sum of a Geometric Sequence

The derivation of the sum of a geometric series is similar to that of the arithmetic series. Suppose we want to find the sum $S_n$ of the first $n$ terms of a geometric series with first term $a$ and common ratio $r$. Then:

$$\begin{align} S_n &= a + ar + ar^2 + … + ar^{n-3} + ar^{n-2} + ar^{n-1} \\ rS_n &= ar + ar^2 + … + ar^{n-3} + ar^{n-2} + ar^{n-1} + ar^{n} \end{align}$$

Notice, if we subtract the second line from the first, the terms $ar, ar^2, ar^3, …, ar^{n-2}, ar^{n-1}$ are all cancelled out.

$$\implies (1-r)S_n = a - ar^n = a(1 - r^n) \\ \implies S_n = \frac{a(1 - r^n)}{1-r}$$

This formula is also equal to

$$S_n = \frac{a(r^n - 1)}{r - 1}$$

which comes from multiplying both the numerator and denominator of the first form by $-1$.

### Sum of an Infinite Geometric Series

So far, we made no assumptions about $a, r$ or $n$. Now, let's look at two different cases of $r$:

• $|r| > 1$: The magnitude of $t_{k+1}$ is greater than the magnitude of $t_k$
• $|r| < 1$: The magnitude of $t_{k+1}$ is less than the magnitude of $t_k$

The reason we look at the magnitude of $t_k$ is because if $r$ is negative, terms oscillate between positive and negative. In the case where $|r| < 1$, we know that $r^n \rightarrow 0$ as $n \rightarrow \infty$. Then, consider $S_n$ as $n \rightarrow \infty$:

$$\lim_{n \rightarrow \infty} S_n = \lim_{n \rightarrow \infty} \frac{a(r^n - 1)}{r - 1} = \frac{a(-1)}{r-1} = \frac{a}{1-r}$$

This is the formula for the sum of an infinite geometric series. When $|r| < 1$, we say that the series converges, and it converges to the value above. In all other cases, we say that the series diverges. Perhaps the most famous example of a converging geometric series is the case of $a = r = \frac{1}{2}$, which sums to 1.

## Problems and Applications

### Find $a, r$

One type of problem we can ask now is: suppose the third term of a geometric series is 12, and the fifth is 108. What are $a$ and $r$? We know the third term must be of the form $ar^2$, and the fifth of the form $ar^4$. Then:

$$ar^4 = 108 \\ ar^2 = 12 \\ \implies r^2 = 9$$

Remember, $r^2 = 9$ has two solutions for $r$, and both are valid here, meaning that we either have $r = 3$ or $r = -3$. This implies that $a = \frac{12}{r^2} = \frac{12}{9} = \frac{4}{3}$.

### Financial Mathematics

Geometric sequences and series have a direct application in finance – we can model payments with compounding interest as a geometric sequence, and use the properties we know of geometric series to calculate values (the annuity sketch below illustrates this).

1. Compounding Interest
2. Annuities
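The article stops at the topic list above; as a small illustrative sketch (the numbers here are my own, not the article's): if a payment $P$ is deposited at the end of each of $n$ periods, with interest rate $r$ per period, the deposits grow into a geometric series with first term $P$ and common ratio $1+r$, so the future value is

$$FV = P + P(1+r) + P(1+r)^2 + … + P(1+r)^{n-1} = P \cdot \frac{(1+r)^n - 1}{r}.$$

For example, $P = 100$, $r = 0.01$, $n = 12$ gives $FV = 100 \cdot \frac{1.01^{12} - 1}{0.01} \approx 1268.25$.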
2018-11-17 22:02:29
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 1, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8953617811203003, "perplexity": 132.15908431487276}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-47/segments/1542039743854.48/warc/CC-MAIN-20181117205946-20181117231946-00502.warc.gz"}
https://www.jldc.ch/posts/imbalanced-datasets/
# Working with Imbalanced Datasets

Say that you are working on a simple binary classification task. For instance, you have a dataset of credit card transactions, and you want to classify each transaction as either legitimate or fraudulent. As you might imagine, there will be a substantially lower amount of fraudulent transactions than legitimate ones. In this post, I will discuss how such a case of imbalanced data can negatively impact your model's performance and what one can do to remediate the issue.

## Generating toy data

First, since I do not have a credit card transaction dataset, we will have to generate some imbalanced data ourselves. This choice is somewhat arbitrary, and we could also use other data-generating processes (DGP). For no specific reason, we will choose the following DGP:

\begin{align} y &= \mathbb{1}\{| f\left(\mathbf{X}\right) | \geq 8\}, \text{ with}\\ f(x_1, x_2, x_3) &= \frac{\sin(x_1) \cdot \sqrt{|x_2|}}{x_3} + \epsilon, \text{ with } \epsilon \sim \mathcal{N}(0, 1) \end{align}

This DGP might look awkward, but the main idea is to have a highly nonlinear DGP with a solid imbalance between classes. This DGP will give us around 5% of fraudulent transactions (or $$y=1$$ labels); you can convince yourself of this fact by trying a Monte Carlo simulation. Many other DGPs could also achieve the same result.

```julia
# Data generating process
f(x) = sin(x[1]) * sqrt(abs(x[2])) / x[3] + randn()

function generate_batch(batchsize)
    X = randn(3, batchsize) # Generate random features
    y = abs.(f.(eachcol(X))) .≥ 8
    X, y
end
```

## Normal model training

Armed with our DGP, we now turn to the actual training of our model. We train the model with batches of size 512. We will use a straightforward deep feedforward neural network architecture with an Adam optimizer and 250 training epochs.

```julia
# Set random seed
Random.seed!(72)
epochs = 250 # Train the model for 250 epochs

# Create our first model and initialize optimizer
m₁ = Chain(
    Dense(3 => 64, relu),
    Dense(64 => 64, relu),
    Dense(64 => 1, σ)
)
opt₁ = ADAM()
θ₁ = Flux.params(m₁)

# Training loop
for epoch ∈ 1:epochs
    # Generate random train batch of size 512
    X, y = generate_batch(512)
    ∇ = gradient(θ₁) do # Compute gradients
        Flux.Losses.binarycrossentropy(m₁(X) |> vec, y)
    end
    Flux.update!(opt₁, θ₁, ∇)
end

# Assessing model performance
X, y = generate_batch(100_000)
ŷ₁ = vec(m₁(X) .> .5)
acc₁ = 100mean(y .== ŷ₁) # 95.186
```

Our model achieves an accuracy of 95.186%, quite impressive! Or is it? Looking at the results in more detail, our model predicts every observation to be legitimate (i.e., $$y=0$$), making for a rather useless model.

## Improving our model

The above exercise begs the question: how can we proceed to improve our model? Unfortunately, as is more often the case than not, the answer is: it depends. There are multiple changes that we can attempt to make.

1. If we have a virtually infinite amount of data (as in this case), we could resample our batches and ensure the imbalance is not too significant. For instance, we could have around 30-50% of positive (fraudulent) observations in each training batch.
2. We could use another loss function. We might want to use recall, sensitivity and specificity, and ROC curves to fine-tune our model. Perhaps plain accuracy is not what interests us the most; the above is a prime example of such a case. (A sketch of a weighted loss follows below.)
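As a sketch of the second option (not implemented in the original post), a common variant is a class-weighted binary cross-entropy, which penalizes errors on the rare class more heavily:

$$\mathcal{L} = -\frac{1}{N}\sum_{i=1}^{N}\left[ w_1\, y_i \log \hat y_i + w_0\, (1-y_i) \log(1-\hat y_i)\right],$$

where a typical choice is $w_1 = \frac{N}{2N_1}$ and $w_0 = \frac{N}{2N_0}$, with $N_1$ and $N_0$ the number of fraudulent and legitimate observations.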
## An improved example

Let's have a quick look at how the resampling idea might improve our training. Instead of training our model on the full batch of 512 observations, we now discard legitimate observations at random such that we obtain a 25% - 75% ratio in our training batch. This is still imbalanced, but much better than the original 5% of fraudulent observations.

```julia
# Set random seed
Random.seed!(72)

# Create our second model and initialize optimizer
m₂ = Chain(
    Dense(3 => 64, relu),
    Dense(64 => 64, relu),
    Dense(64 => 1, σ)
)
opt₂ = ADAM()
θ₂ = Flux.params(m₂)

# Training loop
for epoch ∈ 1:epochs
    # Generate random train batch of size 512
    X, y = generate_batch(512)
    idx₁ = findall(isequal(1), y) # Find all observations of class 'fraudulent' (1)
    idx₀ = sample(findall(isequal(0), y), 3length(idx₁), replace=false) # Select sample of class 'legitimate' (0)
    X = X[:, vcat(idx₀, idx₁)]
    y = y[vcat(idx₀, idx₁)]
    ∇ = gradient(θ₂) do # Compute gradients
        Flux.Losses.binarycrossentropy(m₂(X) |> vec, y)
    end
    Flux.update!(opt₂, θ₂, ∇)
end

# Assessing model performance
X, y = generate_batch(100_000)
ŷ₂ = vec(m₂(X) .> .5)
acc₂ = 100mean(y .== ŷ₂) # 95.712
```

Our accuracy is now 95.712%. So, was this all for a measly half a percentage point of accuracy? Not exactly. As discussed above, there is more to our problem than accuracy. For instance, the number of true/false positives/negatives gives a better insight into the models' performances:

| Model | True Positive | True Negative | False Positive | False Negative |
|---|---|---|---|---|
| Normal training | 0 | 95'186 | 0 | 4'814 |
| Resampled training | 4'661 | 91'051 | 97 | 4'191 |

(Note that the normally trained model predicts every observation as legitimate, so all 4'814 actual frauds end up as false negatives.)

The precision of the model trained using resampled batches is now 97.96%: of the 4'758 transactions it flags as fraudulent, 4'661 really are. It is hard to argue that such a model is not much more helpful than the first one when predicting fraudulent credit card transactions! The resampled training is still far from perfect, and there are still a few tuning options we could use to improve it, but that is beside the point of this post.

In all fairness, given an infinite amount of data, recovering the true DGP is not a challenging task. With enough epochs, even the first training procedure should reach a similar performance to the second. Nonetheless, this short example shows how one can drastically improve their model performance when being thoughtful about the underlying data.
2022-10-07 09:25:48
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 1, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6922304034233093, "perplexity": 1813.3327068976146}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030338001.99/warc/CC-MAIN-20221007080917-20221007110917-00179.warc.gz"}
https://rdrr.io/cran/torch/man/nn_conv2d.html
# nn_conv2d: Conv2D module

In torch: Tensors and Neural Networks with 'GPU' Acceleration

## Conv2D module

### Description

Applies a 2D convolution over an input signal composed of several input planes.

### Usage

nn_conv2d(
  in_channels,
  out_channels,
  kernel_size,
  stride = 1,
  padding = 0,
  dilation = 1,
  groups = 1,
  bias = TRUE,
  padding_mode = "zeros"
)

### Arguments

in_channels: (int) Number of channels in the input image
out_channels: (int) Number of channels produced by the convolution
kernel_size: (int or tuple) Size of the convolving kernel
stride: (int or tuple, optional) Stride of the convolution. Default: 1
padding: (int or tuple or string, optional) Zero-padding added to both sides of the input; controls the amount of padding applied to the input. It can be either a string ('valid', 'same') or a tuple of ints giving the amount of implicit padding applied on both sides. Default: 0
dilation: (int or tuple, optional) Spacing between kernel elements. Default: 1
groups: (int, optional) Number of blocked connections from input channels to output channels. Default: 1
bias: (bool, optional) If TRUE, adds a learnable bias to the output. Default: TRUE
padding_mode: (string, optional) 'zeros', 'reflect', 'replicate' or 'circular'. Default: 'zeros'

### Details

In the simplest case, the output value of the layer with input size $(N, C_{\mbox{in}}, H, W)$ and output $(N, C_{\mbox{out}}, H_{\mbox{out}}, W_{\mbox{out}})$ can be precisely described as:

$$\mbox{out}(N_i, C_{\mbox{out}_j}) = \mbox{bias}(C_{\mbox{out}_j}) + \sum_{k = 0}^{C_{\mbox{in}} - 1} \mbox{weight}(C_{\mbox{out}_j}, k) \star \mbox{input}(N_i, k)$$

where $\star$ is the valid 2D cross-correlation operator, $N$ is a batch size, $C$ denotes a number of channels, $H$ is a height of input planes in pixels, and $W$ is width in pixels.

• stride controls the stride for the cross-correlation, a single number or a tuple.
• padding controls the amount of implicit zero-padding on both sides, for padding number of points in each dimension.
• dilation controls the spacing between the kernel points; also known as the à trous algorithm. It is harder to describe, but this link has a nice visualization of what dilation does.
• groups controls the connections between inputs and outputs. in_channels and out_channels must both be divisible by groups. For example,
  • At groups=1, all inputs are convolved to all outputs.
  • At groups=2, the operation becomes equivalent to having two conv layers side by side, each seeing half the input channels and producing half the output channels, with both subsequently concatenated.
  • At groups=in_channels, each input channel is convolved with its own set of filters, of size $\left\lfloor\frac{out\_channels}{in\_channels}\right\rfloor$.

The parameters kernel_size, stride, padding, dilation can either be:

• a single int – in which case the same value is used for the height and width dimension
• a tuple of two ints – in which case, the first int is used for the height dimension, and the second int for the width dimension

### Note

Depending on the size of your kernel, several (of the last) columns of the input might be lost, because it is a valid cross-correlation, and not a full cross-correlation. It is up to the user to add proper padding.

When groups == in_channels and out_channels == K * in_channels, where K is a positive integer, this operation is also termed in the literature as depthwise convolution.
In other words, for an input of size $(N, C_{in}, H_{in}, W_{in})$, a depthwise convolution with a depthwise multiplier $K$ can be constructed by the arguments $(in\_channels=C_{in}, out\_channels=C_{in} \times K, ..., groups=C_{in})$.

In some circumstances when using the CUDA backend with CuDNN, this operator may select a nondeterministic algorithm to increase performance. If this is undesirable, you can try to make the operation deterministic (potentially at a performance cost) by setting backends_cudnn_deterministic = TRUE.

### Shape

• Input: $(N, C_{in}, H_{in}, W_{in})$
• Output: $(N, C_{out}, H_{out}, W_{out})$ where

$$H_{out} = \left\lfloor\frac{H_{in} + 2 \times \mbox{padding}[0] - \mbox{dilation}[0] \times (\mbox{kernel\_size}[0] - 1) - 1}{\mbox{stride}[0]} + 1\right\rfloor$$

$$W_{out} = \left\lfloor\frac{W_{in} + 2 \times \mbox{padding}[1] - \mbox{dilation}[1] \times (\mbox{kernel\_size}[1] - 1) - 1}{\mbox{stride}[1]} + 1\right\rfloor$$

### Attributes

• weight (Tensor): the learnable weights of the module of shape $(\mbox{out\_channels}, \frac{\mbox{in\_channels}}{\mbox{groups}}, \mbox{kernel\_size}[0], \mbox{kernel\_size}[1])$. The values of these weights are sampled from $\mathcal{U}(-\sqrt{k}, \sqrt{k})$ where $k = \frac{groups}{C_{\mbox{in}} \cdot \prod_{i=0}^{1}\mbox{kernel\_size}[i]}$
• bias (Tensor): the learnable bias of the module of shape (out_channels). If bias is TRUE, then the values of these weights are sampled from $\mathcal{U}(-\sqrt{k}, \sqrt{k})$ where $k = \frac{groups}{C_{\mbox{in}} \cdot \prod_{i=0}^{1}\mbox{kernel\_size}[i]}$

### Examples

if (torch_is_installed()) {
  # With square kernels and equal stride
  m <- nn_conv2d(16, 33, 3, stride = 2)
  # non-square kernels and unequal stride and with padding
  m <- nn_conv2d(16, 33, c(3, 5), stride = c(2, 1), padding = c(4, 2))
  # non-square kernels and unequal stride and with padding and dilation
  m <- nn_conv2d(16, 33, c(3, 5), stride = c(2, 1), padding = c(4, 2), dilation = c(3, 1))
  input <- torch_randn(20, 16, 50, 100)
  output <- m(input)
}
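The Shape formulas above are easy to sanity-check numerically. The helper below is a hypothetical illustration (it is not part of the torch package), applied to the dimensions of the last example:

conv_out_dim <- function(in_dim, kernel, stride = 1, padding = 0, dilation = 1) {
  # Mirrors the H_out / W_out formula from the Shape section
  floor((in_dim + 2 * padding - dilation * (kernel - 1) - 1) / stride + 1)
}

# For input 20 x 16 x 50 x 100 and nn_conv2d(16, 33, c(3, 5), stride = c(2, 1),
# padding = c(4, 2), dilation = c(3, 1)):
conv_out_dim(50, 3, stride = 2, padding = 4, dilation = 3)   # H_out = 26
conv_out_dim(100, 5, stride = 1, padding = 2, dilation = 1)  # W_out = 100
# so the output above has shape 20 x 33 x 26 x 100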
2022-11-30 06:43:23
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6887070536613464, "perplexity": 7216.339399219637}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710733.87/warc/CC-MAIN-20221130060525-20221130090525-00130.warc.gz"}
https://math.stackexchange.com/questions/1979830/factorizing-the-sum-of-two-fibonacci-numbers
# Factorizing the Sum of Two Fibonacci numbers

The Fibonacci and Lucas numbers are defined for all integers $n$ by the recurrence relations $$F_n=F_{n-1}+F_{n-2}\text{ where }F_1=1\text{ and }F_2=1,$$ $$L_n=L_{n-1}+L_{n-2}\text{ where }L_1=1\text{ and }L_2=3.$$ I would like to know for what values of $k\in\mathbb{N}$ can one write $$F_{n+(2k+1)}\pm F_n=cP(k,n)$$ where $c\in\mathbb{N}$ and $P(k,n)$ is some product of Fibonacci or Lucas numbers. Note that it is easy to show that: \begin{align*} F_{n+2k}+F_{n}&=F_{n+k}L_k\text{ where $k$ is even,}\\ F_{n+2k}+F_{n}&=L_{n+k}F_k\text{ where $k$ is odd,}\\ F_{n+2k}-F_{n}&=L_{n+k}F_k\text{ where $k$ is even,}\\ F_{n+2k}-F_{n}&=F_{n+k}L_k\text{ where $k$ is odd.}\\ \end{align*} It is also easy to see that \begin{align*} F_{n+1}+F_{n}&=F_{n+2}\\ F_{n+1}-F_{n}&=F_{n-1}\\ F_{n+3}+F_{n}&=2F_{n+2}\\ F_{n+3}-F_{n}&=2F_{n+1}\\ \end{align*} Are these the only such expressions? Ideas tried: I've tried the Binet formula to see what insights this might provide, but I can't see anything. I've also tested small values of $n$ numerically but couldn't find any further examples than those four given. It seems unlikely that there will be a simple factorization for $F_{n+5}\pm F_{n}$ because these numbers are prime for several values of $n$. Primes of the form $F_{n+5}+F_{n}$ are listed in oeis/A091157. Primes of the form $F_{n+5}-F_{n}$ are listed in oeis/A153892.

• Right. So you suspect that there will be no such $k$ for which this will be possible? – Auslander Nov 30 '16 at 22:54

Among Lucas numbers, $L_{n+1}^2-L_n^2=L_{n+2}L_{n-1}, n\ge 2$. This is just a difference of squares factorization together with applying the recursive relation.
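A quick numeric check of the parity cases listed in the question (an editorial addition): with $n=3$, $k=2$ (even), $F_{7}-F_{3}=13-2=11=L_{5}F_{2}$, while with $n=3$, $k=1$ (odd), $F_{5}-F_{3}=5-2=3=F_{4}L_{1}$.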
2020-10-28 23:44:00
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9300441145896912, "perplexity": 294.64593370657485}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107902038.86/warc/CC-MAIN-20201028221148-20201029011148-00418.warc.gz"}
http://tatome.de/zettelkasten/zettelkasten.php?standalone&reference=kleesiek-2012
# Show Reference: "Action-Driven Perception: Neural Architectures Based On Sensorimotor Principles"

There is the view that perception is an active process and cannot be understood without an active component. The terms 'active vision', 'active perception', 'smart sensing', and 'animate vision' are sometimes used synonymously. Active perception and its synonyms usually refer to a sensor which can be moved to change the way it perceives the world. The way in which the perception of the world changes when the sensor is moved physically is a source of information in addition to static perception of the world.

Kleesiek et al. use a recurrent neural network with parametric bias (RNNPB) to classify objects from the multisensory percepts induced by interacting with them. Kleesiek et al. introduce adaptive learning rates to RNNPB, which results in faster and more stable training. RNNPB learns sequences of inputs unsupervised (self-organized). Similar parametric bias vectors are learned by the RNNPB for similar input.
2019-04-22 18:04:23
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4268043637275696, "perplexity": 2318.0905940800703}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-18/segments/1555578577686.60/warc/CC-MAIN-20190422175312-20190422201312-00451.warc.gz"}
https://www.physicsoverflow.org/24477/born-scattering-amplitude-from-path-integral
Born Scattering Amplitude from Path Integral

I am stuck on an exercise from "Condensed Matter Field Theory" by Altland and Simons on path integrals. The exercise asks to obtain a perturbative expansion for the scattering amplitude $\langle \mathbf{p'}| U(t \rightarrow \infty,t' \rightarrow -\infty)| \mathbf{p} \rangle$ of a free particle from a short-range central potential $V(r)$, and to show that the first-order term recovers the Born scattering amplitude

$$- i \hbar e^{-i(t-t')E(p)/\hbar} \delta(E(p)-E(p')) \langle \mathbf{p'}|V|\mathbf{p}\rangle$$

Here is my attempt:

\begin{align} \langle \mathbf{p'}|U|\mathbf{p}\rangle &= \int d \mathbf{x} \int d \mathbf{x'} \langle \mathbf{p'}| \mathbf{x'} \rangle \langle \mathbf{x'} |U| \mathbf{x} \rangle \langle \mathbf{x} |\mathbf{p}\rangle \\ &= \int d \mathbf{x} \int d \mathbf{x'} \langle \mathbf{p'}| \mathbf{x'} \rangle G(\mathbf{x'},\mathbf{x};t',t) \langle \mathbf{x} |\mathbf{p}\rangle \end{align}

Now we can expand the propagator:

\begin{align} G(\mathbf{x'},\mathbf{x};t',t) &= \int \mathcal{D} \mathbf{x}~\mathrm{exp}\left[\frac{i}{\hbar} \int_t^{t'} ds \left(\frac{1}{2} m \mathbf{\dot x}^2-V\right) \right] \\ &= \int \mathcal{D} \mathbf{x}~\mathrm{exp}\left[\frac{i}{\hbar} \int_t^{t'} ds~\frac{1}{2} m \mathbf{\dot x}^2 \right] \mathrm{exp}\left[-\frac{i}{\hbar} \int_t^{t'} ds~V \right] \\ &\approx \int \mathcal{D} \mathbf{x}~\mathrm{exp}\left[\frac{i}{\hbar} \int_t^{t'} ds~\frac{1}{2} m \mathbf{\dot x}^2 \right] \left(1 - \frac{i}{\hbar}\int_t^{t'} ds~V\right) \end{align}

I am currently stuck at this step. How should I proceed?

asked Oct 15, 2014

You have to interchange the integration over paths and over time; keeping only the first-order term, this gives

$$G(\mathbf{x}_1,\mathbf{x}_0;t_1,t_0)= \int_{t_0}^{t_1} dt\int \mathcal{D} \mathbf{x}~\mathrm{exp}\left[\frac{i}{\hbar} \int_{t_0}^{t_1} ds ~ \frac{1}{2} m \mathbf{\dot x}^2 \right] \left(- \frac{i}{\hbar} ~V(\mathbf{x}(t))\right)$$

The inner integral can be interpreted as a propagator again, except that the particle is now scattered at the potential $V$ at the place $\mathbf{x}(t)$. The inner path integral corresponds to the propagator of a free particle, and we can write

$$\int \mathcal{D} \mathbf{x}~\mathrm{exp}\left[\frac{i}{\hbar} \int_{t_0}^{t_1} ds ~ \frac{1}{2} m \mathbf{\dot x}^2 \right] = G_0(\mathbf{x_1},\mathbf{x_0};t_1,t_0) = \int d\mathbf{x}\, dt'\, G_0(\mathbf{x_1},\mathbf{x};t_1,t') \cdot G_0(\mathbf{x},\mathbf{x_0};t',t_0)\, \delta(\mathbf{x}-\mathbf{x}(t))\, \delta(t-t')$$

This is the amplitude for a particle traveling from the point $\mathbf{x_0}$ to some point $\mathbf{x}$ and then to $\mathbf{x_1}$. Another way of writing this is

$$\langle x_1 |U| x_0\rangle = \int dx\, \langle x_1 |U_0| x\rangle \langle x |U_0| x_0\rangle \left(-\frac{i}{\hbar}V(x)\right)$$

Now, the Born scattering amplitude is just the Fourier transform of this. The free propagator $U_0$ will give the terms involving energy, whereas the additional multiplication with $V(x)$ corresponds to the additional matrix element.

answered Oct 15, 2014, edited Oct 16, 2014

Thanks a lot for your answer. When you say switch to momentum eigenbasis, does that mean I need to take the Fourier transform? Could you give me a little more detail? Thanks.

I have expanded my answer, though I was too lazy to expand the Fourier transform. Does this help?

Yes! I get it now. Thank you!
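For completeness, the Fourier-transform step that the answer leaves unexpanded can be sketched as follows (an editorial filling-in, using that the free evolution is diagonal in momentum with $E(p) = p^2/2m$):

\begin{align} \langle \mathbf{p'}|U|\mathbf{p}\rangle^{(1)} &= -\frac{i}{\hbar}\int_{t_0}^{t_1} dt\, e^{-\frac{i}{\hbar}E(p')(t_1-t)}\, \langle \mathbf{p'}|V|\mathbf{p}\rangle\, e^{-\frac{i}{\hbar}E(p)(t-t_0)} \\ &\longrightarrow -2\pi i\, e^{-\frac{i}{\hbar}E(p)(t_1-t_0)}\, \delta(E(p')-E(p))\, \langle \mathbf{p'}|V|\mathbf{p}\rangle \quad (t_1 \to \infty,\ t_0 \to -\infty), \end{align}

using $\int_{-\infty}^{\infty} dt\, e^{\frac{i}{\hbar}(E(p')-E(p))t} = 2\pi\hbar\, \delta(E(p')-E(p))$. Up to the $2\pi/\hbar$ normalization convention chosen for the momentum states, this is the quoted Born amplitude.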
2022-12-03 22:02:30
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 2, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 1.0000017881393433, "perplexity": 4525.892402041106}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710941.43/warc/CC-MAIN-20221203212026-20221204002026-00708.warc.gz"}
https://www.r-bloggers.com/2019/04/set-analysis-a-face-off-between-venn-diagrams-and-upset-plots/
It's time for me to come clean about something; I think Venn diagrams are fun! Yes, that's right, I like them. They're pretty, they're often funny, and they convey the straightforward overlap between one or two sets somewhat easily. Because I like making nerd comedy graphs, I considered sharing with y'all how to create Venn diagrams in R. But I couldn't do that in good conscience without showing an alternative for larger and more complex set analysis. A few weeks ago, when I saw Matthew Hendrickson and Mara Averick's excitement over the UpSetR plot, I knew what I should do. Folks, what you are about to witness is a set analysis face off! We will be pairing off Venn diagrams and UpSet plots in a variety of scenarios for a true battle royale. Winner takes all and is able to claim the prize of set analysis master.

Working Environment

For this tutorial, we are going to be using R as our programming language. The entire code is hosted in my github repo, and you can also copy and paste to follow along below. If you are looking to understand your options for an R working environment, I recommend that you check out IBM Watson Studio to run hosted R notebooks, or RStudio.

Round 1: Tiny and Fun Set Intersections

Kind folks, this is our warm up. In this round, we will be creating some fun and simple set intersections. Specifically, we will just be creating a very important graph which describes why I love Twitter. To get started, we are going to install and load the packages required for this tutorial. If you do not have the packages already installed, please uncomment the install.packages() commands by removing the hashtag (#).

# install.packages("rJava")
# install.packages("UpSetR")
# install.packages("tidyverse")
# install.packages("venneuler")
# install.packages("grid")
library(rJava)
library(UpSetR)
library(tidyverse)
library(venneuler)
library(grid)

Format the data

We will create a basic list which specifies the values of each of the circles and their overlap.

# Set the chart data
# (backticks are needed because the set names are not syntactic R names)
expressionInput <- c(`#rstats` = 5, memes = 5, `#rstats&memes` = 3)

Create a Venn diagram

To create a simple Venn diagram, you can just pass in the list with the specified set and overlap values into the venneuler() function. The remaining code is just formatting to set the font size, title and subtitle.

# Create the Venn diagram
# note on set up for java v11 jdk (v12 does not work with this)
myExpVenn <- venneuler(expressionInput)
par(cex = 1.2)
plot(myExpVenn, main = "Why I Love Twitter")
grid.text(
  "@littlemissdata",
  x = 0.52, y = 0.2,
  gp = gpar(fontsize = 10, fontface = 3)
)

Create an UpSet Plot

The great thing is that we can also create an UpSet plot using the same basic expression list. You simply pass the fromExpression() function into the upset() function. The remaining code is to format the labels and font size.

How to read an UpSet plot: UpSet plots offer a straightforward way for us to view set data by frequency. The horizontal bar chart on the bottom left hand side shows the entire size of each set. In this case, each set is of size 8. The vertical bar chart on the upper right hand side shows the sizes of isolated set participation. In the example, 5 values only belong to the #rstats set or only belong to the memes set. 3 values belong to both sets.
# Create an UpsetR Plot
upset(fromExpression(expressionInput), order.by = "freq")
grid.text(
  "@littlemissdata",  # watermark label; the extracted text dropped it, restored by analogy with the other grid.text() calls
  x = 0.80, y = 0.05,
  gp = gpar(fontsize = 10, fontface = 3)
)

While the UpSet graph is an exciting new addition to our set analysis, I'm going to have to give this round to Venn diagrams. When trying to represent simple and easy to understand information, Venn diagrams are more visually appealing.

Round 2: Complicated Sets

Coming off of the round 1 win, Venn diagram may be feeling quite confident. However, the stakes are getting higher and we need to expect more of our visualizations in this round. We have more sets and interactions to visualize and more data to work with.

Data Introduction

The data is created using the 2017 Toronto Senior Survey from the Toronto Open Data Catalogue. I feel proud that my current city (Austin) and my previous city (Toronto) both have high quality open data catalogs. I feel strongly that data should be available to the people that pay for it. This data set shows the output of a 2017 senior citizen survey to identify various needs of Toronto's seniors' population, in order to better inform decision making. To make our data processing easier, I have stripped down the columns that we will use and have performed a little pre-formatting. Please see below for a data dictionary and outline of what was changed.

| Column | Source |
|---|---|
| ID | Not previously included. This is a new unique key column. |
| physicalActivity | Survey Question: "1. In the past 3 months, how often did you participate in physical activities like walking?" |
| physicalActivityPerMonth | Survey Question 1, transformed into numerical format. |
| volunteerParticipation | Survey Question: "5. During the past 3 months, how often did you participate in volunteer or charity work?" |
| volunteerPerMonth | Survey Question 5, transformed into numerical format. |
| difficultFinancial | Survey Question: "9. In the last year, have you had difficulty paying your rent, mortgage, Hydro bill, or other housing costs? For example, have you had to go without groceries to pay for rent or other monthly housing expenses?" |
| supportSystem | Survey Question: "13. Do you have people in your life who you can call on for help if you need it?" |
| postalCode | Survey Question: "14. What are the first three characters of your postal code?" |
| employmentStatus | Survey Question: "15. What is your current employment status?" |
| sex | Survey Question: "16. What is your sex/gender?" |
| primaryLanguage | Survey Question: "18. In what language(s) would you feel most comfortable to receive services?" (first option listed) |
| ageRange | Survey Question: "19. Which age category do you belong to?" |
| ttcTransportation | Survey Question: "6. To get around Toronto, what modes of transportation do you use frequently? [TTC (bus, subway, or streetcar)]" |
| walkTransportation | Survey Question: "6. To get around Toronto, what modes of transportation do you use frequently? [Walk]" |
| driveTransportation | Survey Question: "6. To get around Toronto, what modes of transportation do you use frequently? [Drive]" |
| cycleTransportation | Survey Question: "6. To get around Toronto, what modes of transportation do you use frequently? [Cycle]" |
| taxiTransportation | Survey Question: "6. To get around Toronto, what modes of transportation do you use frequently? [Taxi or Uber]" |
| communityRideTransportation | Survey Question: "6. To get around Toronto, what modes of transportation do you use frequently? [Community Transportation Program, for example Toronto Ride or iRIDE]" |
| wheelTransTransportation | Survey Question: "6. To get around Toronto, what modes of transportation do you use frequently? [Wheel-Trans]" |
| friendsTransportation | Survey Question: "6. To get around Toronto, what modes of transportation do you use frequently? [Rides from family, friends or neighbours]" |
| minAgeRange | Survey Question 19, converted to numerical format, taking the lowest age as the value. |

Bring in the Data

We will start by bringing in the data, replacing the NA's and renaming the columns for easier display.

rawSets <- read.csv(
  file = "https://raw.githubusercontent.com/lgellis/MiscTutorial/master/sets/seniorTransportation.csv",
  header = TRUE, sep = ",", stringsAsFactors = FALSE
)

# Replace the NA's
rawSets[is.na(rawSets)] <- 0

# Rename the columns for easier display
# (backticks are needed for the names that contain spaces)
sets <- rawSets %>%
  rename(TTC = ttcTransportation,
         Walk = walkTransportation,
         Drive = driveTransportation,
         Cycle = cycleTransportation,
         Taxi = taxiTransportation,
         `Community Ride` = communityRideTransportation,
         `Wheel Trans` = wheelTransTransportation,
         Friends = friendsTransportation)
dim(sets)
head(sets)

The data comes with the sets in the form of a binary matrix.

Create a Venn Diagram

Now it's time to create our Venn diagram. The data is currently in the form of a binary matrix, but to pass it into the venneuler() function, we need to get it into a list of set, ID pairs.

# Prep the data for a Venn diagram
vennSets <- sets %>%
  gather(transportation, binary, 6:13) %>%               # take all binary mappings and convert to be the set indicator
  filter(binary == 1) %>%                                # only include set matches
  select(ID, transportation) %>%                         # only include ID and set category
  mutate(transportation = factor(transportation))        # set the transportation column as a factor
dim(vennSets)

The data has been transformed to have one set column and one ID column. An ID can be repeated for every set it belongs to. Create the Venn diagram by passing the data frame into the venneuler() function. The rest of the code is for labelling and formatting.

v <- venneuler(data.frame(vennSets))

# Note that if you need to move around the labels so that they are not overlapping, you can use new line breaks like the example below.
# v$labels <- c("TTC", "Walk", "Drive", "Cycle\n\n\n", "\nTaxi", "Community Ride", "Wheel Trans", "Friends")

par(cex = 0.7)
plot(v, main = "Modes of Senior Transportation (Toronto 2017 Survey)", cex.main = 1.5)
grid.text(
  "@littlemissdata",
  x = 0.52, y = 0.15,
  gp = gpar(fontsize = 10, fontface = 3)
)

Create an UpSet Plot

Create an UpSet plot by passing the original binary matrix into the upset() function. You can specify a number of parameters as outlined by this very clear vignette, but it also works very well out of the box. Other than the upset() function, the rest of the code is for labels and formatting.

upset(sets,
      nsets = 10, number.angles = 30, point.size = 3.5, line.size = 2,
      mainbar.y.label = "Modes of Senior Transportation (Toronto 2017 Survey)",
      sets.x.label = "Total Participants")
grid.text(
  "@littlemissdata",
  x = 0.90, y = 0.05,
  gp = gpar(fontsize = 10, fontface = 3)
)

Unfortunately, I think when the stakes got higher, Venn diagrams just could not keep up. While I think the Venn diagram is quite pretty, I really can't make much sense out of it.
The clarity provided by the UpSet plot can't be matched. Round 2 goes to UpSet plots!

Round 3: Explore In-Context Set Information

We are all tied up as we enter round 3, and it's time to raise the stakes. In this round, we want to explore information about other variables in the data set within the context of the sets.

Provide Context with Plot Highlighting

We will start by using colors to highlight specific areas of the graph that we care about.

Highlight Seniors Who Both Walk and Cycle Using "Query=Intersects"

UpSet plots have a very cool parameter called queries. Queries can be used to define a subset of the data that you would like to highlight in your graph. The queries property takes in a list of query lists, which means that you can pass multiple queries into the same graph. Each query list allows you to set a number of properties about how the query should function. In this example we are viewing the Cycle and Walk set intersection (query and params). We want the query to be highlighted in a nice pink (color). We want to display the query as a highlighted overlap (active), and we will give it a name that we add to the chart legend (query.name).

upset(sets, query.legend = "bottom",
      nsets = 10, number.angles = 30, point.size = 3.5, line.size = 2,
      mainbar.y.label = "Modes of Senior Transportation (Toronto 2017 Survey)",
      sets.x.label = "Total Participants",
      queries = list(
        list(
          query = intersects,
          params = list("Cycle", "Walk"),
          color = "#Df5286",
          active = T,
          query.name = "Physically Active Transportation"
        )
      )
)
grid.text(
  "@littlemissdata",
  x = 0.90, y = 0.05,
  gp = gpar(fontsize = 10, fontface = 3)
)

Highlight Seniors Who Exercise 1x/Week or Less Using "Query=Elements"

In our next example, we are looking to highlight other data in the data frame within the context of the sets. In the normal UpSet graph, we want to highlight the rows identified as physically active 1x/week or less (query, params) across all sets. We want the query to be highlighted in a nice pink (color). We want to display the query as a highlighted overlap (active), and we will give it a name that we add to the chart legend (query.name).

upset(sets, query.legend = "bottom",
      nsets = 10, number.angles = 30, point.size = 3.5, line.size = 2,
      mainbar.y.label = "Modes of Senior Transportation (Toronto 2017 Survey)",
      sets.x.label = "Total Participants",
      queries = list(
        list(
          query = elements,
          params = list("physicalActivityPerMonth", 0, 4),
          color = "#Df5286",
          active = T,
          query.name = "Physically Active 1x/Week or Less"
        )
      )
)
grid.text(
  "@littlemissdata",
  x = 0.90, y = 0.05,
  gp = gpar(fontsize = 10, fontface = 3)
)

Provide Context with Additional Graphs Called "Attribute Plots"

Beyond highlighting within the UpSet main graph, we also have the option of bringing in additional plots which can display information about other variables within the context of the sets.

Display an In-Context Box Plot of Age for Each Set Using the boxplot.summary Parameter

In our next example, we are looking to display a boxplot of the minAgeRange variable for every single set. We can do this very easily by just passing in the boxplot.summary parameter with the variable that we would like to summarize.
upset(sets, query.legend = "bottom",
      nsets = 10, number.angles = 30, point.size = 3.5, line.size = 2,
      queries = list(
        list(
          query = elements,
          params = list("physicalActivityPerMonth", 0, 4),
          color = "#Df5286",
          active = T,
          query.name = "Physically Active 1x/Week or Less"
        )
      ),
      boxplot.summary = c("minAgeRange")
)
grid.text(
  "@littlemissdata",
  x = 0.90, y = 0.05,
  gp = gpar(fontsize = 10, fontface = 3)
)

Using "Attribute Plots" to Display In-Context Aggregate Statistics for Other Columns

Like queries, UpSet plots also allow you to pass in a list of attribute.plots which can display additional graphs depicting the full data frame within the context of your sets. In the example below, we keep our "Physically Active 1x/Week or Less" query and add three attribute plots: two histograms and a scatterplot. All have been set to also carry the query highlighting throughout these new plots.

upset(sets, query.legend = "bottom",
      nsets = 10, number.angles = 30, point.size = 3.5, line.size = 2,
      mainbar.y.label = "Modes of Senior Transportation (Toronto 2017 Survey)",
      sets.x.label = "Total Participants",
      queries = list(
        list(
          query = elements,
          params = list("physicalActivityPerMonth", 0, 4),
          color = "#Df5286",
          active = T,
          query.name = "Physically Active 1x/Week or Less"
        )
      ),
      attribute.plots = list(
        gridrows = 50,
        plots = list(
          list(plot = histogram, x = "volunteerPerMonth", queries = T),
          list(plot = histogram, x = "minAgeRange", queries = T),
          list(plot = scatter_plot, x = "minAgeRange", y = "volunteerPerMonth", queries = F)
        ),
        ncols = 3
      )
)
grid.text(
  "@littlemissdata",
  x = 0.9, y = 0.02,
  gp = gpar(fontsize = 10, fontface = 3)
)

Finally, we can use the set.metadata parameter to display aggregate statistics about the core sets. It is quite simple to implement. We start by creating a data frame with summarized set statistics. We need to convert from binary format to list format, and then we will aggregate and summarize the variable values grouping by the sets. In this example we are going to display the average physical activity per month of each set.

aggregate <- sets %>%
  gather(transportation, binary, 6:13) %>%
  filter(binary == 1) %>% # only include set matches
  group_by(transportation) %>% # get summary stats per transportation category
  summarize(physicalActivityPerMonth = mean(physicalActivityPerMonth))
aggregate

Now that the hard part is done, we simply specify the set.metadata parameter to have the aggregate data set, and we are ready to get our set summary data on the bottom left hand plot.

upset(sets, set.metadata = list(data = aggregate, plots = list(
  list(
    type = "hist",
    column = "physicalActivityPerMonth",
    assign = 50
  )
)))

By now you may be wondering why we haven't been talking about Venn diagrams in round 3. Simply put, they had to sit out of this round. While you could use some creative ideas to display context through color, it really isn't on a comparable level to UpSet charts. As such, Venn diagrams are disqualified and I need to give this round to UpSet charts!

Thank You

Thank you for exploring set analysis visualization options with me. Please comment below if you enjoyed this blog, have questions, or would like to see something different in the future. Note that the full code is available on my github repo.

install.packages("usethis")
usethis::use_course("https://github.com/lgellis/MiscTutorial/archive/master.zip")
2021-01-28 03:24:19
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.23541218042373657, "perplexity": 3512.0867480150255}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610704835583.91/warc/CC-MAIN-20210128005448-20210128035448-00580.warc.gz"}
https://socratic.org/questions/how-important-is-it-that-my-answer-on-socratic-is-accurate-1
# How important is it that my answer on Socratic is accurate?

Apr 8, 2015

The objective of Socratic (from my perspective) is to make the method of solutions understandable. Socratic is an educational site. If this is the case, the method is more important than the accuracy of the final answer, BUT the importance of accuracy should not be neglected. (An inaccurate answer implies an inaccuracy in the solution method.)

Jul 7, 2015

Accuracy is important insofar as it promotes learning and understanding.

#### Explanation:

First, what does "accurate" mean on Socratic? Answers should always be factually correct and reflect mainstream theories, because these factors best help students learn. However, be mindful that the end-goal is not accuracy, but people moving forward in their understanding. Keep in mind that ...

• Accuracy is different at different levels of learning
• Elements we include in answers may not be "accurate" because we leave out an edge case; that's ok; we're explaining in a simple way that will promote learning best

So, a few final guidelines:

• Don't be pedantic
• No "well, actually …"s, e.g., when discussing a classical physics question: "Well actually, $F = m a$ is only accurate for speeds much slower than the speed of light..."

When in doubt, value learning above exact accuracy. Stick to facts, avoid non-mainstream theories, and explain simply. Go learning!

May 18, 2017

The best thing about socratic.org is that answers are not only accurate, but are scrutinized by hundreds of viewers, all of whom have the opportunity to update, correct, or offer another viewpoint.

#### Explanation:

Accuracy is a must when solutions to problems are provided. Updated methods and knowledge are also imperative. The shortest, most understandable answer will usually be chosen by students to minimize their workloads. Answers work best when geared to the reader. Know your audience (where possible) and supply answers that apply to them. Socratic.org provides continuous updating of answers because so many people are viewing input. If you make a mistake in an answer, someone will let you know. Questions with multiple answers reveal various methods that will arrive at the same conclusion. Answers to scientific problems will also need to be precise as well as accurate.
2019-01-18 18:57:13
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 1, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5805308222770691, "perplexity": 2147.3549680962406}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-04/segments/1547583660258.36/warc/CC-MAIN-20190118172438-20190118194438-00474.warc.gz"}
http://dirkmittler.homeip.net/blog/archives/4860
# KdeWallet, and Using Smb4K, under Plasma 5

One fact which I've written about before, is that I have an up-to-date Linux computer, that uses the 'Plasma 5' desktop manager, which is actually the successor to 'KDE 4'. When using this desktop manager, we can still install numerous packages that 'belong' to the old KDE 4, and most of them will continue to work. One of those is 'smb4k', which is a point-and-click utility, to mount a network SMB share – aka, a Windows file-share – such that it will be visible in our home folder, as though that share was a local sub-folder.

There exist command-line methods to do the same thing, which would mount that network share, and declare it's of the 'cifs' file-system-type, but the use of a simple GUI to do so may be easier. But then one problem which ensues, is that Smb4K will use the KDE 4 Wallet, to store our password for that share. It will function in this way, by depending on the package 'kde-runtime'. In truth, this latter package probably pulls in numerous (old) KDE 4 libraries, and not just the old KWallet, but the existence of this KDE 4 Wallet, on our Plasma 5 machine, is most obvious… Smb4K still works!

But, once it has been used, we end up with two KDE Wallets running at the same time:

1. /usr/bin/kwalletd5
2. /usr/bin/kwalletd

Some people might find that this is an unacceptable situation by its nature, but I'll highlight one important reason why this could be a problem: 'Debian 9' / 'Debian Stretch' no longer provides us with a wallet-management application that connects to the old KDE 4 Wallet. Such a management application will ultimately be important, because we'll want to state what the policy of the wallet is, regarding how long it should stay open, which applications may access it without requiring a password reprompt if it's still open, etc. As it is, this KDE 4 Wallet will have some initial settings, based on dialogs which appeared briefly, and which we may no longer modify once they have been set, because an application chose to use the KDE 4 Wallet. And so we could end up with the following result:

dirk@Plato:~$ ps fu -A | grep wallet
dirk  1997  0.0  0.5 601024 68132 ?     SLl Feb22  0:04  \_ /usr/bin/kwalletd5
dirk   540  0.5  0.5 606148 68852 ?     Sl  14:39  0:00  |  |  \_ /usr/bin/kwalletmanager5
dirk   670  0.0  0.0  12784   984 pts/0 S+  14:40  0:00  \_ grep --color=auto wallet
dirk   628  1.1  0.2 288824 36284 ?     SL  14:39  0:00 /usr/bin/kwalletd
dirk@Plato:~$

My main priority here would be, that '/usr/bin/kwalletd' would just stay open, and no longer require a password reprompt, before a user can just remount the remote share. And so at this moment, I'd solve that issue with the command:

kill -15 628

Because my command was a 'kill -15', the KDE 4 Wallet closes gracefully. I suppose that a user would need to make sure not to enter the following command:

killall kwalletd

Because that command would actually kill both wallets! Unless that's really what a user wants to do.

I've created a little shell-script which facilitates this:

#!/bin/bash

# Find the PID of the old KDE 4 wallet daemon; the trailing '$' in the
# grep pattern keeps 'kwalletd5' from matching as well.
PID=$(ps fu -A | grep 'kwalletd$' | gawk '{print $2}')
kill -15 $PID

There's only one little caveat in using this script (as user): If we still have Smb4K running when we click on this script, then Smb4K will 'remember' the PID which was once the KDE 4 Wallet, because by default these wallets belong to the session, and their PIDs should be persistent. In that case, if we additionally try to mount another share, then Smb4K will detect that the wallet has died, and will prompt the user for his full credentials.
I can find no settings in the Smb4K GUI to quit the application once all shares are closed. And so, actually closing Smb4K is a good user-policy, every time the user decides 'to kill a wallet'.

Dirk
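A shorter variant of the same idea, sketched here under the assumption that GNU pgrep and xargs are installed:

```bash
#!/bin/bash
# pgrep -x matches the process name exactly, so 'kwalletd' will not also
# match 'kwalletd5'; xargs -r does nothing when no PID is found.
pgrep -x kwalletd | xargs -r kill -15
```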
2019-07-23 15:20:25
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4055657982826233, "perplexity": 3649.968634082086}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195529480.89/warc/CC-MAIN-20190723151547-20190723173547-00089.warc.gz"}
https://www.maplesoft.com/support/help/maple/view.aspx?path=Formats/GEXF
GEXF - Maple Help

GEXF (.gexf) Graph Format

Description

• GEXF (Graph Exchange XML Format) is an XML-based file format for storing a single undirected or directed graph.
• The GraphTheory[ImportGraph] and GraphTheory[ExportGraph] commands can read from and write to this format.
• The general-purpose commands Import and Export also support this format.

Examples

Import a GEXF file encoding the Petersen graph.

> Petersen := Import("example/petersen.gexf", base = datadir)

    Petersen := Graph 1: an undirected unweighted graph with 10 vertices and 15 edge(s)    (1)

> GraphTheory:-DrawGraph(Petersen)

Export a Kneser graph to a GEXF file in the home directory of the current user.

> KG := GraphTheory:-SpecialGraphs:-KneserGraph(3, 2)

    KG := Graph 2: an undirected unweighted graph with 3 vertices and 0 edge(s)    (2)

> Export("kneser.gexf", KG, base = homedir)

    524    (3)

References

GEXF Format, www.gexf.net
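For orientation, a minimal GEXF file for a triangle graph might look like the sketch below; the namespace and version attributes follow the GEXF 1.2 draft schema as published at gexf.net, and should be checked against the current specification:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<gexf xmlns="http://www.gexf.net/1.2draft" version="1.2">
  <graph defaultedgetype="undirected">
    <nodes>
      <node id="0" label="a"/>
      <node id="1" label="b"/>
      <node id="2" label="c"/>
    </nodes>
    <edges>
      <edge id="0" source="0" target="1"/>
      <edge id="1" source="1" target="2"/>
      <edge id="2" source="2" target="0"/>
    </edges>
  </graph>
</gexf>
```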
2022-01-22 09:24:31
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 7, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.665385901927948, "perplexity": 9966.80712880449}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320303779.65/warc/CC-MAIN-20220122073422-20220122103422-00327.warc.gz"}
https://tug.org/pipermail/pstricks/2014/010270.html
# [pstricks] using the xylogscale of pst-plot vs gnuplot exponent issue

Herbert Voss Herbert.Voss at FU-Berlin.DE
Wed Mar 12 10:54:45 CET 2014

Am 10.03.2014 21:32, schrieb Julien Morand:
> http://commons.wikimedia.org/wiki/File:Bv_rdson.png
>
> I try to use the psgraph environment of pst-plot but when I write the
> given equation the plot is not correct. I have to modify the value of
> the exponent from 2.5 to 320!!
>
> I also try the same equation using gnuplot and the plot is correct (see
> attached files). Is there anyone that could help me with this issue?
> here is my code:

you used an x value in the interval {1.69879}{3.301} but it must be
{10^1.69879}{10^3.301}

\documentclass[pstricks]{standalone}
\begin{document}
\psset{xAxisLabel=$V_{br}(V)$,
yAxisLabel=$\displaystyle R_{sp\acute{e}cifique}\left(\frac{\Omega}{cm^2} \right)$,
llx=-1.5cm,lly=-1cm,urx=1.5cm,ury=1.5cm}
\begin{psgraph}[tickcolor=lightgray,subtickcolor=lightgray,subticksize=0.5,
Ox=1,Dx=1,xsubticks=9,xylogBase=10,
logLines=all,Oy=-4,Dy=1,
ysubticks=9]
{->}(1,-4)(3.301,1.18){11cm}{6cm}
\psplot[linecolor=darkgray,linestyle=dashed]{1.69879}{3.301}{5.93e-9 320 x exp mul log}
\psplot[linecolor=green,algebraic]{1.69879}{3.301}{ log( 5.93e-9*(10^x)^2.5 )}
\end{psgraph}
\end{document}

Herbert
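The gnuplot comparison mentioned in the original message would look something like the following sketch (a reconstruction, not from the thread; the endpoints 50 and 2000 approximate 10^1.69879 and 10^3.301):

```gnuplot
set logscale xy
set xlabel "V_br (V)"
set ylabel "R_specifique (Ohm/cm^2)"
plot [50:2000] 5.93e-9 * x**2.5 title "R = 5.93e-9 * V^{2.5}"
```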
2020-05-31 13:55:40
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9379400610923767, "perplexity": 8745.986539293897}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590347413406.70/warc/CC-MAIN-20200531120339-20200531150339-00476.warc.gz"}
https://proofwiki.org/wiki/Definition:Force
# Definition:Force

## Definition

A force is an influence which causes a body to undergo a change in velocity.

Force is a vector quantity.

### Symbol

The usual symbol used to denote the force on a body is $f$.

### Dimension

The dimension of measurement of force is $\mathsf {M L T}^{-2}$.

This arises from Newton's Second Law of Motion and its definition as a mass (of dimension $\mathsf M$) multiplied by an acceleration (of dimension $\mathsf {L T}^{-2}$).

### Units

The SI unit of measurement of force is the newton.

The CGS unit of measurement of force is the dyne.

## Also known as

Some writers and thinkers subdivide the idea of a force into a push or a pull, but such a dichotomy can obscure the fact that both are forces.

## Linguistic Note

The word force is derived from the Latin word fors, meaning strength.
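As a worked illustration of the dimension and units (the unit conversion is standard and not part of the original page):

```latex
% F = m a: the dimension of force is mass times acceleration,
%   [F] = M . (L T^{-2}) = M L T^{-2}.
% In units: 1 N = 1 kg m s^{-2}, and since 1 dyn = 1 g cm s^{-2},
% it follows that 1 N = 10^5 dyn.
\[
  [F] = \mathsf{M}\,\mathsf{L}\,\mathsf{T}^{-2},
  \qquad
  1\,\mathrm{N} = 1\,\mathrm{kg\,m\,s^{-2}} = 10^{5}\,\mathrm{dyn}.
\]
```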
2023-02-06 19:54:21
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9417729377746582, "perplexity": 833.3518153031557}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500357.3/warc/CC-MAIN-20230206181343-20230206211343-00878.warc.gz"}
https://www.cuemath.com/ncert-solutions/exercise-15-3-statistics-class-11-maths/
# NCERT Solutions For Class 11 Maths Chapter 15 Exercise 15.3

## Chapter 15 Ex.15.3 Question 1

From the data given below state which group is more variable, $$A$$ or $$B$$?

| Marks | $$10-20$$ | $$20-30$$ | $$30-40$$ | $$40-50$$ | $$50-60$$ | $$60-70$$ | $$70-80$$ |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Group $$A$$ | 9 | 17 | 32 | 33 | 40 | 10 | 9 |
| Group $$B$$ | 10 | 20 | 30 | 25 | 43 | 15 | 7 |

### Solution

The standard deviation of Group $$A$$ is calculated as follows.

| Marks | Group $$A$$: $${f_i}$$ | Mid-point $${x_i}$$ | $${y_i} = \frac{x_i - 45}{10}$$ | $${y_i}^2$$ | $${f_i}{y_i}$$ | $${f_i}{y_i}^2$$ |
| --- | --- | --- | --- | --- | --- | --- |
| $$10-20$$ | 9 | 15 | $$-3$$ | 9 | $$-27$$ | 81 |
| $$20-30$$ | 17 | 25 | $$-2$$ | 4 | $$-34$$ | 68 |
| $$30-40$$ | 32 | 35 | $$-1$$ | 1 | $$-32$$ | 32 |
| $$40-50$$ | 33 | 45 | 0 | 0 | 0 | 0 |
| $$50-60$$ | 40 | 55 | 1 | 1 | 40 | 40 |
| $$60-70$$ | 10 | 65 | 2 | 4 | 20 | 40 |
| $$70-80$$ | 9 | 75 | 3 | 9 | 27 | 81 |
| Total | 150 |  |  |  | $$-6$$ | 342 |

Here, $$N = 150,\;h = 10,\;A = 45$$.

Mean,

\begin{align}\bar x &= A + \frac{\sum\limits_{i = 1}^7 f_i y_i}{N} \times h\\&= 45 + \frac{\left( -6 \right)}{150} \times 10\\&= 45 - 0.4\\&= 44.6\end{align}

Variance,

\begin{align}\sigma_1^2 &= \frac{h^2}{N^2}\left[ N\sum\limits_{i = 1}^7 f_i y_i^2 - \left( \sum\limits_{i = 1}^7 f_i y_i \right)^2 \right]\\&= \frac{100}{22500}\left[ 150 \times 342 - \left( -6 \right)^2 \right]\\&= \frac{1}{225} \times 51264\\&= 227.84\end{align}

Standard deviation,

\begin{align}\sigma_1 &= \sqrt{227.84} = 15.09\end{align}

The standard deviation of Group $$B$$ is calculated as follows.

| Marks | Group $$B$$: $${f_i}$$ | Mid-point $${x_i}$$ | $${y_i} = \frac{x_i - 45}{10}$$ | $${y_i}^2$$ | $${f_i}{y_i}$$ | $${f_i}{y_i}^2$$ |
| --- | --- | --- | --- | --- | --- | --- |
| $$10-20$$ | 10 | 15 | $$-3$$ | 9 | $$-30$$ | 90 |
| $$20-30$$ | 20 | 25 | $$-2$$ | 4 | $$-40$$ | 80 |
| $$30-40$$ | 30 | 35 | $$-1$$ | 1 | $$-30$$ | 30 |
| $$40-50$$ | 25 | 45 | 0 | 0 | 0 | 0 |
| $$50-60$$ | 43 | 55 | 1 | 1 | 43 | 43 |
| $$60-70$$ | 15 | 65 | 2 | 4 | 30 | 60 |
| $$70-80$$ | 7 | 75 | 3 | 9 | 21 | 63 |
| Total | 150 |  |  |  | $$-6$$ | 366 |

Mean,

\begin{align}\bar x &= A + \frac{\sum\limits_{i = 1}^7 f_i y_i}{N} \times h\\&= 45 + \frac{\left( -6 \right)}{150} \times 10\\&= 45 - 0.4\\&= 44.6\end{align}

Variance,

\begin{align}\sigma_2^2 &= \frac{h^2}{N^2}\left[ N\sum\limits_{i = 1}^7 f_i y_i^2 - \left( \sum\limits_{i = 1}^7 f_i y_i \right)^2 \right]\\&= \frac{100}{22500}\left[ 150 \times 366 - \left( -6 \right)^2 \right]\\&= \frac{1}{225} \times 54864\\&= 243.84\end{align}

Standard deviation,

$$\sigma_2 = \sqrt{243.84} = 15.61$$

Since the mean of both groups is the same, the group with the greater standard deviation will be more variable. Thus, Group $$B$$ has more variability in the marks.
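These computations can be cross-checked numerically; a short Python sketch of the step-deviation method used above:

```python
h, N, A = 10, 150, 45
y = [-3, -2, -1, 0, 1, 2, 3]                        # coded mid-points (x - 45)/10

for name, f in [("A", [9, 17, 32, 33, 40, 10, 9]),
                ("B", [10, 20, 30, 25, 43, 15, 7])]:
    S1 = sum(fi * yi for fi, yi in zip(f, y))       # sum of f_i y_i
    S2 = sum(fi * yi ** 2 for fi, yi in zip(f, y))  # sum of f_i y_i^2
    mean = A + S1 / N * h
    var = h ** 2 / N ** 2 * (N * S2 - S1 ** 2)
    print(name, mean, var, var ** 0.5)
# A 44.6 227.84 15.094...
# B 44.6 243.84 15.615...
```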
## Chapter 15 Ex.15.3 Question 2

From the prices of shares of $$X$$ and $$Y$$ below, find out which is more stable in value:

| $$X$$ | 35 | 54 | 52 | 53 | 56 | 58 | 52 | 50 | 51 | 49 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| $$Y$$ | 108 | 107 | 105 | 105 | 106 | 107 | 104 | 103 | 104 | 101 |

### Solution

The prices of the shares $$X$$ are $$35, 54, 52, 53, 56, 58, 52, 50, 51, 49$$.

Here, the number of observations $$N = 10$$.

Mean,

\begin{align}\bar x &= \frac{1}{N}\sum\limits_{i = 1}^{10} x_i = \frac{1}{10} \times 510 = 51\end{align}

The following table is obtained corresponding to shares $$X$$.

| $${x_i}$$ | $$x_i - \bar x$$ | $$\left( x_i - \bar x \right)^2$$ |
| --- | --- | --- |
| 35 | $$-16$$ | 256 |
| 54 | 3 | 9 |
| 52 | 1 | 1 |
| 53 | 2 | 4 |
| 56 | 5 | 25 |
| 58 | 7 | 49 |
| 52 | 1 | 1 |
| 50 | $$-1$$ | 1 |
| 51 | 0 | 0 |
| 49 | $$-2$$ | 4 |
| Total |  | 350 |

Variance,

\begin{align}\sigma_1^2 &= \frac{1}{N}\sum\limits_{i = 1}^{10} \left( x_i - \bar x \right)^2 = \frac{1}{10} \times 350 = 35\end{align}

Standard deviation,

\begin{align}\sigma_1 &= \sqrt{35} = 5.91\end{align}

\begin{align}\text{C.V.}\left( \text{Shares X} \right) &= \frac{\sigma_1}{\bar x}\times 100 = \frac{5.91}{51}\times 100 = 11.58\end{align}

The prices of the shares $$Y$$ are $$108, 107, 105, 105, 106, 107, 104, 103, 104, 101$$.

Here, the number of observations $$N = 10$$.

Mean,

\begin{align}\bar y &= \frac{1}{N}\sum\limits_{i = 1}^{10} y_i = \frac{1}{10} \times 1050 = 105\end{align}

The following table is obtained corresponding to shares $$Y$$.

| $${y_i}$$ | $$y_i - \bar y$$ | $$\left( y_i - \bar y \right)^2$$ |
| --- | --- | --- |
| 108 | 3 | 9 |
| 107 | 2 | 4 |
| 105 | 0 | 0 |
| 105 | 0 | 0 |
| 106 | 1 | 1 |
| 107 | 2 | 4 |
| 104 | $$-1$$ | 1 |
| 103 | $$-2$$ | 4 |
| 104 | $$-1$$ | 1 |
| 101 | $$-4$$ | 16 |
| Total |  | 40 |

Variance,

\begin{align}\sigma_2^2 &= \frac{1}{N}\sum\limits_{i = 1}^{10} \left( y_i - \bar y \right)^2 = \frac{1}{10} \times 40 = 4\end{align}

Standard deviation, $$\sigma_2 = \sqrt{4} = 2$$

\begin{align}\text{C.V.}\left( \text{Shares Y} \right) &= \frac{\sigma_2}{\bar y}\times 100 = \frac{2}{105}\times 100 = 1.9\end{align}

The C.V. of the prices of shares $$X$$ is greater than the C.V. of the prices of shares $$Y$$. Thus, the prices of shares $$Y$$ are more stable than the prices of shares $$X$$.
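The same comparison can be checked with Python's standard library (a sketch; `pstdev` computes the population standard deviation):

```python
from statistics import mean, pstdev

X = [35, 54, 52, 53, 56, 58, 52, 50, 51, 49]
Y = [108, 107, 105, 105, 106, 107, 104, 103, 104, 101]

for name, prices in [("X", X), ("Y", Y)]:
    cv = pstdev(prices) / mean(prices) * 100   # coefficient of variation, %
    print(name, round(cv, 2))
# X 11.6, Y 1.9 -- shares Y are more stable
# (the text gets 11.58 for X because it rounds the standard
#  deviation to 5.91 before dividing)
```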
## Chapter 15 Ex.15.3 Question 3

An analysis of monthly wages paid to workers in two firms $$A$$ and $$B$$, belonging to the same industry, gives the following results:

|  | Firm A | Firm B |
| --- | --- | --- |
| No. of wage earners | 586 | 648 |
| Mean of monthly wages | Rs. 5253 | Rs. 5253 |
| Variance of the distribution of wages | 100 | 121 |

(i) Which firm, $$A$$ or $$B$$, pays the larger amount as monthly wages?

(ii) Which firm, $$A$$ or $$B$$, shows greater variability in individual wages?

### Solution

(i) Monthly wage of firm $$A$$ = Rs. $$5253$$; number of wage earners in firm $$A$$ = $$586$$.

$$\therefore$$ Total amount paid = Rs. $$5253 \times 586$$

Monthly wage of firm $$B$$ = Rs. $$5253$$; number of wage earners in firm $$B$$ = $$648$$.

$$\therefore$$ Total amount paid = Rs. $$5253 \times 648$$

Thus, firm $$B$$ pays the larger amount as monthly wages, since the number of wage earners in firm $$B$$ is greater than the number of wage earners in firm $$A$$.

(ii) Variance of the distribution of wages in firm $$A$$: $$\sigma_1^2 = 100$$

$$\therefore$$ Standard deviation of the distribution of wages in firm $$A$$: $$\sigma_1 = \sqrt{100} = 10$$

Variance of the distribution of wages in firm $$B$$: $$\sigma_2^2 = 121$$

$$\therefore$$ Standard deviation of the distribution of wages in firm $$B$$: $$\sigma_2 = \sqrt{121} = 11$$

The mean of the monthly wages of both firms is the same. Therefore, the firm with the greater standard deviation will have more variability. Thus, firm $$B$$ has greater variability in individual wages.

## Chapter 15 Ex.15.3 Question 4

The following is the record of goals scored by team A in a football season:

| No. of goals scored | 0 | 1 | 2 | 3 | 4 |
| --- | --- | --- | --- | --- | --- |
| No. of matches | 1 | 9 | 7 | 5 | 3 |

For team B, the mean number of goals scored per match was $$2$$ with a standard deviation of $$1.25$$ goals. Find which team may be considered more consistent.

### Solution

The mean and standard deviation of goals scored by team A are calculated as follows.

| No. of goals scored $${x_i}$$ | No. of matches $${f_i}$$ | $${f_i}{x_i}$$ | $${x_i}^2$$ | $${f_i}{x_i}^2$$ |
| --- | --- | --- | --- | --- |
| 0 | 1 | 0 | 0 | 0 |
| 1 | 9 | 9 | 1 | 9 |
| 2 | 7 | 14 | 4 | 28 |
| 3 | 5 | 15 | 9 | 45 |
| 4 | 3 | 12 | 16 | 48 |
| Total | 25 | 50 |  | 130 |

Mean,

\begin{align}\bar x &= \frac{1}{N}\sum\limits_{i = 1}^5 f_i x_i = \frac{50}{25} = 2\end{align}

Thus, the mean of both teams is the same.

\begin{align}\sigma &= \frac{1}{N}\sqrt{N\sum f_i x_i^2 - \left( \sum f_i x_i \right)^2}\\&= \frac{1}{25}\sqrt{25 \times 130 - \left( 50 \right)^2}\\&= \frac{1}{25}\sqrt{750}\\&= \frac{1}{25} \times 27.38\\&= 1.09\end{align}

The standard deviation of team B is $$1.25$$ goals. The average number of goals scored by both teams is the same, i.e., $$2$$. Therefore, the team with the lower standard deviation is more consistent. Thus, team A is more consistent than team B.

## Chapter 15 Ex.15.3 Question 5

The sum and sum of squares corresponding to length $$x$$ (in cm) and weight $$y$$ (in g) of $$50$$ plant products are given below.

$\sum\limits_{i = 1}^{50} {x_i} = 212 ,\;\;\sum\limits_{i = 1}^{50} {x_i}^2 = 902.8 ,\;\;\sum\limits_{i = 1}^{50} {y_i} = 261 ,\;\;\sum\limits_{i = 1}^{50} {y_i}^2 = 1457.6$

Which is more varying, the length or the weight?
### Solution

$$\sum\limits_{i = 1}^{50} {x_i} = 212,\;\sum\limits_{i = 1}^{50} {x_i}^2 = 902.8$$

Here, $$N = 50$$.

Mean,

\begin{align}\bar x &= \frac{1}{N}\sum\limits_{i = 1}^{50} x_i = \frac{212}{50} = 4.24\end{align}

Variance,

\begin{align}\sigma_1^2 &= \frac{1}{N}\sum\limits_{i = 1}^{50} \left( x_i - \bar x \right)^2\\&= \frac{1}{50}\sum\limits_{i = 1}^{50} \left( x_i - 4.24 \right)^2\\&= \frac{1}{50}\sum\limits_{i = 1}^{50} \left[ x_i^2 - 8.48 x_i + 17.97 \right]\\&= \frac{1}{50}\left[ \sum\limits_{i = 1}^{50} x_i^2 - 8.48\sum\limits_{i = 1}^{50} x_i + 17.97 \times 50 \right]\\&= \frac{1}{50}\left[ 902.8 - 8.48 \times 212 + 898.5 \right]\\&= \frac{1}{50}\left[ 1801.3 - 1797.76 \right]\\&= \frac{1}{50} \times 3.54\\&= 0.07\end{align}

Standard deviation, $$\sigma_1\left( \text{length} \right) = \sqrt{0.07} = 0.26$$

\begin{align}\text{C.V.}\left( \text{length} \right) &= \frac{\text{standard deviation}}{\text{mean}} \times 100 = \frac{0.26}{4.24} \times 100 = 6.13\end{align}

$$\sum\limits_{i = 1}^{50} {y_i} = 261,\;\sum\limits_{i = 1}^{50} {y_i}^2 = 1457.6$$

Here, $$N = 50$$.

Mean,

\begin{align}\bar y &= \frac{1}{N}\sum\limits_{i = 1}^{50} y_i = \frac{261}{50} = 5.22\end{align}

Variance,

\begin{align}\sigma_2^2 &= \frac{1}{N}\sum\limits_{i = 1}^{50} \left( y_i - \bar y \right)^2\\&= \frac{1}{50}\sum\limits_{i = 1}^{50} \left( y_i - 5.22 \right)^2\\&= \frac{1}{50}\sum\limits_{i = 1}^{50} \left[ y_i^2 - 10.44 y_i + 27.24 \right]\\&= \frac{1}{50}\left[ \sum\limits_{i = 1}^{50} y_i^2 - 10.44\sum\limits_{i = 1}^{50} y_i + 27.24 \times 50 \right]\\&= \frac{1}{50}\left[ 1457.6 - 10.44 \times 261 + 1362 \right]\\&= \frac{1}{50}\left[ 2819.6 - 2724.84 \right]\\&= \frac{1}{50} \times 94.76\\&= 1.89\end{align}

Standard deviation, $$\sigma_2\left( \text{weight} \right) = \sqrt{1.89} = 1.37$$

\begin{align}\text{C.V.}\left( \text{weight} \right) &= \frac{\text{standard deviation}}{\text{mean}} \times 100 = \frac{1.37}{5.22} \times 100 = 26.24\end{align}

Thus, the C.V. of the weights is greater than the C.V. of the lengths. Therefore, the weights vary more than the lengths.
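With only the sums given, the same numbers follow from $$\text{Var} = \frac{1}{N}\sum x_i^2 - \bar x^2$$; a Python sketch (exact arithmetic gives slightly different decimals than the hand computation above, which rounds $$\bar x^2$$ before subtracting):

```python
N = 50
for name, S1, S2 in [("length", 212, 902.8), ("weight", 261, 1457.6)]:
    m = S1 / N                      # mean
    var = S2 / N - m ** 2           # population variance from the two sums
    cv = var ** 0.5 / m * 100       # coefficient of variation, %
    print(name, m, round(var, 4), round(cv, 2))
# length 4.24 0.0784 6.6     (text: 0.07 and 6.13, after rounding)
# weight 5.22 1.9036 26.43   (text: 1.89 and 26.24, after rounding)
```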
2020-11-29 04:32:58
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 40, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 1.0000090599060059, "perplexity": 2288.7948948396247}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-50/segments/1606141196324.38/warc/CC-MAIN-20201129034021-20201129064021-00280.warc.gz"}
https://www.varsitytutors.com/hspt_quantitative-help/how-to-manipulate-numbers?page=2
HSPT Quantitative : How to manipulate numbers Example Questions Example Question #11 : Number Manipulation* What number is  of the average of , , and ? Explanation: Work backwards to solve this problem. First, the average of , , and  is , because . Next, find  of this number by dividing by  and multiplying by : Example Question #11 : Number Manipulation* What number is  less than the product of  and ? Explanation: Work backwards to solve this problem. First,  and . The product of these is , because . Finally,  less than  is , because . Example Question #11 : How To Manipulate Numbers percent of  percent of  is what number? Explanation: percent of  is , because . percent of  is , because . Example Question #12 : Number Manipulation* of what number is  of ? Explanation: of  is , because . Then,  of  is , because . Example Question #361 : Hspt Quantitative Skills What number is  less than the product of , and ? Explanation: Work backwards to solve this problem. First, find the product of the three numbers: Second, subtract . Example Question #16 : How To Manipulate Numbers What number is  percent of the difference between  and ? Explanation: Work backwards to solve this problem. First, find the difference between the two numbers through subtraction: Then, multiply by  to find the answer: Example Question #11 : Number Manipulation* What number when divided by  is the sum of , and ? Explanation: Find the sum of the three numbers through addition: Then, multiply by  to reverse the division: Example Question #12 : Number Manipulation* What is the difference between  and ? Explanation: First, calculate the exponents: Subtract these two to find the difference: Example Question #11 : Number Manipulation* What number is  less than  percent of ? Explanation: Work backwards to solve this problem. First,   percent of  is : Second, subtract : Example Question #11 : How To Manipulate Numbers What is  less than the square root of ?
2019-04-22 09:13:30
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9774067401885986, "perplexity": 4542.351315257209}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-18/segments/1555578548241.22/warc/CC-MAIN-20190422075601-20190422101601-00315.warc.gz"}
http://tex.stackexchange.com/questions/172964/preamble-anywhere
Preamble anywhere

If I want to use a set of macros in my TeX document, I obviously have to include the appropriate packages IN THE PREAMBLE. So now, I write a 100-page document, with 55 packages already included (in a separate .tex file or everything inline), and I want to include a new package: I would need to go to the preamble, include my package, and then return to my work. Is it possible in LaTeX for the `\usepackage{}` commands to be written anywhere in the file and not only in the preamble, something similar to C++ or Java, where the declaration of a variable need not come immediately at the start but can appear anywhere in the program (the main disadvantage of C)?

-

There are two reasons for this: a 'LaTeX' one and a 'TeX' one. The 'LaTeX' reason is that the mechanisms for loading packages are deliberately disabled at the start of the document, so for example `\usepackage` gives an error. The decision to do this is partly based on a desire for 'logical structure' but mainly due to the underlying 'TeX reason'. TeX reads files sequentially and processes as it goes. As such, there is no 'first see what packages are loaded' phase to running (La)TeX: package features can only be used after they have been loaded. As LaTeX can't know what any particular package does, this means they all need to come in the preamble before 'stuff' is typeset: the outcomes might otherwise be altered. Note that you can `\input` a file anywhere, so you can for example define commands in a secondary file then load it part-way through a document. That's not encouraged: experience suggests that commands should be defined for all of a document, not just part of it.

- Note that the usual way to handle long docs is to have a 'main' file then one or more 'content' files. Done this way, adding a package only requires switching to the main file then back to the content: not really too much work. – Joseph Wright Apr 23 '14 at 8:13
- Also see `latex.ltx`: `\@onlypreamble\usepackage`. – Ruben Apr 23 '14 at 8:15
- Does LaTeX3 support this feature? – subham soni Apr 23 '14 at 8:22
- @subhamsoni Currently, the only part of LaTeX3 that is anything like 'done' is the programming layer (`expl3`), thus talking about other parts is speculative. That said, I think it's very unlikely a change to this restriction will be made: the idea that packages add functionality to the entire document works well, and as I said there are technical reasons for therefore requiring all package loading in the preamble. – Joseph Wright Apr 23 '14 at 9:00

`\usepackage{}` can be used only in the preamble. What you can do is make other files and combine them into another document (for example using `\includepdf` if you use pdftex or equivalent).

-
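A minimal sketch of the 'main file plus content files' pattern mentioned in the comments (the file names `chapter1.tex` and `chapter2.tex` are hypothetical):

```latex
% main.tex -- every \usepackage line lives here, in the one preamble
\documentclass{article}
\usepackage{amsmath}   % adding a package later means editing only this file
\begin{document}
\input{chapter1}       % content files carry no preamble of their own
\input{chapter2}
\end{document}
```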
2015-01-31 11:52:12
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9407713413238525, "perplexity": 1405.42538214205}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-06/segments/1422115869647.75/warc/CC-MAIN-20150124161109-00071-ip-10-180-212-252.ec2.internal.warc.gz"}
http://interescena.com/freebooks/semigroups-underlying-first-order-logic-memoirs-of-the-american-mathematical-society
## Download E-books Semigroups Underlying First-order Logic (Memoirs of the American Mathematical Society) PDF

By William Craig

Table of Contents: Boolean, relation-induced, and other operations for dealing with first-order definability; Uniform relations between sequences; Diagonal relations; Uniform diagonal relations and some kinds of bisections or bisectable relations; Presentation of ${\mathbf S}_q$, ${\mathbf S}_p$ and related structures; Presentation of ${\mathbf S}_{pq}$, ${\mathbf S}_{pe}$ and related structures; Appendix: Presentation of ${\mathbf S}_{pqe}$ and related structures; Bibliography; Index of symbols; Index of words and topics; List of relations involved in presentations; Synopsis of presentations

Similar Algebra Trigonometry books

Holt Algebra 2: Student Edition 2007

Takes students a step further in learning algebra. Specifically written for low-level learners, Algebra 2 covers several methods for solving quadratic equations, such as factoring, completing the square, and graphing. The text also introduces trigonometry and exponential functions, vital concepts for real-world applications.

McGraw-Hill's 500 College Algebra and Trigonometry Questions: Ace Your College Exams: 3 Reading Tests + 3 Writing Tests + 3 Mathematics Tests (McGraw-Hill's 500 Questions)

500 ways to achieve your best grades. We want you to succeed on your college algebra and trigonometry midterm and final exams. That is why we have selected these 500 questions to help you study more effectively, use your preparation time wisely, and get your best grades. These questions and answers are similar to the ones you will find on a typical college exam, so you will know what to expect on test day.

MyMathLab College Algebra: Guided Notebook, 2nd Edition

This is the guided notebook by Kirk Trigsted, College Algebra 2nd edition, for use with MyMathLab (unbound but unwrapped).

Additional info for Semigroups Underlying First-order Logic (Memoirs of the American Mathematical Society)
2018-03-18 17:45:46
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.31289640069007874, "perplexity": 7553.702726323547}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-13/segments/1521257645830.10/warc/CC-MAIN-20180318165408-20180318185408-00522.warc.gz"}
http://paperity.org/p/37070502/on-sumsets-and-convex-hull
On Sumsets and Convex Hull Discrete & Computational Geometry, Sep 2014 One classical result of Freiman gives the optimal lower bound for the cardinality of $$A+A$$ if $$A$$ is a $$d$$-dimensional finite set in $$\mathbb R^d$$. Matolcsi and Ruzsa have recently generalized this lower bound to $$|A+kB|$$ if $$B$$ is $$d$$-dimensional, and $$A$$ is contained in the convex hull of $$B$$. We characterize the equality case of the Matolcsi–Ruzsa bound. The argument is based partially on understanding triangulations of polytopes. This is a preview of a remote PDF: https://link.springer.com/content/pdf/10.1007%2Fs00454-014-9633-2.pdf Károly J. Böröczky, Francisco Santos, Oriol Serra. On Sumsets and Convex Hull, Discrete & Computational Geometry, 2014, 705-729, DOI: 10.1007/s00454-014-9633-2
2018-10-24 00:19:38
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8482266664505005, "perplexity": 710.0837207839269}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-43/segments/1539583517628.91/warc/CC-MAIN-20181024001232-20181024022732-00294.warc.gz"}
http://mathhelpforum.com/calculus/208500-using-integrals-print.html
# Using Integrals :/

• November 26th 2012, 08:30 PM illicitkush

Using Integrals :/

I need somewhere to start this problem. We're working on integrals in class and I need help with this application. Question: Water flows into an empty storage tank at a rate of V'(t) = 40e^(-0.01t) liters per minute. How much water is in the storage tank after 20 minutes? What I did to set it up was I set up the integral from 0 to 20 with the integrand being 40e^(-0.01t). Now from here do I just take the anti-derivative and then plug in my times and subtract them from each other?

• November 26th 2012, 08:37 PM MarkFL

Re: Using Integrals :/

Yes, you would have: $V(20)-V(0)=\int_0^{20}V'(t)\,dt$ Since the tank is initially empty, i.e. $V(0)=0$ we then have: $V(20)=40\int_0^{20}e^{-0.01t}\,dt$

• November 26th 2012, 08:45 PM illicitkush

Re: Using Integrals :/

Okay, I got everything you said. The only thing is I try to do u-substitution but I'm not sure if that would work. I'm getting u = -.01t, du = -.01dt. So I don't know if I multiply the 40 by the reciprocal of -.01 or what?

• November 26th 2012, 08:56 PM illicitkush

Re: Using Integrals :/

Okay, I got it to be 40(e^-.01(20)) and got it to be 792.03986. To the nearest thousandth that would be 792.039 liters.

• November 26th 2012, 09:03 PM illicitkush

Re: Using Integrals :/

I did something wrong. Damn it.

• November 26th 2012, 09:03 PM MarkFL

Re: Using Integrals :/

Your $u$ substitution is what I would use: $u=-0.01t\,\therefore\,du=-0.01\,dt$ and so we now have: $V(20)=40(-100)\int_{u(0)}^{u(20)}e^{u}\,(-0.01\,dt)=4000\int_{-\frac{1}{5}}^0 e^u\,du$ Note: For the last step I used the rule $-\int_a^b f(x)\,dx=\int_b^a f(x)\,dx$

• November 26th 2012, 09:13 PM illicitkush

Re: Using Integrals :/

For the answer I got -8.008 liters? Did I do something wrong? 4000*[e^0 - e^(-.01)*(-1/5)] and got -8.008 liters.

• November 26th 2012, 09:47 PM MarkFL

Re: Using Integrals :/

In liters, we have: $V(20)=4000\int_{-\frac{1}{5}}^0 e^u\,du=4000\left[e^u \right]_{-\frac{1}{5}}^0=4000\left(e^0-e^{-\frac{1}{5}} \right)=$ $4000\left(1-e^{-\frac{1}{5}} \right)\approx725.0769876880727$

• November 26th 2012, 09:49 PM illicitkush

Re: Using Integrals :/

What happens to the -.01 inside the e function?

• November 26th 2012, 10:01 PM MarkFL

Re: Using Integrals :/

We did away with that when we substituted. We rewrote the definite integral in terms of u.
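As a numerical sanity check of the closed form above (a sketch, separate from the thread):

```python
import math

exact = 4000 * (1 - math.exp(-0.2))        # 4000*(1 - e^(-1/5))

# midpoint-rule approximation of the original integral of 40*e^(-0.01 t)
n = 100_000
dt = 20 / n
approx = sum(40 * math.exp(-0.01 * (i + 0.5) * dt) * dt for i in range(n))

print(exact, approx)   # both ~= 725.0769876880727 liters
```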
2014-08-27 17:21:55
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 9, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9549130201339722, "perplexity": 2073.581374460412}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-35/segments/1408500829661.96/warc/CC-MAIN-20140820021349-00315-ip-10-180-136-8.ec2.internal.warc.gz"}
https://mathproblems123.wordpress.com/category/algebra/
### Archive

Archive for the 'Algebra' Category

## SEEMOUS 2018 – Problems

Problem 1. Let ${f:[0,1] \rightarrow (0,1)}$ be a Riemann integrable function. Show that

$\displaystyle \frac{\displaystyle 2\int_0^1 xf^2(x) dx }{\displaystyle \int_0^1 (f^2(x)+1)dx }< \frac{\displaystyle \int_0^1 f^2(x) dx}{\displaystyle \int_0^1 f(x)dx}.$

Problem 2. Let ${m,n,p,q \geq 1}$ and let the matrices ${A \in \mathcal M_{m,n}(\Bbb{R})}$, ${B \in \mathcal M_{n,p}(\Bbb{R})}$, ${C \in \mathcal M_{p,q}(\Bbb{R})}$, ${D \in \mathcal M_{q,m}(\Bbb{R})}$ be such that

$\displaystyle A^t = BCD,\ B^t = CDA,\ C^t = DAB,\ D^t = ABC.$

Prove that ${(ABCD)^2 = ABCD}$.

Problem 3. Let ${A,B \in \mathcal M_{2018}(\Bbb{R})}$ such that ${AB = BA}$ and ${A^{2018} = B^{2018} = I}$, where ${I}$ is the identity matrix. Prove that if ${\text{tr}(AB) = 2018}$ then ${\text{tr}(A) = \text{tr}(B)}$.

Problem 4. (a) Let ${f: \Bbb{R} \rightarrow \Bbb{R}}$ be a polynomial function. Prove that

$\displaystyle \int_0^\infty e^{-x} f(x) dx = f(0)+f'(0)+f''(0)+...$

(b) Let ${f}$ be a function which has a Taylor series expansion at ${0}$ with radius of convergence ${R=\infty}$. Prove that if ${\displaystyle \sum_{n=0}^\infty f^{(n)}(0)}$ converges absolutely then ${\displaystyle \int_0^{\infty} e^{-x} f(x)dx}$ converges and

$\displaystyle \sum_{n=0}^\infty f^{(n)}(0) = \int_0^\infty e^{-x} f(x)\,dx.$

Hints:

1. Just use $2f(x) \leq f^2(x)+1$ and $xf^2(x) < f^2(x)$. The strict inequality comes from the fact that the Riemann integral of a strictly positive function cannot be equal to zero. This problem was too simple…

2. Use the fact that ${ABCD = AA^t}$, therefore ${ABCD}$ is symmetric and positive semidefinite. Next, notice that ${(ABCD)^3 = ABCDABCDABCD = D^tC^tB^tA^t = (ABCD)^t = ABCD}$. Notice that ${ABCD}$ is diagonalizable and has eigenvalues among ${-1,0,1}$. Since it is also positive semidefinite, ${-1}$ cannot be an eigenvalue. This allows us to conclude.

3. First note that the commutativity allows us to diagonalize ${A,B}$ using the same basis. Next, note that ${A,B}$ both have eigenvalues of modulus one. Then the trace of ${AB}$ is simply the sum ${\sum \lambda_i\mu_i}$ where ${\lambda_i,\mu_i}$ are eigenvalues of ${A}$ and ${B}$, respectively. The fact that the trace equals ${2018}$ and the triangle inequality show that ${\lambda_i \mu_i = 1}$ for every ${i}$, i.e. ${\lambda_i = \overline{\mu_i}}$. Finish by observing that the traces of the real matrices ${A}$ and ${B}$ are real, so ${\text{tr}(A) = \overline{\text{tr}(B)} = \text{tr}(B)}$.

4. (a) Integrate by parts and use a recurrence. (b) Use (a) and the approximation of a continuous function by polynomials on compacts to conclude.

I'm not sure what others think, but the problems this year seemed a bit too straightforward.

## Putnam 2017 A2 – Solution

Problem A2. We have the following recurrence relation

$\displaystyle Q_n = \frac{Q_{n-1}^2-1}{Q_{n-2}},$

for ${n \geq 2}$, given ${Q_0=1}$ and ${Q_1=x}$. In order to prove that ${Q_n}$ is always a polynomial with integer coefficients we should prove that ${Q_{n-2}}$ divides ${Q_{n-1}^2-1}$ somehow. Recurrence does not seem to work very well. Also, root-based arguments might work, but you need to take good care in the computation. A simpler idea, which is classic in this context, is to try and linearize the recurrence relation. In order to do this, let's write two consecutive recurrence relations

$\displaystyle Q_nQ_{n-2} +1 = Q_{n-1}^2$

$\displaystyle Q_n^2 = Q_{n+1}Q_{n-1}+1$

We add them and we obtain the following relation

$\displaystyle \frac{Q_n}{Q_{n-1}} = \frac{Q_{n+1}+Q_{n-1}}{Q_n+Q_{n-2}},$

which leads straightforwardly to a telescoping argument.
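Explicitly, the relation says that ${(Q_{n+1}+Q_{n-1})/Q_n}$ does not change when ${n}$ decreases, so

$\displaystyle \frac{Q_{n+1}+Q_{n-1}}{Q_n} = \frac{Q_n+Q_{n-2}}{Q_{n-1}} = \cdots = \frac{Q_2+Q_0}{Q_1} = \frac{(x^2-1)+1}{x} = x,$

that is, ${Q_{n+1} = xQ_n - Q_{n-1}}$.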
Finally, we are left with a simple linear recurrence with integer-coefficient polynomials, and the result follows immediately by induction.

## Balkan Mathematical Olympiad 2017 – Problems

Problem 1. Find all ordered pairs of positive integers ${(x, y)}$ such that:

$\displaystyle x^3+y^3=x^2+42xy+y^2.$

Problem 2. Consider an acute-angled triangle ${ABC}$ with ${AB < AC}$ and let ${\omega}$ be its circumscribed circle. Let ${t_B}$ and ${t_C}$ be the tangents to the circle ${\omega}$ at points ${B}$ and ${C}$, respectively, and let ${L}$ be their intersection. The straight line passing through the point ${B}$ and parallel to ${AC}$ intersects ${t_C}$ in point ${D}$. The straight line passing through the point ${C}$ and parallel to ${AB}$ intersects ${t_B}$ in point ${E}$. The circumcircle of the triangle ${BDC}$ intersects ${AC}$ in ${T}$, where ${T}$ is located between ${A}$ and ${C}$. The circumcircle of the triangle ${BEC}$ intersects the line ${AB}$ (or its extension) in ${S}$, where ${B}$ is located between ${S}$ and ${A}$. Prove that ${ST}$, ${AL}$, and ${BC}$ are concurrent.

Problem 3. Let ${\mathbb{N}}$ denote the set of positive integers. Find all functions ${f:\mathbb{N}\longrightarrow\mathbb{N}}$ such that

$\displaystyle n+f(m)\mid f(n)+nf(m)$

for all ${m,n\in \mathbb{N}}$.

Problem 4. On a circular table sit ${n > 2}$ students. First, each student has just one candy. At each step, each student chooses one of the following actions:

• (A) Gives a candy to the student sitting on his left or to the student sitting on his right.

• (B) Separates all his candies into two, possibly empty, sets and gives one set to the student sitting on his left and the other to the student sitting on his right.

At each step, students perform the actions they have chosen at the same time. A distribution of candy is called legitimate if it can occur after a finite number of steps. Find the number of legitimate distributions. (Two distributions are different if there is a student who has a different number of candies in each of these distributions.)

Source: AoPS

## IMC 2016 – Day 2 – Problem 8

Problem 8. Let ${n}$ be a positive integer and denote by ${\Bbb{Z}_n}$ the ring of integers modulo ${n}$. Suppose that there exists a function ${f:\Bbb{Z}_n \rightarrow \Bbb{Z}_n}$ satisfying the following three properties:

• (i) ${f(x) \neq x}$,

• (ii) ${x = f(f(x))}$,

• (iii) ${f(f(f(x+1)+1)+1) = x}$ for all ${x \in \Bbb{Z}_n}$.

Prove that ${n \equiv 2}$ modulo ${4}$.

## IMC 2016 – Day 1 – Problem 2

Problem 2. Let ${k}$ and ${n}$ be positive integers. A sequence ${(A_1,...,A_k)}$ of ${n\times n}$ matrices is preferred by Ivan the Confessor if ${A_i^2 \neq 0}$ for ${1\leq i \leq k}$, but ${A_iA_j = 0}$ for ${1\leq i,j \leq k}$ with ${i \neq j}$. Show that ${k \leq n}$ in all preferred sequences, and give an example of a preferred sequence with ${k=n}$ for each ${n}$.

## Balkan Mathematical Olympiad – 2016 Problems

Problem 1. Find all injective functions ${f: \mathbb R \rightarrow \mathbb R}$ such that for every real number ${x}$ and every positive integer ${n}$,

$\displaystyle \left|\sum_{i=1}^n i\left(f(x+i+1)-f(f(x+i))\right)\right|<2016$

Problem 2. Let ${ABCD}$ be a cyclic quadrilateral with ${AB < CD}$. The diagonals intersect at the point ${F}$ and lines ${AD}$ and ${BC}$ intersect at the point ${E}$. Let ${K}$ and ${L}$ be the orthogonal projections of ${F}$ onto lines ${AD}$ and ${BC}$ respectively, and let ${M}$, ${S}$ and ${T}$ be the midpoints of ${EF}$, ${CF}$ and ${DF}$ respectively.
Prove that the second intersection point of the circumcircles of triangles ${MKT}$ and ${MLS}$ lies on the segment ${CD}$.

Problem 3. Find all monic polynomials ${f}$ with integer coefficients satisfying the following condition: there exists a positive integer ${N}$ such that ${p}$ divides ${2(f(p)!)+1}$ for every prime ${p>N}$ for which ${f(p)}$ is a positive integer.

Problem 4. The plane is divided into squares by two sets of parallel lines, forming an infinite grid. Each unit square is coloured with one of ${1201}$ colours so that no rectangle with perimeter ${100}$ contains two squares of the same colour. Show that no rectangle of size ${1\times1201}$ or ${1201\times1}$ contains two squares of the same colour.

## SEEMOUS 2016 – Problems

Problem 1. Let ${f}$ be a continuous and decreasing real valued function defined on ${[0,\pi/2]}$. Prove that

$\displaystyle \int_{\pi/2-1}^{\pi/2} f(x)dx \leq \int_0^{\pi/2} f(x)\cos x dx \leq \int_0^1 f(x) dx.$

When do we have equality?

Problem 2. a) Prove that for every matrix ${X \in \mathcal{M}_2(\Bbb{C})}$ there exists a matrix ${Y \in \mathcal{M}_2(\Bbb{C})}$ such that ${Y^3 = X^2}$.

b) Prove that there exists a matrix ${A \in \mathcal{M}_3(\Bbb{C})}$ such that ${Z^3 \neq A^2}$ for all ${Z \in \mathcal{M}_3(\Bbb{C})}$.

Problem 3. Let ${A_1,A_2,...,A_k}$ be idempotent matrices (${A_i^2 = A_i}$) in ${\mathcal{M}_n(\Bbb{R})}$. Prove that

$\displaystyle \sum_{i=1}^k N(A_i) \geq \text{rank} \left(I-\prod_{i=1}^k A_i\right),$

where ${N(A_i) = n-\text{rank}(A_i)}$ and ${\mathcal{M}_n(\Bbb{R})}$ is the set of ${n \times n}$ matrices with real entries.

Problem 4. Let ${n \geq 1}$ be an integer and set

$\displaystyle I_n = \int_0^\infty \frac{\arctan x}{(1+x^2)^n}dx.$

Prove that

a) ${\displaystyle \sum_{n=1}^\infty \frac{I_n}{n} =\frac{\pi^2}{6}.}$

b) ${\displaystyle \int_0^\infty \arctan x \cdot \ln \left( 1+\frac{1}{x^2}\right) dx = \frac{\pi^2}{6}}$.

Some hints follow.
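As a computational aside on IMC 2016 Day 2 Problem 8 above: for small ${n}$ the claim can be checked by brute force (a Python sketch; the search is over all ${n^n}$ maps, so only tiny ${n}$ are feasible):

```python
from itertools import product

# Search all f: Z_n -> Z_n, encoded as tuples, for the three conditions
# (i) f(x) != x, (ii) f(f(x)) = x, (iii) f(f(f(x+1)+1)+1) = x (mod n).
def exists(n):
    for f in product(range(n), repeat=n):
        if all(f[x] != x and f[f[x]] == x for x in range(n)) and \
           all(f[(f[(f[(x + 1) % n] + 1) % n] + 1) % n] == x for x in range(n)):
            return True
    return False

print([n for n in range(2, 8) if exists(n)])
# Prints [2, 6]: the only n in this range with n = 2 (mod 4).
# For these two values, f(x) = x + n/2 is an explicit witness.
```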
2018-03-17 22:05:47
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 166, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.978203296661377, "perplexity": 149.94777531255193}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-13/segments/1521257645362.1/warc/CC-MAIN-20180317214032-20180317234032-00333.warc.gz"}
https://nforum.ncatlab.org/discussion/8386/
• CommentRowNumber1.
• CommentAuthorTodd_Trimble
• CommentTimeApr 5th 2018
• (edited Apr 5th 2018)

I have begun cleaning up the entry cycle category, tightening up definitions and proofs. This should render some of the past discussion obsolete, by re-expressing the intended homotopical intuitions (in terms of degree one maps on the circle) more precisely, in terms of "spiraling" adjoints on the poset $\mathbb{Z}$. Here is some of the past discussion I'm now exporting to the nForum:

The cycle category may be defined as the subcategory of Cat whose objects are the categories $[n]_\Lambda$ which are freely generated by the graph $0\to 1\to 2\to\ldots\to n\to 0$, and whose morphisms $\Lambda([m],[n])\subset\mathrm{Cat}([m],[n])$ are precisely the functors of degree $1$ (seen either at the level of nerves or via the embedding $\mathrm{Ob}[n]_\Lambda\to \mathbf{R}/\mathbf{Z}\cong S^1$ given by $k\mapsto k/(n+1)\,\mathrm{mod}\,\mathbf{Z}$ on the level of objects, the rest being obvious).

The simplex category $\Delta$ can be identified with a subcategory of $\Lambda$, having the same objects but with fewer morphisms. This identification does not respect the inclusions into $Cat$, however, since $[n]$ and $[n]_\Lambda$ are different categories.
I would prefer if we give a more explicit combinatorial description of $\Lambda$ as its definition, although we could also include this version later on the page. Zoran Skoda: As I wrote above, $k\mapsto k/(n+1)\,\mathrm{mod}\,\mathbf{Z}$, is THE formula for embedding $[n]_{\Lambda}$ into the circle. On the other hand, the nerve the free category on the graph $0\to 1\to ..\to n\to 0$ is homotopically the circle, isn’t it? I think the definition is cleaner than the explanation below via generators and relations. Mike: What about “$\Lambda$ is the category of finite nonempty cyclically ordered sets?” I think that gets across the intended intuition better than either, and is cleaner than either modulo a definition of “cyclic order.” Zoran Skoda: very good! Mike: I think I understand what you are getting at with your definition now, although I still don’t think it’s quite right yet. I agree that the nerve of $[n]_\Lambda$ is homotopically a circle—except when $n=0$. And I think that exception means that not all the functors you want have degree 1—those that factor through $[0]_\Lambda$ have degree 0. It seems like those might be the only functors with degree 0, though so maybe it would suffice to consider all functors with degree 0 or 1. Mike Shulman: Apparently I’m wrong: the $0$-cycle is still supposed to be a “loop” of some sort. So maybe your definition is right as long as the category $[0]_\Lambda$ is defined as freely generated by an endomorphism $0\to 0$. This page should probably be rewritten with an “Idea” section at the beginning and then descriptions of the many different ways to define it formally. =– This discussion is about the name of the category. +–{.query} We might also call it the cycle category in analogy with simplex category, cube category, and globe category that we've already got here. If that's a good system. —Toby Mike: I like that system. Zoran Škoda I personally prefer category of cycles, even sometimes category of simplices, category of (hyper)cubes as I hear from geometers. Partly because when you translate to other languages, bahuvrihi style (which is anyway an abbreviation of the other form) is not preferred (unlike in German where it is even written as one word, and in English in which it is one word but is written as two), or sometimes impossible, hence one needs to convert the modifier back into an adjective when translating, what one does not need with saxon genitive. But I am ambivalent to that issue in other cases, but cycle category sounds too similar to cyclic category (for simplicial there is no problem as it sounds very different from simplex)… Mike: Of course, also “category of simplices” has a different meaning: one talks about “the category of simplices of a simplicial set” to mean the comma category of $\Delta$ over it. The simplex category is then the category of simplices of the terminal simplicial set. Regarding translation, I would be inclined to just regard that as something that happens in translation. Since English uses noun adjuncts frequently, it must be commonplace for translators to replace them with the preferred forms in other languages. There are lots of other cases in translation where you can’t just replace word-for-word; doesn’t translation really consist instead of writing new sentences in the target language with the same meaning as those in the source language? Zoran Skoda: Look, one can translate a phrase, but not just a stack of nouns. 
Stack of nouns either stays stack of nouns (what is very awkward nowadays, with young people using lots of stacks of nouns literally from English semi-translated to languages which do not do massive bahuvrihi compounds) or need to be expanded/described. But how to expand cycle category then to category of cycles. Hence I have no problem to translate cycle category to Croatian, but then it will coincide in Croatian with category of cycles. Or I can make unnatural compounds with hyphen ciklus-kategorija, what sounds like water-fruit for juicy fruit or for fruit with juice. What do you, mathematics-man, think of this word-compound tongue-thing? And beware that the cost (unusualness) in many languages of such compounds is orders of magnitude more unusual than in English. Or give me another descriptive expansion (if category of cycles already used/not accepted!). By the way, I hear around certain students of Kan talking "the category of simplices" for $\Delta$. I see no problem with the fact that there are more general "categories of simplices" in specialistic usage (non-specialists use them very little and specialists will anyway say simplices where…$\Delta$ is used by everybody, not only homotopy theorists or simplicial experts, the latter comma category is a rather specialists' thing).

Mike: You appear to be trying to ridicule me for exactly what I was saying one shouldn't do. Of course, when you translate "cycle category" to Croatian it will come out the same as "category of cycles." When written in language X, mathematics should be written in language X, not simply obtained from language Y by replacing things word-for-word; this is just as true when X=English and Y=Croatian as when X=Croatian and Y=English. In particular, a noun together with some noun adjuncts is a phrase in English, not just a "stack of nouns," and should be translated to result in a grammatical phrase in whatever other language one is translating to. Don't blame me because some people translate things from English incorrectly. I don't have a problem with people saying "category of simplices" for $\Delta$, but I prefer to say "simplex category" myself as it is slightly more precise.

Mike: You do, however, have a good point that when writing in English we should not try to distinguish in meaning between "cycle category" and "category of cycles," since when translating into many other languages they will become the same. I would still prefer that we name the page "cycle category" here on the nLab, since it accords better with our existing terminology for similar categories, but I think you should feel free to use "category of cycles."

• CommentRowNumber3.
• CommentAuthorTodd_Trimble
• CommentTimeApr 5th 2018

Tweaked the Idea section a bit.

• CommentRowNumber4.
• CommentAuthorTodd_Trimble
• CommentTimeApr 6th 2018

Added a reference to an article by Elmendorf on which a number of latest changes were based.

• CommentRowNumber5.
• CommentAuthorUrs
• CommentTimeApr 6th 2018

As a minimal cross-link with Hochschild/cyclic (co-)homology I added the following sentence to the Idea section: The cycle category is used for the description of the cyclic structure on Hochschild homology/Hochschild cohomology and accordingly for the description of cyclic homology/cyclic cohomology.

• CommentRowNumber6.
• CommentAuthorUrs
• CommentTimeApr 6th 2018

Added mentioning of cyclic object already in the Idea-section. I notice that we also have cyclic set (added mentioning of that, too).

• CommentRowNumber7.
• CommentAuthorUrs
• CommentTimeApr 6th 2018

I wanted to go ahead and cross-link cyclic set and cyclic object with each other, but in that very moment something seems to have happened to the nLab and each request now gives an error message.

• CommentRowNumber8.
• CommentAuthorUrs
• CommentTimeApr 6th 2018

It works again now. I discovered that we also have paracyclic set and paracyclic object, none of which was properly cross-linked. Am adding some links now.

• CommentRowNumber9.
• CommentAuthorTim Campion
• CommentTimeApr 16th 2021

Added a reference to the paracycle category, which was previously defined on the page, but not given that name. Actually Nikolaus and Scholze say “paracyclic category”, but since we’re saying “cycle category” rather than “cyclic category”, the term “paracycle category” seems more consistent.

• CommentRowNumber10.
• CommentAuthorUrs
• CommentTimeJun 26th 2021

expanded out the citation data for:

• CommentRowNumber11.
• CommentAuthorUrs
• CommentTimeJun 26th 2021

re #9: but since we’re saying “cycle category” rather than “cyclic category”

I just noticed this. Maybe we shouldn’t. It seems “cyclic category” is the standard term.

• CommentRowNumber12.
• CommentAuthorDmitri Pavlov
• CommentTimeJun 26th 2021

MathSciNet lists 37 hits for “cyclic category” and 1 for “cycle category” (but in the sense of cycles in chain complexes, so not the same usage).

• CommentRowNumber13.
• CommentAuthorUrs
• CommentTimeJun 27th 2021

Thanks. Okay, so I have changed the entry title. I suppose the reasoning for the non-standard name was the following remark in the entry (Thus, the common term “the cyclic category” to refer to $\Lambda$ is misleading, just like using “the simplicial category” to refer to the simplex category $\Delta$.)

• CommentRowNumber14.
• CommentAuthorTodd_Trimble
• CommentTimeJun 27th 2021

I don’t plan on standing in the way of determined opposition, but FWIW, “cycle category” is a better grammatical fit to simplex category. I feel that from the outset, “cyclic category” was never a great name, particularly if “the” and not “a” is the only admissible article, but if forces that be decide to change the title, then please keep open “cycle category” as a redirect.

• CommentRowNumber15.
• CommentAuthorUrs
• CommentTimeJun 27th 2021

Of course it’s kept as a redirect. The priority must be that information can be found. If established terminology is to be critiqued, that ought to be done explicitly by a dedicated paragraph in the entry, not implicitly by silently making changes that the reader is left to figure out. And while “cyclic category” has problems, the term “cycle category” is possibly worse, as becomes more pronounced by its would-be dualization to “cocycle category”. The actual analog to “simplex category” would be something like “circle category”.

• CommentRowNumber16.
• CommentAuthorDavid_Corfield
• CommentTimeJun 27th 2021

We have David Ben-Zvi backing that when he wrote: The way I understand it the category $\Lambda$ is just a way to express the circle and its classifying space in simplicial language.

• CommentRowNumber17.
• CommentAuthorUrs
• CommentTimeJun 27th 2021

That’s certainly well-known; Loday in 1992 amplifies it nicely in his textbook, chapter 6 (here).

• CommentRowNumber18.
• CommentAuthorDavid_Corfield
• CommentTimeJun 27th 2021

Added reference to where the cyclic category is discussed as a generalized Reedy category:

• CommentRowNumber19.
• CommentAuthorUrs
• CommentTimeJun 27th 2021
• (edited Jun 27th 2021)

re #17: I correct that: Loday gets around to explaining this only in Chapter 7 (here).
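Since the discussion above repeatedly contrasts the circle picture of $\Lambda$ with its description by generators and relations, it may help to record the latter explicitly. The following is only a sketch of the standard presentation (as in chapter 6 of Loday's textbook referenced in #17); conventions for the relations vary slightly between sources. The objects are $[n]$ for $n\geq 0$, and morphisms are generated by the cofaces $\delta_i\colon [n-1]\to [n]$ ($0\leq i\leq n$), the codegeneracies $\sigma_j\colon [n+1]\to [n]$ ($0\leq j\leq n$) and the cyclic operators $\tau_n\colon [n]\to [n]$, subject to the simplicial identities together with

$$\tau_n \delta_i = \delta_{i-1}\tau_{n-1}\ (1\leq i\leq n),\qquad \tau_n\delta_0 = \delta_n,$$

$$\tau_n \sigma_i = \sigma_{i-1}\tau_{n+1}\ (1\leq i\leq n),\qquad \tau_n\sigma_0 = \sigma_n\tau_{n+1}^2,$$

$$\tau_n^{n+1} = \mathrm{id}_{[n]}.$$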
2021-09-25 00:44:22
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 43, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7571324110031128, "perplexity": 1417.6758634556752}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780057584.91/warc/CC-MAIN-20210924231621-20210925021621-00446.warc.gz"}
https://calculator.academy/market-to-book-ratio-calculator/
Enter the total market value ($) and the total book value ($) into the Market to Book Ratio Calculator. The calculator will evaluate and display the Market to Book Ratio. ## Market to Book Ratio Formula The following formula is used to calculate the Market to Book Ratio. MBVR = MV / BV * 100 • Where MBVR is the Market to Book Ratio (%) • MV is the total market value ($) • BV is the total book value ($) To calculate the market to book ratio, divide the total market value by the total book value, then multiply by 100. ## How to Calculate Market to Book Ratio? The following example problems outline how to calculate Market to Book Ratio. Example Problem #1: 1. First, determine the total market value ($). • The total market value ($) is given as: 275. 2. Next, determine the total book value ($). • The total book value ($) is provided as: 900. 3. Finally, calculate the Market to Book Ratio using the equation above: MBVR = MV / BV * 100 The values given above are inserted into the equation below and the solution is calculated: MBVR = 275 / 900 * 100 = 30.56 (%) Example Problem #2: For this problem, the variables required are provided below: total market value ($) = 800 total book value ($) = 10000 Following the same steps as above, the values are inserted into the equation and the solution is calculated: MBVR = 800 / 10000 * 100 = 8 (%)
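The formula is simple enough to script directly. Below is a minimal Python sketch of the same calculation; the function name and the zero-check are my own choices, not part of the calculator page:

```python
def market_to_book_ratio(market_value: float, book_value: float) -> float:
    """Return the market-to-book ratio as a percentage: MV / BV * 100."""
    if book_value == 0:
        raise ValueError("book value must be nonzero")
    return market_value / book_value * 100

# Example Problem #1: MV = 275, BV = 900
print(round(market_to_book_ratio(275, 900), 2))  # 30.56 (%)

# Example Problem #2: MV = 800, BV = 10000
print(market_to_book_ratio(800, 10000))          # 8.0 (%)
```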
2023-01-31 03:31:55
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4528582692146301, "perplexity": 2315.403850934309}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764499842.81/warc/CC-MAIN-20230131023947-20230131053947-00679.warc.gz"}
https://projecteuclid.org/euclid.aos/1522742431
## Annals of Statistics

### Regularization and the small-ball method I: Sparse recovery

#### Abstract

We obtain bounds on estimation error rates for regularization procedures of the form \begin{equation*}\hat{f}\in\mathop{\operatorname{argmin}}_{f\in F}\Big(\frac{1}{N}\sum_{i=1}^{N}(Y_{i}-f(X_{i}))^{2}+\lambda \Psi(f)\Big)\end{equation*} when $\Psi$ is a norm and $F$ is convex. Our approach gives a common framework that may be used in the analysis of learning problems and regularization problems alike. In particular, it sheds some light on the role various notions of sparsity have in regularization and on their connection with the size of subdifferentials of $\Psi$ in a neighborhood of the true minimizer. As “proof of concept” we extend the known estimates for the LASSO, SLOPE and trace norm regularization.

#### Article information

Source: Ann. Statist., Volume 46, Number 2 (2018), 611–641.

Dates: Revised: January 2017. First available in Project Euclid: 3 April 2018.

Permanent link: https://projecteuclid.org/euclid.aos/1522742431

Digital Object Identifier: doi:10.1214/17-AOS1562

Mathematical Reviews number (MathSciNet): MR3782379

Zentralblatt MATH identifier: 06870274

#### Citation

Lecué, Guillaume; Mendelson, Shahar. Regularization and the small-ball method I: Sparse recovery. Ann. Statist. 46 (2018), no. 2, 611–641. doi:10.1214/17-AOS1562. https://projecteuclid.org/euclid.aos/1522742431
#### Supplemental materials

Supplementary material to “Regularization and the small-ball method I: sparse recovery”. In the supplementary material we study a general $X$ without assuming it is isotropic.
2020-08-12 10:22:26
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4045967161655426, "perplexity": 4594.472233469413}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439738888.13/warc/CC-MAIN-20200812083025-20200812113025-00393.warc.gz"}
https://www.physicsforums.com/threads/confusion-regarding-sin-and-cos.119367/
# Confusion regarding sin and cos

1. Apr 30, 2006

### ness9660

Generally I'm confused about the use of sin and cos in physics problems.

http://img188.imageshack.us/img188/3162/eg2gu.gif [Broken]

The torque about the beam's attachment to the wall is: T * 8 * sin(53)

Where T is the tension of the wire. Why is sin the choice and not cos?

The best correlation I've come up with so far was in two-dimensional collisions, where motion in the y axis is always associated with sin, while the x axis goes with cos. Can anyone give any insight?

Last edited by a moderator: May 2, 2017

2. Apr 30, 2006

### robphy

Better:
cos goes with the ADJACENT side
sin goes with the OPPOSITE side
(from the definitions of sin() and cos(), of course).

3. Apr 30, 2006

### Curious3141

No, don't use "blind" methods of association to learn stuff like that - you will make mistakes later on (and they are not always applicable). (Edit: this is in reference to the orig. post, not robphy's reply.)

The magnitude of the moment (torque) of a force about a point is the product of the force and the perpendicular distance from the point to the line of force (this is called the "moment arm"). Draw a perpendicular from the point of attachment at the wall to the wire (which corresponds to the direction of the tensional force) and calculate the length of the perpendicular segment with trig.

More properly, the definition of torque is $$\tau = r \times F$$, meaning the cross product of the position vector of the point of application of force (taking the fulcrum to be the origin) and the force itself. By the definition of the cross product, the magnitude of the torque will always come out to the product of the magnitudes of the distance and the force times the sine of the angle between them, i.e.

$$|\tau| = |r||F|\sin \theta$$

which you can verify is the case in this problem too (though in this case, $$\theta$$ is actually (180 - 53) = 127 degrees, which has the same sine as 53 degrees). The only thing is that torque (properly defined) is a vector quantity, and its direction is at right angles to the other two vectors; in this case, the torque vector will be pointing out of the page at you.

Last edited: Apr 30, 2006

4. May 1, 2006

### kudos213

Good info already stated. What I'll say is that perhaps it's better to view the result as 8*(T*sin(53)). Now look at the term in parentheses... T*sin(53). What would happen to that term if the 53 degrees went to 0? Since the rope and the beam would then lie along the same line, the perpendicular component of the tension (and hence its torque) would be 0. So ask the question: "if theta went to 0, what trig function, sin or cos, would also give me 0?" You have to have the problem set up right with axes and all that, but this helps when trying to decide between sin and cos...
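To make the sin-versus-cos choice concrete, here is a small Python check using the 8 (presumably metres) and 53 degrees from the thread, with the tension T left at 1 N for illustration:

```python
import math

T = 1.0        # tension in the wire (N), illustrative value
r = 8.0        # distance from the wall attachment to where the wire pulls (m)
theta = 53.0   # angle between the beam and the wire (degrees)

# Torque magnitude via the cross-product formula |tau| = |r||F| sin(theta)
tau = T * r * math.sin(math.radians(theta))
print(tau)  # ~6.389 N*m per newton of tension

# sin(127 deg) == sin(53 deg): the angle between the position vector and the
# force is 180 - 53 = 127 degrees, but the sine is unchanged.
print(math.isclose(math.sin(math.radians(127)), math.sin(math.radians(53))))  # True
```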
2017-05-25 16:36:11
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.790557324886322, "perplexity": 432.7916615983559}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-22/segments/1495463608107.28/warc/CC-MAIN-20170525155936-20170525175936-00325.warc.gz"}
http://kleine.mat.uniroma3.it/mp_arc-bin/mpa?yn=95-464
95-464 Duan, J. and Wiggins, S.
Nonlinear Stability of One-Layer Geostrophic Fronts
(133K, PostScript) Oct 18, 95

Abstract. We study the nonlinear stability of one-layer geostrophic fronts described by frontal geostrophic equations. We have shown that the $\beta$-effect plays a crucial role in the stability of one-layer geostrophic fronts. In particular, we have proved that the class of nonlinearly stable fronts is more restricted when the $\beta$-effect is present than when it is absent. This indicates an important difference between the $\beta$-plane approximation (of the earth's surface) and the $f$-plane approximation.

Files: 95-464.src( desc , 95-464.ps )
2018-04-23 15:13:15
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7678653001785278, "perplexity": 4936.227011116055}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-17/segments/1524125946077.4/warc/CC-MAIN-20180423144933-20180423164933-00480.warc.gz"}
https://ee.gateoverflow.in/458/gate-electrical-2015-set-2-question-42
A 3-phase transformer rated for $33\ kV/11\ kV$ is connected in delta/star as shown in the figure. The current transformers (CTs) on the low and high voltage sides have a ratio of $500/5$. Find the currents $i_{1}$ and $i_{2}$, if the fault current is $300$ A as shown in the figure.

1. $i_{1}=1/\sqrt{3}\ A,\ i_{2}=0\ A$
2. $i_{1}=0\ A,\ i_{2}=0\ A$
3. $i_{1}=0\ A,\ i_{2}=1/\sqrt{3}\ A$
4. $i_{1}=1/\sqrt{3}\ A,\ i_{2}=1/\sqrt{3}\ A$
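The figure is not reproduced here, so the full phasor analysis cannot be carried out, but the CT scaling step is independent of the figure. As a sketch of that step, a 500/5 CT reflects the 300 A fault current to its secondary as

$$i_{sec} = 300\ \text{A} \times \frac{5}{500} = 3\ \text{A},$$

and the $1/\sqrt{3}$ factors appearing in the options then come from the delta/star phase relationships of the transformer and CT connections, which redistribute this secondary current among the relay branches.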
2022-12-08 12:09:29
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.835604727268219, "perplexity": 482.9616634681153}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446711336.41/warc/CC-MAIN-20221208114402-20221208144402-00006.warc.gz"}
https://www.nature.com/articles/s41598-018-34001-w
# Sensitive, Real-time and Non-Intrusive Detection of Concentration and Growth of Pathogenic Bacteria using Microfluidic-Microwave Ring Resonator Biosensor

Scientific Reports, volume 8, Article number: 15807 (2018)

## Abstract

Infection diagnosis and antibiotic susceptibility testing (AST) are time-consuming and often laborious clinical practices. This paper presents a microwave-microfluidic biosensor for rapid, contactless and non-invasive testing of the concentration and growth of Escherichia coli (E. coli) in medium solutions of different pH, to increase the efficacy of clinical microbiology practices. The thin-layer interface between the microfluidic channel and the microwave resonator significantly enhanced the detection sensitivity. The microfluidic chip, fabricated using standard soft lithography, was injected with bacterial samples and incorporated with a microwave microstrip ring resonator sensor with an operation frequency of 2.5 GHz and an initial quality factor of 83 for detecting the concentration and growth of bacteria. The resonator had a coupling gap area of 1.5 × 1.5 mm² as its sensitive region. The presence of different concentrations of bacteria in different pH solutions was detected by screening the changes in the resonant amplitude and frequency responses of the microwave system. The sensor demonstrated a near-immediate response to changes in the concentration of bacteria and a maximum sensitivity of 3.4 MHz per logarithmic unit of bacteria concentration. The lowest prepared optical density of bacteria tested was an OD600 value of 0.003. The sensor’s resonant frequency and amplitude parameters were utilized to monitor bacteria growth over a 500-minute time frame, which demonstrated a stable response with respect to detecting bacterial proliferation. A highly linear response was demonstrated for detecting bacteria concentration at various pH values. The growth of bacteria analyzed over the resonator showed an exponential growth curve with respect to time and concurred with the lag-log-stationary-death model of cell growth. This biosensor is one step toward automating the complex AST workflow of clinical microbiology laboratories for rapid and automated detection of bacteria as well as screening bacterial proliferation in response to antibiotics.

## Introduction

Bacterial infection is a common problem in hospitals around the world [1]. Every hour of delay in antibiotic treatment increases the mortality rate of patients with sepsis and septic shock by 7.6% [2]. The failure to diagnose early and critical stages of bacterial infections can be detrimental to a patient’s health and potentially fatal. The existing methods of diagnosing infections and performing antibiotic susceptibility testing (AST) suffer from a time-consuming and laborious process, typically taking 2–5 days to obtain accurate and reliable results [3,4,5]. Patients typically start showing symptoms of sepsis when the blood concentration of bacteria reaches 1–100 CFU/mL [6]. Several detection methods have been developed for bacteria detection and AST, including optical imaging [7,8], cell counting [9,10,11,12,13], Coulter counters [14], pH monitoring [15], magnetic [16], fluorescent [17,18,19], electrochemical [20], or bioluminescence [17] detection, all of which require extensive image or signal processing and are therefore undesirable for clinical applications.
Furthermore, their instrumentation can be large, expensive and laborious to work with, and lacks the potential for miniaturization and subsequent development for point-of-care applications. The lack of standardized protocols also creates discrepancies in results. There is still a crucial need to develop diagnostic devices compatible with the workflow of clinical microbiology laboratories and capable of rapid, real-time, high-throughput and contactless detection of the concentration and growth of bacteria. The advent of lab-on-a-chip microfluidic technology has revolutionized the clinical analysis, medical research and diagnostics fields. Due to several inherent characteristics of microfluidic technologies, such as portability, smaller patient sample requirements, miniaturization, minimal user intervention, cost-effectiveness, enhanced sensitivity and specificity, and higher throughput, they have recently been used as a potent platform for multiple medical applications, including clinical microbiology. Clinical analysis of blood and urine bioassays for monitoring infectious diseases, drug discovery and development, and microbiology and pathology studies are becoming fully automated, increasing the efficiency of clinical practices [21,22,23,24,25,26,27,28,29]. Dedicated work has gone into the development of microfluidic chips with the ability to detect bacteria [30] and bacterial growth [8] with high sensitivity, and to perform rapid and accurate AST by employing accurate gradients of antibiotics [31,32]. Furthermore, through multiplexing, a multitude of assay combinations are possible in the realm of microfluidic-based biosensing [33]. Contemporary microfluidic devices are being combined with electromagnetic technologies to develop highly sensitive electromagnetic sensors for the detection of cells and molecules. Microwave-based resonator devices have recently demonstrated significant potential for biosensing [26,33,34,35,36]. They can translate a variation in the dielectric properties of adjacent materials into quantifiable electrical signals, such as resonant frequency and resonant amplitude, in a remote, non-contact manner. Present resonant-based bacteria sensing devices operate mostly in the optical, microwave and terahertz spans of the electromagnetic spectrum, among which planar microwave resonators have attracted extensive attention in recent years [33,36,37]. Through their intrinsic advantages, including a simple and low-cost fabrication process and compatibility with other state-of-the-art technologies such as printed circuit boards, complementary metal oxide semiconductors (CMOS), and microfluidic lab-on-a-chip systems [38], their merits as biosensors in biomedical applications increase significantly. Several groups, including ours, have integrated planar microwave sensing structures with microfluidic technology and demonstrated outstanding potential for sensing and monitoring [36,37,39], where the resonator is able to detect changes in the electric field as a result of changes in the liquid inside the microfluidic channel. Nicolic-Jarik et al. [40] demonstrated a transmission-line resonator, together with radiofrequency (RF) readout circuitry, to detect small capacitive changes of 650 zF in an interdigitated capacitor structure within a microfluidic channel. Nakouti et al. [41] also employed an interdigitated capacitor structure embedded within microfluidic channels to detect bacterial contamination in water based on the reflection pattern of microwave power.
Utilizing a planar microwave resonator in conjunction with microfluidics as a biosensor to assist in infection diagnosis and AST analysis could increase clinical efficiency and decrease infection fatalities. Further studies are needed to prove that this is a viable pathway for the future of AST biosensors. In this study, a simplified model of bacteria concentration and growth analysis within various environments is presented. Escherichia coli (E. coli) MG1655 bacteria with OD600 concentrations ranging from 0.003 to 1 are injected into the microfluidic chip resting on top of the resonator (Fig. 1). The electrical signal of the resonator is then analyzed by means of a vector network analyzer (VNA). The advantages of this work include real-time, non-contact, accurate and reliable testing of bacteria concentration in a variety of environments and bioassays. It also has the potential to be improved to perform real-time AST in a point-of-care setting. The bacteria concentrations tested in this work are equivalent to the clinical concentrations of bacteria presently handled in clinical microbiology laboratories, which are typically greater than 10⁵ CFU/mL, although clinics can require high volumes of patient samples [42,43]. Given the possible presence of bacteria in a plethora of environments within the human body [44], the effect of environmental factors needs to be considered when developing biosensors for the diagnosis of bacterial infections. pH is a strong indicator of the environmental conditions, which greatly affect the metabolic activities of bacteria and therefore their ability to proliferate and their fatality potential to patients [45]. Bacteria in pH environments of 5.5–8 are examined in increments of 0.5 to calibrate the biosensor. This pH range fits well within the pH of human biofluids, although most biofluids are around the generally accepted physiological pH of 7.4. Furthermore, the growth of bacteria is monitored as a function of time to assess the sensor’s performance in detecting bacterial proliferation under transient conditions. This study allows us to further develop a rapid, label-free and contactless diagnostic tool for clinical analysis of biofluids in clinical microbiology laboratories, both for rapid detection of bacteria and for screening the interaction of bacteria and antibiotics.

## Results and Discussion

### Electromagnetic Field Analysis of the Sensor

A ring resonator structure was implemented in High Frequency Structure Simulator (HFSS) software to analyze the electric fields surrounding the microwave platform. This 2-port structure was designed to operate over a 2–3 GHz frequency span with an initial quality factor of 83.58. Excitation through one port and receiving power from the other port was used to analyze the absorption of microwave power within the sensor’s vicinity. Figure 2a displays the simulated structure within the HFSS software. According to the simulation results, the electric field is highly concentrated around the microstrip resonator gap within the ring structure. This gap area, particularly at the surface of the resonator, is the region most sensitive to variation within the sensor’s environment (Fig. 2b). Electric field analysis was conducted in two planes perpendicular to the surface of the resonator to demonstrate the penetration distance of the electric field at the resonant frequency (Fig. 2c,d).
It is therefore implied that the sensitive region of the resonator has the capability to handle a volume of 0.38 µL. The measured S21 profiles are compared with the HFSS simulation results for two scenarios: the bare resonator, and the resonator assembled with the microfluidic chip (Fig. 2e). A close agreement between the resonant profile of the simulation and the experimental data is demonstrated (Fig. 2d). The resonant frequency and amplitude of the simulation were found to be 2.576 GHz and −3.658 dB, respectively, while the experimental resonant frequency and amplitude were determined to be 2.591 GHz and −3.698 dB, respectively, giving 0.58% and 1.09% errors between the experimental and theoretical values. Following the validation of the numerical model, the resonator integrated with the microfluidic chip was simulated to have a resonant profile of 2.511 GHz and −5.25 dB (Fig. 2f). The experimental data were then gathered to be 2.511 GHz and −3.882 dB, showing 0% and 26.05% errors in the resonant frequency and amplitude, respectively. These errors, although within reasonable bounds, can be caused by differences between the simulated and real permittivity, the loss factors of the thin PDMS layer and ultra-thin glass, and deviations in the height of the bulk PDMS structure. Furthermore, spatial differences between the experimental setup and the theoretical simulations have a considerable effect on these values; the exact placement of the microchannel above the resonator therefore also plays a role in the small numerical and experimental errors. These errors, however, do not affect the subsequent results, as they are proposed as baseline results for the following tests with the resonator in the presence of fluids and bacteria.

### Experimental Setup of the Sensor and VNA

The room temperature was set to 20 °C for all sets of experiments, unless otherwise mentioned. The cables and tubing were secured with duct tape to limit movement and mechanical drift in the data. The microfluidic chip was secured onto the resonator using double-sided tape (Fig. 3a). The entire setup can be seen in Fig. 3b. Temperature variations would require the sensor to be calibrated for the testing temperature. The VNA was brought to operating temperature and calibrated within the frequency span of 2–3 GHz at this temperature using 2001 steps in transmission mode, with an IF bandwidth of 1 kHz. The resonant frequency and amplitude were extracted from the S21 parameters for different bacteria concentrations at distinct pH levels. The heat generated at these settings was minuscule and can be neglected in any portion of the analysis. The response from the VNA was nearly immediate, but measurements were taken at 1-minute intervals to ensure homogeneous distribution of the bacterial fluid within the chip.

### Microwave and Imaging-based Detection of Bacterial Concentration

For detecting bacteria concentrations using microscopy, various concentrations of E. coli were prepared (at pH 7.5 with an initial OD600 value of 1.2) and injected into the microfluidic chip at dilution factors of 0, 1, 2, 5, 10, 25, 50, 100 and 200. When the fluidic channel was filled with the liquid, the flow was stopped and the bacteria within the channel were imaged immediately with the Nikon AR1 microscope. Figure 3c shows a view inside the bare microfluidic channel.
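The dilution factors above map to nominal concentrations by simple division. As a rough Python sketch of the nominal values (assuming plain volumetric dilution from the starting culture at OD600 = 1.2, which is my reading of the protocol rather than an explicit statement in the text; dilution factor 0 presumably denotes plain medium):

```python
# Nominal OD600 for each dilution factor, assuming simple volumetric dilution
# from the starting culture at OD600 = 1.2.
start_od = 1.2
for d in [1, 2, 5, 10, 25, 50, 100, 200]:
    print(f"1:{d} dilution -> OD600 ~ {start_od / d:.4f}")
```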
The concentration of bacteria present above the active region of the sensor increases with decreasing dilution factors in subsequent steps of the experiment (Fig. 3d). The concentration of bacteria detected within the microchannel for different dilution factors is proportional to the concentration of bacteria injected into the chip, confirming the uniform distribution of bacteria within the microfluidic chip. It is noted that no bacteria were found adhering to the PDMS surfaces, which is important to maintain a homogeneous representation of each dilution factor. The flow rate of 50 µL/min through the channel was high enough to produce enough shear to prevent cells from sticking to the channel walls. This was observed by fixing the camera view on the channel wall; through multiple iterations of replacing the fluid with higher bacterial concentrations, no cells were found to adhere to the channel walls. As shown in Fig. 4, a highly linear relationship of resonant amplitude and resonant frequency is demonstrated with respect to the logarithm of the OD600 value of the solution with pH 7.5 residing inside the microfluidic chamber above the resonator’s active region. The linear response of the resonator to the logarithm of OD600 values also extends to all other pH values (Fig. 4). We examined the resonator response to pH values within the range of 5.5–8 in increments of 0.5 and OD600 values ranging from 0.003 to approximately 1 (Fig. 4). Each test was measured three times and the error bars are displayed (the error bars are too small to be seen in Fig. 4). The trends clearly display decreasing resonant frequency and resonant amplitude with increasing OD600 values. All relationships displayed R² values greater than 0.99, except for the resonant frequency plot at pH 5.5, which displayed an R² value of 0.9832. This response is highly linear and reliable; however, a higher variance was detected in the resonant frequency response due to the introduction of excess protons, metabolites and proteins by these cells. Moreover, changes in resonant frequency and resonant amplitude are registered immediately, providing a strong avenue towards rapid AST and diagnosis. It is noted that the difference in resonant frequency and resonant amplitude between pH values can be explained by the minuscule spatial differences in placing the microchannel on the resonator and the size variations of the PDMS blocks. The slope of each line is indicative of the sensitivity of the device (in resonant frequency and resonant amplitude) to OD600, and hence to the bacterial concentration. It is expected that each solution interacts with the electric field differently, as each pH inherently differs in charge. This may affect the sensitivity of the resonator, as impedance and permittivity both affect the electric field interactions. Subsequently, the resonator can be further optimized with this knowledge to test under different environments by utilizing a higher frequency range or a different geometry. The graph for pH 8 shows the highest slope magnitude of −0.0034 GHz/log(OD600) in resonant frequency, displaying a large sensitivity to bacterial concentrations. Meanwhile, the graph for pH 7 shows a slope of −0.0007 GHz/log(OD600), which is less sensitive to bacterial concentration.
Similarly, the graph for pH 7.5 has a large sensitivity to bacterial concentration with respect to resonant amplitude (−0.4771 dB/log(OD600)), and the graph for pH 6 has the lowest sensitivity with respect to resonant amplitude (−0.0130 dB/log(OD600)).

### Microwave-based Screening of Bacterial Growth

A setup similar to that used for the microwave-based detection of bacterial concentration (Fig. 3) was used to screen bacterial growth, with the sole difference that the sensor was placed inside a New Brunswick Scientific Innova 40 shaker incubator to maintain the atmospheric temperature at 37.4 °C. This is the recommended culturing temperature at which the proliferation of E. coli is fastest. Bacteria at pH 7.5 and OD600 0.008 were injected into the chip, left inside the microfluidic channel and incubated for several hours. This pH was selected because pH 7.5 is the closest to the human body’s physiological pH of 7.4. Furthermore, it was also the second most sensitive with respect to resonant frequency, after pH 8. The tubing was crimped at both the inlet and outlet, and data were gathered every 10 min over a 500-minute span. The results show that both the amplitude and frequency decrease with time, consistent with the trends observed in the experiments previously conducted in this study. The resulting bacteria-growth data from the resonator were fitted with exponential curves, giving R² values of 0.997 and 0.9893 for resonant amplitude and resonant frequency, respectively. The resulting equations (1) and (2) for the graphs (Fig. 5) are indicative of the lag-log-stationary growth model of bacterial cells. However, several variables other than bacterial growth may contribute to the signals, e.g. metabolic reactions, pH and CO2 changes, the nutrient concentration in the MH medium, and aggregation and colonization of the bacteria during culture. These variations in the sensor’s response can be initiated by the permittivity and loss factor inside the microfluidic channel. It is difficult to distinguish the response of each of these individual factors through simple S-parameter analysis in a complex biological environment. However, they can be utilized as an indication of the overall metabolic activity of the bacterial sample. While the results of Fig. 5 are demonstrated to be highly predictive and potentially applicable for clinical use, multiplexing of the microwave resonator and microchannel is one solution for in-depth analysis of these factors. Combining multiple ring resonators in a microwave design allows one section to act as a control to be compared to another [46], which could lead to deeper levels of analysis of specific factors. Further experiments were done to culture bacteria over an 8-hour period. OD600 values were recorded every hour, starting from 0.008, and the same lag-log-stationary growth model was shown to be prevalent (Supplementary Figure S1). It was found that, upon comparing the resonant data with the OD600, the extreme ends of the bacteria concentration range tended to deviate from linearity, as displayed in Fig. 5c,d. This is shown by comparing the transient OD data with the corresponding resonant data. The slower growth rate at the start and end of the experiment is not as prevalent in the resonant data, as the slope of the resonant frequency does not deviate proportionally to the transient OD changes and the slope of the resonant amplitude barely shifts at all.
However, during the middle of the experiment, as the OD reaches ~0.05, the resonant data display a near-linear relationship with the OD until the OD reaches ~3.2, where the linearity of the data is once again compromised. Further optimization of the sensor is required to increase the sensitivity and widen the linear detection range.

$$f(t)=20.3\,e^{0.0005933\,t}-24.73\,e^{0.0005192\,t} \qquad \text{(Resonant amplitude)} \quad (1)$$

$$g(t)=0.001797\,e^{-0.03534\,t}+2.4983\,e^{-3.477\times 10^{-6}\,t} \qquad \text{(Resonant frequency)} \quad (2)$$

It is noted that the concentration initially cultured in this work is far below the concentrations typically handled by clinics, which demonstrates the potential of this device in a clinical setting. Seeing as clinical samples require approximately 48 hours to culture, utilizing this technology to display changes in culture samples can prove beneficial. Since the overall metabolic activity of the solution changes as bacteria proliferate, the VNA can help indicate the presence of bacteria in a matter of minutes, as opposed to standard clinical practices. Furthermore, the non-invasive and contactless features are assets, as they increase sterility in a hospital setting. Our contactless bacteria sensing device could further be adapted for use directly on patient samples.

In summary, a microfluidic-integrated microwave biosensor for detecting the concentration and proliferation of E. coli in real time was presented and tested. Our sensing platform possesses several key advantages, such as low sample and reagent volume requirements (~400 nL), enhanced detection sensitivity (detecting OD600 values of 0.003, without confirming the limit of detection), rapid detection times (almost immediate), and enhanced combinatorial capabilities compared to several alternative electrical sensing techniques. Moreover, our method enables direct observation and enumeration of bacteria. Images of the bacteria suspended within the microchannel show a uniform distribution of bacteria all along the channel, without any adherence to the PDMS surface. This work gives us insight into the dielectric properties of E. coli K12 MG1655 and how they can be correlated to the resonant amplitude and resonant frequency of the resonating sensor. Given the highly linear relationships between the electrical signal and the concentration of bacteria, further tests need to be established using our microwave-microfluidic platforms for developing diagnostic methods and antibiotic susceptibility testing (AST). This work could benefit clinical microbiology laboratories by automating the workflow of AST and increasing their capabilities for the diagnosis and handling of bacterial infections. Future work will focus on improving the layout of the electrodes and reducing the distance between the microwave resonator and the microfluidic channel to detect E. coli at lower concentrations. Further experiments need to be performed to assess the feasibility of rapid diagnosis and management of infections.
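To make the fitted growth model concrete, here is a minimal Python sketch that evaluates equations (1) and (2) over the 500-minute window. The coefficients are taken verbatim from the fits above; no new data is implied, and the time unit (minutes) is inferred from the 500-minute measurement span:

```python
import math

def resonant_amplitude_db(t_min: float) -> float:
    """Equation (1): fitted resonant amplitude (dB) vs. time (minutes)."""
    return 20.3 * math.exp(0.0005933 * t_min) - 24.73 * math.exp(0.0005192 * t_min)

def resonant_frequency_ghz(t_min: float) -> float:
    """Equation (2): fitted resonant frequency (GHz) vs. time (minutes)."""
    return 0.001797 * math.exp(-0.03534 * t_min) + 2.4983 * math.exp(-3.477e-6 * t_min)

# Both quantities start near the baseline values (~-4.4 dB, ~2.5 GHz at t = 0)
# and decrease slowly as the culture grows.
for t in (0, 100, 250, 500):
    print(t, round(resonant_amplitude_db(t), 4), round(resonant_frequency_ghz(t), 5))
```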
Moving beyond our present work, the ability of the microfluidic-microwave platform to support multiplexed and high-throughput cell culture and sensing over extended periods of time (up to several hours) in a non-invasive and non-contact system makes it a versatile tool for time-resolved sensing of live cells in a multiplexed fashion for other applications, including cancer detection and systems biology.

## Methods and Materials

### Reagents and Materials

Sodium phosphate dibasic (795410) and sodium phosphate monobasic (RDD007) for pH adjustment were purchased from Sigma Aldrich, Canada. Mueller Hinton Broth BBL (211443) as growth medium (GM) was purchased from BD, Canada. Ammonium persulfate was purchased from MG Chemicals (Surrey, Canada). The polydimethylsiloxane (PDMS) and curing agent were obtained from Dow Corning Sylgard. Ultra-thin AF32 glass of 70 µm thickness was kindly provided by Schott Glass USA. Tygon Microbore tubing (0.020″ ID × 0.060″ OD, 100 ft/roll) was purchased from Cole-Parmer, Montreal, Canada. The PCB board was provided by Rogers Corp., USA. A Rohde and Schwarz ZNB20 VNA was used for the analysis of the resonator signals.

### Bacterial Sample Preparation

The bacterial strain utilized in this work is the wild-type strain DA5438 (E. coli MG1655). In preparation for analysis, E. coli from 50% glycerol stocks at −80 °C were inoculated into 50 mL Müller-Hinton (MH) growth medium and incubated (37 °C; shaking at 170 RPM) for about 10 hrs. Subsequently, the optical density (OD600) was measured using a NanoDrop™ One Microvolume UV-Vis Spectrophotometer from Thermo Fisher Scientific. The pH of each sample was measured with an Orion™ Versa Star Pro™ pH Benchtop Meter (Thermo Fisher Scientific). A mixture of 0.2 M sodium phosphate dibasic and 0.2 M sodium phosphate monobasic was prepared for pH adjustment of the bacterial samples, creating solutions with pH 5.5, 6, 6.5, 7, 7.5 and 8. MH medium of the same pH was used to further dilute the bacteria samples by factors of 1, 2, 5, 10, 25, 100 and 200, since these dilution factors bring the concentration of bacteria down to levels that cannot be handled by clinics. It is noted that the bacteria were stored at 4 °C while not in use to retard their growth and ensure the most accurate representation of each dilution factor. The samples were brought to room temperature prior to use through dilution in MH medium. For high concentrations of bacteria, 2–3 mL of MH medium was left out for about 3 min to reach room temperature.

### Microfluidic Platform

The microfluidic platform primarily consisted of PDMS and glass, as these are robust materials in microfluidic research with stable electrical properties when subjected to electromagnetic fields. The microfluidic chip features a simple straight channel produced from a 10:1 ratio of polydimethylsiloxane (PDMS) to curing agent. The channel was 2 mm wide, 0.17 mm high and 23 mm long, and capable of handling 7.82 µL of fluid. This chip was produced from a silicon wafer mold with the SU8 straight-channel features hard-baked onto the substrate through photolithography [45]. The PDMS polymer was poured over the mold and baked at 80 °C for 30 min. The chip was carved out and 1.5 mm holes were punched into the hardened polymer. The chip was subsequently baked for 24 hrs. Simultaneously, PDMS polymer of the same ratio was spin-coated over ultra-thin glass (70 µm in thickness), after being cleaned with acetone and nitrogen, to form an additional layer of 25 µm.
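The quoted 7.82 µL channel capacity follows directly from the stated dimensions; here is a one-line check, assuming a rectangular cross-section (my assumption, though standard for soft-lithography channels):

```python
# Channel volume: width (mm) x height (mm) x length (mm); 1 mm^3 = 1 uL.
print(round(2 * 0.17 * 23, 2))  # 7.82 (uL)
```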
Ultra-thin glass was used for its mechanical rigidity, maintaining the shape and size of the PDMS-based chips over long experiments after bonding, given that the microwave sensing technology is very sensitive to spatial variations. The 70 µm thin glass layer brings the fluid inside the chip as close as possible to the resonator, allowing for greater accuracy in measurement while maintaining the robustness of the design, as PDMS-covered thin glass does not shatter easily and the microfluidic chip bonds to the PDMS surface very easily, reducing the likelihood of leaks through the channel. The thin PDMS layer was cured in an oven for a 24 hr period. Following the curing of both the fluidic network layer and the thin PDMS layer over the glass slide, the bonding surfaces were thoroughly cleaned with acetone and nitrogen. The featured side of the chip was then bonded irreversibly to the thin PDMS layer on the glass through oxygen plasma treatment with an Electro-Technic Products Inc. BD 20 plasma wand, creating a hydrophilic surface. Tygon tubing, 1.5 mm outer diameter, was connected to the holes to bring flow into and out of the chip. Syringes with fluids were mounted on Harvard Apparatus syringe pumps and connected to the tubing. The waste was collected in a petri dish and discarded appropriately. The resulting fluidic device was then mounted onto the resonator with double-sided tape placed away from the resonator’s sensitive zone.

### Microwave Resonator

A microstrip planar ring resonator sensor was implemented on a high-frequency RO5880 substrate from Rogers [26]. The substrate was covered by two copper layers on its top and bottom surfaces with a thickness and conductivity of 35 µm and 58 MS/m, respectively. The substrate had a thickness of 0.79 mm with a permittivity of 2.2 ± 0.02 and a loss tangent of 0.0009.

### Microwave Measurements of Varying pH and Concentrations

Electrical measurements of the bare resonator and of the resonator placed underneath the microfluidic channel were taken before the start of the experiments with bacteria. Syringes were filled with MH medium and connected to the microfluidic chip using Tygon tubing. The fluid was introduced into the microchannel while the VNA was running. The syringes were set onto the syringe pump, calibrated to a flow rate of 50 µL/min, and run for two minutes. The applied flow rate ensured reduced attachment of bacteria to the PDMS surfaces, so every subsequent concentration was a homogeneous representation of its dilution factor. Subsequently, three measurements were recorded from the VNA at one-minute intervals. The syringes were then replaced with a higher concentration of bacteria; the fluid was once again forced in, flushing out all the fluid of the previous concentration, and set onto the syringe pump at the same flow rate. This set of experiments was repeated for all pH values of the medium and all bacteria concentrations. The plasma-treated hydrophilic surface of the PDMS helped eliminate air bubbles as subsequent fluids were forced in.

### Screening of Bacteria Growth

The resonator was placed inside a New Brunswick Scientific Innova 40 shaking incubator at a temperature of 37.4 °C, and electrical measurements of both the bare resonator and the resonator with the microfluidic chip were taken. Subsequently, a bacterial culture with an OD600 of 0.008 was injected into the microfluidic chip using a setup similar to that described above.
However, this experiment was done without flow, as the two ends of the tubing were crimped to contain the entire bacterial sample inside to proliferate. The electrical measurements of these samples were obtained at 10-minute intervals for 500 min, as this is sufficient time for E. coli to proliferate into the log phase of growth. This measurement window was chosen because the doubling time of E. coli MG1655 in MH medium is approximately 20 min [5]. Therefore, the selected 500 min experimental period provides a reasonable growth curve in terms of resonant frequency and amplitude.

Publisher’s note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

## References

1. Ventola, C. L. The antibiotic resistance crisis: part 1: causes and threats. 40, 277–83 (2015).
2. Faridi, M. A. et al. Elasto-inertial microfluidics for bacteria separation from whole blood for sepsis diagnostics. J. Nanobiotechnology 15, 3 (2017).
3. Baltekin, Ö., Boucharin, A., Tano, E., Andersson, D. I. & Elf, J. Antibiotic susceptibility testing in less than 30 min using direct single-cell imaging. Proc. Natl. Acad. Sci. USA 114, 9170–9175 (2017).
4. Ruiz-Giardín, J. M. et al. Diagnosis of bacteraemia and growth times. Int. J. Infect. Dis. 41, 6–10 (2015).
5. Ingham, C. J., van den Ende, M., Pijnenburg, D., Wever, P. C. & Schneeberger, P. M. Growth and multiplexed analysis of microorganisms on a subdivided, highly porous, inorganic chip manufactured from anopore. Appl. Environ. Microbiol. 71, 8978–81 (2005).
6. Yagupsky, P. & Nolte, F. S. Quantitative aspects of septicemia. Clin. Microbiol. Rev. 3, 269–79 (1990).
7. Choi, J. et al. Rapid antibiotic susceptibility testing by tracking single cell growth in a microfluidic agarose channel system. Lab Chip 13, 280–287 (2013).
8. Li, B. et al. Gradient microfluidics enables rapid bacterial growth inhibition testing. Anal. Chem. 86, 3131–3137 (2014).
9. Lu, Y. et al. Single cell antimicrobial susceptibility testing by confined microchannels and electrokinetic loading. Anal. Chem. 85, 3971–6 (2013).
10. Chen, C. H. et al. Antimicrobial susceptibility testing using high surface-to-volume ratio microchannels. Anal. Chem. 82, 1012–1019 (2010).
11. Price, C. S., Kon, S. E. & Metzger, S. Rapid antibiotic susceptibility phenotypic characterization of Staphylococcus aureus using automated microscopy of small numbers of cells. J. Microbiol. Methods 98, 50–58 (2014).
12. Peitz, I. & van Leeuwen, R. Single-cell bacteria growth monitoring by automated DEP-facilitated image analysis. Lab Chip 10, 2944 (2010).
13. Matsumoto, Y. et al. A microfluidic channel method for rapid drug-susceptibility testing of Pseudomonas aeruginosa. PLoS One 11, e0148797 (2016).
14. Kubitschek, H. E. & Friske, J. A. Determination of bacterial cell volume with the Coulter Counter. J. Bacteriol. 168, 1466–7 (1986).
15. Tang, Y., Zhen, L., Liu, J. & Wu, J. Rapid antibiotic susceptibility testing in a microfluidic pH sensor. Anal. Chem. 85, 2787–2794 (2013).
16. Sinn, I. et al. Asynchronous magnetic bead rotation (AMBR) biosensor in microfluidic droplets for rapid bacterial growth and susceptibility measurements. Lab Chip 11, 2604 (2011).
17. Dong, T. & Zhao, X. Rapid identification and susceptibility testing of uropathogenic microbes via immunosorbent ATP-bioluminescence assay on a microfluidic simulator for antibiotic therapy. Anal. Chem. 87, 2410–2418 (2015).
18. Boedicker, J. Q., Li, L., Kline, T. R.
& Ismagilov, R. F. Detecting bacteria and determining their susceptibility to antibiotics by stochastic confinement in nanoliter droplets using plug-based microfluidics. Lab Chip 8, 1265 (2008). 19. 19. He, J. et al. A novel microbead-based microfluidic device for rapid bacterial identification and antibiotic susceptibility testing. Eur. J. Clin. Microbiol. Infect. Dis. 33, 2223–2230 (2014). 20. 20. Pandey, A., Gurbuz, Y., Ozguz, V., Niazi, J. H. & Qureshi, A. Graphene-interfaced electrical biosensor for label-free and sensitive detection of foodborne pathogenic E. coli O157:H7. Biosens. Bioelectron. 91, 225–231 (2017). 21. 21. Madadi, H., Casals-Terré, J. & Mohammadi, M. Self-driven filter-based blood plasma separator microfluidic chip for point-of-care testing. Biofabrication 7, 25007 (2015). 22. 22. Yan, H. et al. Multiplex detection of bacteria on an integrated centrifugal disk using bead-beating lysis and loop-mediated amplification. Sci. Rep. 7, 1460 (2017). 23. 23. Lee, W., Kwon, D., Choi, W., Jung, G. Y. & Jeon, S. 3D-printed microfluidic device for the detection of pathogenic bacteria using size-based separation in helical channel with trapezoid cross-section. Sci. Rep. 5, 7717 (2015). 24. 24. Kinahan, D. J. et al. Automation of silica bead-based nucleic acid extraction on a centrifugal lab-on-a-disc platform. J. Phys. Conf. Ser. 757, 12013 (2016). 25. 25. Mohammadi, M., Madadi, H. & Casals-Terré, J. Microfluidic point-of-care blood panel based on a novel technique: Reversible electroosmotic flow. Biomicrofluidics 9, 54106 (2015). 26. 26. Zarifi, M. H., Sadabadi, H., Hejazi, S. H., Daneshmand, M. & Sanati-Nezhad, A. Noncontact and nonintrusive microwave-microfluidic flow sensor for energy and biomedical engineering. Sci. Rep. 8, 139 (2018). 27. 27. Bahadoran, M. et al. Modeling and analysis of a microresonating biosensor for detection of Salmonella bacteria in human blood. Sensors 14, 12885–12899 (2014). 28. 28. Kaushik, A. M. et al. Accelerating bacterial growth detection and antimicrobial susceptibility assessment in integrated picoliter droplet platform. Biosens. Bioelectron. 97, 260–266 (2017). 29. 29. Lee, W. B. et al. A microfluidic device for antimicrobial susceptibility testing based on a broth dilution method. Biosens. Bioelectron. 87, 669–678 (2017). 30. 30. Etayash, H., Khan, M. F., Kaur, K. & Thundat, T. Microfluidic cantilever detects bacteria and measures their susceptibility to antibiotics in small confined volumes. Nat. Commun. 7, 12947 (2016). 31. 31. Dai, J., Suh, S. J., Hamon, M. & Hong, J. W. Determination of antibiotic EC 50 using a zero-flow microfluidic chip based growth phenotype assay. Biotechnol. J. 10, 1783–1791 (2015). 32. 32. Hou, Z. et al. Time lapse investigation of antibiotic susceptibility using a microfluidic linear gradient 3D culture device. Lab Chip 14, 3409–3418 (2014). 33. 33. Mohan, R. et al. A multiplexed microfluidic platform for rapid antibiotic susceptibility testing. Biosens. Bioelectron. 49, 118–125 (2013). 34. 34. Zarifi, M. H., Thundat, T. & Daneshmand, M. High resolution microwave microstrip resonator for sensing applications. Sensors Actuators A Phys. 233, 224–230 (2015). 35. 35. Lee, H. J. & Yook, J.-G. Recent research trends of radio-frequency biosensors for biomolecular detection. Biosens. Bioelectron. 61, 448–459 (2014). 36. 36. Chen, Y. F., Wu, H. W., Hong, Y. H. & Lee, H. Y. 40  GHz RF biosensor based on microwave coplanar waveguide transmission line for cancer cells (HepG2) dielectric characterization. Biosens. Bioelectron. 
61, 417–421 (2014). 37. 37. Kim, N. Y., Dhakal, R., Adhikari, K. K., Kim, E. S. & Wang, C. A reusable robust radio frequency biosensor using microwave resonator by integrated passive device technology for quantitative detection of glucose level. Biosens. Bioelectron. 67, 687–693 (2015). 38. 38. Abbasi, Z., Zarifi, M. H., Shariati, P., Hashisho, Z. & Daneshmand, M. Flexible coupled microwave ring resonators for contactless microbead assisted volatile organic compound detection. In 2017 IEEE MTT-S International Microwave Symposium (IMS) 1228–1231, https://doi.org/10.1109/MWSYM.2017.8058827 (IEEE, 2017). 39. 39. Rawat, K. A., Bhamore, J. R., Singhal, R. K. & Kailasa, S. K. Microwave assisted synthesis of tyrosine protected gold nanoparticles for dual (colorimetric and fluorimetric) detection of spermine and spermidine in biological samples. Biosens. Bioelectron. 88, 71–77 (2017). 40. 40. Nikolic-Jaric, M. et al. Microwave frequency sensor for detection of biological cells in microfluidic channels. Biomicrofluidics 3, 34103 (2009). 41. 41. Nakouti, I., Korostynska, O., Mason, A. & Al-Shamma’, A. I. Detection of pathogenic bacteria in aqueous media: assessing the potential of real-time electromagnetic wave sensing. Proceedings of the 8th International Conference on Sensing Technology, Sep. 2–4, Liverpool, UK 2014. 42. 42. Wilson, M. L. & Gaido, L. Laboratory diagnosis of urinary tract infections in adult patients. Clin. Infect. Dis. 38, 1150–1158 (2004). 43. 43. Opota, O., Croxatto, A., Prod’hom, G. & Greub, G. Blood culture-based diagnosis of bacteraemia: state of the art. Clin. Microbiol. Infect. 21, 313–322 (2015). 44. 44. Willey, J. M., Sherwood, L. & Woolverton, C. J. Prescott’s microbiology (2014). 45. 45. Small, P., Blankenhorn, D., Welty, D., Zinser, E. & Slonczewski, J. L. Acid and base resistance in Escherichia coli and Shigella flexneri: role of rpoS and growth pH. J. Bacteriol. 176, 1729–37 (1994). 46. 46. Zarifi, M. H. et al. Effect of phosphonate monolayer adsorbate on the microwave photoresponse of TiO2 nanotube membranes mounted on a planar double ring resonator. Nanotechnology 16, 27–37 (2016). ## Acknowledgements Authors acknowledge CMC Microsystems for generously providing us with access to measurements instruments and providing access to applicable software. Authors thank Natural Sciences and Engineering Council of Canada (NSERC) for providing funding for this project. A.S.N. thanks Canada Research Chair for providing funding for this project. ## Author information ### Author notes 1. Sevda Mohammadi and Mehdi Mohammadi Ashani contributed equally. ### Affiliations 1. #### BioMEMS and Bioinspired Microfluidic Laboratory, Department of Mechanical and Manufacturing Engineering, University of Calgary, Calgary, AB, T2N 2N1, Canada • Rakesh Narang • , Mehdi Mohammadi Ashani •  & Amir Sanati-Nezhad 2. #### Microelectronics and Advanced Sensors Laboratory, School of Engineering, University of British Columbia, Kelowna, BC, V1V 1V7, Canada •  & Mohammad Hossein Zarifi 3. #### Biomedical Engineering Graduate Program, University of Calgary, 2500 University Dr. NW, Calgary, AB, T2N 1N4, Canada • Rakesh Narang • , Mehdi Mohammadi Ashani •  & Amir Sanati-Nezhad 4. #### Center for BioEngineering Research and Education, University of Calgary, Calgary, AB, T2N 1N4, Canada • Rakesh Narang • , Mehdi Mohammadi Ashani •  & Amir Sanati-Nezhad 6. 
#### Subsurface Fluidics and Porous Media Laboratory, Chemical and Petroleum Engineering, University of Calgary, Calgary, AB, T2N 1N4, Canada • Hossein Hejazi

### Contributions

R.N. designed and fabricated microfluidic chips and collected the experimental data. S.M. performed simulation modeling of microwave resonators. M.M.A. designed bacteria testing and collected the experimental data. H.S. fabricated and tested microwave chips. H.H. designed the experiment and analyzed the experimental data. M.H.Z. designed microwave resonators and interpreted the results. A.S.N. designed the experiments and analyzed the data. R.N., M.M.A., M.H.Z. and A.S.N. wrote the manuscript.

### Competing Interests

The authors declare no competing interests.

### Corresponding authors

Correspondence to Mohammad Hossein Zarifi or Amir Sanati-Nezhad.

## Electronic supplementary material

### DOI

https://doi.org/10.1038/s41598-018-34001-w
2019-03-27 00:39:04
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5732726454734802, "perplexity": 5184.187132944905}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-13/segments/1552912207146.96/warc/CC-MAIN-20190327000624-20190327022624-00043.warc.gz"}
https://www.groundai.com/project/boundary-induced-spin-density-waves-in-linear-heisenberg-antiferromagnetic-spin-chains-with-mathbfs-ge-1/
Boundary-induced spin density waves in linear Heisenberg antiferromagnetic spin chains with \mathbf{S\geq 1} # Boundary-induced spin density waves in linear Heisenberg antiferromagnetic spin chains with S≥1 Dayasindhu Dey S. N. Bose National Centre for Basic Sciences, Block - JD, Sector - III, Salt Lake, Kolkata - 700098, India    Manoranjan Kumar S. N. Bose National Centre for Basic Sciences, Block - JD, Sector - III, Salt Lake, Kolkata - 700098, India    Zoltán G. Soos Department of Chemistry, Princeton University, Princeton, New Jersey 08544, USA August 1, 2019 ###### Abstract Linear Heisenberg antiferromagnets (HAFs) are chains of spin- sites with isotropic exchange between neighbors. Open and periodic boundary conditions return the same ground state energy in the thermodynamic limit, but not the same spin when . The ground state of open chains of spins has or , respectively, for even or odd . Density matrix renormalization group (DMRG) calculations with different algorithms for even and odd are presented up to for the energy and spin densities of edge states in HAFs with , 3/2 and 2. The edge states are boundary-induced spin density waves (BI-SDWs) with for . The SDWs are in phase when is odd, out of phase when is even, and have finite excitation energy that decreases exponentially with for integer and faster than for half integer . The spin densities and excitation energy are quantitatively modeled for integer chains longer than spins by two parameters, the correlation length and the SDW amplitude, with for and 49.0 for . The BI-SDWs of chains are not localized and are qualitatively different for even and odd . Exchange between the ends for odd is mediated by a delocalized effective spin in the middle that increases and weakens the size dependence. The nonlinear sigma model (NLM) has been applied the HAFs, primarily to with even , to discuss spin densities and exchange between localized states at the ends as . chains with odd are fully consistent with the NLM; chains have two gaps with the same as predicted whose ratio is 3.45 rather than 3; the NLM is more approximate for chains with even and is modified for exchange between ends for odd . ###### pacs: 75.10.Pq,75.10.Kt,75.30.Fv,75.40.Mg ## I Introduction The Hilbert space of a system of spins has dimension . The total spin and its components are conserved for isotropic (Heisenberg) exchange interactions between spins. The simplest case is a chain with equal exchange between nearest neighbors. A great many theoretical and experimental studies have been performed on the linear Heisenberg antiferromagnet (HAF), Eq. 1 below, with and . There are multiple reasons why. First, there are good physical realizations of spin-1/2 chains in inorganic crystals with localized spins on metal ions and in organic crystals based on one-dimensional (1D) stacks of radical ions. Second, the Hilbert space is smallest for for any choice of exchange interactions, small enough to access the full spectrum and thermal physics for comparison with experiment. Third, long ago Bethe and Hulthen obtained the exact ground state Bethe (1931); *hulthen38 of the infinite chain with antiferromagnetic exchange between nearest neighbors, a prototypical gapless many-body system with quasi-long-range order. HAFs and related chains with came to the fore with Haldane’s conjecture based on field theory that integer chains are gapped. 
Haldane (1982) Shortly thereafter, White introduced the density matrix renormalization group (DMRG) method that made possible accurate numerical calculation of the ground state properties of chains. White (1992); White (1993) The thermodynamic limit of spin chains with exchange interactions leads to quantum phase diagrams with many interesting correlated phases. According to the valence bond solid (VBS) analysis, Affleck et al. (1988) integer chains have localized edge states with spin . DMRG studies of finite chains have confirmed edge states in both integer White and Huse (1993); Schollwöck et al. (1996) and half integer Qin et al. (1995); Fáth et al. (2006) chains. Machens et al., Machens et al. (2013) have recently discussed short HAFs with comparable energies for bulk excitations and edge states. They summarize previous studies such as the relation of HAFs to the nonlinear sigma model (NLM), its application to edge states, the VBS model and its valence bond diagrams. Qin et al., Qin et al. (1995) applied DMRG to HAFs up to 100 spins to discuss the energies of edge states and to distinguish between chains of integer and half integer . DMRG is quantitative for HAFs of spins with correlation length and large Haldane gap. Longer chains are necessary for the HAF with or for the gapless HAF. In this paper we consider edge states of HAFs with , 3/2 and in systems of up to 500 spins. We use conventional DMRG for chains with an even number of spins and another algorithm for chains with an odd number of spins. We compute and model the spin densities of edge states as well as their excitation energies. The Hamiltonian of the spin- HAF chain with open boundary conditions (OBC) is

$$H_S(N) = J \sum_{r=1}^{N-1} \vec{S}_r \cdot \vec{S}_{r+1}. \qquad (1)$$

The spin at site is , the total spin and its component are conserved, and is a convenient unit of energy. The terminal spins and are coupled to only one spin in Eq. 1. Periodic boundary conditions (PBC) also has between sites 1 and . Every spin is then coupled to two neighbors, the system has translational symmetry, and the smallest is expected in the ground state (GS) for AF exchange. Indeed, the GS of PBC chains is a singlet, spin , except for odd and half integer , when . The sectors of integer and half integer are disjoint, and even is conventionally taken for the thermodynamic limit. As noted by Faddeev and Takhtajan, the thermodynamic limit of the HAF with odd is not well understood. Faddeev and Takhtajan (1981) HAFs with OBC are fundamentally different because there is no energy penalty for parallel spins at sites 1 and . The GS of Eq. 1 remains a singlet for even , but it becomes a multiplet with and Zeeman degeneracy for odd . The lowest-energy triplet is necessarily an excited state when is even. For integer , the singlet is an excited state when is odd, while for half integer , the doublet is an excited state when is odd. Except in the case, depends on the boundary conditions for arbitrarily large systems. It follows that HAFs with OBC support edge states with whose energies become degenerate in the thermodynamic limit with those of PBC systems with or 1/2. We define the energy gaps of edge states in chains of spins as

$$\Gamma_S(N) = E_0(S,N) - E_0(0,N), \qquad (2)$$

where is the lowest energy in the sector with total spin . Even leads to . Odd leads to for integer and to relative to for half integer . Since DMRG algorithms conserve rather than , the most accurate results are the GS in sectors with increasing and .
Otherwise, the singlet or doublet is an excited state in the sector for integer or in the sector for half integer . The size dependence of is faster than , which distinguishes gap states from bulk excitations that may also have zero gap in the thermodynamic limit. We shall characterize edge states using spin densities and call them boundary-induced spin density waves (BI-SDWs). BI-SDW is more descriptive than edge state and is more accurate than localized state, since BI-SDWs are not localized in half integer chains. By convention, we choose the Zeeman level when and define the spin density at site as

$$\rho(r,N) = \langle S_r^z \rangle, \quad r = 1, 2, \dots, N. \qquad (3)$$

The expectation value is with respect to the state of interest. Singlet states have at all sites. SDWs with have equal spin density at and by symmetry in chains, by construction and . It is advantageous to focus on spin densities rather than energy gaps. Spin densities are exclusively associated with states while the in Eq. 2 are small differences between extensive energies. The NLM is a good approximation for HAFs, and theoretical discussions have focused as much on field theory as on spin chains. Affleck (1986a); Affleck (1986b); Schulz (1986); Affleck et al. (1989) The model for integer chains relates edge states to an effective Hamiltonian between spins at the ends, Machens et al. (2013)

$$H_{\mathrm{eff}}(N) = (-1)^N J_e \exp(-N/\xi)\, \vec{s}\,'_1 \cdot \vec{s}\,'_N. \qquad (4)$$

The correlation length and exchange are fit to DMRG results for . An interesting point is that refers to the bulk, the singlet GS in the thermodynamic limit, as has been confirmed within numerical accuracy in chains. White and Huse (1993) The chain has two gap states that afford more stringent tests of Eq. 4. For example, the ratio of the two gaps is necessarily 3:1 for . Edge states in HAFs with half integer have been discussed Ng (1994); Qin et al. (1995); Machens et al. (2013) using with effective spins and effective exchange that decreases faster than but not exponentially. Our principal goal is the quantitative description of edge states in HAFs that are sufficiently long to neglect bulk excitations in or 2 chains. The paper is organized as follows. Section II summarizes the conventional DMRG algorithm for even and a different algorithm for odd that is related to junctions. Section III presents BI-SDW spin densities and gaps for and chains with finite Haldane gaps and finite correlation lengths . DMRG returns for chains, in agreement with 6.03(1) reported previously, White and Huse (1993) and for chains. DMRG spin densities are fit quantitatively by BI-SDWs that are in phase for odd , out of phase for even . The coupling between ends is quantitative for chains and is semi quantitative for chains, in qualitative agreement with the VBS picture of localized spins. Section IV presents the BI-SDWs and gaps of the chain. The BI-SDWs are not localized in this case. The singlet-triplet gap for even decreases faster than , as anticipated by Ng. Ng (1994) The gap for odd requires a modified with a delocalized spin in the central part in addition to spins at the ends. The delocalized spin rationalizes and a weaker size dependence. The Discussion summarizes the limited nature of connections to the NLM or to VBS.

## II DMRG algorithms

By now, DMRG is a mature numerical method for 1D systems. Schollwöck (2005); Hallberg (2006) It gives excellent GS properties and has been widely applied to spin chains. Conventional DMRG starts with a superblock that consists of four sites: one site in the left block, one in the right block and two new sites, the central sites.
The left and right blocks increase by one site as two new sites are added at every step. The procedure generates a chain with OBC and an even number of sites . The vast majority of DMRG calculations have been performed on chains with even . White has discussed White (2005) an algorithm with one rather than two central sites that speeds up the computational time by a factor of two to four. The method was tested on an HAF of 100 spins. We use conventional DMRG for spin chains with even and adapt an algorithm for odd that was developed for junctions. Kumar et al. (2016) junctions of spins have three arms of spins plus a central site for which we recently presented an efficient DMRG algorithm. Fig. 2 of Ref. Kumar et al., 2016 shows the growth of the infinite DMRG algorithm. A chain of spins can be viewed as two arms of spins plus a central site. The algorithm takes the system as an arm plus the central site and the environment as the other arm. Since the system of spins at step becomes the environment at the next step, the chain grows by two spins at each step. The procedure described for junctions Kumar et al. (2016) holds with one fewer arm for chains of spins. The accuracy of the algorithm for odd is comparable to conventional DMRG, as has already been shown for junctions. Kumar et al. (2016) In either algorithm, new sites are coupled to the most recently added sites and the superblock Hamiltonian contains only new and once renormalized operators. Table 1 has representative DMRG results for the ground states of and 3/2 chains with spins. The index is the number of states kept per block. The truncation error is $1 - \sum_i \omega_i$, where the sum is over the retained eigenvalues $\omega_i$ of the density matrix. Several sweeps of finite DMRG calculations are required for or 2, with calculations per sweep, and finite DMRG is necessary for accurate spin densities. Increasing rapidly increases the required computer resources for long chains and involves trade-offs. We have checked our results against previous studies in Table 1 as well as against chains and find comparably small or smaller that amount to evolutionary improvements for even . The algorithm for odd returns equally small . In the following we have set according to Table 1 and performed 5-10 sweeps of finite DMRG for and 3/2 chains. We estimate that GS energies per site are accurate to for chains, to for and to for . The energy gaps between the GS in sectors with different total spin are accurate to for and to for or 2. The spin densities are estimated to be accurate to better than based, for example, on DMRG calculations with different algorithms for and . Accurate are readily obtained in large systems whose are not accessible.

## III Integer spin, S=1 and 2

We start with the extensively studied HAF with OBC and even . The large Haldane gap White and Huse (1993); Nakano and Terai (2009) reduces the computational effort. The singlet-triplet gap in Eq. 2 decreases rapidly with system size. The GS alternates between and 1 for even and odd , respectively. We evaluate for even as the difference of the total energy in the sectors and 1. In addition, we also obtain for odd using the first excited state in the sector. The excited state is accurate to for . As shown in Fig. 1, upper panel, with different symbols for even and odd , decreases as with . The effective exchange between the ends is in Eq. 4 with spins . The effective Hamiltonian is quantitative for chains. The gap at is , which still exceeds the estimated numerical accuracy.
The inset shows the relevant VBS valence bond diagram. Affleck et al. (1988) Each line is a singlet pair, , between spins, two per site, and the circles are unpaired spins at the ends. Comparable DMRG accuracy for chains with even has been discussed previously. Sørensen and Affleck found for and 6.028(3) for spin densities. Sørensen and Affleck (1994) White and Huse obtained White and Huse (1993) the GS energy per site very accurately and reported for the spin densities of a 60-site chain with an auxiliary spin-1/2 at one end (site in Eq. 1). Schollwöck et al., Schollwöck et al. (1996) discussed the same procedure for chains with even and an auxiliary spin-1 at one end. Auxiliary spins at both ends with adjustable exchange to sites 1 and can be used to study bulk excitations. White and Huse (1993) In this paper, we shall not resort to auxiliary spins. We always consider BI-SDWs at both ends of chains. The spin densities in Fig. 1, lower panel, are for the GS of the 65-spin chain. We take and obtain positive at odd numbered sites and negative at even numbered sites, respectively. All chains with have , which is why we call them BI-SDWs. Table 2 lists the spin densities of the first 10 sites in chains of 66/65 spins and 48/47 spins. The 66/65 spin densities clearly refer to the same triplet and speak to the numerical accuracy since different algorithms are used. The spin density at site 1 is slightly greater than 1/2, and so is the total spin density to odd-numbered sites. The total spin density to an even-numbered site approaches 1/2 from below and exceeds 0.45 at . The apparent exponential decrease of does not hold for the first few sites since, for example, . The triplets are identical near the ends but of course differ at the middle of the chain, where becomes . The 48/47 data illustrate the weak size dependence of spin densities at the ends. Well-defined edge states must become size independent. The first 10 sites of or 66 chains are close to the thermodynamic limit of BI-SDWs. To minimize the even-odd variations of spin densities and to divide out an overall scale factor, we consider the function

$$f(r,N) = \frac{\rho(r-1) - \rho(r+1)}{\rho(r-1) + \rho(r+1)} \approx -\frac{\partial}{\partial r} \ln \rho(r,N). \qquad (5)$$

is odd with respect to the chain's midpoint while is even. Figure 2 shows for chains up to the middle, . The DMRG points near the edge become size independent. Except for the first few () sites, is constant up to about . The difference between even and odd is clearly seen in the middle region, and for even, odd pairs are a convenient way to present spin densities directly without making any assumptions about the appropriate model or interpretation. It follows that the thermodynamic limit is . The lines are fits as discussed below using the correlation length from the gap , in accord with the NLM's expectation of equal for gaps and spin densities. The magnitudes of the spin densities are shown in Fig. 3 as a function of up to the middle of the chains. They decrease as and deviate upward in the middle for odd , downwards for even . The amplitude is independent of system size when . To model the spin densities of integer chains, we introduce SDWs at the left and right ends,

$$\rho(r,N) = A(-1)^{r-1} \left[ \exp(-r/\xi) - (-1)^N \exp(-(N+1-r)/\xi) \right]. \qquad (6)$$

The SDWs are in phase for odd when all odd-numbered sites have ; they are out of phase for even with equal at sites and . Except for Ref. Sørensen and Affleck, 1994, the spin densities have been assumed to decrease exponentially, thereby ignoring contributions from the other end.
While that is the case in the thermodynamic limit, is minimally required to neglect contributions from the other BI-SDW in the middle. Since the system size in DMRG calculations rarely exceeds , it is advantageous to consider both ends. We have

$$\rho(r,N) = 2A(-1)^{r-1} \exp(-(N+1)/2\xi) \begin{cases} \cosh((N+1-2r)/2\xi), & N \text{ odd} \\ \sinh((N+1-2r)/2\xi), & N \text{ even} \end{cases} \qquad (7)$$

$$f(r,N) = \tanh(1/\xi) \begin{cases} \tanh((N+1-2r)/2\xi), & N \text{ odd} \\ \coth((N+1-2r)/2\xi), & N \text{ even} \end{cases} \qquad (8)$$

The relative phase of the SDWs matters within of the middle. The range of is the same for and when is even. The lines in Fig. 2 are for and continuous in Eq. 8. The thermodynamic limit is , the dashed line in Fig. 2, and it reduces to for a continuous chain. The correlation length is accurately obtained using both even and odd chains. The spin densities indicate a gap of that is far below the accuracy of the energy difference. DMRG spin densities are also limited, however, to less than 149/150; there they show considerable scatter where has even-odd variations. The SDW amplitude in Fig. 3 accounts quantitatively for spin densities aside from the first few. The parameters and suffice for all fits in Figs. 2 and 3. The chain has a smaller Haldane gap Nakano and Terai (2009) of . Numerical analysis is more difficult since (i) there are more degrees of freedom per site; (ii) requires longer chains; and (iii) gaps also require longer chains to distinguish between edge and bulk excitations. Results are fewer and less accurate. The nature of BI-SDWs in or 3/2 chains was the motivation for DMRG calculations on even and odd chains of hundreds of spins. According to the NLM, edge states for are associated with spin in , Eq. 4. Even chains have a singlet GS and gaps to two edge states, to the triplet () and to the quintet (). The correlation length is the same for both and . The VBS valence bond diagram corresponds to two diagrams in Fig. 1(a): There are four lines per interior site and two lines, two unpaired spins at the ends. The BI-SDW analysis of chains is equally applicable to integer chains. Increasing leads to longer and to gaps whose relative magnitudes are fixed in advance by Eq. 4. A chain with and 200 spins or 400 spins has a GS energy of roughly . The corresponding in Table 3 are less than and their estimated accuracy is . Our and 3/2 gaps are consequently limited to and 450, respectively. They are differences between total energies. Spin densities, by contrast, are exclusively related to the GS in a sector with . The representative gaps in Table 3 cover more than a decade. We studied the dependence of gaps in and 3/2 chains, summarized in Table 1, in order to find the largest accessible systems. The gap of the infinite chain is slightly larger than . The competition between edge and bulk excitations in short HAFs with is discussed elsewhere. Qin et al. (1995); Fáth et al. (2006); Machens et al. (2013); Lou et al. (2002) Figure 4 shows and for and even . The gaps are exponential in , as expected for integer . The correlation length, , is the same within our numerical accuracy. The ratio is based on the fitted lines and it varies between 3.27 and 3.56 for individual points. Although is approximate, the ratio is larger than the NLM value of 3 based on Eq. 4. We return to gaps after presenting results for spin densities. chains with odd have a quintet GS and excitations to the triplet and singlet. We again use and the BI-SDW analysis. Figure 5 shows in the sector up to the middle of the chains. For the sake of clarity, not all points are shown.
Even-odd effects now extend to about the first 25 sites and become size independent in long chains. The thermodynamic limit is with . The magnitudes of spin densities in Fig. 6 are fit as a function of with the same and in Eq. 7. Two parameters are nearly quantitative aside from sites . The triplet is an excited state for either even or odd . It is the lowest state in the sector for even and the first excited state in that sector for odd . DMRG calculations for even converge slowly for reasons we do not understand in detail. The triplet spin densities return the same as the quintets in Fig. 5. The correlation length based on spin densities is more accurate than from energy gaps. The fits account for of even and odd chains that extend to 500 spins, whereas numerical accuracy limits to . Schollwöck et al., Schollwöck et al. (1996) argued that the thermodynamic limit requires and obtained (Fig. 6 of [Schollwöck et al., 1996]) for with an auxiliary spin-1 at the other end using the local correlation length . Qin et al., Qin et al. (1995) estimated that for S = 2 chains up to and remarked that the accuracy was much worse than for chains. Indeed, spin densities for return . As seen in Figs. 3 and 6 for and 2, respectively, the thermodynamic limit requires even when the contribution of the BI-SDW at the other end is included. The present results for chains offer more stringent comparisons of the NLM. The model is semi quantitative: The ratio is greater than 3. We note that BI-SDWs with exponentially decreasing would be assumed on general grounds and follows directly from , but not the same for gaps and spin densities.

## IV Half integer spin, S=3/2

HAF chains with half integer are gapless and their edge states are fundamentally different. Even chains have a singlet GS and BI-SDWs with integer ; odd chains have and BI-SDWs with half integer . The even chain has a singlet-triplet gap that decreases faster than and has been studied by Qin et al., Qin et al. (1995) and in greater detail by Fáth et al. Fáth et al. (2006) The NLM gap goes as Fáth et al. (2006)

$$N\Gamma_1(N) = \frac{a}{\ln BN} + O\!\left( \frac{\ln \ln N}{(\ln N)^2} \right) \qquad (9)$$

Fáth et al. Fáth et al. (2006) used DMRG to compute for chains from to in steps of 12 spins. The first term of Eq. 9 leads to parameters and whose size dependence was obtained from successive gaps and . Extrapolation in gave the thermodynamic values of and with uncertainties. In the present study, we are characterizing BI-SDWs in spin chains and take the first term with constant , as a two-parameter approximation. Figure 7 shows the calculated gaps of HAFs as . The gaps decrease faster than as expected for edge states. The NLM size dependence for even is Eq. 4 with . The dashed line has and as inferred by Fáth et al. Fáth et al. (2006) The solid line for even is a power law with two parameters, . Either fit is adequate over this range of system sizes, and neither accounts for the decrease at . The shortest chains in which edge and bulk excitations are decoupled are probably in the range to 60, and the desired fits are for long chains. The gaps for odd are several times larger and their size dependence is weaker. They can be approximated by a different logarithm or power law. The gaps and of the chain are in marked contrast to equal in Fig. 1 for chains with even and odd . The BI-SDWs of even chains are triplets. The ratios in Eq. 5 are quite different for the chain, either even or odd, and are not shown. The upper panel of Fig. 8 shows the magnitude of spin densities up to the middle of chains.
The SDWs converge at small but are not localized in the chain. The spin densities add up to for even . They decrease slowly and the sum over diverges in the thermodynamic limit. The lines are fits that are discussed below. The lower panel of Fig. 8 shows the cumulative spin density to site that we define as

$$T(R,N) = \sum_{r=1}^{R} \rho(r,N) + \rho(R+1,N)/2. \qquad (10)$$

The total spin density is . increases rapidly to 0.5 around , reaches a broad maximum that depends on system size and decreases as required by symmetry to in the middle of the chain. The VBS valence bond diagram in the inset has unpaired spins at each end that correspond Ng (1994) to . Each site forms three singlet-paired spins to a neighbor. The middle and either the top or bottom line corresponds to the VBS diagram of the chain with a localized spin at the ends. The remaining line with paired spins is a singlet valence bond diagram of the chain. The slow variation of in the middle and no net spin between and is consistent with singlet-paired spins. The GS of odd chains is a quartet, . Figure 9, upper panel, shows to the middle of chains. The large amplitude of in-phase BI-SDWs in the middle decreases slowly with system size. The cumulative spin density in the lower panel is again given by Eq. 10 except that the spin density is shared equally between the two halves. The total is for the entire chain, or 0.75 for the half chain. The rapid initial increase to by suggests a spin-1/2, as does the gradual increase to in the middle. The VBS valence bond diagram in the lower panel has three unpaired spins, two at one end, one at the other end; the diagram with reversed unpaired spins at the ends contributes equally by symmetry. The middle and either top or bottom line is again the VBS diagram. The remaining line is a valence bond diagram with an unpaired spin at either end. Although the diagram correctly has three unpaired spins, the DMRG spin densities clearly show one spin in the central region rather than at the ends. The HAF with odd does not support edge states. The spin density is delocalized over the entire chain. Soos and Ramasesha (1983) Even more simply, a half-filled tight binding or Hückel band of sites has spin density at odd numbered sites and at even numbered sites; in that case, goes as and immediately rationalizes the linear increase in Fig. 9, lower panel. We attribute the larger gap in Fig. 7 and its weaker dependence on system size to enhanced coupling between the ends by the delocalized spin in the middle. The BI-SDW amplitude at the middle in the upper panel of Fig. 9 decreases slightly faster than . The size dependence of the amplitude suggests modeling the spin densities as

$$\rho(r,N) = (-1)^{r-1} C_N \left[ \left( \frac{\ln Br}{r} \right)^{1/2} - (-1)^N \left( \frac{\ln B(N+1-r)}{N+1-r} \right)^{1/2} \right]. \qquad (11)$$

The amplitude depends on system size because the SDWs are not localized. We took with and the indicated to generate the lines in the upper panels of Figs. 8 and 9. The spin densities are adequately fit in the central region in either case. Deviations are seen for when is even and for when is odd. To some extent, Eq. 11 can be understood in terms of the NLM. In the thermodynamic limit, the GS spin correlation functions depend only on the separation between spins. The NLM result is Sørensen and Affleck (1994)

$$C(r) \equiv \langle S_0^z S_r^z \rangle \propto (-1)^r r^{-1/2} \exp(-r/\xi), \qquad (12)$$

for integer spin HAFs and . Several authors Nomura (1989); White and Huse (1993); Sørensen and Affleck (1994) have remarked that DMRG results for are noticeably closer to exponential in chains of 60 or 100 spins.
Since converged are limited to about , such agreement is promising but not forced. White and Huse discuss White and Huse (1993) the point explicitly and show (Fig. 4 of [White and Huse, 1993]) that the ratio to the NLM correlation function becomes constant at . The first few sites where can be computed most accurately are inevitably excluded from direct comparison since the NLM describes a continuous rather than a discrete system. The factor in does not appear in the spin densities of integer chains, Sørensen and Affleck (1994) whose exponential decrease with is shown in Figs. 3 and 6. The spin correlations of the HAF go as according to field theory Affleck et al. (1989) and Monte Carlo calculations Sandvik (2010) up to return . But exact results for in finite PBC systems Sandvik (2010) still show significant deviations at . Hallberg et al. Hallberg et al. (1996) applied the NLM and DMRG to the chain and confirmed that it belongs to the same universality class as the chain. They report and estimate from to in a 60-spin chain. We find in similar calculations for = 200. Fáth et al. Fáth et al. (2006) extrapolate to for in the thermodynamic limit. The differences are negligible in the context of spin densities. Then gives Eq. 11 when contributions from both ends are taken into account. DMRG results for deviate from Eq. 11 near the ends of chains and from Eq. 7 in integer chains. The choice of changes the fits at small . Since small is not modeled quantitatively in either case and does not concern us here, we took in Eq. 11 for the spin densities of chains. Three effective spins are needed for the spin densities when is odd, a spin in the middle in addition to spins at the ends. The generalization of Eq. 4 to half integer and odd is

$$H_{\mathrm{eff}}(N) = -J_1(N)\, \vec{s}\,' \cdot (\vec{s}\,'_1 + \vec{s}\,'_N) - J_2(N)\, \vec{s}\,'_1 \cdot \vec{s}\,'_N. \qquad (13)$$

The eight microstates of correspond to the GS quartet and two doublets. Both the total effective spin and are conserved, with , in the GS. The doublets have and or 1. The spectrum is

$$E_{\mathrm{eff}}(S', S'_{1N}) = -\frac{J_1}{2} S'(S'+1) + \frac{J_1 - J_2}{2} S'_{1N}(S'_{1N}+1) + \frac{3(J_1 + 2J_2)}{8}. \qquad (14)$$

The gap is to the doublet with singlet-paired spins at the ends; the gap to parallel spins is . The effective exchanges in Eq. 13 can be fit to DMRG results for the doublets with the lowest and second lowest energy in the sector. We find , and , . Large in Fig. 7 for odd is due to and coupling through the delocalized effective spin . The small effective exchange is antiferromagnetic. To conclude the discussion of chains, we recall that the GS for PBC and odd has . Since is between sites in the same sublattice, the system is not bipartite, and the GS has a domain wall or topological soliton. The OBC system is bipartite. The doublet with the lowest energy has positive spin densities at odd-numbered sites and negative spin densities at even-numbered sites, respectively, with singlet paired and in Eq. 13. Figure 10 shows for to the middle of chains in the upper panel and the cumulative spin density in the lower panel. The magnitude of the spin density at the middle decreases roughly as . There are no boundary-induced edge states. The spin is delocalized as expected on general grounds and becomes the effective spin in Eq. 13. By contrast, the spin densities are entirely associated with BI-SDWs in OBC systems with even or integer since singlet states have at all sites.

## V Discussion

We have applied different DMRG algorithms to spin- HAFs, Eq. 1, with even and odd numbers of sites in order to obtain accurate edge states in chains of several hundred spins.
The principal results are the energy gaps , Eq. 2, and the spin densities , Eq. 3, that are modeled as boundary-induced spin density waves (BI-SDWs) at both ends. For the HAF, we reproduce and refine previous studies on even chains of 60 or 100 spins that exceed the correlation length by an order of magnitude. We confirm that the gap goes as in chains with odd . Two parameters, and the SDW amplitude, account quantitatively for and for chains from to at least 120. The BI-SDWs are in phase for odd , out of phase for even . The smaller Haldane gap of the HAF or the gapless HAF requires substantially longer chains, here up to 500 spins, whose edge states have previously been studied in shorter chains . The spin densities of HAFs beteen and 502 are modeled by BI-SDWs with correlation length and amplitude . There are now two gaps, and , that decrease exponentially as up to the limit of our numerical accuracy. The gap ratio is . The gap of the HAF with even decreases faster than , roughly as or as . The gap for odd has larger amplitude and weaker size dependence. The BI-SDWs of the chain have maximum spin density at the ends but are not localized. The spin densities in chains of more than 100 spins have not been previously reported to the best of our knowledge. The ground state for odd can be modeled as a spin-1/2 at each end and a spin- in between. DMRG calculations can be performed on longer chains of and/or larger . But the condition for integer is increasingly difficult to satisfy for small Haldane gaps whose rapid decrease has been reported Nakano and Terai (2009) to . Moreover, the gaps will require extraordinary accuracy since, as shown in Table 3, is reached at for or at for . Spin densities are more promising probes of long chains in terms of the sectors of and spins. But the required system size for half integer is poorly known and may not have been reached in the present work. The nonlinear sigma model (NLM) and valence bond solid (VBS) have been applied to spin chains, primarily to the HAF in the thermodynamic limit. Machens et al. Machens et al. (2013) summarize and critically evaluate both the NLM and VBS in connection with short chains of less than 20 spins. In partial disagreement with earlier works, they find that the effective coupling between edge states in Eq. 4 in short chains is influenced by the comparably small finite-size gaps of bulk excitations. We have characterized long chains whose prior modeling has mainly been for . Accurate DMRG results for or HAFs are a prerequisite for comparisons, mainly via in Eq. 4, with either the NLM or VBS. Good agreement in chains carries over to some extent to chains and less so to chains. Spin densities to yield for the correlation length of the chain. The gaps in shorter chains return the same , but the ratio is 3.45 instead of 3. The deviation is real. The 3/2 chain does not follow the Lou et al. (2002) pattern of in Eq. 4. The BI-DWIs are not localized. Two effective spins at the ends account for when is even. A third in the middle leads to and in Eq. 13 for odd . In other ways, however, comparisons are simply not possible. Since field theory starts with a continuous system rather than a discrete chain, the ends can be distinguished from the bulk but not sites at a finite distance from the ends. Similarly, VBS deals with special Hamiltonians, Affleck et al. (1988); Machens et al. (2013); Schollwöck et al. (1996); Totsuka and Suzuki (1995) that contain, in addition to Eq. 1, terms that go as with and coefficients . 
Exact GS are obtained in the thermodynamic limit of these models. The relevant valence bond diagrams have paired spins, as shown, except at the first and last sites. Either the NLM or VBS correctly places localized states or unpaired spins for integer , but neither describes the BI-SDWs found in DMRG calculations spin- HAFs. The BI-SDWs are not localized in half integer chains and have different effective coupling between ends. Direct solution of Eq. 1 for chains inevitably leads to edge states whose features are blurred or lost in the NLM or VBS. Comparisons may well be limited to effective spins and exchange at the ends. The occurrence of edge states in HAFs with follows directly from Eq. 1, as shown in the Introduction. PBC systems with have except for half integer and odd , when . OBC systems with have for even and for odd . The energy per site in the thermodynamic limit cannot depend on boundary conditions for short-range interactions. Different under OBC and PBC implies edge states, or BI-SDWs, in HAF with and gaps relative to for even or integer S or to for odd and half integer S. The size dependence and interpretation of gaps or spin densities are standard for integer . The spin densities and gaps of the chain lead to different BI-SDWs for even and odd . The NLM or VBS provides useful guidance for quantitative modeling of BI-SDWs obtained by DMRG for HAFs with . ###### Acknowledgements. We thank S. Ramasesha and D. Huse for discussions and the NSF for partial support of this work through the Princeton MRSEC (DMR-0819860). MK thanks DST for a Ramanujan Fellowship SR/S2/RJN-69/2012 and DST for funding computation facility through SNB/MK/14-15/137. ## References • Bethe (1931) H. Bethe, Zeitschrift für Physik 71, 205 (1931). • Hulthén (1938) L. Hulthén, Ark. Mat. Astron. Fys. 26, 106 (1938). • Haldane (1982) F. D. M. Haldane, Phys. Rev. B 25, 4925 (1982). • White (1992) S. R. White, Phys. Rev. Lett. 69, 2863 (1992). • White (1993) S. R. White, Phys. Rev. B 48, 10345 (1993). • Affleck et al. (1988) I. Affleck, T. Kennedy, E. H. Lieb,  and H. Tasaki, Communications in Mathematical Physics 115, 477 (1988). • White and Huse (1993) S. R. White and D. A. Huse, Phys. Rev. B 48, 3844 (1993). • Schollwöck et al. (1996) U. Schollwöck, O. Golinelli,  and T. Jolicœur, Phys. Rev. B 54, 4038 (1996). • Qin et al. (1995) S. Qin, T.-K. Ng,  and Z.-B. Su, Phys. Rev. B 52, 12844 (1995). • Fáth et al. (2006) G. Fáth, O. Legeza, P. Lajkó,  and F. Iglói, Phys. Rev. B 73, 214447 (2006). • Machens et al. (2013) A. Machens, N. P. Konstantinidis, O. Waldmann, I. Schneider,  and S. Eggert, Phys. Rev. B 87, 144409 (2013). • Faddeev and Takhtajan (1981) L. Faddeev and L. Takhtajan, Physics Letters A 85, 375 (1981). • Affleck (1986a) I. Affleck, Phys. Rev. Lett. 56, 746 (1986a). • Affleck (1986b) I. Affleck, Phys. Rev. Lett. 56, 2763 (1986b). • Schulz (1986) H. J. Schulz, Phys. Rev. B 34, 6372 (1986). • Affleck et al. (1989) I. Affleck, D. Gepner, H. J. Schulz,  and T. Ziman, Journal of Physics A: Mathematical and General 22, 511 (1989). • Ng (1994) T.-K. Ng, Phys. Rev. B 50, 555 (1994). • Schollwöck (2005) U. Schollwöck, Rev. Mod. Phys. 77, 259 (2005). • Hallberg (2006) K. A. Hallberg, Advances in Physics 55, 477 (2006). • White (2005) S. R. White, Phys. Rev. B 72, 180403 (2005). • Kumar et al. (2016) M. Kumar, A. Parvej, S. Thomas, S. Ramasesha,  and Z. G. Soos, Phys. Rev. B 93, 075107 (2016). • Qin et al. (1997) S. Qin, Y.-L. Liu,  and L. Yu, Phys. Rev. B 55, 2721 (1997). • Nakano and Terai (2009) H. 
Nakano and A. Terai, Journal of the Physical Society of Japan 78, 014003 (2009). • Sørensen and Affleck (1994) E. S. Sørensen and I. Affleck, Phys. Rev. B 49, 15771 (1994). • Lou et al. (2002) J. Lou, S. Qin, T.-K. Ng, and Z. Su, Phys. Rev. B 65, 104401 (2002). • Soos and Ramasesha (1983) Z. G. Soos and S. Ramasesha, Phys. Rev. Lett. 51, 2374 (1983). • Nomura (1989) K. Nomura, Phys. Rev. B 40, 2421 (1989). • Sandvik (2010) A. W. Sandvik, AIP Conference Proceedings 1297, 135 (2010). • Hallberg et al. (1996) K. Hallberg, X. Q. G. Wang, P. Horsch, and A. Moreo, Phys. Rev. Lett. 76, 4955 (1996). • Totsuka and Suzuki (1995) K. Totsuka and M. Suzuki, Journal of Physics: Condensed Matter 7, 1639 (1995).
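As a quick numerical illustration of the two-parameter BI-SDW profile reconstructed in Eq. (7) above, here is a minimal Python sketch. It is not the authors' code: the amplitude A = 1 is an arbitrary placeholder, while xi = 6.03 is the S = 1 correlation length quoted in the text.

```python
import numpy as np

def sdw_profile(N, xi=6.03, A=1.0):
    """Spin densities of Eq. (7): two boundary-induced SDWs with
    correlation length xi. The envelope is cosh for odd N (SDWs in
    phase) and sinh for even N (out of phase)."""
    r = np.arange(1, N + 1)
    x = (N + 1 - 2 * r) / (2 * xi)
    envelope = np.cosh(x) if N % 2 else np.sinh(x)
    return 2 * A * (-1.0) ** (r - 1) * np.exp(-(N + 1) / (2 * xi)) * envelope

rho = sdw_profile(65)                 # odd N: in-phase SDWs, cosh envelope
print(rho[:4], rho[len(rho) // 2])    # staggered, largest at the edges, tiny in the middle
```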
2020-07-08 09:54:36
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7266787886619568, "perplexity": 1044.08787736558}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593655896932.38/warc/CC-MAIN-20200708093606-20200708123606-00269.warc.gz"}
https://loadsloadsxwqf.web.app/guziec73427pyz/514885.html
This extension for Jupyter notebook enables the use of some LaTeX commands Environments title/numbering can be customized by users in user_envs.json config file. to convert FILE.ipynb into html/latex while keeping all the features of the and a pdf resulting from conversion to LaTeX is available as documentation. This template removes markdown cells from the output, and also changes how the Using this template, we see that the resulting Python code does not contain share/jupyter. nbconvert. templates. html. latex. The HTML and LaTeX/PDF Main page. header. body. any_cell. codecell. input_group. in_prompt. input. 13 May 2019 LaTeX templates for jupyter notebook conversion to pdf. Project description; Project details; Release history; Download files \maketitle is removed (If you want a title then add a markdown cell to the top of your notebook). (This change was merged into nbconvert 5.5.0); In/Out counts will move to the 9 Oct 2018 If you would like to use something else, feel free to go download your favorite Notebook. The typical command you use to export using nbconvert is as follows: jupyter nbconvert Decorators.ipynb --to pdf If you convert a Notebook to reStructuredText or latex, then nbconvert will use pandoc underneath ## Text, LaTeX, PDF, and slide shows, via the nbconvert command. The markdown heading will be converted to a A notebook may be downloaded as a .ipynb file or converted to a number of other formats using the menu option When a cell is in edit mode, the Cell Mode Indicator will change to reflect the cell's state. Jupyter - Quick Guide - Project Jupyter is a suite of software products used in from Anaconda's download page www.anaconda.com/download/ Binaries for Embedded IPython shell doesn't change the state of earlier code or objects. The notebook is saved as ipynb file and can be exported as html, pdf and LaTex files. If you want to use a multipage pdf file using LaTeX, you need to use from 3)) plt.plot(range(7), [3, 1, 4, 1, 5, 9, 2], 'r-o') plt.title('Page One') pdf.savefig() # saves the current figure into a pdf page plt.close() # if LaTeX is not installed or error caught, change to usetex=False Download Jupyter notebook: multipage_pdf.ipynb. Here's how to format Markdown cells in Jupyter notebooks: Headings: Use the number sign (#) followed by a blank space for notebook titles and section headings: For the text inside the parentheses, replace any spaces and special characters 6 Mar 2018 In this tutorial we introduce the web-based application Jupyter Easy to write mathematical notation with markdown cells using LaTeX. As well as changing cell types. The notebook name can be altered directly by clicking on the title. If you wish to download it as a .pdf, LaTeX and pandoc is required. 18 Oct 2015 How to: get nice vector graphics in your exported PDF ipython notebooks tricks & notes – Part 1, but I thought I'd give it a more self-explanatory title) some Markdown and LaTeX cells along with the code and output graphs). Widgets such as sliders and text inputs have a description attribute that can render Latex Equations. The Label widget also renders Latex equations. Check Host header to more securely protect localhost deployments from DNS rebinding. This is a pre-emptive measure, not fixing a known vulnerability.
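Several of the snippets above concern exporting notebooks to PDF through LaTeX. For reference, a minimal sketch of the same export using nbconvert's Python API instead of the command line — the notebook filename is taken from the CLI snippet above, and a working LaTeX toolchain is assumed:

```python
import nbformat
from nbconvert import PDFExporter  # applies the LaTeX template, then runs the LaTeX toolchain

nb = nbformat.read("Decorators.ipynb", as_version=4)
pdf_bytes, resources = PDFExporter().from_notebook_node(nb)

with open("Decorators.pdf", "wb") as f:
    f.write(pdf_bytes)
```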
2021-10-21 21:35:57
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.24571315944194794, "perplexity": 5009.94087312884}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585441.99/warc/CC-MAIN-20211021195527-20211021225527-00350.warc.gz"}
http://clay6.com/qa/41496/charge-is-distributed-along-the-x-axis-from-x-0-to-x-l-50-0-cm-in-such-a-wa
# Charge is distributed along the x-axis from $x = 0$ to $x = L = 50.0\;\mathrm{cm}$ in such a way that its linear charge density is given by $\lambda = ax^2$, where $a = 18.0\;\mu\mathrm{C\,m^{-3}}$. Calculate the total charge in the region $0 \leq x \leq L$.

Total charge in the region $= 0.75\;\mu\mathrm{C}$
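The stated answer follows from integrating the charge density over the region; a short worked check:

$$Q = \int_0^L \lambda\,dx = \int_0^L a x^2\,dx = \frac{aL^3}{3} = \frac{(18.0\times 10^{-6}\ \mathrm{C\,m^{-3}})(0.500\ \mathrm{m})^3}{3} = 0.75\ \mu\mathrm{C}.$$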
2017-01-22 20:33:42
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9740527272224426, "perplexity": 126.53351648251574}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281574.78/warc/CC-MAIN-20170116095121-00024-ip-10-171-10-70.ec2.internal.warc.gz"}
https://search.r-project.org/CRAN/refmans/clusTransition/html/OverLap-class.html
OverLap-class {clusTransition} R Documentation

## Overlap between clusters

### Description

Contains a matrix of similarity indices between clusters, obtained after clustering dynamic datasets at consecutive time points.

### Slots

Overlap A numeric matrix containing the similarity index between clusters extracted at time points t_1 and t_2. The rows of the matrix represent clusters extracted from the first clustering \xi_1 (time point t_1), whereas the columns represent clusters extracted from the second clustering \xi_2 (time point t_2).

rx A numeric vector containing the radius of each cluster from the first clustering \xi_1.

ry A numeric vector containing the radius of each cluster from the second clustering \xi_2.

Centersx A numeric vector containing the centers of clusters from the first clustering \xi_1.

Centersy A numeric vector containing the centers of clusters from the second clustering \xi_2.

avgDisx A numeric vector containing the average distance of points in a cluster from its center in the first clustering \xi_1.

avgDisy A numeric vector containing the average distance of points in a cluster from its center in the second clustering \xi_2.

clusterMem A vector of integers containing cluster membership from the second clustering \xi_2.

[Package clusTransition version 1.0 Index]
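The documentation above does not state which similarity index the package uses, so purely as an illustration of the data the Overlap slot holds, here is a Python sketch (not part of the R package) that builds such a matrix with the Jaccard coefficient as a stand-in index; the function name overlap_matrix is hypothetical:

```python
import numpy as np

def overlap_matrix(mem_t1, mem_t2):
    """Rows: clusters of the first clustering xi_1 (time t_1);
    columns: clusters of the second clustering xi_2 (time t_2).
    Entry (i, j) is the Jaccard overlap |A & B| / |A | B| of the
    point sets assigned to cluster i at t_1 and cluster j at t_2."""
    c1, c2 = np.unique(mem_t1), np.unique(mem_t2)
    M = np.zeros((c1.size, c2.size))
    for i, a in enumerate(c1):
        A = set(np.flatnonzero(mem_t1 == a))
        for j, b in enumerate(c2):
            B = set(np.flatnonzero(mem_t2 == b))
            M[i, j] = len(A & B) / len(A | B)
    return M

# The same 8 points clustered at two consecutive time points:
print(overlap_matrix(np.array([0, 0, 0, 1, 1, 1, 2, 2]),
                     np.array([0, 0, 1, 1, 1, 1, 2, 2])))
```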
2022-12-07 03:04:56
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.20777486264705658, "perplexity": 9450.155858837335}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446711126.30/warc/CC-MAIN-20221207021130-20221207051130-00317.warc.gz"}
https://tex.stackexchange.com/questions/355129/how-to-display-two-minipages-containing-unequal-length-listings-side-by-side
# How to display two minipages containing unequal length listings side by side?

I have two listings which are my Makefile sources. Some lines in these Makefiles are long. I need to fit them side by side in a particular location in my report. I tried to do something like this but I was not successful.

\documentclass[12pt, a4paper, titlepage]{scrartcl}
\usepackage{lmodern}
\usepackage[utf8]{inputenc}
\usepackage[english]{babel}
\usepackage{graphicx}
\graphicspath{{./Figures/}}
\usepackage{url}
%\usepackage{titlesec}
%\newcommand{\sectionbreak}{\clearpage}
\usepackage{dirtree}
\usepackage{color}
\usepackage{listings}
\usepackage{caption}
\usepackage{hyperref}
\hypersetup{ citecolor=black, filecolor=black, urlcolor=black }
\newcommand{\courierword}[1]{\textsf{\itshape #1}}{\fontfamily{pcr}\selectfont}%
\setlength{\parindent}{0.0cm}
\setlength{\parskip}{1ex}
\setkomafont{sectioning}{\normalcolor\bfseries}

The actual content is as below

\begin{figure}[!ht]
\begin{minipage}{.5\textwidth}
\captionof{figure}{Makefile for 'aocs' Collection}
\lstset{language=make, breaklines=true, }
\begin{lstlisting}[frame=single][t]
This section contains my Makefile 2
Some lines are really toooooooooooooooooooooooooooooooooo long
\end{lstlisting}
\end{minipage}%
\begin{minipage}{.5\textwidth}
\captionof{figure}{Makefile for 'aocsApFw' Constituent}
\lstset{language=make, breaklines=true, }
\begin{lstlisting}[frame=single][t]
This section contains my Makefile 2
Some lines are really toooooooooooooooooooooooooooooooooo long
\end{lstlisting}
\end{minipage}
\end{figure}

• Would a landscape page be an option? Or setting them on a left/right page pair? The best font I found for long lines in listings was Latin Modern TT Condensed; that might help a lot. – Chris H Feb 22 '17 at 12:08

1. You are using the listings option frame. Because of this, the listing itself gets wider; the framed listing has the width: framerule+framesep+0.5\linewidth+framesep+framerule. To take care of this you have to add the following options: xleftmargin=3.4pt,xrightmargin=3.4pt,
2. With the option breaklines=true listings can break too-long lines. However, the breaks occur only at characters declared as other. The following table shows the default definition (see the documentation).
3. The environment minipage has an optional argument to adjust the vertical position. In your case you can use t. Please note that the caption needs the same number of lines.
Here is the modification of your code with the result

\documentclass[12pt, a4paper, titlepage]{scrartcl}
\usepackage{lmodern}
\usepackage[utf8]{inputenc}
\usepackage[english]{babel}
\usepackage{graphicx}
\graphicspath{{./Figures/}}
\usepackage{url}
%\usepackage{titlesec}
%\newcommand{\sectionbreak}{\clearpage}
\usepackage{dirtree}
\usepackage{color}
\usepackage{listings}
\usepackage{caption}
\usepackage{hyperref}
\hypersetup{ citecolor=black, filecolor=black, urlcolor=black }
\newcommand{\courierword}[1]{\textsf{\itshape #1}}{\fontfamily{pcr}\selectfont}%
\setlength{\parindent}{0.0cm}
\setlength{\parskip}{1ex}
\setkomafont{sectioning}{\normalcolor\bfseries}
\begin{document}
\begin{figure}[!ht]
\begin{minipage}[t]{.5\textwidth}
\caption{Makefile for 'aocs' Collection}\par\strut
\lstset{language=make,breakatwhitespace=false,xleftmargin=3.4pt,xrightmargin=3.4pt,
breaklines=true, }
\begin{lstlisting}[frame=single][t]
This section contains my Makefile 2
Some lines are really tooooooooooooooo
ooooooooooooooooooo long
\end{lstlisting}
\end{minipage}%
\begin{minipage}[t]{.5\textwidth}
\caption{Makefile for 'aocsApFw' Constituent}
\lstset{language=make,xrightmargin=3.4pt,xleftmargin=3.4pt,
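For reference, the 3.4 pt used above presumably comes from the listings frame defaults, framerule = 0.4 pt and framesep = 3 pt (an assumption based on the package defaults; the answer does not state it explicitly):
$$\text{xleftmargin}=\text{framerule}+\text{framesep}=0.4\,\text{pt}+3\,\text{pt}=3.4\,\text{pt}$$
so the margins remove exactly the extra width the frame adds on each side, and the framed listing again fits the half-width minipage.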
2021-04-20 17:35:20
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8982213139533997, "perplexity": 5341.306692085313}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618039476006.77/warc/CC-MAIN-20210420152755-20210420182755-00222.warc.gz"}
https://math.stackexchange.com/questions/1485327/show-that-fx-y-is-differentiable-at-0-0
# Show that $f(x,y)$ is differentiable at $(0,0)$

Show that $f(x,y)$ defined by: $$f(x,y) = \begin{cases}\dfrac{x^2y^2}{\sqrt{x^2+y^2}}&\text{ if }(x,y)\not =(0,0)\\0 &\text{ if }(x,y)=(0,0)\end{cases}$$ is differentiable at $(x,y) = (0,0)$.

I tried to solve this problem by applying the theorem that if the partial derivatives are continuous then the function is differentiable. Therefore, I calculated the partial derivatives, but now I am stuck in showing that they are indeed continuous. Help me!

• Hint: $\sqrt{x^2+y^2} \ge \max\{|x|, |y|\}$ and $|xy| =\min\{|x|, |y|\} \max\{|x|, |y|\}$ – user251257 Oct 18 '15 at 3:46
• The title and the problem do not match. Your title says "show that $f$ is differentiable at $(0,0)$", however you haven't specified the value of $f(0,0)$. In fact, if $f(0,0) \ne 0$, then $f$ is not even continuous at $(0,0)$ and cannot be differentiable at $(0,0)$. – Mercy King Oct 18 '15 at 4:12
• If $f(0,0)=0$, then $df(0)\equiv0$ because $|f(h)|\le \|h\|_2^3$ for all $h \in \mathbb{R}^2$ – Mercy King Oct 18 '15 at 4:20
• @MercyKing I edited the problem. Sorry for the confusion! – Guten Tag Oct 18 '15 at 4:48
• @user251257 I didn't get your hint. Can you explain it further please? – Guten Tag Oct 18 '15 at 4:51

You have $$x^2+y^2-2 \vert xy \vert=(\vert x \vert - \vert y \vert)^2 \ge 0$$ Hence $$\vert xy \vert \le \frac{x^2+y^2}{2}$$ and $$0 \le \frac{\vert f(x,y) \vert}{\sqrt{x^2+y^2}} = \frac{x^2y^2}{x^2+y^2} \le \frac{1}{4}(x^2+y^2)$$ As $\lim_{(x,y) \to (0,0)} x^2+y^2 = 0$, this proves that $f$ is differentiable at $(0,0)$ and that its Fréchet derivative is equal to $0$. This means that $f_x(0,0)=f_y(0,0)=0$.

For every non-zero $h=(h_1,h_2)\in \mathbb{R}^2$ we have: $$|f(h)-f(0)|=|f(h)|=\frac{h_1^2h_2^2}{\|h\|_2}\le \frac{\|h\|_2^4}{\|h\|_2}=\|h\|_2^3,$$ and therefore $$\lim_{\|h\|_2\to0}\frac{|f(h)-f(0)|}{\|h\|_2}=0.$$ This shows that $f$ is differentiable at $(0,0)$, and $Df(0)\equiv 0$.
2019-12-06 06:16:50
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9522771835327148, "perplexity": 177.26662968952212}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575540484815.34/warc/CC-MAIN-20191206050236-20191206074236-00258.warc.gz"}
http://hal.in2p3.fr/view_by_stamp.php?label=CENBG&langue=fr&action_todo=view&id=in2p3-00139046&version=1
HAL : in2p3-00139046, version 1 arXiv : nucl-th/0703040 Physical Review C 75 (2007) 048201

In-medium omega meson mass and quark condensate in a Nambu Jona-Lasinio model constrained by recent experimental data (2007)

We have determined the relation between the in-medium $\omega$ meson mass and the quark condensate in the framework of a Nambu Jona-Lasinio model constrained by recent experimental data on meson properties in nuclei. In addition to the usual four-quark interactions, we have included eight-quark terms in the Lagrangian. The parameters of this model have been determined using the meson properties in the vacuum as well as in the medium. More particularly, we have constrained both the in-medium pion decay constant to the value measured in experiments on deeply bound pionic atoms and the in-medium $\omega$ meson mass to the experimental value obtained either by the TAPS collaboration or by the E325 experiment at KEK. Our results are compared to several scaling laws, in particular to that of Brown and Rho.

Subject(s): Physics / Theoretical Nuclear Physics
Link to the full text: http://fr.arXiv.org/abs/nucl-th/0703040
in2p3-00139046, version 1 http://hal.in2p3.fr/in2p3-00139046 oai:hal.in2p3.fr:in2p3-00139046
Contributor: Pascale Chambon
Submitted on: Thursday, 29 March 2007, 09:18:04
Last modified on: Thursday, 3 July 2008, 12:17:55
2014-07-23 16:04:37
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.814041793346405, "perplexity": 1950.6099419146155}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1405997880800.37/warc/CC-MAIN-20140722025800-00227-ip-10-33-131-23.ec2.internal.warc.gz"}
https://www.physicsforums.com/threads/final-sum.236178/
# Final sum

Homework Statement

Hi. Find the final (closed-form) sum: $$a)\ \cos^2x+\cos^2 2x+\dots+\cos^2 nx; \qquad b)\ \sin^2x+\sin^2 2x+\dots+\sin^2 nx$$ z=A+Bi

The Attempt at a Solution

$$A=\cos^2x+\cos^2 2x+\dots+\cos^2 nx$$ $$B=\sin^2x+\sin^2 2x+\dots+\sin^2 nx$$ $$z=A+iB=(\cos^2x+i\sin^2x)+(\cos^2 2x+i\sin^2 2x)+\dots+(\cos^2 nx+i\sin^2 nx)=(\cos^2x+i\sin^2x)+(\cos^2x+i\sin^2x)^2+\dots+(\cos^2x+i\sin^2x)^n$$ How will I continue? Last edited:

Borek, Mentor: Not sure what you are trying to do. First, you defined two sums, a and b. Is your task to find the sum of those two sums? Is a+b what you are looking for? If so, why do you use i? Do you know what the Pythagorean trigonometric identity is?

I don't know if they are separate or together... I should use "i", since that's my task, to find it through i. Any help?

Borek, Mentor: You won't get more help without giving more information. At present, at least IMHO, the question as posted, and the idea of using i for the calculations, doesn't make sense.

Dick, Homework Helper: The real part of exp(i*n*x)^2 is cos(n*x)^2-sin(n*x)^2. So summing the geometric series exp(i*2*n*x) and taking the real part will give you the difference A-B. What's the sum A+B? That's one way to do it.

$$\frac{1}{2}(1+\cos 2x+i(1-\cos 2x))+\frac{1}{2}(1+\cos 4x+i(1-\cos 4x))+\dots+\frac{1}{2}(1+\cos 2nx+i(1-\cos 2nx))$$ How will I get rid of the "1"s? How will I go on? Dick, someone?

Dick, Homework Helper: Why did you ignore my last message? It had some reasonable suggestions in it.

I have never learned about it. Is mine correct? How will I continue solving?

Dick, Homework Helper: What you wrote is a version of A+Bi. Do you know how to sum the series cos(2n)+cos(4n)+cos(6n)+...?

Actually, I have never learned. So I don't know.

Dick, Homework Helper: Can you sum a geometric series?

I don't know how.

Dick, Homework Helper: If you haven't been given the sum of the finite series sin(nx) and cos(nx), or been taught how to derive them by summing geometric series like exp(i*n*x), then I don't know how you are supposed to do this problem.

I know something... $$A=\cos x+\cos 2x+\dots+\cos nx$$ $$B=\sin x+\sin 2x+\dots+\sin nx$$ $$A=Re(z);\ B=Im(z)$$ $$z=A+Bi$$ $$z=A+iB=(\cos x+\cos 2x+\dots+\cos nx)+i(\sin x+\sin 2x+\dots+\sin nx)=(\cos x+i\sin x)+(\cos 2x+i\sin 2x)+\dots+(\cos nx+i\sin nx)=(\cos x+i\sin x)+(\cos x+i\sin x)^2+\dots+(\cos x+i\sin x)^n$$ I don't know where to go from here.

Dick, Homework Helper: That's a geometric series with a common ratio of exp(ix). You should know how to write down its sum. http://en.wikipedia.org/wiki/Geometric_progression Once you do that, the real part of the sum is A and the imaginary part is B.

For (cosx+isinx) it is simpler, but I still have no clue how to continue.

Dick, Homework Helper: I just TOLD you. Review geometric series.

Yes, yes, but I don't understand this step.
Now I have $$(\cos x+i\sin x)\left(1+(\cos x+i\sin x)+\dots+(\cos x+i\sin x)^{n-1}\right)$$ I understand this. But I don't understand where the next step comes from?

Dick, Homework Helper: Why did you do that?? Look, look up the formula for the sum of a geometric series and apply it to this problem. Until you do that, there's not much more I can say.

But, please. I want to know where the formula comes from.

Dick, Homework Helper: Call the sum S=exp(ix)+exp(ix)^2+...+exp(ix)^n. Calculate S-exp(ix)*S. All but two terms cancel. Now solve for S.

Ok, in my problem: $$\frac{1-(\cos^2x+i\sin^2x)^{n+1}}{1-\cos^2x-i\sin^2x}$$ But where will I go from here?

Dick, Homework Helper: No. That's wrong. If you want the sum of S=exp(i*2x)+exp(i*4x)+...+exp(i*n2x), it's (exp(i*2x)-exp(i*2(n+1)x))/(1-exp(i*2x)). If you manage to take the real part of S, what would it be? What would that have to do with your problem?

I was talking about the problem in my first post of this thread. I know that for cosx+isinx we can use De Moivre's formula.

Dick, Homework Helper: I was talking about what you just posted. It's wrong. exp(i*2*x) is not equal to cos^2(x)+i*sin^2(x).

Ok. Using $$\frac{(\cos^2x+i\sin^2x)(1-(\cos^2x+i\sin^2x)^n)}{1-\cos^2x-i\sin^2x}$$ But how will I take the real part out?

Dick, Homework Helper: "It's wrong. exp(i*2*x) is not equal to cos^2(x)+i*sin^2(x)." You really aren't paying much attention to me, are you? To get the real part out you generally multiply the denominator by its complex conjugate. But I wouldn't waste your time doing it on what you just posted. Because it's wrong.

$$S=d+d^2+\dots+d^n$$ $$Sd=d^2+d^3+\dots+d^{n+1}$$ $$S-Sd=S(1-d)=d-d^{n+1}$$ $$S=\frac{d-d^{n+1}}{1-d}$$ If we substitute $d=\cos^2x+i\sin^2x$: $$S=\frac{(\cos^2x+i\sin^2x)-(\cos^2x+i\sin^2x)^{n+1}}{1-\cos^2x-i\sin^2x}$$ Why do you say it is not correct?
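For reference, the closed form the thread is circling around (a standard-identity sketch, valid for $\sin x\neq 0$; it is not stated in the thread itself): since $\cos^2 kx=\tfrac12(1+\cos 2kx)$ and $\sin^2 kx=\tfrac12(1-\cos 2kx)$, we get $A+B=n$ and
$$A-B=\sum_{k=1}^{n}\cos 2kx=\mathrm{Re}\sum_{k=1}^{n}e^{2ikx}=\frac{\sin(nx)\cos\big((n+1)x\big)}{\sin x},$$
so that
$$A=\frac{n}{2}+\frac{\sin(nx)\cos\big((n+1)x\big)}{2\sin x},\qquad B=\frac{n}{2}-\frac{\sin(nx)\cos\big((n+1)x\big)}{2\sin x}.$$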
2022-05-17 17:30:27
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6975103616714478, "perplexity": 817.2473398577096}, "config": {"markdown_headings": false, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662519037.11/warc/CC-MAIN-20220517162558-20220517192558-00456.warc.gz"}
https://jcristharif.com/gsoc-week-9.html
# GSoC Week 9: Docs! Posted on July 18, 2014 This week I spent time on all sorts of little things: • Finished up the refactoring of KanesMethod • Little fixes to my current PR. Just waiting on my mentors to review this, I want to get it merged soon-ish. • Documentation. Writing documentation is the worst1. After taking time to implement all sorts of new interesting things, the last thing I want to do is go back and write about them in detail. Which is why it's so important to do early on. Good documentation needs to accomplish three things: 1. Provide motivation for why your software is necessary/better/useful. 2. Describe the user interface, showing how to use each function or class. 3. Provide real world examples showing how to tie everything together. Python's documentation is interesting in that there are varying ways to do it. Some of Sympy's documentation is just a nicely rendered form of the docstrings for all the methods. Other modules have a more prose-y explanation of their functionality. mechanics is one of those modules. In my opinion the prose documentation approach is the better way. Having good docstrings is important, but they aren't the end-all of documentation2. Of course, if I have a question the first thing I'm going to do is read the docstrings (IPython makes this trivially easy). Only if I still have questions afterwards will I turn to the online documentation. However, it'd be extremely off-putting if the online documentation was just the docstrings again. With the various changes I've made so far I needed to: 1. Update the LagrangesMethod documentation to reflect the interface change. 2. Create a documentation page all about the linearization methods. 3. Update all the examples to reflect the new functionality. All of these are "done". I still need to go through and proofread, but overall I'd say that the current state of the documentation is acceptable. I would like to take some time to reorganize the layout of the whole mechanics documentation at some point. The current layout isn't the easiest to navigate for what you're looking for. With this out of the way, the linearization portion of my project is tentatively done. I say tentatively because I'm still waiting on my PRs to get merged, and am also still playing around with solving the nan issue that I've been writing about these last couple weeks. With that done, I hope to move on to code generation. I've read the current code generation code and documentation, as well as this pydy wiki page on Jason's ideas about code generation. I'm still a little iffy about the intention of this functionality, so I'm waiting until we can all meet to discuss what needs to be done. That was supposed to have happened this week, but fell through. Hopefully we can set some time aside next week, and I can finally get to work on it. 1. Not actually the worst.
2020-07-10 15:19:44
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2792297899723053, "perplexity": 1030.4016075064287}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593655911092.63/warc/CC-MAIN-20200710144305-20200710174305-00139.warc.gz"}
http://www.zentralblatt-math.org/zmath/en/advanced/?q=an:1134.35333
Zbl 1134.35333
Fan, Xianling; Han, Xiaoyou
Existence and multiplicity of solutions for $p(x)$-Laplacian equations in $\Bbb R^N$. (English) [J] Nonlinear Anal., Theory Methods Appl. 59, No. 1-2, A, 173-188 (2004). ISSN 0362-546X
Summary: This paper investigates the existence and multiplicity of solutions for $p(x)$-Laplacian equations $-\mathrm{div}(|\nabla u|^{p(x)-2}\nabla u) + |u|^{p(x)-2}u = f(x,u)$ in $\Bbb R^N$, $u\in W^{1,p(x)}(\Bbb R^N)$, in the cases corresponding to "sublinear", "superlinear" and "concave-convex nonlinearity" if $p=2$. They apply critical point theory in certain Sobolev spaces fitted to the problem.
MSC 2000: *35J60 Nonlinear elliptic equations; 35D05 Existence of generalized solutions of PDE; 47J30 Variational methods; 58E05 Abstract critical point theory
Keywords: $p(x)$-Laplacian; Generalized Sobolev space; Critical point; Genus theory
Cited in: Zbl 1142.35018
2013-05-26 06:35:59
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4482244551181793, "perplexity": 9782.155682457105}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368706635063/warc/CC-MAIN-20130516121715-00021-ip-10-60-113-184.ec2.internal.warc.gz"}
https://www.oyohyee.com/post/HDU/1240/
Problem

Description

You're in space. You want to get home. There are asteroids. You don't want to hit them.

Input

Input to this problem will consist of a (non-empty) series of up to 100 data sets. Each data set will be formatted according to the following description, and there will be no blank lines separating data sets. A single data set has 5 components: Start line - A single line, "START N", where 1 <= N <= 10. Slice list - A series of N slices. Each slice is an N x N matrix representing a horizontal slice through the asteroid field. Each position in the matrix will be one of two values: 'O' - (the letter "oh") Empty space 'X' - (upper-case) Asteroid present Starting Position - A single line, "A B C", denoting the <A,B,C> coordinates of your craft's starting position. The coordinate values will be integers separated by individual spaces. Target Position - A single line, "D E F", denoting the <D,E,F> coordinates of your target's position. The coordinate values will be integers separated by individual spaces. End line - A single line, "END"

The origin of the coordinate system is <0,0,0>. Therefore, each component of each coordinate vector will be an integer between 0 and N-1, inclusive. The first coordinate in a set indicates the column. Left column = 0. The second coordinate in a set indicates the row. Top row = 0. The third coordinate in a set indicates the slice. First slice = 0. Both the Starting Position and the Target Position will be in empty space.

Output

For each data set, there will be exactly one output set, and there will be no blank lines separating output sets. A single output set consists of a single line. If a route exists, the line will be in the format "X Y", where X is the same as N from the corresponding input data set and Y is the least number of moves necessary to get your ship from the starting position to the target position. If there is no route from the starting position to the target position, the line will be "NO ROUTE" instead. A move can only be in one of the six basic directions: up, down, left, right, forward, back. Phrased more precisely, a move will either increment or decrement a single component of your current position vector by 1.

START 1 O 0 0 0 0 0 0 END START 3 XXX XXX XXX OOO OOO OOO XXX XXX XXX 0 0 1 2 2 1 END START 5 OOOOO OOOOO OOOOO OOOOO OOOOO OOOOO OOOOO OOOOO OOOOO OOOOO XXXXX XXXXX XXXXX XXXXX XXXXX OOOOO OOOOO OOOOO OOOOO OOOOO OOOOO OOOOO OOOOO OOOOO OOOOO 0 0 0 4 4 4 END

1 0
3 4
NO ROUTE

Code

/*
By:OhYee
Github:OhYee
Email:oyohyee@oyohyee.com
Blog:http://www.cnblogs.com/ohyee/
Smart and cute? Elichika!
*/
#include <cstdio>
#include <algorithm>
#include <cstring>
#include <cmath>
#include <string>
#include <iostream>
#include <vector>
#include <list>
#include <queue>
#include <stack>
#include <map>
using namespace std;

//DEBUG MODE
#define debug 0

// loop helper
#define REP(n) for(int o=0;o<n;o++)

const int maxn = 11;
int n;
int dis[maxn][maxn][maxn];
char Map[maxn][maxn][maxn];
const int delta[6][3] = {{1,0,0},{-1,0,0},{0,1,0},{0,-1,0},{0,0,1},{0,0,-1}};

struct point {
    int x,y,z;
    point() { x = y = z = -1; }
    point(int a,int b,int c) { x = a; y = b; z = c; }
    bool operator == (const point &rhs) const {
        return ((x == rhs.x) && (y == rhs.y) && (z == rhs.z));
    }
};

// Reads the next integer from stdin, skipping non-digit characters.
// (The opening line of this helper was lost in extraction; the signature
// is restored here so the code compiles.)
int Read() {
    char c;
    int ans = 0;
    while (c = getchar(), !(c >= '0' && c <= '9'));
    while (c >= '0' && c <= '9') {
        ans *= 10;
        ans += (int)c - '0';
        c = getchar();
    }
    return ans;
}

// Breadth-first search: returns the least number of moves from s to v, or -1.
int BFS(point s,point v) {
    if (s == v) return 0;
    memset(dis,-1,sizeof(dis));
    queue<point> Q;
    Q.push(s);
    dis[s.x][s.y][s.z] = 0;
    while (!Q.empty()) {
        int x = Q.front().x;
        int y = Q.front().y;
        int z = Q.front().z;
        Q.pop();
        REP(6) {
            int xx = x + delta[o][0];
            int yy = y + delta[o][1];
            int zz = z + delta[o][2];
            // out of bounds
            if (xx < 0 || xx >= n || yy < 0 || yy >= n || zz < 0 || zz >= n) continue;
            // asteroid (wall)
            if (Map[xx][yy][zz] == 'X') continue;
            // not visited yet
            if (dis[xx][yy][zz] == -1) {
                dis[xx][yy][zz] = dis[x][y][z] + 1;
                // reached the target
                if (point(xx,yy,zz) == v) return dis[xx][yy][zz];
                Q.push(point(xx,yy,zz));
            }
        }
    }
    return -1;
}

bool Do() {
    char c;
    if (scanf("\n%c",&c) == EOF) return false;
    n = Read();  // restored: consume "START N" via the digit reader
    //printf(" (%d) \n",n);
    for (int k = 0;k < n;k++)      // slice
        for (int i = 0;i < n;i++)  // row
            scanf("%s",Map[k][i]);
    int s1,s2,s3,v1,v2,v3;
    s1 = Read(); s2 = Read(); s3 = Read();  // restored; scanf versions kept below
    v1 = Read(); v2 = Read(); v3 = Read();
    //scanf("%d%d%d",&s1,&s2,&s3);
    //scanf("%d%d%d",&v1,&v2,&v3);
    point s = point(s3,s1,s2);
    point v = point(v3,v1,v2);
    int ans = BFS(s,v);
    if (ans == -1)
        printf("NO ROUTE\n");
    else
        printf("%d %d\n",n,ans);
    scanf("%*s");  // consume "END"
    return true;
}

int main() {
    while (Do());
    return 0;
}
2019-04-19 06:53:05
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.23969802260398865, "perplexity": 5540.187467992622}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-18/segments/1555578527148.46/warc/CC-MAIN-20190419061412-20190419083412-00058.warc.gz"}
https://research.ipmu.jp/seminar/?seminar_id=2222
# APEC Seminar (Astronomy - Particle Physics - Experimental Physics - Cosmology) Speaker: Ke-Pan Xie (Seoul National University) Broad composite resonance and its signals at the LHC Fri, Mar 01, 2019, 13:30 - 14:30 Seminar Room A 2222.pdf The existence of the $SU(2)_L$ triplet composite spin-1 resonances $\rho^{\pm,0}$ is a universal prediction of the strongly interacting new physics models addressing the naturalness problem. Although plausible, such resonances have not yet been found in the di-boson final states (i.e. $W^\pm Z/W^\pm h$, $W^+W^-/Zh$), which are expected to be the dominant decay channels of the $\rho$s. In this work we propose a new scenario in which the $\rho$-resonances are broad and decay mainly to the third-generation quarks, the left-handed quark doublet $q_L = (t_L, b_L)$ being fully composite. In this case, the $t\bar{t}$ resonance search channel is comparable in sensitivity to the di-lepton channel. We also discuss how to use deep learning to improve the efficiency of the search for such a broad resonance.
2022-11-27 09:10:03
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8290095329284668, "perplexity": 1032.6265101944427}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710218.49/warc/CC-MAIN-20221127073607-20221127103607-00272.warc.gz"}
http://www.jstor.org/stable/2038913
# Wallman-Type Compactifications on 0-Dimensional Spaces
Li Pi Su
Proceedings of the American Mathematical Society Vol. 43, No. 2 (Apr., 1974), pp. 455-460
DOI: 10.2307/2038913
Stable URL: http://www.jstor.org/stable/2038913
Page Count: 6
## Abstract
Let E be Hausdorff 0-dimensional, D the discrete space {0, 1}, and N the discrete space of all nonnegative integers. Every E-completely regular space X has a clopen normal base F with $X\backslash F \in \mathscr{F}$ for each F ∈ F. The Wallman compactification ω(F) is D-compact. Moreover, if an E-completely regular space X has a countably productive clopen normal base F with $X\backslash F \in \mathscr{F}$ for each F ∈ F, then the Wallman space η(F) is N-compact. Hence, if X has such an F, and is an F-realcompact space, then X is N-compact.
2016-10-28 06:33:27
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4680853486061096, "perplexity": 5323.192907973445}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-44/segments/1476988721558.87/warc/CC-MAIN-20161020183841-00231-ip-10-171-6-4.ec2.internal.warc.gz"}
https://math.libretexts.org/Bookshelves/Differential_Equations/Book%3A_Elementary_Differential_Equations_with_Boundary_Values_Problems_(Trench)/10%3A_Linear_Systems_of_Differential_Equations/10.04%3A_Constant_Coefficient_Homogeneous_Systems_I/10.4.0E%3A_10.4E%3A_Constant_Coefficient_Homogeneous_Systems_I_(Exercises)
# 10.4E: Constant Coefficient Homogeneous Systems I (Exercises) are all defined on $$(-\infty,\infty)$$. 1. Use Theorem 10.2.1 to show that the only solution of (A) that can ever equal the zero vector is $${\bf y}\equiv{\bf0}$$. 2. Suppose $${\bf y}_1$$ is a solution of (A) and $${\bf y}_2$$ is defined by $${\bf y}_2(t)={\bf y}_1(t-\tau)$$, where $$\tau$$ is an arbitrary real number. Show that $${\bf y}_2$$ is also a solution of (A). 3. Suppose $${\bf y}_1$$ and $${\bf y}_2$$ are solutions of (A) and there are real numbers $$t_1$$ and $$t_2$$ such that $${\bf y}_1(t_1)={\bf y}_2(t_2)$$. Show that $${\bf y}_2(t)={\bf y}_1(t-\tau)$$ for all $$t$$, where $$\tau=t_2-t_1$$. ## Q10.4.4 In Exercises 10.4.29-10.4.34 describe and graph trajectories of the given system. 29. $${\bf y}'= \left[\begin{array}{cc} 1&1\\1&-1\end{array}\right]{\bf y}$$ 30. $${\bf y}'= \left[\begin{array}{cc} -4&3\\-2&-11\end{array}\right]{\bf y}$$ 31. $${\bf y}'= \left[\begin{array}{cc} 9&-3\\-1&11\end{array}\right]{\bf y}$$ 32. $${\bf y}'= \left[\begin{array}{cc} -1&-10\\-5&4\end{array}\right]{\bf y}$$ 33. $${\bf y}'= \left[\begin{array}{cc} 5&-4\\1&10\end{array}\right]{\bf y}$$ 34. $${\bf y}'= \left[\begin{array}{cc} -7&1\\3&-5\end{array}\right]{\bf y}$$ ## Q10.4.5 35. Suppose the eigenvalues of the $$2\times 2$$ matrix $$A$$ are $$\lambda=0$$ and $$\mu\ne0$$, with corresponding eigenvectors $${\bf x}_1$$ and $${\bf x}_2$$. Let $$L_1$$ be the line through the origin parallel to $${\bf x}_1$$. 1. Show that every point on $$L_1$$ is the trajectory of a constant solution of $${\bf y}'=A{\bf y}$$. 2. Show that the trajectories of nonconstant solutions of $${\bf y}'=A{\bf y}$$ are half-lines parallel to $${\bf x}_2$$ and on either side of $$L_1$$, and that the direction of motion along these trajectories is away from $$L_1$$ if $$\mu>0$$, or toward $$L_1$$ if $$\mu<0$$. ## Q10.4.6 The matrices of the systems in Exercises 10.4.36-10.4.41  are singular. Describe and graph the trajectories of nonconstant solutions of the given systems. 36. $${\bf y}'= \left[\begin{array}{cc} -1&1\\1&-1\end{array}\right]{\bf y}$$ 37. $${\bf y}'= \left[\begin{array}{cc} -1&-3\\2&6\end{array}\right]{\bf y}$$ 38. $${\bf y}'= \left[\begin{array}{cc} 1&-3\\-1&3\end{array}\right]{\bf y}$$ 39. $${\bf y}'= \left[\begin{array}{cc} 1&-2\\-1&2\end{array}\right]{\bf y}$$ 40. $${\bf y}'= \left[\begin{array}{cc} -4&-4\\1&1\end{array}\right]{\bf y}$$ 41. $${\bf y}'= \left[\begin{array}{cc} 3&-1\\-3&1\end{array}\right]{\bf y}$$ ## Q10.4.6 42. Let $$P=P(t)$$ and $$Q=Q(t)$$ be the populations of two species at time $$t$$, and assume that each population would grow exponentially if the other didn’t exist; that is, in the absence of competition, $P'=aP \quad \text{and} \quad Q'=bQ, \tag{A}$ where $$a$$ and $$b$$ are positive constants. One way to model the effect of competition is to assume that the growth rate per individual of each population is reduced by an amount proportional to the other population, so (A) is replaced by \begin{aligned} P'&=\phantom{-}aP-\alpha Q\\ Q'&=-\beta P+bQ,\end{aligned} where $$\alpha$$ and $$\beta$$ are positive constants. (Since negative population doesn’t make sense, this system holds only while $$P$$ and $$Q$$ are both positive.) Now suppose $$P(0)=P_0>0$$ and $$Q(0)=Q_0>0$$. 1. 
For several choices of $$a$$, $$b$$, $$\alpha$$, and $$\beta$$, verify experimentally (by graphing trajectories of (A) in the $$P$$-$$Q$$ plane) that there’s a constant $$\rho>0$$ (depending upon $$a$$, $$b$$, $$\alpha$$, and $$\beta$$) with the following properties: 1. If $$Q_0>\rho P_0$$, then $$P$$ decreases monotonically to zero in finite time, during which $$Q$$ remains positive. 2. If $$Q_0<\rho P_0$$, then $$Q$$ decreases monotonically to zero in finite time, during which $$P$$ remains positive. 2. Conclude from (a) that exactly one of the species becomes extinct in finite time if $$Q_0\ne\rho P_0$$. Determine experimentally what happens if $$Q_0=\rho P_0$$. 3. Confirm your experimental results and determine $$\rho$$ by expressing the eigenvalues and associated eigenvectors of $A=\left[\begin{array}{cc}{a}&{-\alpha }\\{-\beta }&{b}\end{array} \right]\nonumber$ in terms of $$a$$, $$b$$, $$\alpha$$, and $$\beta$$, and applying the geometric arguments developed at the end of this section.
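As a hedged aid for item 3 (standard $$2\times2$$ eigenvalue algebra; not part of the exercise text), the characteristic polynomial of the matrix above gives
$\lambda_{\pm}=\frac{(a+b)\pm\sqrt{(a-b)^2+4\alpha\beta}}{2},\nonumber$
which are real and distinct because $$\alpha\beta>0$$; and since $$\det A=ab-\alpha\beta$$, the smaller eigenvalue $$\lambda_-$$ is negative exactly when $$\alpha\beta>ab$$, the regime where one population is driven to zero.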
2020-01-27 19:14:51
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9852356314659119, "perplexity": 4328.831884161034}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579251705142.94/warc/CC-MAIN-20200127174507-20200127204507-00108.warc.gz"}
https://www.physicsforums.com/threads/three-layers-of-a-capacitor-voltage-divider-field-strength.741123/
# Three layers of a capacitor - Voltage divider + field strength.

1. Mar 2, 2014 ### Mutaja

1. The problem statement, all variables and given/known data In a plate capacitor, the plates have an area of $100cm^2$ and a distance of 3mm. The insulation between the plates is a 1mm glass plate (εr = 10), a 0.5mm thick mica plate (εr = 5) and the rest is air. The insulation layers are parallel to the capacitor plates. The capacitor is connected to a 100V voltage supply. a) Find the voltage across the layers. b) How big is the field strength in each layer? 2. Relevant equations 3. The attempt at a solution For a) I'm using the current divider rule. Vglass = $\frac{\frac{10*0.01m^2}{0.004mm}}{\frac{10*0.01m^2}{0.004mm}+\frac{5*0.01m^2}{0.0035mm}+\frac{0.01m^2}{0.003mm}}$ * 100V = 56.88V. This answer seems about 10x larger than it should be - which is why I'm pretty sure that I'm on the right track here. Well, that's only based on the fact that the random chance that I've come up with a completely wrong formula and get an answer that is almost exactly 10 times larger than what it should be, is fairly low. What am I missing here? As always, any feedback is very much appreciated. - Mutaja.

2. Mar 2, 2014 ### Staff: Mentor The layers are stacked between the plates and have thicknesses of 1 mm, 0.5 mm, and 1.5 mm. So that models as three capacitors in series. Being in series they must all carry the same current when any current flows. So current division isn't the way to go. You have the information required to calculate the individual capacitor values. Why not start there?

3. Mar 3, 2014 ### rude man You don't need to compute capacitances, and the area of the plates is immaterial. You can write an equation for the voltage drops across each region with E, the field in the air gap, as the unknown. You know what the relative E fields are in each region and you also know that the sum of the voltage drops has to equal 100V. You're right, your computed value for Vglass is close to 10x what it should be.

4. Mar 3, 2014 ### Mutaja I realize that, but voltage divider makes sense, which is what I thought I attempted at least. Ok. This is my new attempt: Cglass=$8.85*10^-12$ * εr * $\frac{A}{d}$ = $8.85*10^-12$ * 10 * $\frac{0.01m^2}{0.001m}$ = 8.85*10^-10. Assuming this is correct, I now know C. If I want to use this to find the voltage, I need Q. Knowing myself, though, I've probably overlooked a useful formula. Or are we now back at voltage dividing, since I now know more about the sizes of these capacitors relative to each other (assuming, of course, I compute C for mica and air as well)? Hmm. This has always been a challenge for me. In my initial attempt, I've written an equation for the voltage across the "glass capacitor" - wrongly. How would your method compare to that? "The field in the air gap" doesn't make much sense to me. Is it the electric field strength (is that what the E is for?). Sorry for the confusion, and thanks a lot for your help.

5. Mar 3, 2014 ### Staff: Mentor rude man is correct that you can write an equation for the voltage drop across each region if you know how the relative permittivity affects the net field strength (relative to vacuum) within the region. It's a good approach and quicker to achieve than what I was suggesting. What I was going for is a variation of the voltage divider method that takes advantage of the way charge is distributed in serially connected capacitors. The usual voltage divider formulas get messier when there are more than two capacitors in series.
By computing each of the capacitor values and then the total equivalent capacitance, a known total voltage (100 V) applied gives you the net charge on the equivalent capacitor. For series capacitors the charges are all the same, so it will be the same charge on each of the capacitor "layers", and hence you know the potential across each layer via V = Q/C. With potential difference and thickness you then have your field strengths for each layer.

6. Mar 3, 2014 ### Mutaja Computing each of the capacitor values: Cglass=$8.85*10^-12$ * εr * $\frac{A}{d}$ = $8.85*10^-12$ * 10 * $\frac{0.01m^2}{0.001m}$ = 8.85*10^-10. Cmica=$8.85*10^-12$ * εr * $\frac{A}{d}$ = $8.85*10^-12$ * 5 * $\frac{0.01m^2}{0.0005m}$ = 8.85*10^-10. Cair=$8.85*10^-12$ * εr * $\frac{A}{d}$ = $8.85*10^-12$ * 1 * $\frac{0.01m^2}{0.0015m}$ = 5.9*10^-10. Ctotal=2.36*10^-9. Net charge on the equivalent total (Q?): Q = V * C = 100V * 2.36*10^-9 = 2.36*10^-7. Using this Q across all capacitors is wrong - obviously. I get voltages of 266 and 400 using V = Q/C. Being able to see that I've made a silly mistake somewhere and not being able to spot it is unfortunate, at best. Hopefully it's not all wrong and I've made at least *some* progress. Thanks once again for helping me out.

7. Mar 3, 2014 ### Staff: Mentor How do capacitors in series add? (As a check: the total should end up smaller than any of the individual capacitors.) You're making progress. Fix up your capacitor values and net capacitance.

8. Mar 3, 2014 ### Mutaja Thank you! I figured out the voltage across each layer. My power-of-10 mistake should be -11, and capacitors in series "equal" resistors in parallel - well, the method of computing the values at least. My C total is $5.2*10^-11$. V=Q/C is then correct. I will give part b a shot in a few hours. I'm currently on the road, so my apologies for that. But I guess the forum isn't going anywhere by the time I'm at my computer again, heh. Thank you so much for all your help so far.

9. Mar 3, 2014 ### rude man Across each layer 1, 2 or 3, voltage V = distance d times the parallel component of the electric field E. So: E1 d1 + E2 d2 + E3 d3 = V. And then use the fact that E = E_vac/ε_r. When you get your answer I'll let you know if it's right. EDIT: NM, I see you went the other way already.

10. Mar 3, 2014 ### Mutaja Yes, I understand enough to attempt it, but seeing as I already have my answers, I'll leave this behind for now. When I get ahead of my schedule, I will look into it though. Any practice won't harm me, to say the least. For my 2nd problem, it was a lot easier, obviously, now that I have the voltage across each capacitor. I simply used E = V/d (which I learned about in a previous thread on this forum), and I got answers that made very much sense. Thanks a lot for your help, both of you.
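To tie the thread together, here is a small numeric sketch (mine, not from the thread) that runs the series-capacitor method the mentor describes, using the problem's values; the three layer voltages it prints sum to 100 V:

#include <cstdio>

int main() {
    const double eps0 = 8.85e-12;                  // F/m, vacuum permittivity
    const double A = 0.01;                          // m^2, plate area (100 cm^2)
    const double er[3] = {10.0, 5.0, 1.0};          // glass, mica, air
    const double d[3]  = {0.001, 0.0005, 0.0015};   // m, layer thicknesses
    const double V = 100.0;                         // V, supply voltage

    double C[3], invCtot = 0.0;
    for (int i = 0; i < 3; i++) {
        C[i] = eps0 * er[i] * A / d[i];  // parallel-plate capacitance per layer
        invCtot += 1.0 / C[i];           // series combination adds reciprocals
    }
    double Ctot = 1.0 / invCtot;         // about 5.2e-11 F, as found in post 8
    double Q = Ctot * V;                 // same charge on each series layer

    for (int i = 0; i < 3; i++) {
        double Vi = Q / C[i];            // voltage across layer i
        double Ei = Vi / d[i];           // field strength in layer i
        printf("layer %d: C = %.3e F, V = %.2f V, E = %.3e V/m\n", i, C[i], Vi, Ei);
    }
    return 0;
}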
2017-08-18 17:23:48
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5893730521202087, "perplexity": 760.3400114167083}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-34/segments/1502886104704.64/warc/CC-MAIN-20170818160227-20170818180227-00315.warc.gz"}
https://www.learnaptitude.com/2019/11/
# Time and Work Questions, Formulas and Tricks

Hey guys, welcome back to Learn Aptitude. Today I will discuss the full concept of Time and Work, shortcut tricks and formulas, and I will cover all the common variants of Time and Work questions. Many students face problems with Time and Work questions. The topic is not difficult, but it is conceptual: just clear your concept of Time and Work and the rest of the work is done.

## Basic Concept of Time and Work

Let us assume there are two people; the first one is Ram and the second one is Hari. In one day Ram can make 3 dolls and Hari can make 5 dolls. Now observe the following Time and Work table.

| Time | Ram | Hari |
|---|---|---|
| 1 Day | 3 Dolls | 5 Dolls |
| 7 Days | 7 × 3 = 21 Dolls | 7 × 5 = 35 Dolls |
| 30 Days | 30 × 3 = 90 Dolls | 30 × 5 = 150 Dolls |

From the above table, we can observe that Ram takes more time than Hari to make the same number of dolls, while Hari produces more dolls in less time. So we can conclude that, for a fixed amount of work, time and work rate are inversely proportional to each other.

## Time and Work Formulas

• Work done in 1 day = 1 / (number of days)
• Work done in n days = (work done in 1 day) × n
• Number of days = (total work) / (work done in 1 day)

## Time and Work Questions with Answers

Q.1 A can complete a work in 6 days while B can complete the same work in 5 days. If A and B work together, how many days will they take to complete the work?
(A) 10/11 Days (B) 30/11 Days (C) 12/11 Days (D) 13/11 Days
Answer: (B) 30/11 Days
Solution: A = 6 days, B = 5 days. LCM of 6 and 5 = 30 units of work. A does 30/6 = 5 units a day and B does 30/5 = 6 units a day, so together they do 5 + 6 = 11 units a day. ∴ Together they finish the 30 units in 30/11 days (about 2.7 days).

Q.2 P and Q together can complete a work in 15 days and Q alone can complete it in 20 days. In how many days can P alone complete the work?
(A) 30 Days (B) 23 Days (C) 60 Days (D) 83 Days
Answer: (C) 60 Days
Solution: LCM of 15 and 20 = 60 units of work. P and Q together do 60/15 = 4 units a day; Q alone does 60/20 = 3 units a day. Hence P does 4 − 3 = 1 unit a day. Finally, P alone completes the work in 60/1 = 60 days.

Q.3 Ram can build a wall in 7 days working 9 hours a day, while Laxman can build it in 6 days working 7 hours a day. How many days will they take if they work together for 42/5 hours a day?
(A) 3 Days (B) 6 Days (C) 9 Days (D) 8 Days
Answer: (A) 3 Days
Solution: Ram can complete the work in 7 × 9 = 63 hours. Laxman can complete the work in 6 × 7 = 42 hours. Ram's 1-hour work = 1/63; Laxman's 1-hour work = 1/42. Together, their 1-hour work is (1/63) + (1/42) = 5/126, so they need 126/5 hours. Working 42/5 hours a day, the number of days = (126/5) ÷ (42/5) = 3 days.

Q.4 Y can work twice as fast as X. If X and Y together complete a work in 30 days, how many days will Y take to complete the work alone?
(A) 45 Days (B) 10 Days (C) 58 Days (D) 60 Days
Answer: (A) 45 Days
Solution: Let X do 1 unit a day, so Y does 2 units a day; together they do 3 units a day, and in 30 days the whole work is 3 × 30 = 90 units. Y alone needs 90/2 = 45 days.

Q.5 B can complete a work in 20 days while A is 30% more efficient than B. How many days will A take to complete the work?
(A) 200/13 Days (B) 24 Days (C) 36 Days (D) 38 Days
Answer: (A) 200/13 Days (about 15.4 days)
Solution: Since A is 30% more efficient, the efficiency ratio A : B = 130 : 100 = 13 : 10. Time is inversely proportional to efficiency, so A's time = 20 × 10/13 = 200/13 ≈ 15.4 days.
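The unit-rate arithmetic above is easy to sanity-check numerically. A minimal sketch (mine, not from the original post), using the numbers from Q.1 and from Q.6 below:

#include <cstdio>

// Combined completion time for workers with individual times t[0..n-1]:
// daily rates add, so T = 1 / (1/t[0] + ... + 1/t[n-1]).
double together(const double *t, int n) {
    double rate = 0.0;
    for (int i = 0; i < n; i++)
        rate += 1.0 / t[i];  // work done per day by worker i
    return 1.0 / rate;       // days needed for the whole job
}

int main() {
    double q1[] = {6.0, 5.0};    // Q.1: A (6 days) and B (5 days)
    double q6[] = {24.0, 12.0};  // Q.6: woman (24 days) and man (12 days)
    printf("Q.1 together: %.2f days\n", together(q1, 2));  // 2.73 (= 30/11)
    printf("Q.6 together: %.2f days\n", together(q6, 2));  // 8.00
    return 0;
}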
Q.6 A woman can complete a certain job in 24 days and a man can complete the same job in half the time taken by the woman. If they work together, in how many days will they complete the work?
(A) 22 Days (B) 15 Days (C) 8 Days (D) 6 Days
Answer: (C) 8 Days
Solution: The woman completes the work in 24 days, so her 1 day's work = 1/24. The man completes the work in 24/2 = 12 days, so his 1 day's work = 1/12. Together, their 1 day's work is (1/24) + (1/12) = 3/24 = 1/8, so they complete the work in 8 days.

Q.7 I can paint a wall in 24 hours, and with the help of my brother we can paint it in 4 hours. If my brother paints alone, how many hours will he take?
(A) more than 5 hours (B) less than 4 hours but more than 3 hours (C) less than 3 hours (D) more than 4 hours but less than 5 hours
Answer: (D) more than 4 hours but less than 5 hours
Solution: Alone I take 24 hours, so my 1-hour work = 1/24. With my brother we take 4 hours, so our combined 1-hour work = 1/4. My brother's 1-hour work alone = (1/4) − (1/24) = 5/24. Alone, my brother will take 24/5 = 4.8 hours.

Q.8 A water tank has two holes. The first hole alone can empty the tank in 9 minutes, while the second hole alone can empty it in 6 minutes. If the flow rate stays constant and both holes start emptying the tank together, how many minutes will it take to empty the tank?
(A) more than 4 minutes (B) less than 4 minutes but more than 3 minutes (C) less than 3 minutes (D) more than 4 minutes but less than 5 minutes
Answer: (B) less than 4 minutes but more than 3 minutes
Solution: The first hole empties 1/9 of the tank per minute and the second empties 1/6 per minute. Working together, they empty (1/9) + (1/6) = 5/18 per minute. ∴ Required time = 18/5 = 3.6 minutes.

Q.9 Papu can make half as many sweets as Lipu in three-fourths of the time. If they work together, they take 18 days to complete the sweet production. How many days will Lipu alone take to complete the work?
(A) 30 (B) 40 (C) 50 (D) 45
Answer: (A) 30
Solution: Let Lipu take x days, so his 1 day's work = 1/x. Papu does half the work in (3/4)x days, so for the whole work Papu takes 2 × (3x/4) = 3x/2 days, and his 1 day's work = 2/(3x). Working together they take 18 days, so (1/x) + 2/(3x) = 1/18 ⇒ 5/(3x) = 1/18 ⇒ 3x = 90 ⇒ x = 30.

Q.10 Two people A and B can do a certain work in 60 days and 40 days respectively. If they both work together, what fraction of the work will be completed in 3 days?
(A) 1/9 (B) 1/8 (C) 1/24 (D) 1/6
Answer: (B) 1/8
Solution: A's 1 day's work = 1/60 and B's 1 day's work = 1/40, so together they do (1/60) + (1/40) = 5/120 = 1/24 of the work per day. In 3 days: 3 × (1/24) = 1/8.

# Divisibility Rules of Numbers

Are you struggling to remember the divisibility rules? Don't worry: just follow this post and you will remember all the rules.
## Divisibility Rules

### Divisibility by 2

A number is divisible by 2 if its last digit (the units place) is 0, 2, 4, 6 or 8. In other words, all even numbers are divisible by 2.

| Number | Units place | Divisible? | Reason |
|---|---|---|---|
| 110 | 0 | Yes | units digit is 0 |
| 192 | 2 | Yes | units digit is 2 |
| 234 | 4 | Yes | units digit is 4 |
| 366 | 6 | Yes | units digit is 6 |
| 568 | 8 | Yes | units digit is 8 |
| 651 | 1 | No | units digit is 1 |
| 96767 | 7 | No | units digit is 7 |

### Divisibility by 3

A number is divisible by 3 only if the sum of all its digits is divisible by 3. Add all the digits of the number and check whether that sum is divisible by 3; if yes, the number is divisible by 3.

| Number | Sum of digits | Divisible by 3? | Result |
|---|---|---|---|
| 78 | 7+8 = 15 | Yes | 15 is divisible by 3, so 78 is divisible |
| 134 | 1+3+4 = 8 | No | 8 is not divisible by 3, so 134 is not divisible |
| 261 | 2+6+1 = 9 | Yes | 9 is divisible by 3, so 261 is divisible |
| 378 | 3+7+8 = 18 | Yes | 18 is divisible by 3, so 378 is divisible |
| 6566 | 6+5+6+6 = 23 | No | 23 is not divisible by 3, so 6566 is not divisible |
| 46323 | 4+6+3+2+3 = 18 | Yes | 18 is divisible by 3, so 46323 is divisible |

### Divisibility by 4

A number is divisible by 4 if its last two digits (from the right-hand side) form a number divisible by 4.

| Number | Last two digits | Divisible by 4? |
|---|---|---|
| 156 | 56 | Yes, since 56 is divisible by 4 |
| 242 | 42 | No, since 42 is not divisible by 4 |
| 456368 | 68 | Yes, since 68 is divisible by 4 |

### Divisibility by 5

If the units digit of a number is either 0 or 5, the number is divisible by 5; otherwise it is not.

| Number | Last digit | Divisible by 5? |
|---|---|---|
| 650 | 0 | Yes |
| 56002 | 2 | No |
| 79856385 | 5 | Yes |

### Divisibility by 6

A number is divisible by 6 only if it is divisible by both 2 and 3, i.e. it must be an even number whose digit sum is divisible by 3.

| Number | Divisible by 2? | Divisible by 3? | Result |
|---|---|---|---|
| 96 | Yes | Yes | divisible by 6 |
| 256 | Yes | No | not divisible by 6 |
| 263 | No | No (digit sum 11) | not divisible by 6 |
| 1266 | Yes | Yes | divisible by 6 |

### Divisibility by 7

To check divisibility by 7, perform the following steps:
1. Double the units digit.
2. Subtract the result from the number formed by the remaining digits.
3. If the result is divisible by 7 (including 0), the original number is divisible by 7.
Note: use this rule for numbers of up to about 5 digits.

| Number | Units digit | Double of units digit | Subtraction | Result |
|---|---|---|---|---|
| 161 | 1 | 1 × 2 = 2 | 16 − 2 = 14 | 14 is divisible by 7, so 161 is divisible by 7 |
| 306 | 6 | 6 × 2 = 12 | 30 − 12 = 18 | 18 is not divisible by 7, so 306 is not divisible by 7 |
| 693 | 3 | 3 × 2 = 6 | 69 − 6 = 63 | 63 is divisible by 7, so 693 is divisible by 7 |
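Applied repeatedly, the same double-and-subtract step shrinks a number of any size while preserving divisibility by 7 (since 10a + b = 10(a − 2b) + 21b), so it can also be automated. A minimal sketch (mine, not from the original post):

#include <cstdio>
#include <cstdlib>

// Repeatedly drop the units digit b of n = 10a + b and replace n by |a - 2b|;
// this preserves divisibility by 7, so a direct test at the end suffices.
bool divisibleBy7(long long n) {
    n = llabs(n);
    while (n >= 100) {
        long long b = n % 10;          // units digit
        n = llabs(n / 10 - 2 * b);     // remaining digits minus twice the units digit
    }
    return n % 7 == 0;
}

int main() {
    const long long tests[] = {161, 306, 693, 67666662LL};
    for (long long t : tests)
        printf("%lld -> %s\n", t, divisibleBy7(t) ? "divisible by 7" : "not divisible by 7");
    return 0;
}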
3. Subtract their sum from the sum of the remaining groups.

Let's understand by taking an example.

Q. Check whether 67666662 is divisible by 7 or not?

Step 1: Make groups of 3 digits from the right side: (67)-(666)-(662)
Step 2: Add the alternate groups: 67 + 662 = 729
Step 3: Subtract the remaining group: 729 − 666 = 63

Here 63 is divisible by 7, so 67666662 is divisible by 7.

### Divisibility by 8

If the number formed by the last 3 digits (from the right-hand side) is divisible by 8, then the number is divisible by 8. One initial check is that the number must be even; otherwise it cannot be divisible by 8.

| Number | Last Three Digits | Is divisible by 8? | Result |
|---|---|---|---|
| 263152 | 152 | Yes | As 152 is divisible by 8, 263152 is divisible |
| 566364 | 364 | No | As 364 is not divisible by 8, 566364 is not divisible |
| 5663368 | 368 | Yes | As 368 is divisible by 8, 5663368 is divisible by 8 |

### Divisibility by 9

A number is divisible by 9 only if the sum of all its digits is divisible by 9. Add all the digits, as in the divisibility rule of 3, and check whether the sum is divisible by 9.

| Number | Sum of Digits | Is divisible by 9? | Result |
|---|---|---|---|
| 6363 | 6+3+6+3=18 | Yes | As the sum 18 is divisible by 9, the number is divisible by 9 |
| 96364 | 9+6+3+6+4=28 | No | As the sum 28 is not divisible by 9, the number is not divisible by 9 |

### Divisibility by 10

Any number whose unit digit is 0 is divisible by 10.

| Number | Unit Digit | Is divisible by 10? | Result |
|---|---|---|---|
| 62610 | 0 | Yes | As its unit digit is 0, it is divisible by 10 |
| 554545 | 5 | No | As its unit digit is 5, it is not divisible by 10 |

### Divisibility by 11

A number is divisible by 11 if the difference between the sum of the digits at odd places and the sum of the digits at even places is 0 or divisible by 11.

| Number | Sum of Odd Places | Sum of Even Places | Difference | Result |
|---|---|---|---|---|
| 244354 | 4+3+4=11 | 5+4+2=11 | 11−11=0 | As the difference is 0, the number is divisible by 11 |
| 206140 | 0+1+0=1 | 4+6+2=12 | 1−12=−11 | As the difference is divisible by 11, the number is divisible by 11 |
| 636652 | 2+6+3=11 | 5+6+6=17 | 11−17=−6 | As −6 is not divisible by 11, the number is not divisible by 11 |
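The rules above are mechanical enough to verify in code. A short Python sketch (the function names are my own) of the rule of 7, in both its up-to-5-digit form and its grouped-in-threes form:

```python
def div7_small(n):
    """Rule of 7 for small numbers: strip the unit digit and
    subtract twice that digit from the remaining digits."""
    while n >= 70:
        n = abs(n // 10 - 2 * (n % 10))
    return n % 7 == 0

def div7_groups(n):
    """Rule of 7 for big numbers: alternating sum of 3-digit
    groups taken from the right-hand side."""
    total, sign = 0, 1
    while n:
        total += sign * (n % 1000)
        sign = -sign
        n //= 1000
    return total % 7 == 0

print(div7_small(693), div7_small(306))   # True False
print(div7_groups(67666662))              # True (729 - 666 = 63)
```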
# HCF and LCM Online Aptitude Test-1

Hey guys, are you looking to practice an HCF and LCM online aptitude test? Here is the online aptitude test for HCF and LCM. Just take the test and get some practice.

### HCF and LCM Online Aptitude Test Details

This test contains 10 questions from the HCF and LCM section. You have to complete the test within 15 minutes. You have to answer every question; skipping is not available. Before appearing for the examination, please read the HCF and LCM tricks and concepts. Provide your experience and feedback in the comment section and help us improve our online test series.

# HCF and LCM Concept Shortcut Trick

Hey guys, are you facing difficulty while solving HCF and LCM problems? Don't worry, just follow this post and all your difficulties and doubts will be cleared. Here we explain the basic concept of HCF and LCM along with the shortcut tricks to solve all the problems.

### Why should you learn HCF and LCM?

If you want to be strong in mathematics and aptitude, then you have to be strong in HCF and LCM. HCF and LCM are two backbones of mathematics. Just as you need food to keep your body functioning, you need to be strong in LCM and HCF to handle further mathematics, because you will use these concepts at every step of mathematics: addition, subtraction, calculus and other solutions.

### What is the concept of Factor?

The numbers that are multiplied with each other to form a new number are called the factors of that new number.

#### How to calculate the factors of a number?

Let us understand the concept of factor by taking an example:

Example: Find the factors of 24, 25, 28?
Solution:
(i) 24 = 2 × 2 × 2 × 3
(ii) 25 = 5 × 5
(iii) 28 = 2 × 2 × 7

I think the concept of factor is now clear.

### HCF (Highest Common Factor)

The HCF of two or more numbers is a number which divides each of them exactly. In general, we can say that the highest common factor of two or more numbers is called their HCF.

#### Common Factor

A common factor is a factor which is present in all the given numbers.

Example: Factors of 6 = 2, 3 and factors of 12 = 2, 2, 3. So we can say that 2 and 3 are common factors of both numbers.

Example of the HCF of two numbers:

Q. Calculate the HCF of 6 and 12.
Factors of 6 = 2, 3
Factors of 12 = 2, 2, 3
Here the common factors are 2 and 3, so HCF = 2 × 3 = 6.

Q. Find the HCF of 6 and 14?
Factors of 6 = 2, 3 and factors of 14 = 2, 7. The only common factor is 2, so HCF = 2.

### LCM (Least Common Multiple)

The LCM is the least number which is divisible by all the given numbers. In other words, the LCM is the product of the highest powers of all the prime factors that appear in the given numbers.

Example: LCM of 6, 8
6 = 2, 3
8 = 2, 2, 2
So LCM = 2 × 2 × 2 × 3 = 24

Example 2: LCM of 6 and 14?
6 = 2, 3 and 14 = 2, 7, so LCM = 2 × 3 × 7 = 42.

### Important Formulas

• HCF × LCM = product of the two numbers

### HCF and LCM of Decimal Numbers

Let's take an example to understand how to find the HCF and LCM of decimal numbers.

Q. Find the GCD and LCM of the numbers 0.6, 0.18 and 1.2?

Step 1: First of all, check how many digits come after the '.' (decimal point).
Step 2: Make every number have two digits after the point; if only one digit is present, append a '0'.
Step 3: Remove the '.' from all the numbers: 60, 18, 120.
Step 4: Calculate the HCF of 60, 18, 120: HCF = 6.
Step 5: Calculate the LCM of 60, 18, 120: LCM = 2³ × 3² × 5 = 360.
Step 6: Divide the results by 100 to obtain the HCF and LCM of the given numbers:
HCF = 6/100 = 0.06
LCM = 360/100 = 3.60

### HCF and LCM of Fractions

HCF = (HCF of numerators) / (LCM of denominators)
LCM = (LCM of numerators) / (HCF of denominators)

Q. Find the LCM and GCD of 6/21, 8/35 and 12/63?
Solution:
LCM = LCM(6, 8, 12) / HCF(21, 35, 63) = 24/7
HCF = HCF(6, 8, 12) / LCM(21, 35, 63) = 2/315

### HCF and LCM of Powers of a Number

Q. Find the GCD and LCM of 6², 6¹³, 6^18, 6^19? (^ = power)
Solution: HCF = 6² (the smallest power), LCM = 6^19 (the largest power).

Q. Find the LCM and GCD of 3^-2, 3^-12, 3^-23, 3^-32? (^ = power)
Solution: HCF = 3^-32 (the smallest power), LCM = 3^-2 (the largest power).

### HCF and LCM of Polynomials

Q. What are the LCM and HCF of 2ab, 6a²b, 8a²b²?
Solution: Given expressions = 2ab, 6a²b, 8a²b². We can write them as 2ab, 2 × 3 × a²b, 2³ × a²b².
So our final HCF = 2ab and LCM = 2³ × 3 × a²b² = 24a²b².

Q. What are the LCM and HCF of x² − xy, x − y, 3x − 3y, x² − 2xy + y²?
Solution: Given expressions = x² − xy, x − y, 3x − 3y, x² − 2xy + y². We can write them as x(x − y), (x − y), 3(x − y), (x − y)².
So our final HCF = x − y and LCM = 3x(x − y)².

I hope your concept of this topic is now clear. If something is missing in this post, then kindly let me know.

# Number System Questions with Answers

Number System Questions with Solution: Hey guys, the number system questions are among the easier problems in the aptitude section of mathematics.
You just have to add, subtract, multiply or divide one number by another. I think you are already well acquainted with number system problems, so let's discuss different problems of the number system with their solutions.

## Number System Questions with Solutions

Q.1 What is the value of 34598205 × 999? (A) 34563606747 (B) 34563606795 (C) 34563606022 (D) 34563444222 Answer: (B) 34563606795 Solution: There are two ways to solve this question. The first is to multiply 34598205 by 999 directly, which is lengthy. So we follow the second method: 34598205 × (1000 − 1) = 34598205000 − 34598205 = 34563606795.

Q.2 What is the value of 998 × 112 + 998 × 88? (A) 199600 (B) 156666 (C) 456221 (D) 196632 Answer: (A) 199600 Tips: To solve this question, use the distributive law. Solution: 998 × 112 + 998 × 88 = 998 × (112 + 88) = 998 × 200 = 199600.

Q.3 What is the value of 742 × 742 + 258 × 258 + 2 × 742 × 258? (A) 10000000 (B) 45668222 (C) 1000000 (D) 4522588 Answer: (C) 1000000 Tips: Use (a + b)² = a² + b² + 2ab. Solution: 742 × 742 + 258 × 258 + 2 × 742 × 258 = (742)² + (258)² + 2 × 742 × 258 = (742 + 258)² = 1000² = 1000000.

Q.4 Find the odd digit $ for which 236$62 is divisible by 3? (A) 3 (B) 5 (C) 7 (D) 9 Answer: (B) 5 Tips: Use the divisibility rule for 3: a number is divisible by 3 if the sum of all its digits is divisible by 3. Solution: The sum of the known digits of the given number is 2 + 3 + 6 + 6 + 2 = 19, so 19 + $ must be divisible by 3, which gives $ = 2, 5 or 8. Since $ must be odd, $ = 5.

Q.5 Is 876744 divisible by 88? (A) Yes (B) No Answer: (A) Yes Tips: Use the divisibility rules of 11 and 8. Solution: 88 = 11 × 8. • Sum of digits at odd places − sum of digits at even places = (8 + 6 + 4) − (7 + 7 + 4) = 18 − 18 = 0, which means 876744 is divisible by 11. • The last 3 digits of 876744 are 744, which is divisible by 8, so 876744 is divisible by 8. So 876744 is divisible by 88 because it is divisible by both 11 and 8.

Q.6 What will be the unit digit of the product (5227)³¹³ × (91)³²¹? (A) 1 (B) 3 (C) 7 (D) 9 Answer: (C) 7 Solution: We have to calculate the unit digit of (5227)³¹³ × (91)³²¹. The unit digit of (5227)³¹³ is determined by the powers of 7, and the unit digit of (91)³²¹ by the powers of 1. As we know, 7⁴ has unit digit 1, so we can write (7)³¹³ × (1)³²¹ = (7)³¹² × 7 × (1)³²¹ = ((7)⁴)⁷⁸ × 7 × 1. Hence our required unit digit = (1 × 7 × 1) = 7.

Q.7 What is the minimum number we have to add to 2019 to obtain a number which is completely divisible by 11? (A) 12 (B) 5 (C) 6 (D) 4 Answer: (B) 5 Tips: Use the formula Required Number = Divisor − Remainder. Solution: 2019 = 11 × 183 + 6, so the remainder is 6. So our required number = 11 − 6 = 5.

Q.8 What is the minimum number we have to subtract from 2019 to obtain a number that is completely divisible by 11? (A) 12 (B) 5 (C) 6 (D) 4 Answer: (C) 6 Tips: Use the formula Required Number = Remainder. Solution: Dividing 2019 by 11 leaves remainder 6, so our required number = 6.

Q.9 What will be the sum of 1 + 2 + 3 + 4 + 5 + …… + 890? (A) 396494 (B) 396495 (C) 396496 (D) 396497 Answer: (B) 396495 Tips: Use the series formula 1 + 2 + 3 + … + n = ½n(n + 1). Solution: In 1 + 2 + 3 + … + 890, here n = 890. So as per the formula we have ½ × 890 × 891 = 445 × 891 = 396495.

Q.10 What will be the sum of 1³ + 2³ + 3³ + 4³ + 5³ + …… + 18³?
Answer: 29241 Tips: Use the series formula 1³ + 2³ + 3³ + … + n³ = [½n(n + 1)]². Solution: In 1³ + 2³ + … + 18³, here n = 18. So as per the formula we have [½ × 18 × 19]² = 171² = 29241.
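Both series formulas used in Q.9 and Q.10 are easy to sanity-check against brute-force sums. A minimal Python sketch:

```python
def sum_naturals(n):
    return n * (n + 1) // 2            # 1 + 2 + ... + n

def sum_cubes(n):
    return (n * (n + 1) // 2) ** 2     # 1^3 + 2^3 + ... + n^3

assert sum_naturals(890) == sum(range(1, 891)) == 396495
assert sum_cubes(18) == sum(i ** 3 for i in range(1, 19)) == 29241
print(sum_naturals(890), sum_cubes(18))
```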
2019-12-08 23:08:54
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 5, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4855049252510071, "perplexity": 1423.3073657869775}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575540515344.59/warc/CC-MAIN-20191208230118-20191209014118-00207.warc.gz"}
https://imagej.github.io/3D_Viewer__User_FAQs
3D Viewer › User FAQs

Basic Usage

How to display a stack

After you have started the 3D viewer, click on '->File->Add content'. A dialog window opens, asking for some information:
• Image: The image which should be displayed in the viewer. The user can select from a list of all open images.
• Name: A name for the 3D object. The default is the image title.
• Display as: Stacks can be displayed as volume renderings, orthoslices, surfaces or surface plots.
• Color: The color of the 3D object.
• Threshold: For surfaces, this is the isovalue of the surface. For all other display modes, this value is the lower threshold of displayed values.
• Resampling factor: Large images require downsampling before displaying in order to be rendered interactively. A value of 2 means that the image is downsampled by a factor of 2 in the x-, y- and z-directions.
• Channels: If displaying color images, this specifies the color channels which are to be displayed.
After clicking OK, the 3D object appears in the viewer window.

How to interact with the viewer (rotate, shift, zoom)

The user can rotate, translate and zoom in the 3D space. Two sorts of transformations are distinguished:
1. Transformation of the view:
• Rotation: Select the 'Hand' tool in ImageJ's tool bar. If no 3D object is selected, dragging with the left mouse button rotates the view around the universe center.
• Translation: Dragging while pressing the 'Shift' key shifts the view.
• Zooming: Zooming is done by selecting the 'Glass' tool in ImageJ's tool bar and dragging with the left mouse button. On many platforms, it is alternatively possible to scroll (while the 'Hand' tool is selected) for zooming.
2. Transformation of objects: Individual objects can be transformed with the same key/mouse combinations. To transform a specific object, that object needs to be selected. An object is selected by a single left mouse click. Selection is indicated by a red bounding box.

How to change the color, transparency… of a 3D object

Color, transparency, threshold and the displayed channels of color images are so-called attributes of 3D objects. These attributes have the following meaning:
• Color: The color of the 3D object. If 'None' is selected, the color is taken from the stack image.
• Transparency: The transparency of the 3D object: a value of 0 means fully opaque, a value of 1 means fully transparent.
• Threshold: In the case of surfaces, the threshold specifies the isovalue of the surface. Otherwise, it specifies the lower threshold of displayed pixels.
• Channels: In color images, the channels attribute specifies the channels to be displayed. In greyscale images, this attribute has no effect.
The attributes can be changed as follows:
1. Select the corresponding object by clicking on it.
2. Click on ->Edit->Attributes and select the attribute you want to change.

How to make animations and movie recordings

To animate the view, click on ->View->Start animation. The view immediately begins to rotate around the y-axis. If you want to record such an animation, click on ->View->Start recording. The animation is then recorded for one full 360° rotation. The result is displayed in a stack. If you want to include the recording in a presentation, save it via ImageJ's 'Save as AVI' function. You can incorporate the resulting movie file in PowerPoint presentations. To stop an animation, click on ->View->Stop animation.

How to reset the view

You can reset the 3D universe to its initial view by clicking on ->View->Reset View. This resets the view.
Note, however, that this does not change the transformation of individual 3D objects. To reset them, too, select each object and click on ->Transformation->Reset transformation.

How to hide the coordinate system

There are two types of coordinate systems: one global coordinate system, which indicates the origin of the universe, and one local coordinate system for each object. For hiding the global coordinate system, have a look at "How to change general view settings". For hiding the local coordinate system, select the object, click on ->Edit->Hide/Show and disable 'Show coordinate system'. See #How to change general view settings for how to avoid showing local coordinate systems in general.

How to change the background color

To change the background color of the 3D world, click on ->View->Change background color. A dialog opens, which lets you interactively adjust the background color. To use the current background color by default, see #How to change general view settings.

Surfaces

What is the idea of a surface

Intuitively, the surface of an object is understood as the border between the object and the background. One common way to find a surface is to choose a threshold which divides object and background: values above the threshold are assumed to belong to the object, values below are assumed to belong to the background. To construct a surface, an algorithm like the marching cubes algorithm can be utilized.

How to smooth a surface

In order to smooth the surface of a 3D object, select the object and click on ->Edit->Smooth surface. You can also smooth all displayed surfaces by clicking on ->Edit->Smooth all surfaces.

How to export surfaces

The displayed surfaces can be exported to files in different surface file formats. Currently supported are Wavefront (.obj) and Drawing Interchange Format (.dxf).

Volumes

What are volumes/volume renderings

A volume rendering generates the 3D effect by putting the slices of a stack one behind another, separated by a certain distance. To each pixel in each slice a transparency value is assigned, which depends on the pixel's brightness.

How to edit volumes

The 3D viewer offers the possibility to edit volumes. To crop volumes:
1. Select an object by clicking on it.
2. Use one of ImageJ's selection tools to draw a region of interest (ROI).
3. Click on ->Edit->Fill selection to erase the volume which is covered by the ROI. (Erasing actually means filling it with black.)

Orthoslices

What are orthoslices

Orthoslices are three orthogonal slices through the volume. The three slices show one xy-plane, one xz-plane and one yz-plane.

How to change the displayed slice

The position of the three slices can be changed. To do so, click on ->Edit->Adjust slices. A dialog opens, which lets you interactively adjust the position of each of the three slices. There are also keyboard shortcuts to adjust the slices: hold one of the x, y and z keys pressed and use either the arrow keys or mouse scrolling to adjust the slices. To hide a slice, hold one of the x, y or z keys pressed and hit the space bar.

2D Surface Plots

What are surface plots

A surface plot displays a 2D slice as a 3D plot: the x- and y-coordinates correspond to the x- and y-coordinates in the 2D slice, and the z-coordinate is the pixel value at (x, y). A 3D surface plot always shows one slice at a time. When a 3D surface plot is opened in the viewer, the currently selected slice is displayed. When changing the slice of the original image stack, the view is automatically updated.
How to interactively change the displayed slice

A 3D surface plot always shows one slice at a time. When a 3D surface plot is opened in the viewer, the currently selected slice is displayed. When changing the slice of the original image stack, the view is automatically updated.

View Settings

How to center a 3D object in the view

In case you have several 3D objects in the 3D window, it is desirable to center the view on one specific object. This is possible by selecting an object (by clicking on it) and clicking on ->View->Center selected.

How to hide the coordinate system

There are two types of coordinate systems, one global and one local for each individual 3D object. To hide the local coordinate system of one specific 3D object, see #How to hide the coordinate system. To hide the global coordinate system, see #How to change general view settings.

How to show a scalebar

To show or edit a scalebar in the 3D view, click on ->View->Edit scalebar. A dialog opens, which allows you to adjust the scalebar settings:
• x position: The x coordinate of the scalebar in real-world coordinates.
• y position: The y coordinate of the scalebar in real-world coordinates.
• length: The length of the scalebar, also in real-world units.
• units: An additional string which is displayed together with the length.
• Color: The color of the scalebar.
• show: Check/uncheck this box to show/hide the scalebar.
Clicking OK applies the changes.

How to change general view settings

Some general view settings can be changed and made permanent by clicking on ->View->View settings. A dialog window opens, asking the user for settings. There are two types of settings: startup options and options which are applied immediately.
1. Startup options:
• Width and Height: The window dimensions of the 3D viewer.
• Show global coordinate system: Show a coordinate system which indicates the origin of the 3D world.
• Use current color as default background: Activate this option to reload the current background color at each start of the viewer.
• Show scalebar: Activate this option to show the scalebar by default. (See also #How to show a scalebar.)
• Apply changes now: If activated, the changes in the settings above are applied immediately; otherwise, they are not applied until the next application start.
2. Immediately applied options:
• Show local coordinate system by default: If activated, the local coordinate system of 3D objects is shown when new objects are loaded in the 3D viewer. If inactivated, the coordinate system is omitted. Note: this only affects newly added 3D objects. Already displayed objects are not affected.
• Global rotation around Center/Origin: Global rotations (see #How to interact with the viewer (rotate, shift, zoom)) can have two possible centers:
• Origin: The origin of the virtual world. This is in most cases the lower left corner of 3D objects. You can make the origin visible by showing the global coordinate system (see above).
• Center: The center of the virtual world. The center is automatically calculated from the displayed 3D objects.

Transformations

The concept of transformations

There are two types of transformations in the 3D viewer: global transformations and local transformations. Global transformations refer to transformations of the whole view: no individual objects are transformed, but the whole 3D world together. Local transformations refer to transformations of individual objects. Transformations can be made interactively with the mouse.
See #How to interact with the viewer (rotate, shift, zoom) for more information. Alternatively, transformations of individual objects can be altered more exactly by specifying transformation matrices. Transformations can be set for 3D objects, or applied (concatenated with the current transformation) to 3D objects. Transformations can be saved and reloaded. And finally, it is possible to export a transformed object to a stack image.

How to apply a specific transformation to a 3D object

Applying a transformation to a 3D object means concatenating the specified transformation with the current transformation of the object. To apply a transformation, select an object and click on ->Transformation->Apply transform. A window opens, which asks you for a transformation matrix. The matrix is supposed to be given as a (3x4) matrix, row by row. All the individual values should be separated by a space character. Example:
        |  a11 a12 a13 a14 |
        |  a21 a22 a23 a24 |
        |  a31 a32 a33 a34 |
        |    0   0   0   1 |
should be specified as "a11 a12 a13 a14 a21 a22 a23 a24 a31 a32 a33 a34" (without the '"'). The window also allows you to load a transformation from a file.

How to set a specific transformation for a 3D object

Setting a transformation of a 3D object does not concatenate transformations. See #How to apply a specific transformation to a 3D object to concatenate transformations. To set a transformation, select an object and click on ->Transformation->Set transform. A window opens, which asks you for a transformation matrix. The matrix is supposed to be given as a (3x4) matrix, row by row. All the individual values should be separated by a space character. Example:
        |  a11 a12 a13 a14 |
        |  a21 a22 a23 a24 |
        |  a31 a32 a33 a34 |
        |    0   0   0   1 |
should be specified as "a11 a12 a13 a14 a21 a22 a23 a24 a31 a32 a33 a34" (without the '"'). The window also allows you to load a transformation from a file.

How can I see the current transformation of a 3D object

To see the current transformation matrix of a 3D object, select that object and click, for example, on ->Transformation->Set Transform. The window which opens shows the current transformation of the object. A (3x4) matrix is shown, row by row, in one line. Example:
        |  a11 a12 a13 a14 |
        |  a21 a22 a23 a24 |
        |  a31 a32 a33 a34 |
        |    0   0   0   1 |
is shown as "a11 a12 a13 a14 a21 a22 a23 a24 a31 a32 a33 a34" (without the '"'). Click 'Cancel' if you don't want to change the transformation.

Can I save/reload the current transformation of a 3D object

To save the current transformation of a 3D object, select that object and click on ->Transformation->Save transform. You can specify a file to which the current transformation is stored. To load a transformation, click on ->Transformation->Set transform. In the opening window, you can choose a previously stored transformation file.

How to save a transformed object

The 3D viewer allows you to load an image stack and display it as a 3D object. This object can be transformed; see e.g. #How to apply a specific transformation to a 3D object. Now such a transformed object can be exported to a stack image again. To do so, click on ->Transformation->Export transformed image. The resulting stack image can of course also be saved via ImageJ's 'Save as' commands.

Point Lists

What is meant by 'point list' and why can they be useful

Point lists represent a list of named points. They can be used for marking regions in/on 3D objects, for example.
One particular usage of point lists is landmark-based registration (see #How can two 3D objects be registered). Each 3D object owns a point list. This list is not shown by default, however.

How to show the point list of an object

There are two ways to show the point list of a 3D object:
• Click on ->Edit->Point list->Show Point list, or
• Select ImageJ's point tool and click on a selected object.
In both cases, a window opens which shows a list of named points for the selected object.

To add points, do the following:
• Select a 3D object by clicking on it.
• Select ImageJ's 'POINT' tool.
• Click somewhere on the selected object. The point is added and appears in the point list window.
To remove a point, either:
• Press Shift and click on the point you want to remove (ImageJ's 'POINT' tool has to be selected for this operation), or
• Right-click on the point in the point list window and click on 'Remove'.
The point disappears.

How to change the position of a point

You can interactively drag the point of interest. ImageJ's 'POINT' tool has to be selected. Click on the point with the left mouse button and drag it to the desired position.

How to save a point list to file and how to reload it

Point lists can be stored to a file and reloaded. To do so, select an object and click on ->Edit->Point list->Save Point list or on ->Edit->Point list->Load Point list, respectively. Choose the file containing the point list.

How to highlight a point from the list in the 3D view

Sometimes one wishes to know where a particular point from the point list window is located in the 3D world. To highlight a point from the point list window, just left-click on it. It then gets animated.

How to hide the points

To hide the points, click on ->Edit->Point List->Hide Point list. The point list window is closed automatically.

How to close the list window

To hide the points, click on ->Edit->Point List->Hide Point list. The point list window is closed automatically.

Registration

Which kinds of registration are supported

At the moment, only rigid landmark-based registration is supported.

How can two 3D objects be registered

To initiate registration, load at least two objects into the viewer and click on ->Edit->Register. You are then guided step by step through the landmark selection of the model and reference images and through the registration process. Please note that the images are locked after registration, to prevent unintended user interaction. To be able to transform the objects again, select each of them and deactivate ->Transformation->Lock.
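The flattened (3x4) row-major string expected by ->Transformation->Apply transform and ->Transformation->Set transform can also be generated programmatically. A minimal Python sketch, assuming a simple rotation about the z-axis (the function name is my own; the viewer only consumes the final 12-number string):

```python
import math

def rotation_z_string(degrees):
    """Build the 12-number, row-major (3x4) matrix string for a z-axis rotation."""
    c = math.cos(math.radians(degrees))
    s = math.sin(math.radians(degrees))
    rows = [(c, -s, 0.0, 0.0),
            (s,  c, 0.0, 0.0),
            (0.0, 0.0, 1.0, 0.0)]
    return " ".join(f"{v:g}" for row in rows for v in row)

# 90° about z; prints "0 -1 0 0 1 0 0 0 0 0 1 0" up to floating-point rounding
print(rotation_z_string(90))
```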
2021-01-20 04:35:17
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.21242724359035492, "perplexity": 1822.9127461912612}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703519883.54/warc/CC-MAIN-20210120023125-20210120053125-00510.warc.gz"}
https://robotics.stackexchange.com/questions/8389/3-dof-inverse-kinematics-implementation-whats-wrong-with-my-code
3 DOF Inverse Kinematics Implementation: What's wrong with my code? I am currently trying to implement an inverse kinematics solver for Baxter's arm using only 3 pitch DOF (that is why the yGoal value is redundant — that is the axis of revolution). For the most part, I copied the pseudocode on slide 26 of http://graphics.cs.cmu.edu/nsp/course/15-464/Fall09/handouts/IK.pdf . def sendArm(xGoal, yGoal, zGoal): invJacob = np.matrix([[3.615, 0, 14.0029], [-2.9082, 0, -16.32], [-3.4001, 0, -17.34]]) ycurrent = 0 while xcurrent != xGoal: theta1 = left.joint_angle(lj[1]) theta2 = left.joint_angle(lj[3]) theta3 = left.joint_angle(lj[5]) xcurrent, zcurrent = forwardKinematics(theta1, theta2, theta3) xIncrement = xGoal - xcurrent zIncrement = zGoal - zCurrent increMatrix = np.matrix([[xIncrement], [0], [zIncrement]]) change = np.dot(invJacob, increMatrix) left.set_joint_positions({lj[1]: currentPosition + change.index(0)/10}) #First pitch joint left.set_joint_positions({lj[3]: currentPosition + change.index(1)/10}) #Second pitch left.set_joint_positions({lj[5]: currentPosition + change.index(2)/10}) #Third Pitch joint def forwardKinematics(theta1, theta2, theta3): xcurrent = 370.8 * sine(theta1) + 374 * sine(theta1+theta2) + 229 * sine(theta1+theta2+theta3) zcurrent = 370.8 * cos(theta1) + 374 * cos(theta1+theta2) + 229 * cos(theta1+theta2+theta3) return xcurrent, zcurrent Here is my logic in writing this: I first calculated the 3x3 Jacobian matrix by taking the derivative of each equation seen in the forwardKinematics method, arriving at: [370cos(theta1) + 374cos(theta1+theta2) ..... 0 0 0 -370sin(theta1)-374sin(theta1+theta2)-...... ] In order to arrive at numerical values, I plugged in a delta theta change of 0.1 radians for theta1, theta2 and theta3. I arrived at a Jacobian of numbers: [0.954 0.586 .219 0.0000 0.000 0.0000 -.178 -.142 -0.0678] I then input this matrix into a pseudoinverse solver and came up with the values you see in the invJacob matrix in the code I posted. I then multiplied this by the difference between the goal and where the end effector currently is. I then applied a tenth of this value to each of the joints, to make small steps toward the goal. However, this just goes into an infinite loop and my numbers are way off from what they should be. Where did I go wrong? Is a complete rewrite of this implementation necessary? Thank you for all your help.
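For reference, the iterative scheme in those slides recomputes the Jacobian from the current joint angles on every iteration and applies its pseudoinverse to the remaining error, rather than reusing a single fixed inverse Jacobian. A hedged NumPy sketch of that loop for this 3-link planar arm (link lengths taken from the question; the goal, initial angles and step size are illustrative, and nothing here is Baxter-specific):

```python
import numpy as np

L1, L2, L3 = 370.8, 374.0, 229.0       # link lengths from the question

def fk(t1, t2, t3):
    x = L1 * np.sin(t1) + L2 * np.sin(t1 + t2) + L3 * np.sin(t1 + t2 + t3)
    z = L1 * np.cos(t1) + L2 * np.cos(t1 + t2) + L3 * np.cos(t1 + t2 + t3)
    return np.array([x, z])

def jacobian(t1, t2, t3):
    # Analytic partial derivatives of (x, z) with respect to (t1, t2, t3).
    c1, c12, c123 = np.cos(t1), np.cos(t1 + t2), np.cos(t1 + t2 + t3)
    s1, s12, s123 = np.sin(t1), np.sin(t1 + t2), np.sin(t1 + t2 + t3)
    return np.array([
        [L1*c1 + L2*c12 + L3*c123,  L2*c12 + L3*c123,  L3*c123],
        [-L1*s1 - L2*s12 - L3*s123, -L2*s12 - L3*s123, -L3*s123],
    ])

theta = np.array([0.3, 0.3, 0.3])      # initial joint angles (illustrative)
goal = np.array([600.0, 500.0])        # (x, z) target within the arm's reach
for _ in range(200):
    err = goal - fk(*theta)
    if np.linalg.norm(err) < 1e-6:
        break
    J = jacobian(*theta)               # recomputed every iteration
    theta += 0.1 * (np.linalg.pinv(J) @ err)   # small step along the correction
print(theta, fk(*theta))
```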
2019-10-15 23:26:00
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5320035219192505, "perplexity": 4596.915897561316}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986660829.5/warc/CC-MAIN-20191015231925-20191016015425-00315.warc.gz"}
https://projecteuclid.org/euclid.aop/1008956695
## The Annals of Probability ### Limit Distributions of Norms of Vectors of Positive i.i.d. Random Variables Martin Schlather #### Abstract This paper aims to combine the central limit theorem with the limit theorems in extreme value theory through a parametrized class of limit theorems where the former ones appear as special cases. To this end the limit distributions of suitably centered and normalized $l_{cp(n)}$-norms of $n$-vectors of positive i.i.d. random variables are investigated. Here, $c$ is a positive constant and $p(n)$ is a sequence of positive numbers that is given intrinsically by the form of the upper tail behavior of the random variables. A family of limit distributions is obtained if $c$ runs over the positive real axis. The normal distribution and the extreme value distributions appear as the endpoints of these families, namely, for $c = 0+$ and $c = \infty$, respectively. #### Article information Source Ann. Probab., Volume 29, Number 2 (2001), 862-881. Dates First available in Project Euclid: 21 December 2001 https://projecteuclid.org/euclid.aop/1008956695 Digital Object Identifier doi:10.1214/aop/1008956695 Mathematical Reviews number (MathSciNet) MR1849180 Zentralblatt MATH identifier 1014.60015 #### Citation Schlather, Martin. Limit Distributions of Norms of Vectors of Positive i.i.d. Random Variables. Ann. Probab. 29 (2001), no. 2, 862--881. doi:10.1214/aop/1008956695. https://projecteuclid.org/euclid.aop/1008956695 #### References • Abramowitz, M. and Stegun, I. A. (1984). Pocketbook of Mathematical Functions. Harri Deutsch, Frankfurt am Main. • Anderson, C. W. and Turkman, K. F. (1995). Sums and maxima of stationary sequences with heavy tailed distributions. Sankhyā Ser. A 57 1-10. • Billingsley, P. (1995). Probability and Measure, 3rd ed. Wiley, New York. • Bingham, N. H., Goldie, C. M. and Teugels, J. L. (1987). Regular Variation. Cambridge Univ. Press. • Chow, T. L. and Teugels, J. (1979). The sum and the maximum of i.i.d. random variables. In Proceedings of the Second Prague Symposium on Asymptotic Statistics (P. Mandl and M. Huskova, eds.) 81-92. North-Holland, Amsterdam. • Embrechts, P., Klüppelberg, C. and Mikosch, T. (1997). Modelling Extremal Events. Springer, Berlin. • Feller, W. (1971). An Introduction to Probability Theory and Its Applications 2. Wiley, New York. • Gnedenko, B. W. (1963). The Theory of Probability, 2nd ed. Chelsea, New York. • Gradshteyn, I. S. and Ryzhik, I. M. (2000). Table of Integrals, Series, and Products, 6th ed. Academic, London. • Greenwood, P. E. and Hooghiemstra, G. (1991). On the domain of attraction of an operator between supremum and sum. Probab. Theory Related Fields 89 201-210. • Griffin, P. and Kuelbs, J. (1991). Some extensions of the LIL via self-normalisations. Ann. Probab. 19 380-395. • Hahn, M. G. and Weiner, D. C. (1992). Asymptotic behaviour of self-normalized trimmed sums: nonnormal limits. Ann. Probab. 20 455-482. • Ho, H.-C. and Hsing, T. (1996). On the asymptotic joint distribution of the sum and maximum of stationary normal random variables. J. Appl. Probab. 33 138-145. • Hooghiemstra, G. and Greenwood, P. E. (1997). The domain of attraction of the α-sun operator for type II and type III distributions. Bernoulli 3 479-489. • Horváth, L. and Shao, Q.-M. (1996). Large deviations and law of the iterated logarithm for partial sums normalized by the largest absolute observation. Ann. Probab. 24 1368-1387. • Hsing, T. (1995).
A note on the asymptotic independence of the sum and the maximum of strongly mixing stationary random variables. Ann. Probab. 23 938-947. • Kallenberg, O. (1997). Foundations of Modern Probability. Springer, New York. • Leadbetter, M. R., Lindgren, G. and Rootzén, H. (1983). Extremes and Related Properties of Random Sequences and Processes. Springer, Berlin. • Logan, B. F., Mallows, C. L., Rice, S. O. and Shepp, L. A. (1973). Limit distributions of self-normalized sums. Ann. Probab. 1 788-809. • Resnick, S. I. (1987). Extreme Values, Regular Variation, and Point Processes. Appl. Probab. 4. Springer, New York. • Samorodnitsky, G. and Taqqu, M. S. (1994). Stable Non-Gaussian Random Processes. Chapman and Hall, Boca Raton, FL. • Sato, K.-I. (1999). Lévy Processes and Infinitely Divisible Distributions. Cambridge Univ. Press. • Shao, Q.-M. (1997). Self-normalized large deviations. Ann. Probab. 25 285-327.
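The interpolation described in the abstract — $l_p$ norms of vectors of positive i.i.d. variables behaving like sums for small exponents and like maxima for large ones — is easy to see numerically. A hedged NumPy sketch (the distribution and exponents are illustrative only, not the paper's choices):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.exponential(size=10_000)       # positive i.i.d. sample

for p in [1, 2, 8, 64, np.inf]:
    norm = x.max() if p == np.inf else (x ** p).sum() ** (1 / p)
    print(f"p={p:>4}: ||x||_p = {norm:12.2f}")
# p = 1 is the plain sum (central-limit regime); as p grows the norm
# collapses onto the maximum (extreme-value regime).
```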
2019-09-20 03:31:12
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7059846520423889, "perplexity": 2473.308781132904}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-39/segments/1568514573827.2/warc/CC-MAIN-20190920030357-20190920052357-00506.warc.gz"}
http://www.chegg.com/homework-help/questions-and-answers/consider-square-10-m--charges-placed-corners-square-follows-40-956-c-0-0-40-956-c-1-1-30-9-q453534
## Electric Charge and Field!! Will Rate!! Consider a square which is 1.0 m on a side. Charges are placed at the corners of the square as follows: +4.0 μC at (0,0); +4.0 μC at (1,1); +3.0 μC at (1,0); −3.0 μC at (0,1). What is the magnitude of the electric field at the square's center?
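A hedged worked check (my own superposition calculation, not a posted solution): the two +4.0 μC charges sit at opposite corners and are equidistant from the center, so their fields cancel there; the +3.0 μC and −3.0 μC charges produce fields that point in the same direction, so their magnitudes add. A short Python sketch:

```python
import numpy as np

k = 8.99e9                                       # Coulomb constant, N*m^2/C^2
charges = [(4e-6, (0, 0)), (4e-6, (1, 1)),
           (3e-6, (1, 0)), (-3e-6, (0, 1))]      # (charge in C, position in m)
center = np.array([0.5, 0.5])

E = np.zeros(2)
for q, pos in charges:
    r = center - np.array(pos, dtype=float)      # vector from charge to center
    E += k * q * r / np.linalg.norm(r) ** 3      # point-charge field, vector form
print(np.linalg.norm(E))                         # ~1.1e5 N/C
```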
2013-05-22 13:30:30
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8717588186264038, "perplexity": 1017.8554612424878}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368701760529/warc/CC-MAIN-20130516105600-00085-ip-10-60-113-184.ec2.internal.warc.gz"}
https://faculty.math.illinois.edu/Macaulay2/doc/Macaulay2-1.19.1/share/doc/Macaulay2/PrimaryDecomposition/html/_reg__Seq__In__Ideal.html
# regSeqInIdeal -- a regular sequence contained in an ideal

## Synopsis

• Usage: regSeqInIdeal I, regSeqInIdeal(I, n), regSeqInIdeal(I, n, c, t)
• Inputs:
• I, an ideal,
• n, an integer, the length of the regular sequence returned
• c, an integer, the codimension of I if known
• t, an integer, a limit on the time spent (in seconds) for each trial
• Optional inputs:
• Strategy => ..., default value Quick
• Outputs:
• an ideal, generated by a regular sequence of length n contained in I

## Description

This method computes a regular sequence of length n contained in a given ideal I. It attempts to do so by first trying "sparse" combinations of the generators, i.e. elements which are either generators or sums of two generators. If a sparse regular sequence is not found, then dense combinations of generators will be tried. If the length n is either unspecified or greater than the codimension of I, then it is silently replaced with the codimension of I. The ideal I should be in a polynomial (or at least Cohen-Macaulay) ring, so that codim I = grade I.

i1 : R = QQ[x_0..x_7]

o1 = R

o1 : PolynomialRing

i2 : I = intersect(ideal(x_0,x_1,x_2,x_3), ideal(x_4,x_5,x_6,x_7), ideal(x_0,x_2,x_4,x_6), ideal(x_1,x_3,x_5,x_7))

o2 = ideal (x_2*x_7, x_0*x_7, x_3*x_6, x_2*x_6, x_1*x_6, x_0*x_6, x_2*x_5, x_0*x_5, x_3*x_4, x_2*x_4, x_1*x_4, x_0*x_4)

o2 : Ideal of R

i3 : elapsedTime regSeqInIdeal I
 -- 0.048216 seconds elapsed

o3 = ideal (x_2*x_7, x_3*x_6 + x_0*x_7, x_2*x_5 + x_0*x_7, x_1*x_4 + x_0*x_7)

o3 : Ideal of R

If I is the unit ideal, then an ideal of variables of the ring is returned. If the codimension of I is already known, then one can specify this, along with a time limit for each trial (normally this is taken from the length of time for computing codim I). This can result in a significant speedup: in the following example, codim I takes more than a minute to complete.

i4 : R = QQ[h,l,s,x,y,z]

o4 = R

o4 : PolynomialRing

i5 : I = ideal(h*l-l^2-4*l*s+h*y, h^2*s-6*l*s^3+h^2*z, x*h^2-l^2*s-h^3, h^8, l^8, s^8)

o5 = ideal (h*l - l^2 - 4*l*s + h*y, -6*l*s^3 + h^2*s + h^2*z, -h^3 - l^2*s + h^2*x, h^8, l^8, s^8)

o5 : Ideal of R

i6 : isSubset(I, ideal(s,l,h)) -- implies codim I == 3

o6 = true

i7 : elapsedTime regSeqInIdeal(I, 3, 3, 1)
 -- 0.00555284 seconds elapsed

o7 = ideal (h*l - l^2 - 4*l*s + h*y, h^3 + l^2*s - h^2*x, s^8 + h^3 + l^2*s - h^2*x)

o7 : Ideal of R
2022-08-15 09:57:02
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7637869715690613, "perplexity": 579.0770087553558}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882572163.61/warc/CC-MAIN-20220815085006-20220815115006-00268.warc.gz"}
http://crypto.stackexchange.com/tags/pbkdf-2/new
# Tag Info

2

There are two ways to attack encryption that uses a derived key: You can attack the encryption algorithm. In the case of correctly used* 128-bit AES, that essentially amounts to a brute force attack on the 128-bit keyspace. This would succeed after on average $2^{127}$ tries (if it were practical). If you knew that two files had used the same password ...

6

PBKDF2 (as defined by RFC 2898) is a function of the form $$DK = \text{PBKDF2}(\text{PRF}, Password, Salt, c, dkLen)$$ In most practical use cases, the $\text{PRF}$ is $\text{HMAC}$ instantiated with a Merkle-Damgård hash function such as $\text{SHA-1}$. The time to compute $\text{PBKDF2}$ is roughly linear in the iteration parameter $c$, all other ...

5

Firstly, How much time will it take to crack PBKDF2 while using a 9 character password? and how do I calculate the cost? I'm not specifying any specific system or platform. If a brute force attack is made using the best ever super computer around how much time will it take to crack it? Unless the underlying PRF is broken, brute force and dictionary ...

3

When using PBKDF2, is there a practical upper limit to the iteration count above which we lose security? No. There is a limit above which you gain no security, but it isn't practical. It's on the order of $2^{128}$ iterations for PBKDF2-HMAC-SHA-2, or $2^{80}$ if you use SHA-1 as the HMAC hash. For an explanation, see the questions mikeazo linked in ...

5

I agree with the comments that SHA-256 should be fine here. However, if you already use HMAC-SHA-256 for PBKDF2, you could use HKDF Expand, which despite its name is defined even for output lengths shorter than the input. In your case the output would be simply: $$\operatorname{HMAC-SHA-256}(\text{key}, \text{info} || \text{0x01}),$$ where 'info' is an ...

3

Your description of how RFC 5959 works isn't quite right. It is not quite correct to state that RFC 5959 encrypts using AES in ECB mode. A correct statement is: if the plaintext is exactly 128 bits, then use ECB mode, otherwise use a non-trivial mode of operation found in RFC 3394. In the former case, ECB mode is fine, since it's just a single block of ...
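For concreteness, PBKDF2 with an HMAC-SHA-256 PRF is available in the Python standard library. A minimal sketch (the password, salt size and iteration count here are illustrative, not recommendations):

```python
import hashlib
import os

password = b"correct horse battery staple"
salt = os.urandom(16)
iterations = 600_000            # the cost parameter c; tune it for your hardware

dk = hashlib.pbkdf2_hmac("sha256", password, salt, iterations, dklen=32)
print(dk.hex())
```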
2014-07-31 03:31:24
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5262838006019592, "perplexity": 947.3881859122359}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1406510272329.26/warc/CC-MAIN-20140728011752-00100-ip-10-146-231-18.ec2.internal.warc.gz"}
https://www.gradesaver.com/textbooks/math/algebra/algebra-a-combined-approach-4th-edition/chapter-10-section-10-4-adding-subtracting-and-multiplying-radical-expressions-exercise-set-page-710/55
## Algebra: A Combined Approach (4th Edition) $(\sqrt[3]{a}-4)(\sqrt[3]{a}+5)=\sqrt[3]{a^{2}}+\sqrt[3]{a}-20$ $(\sqrt[3]{a}-4)(\sqrt[3]{a}+5)$ Evaluate the product: $(\sqrt[3]{a}-4)(\sqrt[3]{a}+5)=\sqrt[3]{a^{2}}+5\sqrt[3]{a}-4\sqrt[3]{a}-20=...$ Now, simplify by combining like terms: $...=\sqrt[3]{a^{2}}+(5-4)\sqrt[3]{a}-20=\sqrt[3]{a^{2}}+\sqrt[3]{a}-20$
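A quick symbolic check of the same expansion (using sympy, with the cube root written as a rational power; purely illustrative):

```python
import sympy as sp

a = sp.symbols('a', positive=True)
expr = sp.expand((sp.root(a, 3) - 4) * (sp.root(a, 3) + 5))
print(expr)    # a**(2/3) + a**(1/3) - 20
```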
2018-07-19 04:37:03
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9063805341720581, "perplexity": 4824.037159734589}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-30/segments/1531676590493.28/warc/CC-MAIN-20180719031742-20180719051742-00498.warc.gz"}
http://kldns.net/error-bars/standard-error-significantly-different.html
## How To Repair Standard Error Significantly Different Tutorial

# Standard Error Significantly Different

In that case, the statistic provides no information about the location of the population parameter. I am repeatedly telling students that C.I. … One way to do this is with the standard error of the mean. If you take many random samples from a population, the standard error of the mean is the standard deviation of the different sample means. Even though the error bars do not overlap in experiment 1, the difference is not statistically significant (P=0.09 by unpaired t test). Keep doing what you're doing, but put the bars in too. The confidence interval of some estimator … The confidence interval (at the 95% level) spans approximately 2 standard errors on either side of the estimate.

## How To Interpret Error Bars

Because s.d. … Conversely, to reach P = 0.05, s.e.m. … This is why a coefficient that is more than about twice as large as its SE will be statistically significant at p < .05. Error bars in experimental biology. The two concepts would appear to be very similar. This is also true when you compare proportions with a chi-square test. The former is a statement of frequentist probability representing the results of repeated sampling, and the latter is a statement of Bayesian probability based on a degree of belief. For example, Gabriel comparison intervals are easily interpreted by eye [19]. Overlapping confidence intervals do not mean two values are not significantly different. Sometimes "standard error" is used by itself; this almost certainly indicates the standard error of the mean, but because there are also statistics for the standard error of the variance, the standard error of the median, and so on, you should specify which is meant.

## Overlapping Error Bars

Still, with the knowledge that most people -- even most researchers -- don't understand error bars, I'd be interested to hear our readers make the case for whether or not we … Error bars that represent the 95% confidence interval (CI) of a mean are wider than SE error bars -- about twice as wide with large sample sizes and even wider with small sample sizes. What if the error bars do not represent the SEM? It is not possible for them to take measurements on the entire population.

## Large Error Bars

Standard error gives smaller bars, so the reviewers like them more. However, the graph shows the error bars for all three conditions overlapping substantially.
All the figures can be reproduced using the spreadsheet available in Supplementary Table 1, with which you can explore the relationship between error bar size, gap and P value. Means ±1 standard error of 100 random samples (n=3) from a population with a parametric mean of 5 (horizontal line). However, many statistical results obtained from a computer statistical package (such as SAS, STATA, or SPSS) do not automatically provide an effect size statistic.

## SEM Error Bars

The formula (1 − P) (most often P < 0.05) is the probability that the population mean will fall in the calculated interval (usually 95%).

## When differences in significance aren't significant differences

"We compared treatments A and B with a placebo." In a regression, the effect size statistic is the Pearson product-moment correlation coefficient (which is the full and correct name for the Pearson r correlation, often noted simply as R). When you view data in a publication or presentation, you may be tempted to draw conclusions about the statistical significance of differences between group means by looking at whether the error bars overlap. The two are related by the t-statistic, and in large samples the s.e.m. …

## Error Bars: Standard Deviation or Standard Error

Web pages: this web page calculates the standard error of the mean and other descriptive statistics for up to 10000 observations. Means ±1 standard error of 100 random samples (n=3) from a population with a parametric mean of 5 (horizontal line). To obtain the 95% confidence interval, multiply the SEM by 1.96 and add the result to the sample mean to obtain the upper limit of the interval in which the population parameter will fall.
It's an easy way of comparing medications, surgical interventions, therapies, and experimental results. The 95% confidence interval in experiment B includes zero, so the P value must be greater than 0.05, and you can conclude that the difference is not statistically significant. If 95% CI error bars do not overlap, you can be sure the difference is statistically significant (P < 0.05). Unfortunately, owing to the weight of existing convention, all three types of bars will continue to be used. I just couldn't logically figure out how the information I was working with could possibly answer that question… #22 Xan Gregg October 1, 2008 Thanks for rerunning a great article -- over thirty percent of respondents said that the correct answer was when the confidence intervals just touched -- much too strict a standard, for this corresponds to p < .006 … It is an even more valuable statistic than the Pearson because it is a measure of the overlap, or association, between the independent and dependent variables (see Figure 3). Standard error: meaning and interpretation. With our tips, we hope you'll be more confident in interpreting them. And someone in a talk recently used 99% confidence error bars, which rather changed the interpretation of some of his data. Means of 100 random samples (N=3) from a population with a parametric mean of 5 (horizontal line). The variability? bars for these data need to be about 0.86 arm lengths apart (Fig. 1b). If two SE error bars overlap, you can be sure that a post test comparing those two groups will find no statistical significance. If the interval includes zero, then they could be equally effective; if it doesn't, then one medication is a clear winner.
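The SD / SE / 95% CI relationships discussed above are easy to visualize side by side. A hedged matplotlib sketch (simulated data; the 1.96 multiplier is the usual large-sample approximation):

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(1)
groups = [rng.normal(mu, 10, size=20) for mu in (50, 58, 66)]
means = [g.mean() for g in groups]
sd = [g.std(ddof=1) for g in groups]
se = [s / np.sqrt(len(g)) for s, g in zip(sd, groups)]
ci = [1.96 * s for s in se]                  # ~95% CI half-width

x = np.arange(len(groups))
for off, err, label in [(-0.2, sd, "SD"), (0.0, se, "SE"), (0.2, ci, "95% CI")]:
    plt.errorbar(x + off, means, yerr=err, fmt="o", capsize=4, label=label)
plt.xticks(x, ["A", "B", "C"])
plt.legend()
plt.show()
```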
2018-02-22 03:12:19
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6051880121231079, "perplexity": 675.0257115107205}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-09/segments/1518891813883.34/warc/CC-MAIN-20180222022059-20180222042059-00737.warc.gz"}
https://dmoj.ca/problem/coci14c1p6
COCI '14 Contest 1 #6 Kamp View as PDF Points: 20 (partial) Time limit: 2.0s Memory limit: 128M Problem type In a certain flooded village, a secret superhuman humanitarian camp is being opened as we speak. The village consists of houses marked with integers from 1 to N. The houses are connected to each other with N-1 roads so that there is a unique way between each two houses. For each road, we know the time it takes for a truck to pass it. The camp should be put up in some house's garden, but the camp manager still hasn't decided which house it is going to be. Mirko has been appointed as the driver. His job is to drive around teams of volunteers in his super truck from the camp to the house where that certain team is going to work. His van is super because all teams at once can drive in it! In total, there are K teams and all the teams are going to a different house. All teams board into Mirko's truck initially, and then he drives them to houses in the sequence he determined for himself. After he drives around all teams, Mirko stays and helps the last team (he doesn't go back to camp). In order for the camp manager to determine where to put up the camp, he wants to know, for each house, the minimal time it takes for Mirko to drive around all teams if that house is the headquarters. Write a program that will determine the numbers Mirko's boss wants to see! Input Specification The first line of input contains the integers N and K. Each of the following N-1 lines contains the integers a, b, w, where w is the time it takes to pass the two-way road between houses a and b. Each of the following K lines contains the integer that marks the house where the corresponding team is going. Output Specification Output N lines. The i-th line of output must contain the minimal time it takes Mirko to drive around all of the teams if the camp headquarters is located in house i. Scoring In test cases worth 50% of total points, it will hold . Sample Input 1 5 2 2 5 1 2 4 1 1 2 2 1 3 2 4 5 Sample Output 1 5 3 7 2 2 Explanation for Sample 1 If Mirko starts off at house 1, he can drop off volunteers at houses 1-2-4-2-5, respectively. If he starts off at house 2, the possible sequence is 2-5-4. Sample Input 2 7 2 1 2 4 1 3 1 2 5 1 2 4 2 4 7 3 4 6 2 3 7 Sample Output 2 11 15 10 13 16 15 10
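A standard way to attack this (an editorial-style sketch, not stated on the problem page itself): for a fixed root v, Mirko must traverse every edge of the minimal subtree connecting v with all team houses exactly twice, except that the walk to the team he visits last is traversed only once. So answer(v) = 2*W(v) - maxdist(v), where W(v) is the total edge weight of that minimal subtree and maxdist(v) is the largest distance from v to a team house. A brute-force Python check of this formula against Sample 1 (fine for small inputs; full points would need an O(N) rerooting version):

    from collections import defaultdict

    def solve(n, edges, teams):
        adj = defaultdict(list)
        for a, b, w in edges:
            adj[a].append((b, w))
            adj[b].append((a, w))
        teams = set(teams)
        answers = []
        for root in range(1, n + 1):
            need_weight = 0   # total weight of edges whose far side holds a team
            maxdist = 0       # largest distance from root to a team house
            def dfs(u, parent, depth):
                nonlocal need_weight, maxdist
                has = u in teams
                if has:
                    maxdist = max(maxdist, depth)
                for v, w in adj[u]:
                    if v != parent:
                        if dfs(v, u, depth + w):  # child subtree contains a team
                            need_weight += w
                            has = True
                return has
            dfs(root, 0, 0)
            answers.append(2 * need_weight - maxdist)
        return answers

    # sample 1 from the statement
    print(solve(5, [(2, 5, 1), (2, 4, 1), (1, 2, 2), (1, 3, 2)], [4, 5]))  # [5, 3, 7, 2, 2]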
2018-03-18 13:26:57
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.1843087524175644, "perplexity": 1626.400333441465}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-13/segments/1521257645775.16/warc/CC-MAIN-20180318130245-20180318150245-00400.warc.gz"}
https://about.intrinio.com/data-tag/pricetorevenue?category=search&category_name=Search&query=price_sales&method=api
# Price to Revenue

## Definition

A valuation ratio that compares a company's stock price to its revenues. The price-to-revenue ratio is an indicator of the value placed on each dollar of a company's revenues. It can be calculated either by dividing the company's market capitalization by its total sales over a 12-month period, or on a per-share basis by dividing the stock price by sales per share for a 12-month period. Like all ratios, the price-to-sales ratio is most relevant when used to compare companies in the same sector. A low ratio may indicate possible undervaluation, while a ratio that is significantly above the average may suggest overvaluation.

## Formula

$\text{pricetorevenue} = \frac{\text{close\_price}}{\text{totalrevenue} / \text{weightedavedilutedsharesos}}$

## Details

Intrinio Tag: pricetorevenue
Statements: Calculations
Templates: Industrial, Financial
Type: Valuation
Units: Multiple
Historical? Yes
Screenable? Yes

## Get This Data

If you need this data in your application or spreadsheet, take a look at our offerings below. In many cases, you can start for free!

### US Sector & Industry

Intrinio is a financial data platform. Our data feeds and API can power your apps, dashboards, and spreadsheets. Take advantage of our low startup costs, reasonable licensing, and free chat support.
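A minimal sketch of the per-share form of this formula in Python (the argument names mirror the Intrinio tags above; the input values are hypothetical):

    def price_to_revenue(close_price, totalrevenue, weightedavedilutedsharesos):
        """Stock price divided by (trailing 12-month) sales per share."""
        return close_price / (totalrevenue / weightedavedilutedsharesos)

    # hypothetical inputs: $50 share price, $2B revenue, 400M diluted shares
    print(price_to_revenue(50.0, 2e9, 4e8))  # -> 10.0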
2020-07-03 13:24:39
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 1, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.21685685217380524, "perplexity": 6235.034402099267}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593655882051.19/warc/CC-MAIN-20200703122347-20200703152347-00061.warc.gz"}
https://greprepclub.com/forum/i-dont-know-what-im-missing-3117.html
# i dont know what im missing

Intern
Joined: 22 Oct 2016
Posts: 15

i dont know what im missing [#permalink] 03 Jan 2017, 11:44

In 750 grams of a mixture of salt and water, there is 30% water. How much water should be converted to steam so that the water in the mix remains at 25%?

I think the answer should be 11.25, because 225 gms is 30% of 750 gms, and 5% of 225 gms is 11.25 gms, which should be evaporated. What am I missing? Brief me if there is a concept behind this. Thanks.

GMAT Club Legend
Joined: 07 Jun 2014
Posts: 4750
GRE 1: Q167 V156

Re: i dont know what im missing [#permalink] 03 Jan 2017, 13:40

Hi,

Rule: PERCENTAGES are not your friends.

Given that 750 gms of mixture contains 30% water, it holds $$\frac{30}{100}\times 750 = 225$$ grams of water. Let x grams of water be evaporated such that the solution then contains 25% water. Note that evaporating x grams removes x grams from the water and from the total mixture alike, which is why simply taking 5% of the 225 grams of water does not work.

$$\frac{225 - x}{750 - x}=25\%$$

or $$\frac{225 - x}{750 - x}=\frac{1}{4}$$

or $$(225 - x)\times 4 = 750 - x$$

or $$3x = 150$$

or $$x = 50$$ gms

Hence 50 grams of water needs to be evaporated.
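A two-line numeric check of the algebra above, substituting the answer back in (plain Python):

    x = 50
    water, total = 225 - x, 750 - x
    print(water / total)  # 0.25 -> the remaining mixture is 25% water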
2018-12-17 11:41:00
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3988514244556427, "perplexity": 5096.184329287827}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376828507.57/warc/CC-MAIN-20181217113255-20181217135255-00583.warc.gz"}
https://transfer-learning.ai/page/11/
## On-line Optimal Ranging Sensor Deployment for Robotic Exploration

The approach is general for any class of mobile system; we run simulations and experiments with indoor drones. We provide a detailed analysis of the uncertainty of the positioning system while the UWB infrastructure grows. We developed a genetic algorithm that minimizes the deployment of new anchors, saving energy and resources on the mobile robot and maximizing the mission.…

## A New Approach to Complex Dynamic Geofencing for Unmanned Aerial Vehicles

Geofences are proposed as one line of defence to limit UAVs from flying into perimeters of other aircraft and restricted locations. Geofencing algorithms lack accuracy during the calculation of complex geofences, particularly in dynamic urban environments. We propose a new algorithm based on alpha shapes and Voronoi diagrams, which we integrate into an on-drone framework using an open-source mapping database from OpenStreetMap.…

## Minimal Multi Layer Modifications of Deep Neural Networks

Deep neural networks may err and produce incorrect outputs in safety-critical settings, such as autonomous driving, medical diagnosis, and airborne collision avoidance systems. New tool, called 3M-DNN, for repairing a given DNN, which is known to err on some set of inputs.…

## Ranking Facts for Explaining Answers to Elementary Science Questions

Explanations are created from a human-annotated set of nearly 5,000 candidate facts in the WorldTree corpus. Our aim is to obtain better matches for valid facts of an explanation for the correct answer of a question over the available fact candidates.…

## Diameter constrained Steiner tree and related problems

We give a dynamic programming solution to find the minimum cost of a diameter-constrained Steiner tree in the case of directed graphs. Then we show a simple reduction from the undirected version to the directed version to realize an algorithm of similar complexity, i.e,…

## Investigating Man in the Middle based False Data Injection in a Smart Grid Laboratory Environment

The security of energy supply is increasingly threatened by cyber-attacks. Traditional cyber-security measures can be used as mitigation and prevention measures, but their effective use requires a deep understanding of the potential threat landscape and complex attack processes in energy information systems.…

## Learning Realtime One Counter Automata

Our algorithm uses membership and equivalence queries as in Angluin's L* algorithm. In a partial equivalence query, we ask the teacher whether the language of a given finite-state automaton coincides with a counter-bounded subset of the target language. We evaluate an implementation of our algorithm on a number of random benchmarks.…

## A dynamic mode decomposition extension for the forecasting of parametric dynamical systems

Dynamic mode decomposition (DMD) has recently become a popular tool for the non-intrusive analysis of dynamical systems. Exploiting the proper orthogonal decomposition as dimensionality reduction technique, DMD is able to approximate a dynamical system as a sum of (spatial) bases evolving linearly in time.…

## Mobility Label Based Network Hierarchical Mobility Management and Packet Forwarding Architecture

Mobility Label Based Network (MLBN) is a new approach to the network layer mobility management problem that relies solely on MPLS to provide both macro- and micro-mobility for IPv4 and IPv6 mobile hosts and routers. This new approach does not rely on the existing IP mobility management protocols such as Mobile IP and is based on the combination of Multi-Protocol BGP (MP-BGP) and MPLS.…

## Learning to Learn a Cold start Sequential Recommender

The cold-start recommendation is an urgent problem in contemporary online applications. Many data-driven algorithms, such as the widely used matrix factorization, underperform because of data sparseness. This work adopts the idea of meta-learning to solve the user's cold start recommendation problem.…

## Algorithms Using Local Graph Features to Predict Epidemics

We study a simple model of epidemics where an infected node transmits the infection to its neighbors independently with probability $p$. This is also known as the independent cascade or Susceptible-Infected-Recovered (SIR) model. The size of an outbreak in this model is closely related to that of the giant connected component in "edge percolation", where each edge of the graph is kept independently.…

## Understanding Players Interaction Patterns with Mobile Game App UI via Visualizations

Understanding how players interact with the mobile game app on smartphone devices is important for game experts to develop and refine their app products. Visualizing the recorded logs of users' UI operations is a promising way for quantitatively understanding the interaction patterns.…

## The complexity of the Quantified CSP having the polynomially generated powers property

The Quantified CSP can be reduced to the CSP over the same constraint language with constants. The only limitation of this reduction is that it is applicable only for the constraint languages with constants. We drastically simplified the reduction and generalized it for constraint languages without constants.…

## Dynamic Tolling for Inducing Socially Optimal Traffic Loads

How to design tolls that induce socially optimal traffic loads? We propose a two-timescale discrete-time stochastic dynamics that adaptively adjusts the toll prices on a parallel link network. We show that the loads and the tolls concentrate in a neighborhood of the fixed point, which corresponds to the socially optimal load and toll price.…

## RL4RS A Real World Benchmark for Reinforcement Learning based Recommender System

Reinforcement learning based recommender systems (RL-based RS) aim at learning a good policy from a batch of collected data. However, current RL-based RS benchmarks commonly have a large reality gap, because they involve artificial RL datasets or semi-simulated RS datasets.…
2021-12-02 01:54:03
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3664809465408325, "perplexity": 2480.7223972523284}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964361064.58/warc/CC-MAIN-20211201234046-20211202024046-00152.warc.gz"}
https://snat.co.uk/tag/imagine
# Why we can’t divide by 0 Quite rarely do I ever do anything about mathematics on Tweaked for your Pleasure but today I got e-mailed a question and although I rarely do requests, I figured I might as well ask this. Basically, the e-mail was as follows: Hi Matt just wondered as you are good at maths why i am told 0 divided by anything is not possible. Mind explaing ? Thanks Bad English aside, the answer is below. The simple answer is because we simply can not define the outcome as we can not decide on any rules that will stick for all inputs. So for example we could have; • $\frac 1 0 =5$ There is a rule within arithmetic that means $(\frac b a) = b$ and following this rule and what we defined a minute ago that if we do; This article was posted on Need to quote me? Ellis, M (2011) Why we can’t divide by 0. [online] Tweaked for your Pleasure. Available at: https://snat.co.uk/rants/why-we-cant-divide-by-0.html [Accessed 12 Jul 2020] # Reason for hating most forum users… Imagine for a few minutes. You are having a good chat on a forum about something and some user comes along and ruin it all. Yes, I am talking about that 12 year old child whom believes he had worked out the whole of life and knows everything but once you challege the person, it always goes something like this …. (the follow is an example, not a real life one though). User 1 Wrote: Well, you see that it ain’t even possible at this time to prove. To be honest, if you can tell me on how it all relates together then I would be very interested in reading it. This article was posted on Need to quote me? Ellis, M (2009) Reason for hating most forum users…. [online] Tweaked for your Pleasure. Available at: https://snat.co.uk/rants/reason-for-hating-most-forum-users.html [Accessed 12 Jul 2020]
2020-07-12 06:27:33
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 4, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.40885594487190247, "perplexity": 1535.4218442800532}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593657131734.89/warc/CC-MAIN-20200712051058-20200712081058-00322.warc.gz"}
https://codegolf.stackexchange.com/questions/797/roman-numeral-converter-function
# Roman numeral converter function Create the shortest function to convert a string of Roman numerals to an integer. The rules for each letter can be found at the Wikipedia page. Letters above 1,000 will have parentheses placed around them to signal their higher value. Requirements: • Must convert Roman numerals 1 to 500,000 • Must complete in less than a minute • Does not use built-in functions that could provide an advantage (Ex: A function that converts Roman numerals to integers) • Is a function The function does not need to support fractions. Any invalid input should return the number 0. Shortest function wins. In the case of a tie, the one with the most votes wins. ## Test Cases Input III Output 3 Input IIII Output 0 Input XVI Output 16 Input (C)(D)(L)MMI Output 452001 • Unless I'm missing something, (C)(D)(L)MMI would be 452,001. How did you get your value? Additionally, does this need to support "improper" forms (e.g. IC instead of XCIX)? – Anon. Feb 10 '11 at 2:58 • Improper to me means illegal and thus should return 0. – Martin York Feb 10 '11 at 19:46 • @Anon: The number was a mistype from when I changed the original third test case. It does not need to support improper forms, as it would be considered invalid input. – Kevin Brown Feb 10 '11 at 20:21 • Standard practice (and the spec of this question's duplicate) is for invalid input to be undefined behavior. Since this question is four years old and had only one answer, should we change the requirements? – lirtosiast Jun 4 '15 at 23:38 • @KevinBrown I don't see a source or explanation for the parenthesised letters. I think you should change the spec to match codegolf.stackexchange.com/q/16254/43319 and then the answers from there can be migrated here. – Adám Feb 14 '17 at 11:14 ### C++: 914 855 chars #include<map> #include<string> #include<iostream> #include<sstream> #define I istream #define T(C) if(C)throw int(1); #define X(c,v,f,m) D[c]=v;P[c]=D[f];M[c]=m; #define S second using namespace std;typedef map<char,int>R;R D,P,M;struct U{U():t(0),l(0),a(0){}int t,l,a;operator int(){return t+l;}I&d(I&s){char c,b;s>>c;if(c=='('){s>>c>>b;T(b!=')')c+=32;}if(s){R::iterator f=D.find(c);T(f==D.end())if(P[c]==l){l=f->S-l;a=0;}else{T(l&&(f->S>l))a=l==f->S?a+1:1;T(a>M[c])t+=l;l=f->S;}}return s;}};I&operator>>(I&s,U&d){return d.d(s);}int main(){D[' ']=-1;X(73,1,32,3)X(86,5,73,1)X(88,10,73,3)X(76,50,88,1)X(67,100,88,3)X(68,500,67,1)X(77,1000,67,3)X(118,5000,77,1)X(120,10000,77,3)X(108,50000,120,1)X(99,100000,120,3)X(100,500000,99,1)X(109,1000000,99,3)string w;while(cin>>w){try{stringstream s(w);U c;while(s>>c);cout<<c<<"\n";}catch(int x){cout<<"0\n";}}} It could be compressed further. 
> ./a.exe III 3 IIII 0 XVI 16 (C)(D)(L)MMI 452001 Slightly nicer formatting: 1582 char #include<map> #include<string> #include<iostream> #include<sstream> #define I istream #define T(C) if(C)throw int(1); #define X(c,v,f,m) D[c]=v;P[c]=D[f];M[c]=m; #define S second using namespace std; typedef map<char,int> R; R D,P,M; struct U { U(): t(0), l(0), a(0) {} int t,l,a; operator int() { return t + l; } I& d(I& s) { char c,b; s >> c; if (c == '(') { s >> c >> b; T(b != ')') c = tolower(c); } if (s) { R::iterator f = D.find(c); T(f == D.end()) if (P[c] == l) { l = f->S - l; a = 0; } else { T(l&&(f->S > l)) a=l==f->S?a+1:1; T(a>M[c]) t += l; l = f->S; } } return s; } }; I& operator>>(I& s,U& d) { return d.d(s); } int main() { D[' ']=-1; X(73,1,32,3) X(86,5,73,1) X(88,10,73,3) X(76,50,88,1) X(67,100,88,3) X(68,500,67,1) X(77,1000,67,3) X(118,5000,77,1) X(120,10000,77,3) X(108,50000,120,1) X(99,100000,120,3) X(100,500000,99,1) X(109,1000000,99,3) string w; while(cin >> w) { try { stringstream s(w); U c; while(s >> c); cout << c << "\n"; } catch(int x) { cout << "0\n"; } } } • I don't think you need a space between the macro functions and their definitions. – Adalynn Aug 12 '18 at 17:15 # Javascript, 317 chars function f(s){for(r=/\(?(.\)?)/g,t=e=0;a=r.exec(s);l=a[0].length,d='IXCMVLD'.indexOf(a[1][0]),e=e||d<0||l==2||d*4+l==3,t+='+'+(d>3?5:1)*Math.pow(10,d%4+3*(l>1)));t=t&&t.replace(/1(0*).(10|5)\1(?!0)/g,'$2$1-1$1');return e||/[^0](0*)\+(10|5)\1/.test(t)||/(\+10*)\1{3}(?!-)/.test(t)||/-(10*)\+\1(?!-)/.test(t)?0:eval(t)} Explanation: function f(s){ // iterate over every character grabbing parens along the way for(r=/\(?(.\)?)/g,t=e=0;a=r.exec(s); // get a numerical value for each numeral and join together in a string l=a[0].length, d='IXCMVLD'.indexOf(a[1][0]), e=e||d<0||l==2||d*4+l==3, // find invalid characters, and parens t+='+'+(d>3?5:1)*Math.pow(10,d%4+3*(l>1)) ); // reorder and subtract to fix IV, IX and the like t=t&&t.replace(/1(0*).(10|5)\1(?!0)/g,'$2$1-1$1'); return e|| /[^0](0*)\+(10|5)\1/.test(t)|| // find VV,IIV,IC,... /(\+10*)\1{3}(?!-)/.test(t)|| // find IIII,... but not XXXIX /-(10*)\+\1(?!-)/.test(t) // find IVI,... but not XCIX ?0:eval(t) } Without error detection it is only 180 chars function g(s){for(r=/\(?(.\)?)/g,t=0;a=r.exec(s);d='IXCMVLD'.indexOf(a[1][0]),t+='+'+(d>3?5:1)+'0'.repeat(d%4+3*(a[1].length>1)));return eval(t.replace(/(1(0*).(10|5)\2)/g,'-$1'))} This works the same way, but here is better formatting: function g(s){ for(r=/\(?(.\)?)/g,t=0;a=r.exec(s); d='IXCMVLD'.indexOf(a[1][0]), t+='+'+(d>3?5:1)+'0'.repeat(d%4+3*(a[1].length>1)) ); return eval(t.replace(/(1(0*).(10|5)\2)/g,'-$1')) } # Python3 - 144 chars Follows this question's rules, but it's closed, so doesn't check for invalid input or support brackets, but thought I'd answer anyway. def r(s): if not s:return 0 l,v=[(s.split(x,1),6-i) for i,x in enumerate('MDCLXVI') if x in s][0] return 10**(v//2)*(v%2*4+1)-r(l[0])+r(l[1]) Explanation: This algorithm recursively finds the next largest letter, splits the string around it, then subtracts the left side's value and adds the right side's value. Ungolfed: def roman(string): # base case. ends the recursion when either side string is empty. if not string: return 0 split_list, value = [ # splits the string with the leftmost letter of 'MDCLXVI' successively. # we take the first split string in the list (ignoring ones that didn't split) # and use [0] and [1] as the left and right hand side parts to recurse.
# stores the split string ('1' means first split only), and the index of the # letter used to split it in a tuple. ( string.split(letter, 1), 6-index ) # loops through each letter used in roman numerals. for index, letter in enumerate('MDCLXVI') # only stores split strings which actually split if letter in string # get the first split string tuple (so it was split with the highest value letter). ][0] # takes the index of roman numeral letter and turns into its value (0->1, 1->5, 2->10). return 10**(value//2)*(value%2*4+1) # subtracts the value of the left hand side. - roman(split_list[0]) # adds the value of the right hand side. + roman(split_list[1]) # J, 43 bytes 1#.2([*_1^<)/\0,~(*/\1,6$5 2){~'IVXLCDM'&i. Try it online!
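A quick sanity check of the golfed Python answer against paren-free numerals (my test harness, not part of the original answer):

    def r(s):
        if not s: return 0
        l, v = [(s.split(x, 1), 6 - i) for i, x in enumerate('MDCLXVI') if x in s][0]
        return 10**(v // 2) * (v % 2 * 4 + 1) - r(l[0]) + r(l[1])

    for numeral, expected in [('III', 3), ('XVI', 16), ('IV', 4), ('MMXXI', 2021)]:
        assert r(numeral) == expected, (numeral, r(numeral))
    print('all checks passed')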
2021-07-25 06:39:27
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3628253638744354, "perplexity": 8822.65067556951}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046151638.93/warc/CC-MAIN-20210725045638-20210725075638-00432.warc.gz"}
http://mathhelpforum.com/algebra/25504-word-problem.html
# Math Help - word problem 1. ## word problem I can't figure out how to write this problem out, so that I can solve it. Here it is: Three times a certain number is as much less than 55 as 4 times the number exceeds 50. What is the number? Can someone help me please? Thanks. 2. Originally Posted by jesuslover I can't figure out how to write this problem out, so that I can solve it. Here it is: Three times a certain number is as much less than 55 as 4 times the number exceeds 50. What is the number? Can someone help me please? Thanks. What does "as much less than" mean? Perhaps, if you reworded your problem, then we can answer you. This is the best I could come up with: $55-3x = 4x-50$ $x = \frac{105}{7} \Rightarrow x = 15$ 3. Originally Posted by jesuslover I can't figure out how to write this problem out, so that I can solve it. Here it is: Three times a certain number is as much less than 55 as 4 times the number exceeds 50. What is the number? Can someone help me please? Thanks. Let the number be $x$ then the amount that three times the number is less than 55 is: $55 - 3x$ and the amount that 4 times the number exceeds 50 is: $4x - 50$ we are told these are the same, that is: $55 - 3x = 4x - 50$ now solve for $x$ EDIT: Ah, Colby beat me to it and did it for you 4. Originally Posted by jesuslover I can't figure out how to write this problem out, so that I can solve it. Here it is: Three times a certain number is as much less than 55 as 4 times the number exceeds 50. What is the number? Can someone help me please? Thanks. It's also interesting to note that you could interpret the 'amount less' or the 'amount exceeding' as negative values $\Rightarrow 3x>55$ and $50>4x$ so you get $3x-55=50-4x$ But this still gets you to $x = 15$ Note that you cannot allow the 'amount 3x is less than 55' and the 'amount 4x exceeds 50' to alternate in sign, as this would lead to the incorrect equations: $3x-55=4x-50$ or $55-3x=50-4x$ As the LHS's sign is not equal to the RHS's sign, this cannot be possible. 5. Hello, jesuslover! Sometimes, "baby talk" will help . . . Three times a certain number is as much less than 55 as 4 times the number exceeds 50. What is the number? Let $x$ = the number. "Three times $x$ is as much less than 55" tells us that $3x$ is less than 55. How much less? . . The difference is: . $55-3x$ "Four times the number exceeds 50" tells us that $4x$ is greater than 50. How much more? . . The amount is: . $4x - 50$ We are told that these amounts are equal. . . There is our equation! . $55-3x \:=\:4x-50$ Go for it! 6. Thank-you. I had 3n-55 =4n+50, which didn't come out right. I see where I messed up. Thanks a lot. 7. man, we're really beating this problem to death aren't we? 8. Originally Posted by Jhevon man, we're really beating this problem to death aren't we? It seems that it's a race to get to every question first.
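For completeness, a one-line numeric check of the shared answer (plain Python):

    x = 15
    print(55 - 3 * x == 4 * x - 50)  # True: both differences equal 10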
2015-03-28 04:28:45
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 20, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8738169074058533, "perplexity": 534.6297471872248}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-14/segments/1427131297195.79/warc/CC-MAIN-20150323172137-00058-ip-10-168-14-71.ec2.internal.warc.gz"}
https://encyclopediaofmath.org/wiki/Wall_group
# Wall group

An Abelian group associated with a ring with an involution which is an anti-isomorphism. In particular, it is defined for any group ring $\mathbf Z [ \pi _ {1} ( X)]$, where $\pi _ {1} ( X)$ is the fundamental group of a space. If $X$ is a Poincaré complex, then for a bordism class $\alpha$ in $\Omega _ {*} ( X, \nu )$ there is an obstruction in this group to the existence of a simple homotopy equivalence in $\alpha$. This obstruction is called the Wall invariant, cf. [1].

Let $R$ be a ring with an involution $R \rightarrow R$ which is an anti-isomorphism, i.e. $\overline{ab} = \bar{b} \, \bar{a}$. If $P$ is a left $R$-module, then $\mathop{\rm Hom} _ {R} ( P, R)$ is a left $R$-module relative to the action $( af ) ( x) = f ( x) \bar{a}$, $f \in \mathop{\rm Hom} _ {R} ( P, R)$, $a \in R$, $x \in P$. This module is denoted by $P ^ {*}$. For a finitely-generated projective $R$-module $P$ there is an isomorphism $P \rightarrow P ^ {**}$: $x \mapsto ( f \mapsto \overline{f ( x) } )$, and one may identify $P$ and $P ^ {**}$ using this isomorphism.

A quadratic $(- 1) ^ {k}$-form over a ring $R$ with an involution is a pair $( P, \phi )$, where $P$ is a finitely-generated projective $R$-module and $\phi : P \rightarrow P ^ {*}$ is a homomorphism such that $\phi = (- 1) ^ {k} \phi ^ {*}$. A morphism $f: ( P, \phi ) \rightarrow ( Q, \psi )$ of forms is a homomorphism $f: P \rightarrow Q$ such that $f ^ {*} \psi f = \phi$. If $\phi$ is an isomorphism, then the form $( P, \phi )$ is said to be non-degenerate. A Lagrange plane of a non-degenerate form is a direct summand $L \subset P$ for which $L = \mathop{\rm Ann} \phi ( L)$. If $L \subset P$ is a direct summand such that $L \subset \mathop{\rm Ann} \phi ( L)$, then $L$ is called a subLagrange plane. Two Lagrange planes $L, G$ of a form $( P, \phi )$ are called complementary if $L + G = P$ and $L \cap G = \{ 0 \}$.

Let $L$ be a projective $R$-module. The non-degenerate $(- 1) ^ {k}$-form

$$H _ {(- 1) ^ {k} } ( L) = \left ( L \oplus L ^ {*} ,\ \begin{pmatrix} 0 & 1 \\ (- 1) ^ {k} & 0 \end{pmatrix} \right )$$

is called Hamiltonian, and $L, L ^ {*} \subset L \oplus L ^ {*}$ are called its complementary Lagrange planes. If $L$ is a Lagrange plane of the form $( P, \phi )$, then the form is isomorphic to the Hamiltonian form $H _ {(- 1) ^ {k} } ( L)$. The choice of a Lagrange plane complementary to $L$ is equivalent to the choice of an isomorphism $( P, \phi ) \rightarrow H _ {(- 1) ^ {k} } ( L)$, and this complementary plane can be identified with $L ^ {*}$.

Let $U _ {2k} ( R )$ be the Abelian group generated by the equivalence classes (under isomorphism) of non-degenerate quadratic $(- 1) ^ {k}$-forms $( P, \phi )$ with the relations:

1) $[( P, \phi )] + [( Q, \psi )] = [( P \oplus Q, \phi \oplus \psi )]$;

2) $[( P, \phi )] = 0$ if $P$ has a Lagrange plane.

A triple $( H; F, L)$ consisting of a non-degenerate $(- 1) ^ {k}$-form $H$ and a pair of Lagrange planes is called a $(- 1) ^ {k}$-formation. A formation is said to be trivial if $F$ and $L$ are complementary, and elementary if there exists a Lagrange plane of $H$ which is complementary to both $F$ and $L$. The trivial formation $( H _ {(- 1) ^ {k} } ( G); G, G ^ {*} )$ is called Hamiltonian. By an isomorphism of formations, $f: ( H; F, L) \rightarrow ( H _ {1} ; F _ {1} , L _ {1} )$, one understands an isomorphism $f: H \rightarrow H _ {1}$ of forms for which $f ( F ) = F _ {1}$, $f ( L) = L _ {1}$. Every trivial formation is isomorphic to the Hamiltonian one.
Let $U _ {2k + 1 } ( R )$ be the Abelian group generated by the equivalence classes (under isomorphism) of $(- 1) ^ {k}$-formations with the following relations:

a) $[( H; F, L)] \oplus [( H _ {1} ; F _ {1} , L _ {1} )] = [( H \oplus H _ {1} ; F \oplus F _ {1} , L \oplus L _ {1} )]$;

b) $[( H; F, L)] = 0$ if the formation is elementary or trivial.

The groups $U _ {n} ( R)$ are called the Wall groups of the ring $R$.

#### References

[1] C.T.C. Wall, "Surgery on compact manifolds", Acad. Press (1970) MR0431216 Zbl 0219.57024
[2] A.A. Ranicki, "The algebraic theory of surgery I", Proc. London Math. Soc., 40 : 1 (1980) pp. 87–192 MR0560997 MR0566491 Zbl 0471.57010

In the case of $R = \mathbf Z [ \pi _ {1} ( X) ]$ and the Wall surgery obstruction invariant, the involution on $R$ is given by $g \mapsto w( g) g ^ {-1}$, $g \in \pi _ {1} ( X)$, where the group homomorphism $w : \pi _ {1} ( X) \rightarrow \{ 1, - 1 \}$ is given by the first Stiefel–Whitney class of the bundle $\nu$ in the bordism class $\Omega _ {*} ( X, \nu )$. The Wall groups $U _ {n} ( R)$ are more often called $L$-groups and denoted by $L _ {n} ( R)$; their theory is referred to as $L$-theory, which is much related to $K$-theory. (Indeed, some authors speak of the $K$-theory of forms, [a2].) The $L$-groups are four-periodic, i.e. $L _ {n} ( R) \simeq L _ {n + 4} ( R)$. $L$-groups can be defined in more general situations and there are a number of somewhat different varieties of $L$-groups, cf. e.g. [a1], [a2].
2022-07-06 08:01:46
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9517830610275269, "perplexity": 199.41692735114506}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656104668059.88/warc/CC-MAIN-20220706060502-20220706090502-00063.warc.gz"}
http://dimacs.rutgers.edu/archive/Events/1997/Titles/1997/Tardos3.html
# DIMACS Discrete Math/Theory of Computing Seminar

## Title: Is linear hashing good?

Gabor Tardos

## Place: DIMACS Center, CoRE Building, Seminar Room 431 Rutgers University

## Time: 4:30 PM Tuesday, February 11, 1997

Abstract: Performance of algorithms using hashing is often related to the largest number of keys mapped by the hash function to the same value. This motivates the study of the expected size of the largest bucket for the worst-case set of keys of a given size. In this work, we study the performance of linear transformations as a family of hash functions by this measure. Our main result is that linear mappings over GF[2] are excellent hash functions. They almost match the performance of the best possible family of hash functions. (But, of course, they are much more efficient.) A surprising additional result we have implies that linear mappings over big finite fields (say modulo a prime) are the other way around. They are almost as bad as possible for any universal family of hash functions. The main tool in proving the main result may be of independent interest. We show that a random linear mapping applied to a set of cardinality $c n\log n$ covers all the elements of a size $n$ range with probability at least $1-\epsilon_c$. This is joint work with Noga Alon, Martin Dietzfelbinger, Peter Bro Miltersen, and Erez Petrank.
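To make the quantity being studied concrete, here is a small Python simulation (my illustration, not from the abstract) that hashes a set of keys with a random linear map over GF(2) and reports the largest bucket:

    import random
    from collections import Counter

    def random_linear_hash(in_bits, out_bits, rng):
        # one random GF(2) row vector per output bit
        rows = [rng.getrandbits(in_bits) for _ in range(out_bits)]
        def h(key):
            out = 0
            for i, row in enumerate(rows):
                out |= (bin(row & key).count("1") & 1) << i  # parity of <row, key>
            return out
        return h

    rng = random.Random(0)
    keys = range(1, 1 << 12)             # every nonzero 12-bit key
    h = random_linear_hash(12, 8, rng)   # hash into 2^8 = 256 buckets
    counts = Counter(h(k) for k in keys)
    print("largest bucket:", max(counts.values()))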
2021-11-29 21:40:12
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5367485284805298, "perplexity": 634.7944657592391}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964358842.4/warc/CC-MAIN-20211129194957-20211129224957-00169.warc.gz"}
https://www.math.ucdavis.edu/research/seminars/?talk_id=1445
# Mathematics Colloquia and Seminars A branching diffusion is a random process where trajectories follow, independently, a given diffusion law and a given fission law. On a $D$-dimensional hyperbolic (Lobachevsky) space, every individual trajectory of a homogeneous diffusion is attracted to a single (random) point of the absolute. However, if the fission rate is high (i.e., new branches are produced faster than they are attracted to the absolute), the whole random tree will withstand attraction (e.g., will visit every open set at indefinitely large times with probability one). The collection of limiting points for the random tree on the absolute can be characterised by its Hausdorff dimension (HD): under natural assumptions the HD of the limiting set equals, with probability one, a constant, and this constant can be calculated. The result is that for a low fission rate the HD increases from 0 to $(D-1)/2$ and then jumps to $D-1$, which exhibits a curious phase transition. The talk will discuss some recent results in this direction.
2020-01-29 01:36:40
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7770766019821167, "perplexity": 668.5745082418796}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579251783621.89/warc/CC-MAIN-20200129010251-20200129040251-00360.warc.gz"}
https://www.hackerearth.com/practice/notes/prasoon2211/mercurial-vs-git-scaling-and-architecture/
Mercurial vs Git: Scaling and Architecture

Mercurial Git

A long, somewhat technical post on why mercurial is better at scaling than git. Some history, too.

Everywhere I look, I see people using git. From experienced programmers to newbie programming enthusiasts, everyone seems to be using git. And GitHub, for that matter. In fact, we may even conclude that the popularity enjoyed by git is in large part due to the popularity of GitHub. I mean, we can all admit that all the cool new things in FOSS are being published on GitHub. This just means that anyone willing to contribute needs to know git, at least at a beginner level. So, it's obvious to anyone that learning git as your first version control system is a good investment.

Now, let me make it clear that despite all appearances to the contrary, I am in no way criticising git or GitHub. In fact, I love git. Seriously. People may blame git (no pun intended) for a steep learning curve but I don't. It's a great piece of software and I thank Linus Torvalds for coming up with the idea, much as I thank him for linux. I also love GH a lot. So, don't go hating, now. Alright, let's get back to the topic. First, a touch of history.

## How it all began

I'm sure most of you know (especially since I just mentioned it) that Linus Torvalds created git. What you might not know, though, is why he did it. So, let's walk down memory lane for a bit. The linux kernel, often affectionately just called 'the kernel' by the FOSS community, has been in active development since 1991. Initially, the developers threw around patches through emails, reviewed through emails, and pretty much did everything via emails. There was pretty much no version control to speak of, as such, and it was said that Linus Torvalds didn't scale. Later, the kernel started to be version controlled on a proprietary DVCS called BitKeeper. But, it was not meant to be and they broke up in 2005. Awww

This was when two people decided to make completely new DVCSs, both building on principles established by BitKeeper. These two people were Linus Torvalds, as you already know, and Matt Mackall. Both were adamant on creating a perfect - at least their vision of perfect - DVCS. And, almost ten years later, we have both of them to thank for two absolutely fantastic version control systems - systems so good that they blew away all the competition, commercial or otherwise.

Linus decided to go with C, for reasons of speed, I should think. Matt, however, saw another language, one that was maybe not quite so fast as C, but with tremendous potential. He decided to develop in Python. The problem was well-known - Python was slow. However, with a clever bit of architecture, Matt made sure that this wasn't too big of a problem initially. The advantage, however, was great. Python brought with it an ease of development that drew in lots of new people as volunteers - Python is just that easy. And this decision paid off well, as we will see. Now let's talk some basics. Then, we'll move on to the scaling problems.

## The Differences: Interface

First things first - mercurial is easy for beginners. This, quite mistakenly, makes people believe that it is a dumbed down version of git with lesser functionality, lesser… depth. Not so. Mercurial is every bit as powerful as git and anything that you can do with git, you can almost always do with mercurial. Most of mercurial's interface is pretty much the same as git anyway, with different terminologies.
And so it was that the first time I used mercurial, I found it extremely easy to use. But we're not here to talk of the difference in the interface. So, let's talk behind-the-scenes.

## The Differences: Architecture

Before we start, let me warn you - this is not a detailed article for reading about the architecture of mercurial or git. I'll mostly just touch the surface of things.

Git

First, let's talk about git. If you've ever tried to look under-the-covers of a git repository, you'll know of git objects. Git and mercurial both use a directed acyclic graph (DAG) to represent history, with commits - or changesets, as they are called in mercurial - as nodes of the graph. Also, if you've read the first chapter of the Pro Git book, you will know that git stores a snapshot of your repository whenever you make a commit, instead of the usual delta (or diff) that the other systems do. To proceed, we need to know what a commit is, exactly. Well, this diagram may help (image taken from http://skookum.com):

So, a commit is a file that stores a bunch of data (author, commit message, parent commit) along with a reference to a tree object. A tree object stores references to other tree objects and blob objects (analogous to the usual directory/file paradigm in a traditional file system). These blobs are the real building blocks of a git repository, actually. Any modified/new file is added to .git/objects as a blob object. So, what of unchanged files? Well, the parent tree object stores a reference to the previous version of the blob object, as opposed to creating a new one. Again, an image might help:

Here, you can see that two tree objects refer to the same two blobs. And so, this is how git works internally. This appears to be a very good model at first - and in fact, it is. Git's architecture is designed to provide speed, and the way git stores things makes both storage and retrieval blazingly fast. However, as your project grows large - especially if there are a lot of commits and a lot of authors - more and more git objects are created and stored as files. Often, several objects are created for a single commit. That's when the problems begin - your project gets too large and git becomes slowww.

How slow? Well, how about getting the result of a git status in 30 seconds instead of the usual microsecond response? Yeah, I know - that would suck pretty bad. But that poses a question - how large does your codebase need to be? A 100,000 lines? A million lines? 10 million lines? Well, the linux kernel is somewhere around 5 million LOC and git works just fine with it. So again, how big? Facebook is 62 million lines of code and git has become slow for the engineers at Facebook. So, they decided to make mercurial faster than git. And they succeeded, as the Facebook blog post shows. In fact, Facebook has assigned a few people to continue bettering mercurial, improving speed, reliability and usability, which helps both the FOSS community and the devs at Facebook. It's a clichéd win-win situation, really.

Mercurial

Anyway, now let's come to mercurial's architecture. Mercurial is somewhat odd in its choice of storing data - in that it does not use either the delta or the snapshot approach exclusively. It tries to strike a balance between the two. I'll explain this in a bit. As I've said before, git uses git objects - blobs, trees, commits - as backend storage. Also, git creates a new snapshot for every changed file. Mercurial, however, uses a singularly curious storage system - the Revlog.
This is quite an ingenious way of storing data, actually. Here's how:

• Every tracked file corresponds to two different files in the storage backend - an index file and a data file.
• The data file (a file with a .d extension) contains zlib-compressed binary deltas for every revision of the file - it's an append-only data structure, which means that every new delta gets appended at the end of the .d file. There is a catch to this, though.
• The .d file doesn't always store binary deltas. Sometimes, a complete snapshot is stored instead. So, how does mercurial decide between the two? Well, if the compressed delta is smaller than a certain calculated threshold for the file, then it is appended as a binary delta. If, however, the delta - or the stream of deltas that need to be applied to regain the revision - is larger than this threshold, then the entire compressed file snapshot is appended to the .d file. This threshold that I keep talking about is usually the size of the source file (or maybe even the .d file, for all I know), from what I've been able to gather, though I'm not necessarily certain.
• For every commit (changeset in mercurial terminology), the index file stores the offset - the byte from which to begin reading in the .d file - and the length - the amount of data to read - from the .d file. This gives us the binary delta for a given changeset. Applying several such deltas in a chain gives us the file at that changeset.

So, that's the revlog storage format for you. Clearly, this means that there aren't a gazillion files in the storage backend for your repository. Couple this with fast read operations (because of the index file) and you've got a system as fast as git, albeit git still appears faster at write operations. However, the analysis operations - like checking status, checking out a revision, getting diffs and the like - are faster. Also, no matter how large your history grows, there are always just two files in the store for every file in your working repository. This means that mercurial doesn't need to walk over those gazillion files - but git does. And this is what makes the difference in scaling, all other things having similar performance.

## Conclusions; Opinions

Neither mercurial nor git were built to scale to the level of the Facebook repository. However, in recent years, mercurial has become capable of adapting better to the scalability problem while git has not. I think that this difference stems from the differences in architecture. In light of this, it will be exciting to see how git tackles this problem of scalability, though I scarcely believe that they (the git devs) ever will. Oh well.

Anyway, mercurial is fast now. It wasn't always, though. We have the Facebook devs to thank for a lot of recent advances in mercurial (aside from the open-source community, of course). Also, I think that it was an awesome decision to make mercurial in Python - this means that even C noobs like me can contribute to mercurial. This generates my interest, and that of several others like me, if I can hazard a guess. With the revlog format, mercurial has truly become a viable alternative to git, especially in recent years. It's good to see the development going strong and I hope that we get more good stuff to see in the coming years. I'm out.
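To illustrate the snapshot-vs-delta decision described above, here is a toy Python sketch of a revlog-like append-only store. It is a deliberately simplified illustration of the idea, not mercurial's actual on-disk format: the "delta" here is just a common-prefix length plus the new tail, and the threshold rule simply compares compressed sizes.

    import zlib

    class ToyRevlog:
        """Append-only store where each revision is a zlib-compressed payload:
        either a full snapshot, or a crude delta against the previous revision."""

        def __init__(self):
            self.data = b""      # stands in for the .d file
            self.index = []      # (offset, length, is_snapshot) - stands in for the index file

        def add(self, text: bytes):
            if self.index:
                prev = self.get(len(self.index) - 1)
                k = 0
                while k < min(len(prev), len(text)) and prev[k] == text[k]:
                    k += 1
                delta = zlib.compress(k.to_bytes(4, "big") + text[k:])
            else:
                delta = None
            snap = zlib.compress(text)
            # crude threshold rule: keep the delta only when it is smaller
            if delta is None or len(delta) >= len(snap):
                payload, is_snap = snap, True
            else:
                payload, is_snap = delta, False
            self.index.append((len(self.data), len(payload), is_snap))
            self.data += payload   # append-only, like the .d file

        def get(self, rev: int) -> bytes:
            off, length, is_snap = self.index[rev]
            raw = zlib.decompress(self.data[off:off + length])
            if is_snap:
                return raw
            k = int.from_bytes(raw[:4], "big")
            return self.get(rev - 1)[:k] + raw[4:]   # replay the delta chain

    log = ToyRevlog()
    log.add(b"hello world\n")
    log.add(b"hello world\ngoodbye\n")
    print(log.get(1))  # b'hello world\ngoodbye\n'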
2019-05-26 05:00:47
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.30750560760498047, "perplexity": 1741.641220469163}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-22/segments/1558232258849.89/warc/CC-MAIN-20190526045109-20190526071109-00340.warc.gz"}
https://mathoverflow.net/questions/309912/can-projecting-a-simplex-onto-orthogonal-subspaces-exposes-the-same-vertices-and
# Can projecting a simplex onto orthogonal subspaces expose the same vertices and edges?

Given the regular $n$-dimensional simplex $S\subset\Bbb R^n$ with $n\ge 4$, as well as two orthogonal subspaces $V,W\subset\Bbb R^n$ of dimension $\ge2$ (not necessarily of the same dimension, not necessarily $V\oplus W=\Bbb R^n$). We consider orthogonal projections of $S$ onto these subspaces, denoted by $S_V$ (resp. $S_W$). A face of $S$ is said to be exposed in a projection if it gets mapped onto the boundary of $S_V$ (resp. $S_W$).

Question: Can it happen that $S_V$ and $S_W$ expose all 0-faces (vertices) of $S$, and expose the exact same 1-faces (edges) of $S$?

Especially interesting are examples in which the projections are not neighborly (i.e. not all edges are exposed).

Notes

• The question of which sets of edges of $S$ can be exposed is equivalent to the question of which graphs are the 1-skeletons of polytopes - a famously unsolved problem. This question only asks if a set of edges, if exposable at all, can possibly be exposed by different projections to orthogonal subspaces.
• If $V$ and $W$ are not required to be orthogonal, then this is obviously possible. We can even require the projections $S_V$ and $S_W$ to be not combinatorially equivalent: take two non-equivalent neighborly polytopes with $n$ vertices and represent them as projections of the same $(n-1)$-dimensional simplex.
• It is not sufficient that the projections are combinatorially equivalent. They have to use the exact same 1-faces of $S$. I am not sure whether this is a restriction, since we might be able to apply a symmetry transformation of $S$ to one of the subspaces to make the exposed 1-faces line up. However, it is not clear that this preserves the orthogonality of $V$ and $W$.
• I am aware of the paper [1], which shows that for $n\to\infty$ the projection of $S\subset\Bbb R^n$ onto a randomly chosen $d$-dimensional subspace with $d=\lfloor \delta n\rfloor,\delta\in(0,1)$ is neighborly with probability converging to one. This seems to make it very likely that two such spaces $V$ and $W$ exist. However, this is non-obvious and an explicit example would be welcome. It is because of this paper that I consider it more interesting when the projections are not neighborly.
2019-06-18 15:55:41
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9198631644248962, "perplexity": 207.98848769363582}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560627998755.95/warc/CC-MAIN-20190618143417-20190618165417-00046.warc.gz"}
https://astronomy.stackexchange.com/questions/28067/from-what-distance-would-our-sun-have-an-angular-diameter-of-7-arc-seconds
From what distance would our sun have an angular diameter of 7 arc seconds?

If an observer travels away from our sun, at what distance would our sun occupy 7 arc seconds of space?

• Why do you want 7 arc sec? – peterh - Reinstate Monica Oct 18 '18 at 0:42
• Pro tip: it is really much better if you first try to do something for yourself, and then visit us with the problem you've found. – peterh - Reinstate Monica Oct 18 '18 at 22:35
• @RobJeffries Please don't do that. There were already answers that now no longer match the question. – user1569 Oct 25 '18 at 8:52

2 Answers

From Earth, the Sun is visible at about half a degree across, which is $$\approx$$ 30 arc min = 1800 arc seconds. To get down to 7 arc seconds, you need to go 1800/7 ≈ 257 times farther away, so the result is around 250 AU.

• Using the Sun's radius of 695,000 km, I get a slightly bigger figure. But OP will have to decide whether to use your handy estimate or do the calculations for an exact answer. It's basic high school maths. – Chappo Hasn't Forgotten Monica Oct 19 '18 at 6:17
• @Chappo Right - the goal of my answer was also to show how the OP can calculate such things in their head, in seconds. This is also why Jan Doggen's edit is sub-optimal, although I decided to leave it as he made it (probably he also didn't realize that part of the essence of the post is to make clear how to calculate it quickly). – peterh - Reinstate Monica Oct 19 '18 at 11:03

While this might seem to be strictly a math problem, it's really loaded with astronomy, so let's see what we can learn!

If you're far enough away that you can see essentially a full hemisphere (which you can't if you're close), the apparent angular width is twice the half-width, and that's given by

$$\theta_{HW} = \arcsin\frac{r}{R} \approx \frac{r}{R}$$

for small angles, where $$r$$ is the physical radius of the object and $$R$$ is the distance from the viewer to its center. This is essentially the same as @peterh's method.

There are (at least) two problems with trying to talk about the radius of the Sun:

1. The Sun is oblate
2. The Sun is diffuse; it doesn't have a well-defined edge

Let's do #2 first. There is a formal working definition for the optical edge of the Sun, and this excellent answer to the question How do you define the diameter of the Sun states:

Most literature will define the diameter of the Sun up to the photosphere, the layer of the solar atmosphere you would see if you were to observe the Sun in white light. The base of the photosphere is defined as the region where the optical depth is around 2/3, or the region where the plasma becomes transparent to most optical light wavelengths. Of course the true edge of the solar atmosphere could be considered as the heliopause, where the direct influence of the Sun's magnetic field and solar wind end and interstellar space begins.

With that definition of the edge, let's look at the shape of the Sun. Wikipedia gives both 695,700 and 696,392 km for the equatorial radius of the Sun, sourcing from IAU 2015 Resolution B3... and Measuring the Solar Radius from Space during the 2003 and 2006 Mercury Transits respectively. Let's use 695,700 km because it's the one that I see most often and "because IAU". The Wikipedia article gives a flattening of 9E-06, which makes the polar radius only about 10 parts per million smaller, a much smaller equator-to-pole difference than I remembered it to be. I guess we can ignore it after all!
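The two estimates above are easy to cross-check numerically. Here is a short MATLAB snippet (an addition of this edit, using the 695,700 km radius settled on above) that evaluates both the exact arcsin formula and the small-angle shortcut:

r  = 695700;                  % solar radius, km
AU = 1.496e8;                 % kilometers per AU
theta = 7/3600 * pi/180;      % 7 arcseconds in radians
R_exact = r/sin(theta/2);     % from theta = 2*arcsin(r/R)
R_small = 2*r/theta;          % small-angle approximation
fprintf('%.1f AU (small-angle: %.1f AU)\n', R_exact/AU, R_small/AU)

Both give about 274 AU, consistent with the rough 1800/7 estimate of roughly 250 AU.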
Seen from the Earth, whose orbit takes it from 152.1 million to 147.1 million km from the Sun, the angular width of the Sun therefore varies from about 1887 to 1951 arcseconds (31.4 to 32.5 arcminutes).

At what distance would an object with a radius of 695,700 km have an apparent width of 7 arcseconds (which is 3.39E-05 radians)? Flipping the equation around gives 2×695,700/3.39E-05, or 4.105E+10 kilometers, which is about 274 AU.

What is it like 274 AU from the Sun? In addition to appearing roughly 274 times smaller than it does from Earth, the Sun is 274×274 times dimmer. Using $$2.5 \times \log_{10}(274×274)$$ we get that the Sun would appear about 12.2 magnitudes dimmer than its -26.8 magnitude brightness at 1 AU, or magnitude -14.6, which is still 2 magnitudes brighter than a typical full moon.

Your orbital period around the Sun would be over 4,500 years, and you'd be moving at 1.8 km/sec in that orbit, as opposed to 30 km/sec for the Earth. You'd be well past the Kuiper belt but nowhere near the Oort cloud (this is a simplification, but it will do for now), and you'd be way past the farthest known solar system body as of 2018, V774104.

above: from EarthSky.org's New most distant object in solar system. Image via S. Sheppard / C. Trujillo / D. Tholen / Subaru Telescope / skyandtelescope.com.

And at 274 AU, you'd be much farther from the Sun than any deep space probe as well. Voyagers 1 and 2 are "only" 118 and 142 AU from the Sun now, and New Horizons is only about 42 AU. I don't know the name of the place you'd be, but it looks pretty lonely!

Source

• I wish you could favorite answers; I love the picture there, that's an amazing representation of the Oort cloud. I never really pictured how "thick" it could be. Is that to scale? The Oort cloud is from 1,000 AU to 100,000 AU? – Magic Octopus Urn Oct 19 '18 at 13:02
• @MagicOctopusUrn this is a schematic/cartoon representation only, and the distance scale is logarithmic. Solar system structures this far away (especially the Oort cloud) are inferred and predicted, not seen or measured. It's educated speculation. – uhoh Oct 19 '18 at 13:20
• Fair point; the source does confirm that the estimates are from 1,000 AU to 100,000 AU though. – Magic Octopus Urn Oct 19 '18 at 13:40
• What are your thoughts on this proof, which claims to falsify Edmond Halley's proposition from 1720 that stars appear larger than they are because of an "optical illusion"? youtube.com/watch?v=ivhXqDnrV4o – pol0 Oct 22 '18 at 21:36
• @pol0 consider posting that as a new question? It's not directly related to this answer, and so comments here are not the right place to open up a new discussion. Thanks! – uhoh Oct 23 '18 at 0:57
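For completeness, the brightness and orbit figures quoted in this answer can be reproduced in a few lines of MATLAB (a sketch added in this edit, using round numbers from the answer):

R = 274;                                              % distance in AU
dmag = 2.5*log10(R^2);                                % inverse-square dimming relative to 1 AU
fprintf('apparent magnitude: %.1f\n', -26.8 + dmag)   % about -14.6
fprintf('orbital period: %.0f years\n', R^1.5)        % Kepler's third law, ~4500 yr
fprintf('orbital speed: %.1f km/s\n', 29.8/sqrt(R))   % Earth's ~29.8 km/s scaled down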
2020-10-22 20:27:34
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 5, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5737813711166382, "perplexity": 941.3576446045777}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107880038.27/warc/CC-MAIN-20201022195658-20201022225658-00511.warc.gz"}
http://www.ams.org/mathscinet-getitem?mr=1338675
MathSciNet bibliographic data

MR1338675 (96i:58167) 58G16 (35L70 58E15)

Klainerman, S.; Machedon, M. Finite energy solutions of the Yang-Mills equations in $\bold R^{3+1}$. Ann. of Math. (2) 142 (1995), no. 1, 39–119.

Article

For users without a MathSciNet license, Relay Station allows linking from MR numbers in online mathematical literature directly to electronic journals and original articles. Subscribers receive the added value of full MathSciNet reviews.
2014-09-18 12:28:11
{"extraction_info": {"found_math": true, "script_math_tex": 1, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9980209469795227, "perplexity": 6764.6239633945}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-41/segments/1410657127285.44/warc/CC-MAIN-20140914011207-00184-ip-10-196-40-205.us-west-1.compute.internal.warc.gz"}
https://www.matlabassignmentexperts.com/matlab-tutor.html
# Matlab Tutor

## Peterson N

Master of Architecture, Mechanical Engineering, University of Melbourne, Australia

Profession: Matlab homework help expert and online tutor

Skills: I have more than 10 years of experience as a Matlab tutor. Before I joined Matlab Assignment Experts as an academic assistant, I was a lecturer at a well-known university here in Adelaide, where I taught Simulink and a variety of data analysis classes. Since 2018, I have been offering expert Matlab assignment help online to students, not only on topics related to Simulink but also on data analysis units such as quantitative methods, econometrics, financial forecasting, mathematical statistics, and more. My goal is to help as many students as I can by sharing my knowledge so that they can achieve academic excellence.

Mathematical Computing

clc, clear all, close all

%% Part 1
%% Question 1
syms q p
C(q) = 610 - 0.2*q + 0.15*q^2;  % Cost function
D(p) = 260 - 20*p;              % Demand function
% Revenue is equal to the number of units multiplied by the demand (price) function
R(p) = p*D(p)
% The profit is equal to the Revenue minus the Cost
P(p) = R(p) - subs(C(q), q, D(p))
% The marginal cost is the first derivative of the cost function
MC(q) = diff(C, q)
% The average cost is the Cost function divided by the total units
AC(q) = C(q)/q
% The marginal revenue is the first derivative of the Revenue function
MR(p) = diff(R, p);

%% Question 2
figure
ps = 0:0.1:13;
plot(ps, D(ps), 'linewidth', 2);
xlabel('Price ($)');
ylabel('Demand (q)');
title('Demand vs. Price');
grid on

%% Question 3
figure
subplot(1,2,1)
qs = 1:1:D(0);
plot(qs, C(qs), 'linewidth', 2), grid on
xlabel('Quantity (q)');
ylabel('Cost ($)');
title('Cost, Marginal Cost, Marginal Revenue and Average Cost vs. Quantity');
hold on
plot(qs, MC(qs), 'linewidth', 2);
plot(qs, AC(qs), 'linewidth', 2);
plot(qs, MR(qs), 'linewidth', 2);
legend('Cost', 'Marginal Cost', 'Average Cost', 'Marginal Revenue');
subplot(1,2,2)
plot(qs, MC(qs), 'linewidth', 2), hold on;
plot(qs, AC(qs), 'linewidth', 2);
plot(qs, MR(qs), 'linewidth', 2);
legend('Marginal Cost', 'Average Cost', 'Marginal Revenue');
xlabel('Quantity (q)');
ylabel('Cost ($)');
title('Marginal Cost, Average Cost and Marginal Revenue vs. Quantity');
grid on

%% Question 4
% To calculate the point elasticity of demand, we select 2 points on the curve,
% given as (price, quantity) pairs; for example, the points (8, 100) and (9, 80)
% Percentage changes via the midpoint method
q_change_perc = (80-100)/((80+100)/2) *100;
p_change_perc = (9-8)/((9+8)/2) *100;
p_elast_D = abs(q_change_perc/p_change_perc)
% Because the value is > 1, the demand is elastic at this point
% Now, we will calculate the intervals from P = 0 to 13 where demand is
% elastic and inelastic
pvec = 0:0.01:13;
N = length(pvec);
elast_vec = [];
for i = 1:N-1
    P1 = [pvec(i), D(pvec(i))];      % [price, quantity]
    P2 = [pvec(i+1), D(pvec(i+1))];
    q_change_perc = (P2(2)-P1(2))/((P2(2)+P1(2))/2) *100;
    p_change_perc = (P2(1)-P1(1))/((P2(1)+P1(1))/2) *100;
    p_elast_D = abs(q_change_perc/p_change_perc);
    elast_vec = [elast_vec, p_elast_D];
end
temp = pvec(find(elast_vec > 1));
elastic_interval = [temp(1), temp(end)]
temp = pvec(find(elast_vec < 1));
inelastic_interval = [temp(1), temp(end)]
temp = pvec(find(elast_vec == 1));
if ~isempty(temp)
    unitary_interval = [temp(1), temp(end)]
else
    unitary_interval = []
end

%% Question 6
% The revenue is a downward-opening parabola, so we find its vertex
a = -20; b = 260; c = 0;
pmax = -b/(2*a);
qmax = subs(D, p, pmax);

%% Question 7
% The Average Cost has a term inversely proportional to the quantity.
% To minimize it, we use MATLAB's fmincon with bounds 0 < q < Inf
type ObjectiveAverageCost
[qmin, fval, exitflag] = fmincon('ObjectiveAverageCost', 10, [], [], [], [], 0, Inf);
qmin_int = round(qmin)
pmin = double(solve(D == qmin))
pmin_int = double(solve(D == qmin_int))

%% Question 8
% The profit function is a downward-opening parabola, so the price that
% maximizes it is at its vertex
a = -80; b = 1816;
p_profit_max = -b/(2*a)
q_profit_max = D(p_profit_max)

%% Part 3
%% Question 10
% This time, the supply function is given as a price function of the quantity;
% solving for the quantity as a function of price gives:
S(p) = p - 8;
% The equilibrium point is where demand and supply are equal
p_equil = solve(D == S)
q_equil = D(p_equil)
figure
ps = 0:0.1:20;
plot(ps, D(ps), 'linewidth', 2), hold on
plot(ps, S(ps), 'linewidth', 2)
plot(double(p_equil), double(q_equil), 'r*', 'linewidth', 2);
xlabel('Price ($)');
ylabel('Quantity (q)');
title('Supply and Demand vs. Price');
legend('Demand', 'Supply');
grid on

%% Question 11
% The consumer surplus is the integral of the gap between the Demand curve
% and the equilibrium quantity
CS = int((D(p) - q_equil), p, 0, p_equil)

%% Question 12
% The producer surplus is the integral of the gap between the equilibrium
% quantity and the Supply curve
PS = int((q_equil - S(p)), p, 0, p_equil)

function f = ObjectiveAverageCost(x)
f = 3*x/20 + 610/x - 1/5;   % AC(q) = 0.15q + 610/q - 0.2
end

Numerical Computing in Matlab

% The code you used to get the answer:
n = 10;
% Two guard digits so that rounding of the last displayed digit cannot
% corrupt the digit we read off
digitsWhole = digits(n + 2 + numel(num2str(floor(pi))));
MyPi = vpa(pi);
digits_after = char(rem(MyPi,1));   % the decimal digits, as required
ref = strfind(digits_after, '.');
tenth_digit = str2num(digits_after(ref+n));
% The 10th digit of pi after the decimal is:
fprintf('The 10th digit of pi after the decimal is: %g \n', tenth_digit)
% The value of y didn't appear in the command window because:
% *It was suppressed with a semicolon (;)*

% Your code:
horizontal_vec = [8 2 6 10]

% Your code:
vertical_vec = [5; 9; -9; 0]
% OR: vertical_vec = [5 9 -9 0]'

% Your code:
k = 1;
for i = 98:-1:42
    if mod(i,2) ~= 0 && i ~= 1
        odd(k) = i;
        k = k+1;
    end
end
odd % expected horizontal vector
num2str(odd)

% Your code:
k = 1;
for i = 5:1:101
    if mod(i,2) == 0 && i ~= 1
        even(k) = i;
        k = k+1;
    end
end
even % expected horizontal vector
num2str(even)

% The code you used to get the answer:
X = 10:1.5:175;
Number_of_element_in_X = numel(X);
% The answer is:
fprintf('The answer is: %g \n', Number_of_element_in_X)

% Your code:
n = 36; a = 23; b = 27;
list = linspace(a,b,n)';
num2str(list)

% The code you used to create a 5 x 4 matrix of random numbers between -2 and 5:
a = -2; b = 5; m = 5; n = 4;
Mat_X = a + (b-a).*rand(m,n)
% The code you used to determine the number in the 2nd row and 4th column:
Mat_X(2,4)

% The code you used to create a 3 x 7 matrix of random integers between -8 and -2:
a = -8; b = -2; m = 3; n = 7;   % upper bound is -2, matching the task statement
Mat_Y = randi([a,b], m, n)
% The code you used to slice out the 2nd and 3rd rows:
j = 1;   % keeping only row 1 removes the 2nd and 3rd rows
Mat_Y_new = Mat_Y(j,:)

% The code you used to create a 3 x 4 matrix of 0's:
Mat_A = zeros(3,4);
% The code you used to create a 3 x 4 matrix of 6's:
Mat_B = 6*ones(3,4);
% The code you used to combine them horizontally:
Mat_C = horzcat(Mat_A, Mat_B)
% The code you used to combine them vertically:
Mat_D = vertcat(Mat_A, Mat_B)

% Your code:
m = 3; n = 700;
for i = 1:m
    for j = 1:n
        Mat_E(i,j) = j;
    end
end
Mat_E
% Alternatively:
% Mat_E = [1:700].*ones(3,1)

% Your code:
n = 850;
for j = 1:n
    Mat_F(j,j) = j;   % fills the diagonal; off-diagonal entries default to 0
end
Mat_F;
% Alternatively:
% Mat_F = diag([1:850])
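As a quick sanity check on the fmincon result in Question 7 (a sketch added in this edit, not part of the submitted solution): the minimizer of AC(q) = 0.15q - 0.2 + 610/q is available in closed form by setting AC'(q) = 0.15 - 610/q^2 = 0.

q_check = sqrt(610/0.15);                          % analytic minimizer of the average cost
fprintf('analytic minimizer q = %.2f\n', q_check)  % ~63.77, which fmincon should reproduce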
2022-09-30 07:16:01
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8107106685638428, "perplexity": 5471.564911939909}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335444.58/warc/CC-MAIN-20220930051717-20220930081717-00431.warc.gz"}