Publications

- We find necessary and sufficient conditions for a Lipschitz map $f:E\to X$, $E\subset\mathbb{R}^n$, into a metric space to have image of $k$-dimensional Hausdorff measure zero, $H^k(f(E))=0$. …
- In this paper we prove that every collection of measurable functions $f_\alpha$, $|\alpha|=m$, coincides a.e. with the $m$th order derivatives of a function $g\in C^{m-1}$ whose derivatives of order $m-1$ …
- We introduce a notion of maximal potentials and we prove that they form bounded operators from $L^p$ to the homogeneous Sobolev space $\dot{W}^{1,p}$ for all $n/(n-1)<p<n$. We apply this result to …
- If $M$ is a compact smooth manifold and $X$ is a compact metric space, the Sobolev space $W(M,X)$ is defined through an isometric embedding of $X$ into a Banach space. We prove that the answer to the …
- Using a method of Korobenko, Maldonado and Rios we show a new characterization of doubling metric-measure spaces supporting Poincaré inequalities without assuming a priori that the measure is …
- There is a topological embedding $\iota:\mathbb{S}^1\to\mathbb{R}^5$ such that $\pi_3(\mathbb{R}^5\setminus\iota(\mathbb{S}^1))=0$. Therefore, no $3$-sphere can be linked with $\iota(\mathbb{S}^1)$.
- Let $k>n$ be positive integers. We consider mappings from a subset of $k$-dimensional Euclidean space $\mathbb{R}^k$ to the Heisenberg group $\mathbb{H}^n$ with a variety of metric properties, each of which implies that the …
- We prove that if $M$ and $N$ are Riemannian, oriented $n$-dimensional manifolds without boundary and additionally $N$ is compact, then Sobolev mappings $W^{1,n}(M,N)$ of finite distortion are …
- In this paper, we present a new characterization of the mappings of bounded length distortion (BLD for short). In the original geometric definition it is assumed that a BLD mapping is open, discrete …
- We provide a new and elementary proof for the structure of geodesics in the Heisenberg group $\mathbb{H}^n$. The proof is based on a new isoperimetric inequality for closed curves in …
Gravity itself

@aeronalias is absolutely right. Given the gravitational acceleration of $g=9.81\,\rm m/s^2$ on the ground and a perfectly spherical Earth of radius $R_E=6370\,\rm km$ with homogeneous (at least radially symmetric) density, one can calculate the gravitational acceleration at an altitude of $h=12\,\rm km$ by $$g(h)=g\cdot\frac{R_E^2}{(h+R_E)^2}= 9.773\,\rm m/s^2$$ Expressed in terms of $g$, the difference is $$g_\text{diff} = 0.0368565736\,\rm m/s^2 = 0.003757\,g$$

Centrifugal forces

The question also asks for the centrifugal effect on the aircraft as it travels around the curve of the Earth, which has not been answered yet. The effect is considered small, but compared to the effect on gravity itself, it isn't always. I got some heavy objections to my answer and I have to admit, I really don't see their point. Therefore, I've edited this section and hope this helps.

In general, an object moving on a circular path experiences a centrifugal acceleration pointing away from the center of the circle: $$a_c=\omega^2r=\frac{v^2}{r}$$ $\omega=\frac{\alpha}{t}$ is the angular speed, i.e. the angle $\alpha$ (in radians) the object travels in a given time $t$ (in seconds).

Now let's consider a "perfect" Earth as described above, plus no wind. A balloon hovering stationary over a point at the equator at $12\,\rm km$ altitude will do one revolution ($\alpha=2\pi$ $[=360°]$) in 24 hours, so $\omega=\frac{2\pi}{24\cdot60\cdot60\,\rm s}$. Together with $r=R_E+h$, one gets for the balloon: $$a_{cb}=0.03374061\,\rm m/s^2 = 0.0034394098\,g$$ The circumference of the circle the balloon travels is $2\pi(R_E+h)=40099\,\rm km$.

Now consider an aircraft flying east along the equator at the same altitude at $250\,\rm m/s$ ($900\,\rm km/h$, $485\,\rm kt$) with respect to the surrounding air (keep in mind: no wind). In 24 h, this aircraft travels a distance of $21600\,\rm km$, or 0.539 of the circumference. This means the aircraft does 1.539 revolutions of the circle in 24 h, so its angular speed is $\omega=1.539\cdot\frac{2\pi}{24\cdot60\cdot60\,\rm s}$. Thus, the centrifugal acceleration on the aircraft flying east is $$a_\text{ce} = 0.0799053814\,\rm m/s^2 = 0.0081452988\,g$$ The same way, one can calculate what happens when the aircraft flies west: $\omega=(1-0.539)\cdot\frac{2\pi}{24\cdot60\cdot60\,\rm s}$ $$a_\text{cw} = 0.0071833292\,\rm m/s^2 = 0.0007322456\,g$$

Comparison

Let's put the values together to compare them. I've also added how much lighter a 100 kg (220 lb) person would feel due to each effect:

| effect | "weight loss" |
| --- | --- |
| $g_\text{diff} = 0.0368565736\,\rm m/s^2 = 0.003757\,g$ | 376 gram (0.829 lb) |
| $a_{cb} = 0.03374061\,\rm m/s^2 = 0.0034394098\,g$ | 344 gram (0.758 lb) |
| $a_\text{ce} = 0.0799053814\,\rm m/s^2 = 0.0081452988\,g$ | 815 gram (1.797 lb) |
| $a_\text{cw} = 0.0071833292\,\rm m/s^2 = 0.0007322456\,g$ | 73 gram (0.161 lb) |

Note: The 100 kg is what a scale at the North Pole (i.e. without any centrifugal effect) shows. The person already feels 344 g lighter on the ground at the equator, and the balloon doesn't change this (much). But moving east/west has a larger effect on the weight than gravity alone. A person flying west even feels heavier than in the balloon!

Maybe another table, showing the weight of the person:

| | kg | lb |
| --- | --- | --- |
| 1. Man at north pole | 100.00 | 220.46 |
| 2. Man at equator | 99.66 | 219.70 |
| 3. Man at equator, in balloon | 99.28 | 218.88 |
| 4. Man at equator, in aircraft flying east | 98.81 | 217.84 |
| 5. Man at equator, in aircraft flying west | 99.55 | 219.47 (more than 3.) |

The numbers shown are only valid at the equator and for flights east/west. In other cases, it becomes a little more complex.

EDIT: Being curious how this depends on latitude, I created this plot of the absolute acceleration an aircraft experiences. The radius in the equation of the centrifugal force is the distance of the aircraft to the axis of the Earth. It clearly decreases when moving away from the equator, and so does the acceleration.
The speed of an aircraft flying west cancels out the rotational speed of the Earth's surface at about 57° N/S, i.e. there is no centrifugal force there. At larger latitudes, the aircraft travels in the opposite direction around the axis of the Earth, building up a centrifugal force again. Near the poles, both aircraft become centrifuges (theoretically): flying a circle of 500 m radius, for example, gives an acceleration of 12.7 g. This is why the curve rises toward infinity there. (When doing the math, one has to keep in mind that gravity always points to the center of the Earth, while the centrifugal force points away from the axis; you can't just add them.)
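For anyone who wants to reproduce the equator numbers above, here is a minimal sketch (Python purely for illustration; the constants are the ones assumed in the answer, and tiny differences in the last digits versus the text are rounding artifacts):

```python
import math

# Reproduce the equator numbers from the answer above.
g = 9.81              # m/s^2, gravity at the surface
R_E = 6370e3          # m, Earth radius
h = 12e3              # m, cruise altitude
v = 250.0             # m/s, airspeed (no wind)
T = 24 * 3600.0       # s, time for one revolution

r = R_E + h
g_h = g * R_E**2 / r**2          # gravity at altitude, ~9.773 m/s^2
v_ground = 2 * math.pi * r / T   # speed of a hovering balloon, ~464 m/s

a_cb = v_ground**2 / r           # balloon:            ~0.0337 m/s^2
a_ce = (v_ground + v)**2 / r     # flying east (speeds add):      ~0.0799 m/s^2
a_cw = (v_ground - v)**2 / r     # flying west (speeds subtract): ~0.0072 m/s^2

for name, a in [("balloon", a_cb), ("east", a_ce), ("west", a_cw)]:
    print(f"{name:8s} a_c = {a:.7f} m/s^2 = {a / g:.7f} g")
```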
Adventure Begins

The game Pokenom Go has just been released. Pokenom trainers can now travel the world, capture Pokenom in the wild and battle each other! Bash — the Pokenom trainer — has decided to drop out of his university to pursue his childhood dream of becoming the best Pokenom trainer! However, Linux — Bash's university headmaster — does not allow his students to drop out so easily …

Linux puts $N$ black boxes on a straight line, numbered from $1$ to $N$ from left to right. Initially, all black boxes are empty. Then Linux gives Bash $Q$ queries. Each query is one of the following $2$ types:

1. Linux puts exactly one stone inside one box chosen with equal probability between the $u$-th box and the $v$-th box, inclusive $(1 \le u \le v \le N)$.
2. Let $a_i$ be the number of stones in the black box numbered $i$, and let $A = \sum_{i=1}^{N} a_i^2$. Bash has to calculate the expected value $E(A)$.

Bash can only drop out of his university if he is able to answer all queries correctly. But now all Bash can think of is Pokenom. Please help him!

Input

The first line of input contains exactly $2$ positive integers $N$ and $Q$ $(1 \le N, Q \le 10^5)$. $Q$ lines follow, each containing exactly one query. As explained, a query is one of the following $2$ types:

$1 \; u \; v$: Linux puts a stone inside one of the boxes between $u$ and $v$.

$2$: Linux asks Bash to compute $E(A)$.

Output

It can be proved that the expected value can be represented as an irreducible fraction $\dfrac{P}{R}$. For each query of type $2$, print one line containing the value $P \times R^{-1}$ modulo $10^{9} + 7$. The given input guarantees that $R$ is not a multiple of $10^{9} + 7$.

Explanation for examples

In the first example: With a probability of $0.5$, the two stones are in different boxes. Hence, the answer to the fourth query is $0.5 \times (1^{2} + 1^{2}) + 0.5 \times 2^{2} = 3$.
In the second example: With a probability of $\frac{2}{3}$, the two stones are in different boxes. Hence, the answer to the fourth query is $\frac{2}{3} \times 2 + \frac{1}{3} \times 4 = \frac{8}{3}$.

Sample Input 1:
2 4
1 1 2
2
1 1 2
2

Sample Output 1:
1
3

Sample Input 2:
3 4
1 1 3
2
1 1 3
2

Sample Output 2:
1
666666674
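A sketch of one way to answer the queries fast enough (Python; the Fenwick-tree approach and all identifiers here are illustrative, not part of the problem statement). Adding a stone uniformly on $[u,v]$ with $q=1/(v-u+1)$ turns $A$ into $\sum_i (a_i+X_i)^2$, so $E(A)$ grows by $2q\sum_{i=u}^{v}E(a_i) + 1$, and each $E(a_i)$ with $u \le i \le v$ grows by $q$. A range-add/range-sum Fenwick tree over $E(a_i)$ modulo $10^9+7$ handles both updates in $O(\log N)$:

```python
MOD = 10**9 + 7

class BIT:
    """Fenwick tree: point add / prefix sum, everything mod MOD."""
    def __init__(self, n):
        self.n, self.t = n, [0] * (n + 1)
    def add(self, i, x):
        while i <= self.n:
            self.t[i] = (self.t[i] + x) % MOD
            i += i & -i
    def pref(self, i):
        s = 0
        while i > 0:
            s = (s + self.t[i]) % MOD
            i -= i & -i
        return s

class RangeBIT:
    """Range add / range sum built from two Fenwick trees (standard trick)."""
    def __init__(self, n):
        self.b1, self.b2 = BIT(n), BIT(n)
    def range_add(self, l, r, x):
        self.b1.add(l, x); self.b1.add(r + 1, -x % MOD)
        self.b2.add(l, x * (l - 1) % MOD); self.b2.add(r + 1, -x * r % MOD)
    def _pref(self, i):
        return (self.b1.pref(i) * i - self.b2.pref(i)) % MOD
    def range_sum(self, l, r):
        return (self._pref(r) - self._pref(l - 1)) % MOD

def solve(n, queries):
    ea = RangeBIT(n)   # E[a_i] mod MOD, kept exactly via modular inverses
    eA = 0             # E[A]   mod MOD
    out = []
    for q in queries:
        if q[0] == 1:
            _, u, v = q
            inv = pow(v - u + 1, MOD - 2, MOD)     # 1/(v-u+1) mod MOD
            # E[A] += 2/(v-u+1) * sum_{i=u}^{v} E[a_i] + 1
            eA = (eA + 2 * inv * ea.range_sum(u, v) + 1) % MOD
            ea.range_add(u, v, inv)                # each E[a_i] += 1/(v-u+1)
        else:
            out.append(eA)
    return out
```

On the two samples above, `solve(2, [(1,1,2),(2,),(1,1,2),(2,)])` yields `[1, 3]` and `solve(3, [(1,1,3),(2,),(1,1,3),(2,)])` yields `[1, 666666674]`.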
Area of Regular Polygon by Inradius

Theorem

Let $P$ be a regular polygon of $n$ sides. Let $C$ be an incircle of $P$. Let the radius of $C$ be $r$. Then the area $\mathcal A$ of $P$ is given by:

$\mathcal A = n r^2 \tan \dfrac \pi n$

Proof

Let $O$ be the center of $P$ and let $AB$ be one side of $P$, of length $d$. Then $\mathcal A$ is equal to $n$ times the area of $\triangle OAB$. The angle $\angle AOB$ is equal to $\dfrac {2 \pi} n$, so the half-angle at $O$ is $\dfrac \pi n$ and $\tan \dfrac \pi n = \dfrac {d/2} r$, where $r$ is the length of the perpendicular from $O$ to $AB$. Then $d = 2 r \tan \dfrac \pi n$. So:

\(\displaystyle \mathcal A = n \frac {r d} 2\) (Area of Triangle in Terms of Side and Altitude)

\(\displaystyle \phantom{\mathcal A} = \frac n 2 r \paren {2 r \tan \dfrac \pi n}\) (substituting from above)

\(\displaystyle \phantom{\mathcal A} = n r^2 \tan \dfrac \pi n\) (rearranging)

$\blacksquare$
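As a numerical sanity check of the formula (illustrative Python, not part of the proof), one can compare $n r^2 \tan(\pi/n)$ against the shoelace area of a regular $n$-gon with inradius $r$:

```python
import math

def regular_polygon_area_shoelace(n, r):
    """Shoelace area of a regular n-gon with inradius r."""
    R = r / math.cos(math.pi / n)          # circumradius
    pts = [(R * math.cos(2 * math.pi * k / n),
            R * math.sin(2 * math.pi * k / n)) for k in range(n)]
    s = sum(x1 * y2 - x2 * y1
            for (x1, y1), (x2, y2) in zip(pts, pts[1:] + pts[:1]))
    return abs(s) / 2

def regular_polygon_area_formula(n, r):
    return n * r**2 * math.tan(math.pi / n)

for n in (3, 4, 6, 17):
    assert math.isclose(regular_polygon_area_shoelace(n, 1.5),
                        regular_polygon_area_formula(n, 1.5))
```

For $n=4$, $r=1$ both give $4\tan(\pi/4)=4$, the area of the unit-inradius square.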
Syllabus for ME 305 — Engineering Computer Applications Fall 2015 News (01 Nov 2015 | 12:39 PM) Homework’s up. Due Wednesday in class. (25 Oct 2015 | 9:04 PM) Homework’s up. Due Wednesday in class. (17 Oct 2015 | 10:33 AM) Homework’s up. Due Wednesday in class. (27 Sep 2015 | 10:09 PM) This week’s homework is up. (20 Sep 2015 | 5:17 PM) This week’s homework is up. (20 Sep 2015 | 5:16 PM) Notes are up for Monday. (15 Sep 2015 | 9:16 PM) Notes are up for Wednesday. (13 Sep 2015 | 11:47 PM) Notes are up for Monday. (11 Sep 2015 | 8:34 AM) This week’s homework assignment is up. It will be due early Wed. (08 Sep 2015 | 7:58 PM) This week’s notes are up. Eleven pages in one day of lecture. No problem ;) (02 Sep 2015 | 1:15 PM) This week’s homework assignment is up! (28 Aug 2015 | 1:42 PM) The lottery forum post is now g2g. (28 Aug 2015 | 1:02 PM) The quiz is up! Course description This course provides an introduction to engineering computer applications with specific emphasis on Matlab. Students learn how to navigate the Matlab environment and create professional engineering graphics. Programming is taught with application to numerical solution of mathematical and engineering problems. Prerequisites: GE 206 and MTH 271. General information Instructor Rico Picone, PhD Instructor Email rpicone (at) stmartin (dot) edu Office Hours MWF 11 am–12 pm, Cebula 103C Office Hours MW 1:30 pm–2:30 pm, Cebula 103C Location Cebula 101 Times MW 11:00–12:20 Website ME 305 Website Moodle ME 305 Moodle secrets Textbooks Kermit Sigmon. MATLAB Primer. Third Edition, 1993. Harold Abelson and Gerald J. Sussman with Julie Sussman. Structure and Interpretation of Computer Programs. Second Edition. MIT Press, 1996. Notes Course notes will be here. Schedule The following schedule is tentative. 
week | topics introduced | reading | assignment due
1 | introduction to computer programming | | Assignment #1
2 | assignment, processes, data, data types | | Assignment #2
3 | introduction to linear algebra | | Assignment #3
4 | arrays and array operations | | Assignment #4
5 | conditionals and loops | | Assignment #5
6 | functions | | Assignment #6
7 | 2D graphics | | Assignment #7
8 | 3D graphics | | Assignment #8, Midterm Exam
9 | statistics | | Assignment #9
10 | differential equations | | Assignment #10
11 | state space models | | Assignment #12
12 | introduction to software design | | Assignment #13
13 | algorithms | | Assignment #11
14 | symbolics | | Assignment #14
15 | symbolics | | Assignment #15
16 | finals week | | Final Exam

Assignments

Assignment #1

Do the assigned reading. Write an expression in a MATLAB Command Window that returns the numerical value 8. Multiply the previous result, using the ans variable, by 3. Compare the result, using ans, to 24. Does the comparison return 1 or 0? What does this mean? Do spaces matter when you write an expression? In MATLAB, is there a difference between the numbers 1 and 1.0? Which computer hardware component stores lots of data, but is relatively slow? Which type of language is MATLAB? High-level, Assembly Language, or Machine Code? Which computer hardware component processes, stores, sends, and receives data? Which numeral system do we use most often? How many numbers can be described by a binary numeral system that has only three digits? (Hint: the base-10 number 1 has representation 001 and the base-10 number 2 has representation 010.) Take the weekly homework quiz.

Assignment #2

Do the assigned reading.
Write a program that has the following features: it generates a random integer between 1 and 10; it asks the user to input their guess of the number (use the input command … type help input to learn about it); if the user has guessed the correct number, it congratulates the user; if the user's guess was incorrect, it tells them they get a "strike," tells them if they were low or high with their guess, and prompts for a second guess; it repeats this until they either get three "strikes," in which case they lose, or they guess the correct number. Take the weekly homework quiz.

Assignment #3

Write a program that has the following features: it finds the following products: \begin{align} 8 \begin{bmatrix} 6 \\ 2 \\ -1 \end{bmatrix}, && -3 \begin{bmatrix} 4 & -6 \\ 3 & 27 \end{bmatrix}, && \begin{bmatrix} 5 & 3 \\ 9 & 12 \end{bmatrix} \begin{bmatrix} 5 \\ 2 \end{bmatrix}, && \begin{bmatrix} 13 & 3 & 135 \\ -3 & 7 & -3 \\ 0 & 35 & 1 \end{bmatrix} \begin{bmatrix} -4 \\ 1 \\ -3 \end{bmatrix}, \end{align} \begin{align} -3 \begin{bmatrix} 4 & -6 \\ 3 & 27 \end{bmatrix} \begin{bmatrix} 0 & -1 \\ -3 & 2 \end{bmatrix}, && \begin{bmatrix} 13 & 3 & 135 \\ -3 & 7 & -3 \\ 0 & 35 & 1 \end{bmatrix} \begin{bmatrix} 0 & -44 & -6 \\ 0 & 1 & 16 \\ 0 & 1 & 1 \end{bmatrix}; \end{align} it defines an array x with values 1, 2, …, 1000; it changes the value of the element of x that is at index 507 to equal 49; it sums all the values of the new x array, and assigns the result to y; if y is greater than 1e4 ($10^4$), it displays the string 'achievement unlocked', otherwise it displays the string 'no. just, no.'.

Let $x = 2 e^1 - 4 e^2$ be a vector in the vector space $\mathbb{R}^2$. Using the standard basis $e=(e^1,e^2)$ for $\mathbb{R}^2$, write the vector's coordinate tuple $x_e$. Derive the linear map $A: \mathbb{R}^2 \rightarrow \mathbb{R}^2$ that is the change-of-coordinate matrix from the standard basis to the basis $b = (3 e^1, -1 e^2)$.
Write a MATLAB script that takes a vector coordinate tuple (array) x, in the standard basis $(e^i)$, transforms it to the $(b^i)$-basis, and assigns it to the vector coordinate tuple (array) y. Turn in the homework in class on Wednesday.

Assignment #4

Write a script that estimates the standard deviation of a random variable that has a Gaussian probability density function using the function randn. Generate a 20 by 1 array whose elements are estimates based on an increasing number of samples. The first element is the estimate after 10 samples, and each element thereafter is the estimate after 10 more samples.

Write a script that evaluates each element of an array and determines if it is positive, negative, or zero. Let the array be a 20 by 1 array of random integers (including zero) with uniform probability density function ranging from -5 to 5. When evaluating each element, if the element is positive, print +; if negative, print -; if zero, print 0. Please turn in the homework in class, Wednesday 09/23.

Assignment #5

Write a program file that defines the function rect_int, which performs numerical integration on input data as follows. Let there be four inputs: (1) data for the one-dimensional data array to be integrated, (2) delta_t for the scalar time step between each data point, (3) the lower time limit of integration, and (4) the upper time limit of integration. Estimate the integral of data over time using the rectangular rule that you learned in calculus. Use rect_int to estimate the integral of tanh from 0 to 3 using 100 rectangles. Repeat problems 1 and 2, but create and use a function mid_int that uses the midpoint rule. Repeat problems 1 and 2, but create and use a function trap_int that uses the trapezoid rule. Compare and discuss the results. Turn in your homework in class, Wednesday.

Assignment #6

Write a script that plots the upper hemisphere of a sphere of radius 5. Turn the edge colors off so that the mesh isn't shown. Use the hot colormap and use lighting.
Use a colorbar.

Write a script that plots the function $e^{-r}\, \cos{2\pi r}$, where $r$ is the radial coordinate $r = \sqrt{x^2 + y^2}$. Color the faces blue, turn edges off, and use lighting. Turn on a Grateful Dead album. Dive in.

Assignment #7

Write a function with arguments a time array t, "data" y that is approximately sinusoidal, and the angular frequency of that data omega. The function should return an array [a1, a2, a3] where the elements of the array give the linear least-squares best fit for the function $$y(t) = a_1 + a_2 \cos(\omega t) + a_3 \sin(\omega t).$$ Test your function on this data, which has angular frequency $\omega = 2\pi\ \rm{rad/s}$. Your writeup should show the results of this, including a plot of the noisy data and your analytic least-squares approximation. (Note: this used to say $\omega = 1\ \rm{rad/s}$, which was totally incorrect.)

Assignment #8

Write a script that numerically solves the differential equation $$ \ddot{x} + 2 \zeta \omega_n \dot{x} + \omega_n^2 x = f(t) $$ for $x(t)$. Let $f(t)$ be the function $$ f(t) = A \sin{\omega t}, $$ where $A$ is the drive amplitude and $\omega$ is the drive frequency. Use initial conditions $x(0) = 1$ and $\dot{x}(0) = 0$. Show a plot of $x(t)$ for the case that $\zeta = 0.5$, $\omega_n = 20 \pi\ \rm{rad/s}$, $A = 1/2$, and $\omega = 10 \pi\ \rm{rad/s}$.

Assignment #9

Design the Butterworth filter below by choosing its parameters to meet the following design criterion: at $1\, \rm{kHz}$, the output amplitude is approximately $20\%$ of the input amplitude. To aid in the design, simulate the response of the system to a sinusoidal input voltage $V_s(t) = A\,\sin{\omega t}$ with $A = 10\, \rm{V}$ and $\omega = 2000\pi$. Turn in your simulation code and a plot showing 5 steady-state periods of the input and output signals.

Write and test a program (function) named text_to_code that converts an input string of English text to international Morse code.
It should output an equivalent binary string of ones and zeros, a character for each "unit" of time, with a 1 representing the "on" state and a 0 representing the "off" state. A separate function code_to_blink should be written that takes this output string and displays a visual representation (e.g. a blinking "light"). Finally, write a script that uses these two functions to visualize the Morse code of the following paragraph (from Kierkegaard's Purity of Heart is to Will One Thing).

Is not despair simply double-mindedness? For what is despairing other than to have two wills? For whether the weakling despairs over not being able to wrench himself away from the bad, or whether the brazen one despairs over not being able to tear himself completely away from the Good: they are both double-minded, they both have two wills. Neither of them wills one thing, however desperately they may seem to will it.

Resources

Class resources will be posted here throughout the semester.

Homework, quiz, & exam policies

Homework & homework quiz policies

Weekly homework will be due in class on Wednesdays, and it will be turned in for credit. Working in groups on homework is strongly encouraged, but no copying is permitted.

Exam policies

The midterm and final exams will be in-class. If you require any specific accommodations, please contact me. Calculators will be allowed. Only one's own notes and the notes provided by the instructor will be allowed. No communication devices will be allowed. No exam may be taken early. Makeup exams require a doctor's note excusing the absence during the exam. The final exam will be cumulative.

Grading policies

Total grades in the course may be curved, but individual homework quizzes and exams will not be. They will be available on Moodle throughout the semester.

Homework quizzes 25%
Midterm Exam 35%
Final Exam 40%

Academic integrity policy

Cheating or plagiarism of any kind is not tolerated and will result in a failing grade ("F") in the course.
I take this very seriously. Engineering is an academic and professional discipline that requires integrity. I expect students to consider their integrity of conduct to be their highest consideration with regard to the course material.

Correlation of course & program outcomes

In keeping with the standards of the Department of Mechanical Engineering, each course is evaluated in terms of its desired outcomes and how these support the desired program outcomes. The following sections document the evaluation of this course.

Desired course outcomes

Upon completion of the course, the following course outcomes are desired:
- students will have a clear understanding of basic computer programming techniques;
- students will understand data types, functions, conditionals, and loops;
- students will be able to program in MATLAB;
- students will be able to produce plots in MATLAB;
- students will be able to import data into MATLAB, manipulate it, and plot it;
- students will be able to do basic statistical analysis with MATLAB;
- students will be able to solve differential equations numerically in MATLAB;
- students will understand basic numerical analysis;
- students will be able to use the symbolic toolbox in MATLAB.

Desired program outcomes

The desired program outcomes are that mechanical engineering graduates have:
- an ability to apply knowledge of mathematics, science, and engineering;
- an ability to design and conduct experiments, as well as to analyze and interpret data;
- an ability to design a system, component, or process to meet desired needs;
- an ability to function on multi-disciplinary teams;
- an ability to identify, formulate, and solve engineering problems;
- an understanding of professional and ethical responsibility;
- an ability to communicate effectively;
- the broad education necessary to understand the impact of engineering solutions in a global and social context;
- a recognition of the need for, and an ability to engage in, life-long learning;
- a knowledge of contemporary issues; and
- an
ability to use the techniques, skills, and modern engineering tools necessary for engineering practice.

Correlation of outcomes

The following table correlates the desired course outcomes with the desired program outcomes (A through K) they support.

desired program outcomes: A B C D E F G H I J K
desired course outcome 1: ✔ ✔ ✔ ✔ ✔ ✔ ✔
desired course outcome 2: ✔ ✔ ✔ ✔ ✔ ✔ ✔
desired course outcome 3: ✔ ✔ ✔ ✔ ✔ ✔ ✔
desired course outcome 4: ✔ ✔ ✔ ✔ ✔ ✔ ✔
desired course outcome 5: ✔ ✔ ✔ ✔ ✔ ✔ ✔
desired course outcome 6: ✔ ✔ ✔ ✔ ✔ ✔ ✔
desired course outcome 7: ✔ ✔ ✔ ✔ ✔ ✔ ✔
desired course outcome 8: ✔ ✔ ✔ ✔ ✔ ✔ ✔
desired course outcome 9: ✔ ✔ ✔ ✔ ✔ ✔ ✔
Dini derivative

(Latest revision as of 15:14, 8 May 2017)

derived numbers

A concept in the theory of functions of a real variable. The upper right-hand Dini derivative $\Lambda_d$ is defined to be the limes superior of the quotient $(f(x_1)-f(x))/(x_1-x)$ as $x_1\to x$, where $x_1>x$. The lower right-hand $\lambda_d$, the upper left-hand $\Lambda_g$, and the lower left-hand Dini derivative $\lambda_g$ are defined analogously. If $\Lambda_d=\lambda_d$ ($\Lambda_g=\lambda_g$), then $f$ has at the point $x$ a one-sided right-hand (left-hand) Dini derivative. The ordinary derivative exists if all four Dini derivatives coincide. Dini derivatives were introduced by U. Dini [1]. As N.N.
Luzin showed, if all four Dini derivatives are finite on a set, then the function has an ordinary derivative almost-everywhere on that set.

References

[1] U. Dini, "Grundlagen für eine Theorie der Funktionen einer veränderlichen reellen Grösse", Teubner (1892) (Translated from Italian)
[2] S. Saks, "Theory of the integral", Hafner (1952) (Translated from French)

Comments

The Dini derivatives are also called the Dini derivates, and are frequently denoted also by $D^+f(x)$, $D_+f(x)$, $D^-f(x)$, $D_-f(x)$.

How to Cite This Entry: Dini derivative. Encyclopedia of Mathematics. URL: http://www.encyclopediaofmath.org/index.php?title=Dini_derivative&oldid=13670
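A numerical illustration (not part of the encyclopedia entry; Python, with an assumed sampling scheme): for $f(x)=x\sin(1/x)$, $f(0)=0$, the difference quotient at $0$ is $\sin(1/h)$, so the two upper Dini derivates at $0$ equal $1$ and the two lower ones equal $-1$, while the ordinary derivative does not exist there:

```python
import math

# For f(x) = x*sin(1/x), f(0) = 0, the difference quotient at 0 is sin(1/h):
# the upper Dini derivates at 0 equal 1 and the lower ones equal -1.
def f(x):
    return x * math.sin(1 / x) if x != 0 else 0.0

def dini_derivates(func, x, n=20000):
    """Crude finite-sample approximations of the four Dini derivates at x."""
    hs = [10 ** (-6 * k / n - 1) for k in range(n)]   # h sweeping toward 0+
    right = [(func(x + h) - func(x)) / h for h in hs]
    left = [(func(x - h) - func(x)) / (-h) for h in hs]
    return max(right), min(right), max(left), min(left)

Dp, dp, Dm, dm = dini_derivates(f, 0.0)
print(Dp, dp, Dm, dm)   # approximately 1, -1, 1, -1
```

Since all four values differ, no ordinary derivative exists at $0$, consistent with the definitions above.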
\(\newcommand{\cauchy}{\boldsymbol{\sigma}}\) \(\newcommand{\strain}{\boldsymbol{\varepsilon}}\) \(\newcommand{\uV}{\boldsymbol}\) \(\newcommand{\uT}{\boldsymbol}\) \(\newcommand{\defu}{\boldsymbol{u}}\)

HRR theory

This chapter covers the HRR theory, which was derived by Hutchinson, Rice and Rosengren in 1967-1968. The HRR theory allows evaluating the stress and displacement fields of a stationary crack tip in the context of non-linear material behavior. Although the theory is formally only valid up to crack initiation and not during crack propagation, it is of great value since it provides an asymptotic solution for the stress and displacement fields in the plastic zone near the crack tip, where LEFM is not valid.

HRR theory > Elastoplastic behavior (continuation)

An understanding of elastoplastic behavior is essential for the HRR theory. The related theory in the previous chapter therefore remains valid, and the reader is advised to review it if necessary.

HRR theory > General solution of the $J$-integral from near-tip fields

Assumptions and definitions

The HRR theory was developed for a semi-infinite crack, and is only valid up to crack initiation caused by a monotonic and proportionally increasing load, which means that all the stress tensor components should increase proportionally. As a result of these assumptions, we can define an internal potential, see the previous chapter. In addition, we consider the power hardening law, but in order to simplify the derivation, we assume that the elastic deformations can be neglected. Since elastic deformations are neglected, the hardening power law is rewritten in terms of the equivalent von Mises stress and of the equivalent strain, which respectively read \begin{equation} \sigma_e = \sqrt{\frac{3}{2} \mathbf{s}:\mathbf{s}} ,\label{eq:vmstress} \end{equation} \begin{equation} \bar{\varepsilon} = \sqrt{\frac{2}{3} \strain:\strain}.
\label{eq:vmstrain} \end{equation} The hardening power law thus becomes \begin{equation} {\sigma}_e \left(\bar{\varepsilon}\right) = {\sigma}_p^0{\left(\frac{\bar{\varepsilon}}{\frac{{\alpha}{\sigma}_p^0}{E}}\right)}^\frac{1}{n},\label{eq:elpowerlaw} \end{equation} and governs the evolution of the equivalent von Mises stress in terms of the equivalent strain, below and above the yield stress ${\sigma}_p^0$. This law can be interpreted as a non-linear elastic law for isochoric solids. Indeed, since the elastic deformations are neglected, the whole deformation tensor $\strain$ is deviatoric. This non-linear elastic law is characterized by the parameters $\alpha$ and $n$ whose variations lead to different material responses as shown in Picture VIII.1. In particular, when $n\rightarrow\infty$, respectively $n\rightarrow 1$, a perfectly plastic-like, respectively linear, behavior is recovered. The relation (\ref{eq:elpowerlaw}) can be inverted in order to express the evolution of the equivalent strain $\bar{\varepsilon}$ in terms of the von Mises stress ${\sigma}_e$, with \begin{equation} \bar{\varepsilon}={\frac{{\alpha}{\mathbf{\sigma}_p^0}}{E}}{\left(\frac{{\sigma}_e}{{\sigma}_p^0}\right)}^n.\label{eq:elpowerlawinverted} \end{equation} In order to investigate the evolution of the strain tensor $\mathbf{\varepsilon}$, the elastic power law (\ref{eq:elpowerlawinverted}) is first differentiated, leading to \begin{equation} d\bar{\varepsilon}=\frac{n{\alpha}{\sigma}_p^0}{E} {\left(\frac{{\sigma}_e}{{\sigma}_p^0}\right)}^{n-1} \frac{d{\sigma}_e}{{\sigma}_p^0}\label{eq:derivationelpowerlaw1}, \end{equation} which allows the plastic flow to be rewritten, with $\strain^\text{p}=\strain$ as \begin{equation} d\strain=\sqrt{\frac{3}{2}}\frac{\mathbf{s}}{\sqrt{\mathbf{s}:\mathbf{s}}}\frac{n{\alpha}{\sigma}_p^0}{E}{\left(\frac{{\sigma}_e}{{\sigma}_p^0}\right)}^{n-1}\frac{d{\sigma}_e}{{\sigma}_p^0}= 
\frac{3}{2}\frac{\mathbf{s}}{{\sigma}_p^0}\frac{n{\alpha}{\sigma}_p^0}{E}{\left(\frac{{\sigma}_e}{{\sigma}_p^0}\right)}^{n-2}\frac{d{\sigma}_e}{{\sigma}_p^0} . \label{eq:derivationelpowerlaw2}\end{equation} Since the loading conditions are monotonic and proportional, the incremental form can be substituted by the finite one, and the plastic strain tensor is directly obtained by combining the plastic flow, Eq. (\ref{eq:vmstress}) and Eq. (\ref{eq:elpowerlawinverted}), as \begin{equation} \strain=\bar{\varepsilon}\sqrt{\frac{3}{2}}\frac{\mathbf{s}}{\sqrt{\mathbf{s}:\mathbf{s}}}= \left[\frac{{\alpha}{\sigma}_p^0}{E}{\left(\frac{{\sigma}_e}{{\sigma}_p^0}\right)}^n\right] \sqrt{\frac{3}{2}}\frac{\mathbf{s}}{\sqrt{\mathbf{s}:\mathbf{s}}}=\left[\frac{{\alpha}{\sigma}_p^0}{E}{\left(\frac{{\sigma}_e}{{\sigma}_p^0}\right)}^{n-1}\right]\frac{3}{2}\frac{\mathbf{s}}{{\sigma}_p^0} . \label{eq:derivationelpowerlaw4}\end{equation} The internal potential $U$ introduced above then reads \begin{equation} U=\int_0^{\strain}\cauchy:d\strain'=\int_0^{\strain}\mathbf{s}:d\strain',\label{eq:derivationelpowerlaw5} \end{equation} since the deformation tensor is deviatoric. Using Eq. (\ref{eq:derivationelpowerlaw2}), Eq. (\ref{eq:vmstress}), and Eq. (\ref{eq:elpowerlaw}), this last relation reads \begin{equation} U=\int_{0}^{{\sigma}_e} \mathbf{s}:\frac{3}{2}\frac{\mathbf{s}}{{\sigma}_p^0}\frac{n{\alpha}{\sigma}_p^0}{E}\left(\frac{{\sigma}_e'}{{\sigma}_p^0}\right)^{n-2}\frac{d{\sigma}_e'}{{\sigma}_p^0}=\int_{0}^{{\sigma}_e}\frac{n{\alpha}{\sigma}_p^0}{E}{\left(\frac{{\sigma}'_e}{{\sigma}_p^0}\right)}^n d{\sigma}'_e=\frac{n{\alpha}\left({\sigma}_p^0\right)^2}{E\left(n+1\right)}{\left(\frac{\bar{\varepsilon}}{{\frac{{\alpha}{\sigma}_p^0}{E}}}\right)}^\frac{n+1}{n}.
\label{eq:derivationpowerlaw7}\end{equation}

Near-tip fields

Since the $J$-integral is path-independent, see the related Chapter 3, one can consider as contour a circle of radius $r\rightarrow 0$ around the semi-infinite crack tip as depicted in Picture VIII.2, leading to the expression \begin{equation} J=\lim_{r\rightarrow 0} \int_{-\pi}^{\pi} \left(U \uV{n}_x - \defu_{,x}\cdot \cauchy\cdot \uV{n}\right) r d\theta. \label{eq:JHRR} \end{equation} Due to the path-independence of $J$, and thus its independence with respect to small $r$, the integrand of this last equation should only depend on $\theta$, yielding \begin{equation} \lim_{r\rightarrow 0} \left(U \uV{n}_x - \defu_{,x}\cdot \cauchy\cdot \uV{n}\right) = \frac{h\left(\theta\right)}{r}.\label{eq:integrantHRR} \end{equation} However, since the energy $U$ and the term $\defu_{,x}\cdot \cauchy$ involve products between strain and stress, Eq. (\ref{eq:integrantHRR}) is equivalent to \begin{equation} \lim_{r\rightarrow 0} \cauchy : \strain = \frac{l\left(\theta\right)}{r}.\label{eq:integrant2HRR} \end{equation} In the latter expressions, $h\left(\theta\right)$ and $l\left(\theta\right)$ are still unknown, but depend on $\theta$ and not on $r$. In order to evaluate them, the stress tensor is expanded in a power series \begin{equation}\label{eq:constant_J} \cauchy\left(r,\,\theta\right) = \sum_s r^s \hat{\cauchy}\left(\theta,\,s\right) \,, \end{equation} in which the tensors $\hat{\cauchy}\left(\theta,\,s\right)$, one different tensor for each term $s$ of the series, represent the distribution of the different stress components in terms of $\theta$. They thus do not depend on $r$. Since we are trying to establish a relation in the vicinity of the crack tip, let $s'$ be the exponent corresponding to the dominant term near the crack tip. Therefore, in the vicinity of the crack tip Eq.
(\ref{eq:constant_J}) can be approximated by \begin{equation}\label{eq:approx_stress_1} \cauchy \simeq r^{s'} \hat{\cauchy}\left(\theta,\,s'\right). \end{equation} The approximated expressions of the deviatoric stress tensor $\mathbf{s}$ and of the equivalent von Mises stress $\sigma_e$ (\ref{eq:vmstress}) follow from \begin{equation} \uT{s}\simeq r^{s'} \left(\hat{\cauchy}\left(\theta,\,s'\right)-\frac{\text{tr}\left(\hat{\cauchy}\left(\theta,\,s'\right)\right)}{3}\uT{I}\right) \,, \label{eq:approx_s_1} \end{equation} \begin{equation} \sigma_e\simeq r^{s'}\sqrt{\frac{3}{2} \left(\hat{\cauchy}\left(\theta,\,s'\right)-\frac{\text{tr}\left(\hat{\cauchy}\left(\theta,\,s'\right)\right)}{3}\uT{I}\right): \left(\hat{\cauchy}\left(\theta,\,s'\right)-\frac{\text{tr}\left(\hat{\cauchy}\left(\theta,\,s'\right)\right)}{3}\uT{I}\right) }.\label{eq:approx_svm_1} \end{equation} Using these approximations, the strain tensor (\ref{eq:derivationelpowerlaw4}) becomes \begin{equation} \strain \simeq \frac{\alpha\sigma_p^0}{E}\left(\frac{\sigma_e}{\sigma_p^0}\right)^{n-1}\frac{3\uT{s}}{2\sigma_p^0}= \left(r^{s'}\right)^{n}\hat{\strain}\left(\theta,\,s'\right) \, \text{,} \label{eq:approx_strain_1}\end{equation} where the tensor $\hat{\strain}\left(\theta,\,s'\right)$ represents the distribution of the different strain components in terms of $\theta$ for the term $s'$ of the series (\ref{eq:constant_J}). Note that if $\hat{\cauchy}\left(\theta,\,s'\right)$ is known, the term $\hat{\strain}\left(\theta,\,s'\right)$ can readily be evaluated from (\ref{eq:approx_strain_1}). Now, in order to satisfy the condition (\ref{eq:integrant2HRR}), by considering Eq. (\ref{eq:approx_stress_1}) and Eq. (\ref{eq:approx_strain_1}), it appears that the exponent $s'$ should be such that $s'n+s'=-1$, or in other words \begin{equation}\label{eq:approx_exp} s' = \frac{-1}{n+1}. 
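The singularity exponent obtained above can be verified numerically: with $s' = -1/(n+1)$, the stress scales as $r^{s'}$ and the strain as $r^{ns'}$, so their product scales exactly as $1/r$ for any hardening exponent. A minimal check:

```python
# Numerical check (sketch) of the HRR singularity exponent: with
# s' = -1/(n+1), sigma ~ r^s' and epsilon ~ r^(n s'), so the energy-like
# product sigma:epsilon ~ r^(s'(n+1)) scales exactly as 1/r.
for n in (1, 3, 5, 13):
    s = -1.0 / (n + 1)
    for r in (1e-3, 1e-2, 1e-1):
        product = (r**s) * (r**(n * s))  # radial part of sigma:epsilon
        assert abs(product * r - 1.0) < 1e-9

print("s' = -1/(n+1) gives a 1/r singularity of sigma:epsilon for all tested n")
```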
\end{equation} The approximation of the stress field near the crack tip is thus obtained by inserting (\ref{eq:approx_exp}) into (\ref{eq:approx_stress_1}), leading to $\cauchy\left(r,\,\theta\right) \simeq r^{\frac{-1}{n+1}} \hat{\cauchy}\left(\theta,\, \frac{-1}{n+1}\right)$. However, since the stress tensor also depends on the loading condition and on the yield stress $\sigma_p^0$, in order to define the distribution of the different stress components in terms of $\theta$ (and the exponent $n$) only, we extract the loading and material effects from $\hat{\cauchy}\left(\theta,\,\frac{-1}{n+1}\right)$, resulting in \begin{equation}\label{eq:J_stress} \cauchy\left(r,\,\theta\right) \simeq \sigma_p^0 k_n\frac{\tilde{\cauchy}\left(\theta,\,n\right)}{r^\frac{1}{n+1}}\,, \end{equation} where $k_n$ is a plastic stress intensity factor, which depends on the geometry and loading conditions, and $\tilde{\cauchy}\left(\theta,\,n\right)$ is the distribution of the different stress components in terms of $\theta$ independently of the loading amplitude. A procedure similar to the one used to derive Eq. (\ref{eq:approx_strain_1}) can be applied to the strain tensor $\strain$, which leads to \begin{equation} \strain\left(r,\,\theta\right) \simeq \frac{\alpha\sigma_p^0}{E} \frac{k_n^n}{r^\frac{n}{n+1}}\tilde{\strain}\left(\theta,\,n\right)\,,\label{eq:J_strain} \end{equation} where the tensor $\tilde{\strain}\left(\theta,\,n\right)$ represents the distribution of the different strain components independently of the loading amplitude and could be directly obtained from $\tilde{\cauchy}\left(\theta,\,n\right)$ (if it were known) through Eq. (\ref{eq:derivationelpowerlaw4}) as $\tilde{\strain}\left(\theta,\,n\right)=\frac{3}{2}\left(\sqrt{\frac{3}{2}\tilde{\uT{s}}:\tilde{\uT{s}}}\right)^{n-1}\tilde{\uT{s}}$, where $\tilde{\uT{s}}\left(\theta,\,n\right)=\tilde{\cauchy}\left(\theta,\,n\right)-\frac{\text{tr}\left(\tilde{\cauchy}\left(\theta,\,n\right)\right)}{3}\uT{I}$. 
Finally, the internal energy $U$ (\ref{eq:derivationpowerlaw7}) is approximated near the crack tip by \begin{equation}\label{eq:J_energy} U \simeq \frac{n\alpha\left(\sigma_p^0\right)^2}{E\left(n+1\right)}\left(\frac{\sigma_e}{\sigma_p^0}\right)^{n+1}=\frac{n\alpha\left(\sigma_p^0\right)^2}{E\left(n+1 \right)} \frac{1}{r} k_n^{n+1}{\left(\sqrt{\frac{3}{2}\tilde{\uT{s}}:\tilde{\uT{s}} }\right)}^{n+1} \,, \end{equation} where $\tilde{\uT{s}}$ could be directly obtained from $\tilde{\cauchy}\left(\theta,\,n\right)$ (if it were known). Some remarks arise from this analysis: when considering Eq. (\ref{eq:J_stress}) for $n=1$, a linear material, an expression similar to the LEFM asymptotic solution is recovered; when considering Eq. (\ref{eq:J_stress}) for $n\rightarrow\infty$, a perfectly plastic-like material, the stress field remains finite at the crack tip as in the cohesive zone model; the plastic stress intensity factor $k_n$ depends on the loading and geometry but has not been determined yet; if $\tilde{\cauchy}\left(\theta,\,n\right)$, the distribution of the different stress components in terms of $\theta$ independently of the loading amplitude, is known, the terms $\tilde{\uT{s}}\left(\theta,\,n\right)$ and $\tilde{\strain}\left(\theta,\,n\right)$ follow directly. Solution in terms of the J-integral As previously stated, $k_n$ has been introduced to represent the dependency of the near-tip fields on the loading and geometry. However, the concept introduced in Chapter 3 to characterize the crack loading for non-linear behaviors is the $J$-Integral. We thus want to introduce this value in the asymptotic solution (\ref{eq:J_stress}-\ref{eq:J_energy}). General stress and strain field Using Eqs. (\ref{eq:J_stress}-\ref{eq:J_energy}) and assuming that the tensor component distributions, i.e. 
$\tilde{\cauchy}$, $\tilde{\strain}$ and $\tilde{\mathbf{s}}$, are known, the $J$ integral (\ref{eq:JHRR}) is rewritten as \begin{equation}\label{eq:Jint_rew_tmp} J=\lim_{r\rightarrow 0} \int_{-\pi}^{\pi} \left(U \uV{n}_x - \defu_{,x}\cdot \cauchy\cdot \uV{n}\right) r d\theta = \frac{\alpha\left(\sigma_p^0\right)^2}{E} k_n^{n+1} \int_{-\pi}^{\pi} \tilde{h} \left(\theta,\,n\right) d\theta \,, \end{equation} where $\tilde{h} \left(\theta,\,n\right)$ gathers the terms in $\tilde{\cauchy} \left(\theta,\,n\right)$, $\tilde{\strain}$ and $\tilde{\mathbf{s}}$. Its expression will be discussed later. Therefore, Eq. (\ref{eq:Jint_rew_tmp}) is rewritten \begin{equation}\label{eq:Jint_rew} J= \frac{\alpha\left(\sigma_p^0\right)^2}{E} k_n^{n+1} I_n\,, \end{equation} with \begin{equation} I_n = \int_{-\pi}^{\pi} \tilde{h} \left(\theta,\,n\right) d\theta\,,\label{eq:Jint_In} \end{equation} which could be numerically evaluated from $\tilde{h} \left(\theta,\,n\right)$ if $\tilde{\cauchy} \left(\theta,\,n\right)$ were known. Eventually, solving (\ref{eq:Jint_rew}) for the plastic stress intensity factor $k_n$ yields \begin{equation} k_n = \left(\frac{J E}{\alpha\left(\sigma_p^0\right)^2I_n}\right)^{\frac{1}{n+1}} \, \text{,} \label{eq:Jint_kn}\end{equation} and we can calculate the stress (\ref{eq:J_stress}) and strain (\ref{eq:J_strain}) fields near the crack tip in a general form in terms of the $J$-integral through \begin{equation}\label{eq:stress_Jint} \cauchy=\sigma_p^0\left(\frac{J E}{r\alpha\left(\sigma_p^0\right)^2I_n}\right)^{\frac{1}{n+1}}\tilde{\cauchy}\left(\theta,\,n\right), \end{equation} \begin{equation}\label{eq:strain_Jint} \strain=\frac{\sigma_p^0\alpha}{E}\left(\frac{J E}{r\alpha\left(\sigma_p^0\right)^2I_n}\right)^{\frac{n}{n+1}} \tilde{\strain}\left(\theta,\,n\right)\,. \end{equation} As can be deduced from (\ref{eq:stress_Jint}), the stress field near the crack tip is governed by $J$, which is the plastic analogue of the stress intensity factor $K$ in LEFM. 
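The relations above can be sketched numerically. In the following, the material constants, the crack loading $J$ and the value of $I_n$ are all illustrative assumptions (the text leaves $I_n$ to be evaluated from $\tilde{h}$); the sketch computes $k_n$ and the radial stress amplitude:

```python
# Sketch: plastic stress intensity factor k_n and the HRR stress amplitude
# in terms of the J-integral. All numerical values are illustrative
# assumptions, not data from the text.
E = 210e3        # Young's modulus [MPa] (assumed)
sig_p0 = 400.0   # yield stress [MPa] (assumed)
alpha = 0.5      # hardening coefficient (assumed)
n = 3            # hardening exponent (assumed)
I_n = 5.81       # dimensionless integral (assumed placeholder value)
J = 10.0         # crack loading [MPa.mm] (assumed)

# k_n = (J E / (alpha (sig_p0)^2 I_n))^(1/(n+1))
k_n = (J * E / (alpha * sig_p0**2 * I_n))**(1.0 / (n + 1))

# Radial amplitude of the stress field sigma = sig_p0 * k_n * tilde_sigma / r^(1/(n+1)),
# evaluated at r = 1 mm (so the amplitude equals sig_p0 * k_n numerically):
r = 1.0
amp = sig_p0 * (J * E / (r * alpha * sig_p0**2 * I_n))**(1.0 / (n + 1))
print(k_n, amp)
```

Note the weak $1/(n+1)$ power: even a large change in $J$ translates into a modest change of the near-tip stress level, which is the hallmark of the HRR field.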
The criterion for crack initiation should thus be written in terms of $J$ and may be $J \geq J_C$, where $J_C$ is a critical fracture energy. We can now consider two extreme cases: On the one hand, when $n=1$, linear material, we have seen that $J=K_I^2/E^{'}$, and Eq. (\ref{eq:stress_Jint}) yields back the LEFM asymptotic solution; in that case, the crack tip fields are driven by $K$. On the other hand, when $n\rightarrow\infty$, perfectly plastic-like material, only the strains depend on $J$ and not the stresses, and hence $J$ plays the role of an equivalent "plastic strain intensity factor". Validity The solution for the $J$-integral near the crack tip is not universal and is only valid for stationary cracks (no propagation), as unloading is prohibited; in the region of dominance of the HRR field near the crack tip, a region which decreases as $n$ increases, as has been shown by finite element simulations; in a small deformations setting; and for incompressible materials, as we are using the power law with a deviatoric strain field. Even though we have a solution for the stress and strain, two questions still remain unanswered: How do we compute the missing terms $I_n$, $\tilde{\cauchy}\left(\theta,\,n\right)$, etc.; and how do we exploit the information (e.g. what is the shape of the process zone)?
But if you don't want to have a Google account: Chrome is really good. Much faster than FF (I can't run FF on either of the laptops here) and more reliable (it restores your previous session if it crashes with 100% certainty). And Chrome has a Personal Blocklist extension which does what you want. : ) Of course you already have a Google account but Chrome is cool : ) Guys, I feel a little defeated in trying to understand infinitesimals. I'm sure you all think this is hilarious. But if I can't understand this, then I'm yet again stalled. How did you guys come to terms with them, later in your studies? do you know the history? Calculus was invented based on the notion of infinitesimals. There were serious logical difficulties found in it, and a new theory developed based on limits. In modern times using some quite deep ideas from logic a new rigorous theory of infinitesimals was created. @QED No. This is my question as best as I can put it: I understand that lim_{x->a} f(x) = f(a), but then to say that the gradient of the tangent curve is some value, is like saying that when x=a, then f(x) = f(a). The whole point of the limit, I thought, was to say, instead, that we don't know what f(a) is, but we can say that it approaches some value. I have problem with showing that the limit of the following function$$\frac{\sqrt{\frac{3 \pi}{2n}} -\int_0^{\sqrt 6}(1-\frac{x^2}{6}+\frac{x^4}{120})^ndx}{\frac{3}{20}\frac 1n \sqrt{\frac{3 \pi}{2n}}}$$equal to $1$, with $n \to \infty$. @QED When I said, "So if I'm working with function f, and f is continuous, my derivative dy/dx is by definition not continuous, since it is undefined at dx=0." I guess what I'm saying is that (f(x+h)-f(x))/h is not continuous since it's not defined at h=0. @KorganRivera There are lots of things wrong with that: dx=0 is wrong. dy/dx - what/s y? "dy/dx is by definition not continuous" it's not a function how can you ask whether or not it's continous, ... etc. 
In general this stuff with 'dy/dx' is supposed to help as some kind of memory aid, but since there's no rigorous mathematics behind it - all it's going to do is confuse people in fact there was a big controversy about it since using it in obvious ways suggested by the notation leads to wrong results @QED I'll work on trying to understand that the gradient of the tangent is the limit, rather than the gradient of the tangent approaches the limit. I'll read your proof. Thanks for your help. I think I just need some sleep. O_O @NikhilBellarykar Either way, don't highlight everyone and ask them to check out some link. If you have a specific user who you think can say something in particular feel free to highlight them; you may also address "to all", but don't highlight several people like that. @NikhilBellarykar No. I know what the link is. I have no idea why I am looking at it, what should I do about it, and frankly I have enough as it is. I use this chat to vent, not to exercise my better judgment. @QED So now it makes sense to me that the derivative is the limit. What I think I was doing in my head was saying to myself that g(x) isn't continuous at x=h so how can I evaluate g(h)? But that's not what's happening. The derivative is the limit, not g(h). @KorganRivera, in that case you'll need to be proving $\forall \varepsilon > 0,\,\,\,\, \exists \delta,\,\,\,\, \forall x,\,\,\,\, 0 < |x - a| < \delta \implies |f(x) - L| < \varepsilon.$ by picking some correct L (somehow) Hey guys, I have a short question a friend of mine asked me which I cannot answer because I have not learnt about measure theory (or whatever is needed to answer the question) yet. He asks what is wrong with \int_0^{2 \pi} \frac{d}{dn} e^{inx} dx when he applies Lebesgue's dominated convergence theorem, because apparently, if he first integrates and then differentiates, the result is 0 but if he first differentiates and then integrates it's not 0. Does anyone know?
I'm intrigued by Mathematica's inability to find a solution of the following system of equations $$ \cos(x) + \cos(y) + \cos(z) = \lambda_1, $$ $$ \cos(x+u) + \cos(y+v) + \cos(z+w) = \lambda_2, $$ $$ \cos(u) + \cos(v) + \cos(w) = \lambda_3, $$ for a given set of $\vec{\lambda}$. For example, if I start with a given set of $x$, $y$, $z$, $u$, $v$, and $w$, cond = Thread[{x, y, z, u, v, w} -> (# &@RandomReal[{-π, π}, 6])] then result = {Cos[x] + Cos[y] + Cos[z], Cos[x + u] + Cos[y + v] + Cos[w + z],Cos[u] + Cos[v] + Cos[w]} /. cond the following code never terminates, FindInstance[Thread[{Cos[x] + Cos[y] + Cos[z],Cos[x + u] + Cos[y + v] + Cos[w + z],Cos[u] + Cos[v] + Cos[w]} == result], {x, y, z, u, v, w}] Is there any way to obtain an instance of such a system? I've tried using Interval and Reals to constrain my variables without success.
SolidsWW Flash Applet Sample Problem 2 Revision as of 10:08, 10 August 2011 Flash Applets embedded in WeBWorK questions solidsWW Example Sample Problem 2 with solidsWW.swf embedded A standard WeBWorK PG file with an embedded applet has six sections: A tagging and description section, that describes the problem for future users and authors, An initialization section, that loads required macros for the problem, A problem set-up section that sets variables specific to the problem, An Applet link section that inserts the applet and configures it (this section is not present in WeBWorK problems without an embedded applet), A text section, that gives the text that is shown to the student, and An answer and solution section, that specifies how the answer(s) to the problem is(are) marked for correctness, and gives a solution that may be shown to the student after the problem set is complete. The sample file attached to this page shows this; below the file is shown to the left, with a second column on its right that explains the different parts of the problem that are indicated above. 
A screenshot of the applet embedded in this WeBWorK problem is shown below: There are other example problems using this applet: solidsWW Flash Applet Sample Problem 1 solidsWW Flash Applet Sample Problem 3 And other problems using applets: Derivative Graph Matching Flash Applet Sample Problem USub Applet Sample Problem trigwidget Applet Sample Problem solidsWW Flash Applet Sample Problem 1 GraphLimit Flash Applet Sample Problem 2 Other useful links: Flash Applets Tutorial Things to consider in developing WeBWorK problems with embedded Flash applets PG problem file Explanation ##DESCRIPTION ## Solids of Revolution ##ENDDESCRIPTION ##KEYWORDS('Solids of Revolution') ## DBsubject('Calculus') ## DBchapter('Applications of Integration') ## DBsection('Solids of Revolution') ## Date('7/31/2011') ## Author('Barbara Margolius') ## Institution('Cleveland State University') ## TitleText1('') ## EditionText1('2011') ## AuthorText1('') ## Section1('') ## Problem1('') ########################################## # This work is supported in part by the # National Science Foundation # under the grant DUE-0941388. ########################################## This is the tagging and description section of the problem. The description is provided to give a quick summary of the problem so that someone reading it later knows what it does without having to read through all of the problem code. All of the tagging information exists to allow the problem to be easily indexed. Because this is a sample problem there isn't a textbook per se, and we've used some default tagging values. There is an on-line list of current chapter and section names and a similar list of keywords. The list of keywords should be comma separated and quoted (e.g., KEYWORDS('calculus','derivatives')). 
DOCUMENT(); loadMacros( "PGstandard.pl", "AppletObjects.pl", "MathObjects.pl", ); This is the initialization section of the problem. TEXT(beginproblem()); $showPartialCorrectAnswers = 1; Context("Numeric"); $a = random(2,10,1); $xy = 'x'; $func1 = "$a*sin(pi*x/8)"; $func2 = '2'; $xmax = Compute("8"); $shapeType = 'circle'; $correctAnswer =Compute("128*$a"); This is the problem set-up section of the problem. The solidsWW.swf applet will accept a piecewise defined function either in terms of x or in terms of y. We set ######################################### # How to use the solidWW applet. # Purpose: The purpose of this applet # is to help with visualization of # solids # Use of applet: The applet state # consists of the following fields: # xmax - the maximum x-value. # ymax is 6/5ths of xmax. the minima # are both zero. # captiontxt - the initial text in # the info box in the applet # shapeType - circle, ellipse, # poly, rectangle # piece: consisting of func and cut # this is a function defined piecewise. # func is a string for the function # and cut is the right endpoint # of the interval over which it is # defined # there can be any number of pieces # ######################################### # What does the applet do? 
# The applet draws three graphs: # a solid in 3d that the student can # rotate with the mouse # the cross-section of the solid # (you'll probably want this to # be a circle # the radius of the solid which # varies with the height ######################################### This is the Applet link section of the problem. Those portions of the code that begin the line with # are comments. ################################### # Create link to applet ################################### $appletName = "solidsWW"; $applet = FlashApplet( codebase => findAppletCodebase ("$appletName.swf"), appletName => $appletName, appletId => $appletName, setStateAlias => 'setXML', getStateAlias => 'getXML', setConfigAlias => 'setConfig', maxInitializationAttempts => 10, #answerBoxAlias => 'answerBox', height => '550', width => '595', bgcolor => '#e8e8e8', debugMode => 0, submitActionScript => '' ); You must include the section that follows ################################### # Configure applet ################################### $applet->configuration(qq{<xml><plot> <xy>$xy</xy> <captiontxt>'Compute the volume of the figure shown.' </captiontxt> <shape shapeType='$shapeType' sides='3' ratio='1.5'/> <xmax>$xmax</xmax> <theColor>0xff6699</theColor> <profile> <piece func='$func1' cut='8'/> </profile> </plot></xml>}); $applet->initialState(qq{<xml><plot> <xy>$xy</xy> <captiontxt>'Compute the volume of the figure shown.' </captiontxt> <shape shapeType='$shapeType' sides='3' ratio='1.5'/> <xmax>$xmax</xmax> <theColor>0xff6699</theColor> <profile> <piece func='$func1' cut='8'/> </profile> </plot></xml>}); TEXT( MODES(TeX=>'object code', HTML=>$applet->insertAll( debug=>0, includeAnswerBox=>0, ))); The configuration of the applet is done in XML. The argument of the function is set to the value held in the variable. Answer submission and checking is done within WeBWorK. The applet is intended to aid with visualization and is not used to evaluate the student submission. 
TEXT(MODES(TeX=>"", HTML=><<'END_TEXT')); <script> if (navigator.appVersion.indexOf("MSIE") > 0) { document.write("<div width='3in' align='center' style='background:yellow'> You seem to be using Internet Explorer. <br/>It is recommended that another browser be used to view this page.</div>"); } </script> END_TEXT The text between the BEGIN_TEXT and END_TEXT commands is shown to the student. BEGIN_TEXT $BR $BR Find the volume of the solid of revolution formed by rotating the curve \[y=$a\sin\left(\frac{\pi x}{8}\right)\] for \(x=0\) to \(8\) about the \(y\)-axis. \{ans_rule(35) \} $BR END_TEXT Context()->normalStrings; This is the answer and solution section of the problem. ###################################### # # Answers # ## answer evaluators ANS( $correctAnswer->cmp() ); ENDDOCUMENT(); The ENDDOCUMENT(); command ends the problem. The Flash applets are protected under the following license: Creative Commons Attribution-NonCommercial 3.0 Unported License.
Equivalent Binary Quadratic Forms Definition: Let $f(x, y) = ax^2 + bxy + cy^2$ and $g(x, y) = Ax^2 + Bxy + Cy^2$ be binary quadratic forms. Then $f$ and $g$ are said to be Equivalent denoted $f \sim g$ if there exists $m_{11}, m_{12}, m_{21}, m_{22} \in \mathbb{Z}$ such that $f(x, y) = g(m_{11}x + m_{12}y, m_{21}x + m_{22}y)$ and such that $\det \begin{bmatrix} m_{11} & m_{12} \\ m_{21} & m_{22} \end{bmatrix} = m_{11}m_{22} - m_{12}m_{21} = 1$. For example, let $f(x, y) = x^2 + y^2$ and let $g(x, y) = 13x^2 + 16xy + 5y^2$. Let:

(1) $\quad (m_{11}, m_{12}, m_{21}, m_{22}) = (2, -1, -3, 2)$, so that $m_{11}m_{22} - m_{12}m_{21} = 4 - 3 = 1$.

Then:

(2) $\quad g(2x - y, -3x + 2y) = 13(2x - y)^2 + 16(2x - y)(-3x + 2y) + 5(-3x + 2y)^2 = x^2 + y^2 = f(x, y)$.

Therefore $f$ and $g$ are equivalent binary quadratic forms. Now let $f(x, y) = ax^2 + bxy + cy^2$ with discriminant $d$ and let $g(x, y) = Ax^2 + Bxy + Cy^2$ with discriminant $D$. Suppose that $f \sim g$ so that there exists integers $m_{11}, m_{12}, m_{21}, m_{22} \in \mathbb{Z}$ such that $f(m_{11}x + m_{12}y, m_{21}x + m_{22}y) = g(x, y)$ where $m_{11}m_{22} - m_{12}m_{21} = 1$. Let:

(3) $\quad F = \begin{bmatrix} a & b/2 \\ b/2 & c \end{bmatrix}$, $\quad G = \begin{bmatrix} A & B/2 \\ B/2 & C \end{bmatrix}$, $\quad M = \begin{bmatrix} m_{11} & m_{12} \\ m_{21} & m_{22} \end{bmatrix}$.

There are a few observations that can be made. First, observe that:

(4) $\quad \det F = ac - \frac{b^2}{4} = -\frac{d}{4}$.

Similarly:

(5) $\quad \det G = AC - \frac{B^2}{4} = -\frac{D}{4}$.

Since $f \sim g$ we have that:

(6) $\quad G = M^T F M, \quad \text{so} \quad \det G = \det \left(M^T\right) \det (F) \det (M) = \det F$.

Therefore, if $f \sim g$ then we have that:

(7) $\quad d = D$.

Proposition 1: Let $f$, $g$, and $h$ be binary quadratic forms. a) $f \sim f$ (Reflexivity Property). b) If $f \sim g$ then $g \sim f$ (Symmetry Property). c) If $f \sim g$ and $g \sim h$ then $f \sim h$ (Transitivity Property). Properties (a), (b), and (c) tell us that equivalence of binary quadratic forms is an equivalence relation. Proof of a) Let $M = I_2$, the $2 \times 2$ identity matrix. Then clearly $F = I^T F I$. So $f \sim f$. Proof of b) Since $f \sim g$ we have that $G = M^T F M$ where $\det (M) = 1$. Since $\det (M) = 1$, the matrix $M$ is invertible with integer entries and $\det \left(M^{-1}\right) = 1$, and we have that: $F = \left(M^{-1}\right)^T G M^{-1}$. So $g \sim f$. Proof of c) Suppose that $f \sim g$ and $g \sim h$. Then $G = M^T F M$ and $H = N^T G N$ where $\det (M) = 1$ and $\det (N) = 1$. So: $H = N^T M^T F M N = (MN)^T F (MN)$. Observe that $\det (MN) = \det(M) \cdot \det(N) = 1$. So $f \sim h$. $\blacksquare$ Proposition 2: Let $f$ and $g$ be binary quadratic forms with discriminants $d$ and $D$ respectively. 
a) If $f \sim g$ and $n \in \mathbb{Z}$ then $n$ can be represented by $f(x, y)$ if and only if $n$ can be represented by $g(x, y)$. b) If $f \sim g$ then $d = D$. It is important to note that the converse of (b) is not true in general. That is, $d = D$ does NOT imply that $f \sim g$.
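The equivalence of the worked example and the discriminant invariance of Proposition 2b can be verified numerically. In the sketch below, the unimodular substitution $(x, y) \mapsto (2x - y, -3x + 2y)$ is one valid choice of determinant 1 (not necessarily the one used in the page's original display equations):

```python
# Sketch: verify f ~ g for f = x^2 + y^2 and g = 13x^2 + 16xy + 5y^2,
# and check that their discriminants b^2 - 4ac agree (Proposition 2b).
def f(x, y):
    return x * x + y * y

def g(x, y):
    return 13 * x * x + 16 * x * y + 5 * y * y

# A unimodular change of variables (hypothetical but valid choice):
m11, m12, m21, m22 = 2, -1, -3, 2
assert m11 * m22 - m12 * m21 == 1

# f(x, y) == g(m11*x + m12*y, m21*x + m22*y) on a grid of integer points:
ok = all(f(x, y) == g(m11 * x + m12 * y, m21 * x + m22 * y)
         for x in range(-10, 11) for y in range(-10, 11))

d_f = 0 * 0 - 4 * 1 * 1      # f: a=1, b=0, c=1  -> discriminant -4
d_g = 16 * 16 - 4 * 13 * 5   # g: A=13, B=16, C=5 -> discriminant -4
print(ok, d_f, d_g)  # True -4 -4
```

Both forms thus represent the same integers and share the discriminant $-4$, exactly as Propositions 2a and 2b predict.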
\(\newcommand{\cauchy}{\boldsymbol{\sigma}}\) \(\newcommand{\strain}{\boldsymbol{\varepsilon}}\) \(\newcommand{\uV}{\boldsymbol}\) \(\newcommand{\uT}{\boldsymbol}\) \(\newcommand{\defu}{\boldsymbol{u}}\) HRR theory > Other HRR solutions In the previous section, we have derived the expressions of the near-tip stress and strain fields for a mode I crack in plane-$\varepsilon$ state. Moreover an incompressible material has been assumed. In this section, a brief summary is given for other HRR solutions that were derived in the literature: Plane strain deformation near a crack tip in a power-law hardening material, J. R. Rice, G. F. Rosengren, Journal of the Mechanics and Physics of Solids 16 (1968), 1-12; Singular behaviour at the end of a tensile crack in a hardening material, J. W. Hutchinson, Journal of the Mechanics and Physics of Solids 16 (1968), 13-31; Plastic stress and strain fields at a crack tip, J. W. Hutchinson, Journal of the Mechanics and Physics of Solids 16 (1968), 337-347; Fully plastic solutions and large scale yielding estimates for plane stress crack problems, C. F. Shih, J. W. Hutchinson, Journal of Engineering Materials and Technology 98 (1976), 289-295; Requirements for a one parameter characterization of crack tip fields by the HRR singularity, C. F. Shih, M. D. German, International Journal of Fracture 17(1) (1981), 27-43. Mode I crack in plane-$\sigma$ state Still assuming an incompressible material, a similar analysis as in the plane-$\varepsilon$ state can be performed and yields different component distributions $\tilde{\cauchy} \left(\theta,n\right)$, $\tilde{\strain} \left(\theta,n\right)$, and $\tilde{\bf{u}} \left(\theta,n\right)$. Near-tip stress field As we have derived in the previous section, the near-tip stress field reads \begin{equation} \cauchy=\sigma_p^0\left(\frac{JE}{r\alpha\left(\sigma_p^0\right)^2I_n}\right)^{\frac{1}{n+1}}\tilde{\cauchy}\left(\theta,\,n\right) \, \text{.} \end{equation} 
Pictures VIII.17-VIII.20 compare the stress shape tensor $\tilde{\sigma}$ components between plane-$\sigma$ and plane-$\varepsilon$ conditions for two different hardening parameters. Moreover the integral $I_n$ is reported in Picture VIII.21 for the two different cases. These figures allow comparing the difference in the crack tip loading, for instance through the asymptotic hoop stress $\sigma_{\theta\theta}$, between a plane-$\varepsilon$ and a plane-$\sigma$ states. Indeed, for the same value of the $J$-integral, and at the same crack tip distance $r$, the ratios between the two hoop stresses for two different hardening parameters are \begin{equation} \frac{\left.\cauchy_{\theta\theta}\left(0,\,r\right)\right|_{n=3,\,\text{plane-}\varepsilon}}{\left.\cauchy_{\theta\theta}\left(0,\,r\right)\right|_{n=3,\,\text{plane-}\sigma}} \simeq\frac{1.9}{1.1}\left(\frac{3.86}{5.81}\right)^\frac{1}{4} = 1.56 \, \text{,} \end{equation} \begin{equation} \frac{\left.\cauchy_{\theta\theta}\left(0,\,r\right)\right|_{n=13,\,\text{plane-}\varepsilon}}{\left.\cauchy_{\theta\theta}\left(0,\,r\right)\right|_{n=13,\,\text{plane-}\sigma}} \simeq\frac{2.6}{1.2}\left(\frac{2.87}{3.4}\right)^\frac{1}{14} = 2.14\,. \end{equation} It can be seen that because the material cannot expand laterally, the crack tip is more loaded in plane-$\varepsilon$ state, for the same value of the $J$-integral. Process zone shapes The process zone was defined as the zone in which $\sigma_e > \sigma_p^0$, whose boundary corresponds to $\sigma_e=\sigma_p^0$, and was found to be \begin{equation} \tilde{r}\left(\theta,\,n\right) = \frac{\left(\tilde{\sigma}_e\left(\theta,\,n\right)\right)^{n+1}}{I_n} \, \text{,}\label{eq:processZone} \end{equation} in terms of a non-dimensional radius $\tilde{r}= \frac{r \alpha \left(\sigma_p^0\right)^2 }{JE}$. The process zones for $n=$ are illustrated in Picture VIII.24 for the plane-$\sigma$ and plane-$\varepsilon$ states. 
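The two hoop-stress ratios quoted above are simple to check numerically. The sketch below reproduces them from the $\tilde{\sigma}_{\theta\theta}$ and $I_n$ values read off the figures (the values themselves come from the text):

```python
# Reproducing the plane-epsilon / plane-sigma hoop stress ratios.
# ratio = (tilde_sig_eps / tilde_sig_sig) * (I_n_sig / I_n_eps)^(1/(n+1))
ratio_n3 = (1.9 / 1.1) * (3.86 / 5.81)**(1.0 / 4.0)    # n = 3
ratio_n13 = (2.6 / 1.2) * (2.87 / 3.4)**(1.0 / 14.0)   # n = 13

print(round(ratio_n3, 2), round(ratio_n13, 2))  # ~1.56 and ~2.14
```

Note how weakly the $I_n$ values enter through the $1/(n+1)$ exponent; the ratio is dominated by the angular distributions $\tilde{\sigma}_{\theta\theta}$.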
It can be seen that the process zone is more diffuse in plane-$\sigma$ state. Crack Tip Opening Displacement We have seen that there exists a direct relation between the crack tip opening displacement and the $J$-integral: \begin{equation} \delta_t = d_n \frac{J}{\sigma^0_p}\,, \end{equation} where $d_n$ depends on $n$ but also on the elastic strain of the considered material $\frac{\alpha\sigma^0_p}{E}$. The evolutions of $d_n$ in the cases of plane-$\varepsilon$ and plane-$\sigma$ states are respectively illustrated in Picture VIII.22 and Picture VIII.23. Mode III crack loading Assuming SSY, the process zone develops under mode III following a circular shape, as illustrated in Picture VIII.25 and Picture VIII.26 for respectively a perfectly plastic material and an elasto-plastic material with strain hardening. Mode II and mixed mode crack loading In SSY, the solution depends on the elastic mixity parameter given by \begin{equation} M^e = \frac{2}{\pi}\arctan{\frac{K_I}{K_{II}}}\,, \end{equation} which ranges from 0 (Mode II crack loading) to 1 (Mode I crack loading). Varying the mixity parameter, the process zone changes orientation, from a "horizontal" shape in Mode II crack loading, as depicted in Picture VIII.27, rotating upward for a mixed mode crack loading as depicted in Picture VIII.28. Compressibility effect The HRR theory was obtained assuming an incompressible material. Using the finite element approach, it is possible to solve the problem with a real elasto-plastic law for which the elastic part follows a classical Hooke's law characterized by a Poisson ratio $\nu$. Considering a crack loaded in mode I under plane-$\varepsilon$ state and assuming SSY, the effect of the Poisson ratio on the process zone is depicted in Picture VIII.29. The plastic zone size $r_p$ is defined as the length of the plastic zone ahead of the crack tip, and increases with the material compressibility.
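The elastic mixity parameter defined above is easy to sketch; using `atan2` handles the pure-mode-I limit $K_{II}\rightarrow 0$ cleanly (a minimal illustration, with the convention that $M^e=1$ is pure Mode I and $M^e=0$ is pure Mode II):

```python
import math

# Elastic mixity parameter M^e = (2/pi) * arctan(K_I / K_II).
def mixity(K_I, K_II):
    return 2.0 / math.pi * math.atan2(K_I, K_II)

print(mixity(1.0, 0.0))  # ~1   -> pure Mode I
print(mixity(0.0, 1.0))  # 0    -> pure Mode II
print(mixity(1.0, 1.0))  # ~0.5 -> balanced mixed mode
```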
The problem states: Let $p$ and $q$ be distinct prime numbers with $p \equiv q \equiv 3\pmod 4$. Prove that if the congruence $x^2 \equiv p \pmod q$ is not solvable, then the congruence $x^2 \equiv q \pmod p$ has exactly two incongruent solutions modulo $p$. I feel like I'm supposed to do something with the fact that $p \equiv q \equiv 3\pmod 4$ means that $(\frac pq) = (-1)*(\frac qp)$, but I'm not sure how that would help me. In other words, I'm lost. Update: I worked on it and this is my solution. If $p \equiv q \equiv 3 \pmod 4$ then $p,q > 2$, but also it implies that $(\frac pq)(\frac qp) = (-1)^{\frac {(p-1)(q-1)}4}=-1$. This means that $(\frac pq)=1$ or $(\frac pq)=-1$ and $(\frac qp)$ is the opposite. So if $(\frac pq)=-1$ then $(\frac qp)=1$, and $x^2 \equiv p \pmod q$ doesn't have a solution while $x^2 \equiv q \pmod p$ does: if $x_0$ is a solution then so is $-x_0$, and $x_0 \not\equiv -x_0 \pmod p$ since $p$ is odd and $p \nmid q$; moreover any solution $x$ satisfies $p \mid (x-x_0)(x+x_0)$, so $x \equiv \pm x_0 \pmod p$. Hence $x^2 \equiv q \pmod p$ has exactly two incongruent solutions modulo $p$.
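A quick numerical sanity check of the statement (a sketch using the smallest primes $p \equiv q \equiv 3 \pmod 4$ for which the hypothesis holds, namely $p=3$, $q=7$), with quadratic-residue testing via Euler's criterion:

```python
# Euler's criterion: for an odd prime p and gcd(a, p) = 1,
# a is a quadratic residue mod p iff a^((p-1)/2) = 1 (mod p).
def is_qr(a, p):
    return pow(a, (p - 1) // 2, p) == 1

p, q = 3, 7
assert p % 4 == 3 and q % 4 == 3

print(is_qr(p, q))  # False: x^2 = 3 (mod 7) is not solvable (hypothesis)
print(is_qr(q, p))  # True:  x^2 = 7 (mod 3) is solvable

sols = [x for x in range(p) if (x * x - q) % p == 0]
print(sols)  # [1, 2] -- exactly two incongruent solutions mod p
```

The exhaustive search confirms the claim: the solutions are precisely $\pm x_0 \bmod p$.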
\(\newcommand{\cauchy}{\boldsymbol{\sigma}}\) \(\newcommand{\strain}{\boldsymbol{\varepsilon}}\) \(\newcommand{\uV}{\boldsymbol}\) \(\newcommand{\uT}{\boldsymbol}\) \(\newcommand{\defu}{\boldsymbol{u}}\) HRR theory > Outcomes of the HRR models So far, we have derived the HRR asymptotic solution. At this point, we want to discuss the practical outcomes of the theory, and what we can actually learn from these models. HRR solution can explain 3D effects One of the advantages of the HRR solution is that it can explain 3D effects in crack propagation. Let us assume a specimen with an edge-crack. When applying the HRR theory, the solution is either plane-$\sigma$-like near the free surfaces (where the material can actually expand laterally); or plane-$\varepsilon$-like near the mid-plane of the specimen (where by symmetry the material does not expand laterally). Both solutions are depicted in Picture VIII.30, where it can be seen that the shape of the process zone evolves from a "propeller-like" shape at mid-section to the more diffuse process zone at the specimen free faces. Although the $J$-integral is formally not constant along the crack front through the thickness, it remains that there is a transition in the plastic zone shape and size; this transition is responsible for the shear lips; shear lips form at a 45$^{\circ}$ angle since $\sigma_{zz}=0$, so that the maximum shear stress is at 45$^{\circ}$ in the plane Oyz, see Picture VIII.31. Assuming SSY, and performing a toughness test, the real value of $K$ changes along the crack front, so that the measured value of $K$ is an average one. 
As a result, the measured value of $K$ is larger for a thin specimen, since the part of the specimen under plane-$\varepsilon$ state is relatively less important; for a thick specimen, the process zone is mainly under plane-$\varepsilon$ state, although the free surfaces remain under plane-$\sigma$ state, and the measured $K$ stabilizes to the so-called plane-$\varepsilon$ critical $K$, the toughness, as indicated in Picture VIII.32, see also Lecture 2. Nevertheless, there actually exist complex 3D effects: Under SSY assumption, even for a thin specimen, a plane-$\varepsilon$ state generally develops near the mid-plane. However, if the load increases (or the specimen thickness decreases), the plastic zone can become plane-$\sigma$-like near the mid-plane. Rigorously, the effect of the specimen thickness results from a triaxiality effect. Remember that under SSY assumption, the stress intensity factor characterizes the dominant term at crack tip, but at a larger distance, the other series terms are not negligible anymore, see Lecture 5. Effect of $T$-stress, see Picture VIII.33, should also be considered: Recall the $T$-stress is the zeroth-order term obtained with the asymptotic solution, which is dominant at a distance $r_c$ from the crack tip; In general, if the toughness test is such that $T<0$, the measured fracture $K$ will be larger than for $T>0$, independently of the thickness (see D.J. Smith, M.R. Ayatollahi and M.J. Pavier, Proc. R. Soc. A, 2006, vol. 462, pp. 2415-2437). For ASTM toughness tests, the thickness is large, $t > 2.5 \left(\frac{K_C}{\sigma_p^0}\right)^2$, so that $T>0$. Effective crack length for SSY Another outcome of the HRR theory was the prediction of the process zone size. 
If the SSY assumption holds true, then $J$ can be expressed in terms of $K$; Then the plastic zone size can be written $r \propto \frac{JE}{\left(\sigma_p^0\right)^2} \propto \left(\frac{K_I}{\sigma_p^0}\right)^2$; However, there are dependencies on The parameters $n$ (plasticity) and $\nu$ (compressibility) and Whether it is a plane-$\varepsilon$ or plane-$\sigma$ state. In the case of perfectly plastic materials $\left( n\rightarrow \infty \right)$, Rice has developed a model to predict the process zone size, based on a redistribution of the stress at the crack tip, see Picture VIII.34. When assuming linear elasticity, the Mode I asymptotic solution predicts an infinite stress at the crack tip, following \begin{equation} \cauchy_{yy}\left(r,\theta=0\right) = \frac{K_I}{\sqrt{2\pi r}}\,, \label{eq:LEFM}\end{equation} the blue line, which is not physical. At the crack tip the stress is limited by the yield stress $\sigma_p^0$. However, there should be a stress redistribution so that the total load remains constant, leading to the green line, for which the process zone extends over a length $r_p$. The model is then solved in two steps: Evaluation of the intersection of the linear elastic solution with the yield stress at a fraction $\eta$ of the process zone. Using Eq. (\ref{eq:LEFM}) yields \begin{equation}\label{eq:rice_1} \sqrt{\eta r_p} = \frac{1}{\sqrt{2\pi}}\frac{K_I}{\sigma_p^0}\,. \end{equation} Equivalence between the total traction values of the LEFM solution (\ref{eq:LEFM}) and of the shifted green solution. Outside the plastic zone, the stress distribution is shifted, and the two hashed parts of Picture VIII.34 should have the same area: \begin{equation}\label{eq:rice_2} \sigma_p^0 \left(1-\eta\right)r_p = \int_0^{\eta r_p} \left(\cauchy_{yy}-\sigma_p^0\right) dr\,,\end{equation} where $\cauchy_{yy}$ follows Eq. (\ref{eq:LEFM}). We thus have two equations with two unknowns, $\eta$ and $r_p$. Using Eq.
(\ref{eq:LEFM}) in (\ref{eq:rice_2}) yields \begin{equation}\label{eq:rice_3} \sigma_p^0 r_p = \int_0^{\eta r_p} \frac{K_I}{\sqrt{2\pi r}} dr \,,\end{equation} so that the plastic zone becomes \begin{equation}\label{eq:rice} r_p = \frac{ 2K_I}{\sqrt{2\pi}\sigma_p^0}\sqrt{ \eta r_p} = \frac{K_I^2}{\pi} \frac{1}{\left(\sigma_p^0\right)^2}\,, \end{equation} with $\eta = \frac{1}{2}$. So the behavior of perfectly plastic materials is as if the crack had an effective length of $a+\eta r_p = a + r_p/2$ on which the LEFM solution is applied. Nevertheless, this solution was obtained for perfectly plastic materials. From numerical simulations considering Blunting; Compressibility; Hardening; a more general estimate is \begin{equation}\label{eq:etarP} \eta r_p=\frac{r_p}{2} = \left\{ \begin{array}{cc} \frac{n-1}{n+1}\frac{1}{2\pi}\left(\frac{K_I\left(a_\text{eff}\right)}{\sigma_p^0}\right)^2 & \text{ in plane-}\sigma\\ \frac{n-1}{n+1}\frac{1}{6\pi}\left(\frac{K_I\left(a_\text{eff}\right)}{\sigma_p^0}\right)^2 & \text{ in plane-}\varepsilon\text{.} \end{array}\right. \end{equation} Therefore, if $\sigma_{\infty}$ remains lower than about $50\%$ of $\sigma_p^0$ (to avoid large scale yielding), then a second-order SSY assumption holds true, and the LEFM methodology can be adapted as follows: Assume the process zone $r_p$ remains small compared to the crack size; The effective crack length is evaluated as $a_{\text{eff}}=a + \eta r_p = a + r_p/2$ with $\eta r_p$ as stated in (\ref{eq:etarP}); The stress intensity factor is evaluated from the effective crack length as $K_I\left(a_{\text{eff}}\right)$. So there is an iterative procedure to follow: Compute $K$ from $a$; Compute the effective crack size; Compute the new $K$ from $a_{\text{eff}}$ and go back to step 2 if needed. This approach is a correction for linear fracture mechanics, but it does not allow considering problems with large-scale yielding.
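This iterative procedure can be sketched in a few lines. This is only an illustration: the geometry factor is assumed to be that of a center crack in an infinite plate, $K_I=\sigma\sqrt{\pi a}$, the plane-$\sigma$ estimate of $\eta r_p$ is used, and the function names are hypothetical; a real application must use the $K(a)$ expression of its own specimen geometry.

```python
import math

# Sketch of the iterative effective-crack-length procedure (SSY correction).
# ASSUMPTIONS: geometry factor of a center crack in an infinite plate,
# K_I = sigma*sqrt(pi*a); plane-sigma estimate of eta*r_p; names are ours.

def K_I(sigma, a):
    """Mode I SIF for the assumed infinite-plate geometry."""
    return sigma * math.sqrt(math.pi * a)

def effective_K(sigma, a, sigma_p0, n, plane="sigma", tol=1e-10, it_max=100):
    """Step 1: K from a_eff; step 2: a_eff = a + eta*r_p; repeat to convergence."""
    denom = 2.0 * math.pi if plane == "sigma" else 6.0 * math.pi
    a_eff = a
    for _ in range(it_max):
        K = K_I(sigma, a_eff)
        eta_rp = (n - 1.0) / (n + 1.0) / denom * (K / sigma_p0) ** 2
        a_new = a + eta_rp
        if abs(a_new - a_eff) < tol * a:
            break
        a_eff = a_new
    return K_I(sigma, a_eff), a_eff

# Example: sigma_infinity = 0.25*sigma_p0, well inside SSY (units: MPa, m)
K_eff, a_eff = effective_K(sigma=100.0, a=0.02, sigma_p0=400.0, n=10)
```

As expected, the converged $K_I(a_{\text{eff}})$ is slightly larger than $K_I(a)$, since the effective crack is slightly longer than the physical one.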
Quiz 2 Greatest Hits Logic Equivalences The distributive laws read a\land (b\lor c)\equiv (a\land b) \lor (a\land c) and a\lor (b\land c)\equiv (a\lor b) \land (a\lor c). A bunch of other things do not obey distributive laws: There's no distributive law for implication. (see also below) You should definitely not distribute negations over \land and \lor. That's what DeMorgan's law is for. What is a predicate? What isn't? a|b is a predicate, i.e. it results in a value of true or false. Thus you cannot write things like a|b\in \mathbb N. Which variable needs to be quantified? Which one doesn't? You need to quantify a variable which isn't either given or already quantified. I.e. if you're defining the predicate P(n) as P(n)\equiv \dots, then within the dots, the meaning of n is clear (it's the argument of the predicate). If you're stating an implication, then generally all variables need to be quantified. In general, if you write more than one quantifier for the same variable within one statement, what you are writing is wrong. Each quantifier gives a meaning to the variable within its 'reach' or 'quantified area'. The variable cannot be meaningfully used outside that area. In particular: WRONG: 3| x \land \forall x\in \mathbb N, x \bmod 2. (x outside (in front of) quantified area) Rewriting Rules for Implication We know exactly two rewriting rules for the implication: a\rightarrow b \equiv \neg b \rightarrow \neg a (contrapositive) a\rightarrow b \equiv \neg a\lor b Therefore, if the contrapositive doesn't help in a given situation, you need to use the second rule. Once you have the statement in terms of only \land, \lor and \neg, all the rewriting rules we learned in Section 1.3 are available, and you'll have a much easier time. Notation Confusion a|b means: a divides b, i.e. b is divisible by a. So in problem 3, it's always 3|\text{stuff}.
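All of these equivalences are over finitely many truth values, so you can check them exhaustively. Here is a small Python truth-table check (the helper `implies` is just our encoding of a\rightarrow b):

```python
from itertools import product

# Brute-force truth-table check of the equivalences discussed above.

def implies(a, b):
    """Encodes a -> b as a boolean function."""
    return (not a) or b

for a, b, c in product([False, True], repeat=3):
    # distributive laws
    assert (a and (b or c)) == ((a and b) or (a and c))
    assert (a or (b and c)) == ((a or b) and (a or c))
    # De Morgan: negation does NOT distribute; it flips the connective
    assert (not (a and b)) == ((not a) or (not b))
    # the two rewriting rules for implication
    assert implies(a, b) == implies(not b, not a)   # contrapositive
    assert implies(a, b) == ((not a) or b)
```

If any of these equivalences were wrong, one of the assertions would fail for some row of the truth table.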
Different ways of denoting divisibility All these things say the same thing: 3 divides a a is divisible by 3 3|a \exists k \in \mathbb Z, a=3k a\bmod 3=0 All these, too: \exists q\in \mathbb Z,a/3 = q R 1 \exists k \in \mathbb Z, a=3k+1 a\bmod 3=1 You should be comfortable moving between these different ways of stating divisibility.
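As a quick sanity check, the different formulations can be compared mechanically (a sketch with our own helper names); note that Python's `%` operator returns a remainder in \{0,1,2\} even for negative a, matching the a\bmod 3 convention above:

```python
# Check that the "mod" and "exists a witness k" formulations agree.

def div3_mod(a):
    return a % 3 == 0

def div3_witness(a):
    # exists k in Z with a = 3k (a bounded search suffices, since |k| <= |a|+1)
    return any(a == 3 * k for k in range(-abs(a) - 1, abs(a) + 2))

def rem1_mod(a):
    return a % 3 == 1

def rem1_witness(a):
    # exists k in Z with a = 3k + 1
    return any(a == 3 * k + 1 for k in range(-abs(a) - 1, abs(a) + 2))

for a in range(-20, 21):
    assert div3_mod(a) == div3_witness(a)
    assert rem1_mod(a) == rem1_witness(a)
```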
Let $X/k$ be a scheme of finite type over a field $k$, let $G/k$ be a finite group scheme, and suppose $G$ acts on $X$, i.e. we have a $k$-morphism $ \mu : G \times_k X \rightarrow X$ satisfying some conditions, see Mumford's book "Abelian Varieties", p. 108. Suppose that $k$ is algebraically closed and consider the map induced by $\mu$ on closed points, $G(k) \times X(k) \rightarrow X(k)$, which gives $G(k) \rightarrow \mathrm{Aut}(X(k))$. Under the assumption that each $x \in X$ has an open affine neighborhood $U$ which is invariant under $G$, one can form the quotient $X/G$. In particular, quasi-projective varieties have this property. This assumption reduces the construction from the case of a general variety to the affine case. For a general field $k$, if one can cover $X$ by open affine subsets $U_i$ such that the image of $ G \times_k U_i$ under $\mu$ is contained in $U_i$, then one reduces the construction to the affine case. But I couldn't figure out whether a quasi-projective variety $X$ over $k$ always has such a covering. By base change to the algebraic closure of $k$, one gets a $G$-invariant open affine covering of $X_{\overline{k}}$. The images of these open affine subsets of $X_{\overline{k}}$ under the projection to $X$ are still $G$-invariant, but not necessarily affine. So for quasi-projective varieties over $k$, do we have the existence of the quotient $X/G$? Another question: if it exists, is it also quasi-projective? What I know is that, in the classical case (over an algebraically closed field, with a finite group acting via $H \rightarrow \mathrm{Aut}_k (X)$), if the quotient exists and $X$ is complete, then $X/H$ is also complete.
Production of charged pions, kaons and protons at large transverse momenta in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV (Elsevier, 2014-09) Transverse momentum spectra of $\pi^{\pm}, K^{\pm}$ and $p(\bar{p})$ up to $p_T$ = 20 GeV/c at mid-rapidity, |y| $\le$ 0.8, in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV have been measured using the ALICE detector ...
Kaon femtoscopy in Pb-Pb collisions at $\sqrt{s_{\rm{NN}}}$ = 2.76 TeV (Elsevier, 2017-12-21) We present the results of three-dimensional femtoscopic analyses for charged and neutral kaons recorded by ALICE in Pb-Pb collisions at $\sqrt{s_{\rm{NN}}}$ = 2.76 TeV. Femtoscopy is used to measure the space-time ... Anomalous evolution of the near-side jet peak shape in Pb-Pb collisions at $\sqrt{s_{\mathrm{NN}}}$ = 2.76 TeV (American Physical Society, 2017-09-08) The measurement of two-particle angular correlations is a powerful tool to study jet quenching in a $p_{\mathrm{T}}$ region inaccessible by direct jet identification. In these measurements pseudorapidity ($\Delta\eta$) and ... Online data compression in the ALICE O$^2$ facility (IOP, 2017) The ALICE Collaboration and the ALICE O2 project have carried out detailed studies for a new online computing facility planned to be deployed for Run 3 of the Large Hadron Collider (LHC) at CERN. Some of the main aspects ... Evolution of the longitudinal and azimuthal structure of the near-side peak in Pb–Pb collisions at $\sqrt{s_{\rm NN}}=2.76$ TeV (American Physical Society, 2017-09-08) In two-particle angular correlation measurements, jets give rise to a near-side peak, formed by particles associated to a higher $p_{\mathrm{T}}$ trigger particle. Measurements of these correlations as a function of ... J/$\psi$ elliptic flow in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 5.02 TeV (American Physical Society, 2017-12-15) We report a precise measurement of the J/$\psi$ elliptic flow in Pb-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV with the ALICE detector at the LHC. The J/$\psi$ mesons are reconstructed at mid-rapidity ($|y| < 0.9$) ... Highlights of experimental results from ALICE (Elsevier, 2017-11) Highlights of recent results from the ALICE collaboration are presented.
The collision systems investigated are Pb–Pb, p–Pb, and pp, and results from studies of bulk particle production, azimuthal correlations, open and ... Event activity-dependence of jet production in p-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV measured with semi-inclusive hadron+jet correlations by ALICE (Elsevier, 2017-11) We report measurement of the semi-inclusive distribution of charged-particle jets recoiling from a high transverse momentum ($p_{\rm T}$) hadron trigger, for p-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV, in p-Pb events ... System-size dependence of the charged-particle pseudorapidity density at $\sqrt {s_{NN}}$ = 5.02 TeV with ALICE (Elsevier, 2017-11) We present the charged-particle pseudorapidity density in pp, p–Pb, and Pb–Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV over a broad pseudorapidity range. The distributions are determined using the same experimental apparatus and ... Photoproduction of heavy vector mesons in ultra-peripheral Pb–Pb collisions (Elsevier, 2017-11) Ultra-peripheral Pb-Pb collisions, in which the two nuclei pass close to each other, but at an impact parameter greater than the sum of their radii, provide information about the initial state of nuclei. In particular, ... Measurement of $J/\psi$ production as a function of event multiplicity in pp collisions at $\sqrt{s} = 13\,\mathrm{TeV}$ with ALICE (Elsevier, 2017-11) The availability at the LHC of the largest collision energy in pp collisions allows a significant advance in the measurement of $J/\psi$ production as function of event multiplicity. The interesting relative increase ...
For boolean-valued functions $f : \{-1,1\}^n \to \{-1,1\}$ the total influence has several additional interpretations. First, it is often referred to as the average sensitivity of $f$ because of the following proposition: Proposition 27 For $f : \{-1,1\}^n \to \{-1,1\}$ \[ \mathbf{I}[f] = \mathop{\bf E}_{{\boldsymbol{x}}}[\mathrm{sens}_f({\boldsymbol{x}})], \] where $\mathrm{sens}_f(x)$ is the sensitivity of $f$ at $x$, defined to be the number of pivotal coordinates for $f$ on input $x$. Proof: \begin{multline*} \mathbf{I}[f] = \sum_{i=1}^n \mathbf{Inf}_i[f] = \sum_{i=1}^n \mathop{\bf Pr}_{{\boldsymbol{x}}}[f({\boldsymbol{x}}) \neq f({\boldsymbol{x}}^{\oplus i})] \\ = \sum_{i=1}^n \mathop{\bf E}_{{\boldsymbol{x}}}[\boldsymbol{1}_{f({\boldsymbol{x}}) \neq f({\boldsymbol{x}}^{\oplus i})}] = \mathop{\bf E}_{{\boldsymbol{x}}}\left[\sum_{i=1}^n \boldsymbol{1}_{f({\boldsymbol{x}}) \neq f({\boldsymbol{x}}^{\oplus i})}\right] = \mathop{\bf E}_{{\boldsymbol{x}}}[\mathrm{sens}_f({\boldsymbol{x}})]. \quad \Box \end{multline*} The total influence of $f : \{-1,1\}^n \to \{-1,1\}$ is also closely related to the size of its edge boundary; from Fact 14 we deduce: Examples 29 (Recall Examples 15.) For boolean-valued functions $f : \{-1,1\}^n \to \{-1,1\}$ the total influence ranges between $0$ and $n$. It is minimized by the constant functions $\pm 1$ which have total influence $0$. It is maximized by the parity function $\chi_{[n]}$ and its negation which have total influence $n$; every coordinate is pivotal on every input for these functions. The dictator functions (and their negations) have total influence $1$. The total influence of $\mathrm{OR}_n$ and $\mathrm{AND}_n$ is very small: $n2^{1-n}$. On the other hand, the total influence of $\mathrm{Maj}_n$ is fairly large: roughly $\sqrt{2/\pi}\sqrt{n}$ for large $n$.
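As a quick illustration of Proposition 27, the total influence of a small boolean function can be computed by brute force as its average sensitivity (a sketch; the function names are ours):

```python
from itertools import product

# Compute I[f] = E_x[sens_f(x)] by enumerating all inputs of a small
# boolean function f: {-1,1}^n -> {-1,1}.

def total_influence(f, n):
    total = 0
    for x in product([-1, 1], repeat=n):
        for i in range(n):
            y = list(x)
            y[i] = -y[i]                  # flip coordinate i
            if f(x) != f(tuple(y)):       # coordinate i is pivotal at x
                total += 1
    return total / 2**n                   # average sensitivity

maj3 = lambda x: 1 if sum(x) > 0 else -1
parity3 = lambda x: x[0] * x[1] * x[2]

total_influence(maj3, 3)     # 1.5: each coordinate has influence 1/2
total_influence(parity3, 3)  # 3.0: every coordinate pivotal on every input
```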
By virtue of Proposition 20 we have another interpretation for the total influence of monotone functions: This sum of the degree-$1$ Fourier coefficients has a natural interpretation in social choice: Proposition 31 Let $f : \{-1,1\}^n \to \{-1,1\}$ be a voting rule for a $2$-candidate election. Given votes ${\boldsymbol{x}} = ({\boldsymbol{x}}_1, \dots, {\boldsymbol{x}}_n)$, let $\boldsymbol{w}$ be the number of votes which agree with the outcome of the election, $f({\boldsymbol{x}})$. Then \[ \mathop{\bf E}[\boldsymbol{w}] = \frac{n}{2} + \frac12 \sum_{i=1}^n \widehat{f}(i). \] Proof: By the formula for Fourier coefficients, \begin{equation} \label{eqn:deg-1-sum} \sum_{i=1}^n \widehat{f}(i) = \sum_{i=1}^n \mathop{\bf E}_{{\boldsymbol{x}}}[f({\boldsymbol{x}}) {\boldsymbol{x}}_i] = \mathop{\bf E}_{{\boldsymbol{x}}}[f({\boldsymbol{x}})({\boldsymbol{x}}_1 + {\boldsymbol{x}}_2 + \cdots + {\boldsymbol{x}}_n)]. \end{equation} Now ${\boldsymbol{x}}_1 + \cdots + {\boldsymbol{x}}_n$ equals the difference between the number of votes for candidate $1$ and the number of votes for candidate $-1$. Hence $f({\boldsymbol{x}})({\boldsymbol{x}}_1 + \cdots + {\boldsymbol{x}}_n)$ equals the difference between the number of votes for the winner and the number of votes for the loser; i.e., $\boldsymbol{w} - (n-\boldsymbol{w}) = 2\boldsymbol{w} - n$. The result follows. $\Box$ Rousseau [Rou62] suggested that the ideal voting rule is one which maximizes the number of votes which agree with the outcome. Here we show that the majority rule has this property (at least when $n$ is odd): Theorem 32 The unique maximizers of $\sum_{i=1}^n \widehat{f}(i)$ among all $f : \{-1,1\}^n \to \{-1,1\}$ are the majority functions. In particular, $\mathbf{I}[f] \leq \mathbf{I}[\mathrm{Maj}_n] = \sqrt{2/\pi}\sqrt{n} + O(n^{-1/2})$ for all monotone $f$.
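Proposition 31 can be checked numerically on a small example; this sketch (our own, not part of the text) does it for $\mathrm{Maj}_3$ by computing both sides directly:

```python
from itertools import product

# Check Prop. 31 for Maj_3: E[w] = n/2 + (1/2) * sum_i fhat(i),
# where w is the number of votes agreeing with the outcome.

n = 3
maj = lambda x: 1 if sum(x) > 0 else -1
inputs = list(product([-1, 1], repeat=n))

# degree-1 Fourier coefficients: fhat(i) = E[f(x) x_i]
fhat = [sum(maj(x) * x[i] for x in inputs) / 2**n for i in range(n)]

# expected number of agreeing votes, computed directly
E_w = sum(sum(1 for xi in x if xi == maj(x)) for x in inputs) / 2**n

E_w == n/2 + 0.5 * sum(fhat)   # True: both sides equal 2.25
```

Here each $\widehat{f}(i) = 1/2$, so the right-hand side is $3/2 + (1/2)(3/2) = 2.25$, matching the direct count.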
Proof: From \eqref{eqn:deg-1-sum}, \[ \sum_{i=1}^n \widehat{f}(i) = \mathop{\bf E}_{{\boldsymbol{x}}}[f({\boldsymbol{x}})({\boldsymbol{x}}_1 + {\boldsymbol{x}}_2 + \cdots + {\boldsymbol{x}}_n)] \leq \mathop{\bf E}_{{\boldsymbol{x}}}[|{\boldsymbol{x}}_1 + {\boldsymbol{x}}_2 + \cdots + {\boldsymbol{x}}_n|], \] since $f({\boldsymbol{x}}) \in \{-1,1\}$ always. Equality holds if and only if $f(x) = \mathrm{sgn}(x_1 + \cdots + x_n)$ whenever $x_1 + \cdots + x_n \neq 0$. The second statement of the theorem follows from Proposition 30 and Exercise 18 in this chapter. $\Box$ Let’s now take a look at more analytic expressions for the total influence. By definition, if $f : \{-1,1\}^n \to {\mathbb R}$ then \begin{equation} \label{eqn:tinf-gradient} \mathbf{I}[f] = \sum_{i=1}^n \mathbf{Inf}_i[f] = \sum_{i=1}^n \mathop{\bf E}_{{\boldsymbol{x}}}[\mathrm{D}_i f({\boldsymbol{x}})^2] = \mathop{\bf E}_{{\boldsymbol{x}}}\left[\sum_{i=1}^n \mathrm{D}_i f({\boldsymbol{x}})^2\right]. \end{equation} This motivates the following definition: Definition 33 The (discrete) gradient operator $\nabla$ maps the function $f : \{-1,1\}^n \to {\mathbb R}$ to the function $\nabla f : \{-1,1\}^n \to {\mathbb R}^n$ defined by \[ \nabla f(x) = (\mathrm{D}_1 f(x), \mathrm{D}_2 f(x), \dots, \mathrm{D}_n f(x)). \] Note that for $f : \{-1,1\}^n \to \{-1,1\}$ we have $\|\nabla f(x)\|_2^2 = \mathrm{sens}_f(x)$, where $\| \cdot \|_2$ is the usual Euclidean norm in ${\mathbb R}^n$. In general, from \eqref{eqn:tinf-gradient} we deduce: An alternative analytic definition involves introducing the Laplacian: Definition 35 The Laplacian operator $\mathrm{L}$ is the linear operator on functions $f : \{-1,1\}^n \to {\mathbb R}$ defined by $\mathrm{L} = \sum_{i=1}^n \mathrm{L}_i$.
In the exercises you are asked to verify the following: $\displaystyle \mathrm{L} f (x) = (n/2)\bigl(f(x) - \mathop{\mathrm{avg}}_{i \in [n]} \{f(x^{\oplus i})\}\bigr)$, $\displaystyle \mathrm{L} f (x) = f(x) \cdot \mathrm{sens}_f(x) \quad$ if $f : \{-1,1\}^n \to \{-1,1\}$, $\displaystyle \mathrm{L} f = \sum_{S \subseteq [n]} |S|\,\widehat{f}(S)\,\chi_S$, $\displaystyle \langle f, \mathrm{L} f \rangle = \mathbf{I}[f]$. We can obtain a Fourier formula for the total influence of a function using Theorem 19; when we sum that theorem over all $i \in [n]$ the Fourier weight $\widehat{f}(S)^2$ is counted exactly $|S|$ times. Hence: Theorem 37 For $f : \{-1,1\}^n \to {\mathbb R}$, \begin{equation} \label{eqn:total-influence-formula} \mathbf{I}[f] = \sum_{S \subseteq [n]} |S| \widehat{f}(S)^2 = \sum_{k=0}^n k \cdot \mathbf{W}^{k}[f]. \end{equation} For $f : \{-1,1\}^n \to \{-1,1\}$ we can express this using the spectral sample: \[ \mathbf{I}[f] = \mathop{\bf E}_{\boldsymbol{S} \sim \mathscr{S}_{f}}[|\boldsymbol{S}|]. \] Thus the total influence of $f : \{-1,1\}^n \to \{-1,1\}$ also measures the average “height” or degree of its Fourier weights. Finally, from Proposition 1.13 we have $\mathop{\bf Var}[f] = \sum_{k > 0} \mathbf{W}^{k}[f]$; comparing this with \eqref{eqn:total-influence-formula} we immediately deduce a simple but important fact called the Poincaré inequality. Poincaré Inequality For any $f : \{-1,1\}^n \to {\mathbb R}$, $\mathop{\bf Var}[f] \leq \mathbf{I}[f]$. Equality holds in the Poincaré inequality if and only if all of $f$’s Fourier weight is at degrees $0$ and $1$; i.e., $\mathbf{W}^{\leq 1}[f] = \mathop{\bf E}[f^2]$. For boolean-valued $f : \{-1,1\}^n \to \{-1,1\}$, Exercise 1.19 tells us this can only occur if $f = \pm 1$ or $f = \pm \chi_i$ for some $i$. For boolean-valued $f : \{-1,1\}^n \to \{-1,1\}$, the Poincaré inequality can be viewed as an (edge-)isoperimetric inequality, or (edge-)expansion bound, for the Hamming cube.
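Theorem 37 can likewise be checked by brute force on a small example. The sketch below (ours) computes the Fourier coefficients of $\mathrm{Maj}_3$ directly from the definition and compares $\sum_S |S|\widehat{f}(S)^2$ with the combinatorial total influence:

```python
from itertools import product

# Verify I[f] = sum_S |S| fhat(S)^2 (Theorem 37) for f = Maj_3.

n = 3
f = lambda x: 1 if sum(x) > 0 else -1
inputs = list(product([-1, 1], repeat=n))

def chi(S, x):                      # parity chi_S(x) = prod_{i in S} x_i
    p = 1
    for i in S:
        p *= x[i]
    return p

subsets = [tuple(i for i in range(n) if mask >> i & 1) for mask in range(2**n)]
fhat = {S: sum(f(x) * chi(S, x) for x in inputs) / 2**n for S in subsets}

I_fourier = sum(len(S) * fhat[S]**2 for S in subsets)

# total influence as average sensitivity, for comparison
I_comb = sum(1 for x in inputs for i in range(n)
             if f(x) != f(x[:i] + (-x[i],) + x[i+1:])) / 2**n

abs(I_fourier - I_comb) < 1e-12    # True: both equal 1.5 for Maj_3
```

The spectrum of $\mathrm{Maj}_3$ sits at degrees $1$ and $3$ (coefficients $\pm 1/2$), so the Fourier sum is $3 \cdot \tfrac14 + 3 \cdot \tfrac14 = 1.5$.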
If we think of $f$ as the indicator function for a set $A \subseteq \{-1,1\}^n$ of “measure” $\alpha = |A|/2^n$, then $\mathop{\bf Var}[f] = 4\alpha(1-\alpha)$ (Fact 1.14) whereas $\mathbf{I}[f]$ is $n$ times the (fractional) size of $A$’s edge boundary. In particular, the Poincaré inequality says that subsets $A \subseteq \{-1,1\}^n$ of measure $\alpha = 1/2$ must have edge boundary at least as large as those of the dictator sets. For $\alpha \notin \{0, 1/2, 1\}$ the Poincaré inequality is not sharp as an edge-isoperimetric inequality for the Hamming cube; for small $\alpha$ even the asymptotic dependence is not optimal. Precisely optimal edge-isoperimetric results (and also vertex-isoperimetric results) are known for the Hamming cube. The following simplified theorem is optimal for $\alpha$ of the form $2^{-i}$: This result illustrates an important recurring concept in the analysis of boolean functions: the Hamming cube is a “small-set expander”. Roughly speaking, this is the idea that “small” subsets $A \subseteq \{-1,1\}^n$ have unusually large “boundary size”.
\(\newcommand{\cauchy}{\boldsymbol{\sigma}}\) \(\newcommand{\strain}{\boldsymbol{\varepsilon}}\) \(\newcommand{\uV}{\boldsymbol}\) \(\newcommand{\uT}{\boldsymbol}\) \(\newcommand{\defu}{\boldsymbol{u}}\) HRR theory > Validity of HRR field Lastly, we want to discuss the validity and applicability of the HRR solution for different loading intensities. Small Scale Yielding (SSY) case Assuming the process zone remains confined at the crack tip and that a $K$-dominance zone develops, we are under Small Scale Yielding (SSY). In that case, there exists a zone, in light green in Picture VIII.35, in which the behavior is elastic and in which the asymptotic solution of the LEFM governed by the SIF $K$ holds. Outside this zone, since the solution is asymptotic, the structural loading is not captured. When getting closer to the crack tip, we enter the process zone, and the LEFM asymptotic solution is no longer valid. However, there is a part of the process zone in which the asymptotic HRR solution governed by the $J$-integral is valid and matches the real behavior. Note that, contrary to the real behavior, the asymptotic HRR solution is singular at the crack tip. There are thus two separate regions in which one of the two asymptotic solutions captures the evolution of the stress with the radius $r$, as illustrated in the log-scale plots. Fracture can thus be predicted using either the SIF or the $J$-integral.
Practically, the SSY condition is satisfied if the crack length $a$, the ligament $L$, see Picture VIII.37, but also the thickness $t$, if we want to avoid the dependence of the toughness on the thickness, are all larger than 25 times the plastic zone length $r_p$, that is, in plane-$\varepsilon$ state \begin{equation} a,\,t,\,L > 25 r_p \simeq \frac{25}{3\pi}\left(\frac{K_I}{\sigma_p^0}\right)^2\simeq 2.5 \left(\frac{K_I}{\sigma_p^0}\right)^2 \, \text{.} \label{eq:SSYcond}\end{equation} Since both the HRR and LEFM asymptotic solutions hold, the crack initiation criterion can be based on $J$ or $\delta_t$, such as $J \geq J_C$ or $\delta_t \geq \delta_C$. But as the LEFM solution holds, we can still use $K\left(a\right) \geq K_C$, which might be corrected by using the effective length $a_{\text{eff}}$ if $\sigma_{\infty} < 50\%$ of $\sigma_p^0$. When considering a toughness test or a crack propagation assessment, Eq. (\ref{eq:SSYcond}) becomes \begin{equation} a,\,t,\,L > 2.5 \left(\frac{K_C}{\sigma_p^0}\right)^2 \, \text{.} \label{eq:SSYKC}\end{equation} To provide some material examples for which the validity in SSY is ensured or not, a titanium alloy is compared to a mild steel: Titanium alloy 6%Al-4%V Yield stress: $\simeq$ 830 MPa; Toughness: $\simeq$ 55 MPa m$^{1/2}$; So Eq. (\ref{eq:SSYKC}) becomes $a$, $t$, $L$ $>$ 1.1 cm; Mild steel Yield stress: $\simeq$ 350 MPa; Toughness: $\simeq$ 250 MPa m$^{1/2}$; So Eq. (\ref{eq:SSYKC}) becomes $a$, $t$, $L$ $>$ 1.27 m !!! For a toughness test to be valid, all three dimensions, i.e. $a$, $t$, $L$, have to be larger than 1.1 cm for the titanium alloy. On the other hand, for the steel, the critical length rises to 1.27 m, which cannot be practically achieved. Elasto-plastic small deformation case When elasto-plasticity develops further from the crack tip, but still assuming that the deformations are small, the zone of $K$-dominance no longer exists.
Nevertheless, we still have one zone of $J$-dominance, part of the dark green zone in Picture VIII.37, in which the asymptotic HRR solution is valid, see Picture VIII.38. The asymptotic solution of the LEFM is NOT valid in the elastic zone, as this one is now located further from the crack tip, so that the structural loading cannot be neglected. The conditions to have a $J$-dominance zone are that all sizes are 25 times larger than the crack tip opening displacement (CTOD), i.e. \begin{equation} a,\,t,\,L > 25 \delta_t\simeq 25 \frac{J}{\sigma_p^0} \, \text{.} \label{eq:HRRcond}\end{equation} Once again, similarly to SSY, the crack initiation criteria based on $J$ or $\delta_t$, such as $J \geq J_C$ or $\delta_t \geq \delta_C$, are valid. $J$ and $\delta_t$ depend among others on the crack length $a$, the geometry and the loading. A main difference with SSY is that the LEFM solution DOES NOT hold since there is no $K$-dominance zone, and hence we CANNOT use $K\left(a\right) \geq K_C$. The toughness of a material should thus be evaluated through the $J$-integral, in which case Eq. (\ref{eq:HRRcond}) becomes \begin{equation} a,\,t,\,L > 25 \frac{J_C}{\sigma_p^0} \, \text{.} \label{eq:HRRJC}\end{equation} Let us have a look again at the two previously considered materials to have an estimation of the crack sizes so that the HRR field is valid in the elasto-plastic conditions. Titanium alloy 6%Al-4%V Yield stress: $\simeq$ 830 MPa; Toughness: $\simeq$ 55 MPa m$^{1/2}$; Young's modulus: $\simeq$ 110 GPa; So Eq. (\ref{eq:HRRJC}) becomes $a$, $t$, $L$ $>$ 0.75 mm; Mild steel Yield stress: $\simeq$ 350 MPa; Toughness: $\simeq$ 250 MPa m$^{1/2}$; Young's modulus: $\simeq$ 210 GPa; So Eq. (\ref{eq:HRRJC}) becomes $a$, $t$, $L$ $>$ 1.93 cm. We can observe that when measuring the critical $J_C$ instead of $K_C$ (which can be deduced from $E' J_C=K_{IC}^2$), the limit for the crack sizes decreases for both materials to a reasonably small value.
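The two size criteria above can be bundled into a small calculator. This is only a sketch using the approximate material data quoted in the text, with $\nu$ assumed to be 0.3 and $E' = E/(1-\nu^2)$ (plane-$\varepsilon$); the function names are ours.

```python
# Minimum specimen dimensions for a valid K-based (SSY) test and for
# J-dominance.  Units: MPa, MPa*m^(1/2), m.  nu = 0.3 is an assumption.

def ssy_min_size(K_C, sigma_p0):
    """Minimum a, t, L for a valid K-based (SSY) test: 2.5*(K_C/sigma_p0)^2."""
    return 2.5 * (K_C / sigma_p0) ** 2

def j_min_size(K_C, sigma_p0, E, nu=0.3):
    """Minimum a, t, L for J-dominance: 25*J_C/sigma_p0 with E' J_C = K_C^2."""
    E_prime = E / (1.0 - nu ** 2)
    J_C = K_C ** 2 / E_prime
    return 25.0 * J_C / sigma_p0

# Ti-6Al-4V: K_C = 55, sigma_p0 = 830, E = 110e3
ssy_min_size(55.0, 830.0)          # ~0.011 m  -> 1.1 cm
j_min_size(55.0, 830.0, 110e3)     # ~7.5e-4 m -> 0.75 mm

# Mild steel: K_C = 250, sigma_p0 = 350, E = 210e3
ssy_min_size(250.0, 350.0)         # ~1.28 m
j_min_size(250.0, 350.0, 210e3)    # ~0.019 m  -> 1.9 cm
```

These reproduce the numbers in the text: the $J$-based criterion shrinks the minimum specimen size for the mild steel from over a meter down to about 2 cm.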
So for ductile materials, the toughness $K_C$ cannot be measured experimentally, and is deduced from a measured $J_C$. Large scale yielding case Lastly, if the loading conditions keep increasing in amplitude, or if the ligament is too small, large plastic deformations arise. In such a case, the small deformation assumption does not hold, and neither the HRR field nor the LEFM asymptotic field is valid. This is illustrated in Picture VIII.40 and Picture VIII.41, where it can be seen that the asymptotic solutions are nowhere in agreement with the true stress field. For large scale yielding, defining a criterion for crack initiation is not an easy task as there is no zone of $J$-dominance. The question arises whether we can still use $J$ or not. Actually, the plastic strain concentrations depend on the experiment, which might be of the forms depicted in Picture VIII.43 to Picture VIII.44. It appears that the plastic zones are not reproducible from one test to another. Regarding the crack initiation criterion, we can say that the solution is no longer uniquely governed by $J$: the relation between $J$ and $\delta_t$ depends on the configuration and on the loading, and the critical $J_C$ measured for one experiment might not be valid for another one. A two-parameter characterization is thus required.
In a paper by Joos and Zeh, Z Phys B 59 (1985) 223, they say:This 'coming into being of classical properties' appears related to what Heisenberg may have meant by his famous remark [7]: 'Die "Bahn" entsteht erst dadurch, dass wir sie beobachten.'Google Translate says this means something ... @EmilioPisanty Tough call. It's technical language, so you wouldn't expect every German speaker to be able to provide a correct interpretation—it calls for someone who know how German is used in talking about quantum mechanics. Litmus are a London-based space rock band formed in 2000 by Martin (bass guitar/vocals), Simon (guitar/vocals) and Ben (drums), joined the following year by Andy Thompson (keyboards, 2001–2007) and Anton (synths). Matt Thompson joined on synth (2002–2004), while Marek replaced Ben in 2003. Oli Mayne (keyboards) joined in 2008, then left in 2010, along with Anton. As of November 2012 the line-up is Martin Litmus (bass/vocals), Simon Fiddler (guitar/vocals), Marek Bublik (drums) and James Hodkinson (keyboards/effects). They are influenced by mid-1970s Hawkwind and Black Sabbath, amongst others.They... @JohnRennie Well, they repeatedly stressed their model is "trust work time" where there are no fixed hours you have to be there, but unless the rest of my team are night owls like I am I will have to adapt ;) I think u can get a rough estimate, COVFEFE is 7 characters, probability of a 7-character length string being exactly that is $(1/26)^7\approx 1.2\times 10^{-10}$ so I guess you would have to type approx a billion characters to start getting a good chance that COVFEFE appears. @ooolb Consider the hyperbolic space $H^n$ with the standard metric. 
Compute $$\inf\left\{\left(\int u^{2n/(n-2)}\right)^{-(n-2)/n}\left(4\frac{n-1}{n-2}\int|\nabla u|^2+\int Ru^2\right): u\in C^\infty_c\setminus\{0\}, u\ge0\right\}$$ @BalarkaSen sorry if you were in our discord you would know @ooolb It's unlikely to be $-\infty$ since $H^n$ has bounded geometry so Sobolev embedding works as expected. Construct a metric that blows up near infinity (incomplete is probably necessary) so that the inf is in fact $-\infty$. @Sid Eating glamorous and expensive food on a regular basis and not as a necessity would mean you're embracing consumer fetish and capitalism, yes. That doesn't inherently prevent you from being a communist, but it does have an ironic implication. @Sid Eh. I think there's plenty of room between "I think capitalism is a detrimental regime and think we could be better" and "I hate capitalism and will never go near anything associated with it", yet the former is still conceivably communist. Then we can end up with people arguing in favor of "Communism" who distance themselves from, say, the USSR and red China, and people arguing in favor of "Capitalism" who distance themselves from, say, the US and the European Union. since I come from a rock n' roll background, the first thing is that I prefer a tonal continuity. I don't like beats as much as I like a riff or something atmospheric (that's mostly why I don't like a lot of rap) I think I liked Madvillainy because it had nonstandard rhyming styles and Madlib's composition Why is the graviton spin 2, beyond hand-waving, sense is, you do the gravitational waves thing of reducing $R_{00} = 0$ to $g^{\mu \nu} g_{\rho \sigma,\mu \nu} = 0$ for a weak gravitational field in harmonic coordinates, with solution $g_{\mu \nu} = \varepsilon_{\mu \nu} e^{ikx} + \varepsilon_{\mu \nu}^* e^{-ikx}$, then magic?
The game Pokenom Go has just been released. Pokenom trainers can now travel the world, capture Pokenom in the wild and battle each other! Bash — the Pokenom trainer — has decided to drop out of his university to pursue his childhood dream of becoming the best Pokenom trainer! However, Linux — Bash’s university headmaster — does not allow his students to drop out so easily … Linux puts $N$ black boxes on a straight line. The black boxes are numbered from $1$ to $N$ from left to right. Initially, all black boxes are empty. Then Linux gives Bash $Q$ queries. Each query can be one of the following $2$ types: Linux puts exactly one stone inside exactly one box between the $u$-th box and the $v$-th box, inclusive, with equal probability. $(1 \le u \le v \le N)$. Let $a_ i$ be the number of stones in the black box numbered $i$. Let $A = \sum _{i=1}^{N}{a_ i^2}$. Bash has to calculate the expected value $E(A)$. Bash can only drop out of his university if he is able to answer all queries correctly. But now all Bash can think of is Pokenom. Please help him! The first line of input contains exactly $2$ positive integers $N$ and $Q$. $(1 \le N, Q \le 10^5)$. $Q$ lines follow, each line contains exactly one query. As explained, a query can be one of the following $2$ types: $1 \; u \; v$: Linux puts a stone inside one of the boxes between $u$ and $v$. $2$: Linux asks Bash to compute $E(A)$. It can be proved that the expected value can be represented as an irreducible fraction $\dfrac {A}{B}$. For each query of type $2$, print one line containing the value $A \times B^{-1}$ modulo $10^{9} + 7$. The given input guarantees that $B$ is not a multiple of $10^{9} + 7$. In the first example: With a probability of $0.5$, the two stones are in different boxes. Hence, the answer to the fourth query is $0.5 \times (1^{2} + 1^{2}) + 0.5 \times 2^{2} = 3$. In the second example: With a probability of $\frac{2}{3}$, the two stones are in different boxes.
Hence, the answer to the fourth query is $\frac{2}{3} \times 2 + \frac{1}{3} \times 4 = \frac{8}{3}$.

Sample Input 1:
2 4
1 1 2
2
1 1 2
2

Sample Output 1:
1
3

Sample Input 2:
3 4
1 1 3
2
1 1 3
2

Sample Output 2:
1
666666674
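The expected value can be maintained incrementally. The key observation (my own sketch, not an official solution) is that a stone landing in box $i$ with probability $p$ is a Bernoulli variable $B_i$ independent of the stones already placed, so $E[(a_i+B_i)^2]=E[a_i^2]+2pE[a_i]+p$. A brute-force Python sketch working modulo $10^9+7$ follows; it is $O(N)$ per update, so an efficient solution would combine the same update rule with a Fenwick tree supporting range add / range sum.

```python
MOD = 10**9 + 7

def solve(n, queries):
    # Per-box expectations E[a_i] and E[a_i^2], kept modulo MOD.
    ea = [0] * (n + 1)
    ea2 = [0] * (n + 1)
    exp_A = 0                 # E(A) = sum_i E[a_i^2]  (mod MOD)
    out = []
    for q in queries:
        if q[0] == 1:
            _, u, v = q
            p = pow(v - u + 1, MOD - 2, MOD)   # 1/(v-u+1) mod MOD (Fermat)
            for i in range(u, v + 1):
                # a_i <- a_i + B_i, with B_i ~ Bernoulli(p) independent of a_i:
                # E[(a_i + B_i)^2] = E[a_i^2] + 2 p E[a_i] + p
                delta = (2 * p * ea[i] + p) % MOD
                ea2[i] = (ea2[i] + delta) % MOD
                exp_A = (exp_A + delta) % MOD
                ea[i] = (ea[i] + p) % MOD
        else:
            out.append(exp_A)
    return out
```

On the samples above, `solve(2, [(1, 1, 2), (2,), (1, 1, 2), (2,)])` returns `[1, 3]`, and the second sample yields `[1, 666666674]`.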
1. Search for $\mathrm{t}\overline{\mathrm{t}}\mathrm{H}$ production in the $\mathrm{H}\to \mathrm{b}\overline{\mathrm{b}}$ decay channel with leptonic $\mathrm{t}\overline{\mathrm{t}}$ decays in proton-proton collisions at $\sqrt{s}=13$ TeV Journal of High Energy Physics, ISSN 1126-6708, 03/2019, Volume 2019, Issue 3, pp. 1 - 62 A search is presented for the associated production of a standard model Higgs boson with a top quark-antiquark pair ($\mathrm{t}\overline{\mathrm{t}}\mathrm{H}$)... Hadron-Hadron scattering (experiments) | Top physics | Higgs physics | Protons | Confidence intervals | Large Hadron Collider | Particle collisions | Signal strength | Decay | Luminosity | Higgs bosons | Quarks | Solenoids | Muons Journal Article 2. Search for $\mathrm{t}\overline{\mathrm{t}}\mathrm{H}$ production in the $\mathrm{H}\to \mathrm{b}\overline{\mathrm{b}}$ decay channel with leptonic $\mathrm{t}\overline{\mathrm{t}}$ decays in proton-proton collisions at $\sqrt{s}=13$ TeV Journal of High Energy Physics, 3/2019, Volume 2019, Issue 3, pp. 1 - 62 A search is presented for the associated production of a standard model Higgs boson with a top quark-antiquark pair ($\mathrm{t}\overline{\mathrm{t}}\mathrm{H}$)...
Higgs physics | Hadron-Hadron scattering (experiments) | Top physics | Quantum Physics | Quantum Field Theories, String Theory | Classical and Quantum Gravitation, Relativity Theory | Physics | Elementary Particles, Quantum Field Theory Journal Article PHYSICAL REVIEW D, ISSN 2470-0010, 11/2017, Volume 96, Issue 9 Journal Article Physical Review Letters, ISSN 0031-9007, 03/2017, Volume 118, Issue 11, p. 111801 We present a measurement of angular observables and a test of lepton flavor universality in the B→K^{*}ℓ^{+}ℓ^{-} decay, where ℓ is either e or μ. The analysis... Journal Article The European Physical Journal C, ISSN 1434-6044, 9/2019, Volume 79, Issue 9, pp. 1 - 12 The advanced molybdenum-based rare process experiment (AMoRE) aims to search for neutrinoless double beta decay ($0\nu\beta\beta$) of $^{100}$... Nuclear Physics, Heavy Ions, Hadrons | Measurement Science and Instrumentation | Nuclear Energy | Quantum Field Theories, String Theory | Physics | Elementary Particles, Quantum Field Theory | Astronomy, Astrophysics and Cosmology | Sensors Journal Article 6. Publisher’s Note: Search for B→hνν¯ decays with semileptonic tagging at Belle [Phys. Rev.
D 96, 091101(R) (2017)] Physical Review D, ISSN 2470-0010, 11/2017, Volume 96, Issue 9 Journal Article PHYSICAL REVIEW LETTERS, ISSN 0031-9007, 03/2017, Volume 118, Issue 11 Journal Article Physical Review Letters, ISSN 0031-9007, 06/2018, Volume 120, Issue 23 Here, the observation of Higgs boson production in association with a top quark-antiquark pair is reported, based on a combined analysis of proton-proton... PHYSICS OF ELEMENTARY PARTICLES AND FIELDS Journal Article SCIENTIFIC REPORTS, ISSN 2045-2322, 04/2016, Volume 6, Issue 1, p. 24068 2H-TaSe2 has been one of the unique transition metal dichalcogenides exhibiting several phase transitions due to a delicate balance among competing electronic... TRANSITION-METAL DICHALCOGENIDES | MULTIDISCIPLINARY SCIENCES | NEUTRON-SCATTERING | 2H-TASE2 | CRYSTALS | DYNAMICS | 2H-NBSE2 | 1T-TAS2 | Temperature effects | Phase transitions | Crystals | Phase transition Journal Article Autophagy, ISSN 1554-8627, 01/2016, Volume 12, Issue 1, pp. 1 - 222 stress | chaperone-mediated autophagy | vacuole | autolysosome | macroautophagy | flux | autophagosome | LC3 | lysosome | phagophore | ACTIVATED PROTEIN-KINASE | ENDOPLASMIC-RETICULUM STRESS | STARVATION-INDUCED AUTOPHAGY | GLUCAGON-INDUCED AUTOPHAGY | CELL BIOLOGY | PROGRAMMED CELL-DEATH | LIFE-SPAN EXTENSION | BETAINE HOMOCYSTEINE METHYLTRANSFERASE | VACUOLAR MEMBRANE DYNAMICS | NF-KAPPA-B | Autophagy - physiology | Biological Assay - methods | Animals | Biological Assay - standards | Computer Simulation | Humans | Life Sciences | Medical and Health Sciences Journal Article
Image Dimensions

Disclaimer: This page's content is not official and not guaranteed to be free of mistakes. At the moment, it is only a sum of personal thoughts to cast a bit of light onto synfig's image dimensions handling.

Latest revision as of 10:55, 20 May 2013

Describing the fields of the Canvas Properties Dialog

The user accesses the image dimensions in the Canvas Properties Dialog.
The Other tab Here some properties can simply be locked (so that they can't be changed) and linked (so that changes in one entry simultaneously change other entries as well). The Image tab Obviously here the image dimensions can be set. There seem to be basically three groups of fields to edit: The on-screen size(?) The fields Width and Height tell synfigstudio how many pixels the image shall cover at a zoom level of 100%. The physical size The physical width and height should tell how big the image is on some physical medium. That could be when printing images on paper, or maybe even on transparencies or film. Not all file formats can save this when exporting/rendering images. The mysterious Image Area Given as two points (upper-left and lower-right corner) which also define the image span (Pythagoras: $\scriptstyle\text{span}=\sqrt{\Delta x^2 + \Delta y^2}$). The unit is not pixels but units, which are 60 pixels each. If the ratio of the image size to the image area dimensions is off, circles, for example, will appear as ellipses (see image). These settings seem to influence how large one Image Size pixel is rendered. This might be useful when one has to deal with non-square output pixels. Effects of the Image Area Somehow the image area setting seems to be saved when copy&pasting between images; see also bug #2116947. Possible intended effects of out-of-ratio image areas As mentioned above, different ratios might be needed when the output needs to be specified in pixels, but those pixels are not squares. That might happen for several kinds of media, such as videos encoded in some PAL formats or for DVDs. For further reading, look at Wikipedia.
Still, it is probably consensus that the image, as shown on screen while editing, should look as close as possible to how it will appear to the final audience. So, while specifying a different output resolution at rendering time may well be wanted, synfigstudio should (for the majority of monitors) show square pixels, i.e. circles should stay circles. Feature wishlist to simplify working across documents See also the explanation by dooglus on the synfig-dev mailing list.
Production of charged pions, kaons and protons at large transverse momenta in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV (Elsevier, 2014-09) Transverse momentum spectra of $\pi^{\pm}, K^{\pm}$ and $p(\bar{p})$ up to $p_T$ = 20 GeV/c at mid-rapidity, |y| $\le$ 0.8, in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV have been measured using the ALICE detector ...
Energetic Approach This chapter introduces important concepts related to the energy released during crack propagation. It is shown that crack growth can be predicted based on an energy principle, as expressed by the Griffith relation $\sigma_{\text{TS}}\sqrt{a} \propto \sqrt{2E\,\gamma_s}$: the resistance of a structure can only be evaluated by considering its defects. The relation between the energy concept and the SIFs is also detailed. Particular emphasis is put on the applicability of the energetic methods and on the assumptions made. Energetic Approach > Energy of cracked Bodies In this chapter we use the notations and governing equations previously detailed. The difference lies in the presence of cracks in the body $B$, as illustrated in Picture III.1. In this class we assume that these crack surfaces are stress-free. As usually done to establish the principle of virtual work, let us consider a kinematically admissible field $\delta\mathbf{u}$, with $\delta\mathbf{u}=0$ on $\partial_D B$, by which we multiply the linear momentum balance equation.
We can then integrate the resulting equation on the body and proceed with an integration by parts followed by the Gauss theorem, yielding \begin{equation} \begin{cases} \int_B \mathbf{\nabla}\cdot\mathbf{\sigma}^T \cdot \delta\mathbf{u} dB + \int_B \mathbf{b}\cdot \delta\mathbf{u} dB = 0\\ \iff \int_B \mathbf{\sigma} : \delta\mathbf{\varepsilon} dB = \int_{\partial_N B} \bar{\mathbf{T}} \cdot \delta\mathbf{u} d\partial B + \int_B \mathbf{b}\cdot \delta\mathbf{u} dB\end{cases}\label{eq:pvw1} .\end{equation} In the elastic regime (linear or not) the stress tensor derives from an internal potential $U$ \begin{equation} \mathbf{\sigma} = \partial_{\mathbf{\varepsilon}} U.\label{eq:Uint} \end{equation} For example, in linear elasticity one has \begin{equation} \mathbf{\sigma} = \mathcal{H}:\mathbf{\varepsilon} = \partial_{\mathbf{\varepsilon}} \frac{\mathbf{\varepsilon}:\mathcal{H}:\mathbf{\varepsilon}}{2} = \partial_{\mathbf{\varepsilon}} U.\label{eq:UintLinear} \end{equation} Using (\ref{eq:Uint}) in the virtual work equation (\ref{eq:pvw1}) leads to \begin{equation} \delta E_{\text{int}} = \delta \int_B U dB = \int_{\partial_N B} \bar{\mathbf{T}} \cdot \delta\mathbf{u} d\partial B + \int_B \mathbf{b}\cdot \delta\mathbf{u} dB = \delta W_{\text{ext}} \label{eq:pvw2},\end{equation} which expresses the equality between the virtual work of the external forces and the virtual increment of internal energy. We can now analyze the energy changes when a crack propagates in a body $B$. Body subjected to a prescribed loading (dead load) Let us assume that the body is under constant loading $\bar{\mathbf{T}}$ and $\mathbf{b}$. Assuming the crack surface is now different, the body deformation changes, and the body sees a displacement field $\delta \mathbf{u}$. Without loss of generality we can represent this configuration by a general loading $Q$ at a loading point and a general displacement $\delta u$ at this loading point, as shown in Picture III.2.
Indeed, one has simply to define this general loading so that \begin{equation} \begin{cases} \int_{\partial_N B} \bar{\mathbf{T}} d\partial B + \int_B \mathbf{b} dB = Q \left[ \int_{\partial_N B} \hat{\bar{\mathbf{T}}} d\partial B + \int_B \hat{\mathbf{b}} dB\right]\\ \delta \mathbf{u} = \delta u \hat{\mathbf{u}}\end{cases}\label{eq:genLoading},\end{equation} where $ \hat{\bar{\mathbf{T}}} $ and $ \hat{\mathbf{b}}$ are unit loadings corresponding to a displacement field $\delta \mathbf{u}$. From (\ref{eq:pvw2}) and (\ref{eq:genLoading}), assuming two different crack surfaces $A'$ and $A>A'$ for the same specimen under the constant loading $Q$, the loading point sees a displacement field $\delta u$, which corresponds to a work of the external forces \begin{equation}\delta W_{\text{ext}} = \int_{\partial_N B} \bar{\mathbf{T}} \cdot \delta\mathbf{u} d\partial B + \int_B \mathbf{b}\cdot \delta\mathbf{u} dB = Q\delta u \label{eq:wextPrescLoading}.\end{equation} For the two crack surfaces, the internal energy of the body is also different. On the one hand the compliance and stiffness of the body $B$ are modified as the geometry changed, see Picture III.3, and on the other hand the body sees a displacement at the loading point. So the internal energy depends on the loading and on the crack surface \begin{equation} E_\text{int} = E_\text{int}\left(Q,\, A\right). \label{eq:prescLoading1} \end{equation} Finally $G$ is the energy associated with the creation of a (unit) crack surface. As $Q$ is constant, the balance of energy means that the work of external forces is used to change the internal energy and to create the surface, so that \begin{equation} \delta E_\text{int} = Q\delta u - G \delta A = \delta\left(Qu\right) - G\delta A. \label{eq:prescLoading2}\end{equation} From this equation we can define the energy release rate $G$ for a body under prescribed loading as \begin{equation} G = - \partial_A \left( E_\text{int} - Q u\right).
\label{eq:GPrescLoading} \end{equation} This energy release rate is the energy released by the system (internal energy and work of external forces) per unit increment of the crack surface. Let us now analyze the involved energies. Picture III.4 defines the complementary energy increment $u\,dQ'$ for a body of crack surface $A$. During the loading of the body -without crack propagation- the load $Q'$ ranges from 0 to $Q$ and the complementary energy reads \begin{equation} Qu - E_\text{int} = \int_0^Q u\left(Q',\,A\right) dQ'.\label{eq:complementaryEnergyPrescLoading}\end{equation} This equation allows the displacement field to be expressed for a given load $Q$ and for a given crack surface $A$ as \begin{equation} u\left(Q,\,A\right) = - \partial_Q \left(E_\text{int}-Qu\right).\label{eq:uPrescLoading} \end{equation} Comparing (\ref{eq:GPrescLoading}) and (\ref{eq:uPrescLoading}) allows writing \begin{equation} \partial_Q G = \partial_A u \iff G = \int_0^Q \partial_A u\left(Q',\,A\right) dQ' . \label{eq:measuredGPrescLoading}\end{equation} This equation gives the following physical meaning to $G$. Let us consider Picture III.5. A body of initial crack surface $A_0$ is loaded up to the point $Q'=Q$. Now the crack is assumed to propagate under constant loading to reach a surface $A_0 +dA$. As the compliance and stiffness of the body have changed, the body sees an increase of the displacement from $u$ to $u+\delta u$. The body is then unloaded to reach $Q'=0$. For the same value of $Q$, the difference in terms of displacement between the loading and unloading curves is easily obtained as $d u=\partial_A u\left(Q,\,A\right) dA$. Thus the surface between the two curves reads $\int_0^Q\partial_A u\left(Q',\,A\right) dQ'dA$. We now have the physical meaning of (\ref{eq:measuredGPrescLoading}): the energy release rate multiplied by the surface increment, $GdA$, is nothing else than the surface between two loading curves of a specimen with two different crack surfaces.
This can be a way to measure $G$, either by finite elements or experimentally, as one just has to load a specimen with two different crack surfaces up to the same force. This method has the advantage of not involving real crack propagation. Note that in this section we do not know yet whether the crack would propagate or not. We have analyzed the energy changes of a sample with two different crack sizes $A$ and $A+dA$ under prescribed loading. Body subjected to a prescribed displacement Let us now assume that the cracked body is subjected to a prescribed displacement $u$ at the loading point, see Picture III.6. For two different crack surfaces, as the displacement is constant, the reaction force is different because the compliance and stiffness of the sample differ due to the geometry change. The loading reaction changes from $Q$ to $Q+\delta Q$. Intuitively $\delta Q$ is negative as the compliance of the specimen increases with the crack surface. Consider the same sample with two crack surfaces $A'$ and $A>A'$ under prescribed displacement. As the displacement is prescribed, the loading point does not move and does not induce any work: \begin{equation}\delta W_{\text{ext}} = 0 \label{eq:wextPrescDisp}.\end{equation} Nevertheless, the internal energy is different as the compliance and stiffness of the body are different, see Picture III.7. For the same displacement $u$ the loading reaction $Q$ is smaller for the larger crack surface, and so is the internal energy. So the internal energy depends on the displacement constraint and on the crack surface \begin{equation} E_\text{int} = E_\text{int}\left(u,\, A\right). \label{eq:prescDisp1} \end{equation} Finally $G$ is the energy associated with the creation of a (unit) crack surface. As $u$ is constant, the balance of energy means that the energy required to create the crack surface should come from the internal energy, so that \begin{equation} \delta E_\text{int} = - G \delta A.
\label{eq:prescDisp2}\end{equation} From this equation we can define the energy release rate $G$ for a body under prescribed displacement as \begin{equation} G = - \partial_A E_\text{int}. \label{eq:GPrescDisp} \end{equation} This energy release rate is the energy released by the system from its internal energy per unit increment of the crack surface. Moreover, considering Picture III.8, which defines the internal energy increment $Q\,du'$ for a body of crack surface $A$, leads to \begin{equation} Q\left(u,\,A\right) = \partial_u E_\text{int}.\label{eq:QPrescDisp} \end{equation} Comparing (\ref{eq:GPrescDisp}) and (\ref{eq:QPrescDisp}) allows writing \begin{equation} -\partial_u G = \partial_A Q \iff G = - \int_0^u \partial_A Q\left(u',\,A\right) du' . \label{eq:measuredGPrescDisp}\end{equation} This equation gives the following physical meaning to $G$. Let us consider Picture III.9. A body of initial crack surface $A_0$ is loaded under displacement control up to the point $u'=u$. Now the crack is assumed to propagate under constant constrained displacement to reach a surface $A_0 +dA$. As the compliance and stiffness of the body have changed, the body sees a decrease of the reaction force from $Q$ to $Q+\delta Q<Q$. The constrained displacement is then relaxed to reach $u'=0$. For the same value of $u$, the difference in terms of reaction force between the loading and unloading curves is easily obtained as $d Q=\partial_A Q\left(u,\,A\right) dA$ (<0). Thus the surface (>0) between the two curves reads $-\int_0^u\partial_A Q\left(u',\,A\right) du'dA$. We now have the physical meaning of (\ref{eq:measuredGPrescDisp}): the energy release rate multiplied by the surface increment, $GdA$, is nothing else than the surface between two loading curves of a specimen with two different crack surfaces. As for the prescribed force loading case, this can be used to measure $G$ without involving crack propagation.
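As a numerical sanity check of (\ref{eq:measuredGPrescDisp}), here is a sketch of my own for a fictitious linear specimen with an assumed compliance law $C(A)=c_0+c_1 A$ (made-up constants): for $Q=u'/C(A)$ the integral $-\int_0^u \partial_A Q\,du'$ evaluates analytically to $u^2 C'(A)/(2C(A)^2)$, and a midpoint-rule quadrature with a finite-difference $\partial_A Q$ reproduces it.

```python
# Fictitious compliance law C(A) = c0 + c1*A (made-up values, illustration only)
c0, c1 = 2.0e-3, 5.0e-4

def C(A):
    return c0 + c1 * A

def Q(u, A):
    # Linear specimen under prescribed displacement: reaction force Q = u / C(A)
    return u / C(A)

def G_from_curves(u, A, dA=1e-6, n=1000):
    # G = -int_0^u dQ/dA(u', A) du', midpoint rule + central difference
    h = u / n
    total = 0.0
    for k in range(n):
        up = (k + 0.5) * h
        dQ_dA = (Q(up, A + dA) - Q(up, A - dA)) / (2 * dA)
        total -= dQ_dA * h
    return total

A, u = 10.0, 1.5
# For Q = u'/C(A) the integral evaluates analytically to u^2 C'(A) / (2 C(A)^2)
G_exact = u**2 * c1 / (2 * C(A)**2)
assert abs(G_from_curves(u, A) - G_exact) < 1e-6 * G_exact
```

The two reaction-force curves never involve an actual crack advance, mirroring the measurement procedure described above.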
Note that in this section we do not know yet whether the crack would propagate or not. We have analyzed the energy changes of a sample with two different crack sizes $A$ and $A+dA$ under prescribed displacement.
Energetic Approach > The energy release rate In the previous section we have deduced the energy released by the system when the crack surface increases, and we have found \begin{equation} G = \begin{cases} & - \partial_A \left( E_\text{int} - Q u\right) &\text { for a prescribed loading} \\ &- \partial_A E_\text{int} & \text { for a prescribed displacement}\end{cases}.\label{eq:GPrescribedCases}\end{equation} Note that these equations hold for any elastic material, linear or not. The only requirement is the existence of a material internal potential $U$. Nevertheless the expression of $G$ depends on the loading conditions, and one would like to define it for a general loading condition. General Loading Let us define the potential energy of the specimen as \begin{equation}\Pi_T = E_\text{int}-W_{\text{ext}}.\label{eq:potEnergy}\end{equation} The energy release rate is thus defined as the energy, per unit crack surface, released by the system assuming a crack grows \begin{equation} G = -\partial_A \left(E_\text{int}-W_\text{ext}\right) = - \partial_A \Pi_T .\label{eq:G}\end{equation} Note that in the case of prescribed loading or displacement, (\ref{eq:G}) simplifies into (\ref{eq:GPrescribedCases}). Crack growth What has to be understood from the definition of the energy release rate (\ref{eq:G}) is that the crack growth is purely virtual: we evaluate the energy that would be released by the system if a crack were to grow. But what is required for a crack to grow? Let us now consider the total energy of the system \begin{equation}E=\Pi_T + \Gamma,\label{eq:sysEnergy}\end{equation} where $\Gamma$ is the energy required to create a crack surface $A$ in the body. Two kinds of materials can be considered. For brittle materials the energy required to create a crack surface is the energy required to break the atomic bonds, so $\partial_A \Gamma= 2\gamma_s$, with $\gamma_s$ the surface energy of the material (a crack creates two surfaces, hence the factor 2).
This is a material constant that can be measured (with difficulty for solids, however). For other materials (ductile, composites, polymers, etc.) this energy depends on the failure process: void coalescence, debonding ... For ductile materials we need to consider the energy required for the plastic deformations preceding the crack formation, and $\partial_A \Gamma= 2\gamma_s + W_\text{pl}$. For both cases we define the fracture energy as the energy required to form a unit surface in the material \begin{equation}G_C = \partial_A \Gamma.\label{eq:Gc}\end{equation} As differentiating (\ref{eq:sysEnergy}) implies \begin{equation} \partial_A E= \partial_A \Pi_T +\partial_A \Gamma=G_C-G,\label{eq:dsysEnergy}\end{equation} we now have a crack propagation criterion: If $G<G_C$ then $\partial_A E>0$: growing the crack would increase the total energy of the system, which is impossible. So the crack does not propagate, as the energy released from the system potential energy is not enough to create the surface in the material. If $G>G_C$ then $\partial_A E<0$ and the crack propagates, as the energy released by the system potential energy (internal energy and/or work of external forces) is enough to create a surface in the material. Thus the crack propagation criterion simply reads \begin{equation} G \begin{cases} & < G_C &\text { no propagation} \\ &\geq G_C & \text { propagation}\end{cases}.\label{eq:Gcriterion}\end{equation} In this relation $G$ depends on the sample geometry (including crack length) and boundary conditions, while $G_C$ depends on the material (for ductile materials it can also depend on the crack advance). We have now introduced the energy release rate concept and have related it to a fracture criterion. These considerations are valid for elastic materials (or materials that can be treated as such) as we have assumed a material internal potential $U$. We made no assumption on whether the elastic material is linear or not. However, things become easier for linear elasticity.
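Stated as code, the criterion is just the sign of $\partial_A E = G_C - G$; a trivial sketch with made-up numbers:

```python
def crack_propagates(G, G_C):
    """Energy criterion: the crack grows when the energy release rate G
    reaches the fracture energy G_C, i.e. when the total energy of the
    system would not increase: dE/dA = G_C - G <= 0."""
    dE_dA = G_C - G
    return dE_dA <= 0

# Illustrative (made-up) values in J/m^2
assert crack_propagates(G=250.0, G_C=100.0)        # G >= G_C: propagation
assert not crack_propagates(G=50.0, G_C=100.0)     # G < G_C: no propagation
```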
Linear case and compliance In particular, in linear elasticity the expression (\ref{eq:GPrescribedCases}) of $G$ for different loading conditions can be unified in terms of the structure compliance $C$. Indeed, as the material is linear, the compliance of the specimen, for a given crack surface $A$, is defined as \begin{equation} C\left(A\right)=\frac{u}{Q}\label{eq:C}.\end{equation} The system energies can directly be computed from this compliance \begin{equation} \begin{cases} E_\text{int} &=& \frac{1}{2}Qu=\frac{u^2}{2C} = \frac{Q^2C}{2}\\ W_\text{ext} &=& Qu = \frac{u^2}{C}=Q^2C\end{cases}\label{eq:ELinear}.\end{equation} We can now analyze the two cases of interest. Prescribed displacements In that case, (\ref{eq:GPrescribedCases}) simplifies into \begin{equation} G =- \left.\partial_A E_\text{int}\right|_u = -\left.\partial_A \left(\frac{u^2}{2C}\right)\right|_u = \frac{u^2}{2C^2}\partial_AC = \frac{Q^2}{2}\partial_A C .\label{eq:GLinearPrescribedDisp}\end{equation} The physical interpretations of this relation are the following. For the crack to grow, all the required energy comes from the elastic internal energy (as the external forces do not work). As a result the internal energy decreases with the crack growth. Prescribed loading In that case, (\ref{eq:GPrescribedCases}) simplifies into \begin{equation} G =- \left.\partial_A \left(E_\text{int}-Q u\right)\right|_Q = -\left.\partial_A \left(\frac{Q^2C}{2}-Q^2C\right)\right|_Q = \frac{Q^2}{2}\partial_AC.\label{eq:GLinearPrescribedLoading}\end{equation} This expression is similar to (\ref{eq:GLinearPrescribedDisp}) obtained for a prescribed displacement. However the implication is quite different. Indeed, starting again from (\ref{eq:GPrescribedCases}) and using (\ref{eq:GLinearPrescribedLoading}), we have \begin{equation} \begin{cases} G =- \left.\partial_A \left(E_\text{int}-Q u\right)\right|_Q = -\left.\partial_A E_\text{int}\right|_Q + \left.
\partial_A\left(Q u\right)\right|_Q \\ \iff G + \left.\partial_A E_\text{int}\right|_Q = Q \partial_A u = Q^2\partial_AC = 2G\\\iff G =\left.\partial_A E_\text{int}\right|_Q\end{cases} .\label{eq:GLinearPrescribedLoading1}\end{equation} The physical interpretations of this relation are For a crack to grow of $dA$, the external forces have to achieve a work of $2G$ as the internal energy is also increased by $G$. The internal energy increases with the crack growth contrarily to the prescribed displacement case. Application of the compliance method: Delamination of a composite laminate Delamination is a common failure mode in composite structures. The composite laminate is made of a stacking sequence of composite plies and two plies can unglue from each other due to the shear force, leading to delamination. This effect can be modeled by considering two clamped beams as in Picture III.10. This model is valid when the crack is long compared to the ply thickness: $a>>>h$. In that case we can consider two identical (semi-)cantilever beams of length $a$. The question is how to determine the energy release rate of this structure. The resolution method is as follows: Usual beam theory predicts the displacement $u$ of the double system from the deflection $v_\text{max}$ of a beam subjected to an end load $Q$.:\begin{equation} \begin{cases} v_\text{max}= \frac{Qa^3}{3EI} \text{ with } I = \frac{t h^3}{12}\\ \iff u=2 v_\text{max} = \frac{8Q a^3}{Eth^3}\end{cases} \label{uDCB}. \end{equation} The compliance and its derivative with respect to the crack surface $A=at$ can directly be evaluated from this last expression \begin{equation} \begin{cases}C=\frac{u}{Q} = \frac{8 a^3}{Eth^3}\\ \partial_A C=\frac{1}{t}\partial_a \frac{8 a^3}{Eth^3} =\frac{24 a^2}{Et^2h^3}\end{cases}.\label{eq:CDCB}\end{equation} Using (\ref{eq:ELinear}) the energy release rate reads \begin{equation}G=\frac{Q^2}{2}\partial_A C=\frac{12 Q^2 a^2}{Et^2h^3} . 
\label{eq:GDCB}\end{equation} This model can be used to measure the delamination energy of composite plies following a rigorous standard (ISO 15024). This process is summarized as follows. A sample is manufactured with two plies partially unglued (use of a tape between the plies during manufacturing). Two loading blocks are glued on the resulting beams, see Picture III.11. A crack is then initiated between the plies from the originally unglued part by applying a first loading/unloading cycle. The length of the resulting crack is then deduced from the unloading response using (\ref{eq:CDCB}). A second loading is then applied, and the maximum force reached before the stable crack propagation is used to determine the initiation $G_C$ from a corrected expression of (\ref{eq:GDCB}). To distinguish the initiation $G_C$ from the propagation value, $G_C(a)$ can be monitored as a function of the crack propagation length.
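To make the double-cantilever-beam result concrete, here is a short numerical sketch (made-up geometry and load, illustration only, not values from the standard) that evaluates $G$ from (\ref{eq:GDCB}) and cross-checks it against the compliance route $G=\frac{Q^2}{2}\partial_A C$, with $\partial_A C$ taken by finite difference of $C(a)=8a^3/(Eth^3)$:

```python
# Made-up DCB geometry and load (illustration only)
E = 70e9      # Young's modulus [Pa]
t = 0.02      # specimen width [m]
h = 0.003     # half-beam (ply) thickness [m]
Q = 100.0     # applied end load [N]
a = 0.05      # crack length [m]

def compliance(a):
    # C = u/Q for the two-cantilever model: C = 8 a^3 / (E t h^3)
    return 8 * a**3 / (E * t * h**3)

# Closed form: G = 12 Q^2 a^2 / (E t^2 h^3)
G_closed = 12 * Q**2 * a**2 / (E * t**2 * h**3)

# Compliance route: G = (Q^2 / 2) dC/dA with A = a t
da = 1e-7
dC_dA = (compliance(a + da) - compliance(a - da)) / (2 * da * t)
G_fd = Q**2 / 2 * dC_dA

assert abs(G_fd - G_closed) < 1e-6 * G_closed   # both give ~397 J/m^2
```

Whether this made-up laminate delaminates would then follow from comparing the computed $G$ against a measured $G_C$.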
What is the difference between $$\operatorname{div}(-\mu(u)\nabla u+pI)=f\tag{1}$$ and $$\operatorname{div}(-\mu(u)(\nabla u\color{red}{+(\nabla u)^\mathsf{T}})+pI)=f\tag{2}$$ from the physical point of view, in the glacier modeling context? Does it make sense to model a glacier with the first one? To explain the notation, $u$ and $p$ denote the velocity and pressure fields, respectively; $\mu(u)$ denotes a velocity-dependent viscosity coefficient (for example, Glen's power flow law). The gradient of $u$ is $$\nabla u=\left(\begin{array}{cc}\dfrac{\partial u_1}{\partial x}&\dfrac{\partial u_1}{\partial y}\\ \dfrac{\partial u_2}{\partial x}&\dfrac{\partial u_2}{\partial y}\end{array}\right)$$ and $(\nabla u)^\mathsf{T}$ denotes the transpose matrix of $\nabla u$. (If $u$ has 3 variables its gradient is analogous to the 2-variable case.) Frequently, $f$ is $f=(0,0,-\rho g)$, where $g$ is the gravitational acceleration constant and $\rho$ the ice density. $I$ denotes the identity matrix, and the divergence $\operatorname{div}$ of a matrix is taken row by row, for example, $$\operatorname{div}\left(\begin{array}{cc}w_1&w_2\\ s_1& s_2\end{array}\right)=\left(\begin{array}{c}\dfrac{\partial w_1}{\partial x}+\dfrac{\partial w_2}{\partial y}\\ \dfrac{\partial s_1}{\partial x}+\dfrac{\partial s_2}{\partial y}\end{array}\right).$$ This way, we may rewrite (1) and (2), respectively, as follows: $$-\operatorname{div}(\mu(u)\nabla u)+\nabla p=f$$ and $$-\operatorname{div}(\mu(u)(\nabla u\color{red}{+(\nabla u)^\mathsf{T}}))+\nabla p=f$$
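A quick numerical check (my own sketch: plain Python, central finite differences, and a made-up polynomial velocity field, not a glacier profile) of the identity $\operatorname{div}\big((\nabla u)^\mathsf{T}\big)=\nabla(\operatorname{div} u)$, which is why the red term is invisible when $\mu$ is constant and $\operatorname{div} u=0$, while for a velocity-dependent $\mu(u)$ the two models genuinely differ:

```python
# Central finite differences; H small but not so small that roundoff dominates.
H = 1e-4

def dx(f, x, y): return (f(x + H, y) - f(x - H, y)) / (2 * H)
def dy(f, x, y): return (f(x, y + H) - f(x, y - H)) / (2 * H)

# Made-up smooth velocity field (purely illustrative)
def u1(x, y): return x**2 * y + y**3
def u2(x, y): return x * y**2 - x**3

x0, y0 = 0.7, -0.4

# Row-wise divergence of (grad u)^T, whose rows are (u1_x, u2_x) and (u1_y, u2_y)
lhs = (dx(lambda x, y: dx(u1, x, y), x0, y0) + dy(lambda x, y: dx(u2, x, y), x0, y0),
       dx(lambda x, y: dy(u1, x, y), x0, y0) + dy(lambda x, y: dy(u2, x, y), x0, y0))

# Gradient of div u = u1_x + u2_y
div_u = lambda x, y: dx(u1, x, y) + dy(u2, x, y)
rhs = (dx(div_u, x0, y0), dy(div_u, x0, y0))

assert all(abs(l - r) < 1e-5 for l, r in zip(lhs, rhs))
```

For this field one can verify by hand that both sides equal $(4y,\,4x)$ at the test point.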
Homework Statement Let $f$ and $g$ be differentiable functions and let $a$ be a real number such that ##f(a)=g(a)=0## and ##g'(a) \neq 0##. Justify that ##\frac{f'(a)}{g'(a)} = \lim_{x\to a}\frac{f(x)}{g(x)}##. You may only use the definition of the derivative and the limit laws. Homework Equations ##f'(x)=\lim_{h\to 0}\frac{f(x+h)-f(x)}{h}## My attempt: ##\frac{f'(a)}{g'(a)} ## = ##\lim_{h\to 0}\frac{f(a+h)-f(a)}{h}\cdot\frac{h}{g(a+h)-g(a)}## = ##\lim_{h\to 0}\frac{f(a+h)-f(a)}{g(a+h)-g(a)}## I don't think I am doing this right. I don't even understand how I am supposed to use the limit laws. I really appreciate some help!
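Not a proof, but a numeric sanity check of the statement being justified (my own illustration), with $f=\sin$, $g(x)=e^x-1$ and $a=0$, where $f(0)=g(0)=0$ and $f'(0)/g'(0)=1$:

```python
import math

# f(0) = g(0) = 0, f'(0) = g'(0) = 1, so the claimed limit is 1.
f = math.sin
g = lambda x: math.exp(x) - 1

for x in (1e-1, 1e-3, 1e-5):
    ratio = f(x) / g(x)
    # sin x / (e^x - 1) = 1 - x/2 + O(x^2), so the error shrinks like x/2
    assert abs(ratio - 1.0) < x
```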
Energetic Approach > Crack closure integral On the one hand, we have seen that the energy release rate $G$ is the variation of the system potential energy with respect to the crack size. Knowing the fracture energy required for a crack surface to form in a given material, this concept can be used to write a fracture criterion \begin{equation} \begin{cases} G < G_C &\text{ no propagation} \\ G \geq G_C & \text{ propagation}\end{cases}.\label{eq:Gcriterion}\end{equation} In this relation $G$ depends on the sample geometry (including crack length) and on the boundary conditions, while $G_C$ depends on the material (for ductile materials it can also depend on the crack advance). The concept of energy release rate $G$ is based on the existence of an internal material potential, and thus on elasticity, but does not require a linear material behavior. On the other hand, we have seen previously that, for linear elasticity, G. R. Irwin introduced in 1957 a new failure criterion using the SIFs defined in the asymptotic solution. In mode I this criterion reads \begin{equation}\begin{cases} K_I < K_{IC} \rightarrow \text{ the crack does not propagate,}\\ K_I \geq K_{IC} \rightarrow \text{ the crack does propagate,} \end{cases}\label{eq:SIFCriterion}\end{equation} where $K_{IC}$ is the mode I toughness. Under the LEFM assumption, $K_I$ depends on the geometry and loading conditions only, and $K_{IC}$ depends on the material only. We have thus two concepts, energy and SIFs, which should give the same predictions when both are valid: in linear elasticity. In this section we establish the relation between the energy release rate and the SIFs. Notations In 1957 Irwin introduced the crack closure integral. We use the notations and governing equations previously detailed. The difference lies in the presence of an initial cavity of surface $S$ in the body $B$, as illustrated in Picture III.12.
In this initial configuration the following assumptions are made: The stress state is $\mathbf{\sigma}$; The displacement field is $\mathbf{u}$ (in $B$ and on $S$) and is constrained to $\bar{\mathbf{u}}$ on $\partial_D B$; The surface traction is $\mathbf{T}$ with the constraints $\bar{\mathbf{T}}$ on $\partial_N B$, and $0$ on $S$ (stress-free cavity surface). Then, the cavity is assumed to grow to $S+\Delta S$, leading to a final configuration with: A volume loss of $\Delta B$; The stress state becoming $\mathbf{\sigma}+\Delta\mathbf{\sigma}$; The displacement field becoming $\mathbf{u}+\Delta\mathbf{u}$ with $\Delta \mathbf{u} = 0$ on $\partial_D B$; The surface traction becoming $\mathbf{T}+\Delta\mathbf{T}$ with $\Delta \mathbf{T} = 0$ on $\partial_N B$ and $0$ on $S+\Delta S$. Potential energy In elasticity (linear or non-linear) there exists a material internal potential $U$, which depends on the strain state. As the strain depends on $\mathbf{\nabla}\mathbf{u}$, we can use the notation $U\left(\mathbf{\nabla}\mathbf{u}\right)$ and, if the body force $\mathbf{b}$ is assumed equal to $0$, the potential energy can be obtained for the two configurations as \begin{equation} \begin{cases} \Pi_T& = &\int_B U\left(\mathbf{\nabla}\mathbf{u}\right) dB - \int_{\partial_NB} \bar{\mathbf{T}} \cdot \mathbf{u} d\partial B\\ \Pi_T+\Delta \Pi_T &=& \int_{B-\Delta B} U\left(\mathbf{\nabla}\mathbf{u}+\mathbf{\nabla}\Delta\mathbf{u}\right) dB - \int_{\partial_N B} \bar{\mathbf{T}} \cdot \left(\mathbf{u}+\Delta\mathbf{u}\right) d\partial B\end{cases}.\label{eq:piCCI} \end{equation} The difference of the potential energies between the two configurations reads \begin{equation} \Delta \Pi_T = \int_{B-\Delta B} U\left(\mathbf{\nabla}\mathbf{u}+\mathbf{\nabla}\Delta\mathbf{u}\right)-U\left(\mathbf{\nabla}\mathbf{u}\right) dB-\int_{\Delta B} U\left(\mathbf{\nabla}\mathbf{u}\right) dB - \int_{\partial_N B} \bar{\mathbf{T}} \cdot\Delta\mathbf{u}d\partial B.\label{eq:DpiCCI1} \end{equation} As by definition
$\mathbf{\sigma}=\partial_{\mathbf{\varepsilon}} U$, the first term of the right hand side of (\ref{eq:DpiCCI1}) can be rewritten \begin{eqnarray}\int_{B-\Delta B} U\left(\mathbf{\nabla}\mathbf{u}+\mathbf{\nabla}\Delta\mathbf{u}\right)-U\left(\mathbf{\nabla}\mathbf{u}\right) dB &=&\int_{B-\Delta B} U\left(\mathbf{\varepsilon}+\Delta\mathbf{\varepsilon}\right)-U\left(\mathbf{\varepsilon}\right) dB \nonumber \\ &=& \int_{B - \Delta B}\left\{ \int_0^{\mathbf{\varepsilon}+\Delta\mathbf{\varepsilon}}\mathbf{\sigma}\left(\mathbf{\varepsilon}'\right): d\mathbf{\varepsilon}' -\int_0^{\mathbf{\varepsilon}}\mathbf{\sigma}\left(\mathbf{\varepsilon}'\right): d\mathbf{\varepsilon}'\right\} dB, \end{eqnarray} which simplifies into \begin{equation}\int_{B-\Delta B} U\left(\mathbf{\nabla}\mathbf{u}+\mathbf{\nabla}\Delta\mathbf{u}\right)-U\left(\mathbf{\nabla}\mathbf{u}\right) dB = \int_{B - \Delta B} \int_\mathbf{\varepsilon}^{\mathbf{\varepsilon}+\Delta\mathbf{\varepsilon}}\mathbf{\sigma}\left(\mathbf{\varepsilon}'\right): d\mathbf{\varepsilon}' dB\label{eq:DpiCCItmp}. \end{equation} As $ \mathbf{\sigma}$ is symmetric, we have $ \mathbf{\sigma}\left(\mathbf{\varepsilon}'\right): d\mathbf{\varepsilon}'=\mathbf{\sigma}\left(\mathbf{\varepsilon}'\right): \mathbf{\nabla}d\mathbf{u}'=\mathbf{\nabla} \cdot\left(\mathbf{\sigma}^T\cdot d\mathbf{u}'\right)-\left(\mathbf{\nabla} \cdot\mathbf{\sigma}^T\right)\cdot d\mathbf{u}'$. 
As $\mathbf{b}=0$, the linear momentum equation implies $\mathbf{\nabla} \cdot\mathbf{\sigma}^T=0$ and (\ref{eq:DpiCCItmp}) becomes \begin{eqnarray} \int_{B-\Delta B} U\left(\mathbf{\nabla}\mathbf{u}+\mathbf{\nabla}\Delta\mathbf{u}\right)-U\left(\mathbf{\nabla}\mathbf{u}\right) dB &= & \int_{B - \Delta B}\int_{\mathbf{u}}^{\mathbf{u}+\Delta\mathbf{u}}\mathbf{\nabla}\cdot\left( \mathbf{\sigma}^T\left(\mathbf{u}'\right)\cdot d \mathbf{u}'\right) dB \nonumber\\&=& \int_{S+ \Delta S+\partial_N B+\partial_D B}\int_{\mathbf{u}}^{\mathbf{u}+\Delta\mathbf{u}} \left[\mathbf{\sigma}\left(\mathbf{u}'\right) \cdot \mathbf{n}\right]\cdot d \mathbf{u}' d\partial B,\label{eq:DpiCCItmp2}\end{eqnarray} where we have applied the Gauss theorem. From our assumptions: The surface traction $\bar{\mathbf{T}}$ is constant on $\partial_N B$ and the displacement is constant on $\partial_D B$ ($d\mathbf{u}=0$); On the cavity surface $S$ (and $\Delta S$), the surface traction $\mathbf{t}$ is defined as $\mathbf{\sigma} \cdot \mathbf{n}$. Then (\ref{eq:DpiCCItmp2}) becomes \begin{equation} \int_{B-\Delta B} U\left(\mathbf{\nabla}\mathbf{u}+\mathbf{\nabla}\Delta\mathbf{u}\right)-U\left(\mathbf{\nabla}\mathbf{u}\right) dB = \int_{S+ \Delta S}\int_{\mathbf{u}}^{\mathbf{u}+\Delta\mathbf{u}} \mathbf{t}\left(\mathbf{u}'\right)\cdot d \mathbf{u}' d\partial B+ \int_{\partial_NB} \bar{\mathbf{T}}\cdot\Delta \mathbf{u} d\partial B.\label{eq:DpiCCItmp3}\end{equation} Here we need to discuss the stress-free nature of the cavity surface.
The final cavity surface $S+\Delta S$ is stress-free, but only in the final configuration, so $\mathbf{\sigma}\left(\mathbf{u}+\Delta\mathbf{u}\right)\cdot\mathbf{n}=0$ on $S+\Delta S$; The surface $S$ corresponding to the initial cavity remains stress-free during the whole process, so $\mathbf{\sigma}\left(\mathbf{u}'\right)\cdot\mathbf{n}=0$ on $S$; During the cavity growth process there is a surface traction on $\Delta S$, as the cavity is not formed yet, so $\mathbf{\sigma}\left(\mathbf{u}'\right)\cdot\mathbf{n}\neq 0$ on $\Delta S$. Because of these considerations, (\ref{eq:DpiCCItmp3}) is finally rewritten \begin{equation} \int_{B-\Delta B} U\left(\mathbf{\nabla}\mathbf{u}+\mathbf{\nabla}\Delta\mathbf{u}\right)-U\left(\mathbf{\nabla}\mathbf{u}\right) dB = \int_{\Delta S}\int_{\mathbf{u}}^{\mathbf{u}+\Delta\mathbf{u}} \mathbf{t}\left(\mathbf{u}'\right)\cdot d \mathbf{u}' d\partial B+ \int_{\partial_NB} \bar{\mathbf{T}}\cdot\Delta \mathbf{u} d\partial B.\label{eq:DpiCCItmp4}\end{equation} Using this last result, the change of potential energy (\ref{eq:DpiCCI1}) reads \begin{equation} \Delta \Pi_T = \int_{\Delta S}\int_{\mathbf{u}}^{\mathbf{u}+\Delta\mathbf{u}} \mathbf{t}\left(\mathbf{u}'\right)\cdot d \mathbf{u}' d\partial B - \int_{\Delta B} U\left(\mathbf{\nabla}\mathbf{u}\right) dB.\label{eq:DpiCCI2} \end{equation} As instead of a cavity we consider a crack, the change of volume vanishes and we eventually have \begin{equation} \Delta \Pi_T = \int_{\Delta S}\int_{\mathbf{u}}^{\mathbf{u}+\Delta\mathbf{u}} \mathbf{t}\left(\mathbf{u}'\right)\cdot d \mathbf{u}' d\partial B .\label{eq:DpiCCI} \end{equation} As explained in the overview, the physical interpretation of (\ref{eq:DpiCCI}) can be illustrated by considering the growth of a centered crack in an infinite plate from a size $2a$, see Picture III.14, to a size $2(a+\Delta a)$, see Picture III.15. During this crack propagation, $\sigma_{yy}$ does (negative) work over the crack advance $\Delta a$.
Energy release rate in linear elasticity The general expression of the change of potential energy (\ref{eq:DpiCCI}) is valid in the general elastic case (linear or not). If the response is elastic and linear, see Picture III.16, this expression can be evaluated further: The traction $\mathbf{t}'$ decreases linearly with $\mathbf{u}'$ (linear material response); The work achieved is then $\frac{1}{2}\mathbf{t}\cdot\Delta\mathbf{u}$. The variation of the potential energy (\ref{eq:DpiCCI}) now reads \begin{equation} \Delta \Pi_T = \int_{\Delta S}\frac{1}{2} \mathbf{t}\cdot \Delta \mathbf{u} d\partial B, \label{eq:DpiCCILin} \end{equation} where $\mathbf{t}$ is the traction in the initial configuration. To evaluate the energy release rate, we need to relate (\ref{eq:DpiCCILin}) to the increment of crack surface. The created surface $\Delta S$ consists of the two crack lips $\Delta A^+$ and $\Delta A^-$, so that \begin{eqnarray} \Delta \Pi_T &=& \int_{\Delta A^+}\frac{1}{2} \mathbf{t}\cdot \left[\Delta \mathbf{u}^+ - \Delta \mathbf{u}^-\right]d\partial B\nonumber\\ &=& \int_{\Delta A^+}\frac{1}{2} \mathbf{t}\cdot [\![ \Delta \mathbf{u} ]\!] d\partial B, \end{eqnarray} where $[\![ \Delta\mathbf{u}]\!]$ is the opening between the crack lips. The increment of fracture area $\Delta A$ corresponds to $\Delta A^+$, so that the energy release rate $G$ becomes: \begin{equation} G= -\partial_A \Pi_T = - \lim_{\Delta A\rightarrow 0}\frac{1}{\Delta A} \int_{\Delta A}\frac{1}{2} \mathbf{t}\cdot [\![\Delta \mathbf{u} ]\!] d\partial B \label{Glin}. \end{equation} This relation is valid for any linear elastic material and for any direction of crack growth (mode I, II or III). We can relate the expression (\ref{Glin}) to the work needed to close a crack. On the one hand, we know that $G > 0$ for a crack growth, and from (\ref{Glin}) it appears that $\Pi_T$ decreases during the crack growth. This means the system frees some energy, which is used to break the bonds (or for plastic deformation, etc.).
On the other hand, $G$ (\ref{Glin}) can also be interpreted as the energy required to close the crack tip on a surface $\Delta A$, with $\mathbf{t}$ the force required to close the opening $[\![\Delta \mathbf{u}]\!]$. Assuming a tensile mode I, this crack closure is illustrated in Picture III.18. The next page particularizes these expressions to the case of cracks growing straight ahead.
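As a numerical illustration of (\ref{Glin}), one can evaluate the crack closure integral by quadrature for straight-ahead mode I growth. This is a sketch only: the asymptotic traction and opening profiles used below are the standard LEFM mode I fields (assumed here, anticipating the next page), and the expected outcome is Irwin's relation $G = K_I^2/E'$.

```python
import math

# Sketch: evaluate the crack closure integral (Glin) numerically for
# straight-ahead mode I growth, using the standard LEFM asymptotic fields
# (assumed, not derived in this section):
#   traction ahead of the old tip:  sigma_yy(x) = K_I / sqrt(2*pi*x)
#   opening behind the new tip:     [[u_y]](r)  = (8*K_I/Ep) * sqrt(r/(2*pi))

K_I = 1.0    # mode I stress intensity factor (arbitrary units)
Ep = 1.0     # E' = E in plane stress, E/(1 - nu**2) in plane strain
da = 1e-3    # crack advance Delta a

# G = (1/da) * int_0^da (1/2) * sigma_yy(x) * [[u_y]](da - x) dx;
# the substitution x = da*sin(t)**2 removes the 1/sqrt(x) singularity.
N = 20000
G = 0.0
for i in range(N):
    t = (i + 0.5) * (math.pi / 2) / N
    x = da * math.sin(t) ** 2
    dx = 2.0 * da * math.sin(t) * math.cos(t) * (math.pi / 2) / N
    sigma = K_I / math.sqrt(2.0 * math.pi * x)
    opening = (8.0 * K_I / Ep) * math.sqrt((da - x) / (2.0 * math.pi))
    G += 0.5 * sigma * opening * dx
G /= da

print(G)   # close to K_I**2 / Ep
```

The quadrature converges to $K_I^2/E'$ as $N$ grows, consistent with the closed-form evaluation of $\int_0^{\Delta a}\sqrt{(\Delta a - x)/x}\,dx = \pi\Delta a/2$.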
Does there exist a sequence $(a_i)_{i \geq 0}$ of distinct positive integers such that $\sum_{i\geq 0}\frac{1}{a_i} \in \mathbb{Q}$ and $$\{ p \in \mathbb{P} \text{ }|\text{ } \exists\text{ } i\geq 0 \text{ s.t.}\text{ } p | a_i\}$$ is infinite? Motivation: All geometric series (corresponding to sets $\{ 1,n,n^2,n^3,... \}$) are rational and the terms obviously contain finitely many primes. The same is true for, say, sums of reciprocals of all numbers whose prime factorisation contains only the primes $p_1, p_2, ...,p_k$: the sum is then $\prod_{i=1}^k\left(\frac{p_i}{p_i-1}\right)$. On the other hand, series corresponding to the sets $\{1^2, 2^2, 3^2, ...\}, \{1^3,2^3,3^3,...\},\{1!,2!,3!,...\}$ converge to $\frac{\pi^2}{6}$, Apéry's constant and $e$ respectively, which are all known to be irrational. I am aware of the fact that if this statement is true then it has not been proven yet (since it implies that the values of the zeta function at positive integers are irrational, which to my knowledge has not been shown yet). Any counterexamples or other possible observations (such as, instead of requiring the set of primes to be infinite, requiring that it contains all primes except a finite set)?
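The finitely-many-primes example can be sanity-checked numerically; a quick sketch (truncating the doubly infinite sum) for the prime set $\{2,3\}$, where the claimed value is $\frac{2}{1}\cdot\frac{3}{2}=3$:

```python
from fractions import Fraction

# Truncated check of the claim: summing 1/n over all n whose prime
# factors lie in {2, 3} gives prod p/(p-1) = (2/1) * (3/2) = 3.
total = sum(Fraction(1, 2**a * 3**b) for a in range(40) for b in range(25))
print(float(total))   # just below 3, approaching it as the ranges grow
```

The truncation error is bounded by the discarded geometric tails, which are far below machine precision for these ranges.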
This note is based on my talk An Expedition to the World of p-adic Numbers at Carnegie Mellon University on January 15, 2014. Construction from Cauchy Sequences A standard approach to construct the real numbers from the rational numbers is a procedure called completion, which forces all Cauchy sequences in a metric space to converge by adding new points to the metric space. Definition. A norm, denoted by |\cdot|, on the field F is a function from F to the set of nonnegative numbers in an ordered field R such that (1) |x|=0 if and only if x=0; (2) |xy|=|x||y|; (3) |x+y|\leq|x|+|y|. Remark. The ordered field is usually chosen to be \mathbb{R}. However, to construct \mathbb{R} itself, the ordered field is taken to be \mathbb{Q}. A norm on the field F naturally gives rise to a metric d(x,y)=|x-y| on F. For example, the standard metric on the rationals is defined by the absolute value, namely, d(x, y) = |x-y|, where x,y\in\mathbb{Q}, and |\cdot| is the standard norm, i.e., the absolute value, on \mathbb{Q}. Given a metric d on the field F, the completion procedure considers the set of all Cauchy sequences on F and an equivalence relation (a_n)\sim (b_n)\text{ if and only if }d(a_n, b_n)\to 0\text{ as }n\to\infty. Definition. Two norms on F, |\cdot|_1, |\cdot|_2, are equivalent if and only if for every sequence (a_n), it is a Cauchy sequence with respect to d_1 if and only if it is so with respect to d_2, where d_1, d_2 are the metrics determined by |\cdot|_1, |\cdot|_2 respectively. Remark. It is reasonable to worry about the situation in which two norms |\cdot|_1 and |\cdot|_2 are equivalent but introduce different equivalence relations on the set of Cauchy sequences. However, given two equivalent norms, (a_n) converges to 0 with respect to d_1 if and only if it does so with respect to d_2. (Hint: prove by contradiction and consider the sequence (1/a_n).) Definition. The trivial norm on F is a norm |\cdot| such that |0|=0 and |x|=1 for x\neq 0.
Since we are interested in norms that generate different completions of the field, it would be great if we could classify all nontrivial norms modulo norm equivalence. Alternative Norms on Rationals Definition. Let p be any prime number. For any non-zero integer a, let \mathrm{ord}_pa be the highest power of p which divides a. For any rational x=a/b, define \mathrm{ord}_px=\mathrm{ord}_pa-\mathrm{ord}_pb. Further define a map |\cdot|_p on \mathbb{Q} as follows: |x|_p = \begin{cases}p^{-\mathrm{ord}_px} & \text{if }x\neq 0 \\ 0 & \text{if }x=0\end{cases}. Proposition. |\cdot|_p is a norm on \mathbb{Q}. We call it the p-adic norm on \mathbb{Q}. Proof (sketch). We only check the triangle inequality. Notice that \begin{aligned}\mathrm{ord}_p\left(\frac{a}{b}-\frac{c}{d}\right) & = \mathrm{ord}_p\left(\frac{ad-bc}{bd}\right) \\ & = \mathrm{ord}_p(ad-bc)-\mathrm{ord}_p(bd) \\ & \geq \min(\mathrm{ord}_p(ad), \mathrm{ord}_p(bc)) - \mathrm{ord}_p(bd) \\ &= \min(\mathrm{ord}_p(ad/bd), \mathrm{ord}_p(bc/bd)).\end{aligned} Therefore, we obtain |a/b-c/d|_p \leq \max(|a/b|_p, |c/d|_p) \leq |a/b|_p+|c/d|_p. [qed] Remark. Some counterintuitive facts about the p-adic norm: for instance, |p^n|_p = p^{-n}\to 0 as n\to\infty, so high powers of p are p-adically small, while |p^{-n}|_p = p^n is large. The following theorem due to Ostrowski classifies all possible norms on the rationals up to norm equivalence. We denote the standard absolute value by |\cdot|_\infty. Theorem (Ostrowski 1916). Every nontrivial norm |\cdot| on \mathbb{Q} is equivalent to |\cdot|_p for some prime p or for p=\infty. Proof. We consider two cases: (i) There is n\in\{1,2,\ldots\} such that |n|>1; (ii) For all n\in\{1,2,\ldots\}, |n|\leq 1. As we shall see, in the first case the norm is equivalent to |\cdot|_\infty, whereas in the second case the norm is equivalent to |\cdot|_p for some prime p. Case (i). Let n_0 be the least n\in\{1,2,\ldots\} such that |n|>1. Let \alpha > 0 be such that |n_0|=n_0^\alpha.
For every positive integer n, if n_0^s \leq n < n_0^{s+1}, then we can write it in n_0-base: n = a_0 + a_1n_0 + \ldots + a_sn_0^s. By the choice of n_0, we know that |a_i|\leq 1 for all i. Therefore, we obtain \begin{aligned}|n| & \leq |a_0| + |a_1||n_0| + \ldots + |a_s||n_0|^s \\ & \leq 1 + |n_0| + \ldots + |n_0|^s \\ & \leq n_0^{s\alpha}\left(1+n_0^{-\alpha}+n_0^{-2\alpha}+\ldots\right) \\ & \leq Cn^\alpha,\end{aligned} where C does not depend on n. Replace n by n^N and get |n|^N = |n^N| \leq Cn^{N\alpha}, and so |n|\leq \sqrt[N]{C}n^\alpha. As we can choose N to be arbitrarily large, we obtain |n| \leq n^\alpha. On the other hand, we have \begin{aligned}|n| & \geq |n_0^{s+1}| - |n_0^{s+1}-n| \\ & \geq n_0^{(s+1)\alpha} - (n_0^{s+1}-n_0^s)^\alpha\\ & = n_0^{(s+1)\alpha}\left[1-(1-1/n_0)^\alpha\right] \\ & > C'n^\alpha.\end{aligned} Using the same trick, we can actually take C'=1. Therefore |n| = n^\alpha. It is easy to see it is equivalent to |\cdot|_\infty. Case (ii). Since the norm is nontrivial, let n_0 be the least n such that |n|<1. Claim 1. n_0=p is a prime. Claim 2. |q|=1 if q is a prime other than p. Suppose |q| < 1. Find M large enough so that both |p^M| and |q^M| are less than 1/2. By Bézout's lemma, there exist a,b\in\mathbb{Z} such that ap^M + bq^M = 1. However, 1 = |1| \leq |a||p^M| + |b||q^M| < 1/2 + 1/2 = 1, a contradiction. Therefore, we know |n|=|p|^{\mathrm{ord}_p n}. It is easy to see it is equivalent to |\cdot|_p. [qed] Non-Archimedean Norm As one might have noticed, the p-adic norm satisfies an inequality stronger than the triangle inequality, namely |x\pm y|_p\leq \max(|x|_p, |y|_p). Definition. A norm is non-Archimedean provided |x\pm y|\leq \max(|x|, |y|). The world of non-Archimedean norms is good and weird. Here are two testimonies. Proposition (no more scalene triangles). If |x|\neq |y|, then |x\pm y| = \max(|x|, |y|). Proof. Suppose |x| < |y|. On one hand, we have |x\pm y| \leq |y|. On the other hand, |y| \leq \max(|x\pm y|, |x|).
Since |x| < |y|, we must have |y| \leq |x\pm y|. [qed] Proposition (all points are centers). D(a, r^-) = D(b, r^-) for all b\in D(a, r^-) and D(a, r) = D(b,r) for all b\in D(a,r), where D(c, r^-) = \{x : |x-c|<r\} and D(c,r)=\{x:|x-c|\leq r\}. Construction of p-adic Numbers The p-adic numbers are the completion of \mathbb{Q} via the p-adic norm. Definition. The set of p-adic numbers is defined as \mathbb{Q}_p = \{\text{Cauchy sequences with respect to }|\cdot|_p\} / \sim_p, where (a_n)\sim_p(b_n) iff |a_n-b_n|_p\to 0 as n\to\infty. We would like to extend |\cdot|_p from \mathbb{Q} to \mathbb{Q}_p. When extending |\cdot|_\infty from \mathbb{Q} to \mathbb{R}, we set |[(a_n)]|_\infty to be [(|a_n|)], an element in \mathbb{R}. However, the values that |\cdot|_p can take, after the extension, are still in \mathbb{Q}. Definition. The extension of |\cdot|_p on \mathbb{Q}_p is defined by |[(a_n)]|_p = \lim_{n\to\infty}|a_n|_p. Remark. Suppose (a_n)\sim_p (a_n'). Then \lim_{n\to\infty}|a_n-a_n'|_p=0, and so \lim_{n\to\infty}|a_n|_p=\lim_{n\to\infty}|a_n'|_p. Moreover, one can prove that \lim_{n\to\infty}|a_n|_p always exists provided that (a_n) is a Cauchy sequence. (Hint: Suppose \lim_{n\to\infty}|a_n|_p > 0. There exists \epsilon > 0 such that |a_n|_p > \epsilon infinitely often. Choose N large enough so that |a_m - a_n|_p < \epsilon for all m,n>N. Use the 'no more scalene triangles!' property to deduce a contradiction.) Representation of p-adic Numbers Even though each real number is an equivalence class of Cauchy sequences, each equivalence class has a canonical representative. For instance, the canonical representative of \pi is (3, 3.1, 3.14, 3.141, 3.1415, \ldots). The analog for \mathbb{Q}_p is the following. Theorem. Every equivalence class a in \mathbb{Q}_p for which |a|_p\leq 1 has exactly one representative Cauchy sequence of the form (a_i) for which (1) 0\leq a_i < p^i for i=1,2,3,\ldots; (2) a_i \equiv a_{i+1} \pmod{p^i} for i=1,2,3,\ldots. Proof of uniqueness.
Prove by definition chasing. Proof of existence. We shall repeatedly apply the following lemma. Lemma. For every b\in\mathbb{Q} for which |b|_p \leq 1 and i\in\mathbb{N}, there exists a\in\{0, \ldots, p^i-1\} such that |a-b|_p \leq p^{-i}. Proof of Lemma. Suppose b=m/n is in the lowest form. As |b|_p\leq 1, we know that (n, p^i)=1. By Bézout's lemma, an+a'p^i=m for some integers a,a'. We may assume a\in\{0,\ldots,p^i-1\}. Note that a-b=a'p^i/n, and so |a-b|_p \leq p^{-i}. [qed] Suppose (c_i) is a representative of a. As (c_i) is a Cauchy sequence, we can extract a subsequence (b_i) such that |b_i - b_j|_p \leq p^{-i} for all i < j, which is still a representative of a. Using the lemma above, for each b_i, we can find 0 \leq a_i < p^i such that |a_i-b_i|_p \leq p^{-i}. Therefore (a_i) is a representative of a as well. For all i<j, we have |a_i-a_j|_p \leq \max(|a_i - b_i|_p, |b_i-b_j|_p, |b_j-a_j|_p) \leq p^{-i}, and therefore p^i divides a_i - a_j. [qed] For |a|_p\leq 1, we write a=b_0 + b_1p + b_2p^2 + \ldots, where (b_{n-1}b_{n-2}\ldots b_0)_p = a_n. What if |a|_p > 1? As |ap^m|_p=|a|_p/p^m, we have |ap^m|_p\leq 1 for some natural number m. By the representation theorem, we can write ap^m = b_0 + b_1p + b_2p^2 + \ldots, and so a = b_0p^{-m} + b_1p^{-m+1} + b_2p^{-m+2} + \ldots. Using the representation of p-adic numbers, one can perform arithmetic operations such as addition, subtraction, multiplication and division. Like \mathbb{R}, \mathbb{Q}_p is not algebraically closed. Though \mathbb{C}, the algebraic closure of \mathbb{R}, has degree 2 over \mathbb{R} and is complete with respect to the absolute value, it is not so for \overline{\mathbb{Q}_p}, the algebraic closure of \mathbb{Q}_p. In fact, \overline{\mathbb{Q}_p} has infinite degree over \mathbb{Q}_p and is, unfortunately, not complete with respect to the proper extension of |\cdot|_p. The good news is that the completion of \overline{\mathbb{Q}_p}, denoted by \Omega, is algebraically closed.
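The norm and the digit representation above are easy to experiment with; here is a small Python sketch (not part of the talk; the helper names are mine). The digit routine computes the residues a_i = x mod p^i from the representation theorem via a modular inverse of the denominator.

```python
from fractions import Fraction

def ord_p(x: Fraction, p: int) -> int:
    """Highest power of p dividing x: ord_p(a/b) = ord_p(a) - ord_p(b)."""
    if x == 0:
        raise ValueError("ord_p(0) is undefined (|0|_p = 0 by convention)")
    def ord_int(n: int) -> int:
        n, k = abs(n), 0
        while n % p == 0:
            n, k = n // p, k + 1
        return k
    return ord_int(x.numerator) - ord_int(x.denominator)

def norm_p(x: Fraction, p: int) -> Fraction:
    """p-adic norm |x|_p = p**(-ord_p(x)), with |0|_p = 0."""
    return Fraction(0) if x == 0 else Fraction(1, p) ** ord_p(x, p)

def digits_p(x: Fraction, p: int, k: int) -> list:
    """First k base-p digits b_0, b_1, ... of x with |x|_p <= 1, read off
    from the residue x mod p^k (denominator inverted modulo p^k)."""
    assert norm_p(x, p) <= 1
    a = (x.numerator * pow(x.denominator, -1, p**k)) % p**k
    return [(a // p**i) % p for i in range(k)]

# -1/3 is a 2-adic integer: -1/3 = 1 + 4 + 16 + ... (a geometric series)
print(norm_p(Fraction(-1, 3), 2))       # 1
print(digits_p(Fraction(-1, 3), 2, 6))  # [1, 0, 1, 0, 1, 0]
```

The three-argument `pow` for modular inverses requires Python 3.8 or later.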
@JosephWright Well, we still need table notes etc. But just being able to selectably switch off parts of the parsing one does not need... For example, if a user specifies format 2.4, does the parser even need to look for e syntax, or ()'s? @daleif What I am doing to speed things up is to store the data in a dedicated format rather than a property list. The latter makes sense for units (open ended) but not so much for numbers (rigid format). @JosephWright I want to know about either the bibliography environment or \DeclareFieldFormat. From the documentation I see no reason not to treat these commands as usual, though they seem to behave in a slightly different way than I anticipated. I have an example here which globally sets a box, which is typeset outside of the bibliography environment afterwards. This doesn't seem to typeset anything. :-( So I'm confused about the inner workings of biblatex (even though the source seems.... well, the source seems to reinforce my thought that biblatex simply doesn't do anything fancy). Judging from the source the package just has a lot of options, and that's about the only reason for the large amount of lines in biblatex1.sty... Consider the following MWE to be previewed in the built-in PDF previewer in Firefox: \documentclass[handout]{beamer}\usepackage{pgfpages}\pgfpagesuselayout{8 on 1}[a4paper,border shrink=4mm]\begin{document}\begin{frame}\[\bigcup_n \sum_n\]\[\underbrace{aaaaaa}_{bbb}\]\end{frame}\end{d... @Paulo Finally there's a good synth/keyboard that knows what organ stops are! youtube.com/watch?v=jv9JLTMsOCE Now I only need to see if I stay here or move elsewhere. If I move, I'll buy this there almost for sure. @JosephWright most likely that I'm for a full str module ... but I need a little more reading and backlog clearing first ... and have my last day at HP tomorrow so need to clean out a lot of stuff today .. and that does have a deadline now @yo' that's not the issue.
with the laptop I lose access to the company network and anything I need from there during the next two months, such as email address of payroll etc etc needs to be 100% collected first @yo' I'm sorry I explain too bad in english :) I mean, if the rule was use \tl_use:N to retrieve the contents of a token list (so it's not optional, which is actually seen in many places). And then we wouldn't have to \noexpand them in such contexts. @JosephWright \foo:V \l_some_tl or \exp_args:NV \foo \l_some_tl isn't that confusing. @Manuel As I say, you'd still have a difference between say \exp_after:wN \foo \dim_use:N \l_my_dim and \exp_after:wN \foo \tl_use:N \l_my_tl: only the first case would work @Manuel I've wondered if one would use registers at all if you were starting today: with \numexpr, etc., you could do everything with macros and avoid any need for \<thing>_new:N (i.e. soft typing). There are then performance questions, termination issues and primitive cases to worry about, but I suspect in principle it's doable. @Manuel Like I say, one can speculate for a long time on these things. @FrankMittelbach and @DavidCarlisle can I am sure tell you lots of other good/interesting ideas that have been explored/mentioned/imagined over time. @Manuel The big issue for me is delivery: we have to make some decisions and go forward even if we therefore cut off interesting other things @Manuel Perhaps I should knock up a set of data structures using just macros, for a bit of fun [and a set that are all protected :-)] @JosephWright I'm just exploring things myself “for fun”. I don't mean as serious suggestions, and as you say you already thought of everything. It's just that I'm getting at those points myself so I ask for opinions :) @Manuel I guess I'd favour (slightly) the current set up even if starting today as it's normally \exp_not:V that applies in an expansion context when using tl data. That would be true whether they are protected or not.
Certainly there is no big technical reason either way in my mind: it's primarily historical (expl3 pre-dates LaTeX2e and so e-TeX!) @JosephWright tex being a macro language means macros expand without being prefixed by \tl_use. \protected would affect expansion contexts but not use "in the wild" I don't see any way of having a macro that by default doesn't expand. @JosephWright it has series of footnotes for different types of footnotey thing, quick eye over the code I think by default it has 10 of them but duplicates for minipages as latex footnotes do the mpfoot... ones don't need to be real inserts but it probably simplifies the code if they are. So that's 20 inserts and more if the user declares a new footnote series @JosephWright I was thinking while writing the mail so not tried it yet that given that the new \newinsert takes from the float list I could define \reserveinserts to add that number of "classic" insert registers to the float list where later \newinsert will find them, would need a few checks but should only be a line or two of code. @PauloCereda But what about the for loop from the command line? I guess that's more what I was asking about. Say that I wanted to call arara from inside of a for loop on the command line and pass the index of the for loop to arara as the jobname. Is there a way of doing that?
Can I be a pedant and say that if the question states that $\langle \alpha \vert A \vert \alpha \rangle = 0$ for every vector $\lvert \alpha \rangle$, that means that $A$ is everywhere defined, so there are no domain issues? Gravitational optics is very different from quantum optics, if by the latter you mean the quantum effects of interaction between light and matter. There are three crucial differences I can think of:We can always detect uniform motion with respect to a medium by a positive result to a Michelson... Hmm, it seems we cannot just superimpose gravitational waves to create standing waves The above search is inspired by last night's dream, which took place in an alternate version of my 3rd year undergrad GR course. The lecturer talks about a weird equation in general relativity that has a huge summation symbol, and then talked about gravitational waves emitting from a body. After that lecture, I then asked the lecturer whether gravitational standing waves are possible, as I imagine the hypothetical scenario of placing a node at the end of the vertical white line [The Cube] Regarding The Cube, I am thinking about an energy level diagram like this where the infinitely degenerate level is the lowest energy level when the environment is also taken account of The idea is that if the possible relaxations between energy levels are restricted so that to relax from an excited state, the bottleneck must be passed, then we have a very high entropy high energy system confined in a compact volume Therefore, as energy is pumped into the system, the lack of direct relaxation pathways to the ground state plus the huge degeneracy at higher energy levels should result in a lot of possible configurations giving the same high energy, thus effectively creating an entropy trap to minimise heat loss to the surroundings @Kaumudi.H there is also an addon that allows Office 2003 to read (but not save) files from later versions of Office, and you probably want this too.
The installer for this should also be in \Stuff (but probably isn't if I forgot to include the SP3 installer). Hi @EmilioPisanty, it's great that you want to help me clear out confusions. I think we have a misunderstanding here. When you say "if you really want to "understand"", I thought you were referring to my questions directly to the close voter, not the question in meta. When you mention my original post, you think that it's a hopeless mess of confusion? Why? Except being off-topic, it seems clear to understand, doesn't it? Physics.stackexchange currently uses 2.7.1 with the config TeX-AMS_HTML-full which is affected by a visual glitch on both the desktop and mobile versions of Safari under the latest OS: \vec{x} results in the arrow displayed too far to the right (issue #1737). This has been fixed in 2.7.2. Thanks. I have never used the app for this site, but if you ask a question on a mobile phone, there is no homework guidance box, as there is on the full site, due to screen size limitations.I think it's a safe assumption that many students are using their phone to place their homework questions, in wh...
the major challenge is how to restrict the possible relaxation pathways so that in order to relax back to the ground state, at least one lower rotational level has to be passed, thus creating the bottleneck shown above If two vectors $\vec{A} =A_x\hat{i} + A_y \hat{j} + A_z \hat{k}$ and $\vec{B} =B_x\hat{i} + B_y \hat{j} + B_z \hat{k}$ have angle $\theta$ between them, then the dot product (scalar product) of $\vec{A}$ and $\vec{B}$ is $$\vec{A}\cdot\vec{B} = |\vec{A}||\vec{B}|\cos \theta$$ and, in components, $$\vec{A}\cdot\vec{B} = A_xB_x + A_yB_y + A_zB_z.$$ @ACuriousMind I want to give a talk on my GR work first. That can be hand-wavey. But I also want to present my program for Sobolev spaces and elliptic regularity, which is reasonably original. But the devil is in the details there. @CooperCape I'm afraid not, you're still just asking us to check whether or not what you wrote there is correct - such questions are not a good fit for the site, since the potentially correct answer "Yes, that's right" is too short to even submit as an answer
Production of Σ(1385)± and Ξ(1530)0 in proton–proton collisions at √s = 7 TeV (Springer, 2015-01-10) The production of the strange and double-strange baryon resonances (Σ(1385)±, Ξ(1530)0) has been measured at mid-rapidity (|y| < 0.5) in proton–proton collisions at √s = 7 TeV with the ALICE detector at the LHC. Transverse ... Forward-backward multiplicity correlations in pp collisions at √s = 0.9, 2.76 and 7 TeV (Springer, 2015-05-20) The strength of forward-backward (FB) multiplicity correlations is measured by the ALICE detector in proton-proton (pp) collisions at √s = 0.9, 2.76 and 7 TeV. The measurement is performed in the central pseudorapidity ... Inclusive photon production at forward rapidities in proton-proton collisions at $\sqrt{s}$ = 0.9, 2.76 and 7 TeV (Springer Berlin Heidelberg, 2015-04-09) The multiplicity and pseudorapidity distributions of inclusive photons have been measured at forward rapidities ($2.3 < \eta < 3.9$) in proton-proton collisions at three center-of-mass energies, $\sqrt{s}=0.9$, 2.76 and 7 ... Rapidity and transverse-momentum dependence of the inclusive J/$\mathbf{\psi}$ nuclear modification factor in p-Pb collisions at $\mathbf{\sqrt{\textit{s}_{NN}}}=5.02$ TeV (Springer, 2015-06) We have studied the transverse-momentum ($p_{\rm T}$) dependence of the inclusive J/$\psi$ production in p-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV, in three center-of-mass rapidity ($y_{\rm cms}$) regions, down to ... Measurement of pion, kaon and proton production in proton–proton collisions at √s = 7 TeV (Springer, 2015-05-27) The measurement of primary π±, K±, p and p̄ production at mid-rapidity (|y| < 0.5) in proton–proton collisions at √s = 7 TeV performed with A Large Ion Collider Experiment (ALICE) at the Large Hadron Collider (LHC) is reported. ...
Two-pion femtoscopy in p-Pb collisions at $\sqrt{s_{\rm NN}}=5.02$ TeV (American Physical Society, 2015-03) We report the results of the femtoscopic analysis of pairs of identical pions measured in p-Pb collisions at $\sqrt{s_{\mathrm{NN}}}=5.02$ TeV. Femtoscopic radii are determined as a function of event multiplicity and pair ... Measurement of charm and beauty production at central rapidity versus charged-particle multiplicity in proton-proton collisions at $\sqrt{s}$ = 7 TeV (Springer, 2015-09) Prompt D meson and non-prompt J/$\psi$ yields are studied as a function of the multiplicity of charged particles produced in inelastic proton-proton collisions at a centre-of-mass energy of $\sqrt{s}=7$ TeV. The results ... Charged jet cross sections and properties in proton-proton collisions at $\sqrt{s}=7$ TeV (American Physical Society, 2015-06) The differential charged jet cross sections, jet fragmentation distributions, and jet shapes are measured in minimum bias proton-proton collisions at centre-of-mass energy $\sqrt{s}=7$ TeV using the ALICE detector at the ... Centrality dependence of high-$p_{\rm T}$ D meson suppression in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV (Springer, 2015-11) The nuclear modification factor, $R_{\rm AA}$, of the prompt charmed mesons ${\rm D^0}$, ${\rm D^+}$ and ${\rm D^{*+}}$, and their antiparticles, was measured with the ALICE detector in Pb-Pb collisions at a centre-of-mass ... K*(892)$^0$ and $\Phi$(1020) production in Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV (American Physical Society, 2015-02) The yields of the K*(892)$^0$ and $\Phi$(1020) resonances are measured in Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV through their hadronic decays using the ALICE detector. The measurements are performed in multiple ...
Higher harmonic flow coefficients of identified hadrons in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV (Springer, 2016-09) The elliptic, triangular, quadrangular and pentagonal anisotropic flow coefficients for $\pi^{\pm}$, $\mathrm{K}^{\pm}$ and p+$\overline{\mathrm{p}}$ in Pb-Pb collisions at $\sqrt{s_\mathrm{{NN}}} = 2.76$ TeV were measured ...
Production of charged pions, kaons and protons at large transverse momenta in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV (Elsevier, 2014-09) Transverse momentum spectra of $\pi^{\pm}, K^{\pm}$ and $p(\bar{p})$ up to $p_T$ = 20 GeV/c at mid-rapidity, |y| $\le$ 0.8, in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV have been measured using the ALICE detector ...
The Homomorphic Image of a Solvable Group is Solvable Proposition 1: Let $G$ and $H$ be groups. If $G$ is a solvable group and $\phi : G \to H$ is a group homomorphism from $G$ to $H$ then $\phi(G)$ is a solvable group. Proof: Let $G$ be a solvable group and let $\{ e \} = G_0 \leq G_1 \leq ... \leq G_n = G$ be a finite chain of successive subgroups of $G$ such that $G_i$ is a normal subgroup of $G_{i+1}$ for all $0 \leq i \leq n - 1$ and $G_{i+1}/G_i$ is abelian for all $0 \leq i \leq n - 1$. For each $0 \leq k \leq n$ let $H_k = \phi(G_k)$. Note that the homomorphic image $\phi(G_k)$ of each $G_k$ is a subgroup of $\phi(G)$ and that: \begin{align} \quad \{ e \} = H_0 \leq H_1 \leq ... \leq H_n = \phi(G_n) = \phi(G) \end{align} Let $0 \leq i \leq n - 1$. Since $G_i$ is a normal subgroup of $G_{i+1}$ we have that $\phi(G_i)$ is a normal subgroup of $\phi(G_{i+1})$. To see this, let $x \in \phi(G_{i+1})$ and let $z \in \phi(G_i)$. Since $x \in \phi(G_{i+1})$ there exists $g_x \in G_{i+1}$ such that $\phi(g_x) = x$. Since $z \in \phi(G_i)$ there exists $g_z \in G_i$ such that $\phi(g_z) = z$. Since $G_i$ is a normal subgroup of $G_{i+1}$ we have that: \begin{align} \quad g_xg_zg_x^{-1} \in G_i \end{align} Therefore: \begin{align} \quad \phi(g_xg_zg_x^{-1}) \in \phi(G_i) \\ \quad \phi(g_x)\phi(g_z)\phi(g_x)^{-1} \in \phi(G_i) \\ \quad xzx^{-1} \in \phi(G_i) \end{align} Since this holds for all $x \in \phi(G_{i+1})$ and for all $z \in \phi(G_i)$ we see that $\phi(G_i)$ is a normal subgroup of $\phi(G_{i+1})$. So the quotient group $\phi(G_{i+1})/\phi(G_i)$ is well-defined for each $0 \leq i \leq n - 1$.
Let $f : G_{i+1}/G_i \to \phi(G_{i+1})/\phi(G_i)$ be defined for all $gG_i \in G_{i+1}/G_i$ by: \begin{align} \quad f(gG_i) = \phi(g) \phi(G_i) \end{align} Then $f$ is a homomorphism from $G_{i+1}/G_i$ to $\phi(G_{i+1})/\phi(G_i)$ since for all $gG_i, hG_i \in G_{i+1}/G_i$ we have that: \begin{align} \quad f((gG_i)(hG_i)) = f(ghG_i) = \phi(gh)\phi(G_i) = \phi(g)\phi(h) \phi(G_i) = [\phi(g) \phi(G_i)][\phi(h) \phi(G_i)] = f(gG_i) f(hG_i) \end{align} Moreover, $f$ is surjective: for each $x \phi(G_i) \in \phi(G_{i+1})/\phi(G_i)$ we have $x \in \phi(G_{i+1})$, so there exists a $g_x \in G_{i+1}$ such that $\phi(g_x) = x$, and then $g_xG_i \in G_{i+1}/G_i$ satisfies $f(g_xG_i) = \phi(g_x) \phi(G_i) = x \phi(G_i)$. Since $G_{i+1}/G_i$ is abelian and the homomorphic image of an abelian group is abelian, so is $f(G_{i+1}/G_i) = \phi(G_{i+1})/\phi(G_i) = H_{i+1}/H_i$. The same argument applies for each $0 \leq i \leq n - 1$. So $\{ e \} = H_0 \leq H_1 \leq ... \leq H_n = \phi(G)$ is such that $H_i$ is a normal subgroup of $H_{i+1}$ for all $0 \leq i \leq n - 1$ and $H_{i+1}/H_i$ is abelian for all $0 \leq i \leq n - 1$. Therefore $\phi(G)$ is solvable. $\blacksquare$
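The proposition can be sanity-checked computationally on a small example: take $G = S_3$ (solvable) and the sign homomorphism $\phi : S_3 \to \mathbb{Z}_2$, and verify that the image is solvable by computing derived series. This is a minimal pure-Python sketch; all function names below are ours, not from the text above.

```python
# Hypothetical sanity check: G = S_3 is solvable, phi is the sign
# homomorphism S_3 -> Z_2, and the image phi(G) is solvable as well.
from itertools import permutations

def compose(p, q):
    # (p o q)(i) = p(q(i)) for permutations written as tuples
    return tuple(p[q[i]] for i in range(len(p)))

def inverse(p):
    inv = [0] * len(p)
    for i, v in enumerate(p):
        inv[v] = i
    return tuple(inv)

def closure(elems, identity):
    # smallest set containing elems that is closed under composition;
    # for a finite group this is exactly the generated subgroup
    G = {identity} | set(elems)
    while True:
        new = {compose(a, b) for a in G for b in G} - G
        if not new:
            return G
        G |= new

def derived_subgroup(G, identity):
    # subgroup generated by all commutators a b a^{-1} b^{-1}
    comms = {compose(compose(a, b), compose(inverse(a), inverse(b)))
             for a in G for b in G}
    return closure(comms, identity)

def is_solvable(G, identity):
    # G is solvable iff its derived series reaches the trivial group
    H = set(G)
    while len(H) > 1:
        H2 = derived_subgroup(H, identity)
        if H2 == H:
            return False
        H = H2
    return True

def parity(p):
    # 0 for even permutations, 1 for odd (the sign homomorphism)
    return sum(1 for i in range(len(p)) for j in range(i + 1, len(p))
               if p[i] > p[j]) % 2
```

Running `is_solvable` on $S_3$ and on its image under the sign map (here $\mathbb{Z}_2$ is realized as the permutations of two points) returns `True` for both, matching the proposition.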
The Topology of Closed Intervals [-n, n] on the Set of Real Numbers Recall from the Topological Spaces page that a set $X$ and a collection $\tau$ of subsets of $X$, together denoted $(X, \tau)$, is called a topological space if: $\emptyset \in \tau$ and $X \in \tau$, i.e., the empty set and the whole set are contained in $\tau$. If $U_i \in \tau$ for all $i \in I$ where $I$ is some index set then $\displaystyle{\bigcup_{i \in I} U_i \in \tau}$, i.e., for any arbitrary collection of subsets from $\tau$, their union is contained in $\tau$. If $U_1, U_2, ..., U_n \in \tau$ then $\displaystyle{\bigcap_{i=1}^{n} U_i \in \tau}$, i.e., for any finite collection of subsets from $\tau$, their intersection is contained in $\tau$. On The Topology of Open Intervals on the Set of Real Numbers page we saw that if $\tau = \{ \emptyset, \mathbb{R} \} \cup \{ (-n, n) : n \in \mathbb{Z}, n \geq 1 \}$ then $(X, \tau)$ is a topological space. We will now look at a similar topology of closed intervals of the form $[-n, n]$, with $\emptyset$ and $\mathbb{R}$ included, on the set of real numbers. Consider the following collection of closed intervals from $\mathbb{R}$, with $\emptyset$ and $\mathbb{R}$ included: \begin{align} \quad \tau = \{ \emptyset, \mathbb{R} \} \cup \{ [-n, n] : n \in \mathbb{Z}, n \geq 1 \} \end{align} For the first condition, clearly $\emptyset, \mathbb{R} \in \tau$ by the definition of $\tau$. For the second condition, notice that the members of $\tau$ are nested: for $m \leq n$ we have \begin{align} \quad \emptyset \subseteq [-m, m] \subseteq [-n, n] \subseteq \mathbb{R} \end{align} Therefore any arbitrary union $\displaystyle{\bigcup_{i \in I} U_i}$ for $U_i \in \tau$ for all $i \in I$ is either the "largest" subset in the union, if one exists, or all of $\mathbb{R}$, and is hence contained in $\tau$. For the third condition, we have that any finite intersection $\displaystyle{\bigcap_{i=1}^{n} U_i}$ for $U_i \in \tau$ and $i \in \{ 1, 2, ..., n \}$ is the "smallest" subset in the intersection and is hence contained in $\tau$. Therefore $(X, \tau)$ is a topological space.
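Because the members of $\tau$ form a chain under inclusion, the axioms can be checked mechanically for a finite truncation of the family. A small sketch (the encoding below is our own device, not from the page):

```python
# Check the topology axioms for a finite truncation of
# tau = {emptyset, R} U {[-n, n] : n >= 1}. Each member is encoded by a
# number: 0 stands for the empty set, inf for R, and n >= 1 for [-n, n].
# Because the family is a chain, unions are max and intersections are min.
from itertools import combinations

N = 10
tau = {0.0, float('inf')} | {float(n) for n in range(1, N + 1)}

def union(sets):
    return max(sets)

def intersection(sets):
    return min(sets)
```

For the full infinite family the union of all $[-n, n]$ is $\mathbb{R}$, which is again in $\tau$, so the argument is not lost in the truncation.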
Here is what I think you need to come to your own conclusion. First I will give a very general overview of lift creation, and then I will look at three wings: An unmodified wing This wing plus a winglet This wing plus the winglet, but this time folded down into the plane of the wing. For each I will plot the lift and bending moment distribution. I will assume an elliptic circulation, fully knowing that this is not what most aircraft use. But I have to pick a distribution to make all three cases comparable, and the elliptic one makes things easier. The conclusions can be generalized for other distributions. This will be a lengthy post (you should know me by now), so thanks to all who persevere through all of it. Lift creation and induced drag This topic has been covered before, and I mention it again to show a very simple and elegant way to explain induced drag that does not need vortices. I want to dispel the myth that induced drag is caused by air flowing around the wingtip, and that winglets somehow magically can suppress this flow. Consider a wing with elliptic circulation over span (think of circulation as the product of the local lift coefficient $c_l$ and local chord; it is basically the lift per spanwise increment). The wing bends the air through which it flows slightly downwards, and the air exerts an opposite upwards force on the wing, namely lift (Newton's third law). I choose an elliptic distribution because then the downwash is constant over span, which makes the following calculations easier. The sheet of air coming off behind the wing looks trough-shaped and moves downwards, thereby pressing other air below out of the way and allowing air above to flow inwards and to fill up the vacated volume. That is how the free vortex is created, and air flowing around the wingtips has only a small part in this. Induced drag is the consequence of the wing bending the airflow downwards.
To simplify things, let's assume the wing is just acting on the air with the density $\rho$ flowing with the speed $v$ through a circle with a diameter equal to the span $b$ of the wing. If we just look at this stream tube, the mass flow is $$\frac{dm}{dt} = \frac{b^2}{4}\cdot\pi\cdot\rho\cdot v$$ Lift $L$ is then the rate of momentum change caused by the wing. With the downward air speed $v_z$ imparted by the wing, lift is: $$L = \frac{b^2}{4}\cdot\pi\cdot\rho\cdot v\cdot v_z = S\cdot c_L\cdot\frac{v^2}{2}\cdot\rho$$ $S$ is the wing area and $c_L$ the overall lift coefficient. If we now solve for the vertical air speed, we get $$v_z = \frac{S\cdot c_L\cdot\frac{v^2}{2}\cdot\rho}{\frac{b^2}{4}\cdot\pi\cdot\rho\cdot v} = \frac{2\cdot c_L\cdot v}{\pi\cdot AR}$$ with $AR = \frac{b^2}{S}$ the aspect ratio of the wing. Now we can divide the vertical speed by the air speed to calculate the angle by which the air has been deflected by the wing. Let's call it $\alpha_w$: $$\alpha_w = \arctan\left(\frac{v_z}{v}\right) = \arctan \left(\frac{2\cdot c_L}{\pi\cdot AR}\right)$$ The deflection happens gradually along the wing chord, so the mean local flow angle along the chord is just $\alpha_w / 2$. Lift acts perpendicularly to this local flow, thus is tilted backwards by $\alpha_w / 2$. In coefficients, lift is $c_L$, and the backwards component is $\alpha_w / 2 \cdot c_L$. Let's call this component $c_{Di}$; using the small-angle identity $\tfrac{1}{2}\arctan(2x) \approx \arctan(x)$ it is: $$c_{Di} = \arctan \left(\frac{c_L}{\pi\cdot AR}\right)\cdot c_L$$ For small $\alpha_w$ the arctangent can be replaced by its argument, and we get this familiar looking equation for the backwards-pointing component of the reaction force: $$c_{Di} = \frac{c_L^2}{\pi\cdot AR}$$ If the circulation over span has an elliptic distribution, the downwash $v_z$ is constant over span, and the induced drag $c_{Di}$ is at its minimum.
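The chain of formulas above can be evaluated numerically. A quick sketch with made-up values for airspeed, span, area and lift coefficient (none of these numbers come from the answer itself):

```python
import math

rho = 1.225        # air density [kg/m^3] (sea level, standard atmosphere)
v = 70.0           # airspeed [m/s] (illustrative value)
b = 30.0           # wing span [m] (illustrative value)
S = 100.0          # wing area [m^2] (illustrative value)
cL = 0.5           # overall lift coefficient (illustrative value)

AR = b**2 / S                                  # aspect ratio
mass_flow = b**2 / 4 * math.pi * rho * v       # dm/dt through the stream tube
v_z = 2 * cL * v / (math.pi * AR)              # downwash speed behind the wing
alpha_w = math.atan(v_z / v)                   # total deflection angle [rad]
cDi = cL**2 / (math.pi * AR)                   # induced drag coefficient
L = mass_flow * v_z                            # lift from the momentum change
```

By construction, `L` agrees with the coefficient form $S\, c_L\, \rho\, v^2/2$, which is exactly the consistency check behind the derivation of $v_z$.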
If this were different, a higher local $v_z$ would cause a quadratic increase in local induced drag, so the whole wing would create its lift less efficiently. Now we know we can calculate induced drag, and we understand why the vortex sheet behind the wing rolls up, producing two counter-rotating vortices, all without looking at the details of the wingtip. What counts is that the wing is of finite span, so the stream tube which is influenced by the wing is of finite diameter as well. Of course, in reality there is no clear boundary between air which is affected by the wing and other air which is not; the transition becomes more diffuse the further one moves away from the wing. Comparison of wingtips First the geometries: Here are three wingtips in top and front views for comparison: Now let's look at the circulation distribution of the simple wing tip: Again, I choose the elliptic distribution for simplicity. The corresponding bending moment looks like this: No surprises so far. Now we add a winglet and make it work as well as possible. This means we have to give it an angle of attack where it carries the circulation from the wing over onto the winglet and completes the elliptic tapering of circulation down to 0 at the tip: The grey dashed line is the circulation of the original wing. I adjusted the circulation such that both wings produce the same lift. $b_{WL}$ is the span at the winglet tip, and for the bending moment plot I have folded the spanwise coordinate down onto the y axis: Now the bending moment starts at the wingtip with a nonzero value. Since the sideways force of the winglet is parallel to the wing spar, this bending moment contribution is constant over span. But there is more: Now the circulation at the old wingtip location is also nonzero, and we get a substantial lift increase at the outer wing stations. This effect is what causes the additional lift and gives the better aileron response that winglets make possible.
But it also increases the root bending moment, because this additional lift acts with the lever arm of the outer wing. How can we compare the induced drag of the wing with winglets to the original wing? The circulation gradient is lower, which helps. Also, the diameter of that stream tube is larger, but it is hard to say by how much. The sideways force on the winglet is created by pushing the vortex sheet aft of the winglet sideways out, so the trough-shaped area should become wider. Empirical evidence hints at an increase in diameter of 45% of the winglet span (see chapter 6 for a discussion of several papers on the topic). Just for the heck of it, let's assume that the diameter really increases in line with winglet span. Then let's compare that to the straight wing extension, where the same diameter can be assumed with much more certainty: Now the lift on the folded-down winglet also acts upwards, so the circulation at the center of the wing can be reduced even further. However, it now adds a linearly increasing part to the bending moment, and the outer wing section creates more lift, as before with the wing with winglet: Here, the root bending moment is higher than in the winglet case. This is a second advantage of winglets: They allow an increase in maximum lift with less bending moment increase than a wing extension. But the wing extension puts all parts towards the creation of lift, and not some towards the useless creation of side force. Both the extended and the winglet wing have the same surface friction and (when we assume the same diameter of the hypothetical stream tube) the same induced drag. But since the winglet creates some side force, the remaining wing needs to fly at a higher lift coefficient. Also, the intersection of wing and winglet should be rounded as well as possible, since this is where early separation starts at higher angles of attack. None of this affects the straight wing extension.
Most evidence shows that winglets improve L/D over the original wing, but folding the winglet down will more than double its effectiveness in lowering drag. Even if we assume that the winglet is just as good as an equal span extension, still the span extension comes out ahead in L/D improvement because all its lift contributes to overall lift, whereas the winglet produces a side force instead. If no separation occurs at the wing-winglet intersection, both will create the same induced and profile drag (pressure and friction), because both have the same wetted surface and the same local circulation. Again, this gives winglets the benefit of equally low induced drag, which is not supported by most measurements. The extended wingtip in the example above has interesting characteristics. It is a sweptback (raked) wingtip, which causes the local lift curve slope to be lower than that of the straight wing. This increases its maximum angle of attack and - assuming that the local area is bigger than what an elliptic wing shape would dictate - makes it possible to keep a nearly elliptic circulation distribution over a wider angle of attack range. The bigger local area is a sensible precaution against the wing tip stalling first, so a raked wing tip will combine benign stall characteristics and very low induced drag. Compare this to the winglet, which has to be tailored for one polar point: Since changes in wing angle of attack will not change the incidence of the winglet, it cannot adapt as well to different flow conditions as can the extended wing. In sideslip the winglet will mess up the circulation distribution on the wingtip and will act like a deflected spoiler. Conclusion Comparing equal winglets and wing extensions gives these basic characteristics: Both have the same viscous drag at low angle of attack. Both can create more maximum lift, and both lower induced drag. The wing extension can create most lift for the given increase in wetted surface. 
The wing extension is more than twice as effective in lowering induced drag. The wing extension gives a better circulation distribution at off-design angles of attack. The wing extension produces the highest root bending moment for a given amount of lift. How much the bending moment increase will drive up structural mass depends on the aspect ratio of the original wing. Low aspect ratio wings will not suffer much, but stretching high aspect ratio wings will drive up spar mass considerably. But please note that the winglet also causes a higher root bending moment; it is just smaller than that of the wing extension, because the winglet creates some side force instead of pure, useful lift.
I am learning about ElGamal signature verification. During the signature generation one has to choose a $k$ such that $1 < k < p - 1$ and $\gcd(k, p - 1) = 1$. I am using the notation from the Wikipedia site. Later the inverse of $k$ is used. I assume this is a modular inverse. The learning materials I have found do not state whether this is a modular inverse with respect to $p$ or $p-1$. Can somebody clarify this? In the ElGamal Signature Scheme we have: $$\beta=\alpha^a \bmod p$$ The values $p,\alpha$ and $\beta$ are the public key, and $a$ is the private key. $$\operatorname{sig_k}(x,k)=(\gamma , \delta)$$ where $\gamma = \alpha^k \bmod p $ and $\delta = (x-a\gamma)k^{-1} \bmod {p-1}$. It is $p-1$. I now give a detailed explanation as to why this is: 1. First, I give a corollary which is used to prove the ElGamal signature verification. Corollary If $a$ and $p$ are relatively prime integers, $p$ is a prime integer, and $a$ is a primitive root modulo $p$, then $a^i \equiv a^j \pmod p$, where $i$ and $j$ are non-negative integers, if and only if $i \equiv j \pmod {p-1}$. This corollary gives the relationship between $p$ and $p-1$. One of the methods to prove it is based on a theorem which can be found in Elementary Number Theory and Its Applications (6th ed.), by Kenneth Rosen (p. 349), Theorem 9.2. It can also be used to prove DSA. Theorem 9.2 If $a$ and $n$ are relatively prime integers with $n>0$, then $a^i \equiv a^j \pmod n$, where $i$ and $j$ are non-negative integers, if and only if $i \equiv j \pmod {\operatorname{ord}_n a}$. In particular, when $p$ is a prime integer and $a$ is a primitive root, $\operatorname{ord}_p a = \varphi(p)=p-1$, which proves the corollary. Hence: $$i \equiv j \pmod {p-1} \Leftrightarrow a^i \equiv a^j \pmod p.$$ 2. Now, we review the ElGamal signature, and prove it using the corollary. 2.1 Key Generation for an ElGamal Digital Signature Choose a large prime $p$. Choose a primitive element $\alpha$ of $Z_p^*$. Choose a random integer $d \in \{2,3, . . . , p-2\}$.
Compute $\beta = \alpha^d \bmod p$. The public key is $k_{pub}=(p, \alpha, \beta)$, and the private key is $k_{pr}=d$. 2.2 ElGamal Signature Generation The signing consists of two main steps: choosing a random value $k$, which forms an ephemeral private key, and computing the actual signature of $x$. Choose a random ephemeral key $k \in \{0,1,2, . . . , p-2\}$ such that $\gcd(k, p-1) = 1$. Compute the signature parameters: $$r \equiv \alpha^k \pmod p$$ $$s \equiv (x-d\cdot r)k^{-1} \pmod {p-1}$$ Hence, the signature is: $$sig_{k_{pr}}(x,k)=(r,s)$$ 2.3 ElGamal Signature Verification Compute the value $$t \equiv \beta^r \cdot r^s \pmod p$$ The verification follows from: $$ t\begin{cases} \equiv \ \alpha^x \pmod p\ \ \ \ \Rightarrow \ \ \ \ valid \ \ signature\\\\ \not \equiv \ \alpha^x \ \pmod p\ \ \ \ \Rightarrow \ \ \ \ invalid \ \ signature \end{cases} $$ 2.4 Proof We must prove that: $$ \alpha^x \equiv \beta^r \cdot r^s \pmod p$$ Since: $$ \beta^r \cdot r^s \equiv (\alpha^d)^r(\alpha^k)^s \equiv \alpha^{d\cdot r+k\cdot s} \pmod p$$ we require that: $$\alpha^x \equiv \alpha^{d\cdot r+k\cdot s} \pmod p$$ According to the corollary shown above, this relationship holds if and only if: $$ x \equiv d\cdot r+k\cdot s \pmod {p-1}$$ Hence, we get $s$: $$s\equiv (x - d \cdot r)k^{-1} \pmod {p-1}$$ This is exactly the rule by which the signature parameter $s$ is constructed. The condition $\gcd(k, p-1) = 1$ is required since we have to invert the ephemeral key modulo $p-1$ when computing $s$. 3. Conclusion From the description above, we now know why it is $p-1$, not $p$. References: Rosen, K. H. (2011). Elementary Number Theory and Its Applications (6th ed.). Pearson. Paar, C., & Pelzl, J. (2010). Understanding Cryptography. Springer-Verlag.
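The scheme of sections 2.1 to 2.3 can be exercised end to end in a few lines. The toy parameters below ($p = 467$, $\alpha = 2$, $d = 127$) are textbook-sized and far too small for real use; the point is only to see the two different moduli in action, $p$ for the exponentiations and $p-1$ for the inverse of $k$:

```python
from math import gcd

p = 467                  # toy prime; real deployments need far larger moduli
alpha = 2                # base element (toy value)
d = 127                  # private key (toy value)
beta = pow(alpha, d, p)  # public key component

def sign(x, k):
    # k must be invertible modulo p-1: this is where gcd(k, p-1) = 1 enters
    assert gcd(k, p - 1) == 1
    r = pow(alpha, k, p)              # exponentiation is done mod p ...
    k_inv = pow(k, -1, p - 1)         # ... but k is inverted mod p-1
    s = (x - d * r) * k_inv % (p - 1)
    return r, s

def verify(x, r, s):
    t = pow(beta, r, p) * pow(r, s, p) % p
    return t == pow(alpha, x, p)
```

Note where each modulus appears: $r$, $t$ and the final comparison live in $\mathbb{Z}_p^*$, while the arithmetic producing $s$ lives modulo $p-1$, exactly as the corollary requires.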
The question I am trying to answer is: Show that for any not explicitly time-dependent operator $\hat A$, $$\frac{d}{dt} \langle \hat A \rangle = \frac{i}{\hbar} \langle \psi | [\hat H, \hat A]| \psi \rangle .$$ Use this relation to show that $\langle \hat A \rangle$ does not change whenever the system is in a stationary state. I know how to show the relation, using the product rule on $\frac{d}{dt} \langle \psi | \hat A | \psi \rangle$, and using the Schrödinger equation to evaluate each term. $$\frac{d}{dt} \langle \psi | \hat A | \psi \rangle = (\frac{d}{dt} \langle \psi |) \hat A | \psi \rangle + \langle \psi | \frac{d}{dt}\hat A | \psi \rangle + \langle \psi | \hat A \frac{d}{dt}| \psi \rangle$$ $$\frac{d}{dt}| \psi \rangle = - \frac{i}{\hbar} \hat H | \psi \rangle$$ $$ \frac{d}{dt} \langle \psi | = \frac{i}{\hbar} \langle \psi | \hat H$$ $$\frac{d}{dt} \hat A = 0$$ $$\frac{d}{dt} \langle \psi | \hat A | \psi \rangle = \frac{i}{\hbar} (\langle \psi | \hat H \hat A | \psi \rangle - \langle \psi | \hat A \hat H | \psi \rangle) = \frac{i}{\hbar} \langle \psi | [\hat H, \hat A] | \psi \rangle$$ I am unsure of how to show that $\frac{d}{dt} \langle \hat A \rangle = 0$ for the second part of the question. Clearly, if $\hat A$ commutes with the Hamiltonian, then this will be true since $\langle \psi | 0 | \psi \rangle = 0$. But I don't think I can assume this to be the case. Thanks, I hope that makes sense. Jacob
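One way to see the second part: in a stationary state $\hat H|\psi\rangle = E|\psi\rangle$ with $E$ real, so $\langle\psi|\hat H\hat A|\psi\rangle = E\langle\psi|\hat A|\psi\rangle = \langle\psi|\hat A\hat H|\psi\rangle$, and the commutator expectation vanishes even when $[\hat H, \hat A] \neq 0$. This can be checked numerically on a toy $2\times 2$ system (the matrices below are invented for illustration; units with $\hbar = 1$):

```python
import math

# Toy Hermitian Hamiltonian and observable (invented numbers, hbar = 1)
H = [[1.0, 0.5], [0.5, 2.0]]
A = [[1.0, 1j], [-1j, 3.0]]

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def expect(psi, M):
    # <psi| M |psi> for a length-2 complex vector psi
    Mpsi = [sum(M[i][j] * psi[j] for j in range(2)) for i in range(2)]
    return sum(psi[i].conjugate() * Mpsi[i] for i in range(2))

# Ground state of the real symmetric H, via the closed 2x2 eigenformula
tr = H[0][0] + H[1][1]
det = H[0][0] * H[1][1] - H[0][1] * H[1][0]
lam = (tr - math.sqrt(tr * tr - 4 * det)) / 2
v = [H[0][1], lam - H[0][0]]
norm = math.sqrt(abs(v[0])**2 + abs(v[1])**2)
psi = [v[0] / norm, v[1] / norm]

HA, AH = matmul(H, A), matmul(A, H)
comm = [[HA[i][j] - AH[i][j] for j in range(2)] for i in range(2)]
rate = 1j * expect(psi, comm)   # d<A>/dt = (i/hbar) <psi|[H, A]|psi>
```

Here `rate` vanishes (to floating-point precision) in the eigenstate `psi`, while the same expression in a generic state such as `[1, 0]` does not, which is exactly the content of the exercise.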
Homework Statement In the figure below an electron is shot directly toward the center of a large metal plate that has surface charge density -1.20 × 10^-6 C/m². If the initial kinetic energy of the electron is 1.60 × 10^-17 J and if the electron is to stop (due to electrostatic repulsion from the plate) just as it reaches the plate, how far from the plate must the launch point be? Homework Equations F=ma, KE=(1/2)mV² My attempt at the problem: $$m = 9.11 \times 10^{-31} \, kg$$ $$q = 1.6 \times 10^{-19} \, C$$ $$V_0 = \sqrt{\frac{2KE}{m}} = \sqrt{\frac{2 \times 1.6 \times 10^{-17}}{9.11 \times 10^{-31}}} = 5{,}926{,}739 \, m/s$$ $$F_{net} = -F_{e} = ma$$ $$a = \frac{-F_{e}}{m}$$ $$E = \frac{\phi}{2\epsilon_0}$$ $$F_e = qE = \frac{q\phi}{2\epsilon_0}$$ $$a = -\frac{q\phi}{2m\epsilon_0} = \frac{1.6 \times 10^{-19} \times (-1.20 \times 10^{-6})}{2 \times 9.11 \times 10^{-31} \times 8.85 \times 10^{-12}} = -1.191 \times 10^{16} \, m/s^2$$ $$V_f^2 = V_0^2 + 2ad$$ $$V_f = 0$$ $$-2ad = V_0^2$$ $$d = -\frac{V_0^2}{2a} = -\frac{(5{,}926{,}739)^2}{2 \times (-1.191 \times 10^{16})} = 0.001475 \, m$$ I got that the answer is 0.001475 m, but this is not correct. Does anyone know what I am doing wrong?
The OP is correct in pointing out that "Local Non-Satiation (LNS) only says there's a (utility-)increasing direction but doesn't say which direction it is increasing". Namely, we entertain the possibility of dealing also with "bads", not only with "goods". Figure 3.B.1 on page 43 of MWG's Microeconomic Theory depicts exactly such a situation. But it is the case that, when the consumption set is $\mathbb R_+^n$, under LNS not all items can be bads: otherwise the zero vector would be a point of satiation (and so it would violate the LNS assumption). So using non-negative quantities of items and imposing LNS forces us to consider only the cases where at least one item in the bundle is a good and not a bad, in which case "more is better" for this item. Then, we can prove that local non-satiation implies exhaustion of the available budget. For a contradiction, assume that $px^* < m$. Under LNS, for every $\epsilon >0$ there exists a $y(\epsilon)$ within distance $\epsilon$ of $x^*$ that is preferred to $x^*$. If some $y(\epsilon)$ is feasible, $py(\epsilon) \leq m$, then $x^*$ cannot be the optimal choice in the first place. So the question is: is it possible that all $y(\epsilon)$ that are preferred to $x^*$ under LNS are infeasible, $py(\epsilon)>m,\;\; \forall \epsilon>0$? I guess the OP can take it from here.
Many portfolio optimization methods (e.g., Markowitz's Modern Portfolio Theory from 1952) face the well-known predicament called the "corner portfolio problem". When short selling is allowed, they usually give efficient allocation weights that are highly concentrated in only a few assets in the portfolio. This means that the portfolio is not as diversified as we would like, which makes the optimized portfolio less practically useful. In [Corvalan, 2005], the author suggests looking instead for an "almost efficient" but "more diversified" portfolio within a close neighborhood of the Mean-Variance (MV) optimal solution. The paper shows that there are many eligible portfolios around the MV optimal solution on the efficient frontier. Specifically, given the MV optimal solution, those "more diversified" portfolios can be computed by relaxing the requirements on the portfolio return \(R\) and risk \(\sigma\) in an additional optimization problem: \(\max_{w} D(w) \ \ \text{s.t.} \\ \begin{aligned} \sqrt{w' \Sigma w} & \le \sigma^* + \Delta \sigma \\ R^* - \Delta R & \le w'r \\ w' 1 & = 1 \\ w_i & \ge 0 \end{aligned}\) where \(( \sigma^* , R^* ) = ( \sqrt{w^{*\prime} \Sigma w^*} , w^{*\prime} r ) \), \(w^*\) is the vector of Markowitz MV optimal weights, \(\Delta \sigma, \Delta R \) are the relaxation tolerance parameters, and \(D(w)\) is a diversification measure for the portfolio (for example, the entropy \(-\sum_i w_i \ln(w_i)\) or the product \(\prod_i w_i\)). In other words, the new optimization problem looks for the portfolio with maximal diversification around the optimal solution. Corvalan's approach can be extended to create an approximate, sufficiently optimal and well diversified portfolio from any optimal portfolio; the approximate portfolio keeps the constraints from the original optimization problem. References:
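A minimal numerical sketch of this idea follows. It is not Corvalan's actual algorithm: the search below is restricted to the segment between the MV weights and equal weights (which trivially stays on the simplex), and every number is invented for illustration.

```python
import math

# Toy data -- all numbers invented for illustration
r = [0.10, 0.07, 0.05]                       # expected returns
Sigma = [[0.040, 0.006, 0.002],
         [0.006, 0.025, 0.004],
         [0.002, 0.004, 0.010]]              # covariance matrix
w_star = [0.70, 0.05, 0.25]                  # pretend these are MV-optimal

def port_return(w):
    return sum(wi * ri for wi, ri in zip(w, r))

def port_risk(w):
    var = sum(w[i] * Sigma[i][j] * w[j] for i in range(3) for j in range(3))
    return math.sqrt(var)

def entropy(w):                              # diversification measure D(w)
    return -sum(wi * math.log(wi) for wi in w if wi > 0)

R_star, sigma_star = port_return(w_star), port_risk(w_star)
dR, dsigma = 0.005, 0.005                    # relaxation tolerances

# Crude search: move from w* toward equal weights and keep the most
# diversified point that still satisfies the relaxed constraints.
best = w_star
for t in (i / 100 for i in range(101)):
    w = [(1 - t) * ws + t / 3 for ws in w_star]
    if (port_risk(w) <= sigma_star + dsigma
            and port_return(w) >= R_star - dR
            and entropy(w) > entropy(best)):
        best = w
```

The resulting `best` has strictly higher entropy than `w_star` while giving up at most `dR` of return and `dsigma` of risk, which is the trade Corvalan's relaxation formalizes.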
A comparison principle for a Sobolev gradient semi-flow 1. Department of Mathematics, 1 University Station C1200, Austin, TX 78712-0257, USA 2. Department of Mathematics, 1 University Station C1200, University of Texas, Austin, TX 78712, United States 3. Università degli Studi di Milano, Dipartimento di Matematica, Via Saldini 50, 20133 Milano, Italy We consider the steepest descent equation for $S$ where the gradient is an element of the Sobolev space $H^{\beta}$, $\beta \in (0,1)$, with a metric that depends on $A$ and a positive number $\gamma > \sup|V_{22}|$. We prove a weak comparison principle for such a gradient flow. We extend our methods to the case where $A$ is a fractional power of an elliptic operator, and provide an application to the Aubry-Mather theory for partial differential equations and pseudo-differential equations by finding plane-like minimizers of the energy functional. Keywords: Comparison principle, Sobolev gradient, semigroups of linear operators, fractional powers of elliptic operators. Mathematics Subject Classification: Primary: 35B50, 46N20; Secondary: 35J2. Citation: Timothy Blass, Rafael De La Llave, Enrico Valdinoci. A comparison principle for a Sobolev gradient semi-flow. Communications on Pure & Applied Analysis, 2011, 10 (1) : 69-91. doi: 10.3934/cpaa.2011.10.69
In the paper "PRIMES is in P" the following is said (page 1):

Let PRIMES denote the set of all prime numbers. The definition of prime numbers already gives a way of determining if a number $n$ is in PRIMES: try dividing $n$ by every number $m \leq \sqrt{n}$ — if any $m$ divides $n$ then it is composite, otherwise it is prime. This test was known since the time of the ancient Greeks — it is a specialization of the Sieve of Eratosthenes (ca. 240 BC) that generates all primes less than $n$. The test, however, is inefficient: it takes $\Omega(\sqrt{n})$ steps to determine if $n$ is prime. An efficient test should need only a polynomial (in the size of the input $= \lceil \log n \rceil$) number of steps.

Firstly, isn't complexity $\sqrt{n}$ already polynomial? Secondly, why would complexity polynomial in $\lceil \log n \rceil$ represent an efficient algorithm? This seems somewhat arbitrary to me.
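For concreteness, the trial-division test described in the quote can be sketched in a few lines (my own illustration, not code from the paper). Note that the loop makes about $\sqrt{n}$ iterations, while the input size is only the bit-length $\lceil \log n \rceil$.

```python
import math

def is_prime_trial_division(n: int) -> bool:
    """Trial division: try every m <= sqrt(n); takes O(sqrt(n)) steps."""
    if n < 2:
        return False
    for m in range(2, math.isqrt(n) + 1):
        if n % m == 0:
            return False  # m divides n, so n is composite
    return True

# sqrt(n) = 2^(log2(n)/2), so the step count is exponential
# in the number of digits of n.
```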
From the very basic understanding that they are created out of nothing mutually and collide to annihilate each other seems to indicate this happens due to an attraction. Why?

This just means that if two of them are nearby, they can annihilate. Remember that particles are waves, and thus are quite spread out. They don't have to be driven to collide with each other by any kind of force; they just need to be near each other.

Plus they are exactly alike except their opposite charge

Not true. Particle-antiparticle pairs have the same mass and spin/isospin (I think), but they have opposite charge, baryon number, lepton number, strangeness, charm, bottomness (and probably more). Nor is it mandatory for them to have opposite charge: they can both be neutral. For example, the neutron and all neutrinos have distinct antiparticles, and so does the neutral kaon (giving rise to the strange symbol $\bar{K}^0$). The neutral antikaon has a strangeness of +1, while the neutral kaon has a strangeness of -1. (Strangeness is a property rather whimsically named after the observation that certain "strange" particles always appeared in pairs or not at all.) That being said, there are particles that are their own antiparticle (the $\pi^0$ meson, and the neutral bosons: photons, gluons, the Z, the Higgs, and the hypothetical graviton).

Now, there are only four forces (listed in increasing order of strength):

Gravity.
Electromagnetism. Note that the force felt when pushing a wall (the normal reaction force) or when two balls collide is a manifestation of this force, as what's actually happening is that the electron clouds of the atoms are repelling. Many people use this "collision force" while thinking about particles, and this is wrong. Particles don't collide; they can only exchange other particles (as well as absorb/emit each other).
Weak nuclear force: manifests itself in beta decays.
Strong nuclear force: holds atomic nuclei together. Gives the boom-boom in nuclear explosions.
For a particle-antiparticle pair, there usually will be some sort of force, and yes, it will usually be attractive. But the force can be classified into the four given above. Since gravity is weak and always acts, I'm neglecting it here:

Pair of neutral hadrons (neutron, kaon, etc.): these have the strong force between them. There is also a bit of the weak nuclear force, though it is not necessarily attractive.
Pair of charged hadrons (protons, pions, etc.): here the EM force as well as the strong force acts, but the strong force is attractive.
Pair of electron-like leptons ($e$, $\mu$, $\tau$): these have EM attraction as well as the weak force. The weak force isn't attractive or repulsive, and it dominates. So an electron-positron pair need not attract all the time.
Pair of neutrinos ($\nu_e$, $\nu_{\tau}$, etc.): only the weak force; need not attract.
Pair of quarks (u, d, c, s, etc.): strong force, EM force, a little bit of the weak force; will attract.

For gauge bosons, it becomes a bit complicated. So yes, we can see that a general attractive force is prevalent, but not in all cases, and not due to the same phenomenon.

Why don't they need to be close? (Addendum from the comments below)

Quantum mechanics has a nice concept called wave-particle duality. Any particle can be expressed as a wave. In fact, both descriptions are equivalent. Exactly what sort of wave is this? It's a probability wave. By this, I mean that it tracks probabilities. I'll give an example.

Let's say you have a friend, A. At this moment, you don't know where A is. He could be at home or at work. Alternatively, he could be somewhere else, but with lesser probability. So you draw a 3D graph. The x and y axes correspond to location (so you can draw a map on the x-y plane), and the z axis corresponds to probability. Your graph will be a smooth surface that looks sort of like sand dunes in a desert. You'll have "humps" or dunes at A's home and at A's workplace, as there's the maximum probability that he's there.
You could have smaller humps at other places he frequents. There will be tiny but finite probabilities that he's elsewhere (say, in a different country). Now, let's say you call him and ask him where he is. He says that he's on his way home from work. So your graph is reconfigured, so that it has "ridges" along all the roads he will most probably take. Now, he calls you when he reaches home. Since you now know exactly where he is, there will be a "peak" with probability 1 at his house (assuming his house is point-size; otherwise there'll be a tall hump). Five minutes later, you decide to redraw the graph. Now you're almost certain that he's at home, but he may have gone out. He can't go far in 5 minutes, so you draw a hump centered at his house, with slopes outside. As time progresses, this hump will gradually flatten.

So what have I described here? It's a wavefunction (technically the modulus squared of a wavefunction), or the "wave" nature of a particle. The wavefunction can reconfigure and also "collapse" to a "peak", depending on what data you receive. Now, everything has a wavefunction: you, me, a house, and particles. You and I have a very restricted wavefunction (due to our tiny wavelength, but let's not go into that), and we rarely (read: never) have to take wave nature into account at normal scales. But for particles, wave nature becomes an integral part of their behavior.

In the next paragraph I am simplifying some of the wavefunction physics, and neglecting part of its nature, just to make my job easier.

Back to the problem. Our particle and antiparticle are both waves. Each has a small hump, but can be quite spread out. Now, these waves come near each other. Remember, the value of the wave (actually the square of its modulus, as a wavefunction is a complex number) gives the probability of finding the particle at a given point.
If the wavefunctions are $\Psi_1(x,y)$ and $\Psi_2(x,y)$, the probability of finding both particles at the same point will be $\Psi_1(x,y)\times\Psi_2(x,y)$ (normal probability rules). Now, you have a whole bunch of points where both wavefunctions exist (infinitely many, actually; technically both wavefunctions are spread out all over the universe, but I'm neglecting that). Adding these probabilities,
$$\sum_{(x,y)}\Psi_1(x,y)\times\Psi_2(x,y),$$
you will get some finite, significant probability, even though the individual probabilities are infinitesimal. So, even if only part of the two wavefunctions overlap, there is some nontrivial probability that they annihilate each other. Like I said, the wavefunctions actually cover all of space, but if we neglect those (extremely small) far-away parts, then the wave "size" is still pretty large. So a particle/antiparticle pair need not be too close to annihilate.
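This overlap argument can be made concrete with a toy 1-D numerical check (my own sketch, using Gaussian "humps" as stand-ins for the probability densities): even when the two humps are centred a few widths apart, the summed product of the densities is small but clearly nonzero.

```python
import math

def gauss(x, mu, sigma):
    """Normalized 1-D Gaussian probability density."""
    return math.exp(-(x - mu)**2 / (2 * sigma**2)) / (sigma * math.sqrt(2 * math.pi))

def overlap(mu1, mu2, sigma, lo=-20.0, hi=20.0, n=4000):
    """Riemann-sum approximation of the overlap integral of two densities."""
    dx = (hi - lo) / n
    return sum(gauss(lo + i*dx, mu1, sigma) * gauss(lo + i*dx, mu2, sigma) * dx
               for i in range(n))

# The product at any single point is tiny, but summed over all points the
# overlap is finite even for humps centred three widths apart.
print(overlap(0.0, 3.0, 1.0))  # small but nonzero
```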
Tool

We have a tool for training SVD models (factorizing sparse matrices) that takes very general input in the form of CSV files. The documentation needs some work, but you can find it here: jssvd.

Mathematical Description

Prediction Formula

$ \hat{r}_{ui} = \mu + b_u + b_i + p_u^{T}q_i $

where
$ \mu $ is the overall average rating
$ b_u $ is the user bias for user $ u $
$ b_i $ is the movie bias for movie $ i $
$ p_u $ is the first feature vector, corresponding to the $ u^{th} $ user
$ q_i $ is the second feature vector, corresponding to the $ i^{th} $ movie

Sometimes biases are not used. In this case, the prediction formula is:

$ \hat{r}_{ui} = p_u^{T}q_i $

Note that the two models are inherently the same, since we can increase the dimensionality of all $ p_u $ and $ q_i $ by 2 and write:

$ p_{u,n+1} = b_u + \mu $
$ q_{i,n+1} = 1 $
$ p_{u,n+2} = 1 $
$ q_{i,n+2} = b_i $

However, their training may produce different results.

Training Algorithm

Objective:

$ \min_{q_{*}, p_{*}, b_{*}}{\sum_{(u,i) \in K}} ( r_{ui} - \mu - b_u - b_i - p_u^{T}q_i )^2 + \lambda ( b_u^2 + b_i^2 + ||q_i||^2 + ||p_u||^2 ) $

Weight update formula

For each pair $(u, i)$ from the training set do:

Calculate the residual: $ d_{ui} = r_{ui} - p_u^{T} q_i $
$ \Delta q_i = \eta \left(d_{ui} p_u - \lambda q_i\right) $
$ \Delta p_u = \eta \left(d_{ui} q_i - \lambda p_u\right) $
$ q_i \leftarrow q_i + \Delta q_i $
$ p_u \leftarrow p_u + \Delta p_u $

We will refer to these as the standard formulas.
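The standard formulas translate directly into code. Here is a minimal SGD sketch for the bias-free model on made-up data (my own illustration, not the jssvd tool):

```python
import random

def train_svd(ratings, n_users, n_items, k=2, eta=0.02, lam=0.02, epochs=2000):
    """SGD with the standard update formulas on the bias-free model."""
    random.seed(0)
    p = [[random.uniform(-0.1, 0.1) for _ in range(k)] for _ in range(n_users)]
    q = [[random.uniform(-0.1, 0.1) for _ in range(k)] for _ in range(n_items)]
    for _ in range(epochs):
        for u, i, r in ratings:
            d = r - sum(p[u][f] * q[i][f] for f in range(k))  # residual d_ui
            for f in range(k):
                dq = eta * (d * p[u][f] - lam * q[i][f])      # Delta q_i
                dp = eta * (d * q[i][f] - lam * p[u][f])      # Delta p_u
                q[i][f] += dq
                p[u][f] += dp
    return p, q

# Toy 2x2 rating matrix as (user, item, rating) triples.
ratings = [(0, 0, 5.0), (0, 1, 1.0), (1, 0, 4.0), (1, 1, 2.0)]
p, q = train_svd(ratings, n_users=2, n_items=2)
```

After training, the predictions $p_u^T q_i$ should sit close to the observed ratings, up to the shrinkage introduced by $\lambda$.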
Variants

Regression to average

$ \Delta q_i = \eta \left(d_{ui} p_u -\lambda (q_i-\bar q)\right) $
$ \Delta p_u = \eta \left(d_{ui} q_i -\lambda (p_u-\bar p)\right) $

where

$ \bar q = {\sum_{j=1}^{N_\mathrm{items}} q_j \over N_\mathrm{items}} $
$ \bar p = {\sum_{j=1}^{N_\mathrm{users}} p_j \over N_\mathrm{users}} $

The reasoning behind this is that the prior distribution for $ p_{jk} $ is $ N\left({\bar p}_k, \sigma^2\right) $ rather than $ N\left(0, \sigma^2\right) $.

Support-dependent relaxation rate

Note that the standard formulas do NOT minimize the objective above. Indeed, if $ \eta $ is very small, then using the original formulas the total change of $ p_u $ over one pass is:

$ \left(\Delta p_u\right)^\mathrm{total} = \eta\left(\sum_{i\in I(u)} d_{ui} q_i - \lambda p_u S_u \right) $

where $ I(u) $ is the set of items which appear together with $u$ in the training set, and $ S_u $ is the support for user $u$. Note the $ S_u $ factor in the second term: it should not be there if we truly moved in the direction of the gradient. This suggests decreasing the relaxation rate with support:

$ \Delta q_i = \eta \left(d_{ui} p_u -\lambda_i q_i\right) $
$ \Delta p_u = \eta \left(d_{ui} q_i -\lambda_u p_u\right) $

where $ \lambda_i=f(S_i) $, $ \lambda_u=g(S_u) $, and $f$ and $g$ are non-increasing functions. For Netflix, the following formula was useful:

$ \lambda_u(S_u) = \lambda^{(1)} + {\lambda^{(2)}\over \max(S_u, \bar S)} $

where $ S_u $ is the support for user $u$ and $ \bar S $ is the average support over all users; similarly for items.

Support-dependent relaxation rate with regression to average

This combines both approaches above:

$ \Delta q_i = \eta \left(d_{ui} p_u -\lambda_i (q_i-\bar q)\right) $
$ \Delta p_u = \eta \left(d_{ui} q_i -\lambda_u (p_u-\bar p)\right) $

Different learning rates and relaxation rates

Relaxation rates for users and items might be different. Learning rates might also be different.
When using biases ($ \mu $, $ b_i $, $ b_u $), they may have their own learning and relaxation rates.
I am following along with Ian Goodfellow's new Deep Learning book and, reading the last chapter, I am confused about equations 20.7-20.9. We have a joint distribution function, $P(v,h)$, and we are interested in finding the conditional distribution function, $P(h|v)$. From the definition of the Restricted Boltzmann Machine, $$ P(v, h) = \frac{1}{Z} \exp(b^Tv +c^Th + v^TWh) $$ where $Z$ is the normalizing constant, $$ Z = \sum_v \sum_h \exp(b^Tv +c^Th + v^TWh). $$ $P(h|v)$ is then (copied from the chapter): \begin{align} P(h|v) &= \frac{P(h,v)}{P(v)} \tag{20.7}\\[5pt] &= \frac{1}{P(v)}\frac{1}{Z}\exp(b^Tv +c^Th + v^TWh) \tag{20.8}\\[5pt] &= \frac{1}{Z'}\exp(c^Th + v^TWh) \tag{20.9} \end{align} That last step is where I am confused. What is $Z'$? There is nothing about it in the text. Is it some other constant that serves as a new normalizing constant, or is it actually a derivative with respect to either $v$ or $h$? Could someone fill in the missing steps?
For the last few hours, I have tried to solve an exercise about synchrotron radiation but can't get to a solution. I think there are some concepts from special relativity that I didn't understand during the lecture. The exercise is the following: It is given that the power $P$ radiated by an accelerated charge $e$ is: $$P = \frac{2}{3} \hbar \alpha \gamma^{6}\left(\dot{\vec{\beta}}^2-(\vec{\beta} \times \dot{\vec{\beta}})^2\right)$$ I am asked to express this in terms of the bending radius $\rho$ and the particle energy. I can assume highly relativistic particles. It is clear to me that the cross product reduces to a simple multiplication since the velocity and acceleration are perpendicular. But how can I express $\beta$ in a way that the term depends only on the particle energy? I started with $\vec{\beta} = \frac{\vec{p}c}{E}$ and with that $\dot{\vec{\beta}} = \frac{\dot{\vec{p}}c}{E}$. Now $\dot{p}$ has to be the (Lorentz) force, so $\dot{p} = qvB$, and the magnetic field can be determined from the centripetal force: $qvB = \frac{mv^2}{\rho}$ $\Leftrightarrow$ $B = \frac{mv}{q\rho}$, so $\dot{\vec{p}} = \frac{mv^2}{\rho}$. And now I'm stuck, because if I continue like this I will end up with an expression dependent on the velocity, mass, energy of the particle and the bending radius, and without a given mass I can't determine the radiation power $P$. I think I made a mistake in setting $\dot{\vec{p}}$ equal to the force, because I didn't consider special-relativistic effects at all, but I have no idea what to use instead. Thank you very much for your help!
The Optimal control on the Kuramoto adaptative coupling model

To start this tutorial, write the following command in the MATLAB console:

open T06ODET0004_Kuramoto

In this tutorial, we present how to use the Pontryagin environment to control a consensus system that models complex emergent dynamics over a given network. The control minimizes a cost functional consisting of a running cost and a penalty on the desired final state.

Model

The Kuramoto model describes the phases $\theta_i$ of active oscillators through the following dynamics (as implemented in the code below):
$$ \dot\theta_i = \omega_i + \frac{\kappa}{N}\sum_{j=1}^{N}\sin(\theta_j - \theta_i), \qquad i = 1,\dots,N. $$
Here the first constant terms $\omega_i$ denote the natural oscillatory behaviors, and the interactions are nonlinearly affected by the relative phases. The amplitude of the interactions is determined by the coupling strength $\kappa$.

Control strategy

The control acts on the coupling strength, multiplying the interaction term:
$$ \dot\theta_i = \omega_i + \frac{u(t)}{N}\sum_{j=1}^{N}\kappa_{ij}\sin(\theta_j - \theta_i). $$
This is a nonlinear version of a bilinear control problem for the Kuramoto interactions. The idea is as follows: there are $N$ oscillators, each oscillating at its own natural frequency. We want to produce a collective behavior using their own decision process. The interaction is given by the Kuramoto model, or may follow other interaction rules. The network can be fixed or made flexible through the control. The cost of control is related to the collective dynamics we want, such as the variance of the frequencies or phases.

Numerical simulation

Here, we consider a simple problem: we control the all-to-all network system so that the phases gather at the final time $T$. We first need to define the system of ODEs in terms of symbolic variables.
m = 5;                    %% [m]: number of oscillators.
syms t;
symTh = sym('y', [m,1]);  %% [y] : phases of the oscillators, $\theta_i$.
symOm = sym('om', [m,1]); %% [om]: natural frequencies of the oscillators, $\omega_i$.
symK = sym('K',[m,m]);    %% [K] : the coupling network matrix, $\kappa$.
symU = sym('u',[1,1]);    %% [u] : the control function along time, $u(t)$.
syms Vsys;                %% [Vsys]: the vector field of the ODEs.
symThth = repmat(symTh,[1 m]);
Vsys = symOm + (symU./m)*sum(symK.*sin(symThth.' - symThth),2); %% Kuramoto interaction terms.

The parameters $\omega_i$ and $\kappa$ should be specified for the calculations. Practically, $K > \vert \max\Omega - \min\Omega \vert$ leads to the synchronization of frequencies. We normalize the coupling strength to 1, and give random values for the natural frequencies from the normal distribution $N(0,0.1)$. We also choose initial data from $N(0,\pi/4)$.

%% Om_init = normrnd(0,0.1,m,1);
%% Om_init = Om_init - mean(Om_init); %% Mean zero frequencies
%% Th_init = normrnd(0,pi()/4,m,1);
K_init = ones(m,m);  %% Constant coupling strength, 1.
T = 5;               %% We give enough time for the frequency synchronization.
file = 'T002_OptimalControlKuramotoAdaptative.m';
path_data = replace(which(file),file,'');
load([path_data,'functions/random_init.mat'],'Om_init','Th_init'); %% reference data
symF = subs(Vsys,[symOm,symK],[Om_init,K_init]);
Params = sym.empty;
symFFcn = matlabFunction(symF,'Vars',{t,symTh,symU,Params});
odeEqn = ode(symFFcn,symTh,symU,'InitialCondition',Th_init,'FinalTime',T,'Nt',400);

We next construct the cost functional for the control problem.

symPsi = @(T,symThth) 10000*norm(sin(symThth.' - symThth),'fro'); %% Sine distance for the periodic interval $[0,2\pi]$.
symL_1 = @(t,symThth,symU) (symU.'*symU); %% L^2 regularization for the control $u(t)$.
iCP_1 = Pontryagin(odeEqn,symPsi,symL_1);
U0 = zeros(length(iCP_1.Dynamics.tspan),iCP_1.Dynamics.ControlDimension);

Solve

Gradient descent:

tic
GradientMethod(iCP_1,U0)
toc

Solve with precision: We obtain: J(u) = 2.612700E+02, error = 2.330901E+00, with 94 iterations, in 5.9585 seconds.
Elapsed time is 5.959747 seconds.

Visualization

First, we present the dynamics without control,

[tspan, ThetaVector] = solve(odeEqn);
clf
plot(tspan',ThetaVector)
legend("\theta_"+[1:m])
ylabel('Phases [rad]')
xlabel('Time [sec]')
title('The dynamics without control (incoherence)')

and see the controlled dynamics.

odec_1 = iCP_1.Dynamics;
clf
plot(odec_1.tspan',odec_1.StateVector.Numeric(:,:))
legend("\theta_"+[1:m])
ylabel('Phases [rad]')
xlabel('Time [sec]')
title('The dynamics under control')

We can also plot the control function along time.

clf
Ufinal_1 = iCP_1.Solution.UOptimal;
plot(odec_1.tspan',Ufinal_1)
legend("norm(u(t)) = "+norm(Ufinal_1))
ylabel('u(t)')
xlabel('Time [sec]')
title('The control function')

The problem with different regularization

In this part, we change the regularization to the $L^1$-norm and see the difference.

symL_2 = @(t,Y,symU) abs(symU);
iCP_2 = Pontryagin(odeEqn,symPsi,symL_2);
tic
GradientMethod(iCP_2,U0)
toc

Solve with precision: We obtain: J(u) = 7.808495E+01, error = 9.356892E-01, with 6 iterations, in 1.0928 seconds.
Elapsed time is 1.093697 seconds.

odec_2 = iCP_2.Dynamics;
clf
plot(odec_2.tspan',odec_2.StateVector.Numeric(:,:))
legend("\theta_"+[1:m])
ylabel('Phases [rad]')
xlabel('Time [sec]')
title('The dynamics under control with different regularization')

Ufinal_2 = iCP_2.Solution.UOptimal;
figure
plot(odec_1.tspan',Ufinal_1)
line(odec_2.tspan',Ufinal_2,'Color','red')
Thfinal_1 = odec_1.StateVector.Numeric(end,:);
Thfinal_2 = odec_2.StateVector.Numeric(end,:);
Psi_1 = norm(sin(Thfinal_1.' - Thfinal_1),'fro');
Psi_2 = norm(sin(Thfinal_2.' - Thfinal_2),'fro');
legend("u(t) with L^2-norm; Terminal cost = "+Psi_1,"u(t) with L^1-norm; Terminal cost = "+Psi_2)
ylabel('The coupling strength (\kappa+u(t))')
xlabel('Time [sec]')
title('The comparison between two different control cost functionals')

As one would expect from the regularization functions, the control obtained with the $L^2$-norm moves smoothly from 0 to its largest value, while the control obtained with the $L^1$-norm traces much stiffer lines.

YFr = odeEqn.StateVector.Numeric;
YL1 = iCP_1.Dynamics.StateVector.Numeric;
YL2 = iCP_2.Dynamics.StateVector.Numeric;
%% animation
pendulums({YFr,YL1,YL2},tspan,{'Free','L^2 Control','L^1 Control'})
In classical calculus, we know that the limit of the percentage return (i.e. $dS/S$) equals that of the log return (i.e. $d\ln(S)$). With uncertainty, we rely on Ito's Lemma to draw a relationship between the two: \begin{equation*} dS = \mu S dt + \sigma S dz \end{equation*} and \begin{equation*} d\ln(S) = (\mu - \sigma^2/2) dt + \sigma dz \end{equation*} I understand the mathematics behind this, but I would like to know more about the intuition: mainly, with uncertainty, when we "switch" from the percentage return to the log return, why do we get a smaller drift $(\mu - \sigma^2/2)$? Is there any intuition or financial sense behind it? Moreover, when we discretize the process, can we draw the same relationship and say something like \begin{equation*} \Delta S = \mu S \Delta t + \sigma S \Delta z \end{equation*} and \begin{equation*} \Delta \ln(S) = (\mu - \sigma^2/2) \Delta t + \sigma \Delta z \end{equation*} Thank you in advance.
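A quick Monte Carlo check of the two drifts (my own sketch, not part of the question): sampling $S_T = S_0\exp((\mu-\sigma^2/2)T + \sigma\sqrt{T}Z)$, the average log return lands near $(\mu-\sigma^2/2)T$ while the average gross return $S_T/S_0$ lands near $e^{\mu T}$, so both drifts are simultaneously consistent.

```python
import math
import random

random.seed(42)
mu, sigma, T, S0, n = 0.10, 0.20, 1.0, 1.0, 200_000

log_returns, gross_returns = [], []
for _ in range(n):
    z = random.gauss(0.0, 1.0)
    # Exact GBM sampling at time T.
    ST = S0 * math.exp((mu - 0.5 * sigma**2) * T + sigma * math.sqrt(T) * z)
    log_returns.append(math.log(ST / S0))
    gross_returns.append(ST / S0)

mean_log = sum(log_returns) / n      # close to (mu - sigma^2/2) * T = 0.08
mean_gross = sum(gross_returns) / n  # close to exp(mu * T), about 1.105
```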
Basic Properties of Complex Numbers Review

We will now review some of the recent material regarding complex numbers.

Recall from The Set of Complex Numbers page that the Imaginary Unit is defined to be $i = \sqrt{-1}$. An Imaginary Number is a number of the form $bi$ where $b \in \mathbb{R}$. For example, $2i$ is an imaginary number. A Complex Number is a number of the form $a + bi$ where $a, b \in \mathbb{R}$. For example, $3 + 2i$ is a complex number. The set of complex numbers is denoted by $\mathbb{C}$.

If $z = a + bi \in \mathbb{C}$ then we defined the Real Part of $z$ as $\mathrm{Re} (z) = a$, and we defined the Imaginary Part of $z$ as $\mathrm{Im} (z) = b$. Furthermore, $z$ can be represented as an ordered pair or vector $(a, b)$ in the complex plane, where the complex plane is $\mathbb{R}^2$ with the $x$-axis labelled as the Real Axis and the $y$-axis labelled as the Imaginary Axis.

We then began to define some operations on the set of complex numbers. On the Addition and Multiplication of Complex Numbers page we defined Addition between $z = a + bi, w = c + di \in \mathbb{C}$ to obtain the sum $z + w$ by:

\begin{align} \quad z + w = (a + c) + (b + d)i \end{align}

We defined Multiplication between $z = a + bi, w = c + di \in \mathbb{C}$ to obtain the Product $z \cdot w$ by:

\begin{align} \quad z \cdot w = (ac - bd) + (ad + bc)i \end{align}

On the Division of Complex Numbers page we defined Division of two complex numbers.
First we noted that if $w = c + di \in \mathbb{C}$ and $w \neq 0$ then the inverse of $w$ is:

\begin{align} \quad w^{-1} = \frac{1}{w} = \frac{c}{c^2 + d^2} - \frac{d}{c^2 + d^2}i \end{align}

So if $z = a + bi, w = c + di \in \mathbb{C}$ and $w \neq 0$ then we can define the Quotient $\displaystyle{\frac{z}{w}}$ as:

\begin{align} \quad zw^{-1} = \frac{z}{w} = \frac{ac + bd}{c^2 + d^2} - \frac{ad - bc}{c^2 + d^2}i \end{align}

On The Set of Complex Numbers is a Field page we then noted that the set of complex numbers $\mathbb{C}$ with the operations of addition $+$ and multiplication $\cdot$ defined above makes $(\mathbb{C}, +, \cdot)$ an algebraic field (similarly to the real numbers with the usual addition and multiplication).

On The Conjugate of a Complex Number page we then defined the Complex Conjugate of $z = a + bi$ to be the complex number $\overline{z} = a - bi$. We then proved some basic properties of the complex conjugate. We saw that $z = \overline{z}$ if and only if $z \in \mathbb{R}$, that $z + \overline{z} = 2a$, that $z - \overline{z} = 2bi$, and that $z \cdot \overline{z} = a^2 + b^2$.

On The Absolute Value/Modulus of a Complex Number page we defined the Absolute Value or Modulus of a complex number $z = a + bi \in \mathbb{C}$ to be:

\begin{align} \quad \mid z \mid = \sqrt{a^2 + b^2} \end{align}

Geometrically, the absolute value of a complex number $z = a + bi$ gives us the length of the position vector $(a, b)$ in the complex plane.

On the Square Roots of Complex Numbers page we proved a very important result which said that if $z \in \mathbb{C}$ then there exists a $w \in \mathbb{C}$ such that $z = w^2$. In other words, every complex number has a square root. More generally, every complex number has precisely two square roots (allowing repeated roots). We saw that the square roots of a complex number $z$ are both real if and only if $z$ is real and positive.
We saw that the square roots of $z$ are both imaginary if and only if $z$ is real and negative. We saw that the square roots of $z$ are the same if and only if $z = 0$.
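These definitions are easy to check with Python's built-in complex type and the standard cmath module (a quick illustration, not part of the original review):

```python
import cmath

z, w = 3 + 2j, 1 - 2j   # a = 3, b = 2, c = 1, d = -2

# Addition and multiplication follow the componentwise formulas above.
assert z + w == 4 + 0j                                # (a+c) + (b+d)i
assert z * w == (3*1 - 2*(-2)) + (3*(-2) + 2*1)*1j    # (ac-bd) + (ad+bc)i

# Conjugate and modulus: z * conj(z) = a^2 + b^2 = |z|^2.
assert z.conjugate() == 3 - 2j
assert z * z.conjugate() == 13 + 0j
assert abs(abs(z)**2 - 13) < 1e-12

# Every complex number has a square root (s^2 = z up to rounding).
s = cmath.sqrt(z)
assert abs(s * s - z) < 1e-12
assert cmath.sqrt(-4 + 0j) == 2j   # real and negative -> purely imaginary roots
```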
Written by: Paul Rubin Primary Source: OR in an OB World A somewhat curious question showed up on a forum today. The author of the question has an optimization model (I'll assume it is either a linear program or mixed integer linear program) of the form \begin{alignat*}{2} & \textrm{maximize} & & \sum_{i=1}^{N}x_{i}\\ & \textrm{s.t.} & & x\in\mathcal{X} \end{alignat*} where the feasible region $\mathcal{X}$ is presumably polyhedral. What the author wants to do is instead maximize the sum of the $K$ largest terms in the objective, for some fixed $K<N$. The question was how to do this. In effect, the author wants to selectively turn some terms on and others off in the objective function. Any time I think about turning things on and off, I immediately think of using binary variables as the "switches". That in turn suggests the likely need for auxiliary variables and the very likely need for a priori bounds on the things being turned on and off. Here is one solution, step by step, assuming that the $x$ variables are nonnegative and that we know a finite upper bound $U_i$ for each $x_i$. 1. Introduce a binary variable for each term to be switched on/off. So we add variables $z_i \in \{0,1\}$ for $i\in 1\dots N$, with $z_i=1$ if and only if $x_i$ is to be counted. 2. Limit the number of terms to count. This is just the constraint $$\sum_{i=1}^N z_i = K$$ (with the option to change the equality to $\le$ if you want up to $K$ terms counted). 3. Replace the objective terms with surrogates that can be turned on/off. We will add real variables $y_1,\dots,y_N$ and make the objective $$\textrm{maximize} \sum_{i=1}^N y_i.$$ 4. Connect the surrogate variables to the original variables and the on-off decisions. Here we benefit from a key property: if we limit the objective function to $K$ terms, the fact that we are maximizing will naturally favor the $K$ largest terms.
So we just need the following constraints: \begin{alignat*}{2} y_{i} & \le x_{i} & & \forall i\in\left\{ 1,\dots,N\right\} \\ y_{i} & \le U_{i}z_{i} & \quad & \forall i\in\left\{ 1,\dots,N\right\} . \end{alignat*}If $z_i = 0$, the second constraint will force $y_i=0$ and the term will not contribute to the objective function. If $z_i=1$, the second constraint will become vacuous and the first constraint will allow $y_i$ to contribute an amount up to $x_i$ to the objective. Since the objective is being maximized, $y_i=x_i$ is certain to occur. A symmetric version of this will work to minimize the sum of the $K$ smallest terms in the objective. Minimizing the sum of the largest terms, or maximizing the sum of the smallest terms, is a bit trickier, requiring some extra constraints to enforce $y_i=x_i$ when $z_i = 1$.
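The key property used in step 4, namely that for fixed $x$ the optimal on/off choice picks out the $K$ largest terms, can be verified by brute force (a toy check of my own, not the author's code): with the binary $z$ fixed, the best feasible $y_i$ is $x_i$ when $z_i=1$ and $0$ otherwise, so the MILP optimum is the best sum over subsets of size $K$.

```python
from itertools import combinations

def best_k_sum_bruteforce(x, K):
    """Max of sum(y) s.t. y_i <= x_i, y_i <= U_i * z_i, sum(z) = K, x >= 0.

    With z fixed, the optimum sets y_i = x_i where z_i = 1 and y_i = 0
    elsewhere, so it suffices to enumerate all subsets of size K.
    """
    return max(sum(x[i] for i in S)
               for S in combinations(range(len(x)), K))

x = [4.0, 1.0, 7.0, 3.0, 6.0]
K = 3
# The optimum equals the sum of the K largest entries: 7 + 6 + 4 = 17.
assert best_k_sum_bruteforce(x, K) == sum(sorted(x)[-K:])
```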
I analyzed a Pratt & Whitney F100 turbofan last semester in my aerothermodynamics course, so allow me to answer this question.

The short answer: the un-compressed air provides the majority of an engine's total thrust, since the compressed air powers the engine.

Correction: I forgot to mention that the fan also compresses entering air. That is, all air entering an engine is compressed a bit by the fan. Some of this compressed air enters the turbojet core and the rest of the fan-compressed air bypasses the engine core. I ignored this for the sake of simplicity, but I should have explained it since you directly asked about air compression in a turbofan.

Long answer: see below!

1. The fan(s)

Air enters the engine through a fan (or fans, in the case of the F100 engine). These are the giant fans with a spinning insignia in the middle that you see inside an engine. Update: the spinning insignia clearly shows whether the fan is spinning, so workers don't get injured. The fan(s) increase the pressure of the air that enters the engine. Some of this compressed air is diverted around the rest of the engine and directed straight out of the engine. The bypass ratio is a measure of how much air bypasses the "jet" core (bypass air / core air).

2. The compressors

The rest of the air is then compressed through a combination of more fans and a converging duct. The data from my thermo project tells me that this stage increases the pressure of the core air by more than 10,000%, but I'm not too sure about that*. Suffice it to say, this core air now has a lot of energy. Let's add some more :D

Quick note: the compressed air now has insignificant velocity relative to the bypass air. Most of the compressed air's energy is "in" its pressure (senior SE members, please correct me if I'm wrong).

3. The combustion chamber (aka combustor, aka magic chamber)

Now the core air enters the combustion chamber. Here the air enters small chambers, mixes with jet fuel, and is ignited.
The main parts-of-an-engine diagram I posted makes it seem like the combustion chamber is one big part of a jet engine, but really the combustor consists of a bunch of smaller chambers surrounding the main shaft of the engine. Here is a gif that shows what I mean: How the combustion chamber operates is beyond my scope of understanding, but consider that a combustor is essentially trying to keep a candle alight in the middle of a hurricane. Awesome engineering goes into designing better and more efficient (hotter-burning) chambers. 4. The turbine (aka more magic section) Now the hot and even more energetic air enters the turbine section, which consists of a diverging (increasing in area) duct and more fans. Whereas the compressors "inserted" energy into the air, the turbines draw energy out of the air. As the air enters the larger (in volume) turbine area, it expands and spins the turbine fans, which power the compressors and the fan. The Back Work Ratio (BWR) is a measure of how much turbine power it takes to spin the compressors. 5. The nozzle The still energetic core air is once again concentrated before being shot out of the back of the engine. This thrust, together with the thrust of the bypass air, propels the aircraft forward following this model: $F_{thrust} = \dot{m}_{bypass} \times \Delta v_{bypass} + \dot{m}_{core} \times \Delta v_{core}$ Where $\dot{m}_{bypass}$ is the mass flow-rate of air that is bypassed and $\Delta v_{bypass}$ is the change in velocity of that air as a result of the fan. And $\dot{m}_{core}$ is the mass flow-rate of air that is combusted and $\Delta v_{core}$ is the change in velocity of the core air as a result of the fan, compressor, combustion chamber, and turbine. The uncompressed air contributes about 60% of the total thrust. The "processed" air loses a significant portion of its energy to powering the engine. However, the compressed air still provides about 40% of the total thrust.
Adding an afterburner can increase this contribution to 50%. How is this possible? Dead algae from a billion years ago. The hydrocarbons in jet fuel pack a lot of energy into a small space and mass (two completely different concepts). Burning those hydrocarbons releases a lot of energy that powers the fan, compressors, and electric generators of an aircraft before pushing the airplane forward. This high energy density is also why electric cars/anything weren't practical until LiPo batteries (story for another article). I applaud you for noticing that the uncompressed air contributes to an engine's thrust. I think the term "bypass" confuses some people into thinking that this air is "thrown away". It's not. The bypassed air is actually sped up by a series of fans and imparts forward momentum to the aircraft. I'm studying Aerospace engineering right now (Whoop!), so my excitement at being able to answer this question rather side-tracked me from your original question, but I hope you enjoy the extra info on how a turbofan operates. Additional info The hotter a combustion chamber burns, the hotter the core air gets and the more energy it can provide for engine operation. Furthermore, hotter chambers waste less jet fuel and result in relatively cleaner and safer fumes. Increasing CC max temps and designing the rest of the engine (i.e., the turbine blades) to handle the increase in temp is the cutting edge of engine research. This is one of the most challenging and lucrative materials science/engineering problems in the world, since efficiency is paramount in today's and tomorrow's aviation industry. Here is a nice article describing how GE, Rolls-Royce, and other engine companies sell thrust, not engines. It's a bit long, so you may read it on your next flight :)
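A back-of-the-envelope version of the thrust model above. The numbers are made up for illustration (not F100 data), chosen so the bypass stream supplies the ~60% share mentioned earlier:

```python
def turbofan_thrust(mdot_bypass, dv_bypass, mdot_core, dv_core):
    """Momentum-based thrust model from the answer above:
    F = mdot_bypass * dv_bypass + mdot_core * dv_core
    (nozzle pressure-area terms are neglected)."""
    return mdot_bypass * dv_bypass + mdot_core * dv_core

# Made-up illustrative numbers (not F100 data): the bypass stream moves a
# lot of air with a small velocity change, the core moves less air much faster.
mdot_b, dv_b = 270.0, 100.0   # bypass: kg/s, m/s
mdot_c, dv_c = 45.0, 400.0    # core:   kg/s, m/s

F = turbofan_thrust(mdot_b, dv_b, mdot_c, dv_c)
print(F)                  # 45000.0 N total
print(mdot_b * dv_b / F)  # 0.6 -> bypass air supplies ~60% of the thrust
```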
Infinite dimensional dynamical systems coupling age structuring with diffusion appear naturally in population dynamics, medicine or epidemiology. A by now classical example is the Lotka-McKendrick system with spatial diffusion. The aim of this blog is to study controllability properties of such age structured models in a unified manner. Let $A : \mathcal{D}(A) \to X$ be the generator of a $C^0$ semigroup $\mathbb{S}$ on the Hilbert space $X$ and let $U$ be another Hilbert space. Both $X$ and $U$ will be identified with their duals. Let $B$ be a (possibly unbounded) linear operator from $U$ to $X$, which is supposed to be an admissible control operator for $\mathbb{S}$. In the examples we have in mind, the above spaces and operators describe the dynamics of a system without age structure. In particular, $X$ is the state space and $U$ is the control space. The corresponding age structured system is obtained by first extending these spaces to \begin{equation} \label{input-space} \mathcal{X} = L^{2}(0,a_{\dagger}; X), \quad \mathcal{U} = L^{2}(0,a_\dagger;U), \end{equation} where $a_\dagger>0$ denotes the maximal age individuals can attain. Let $p(t) \in \mathcal{X}$ be the distribution density of the individuals with respect to age $a\geqslant 0$ and at some time $t \geqslant 0.$ Then the abstract version of the Lotka-McKendrick system to be considered here reads: \begin{equation} \label{eq:main} \begin{cases} \displaystyle \frac{\partial p}{\partial t} + \frac{\partial p}{\partial a} - A p + \mu(a) p = \chi_{(a_{1},a_{2})} B u, & t \geqslant 0, a \in (0,a_\dagger), \\ \displaystyle p(t,0) = \displaystyle \int_{0}^{a_\dagger} \beta(s) p(t,s) \; {\rm d}s, & t \geqslant 0, \\ \displaystyle p(0,a) = p_{0}, \end{cases} \end{equation} where $\chi_{(a_{1},a_{2})}$ is the characteristic function of the interval $(a_{1}, a_{2})$ with $0 \leqslant a_{1} < a_{2} \leqslant a_\dagger$ and $p_{0}$ is the initial population density.
In the above system, the positive function $\mu:[0,a_\dagger] \to \mathbb{R}_{+}$ denotes the natural mortality rate of individuals of age $a.$ We denote by $\beta: [0,a_\dagger] \to \mathbb{R}_{+}$ the positive function describing the fertility rate at age $a.$ We assume that the fertility rate $\beta$ and the mortality rate $\mu$ satisfy the conditions (H1) $\beta \in L^\infty[0, a_\dagger], \; \beta \geqslant 0$ for almost every $a \in [0,a_\dagger].$ (H2) $\mu \in L^1_{loc}[0, a_\dagger], \; \mu \geqslant 0$ for almost every $a \in [0,a_\dagger].$ (H3) $\displaystyle \int_0^{a_\dagger} \mu(a) \ {\rm d} a = \infty.$ Examples We consider two examples: 1) the classical Lotka-McKendrick system and 2) the Lotka-McKendrick system with diffusion. 1) The classical Lotka-McKendrick system: Let us choose $X = \mathbb{R},$ $A=0$ and $B=1.$ Then the system \eqref{eq:main} reduces to the classical Lotka-McKendrick system. By Theorem 1, this system is null controllable in time $\tau > a_1 + a_\dagger - a_2.$ A similar result was obtained in [1]. 2) The Lotka-McKendrick system with spatial diffusion: Let $\Omega$ be a smooth bounded domain in $\mathbb{R}^3$ and $\omega \subseteq \Omega.$ Let us consider \begin{equation*} X = L^2(\Omega), \quad A = \Delta , \quad \mathcal{D}(A) = \left\{ f \in H^2(\Omega) \mid \displaystyle \frac{\partial f}{\partial n} = 0\right\}, \quad B = \chi_{\omega}. \end{equation*} This corresponds to the Lotka-McKendrick model with spatial diffusion ([2]). It is known that the pair $(A, B)$, or equivalently the heat equation with localized interior control, is null controllable in any time. Thus we can apply Theorem 1 to conclude that the system is null controllable in time $\tau > a_1 + a_\dagger - a_2.$ Several other applications can be found in [3]. Numerical Simulations We now present some numerical simulations of the controlled trajectory. We shall consider the case $A = 0$ and $B = 1,$ i.e., the classical Lotka-McKendrick system.
We also consider the case $a_1 = 0$ and $a_2 = a_{\dagger} =4,$ i.e., the control acts everywhere with respect to the age variable. We present numerical simulations in the following two scenarios: 1) null controllability, 2) controllability to a steady state. Null controllability. By Theorem 1 we know that system (1) is null controllable in any time. In the following video, we see the evolution of the uncontrolled trajectory and of the controlled trajectories to zero with controllability time $t=2$ and $t =5.$ We now plot the control functions. In the following two videos, we see the evolution of the control functions at controllability time $t = 2$ and $t =5$ respectively. Controllability to a steady state. If we choose $\beta$ and $\mu$ such that $R = \int_0^{a_\dagger} \beta(a) e^{-\int_0^a \mu(s) \, {\rm d}s} \, {\rm d}a = 1,$ then $p_s(a) = \alpha e^{-\int_0^a \mu(s) \, {\rm d}s}$ with $\alpha \in (0,\infty)$ is a steady state of the system (1) with zero steady control. Thus we can control to these trajectories also. In the following example, we have chosen $a_{\dagger} = 4,$ $a_b = 1,$ $\alpha = 1.25$ and \begin{equation*} \beta(a) = \begin{cases} 0 & \mbox{ if } a \in [0,1), \\ \frac{1}{3}e^{.05 a^2} & \mbox{ if } a \in [1,4], \end{cases} \qquad \mu(a) = .1 a. \end{equation*} In this case we can verify that $R=1$ and $p_s(a) = 1.25 e^{-.05 a^2}.$ As before we take the control everywhere, thus the system is controllable to the trajectory $p_s(a)$ in any time. In the following video, we see the evolution of the uncontrolled trajectory and of the controlled trajectories to the steady state $p_s(a)$ with controllability time $t=2$ and $t =5.$ As before, in the following two videos we see the evolution of the control functions at time $t =2$ and $t=5$ respectively. References [1] D. Maity, On the Null Controllability of the Lotka-McKendrick System, Submitted. [2] D. Maity, M. Tucsnak and E.
Zuazua, Controllability and positivity constraints in population dynamics with age structuring and diffusion, Journal de Mathématiques Pures et Appliquées. In press, 10.1016/j.matpur.2018.12.006. [3] D. Maity, M. Tucsnak and E. Zuazua, Controllability of a Class of Infinite Dimensional Systems with Age Structure. Submitted. [4] M. Tucsnak and G. Weiss, Observation and control for operator semigroups, Birkhäuser Advanced Texts: Basler Lehrbücher [Birkhäuser Advanced Texts: Basel Textbooks], Birkhäuser Verlag, Basel, 2009.
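The steady-state condition $R=1$ for the $\beta$, $\mu$ chosen in the example above is easy to verify numerically. A quick sketch in plain Python with the midpoint rule (this is not the simulation code behind the videos):

```python
import math

def mu(a):
    return 0.1 * a

def beta(a):
    return 0.0 if a < 1.0 else (1.0 / 3.0) * math.exp(0.05 * a * a)

def survival(a):
    # exp(-int_0^a mu(s) ds) = exp(-0.05 a^2) for mu(a) = 0.1 a
    return math.exp(-0.05 * a * a)

def net_reproduction_rate(a_max=4.0, n=40_000):
    # midpoint rule for R = int_0^{a_max} beta(a) survival(a) da
    h = a_max / n
    return h * sum(beta((i + 0.5) * h) * survival((i + 0.5) * h) for i in range(n))

def p_s(a):
    # candidate steady state with alpha = 1.25
    return 1.25 * survival(a)

R = net_reproduction_rate()
print(round(R, 6))  # 1.0, so p_s(a) = 1.25 exp(-0.05 a^2) is indeed a steady state
```

The integrand $\beta(a)e^{-\int_0^a\mu}$ is identically $1/3$ on $[1,4]$ and zero below, which is why $R=1$ exactly for this choice.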
This is an interesting question that I have asked myself. Below is my take. Let us consider an economy $(\Omega,\mathcal{F},P)$ equipped with a filtration $(\mathcal{F}_t)_{t \geq 0}$, consisting of a traded asset $S_t$ and a numéraire $N_t$ specified by the following stochastic differential equations:$$\begin{align}\text{d}S_t&=\alpha(t,S_t)\text{d}t+\beta(t,S_t)\text{d}W_t\\[3pt]\text{d}N_t&=a(t,N_t)\text{d}t+b(t,N_t)\text{d}\tilde{W}_t\end{align}$$Our economy has a derivative contract written on the asset $S_t$ with payoff function $h(\cdot)$ at maturity $T$. By derivative pricing theory, the price $V_t$ of the derivative is given by the following expectation under the measure $P^N$ associated to the numéraire $N_t$, conditional on the available information:$$\tag{1}V_t=N_tE^N\left(\frac{h(S_T)}{N_T}\bigg|\mathcal{F}_t\right)$$Define the function $g(\cdot)$ for $(s,n) \in \mathbb{R}_+^2$:$$g(s,n)=\frac{h(s)}{n}$$By the Markov Property $-$ see e.g. Theorem 6.3.1 in Stochastic Calculus for Finance II by Shreve $-$ we have for $0\leq t\leq T$:$$\tag{2} V_t=v(t,S_t,N_t)$$Thus by Itô's lemma:$$\begin{align}\tag{3}\text{d}V_t=& \ \frac{\partial v}{\partial t}\text{d}t+\left(\frac{\partial v}{\partial S}\text{d}S_t+\frac{1}{2}\frac{\partial^2 v}{\partial S^2}(\text{d}S_t)^2\right)+\left(\frac{\partial v}{\partial N}\text{d}N_t+\frac{1}{2}\frac{\partial^2 v}{\partial N^2}(\text{d}N_t)^2\right)\\&+\left(\frac{\partial^2v}{\partial S\partial N}\text{d}S_t\text{d}N_t\right)\end{align}$$We note two things: Observability: by equation $(2)$ the value today of a derivative depends upon the value today of the underlying asset and the numéraire $N_t$, therefore the numéraire needs to be at least observable, i.e. it cannot be some latent state variable. If the numéraire is unobservable we cannot compute the price.
Tradability: most importantly, by equation $(3)$ we observe that the variation in value of the derivative also depends upon the variations in value of the underlying asset and the numéraire. If we are to set up a hedging portfolio, we need to be able to trade the numéraire $N_t$ in order to offset the fluctuations in the value of the derivative due to fluctuations of the numéraire. References Shreve, S. (2004). Stochastic Calculus for Finance II, Springer. @AFK (2016). "Feynman Kac and choice of measure", Quant Stack Exchange. @Quantuple (2016). "Other numeraire choices when applying Feynman Kac", Quant Stack Exchange.
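To make the numéraire mechanics of equation $(1)$ concrete, here is a small sketch under assumed Black-Scholes dynamics (a special case of the SDEs above, with all names and parameters illustrative): the same call is priced by Monte Carlo under the bank-account numéraire and under the stock numéraire, and both agree with the closed-form price.

```python
import random
from math import log, sqrt, exp, erf

def N(x):
    """Standard normal CDF."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def bs_call(S, K, r, sigma, T):
    """Closed-form Black-Scholes call price, for reference."""
    d1 = (log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * N(d1) - K * exp(-r * T) * N(d2)

def mc_call_two_numeraires(S, K, r, sigma, T, n=100_000, seed=7):
    """Price the same call under two numeraires.

    Bank account: V_0 = e^{-rT} E^Q[(S_T - K)^+], drift of S is r under Q.
    Stock:        V_0 = S_0 E^S[(S_T - K)^+ / S_T], drift r + sigma^2 under Q^S
                  (so log S_T has drift r + sigma^2 / 2 under Q^S).
    """
    rng = random.Random(seed)
    zs = [rng.gauss(0.0, 1.0) for _ in range(n)]
    vol = sigma * sqrt(T)

    # Bank-account numeraire (risk-neutral measure Q)
    st_q = [S * exp((r - 0.5 * sigma ** 2) * T + vol * z) for z in zs]
    bank = exp(-r * T) * sum(max(s - K, 0.0) for s in st_q) / n

    # Stock numeraire (measure Q^S)
    st_s = [S * exp((r + 0.5 * sigma ** 2) * T + vol * z) for z in zs]
    stock = S * sum(max(s - K, 0.0) / s for s in st_s) / n
    return bank, stock

bank_mc, stock_mc = mc_call_two_numeraires(100.0, 95.0, 0.02, 0.25, 1.0)
analytic = bs_call(100.0, 95.0, 0.02, 0.25, 1.0)
print(bank_mc, stock_mc, analytic)  # all three agree to Monte Carlo accuracy
```

Note how switching numéraire changes both the drift of $S_t$ and the payoff functional $h(S_T)/N_T$, yet the price is invariant, which is exactly the content of equation $(1)$.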
It makes no difference whatsoever, provided you compute consistently. Suppose you define your version of the variance of a batch of data $X = x_1, x_2, \ldots, x_n$, with mean $\bar x = (x_1 + x_2 + \cdots + x_n)/n$, to be $$\text{var}_f(X) = f(n)\left((x_1-\bar x)^2 + (x_2 - \bar x)^2 + \cdots + (x_n - \bar x)^2\right)$$ where $f:\mathbb{N}\to (0,\infty)$ is a personalized function giving some positive value $f(n)$ for each positive integer $n$. Some people might use $f(n) = 1/n$, others might use $f(n) = 1/(n-1)$, and others might use something else altogether such as $f(n)=\Gamma((n-1)/2)^2/(2\Gamma(n/2)^2)$. It doesn't matter, because as we have seen you must use the same function $f$ to compute the covariance of any batch of paired data $(X,Y) = (x_1,y_1), (x_2,y_2), \ldots, (x_n,y_n)$: $$\text{cov}_f(X,Y) = f(n)\left((x_1-\bar x)(y_1-\bar y) + (x_2-\bar x)(y_2-\bar y) + \cdots + (x_n-\bar x)(y_n-\bar y)\right).$$ Consequently your personal correlation coefficient will be $$\eqalign{\rho_f(X,Y) &= \frac{\text{cov}_f(X,Y)}{\sqrt{\text{var}_f(X)}\sqrt{\text{var}_f(Y)}} \\&= \frac{f(n)\sum_i(x_i-\bar x)(y_i-\bar y)}{\sqrt{f(n)\sum_i(x_i-\bar x)^2}\sqrt{f(n)\sum_i(y_i-\bar y)^2}}\\&= \frac{\sum_i(x_i-\bar x)(y_i-\bar y)}{\sqrt{\sum_i(x_i-\bar x)^2}\sqrt{\sum_i(y_i-\bar y)^2}}.}$$ Since the factor $f(n)/(\sqrt{f(n)}\sqrt{f(n)}) = 1$ disappears, your correlation coefficient will be the same as anyone else's. As a bonus, now you know you don't have to compute $f$.
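A quick numerical check of this cancellation in plain Python (the data and the third, deliberately silly choice of $f$ are arbitrary):

```python
def corr(xs, ys, f):
    """Correlation computed with an arbitrary positive normalization f(n)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n

    def var(vs, m):
        return f(n) * sum((v - m) ** 2 for v in vs)

    cov = f(n) * sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    return cov / (var(xs, mx) ** 0.5 * var(ys, my) ** 0.5)

xs = [1.0, 2.0, 4.0, 8.0]
ys = [1.0, 3.0, 2.0, 7.0]
r1 = corr(xs, ys, lambda n: 1 / n)        # "population" convention
r2 = corr(xs, ys, lambda n: 1 / (n - 1))  # "sample" convention
r3 = corr(xs, ys, lambda n: 3.7)          # any positive constant works too
print(r1, r2, r3)  # identical up to floating-point rounding
```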
Revision as of 19:31, 27 April 2012 $\Gamma$-function $ \newcommand{\abs}[1]{\left|#1\right|} \newcommand{\Re}{\mathop{\mathrm{Re}}} \newcommand{\Im}{\mathop{\mathrm{Im}}} \newcommand{\rd}{\,\mathrm{d}} $ A transcendental function $\Gamma(z)$ that extends the values of the factorial $z!$ to any complex number $z$. It was introduced in 1729 by L. Euler in a letter to Ch. Goldbach, using the infinite product $$ \Gamma(z) = \lim_{n\rightarrow\infty}\frac{n!n^z}{z(z+1)\ldots(z+n)} = \lim_{n\rightarrow\infty}\frac{n^z}{z(1+z)(1+z/2)\ldots(1+z/n)}, $$ which was used by L. Euler to obtain the integral representation (Euler integral of the second kind, cf. Euler integrals) $$ \Gamma(z) = \int_0^\infty x^{z-1}e^{-x} \rd x, $$ which is valid for $\Re z > 0$. The multi-valuedness of the function $x^{z-1}$ is eliminated by the formula $x^{z-1}=e^{(z-1)\ln x}$ with a real $\ln x$. The symbol $\Gamma(z)$ and the name gamma-function were proposed in 1814 by A.M. Legendre.
If $\Re z < 0$ and $-k-1 < \Re z < -k$, $k=0,1,\ldots$, the gamma-function may be represented by the Cauchy–Saalschütz integral: $$ \Gamma(z) = \int_0^\infty x^{z-1} \left( e^{-x} - \sum_{m=0}^k (-1)^m \frac{x^m}{m!} \right) \rd x. $$ In the entire plane punctured at the points $z=0,-1,\ldots $, the gamma-function satisfies a Hankel integral representation: $$ \Gamma(z) = \frac{1}{e^{2\pi iz} - 1} \int_C s^{z-1}e^{-s} \rd s, $$ where $s^{z-1} = e^{(z-1)\ln s}$ and $\ln s$ is the branch of the logarithm for which $0 < \arg s < 2\pi$; the contour $C$ is represented in Fig. a. It is seen from the Hankel representation that $\Gamma(z)$ is a meromorphic function. At the points $z_n = -n$, $n=0,1,\ldots$ it has simple poles with residues $(-1)^n/n!$. Figure: g043310a Fundamental relations and properties of the gamma-function. 1) Euler's functional equation: $$ z\Gamma(z) = \Gamma(z+1), $$ or $$ \Gamma(z) = \frac{1}{z(z+1)\ldots(z+n)}\Gamma(z+n+1); $$ $\Gamma(1)=1$, $\Gamma(n+1) = n!$ if $n$ is an integer; it is assumed that $0! = \Gamma(1) = 1$. 2) Euler's completion formula: $$ \Gamma(z)\Gamma(1-z) = \frac{\pi}{\sin \pi z}. $$ In particular, $\Gamma(1/2)=\sqrt{\pi}$; $$ \Gamma\left(n+\frac{1}{2}\right) = \frac{1\cdot 3\cdots(2n-1)}{2^n}\sqrt{\pi} $$ if $n>0$ is an integer; $$ \abs{\Gamma\left(\frac{1}{2} + iy\right)}^2 = \frac{\pi}{\cosh \pi y}, $$ where $y$ is real. 3) Gauss' multiplication formula: $$ \prod_{k=0}^{m-1} \Gamma\left( z + \frac{k}{m} \right) = (2\pi)^{(m-1)/2}m^{(1/2)-mz}\Gamma(mz), \quad m = 2,3,\ldots $$ If $m=2$, this is the Legendre duplication formula. 4) If $\Re z \geq \delta > 0$ or $\abs{\Im z} \geq \delta > 0$, then $\ln\Gamma(z)$ can be asymptotically expanded into the Stirling series: $$ \ln\Gamma(z) = \left(z-\frac{1}{2}\right)\ln z - z + \frac{1}{2}\ln 2\pi + \sum_{n=1}^m \frac{B_{2n}}{2n(2n-1)z^{2n-1}} + O\bigl(z^{-2m-1}\bigr), \quad m = 1,2,\ldots, $$ where $B_{2n}$ are the Bernoulli numbers.
It implies the equality $$ \Gamma(z) = \sqrt{2\pi} z^{z-1/2} e^{-z} \left( 1 + \frac{1}{12}z^{-1} + \frac{1}{288}z^{-2} - \frac{139}{51840}z^{-3} - \frac{571}{2488320}z^{-4} + O\bigl(z^{-5}\bigr) \right). $$ In particular, $$ \Gamma(1+x) = \sqrt{2\pi} x^{x+1/2} e^{-x + \theta/(12x)}, \quad 0 < \theta < 1. $$ More accurate is Sonin's formula [6]: $$ \Gamma(1+x) = \sqrt{2\pi} x^{x+1/2} e^{-x + 1/(12(x+\theta))}, \quad 0 < \theta < 1/2. $$ 5) In the real domain, $\Gamma(x) > 0$ for $x > 0$ and it assumes the sign $(-1)^{k+1}$ on the segments $-k-1 < x < -k$, $k = 0,1,\ldots$ (Fig. b). Figure: g043310b The graph of the function $\Gamma(x)$. For all real $x$ the inequality $$ \Gamma\Gamma^{\prime\prime} > \bigl(\Gamma^\prime\bigr)^2 \geq 0 $$ is valid, i.e. all branches of both $\abs{\Gamma(x)}$ and $\ln\abs{\Gamma(x)}$ are convex functions. The property of logarithmic convexity defines the gamma-function among all solutions of the functional equation $$ \Gamma(1+x) = x\Gamma(x) $$ up to a constant factor (see also the Bohr–Mollerup theorem). For positive values of $x$ the gamma-function has a unique minimum at $x=1.4616321\ldots$ equal to $0.885603\ldots$. The local minima of the function $\abs{\Gamma(x)}$ form a sequence tending to zero as $x\rightarrow -\infty$. Figure: g043310c The graph of the function $1/\Gamma(z)$. 6) In the complex domain, if $\Re z > 0$, the gamma-function rapidly decreases as $\abs{\Im z} \rightarrow \infty$, $$ \lim_{\abs{\Im z} \rightarrow \infty} \abs{\Gamma(z)}\abs{\Im z}^{(1/2)-\Re z}e^{\pi\abs{\Im z}/2} = \sqrt{2\pi}. $$ 7) The function $1/\Gamma(z)$ (Fig. c) is an entire function of order one and of maximal type; asymptotically, as $r \rightarrow \infty$, $$ \ln M(r) \sim r \ln r, $$ where $$ M(r) = \max_{\abs{z} = r} \frac{1}{\abs{\Gamma(z)}}.
$$ It can be represented by the infinite Weierstrass product: $$ \frac{1}{\Gamma(z)} = z e^{\gamma z} \prod_{n=1}^\infty \left(\left( 1 + \frac{z}{n} \right) e^{-z/n} \right), $$ which converges absolutely and uniformly on any compact set in the complex plane ($\gamma$ is the Euler constant). A Hankel integral representation is valid: $$ \frac{1}{\Gamma(z)} = \frac{1}{2\pi i} \int_{C'} e^s s^{-z} \rd s, $$ where the contour $C'$ is shown in Fig. d. Figure: g043310d The contour $C'$. G.F. Voronoi [7] obtained integral representations for powers of the gamma-function. In applications, the so-called poly gamma-functions — $k$th derivatives of $\ln\Gamma(z)$ — are of importance. The function (Gauss' $\psi$-function) $$ \psi(z) = \frac{\mathrm{d}}{\mathrm{d}z}\ln\Gamma(z) = \frac{\Gamma'(z)}{\Gamma(z)} = -\gamma + \sum_{n=0}^\infty \frac{z-1}{(n+1)(z+n)} = -\gamma + \int_0^1 \frac{1 - (1-t)^{z-1}}{t} \rd t $$ is meromorphic, has simple poles at the points $z=0,-1,\ldots$ and satisfies the functional equation $$ \psi(z+1) - \psi(z) = \frac{1}{z}. $$ The representation of $\psi(z)$ for $\abs{z}<1$ yields the formula $$ \ln\Gamma(1+z) = -\gamma z + \sum_{k=2}^\infty \frac{(-1)^k S_k}{k} z^k, $$ where $$ S_k = \sum_{n=1}^\infty n^{-k}. $$ This formula may be used to compute $\Gamma(z)$ in a neighbourhood of the point $z=1$. For other poly gamma-functions see [2]. The incomplete gamma-function is defined by the equation $$ I(x,y) = \int_0^y e^{-t}t^{x-1} \rd t. $$ The functions $\Gamma(z)$ and $\psi(z)$ are transcendental functions which do not satisfy any linear differential equation with rational coefficients (Hölder's theorem). The exceptional importance of the gamma-function in mathematical analysis is due to the fact that it can be used to express a large number of definite integrals, infinite products and sums of series (see, for example, Beta-function).
In addition, it is widely used in the theory of special functions (the hypergeometric function, of which the gamma-function is a limit case, cylinder functions, etc.), in analytic number theory, etc. References [1] E.T. Whittaker, G.N. Watson, "A course of modern analysis", Cambridge Univ. Press (1952) [2] H. Bateman (ed.) A. Erdélyi (ed.), Higher transcendental functions, 1. The gamma function. The hypergeometric functions. Legendre functions, McGraw-Hill (1953) [3] N. Bourbaki, "Elements of mathematics. Functions of a real variable", Addison-Wesley (1976) (Translated from French) [4] Math. anal., functions, limits, series, continued fractions, Handbook Math. Libraries, Moscow (1961) (In Russian) [5] N. Nielsen, "Handbuch der Theorie der Gammafunktion", Chelsea, reprint (1965) [6] N.Ya. Sonin, "Studies on cylinder functions and special polynomials", Moscow (1954) (In Russian) [7] G.F. Voronoi, "Studies of primitive parallelotopes", Collected works, 2, Kiev (1952) pp. 239–368 (In Russian) [8] E. Jahnke, F. Emde, "Tables of functions with formulae and curves", Dover, reprint (1945) (Translated from German) [9] A. Angot, "Compléments de mathématiques. A l'usage des ingénieurs de l'electrotechnique et des télécommunications", C.N.E.T. (1957) Comments The $q$-analogue of the gamma-function is given by $$ \Gamma_q(z) = (1-q)^{1-z} \prod_{k=1}^\infty \frac{1-q^{k+1}}{1-q^{k+z}}, \quad z \neq 0,-1,-2,\ldots;\quad 0<q<1, $$ cf. [a2]. Its origin goes back to E. Heine (1847) and D. Jackson (1904). References [a1] E. Artin, "The gamma function", Holt, Rinehart & Winston (1964) [a2] R. Askey, "The $q$-Gamma and $q$-Beta functions", Appl. Anal., 8 (1978) pp. 125–141 How to Cite This Entry: Gamma-function. Encyclopedia of Mathematics. URL: http://www.encyclopediaofmath.org/index.php?title=Gamma-function&oldid=25602
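Several of the identities above are easy to spot-check numerically. A quick sketch using Python's standard math module (the test point $z=0.3$ is arbitrary):

```python
import math

z = 0.3

# Euler's completion formula: Γ(z) Γ(1-z) = π / sin(πz)
completion_lhs = math.gamma(z) * math.gamma(1 - z)
completion_rhs = math.pi / math.sin(math.pi * z)

# Legendre duplication formula (Gauss' multiplication formula with m = 2):
# Γ(z) Γ(z + 1/2) = 2^(1-2z) √π Γ(2z)
duplication_lhs = math.gamma(z) * math.gamma(z + 0.5)
duplication_rhs = 2 ** (1 - 2 * z) * math.sqrt(math.pi) * math.gamma(2 * z)

print(abs(completion_lhs - completion_rhs))   # ~1e-15
print(abs(duplication_lhs - duplication_rhs)) # ~1e-15

# Γ(1/2) = √π, and the minimum on (0, ∞) at x = 1.4616321... equals 0.885603...
print(abs(math.gamma(0.5) - math.sqrt(math.pi)))
print(math.gamma(1.4616321))  # ≈ 0.885603
```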
Previously, the vector space $\mathbb{V}$ and three vector product operations were considered in Simple tensor algebra I and Simple tensor algebra II. But we did not talk much about the tensor itself. Here, the tensor will be introduced. So, what is a tensor? The following figure shows a tensor in a 2D Cartesian coordinate system. The vector is a tensor, and it is a first-order (1st-order) tensor. Actually, the scalar is a tensor as well; we can consider the scalar as a zeroth-order (0th-order) tensor. Let $ a, b$ be two scalars in $ \mathbb{R}$; we put these two scalars (0th-order tensors) together as $ \{ a, b \}$ to form a vector (1st-order tensor). So, if we can put two vectors $ \boldsymbol g, \boldsymbol f $ together, then we will get a 2nd-order tensor. The tensor product $ \otimes $ plays such a role here ($ \boldsymbol g \otimes \boldsymbol f $). This is not a tutorial; please refer to the books on tensor algebra. Second-order tensor: Let $ \mathcal{G} = \{\boldsymbol g_1, \boldsymbol g_2, \boldsymbol g_3, \dots \}$ and $ \mathcal{F} = \{\boldsymbol f_1, \boldsymbol f_2, \boldsymbol f_3, \dots \}$ be two bases in Euclidean space $ \mathbb{E}^n$.
Then, the tensor product of two vectors $ \boldsymbol g_i \otimes \boldsymbol f_j $ forms a second-order tensor in $\mathbf{Lin}^n$ and represents a basis of $\mathbf{Lin}^n$. Thus we can represent a 2nd-order tensor $ \mathbf A \in \mathbf{Lin}^n$ by $$ \displaystyle \mathbf{A} = A^{ij} \boldsymbol g_i \otimes \boldsymbol f_j$$ Then, $$ \displaystyle \boldsymbol g^i \mathbf{A} \boldsymbol f^j= \boldsymbol g^i A^{ij} (\boldsymbol g_i \otimes \boldsymbol f_j) \boldsymbol f^j = \boldsymbol g^i A^{ij} \boldsymbol g_i (\boldsymbol f_j \cdot \boldsymbol f^j) = A^{ij}$$ Similar to the dual basis in 2D vector space, four bases $ \boldsymbol g_i \otimes \boldsymbol f_j $, $ \boldsymbol g^i \otimes \boldsymbol f_j $, $ \boldsymbol g_i \otimes \boldsymbol f^j $ or $ \boldsymbol g^i \otimes \boldsymbol f^j $ are usually used in $ \mathbf{Lin}^n$. Then, $$\displaystyle \mathbf{A} = A^{ij} \boldsymbol g_i \otimes \boldsymbol f_j = A^j_{i \cdot} \boldsymbol g^i \otimes \boldsymbol f_j= A^i_{\cdot j} \boldsymbol g_i \otimes \boldsymbol f^j = A_{ij} \boldsymbol g^i \otimes \boldsymbol f^j$$ and the components $$ \displaystyle A^{ij} = \boldsymbol g^i \mathbf{A} \boldsymbol f^j$$ $$\displaystyle A^j_{i \cdot} = \boldsymbol g_i \mathbf{A} \boldsymbol f^j$$ $$ \displaystyle A^i_{\cdot j} = \boldsymbol g^i \mathbf{A} \boldsymbol f_j$$ $$ \displaystyle A_{ij} = \boldsymbol g_i \mathbf{A} \boldsymbol f_j$$ The identity tensor can be expressed by $$ \mathbf{I} = \boldsymbol g_i \otimes \boldsymbol g^i = \boldsymbol g^i \otimes \boldsymbol g_i = g^{ij} \boldsymbol g_i \otimes \boldsymbol g_j = g_{ij} \boldsymbol g^i \otimes \boldsymbol g^j$$ where $ \boldsymbol g^i $ is the dual basis of $ \boldsymbol g_i $, and $$ \displaystyle I^{ij} = g^{ij}$$ $$ \displaystyle I_{ij} = g_{ij}$$ $$ \displaystyle I^i_{\cdot j} = I^j_{i \cdot} = \delta^i_j$$ Proof: let $ \boldsymbol x \in \mathbb{E}^n$ be an arbitrary
vector, $$ \displaystyle \mathbf{I} \boldsymbol{x} = (\boldsymbol g_i \otimes \boldsymbol g^i)\boldsymbol{x} = \boldsymbol{g}_i (\boldsymbol{g}^i \cdot x^k \boldsymbol{g}_k) = x^i \boldsymbol{g}_i = \boldsymbol{x}$$ Tensor operations: Previously, several operations for vectors have been introduced. The operations for a 2nd-order tensor are similar to those for a 1st-order tensor, but there still exist some special operations defined for 2nd-order or higher-order tensors. Contraction: The contraction operation is the basic operation for a tensor. The similar concept for vectors can be found in Simple tensor algebra II. Let $ \mathbf A$ be a higher-order tensor, say a fourth-order tensor, $$ \displaystyle \mathbf A = A^{ij}_{\cdot\cdot kl} \boldsymbol g_i \otimes \boldsymbol g_j \otimes \boldsymbol g^k \otimes \boldsymbol g^l$$ Then, the contraction means the dot product of two arbitrary bases in this tensor, say $ \boldsymbol g_j$ and $\boldsymbol g^l$, $$ \displaystyle \overset{\mathbf{Ctrc}}{\widehat{\mathbf A}} = \overset{\mathbf{Ctrc}}{ \widehat{A^{ij}_{\cdot\cdot kl} \boldsymbol g_i \otimes \boldsymbol g_j \otimes \boldsymbol g^k \otimes \boldsymbol g^l} }=\delta_j^l A^{ij}_{\cdot\cdot kl} \boldsymbol g_i \otimes \boldsymbol g^k= A^{ij}_{\cdot\cdot kj} \boldsymbol g_i \otimes \boldsymbol g^k = C^{i}_{\cdot k} \boldsymbol g_i \otimes \boldsymbol g^k$$ Each contraction operation reduces the order of the tensor by two. Composition: The composition of two tensors is similar to the dot product of two vectors.
Let $\mathbf A$ be a fourth-order tensor and $\mathbf B$ be a third-order tensor in $\mathbf{Lin}^n$, $$\mathbf A = A^{ij}_{\cdot\cdot kl} \boldsymbol g_i \otimes \boldsymbol g_j \otimes \boldsymbol g^k \otimes \boldsymbol g^l$$ $$\mathbf B = B^{rs}_{\cdot\cdot t} \boldsymbol g_r \otimes \boldsymbol g_s \otimes \boldsymbol g^t$$ Then we use the tensor product to get a result, $$\mathbf A \otimes \mathbf B = A^{ij}_{\cdot\cdot kl} B^{rs}_{\cdot\cdot t} \boldsymbol g_i \otimes \boldsymbol g_j \otimes \boldsymbol g^k \otimes \boldsymbol g^l \otimes \boldsymbol g_r \otimes \boldsymbol g_s \otimes \boldsymbol g^t$$ and contract two bases (e.g., $ \boldsymbol g^k$ and $\boldsymbol g_s$) to get the composition result, $$\begin{align} {\boldsymbol{AB}} &= \delta _s^k A_{ \cdot \cdot kl}^{ij}B_{ \cdot \cdot t}^{rs}{{\boldsymbol{g}}_i} \otimes {{\boldsymbol{g}}_j} \otimes {{\boldsymbol{g}}^l} \otimes {{\boldsymbol{g}}_r} \otimes {{\boldsymbol{g}}^t}\\ &= A_{ \cdot \cdot kl}^{ij}B_{ \cdot \cdot t}^{rk}{{\boldsymbol{g}}_i} \otimes {{\boldsymbol{g}}_j} \otimes {{\boldsymbol{g}}^l} \otimes {{\boldsymbol{g}}_r} \otimes {{\boldsymbol{g}}^t}\\ &= C_{ \cdot \cdot l \cdot t}^{ij \cdot r}{{\boldsymbol{g}}_i} \otimes {{\boldsymbol{g}}_j} \otimes {{\boldsymbol{g}}^l} \otimes {{\boldsymbol{g}}_r} \otimes {{\boldsymbol{g}}^t}\\ &= \boldsymbol C \end{align}$$ The composition of two tensors satisfies the following properties: Associativity: $(\bf A \bf B) \bf C = \bf A (\bf B \bf C)$, Distributivity: $(\bf A+\bf B) \bf C = \bf A \bf C +\bf B \bf C$; $\bf A(\bf B+ \bf C) = \bf A \bf B +\bf A \bf C$. Transposition: The transpose operation is to change the position of the tensor index.
Let $\mathbf A$ be a fourth-order tensor in $\mathbf{Lin}^n$, $$\mathbf A = A^{ij}_{\cdot\cdot kl} \boldsymbol g_i \otimes \boldsymbol g_j \otimes \boldsymbol g^k \otimes \boldsymbol g^l$$ Then, $$\mathbf B = A^{ji}_{\cdot\cdot kl} \boldsymbol g_i \otimes \boldsymbol g_j \otimes \boldsymbol g^k \otimes \boldsymbol g^l = B^{ij}_{\cdot\cdot kl} \boldsymbol g_i \otimes \boldsymbol g_j \otimes \boldsymbol g^k \otimes \boldsymbol g^l$$ $$\mathbf C = A^{\cdot ji}_{k \cdot\cdot l} \boldsymbol g_i \otimes \boldsymbol g_j \otimes \boldsymbol g^k \otimes \boldsymbol g^l = C^{ij}_{\cdot\cdot kl} \boldsymbol g_i \otimes \boldsymbol g_j \otimes \boldsymbol g^k \otimes \boldsymbol g^l$$ Both are transposed tensors of $\mathbf A$. Note that the transposition does not change the order of the bases. Decomposition: Following the rule in Transposition, consider the fourth-order tensor $\mathbf A$ and its transposed tensor $\mathbf A^ \text T$, $${{\bf{A}}^\text T} = A_{ \cdot \cdot kl}^{ji}{{\boldsymbol {g}}_i} \otimes {{\boldsymbol {g}}_j} \otimes {{\boldsymbol {g}}^k} \otimes {{\boldsymbol {g}}^l}$$ If $$A_{ \cdot \cdot kl}^{ji} = A_{ \cdot \cdot kl}^{ij}$$ then the fourth-order tensor is symmetric with respect to the first and second index. If the tensor $\mathbf A$ and its transposed tensor $\mathbf A^ \text T$ satisfy $$A_{ \cdot \cdot kl}^{ji} = -A_{ \cdot \cdot kl}^{ij}$$ then the tensor $\mathbf A$ is skew-symmetric with respect to the first and second index. Since all the elements on the diagonal of a skew-symmetric matrix are zero, we use the following notation, $$A_{ \cdot \cdot kl}^{\underline i \underline i } = 0$$ The underlined symbol means no summation convention for index $i$.
Now, the symmetric and the skew-symmetric tensor can be constructed from the tensor and its transposed tensor, $${\bf{D}} = \frac{1}{2}\left( {{\bf{A}} + {{\bf{A}}^{\rm{T}}}} \right)$$ $${\bf{W}} = \frac{1}{2}\left( {{\bf{A}} - {{\bf{A}}^{\rm{T}}}} \right)$$ Stress: Here, we consider the stress tensor in three-dimensional space as an example, $${\boldsymbol{ \sigma }} = {\sigma _{ij}}{\boldsymbol e_i} \otimes {\boldsymbol e_j} = \left[ {\begin{array}{*{20}{c}} {{\sigma _{11}}}&{{\sigma _{12}}}&{{\sigma _{13}}}\\ {{\sigma _{21}}}&{{\sigma _{22}}}&{{\sigma _{23}}}\\ {{\sigma _{31}}}&{{\sigma _{32}}}&{{\sigma _{33}}} \end{array}} \right]$$ With the given direction $\boldsymbol n=n^i \boldsymbol e_i$ (a unit vector), the traction vector is given by $${\bf T = \boldsymbol \sigma \cdot \boldsymbol n}$$ and the normal stress on this surface can be written as $${\sigma _n} = \bf T \cdot \boldsymbol n =\boldsymbol n \cdot \boldsymbol \sigma \cdot \boldsymbol n $$ Then, we decompose the stress into two tensors, $${\boldsymbol{\sigma }} = sph({\boldsymbol{\sigma }}) + dev({\boldsymbol{\sigma }})$$ $$sph({\boldsymbol{\sigma }}) = \frac{1}{3}{\rm{Tr}}({\boldsymbol{\sigma }}){\bf{I}}; \quad dev({\boldsymbol{\sigma }}) = {\boldsymbol{\sigma }} - \frac{1}{3}{\rm{Tr}}({\boldsymbol{\sigma }}){\bf{I}}$$ $sph({\boldsymbol{\sigma }})$ is the hydrostatic part of the stress and $dev({\boldsymbol{\sigma }})$ is the deviatoric part of the stress. Hydrostatic stress is associated with volume change, while deviatoric stress corresponds to plastic deformation.
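The spherical/deviatoric split above is easy to check numerically. A small sketch in plain Python (the stress values are arbitrary):

```python
def trace(A):
    return sum(A[i][i] for i in range(3))

def identity_scaled(p):
    """p * I as a 3x3 nested list."""
    return [[p if i == j else 0.0 for j in range(3)] for i in range(3)]

def sph_dev(sigma):
    """sph(sigma) = (1/3) Tr(sigma) I and dev(sigma) = sigma - sph(sigma)."""
    p = trace(sigma) / 3.0
    sph = identity_scaled(p)
    dev = [[sigma[i][j] - sph[i][j] for j in range(3)] for i in range(3)]
    return sph, dev

sigma = [[10.0,  2.0,  0.0],
         [ 2.0,  4.0, -1.0],
         [ 0.0, -1.0, -5.0]]
sph, dev = sph_dev(sigma)
print(trace(dev))                  # 0.0 -> the deviator is traceless
print(trace(sph) == trace(sigma))  # True -> the hydrostatic part carries the trace
```

Note that the off-diagonal (shear) components land entirely in the deviatoric part, consistent with its role in plastic deformation.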
Production of Σ(1385)± and Ξ(1530)0 in proton–proton collisions at √s = 7 TeV (Springer, 2015-01-10) The production of the strange and double-strange baryon resonances (Σ(1385)±, Ξ(1530)0) has been measured at mid-rapidity (|y| < 0.5) in proton–proton collisions at √s = 7 TeV with the ALICE detector at the LHC. Transverse ... Forward-backward multiplicity correlations in pp collisions at √s = 0.9, 2.76 and 7 TeV (Springer, 2015-05-20) The strength of forward-backward (FB) multiplicity correlations is measured by the ALICE detector in proton-proton (pp) collisions at √s = 0.9, 2.76 and 7 TeV. The measurement is performed in the central pseudorapidity ... Inclusive photon production at forward rapidities in proton-proton collisions at $\sqrt{s}$ = 0.9, 2.76 and 7 TeV (Springer Berlin Heidelberg, 2015-04-09) The multiplicity and pseudorapidity distributions of inclusive photons have been measured at forward rapidities ($2.3 < \eta < 3.9$) in proton-proton collisions at three center-of-mass energies, $\sqrt{s}=0.9$, 2.76 and 7 ... Rapidity and transverse-momentum dependence of the inclusive J/$\mathbf{\psi}$ nuclear modification factor in p-Pb collisions at $\mathbf{\sqrt{\textit{s}_{NN}}}=5.02$ TeV (Springer, 2015-06) We have studied the transverse-momentum ($p_{\rm T}$) dependence of the inclusive J/$\psi$ production in p-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV, in three center-of-mass rapidity ($y_{\rm cms}$) regions, down to ... Measurement of pion, kaon and proton production in proton–proton collisions at √s = 7 TeV (Springer, 2015-05-27) The measurement of primary π±, K±, p and p̄ production at mid-rapidity (|y| < 0.5) in proton–proton collisions at √s = 7 TeV performed with ALICE (A Large Ion Collider Experiment) at the Large Hadron Collider (LHC) is reported. ...
Two-pion femtoscopy in p-Pb collisions at $\sqrt{s_{\rm NN}}=5.02$ TeV (American Physical Society, 2015-03) We report the results of the femtoscopic analysis of pairs of identical pions measured in p-Pb collisions at $\sqrt{s_{\mathrm{NN}}}=5.02$ TeV. Femtoscopic radii are determined as a function of event multiplicity and pair ... Measurement of charm and beauty production at central rapidity versus charged-particle multiplicity in proton-proton collisions at $\sqrt{s}$ = 7 TeV (Springer, 2015-09) Prompt D meson and non-prompt J/$\psi$ yields are studied as a function of the multiplicity of charged particles produced in inelastic proton-proton collisions at a centre-of-mass energy of $\sqrt{s}=7$ TeV. The results ... Charged jet cross sections and properties in proton-proton collisions at $\sqrt{s}=7$ TeV (American Physical Society, 2015-06) The differential charged jet cross sections, jet fragmentation distributions, and jet shapes are measured in minimum bias proton-proton collisions at centre-of-mass energy $\sqrt{s}=7$ TeV using the ALICE detector at the ... Centrality dependence of high-$p_{\rm T}$ D meson suppression in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV (Springer, 2015-11) The nuclear modification factor, $R_{\rm AA}$, of the prompt charmed mesons ${\rm D^0}$, ${\rm D^+}$ and ${\rm D^{*+}}$, and their antiparticles, was measured with the ALICE detector in Pb-Pb collisions at a centre-of-mass ... K*(892)$^0$ and $\Phi$(1020) production in Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV (American Physical Society, 2015-02) The yields of the K*(892)$^0$ and $\Phi$(1020) resonances are measured in Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV through their hadronic decays using the ALICE detector. The measurements are performed in multiple ...
Multiplicity dependence of the average transverse momentum in pp, p-Pb, and Pb-Pb collisions at the LHC (Elsevier, 2013-12) The average transverse momentum <$p_T$> versus the charged-particle multiplicity $N_{ch}$ was measured in p-Pb collisions at a collision energy per nucleon-nucleon pair $\sqrt{s_{NN}}$ = 5.02 TeV and in pp collisions at ... Production of light nuclei and anti-nuclei in $pp$ and Pb-Pb collisions at energies available at the CERN Large Hadron Collider (American Physical Society, 2016-02) The production of (anti-)deuteron and (anti-)$^{3}$He nuclei in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV has been studied using the ALICE detector at the LHC. The spectra exhibit a significant hardening with ... Forward-central two-particle correlations in p-Pb collisions at $\sqrt{s_{\rm NN}}$ = 5.02 TeV (Elsevier, 2016-02) Two-particle angular correlations between trigger particles in the forward pseudorapidity range ($2.5 < |\eta| < 4.0$) and associated particles in the central range ($|\eta| < 1.0$) are measured with the ALICE detector in ... K$^{*}(892)^{0}$ and $\phi(1020)$ meson production at high transverse momentum in pp and Pb-Pb collisions at $\sqrt{s_\mathrm{NN}}$ = 2.76 TeV (American Physical Society, 2017-06) The production of K$^{*}(892)^{0}$ and $\phi(1020)$ mesons in proton-proton (pp) and lead-lead (Pb-Pb) collisions at $\sqrt{s_\mathrm{NN}} =$ 2.76 TeV has been analyzed using a high luminosity data sample accumulated in ... Production of $K*(892)^0$ and $\phi$(1020) in pp collisions at $\sqrt{s}$ =7 TeV (Springer, 2012-10) The production of K*(892)$^0$ and $\phi$(1020) in pp collisions at $\sqrt{s}$=7 TeV was measured by the ALICE experiment at the LHC. The yields and the transverse momentum spectra $d^2 N/dydp_T$ at midrapidity |y|<0.5 in ...
Directed flow of charged particles at mid-rapidity relative to the spectator plane in Pb-Pb collisions at $\sqrt{s_{NN}}$=2.76 TeV (American Physical Society, 2013-12) The directed flow of charged particles at midrapidity is measured in Pb-Pb collisions at $\sqrt{s_{NN}}$=2.76 TeV relative to the collision plane defined by the spectator nucleons. Both, the rapidity odd ($v_1^{odd}$) and ... Measurement of D-meson production versus multiplicity in p-Pb collisions at $\sqrt{s_{\rm NN}}=5.02$ TeV (Springer, 2016-08) The measurement of prompt D-meson production as a function of multiplicity in p–Pb collisions at $\sqrt{s_{\rm NN}}=5.02$ TeV with the ALICE detector at the LHC is reported. D$^0$, D$^+$ and D$^{*+}$ mesons are reconstructed ... Centrality, rapidity and transverse momentum dependence of the J/$\psi$ suppression in Pb-Pb collisions at $\sqrt{s_{NN}}$=2.76 TeV (Elsevier, 2014-06) The inclusive J/$\psi$ nuclear modification factor ($R_{AA}$) in Pb-Pb collisions at $\sqrt{s_{NN}}$=2.76TeV has been measured by ALICE as a function of centrality in the $e^+e^-$ decay channel at mid-rapidity |y| < 0.8 ... Production of charged pions, kaons and protons at large transverse momenta in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV (Elsevier, 2014-09) Transverse momentum spectra of $\pi^{\pm}, K^{\pm}$ and $p(\bar{p})$ up to $p_T$ = 20 GeV/c at mid-rapidity, |y| $\le$ 0.8, in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV have been measured using the ALICE detector ... Multiplicity Dependence of Pion, Kaon, Proton and Lambda Production in p-Pb Collisions at $\sqrt{s_{NN}}$ = 5.02 TeV (Elsevier, 2014-01) In this Letter, comprehensive results on $\pi^{\pm}, K^{\pm}, K^0_S$, $p(\bar{p})$ and $\Lambda (\bar{\Lambda})$ production at mid-rapidity (0 < $y_{CMS}$ < 0.5) in p-Pb collisions at $\sqrt{s_{NN}}$ = 5.02 TeV, measured ...
J/Ψ production and nuclear effects in p-Pb collisions at √sNN=5.02 TeV (Springer, 2014-02) Inclusive J/ψ production has been studied with the ALICE detector in p-Pb collisions at the nucleon–nucleon center of mass energy √sNN = 5.02 TeV at the CERN LHC. The measurement is performed in the center of mass rapidity ... Measurement of electrons from beauty hadron decays in pp collisions at √s = 7 TeV (Elsevier, 2013-04-10) The production cross section of electrons from semileptonic decays of beauty hadrons was measured at mid-rapidity (|y| < 0.8) in the transverse momentum range 1 < pT < 8 GeV/c with the ALICE experiment at the CERN LHC in ... Suppression of ψ(2S) production in p-Pb collisions at √sNN=5.02 TeV (Springer, 2014-12) The ALICE Collaboration has studied the inclusive production of the charmonium state ψ(2S) in proton-lead (p-Pb) collisions at the nucleon-nucleon centre of mass energy √sNN = 5.02 TeV at the CERN LHC. The measurement was ... Event-by-event mean pT fluctuations in pp and Pb–Pb collisions at the LHC (Springer, 2014-10) Event-by-event fluctuations of the mean transverse momentum of charged particles produced in pp collisions at √s = 0.9, 2.76 and 7 TeV, and Pb–Pb collisions at √sNN = 2.76 TeV are studied as a function of the ... Kaon femtoscopy in Pb-Pb collisions at $\sqrt{s_{\rm{NN}}}$ = 2.76 TeV (Elsevier, 2017-12-21) We present the results of three-dimensional femtoscopic analyses for charged and neutral kaons recorded by ALICE in Pb-Pb collisions at $\sqrt{s_{\rm{NN}}}$ = 2.76 TeV. Femtoscopy is used to measure the space-time ... Anomalous evolution of the near-side jet peak shape in Pb-Pb collisions at $\sqrt{s_{\mathrm{NN}}}$ = 2.76 TeV (American Physical Society, 2017-09-08) The measurement of two-particle angular correlations is a powerful tool to study jet quenching in a $p_{\mathrm{T}}$ region inaccessible by direct jet identification. In these measurements pseudorapidity ($\Delta\eta$) and ...
Online data compression in the ALICE O$^2$ facility (IOP, 2017) The ALICE Collaboration and the ALICE O2 project have carried out detailed studies for a new online computing facility planned to be deployed for Run 3 of the Large Hadron Collider (LHC) at CERN. Some of the main aspects ... Evolution of the longitudinal and azimuthal structure of the near-side peak in Pb–Pb collisions at $\sqrt{s_{\rm NN}}=2.76$ TeV (American Physical Society, 2017-09-08) In two-particle angular correlation measurements, jets give rise to a near-side peak, formed by particles associated to a higher $p_{\mathrm{T}}$ trigger particle. Measurements of these correlations as a function of ... J/$\psi$ elliptic flow in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 5.02 TeV (American Physical Society, 2017-12-15) We report a precise measurement of the J/$\psi$ elliptic flow in Pb-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV with the ALICE detector at the LHC. The J/$\psi$ mesons are reconstructed at mid-rapidity ($|y| < 0.9$) ... Highlights of experimental results from ALICE (Elsevier, 2017-11) Highlights of recent results from the ALICE collaboration are presented. The collision systems investigated are Pb–Pb, p–Pb, and pp, and results from studies of bulk particle production, azimuthal correlations, open and ...
The Closed Graph Theorem Definition: Let $f : X \to Y$ be a function. Then the Graph of $f$ is defined to be $\mathrm{Gr} (f) = \{ (x, f(x)) : x \in X \}$. Theorem 1 (The Closed Graph Theorem): Let $X$ and $Y$ be Banach spaces and let $T : X \to Y$ be a linear operator. Then $T$ is a bounded linear operator if and only if the graph of $T$, $\mathrm{Gr} (T) = \{ (x, T(x)) : x \in X \}$, is a closed set in the product space $X \times Y$; that is, if $(x_n)_{n=1}^{\infty}$ is a sequence of points in $X$ that converges to $x \in X$ and $(T(x_n))_{n=1}^{\infty}$ converges to $y \in Y$, then $T(x) = y$. Proof: $\Rightarrow$ Suppose that $T$ is bounded (continuous). Let $(x_n, T(x_n))$ be a sequence in $\mathrm{Gr}(T)$ that converges to $(x, y)$. Then $(x_n)$ converges to $x$ and $(T(x_n))$ converges to $y$. Since $T$ is continuous, $(T(x_n))$ converges to $T(x)$. So $T(x) = y$. So $\mathrm{Gr}(T)$ is closed. $\Leftarrow$ Suppose that $\mathrm{Gr}(T)$ is a closed set in the product space $X \times Y$. Define a new norm $\| \cdot \|_T$ on $X$ for all $x \in X$ by: \begin{align} \quad \| x \|_T = \| x \| + \| T(x) \| \end{align} We must first verify that $\| \cdot \|_T$ is indeed a norm. Suppose that $\| x \|_T = 0$. Then $\| x \| + \| T(x) \| = 0$. Since $\| x \| \geq 0$ and $\| T(x) \| \geq 0$ we must have that $\| x \| = 0$. But $\| \cdot \|$ is a norm, so $x = 0$. We also have that $\| 0 \|_T = \| 0 \| + \| T(0) \| = 0 + 0 = 0$. So $\| x \|_T = 0$ if and only if $x = 0$. Let $\lambda \in \mathbb{C}$. Then: \begin{align} \quad \| \lambda x \|_T = \| \lambda x \| + \| T(\lambda x) \| = | \lambda | \| x \| + | \lambda | \| T(x) \| = | \lambda |(\| x \| + \| T(x) \|) = | \lambda | \| x \|_T \end{align} Lastly, let $x, y \in X$.
Then: \begin{align} \quad \| x + y \|_T = \| x + y \| + \| T(x + y) \| \leq \| x \| + \| y \| + \| T(x) \| + \| T(y) \| = (\| x \| + \| T(x) \|) + (\| y \| + \| T(y) \|) = \| x \|_T + \| y \|_T \end{align} So indeed, $\| \cdot \|_T$ is a norm on $X$. Furthermore, since $\| x \| \geq 0$ and $\| T(x) \| \geq 0$ we have for all $x \in X$ that: \begin{align} \quad \| x \| \leq \| x \| + \| T(x) \| = \| x \|_T \quad (*) \end{align} \begin{align} \quad \| T(x) \| \leq \| x \| + \| T(x) \| = \| x \|_T \quad (**) \end{align} We aim to show that $X$ with the norm $\| \cdot \|_T$ is a Banach space. Let $(x_n)_{n=1}^{\infty}$ be a Cauchy sequence in $X$ with respect to the norm $\| \cdot \|_T$. Let $m, n \in \mathbb{N}$. Then from $(*)$ and $(**)$ we have that: \begin{align} \quad \| x_m - x_n \| \leq \| x_m - x_n \|_T \quad (***) \end{align} \begin{align} \quad \| T(x_m) - T(x_n) \| \leq \| x_m - x_n \|_T \quad (****) \end{align} From $(***)$ we have that $(x_n)_{n=1}^{\infty}$ must be a Cauchy sequence in $X$ with respect to the original norm on $X$. And from $(****)$ we have that $(T(x_n))_{n=1}^{\infty}$ must be a Cauchy sequence in $Y$ with respect to the norm on $Y$. Since $X$ is a Banach space, $(x_n)_{n=1}^{\infty}$ converges to some $x \in X$, and since $Y$ is a Banach space, $(T(x_n))_{n=1}^{\infty}$ converges to some $y \in Y$. So $(x_n)_{n=1}^{\infty}$ is a sequence of points in $X$ that converges to $x \in X$ and $(T(x_n))_{n=1}^{\infty}$ converges to $y \in Y$; since $\mathrm{Gr}(T)$ is closed, $T(x) = y$. Now all that remains to show is that the Cauchy sequence $(x_n)_{n=1}^{\infty}$ in $X$ converges to $x \in X$ with respect to the $\| \cdot \|_T$ norm.
We have that: \begin{align} \quad \lim_{n \to \infty} \| x_n - x \|_T = \lim_{n \to \infty} [\| x_n - x \| + \| T(x_n) - T(x) \|] = \lim_{n \to \infty} \| x_n - x \| + \lim_{n \to \infty} \| T(x_n) - y \| = 0 + 0 = 0 \end{align} where we used $T(x) = y$. So indeed every Cauchy sequence $(x_n)_{n=1}^{\infty}$ in $X$ converges to some $x \in X$ with respect to the $\| \cdot \|_T$ norm. So $X$ is a Banach space with respect to the $\| \cdot \|_T$ norm. From $(*)$ we have that $\| \cdot \|$ and $\| \cdot \|_T$ are equivalent norms by the theorem on the Equivalence of Norms on Banach Spaces page. So there exists a $C > 0$ such that for all $x \in X$: \begin{align} \quad \| x \|_T \leq C \| x \| \end{align} But from $(**)$ this means that for all $x \in X$: \begin{align} \quad \| T(x) \| \leq C \| x \| \end{align} So $T$ is a bounded linear operator. $\blacksquare$ Corollary 2 (An Equivalent Formulation of the Closed Graph Theorem): Let $X$ and $Y$ be Banach spaces and let $T : X \to Y$ be a linear operator. Then $T$ is a bounded linear operator if and only if whenever $(x_n)_{n=1}^{\infty}$ is a sequence of points in $X$ that converges to $0 \in X$ and $(T(x_n))_{n=1}^{\infty}$ converges to $y \in Y$, then $y = 0$. Proof: $\Rightarrow$ If $T$ is a bounded linear operator then by the Closed Graph Theorem above, if $(x_n)_{n=1}^{\infty}$ converges to $0$ and $(T(x_n))_{n=1}^{\infty}$ converges to $y$ then we must have that $T(0) = y$. But $T(0) = 0$, i.e., $y = 0$. $\Leftarrow$ Suppose that whenever $(x_n)_{n=1}^{\infty}$ converges to $0$ and $(T(x_n))_{n=1}^{\infty}$ converges to $y$ we have that $y = 0$. Let $(z_n)_{n=1}^{\infty}$ be a sequence in $X$ that converges to $z$, and let $(T(z_n))_{n=1}^{\infty}$ converge to $w \in Y$. Then $(z_n - z)_{n=1}^{\infty}$ converges to $0$, and $(T(z_n - z))_{n=1}^{\infty} = (T(z_n) - T(z))_{n=1}^{\infty}$ converges to $w - T(z)$. So by the assumption we must have that $w - T(z) = 0$, that is, $T(z) = w$. By the Closed Graph Theorem, $T$ is bounded. $\blacksquare$
Let $\psi(x)=\sum_{n\leq x} \Lambda(n)$ be the weighted prime counting function. I am trying to evaluate the integral $$\kappa:=\int_{1}^{\infty}\frac{\psi(x)-x}{x^{2}}dx$$ in several different ways. Originally, this integral came up as a particular part in a particular case of a formula for a summatory function I was looking at. From now on, let $\gamma$ refer to the Euler-Mascheroni constant. (Now corrected:) I found a fun, elementary approach to this integral which gave $\kappa=-1-\gamma$ if we assume the quantitative prime number theorem. (Precisely, we just need to assume that this integral is absolutely convergent.) Since I am not too confident about this, I naturally wanted to check by complex analytic methods to see if my answer was correct. My question then is: What other ways can be used to prove this identity? I feel like knowing many approaches to this problem will give a greater understanding of certain properties of these functions. A friend suggested that it must be related to the logarithmic derivative of $\zeta(s)$, and certain special values, but I cannot see how to use this. Thanks a lot! Additional remark: I attempted to use the explicit formula for $\psi(x)$, and deduced $\kappa=-\gamma-1$. Originally I felt this was wrong, but after reading Julian Rosen's answer I think it is correct.
Here is the alternate solution: Substituting in the explicit formula, and then integrating termwise, we have$$\kappa=\int_{1}^{\infty}\left(-\sum_{\rho}\frac{x^{\rho-2}}{\rho}-\frac{\log2\pi}{x^{2}}-\frac{\log\left(1-x^{-2}\right)}{2x^{2}}\right)dx=\sum_{\rho}\frac{1}{\rho(\rho-1)}-\log2\pi+1-\log2$$since $$\frac{1}{2}\int_{1}^{\infty}\frac{\log\left(1-x^{-2}\right)^{-1}}{x^{2}}dx=\frac{1}{2}\int_{1}^{\infty}\sum_{i=1}^{\infty}\frac{1}{ix^{2i+2}}dx=\sum_{i=1}^{\infty}\frac{1}{2i(2i+1)}=1-\log2.$$As $$\sum_{\rho}\frac{1}{\rho(\rho-1)}=\sum_{\rho}\left(\frac{1}{\rho-1}-\frac{1}{\rho}\right)=-\sum_{\rho}\left(\frac{1}{1-\rho}+\frac{1}{\rho}\right)=2B=-\gamma-2+\log4\pi$$it follows that $\kappa=-\gamma-1$.
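A numerical cross-check of $\kappa=-\gamma-1$ is also possible by truncating the integral. Since $\psi$ is a step function, $\int_1^N \frac{\psi(x)-x}{x^2}\,dx = \sum_{n\le N}\frac{\Lambda(n)}{n} - \frac{\psi(N)}{N} - \log N$. The sketch below is my own verification (not part of the original discussion), using a smallest-prime-factor sieve to compute $\Lambda(n)$:

```python
import math

def psi_integral(N):
    """int_1^N (psi(x) - x)/x^2 dx, computed exactly from Lambda(n)."""
    # Smallest-prime-factor sieve, used to detect prime powers n = p^k.
    spf = list(range(N + 1))
    for p in range(2, int(N**0.5) + 1):
        if spf[p] == p:
            for m in range(p * p, N + 1, p):
                if spf[m] == m:
                    spf[m] = p
    lam_sum = 0.0   # sum_{n<=N} Lambda(n)/n
    psi_N = 0.0     # psi(N)
    for n in range(2, N + 1):
        p = spf[n]
        m = n
        while m % p == 0:
            m //= p
        if m == 1:                 # n is a prime power, so Lambda(n) = log p
            lam = math.log(p)
            lam_sum += lam / n
            psi_N += lam
    # int_1^N psi(x)/x^2 dx = sum Lambda(n)(1/n - 1/N);  int_1^N dx/x = log N
    return lam_sum - psi_N / N - math.log(N)

print(psi_integral(200000))  # ≈ -1.58, close to -1 - gamma ≈ -1.5772
```

Convergence is slow (governed by the error term in the prime number theorem), but already at $N=2\cdot 10^5$ the truncated integral is visibly close to $-1-\gamma$.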
I'd like to calculate the Fourier series of $f(x)=x\cos(x)$, with $x\in(-\pi,\,\pi)$. My solution, however, doesn't agree with my teacher's solution. So either I went wrong somewhere (most likely), or it was him who went wrong (but I don't think so). So, $$f(x)=a_0+\sum_{n=1}^\infty\left(a_n\cos(nx)+b_n\sin(nx)\right)$$ I start by realising that $f(x)$ is an odd function, since $f(-x)=-f(x)$, and therefore the coefficients $a_0$ and $a_n$ will both equal $0$: $$a_0=a_n=0$$ Therefore, I'm left with: $$f(x)=\sum_{n=1}^\infty b_n\sin(nx)$$ And all I have to do now is calculate the coefficients $b_n$, and this is exactly where I'm going wrong. Firstly I'll present my teacher's solution: $\boxed{b_n=\dfrac{2(-1)^{n+1}}{n^2-1}}$ ($n\neq1$) Now I'll show you what I'm doing: $$b_n=\dfrac2\pi\int_0^\pi x\cos(x)\sin(nx)dx$$ Integrating by parts: $$\begin{cases} u = x & \Rightarrow du=dx\\ dv = \cos(x)\sin(nx)dx & \Rightarrow v=-\dfrac12\left(\dfrac{\cos((n+1)x)}{n+1}+\dfrac{\cos((n-1)x)}{n-1}\right) \end{cases}$$ Where $v$ was obtained using the product-to-sum identity $\cos(x)\sin(nx)=\frac12\left[\sin((n+1)x)+\sin((n-1)x)\right]$. So: $$b_n=\dfrac2\pi\left(uv\vert_0^\pi-\int_0^\pi v\,du\right)$$ Where the last integral equals $0$. Therefore I get: $$\boxed{b_n=\dfrac{2n(-1)^{n+1}}{1-n^2}}\,(n\neq1)$$ Because: $\cos [\pi(n+1)] = \cos [\pi(n-1)] = (-1)^{n+1}$ I don't know where I went wrong... In particular, notice that in my solution I have an $n$ in the numerator, and its sign is different from my teacher's solution. Any hints? Thanks in advance.
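One way to compare the two candidate formulas is a direct numerical evaluation of $b_n=\frac2\pi\int_0^\pi x\cos(x)\sin(nx)\,dx$. The sketch below is my own check (not part of the original question), using a composite Simpson rule, and prints the numeric value next to both closed forms:

```python
import math

def b_n(n, steps=20000):
    # Composite Simpson approximation of (2/pi) * int_0^pi x cos(x) sin(n x) dx
    f = lambda x: x * math.cos(x) * math.sin(n * x)
    h = math.pi / steps
    s = f(0.0) + f(math.pi)
    for k in range(1, steps):
        s += (4 if k % 2 else 2) * f(k * h)
    return (2 / math.pi) * (s * h / 3)

for n in range(2, 6):
    numeric = b_n(n)
    asker = 2 * n * (-1) ** (n + 1) / (1 - n * n)      # formula with n in the numerator
    teacher = 2 * (-1) ** (n + 1) / (n * n - 1)        # teacher's formula
    print(n, round(numeric, 6), asker, teacher)
```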
Electronic Journal of Probability Electron. J. Probab. Volume 22 (2017), paper no. 60, 77 pp. Local circular law for the product of a deterministic matrix with a random matrix Abstract It is well known that the spectral measure of eigenvalues of a rescaled square non-Hermitian random matrix with independent entries satisfies the circular law. In this paper, we consider the product $TX$, where $T$ is a deterministic $N\times M$ matrix and $X$ is a random $M\times N$ matrix with independent entries having zero mean and variance $(N\wedge M)^{-1}$. We prove a general local circular law for the empirical spectral distribution (ESD) of $TX$ at any point $z$ away from the unit circle under the assumptions that $N\sim M$, and the matrix entries $X_{ij}$ have sufficiently high moments. More precisely, if $z$ satisfies $||z|-1|\ge \tau $ for arbitrarily small $\tau >0$, the ESD of $TX$ converges to $\tilde \chi _{\mathbb D}(z) dA(z)$, where $\tilde \chi _{\mathbb D}$ is a rotation-invariant function determined by the singular values of $T$ and $dA$ denotes the Lebesgue measure on $\mathbb C$. The local circular law is valid around $z$ up to scale $(N\wedge M)^{-1/4+\epsilon }$ for any $\epsilon >0$. Moreover, if $|z|>1$ or the matrix entries of $X$ have vanishing third moments, the local circular law is valid around $z$ up to scale $(N\wedge M)^{-1/2+\epsilon }$ for any $\epsilon >0$. Article information Source Electron. J. Probab., Volume 22 (2017), paper no. 60, 77 pp. 
Dates Received: 5 June 2016 Accepted: 15 June 2017 First available in Project Euclid: 21 July 2017 Permanent link to this document https://projecteuclid.org/euclid.ejp/1500602612 Digital Object Identifier doi:10.1214/17-EJP76 Mathematical Reviews number (MathSciNet) MR3683369 Zentralblatt MATH identifier 1373.15058 Subjects Primary: 15B52: Random matrices Secondary: 60B20: Random matrices (probabilistic aspects; for algebraic aspects see 15B52) 82B44: Disordered systems (random Ising models, random Schrödinger operators, etc.) Citation Xi, Haokai; Yang, Fan; Yin, Jun. Local circular law for the product of a deterministic matrix with a random matrix. Electron. J. Probab. 22 (2017), paper no. 60, 77 pp. doi:10.1214/17-EJP76. https://projecteuclid.org/euclid.ejp/1500602612
Production of $K*(892)^0$ and $\phi$(1020) in pp collisions at $\sqrt{s}$ = 7 TeV (Springer, 2012-10) The production of K*(892)$^0$ and $\phi$(1020) in pp collisions at $\sqrt{s}$ = 7 TeV was measured by the ALICE experiment at the LHC. The yields and the transverse momentum spectra $d^2 N/dydp_T$ at midrapidity |y|<0.5 in ... Transverse sphericity of primary charged particles in minimum bias proton-proton collisions at $\sqrt{s}$=0.9, 2.76 and 7 TeV (Springer, 2012-09) Measurements of the sphericity of primary charged particles in minimum bias proton--proton collisions at $\sqrt{s}$=0.9, 2.76 and 7 TeV with the ALICE detector at the LHC are presented. The observable is linearized to be ... Pion, Kaon, and Proton Production in Central Pb--Pb Collisions at $\sqrt{s_{NN}}$ = 2.76 TeV (American Physical Society, 2012-12) In this Letter we report the first results on $\pi^\pm$, K$^\pm$, p and $\bar{\rm p}$ production at mid-rapidity (|y|<0.5) in central Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV, measured by the ALICE experiment at the LHC. The ... Measurement of prompt J/$\psi$ and beauty hadron production cross sections at mid-rapidity in pp collisions at $\sqrt{s}$ = 7 TeV (Springer-Verlag, 2012-11) The ALICE experiment at the LHC has studied J/ψ production at mid-rapidity in pp collisions at √s = 7 TeV through its electron pair decay on a data sample corresponding to an integrated luminosity Lint = 5.6 nb−1. The fraction ... Suppression of high transverse momentum D mesons in central Pb--Pb collisions at $\sqrt{s_{NN}}=2.76$ TeV (Springer, 2012-09) The production of the prompt charm mesons $D^0$, $D^+$, $D^{*+}$, and their antiparticles, was measured with the ALICE detector in Pb-Pb collisions at the LHC, at a centre-of-mass energy $\sqrt{s_{NN}}=2.76$ TeV per ...
J/$\psi$ suppression at forward rapidity in Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV (American Physical Society, 2012) The ALICE experiment has measured the inclusive J/ψ production in Pb-Pb collisions at √sNN = 2.76 TeV down to pt = 0 in the rapidity range 2.5 < y < 4. A suppression of the inclusive J/ψ yield in Pb-Pb is observed with ... Production of muons from heavy flavour decays at forward rapidity in pp and Pb-Pb collisions at $\sqrt {s_{NN}}$ = 2.76 TeV (American Physical Society, 2012) The ALICE Collaboration has measured the inclusive production of muons from heavy flavour decays at forward rapidity, 2.5 < y < 4, in pp and Pb-Pb collisions at $\sqrt {s_{NN}}$ = 2.76 TeV. The pt-differential inclusive ... Particle-yield modification in jet-like azimuthal dihadron correlations in Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV (American Physical Society, 2012-03) The yield of charged particles associated with high-pT trigger particles (8 < pT < 15 GeV/c) is measured with the ALICE detector in Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV relative to proton-proton collisions at the ... Measurement of the Cross Section for Electromagnetic Dissociation with Neutron Emission in Pb-Pb Collisions at √sNN = 2.76 TeV (American Physical Society, 2012-12) The first measurement of neutron emission in electromagnetic dissociation of 208Pb nuclei at the LHC is presented. The measurement is performed using the neutron Zero Degree Calorimeters of the ALICE experiment, which ...
Daniel Fischer I'm a hobbyist programmer. Mostly I write in Haskell; second place is taken by C. Other languages I use more than once a year are C#, Python and Java. Member for 7 years, 11 months Communities (5) Mathematics♦ 176.3k (17 gold badges, 186 silver badges, 301 bronze badges) Stack Overflow 165.9k (14 gold badges, 273 silver badges, 403 bronze badges) Meta Stack Exchange 8.2k (5 gold badges, 29 silver badges, 45 bronze badges) Code Review 711 (7 silver badges, 10 bronze badges) Academia 101 (3 bronze badges) Top network posts 3958 Why is processing a sorted array faster than processing an unsorted array? 201 Why does i|= j|= k|= (j+= i) - - (k+++k) - - (i =+j) == 11? 186 Why does the google calculator give $\tan 90^{\circ} = 1.6331779e^{+16}$? 171 Calculating and printing the nth prime number 124 An inequality: $1+\frac1{2^2}+\frac1{3^2}+\dotsb+\frac1{n^2}\lt\frac53$ 121 What is the smallest unknown natural number? 95 Why can't (or doesn't) the compiler optimize a predictable addition loop into a multiplication?
MisterGeeky Hi, I'm Nick. I'm a beginner hobbyist programmer. Currently learning JavaScript, Perl, PHP, C/C# and a bit of Python. A bit about me: Graduated high school under CBSE, Computer Science stream, where I learnt C++, SQL, BASIC, Scratch and Logo. Right now, I'm pursuing my Bachelor's in Technology under the Kerala Technological University. My concentration is on Computer Science and Engineering. In the future, I wish to work on disruptive technology and Robotics/AI. I love FOSH&IUT !! Very few know what that is. Thankful to be one of them. India Member for 1 year, 3 months Communities (60) Mathematics 3.8k (7 gold badges, 36 silver badges, 61 bronze badges) Chemistry 617 (8 silver badges, 12 bronze badges) Meta Stack Exchange 292 (1 silver badge, 5 bronze badges) Science Fiction & Fantasy 226 (1 gold badge, 3 silver badges, 7 bronze badges) Academia 212 (2 silver badges, 7 bronze badges) Top network posts 89 Can the golden ratio accurately be expressed in terms of $e$ and $\pi$ 49 Sheldon Cooper Primes 48 Why does $1+2+3+\cdots = -\frac{1}{12}$? 37 What would be the effect of the addition of an inert gas to a reaction at equilibrium? 29 Prove $\sin^2\theta + \cos^2\theta = 1$ 25 What was Wolverine doing during X-Men: First Class? 23 Given two blank rulers, measure any length
SciPost Submission Page Generalized Gibbs Ensemble and string-charge relations in nested Bethe Ansatz by György Z. Fehér, Balázs Pozsgay Submission summary As Contributors: Balázs Pozsgay Arxiv Link: https://arxiv.org/abs/1909.04470v2 Date submitted: 2019-09-19 Submitted by: Pozsgay, Balázs Submitted to: SciPost Physics Domain(s): Theoretical Subject area: Quantum Physics Abstract The non-equilibrium steady states of integrable models are believed to be described by the Generalized Gibbs Ensemble (GGE), which involves all local and quasi-local conserved charges of the model. In this work we investigate integrable lattice models solvable by the nested Bethe Ansatz, with group symmetry $SU(N)$, $N\ge 3$. In these models the Bethe Ansatz involves various types of Bethe rapidities corresponding to the "nesting" procedure, describing the internal degrees of freedom for the excitations. We show that a complete set of charges for the GGE can be obtained from the known fusion hierarchy of transfer matrices. The resulting charges are quasi-local in a certain regime in rapidity space, and they completely fix the rapidity distributions of each string type from each nesting level. Current status: Author comments upon resubmission List of changes We had two statements regarding the local inversion relations. The first said that for representations with rectangular Young diagrams the $R$-matrices are linear and satisfy the inversion. This is correct. The second statement was that for other representations the $R$-matrices are still linear, but they do not satisfy the inversion. This is not true: for other representations the $R$-matrices still satisfy the local inversion, but they are not linear. We now corrected this, and added the references from which this becomes clear. 
Submission & Refereeing History Report 2 submitted on 2019-10-15 14:59 by Anonymous Report 1 submitted on 2019-09-26 04:03 by Anonymous Reports on this Submission Anonymous Report 2 on 2019-10-15 Invited Report
Strengths
- technically strong and precise
- well written
Weaknesses
- uses previously introduced techniques and concepts
- some of the more general statements do not seem to be accurate (see report)
Report
In this paper, the authors propose the exact correspondence between quasi-local charges and the thermodynamic Bethe ansatz in the SU(N) integrable model. This involves the nested Bethe ansatz, and in that respect goes further than previous works on the subject. The paper proposes that a certain set of conserved charges, built out of transfer matrices based on higher-spin auxiliary-space representations, are quasi-local, and that their eigenvalues fix the full string distributions, which arise from the string hypothesis, of the nested Bethe ansatz. Certain aspects are proven rigorously, such as the quasi-locality of some of the charges. The paper is professionally and clearly written, and the results are strong and, as far as I can see, correct. I think this is an excellent work, and I accept it for publication. I have only two more philosophical comments in relation to statements made by the authors in the introduction, and I hope the authors can clarify their comments accordingly. First, on page 4, it is mentioned that the complete GGE as written in eq 1.8 can never be quasi-local. This is not true. I take here that the authors use the definition of quasi-local in 2.1; this was previously referred to as "pseudolocal" (quasi-local being reserved for densities with exponentially decaying envelope). It was shown in [B. Doyon, Thermalization and pseudolocality in extended quantum systems, Commun. Math. Phys.
351, 155 (2017)], mathematically rigorously, that if the large-time limit exists (that is, there is relaxation, in a given precise sense), then, from a large family of states, including thermal states of arbitrary local hamiltonians, the state that comes out of the quench is pseudolocal (quasi-local in the authors' definition). That is, it is essentially of the form 1.8, with the result of the series $\sum_i \beta_i Q_i$ being a pseudolocal charge. For instance, the use of the mode operators in free theories, $\int dp\, f(p) a^\dagger(p)a(p)$, does not preclude pseudolocality; all details are in the properties of the coefficient $f(p)$. Not all possible functions $f(p)$ can occur - pseudolocality imposes certain conditions. This is a theorem - it comes with conditions of relaxation, and it has a precise expression which is a bit more involved than expression 1.8. If the authors know for sure that pseudolocality is broken, they should mention how the conditions of the theorem are actually broken and explain a bit more. Second, on page 4 again, it is mentioned that the complete GGE does not use the maximum entropy principle. Again, I disagree, at least with the philosophy of this statement. I agree that in order to determine the $\beta_i$ in 1.8 - or, equivalently, the root densities - in any given quench, one does not use explicitly a maximisation of entropy; this principle is used in a different method, the quench action method. However, in principle, both methods are equivalent: in the method fixing the beta's (or root densities) from evaluating averages of conserved charges in the initial state, one uses maximal entropy *implicitly*. The form 1.8, or its equivalent in terms of distribution of Bethe roots, is a state obtained by *maximising entropy with respect to the conditions on the conserved densities*. It is, like the Gibbs state, a state obtained by maximising entropy.
The use of this form in quench protocols already implies that we assume that entropy is maximised after long times. And indeed, a large amount of information is lost in this process - one cannot reconstruct the exact initial state. That the GGE fixes all Bethe root densities does not preclude the fact that entropy has been maximised. Finally, on pages 6 and 7, there is of course a more general notion of pseudolocality (quasi-locality here), with respect to other states, see []; this is in fact also something alluded to by the authors later on in the paper. Requested changes Adjust statements as per the three points in the report. Anonymous Report 1 on 2019-9-26 Invited Report Strengths The manuscript is very thorough, and the derivations are long and clear. It is written in a way suggesting further generalizations and future applications to other Bethe-Ansatz-solvable models. Weaknesses Nothing significant. Report The paper analyses the recently discovered phenomenon of an incompleteness, in the thermodynamic limit, of the set of local conserved quantities as a set of operators entering the sum in the logarithm of the Generalized Gibbs density matrix. The models the authors use as a testing ground---the generalized Heisenberg chains---are probably the ones best suited to illustrate the phenomenon and suggest remedies. For these models, the authors obtain explicit expressions for the additional, now only quasi-local, charges that complete the Generalized Gibbs exponent. Thanks to the Bethe Ansatz integrability of the models considered, the criterion for completeness is quite transparent: the expressibility of the rapidity density through the charges of the alleged complete set alone. The paper is well written; it deserves publication in its current form. Requested changes Optionally, I would at least consider referencing [V. E. Korepin, N. M. Bogoliubov, A. G.
Izergin, "Quantum Inverse Scattering Method and Correlation Functions"]: the notion the authors call a "Generalized Eigenstate Thermalization" is present there under the name "representative state hypothesis".
Frequency Response of Mechanical Systems In this follow-up to a previous blog post on damping in structural dynamics, we take a detailed look at the harmonic response of damped mechanical systems. We also demonstrate different ways of setting up a frequency-response analysis in the COMSOL Multiphysics® software as well as how to interpret the results. What Is Frequency Response? In a general sense, the frequency response of a system shows how some property of the system responds to an input as a function of excitation frequency. When talking about frequency response in COMSOL Multiphysics, we usually mean the linear (or linearized) response to a harmonic excitation. In order to produce a frequency response curve, we need to perform a frequency sweep; that is, solve for a number of different frequencies. A frequency response curve will, in general, exhibit a number of distinct peaks located at the natural frequencies of the system. A typical frequency response curve. There are two natural frequencies at 13 Hz and 31 Hz in the plotted range. The Single-DOF System, Revisited Various aspects of the dynamics of a single-DOF system with viscous damping were discussed in the previous blog post. One result is that the damped natural frequency is \omega_d = \omega_0\sqrt{1-\zeta^2} \approx \omega_0 \left ( 1 - \frac{\zeta^2}{2} \right ) This is the frequency at which the system will vibrate (with a decaying amplitude) if released from a deformed state, when there is no other external excitation. An interesting question arises: “Which excitation frequency will give the maximum amplitude of the response?” You would expect it to be exactly the damped natural frequency, but as we will show below, this is not the case. A single-DOF system. Since we are dealing with harmonic motion, it is convenient to use a complex notation, factoring out the common harmonic multiplier e^{i \omega t}.
The equation of motion is then \left (-\omega^2m +ic\omega +k \right) u = f The phase angle of the load f can be taken as reference so that f is real-valued. A normalized form can be obtained by dividing by the stiffness k: \left (1-\left (\frac{\omega}{\omega_0} \right) ^2 +2i\zeta \left (\frac{\omega}{\omega_0} \right) \right) u = \frac{f}{k} The right-hand side is now exactly the static displacement. Thus, the ratio between the dynamic and static solutions is \displaystyle H(\omega) = \left (1-\left (\frac{\omega}{\omega_0} \right) ^2 +2i\zeta \left (\frac{\omega}{\omega_0} \right) \right)^{-1} =\frac{1}{1-\beta ^2 +2i\zeta \beta} The function H is sometimes called the transfer function. Here, β is used to denote the ratio between the excitation frequency and the undamped natural frequency. The magnitude of the transfer function is \displaystyle \left | \frac{1}{1-\beta ^2 +2i\zeta \beta} \right | = \frac{1}{\sqrt {(1-\beta ^2)^2 +4\zeta^2 \beta^2}} This function is shown in the graph below. Using standard calculus, the frequency giving maximum amplitude can be determined by finding the minimum of the (squared) denominator {(1-\beta ^2)^2 +4\zeta^2 \beta^2}. The result is \beta = \sqrt{1-2 \zeta^2} Thus, the excitation frequency giving the maximum response is \omega_{\mathrm {max}} = \omega_0\sqrt{1-2\zeta^2} \approx \omega_0 \left ( 1 - \zeta^2 \right ) which is lower than the damped natural frequency. Actually, the frequency shift is twice as large. The fact that the excitation frequency that causes maximum amplification does not coincide with the frequency of free vibration may seem like a paradox. This can be attributed to the phase shift between force and displacement caused by the damping. Without damping, the load and displacement flip from being perfectly in phase below the natural frequency to being 180° out-of-phase above the natural frequency. With damping, the transition in phase shift is smooth, as shown in the graph below.
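The peak location derived above can be cross-checked numerically. The following is a standalone sketch (plain Python, not a COMSOL model; the damping ratio value is chosen arbitrarily for the check) that samples |H(β)| on a fine grid and locates its maximum:

```python
import numpy as np

def H_mag(beta, zeta):
    """|H| = 1 / sqrt((1 - beta^2)^2 + 4 zeta^2 beta^2) for viscous damping."""
    return 1.0 / np.sqrt((1.0 - beta**2) ** 2 + 4.0 * zeta**2 * beta**2)

zeta = 0.1                                # arbitrary damping ratio for the check
beta = np.linspace(0.5, 1.5, 200001)      # fine grid around the resonance
beta_peak_numeric = beta[np.argmax(H_mag(beta, zeta))]
beta_peak_analytic = np.sqrt(1.0 - 2.0 * zeta**2)

# The peak sits at sqrt(1 - 2 zeta^2), below the damped natural
# frequency ratio sqrt(1 - zeta^2), as derived above.
print(beta_peak_numeric, beta_peak_analytic)
```

For ζ = 0.1, both values come out near 0.99, confirming that the response peaks slightly below the undamped natural frequency and below the damped one as well.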
Irrespective of the damping level, the phase shift at the undamped natural frequency is always 90°. Phase shift of the displacement as function of frequency. The fact that the force and displacement are slightly out-of-phase when there is damping affects the possibility of the force to supply energy to the system. Loss Factor Damping Let’s repeat the analysis for a single-DOF system with loss factor damping. In this case, the equation of motion is \left (-\omega^2m +k(1+i\eta ) \right) u = f and the damped natural frequency can be shown to be \displaystyle \omega_d = \omega_0 \sqrt {\left( \frac{1}{2} \left( 1 + \sqrt{1+\eta^2} \right ) \right ) } \approx \omega_0 \left (1 + \frac{\eta^2}{8} \right ) It may come as a surprise that the effect of adding damping in this case is to increase, rather than decrease, the natural frequency. The explanation is that this form of loss factor damping representation actually also increases the stiffness. The absolute value of the complex-valued stiffness is |\tilde k| = k \sqrt {1 + \eta^2} \approx k \left ( 1+ \frac{\eta^2}{2} \right ) With this loss factor damping, the transfer function is \displaystyle \frac{1}{1-\beta ^2 +i\eta } and its magnitude is \displaystyle \left | \frac{1}{1-\beta ^2 +i\eta} \right | = \frac{1}{\sqrt {(1-\beta ^2)^2 +\eta^2}} It can be immediately seen that the maximum amplitude occurs at β = 1; that is, at the undamped natural frequency. Again, maximum amplification occurs at a frequency that is lower than the damped natural frequency. The alternative definition of loss factor damping mentioned in the previous blog post has the property that the absolute value of the complex stiffness is independent of the damping level. 
This is obtained by using a definition that normalizes the complex stiffness so that a pure rotation in the complex plane is obtained, \tilde k = \displaystyle \frac{k(1+i \eta)}{\sqrt{1+ \eta^2}} Such a formulation leads to a natural frequency that decreases with damping: \displaystyle \omega_d = \omega_0 \sqrt { \frac {\frac{1}{2} \left( 1 + \sqrt{1+\eta^2} \right )}{1+ \eta^2} } \approx \omega_0 \left (1 - \frac{3\eta^2}{8} \right ) An analysis that is omitted here will show a corresponding drop in the excitation frequency that will give the maximum amplification so that it is still lower than the damped natural frequency. The phase shift between excitation and response when using loss factor damping is particularly interesting: Even at very low excitation frequencies, there is still a phase shift. Its asymptotic value is arctan(η). Phase shift of the displacement as a function of frequency when using loss factor damping. Low-frequency asymptotes are indicated by the dotted lines. A Note About Friction When friction between two surfaces supplies the damping mechanism, the response to a harmonic input is no longer harmonic because of the nonlinearity in the system. There may still be a periodic, but anharmonic, response. Such problems cannot be solved by the frequency-domain methods, in which the assumption is that the input-output relation is linear. Modeling Frequency Response in COMSOL Multiphysics® Setting Up the Study After adding a structural mechanics physics interface in the Model Wizard, you will be presented with a number of study types, four of which can be used for computing frequency response: Frequency Domain Frequency Domain, Prestressed Frequency Domain, Modal Frequency Domain, Prestressed, Modal Available study types for a Solid Mechanics interface. Two of the studies use a direct solution approach and two use the mode superposition approach.
In the prestressed types of analysis, the change in stiffness from a stationary preload is taken into account. Mode superposition is very well suited for frequency-domain analysis, since it is easy to select the appropriate eigenmodes based on the given frequencies. In either case, you perform a frequency sweep by providing a list of frequencies in the study settings for which the response is computed. Often, you want to cluster the frequencies around the natural frequencies of the structure. Entering frequencies for a frequency sweep. Note that without damping, the response exactly at a natural frequency tends toward infinity. This means that it is not possible to solve an undamped frequency response problem at, or close to, a natural frequency. The numerical formulation will give a singular, or at least ill-conditioned, system matrix. Perturbation or Not? There is a very important setting in the Stationary node in the solver sequence for a frequency-domain study: Linearity. Selecting the Linearity property. In principle, any frequency-domain analysis can be considered to be a small perturbation, so using Linear perturbation is never wrong. The most common case, however, is that the vibrations are centered around zero. In that case, it does not really matter whether the problem is considered as Linear or Linear Perturbation. The setting does, however, always fundamentally change the interpretation of loads. A load can be tagged as Harmonic Perturbation. Such a load is only taken into account if Linearity is set to Linear perturbation. All loads not marked as Harmonic Perturbation are ignored in such a study. Conversely, if Linearity is not Linear perturbation, then all loads marked as Harmonic Perturbation are ignored, and other loads are considered as harmonic. An edge load, designated as Harmonic Perturbation.
The purpose of this setting is to be able to discriminate between loads causing a possible prestress state and the harmonic excitation acting on top of that. When you add a standard Frequency Domain study, the study is, by default, not set as perturbation. Thus, the Harmonic Perturbation tag should not be used for the loads in this case, unless you change the Linearity setting. When you add a Frequency Domain, Prestressed study, the frequency response study step is set up for perturbation analysis. If the study is of a mode superposition type, then the study is always of a linear perturbation type. Interpreting the Results The results of a frequency-domain analysis are complex-valued and the harmonic variation is implicit. The phase angle of the complex number describes the phase shift with respect to the reference phase (which can be chosen arbitrarily, but is often taken as the phase of the main load). It also provides information about the phase shift between different points in the structure. Note that since the displacement components within a single finite element can have different phase angles, it is also quite possible that the components of the stress tensor are not in phase with each other. This can be of importance in, for example, fatigue analysis. In many cases, like in a color plot, it is only possible to display a real number. The convention in all results presentation is as follows: If you request a complex-valued variable v in a context where a real value is expected, then the real part is used. \displaystyle v = \Re(\tilde v e^{i \phi}) The phase angle φ is a property of the dataset that you can modify. Adjusting the phase angle in the dataset. In most frequency-response analyses, you are interested in the amplitude of a result quantity, v, as a function of frequency. This means that you should investigate abs(v) rather than v itself. The difference between the two is shown in the figure below. Example of a frequency response graph.
Note that the graph of “u” is identical to “real(u)”. In order to see what happens in more detail, we can add the imaginary part and argument of the result quantity to the graph: Frequency response including phase shift. For low frequencies, the real part is close to the absolute value. In the vicinity of the natural frequency, the imaginary part is dominant instead. This means that the response is almost out of phase with the excitation. Now, let’s investigate what happens if we change the phase angle in the dataset to 45°. Frequency response when the phase angle in the dataset is 45°. As expected, the amplitude graph does not change. However, the individual values of the real and imaginary parts do. The phase angle curve shifts π/4 upward. Actually, this is the same exact graph that we would obtain if a 45° phase angle was added to the load. Adding a phase angle to a load. Instead of using the phase angle input, you can equivalently enter the load directly using complex notation: Complex representation of the same load as above. The possibility to prescribe the phase angle is important when not all loads are in phase with each other. A rotating unbalanced mass can, for example, be described conveniently by giving the load in the y direction a 90° phase shift with respect to the load in the x direction. Results from a Perturbation Study If the study is of the perturbation type, there will actually be two sets of results: the prestress solution and the perturbation solution. In this case, you will, in the various result presentation features, get access to an extra selection: Expression evaluated for. Selecting the evaluation type for a perturbation analysis. Here, you can choose to study the perturbation solution, the prestress solution, or combinations thereof. For the perturbation solution, you also get one more option: the Compute differential check box. Selecting Compute differential . This setting affects how nonlinear expressions are treated. 
When Compute differential is not selected, then a nonlinear quantity is taken at face value. For example, the expression u^2 will simply take the square of the variable u from the perturbation solution. Since u is, in general, complex-valued, this will usually be a nonsensical operation. When Compute differential is selected, then the nonlinear quantity will be linearized around the prestressed state. The expression u^2 will evaluate to 2*u0*u, where u0 is the value at the linearization point. Converting Frequency-Response Results to the Time Domain There are some situations in which you may want to actually visualize the harmonic response from a frequency-domain analysis in the time domain. In particular, this is true if you have multiple excitation frequencies. Response to the excitation of two loads with different frequencies. You can transform frequency-response results to the time domain using the Frequency to Time FFT study step. Study sequence for transforming results from the frequency domain to the time domain. This technique is used in the following tutorial models: Concluding Remarks Frequency-domain analysis is a powerful tool for analyzing linear systems subjected to harmonic excitation. Actually, by performing an initial Fourier transform of the loading, any type of periodic excitation can be studied using frequency-response analysis. There are many more examples of mechanical frequency-response analyses available in the Application Gallery, such as:
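Returning to the phase-angle convention from the results-interpretation discussion above: rotating a complex result by the dataset phase angle changes its real and imaginary parts but never its amplitude. A minimal standalone sketch (illustrative numbers, not COMSOL output):

```python
import cmath
import math

# Hypothetical complex-valued result at one frequency (illustrative value).
v = 2.0 - 1.5j

phi = math.radians(45.0)             # dataset phase angle of 45 degrees
v_shifted = v * cmath.exp(1j * phi)  # the rotation the dataset setting applies

amplitude_before = abs(v)
amplitude_after = abs(v_shifted)
phase_change = cmath.phase(v_shifted) - cmath.phase(v)

# The amplitude abs(v) is unchanged; the argument shifts by exactly phi.
# This is why the amplitude graph stays fixed while the real and imaginary
# parts (and the phase curve) move when the dataset phase angle is changed.
print(amplitude_before, amplitude_after, phase_change)
```

The same identity explains why entering a load as amplitude plus phase angle is equivalent to entering it directly in complex notation.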
Answer 3.2 revolutions Work Step by Step We find: $\Delta \theta = \frac{\omega^2 - \omega_0^2}{2\alpha} = \frac{0 - 2.2^2}{2 \times (-0.12)} \approx 20.2\ \text{rad} \approx 3.2\ \text{revolutions}$
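The arithmetic can be reproduced in a few lines (a sketch; the variable names are ours, the input values are taken from the worked solution above):

```python
import math

omega0 = 2.2     # initial angular velocity, rad/s
omega = 0.0      # final angular velocity (comes to rest), rad/s
alpha = -0.12    # angular acceleration, rad/s^2

# From omega^2 = omega0^2 + 2 * alpha * dtheta, solved for dtheta:
delta_theta = (omega**2 - omega0**2) / (2.0 * alpha)
revolutions = delta_theta / (2.0 * math.pi)

print(round(delta_theta, 2), round(revolutions, 2))  # 20.17 3.21
```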
J/Ψ production and nuclear effects in p-Pb collisions at √sNN=5.02 TeV (Springer, 2014-02) Inclusive J/ψ production has been studied with the ALICE detector in p-Pb collisions at the nucleon–nucleon center of mass energy √sNN = 5.02TeV at the CERN LHC. The measurement is performed in the center of mass rapidity ... Measurement of electrons from beauty hadron decays in pp collisions at √s = 7 TeV (Elsevier, 2013-04-10) The production cross section of electrons from semileptonic decays of beauty hadrons was measured at mid-rapidity (|y| < 0.8) in the transverse momentum range 1 < pT < 8 GeV/c with the ALICE experiment at the CERN LHC in ... Suppression of ψ(2S) production in p-Pb collisions at √sNN=5.02 TeV (Springer, 2014-12) The ALICE Collaboration has studied the inclusive production of the charmonium state ψ(2S) in proton-lead (p-Pb) collisions at the nucleon-nucleon centre of mass energy √sNN = 5.02TeV at the CERN LHC. The measurement was ... Event-by-event mean pT fluctuations in pp and Pb–Pb collisions at the LHC (Springer, 2014-10) Event-by-event fluctuations of the mean transverse momentum of charged particles produced in pp collisions at √s = 0.9, 2.76 and 7 TeV, and Pb–Pb collisions at √sNN = 2.76 TeV are studied as a function of the ... Kaon femtoscopy in Pb-Pb collisions at $\sqrt{s_{\rm{NN}}}$ = 2.76 TeV (Elsevier, 2017-12-21) We present the results of three-dimensional femtoscopic analyses for charged and neutral kaons recorded by ALICE in Pb-Pb collisions at $\sqrt{s_{\rm{NN}}}$ = 2.76 TeV. Femtoscopy is used to measure the space-time ... Anomalous evolution of the near-side jet peak shape in Pb-Pb collisions at $\sqrt{s_{\mathrm{NN}}}$ = 2.76 TeV (American Physical Society, 2017-09-08) The measurement of two-particle angular correlations is a powerful tool to study jet quenching in a $p_{\mathrm{T}}$ region inaccessible by direct jet identification. In these measurements pseudorapidity ($\Delta\eta$) and ...
Online data compression in the ALICE O$^2$ facility (IOP, 2017) The ALICE Collaboration and the ALICE O2 project have carried out detailed studies for a new online computing facility planned to be deployed for Run 3 of the Large Hadron Collider (LHC) at CERN. Some of the main aspects ... Evolution of the longitudinal and azimuthal structure of the near-side peak in Pb–Pb collisions at $\sqrt{s_{\rm NN}}=2.76$ TeV (American Physical Society, 2017-09-08) In two-particle angular correlation measurements, jets give rise to a near-side peak, formed by particles associated to a higher $p_{\mathrm{T}}$ trigger particle. Measurements of these correlations as a function of ... J/$\psi$ elliptic flow in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 5.02 TeV (American Physical Society, 2017-12-15) We report a precise measurement of the J/$\psi$ elliptic flow in Pb-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV with the ALICE detector at the LHC. The J/$\psi$ mesons are reconstructed at mid-rapidity ($|y| < 0.9$) ... Enhanced production of multi-strange hadrons in high-multiplicity proton-proton collisions (Nature Publishing Group, 2017) At sufficiently high temperature and energy density, nuclear matter undergoes a transition to a phase in which quarks and gluons are not confined: the quark–gluon plasma (QGP) [1]. Such an exotic state of strongly interacting ...
Tabulating numbers Let $u(n,k)$ denote the number of upwards-pointing triangles of size $k$ included in a triangle of size $n$, where size is a short term for edge length. Let $d(n,k)$ likewise denote the number of down triangles. You can tabulate a few numbers to get a feeling for these. In the following table, row $n$ and column $k$ will contain two numbers separated by a comma, $u(n,k), d(n,k)$. $$\begin{array}{c|cccccc|c}n \backslash k &1 & 2 & 3 & 4 & 5 & 6 & \Sigma \\\hline1 & 1, 0 &&&&&& 1 \\2 & 3, 1 & 1,0 &&&&& 5 \\3 & 6, 3 & 3,0 & 1,0 &&&& 13 \\4 & 10, 6 & 6,1 & 3,0 & 1,0 &&& 27 \\5 & 15,10 & 10,3 & 6,0 & 3,0 & 1,0 && 48 \\6 & 21,15 & 15,6 & 10,1 & 6,0 & 3,0 & 1,0 & 78\end{array}$$ Finding a pattern Now look for patterns: $u(n, 1) = u(n - 1, 1) + n$ as the size change added $n$ upwards-pointing triangles $d(n, 1) = u(n - 1, 1)$ as the downward-pointing triangles are based on a triangle grid of size one smaller $u(n, n) = 1$ as there is always exactly one triangle of maximal size $d(2k, k) = 1$ as a downward triangle needs a containing triangle of at least twice its edge length $u(n, k) = u(n - 1, k - 1)$ by using the small $(k-1)$-sized triangle at the top as a representative of the larger $k$-sized triangle, excluding the bottom-most (i.e. $n$th) row. $d(n, k) = u(n - k, k)$ as the grid continues to expand, adding one row at a time. Using these rules, you can extend the table above arbitrarily. The important fact to note is that you get the same sequence of $1,3,6,10,15,21,\ldots$ over and over again, in every column. It describes grids of triangles of the same size and orientation, increasing the grid size by one in each step. For this reason, those numbers are also called triangular numbers. Once you know where the first triangle appears in a given column, the number of triangles in subsequent rows is easy.
Looking up the sequence Now take that sum column to OEIS, and you'll find this to be sequence A002717 which comes with a nice formula: $$\left\lfloor\frac{n(n+2)(2n+1)}8\right\rfloor$$ There is also a comment stating that this sequence describes the Number of triangles in triangular matchstick arrangement of side $n$. Which sounds just like what you're asking. References If you want to know how to obtain that formula without looking it up, or how to check that formula without simply trusting an encyclopedia, then some of the references given at OEIS will likely help you out: J. H. Conway and R. K. Guy, The Book of Numbers, p. 83. F. Gerrish, How many triangles, Math. Gaz., 54 (1970), 241-246. J. Halsall, An interesting series, Math. Gaz., 46 (1962), 55-56. M. E. Larsen, The eternal triangle – a history of a counting problem, College Math. J., 20 (1989), 370-392. C. L. Hamberg and T. M. Green, An application of triangular numbers, Mathematics Teacher, 60 (1967), 339-342. (Referenced by Larsen) B. D. Mastrantone, Comment, Math. Gaz., 55 (1971), 438-440. Problem 889, Math. Mag., 47 (1974), 289-292. L. Smiley, A Quick Solution of Triangle Counting, Mathematics Magazine, 66, #1, Feb '93, p. 40.
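As a sanity check, the counting rules and the closed formula can be cross-checked with a short Python sketch (the function names are ours; the counts follow from the triangular-number pattern described above):

```python
def u(n, k):
    """Upward-pointing triangles of size k in a triangle of size n.
    They sit on a triangular grid of side m = n - k + 1, so the count
    is the triangular number m * (m + 1) / 2."""
    m = n - k + 1
    return m * (m + 1) // 2 if m > 0 else 0

def d(n, k):
    """Downward-pointing triangles of size k: d(n, k) = u(n - k, k)."""
    return u(n - k, k)

def total(n):
    """All triangles, both orientations and all sizes, in a size-n triangle."""
    return sum(u(n, k) + d(n, k) for k in range(1, n + 1))

def formula(n):
    """A002717: floor(n (n + 2) (2 n + 1) / 8), via integer division."""
    return n * (n + 2) * (2 * n + 1) // 8

print([total(n) for n in range(1, 7)])  # matches the table: [1, 5, 13, 27, 48, 78]
```

Running `total` against `formula` for the first several dozen sizes reproduces the OEIS values exactly.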
The approximate fixed point property in Hausdorff topological vector spaces and applications 1. Departamento de Matemática, Universidade Federal do Ceará, Fortaleza, 60455-760, CE, Brazil Let $C$ be a compact convex subset of a Hausdorff topological vector space $(\mathcal{E},\tau)$ and $\sigma$ another Hausdorff vector topology in $\mathcal{E}$. We establish an approximate fixed point result for sequentially continuous maps $f: (C,\sigma)\to (C,\tau)$. As an application, we obtain the weak-approximate fixed point property for demicontinuous self-mappings of weakly compact convex sets in general Banach spaces and use this to prove new results in asymptotic fixed point theory. These results are also applied to study the existence of limiting-weak solutions for differential equations in reflexive Banach spaces. Keywords: Differential equations, approximate fixed point property, reflexive spaces, asymptotic fixed point theorem, Hausdorff topological vector spaces. Mathematics Subject Classification: Primary: 46H03; Secondary: 47H1. Citation: Cleon S. Barroso. The approximate fixed point property in Hausdorff topological vector spaces and applications. Discrete & Continuous Dynamical Systems - A, 2009, 25 (2): 467-479. doi: 10.3934/dcds.2009.25.467
An Example of a Retract that is NOT a Deformation Retract Let $X$ be a topological space and let $A \subseteq X$ be a topological subspace. Recall the following definitions: On the Retract Subspaces of a Topological Space page we said that $A$ is a retract of $X$ if there exists a continuous function (called a retract) $r : X \to A$ such that $r(a) = a$ for all $a \in A$. On the Deformation Retract Subspaces of a Topological Space page we said that $A$ is a deformation retract of $X$ if there exists a continuous function $r : X \to A$ such that $r(a) = a$ for all $a \in A$ and $\iota \circ r$ is homotopic to $\mathrm{id}_X$, where $\iota : A \to X$ is the inclusion map. We are now able to give an example of two topological spaces $A$ and $X$ where $A$ is a retract of $X$ but such that $A$ is not a deformation retract of $X$. Let $X = S^1\times S^1$ be the torus and let $A \subseteq X$ be the equatorial circle of $X$ depicted below: Showing that $A$ is a retract of $X$: Define a function $r : X \to A$ as follows. Cut $X$ up into vertical circle sections. Then each such circle intersects $A$ in exactly one point. Define $r(x)$ to be the unique point $a \in A$ lying on the same vertical circle as $x$. Alternatively, recall that the torus $X$ can be identified as a quotient topological space obtained by taking $[0, 1] \times [0, 1]$ and identifying opposite edges of the square with the same orientation. Then the circle $A$ can be viewed as a horizontal line segment in $[0, 1] \times [0, 1]$, and we map every point in $[0, 1] \times [0, 1]$ to its vertical projection onto this line segment: It is easy to see that $r$ is a continuous map: if $U$ is an open subset of $A$, then $r^{-1}(U)$ is a union of the vertical circles lying over $U$, which is open in $X$. Furthermore, we have that $r(a) = a$ for all $a \in A$, and so:(3) $r \circ \iota = \mathrm{id}_A$ So $A$ is a retract of $X$. Showing that $A$ is NOT a deformation retract of $X$: If $A$ were a deformation retract of $X$, the inclusion $\iota$ would induce an isomorphism of fundamental groups. But observe that:(4) $\pi_1(X) \cong \mathbb{Z} \times \mathbb{Z}, \quad \pi_1(A) \cong \mathbb{Z}$ These fundamental groups are not isomorphic and so $A$ cannot be a deformation retract of $X$.
Simply Connected Topological Spaces Recall from The Fundamental Group of a Topological Space at a Point page that if $X$ is a topological space and $x \in X$ then the fundamental group of $X$ at $x$ is:(1) $\pi_1(X, x) = \{ [\alpha] : \alpha \text{ is a loop based at } x \}$ With the group operation of homotopy class multiplication defined for all $[\alpha], [\beta] \in \pi_1(X, x)$ by:(2) $[\alpha] \cdot [\beta] = [\alpha * \beta]$ where $*$ denotes concatenation of loops. Sometimes the fundamental group of a topological space $X$ at a point $x$ is the trivial group consisting only of the homotopy class of the constant loop, $[c_x]$. Spaces for which the fundamental groups are all trivial are given a special name. Definition: Let $X$ be a topological space. Then $X$ is said to be Simply Connected if $\pi_1(X, x) = \{ [c_x] \}$ for every $x \in X$. The following theorem gives us some examples of simply connected spaces. Theorem 1: If $X$ is a convex topological subspace of $\mathbb{R}^n$ then $X$ is simply connected. For example, consider the closed unit disk $D^2 = \{ (x, y) \in \mathbb{R}^2 : x^2 + y^2 \leq 1 \}$ as a topological subspace of $\mathbb{R}^2$. Clearly $D^2$ is convex and so for every $\vec{x} \in D^2$:(3) $\pi_1(D^2, \vec{x}) = \{ [c_{\vec{x}}] \}$ So $D^2$ is simply connected.
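A sketch of why Theorem 1 holds: in a convex set, the straight-line homotopy contracts any loop to the constant loop.

```latex
% For a loop \alpha : [0,1] \to X based at x, define
H : [0,1] \times [0,1] \to X, \qquad H(s,t) = (1-t)\,\alpha(s) + t\,x.
% Convexity of X guarantees H(s,t) \in X. Since H(s,0) = \alpha(s),
% H(s,1) = c_x(s) = x, and H(0,t) = H(1,t) = x for all t,
% H is a basepoint-preserving homotopy from \alpha to c_x, so [\alpha] = [c_x].
```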
Comparing Two Interfaces for High-Frequency Modeling It is always important to choose the correct tool for the job, and choosing the correct interface for high-frequency electromagnetic simulations is no different. In this blog post, we take a simple example of a plane wave incident upon a dielectric slab in air and solve it in two different ways to highlight the practical differences and relative advantages of the Electromagnetic Waves, Frequency Domain interface and the Electromagnetic Waves, Beam Envelopes interface. Meshing Free Space in Two Electromagnetic Interfaces Both of these interfaces solve the frequency-domain form of Maxwell’s equations, but they do it in slightly different ways. The Electromagnetic Waves, Frequency Domain interface, which is available in both the RF and Wave Optics modules, solves directly for the complex electric field everywhere in the simulation. The Electromagnetic Waves, Beam Envelopes interface, which is available solely in the Wave Optics Module, will solve for the complex envelope of the electric field for a given wave vector. For the remainder of this post, we will refer to the Electromagnetic Waves, Frequency Domain interface as a Full-Wave simulation and the Electromagnetic Waves, Beam Envelopes interface as a Beam-Envelopes simulation. To see why the distinction between Full-Wave and Beam-Envelopes is important, we will begin by discussing the trivial example of a plane wave propagating in free space, as shown in the image below. We will then apply the lessons learned to the dielectric slab. A graphical representation of a plane wave propagating in free space, where the red, green, and blue lines represent the electric field, magnetic field, and Poynting vector, respectively. To properly resolve the harmonic nature of the solution in a Full-Wave simulation, we need to mesh finer than the oscillations in the field.
This is discussed further in these previous blog posts on tools for solving wave electromagnetics problems and modeling their materials. To simulate a plane wave propagating in free space, the number of mesh elements will then scale with the size of the free space domain in which we are interested. But what about the Beam-Envelopes simulation? The Beam-Envelopes method is particularly well-suited for models where we have good prior knowledge of the wave vector, \mathbf{k}. Practically speaking, this means that we are solving for the fields using the ansatz \mathbf{E}\left(\mathbf{r}\right) = \mathbf{E_1}\left(\mathbf{r}\right)e^{-j\mathbf{k_1}\cdot\mathbf{r}}. Notice that the only unknown in the ansatz is the envelope function \mathbf{E_1}\left(\mathbf{r}\right). This is the quantity that needs to be meshed to obtain a full solution, hence the mention of beam envelopes in the name of the interface. In the case of a plane wave in free space, the form of the ansatz matches exactly with the analytical solution. We know that the envelope function will be a constant, as shown by the green line in the figure below, so how many mesh elements do we need to resolve the solution? Just one. The electric field and phase of a plane wave propagating in free space. In the field plot (left), the blue and green lines show the real part and absolute value of E(r), which are real(\mathbf{E_1}\left(\mathbf{r}\right)e^{-j\mathbf{k_1}\cdot\mathbf{r}}) = E_1\cos(kr) and abs(\mathbf{E_1}\left(\mathbf{r}\right)e^{-j\mathbf{k_1}\cdot\mathbf{r}}) = E_1, respectively. The phase plot (right) shows the argument of E(r). In both plots, the x-axis is normalized to a wavelength, so this represents one full oscillation of the wave. In practice, Beam-Envelopes simulations are more flexible than the \mathbf{E}\left(\mathbf{r}\right) = \mathbf{E_1}\left(\mathbf{r}\right)e^{-j\mathbf{k_1}\cdot\mathbf{r}} ansatz we just used. This is for two reasons.
First, instead of specifying a wave vector, we can specify a user-defined phase function, \phi\left(\mathbf{r}\right) = \mathbf{k}\cdot\mathbf{r}. Second, there is also a bidirectional option that allows for a second propagating wave and a full ansatz of \mathbf{E}\left(\mathbf{r}\right) = \mathbf{E_1}\left(\mathbf{r}\right)e^{-j\phi_1\left(\mathbf{r}\right)} + \mathbf{E_2}\left(\mathbf{r}\right)e^{-j\phi_2\left(\mathbf{r}\right)}. This is the functionality that we will take advantage of in modeling the dielectric slab (also called a Fabry-Pérot etalon).

The points discussed here will come up again in the dielectric slab example, and so we highlight them again for clarity. The size of mesh elements in a Full-Wave simulation is proportional to the wavelength because we are solving directly for the full field, while the mesh element size in a Beam-Envelopes simulation can be independent of the wavelength because we are solving for the envelope function of a given phase/wave vector. You can greatly reduce the number of mesh elements for large structures if a Beam-Envelopes simulation can be performed instead of a Full-Wave simulation, but this is only possible if you have prior knowledge of the wave vector (or phase function) everywhere in the simulation. Since the degrees of freedom, memory used, and simulation time all depend on the number of mesh elements, this can have a large influence on the computational requirements of your simulation.

Meshing a Dielectric Slab in COMSOL Multiphysics

Using the 2D geometry shown below, we can clearly see the different waves that need to be accounted for in a simulation of a dielectric slab illuminated by a plane wave. On the left of the slab, we have to account for the incoming wave traveling to the right, as well as the reflected wave traveling to the left.
Because of internal reflections inside the slab itself, we have to account for both left- and right-traveling waves in the slab and, finally, the transmitted wave on the right. We also choose a specific example so that we can use concrete numbers. Let's make the dielectric slab an undoped silicon (Si) wafer that is 525 µm thick. We will simulate the response to terahertz (THz) radiation (i.e., submillimeter waves), which encompasses wavelengths of approximately 1 mm to 100 µm and is increasingly used for characterizing semiconductor properties. The refractive index of undoped Si in this range is a constant n = 3.42. We choose the domain length to be 15 mm in the direction of propagation.

The simulation geometry. Red arrows indicate incident and reflected waves. The left and right regions are air with n = 1 and the Si slab in the center has a refractive index n = 3.42. The x_i labels on the bottom denote the spatial locations of the interfaces. The slab is centered in the simulation domain, such that x1 = (15 mm – 525 µm)/2. Note that this image is not to scale.

For a 2D Full-Wave simulation, we set a maximum element size of \lambda/8n to ensure the solution is well resolved. The simulation is invariant in the y direction and so we choose our simulation height to be \lambda/(8\times3.42). Because we have constrained the wave to travel along the x-axis, we choose a mapped mesh to generate rectangular elements. The mesh will then be one mesh element thick in the y direction, with a mesh element size in the x direction of \lambda/8n, where n depends on whether the material is air or Si. Again, note that this is a wavelength-dependent mesh.

Before setting up the mesh for a Beam-Envelopes simulation, we first need to specify our user-defined phase function. The Gaussian Beam Incident at the Brewster Angle example in the Application Gallery demonstrates how to define a user-defined phase function for each domain through the use of variables, and we will use the same technique here.
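To get a feel for the wavelength scaling, here is a rough back-of-the-envelope sketch (not a COMSOL calculation) that counts 1D elements along the propagation direction under the \lambda/8n sizing rule, using the 15 mm domain, 525 µm slab, and n = 3.42 from the text; the helper name full_wave_elements is ours, and the counts are element estimates, not the DOF figures quoted later.

```python
import math

# Geometry from the example (SI units)
L_total = 15e-3      # domain length in the propagation direction
t_slab = 525e-6      # Si slab thickness
n_si = 3.42          # refractive index of undoped Si at THz frequencies
n_air = 1.0

def full_wave_elements(wavelength, per_wavelength=8):
    """Rough 1D element count for a Full-Wave mapped mesh sized at lambda/(8*n)."""
    h_air = wavelength / (per_wavelength * n_air)
    h_si = wavelength / (per_wavelength * n_si)
    return math.ceil((L_total - t_slab) / h_air) + math.ceil(t_slab / h_si)

for lam in (1e-3, 250e-6):
    print(f"lambda = {lam * 1e6:.0f} um -> ~{full_wave_elements(lam)} elements along x")
```

Halving the wavelength roughly quadruples nothing in 1D, of course; the count simply doubles per dimension, which is why the 2D DOF figures in the table below grow about fourfold.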
Referring to x_0, x_1, and x_2 in the geometry figure above, we define the phase function for a plane wave traveling left to right in the three domains as

\phi\left(x\right) = k_0\left(x - x_0\right), for x_0 \le x \le x_1
\phi\left(x\right) = k_0\left(x_1 - x_0\right) + n k_0\left(x - x_1\right), for x_1 \le x \le x_2
\phi\left(x\right) = k_0\left(x_1 - x_0\right) + n k_0\left(x_2 - x_1\right) + k_0\left(x - x_2\right), for x \ge x_2

where n = 3.42 and the first line corresponds to \phi in the leftmost domain, the second line is \phi in the Si slab, and the bottom line is \phi in the rightmost domain. We then use this variable for the phase of the first wave, and its negative for the phase of the second wave. Because we have completely captured the full phase variation of the solution in the ansatz, this allows a mapped mesh of only three elements for the entire model: one for each domain.

Let's examine what the mesh looks like in the Si slab for these two interfaces at two different wavelengths, 1 mm and 250 µm.

The mesh in the Si (dielectric) slab. From left to right, we have the Full-Wave mesh at 1 mm, the Full-Wave mesh at 250 µm, and the Beam-Envelopes mesh at any wavelength. Note that the Full-Wave mesh density clearly increases with decreasing wavelength, while the Beam-Envelopes mesh is a single rectangular element at any wavelength.

Yes, that is the correct mesh for the Si slab in the Beam-Envelopes simulation. Because the ansatz matches the solution exactly, we only need three total elements for the entire simulation: one for the Si slab and one each for the two air domains on either side of it. This is independent of wavelength. On the other hand, the mesh for the Full-Wave simulation is approximately four times denser at \lambda = 250 µm than at \lambda = 1 mm. Let's look at this in concrete numbers for the degrees of freedom (DOF) solved for in these simulations.

Wavelength Simulated | Full-Wave Simulation DOF Used | Beam-Envelopes Simulation DOF Used
1 mm | 4,134 | 74
250 µm | 16,444 | 74

The number of degrees of freedom (DOF) used at two different wavelengths for the Full-Wave and Beam-Envelopes simulations.

Again, it is important to point out that this does not mean that one interface is better or worse than another.
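The piecewise phase variable can be sketched in ordinary Python (in the COMSOL model it is defined with Variables; the function name phase, and the assumption that x_0 = 0 at the left boundary, are ours):

```python
import math

# Interface positions (SI units); x0 = 0 at the left boundary is our assumption,
# x1 and x2 are the left and right faces of the slab, as in the geometry figure
L_total = 15e-3
t_slab = 525e-6
x0 = 0.0
x1 = x0 + (L_total - t_slab) / 2
x2 = x1 + t_slab
n_si = 3.42

def phase(x, wavelength):
    """Accumulated phase k0 * (optical path length from x0) for a left-to-right wave."""
    k0 = 2 * math.pi / wavelength
    if x <= x1:
        return k0 * (x - x0)
    if x <= x2:
        return k0 * (x1 - x0) + n_si * k0 * (x - x1)
    return k0 * (x1 - x0) + n_si * k0 * (x2 - x1) + k0 * (x - x2)
```

The constant offsets carried into each branch make the phase continuous at x1 and x2, which is what lets the envelope stay smooth across the material interfaces.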
They are different techniques and choosing the appropriate option is an important simulation decision. However, it is fair to say that a Full-Wave simulation is more general, since we did not need to supply it with a wave vector or phase function. It can solve a wider class of problems than Beam-Envelopes simulations, but Beam-Envelopes simulations can greatly reduce the DOF when the wave vector is known. As we have seen in a previous blog post, memory usage in a simulation strongly depends on the number of DOF. Do not blindly use a Beam-Envelopes simulation everywhere though! Let's take a look at another example where we intentionally make a bad choice for the wave vector and see what happens.

Making Smart Choices for the Wave Vector

In the hypothetical free space example above, we chose a unidirectional wave vector. Here, we will do the same for the Si slab. It is important to emphasize that choosing a single wave vector where we know that the solution will be a superposition of left- and right-traveling waves is an exceptionally bad choice, and we do this here solely for demonstration purposes. Instead of using the bidirectional formulation with a user-defined phase function, let's naively choose a single "guess" wave vector of \mathbf{k_G} = n\mathbf{k_0} = \mathbf{k} and see what the damage is. Using our ansatz, inside of the dielectric slab we have

\mathbf{E_G}\left(\mathbf{r}\right)e^{-j\mathbf{k_G}\cdot\mathbf{r}} = \mathbf{E_1}e^{-j\mathbf{k}\cdot\mathbf{r}} + \mathbf{E_2}e^{j\mathbf{k}\cdot\mathbf{r}}

where the left-hand side is the solution we are computing, with envelope \mathbf{E_G}\left(\mathbf{r}\right), and the right-hand side is exact. Now, we manipulate the equation slightly to examine the spatial variation in the solution:

\mathbf{E_G}\left(\mathbf{r}\right) = \mathbf{E_1}e^{-j\left(\mathbf{k}-\mathbf{k_G}\right)\cdot\mathbf{r}} + \mathbf{E_2}e^{j\left(\mathbf{k}+\mathbf{k_G}\right)\cdot\mathbf{r}}

We intentionally chose the case where \mathbf{k_G} = \mathbf{k}, which means we can simplify to

\mathbf{E_G}\left(\mathbf{r}\right) = \mathbf{E_1} + \mathbf{E_2}e^{j\left(\mathbf{k}+\mathbf{k_G}\right)\cdot\mathbf{r}}

Since \mathbf{E_1} and \mathbf{E_2} are constants determined by the Fresnel relations at the boundaries of the dielectric slab, this means that the only spatial variation in the computed solution will come from \exp\left(j\left(\mathbf{k}+\mathbf{k_G}\right)\cdot\mathbf{r}\right).
The minimum mesh requirement in the slab is then determined by the "effective" wavelength of this oscillating term, which is half of the original wavelength. Not only have we made the Beam-Envelopes mesh wavelength dependent, but the required mesh in the dielectric slab for this choice of wave vector needs to be twice as dense as the mesh for a Full-Wave simulation. We have actually made the situation worse with the poor choice of a single wave vector for a simulation with multiple reflections. We could, of course, simply double the mesh density and obtain the correct solution, but that would defeat the purpose of choosing the Beam-Envelopes simulation in the first place. Make smart choices!

Simulation Results

Another practical question is: how do the results of a Full-Wave and a Beam-Envelopes simulation compare? They are both solving Maxwell's equations on the same geometry with the same material properties, and so the various results (transmission, reflection, field values) agree as you would expect. There are slight differences, though. If you want to evaluate the electric field of the right-propagating wave in the dielectric slab, you can do that in the Beam-Envelopes simulation. This is, of course, because we solved for both right- and left-propagating waves and obtained the total field by summing these two contributions. This could be extracted from the Full-Wave simulation in this case as well, but it would require additional user-defined postprocessing and may not be possible in all cases. It may seem counterintuitive that we actually have more information readily available from a Beam-Envelopes simulation, even though it is computationally less expensive. We must remember, however, that this is simply the result of solving the model using the ansatz we specified initially.
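The half-wavelength beat of the envelope under a single guess wave vector can be checked numerically. A minimal sketch with arbitrary illustrative amplitudes E1 and E2 (not values from the model):

```python
import cmath
import math

wavelength = 1.0     # arbitrary units
k = 2 * math.pi / wavelength
E1, E2 = 1.0, 0.4    # illustrative right- and left-traveling amplitudes in the slab

def envelope(x, k_guess=k):
    """Envelope left over after factoring exp(-j*k_guess*x) out of the exact field."""
    return E1 * cmath.exp(-1j * (k - k_guess) * x) + E2 * cmath.exp(1j * (k + k_guess) * x)

# With k_guess = k, the envelope oscillates as exp(j*2k*x): its period is
# wavelength/2, so the mesh must be twice as fine as for the original wavelength
```

Evaluating |envelope| along x shows a standing-wave pattern that repeats every half wavelength, which is exactly the "effective" wavelength argument made above.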
Concluding Thoughts on Interfaces for High-Frequency Modeling

We have examined the simple case of a dielectric slab in free space using both the Electromagnetic Waves, Frequency Domain and Electromagnetic Waves, Beam Envelopes interfaces. In comparing Full-Wave and Beam-Envelopes simulations, we showed that a Beam-Envelopes simulation can handle much larger simulations, but only in cases where we have good knowledge of the wave vector (or phase function) everywhere in the simulation. This knowledge is not required for a Full-Wave simulation, but the simulation must then be meshed on the order of a wavelength, as opposed to meshing the change in the envelope function in a Beam-Envelopes simulation.

It is also worth mentioning that most Beam-Envelopes meshes will need more than the three elements shown here. This was only possible here because we chose a textbook example with an analytical solution to use as a teaching model. For more realistic simulations, you can refer to the Mach-Zehnder Modulator or Self-Focusing Gaussian Beam examples in the Application Gallery.

Note that the Electromagnetic Waves, Frequency Domain interface is available in both the RF and Wave Optics modules, although with slightly different features. The Full-Wave simulation discussed in this post could be performed in either module, although the Beam-Envelopes simulation requires the Wave Optics Module. For a full list of differences between the RF and Wave Optics modules, you can refer to the specification chart for COMSOL Multiphysics products.
Short answer: Yes. The discrete logarithm can be attacked in a multitude of ways: baby-step giant-step (BSGS), Pollard's rho, Pohlig-Hellman, and the several variants of index calculus, the best of which currently is the number field sieve. Let $n$ be the order of the generator of our field $\mathbb{F}_p$; it is $n = p-1$. We are trying to find $x$ given $a$ and $b=a^x$ in the above field.

Pollard's Rho and BSGS

In baby-step giant-step, we are trying to find $i$ and $j$ such that $b(a^{-m})^i = a^j$, where $m = \lceil \sqrt{n} \rceil$. Once we find such a pair, the discrete logarithm $x = i m + j$ follows, as $b(a^{-m})^i = a^j \Leftrightarrow a^{x - m i} = a^j \Leftrightarrow x \equiv mi + j \pmod{n}$. To do so, we first compute a table of $a^j$ for all $j$ up to $m-1$. Then we iterate through all $i$ up to $m-1$, and compare $b(a^{-m})^i$ with the table entries. Ignoring arithmetic costs, the runtime of this method is at most $2(m-1) = O(\sqrt{n})$ (with the same space requirements).

Due to the large space requirements, BSGS is rarely used in practice. Instead, we turn to Pollard's rho. The crux of this method is to find a colliding nontrivial pair $(i,j)$ and $(k,l)$ such that $a^ib^j \equiv a^kb^l$. It follows that $x = \frac{k-i}{j-l} \pmod{n}$, since $a^i a^{xj} = a^k a^{xl} \Leftrightarrow a^{i + xj} = a^{k + xl} \Leftrightarrow i + xj \equiv k + xl \pmod{n}$. So rho comes down to finding a collision quickly. This can be done with various algorithms, Floyd's cycle-finding algorithm being the oldest and best known. The good news is that we can try to find a collision without an enormous table; the not-so-good news is that the algorithm is probabilistic, although the birthday paradox tells us we should expect a collision in about $\sqrt{n}$ steps.

In any case, these attacks are no good against a safe prime, where the order is large enough that $\sqrt{n}$ steps are computationally infeasible.
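As an illustration, baby-step giant-step fits in a few lines of Python. This is a sketch for small parameters only (the whole point of the answer is that it is infeasible for safe primes); note that pow(a, -m, p) for the modular inverse needs Python 3.8+.

```python
import math

def bsgs(a, b, p):
    """Baby-step giant-step: smallest x with a^x = b (mod p), a a generator of F_p*.
    O(sqrt(n)) time AND space, with n = p - 1."""
    n = p - 1
    m = math.isqrt(n) + 1
    # Baby steps: table of a^j for j = 0 .. m-1
    table = {}
    aj = 1
    for j in range(m):
        table.setdefault(aj, j)
        aj = aj * a % p
    # Giant steps: walk b, b*a^-m, b*a^-2m, ... until we hit the table
    am_inv = pow(a, -m, p)   # modular inverse of a^m (Python 3.8+)
    gamma = b
    for i in range(m):
        if gamma in table:
            return i * m + table[gamma]
        gamma = gamma * am_inv % p
    return None

print(bsgs(2, pow(2, 57, 101), 101))   # 2 generates F_101*, so this recovers 57
```

The dictionary is the $O(\sqrt{n})$ memory cost mentioned above; Pollard's rho trades it away by looking for a collision along a pseudorandom walk instead.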
Pohlig-Hellman

The Pohlig-Hellman approach relies on the observation that there is a homomorphism $\phi$ taking $a$ and $b$ from their group of order $n$ to the subgroup of order $p_i^{e_i}$ dividing $n$. In general, given $n = p_1^{e_1}p_2^{e_2}\ldots p_m^{e_m}$, $$\phi_{p_i^{e_i}}(a) = a^{n/p_i^{e_i}}$$ This allows us to compute the discrete logarithm of $\phi_{p_i^{e_i}}(a)$ and $\phi_{p_i^{e_i}}(b)$, which really is the discrete logarithm of $a$ and $b$ modulo $p_i^{e_i}$. From this observation, it is a matter of computing the logarithm modulo all prime divisors of $n$ (using the methods in the previous section) and combining the results using the Chinese remainder theorem. If $n$ has many small prime divisors, i.e., it is smooth, this method is very much faster than rho or BSGS. In a safe prime, however, this is not the case, since the order $n$ is the product $2q$ for a very large prime $q$. Pohlig-Hellman doesn't help much here.

Index Calculus

Index calculus is the basis for the best-performing algorithms to compute discrete logarithms modulo safe primes. Suppose we know the logarithms of $2$ and $3$; finding the logarithm of $12$ is easy: $\log_a 12 = 2 \log_a 2 + \log_a 3$, since $12$ factors into $2^2\times 3$. We can generalize this method to arbitrary elements. Start by defining the factor base, i.e., all the primes up to some bound $B$. Then, find the logarithms of all the elements of the factor base (this is the tricky part). Finally, factor $b$ into the factor base, and simply add all the logarithms corresponding to the factorization you find. If $b$ does not factor completely into the factor base, multiply $b$ by some known power of $a$ and try again.

Finding the logarithms of all the primes up to $B$ requires some trickery. It has two major steps: For $k_i \in [1..n]$, find (usually by sieving) at least $\pi(B)$ elements $a^{k_i}$ that factor completely into the factor base. Store both $a^{k_i}$ and its complete factorization.
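A toy Pohlig-Hellman for a squarefree group order, combining a BSGS subroutine with the CRT, might look like the following sketch (the lifting step needed for prime-power factors $p_i^{e_i}$ with $e_i > 1$ is omitted; pow(x, -1, q) needs Python 3.8+):

```python
import math

def subgroup_dlog(a, b, p, q):
    """Baby-step giant-step for a^x = b (mod p), where a has small order q."""
    m = math.isqrt(q) + 1
    table = {}
    aj = 1
    for j in range(m):
        table.setdefault(aj, j)
        aj = aj * a % p
    am_inv = pow(a, -m, p)
    gamma = b
    for i in range(m):
        if gamma in table:
            return (i * m + table[gamma]) % q
        gamma = gamma * am_inv % p
    return None

def pohlig_hellman(a, b, p, prime_factors):
    """Pohlig-Hellman sketch for squarefree n = p - 1 with the given prime factors."""
    n = p - 1
    x, M = 0, 1
    for q in prime_factors:
        # Project a and b into the subgroup of order q and solve there
        r = subgroup_dlog(pow(a, n // q, p), pow(b, n // q, p), p, q)
        # Fold the congruence x = r (mod q) into the running CRT solution
        t = (r - x) * pow(M, -1, q) % q
        x, M = x + M * t, M * q
    return x % n

print(pohlig_hellman(3, pow(3, 17, 31), 31, [2, 3, 5]))   # 30 = 2*3*5, recovers 17
```

For a safe prime $p = 2q + 1$ the factor list is just [2, q], so the subgroup problem of size $q$ is as hard as the original, which is the point made above.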
Now we have the linear system (modulo $n$):$$\begin{eqnarray}e_{1,1} \log_a 2 &+& e_{1,2} \log_a 3 &+& \ldots &+& e_{1,{\pi(B)}} \log_a p_{\pi(B)} &=& {k_1} \\e_{2,1} \log_a 2 &+& e_{2,2} \log_a 3 &+& \ldots &+& e_{2,{\pi(B)}} \log_a p_{\pi(B)} &=& {k_2} \\&&&&\ldots&&&& \\e_{{\pi(B)},1} \log_a 2 &+& e_{{\pi(B)},2} \log_a 3 &+& \ldots &+& e_{{\pi(B)},{\pi(B)}} \log_a p_{\pi(B)} &=& {k_{\pi(B)}} \\\end{eqnarray}$$ Solving the above linear system gives us the needed logarithms for the factor base. The runtime of this method, for an appropriate choice of $B$, is $\exp\left((2+o(1))(\log n)^{1/2}(\log \log n)^{1/2}\right)$. This is not strictly polynomial, but is a big improvement on the previous methods.

Number field sieve

The number field sieve is currently the best algorithm for both integer factorization and discrete logarithms over finite fields. For the discrete logarithm, it is analogous to the above index calculus, with a few major modifications: We are working in the number fields $\mathbb{Q}[\alpha]$ and $\mathbb{Q}[\beta]$ instead of the integers; there is, however, a map from such fields to the integers under some conditions. The number fields are defined by the polynomials $f_1$ and $f_2$ of degree $d_1$ and $d_2$; there must exist an integer $m$ such that $f_1(m) \equiv f_2(m) \equiv 0 \pmod{p}$. The factor base is formed by the primes in both $\mathbb{Q}[\alpha]$ and $\mathbb{Q}[\beta]$, up to bounds $B_1$ and $B_2$. During sieving, we look for pairs $(x,y)$ such that the norms $N_{f_1}(x + \alpha y)$ and $N_{f_2}(x + \beta y)$ are $B_1$- and $B_2$-smooth, respectively, where $N_{f_i}$ is given by $$N_{f_i}(x + \alpha y) = y^{d_i} f_i(x/y)$$

The speed of the number field sieve hinges on the speed of finding pairs $(x,y)$ with smooth norms $N_{f_i}(x + \alpha y)$. In turn, the probability of a norm being smooth is linked to its size: the smaller it is, the more likely it is to be smooth. And in turn, the size of the norm is determined by $x$ and $y$ (obviously), but also by the coefficients of $f_i$!
When $f_i$ has very small coefficients, the number field sieve becomes asymptotically faster, from $$\exp\left((1.923 + o(1))(\log n)^{1/3}(\log \log n)^{2/3}\right)$$ to $$\exp\left((1.526 + o(1))(\log n)^{1/3}(\log \log n)^{2/3}\right).$$ When one chooses a prime $p$ very close to $2^k$, it becomes easy to find a sparse polynomial that has a root modulo $p$. In your example, $p = 2^{2048} - 1942289$, we can find the degree $8$ polynomial $$x^8 - 1942289,$$ since $2^{2048} - 1942289 = (2^{256})^8 - 1942289$. This polynomial has very small coefficients, which render the number field sieve for the discrete logarithm in this field much faster than it would be for a random 2048-bit prime.
Those aren't definitions; they're just facts about how $\cos$ and $\sin$ behave in the second quadrant. Those features follow from how the trig functions are defined using the unit circle: If $(x,y)$ is the point on the unit circle obtained by rotating counterclockwise by $\theta$ radians from the point $(1, 0)$, then $\cos(\theta) := x$ and $\sin(\theta) := y$. These definitions are built specifically to agree with the "SOH CAH TOA" definitions for $\theta$ between $0$ and $\pi/2$; because the hypotenuse is 1, by definition, the sine of an angle is simply equal to the opposite side, which is $y$, and the cosine of an angle is equal to the adjacent side, which is $x$. For $\theta$ in the interval from $\pi/2$ to $\pi$ (i.e., $\theta$ an obtuse angle), $(x, y)$ is in the second quadrant, where $x<0$ and $y>0$. Reflecting across the $y$-axis corresponds to taking the supplementary angle, and this reflection negates $x$ and leaves $y$ unchanged. Hence the cosine of $\theta$ is given by the negative of the cosine of the supplementary angle to $\theta$, and the sine is equal to the sine of the supplementary angle. Since the supplementary angle is given by $\pi-\theta$, these two facts can be summarized by the equations $$\cos(\pi-\theta) = -\cos(\theta)$$ and $$\sin(\pi-\theta)=\sin(\theta),$$ both of which are special cases of the more general angle addition and subtraction formulas.
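A quick numerical spot-check of the two supplementary-angle identities and the unit-circle definition:

```python
import math

# Spot-check cos(pi - t) = -cos(t) and sin(pi - t) = sin(t) at a few obtuse angles
for theta in (1.7, 2.0, 3.0):   # radians, all between pi/2 and pi
    assert math.isclose(math.cos(math.pi - theta), -math.cos(theta))
    assert math.isclose(math.sin(math.pi - theta), math.sin(theta))

# The unit-circle definition itself: the point (cos t, sin t) lies on x^2 + y^2 = 1
assert math.isclose(math.cos(2.0) ** 2 + math.sin(2.0) ** 2, 1.0)
```

Of course, both identities hold for every real angle, not just obtuse ones; the obtuse values are checked here because that is the case the answer discusses.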
Garabedian, typically the "swap curve" refers to an x-y chart of par swap rates plotted against their time to maturity. This is typically called the "par swap curve." Your second question, "how it relates to the zero curve," is very complex in the post-crisis world. I think it's helpful to start the discussion with a government bond yield curve to ...

I'd like to present to you a slightly different approach: historically, only one single yield curve was derived from different instruments, such as OIS, deposit rates, or swap rates. However, market practice nowadays is to derive multiple swap curves, optimally one for each rate tenor. This idea goes against the idea of one fully consistent zero coupon curve, ...

You can't make any concrete statements about the monotonicity, convexity, or even sign of the yield curve. Yields are almost always positive, and in the past (2007 and earlier) you could find people who would argue that yields must be positive, typically using a no-arbitrage argument. But recent history has shown us that it is possible for even 10Y yields to ...

To explain why a negatively sloping yield curve is bad, you have to start with a theory of the yield curve. The dominant theories for the term structure of interest rates are rational expectations, liquidity preference, and market segmentation. (The first two theories are quite compatible with each other and have more standing, so let's assume that view.) ...

There are two parts to your question and I'd like to answer them separately. Curve construction: on a daily basis, you can observe prices on a large variety of instruments, whose prices are driven by news and trading flows. Based on market prices of these instruments, there are a number of ways to create discount curves/forward curves. At a very high level (...

(In addition to the answers of Freddy and Phil H): with "modern" multi-curve setups, you have to distinguish between discount curves (which describe today's value of a future fixed payoff (e.g.
a zero coupon bond)) and forward curves, which describe the expectation (in a specific sense) of future interest rate fixings. Swaps pay LIBOR rates and are ...

You should take a look at the example from Hull's book. Assume that the 6-month, 12-month, and 18-month zero rates are 4%, 4.5%, and 4.8%, respectively. Suppose we know that the 2-year swap rate is 5%, which implies that a 2-year bond with a semiannual coupon of 5% per annum sells for par: $$2.5 e^{-0.04 \bullet 0.5} + 2.5 e^{-0.045 \bullet 1.0}+ 2.5 e^{-...

Your overall approach is correct. However, to my knowledge it is formally more appealing to work with a parameterized and smoothed yield curve. Basically, one assumes that the yield curve can be described by a smooth function $r(t,\alpha, \beta,\gamma)$ (mostly of three parameters). Given a set of market data $Y(t,T_1)\dots Y(t, T_n)$ one looks for ...

In the beginning, we had a plot of yields of individual bonds against time to maturity, the crudest form of "yield curve." Years later, people began hand-drawing a smoothed line through these yields as closely as possible. Because bonds have different coupon rates, making their yields hard to compare, people tend to draw the curve through bonds trading ...

To elaborate on Freddy's answer: these days you need to maintain a separate funding (usually OIS) curve to your Libor-type curves. Once you have this discounting curve, you can calculate from Libor instrument market data what the market estimations of that Libor are: 3m instruments like interest rate futures, IRS with a 3m float leg, and 3m FRAs can be used to ...

The original Nelson-Siegel paper describes a parsimonious model of the term structure using only four or three (if $\lambda_t$ is fixed) parameters. Filipovic (1999) proves that this model can never be used in an arbitrage-free context; paraphrasing the abstract: we introduce the class of consistent state space processes, which have the property to provide an ...

There are many reasons why a yield curve can be inverted.
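The bootstrap step from Hull's example above can be carried out numerically; under continuous compounding, the implied 2-year zero rate comes out near 4.953% (the variable names below are ours):

```python
import math

# Known zero rates (continuously compounded) at the first three semiannual dates
zeros = {0.5: 0.040, 1.0: 0.045, 1.5: 0.048}
swap_rate = 0.05                 # 2-year swap rate => coupon of a 2-year par bond
coupon = 100 * swap_rate / 2     # 2.5 per half-year

# Par condition: discounted coupons plus the final (100 + coupon) payment equal 100
pv_coupons = sum(coupon * math.exp(-r * t) for t, r in zeros.items())
df_2y = (100 - pv_coupons) / (100 + coupon)   # implied 2-year discount factor
r_2y = -math.log(df_2y) / 2.0                 # implied 2-year zero rate

print(round(r_2y * 100, 3))   # prints 4.953 (percent)
```

This is exactly the logic of the truncated equation in the quote: every cash flow except the last is discounted at a known zero rate, so the par condition pins down the one unknown discount factor.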
A default-free yield curve reflects a combination of: market expectations of future short-term interest rates; the bond risk premium (usually positive: longer-duration bonds are more volatile and riskier, so investors demand compensation in the form of higher yields); and convexity. Let's consider a case ...

The short answer is that Libor swap rates come from the market. They represent a series of cashflows in the future whose value is determined by the fixing, which the market participants have their own valuations of. Since the actual cash flows are now discounted using a separate funding curve, the swap prices embed both a prediction of future fixings and a ...

Within the fixed income space, there's a lot of literature on PCA trading. The first 2-3 principal components (PCs) can typically explain 90-99% of the total variance in yield curve movements. It's also nice because the first PC looks like a change in the overall level of the yield curve, the second PC looks like a slope change, while the third ...

The short answer is that using 2y/10y is not a requirement and many other combinations are commonly used (e.g., 3m/10y, 1y/10y, fed funds/10y). According to a note published by the New York Fed: with regard to the short-term rate, earlier research suggests that the three-month Treasury rate, when used in conjunction with the ten-year Treasury rate, ...

Given a forward rate, for example $F(t, T, T+\delta)$, the instantaneous forward rate $f(t,T)$ fixed in $t$ is the limit when $\delta \rightarrow 0$ of your forward rate. If the relation between the forward rate and zero coupon bonds is $F(t,T,T+\delta) = \frac{p(t,T) - p(t,T+\delta)}{\delta p(t,T+\delta)}$, we have \begin{equation}f(t,T) = \lim_{\...

You should use the full yield curve, discounting cash flows at specific dates using the appropriate zero-coupon interest rate. As to which yield curve, that is often a matter of convention.
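The limit defining the instantaneous forward rate is easy to verify numerically. A sketch with an illustrative discount curve p(t,T) of our own choosing (the quadratic exponent is arbitrary, chosen only so the forward curve is not flat):

```python
import math

# Illustrative discount curve: p(t,T) = exp(-0.03*tau - 0.001*tau^2), tau = T - t
def p(t, T):
    tau = T - t
    return math.exp(-0.03 * tau - 0.001 * tau * tau)

def simple_forward(t, T, delta):
    """F(t, T, T+delta) = (p(t,T) - p(t,T+delta)) / (delta * p(t,T+delta))"""
    return (p(t, T) - p(t, T + delta)) / (delta * p(t, T + delta))

# As delta -> 0 the simple forward tends to the instantaneous forward
# f(t,T) = -d/dT ln p(t,T), which for this curve is 0.03 + 0.002*(T - t)
print(simple_forward(0.0, 2.0, 1e-6))
```

Shrinking delta drives the simple forward toward the analytical value 0.034 at T = 2, which illustrates the limit in the quoted answer.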
Generally one uses the LIBOR/swaps curve for all but the most liquid products (in which case you use the Treasury curve). The curve is constructed from LIBOR/Eurodollar ...

The Macaulay duration is a measure of how sensitive a bond's price is to changes in interest rates. Duration is related to, but differs from, the slope of the plot of bond price against yield-to-maturity. The slope of the price-yield curve is $-\frac{D}{1+r}P$, where $D$ is the Macaulay duration, $P$ is the bond price, and $r$ is the yield. Here's how the definition ...

OK, I've done some digging in the code. It's an issue with the LogLinear interpolation; while trying to find the correct rate for the 1-week node, the bootstrapper wanders unchecked into a region of negative rates and the logarithms blow up. At this time, I'm afraid the workaround is just to use some other interpolation. Or recompile the library and the ...

Your observations are pretty much correct. The groupings are because of the fine print "Note how I have expanded the drift and volatility terms at $t = T$; in the above these are evaluated at $r$ and $T$." on the same page (p. 528). Basically, $w$ is a function of both $r$ and $t$. Since we want to use $w(r,T)$ instead of $w(r,t)$, we Taylor expand $w(r,t)$ ...

There's no class at this time to add two curves as you want, but it won't be very difficult to write one. The closest you'll get in the library is the ZeroSpreadedTermStructure class, which shows the general idea: it inherits from YieldTermStructure (by way of ZeroYieldStructure), takes a YieldTermStructure and a spread (constant, in this case), and overrides ...

QuantLib supports a multi-curve framework (to the best of my knowledge). By the way, there's a "newer" version of that paper (authored by Pallavicini & Brigo): http://arxiv.org/abs/1304.1397 This paper might also be useful for you; it is very practical and basically answers any question you could have. Also see this discussion about multi-curve discounting ...
The NS model should be fit directly to bond prices. If you have the prices of all the Treasuries, you should use those directly. See this paper for how the Fed does it: http://www.federalreserve.gov/pubs/feds/2006/200628/200628pap.pdf The "Daily Treasury Yield Curve Rates" are already fitted par yields (they're fitted using a cubic spline model to on-the-run ...

This is what banks have been doing for hundreds of years. They borrow short term (mainly through deposits and interbank lending) and lend long term (e.g., mortgages). I would not call it arbitrage, as it is not riskless profit. Apart from credit risk and interest rate risk, there is also liquidity risk. In these types of strategies, the investor has to ...

fixedLegBPS is the basis-point sensitivity of the fixed leg, that is, how much its NPV changes when the fixed rate changes by one basis point: it's calculated as the NPV corresponding to a fixed rate of 1 bps. Since the NPV of the fixed leg is linearly proportional to the fixed rate, you can write the proportion targetNPV : fixedRate = BPS : 1 basis point ...

There is a liquidity premium between on-the-run Treasury issues and off-the-run issues with similar characteristics. This is why, when building a yield curve, on-the-run issues are typically used to compute this curve as a representation of the risk-free rate. Depends on what you're using the curve for. In practice, it is far more prevalent to use only OFF-...

1. Observable instruments, spot rates, and forward rates. First remember that something being observable means that you can observe/find the rate in the market by looking at traded rate instruments or fixings. 1.1. Observed spot rates. For simplicity, assume zero coupon bonds (ZCBs) are traded with time left to maturity of 10Y, 15Y, and 20Y. Hence, by observing ...

The answer to your first four questions is affirmative.
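The proportion in the fixedLegBPS answer can be sketched with a toy linear fixed leg (the annuity value A and the target NPV below are illustrative numbers of ours, not QuantLib output):

```python
# Toy linear fixed leg: NPV(c) = A * c, with A the annuity
# (sum of discount factors times accrual fractions). Values are illustrative.
A = 3.9
bp = 1e-4                        # one basis point

def fixed_leg_npv(c):
    return A * c

bps = fixed_leg_npv(bp)          # basis-point sensitivity = NPV at a 1 bp fixed rate
target_npv = 0.117
# Proportion targetNPV : fairRate = BPS : 1 bp  =>  fairRate = targetNPV * bp / BPS
fair_rate = target_npv * bp / bps
print(fair_rate)                 # approximately 0.03
```

Because the fixed-leg NPV is linear in the coupon, the rate that reproduces any target NPV follows from this single proportion, which is how the library can back out a fair rate from one BPS evaluation.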
Option-adjusting the spread makes an equivalence between everything theoretically possible, but the quality of the results depends significantly on the quality of your interest rate model and its calibration. My personal opinion, though, is that the results need to be treated carefully because the OAS ...

It's hard to be sure without seeing the inputs, but I'm guessing that the implied curve changes shape because the original curve does (which you can see from your output: except for the 1-year and 5-year points, the actual discounts are different). The reason the original curve changes is probably the different position of weekends or holidays (so that, ...
How to Perform Various Rotor Analyses in the COMSOL® Software

Vibration in rotating machinery is very sensitive to the geometric, structural, and inertial properties of the various rotating and stationary components interacting with each other. These properties include the location of the mounted components and their inertial properties, bearing characteristics, and shaft properties. To understand the effects of these parameters, start with a simple model and perform various analyses to correlate the rotor response within the same model. Let's demonstrate this process with a simply supported beam rotor example.

2 Analysis Types for a Simply Supported Beam Rotor System

The rotor system in this example is a simple rotor with a uniform cross section throughout its length. It is supported at both ends by bearings, and there are three mounted components, called disks, at different locations along the rotor. You can model this rotor using the Beam Rotor interface in the COMSOL Multiphysics® software. The inertial properties and offset of the rotor components are modeled with the Disk node. The bearing support is modeled by an equivalent stiffness-based approach via the Journal Bearing node provided in the Beam Rotor interface. For more information about the geometric properties and model setup, check out the references in the model documentation.

Geometry of the beam rotor example.

Two types of analyses are commonly used to study rotor vibration characteristics: eigenfrequency and time-domain analyses. As mentioned in a previous blog post, critical speeds of the rotor strongly depend on the rotor's angular speed. Therefore, while performing the eigenfrequency analysis, you need to consider the variation in the rotor speed to get the correct critical speeds. A time-domain analysis is performed when you want to look at the system response under time-varying excitation.
Now, let's look at what type of information each analysis provides as well as the steps involved in performing these analyses.

Eigenfrequency Analysis of a Rotor

Eigenfrequency analysis is used to determine the natural frequencies of a system. In a rotordynamics scenario, this analysis can be used in two different ways. First, for a system whose operating speed is not fixed, you can perform an eigenfrequency analysis over the range of operating speeds and choose the speed that is furthest from the critical speeds of the system and meets other design considerations. If you cannot find a suitable operating speed for the current system, you might need to make certain design modifications in the system to get a stable operating speed that meets all of the requirements.

In the second type of analysis, the operating speed of the system is fixed. In such a case, you need to perform an eigenfrequency analysis at the given operating speed to check that none of the natural frequencies of the system are close to the operating speed. If any of the natural frequencies fall close to the operating speed, design modifications are a must. Design modifications in a rotor system require an understanding of what kind of modifications will produce the desired effect and at what cost. This is where simulating simple systems to understand the effect of design modifications is very helpful. Simulation can provide guidelines for design modifications, thus reducing the number of iterations in the design process.

Consider the first case, in which the operating speed of the system is not fixed, to understand the analysis steps. In this case, you need to perform a parametric eigenfrequency analysis for the angular speed of the rotor. This requires two steps in the Study node: Parametric Sweep and Step 1: Eigenfrequency, shown below on the left.
Settings for the Parametric Sweep node for a sweep over a parameter Ow representing the angular speed of the rotor are shown below in the center. This parameter is used as an input in the Rotor Speed section of the Beam Rotor node settings, shown below on the right. After performing the analysis, you get a whirl plot of the rotor as the default, shown below. The whirl plot shows the whirling orbit and the deformed shape of the rotor for the given rpm and natural frequency combinations. Whirl plot of the rotor. The deformed shape of the rotor also gives you an idea of how strongly the natural frequency will depend on the angular speed of the rotor. If the disks move away from the rotation axis without significant tilting, then the split in the frequency in the backward whirl (opposite to the spin) and forward whirl (same direction as the spin) is not significant. Alternatively, if the disks do not move significantly far from the rotation axis and rather have significant tilting, then the split in the frequency of the backward and forward whirl is noticeable. To understand this concept in depth, you can plot the variation of the natural frequency for different modes against the angular speed of the rotor, which is often called a Campbell diagram. The Campbell plot for the simply supported rotor example is shown below. You can see the strong divergence of the eigenfrequencies with rotor speed for certain modes; whereas for others, particularly the modes with low natural frequencies, the divergence is not significant. If you look at the mode shapes corresponding to these frequencies, they confirm the behavior previously discussed. Critical speeds of the rotor can be obtained from the Campbell plot by looking at the intersection of the natural frequency vs. angular speed curve with ω = Ω curve. These are the speeds near which a rotor should not be operated, unless sufficiently damped. Campbell plot of the simply supported rotor system. 
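The backward/forward frequency split that the Campbell diagram visualizes can be reproduced with a minimal gyroscopic eigenvalue model. This is a sketch, not the COMSOL model from the post: the inertia and stiffness values below are arbitrary illustrative assumptions for a single rigid disk tilting on an isotropic elastic support.

```python
import numpy as np

def whirl_frequencies(Omega, Jd=0.1, Jp=0.05, k_t=100.0):
    """Natural whirl frequencies of a rigid disk tilting on an isotropic
    elastic support while spinning at angular speed Omega.
    Jd, Jp, k_t are illustrative values, not the blog post's model data."""
    M = np.diag([Jd, Jd])                       # diametral inertia
    G = Omega * Jp * np.array([[0.0, 1.0],
                               [-1.0, 0.0]])    # gyroscopic coupling
    K = np.diag([k_t, k_t])                     # tilt stiffness
    # First-order (state-space) form z = [q, q'] with z' = A z
    A = np.block([[np.zeros((2, 2)), np.eye(2)],
                  [-np.linalg.solve(M, K), -np.linalg.solve(M, G)]])
    w = np.abs(np.linalg.eigvals(A).imag)
    return np.sort(np.unique(np.round(w, 6)))

# Campbell-diagram data: at rest the two modes coincide; with spin the
# backward whirl frequency drops and the forward whirl frequency rises.
for Omega in (0.0, 10.0, 20.0):
    print(Omega, whirl_frequencies(Omega))
```

Sweeping `Omega` and plotting the returned frequencies against it gives exactly the kind of diverging curves described above; the crossings with the ω = Ω line are the critical speeds.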
The damping in the respective modes can be assessed by plotting the logarithmic decrement against the angular speed of the rotor. The logarithmic decrement is defined as $$\delta = \ln\frac{A(t)}{A(t+T)},$$ where $A(t)$ is a time-varying response and $\omega$ is the complex eigenfrequency of the system. $T$ is the time period, given by $T = \frac{2\pi}{\Re(\omega)}$. Logarithmic decrement for different bending modes in the simply supported rotor system. In the plot above, you can see the logarithmic decrement variation for the different bending modes with the angular speed for the simply supported rotor. The labels ‘b’ and ‘f’ denote the backward and forward whirl modes, respectively. A logarithmic decrement of zero means that the system is undamped, a negative value indicates an unstable system, and a positive value indicates a stable system. You can also note the pattern change for some of the curves. The reason is that the modal data is arranged in increasing order of the natural frequency. But we know that the rotor’s natural frequencies decrease in the backward whirl modes and increase in the forward whirl modes. Due to this, there is a crossover of the natural frequencies between the higher backward whirl and lower forward whirl modes beyond a certain angular speed. This upsets the initial order of the modes, resulting in a switching of the patterns across the crossover points. Time-Dependent Analysis of a Rotor Eigenfrequency analysis gives the characteristics of the rotor system operating at steady state. However, before and after reaching the steady state, during the run-up and run-down, the angular speed of the rotor varies with time. In certain cases, the operating speed might be above the first few natural frequencies of the rotor. Therefore, during the run-up and run-down, the rotor will cross over the corresponding critical speeds. Also, there could be nonharmonic time-varying external excitation acting on the rotor.
In such cases, the rotor response cannot be completely determined by an eigenfrequency or frequency-domain analysis. Rather, you need a time-dependent simulation to study the response of the system. You can also perform a time-dependent analysis of the rotor at different angular speeds by performing a parametric sweep to see how the angular speed governs the response. An obvious extension of such an analysis is to evaluate the frequency spectrum of the time-dependent response of the rotor for all of the angular speeds and analyze which combinations of the angular speed and frequency result in a high-amplitude response. A waterfall plot shows the response amplitude vs. angular speed and frequency and gives the distribution of the modal participation in the response at different speeds. Such an analysis can be set up using the three steps in the Study node, as shown below. Steps for a waterfall plot analysis. The Parametric Sweep study step is used to sweep the angular speed, a Time Dependent study step is used to perform a time-dependent analysis corresponding to each angular speed in the parametric sweep, and a Time to Frequency FFT study step takes the fast Fourier transform of the time-dependent data to convert it into the frequency spectrum. In the eigenfrequency analysis, bearings are modeled using constant stiffness and damping coefficients. However, in reality, these coefficients are strongly dependent on the journal motion. To highlight the effect of the nonlinearity in the time-domain analysis, a plain journal bearing model is used instead of the constant bearing coefficients. This model is based on the analytical solution of the Reynolds equation in the short bearing approximation. The system in this case is self-excited through the eccentric mounting modeled as a disk. To simplify the system, only the second disk is considered, with a small eccentricity in the local y direction.
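The role of the FFT step can be mimicked for a single rotor speed with a synthetic signal. This is only a sketch of the spectrum computation; the frequencies and amplitudes below are made-up values for illustration, not the model's data.

```python
import numpy as np

def amplitude_spectrum(signal, dt):
    """One-sided amplitude spectrum of a uniformly sampled time signal,
    i.e. the kind of slice a Time to Frequency FFT step produces at one
    rotor speed of the parametric sweep."""
    n = len(signal)
    amp = np.abs(np.fft.rfft(signal)) / n
    amp[1:] *= 2.0                       # fold in the negative frequencies
    return np.fft.rfftfreq(n, d=dt), amp

# Synthetic response at an assumed 3000 rpm (50 Hz): a 1X synchronous
# component plus a weaker subsynchronous whirl at 0.45X
dt, n = 1e-3, 4096
t = np.arange(n) * dt
x = 1.0 * np.sin(2 * np.pi * 50.0 * t) + 0.4 * np.sin(2 * np.pi * 22.5 * t)
freq, amp = amplitude_spectrum(x, dt)
print(freq[np.argmax(amp)])              # dominant peak near 50 Hz
```

Stacking such spectra for every swept angular speed gives the amplitude-vs.-speed-vs.-frequency data that the waterfall plot displays.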
The waterfall plot of the z-component of the displacement is shown in the figure below. You can observe three peaks clearly in the spectrum. The third peak, which falls along the ω = Ω curve, corresponds to a 1X synchronous whirl. This is in response to the centrifugal force due to eccentricity changing its direction with the rotation of the shaft. Other peaks correspond to the orbiting of the rotor due to the complex rotor bearing interaction. The reason is that the forces from the pressure distribution around the journal in the bearing have a cross-coupling effect with the journal motion. In other words, the motion of the journal in one of the lateral directions induces a component of the force in the lateral direction perpendicular to it. The effect of this phenomenon is a net force acting on the rotor in the direction of the forward whirl. This causes the subsynchronous orbiting of the rotor. A waterfall plot shows the response amplitude vs. the angular speed and frequency of the rotor. The orbit of the different locations along the length of the rotor at 30,000 rpm is shown below. The orbit curve changes its color from green to red with time. You can see that after the initial transient phase, the rotor undergoes a forward circular whirl in the steady state. Also, the second bending mode has the highest participation in the response. Orbit of the rotor at different locations. The plot changes from green to red with time. The time variation of the z-direction displacement of a point on the rotor at 30,000 rpm is shown below. Apart from the high-frequency variation, there is also a low-frequency component that envelops the response, but gets damped out with time. Time variation of the z -displacement. With this tutorial model, we have demonstrated the approach to set up different analyses in a rotor system, as well as how to plot and analyze the simulation results. Ready to give this tutorial a try? 
Simply click on the button below to access the MPH-file via the Application Gallery or open it via the Application Library in the COMSOL® software. Learn More About Analyzing Rotordynamics Applications Learn about the features included with the Rotordynamics Module See how to model a reciprocating engine to optimize its design
Path Connectivity of the Range of a Path Connected Set under Continuous Functions Suppose that we have two topological spaces $X$ and $Y$, and that $f : X \to Y$ is continuous. If $X$ is a path connected space, then we should expect that the range, $f(X)$, is also path connected. If we take any two points $x$ and $y$ in the range, then there exist two points $u$ and $v$ in the domain that are mapped to $x$ and $y$ respectively. Now consider a path from $u$ to $v$ in $X$. If we take the image of all the points in this path, then since $f$ is continuous, the resulting set of points forms a path from $x$ to $y$. We prove this result in the following theorem. Theorem 1: Let $X$ and $Y$ be topological spaces and let $f : X \to Y$ be continuous. If $X$ is path connected then $f(X)$ is path connected in $Y$. Proof: Let $x, y \in f(X)$. Then there exist $u, v \in X$ such that $f(u) = x$ and $f(v) = y$. Since $X$ is path connected, there exists a path $\alpha : [0, 1] \to X$ such that $\alpha(0) = u$ and $\alpha(1) = v$. We define a path $\beta : [0, 1] \to f(X)$ from $x$ to $y$ as: $$\beta = f \circ \alpha$$ Then $\beta$ is continuous since $f$ and $\alpha$ are continuous. Furthermore, we see that: $$\beta(0) = f(\alpha(0)) = f(u) = x \quad \text{and} \quad \beta(1) = f(\alpha(1)) = f(v) = y$$ Thus $f(X)$ is path connected. $\blacksquare$
I understand this question is about two years old, but it was recently poked by Community and so came to my attention. I'd solve this problem using the relationship between a survival function of a non-negative random variable and its expectation. Let $X$ be a non-negative random variable (here $X$ is total sales). Then $EX = \int_0^\infty (1-F(x))\ dx$, where $F(x) = P(X \leq x)$ denotes the cumulative distribution function (CDF) of $X$. From the data you have, we can see that $P(100 \leq X \leq 200) = 1$ (implying $X$ is non-negative). So, we have$$\begin{aligned}EX &= \int_{0}^{\infty} (1-F(x)) \ dx \\&= \int_{0}^{200} (1-F(x)) \ dx \qquad \text{(since } 1 - F(x) = P(X > x) = 0 \text{ for } x \geq 200) \\ &= (200 - 0) - \int_{100}^{200} F(x) \ dx \\&= 200 - \left[\int_{100}^{105} F(x) \ dx + \int_{105}^{110} F(x) \ dx + \dots + \int_{171}^{200} F(x) \ dx \right] \end{aligned}$$ Since we don't know the CDF completely, all we can do is some approximation. A simple (albeit crude) approximation is given by the composite trapezoidal method. Here we approximate each of the above sub-integrals using the Trapezoidal rule, which says that the definite integral $$\int_{a}^{b} F(x) \, dx \approx (b-a) \cdot \frac{F(a)+F(b)}{2}.$$ So, e.g., $\int_{100}^{105} F(x) \ dx \approx (105 - 100) [F(100) + F(105)]/2 = 5 * (0 + 0.05)/2 = 0.125$. An R implementation of this approximation scheme is as follows: x <- c(100, 105, 110, 116, 122, 128, 134, 141, 148, 155, 163, 171, 200) Fx <- c(0, 0.05, 0.10, 0.15, 0.20, 0.25, 0.35, 0.45, 0.55, 0.65, 0.75, 0.90, 1) n <- length(Fx) EX.approx <- 200 - sum(diff(x) * (Fx[-n] + Fx[-1])/2) EX.approx # [1] 144.3 Another approximation will be obtained by first linearly interpolating $F$ in $[100, 200]$ based on the data we have, and then doing a numerical integration over that interpolated function. 
Here is an R implementation based on this interpolation approach: Fx_smooth <- approxfun(x = x, y = Fx, method = "linear") EX.approx2 <- 200 - integrate(Fx_smooth, lower = 100, upper = 200)$value EX.approx2 # [1] 144.2997
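The composite trapezoidal computation above can also be cross-checked in Python. This is a re-implementation of the first R snippet with the same data from the answer, not a new method:

```python
# Data from the answer: sales values x and CDF values Fx at those points
x  = [100, 105, 110, 116, 122, 128, 134, 141, 148, 155, 163, 171, 200]
Fx = [0, 0.05, 0.10, 0.15, 0.20, 0.25, 0.35, 0.45, 0.55, 0.65, 0.75, 0.90, 1]

# E[X] = 200 - \int_{100}^{200} F(x) dx, with the integral approximated
# by the composite trapezoidal rule over the given breakpoints
area = sum((b - a) * (Fa + Fb) / 2
           for a, b, Fa, Fb in zip(x, x[1:], Fx, Fx[1:]))
EX_approx = 200 - area
print(EX_approx)   # 144.3, matching the R result
```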
I have rotations that are in Euler form only. All I would ever have to do to them mathematically is interpolate two rotations. Nothing else. The person providing them refuses to put them in a convenient form other than Euler angles because they believe that to interpolate two rotations you just ... When you say $p(A, ?)$ what is $?$ referring to? The $y$ coordinate? Well, define $y$ first. Give me $y=...$ and I'll give you your $y$ coordinate. In one sense you're talking about the $y$ coordinate of a single side of the equation $f(x)= g(x)$, as in $y=g(x)$ or $y=f(x)$; but you're also talking about $y$ in the sense of $y := f(x)-g(x)=0$. Anyway, I'll stop writing nonsense since I really don't know what I'm talking about. I was trying to reduce $y'' - y' + y = e^x, y_1 = e^x$ but when I plug in $ve^x$ I get $v'' + v' + v = 1$, which can't be reduced. I was under the impression that there should never be a $v$ term after simplifying. Well, you need reduction of order implicitly when solving a constant coefficient linear equation where the characteristic equation's discriminant is $0$, right? It's just that the answer is always $y_2 = ty_1$. The upshot being: if you have an $n$th-order linear nonhomogeneous DE, and you know one solution to the homogeneous DE, then you can do reduction of order to get down to a new $(n-1)$th-order linear nonhomogeneous DE. Suppose I have a box which has volume 3 cm³ and inside, there is another box which is measured to have volume 3 cm³. This will be an example of something that has a self injection that is not a surjection. We can go further and test whether the space is Euclidean by checking how parallel lines behave near the boxes. If the space is Euclidean, then it is more plausible that the box behaves like infinity, at least for objects of size less than 3 cm³. Non-well-founded set theories are variants of axiomatic set theory that allow sets to contain themselves and otherwise violate the rule of well-foundedness.
In non-well-founded set theories, the foundation axiom of ZFC is replaced by axioms implying its negation. The study of non-well-founded sets was initiated by Dmitry Mirimanoff in a series of papers between 1917 and 1920, in which he formulated the distinction between well-founded and non-well-founded sets; he did not regard well-foundedness as an axiom. Although a number of axiomatic systems of non-well-founded sets were proposed afterwards... So I go to this talk, where the dude is supposed to talk on the Yang–Baxter equation and its set-theoretical analogue with its connections to braid groups and knot invariants. It turned out to be just a result on the solvability and nilpotency of a skew brace using the characterization of finite simple groups. Algebraists draw in geometers and other mathematicians by talking about applications and then feed them finite group theory. Question: Can we have a model of $ZF-\text{Regularity}$ where there exists an ordinal $\kappa$ such that $H_{\kappa}$ exists and $H_{\kappa}$ is not equinumerous to any well-founded set? The motivation for this question comes in connection with defining cardinality under some situations beyo... More reflections on infinity as Chapter 5 of the book is reached: Unreachability is not a unique trait of infinity.
Dedekind finite sets can be indefinitely reduced in cardinality, but the process will not complete in finitely many steps. So to update on this: Let $X$ be some object with some properties $x$. Let $P$ be a process which outputs some objects or properties when given $x$. If there exists some $P$ such that, given a property $x$ in $X$, it can be guaranteed that $P$ continues and each output is distinct, then $X$ has some notion of being an actual infinity, as it contains what is effectively the output of a nonrepeating and neverending process. Example: Let $U$ be the unravelling process which takes in a set equation $S$ and outputs the outcome when everything on the left of the equality is replaced by the corresponding thing on the right: This process is nonrepeating and nonterminating, and yet it is clearly contained in $X$. Therefore $X$ is actually infinite wrt the unravel operator. Meanwhile, let $A$ be the operator that takes a set and converts it into an accessible pointed graph. $A(X)$ terminates in one step to give a 1-cycle, and hence $X$ is finite wrt $A$. Thus, a process $P$ that is guaranteed to continue at each step and give distinct outputs at each step necessarily generates a potential infinity, and an object that contains all the outputs of this process is an actual infinity for $P$. In other words, one type of actual infinity is the completion of $P$. Anyway, I don't know if I would call a process that produces a sequence like 1,2,3,1,2,3,1,2,3,... potentially infinite. Because you can produce the same sequence by cycling through the elements of the set {1,2,3} indefinitely, or you actually have an ordered tuple (1,2,3,1,2,3,1,2,3,...) and you move to the next element at each step. The latter case would be your suggestion of the Poincaré theorem scenario.
Otherwise, going to spend some time reading that book before thinking about this. Trying to capture sequences that repeat themselves at some point remains a challenge, as well as the behavior of the liar paradox and Yablo's paradox. It is clear what the necessary condition for something to be actually infinite is now, but it is not clear what the sufficient condition is. That a notion of completion or closure over a potential infinity is necessary to define actual infinity may open the door to proving that there cannot be a predicative actual infinite object, because Gödel's incompleteness theorem may appear in some form. We know that a formal system that is sound and consistent cannot prove nor disprove its own consistency, hence is incomplete. Perhaps a special case of this is: given any potentially infinite process, we cannot construct its completion within the underlying formal system. Let $P$ be a process which may or may not be terminating. If $P$ does not terminate (given any step of execution of $P$, $P$ can continue its execution) then $P$ represents a potential infinity.
An object $A$ such that any output of a potentially infinite $P$ is found in $A$ is an actual infinity wrt $P$. Actually, to take account of things like successor ordinals such as $\omega +1$, maybe it needs to be defined this way: Nontermination: A process $P$ is nonterminating if, given a sufficiently large part of its execution history, there is a portion of its history such that, given any step of execution of $P$, $P$ can continue its execution. For example, a $P$ that generates $1,2,3,4,...,\omega$ is nonterminating since, within the subsequence $1,2,3,4,...$, $P$ can continue its execution at any point without interruption. That should capture everything from cyclic sequences, amorphous sets, infinite Dedekind finite sets, alephs, other cardinals, uncomputability, and so on. Thus a concrete example is the set of naturals: there exists no terminating process that can enumerate all naturals, even though there exists a finite representation for such a process, namely the rules of the successor operator given by the Peano axioms. > This can be very shocking to those people who are first introduced to the technical term “actual infinity.” It seems not to be the kind of infinity they are thinking about. The crux of the problem is that these people really are using a different concept of infinity. The sense of infinity in ordinary discourse these days is either the Aristotelian one of potential infinity or the medieval one that requires infinity to be endless, immeasurable, and perhaps to have connotations of perfection or inconceivability. This article uses the name transcendental infinity for the medieval concept alt… For those who think I have not spent my last 5 months reading the philosophy of infinity > Dedekind’s new definition of "infinite" is defining an actually infinite set, not a potentially infinite set, because Dedekind appealed to no continuing operation over time.
The concept of a potentially infinite set is then given a new technical definition by saying a potentially infinite set is a growing, finite subset of an actually infinite set. The problem with this, in the context of predicative actual infinity, is that we don't know if there exist formal systems in which Dedekind's definition of infinite is a theorem rather than an axiom, without employing stronger axioms such as the existence of a universal set, as is done in New Foundations. Given the Euclidean axioms, is there a unique Riemannian manifold that obeys them? ie $(\mathbb{R^2}, \delta_{ij})$ I'm pretty sure that's true, though it's a bit tricky to show because the Euclidean axioms are a bit vague sometimes. I thought $(D^2, \delta_{ij})$ might also obey them, but I think that's not actually true because there are not "circles of any size", ie you can find two points separated by a length greater than the radius of any possible circle. I'm pretty sure the first two axioms imply that it's geodesically complete and the last one that it's flat, but it's a bit tricky. Kind of depends on how one defines the elements, ie is a "line" just an inextendible geodesic or does it need to be infinite. Question: Can I find an example with n=2? Answer 1: No, and here's why, but an example with n=3 is... Comment to A1: That's not an example. Answer 2: You can't do it for n<=3, but here's an example for n=4. Comment to A2: That's not an example. Alas, there is a trivial example for n=5 and therefore the pattern can't proceed beyond that. Me either. Some buzzwords from the course description: Dirac structures, sigma models, gerbes, Courant algebroids, Gray–Hervella classification, Hitchin functional, Neveu–Schwarz fluxes, D-branes, etc. Only D-branes have I previously heard of. And "algebroids", but not Courant. "This is an introductory (i.e.
first year graduate students are welcome and expected) course in generalized geometry, with a special emphasis on Dirac geometry, as developed by Courant, Weinstein, and Severa, as well as generalized complex geometry, as introduced by Hitchin. Dirac geometry is based on the idea of unifying the geometry of a Poisson structure with that of a closed 2-form, whereas generalized complex geometry unifies complex and symplectic geometry." Whose lecture notes are these? A first-year graduate student, even at MIT, wouldn't have a clue in such a course. When I taught a topics in geometry course at MIT, I had all advanced grad students in it (some from Harvard). Ridiculous to say first-years are "expected." Hmm, never heard of Gualtieri. His webpage at Toronto says he works in math physics (and differential geometry).
In this chapter we learned about left and right adjoints, and about joins and meets. At first they seemed like two rather different pairs of concepts. But then we learned some deep relationships between them. Briefly: Left adjoints preserve joins, and monotone functions that preserve enough joins are left adjoints. Right adjoints preserve meets, and monotone functions that preserve enough meets are right adjoints. Today we'll conclude our discussion of Chapter 1 with two more bombshells: Joins are left adjoints, and meets are right adjoints. Left adjoints are right adjoints seen upside-down, and joins are meets seen upside-down. This is a good example of how category theory works. You learn a bunch of concepts, but then you learn more and more facts relating them, which unify your understanding... until finally all these concepts collapse down like the core of a giant star, releasing a supernova of insight that transforms how you see the world! Let me start by reviewing what we've already seen. To keep things simple let me state these facts just for posets, not the more general preorders. Everything can be generalized to preorders. In Lecture 6 we saw that given a left adjoint \( f : A \to B\), we can compute its right adjoint using joins: $$ g(b) = \bigvee \{a \in A : \; f(a) \le b \} . $$ Similarly, given a right adjoint \( g : B \to A \) between posets, we can compute its left adjoint using meets: $$ f(a) = \bigwedge \{b \in B : \; a \le g(b) \} . $$ In Lecture 16 we saw that left adjoints preserve all joins, while right adjoints preserve all meets. Then came the big surprise: if \( A \) has all joins and a monotone function \( f : A \to B \) preserves all joins, then \( f \) is a left adjoint!
But if you examine the proof, you'll see we don't really need \( A \) to have all joins: it's enough that all the joins in this formula exist: $$ g(b) = \bigvee \{a \in A : \; f(a) \le b \} . $$ Similarly, if \(B\) has all meets and a monotone function \(g : B \to A \) preserves all meets, then \( g \) is a right adjoint! But again, we don't need \( B \) to have all meets: it's enough that all the meets in this formula exist: $$ f(a) = \bigwedge \{b \in B : \; a \le g(b) \} . $$ Now for the first of today's bombshells: joins are left adjoints and meets are right adjoints. I'll state this for binary joins and meets, but it generalizes. Suppose \(A\) is a poset with all binary joins. Then we get a function $$ \vee : A \times A \to A $$ sending any pair \( (a,a') \in A \times A \) to the join \(a \vee a'\). But we can make \(A \times A\) into a poset as follows: $$ (a,b) \le (a',b') \textrm{ if and only if } a \le a' \textrm{ and } b \le b' .$$ Then \( \vee : A \times A \to A\) becomes a monotone map, since you can check that $$ a \le a' \textrm{ and } b \le b' \textrm{ implies } a \vee b \le a' \vee b'. $$ And you can show that \( \vee : A \times A \to A \) is the left adjoint of another monotone function, the diagonal $$ \Delta : A \to A \times A $$ sending any \(a \in A\) to the pair \( (a,a) \). This diagonal function is also called duplication, since it duplicates any element of \(A\). Why is \( \vee \) the left adjoint of \( \Delta \)? If you unravel what this means using all the definitions, it amounts to this fact: $$ a \vee a' \le b \textrm{ if and only if } a \le b \textrm{ and } a' \le b . $$ Note that we're applying \( \vee \) to \( (a,a') \) in the expression at left here, and applying \( \Delta \) to \( b \) in the expression at the right. So, this fact says that \( \vee \) is the left adjoint of \( \Delta \). Puzzle 45. Prove that \( a \le a' \) and \( b \le b' \) imply \( a \vee b \le a' \vee b' \).
Also prove that \( a \vee a' \le b \) if and only if \( a \le b \) and \( a' \le b \). A similar argument shows that meets are really right adjoints! If \( A \) is a poset with all binary meets, we get a monotone function $$ \wedge : A \times A \to A $$ that's the right adjoint of \( \Delta \). This is just a clever way of saying $$ a \le b \textrm{ and } a \le b' \textrm{ if and only if } a \le b \wedge b' $$ which is also easy to check. Puzzle 46. State and prove similar facts for joins and meets of any number of elements in a poset - possibly an infinite number. All this is very beautiful, but you'll notice that all the facts come in pairs: one for left adjoints and one for right adjoints. We can squeeze out this redundancy by noticing that every preorder has an "opposite", where "greater than" and "less than" trade places! It's like a mirror world where up is down, big is small, true is false, and so on. Definition. Given a preorder \( (A , \le) \) there is a preorder called its opposite, \( (A, \ge) \). Here we define \( \ge \) by $$ a \ge a' \textrm{ if and only if } a' \le a $$ for all \( a, a' \in A \). We call the opposite preorder \( A^{\textrm{op}} \) for short. I can't believe I've gone this far without ever mentioning \( \ge \). Now we finally have a really good reason. Puzzle 47. Show that the opposite of a preorder really is a preorder, and the opposite of a poset is a poset. Puzzle 48. Show that the opposite of the opposite of \( A \) is \( A \) again. Puzzle 49. Show that the join of any subset of \( A \), if it exists, is the meet of that subset in \( A^{\textrm{op}} \). Puzzle 50. Show that any monotone function \(f : A \to B \) gives a monotone function \( f : A^{\textrm{op}} \to B^{\textrm{op}} \): the same function, but preserving \( \ge \) rather than \( \le \). Puzzle 51.
Show that \(f : A \to B \) is the left adjoint of \(g : B \to A \) if and only if \(f : A^{\textrm{op}} \to B^{\textrm{op}} \) is the right adjoint of \( g: B^{\textrm{op}} \to A^{\textrm{op}}\). So, we've taken our whole course so far and "folded it in half", reducing every fact about meets to a fact about joins, and every fact about right adjoints to a fact about left adjoints... or vice versa! This idea, so important in category theory, is called duality. In its simplest form, it says that things come in opposite pairs, and there's a symmetry that switches these opposite pairs. Taken to its extreme, it says that everything is built out of the interplay between opposite pairs. Once you start looking you can find duality everywhere, from ancient Chinese philosophy to modern computers. But duality has been studied very deeply in category theory: I'm just skimming the surface here. In particular, we haven't gotten into the connection between adjoints and duality! This is the end of my lectures on Chapter 1. There's more in this chapter that we didn't cover, so now it's time for us to go through all the exercises.
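The binary join and meet adjunctions from this lecture are finite enough to check exhaustively. Here is a small sketch on the poset \(\{0,\dots,5\}\) with the usual order, where the binary join is max and the binary meet is min (the poset and its size are arbitrary choices for illustration):

```python
from itertools import product

elems = range(6)   # the poset {0,...,5} with the usual order <=

# v ⊣ Δ:  a ∨ b <= c        iff  a <= c and b <= c
# Δ ⊣ ∧:  c <= a and c <= b iff  c <= a ∧ b
for a, b, c in product(elems, repeat=3):
    assert (max(a, b) <= c) == (a <= c and b <= c)   # join is left adjoint
    assert (c <= a and c <= b) == (c <= min(a, b))   # meet is right adjoint

print("both adjunctions hold on all", 6 ** 3, "triples")
```

Swapping `<=` for `>=` throughout turns each check into its dual, mirroring Puzzle 49: the join in \(A\) is the meet in \(A^{\textrm{op}}\).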
Duckheim, Mathias, Loss, Daniel, Scheid, Matthias, Richter, Klaus, Adagideli, İnanç and Jacquod, Philippe (2010) Spin Accumulation in Diffusive Conductors with Rashba and Dresselhaus Spin-Orbit Interaction. Physical Review B (PRB) 81 (8), 085303. Abstract: We calculate the electrically induced spin accumulation in diffusive systems due to both Rashba (with strength $\alpha$) and Dresselhaus (with strength $\beta$) spin-orbit interaction. Using a diffusion equation approach we find that magnetoelectric effects disappear and that there is thus no spin accumulation when both interactions have the same strength, $\alpha=\pm \beta$. In thermodynamically ... Item type: Article. Date: 2 February 2010. Institutions: Physics > Institute of Theoretical Physics > Chair Professor Richter > Group Klaus Richter. Projects: SFB 689: Spinphänomene in reduzierten Dimensionen. Dewey Decimal Classification: 500 Science > 530 Physics. Status: Published. Refereed: Yes, this version has been refereed. Created at the University of Regensburg: Partially. Item ID: 9562
I'm stuck at this exercise: Let $G$ be a group with $|G|=pqr$, where $p,q,r$ are different primes, $q<r$, $r \not\equiv 1$ (mod $q$), and $qr<p$. Also suppose that $p \not\equiv 1$ (mod $r$) and $p \not\equiv 1$ (mod $q$). Let $C$ (the commutator subgroup of $G$) and $K$ be subgroups of $G$, with $C \leq K$, $K \trianglelefteq G$ and $|K|=q$. $K$ is the unique Sylow $q$-subgroup of $G$ (so $K \trianglelefteq G$). Let $G/K$ be an abelian group. Prove that $C=\{e\}$. I tried using Lagrange's theorem: knowing that $C\leq K$, we get $|C| \in \{1, q\}$. But I don't know how to eliminate the option $|C|=q$. This is a small part of a longer exercise. The definition of $C$ is $C=\langle [a,b]=aba^{-1}b^{-1} \mid a,b\in G \rangle$; $G/C$ is abelian too. Thank you.
If we consider an elliptic curve $E/k$ given in Weierstrass form $y^{2}+a_{1}xy+a_{3}y=x^{3}+a_{2}x^{2}+a_{4}x+a_{6}$, then I know that the translation maps $\tau_{P}$ with $P\in{E}$ fix the invariant differential $\omega_{E}=\frac{dx}{2y+a_{1}x+a_{3}}$, i.e. $\tau_{P}^{*}\omega_{E}=\omega_{E}$. But I'm wondering if the converse is true? Namely, if $\phi:E\rightarrow{E}$ is an automorphism of $E/k$ such that $\phi^{*}\omega_{E}=\omega_{E}$, then $\phi$ should be a translation map? My only attempt is assuming that $\phi$ is of the form $(x,y)\mapsto{(u^{2}x+r,u^{3}y+u^{2}sx+t)}$ for some $u\in{k^{*}}$ and $r,s,t\in{k}$ (I think we can assume this) and applying the definition directly: $\phi^{*}\frac{dx}{2y+a_{1}x+a_{3}}=\frac{d\phi^{*}x}{2\phi^{*}y+a_{1}\phi^{*}x+a_{3}}=\frac{d(u^{2}x+r)}{2(u^{3}y+u^{2}sx+t)+a_{1}(u^{2}x+r)+a_{3}}=\frac{u^{2}dx}{2u^{3}y+(2u^{2}s+a_{1}u^{2})x+(2t+a_{1}r+a_{3})}$, which by hypothesis is equal to $\omega_{E}$ again. This seems to suggest that $u=1$ and $s=0$, and also $2t+a_{1}r=0$. So $\phi$ is of the form $(x,y)\mapsto{(x+r,y+t)}$. Is it possible to conclude from this that $\phi$ is a translation map? How can I do that?
As we know, an antiderivative or indefinite integral is a function whose derivative gives the actual function. Let $F(x)$ be the derivative of $f(x)$, i.e., the instantaneous rate of change of $f(x)$ with respect to $x$ is $F(x)$: $$\dfrac{d{f(x)}}{dx} = F(x).$$ Now $$ d{f(x)} = F(x)\,dx,$$ right? Then, writing the indefinite integral on both sides, we get $$\int d{f(x)} = \int F(x)\,dx.$$ Then many books abruptly write $$ f(x) = \int F(x)\,dx + C.$$ Really? What does this $\int$ mean? Summation, right? Then how can a summation of $d{f(x)}$ give the function? It is the change, not the function. And what about $C$? My physics book says it is the initial value, i.e. $v_0, a_0, x_0$. But it can be anything, right? So how can this process give the function?

The integral $\int$ is indeed a continuous version of summation. There are two ways of looking at this: 1. As an indefinite integral, you have $\int df(x)=\int F(x)dx$. As we are dealing with an indefinite integral, the right side after evaluation still depends on $x$, so naturally it's a function of $x$. On the left, the notation with $(x)$ is perhaps confusing to you, but you can just write $df$ instead of $df(x)$, and you just get $f$ on the left (which is then obviously dependent on the $x$ found in the solution of the right-side integral). Also, you could just expand the differential of the function over $x$: $df(x)=f'(x)dx$, so you have $\int f'(x)dx=\int F(x)dx$, and now it's probably more obvious to you that it equals $f(x)$, because integration and differentiation are, roughly speaking, inverse operations (up to a constant). With the indefinite integral, you get this free constant $C$ that has to be determined by plugging in the initial conditions. And yes, you usually get something like "initial velocity" there. 2. As a definite integral with a free upper bound. This is the more logical way of dealing with things. With definite integrals, you have two boundary conditions.
So, we start by writing$$\int_a^b df=\int_{A}^{B} F(x)dx$$Now, what does that mean? Imagine following what's happening to both sides when we go from initial to final condition. On the right, we start at initial $x=x_0$ where we know about our initial condition (depending on what you are integrating, it could be initial position, initial time, or something like that -- you very commonly choose your coordinates so that $x_0=0$). At this chosen initial condition, the value of $f$ is also the initial one, $f(x_0)$. Then, you imagine integrating slowly over the region, the right side calculates the small contributions $F(x)dx$ that are added on the left as increments $df$. There are now two possibilities. If you actually want to follow what's happening in the middle -- the entire curve that $f$ traces when you are adding more and more contributions on the right, then you use an "arbitrary $x$" in the upper limit on the right, and of course, $f(x)$ on the left, because that's how far $f$ got by that "time" (or position, or whatever). So, you get $$\int_{f(x_0)}^{f(x)}df=f|_{f(x_0)}^{f(x)}=f(x)-f(x_0)=\int_{x_0}^x F(t)dt$$ Notice that I renamed the integration variable, because it's inappropriate to use the same symbol as in the limits. $t$ is the thing that goes from the initial $x_0$ to the current $x$ when the integral performs the summation. Each chosen $x$ results in a different integration interval and also different value of $f(x)$. So, we performed a definite integral, but left a variable upper limit so we now still have a functional dependence: $$f(x)=f(x_0)+\int_{x_0}^x F(t)dt$$ Which you read as: function $f$ starts at its initial value $f(x_0)$ and changes when you go to $x$ for the amount given by the integral. Sometimes, you don't care what happens in the middle, and you actually just want the definite integral over a pre-determined interval (such as "how far did we get in one hour"). 
In that case, you can write $$f(x_1)=f(x_0)+\int_{x_0}^{x_1} F(t)dt$$ where $x_1$ is just a number. Note how there is absolutely no difference between these two cases: you get the "definite integral over a fixed range" just by putting the number $x_1$ into the function $f(x)$ that you calculated before. In many respects, the definite integral with a variable upper limit is the same as the indefinite integral. However, with the indefinite integral, you have absolutely no idea what $C$ is before you take into account the initial conditions, while with the definite integral, you see it's equal to $f(x_0)$ in this case. Also, the notation is better, because in definite integrals the integration variable has nothing to do with the limits. I could have written it as $\int_{f(x_0)}^{f(x)}d\xi=\int_{x_0}^x F(\chi)d\chi$ and it wouldn't have made a difference. This probably answers your confusion about $df(x)$: it just tells you that you want to capture the variation of the upper limit on the left when you vary the upper limit on the right. Also, it's much easier to deal with physical units, because the lower and upper limits of the integral have the same units (are the same physical quantity as the differential under the integral sign), while the $C$ is just a placeholder for the thing you don't know yet. If you differentiate the definite integral over the upper limit, it's the same thing as differentiating the result of the indefinite integral: in both cases you get $f'(x)=F(x)$. To sum up: the indefinite integral is the mathematically formal procedure that brings you to the correct result. The definite integral with a variable upper limit is the same thing, and has the physical interpretation of following the "summation" (following your position when making steps in time with changing velocity, or something like that). The definite integral with fixed numerical limits is just one read-out of the function you get from the integrals discussed above. I hope this answers your question.
Exactly as you said, if you only know $F(x)$, which is how the function "changes", then you can only recover the original "shape" of $f(x)$, because the change will be the same if you shift $f(x)$ all the way up or down the y axis. So if $f(x)$ is a solution, then $f(x)+C$ must also be a solution. Regarding why $C$ is the initial condition: if you pick the particular antiderivative that vanishes at $x=0$, then plugging $x=0$ into $f(x)=\int F(x)\,dx + C$ gives $f(0)=C$, so $C$ is exactly the initial value. Let's say for simplicity $F(x) = 3 x^2$, just so we have an example to talk about. As you wrote, $$f(x) = \int F(x) \mathrm d x + C = x^3 +C\,.$$ And now the initial conditions come into play (the $x_0$, $v_0$ and so on), or in some cases boundary conditions. Those are always needed to find a specific solution to a differential equation. So if I also know that $f(0) = f_0$, then we can just plug that in: $$f(0) = 0^3 +C = C = f_0$$ So we see, if $f(x)$ should also fulfill the condition $f(0) = f_0$, $C$ cannot have just any value, but needs to be $f_0$ in this specific example - in general (for other $f(x)$'s) you always need to calculate the $C$ that fulfills your initial conditions. Here are some more notes: A lot of times, when you solve a differential equation, you try to find all or a whole bunch of solutions (that's the $x^3 +C$ above) and then pick the one that fits your initial/boundary conditions. If you have higher-order differential equations like $F = m \ddot x$ (where each dot means a derivative with respect to time), then you need more than one initial condition to solve this for $x(t)$ (the initial position and the initial velocity).
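To make the "summing small changes recovers the function up to its starting value" point concrete, here is a small numeric sketch in plain Python, using the same example $F(x)=3x^2$, $f(x)=x^3$; the midpoint Riemann sum stands in for the integral.

```python
# Check f(x1) = f(x0) + integral_{x0}^{x1} F(t) dt numerically,
# by summing small contributions F(t)*dt, just like the df increments above.
def integrate(F, a, b, n=100_000):
    h = (b - a) / n
    return sum(F(a + (i + 0.5) * h) for i in range(n)) * h  # midpoint rule

F = lambda t: 3 * t**2   # the known rate of change
f = lambda t: t**3       # the function we hope to recover

x0, x1 = 1.0, 2.0
recovered = f(x0) + integrate(F, x0, x1)   # initial value + summed changes
print(abs(recovered - f(x1)) < 1e-6)       # True: the summation recovers f(x1)
```

Dropping the $f(x_0)$ term shifts the recovered curve vertically, which is exactly the role of the constant $C$ in the indefinite integral.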
I'll probably be expanding this more (!) and adding pictures and links as I have time, but here's my first shot at this. Mostly math-free explanation A special coin Let's begin by thinking about normal bits. Imagine this normal bit is a coin, that we can flip to be heads or tails. We'll call heads equivalent to "1" and tails "0". Now imagine instead of just flipping this coin, we can rotate it - 45${}^\circ$ above horizontal, 50$^\circ$ above horizontal, 10$^\circ$ below horizontal, whatever - these are all states. This opens up a huge new possibility of states - I could encode the whole works of Shakespeare into this one coin this way. But what's the catch? No such thing as a free lunch, as the saying goes. When I actually look at the coin, to see what state it's in, it becomes either heads or tails, based on probability - a good way to look at it is if it's closer to heads, it's more likely to become heads when looked at, and vice versa, though there's a chance the close-to-heads coin could become tails when looked at. Further, once I look at this special coin, any information that was in it before can't be accessed again. If I look at my Shakespeare coin, I just get heads or tails, and when I look away, it still is whatever I saw when I looked at it - it doesn't magically revert to Shakespeare coin. I should note here that you might think, as Blue points out in the comments, that Given the huge advancement in modern day technology there's nothing stopping me from monitoring the exact orientation of a coin tossed in air as it falls. I don't necessarily need to "look into it" i.e. stop it and check whether it has fallen as "heads" or "tails". This "monitoring" counts as measurement. There is no way to see the inbetween state of this coin. None, nada, zilch. This is a bit different from a normal coin, isn't it? So encoding all the works of Shakespeare in our coin is theoretically possible but we can never truly access that information, so not very useful. 
Nice little mathematical curiosity we've got here, but how could we actually do anything with this? The problem with classical mechanics Well, let's take a step back a minute here and switch to another tack. If I throw a ball to you and you catch it, we can basically model that ball's motion exactly (given all parameters). We can analyze its trajectory with Newton's laws, figure out its movement through the air using fluid mechanics (unless there's turbulence), and so forth. So let's set up a little experiment. I've got a wall with two slits in it and another wall behind that wall. I set up one of those tennis-ball-thrower things in front and let it start throwing tennis balls. In the meantime, I'm at the back wall marking where all our tennis balls end up. When I mark this, there are clear "humps" in the data right behind the two slits, as you might expect. Now, I switch our tennis-ball-thrower to something that shoots out really tiny particles. Maybe I've got a laser and we're looking where the photons end up. Maybe I've got an electron gun. Whatever, we're looking at where these sub-atomic particles end up again. This time, we don't get the two humps, we get an interference pattern. Does that look familiar to you at all? Imagine you drop two pebbles in a pond right next to each other. Look familiar now? The ripples in a pond interfere with each other. There are spots where they cancel out and spots where they swell bigger, making beautiful patterns. Now, we're seeing an interference pattern shooting particles. These particles must have wave-like behavior. So maybe we were wrong all along. (This is called the double slit experiment.) Sorry, electrons are waves, not particles. Except... they're particles too. When you look at cathode rays (streams of electrons in vacuum tubes), the behavior there clearly shows electrons are a particle. To quote Wikipedia: Like a wave, cathode rays travel in straight lines, and produce a shadow when obstructed by objects.
Ernest Rutherford demonstrated that rays could pass through thin metal foils, behavior expected of a particle. These conflicting properties caused disruptions when trying to classify it as a wave or particle [...] The debate was resolved when an electric field was used to deflect the rays by J. J. Thomson. This was evidence that the beams were composed of particles because scientists knew it was impossible to deflect electromagnetic waves with an electric field. So... they're both. Or rather, they're something completely different. That's one of several puzzles physicists saw at the beginning of the twentieth century. If you want to look at some of the others, look at blackbody radiation or the photoelectric effect. What fixed the problem - quantum mechanics These problems led us to realize that the laws that allow us to calculate the motion of that ball we're tossing back and forth just don't work on a really small scale. So a new set of laws was developed. These laws were called quantum mechanics, after one of the major ideas behind them - the existence of fundamental packets of energy, called quanta. The idea is that I can't just give you .00000000000000000000000000 plus a bunch more zeroes 1 Joules of energy - there is a minimum possible amount of energy I can give you. It's like, in currency systems, I can give you a dollar or a penny, but (in American money, anyway) I can't give you a "half-penny". Doesn't exist. Energy (and other values) can be like that in certain situations. (Not all situations, and this can occur in classical mechanics sometimes - see also this; thanks to Blue for pointing this out.) So anyway, we got this new set of laws, quantum mechanics. The development of those laws is by now essentially complete, though they are not the final word (see quantum field theories, quantum gravity), and the history of their development is kind of interesting. There was this guy, Schrodinger, of cat-killing (maybe?)
fame, who came up with the wave equation formulation of quantum mechanics. A lot of physicists preferred this, because it was sort of similar to the classical way of calculating things - integrals and Hamiltonians and so forth. Another guy, Heisenberg, came up with a totally different way of calculating the state of a particle quantum-mechanically, which is called matrix mechanics. Yet another guy, Dirac, proved that the matrix mechanics and wave equation formulations were equivalent. So now, we must switch tacks again - what are matrices, and their friend vectors? Vectors and matrices - or, some hopefully painless linear algebra Vectors are, at their simplest, arrows. I mean, they're on a coordinate plane, and they're math-y, but they're arrows. (Or you could take the programmer view and call them lists of numbers.) They're quantities that have a magnitude and a direction. So once we have this idea of vectors... what might we use them for? Well, maybe I have an acceleration. I'm accelerating to the right at 1 m/s$^2$, for example. That could be represented by a vector. How long that arrow is represents how quickly I am accelerating, the arrow would be pointing right along the x-axis, and by convention, the arrow's tail would be situated at the origin. We notate a vector by writing something like (2, 3), which would denote a vector with its tail at the origin and its tip at (2, 3). So we have these vectors. What sorts of math can I do with them? How can I manipulate a vector? I can multiply vectors by a normal number, like 3 or 2 (these are called scalars), to stretch it, shrink it (if a fraction), or flip it (if negative). I can add or subtract vectors pretty easily - if I add the vectors (2, 3) and (4, 2), I get (6, 5).
There's also stuff called dot products and cross products that we won't get into here - if you're interested in any of this, look up 3blue1brown's linear algebra series, which is very accessible, actually teaches you how to do it, and is a fabulous way to learn about this stuff. Now let's say I have one coordinate system that my vector is in, and I want to move that vector to a new coordinate system. I can use something called a matrix to do that. Basically, we can define in our system two vectors, called $\hat{i}$ and $\hat{j}$, read i-hat and j-hat (we're doing all this in two dimensions in the real plane; you can have higher-dimensional vectors with complex numbers ($\sqrt{-1} = i$) as well, but we're ignoring them for simplicity), which are vectors that are one unit in the x direction and one unit in the y direction - that is, (1, 0) and (0, 1). Then we see where i-hat and j-hat end up in our new coordinate system. In the first column of our matrix, we write the new coordinates of i-hat and in the second column the new coordinates of j-hat. We can now multiply this matrix by any vector and get that vector in the new coordinate system. The reason this works is because you can rewrite vectors as what are called linear combinations. This means that we can rewrite, say, (2, 3) as 2*(1, 0) + 3*(0, 1) - that is, 2*i-hat + 3*j-hat. When we use a matrix, we're effectively re-multiplying those scalars by the "new" i-hat and j-hat. Again, if interested, see 3blue1brown's videos. These matrices are used a lot in many fields, but this is where the name matrix mechanics comes from. Tying it all together Now, matrices can represent rotations of the coordinate plane, or stretching or shrinking the coordinate plane, or a bunch of other things. But some of this behavior... sounds kind of familiar, doesn't it? Our little special coin sounds kind of like it. We have this rotation idea.
What if we represent the horizontal state by i-hat, and the vertical by j-hat, and describe the rotation of our coin using linear combinations? That works, and makes our system much easier to describe. So our little coin can be described using linear algebra. What else can be described by linear algebra and has weird probabilities and measurement? Quantum mechanics. (In particular, this idea of linear combinations becomes the idea called a superposition, which is where the whole idea, oversimplified to the point it's not really correct, of "two states at the same time" comes from.) So these special coins can be quantum mechanical objects. What sorts of things are quantum mechanical objects? Photons, superconductors, electron energy states in an atom - anything, in other words, that has the discrete energy (quanta) behavior, but also can act like a wave - they can interfere with one another and so forth. So we have these special quantum mechanical coins. What should we call them? They store an information state like bits... but they're quantum. They're qubits. And now what do we do? We manipulate the information stored in them with matrices (ahem, gates). We measure to get results. In short, we compute. Now, we know that we cannot encode infinite amounts of information in a qubit and still access it (see the notes on our "Shakespeare coin"), so what then is the advantage of a qubit? It comes in the fact that those extra bits of information can affect all the other qubits (it's that superposition/linear combination idea again), which affects the probability, which then affects your answer - but it's very difficult to use, which is why there are so few quantum algorithms. The special coin versus the normal coin - or, what makes a qubit different? So... we have this qubit. But Blue brings up a great point.
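The "coin as vector, gate as matrix" picture can be sketched in a few lines of numpy. This is an illustrative toy, not any particular quantum library; the Hadamard gate here plays the role of the coin rotation.

```python
import numpy as np

# A qubit is a 2-vector of amplitudes over the basis states |0> and |1>
# (the i-hat and j-hat of the coin analogy).
ket0 = np.array([1.0, 0.0])   # the coin lying flat on "0"

# A gate is just a matrix; the Hadamard gate rotates |0> into an equal superposition.
H = np.array([[1.0,  1.0],
              [1.0, -1.0]]) / np.sqrt(2)

state = H @ ket0                # "rotate the coin" halfway between 0 and 1
probs = np.abs(state) ** 2      # measurement probabilities = squared amplitudes
print(probs)                    # [0.5 0.5]: a 50-50 outcome when we finally look
```

Applying a second Hadamard gives `H @ state == ket0` again, which is the interference behavior a classical 50-50 coin cannot reproduce.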
how is a quantum state like $\frac{1}{\sqrt{2}}|0\rangle + \frac{1}{\sqrt{2}}|1\rangle$ different from a coin which when tossed in the air has a 50-50 chance of turning out to be heads or tails. Why can't we say that a classical coin is a "qubit" or call a set of classical coins a system of qubits? There are several differences - the way that measurement works (see the fourth paragraph), this whole superposition idea - but the defining difference (Mithrandir24601 pointed this out in chat, and I agree) is the violation of the Bell inequalities. Let's take another tack. Back when quantum mechanics was being developed, there was a big debate. It started between Einstein and Bohr. When Schrodinger's wave theory was developed, it was clear that quantum mechanics would be a probabilistic theory. Born published a paper about this probabilistic worldview, which he concluded by saying: Here the whole problem of determinism comes up. From the standpoint of our quantum mechanics there is no quantity which in any individual case causally fixes the consequence of the collision; but also experimentally we have so far no reason to believe that there are some inner properties of the atom which condition a definite outcome for the collision. Ought we to hope later to discover such properties ... and determine them in individual cases? Or ought we to believe that the agreement of theory and experiment—as to the impossibility of prescribing conditions for a causal evolution—is a pre-established harmony founded on the nonexistence of such conditions? I myself am inclined to give up determinism in the world of atoms. But that is a philosophical question for which physical arguments alone are not decisive. The idea of determinism has been around for a while.
Perhaps one of the more famous quotes on the subject is from Laplace, who said An intellect which at a certain moment would know all forces that set nature in motion, and all positions of all items of which nature is composed, if this intellect were also vast enough to submit these data to analysis, it would embrace in a single formula the movements of the greatest bodies of the universe and those of the tiniest atom; for such an intellect nothing would be uncertain and the future just like the past would be present before its eyes. The idea of determinism is that if you know all there is to know about a current state, and apply the physical laws we have, you can figure out (effectively) the future. However, quantum mechanics decimates this idea with probability. "I myself am inclined to give up determinism in the world of atoms." This is a huge deal! Albert Einstein's famous response: Quantum mechanics is very worthy of regard. But an inner voice tells me that this is not yet the right track. The theory yields much, but it hardly brings us closer to the Old One's secrets. I, in any case, am convinced that He does not play dice. (Bohr's response was apparently "Stop telling God what to do", but anyway.) For a while, there was debate. Hidden variable theories came up, where it wasn't just probability - there was a way the particle "knew" what it was going to be when measured; it wasn't all up to chance. And then, there was the Bell inequality. To quote Wikipedia, In its simplest form, Bell's theorem states No physical theory of local hidden variables can ever reproduce all of the predictions of quantum mechanics. And it provided a way to experimentally check this. It's true - it is pure probability. This is no classical behavior. It is all chance, chance that affects other chances through superposition, and then "collapses" to a single state upon measurement (if you follow the Copenhagen interpretation). 
So to summarize: firstly, measurement is fundamentally different in quantum mechanics, and secondly, quantum mechanics is not deterministic. Both of these points mean that any quantum system, including a qubit, is going to be fundamentally different from any classical system. A small disclaimer As xkcd wisely points out, any analogy is an approximation. This answer isn't formal at all, and there's a heck of a lot more to this stuff. I'm hoping to add to this answer with a slightly more formal (though still not completely formal) description, but please keep this in mind. Resources Nielsen and Chuang, Quantum Computation and Quantum Information. The bible of quantum computing. 3blue1brown's linear algebra and calculus courses are great for the math. Michael Nielsen (yeah, the guy who coauthored the textbook above) has a video series called Quantum Computing for the Determined. 10/10 would recommend. Quirk is a great little simulator of a quantum computer that you can play around with. I wrote some blog posts on this subject a while back (if you don't mind reading my writing, which isn't very good) that can be found here, which attempt to start from the basics and work on up.
This question is not asking for a solution, but rather for a check/validation of my thought process. Given the formula: $W = \forall X.((\forall Y.\exists Z. R(X,Y,Z)) \land \forall S. \exists T. R(X, S,T))$. The algorithm I learnt states that, first, I bring $W$ into prenex form. Doing so, I bring all quantifiers to the front, renaming variables with fresh ones where necessary to resolve clashes. The exact order of the quantifiers has to be maintained in the process! Second, I look for something of the form $ \forall x_1. \forall x_2. \forall x_3 \dots \exists y_1. \exists y_2.\; R(y_1, y_2)$ and replace it by something like $ \forall x_1. \forall x_2. \forall x_3.\; R(g(x_1, x_2, x_3, \dots), f(x_1, x_2, x_3, \dots))$. In the above example $W$, following the stated rules strictly, I would expect the result: Prenex form: $\forall X.\forall Y. \exists Z. \forall S. \exists T.\; (R(X,Y,Z) \land R(X, S,T))$. Skolem form: $ \forall X.\forall Y. \forall S.\; (R(X,Y,g(X,Y)) \land R(X, S,f(X,Y,S)))$, which might be correct, but looks kind of dumb. Thus the question arises whether the rules above are absolute, or whether there is a way to soften them and, for example, allow switching the order of $\forall$ and $\exists$ quantifiers. Any constructive comments/answers to my question are appreciated.
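For comparison, one common softening of these rules (found in many presentations of skolemization, though possibly not the variant your course allows) is to skolemize each existential in place, before prenexing. Since $\exists T$ is not in the scope of $\forall Y$, its Skolem function then does not pick up the spurious argument $Y$:

```latex
% Skolemize in place, then pull the universals out:
%   \forall X.\bigl((\forall Y.\, R(X,Y,g(X,Y))) \land \forall S.\, R(X,S,f(X,S))\bigr)
% which prenexes to
\forall X.\,\forall Y.\,\forall S.\; \bigl(R(X,Y,g(X,Y)) \land R(X,S,f(X,S))\bigr)
```

Both Skolem forms are equisatisfiable with $W$; the in-place version merely produces a Skolem function with fewer arguments.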
$\bullet$ No, this is not surface area, and here is why: Consider a small patch $S_0$ on your surface $S$. Now consider a parametrization $G: D \to S$ where $D$ has coordinate functions $u,v$. Let $(u_0,v_0) \in D$; then we have: $$G(u_0+\Delta u,v_0) - G(u_0,v_0) \approx G_u(u_0,v_0)\,\Delta u$$ $$G(u_0,v_0+\Delta v) - G(u_0,v_0) \approx G_v(u_0,v_0)\,\Delta v$$ Hence, the normal vector for $S_0$ is approximately $G_u(u_0,v_0) \Delta u \times G_v(u_0,v_0) \Delta v := \vec{n}(u_0,v_0)\, \Delta u \Delta v$. The magnitude of this vector closely approximates the area of the patch $S_0$, i.e. we have: $$\textbf{Area}(S_0) \approx \|\vec{n}(u_0,v_0) \Delta u \Delta v\| = \|\vec{n}(u_0,v_0)\| \Delta u \Delta v$$ Thus, if we take a partition of $D$, say $\{(u_i,v_j): i \in [0,N], j \in [0,M]\}$, which also has the property that: $$ \lim_{N,M \to \infty}\sum_{i,j} \|\vec{n}(u_i,v_j)\|\, \Delta u \Delta v< \infty$$ then we know: $$\textbf{Surface Area of } S=\lim_{N,M \to \infty} \sum_{j=0}^M \sum_{i=0}^N \|\vec{n}(u_i,v_j)\| \, \Delta u \Delta v = \iint_D \|\vec{n}(u,v)\| \, du \, dv$$ $\bullet$ In the case of vector fields, the integral you are trying to compute is flux. The flux through a surface measures how much fluid is flowing either from the inside of the surface to the outside or from the outside of the surface to the inside. The reason you have to choose $\vec{n}$, i.e. a normal for your surface, is because you need a way to say which direction of flow counts as positive (the two directions being the ones I've mentioned). $\bullet$ To show that this integral is calculating exactly that, again we restrict ourselves to looking at what is going on at a small patch. Let $S_0$ be as above and consider $\vec{n}(u_0,v_0)\, \Delta u \Delta v$ defined above as well. $\bullet$ What may really be confusing you is your integral. I think it is off.
By definition, the surface element is $dS=\|\vec{n}(u,v)\| \, du\, dv$, and so the integral should read: $$\int_S \vec{F} \cdot \textbf{e}_{\vec{n}} \, dS = \int_S \vec{F} \cdot \frac{\vec{n}(u,v)}{\|\vec{n}(u,v)\|} \, dS=\iint_D \vec{F}(G(u,v)) \cdot \vec{n}(u,v) \, du \, dv$$ Using the fact that $\vec{u} \cdot \vec{v} = \|\vec{u}\| \|\vec{v}\| \cos \theta_{\vec{u},\vec{v}}$, the integral makes sense because: $$\vec{F} \cdot \textbf{e}_{\vec{n}} = \|\vec{F}\| \cos \theta_{\vec{F}, \vec{n}} = \|\mathrm{proj}_{\vec{n}} \vec{F}\|$$ i.e. the above quantity multiplied by $\|\vec{n}(u,v)\| \Delta u \Delta v$ gives the volume of fluid passing through the patch $S_0$.
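As a sanity check of the flux formula, here is a small numeric sketch (plain numpy; the field, parametrization, and grid sizes are my own choices): for $\vec{F}(x,y,z)=(x,y,z)$ through the unit sphere, the Riemann sum over $\vec{F}(G(u,v))\cdot\vec{n}(u,v)\,\Delta u\,\Delta v$ should approach the exact flux $4\pi$ (which the divergence theorem also gives).

```python
import numpy as np

# Flux of F(x,y,z) = (x,y,z) through the unit sphere, parametrized by
# G(u,v) = (sin u cos v, sin u sin v, cos u), with u in (0,pi), v in (0,2pi).
def flux(N=200, M=200):
    du, dv = np.pi / N, 2 * np.pi / M
    total = 0.0
    for u in (np.arange(N) + 0.5) * du:          # midpoint samples in u
        for v in (np.arange(M) + 0.5) * dv:      # midpoint samples in v
            G  = np.array([np.sin(u)*np.cos(v), np.sin(u)*np.sin(v), np.cos(u)])
            Gu = np.array([np.cos(u)*np.cos(v), np.cos(u)*np.sin(v), -np.sin(u)])
            Gv = np.array([-np.sin(u)*np.sin(v), np.sin(u)*np.cos(v), 0.0])
            n  = np.cross(Gu, Gv)                # the (un-normalized) normal G_u x G_v
            total += np.dot(G, n) * du * dv      # F(G(u,v)) = G(u,v) for this F
    return total

print(abs(flux() - 4 * np.pi) < 1e-3)            # True: the sum approaches 4*pi
```

Note that the code uses the un-normalized $\vec{n}=G_u\times G_v$ directly, exactly as in the right-most integral above; no division by $\|\vec{n}\|$ is needed.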
Final Topologies Recall from the Initial Topologies page that if $X$ is a set, $Y$ is a topological space, and $f : X \to Y$, then the initial topology induced by $f$ on $X$ is the coarsest topology which makes the map $f : X \to Y$ continuous. More generally, if $X$ is a set, $\{ Y_i : i \in I \}$ is a collection of topological spaces, and $\{ f_i : X \to Y_i : i \in I \}$ is a collection of maps, then the initial topology induced by $\{ f_i : i \in I \}$ on $X$ is the coarsest topology which makes $f_i : X \to Y_i$ continuous for all $i \in I$. We proved that for any set $X$ with any collection of topological spaces $\{ (Y_i, \tau_i) : i \in I \}$ and any collection of maps $\{ f_i : X \to Y_i : i \in I \}$, the initial topology induced by $\{ f_i : i \in I \}$ has the following subbasis: $$\{ f_i^{-1}(U) : U \in \tau_i, \: i \in I \} \quad (1)$$ We will now look at an analogous topology known as a final topology, which we define below. Definition: Let $X$ be a set, $\{ Y_i : i \in I \}$ be a collection of topological spaces, and $\{ f_i : Y_i \to X : i \in I \}$ be a collection of maps. The Final Topology Induced by $\{ f_i : i \in I \}$ on $X$ is the finest topology $\tau$ on $X$ which makes $f_i : Y_i \to X$ continuous for all $i \in I$. It is important to emphasize that the final topology induced by $\{ f_i : i \in I \}$ is the FINEST topology on $X$ which makes $f_i : Y_i \to X$ continuous for all $i \in I$. To construct the final topology on $X$ induced by the maps $f_i : Y_i \to X$, consider any subset $U$ of $X$. Take the inverse image of $U$ with respect to each of the maps $f_i$, i.e., $f_i^{-1}(U)$. If $f_i^{-1}(U)$ is open in $Y_i$ for each $i \in I$, then declare $U$ to be open in $X$. The following theorem provides an explicit form of the final topology induced by $\{ f_i : i \in I \}$ on $X$. Theorem 1: Let $X$ be a set, $\{ (Y_i, \tau_i) : i \in I \}$ be a collection of topological spaces, and $\{ f_i : Y_i \to X : i \in I \}$ be a collection of maps.
Then the final topology induced by $\{ f_i : i \in I \}$, call it $\tau$, is given by $\tau = \{ U \subseteq X : f_i^{-1}(U) \in \tau_i \: \mathrm{for \: all \:} i \in I \}$. Proof: To show that $\tau$ above is the final topology induced by $\{ f_i : i \in I \}$ on $X$, we must show that $\tau$ makes $f_i : Y_i \to X$ continuous for all $i \in I$, and that any topology $\tau'$ which also accomplishes this must be coarser than $\tau$. Clearly each $f_i : Y_i \to X$ is continuous with this topology, since for every $U \in \tau$ we have by definition that $f^{-1}_i(U)$ is open in $Y_i$ for all $i \in I$. Now suppose that $\tau'$ is another topology that makes every $f_i : Y_i \to X$ continuous, and let $V \in \tau'$. Since $f_i : (Y_i, \tau_i) \to (X, \tau')$ is continuous for all $i \in I$, we have that $f^{-1}_i(V) \in \tau_i$ for all $i \in I$. But then $V \in \tau$ by how we defined $\tau$. Hence $\tau' \subseteq \tau$. So any topology $\tau'$ on $X$ which also makes $f_i : Y_i \to X$ continuous for all $i \in I$ must be coarser than $\tau$, so $\tau$ is indeed the final topology induced by $\{f_i : i \in I \}$ on $X$. $\blacksquare$
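For finite spaces, Theorem 1 can be turned directly into a computation. The sketch below is illustrative only (the helper names and the example spaces are mine); it builds $\tau = \{U \subseteq X : f_i^{-1}(U) \in \tau_i \text{ for all } i \in I\}$ by brute force over the power set of $X$.

```python
from itertools import combinations

def powerset(s):
    s = list(s)
    return [frozenset(c) for r in range(len(s) + 1) for c in combinations(s, r)]

def preimage(f, U):
    """Preimage f^{-1}(U) of U under f, with f given as a dict Y -> X."""
    return frozenset(y for y, fy in f.items() if fy in U)

def final_topology(X, spaces):
    """spaces: list of pairs (tau_i, f_i), where tau_i is the topology on Y_i
    (a set of frozensets) and f_i : Y_i -> X is a dict."""
    return {U for U in powerset(X)
            if all(preimage(f, U) in tau for tau, f in spaces)}

# Example: Y = {0,1,2} with topology {{}, {0}, {0,1}, Y}, and f collapsing 1 and 2.
tau_Y = {frozenset(), frozenset({0}), frozenset({0, 1}), frozenset({0, 1, 2})}
f = {0: 'a', 1: 'b', 2: 'b'}
tau = final_topology({'a', 'b'}, [(tau_Y, f)])
print(sorted(tuple(sorted(U)) for U in tau))   # [(), ('a',), ('a', 'b')]
```

In the example, $\{b\}$ is not open in the final topology because its preimage $\{1,2\}$ is not open in $Y$, exactly as the construction in the text prescribes.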
To solve the heat equation, separation of variables and decomposition into Fourier series usually works well. Consider the homogeneous equation\begin{align*} & u_t-u_{xx}=0\\ & u_x(0,t)=u_x(\pi,t)=0\end{align*}without bothering about the initial condition. We wish to find solutions of the form $u(x,t)=c(t)v(x)$. Plugging this into the equation, we get$$ c'(t)v(x)-c(t)v''(x)=0.$$This equation has to be true for any $x,t$, which means there exists $k\geq0$ such that $c'=\pm k^2 c$ and $v''=\pm k^2 v$. We only seek physically relevant solutions, so we can disregard the case $c'=k^2c$, because it would lead to a diverging solution as $t\to+\infty$. The equation for $v$ becomes $v''+k^2v=0$, and the general solution of this equation is $$ \{x\mapsto A\cos(kx)+B\sin(kx),\;A,B\in\mathbb{R}\}.$$The condition $u_x(0,t)=u_x(\pi,t)=0$ implies $B=0$ and $k\in\mathbb{N}$. Thus, for $k\in\mathbb{N}$, let$$ \boxed{v_k:x\mapsto A_k\cos(kx)}.$$Note that this will be very convenient for Fourier series. Now, what if we plug a function $u_k(x,t)=c_k(t)v_k(x)$, where $c_k$ is to be determined, into $u_t-u_{xx}$? We get$$ (c'_k(t)+k^2c_k(t))A_k\cos(kx).$$We want to find $u(x,t)=\sum_{k\geq0} u_k(x,t)$ satisfying the inhomogeneous equation, so we seek $c_k$ such that$$ (c'_k(t)+k^2c_k(t))A_k\cos(kx)=tA_k\cos(kx),$$which is equivalent, assuming continuity, to$$ c'_k(t)+k^2c_k(t)=t.$$If $k=0$, the solution is $\boxed{c_0(t)=t^2/2+B_0}$. Assume that $k>0$. One solution to the homogeneous version of this equation is $\lambda(t)=\exp(-k^2t)$. Using variation of parameters, $c_k=y\lambda$ with $y'(t)=t\exp(k^2t)$, thus$$ \boxed{c_k:t\mapsto\frac{1}{k^4}(tk^2-1)+B_k\exp(-k^2t)}.$$ Consider $u(x,t)=\sum_{k\geq0}c_k(t)v_k(x)$.
We want $u$ to be a solution of the PDE:$$ u_t-u_{xx}=\sum_{k\geq0} tA_k\cos(kx)=t\sum_{k\geq0} A_k\cos(kx)=tx,$$so the $A_k$ must be the Fourier cosine coefficients of $x$ on $[0,\pi]$:$$ A_0=\frac{1}{\pi}\int_0^\pi x\,\mathrm{d}x=\frac{\pi}{2},\qquad A_k=\frac{2}{\pi}\int_0^\pi x\cos(kx)\,\mathrm{d}x=\frac{2}{\pi}\frac{(-1)^k-1}{k^2}\quad\text{for }k\geq1$$(note that the constant mode carries no factor $2$). The boundary conditions $u_x(0,t)=u_x(\pi,t)=0$ already hold term by term, so the only remaining requirement is the initial condition $u(x,0)=1$:$$ u(x,0)=B_0A_0+\sum_{k\geq1}\left(B_k-\frac{1}{k^4}\right)v_k(x).$$Therefore, setting$$ B_k=\begin{cases}1/A_0=2/\pi & \text{if }k=0, \\ \frac{1}{k^4} &\text{otherwise,}\end{cases}$$solves the problem.
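As a sanity check, here is a small NumPy sketch (illustrative, with a finite truncation of the series) verifying the cosine expansion of $x$ on $[0,\pi]$ and the initial condition. Note that the constant mode of a cosine series carries no factor $2$, so $A_0=\frac{1}{\pi}\int_0^\pi x\,\mathrm{d}x=\pi/2$ and accordingly $B_0=1/A_0=2/\pi$:

```python
import numpy as np

# Partial sums of u(x,t) = sum_{k>=0} c_k(t) A_k cos(kx) for
# u_t - u_xx = t*x,  u_x(0,t) = u_x(pi,t) = 0,  u(x,0) = 1.
K = 2000
k = np.arange(1, K + 1)

A0 = np.pi / 2                                # (1/pi) * integral of x over [0, pi]
Ak = (2 / np.pi) * ((-1.0) ** k - 1) / k**2   # (2/pi) * integral of x cos(kx)

x = np.linspace(0, np.pi, 201)

# Check the cosine expansion x ≈ A0 + sum_k Ak cos(kx) on [0, pi];
# the even extension of x is continuous, so convergence is uniform.
series_x = A0 + Ak @ np.cos(np.outer(k, x))
assert np.max(np.abs(series_x - x)) < 1e-2

def u(xv, t):
    B0, Bk = 2 / np.pi, 1 / k**4              # chosen so that u(x,0) = 1
    c0 = t**2 / 2 + B0
    ck = (t * k**2 - 1) / k**4 + Bk * np.exp(-(k**2) * t)
    return c0 * A0 + (ck * Ak) @ np.cos(np.outer(k, xv))

# Initial condition: c_k(0) = 0 for k >= 1 and c_0(0) * A0 = (2/pi)(pi/2) = 1.
assert np.max(np.abs(u(x, 0.0) - 1)) < 1e-12
```

With $K=2000$ terms the truncation error of the cosine series of $x$ is of order $10^{-3}$, well within the tolerance used above.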
Differential and Integral Equations, Volume 21, Number 3-4 (2008), 265-284.

Multiple solutions for a class of nonlinear equations involving a duality mapping

Abstract. The paper studies the existence of multiple solutions to the abstract equation \begin{eqnarray*} J_p u = N_f u, \end{eqnarray*} where $J_p$ is the duality mapping on a real reflexive and smooth Banach space $X$, corresponding to the gauge function $\varphi(t) = t^{p-1}$, $1 < p < +\infty$. It is assumed that $X$ is compactly embedded in a Lebesgue space $L^q(\Omega)$, $p \leq q < p^*$, and continuously embedded in $L^{p^*}(\Omega)$, $p^*$ being the Sobolev conjugate exponent, and that $N_f : L^q(\Omega) \rightarrow L^{q'}(\Omega)$, ${\frac{1}{q}+ \frac{1}{q'}=1}$, is the Nemytskii operator generated by a function $f \in {\mathcal{C}}(\bar{\Omega} \times {\mathbf{R}}, {\mathbf{R}})$ satisfying appropriate conditions. These assumptions allow the use of many of the procedures appearing in Li and Zhou [9]. As applications we obtain, in a unified manner, the multiplicity results already given in [9] for a Dirichlet problem with the $p$-Laplacian, as well as some new multiplicity results for the Neumann problem \begin{eqnarray*} - \Delta_{p} u + | {u} | ^{p-2} u & =& f(x,u) \qquad \textrm{in}\quad \Omega,\\ | {\nabla u} | ^{p-2}\frac{\partial u}{\partial n} & =& 0 \qquad \textrm{on}\quad \partial \Omega. \end{eqnarray*}

First available in Project Euclid: 20 December 2012. Permanent link: https://projecteuclid.org/euclid.die/1356038780. Mathematical Reviews (MathSciNet): MR2484009. Zentralblatt MATH: 1224.35085.

Citation: Crînganu, Jenica; Dinca, George. Multiple solutions for a class of nonlinear equations involving a duality mapping. Differential Integral Equations 21 (2008), no. 3-4, 265-284.
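For readers unfamiliar with the central notion, the standard definition of the duality mapping corresponding to a gauge function $\varphi$ (general textbook definition, not taken from this paper) can be written as:

```latex
J_\varphi u \;=\; \bigl\{\, u^* \in X^* \;:\;
  \langle u^*, u \rangle = \varphi(\|u\|_X)\,\|u\|_X,\;
  \|u^*\|_{X^*} = \varphi(\|u\|_X) \,\bigr\}.
```

For the gauge $\varphi(t)=t^{p-1}$ on $X = W_0^{1,p}(\Omega)$ with the norm $\|\nabla u\|_{L^p}$, the duality mapping $J_p$ coincides with $-\Delta_p$, which is why the abstract equation $J_p u = N_f u$ covers the Dirichlet problem for the $p$-Laplacian mentioned in the abstract.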
Triplet embeddings map a group of images to an embedding space, such that images deemed more similar to each other end up closer together. The name "triplet" comes from training, where we have a triple (A, P, N): A is our anchor image, P is a positive example (an image deemed similar to A), and N is a negative example. The architecture is as follows: each element of the triplet is first mapped through a convolutional neural net, followed by an embedding net. We denote this with a function $f$, so that for an image $x$, $f(x)$ is its embedding. My question concerns the choice of loss function. As an example, here are two different approaches. In the first approach, separation of positive and negative examples is achieved using a margin of separation: $$[\|f(x_i^A)-f(x_i^P)\|_2^2-\|f(x_i^A)-f(x_i^N)\|_2^2+\alpha]_+.$$ The second approach is to use a softmax loss: $$\frac{e^{\|f(x_i^a)-f(x_i^p)\|_2}}{e^{\|f(x_i^a)-f(x_i^p)\|_2}+e^{\|f(x_i^a)-f(x_i^n)\|_2}},$$ which has the property that as the loss goes to 0, $\|f(x_i^a)-f(x_i^p)\|_2-\|f(x_i^a)-f(x_i^n)\|_2\rightarrow-\infty$, so positive examples are embedded much closer to the anchor than negative examples, as desired. I'm sure there are other ways, and I'm curious whether there is a good review of which methods work better.
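For concreteness, here is a minimal NumPy sketch of the two losses described above (single-triplet version, all names illustrative; the margin form is the hinge loss from the question, the second is the softmax form):

```python
import numpy as np

def margin_triplet_loss(a, p, n, alpha=0.2):
    """Margin loss: [||a - p||^2 - ||a - n||^2 + alpha]_+ ."""
    d_ap = np.sum((a - p) ** 2, axis=-1)
    d_an = np.sum((a - n) ** 2, axis=-1)
    return np.maximum(d_ap - d_an + alpha, 0.0)

def softmax_triplet_loss(a, p, n):
    """Softmax loss: exp(d_ap) / (exp(d_ap) + exp(d_an)), with d the
    Euclidean distance; small exactly when d_ap << d_an."""
    d_ap = np.linalg.norm(a - p, axis=-1)
    d_an = np.linalg.norm(a - n, axis=-1)
    m = np.maximum(d_ap, d_an)          # shift by max for numerical stability
    e_ap, e_an = np.exp(d_ap - m), np.exp(d_an - m)
    return e_ap / (e_ap + e_an)

# Toy embeddings: anchor close to the positive, far from the negative.
a = np.array([0.0, 0.0])
p = np.array([0.1, 0.0])
n = np.array([3.0, 0.0])
print(margin_triplet_loss(a, p, n))   # 0.0: already separated by more than alpha
print(softmax_triplet_loss(a, p, n))  # small (about 0.05 here)
```

Swapping the roles of p and n makes both losses large, illustrating that each one penalizes triplets where the negative sits closer to the anchor than the positive.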