url
stringlengths 14
2.42k
| text
stringlengths 100
1.02M
| date
stringlengths 19
19
| metadata
stringlengths 1.06k
1.1k
|
|---|---|---|---|
https://gmatclub.com/forum/m32-244852.html
|
It is currently 21 Jan 2018, 20:39
### GMAT Club Daily Prep
#### Thank you for using the timer - this advanced tool can estimate your performance and suggest more practice questions. We have subscribed you to Daily Prep Questions via email.
Customized
for You
we will pick new questions that match your level based on your Timer History
Track
every week, we’ll send you an estimated GMAT score based on your performance
Practice
Pays
we will pick new questions that match your level based on your Timer History
# Events & Promotions
###### Events & Promotions in June
Open Detailed Calendar
# M32-09
Author Message
TAGS:
### Hide Tags
Math Expert
Joined: 02 Sep 2009
Posts: 43347
### Show Tags
17 Jul 2017, 03:48
Expert's post
1
This post was
BOOKMARKED
00:00
Difficulty:
45% (medium)
Question Stats:
63% (00:28) correct 37% (00:45) wrong based on 27 sessions
### HideShow timer Statistics
Certain word is written on a paper. What is the number of arrangements of letters of that word ?
(1) If the first two letters were omitted, the number of arrangements of letters of shortened word would be 6
(2) There are 5 letters in the word
[Reveal] Spoiler: OA
_________________
Math Expert
Joined: 02 Sep 2009
Posts: 43347
### Show Tags
17 Jul 2017, 03:48
1
KUDOS
Expert's post
1
This post was
BOOKMARKED
Official Solution:
Certain word is written on a paper. What is the number of arrangements of letters of that word ?
(1) If the first two letters were omitted, the number of arrangements of letters of shortened word would be 6. This one is clearly insufficient: we don't know how many letters are repeated in the word or even how many letters are there. Not sufficient.
(2) There are 5 letters in the word. We don't know how many letters are repeated in the word. Not sufficient.
(1)+(2) We can deduce that the last three letters of the word are all different (hence their arrangement of $$3! = 6$$) but we still don't know whether they repeat any of the first two letters. For example, if the word is goose, then the number of arrangements of its letters would be $$\frac{5!}{2!}$$ but if the word is close, then the number of arrangements of its letters would be $$5!$$. Not sufficient.
_________________
Math Expert
Joined: 02 Aug 2009
Posts: 5537
### Show Tags
17 Jul 2017, 05:34
1
KUDOS
Expert's post
Bunuel wrote:
Certain word is written on a paper. What is the number of arrangements of letters of that word ?
(1) If the first two letters were omitted, the number of arrangements of letters of shortened word would be 6
(2) There are 5 letters in the word
Ofcourse solution remains as above..
Just a few examples included...
1) two letters omitted, arrangements become 6..
One case... Remaining all are different... 3!=6... Total letters 3+2
Second case.. few remaining are common... 4 letters with 2 of one kind and other two of similar kind...$$\frac{4!}{2!/2!}$$=6
Here elements are 4+2
So there can be different elements and we don't even know if the two removed are same or different.
Insufficient
2) there are five letters.
All five could be same 5!/5!=1
All different=5!
Insufficient
Combined..
5 letters..
2 removed, remaining 3 in 6 arrangements... So SURELY these 3 are different letters.
But WHAT about the 2 removed..
Both same but different from the three left.. 5!/2!
Both same as one already in 3... 5!/3!
All different..5!
And so on
Insufficient
E
_________________
Absolute modulus :http://gmatclub.com/forum/absolute-modulus-a-better-understanding-210849.html#p1622372
Combination of similar and dissimilar things : http://gmatclub.com/forum/topic215915.html
BANGALORE/-
Intern
Joined: 05 Sep 2016
Posts: 3
Location: India
Concentration: General Management, Operations
WE: Engineering (Energy and Utilities)
### Show Tags
22 Sep 2017, 20:21
I think this is a high-quality question and I agree with explanation.
Re M32-09 [#permalink] 22 Sep 2017, 20:21
Display posts from previous: Sort by
# M32-09
Moderators: chetan2u, Bunuel
Powered by phpBB © phpBB Group | Emoji artwork provided by EmojiOne Kindly note that the GMAT® test is a registered trademark of the Graduate Management Admission Council®, and this site has neither been reviewed nor endorsed by GMAC®.
|
2018-01-22 04:39:52
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6814759969711304, "perplexity": 3211.2779019251825}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-05/segments/1516084890991.69/warc/CC-MAIN-20180122034327-20180122054327-00682.warc.gz"}
|
http://mathhelpforum.com/algebra/114981-ratios.html
|
Your very good language abilities indicate that you are not in second or third grade so I don't understand why you are not able to do this immediately. Reduce $\frac{244}{306}$ to lowest terms. I see immediately that numerator and denominator are both divisible by 2.
|
2017-01-23 04:46:27
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 1, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7607426643371582, "perplexity": 118.94936883115054}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282110.46/warc/CC-MAIN-20170116095122-00160-ip-10-171-10-70.ec2.internal.warc.gz"}
|
https://proofwiki.org/wiki/Definition:Right_Coset
|
# Definition:Coset/Right Coset
(Redirected from Definition:Right Coset)
## Definition
Let $G$ be a group, and let $H \le G$.
The right coset of $y$ modulo $H$, or right coset of $H$ by $y$, is:
$H y = \set {x \in G: \exists h \in H: x = h y}$
This is the equivalence class defined by right congruence modulo $H$.
That is, it is the subset product with singleton:
$H y = H \set y$
## Also defined as
The definition given here is the usual one, but some sources (see 1982: P.M. Cohn: Algebra Volume 1 (2nd ed.), for example) order the operands in the opposite direction, and hence $x H$ is a right coset.
## Also see
• Results about cosets can be found here.
|
2019-08-22 07:28:10
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9939979314804077, "perplexity": 617.772980864553}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027316785.68/warc/CC-MAIN-20190822064205-20190822090205-00230.warc.gz"}
|
https://studysoup.com/tsg/19190/fluid-mechanics-2-edition-chapter-6-problem-80p
|
×
Get Full Access to Fluid Mechanics - 2 Edition - Chapter 6 - Problem 80p
Get Full Access to Fluid Mechanics - 2 Edition - Chapter 6 - Problem 80p
×
# A 7-cm diameter vertical water jet is injected upwards by
ISBN: 9780071284219 39
## Solution for problem 80P Chapter 6
Fluid Mechanics | 2nd Edition
• Textbook Solutions
• 2901 Step-by-step solutions solved by professors and subject experts
• Get 24/7 help from StudySoup virtual teaching assistants
Fluid Mechanics | 2nd Edition
4 5 1 308 Reviews
26
5
Problem 80P Problem 80P
A 7-cm diameter vertical water jet is injected upwards by a nozzle at a speed of 15 m/s. Determine the maximum weight of a flat plate that can be supported by this water jet at a height of 2 m from the nozzle.
Step-by-Step Solution:
Q:A 7-cm diameter vertical water jet is injected upwards by a nozzle at a speed of 15 m/s.Determine the maximum weight of a flat plate that can be supported by this water jet at aheight of 2 m from the nozzle.Answer:771N SolutionStep 1 of 4We need to find the weight of the flat plate which is supported by the stream of water comingvertically out of a nozzle of diameter 7cm.The plate is held at 2m from the nozzle. The water as it hits the plate, it splatters and covers theentire area of the plate. The thrust given by the water balances the weight of the plate, thusholding the plate at 2m from nozzle.Assumptions: 1. The water is incompressible 2. Friction between the water and air is negligible 3. Momentum-flux correction is negligible 4. Thin region below the flat plate is taken as the control volume.Data given:1.Density of water=2.3.Height,4.Velocity of water at the nozzle,
Step 2 of 4
Step 3 of 4
##### ISBN: 9780071284219
Fluid Mechanics was written by and is associated to the ISBN: 9780071284219. This full solution covers the following key subjects: nozzle, Water, jet, determine, injected. This expansive textbook survival guide covers 15 chapters, and 1547 solutions. The answer to “A 7-cm diameter vertical water jet is injected upwards by a nozzle at a speed of 15 m/s. Determine the maximum weight of a flat plate that can be supported by this water jet at a height of 2 m from the nozzle.” is broken down into a number of easy to follow steps, and 43 words. Since the solution to 80P from 6 chapter was answered, more than 924 students have viewed the full step-by-step answer. This textbook survival guide was created for the textbook: Fluid Mechanics, edition: 2. The full step-by-step solution to problem: 80P from chapter: 6 was answered by , our top Engineering and Tech solution expert on 07/03/17, 04:51AM.
## Discover and learn what students are asking
Calculus: Early Transcendental Functions : Preparation for Calculus
?In Exercises 5–8, test for symmetry with respect to each axis and to the origin. $$y=x^{2}+4 x$$
Calculus: Early Transcendental Functions : First-Order Linear Differential Equations
?In Exercises 1-4, determine whether the differential equation is linear. Explain your reasoning. $$2 x y-y^{\prime} \ln x=y$$
Statistics: Informed Decisions Using Data : Testing the Significance of the Least-Squares Regression Model
?CEO Performance (Refer to Problem 31 in Section 4.1) The following data represent the total compensation for 12 randomly selected chief executive offi
#### Related chapters
Unlock Textbook Solution
|
2022-08-09 04:27:39
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3404960036277771, "perplexity": 1932.3481667316894}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882570901.18/warc/CC-MAIN-20220809033952-20220809063952-00262.warc.gz"}
|
https://alfanotes.wordpress.com/2011/12/19/real-sym-matrices-eigenvalues/
|
## Real symmetric matrices have real eigenvalues
A real matrix is symmetric if $A^t=A$. I will show in this post that a real symmetric matrix have real eigenvalues.
I will need a dot product for the prof and I’ll use the basic dot product for two vectors $X$ and $Y$: $=X^t\overline{Y}$, where $\overline{Y}$ is the complex conjugate of the vector $Y$.
The useful property of this dot product is that $=$, for any matrix $A$
And considering that $A$ is real, a simple proof is:
$=(AX)^t\overline{Y}=X^tA^t\overline{Y}=X^t\overline{A^tY}=$
An eigenvalue have a correspondent eigenvector: $AX=\lambda X$.
We have $=<\lambda X, X>=\lambda $ and considering that A is symmetric $====\overline{\lambda}$.
From $\lambda=\overline{\lambda}$ and because $X$ is not a zero vector it results that the imaginary part of $\lambda$ is zero, so the eigenvalue is a real number.
|
2018-09-19 15:02:43
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 16, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9734894037246704, "perplexity": 212.675403352378}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-39/segments/1537267156252.31/warc/CC-MAIN-20180919141825-20180919161825-00003.warc.gz"}
|
https://es.mathworks.com/help/control/ref/tuninggoal.loopshape-class.html
|
Documentation
# TuningGoal.LoopShape class
Package: TuningGoal
Target loop shape for control system tuning
## Description
Use TuningGoal.LoopShape to specify a target gain profile (gain as a function of frequency) of an open-loop response. TuningGoal.LoopShape constrains the open-loop, point-to-point response (L) at a specified location in your control system. Use this tuning goal for control system tuning with tuning commands, such as systune or looptune.
When you tune a control system, the target open-loop gain profile is converted into constraints on the inverse sensitivity function inv(S) = (I + L) and the complementary sensitivity function T = 1–S. These constraints are illustrated for a representative tuned system in the following figure.
Where L is much greater than 1, a minimum gain constraint on inv(S) (green shaded region) is equivalent to a minimum gain constraint on L. Similarly, where L is much smaller than 1, a maximum gain constraint on T (red shaded region) is equivalent to a maximum gain constraint on L. The gap between these two constraints is twice the CrossTol parameter, which specifies the frequency band where the loop gain can cross 0 dB.
For multi-input, multi-output (MIMO) control systems, values in the gain profile greater than 1 are interpreted as minimum performance requirements. Such values are lower bounds on the smallest singular value of the open-loop response. Gain profile values less than one are interpreted as minimum roll-off requirements, which are upper bounds on the largest singular value of the open-loop response. For more information about singular values, see sigma.
Use TuningGoal.LoopShape when the loop shape near crossover is simple or well understood (such as integral action). To specify only high gain or low gain constraints in certain frequency bands, use TuningGoal.MinLoopGain and TuningGoal.MaxLoopGain. When you do so, the software determines the best loop shape near crossover.
## Construction
Req = TuningGoal.LoopShape(location,loopgain) creates a tuning goal for shaping the open-loop response measured at the specified location. The magnitude of the single-input, single-output (SISO) transfer function loopgain specifies the target open-loop gain profile. You can specify the target gain profile (maximum gain across the I/O pair) as a smooth transfer function or sketch a piecewise error profile using an frd model.
Req = TuningGoal.LoopShape(location,loopgain,crosstol) specifies a tolerance on the location of the crossover frequency. crosstol expresses the tolerance in decades. For example, crosstol = 0.5 allows gain crossovers within half a decade on either side of the target crossover frequency specified by loopgain. When you omit crosstol, the tuning goal uses a default value of 0.1 decades. You can increase crosstol when tuning MIMO control systems. Doing so allows more widely varying crossover frequencies for different loops in the system.
Req = TuningGoal.LoopShape(location,wc) specifies just the target gain crossover frequency. This syntax is equivalent to specifying a pure integrator loop shape, loopgain = wc/s.
Req = TuningGoal.LoopShape(location,wcrange) specifies a range for the target gain crossover frequency. The range is a vector of the form wcrange = [wc1,wc2]. This syntax is equivalent to using the geometric mean sqrt(wc1*wc2) as wc and setting crosstol to the half-width of wcrange in decades. Using a range instead of a single wc value increases the ability of the tuning algorithm to enforce the target loop shape for all loops in a MIMO control system.
## Properties
LoopGain Target loop shape as a function of frequency, specified as a SISO zpk model. The software automatically maps the input argument loopgain onto a zpk model. The magnitude of this zpk model approximates the desired gain profile. Use viewGoal(Req) to plot the magnitude of the zpk model LoopGain. CrossTol Tolerance on gain crossover frequency, in decades. The initial value of CrossTol is set by the crosstol input when you create the tuning goal. Default: 0.1 Focus Frequency band in which tuning goal is enforced, specified as a row vector of the form [min,max]. Set the Focus property to limit enforcement of the tuning goal to a particular frequency band. Express this value in the frequency units of the control system model you are tuning (rad/TimeUnit). For example, suppose Req is a tuning goal that you want to apply only between 1 and 100 rad/s. To restrict the tuning goal to this band, use the following command:Req.Focus = [1,100]; Default: [0,Inf] for continuous time; [0,pi/Ts] for discrete time, where Ts is the model sample time. Stabilize Stability requirement on closed-loop dynamics, specified as 1 (true) or 0 (false). When Stabilize is true, this requirement stabilizes the specified feedback loop, as well as imposing gain or loop-shape requirements. Set Stabilize to false if stability for the specified loop is not required or cannot be achieved. Default: 1 (true) LoopScaling Toggle for automatically scaling loop signals, specified as 'on' or 'off'. In multi-loop or MIMO control systems, the feedback channels are automatically rescaled to equalize the off-diagonal terms in the open-loop transfer function (loop interaction terms). Set LoopScaling to 'off' to disable such scaling and shape the unscaled open-loop response. Default: 'on' Location Location at which the open-loop response shape to be constrained is measured, specified as a cell array of character vectors that identify one or more analysis points in the control system to tune. For example, if Location = {'u'}, the tuning goal evaluates the open-loop response measured at an analysis point 'u'. If Location = {'u1','u2'}, the tuning goal evaluates the MIMO open-loop response measured at analysis points 'u1' and 'u2'. The initial value of the Location property is set by the location input argument when you create the tuning goal. Models Models to which the tuning goal applies, specified as a vector of indices. Use the Models property when tuning an array of control system models with systune, to enforce a tuning goal for a subset of models in the array. For example, suppose you want to apply the tuning goal, Req, to the second, third, and fourth models in a model array passed to systune. To restrict enforcement of the tuning goal, use the following command: Req.Models = 2:4; When Models = NaN, the tuning goal applies to all models. Default: NaN Openings Feedback loops to open when evaluating the tuning goal, specified as a cell array of character vectors that identify loop-opening locations. The tuning goal is evaluated against the open-loop configuration created by opening feedback loops at the locations you identify. If you are using the tuning goal to tune a Simulink model of a control system, then Openings can include any linear analysis point marked in the model, or any linear analysis point in an slTuner interface associated with the Simulink model. Use addPoint to add analysis points and loop openings to the slTuner interface. Use getPoints to get the list of analysis points available in an slTuner interface to your model. If you are using the tuning goal to tune a generalized state-space (genss) model of a control system, then Openings can include any AnalysisPoint location in the control system model. Use getPoints to get the list of analysis points available in the genss model. For example, if Openings = {'u1','u2'}, then the tuning goal is evaluated with loops open at analysis points u1 and u2. Default: {} Name Name of the tuning goal, specified as a character vector. For example, if Req is a tuning goal: Req.Name = 'LoopReq'; Default: []
## Examples
collapse all
Create a target gain profile requirement for the following control system. Specify integral action, gain crossover at 1, and a roll-off requirement of 40 dB/decade.
The requirement should apply to the open-loop response measured at the AnalysisPoint block X. Specify a crossover tolerance of 0.5 decades.
LS = frd([100 1 0.0001],[0.01 1 100]);
Req = TuningGoal.LoopShape('X',LS,0.5);
The software converts LS into a smooth function of frequency that approximates the piecewise-specified requirement. Display the requirement using viewGoal.
viewGoal(Req)
The green and red regions indicate the bounds for the inverse sensitivity, inv(S) = 1-G*C, and the complementary sensitivity, T = 1-S, respectively. The gap between these regions at 0 dB gain reflects the specified crossover tolerance, which is half a decade to either side of the target loop crossover.
When you use viewGoal(Req,CL) to validate a tuned closed-loop model of this control system, CL, the tuned values of S and T are also plotted.
Create separate loop shape requirements for the inner and outer loops of the following control system.
For the inner loop, specify a loop shape with integral action, gain crossover at 1, and a roll-off requirement of 40 dB/decade. Additionally, specify that this loop shape requirement should be enforced with the outer loop open.
LS2 = frd([100 1 0.0001],[0.01 1 100]);
Req2 = TuningGoal.LoopShape('X2',LS2);
Req2.Openings = 'X1';
Specifying 'X2' for the location indicates that Req2 applies to the point-to point, open-loop transfer function at the location X2. Setting Req2.Openings indicates that the loop is opened at the analysis point X1 when Req2 is enforced.
By default, Req2 imposes a stability requirement on the inner loop as well as the loop shape requirement. In some control systems, however, inner-loop stability might not be required, or might be impossible to achieve. In that case, remove the stability requirement from Req2 as follows.
Req2.Stabilize = false;
For the outer loop, specify a loop shape with integral action, gain crossover at 0.1, and a roll-off requirement of 20 dB/decade.
LS1 = frd([10 1 0.01],[0.01 0.1 10]);
Req1 = TuningGoal.LoopShape('X1',LS1);
Specifying 'X1' for the location indicates that Req1 applies to the point-to point, open-loop transfer function at the location X1. You do not have to set Req1.Openings because this loop shape is enforced with the inner loop closed.
You might want to tune the control system with both loop shaping requirements Req1 and Req2. To do so, use both requirements as inputs to the tuning command. For example, suppose CL0 is a tunable genss model of the closed-loop control system. In that case, use [CL,fSoft] = systune(CL0,[Req1,Req2]) to tune the control system to both requirements.
Create a loop-shape requirement for the feedback loop on 'q' in the Simulink model rct_airframe2. Specify that the loop-shape requirement is enforced with the 'az' loop open.
Open the model.
open_system('rct_airframe2')
Create a loop shape requirement that enforces integral action with a crossover a 2 rad/s for the 'q' loop. This loop shape corresponds to a loop shape of 2/_s_.
s = tf('s');
shape = 2/s;
Req = TuningGoal.LoopShape('q',shape);
Specify the location at which to open an additional loop when enforcing the requirement.
Req.Openings = 'az';
To use this requirement to tune the Simulink model, create an slTuner interface to the model. Identify the block to tune in the interface.
ST0 = slTuner('rct_airframe2','MIMO Controller');
Designate both az and q as analysis points in the slTuner interface.
This command makes q available as an analysis location. It also allows the tuning requirement to be enforced with the loop open at az.
You can now tune the model using Req and any other tuning requirements. For example:
[ST,fSoft] = systune(ST0,Req);
Final: Soft = 0.845, Hard = -Inf, Iterations = 51
Create a tuning requirement specifying that the open-loop response of loop identified by 'X' cross unity gain between 50 and 100 rad/s.
Req = TuningGoal.LoopShape('X',[50,100]);
Examine the resulting requirement to see the target loop shape.
viewGoal(Req)
The plot shows that the requirement specifies an integral loop shape, with crossover around 70 rad/s, the geometrical mean of the range [50,100]. The gap at 0 dB between the minimum low-frequency gain (green region) and the maximum high-frequency gain (red region) reflects the allowed crossover range [50,100].
## Tips
• This tuning goal imposes an implicit stability constraint on the closed-loop sensitivity function measured at Location, evaluated with loops opened at the points identified in Openings. The dynamics affected by this implicit constraint are the stabilized dynamics for this tuning goal. The MinDecay and MaxRadius options of systuneOptions control the bounds on these implicitly constrained dynamics. If the optimization fails to meet the default bounds, or if the default bounds conflict with other requirements, use systuneOptions to change these defaults.
## Algorithms
When you tune a control system using a TuningGoal, the software converts the tuning goal into a normalized scalar value f(x), where x is the vector of free (tunable) parameters in the control system. The software then adjusts the parameter values to minimize f(x) or to drive f(x) below 1 if the tuning goal is a hard constraint.
For TuningGoal.LoopShape, f(x) is given by:
$f\left(x\right)={‖\begin{array}{c}{W}_{S}S\\ {W}_{T}T\end{array}‖}_{\infty }.$
Here, S = D–1[I – L(s,x)]–1D is the scaled sensitivity function at the specified location, where L(s,x) is the open-loop response being shaped. D is an automatically-computed loop scaling factor. (If the LoopScaling property is set to 'off', then D = I.) T = S – I is the complementary sensitivity function.
WS and WT are frequency weighting functions derived from the specified loop shape. The gains of these functions roughly match LoopGain and 1/LoopGain, for values ranging from –20 dB to 60 dB. For numerical reasons, the weighting functions level off outside this range, unless the specified loop gain profile changes slope for gains above 60 dB or below –60 dB. Because poles of WS or WT close to s = 0 or s = Inf might lead to poor numeric conditioning of the systune optimization problem, it is not recommended to specify loop shapes with very low-frequency or very high-frequency dynamics.
To obtain WS and WT, use:
[WS,WT] = getWeights(Req,Ts)
where Req is the tuning goal, and Ts is the sample time at which you are tuning (Ts = 0 for continuous time). For more information about the effects of the weighting functions on numeric stability, see Visualize Tuning Goals.
## Compatibility Considerations
expand all
Behavior changed in R2016a
|
2019-11-21 03:54:20
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 1, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5653120875358582, "perplexity": 2746.654824244086}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496670729.90/warc/CC-MAIN-20191121023525-20191121051525-00013.warc.gz"}
|
https://dcc.ligo.org/LIGO-G1601144/public
|
# Factors pertaining to the strength of four-fiber monolithic silica test mass suspensions
Document #:
LIGO-G1601144-v1
Document type:
G - Presentations (eg Graphics)
Other Versions:
Abstract:
The diameter of the silica fibers used in the 40 kg quasi-monolithic aLIGO test mass suspensions was chosen as d = 400µm to keep the bounce frequency below 10 Hz, and the violin mode frequencies above 500 Hz [1]. For further improvement of detector performance at low-frequency reducing the vertical bounce mode frequency would be beneficial. To make the bounce frequency smaller, the fibres need to be made thinner. Article [2] suggested that the fiber diameter can be reduced to 288 µm; this thickness is sufficient to give fiber strength three times larger than the static load in aLIGO suspension. The factor 3 is chosen as a reasonable safety margin [2]. In this poster we analyze the overall strength of welded 4-fiber. The additional factors such as strength of welded joints or stock misalignments may limit the suspension strength. None of these weakening factors is crucial: the strength of the suspension is sufficiently large and able to provide long-time operation of the gravitational interferometer. The fiber's thickness is one of the few competitive limiting factors. In such case the fiber's diameter can be made ~ 300 µm and the full suspension strength should not be significantly affected.
References:
1. Heptonstall et al, Invited Article: CO2 laser production of fused silica fibers for use in interferometric gravitational wave detector mirror suspensions. Rev. Sci. Instrum. 82 (2011) 011301.
2. Heptonstall et al, Enhanced characteristics of fused silica fibers using laser polishing. Class. Quant. Gravity, 31 (2014), 105006.
Files in Document:
Topics:
Authors:
Related Documents:
DCC Version 3.4.1, contact DCC Help
|
2023-02-09 13:44:10
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8213608264923096, "perplexity": 3806.1158290606472}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764499966.43/warc/CC-MAIN-20230209112510-20230209142510-00022.warc.gz"}
|
https://rpg.stackexchange.com/questions/84069/extremely-heavy-weight?noredirect=1
|
# Extremely Heavy Weight [duplicate]
This question already has an answer here:
RAW, is there anything that explicitly states what happens to a creature (either PC or monster) when a weight that exceeds their carry capacity is suddenly dropped on them?
EDIT: The intent is to find out the mechanics, if any exist, of what would happen if a Goliath Barbearian PC unceremoniously dropped, say, a 1000 lb. iron ball on a wolf. How would damage and other things be handled? If existing mechanics can't fully describe this situation, how would you handle it at your table?
# Rules As Written
I've not seen that (damage to something from falling onto that something) specifically.
The only firm reference is that falling damage is 1d6 per 10'.
# How I Handle It.
I've used the falling damage modified by size. I don't judge it by Carrying, just by sizes and falling distance.
Noting the principle in physics of equal and opposite reactions, and applying it liberally...
I assume a Medium creature does the same damage to what it falls upon a medium creature. I use a Dex save to avoid; DC 10 for ragdoll falls, but if intentional, it's an opposed Attacking character's Dexterity (Acrobatics) vs the target's Strength (Athletics) or Dexterity (Acrobatics).
• My understanding of the normal falling damage rules is that they are written for the default scenario of landing on the ground. So when a Medium creature falls onto something softer, like a bush, hay bale, or another Medium creature, I would subtract 10 feet from the distance fallen when calculating damage. Extending this to your house rule for damage to the thing fallen on, I'd likewise reduce the damage that the unfortunate soul below takes if the thing falling is a creature or otherwise has some give (e.g., a very leafy tree branch, perhaps). – Dan Henderson Jul 13 '16 at 18:32
• Possible exceptions to this reduced damage exception include particularly un-soft creatures, like a golem, mimic, or gargoyle; those that are extremely heavy for their size, like a rhinoceros; and those of different sizes (as you already mention). – Dan Henderson Jul 13 '16 at 18:50
• @DanHenderson you're talking about damage to the falling; the OP is asking about damage to the critter landed upon. As am I. I don't reduce damage for falling onto critters; the extra from concentration of force makes up for the landing upon something. – aramis Jul 14 '16 at 5:02
• I clearly am talking about the critter landed upon: "Extending this to your house rule for damage to the thing fallen on, I'd likewise reduce the damage that the unfortunate soul below takes..." – Dan Henderson Jul 14 '16 at 7:10
• The counterexample creatures I listed would be those which, if they landed on you, you would not get less damage. But if a human fell 20 feet to land on another human, the damage to either would certainly be less than the damage one human would suffer from falling 20 feet onto the hard ground, for the same reason seatbelts and airbags reduce injury - the deceleration is spread out over a slightly longer period of time (~1/2sec rather than ~1/50sec) because the person on the bottom collapses after the initial moment of impact. – Dan Henderson Jul 14 '16 at 7:18
|
2019-09-23 14:38:33
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3527282476425171, "perplexity": 2828.1874227128155}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-39/segments/1568514576965.71/warc/CC-MAIN-20190923125729-20190923151729-00462.warc.gz"}
|
https://www.gamedev.net/topic/638907-whats-your-preferred-way-of-letting-objects-affect-the-game/
|
\$10
### Image of the Day Submit
IOTD | Top Screenshots
## What's your preferred way of letting objects affect the game?
Old topic!
Guest, the last post of this topic is over 60 days old and at this point you may not reply in this topic. If you wish to continue this conversation start a new topic.
20 replies to this topic
### #1Cornstalks Members
Posted 16 February 2013 - 04:27 PM
I try to program as atomically as possible, following the Single Responsibility Principle as best I can (but I don't go extreme nazi on things, in case you panic). This works pretty well in a lot of my work.
But games are complicated. Objects interact with each other and the entire game's state regularly throughout the game. Units collide, one unit attacks another, a building makes units, a unit can create projectiles, and those projectiles can affect a large number of other units, etc. In other words, when programming this, you need a way for one object to be able to affect another object (or the game state itself, possibly by creating other units or projectiles). But it's not just a one-time case; many different units have different affects and interactions, which can result in messy code when trying to make all these possible interactions and events possible.
What's your preferred way of letting various objects (units) affect the game (by creating other units, buildings, projectiles, hurting/healing other units, etc.)?
I'm trying to think of some good ways to let these interactions occur that allows for (generally) clean code. My current solution is to have a Game class that actually runs a single game (by updating units (units update themselves as much as possible, but the Game tells them when to), enforcing game rules (so units can't walk up cliffs, for example), updating scores, etc.), and having each unit (or any other object that needs to interact with the Game itself) hold a reference to the Game to which it belongs. If a unit wants to do something that affects another unit or the Game, it calls the corresponding method on the Game object.
For example, if unit A wants to create a building at x, y, it calls Game's makeBuilding(owning_player, x, y), and the Game either creates the building, or it returns false indicating that was an illegal move (perhaps there's already a building there). Or, for another example, unit B wants to launch a rocket that's aimed at target_x, target_y. Unit B then calls Game's createProjectile(owning_player, unit_pos_x, unit_pos_y, target_x, target_y), and the Game creates the projectile and takes ownership of it. Or, another example, unit C performs a melee attack at position x, y in direction dir, so it calls Game's meleeAttack(owning_player, x, y, dir) and the Game can decide if another (enemy) unit was hit and deal damage as needed.
Is this a sane solution? Is there a better one? I'm afraid Game will grow into a monstrous class and have too much responsibility, but then again I'm not sure this can be avoided, and this solution will hopefully prevent a lot of spaghetti code and unnecessary coupling.
I want to try and explore as many (good) solutions as I can before I get too far down the road with this game I'm working on.
[ I was ninja'd 71 times before I stopped counting a long time ago ] [ f.k.a. MikeTacular ] [ My Blog ] [ SWFer: Gaplessly looped MP3s in your Flash games ]
### #2Khatharr Members
Posted 16 February 2013 - 06:06 PM
This should be a cool thread. Looking forward to responses.
void hurrrrrrrr() {__asm sub [ebp+4],5;}
There are ten kinds of people in this world: those who understand binary and those who don't.
### #3Tasche Members
Posted 16 February 2013 - 06:10 PM
though i basically went for the same system you are using in my own engine, so i cant say for sure that it is better, but in retrospect i wish i had implemented it as a event driven system, where game objects can register callbacks for an event. that way the interface stays 'clean', and you have a clearly defined connection point, namely the event message dispatcher/processing. also, when done properly, you get the 'only update when necessary' for free. needless to say, this variant is usually very performance effective (of course it always depends)
however, like i said, i haven't actually done it, so there could be some serious pitfalls i cant think of just now, i'm just learning this stuff myself.
### #4ApochPiQ Moderators
Posted 16 February 2013 - 09:17 PM
POPULAR
The key realization is that representing a "game" or "unit" or "building" might still be violating SRP.
Yes, on some level, those are "single" groups of responsibility. But they're really just God Classes of slightly more tame dimensions.
At some point you have to decide how much complexity is actually too much; in some games, like PacMan, representing a single "character" in a class is totally fine; in, say, an RPG, it's a mistake. This is entirely situational and dependent on the nature of the game you're writing, which is why I resist the idea that there's value in a "preferred way" or some kind of ruleset for making design decisions absent the relevant contextual information.
On one extreme, I find no sin in writing a game like TicTacToe in a couple of modules/units of organization (classes, if your language of choice believes in them, are just one unit of organization).
At the other, let's look at how a rich RPG might be implemented. I would consider each of the following to merit its own unit of organization in a complex game:
• Individual items
• Groups of items - think bags, containers, etc.
• Inventories
• Health bars
• Individual skills (swing sword, cast fireball)
• Skill groupings (all skills granted to sword wielders)
• Movement controller (can I walk? Run? Swim? Climb trees?)
• High level action management (coordination between using items, skills, movement)
• Combat engine (coordinates usage of skills/items and how they interact with health and movement)
• Status engine (handles things like temporary status ailments, buffs, etc.)
Movement controllers interact with the physics system to determine where units move. The status engine might be consulted to see if we are paralyzed or slowed for some reason. Skill groupings may selectively permit certain skills in various contextual scenarios. Action management might be wired to player input or an AI system. And so on.
Component architectures tend to be a popular contemporary solution for building up complex logic from all this sort of decomposition of responsibilities. They're just one option, though; and not necessarily the best.
Wielder of the Sacred Wands
### #5thePyro_13 Members
Posted 16 February 2013 - 09:24 PM
I've been implementing a Entity Component System and have found it to be very clean and easy to control.
Entity's store data. Controllers are attached to entities and add or alter the entities data(usually after reading it) they also handle events. Observers act on data changes and fire events.
I currently have an explicit controller, that takes the user input and changes the entitys state.
An implicit controller that takes movement states and turns it into actual movement.
An observer that watches entity movement and fires collision events.
and an observer that watches entity positions and animations and sends updated data to the renderer.
Such a system can require some intermediate techniques to set up, but has many advantages. Controllers and observers each only have one task, and so are very simple. Game data is cleanly split from the renderer(and both can be moved to different threads much easier).
### #6phantom Members
Posted 17 February 2013 - 04:12 AM
Apoch has covered design nicely, but I think this is still worth calling out.
But games are complicated. Objects interact with each other and the entire game's state regularly throughout the game. Units collide, one unit attacks another, a building makes units, a unit can create projectiles, and those projectiles can affect a large number of other units, etc. In other words, when programming this, you need a way for one object to be able to affect another object (or the game state itself, possibly by creating other units or projectiles). But it's not just a one-time case; many different units have different affects and interactions, which can result in messy code when trying to make all these possible interactions and events possible.
The bold section, in particular the end, is key to this I feel; a single object in the game is not going to touch the whole game state. Not ever and certainly not directly. This is something you go on to basically say yourself with the list of interactions below and this is a key place to start when thinking about it - each object has a pre-defined interaction with the world. If you were going to make an RTS game, for example, your 'tank' super-object (super-object => collection of parts) isn't suddenly going to be marrying The Hulk any time soon ;)
If we take a closer look at one of the examples you gave, "a building makes units", as a typical factory in an RTS game.
While this might look complicated it isn't really, at the highest level the building does one thing 'creates a unit', at which point beyond throwing out a few notifications it is done.
Its output might simply be a message (in the generic sense, not a 'message system' sense) which says "I've made vehicle <type> at location <here> for <player>". Other systems would be listening for that event and respond accordingly; a player's unit collection might update and the unit spawner might create a new object so it can be rendered.
At which point the messages could ripple further afield (unit collection => UI to display message/icon; spawner to any AI and physics systems to insert it into their world view; and so on) but the key interaction is done.
As to how you expose this... well, there is the erstwhile mentioned 'message system' which you could route all the messages to - however this global routing might not be what you want. Your 'factory' super-object might contain a list of delegates/functions to call when certain events happen and systems 'subscribe' to them or you could abstract the whole thing and come up with a processing graph which represents spawning in data; this would require your super-class still outputs events but the hookup of those input and output pins becomes data driven and potentially modified per level.
The key thing, however, is that each unit has a predetermined level of interaction with the world so it can't mutate all the state itself nor does it have to deal with every possible thing that can happen. Even something as simple as 'collision' isn't really the units problem as it is a problem for the physics under the hood.
### #7Tasche Members
Posted 17 February 2013 - 01:46 PM
i didn't mention this in my first post, but i've actually moved to something in between 'every object can affect everything else (within its class)' and an 'event system', im using a command queue of sorts.
all objects interacting with the world issue a command to the queue (most prominently spawning of stuff, but also game logic triggers and such), and the command queue handles dispatching data to other objects that need it. so its basically an event system, but i will not shoehorn my collision into there, that seems performant enough, since i got a broadphase covering all objects anyway.
a nice side effect of the queue was that i was easily able to prioritize and limit events per game loop iteration.
even though a lot still gets calculated inside the object class itself, i managed to move most of it to the queue.
deriving new objects from my game object class has become so clean and easy now, wish i had done it from day one (well actually i did, but limited to spawning at first)
EDIT: seems from the answers sofar that event systems are the way to go....
Edited by Tasche, 17 February 2013 - 01:49 PM.
### #8AllEightUp Moderators
Posted 18 February 2013 - 02:56 AM
I think Apoch and phantom cover the bases well but phantom brings up the real problem: how to trigger event and apply effect. No offense meant but phantoms "clarification" was unfortunately lacking in "how" it works.
It's all in the lists...
Games are "turn based", just really damned fast turns. So, your object wants to create a projectile, you create the new projectile and send a message to the games main loop saying "add this to the world". You send a message because you want the creation of a new projectile "NOT" to effect current turn processing, the creation is the result of this turn and will affect the world next turn, "not" this turn. (This delayed reaction solves ungodly numbers of bugs, just do it, really.)
Given we solved how to produce a projectile, how do you apply damage? Same basic deal, the projectile says "I hit" and posts a message for "do damage to xxx" because I hit it. Next frame xxx figures out if it is still alive after getting hit.
Or, xxx could post a "I did x damage to area" to the world object and the message can be applied to all objects in the area. Those objects can be told to die or removed, depends how you want to deal with the details here.
The key thing to remember is that even though the game is realtime the rules are basically turn based. If you try to apply rules interactively in a single frame, you are asking for all sorts of problems. OMG, he didn't die 1/60th of a second faster really isn't something a player will notice and not integrating everything every loop makes for a much more simplified coding problem. Network and all that, gets more complicated but still basically the same other than when to apply messages.
### #9darkhaven3 Members
Posted 18 February 2013 - 11:11 AM
typedef unsigned long coord_t;
typedef unsigned char byte_t;
typedef signed short mobjtype_t;
typedef enum {
MOBJ_CREATE=0,
MOBJ_ACTIVE,
MOBJ_ATTACK,
MOBJ_PAIN,
MOBJ_DEATH,
maxmobjframes;
} mobjstates_e;
typedef struct {
mobjtype_t type;
frame_t* mobjframes[maxmobjframes];
coord_t mapx,mapy;
mobj_t* target;
} mobj_t;
mobj_t represents every possible object in the game world, including the player. mobj_t contains a list of pointers to a master "frame list" that defines things like what graphic should be displayed for this object at this frame, what the next and/or previous frames for the current one are, and a function pointer to describe what needs to be done by this object this frame. mobj_t frames only have access to certain functions in a master list which should, by convention, only interact with other mobj_t types, and what object that ends up being is dependent on mobj_t.target. Objects can do whatever the hell they want to each other after this point.
Edited by darkhaven3, 18 February 2013 - 11:12 AM.
### #10ApochPiQ Moderators
Posted 18 February 2013 - 03:30 PM
mobj_t represents every possible object in the game world, including the player. mobj_t contains a list of pointers to a master "frame list" that defines things like what graphic should be displayed for this object at this frame, what the next and/or previous frames for the current one are, and a function pointer to describe what needs to be done by this object this frame. mobj_t frames only have access to certain functions in a master list which should, by convention, only interact with other mobj_t types, and what object that ends up being is dependent on mobj_t.target. Objects can do whatever the hell they want to each other after this point.
I can see this being a legitimate approach if and only if your game logic is heavily data-driven and validated by external tools prior to loading into the engine.
If you're hard-coding a non-trivial game in this style, it's going to turn into a Big Ball of Mud sooner rather than later.
Of course, as I said before, for a certain class of games (and more accurately, for a certain degree of simulation simplicity) that's totally fine.
I'd hate to see an RTS built that way, though ;-)
Wielder of the Sacred Wands
### #11darkhaven3 Members
Posted 18 February 2013 - 03:46 PM
It's heavily implied by the design approach that I'm not going to just allow any frame to start calling functions designed to load a map or initialize SDL or anything, just by convention of how the function-pointer list that the frames reference works. I imagine it will work well enough for the sidescroller I intend to use it with where there's not a whole lot of elegance in object management required.
Example being: "Object A tries to move in the direction of object B. Do whatever the function to move this object says to do. I don't care to sanity-check the results; the function will resolve that on its own. The end."
Edited by darkhaven3, 18 February 2013 - 03:55 PM.
### #12phantom Members
Posted 18 February 2013 - 04:27 PM
I can see this being a legitimate approach if and only if your game logic is heavily data-driven and validated by external tools prior to loading into the engine.
Really? All I can see is the many hundreds of ways this can crash and burn in a series of amusing ways... more so when you consider the follow up reply about not sanity checking the results...
### #13phantom Members
Posted 18 February 2013 - 04:31 PM
No offense meant but phantoms "clarification" was unfortunately lacking in "how" it works.
heh, I was vague on purpose as I felt the general idea was more important than the precise details ;)
For the record I'd favour a hybrid of using scripts/graphs designed by a designer to drive things but allow that to push messages into the message system if needs be. (Such as wanting to kick a sound off and not want to directly tie the sound system into the scripting/graph system.)
### #14Telastyn Members
Posted 18 February 2013 - 08:13 PM
What's your preferred way of letting various objects (units)
affect the game (by creating other units, buildings, projectiles,
hurting/healing other units, etc.)?
Directly. I've started in with component based entities, and letting isolated single responsibility components work directly against references (while ignoring what those references are, or how they are provided) is totally awesome. No event storms to hunt through, no message queues to interpret or parse, no bundles of void* or object to cast about.
Dependency resolution is done in one isolated place, and otherwise components go about their business: exposing functionality for other things to use/consume.
### #15Khatharr Members
Posted 19 February 2013 - 07:13 PM
ApochPiQ, on 16 Feb 2013 - 19:25, said:
The key realization is that representing a "game" or "unit" or "building" might still be violating SRP.
Yes, on some level, those are "single" groups of responsibility. But they're really just God Classes of slightly more tame dimensions.
I'm curious about this. I understand what you're saying in terms of risk to SRP, but is a 'Game' class really necessarily a god class? If all of the components of 'Game' are properly abstracted it could be pretty simplistic, couldn't it?
You made the point that representing an RPG character with a single class is inappropriate. Are you saying that a character instance should be represented by more than one object, such as dividing character state between components that deal with the character or do you just mean that the attributes of the character should be further abstracted?
void hurrrrrrrr() {__asm sub [ebp+4],5;}
There are ten kinds of people in this world: those who understand binary and those who don't.
### #16ApochPiQ Moderators
Posted 20 February 2013 - 07:21 PM
I'm not sure I follow the question, honestly.
I think you should organize your code so that individual units of organization have a single, clearly defined area of responsibility. I also think that's extremely context-sensitive, as I said before, and what makes sense for a small design may utterly backfire in a larger one - and vice versa.
There are no 100% applicable rules that govern every single design situation. You have to learn how to solve the problem at hand.
Wielder of the Sacred Wands
### #17Khatharr Members
Posted 20 February 2013 - 08:16 PM
I'm just asking what you mean when you say classes like "game" or "unit" or "building" are mini-god classes.
My understanding of SRP is that a unit should do one thing and do it well. When I think of a 'Game' class I want to limit it to essentially being a container for the game's components that works like a composite for updating them. The Game class itself only holds the component system objects, of which there are few, and runs the main control loop, which simply updates the member objects in order. I consider this to be adhering to SRP. While it's true that nearly the entire program runs within the context of the Game class instance, the Game class itself is only performing one duty - playing host to the parts. What the parts themselves do I consider to be their own areas of responsibility, not that of 'Game'. Is this sensible?
Just interested in your point of view.
Edited by Khatharr, 20 February 2013 - 08:20 PM.
void hurrrrrrrr() {__asm sub [ebp+4],5;}
There are ten kinds of people in this world: those who understand binary and those who don't.
### #18ApochPiQ Moderators
Posted 20 February 2013 - 11:10 PM
I was referring to the OP. Nothing grand or sweeping. I don't mean to imply that any class called "Game" or "Unit" or "Building" is inherently bad, although it seems that's what you inferred, for which I apologize for my lack of clarity.
If you read through the rest of my posts in this thread, it should (I hope!) be abundantly clear that composing such a class from SRP-adherent constituents is perfectly fine with me.
Wielder of the Sacred Wands
### #19Khatharr Members
Posted 21 February 2013 - 01:39 AM
Ah. Sorry. I was just sort of skimming before.
void hurrrrrrrr() {__asm sub [ebp+4],5;}
There are ten kinds of people in this world: those who understand binary and those who don't.
### #20Sporniket Members
Posted 21 February 2013 - 09:58 AM
In my recent game projects, I can relate my design to a company : game entitys (player, asteroids, aliens, laser beams,...) are under the care of a manager (PlayerManager, FlyingObjectsManager, BulletManager,etc...), that are under the care of the Main manager (Game)
Basically, the main manager tells the other managers to do their stuff or redraw their stuff when it's time, and the others manager tell their entity to update or redraw, and manages their group of entities (e.g. the flyingObjectManager spawns asteroids and bonus and recycle them when they go out of the screen).
When their should be an interaction between entities from different manager, it is carried by the main manager either procedurally (e.g. in my game loop retrieve a collection of flying objects/bullets from the corresponding manager, and ask the player manager if anything collides with a player), either event-driven (e.g., a player entity has collided with a bonus, the score must be updated and the bonus entity must be hidden).
Of course, my games are simple, so this level of details is adequate. For bigger projects, (e.g. an ARPG in an open world), their would be more levels of management, like in a big company.
Edited by Sporniket, 21 February 2013 - 09:59 AM.
Space Zig-Zag, a casual game of skill for Android by Sporniket-Studio.com
Old topic!
Guest, the last post of this topic is over 60 days old and at this point you may not reply in this topic. If you wish to continue this conversation start a new topic.
|
2017-02-22 22:08:12
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.1963217556476593, "perplexity": 1775.1868266151075}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171053.19/warc/CC-MAIN-20170219104611-00386-ip-10-171-10-108.ec2.internal.warc.gz"}
|
https://byjus.com/chemistry/class-11-quantitative-estimation-preparation-of-standard-solution-of-oxalic-acid/
|
Checkout JEE MAINS 2022 Question Paper Analysis : Checkout JEE MAINS 2022 Question Paper Analysis :
# Chemistry Practical Class 11 Preparation of standard solution of oxalic acid Viva Questions with Answers
Q1. What is quantitative analysis?
Answer. Quantitative analysis (QA) is a technique for understanding behaviour that employs mathematical and statistical modelling, measurement, and research. A given reality is represented numerically by quantitative analysts.
Q2. What are the formula and the basicity of hydrated oxalic acid and anhydrous oxalic acid?
Answer. The formula of oxalic acid is (C2H2O4); its usual form is that of the crystalline hydrate, (COOH)2·2H2O. The basicity for both the molecules is 2.
Q3. What do you mean by the basicity of an acid?
Answer. The number of hydrogen atoms in a molecule that can be ionised is referred to as an acid’s basicity.
Q4. Is oxalic acid a strong acid?
Answer. As an organic acid, oxalic acid is a weak acid. Oxalic acid is known to be a soft acid. It’s weaker than (water) H3O+ atom. But it is better than acetic acid, sulphuric acid, nitrous acid, benzoic acid, and so on.
Q5. What do you mean by a molar solution?
Answer. A molar solution is an aqueous solution containing one mole (gram-molecular weight) of a compound dissolved in one litre of solution. In other words, the solution has a molarity of 1 (1M) and a concentration of 1 mol/L.
Q6. Why is the standard solution always prepared in a volumetric flask?
Answer. A volumetric flask is used when it is necessary to know the volume of the solution being prepared precisely and accurately. Volumetric flasks have been calibrated (standardised) to specific volumes. This enables scientists to determine how much liquid is contained in a specific flask when it is filled.
Q7. What is the molar mass of the oxalic acid?
Answer. The molar mass of the oxalic acid is 126 g.
Q8. How will you prepare 250 ml of 0.05 M oxalic acid solution?
Answer. The molecular formula of solid crystalline oxalic acid is-
Its molecular mass is 126 g.
Molarity of the required solution = 0.05 M or M/20
To prepare 250ml of 0.05 M oxalic acid solution –
$$\begin{array}{l}\frac{126\times 250\times 0.05}{1000}=1.575 g\end{array}$$
1.575 g of oxalic acid is to be dissolved in water in a 250 ml volumetric flask.
Q9. What type of substance can be used for preparing a standard solution?
Answer. A standard solution is a solution with an exactly known concentration. Dissolving a primary standard in a suitable solvent yields a standard solution (such as distilled water).
Q10. What is meant by “weighing by transfer”? When is this used?
Answer. It can be used to balance beams and springs. It means that the weight of the body has been increased by either adding mass to the body or increasing the force of gravity.
Q11. Why should weights never be touched by hands?
Answer. This leads to weighing errors because some matter may be transferred from the hand to the weight. For accurate measurements, forceps should be used to transfer weights from the weight box to the pan of the balance and a spatula should be used to transfer the reagent from the bottle onto the watch glass.
Q12. Why is distilled water always used to prepare the standard solution?
Answer. Distilled water is essentially inert, which means it contains only hydrogen and oxygen. Since distillation kills most organic matter and removes minerals from water, it is an excellent control element for science projects and laboratory tests.
Q13. What precautions to be taken while performing the experiment?
Answer. The precautions to be taken while performing the experiment are as follows-
• Weighing of oxalic acid crystals needs weights of 2g + 1g + 100mg + 5mg.
• Wash the watch glass carefully so that even a single crystal of oxalic acid is not left on the watch glass.
• The last few drops should be added using a pipette to avoid extra addition of distilled water above the mark on the neck of the measuring cylinder.
• If it is necessary to titrate oxalic acid or oxalate, add the required dilute H2SO4 amount and heat the flask to 60 °-70 ° C.
Q14. The standard solution of oxalic acid is used for which purpose?
Answer. The standard solution of oxalic acid can be used to determine the unknown concentration of an alkali solution.
Q15. What is the unit to express the strength of a standard solution?
Answer. The strength of a standard solution is expressed in moles per litre.
Q16. What method is used to calculate the strength of a given solution?
Answer. The equivalent law is used to determine strength. The amount of material equivalence to be titrated is equal to the amount of titrant equivalence used under this legislation.
Q17. What is the standard solution?
Answer. A solution of known concentration is what the standard solution is called. A normal solution can be made by dissolving a known amount of the substance in a specific volume of the solvent.
Q18. What do you mean by “concordant readings”?
|
2022-06-29 12:51:18
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.534241259098053, "perplexity": 1939.0042410102717}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103639050.36/warc/CC-MAIN-20220629115352-20220629145352-00771.warc.gz"}
|
https://gharpedia.com/properties-marble-stone/
|
## Properties of Marble Stone: All you Need to Know
Marble properties are essential when it comes to identification of rocks and while using in home at different place. Marble stone is a natural material hence each piece differs from the other and gives each tiles its own unique appearance. Properties of marble stone include hardness, colour, fracture, luster, compressive strength, etc.
##### Also Read : Marble Stone Flooring: All You Need to Know
Courtesy - Shutterstock
### 01. Formation of Marble:
Marble is a metamorphic rock. It is formed from limestone by pressure and heat in the earth crust due to geological processes. Marble is formed as a result of the recrystallization of limestone under the extreme pressure and temperature of geologic processes. The effect of this process is the creation of stone with a very rigid crystalline structure and small but definite porosity.
### 02. The Hardness of Marble:
Marble is less porous but stronger than limestone, but less durable than granite. Rocks are rated on the Mohs hardness scale which rates the rocks on the scale from 1 to 10. Rock with hardness 1-3 are called soft rocks from 3-6 are called medium hardness rocks and 6-10 are called hard rocks. Marble has a hardness of 3-4 on the Mohs hardness scale. As a result, marble is easy to carve and that makes it useful for producing sculptures and ornamental objects and hence popularly used in temples since ancient times.
### 03. Colour:
Marble is usually a light-coloured rock. The marble that contains very few impurities such as clay minerals, iron oxides, or bituminous material can be bluish, grey, pink, yellow, or black in colour.
Marble having an extremely high purity with a bright white colour is the very useful marble. It is often mined, crushed to powder and processed to remove many impurities. The resulting product is called “whiting.”
### 04. Compressive Strength:
The compressive strength of marble is 115.00 N/mm2 as good as M10 concrete.
### 05. Durability:
Marble is hard, sound and dense stone.
### 06. Porosity:
Marble is more porous as compared to granite as lemons, wine and vinegar will get absorbed into marble and cause permanent stain. Hence it is less stain resistant.
### 07. Acid Resistance:
Marble can’t resist acid or acidic foods. Its colour and texture will get changed when in contact with acidic contents.
### 08. Luster:
The luster of marble is due to interaction of light with the surface of marble. Luster of marble is dull to shining to subvitreous.
### 09. Fire Resistance:
The marble is not considered flammable hence marble is found to be a fire – resistant material.
### 10. Use of Marble:
Marble has been used since ancient times in sculpture as well as, as a decorative construction material and hence more properly used in temples. Marble is used for outdoor sculpture. It is also used in exterior walls, flooring, decorative features, stairway, kitchen countertop, cupboard and walkways etc. The use of marble depends upon its quality.
Marble stone is an ideal material for building because of durability i.e long life and resistance to weather cycles, and less maintenance.
## Material Exhibition
Explore the world of materials.
|
2019-01-22 15:33:11
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.38161808252334595, "perplexity": 4646.751841073503}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-04/segments/1547583857913.57/warc/CC-MAIN-20190122140606-20190122162606-00247.warc.gz"}
|
https://mathhelpboards.com/threads/linear-ode-for-a-fundamental-solution-set.1340/
|
# Linear ODE for a fundamental solution set
##### New member
Question:
For the interval x > 0 and the function set S = { 3ln(x), ln2, ln(x), ln(5x)}, construct a linear ODE of the lowest order.
My work:
Taking the wronskian for this solution set, I get it as 0. Doesn't that mean that a linear ODE for this set cannot be found?
I'm very confused here, and any help is appreciated. Thanks
#### Opalg
##### MHB Oldtimer
Staff member
Question:
For the interval x > 0 and the function set S = { 3ln(x), ln2, ln(x), ln(5x)}, construct a linear ODE of the lowest order.
My work:
Taking the wronskian for this solution set, I get it as 0. Doesn't that mean that a linear ODE for this set cannot be found?
I'm very confused here, and any help is appreciated. Thanks
Notice that the four functions in the set S are not linearly independent. They are all of the form $A + B\ln x$ (where A and B are constants). So for example you could replace S by the smaller set $\{\ln 2, \ln x\}$.
##### New member
Ok I see, so since the terms are linearly dependant, we need to rewrite the fundamental set? So how do you come to the conclusion that we can use the smaller set of {ln2, ln x}. Is it because these two terms are lin. independent?
Could I use {3lnx, ln2} ?
#### HallsofIvy
##### Well-known member
MHB Math Helper
Ok I see, so since the terms are linearly dependant, we need to rewrite the fundamental set? So how do you come to the conclusion that we can use the smaller set of {ln2, ln x}. Is it because these two terms are lin. independent?
Could I use {3lnx, ln2} ?
Yes, Opalg told you that all of those are of the form Aln(x)+ B for some A and B. You could, just as easily write them as A'(3 ln(x))+ B where A'= A/3.
Of your original set, 3ln(x), ln(2), ln(x), and ln(5x), note that ln(2) is a constant 3ln(x) is just 3 times ln(x), and ln(5x)= ln(x)+ ln(5). So all of them are of the form "a multiple of ln(x)" plus a constant. That is Aln(x)+ B.
|
2021-01-16 00:32:51
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9223302006721497, "perplexity": 976.5054710803923}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703497681.4/warc/CC-MAIN-20210115224908-20210116014908-00178.warc.gz"}
|
https://de.zxc.wiki/wiki/Area_Mass_Index
|
# Area Mass Index
The Area Mass Index (AMI) is a parameter from anthropometry and represents the ratio of a person's body mass - measured in kilograms (kg) - to their actual body surface - measured in m² - whereby the body surface in turn depends on the individual physique (stature). and the gender of a person.
## Scientific background
The surface of a person's body is also the surface of his heat exchange with the environment. The heat generation of humans, which is necessary to maintain the body temperature , depends on the mass - more precisely on the muscle mass. The ratio of body mass to body surface area is not constant, but is determined by the body shape. Compact bodies always have a significantly smaller body surface area per kg of body mass than lean body shapes. In this respect, slim body shapes give off considerably more energy in the form of heat to the environment than compact ones, provided that the conditions of heat exchange (temperature of the environment, insulation through clothing, etc.) are identical.
## Definition of the AMI
The definition of the AMI was originally based on the evaluation of measurements of the actual body surface of test persons with the help of 3D body scans in cooperation with the Size Germany project . The mathematical evaluation of 188 data sets had shown at the time that the following formulas can be used to approximately calculate the AMI, based on body mass m (in kilograms) and body height H (in meters):
${\ displaystyle {\ mathit {AMI}} = 0 {,} 865 {\ frac {m} {H ^ {2}}} + 18 {,} 56}$ for women and
${\ displaystyle {\ mathit {AMI}} = 1 {,} 048 {\ frac {m} {H ^ {2}}} + 16 {,} 08}$ for men.
## Calculation of the body surface
The reciprocal of the Area Mass Index represents the specific body surface area of a person, in the unit m² / kg. By multiplying it by the body mass, the surface area of a person can be calculated in m².
## Further developments
As a further development of the AMI, the so-called AMI formula, also called AMI 2.0, has existed since 2014. This allows the precise calculation of the energy requirement of a healthy person on the basis of the AMI, taking into account the individual body composition and the activity profile of this person.
The Heat Performance Index (HPI) has also been developed. This is used to divide people into the following groups based on their heat dissipation to the environment: ultra high performers, high performers, ideal performers, low performers and ultra low performers. The Heat Performance Index can be calculated by any user with the help of anthropometric data (weight, height, fat percentage, muscle percentage).
## Individual evidence
1. E. Schlich, M. Schumm, M. Schlich: 3D body scan as an anthropometric method for determining the specific body surface. In: Nutritional review. 4, 2010, pp. 178-183.
2. E. Schlich: The AMI formula - 3D body scanning in online nutritional advice. 22nd Aachen dietetics training, 19.-20. September 2014. Association for Nutrition and Dietetics eV
3. E. Schlich: About the Area Mass Index (AMI) for the energy balance of humans. (= Nutrition and Consumer Education. Volume 2). Shaker Verlag, Aachen 2014, ISBN 978-3-8440-3202-4 .
4. http://www.area-mass-index.de/heat-performance-index/
|
2022-09-30 16:25:47
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 2, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6565308570861816, "perplexity": 2061.8602885940318}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335491.4/warc/CC-MAIN-20220930145518-20220930175518-00403.warc.gz"}
|
https://nbviewer.jupyter.org/github/aparrish/rwet/blob/master/understanding-word-vectors.ipynb
|
# Understanding word vectors¶
... for, like, actual poets. By Allison Parrish
In this tutorial, I'm going to show you how word vectors work.
In [2]:
%matplotlib inline
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
pd.set_option('max_rows', 25)
plt.style.use('ggplot')
plt.rcParams["figure.figsize"] = (10, 4)
## Why word vectors for poetry?¶
Tzara proposed a method of composing a Dada poem: cut out the words of a text, shake them in a bag, then write down the words as you remove them at random from the bag. The very idea caused a riot and sundered the avant garde in twain (or so the story goes). For poets, word vectors are (for better or worse) a tool to help soften the blow of cut-up techniques: instead of selecting words at random, we might select units of text that are close in meaning to other units. This can yield poetic juxtapositions with subtle effects impossible to achieve with other techniques.
Also, it's fun!
## Animal similarity and simple linear algebra¶
We'll begin by considering a small subset of English: words for animals. Our task is to be able to write computer programs to find similarities among these words and the creatures they designate. To do this, we might start by making a spreadsheet of some animals and their characteristics. In Python, you'd define such a spreadsheet like this:
In [3]:
animals = [
{'name': 'kitten', 'cuteness': 95, 'size': 15},
{'name': 'hamster', 'cuteness': 80, 'size': 8},
{'name': 'tarantula', 'cuteness': 8, 'size': 3},
{'name': 'puppy', 'cuteness': 90, 'size': 20},
{'name': 'crocodile', 'cuteness': 5, 'size': 40},
{'name': 'dolphin', 'cuteness': 60, 'size': 45},
{'name': 'panda bear', 'cuteness': 75, 'size': 40},
{'name': 'lobster', 'cuteness': 2, 'size': 15},
{'name': 'capybara', 'cuteness': 70, 'size': 30},
{'name': 'elephant', 'cuteness': 65, 'size': 90},
{'name': 'mosquito', 'cuteness': 1, 'size': 1},
{'name': 'goldfish', 'cuteness': 25, 'size': 2},
{'name': 'horse', 'cuteness': 50, 'size': 50},
{'name': 'chicken', 'cuteness': 25, 'size': 15}
]
animal_lookup = {item['name']: (item['cuteness'], item['size']) for item in animals}
And then display it:
In [4]:
pd.DataFrame(animals, columns=['name', 'cuteness', 'size'])
Out[4]:
name cuteness size
0 kitten 95 15
1 hamster 80 8
2 tarantula 8 3
3 puppy 90 20
4 crocodile 5 40
5 dolphin 60 45
6 panda bear 75 40
7 lobster 2 15
8 capybara 70 30
9 elephant 65 90
10 mosquito 1 1
11 goldfish 25 2
12 horse 50 50
13 chicken 25 15
This table associates a handful of animals with two numbers: their cuteness and their size, both in a range from zero to one hundred. (The values themselves are simply based on my own judgment. Your taste in cuteness and evaluation of size may differ significantly from mine. As with all data, these data are simply a mirror reflection of the person who collected them.)
These values give us everything we need to make determinations about which animals are similar (at least, similar in the properties that we've included in the data). Try to answer the following question: Which animal is most similar to a capybara? You could go through the values one by one and do the math to make that evaluation, but visualizing the data as points in 2-dimensional space makes finding the answer very intuitive:
In [5]:
plt.figure(figsize=(8, 8))
plt.scatter([item[0] for item in animal_lookup.values()],
[item[1] for item in animal_lookup.values()])
plt.xlabel('cuteness')
plt.ylabel('size')
for label, (cute, size) in animal_lookup.items():
plt.text(cute+1, size+1, label, fontsize=12)
plt.show()
The plot shows us that the closest animal to the capybara is the panda bear (again, in terms of its subjective size and cuteness). One way of calculating how "far apart" two points are is to find their Euclidean distance. (This is simply the length of the line that connects the two points.) For points in two dimensions, Euclidean distance can be calculated with the following Python function:
In [6]:
import math
def distance2d(a, b):
return math.sqrt((a[0] - b[0])**2 + (a[1] - b[1])**2)
(The ** operator raises the value on its left to the power on its right.)
So, the distance between "capybara" (70, 30) and "panda" (74, 40):
In [7]:
distance2d(animal_lookup['capybara'], animal_lookup['panda bear']) # panda and capybara
Out[7]:
11.180339887498949
... is less than the distance between "tarantula" and "elephant":
In [8]:
distance2d(animal_lookup['tarantula'], animal_lookup['elephant']) # tarantula and elephant
Out[8]:
104.0096149401583
Modeling animals in this way has a few other interesting properties. For example, you can pick an arbitrary point in "animal space" and then find the animal closest to that point. If you imagine an animal of size 25 and cuteness 30, you can easily look at the space to find the animal that most closely fits that description: the chicken.
Reasoning visually, you can also answer questions like: what's halfway between a chicken and an elephant? Simply draw a line from "elephant" to "chicken," mark off the midpoint and find the closest animal. (According to our chart, halfway between an elephant and a chicken is a horse.)
You can also ask: what's the difference between a hamster and a tarantula? According to our plot, it's about seventy five units of cute (and a few units of size).
The relationship of "difference" is an interesting one, because it allows us to reason about analogous relationships. In the chart below, I've drawn an arrow from "tarantula" to "hamster" (in blue):
In [9]:
plt.figure(figsize=(8, 8))
plt.scatter([item[0] for item in animal_lookup.values()],
[item[1] for item in animal_lookup.values()])
plt.xlabel('cuteness')
plt.ylabel('size')
for label, (cute, size) in animal_lookup.items():
plt.text(cute+1, size+1, label, fontsize=12)
plt.arrow(
*(animal_lookup['tarantula']),
*(np.array(animal_lookup['hamster']) - np.array(animal_lookup['tarantula'])),
fc="b", ec="b", head_width=1.5, head_length=2, linewidth=1.5)
plt.arrow(
*(animal_lookup['chicken']),
*(np.array(animal_lookup['hamster']) - np.array(animal_lookup['tarantula'])),
fc="r", ec="r", head_width=1.5, head_length=2, linewidth=1.5)
plt.show()
You can understand this arrow as being the relationship between a tarantula and a hamster, in terms of their size and cuteness (i.e., hamsters and tarantulas are about the same size, but hamsters are much cuter). In the same diagram, I've also transposed this same arrow (this time in red) so that its origin point is "chicken." The arrow ends closest to "kitten." What we've discovered is that the animal that is about the same size as a chicken but much cuter is... a kitten. To put it in terms of an analogy:
Tarantulas are to hamsters as chickens are to kittens.
A sequence of numbers used to identify a point is called a vector, and the kind of math we've been doing so far is called linear algebra. (Linear algebra is surprisingly useful across many domains: It's the same kind of math you might do to, e.g., simulate the velocity and acceleration of a sprite in a video game.)
A set of vectors that are all part of the same data set is often called a vector space. The vector space of animals in this section has two dimensions, by which I mean that each vector in the space has two numbers associated with it (i.e., two columns in the spreadsheet). The fact that this space has two dimensions just happens to make it easy to visualize the space by drawing a 2D plot. But most vector spaces you'll work with will have more than two dimensions—sometimes many hundreds. In those cases, it's more difficult to visualize the "space," but the math works pretty much the same.
## Language with vectors: colors¶
So far, so good. We have a system in place—albeit highly subjective—for talking about animals and the words used to name them. I want to talk about another vector space that has to do with language: the vector space of colors.
Colors are often represented in computers as vectors with three dimensions: red, green, and blue. Just as with the animals in the previous section, we can use these vectors to answer questions like: which colors are similar? What's the most likely color name for an arbitrarily chosen set of values for red, green and blue? Given the names of two colors, what's the name of those colors' "average"?
We'll be working with this color data from the xkcd color survey. The data relates a color name to the RGB value associated with that color. Here's a page that shows what the colors look like. Download the color data and put it in the same directory as this notebook.
A few notes before we proceed:
Now, import the json library and load the color data:
In [10]:
import json
In [11]:
color_data = json.loads(open("xkcd.json").read())
The following function converts colors from hex format (#1a2b3c) to a tuple of integers:
In [12]:
def hex_to_int(s):
s = s.lstrip("#")
return np.array([int(s[:2], 16), int(s[2:4], 16), int(s[4:6], 16)])
And the following cell creates a dictionary and populates it with mappings from color names to RGB vectors for each color in the data:
In [13]:
colors = dict()
for item in color_data['colors']:
colors[item["color"]] = hex_to_int(item["hex"])
Testing it out:
In [14]:
colors['olive']
Out[14]:
array([110, 117, 14])
In [15]:
colors['red']
Out[15]:
array([229, 0, 0])
In [16]:
colors['black']
Out[16]:
array([0, 0, 0])
In [17]:
colors['cyan']
Out[17]:
array([ 0, 255, 255])
### Vector math¶
Before we keep going, we'll need some functions for performing basic vector "arithmetic." These functions will work with vectors in spaces of any number of dimensions.
The first function returns the Euclidean distance between two points:
In [18]:
from numpy.linalg import norm
def distance(a, b):
return norm(a - b)
In [19]:
distance(colors['cyan'], colors['blue'])
Out[19]:
190.7275543805876
In [20]:
distance(np.array([10, 1]), np.array([5, 2]))
Out[20]:
5.0990195135927845
Subtracting vectors:
In [21]:
colors['cyan'] - colors['blue']
Out[21]:
array([ -3, 188, 32])
Adding vectors:
In [22]:
colors['cyan'] + colors['blue']
Out[22]:
array([ 3, 322, 478])
You can find the average of two vectors using the expected formula:
In [23]:
(colors['cyan'] + colors['blue']) / 2
Out[23]:
array([ 1.5, 161. , 239. ])
Or use the following function, which finds the mean of any number of vectors:
In [24]:
def meanv(vecs):
total = np.sum(vecs, axis=0)
return total / len(vecs)
In [25]:
meanv([colors['red'], colors['pink'], colors['maroon']])
Out[25]:
array([195., 43., 75.])
Just as a test, the following cell shows that the distance from "red" to "green" is greater than the distance from "red" to "pink":
In [26]:
distance(colors['red'], colors['green']) > distance(colors['red'], colors['pink'])
Out[26]:
True
### Finding the closest item¶
Just as we wanted to find the animal that most closely matched an arbitrary point in cuteness/size space, we'll want to find the closest color name to an arbitrary point in RGB space. The easiest way to find the closest item to an arbitrary vector is simply to find the distance between the target vector and each item in the space, in turn, then sort the list from closest to most distant.
Calculating the distance between two points, however, is computationally expensive, especially when you're working with data that has many dimensions. To solve this problem, computer scientists and mathematicians came up with the idea of approximate nearest neighbor search, a technique for finding similar points in high-dimensional spaces that make use of various tricks to speed up the process (potentially at the cost of accuracy).
We're going to use a library I made called Simple Neighbors that builds such an approximate nearest neighbors index to quickly return the closest items for any given vector. (Simple Neighbors is based on Annoy.)
Install Simple Neighbors like so:
In [27]:
import sys
!{sys.executable} -m pip install simpleneighbors
Collecting simpleneighbors
Using cached simpleneighbors-0.1.0-py2.py3-none-any.whl (12 kB)
Installing collected packages: simpleneighbors
Successfully installed simpleneighbors-0.1.0
You'll want to install Annoy as well, to speed up the nearest neighbor search. As of this writing, I recommend Annoy 1.16.3:
In [28]:
import sys
!{sys.executable} -m pip install annoy==1.16.3
Collecting annoy==1.16.3
Using cached annoy-1.16.3.tar.gz (644 kB)
Building wheels for collected packages: annoy
Building wheel for annoy (setup.py) ... done
Created wheel for annoy: filename=annoy-1.16.3-cp38-cp38-macosx_10_9_x86_64.whl size=67466 sha256=a58d081c546e5df896ba76ec57b4aa8c86617b99063c044fb9ddc3bb44ef9116
Stored in directory: /Users/allison/Library/Caches/pip/wheels/93/66/00/3527630e17462dcb505b4688f787b40bc020268237d54e5e79
Successfully built annoy
Installing collected packages: annoy
Successfully installed annoy-1.16.3
If you get an error from the above, and you're using Anaconda, you can try installing the Anaconda package:
In [29]:
import sys
!conda install -y --prefix {sys.prefix} -c conda-forge python-annoy
Collecting package metadata (current_repodata.json): done
Solving environment: done
==> WARNING: A newer version of conda exists. <==
current version: 4.10.1
latest version: 4.10.3
Please update conda by running
\$ conda update -n base -c defaults conda
## Package Plan ##
environment location: /Users/allison/opt/miniconda3/envs/rwet-2022
added / updated specs:
- python-annoy
The following packages will be downloaded:
package | build
---------------------------|-----------------
libcxx-11.1.0 | habf9029_0 1.0 MB conda-forge
python-annoy-1.17.0 | py38h0a5c65b_2 61 KB conda-forge
------------------------------------------------------------
Total: 1.1 MB
The following NEW packages will be INSTALLED:
python-annoy conda-forge/osx-64::python-annoy-1.17.0-py38h0a5c65b_2
The following packages will be UPDATED:
ca-certificates pkgs/main::ca-certificates-2021.5.25-~ --> conda-forge::ca-certificates-2021.5.30-h033912b_0
libcxx pkgs/main::libcxx-10.0.0-1 --> conda-forge::libcxx-11.1.0-habf9029_0
The following packages will be SUPERSEDED by a higher-priority channel:
certifi pkgs/main::certifi-2021.5.30-py38hecd~ --> conda-forge::certifi-2021.5.30-py38h50d1736_0
openssl pkgs/main::openssl-1.1.1k-h9ed2024_0 --> conda-forge::openssl-1.1.1k-h0d85af4_0
Downloading and Extracting Packages
python-annoy-1.17.0 | 61 KB | ##################################### | 100%
libcxx-11.1.0 | 1.0 MB | ##################################### | 100%
Preparing transaction: done
Verifying transaction: done
Executing transaction: done
If neither of those works—especially if you're using Windows—you may need to install a C++ compiler, or you can use this notebook on Binder. (If you're a student, come see me for more info.)
Once you have the library installed, import it like so:
In [30]:
from simpleneighbors import SimpleNeighbors
The first parameter to SimpleNeighbors() is the number of dimensions in the data; the second is the distance metric to use. (This defaults to cosine distance, but in this case we want to use Euclidean distance.)
The .add_one() method adds an item and a vector to the index. Once all the items are added, .build() actually builds the index. This should go quick, since we don't have that much data!
In [31]:
color_lookup = SimpleNeighbors(3, 'euclidean')
for name, vec in colors.items():
color_lookup.add_one(name, vec)
color_lookup.build()
To find the nearest item to a specified vector, pass the vector to the .nearest() method:
In [32]:
color_lookup.nearest(colors['red'])
Out[32]:
['red',
'fire engine red',
'bright red',
'tomato red',
'cherry red',
'scarlet',
'vermillion',
'orangish red',
'cherry',
'lipstick red',
'darkish red',
'neon red']
Limit the number of results returned with the second parameter:
In [33]:
color_lookup.nearest(colors['red'], 3)
Out[33]:
['red', 'fire engine red', 'bright red']
Here are the colors closest to (150, 60, 150):
In [34]:
color_lookup.nearest([150, 60, 150])
Out[34]:
['warm purple',
'medium purple',
'ugly purple',
'light eggplant',
'purpleish',
'purplish',
'purply',
'light plum',
'purple',
'muted purple',
'dull purple',
'dusty purple']
The .dist() method gives the distance between two items in the index:
In [35]:
color_lookup.dist('rose', 'pink')
Out[35]:
94.28679656982422
In [36]:
color_lookup.dist('green', 'purple')
Out[36]:
221.90313720703125
And you can check the .corpus attribute to see if an item is even in the index to begin with:
In [37]:
'orange' in color_lookup.corpus
Out[37]:
True
In [38]:
'kitten' in color_lookup.corpus
Out[38]:
False
### Color magic¶
The magical part of representing words as vectors is that the vector operations we defined earlier appear to operate on language the same way they operate on numbers. For example, if we find the word closest to the vector resulting from subtracting "red" from "purple," we get a series of "blue" colors:
In [39]:
color_lookup.nearest(colors['purple'] - colors['red'])
Out[39]:
['cobalt blue',
'royal blue',
'darkish blue',
'true blue',
'royal',
'prussian blue',
'dark royal blue',
'deep blue',
'marine blue',
'deep sea blue',
'darkblue',
'twilight blue']
This matches our intuition about RGB colors, which is that purple is a combination of red and blue. Take away the red, and blue is all you have left.
You can do something similar with addition. What's blue plus green?
In [40]:
color_lookup.nearest(colors['blue'] + colors['green'])
Out[40]:
['bright turquoise',
'bright light blue',
'bright aqua',
'cyan',
'neon blue',
'aqua blue',
'bright cyan',
'bright sky blue',
'aqua',
'bright teal',
'aqua marine',
'greenish cyan']
That's right, it's something like turquoise or cyan! What if we find the average of black and white? Predictably, we get gray:
In [41]:
# the average of black and white: medium grey
color_lookup.nearest(meanv([colors['white'], colors['black']]))
Out[41]:
['medium grey',
'purple grey',
'steel grey',
'battleship grey',
'grey purple',
'purplish grey',
'greyish purple',
'steel',
'warm grey',
'green grey',
'brown grey',
'bluish grey']
Just as with the tarantula/hamster example from the previous section, we can use color vectors to reason about relationships between colors. In the cell below, finding the difference between "pink" and "red" then adding it to "blue" seems to give us a list of colors that are to blue what pink is to red (i.e., a slightly lighter, less saturated shade):
In [42]:
# an analogy: pink is to red as X is to blue
pink_to_red = colors['pink'] - colors['red']
color_lookup.nearest(pink_to_red + colors['blue'])
Out[42]:
['neon blue',
'bright sky blue',
'bright light blue',
'cyan',
'bright cyan',
'bright turquoise',
'clear blue',
'azure',
'dodger blue',
'lightish blue',
'sky blue',
'aqua blue']
Another example of color analogies: Navy is to blue as true green/dark grass green is to green:
In [43]:
# another example:
navy_to_blue = colors['navy'] - colors['blue']
color_lookup.nearest(navy_to_blue + colors['green'])
Out[43]:
['true green',
'dark grass green',
'grassy green',
'racing green',
'bottle green',
'dark olive green',
'darkgreen',
'forrest green',
'grass green',
'navy green',
'dark olive',
'hunter green']
The examples above are fairly simple from a mathematical perspective but nevertheless feel magical: they're demonstrating that it's possible to use math to reason about how people use language.
### Interlude: A Love Poem That Loses Its Way¶
In [44]:
import random
red = colors['red']
blue = colors['blue']
for i in range(14):
rednames = color_lookup.nearest(red)
bluenames = color_lookup.nearest(blue)
print("Roses are " + rednames[0] + ", violets are " + bluenames[0])
red = colors[random.choice(rednames[1:])]
blue = colors[random.choice(bluenames[1:])]
Roses are red, violets are blue
Roses are vermillion, violets are deep sky blue
Roses are bright orange, violets are water blue
Roses are deep orange, violets are windows blue
Roses are dark orange, violets are cornflower blue
Roses are brick orange, violets are faded blue
Roses are orange brown, violets are cool blue
Roses are dirty orange, violets are off blue
Roses are pumpkin, violets are steel blue
Roses are rusty orange, violets are grey blue
Roses are brick orange, violets are greyish blue
Roses are rust orange, violets are grey/blue
Roses are dark orange, violets are bluish grey
Roses are burnt sienna, violets are grey/blue
### Doing bad digital humanities with color vectors¶
With the tools above in hand, we can start using our vectorized knowledge of language toward academic ends. In the following example, I'm going to calculate the average color of Mary Shelley's Frankenstein.
(Before you proceed, make sure to download the text file from Project Gutenberg and place it in the same directory as this notebook.)
First, we'll load spaCy. Note: For the rest of this tutorial to work, you'll want to download at least the medium model for English. The default "small" model doesn't include word vectors. I've also written an introduction to spaCy that includes installation instructions.
In [45]:
import spacy
nlp = spacy.load('en_core_web_md')
To calculate the average color, we'll follow these steps:
1. Parse the text into words
2. Check every word to see if it names a color in our vector space. If it does, add it to a list of vectors.
3. Find the average of that list of vectors.
4. Find the color(s) closest to that average vector.
The following cell performs steps 1-3:
In [46]:
doc = nlp(open("84-0.txt").read())
# use word.lower_ to normalize case
drac_colors = [colors[word.lower_] for word in doc if word.lower_ in colors]
avg_color = meanv(drac_colors)
print(avg_color)
[125.52050473 134.0851735 121.63722397]
Now, we'll pass the averaged color vector to the closest() function, yielding... well, it's just a grey mush, which is kinda what you'd expect from adding a bunch of colors together willy-nilly.
In [47]:
color_lookup.nearest(avg_color)
Out[47]:
['medium grey',
'green grey',
'steel grey',
'grey green',
'brown grey',
'battleship grey',
'greeny grey',
'purple grey',
'warm grey',
'grey/green',
'slate green',
'steel']
On the other hand, here's what we get when we average the colors of Charlotte Perkins Gilman's classic The Yellow Wallpaper. (Download from here and save in the same directory as this notebook if you want to follow along.) The result definitely reflects the content of the story, so maybe we're on to something here.
In [48]:
doc = nlp(open("1952-0.txt").read())
wallpaper_colors = [colors[word.lower_] for word in doc if word.lower_ in colors]
avg_color = meanv(wallpaper_colors)
color_lookup.nearest(avg_color)
Out[48]:
['pea',
'puke yellow',
'sick green',
'vomit yellow',
'booger',
'olive yellow',
'snot',
'gross green',
'dirty yellow',
'mustard yellow',
'dark yellow',
'baby puke green']
Exercise for the reader: Use the vector arithmetic functions to rewrite a text, making it...
• more blue (i.e., add colors['blue'] to each occurrence of a color word); or
• more light (i.e., add colors['white'] to each occurrence of a color word); or
• darker (i.e., attenuate each color. You might need to write a vector multiplication function to do this one right.)
## Distributional semantics¶
In the previous section, the examples are interesting because of a simple fact: colors that we think of as similar are "closer" to each other in RGB vector space. In our color vector space, or in our animal cuteness/size space, you can think of the words identified by vectors close to each other as being synonyms, in a sense: they sort of "mean" the same thing. They're also, for many purposes, functionally identical. Think of this in terms of writing, say, a search engine. If someone searches for "mauve trousers," then it's probably also okay to show them results for, say,
In [49]:
for cname in color_lookup.nearest(colors['mauve']):
print(cname + " trousers")
mauve trousers
dusty rose trousers
dusky rose trousers
brownish pink trousers
old pink trousers
reddish grey trousers
dirty pink trousers
old rose trousers
light plum trousers
ugly pink trousers
pinkish brown trousers
dusky pink trousers
That's all well and good for color words, which intuitively seem to exist in a multidimensional continuum of perception, and for our animal space, where we've written out the vectors ahead of time. But what about... arbitrary words? Is it possible to create a vector space for all English words that has this same "closer in space is closer in meaning" property?
To answer that, we have to back up a bit and ask the question: what does meaning mean? No one really knows, but one theory popular among computational linguists, computer scientists and other people who make search engines is the Distributional Hypothesis, which states that:
Linguistic items with similar distributions have similar meanings.
What's meant by "similar distributions" is similar contexts. Take for example the following sentences:
It was really cold yesterday.
It will be really warm today, though.
It'll be really hot tomorrow!
Will it be really cool Tuesday?
According to the Distributional Hypothesis, the words cold, warm, hot and cool must be related in some way (i.e., be close in meaning) because they occur in a similar context, i.e., between the word "really" and a word indicating a particular day. (Likewise, the words yesterday, today, tomorrow and Tuesday must be related, since they occur in the context of a word indicating a temperature.)
In other words, according to the Distributional Hypothesis, a word's meaning is just a big list of all the contexts it occurs in. Two words are closer in meaning if they share contexts.
## Word vectors by counting contexts¶
So how do we turn this insight from the Distributional Hypothesis into a system for creating general-purpose vectors that capture the meaning of words? Maybe you can see where I'm going with this. What if we made a really big spreadsheet that had one column for every context for every word in a given source text. Let's use a small source text to begin with, such as this excerpt from Dickens:
It was the best of times, it was the worst of times.
The code in the following cell builds a table of what such a spreadsheet might look like. (Don't worry about understanding this code! But feel free to play around with it.)
In [50]:
from collections import defaultdict
src = "it was the best of times it was the worst of times"
tokens = ['START'] + src.split() + ['END']
contexts = defaultdict(lambda: defaultdict(int))
tok_types = {}
for i in range(len(tokens)-2):
tok_types[tokens[i+1]] = 1
contexts[(tokens[i], tokens[i+2])][tokens[i+1]] += 1
data = {" ___ ".join(k): [v[tok] for tok in tok_types.keys()] for k, v in contexts.items()}
df = pd.DataFrame(data=data, index=tok_types)
df
Out[50]:
START ___ was it ___ the was ___ best the ___ of best ___ times of ___ it times ___ was was ___ worst worst ___ times of ___ END
it 1 0 0 0 0 0 1 0 0 0
was 0 2 0 0 0 0 0 0 0 0
the 0 0 1 0 0 0 0 1 0 0
best 0 0 0 1 0 0 0 0 0 0
of 0 0 0 0 1 0 0 0 1 0
times 0 0 0 0 0 1 0 0 0 1
worst 0 0 0 1 0 0 0 0 0 0
The spreadsheet has one column for every possible context, and one row for every word. The values in each cell correspond with how many times the word occurs in the given context. The numbers in the columns constitute that word's vector, i.e., the vector for the word of is
[0, 0, 0, 0, 1, 0, 0, 0, 1, 0]
Because there are ten possible contexts, this is a ten dimensional space! It might be strange to think of it, but you can do vector arithmetic on vectors with ten dimensions just as easily as you can on vectors with two or three dimensions, and you could use the same distance formula that we defined earlier to get useful information about which vectors in this space are similar to each other. In particular, the vectors for best and worst are actually the same (a distance of zero), since they occur only in the same context (the ___ of):
[0, 0, 0, 1, 0, 0, 0, 0, 0, 0]
Of course, the conventional way of thinking about "best" and "worst" is that they're antonyms, not synonyms. But they're also clearly two words of the same kind, with related meanings (through opposition), a fact that is captured by this distributional model.
### Contexts and dimensionality¶
Of course, in a corpus of any reasonable size, there will be many thousands if not many millions of possible contexts. It's difficult enough working with a vector space of ten dimensions, let alone a vector space of a million dimensions! It turns out, though, that many of the dimensions end up being superfluous and can either be eliminated or combined with other dimensions without significantly affecting the predictive power of the resulting vectors. The process of getting rid of superfluous dimensions in a vector space is called dimensionality reduction, and most implementations of count-based word vectors make use of dimensionality reduction so that the resulting vector space has a reasonable number of dimensions (say, 100—300, depending on the corpus and application).
The question of how to identify a "context" is itself very difficult to answer. In the toy example above, we've said that a "context" is just the word that precedes and the word that follows. Depending on your implementation of this procedure, though, you might want a context with a bigger "window" (e.g., two words before and after), or a non-contiguous window (skip a word before and after the given word). You might exclude certain "function" words like "the" and "of" when determining a word's context, or you might lemmatize the words before you begin your analysis, so two occurrences with different "forms" of the same word count as the same context. These are all questions open to research and debate, and different implementations of procedures for creating count-based word vectors make different decisions on this issue.
### GloVe vectors¶
But you don't have to create your own word vectors from scratch! Many researchers have made downloadable databases of pre-trained vectors. One such project is Stanford's Global Vectors for Word Representation (GloVe). These 300-dimensional vectors are included with spaCy, and they're the vectors we'll be using for the rest of this tutorial.
## Word vectors in spaCy¶
Okay, let's have some fun with real word vectors. We're going to use the GloVe vectors that come with spaCy to creatively analyze and manipulate the text of Frankenstein. First, make sure you've got spacy imported:
In [51]:
import spacy
The following cell loads the language model:
In [52]:
nlp = spacy.load('en_core_web_md')
You can see the vector of any word in spaCy's vocabulary using the vocab attribute, like so:
In [53]:
nlp.vocab['kitten'].vector
Out[53]:
array([-2.2743e-01, -5.1464e-02, -4.3421e-02, -1.0523e-01, -3.3389e-01,
-4.9611e-01, -6.4342e-01, -4.6994e-01, 3.9693e-01, 8.4902e-01,
-4.0845e-01, -1.8312e-01, -6.2564e-01, -1.0160e-01, -3.6914e-01,
6.8634e-01, 2.6607e-01, 3.2985e-01, -9.8570e-02, -1.3281e-01,
-4.7505e-01, -1.9249e-01, -3.1917e-01, -1.8536e-01, 1.1118e-01,
7.3772e-02, -3.0407e-01, -2.7552e-01, 6.1108e-01, -3.6344e-01,
-4.5849e-01, -1.2872e-01, 1.5175e-01, 3.3248e-01, 3.0900e-01,
-2.8488e-01, 2.5544e-01, -9.4332e-01, -5.5746e-01, 5.8764e-02,
1.1174e-01, 2.0032e-01, -4.1090e-01, -5.4444e-01, -4.3831e-02,
1.6265e-01, -6.8028e-01, 2.8266e-01, 1.8177e-01, -5.6184e-01,
7.0911e-02, -3.4996e-01, -3.1639e-01, 1.7666e-01, -9.4568e-03,
4.4389e-01, 7.6684e-02, -2.1797e-01, 1.3728e-03, 2.3474e-01,
-1.8564e-01, -4.2277e-01, 2.5585e-01, -6.2553e-01, -1.4335e-01,
-1.8835e-01, 3.5240e-01, 2.0764e-01, 8.8644e-02, -2.0873e-01,
-3.9081e-01, -1.5079e-01, -3.4469e-01, -3.2128e-01, -1.2094e-01,
-6.6444e-03, -1.6742e-01, -3.5412e-01, 3.5457e-01, -6.8729e-01,
4.2718e-01, 3.2739e-01, -5.2189e-01, 1.9016e-01, -2.0203e-01,
-2.5103e-02, 1.4170e+00, 5.4864e-01, 6.3232e-01, 9.1078e-02,
1.6614e-01, 6.4225e-01, 9.4285e-02, -5.8877e-01, -5.8017e-01,
2.4200e-02, -5.9718e-02, -2.0356e-01, 3.6787e-01, -2.1599e-04,
1.2642e-01, 2.1863e-01, 1.4783e-01, 1.1456e-01, 5.1725e-01,
-5.2379e-01, 1.9920e-01, -3.4157e-01, -4.5679e-01, -3.6249e-01,
-1.5894e-01, 4.1326e-01, 3.3038e-01, -6.0792e-01, 3.3837e-02,
-1.3185e-01, -2.3943e-01, 2.0958e-01, 5.4647e-01, -1.6166e-01,
-8.4986e-02, -3.7066e-03, 4.1813e-01, -6.2813e-01, -1.5596e-01,
2.7174e-01, 2.6749e-01, -3.1466e-01, -3.0005e-01, -1.1754e-03,
-7.6564e-02, -4.1596e-01, -5.1201e-01, 3.3995e-01, 3.3515e-01,
2.3181e-01, -2.3126e-01, 6.4867e-01, -1.7327e-01, 2.1906e-01,
-2.4825e+00, -3.8253e-01, -2.7501e-01, 3.8250e-01, 3.9909e-01,
-3.4766e-01, -3.4750e-01, 1.1423e-01, 3.8426e-01, -4.5397e-02,
3.3079e-01, 3.0611e-01, -1.6365e-01, -2.1840e-01, -4.9622e-01,
3.2069e-01, -1.0056e-01, 4.7965e-01, -6.1692e-01, -7.1039e-01,
6.7294e-03, -1.3760e-01, -2.4534e-01, -6.3200e-01, -1.3581e-01,
5.0657e-02, -2.6497e-01, 4.7350e-02, -2.1670e-01, 1.1975e-01,
-1.5930e-01, 6.5097e-02, 6.2833e-01, -4.1485e-01, -4.9077e-01,
5.3359e-01, -2.8512e-01, -6.8738e-02, -1.5921e-01, 7.1961e-01,
-9.9519e-02, -4.4382e-01, 1.2937e-01, -3.0042e-01, -4.5345e-01,
-1.2133e-01, -5.7589e-02, 1.7615e-03, -1.4016e-01, -5.4348e-01,
-2.7313e-01, 7.9147e-02, 9.2326e-02, -3.2881e-01, 4.2704e-01,
2.2025e-01, -2.8184e-01, -1.0668e-01, 7.8995e-01, 1.0433e-01,
-7.1160e-01, 5.0875e-01, -4.4899e-01, -1.5814e-01, 8.9031e-02,
1.9779e-01, 1.7832e-01, -1.5837e-01, 4.1920e-01, 1.1525e-01,
-5.6151e-01, 1.5977e-03, -6.3471e-01, 1.7369e-01, -1.6660e-01,
1.1221e-01, -6.8455e-02, 2.5285e-01, 3.2207e-01, -9.7379e-02,
-6.0705e-02, 2.1614e-03, 4.7962e-01, -6.2126e-01, -3.8508e-01,
-3.3929e-02, 1.6450e-01, 8.4856e-01, 1.6799e-01, 7.0092e-02,
-3.7176e-01, 3.3799e-01, -4.4381e-01, -4.1767e-01, 4.5403e-01,
-1.2410e-01, -1.2079e-01, -1.7261e-01, 6.0124e-01, -4.0454e-01,
-7.5649e-01, -8.0093e-02, -4.0163e-01, 1.4112e-01, -1.0350e-01,
-3.3089e-02, 1.1493e-01, 5.8734e-01, 3.3808e-01, 2.3712e-01,
-1.8119e-01, -9.0462e-02, -7.5090e-02, 3.0095e-01, 2.4288e-01,
-1.1910e-01, -7.9882e-01, 1.8590e-01, 1.8869e-01, -3.0644e-01,
-1.1699e-01, -1.9925e-01, -2.5868e-02, -1.9041e-01, -2.1018e-01,
3.7344e-01, -4.6269e-01, -2.2280e-01, -3.7530e-01, -2.1085e-02,
-4.1236e-01, 7.1003e-01, -1.3208e-01, 1.5749e-01, -2.3107e-01,
1.0245e-02, 1.2600e-01, 2.5397e-01, 4.6801e-02, 8.9900e-03,
2.9246e-01, 1.7141e-01, 1.7920e-01, -4.3189e-01, 5.1214e-01,
4.9256e-01, 3.1945e-01, -3.5943e-01, 1.0088e-01, -3.7515e-01,
2.1281e-02, 2.0247e-01, -3.4238e-02, 3.3511e-01, -4.2808e-01,
7.4968e-01, -1.2136e-01, 4.3539e-01, 1.5098e-01, -1.5952e-01],
dtype=float32)
spaCy's English model uses 300-dimensional pre-trained GloVe vectors.
For the sake of convenience, the following function gets the vector of a given string from spaCy's vocabulary:
In [54]:
def vec(s):
return nlp.vocab[s].vector
## Filter by word frequency¶
In just a second, I'm going to show you how to make a nearest-neighbors lookup to find words that are semantically similar. We'll use the vectors from spaCy, but there's a problem—spaCy's database of words is very large. You can find out how many words are in spaCy's database by accessing the .meta attribute of the language object:
In [55]:
nlp.meta['vectors']
Out[55]:
{'width': 300,
'vectors': 20000,
'keys': 684830,
'name': 'en_core_web_md.vectors'}
As of this writing, there are 20k vectors in the database, but over 680k individual tokens. (Many tokens don't have vectors; some tokens are mapped to the same vector.) That's too many tokens for our nearest-neighbor lookup to be reasonably fast. Also, because the tokens are drawn from the corpus that spaCy's model is trained on, many of the tokens are very uncommon words or misspellings of other words. We probably don't want to use this for text replacement tasks, which is what we've got an eye on doing in the examples below.
The best way to fix this problem is to filter the words before we put them into the nearest-neighbor lookup. We're going to do this by token frequency, i.e., how frequently that word appears in English. A word's frequency is calculated as the number of times that word occurs in a corpus, divided by the total number of words in the corpus. The wordfreq Python package has a good database of word frequencies in many different languages. To simplify things for this notebook, I've prepared a JSON file with the top 25k English words, along with their probabilities. Before you continue, download that file into the same directory as this notebook, or with curl like so:
In [56]:
!curl -L -O https://raw.githubusercontent.com/aparrish/wordfreq-en-25000/main/wordfreq-en-25000-log.json
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 898k 100 898k 0 0 511k 0 0:00:01 0:00:01 --:--:-- 511k
The code in the following cell loads this data into a dictionary that we can use to look up the probability of a given word:
In [57]:
import json
prob_lookup = dict(json.load(open("./wordfreq-en-25000-log.json")))
Now you can look up a word's probability with the prob_lookup dictionary. As you might imagine, the frequency of most words is very small, meaning that if we stored the frequency as a decimal fraction (e.g., 0.00006), the data on disk would mostly consist of the digit 0. To make the numbers easier to store and work with, the log of the word probabilities, rather than the probability itself:
In [58]:
prob_lookup['me']
Out[58]:
-5.7108
Log probabilities are always negative; the closer to zero, the more frequent the token. To get the original probability number, use math.exp():
In [59]:
import math
math.exp(prob_lookup['me'])
Out[59]:
0.0033100234666365628
(You can interpret the result of the above expression to mean that the word me occurs about 33 times in every ten thousand words.)
Note that all of the tokens in this database are stored in lower case, and you'll get a KeyError for words that are not present in the database! Some other examples of using the lookup:
In [60]:
prob_lookup['allison']
Out[60]:
-12.0892
Cats aren't more frequent than dogs:
In [61]:
prob_lookup['cats'] > prob_lookup['dogs']
Out[61]:
False
Unknown tokens raise KeyError:
In [62]:
prob_lookup['asdfasdf']
---------------------------------------------------------------------------
KeyError Traceback (most recent call last)
<ipython-input-62-6121b64df678> in <module>
----> 1 prob_lookup['asdfasdf']
KeyError: 'asdfasdf'
But you can use the dict object's .get() method to return a somewhat reasonable estimate of a word's probability, even if it's absent:
In [63]:
prob_lookup.get('asdfasdf', -20.0)
Out[63]:
-20.0
## Looking up synonyms¶
So now we can finally make our synonym lookup! Here I make a SimpleNeighbors index that loads in all words from spaCy's vocab that (a) have an associated vector and (b) are in our database of the top 25k words.
In [64]:
lookup = SimpleNeighbors(300)
for word in prob_lookup.keys():
if nlp.vocab[word].has_vector:
lookup.add_one(word, vec(word))
lookup.build()
This leaves us with nearly 25k words in the lookup (a few of the words from the word frequency list don't have vectors in spaCy, apparently):
In [65]:
len(lookup)
Out[65]:
24575
Now we can get synonyms for words by looking up the word's vector and finding the nearest word in the index:
In [66]:
lookup.nearest(vec('basketball'))
Out[66]:
['basketball',
'volleyball',
'lacrosse',
'football',
'soccer',
'baseball',
'softball',
'hockey',
'tennis',
'racket',
'badminton',
'gymnastics']
Exercise: Limit the synonym lookup to words with a frequency greater than a particular threshold.
### Fun with spaCy and vector arithmetic¶
Now we can start doing vector arithmetic and finding the closest words to the resulting vectors. For example, what word is closest to the halfway point between day and night?
In [67]:
# halfway between day and night
lookup.nearest(meanv([vec("day"), vec("night")]))
Out[67]:
['night',
'day',
'evening',
'morning',
'afternoon',
'midday',
'nights',
'weekend',
'outing',
'tomorrow',
'every',
'next']
Variations of night and day are still closest, but after that we get words like evening and morning, which are indeed halfway between day and night!
Here are the closest words in Frankenstein to "wine":
In [68]:
lookup.nearest(vec("wine"))
Out[68]:
['wine',
'wines',
'winery',
'tasting',
'beer',
'lager',
'drink',
'drinks',
'beverages',
'fruit',
'fig',
'cocktail']
If you subtract "alcohol" from "wine" and find the closest words to the resulting vector, you're left with the superlatives you might use to describe wine:
In [72]:
lookup.nearest(vec("wine") - vec("alcohol"))
Out[72]:
['elegant',
'exquisite',
'elegance',
'graceful',
'fabulous',
'fab',
'magnificent',
'splendid',
'marvellous',
'delightful',
'lovely',
'charming']
The closest words to "water":
In [73]:
lookup.nearest(vec("water"))
Out[73]:
['water',
'seawater',
'salt',
'brine',
'dry',
'boiling',
'bubbling',
'heat',
'heats',
'cubic',
'gallons',
'litres']
But if you add "boil" to "water," you get "simmer" and "steamed":
In [74]:
lookup.nearest(vec("water") + vec("boil"))
Out[74]:
['water',
'seawater',
'boil',
'simmer',
'boiling',
'bubbling',
'salt',
'brine',
'boiled',
'salted',
'steamed',
'heat']
## Replace with synonym¶
The following example replaces all nouns, verbs and adjectives with a closely-related word from the synonym lookup.
In [75]:
frost_doc = nlp(open("frost.txt").read())
In [76]:
output = []
for word in frost_doc:
if word.is_alpha and word.pos_ in ('NOUN', 'VERB', 'ADJ'):
new_word = random.choice(lookup.nearest(word.vector, 3))
output.append(new_word)
else:
output.append(word.text)
output.append(word.whitespace_)
print(''.join(output))
Two roads evolved in a orange wood,
And sorry I might not trips both
And be one traveller, long I stood
And turned down one as far as I could
To where it bends in the bushes;
Then went the these, as just as fair,
And being perhaps the way claimants,
Because it was meadow and did clothes;
Though as for that the passes there
Had worn them really about the one,
And both that afternoon equally sit
In leaves no step had tread black.
Oh, I kept the fifth for another week!
Yet realize how going leads on to going,
I doubting if I not ever come back.
I hereby be talking this with a chuckle
Somewhere ages and children hence:
Two townships evolved in a wood, and I—
I took the that less traveled by,
And that has make all the difference.
## Tinting meaning¶
You can "tint" meaning with word vectors as well. The methodology demonstrated below is to take the word vector for every noun, verb and adjective in the text, and then look up the word closest to a weighted average between the original word and a target word. (You can control the weight of the average using the factor variable below.) On balance, this leads to word replacements that still at least somewhat "make sense" in the original context, but have a bit of meaning from the target word.
In [79]:
target_word = 'spaceship'
factor = 0.5
In [80]:
output = []
for word in frost_doc:
if word.is_alpha and word.pos_ in ('NOUN', 'VERB', 'ADJ'):
new_word = random.choice(
lookup.nearest((word.vector*(1-factor)) + (vec(target_word)*factor), 5))
output.append(new_word)
else:
output.append(word.text)
output.append(word.whitespace_)
print(''.join(output))
Two southbound planet in a blue wood,
And sorry I could not airbus both
And be one traveller, long I stared
And looked down something as far as I they
To where it robot in the shrub;
Then aboard the airplane, as just as would,
And could perhaps the better claims,
Because it was sod and never armor;
Though as for that the plane there
Had resemble them really about the weird,
And both that arrived equally suddenly
In dark no turn had michelin purple.
Oh, I abruptly the finally for another days!
Yet thing how it plane on to somehow,
I universe if I might ever they back.
I humankind be tells this with a wink
Somewhere universe and planets hence:
Two crossing planetary in a hardwood, and I—
I aboard the another somewhere aboard by,
And that has ever all the cosmos.
Every spaCy span (e.g., a document or a sentence) has a .vector attribute that gives a "summary" vector for the span in question. (By default, this is calculated by averaging together all of the vectors.)
In [81]:
sent = nlp("Programming computers is fun!")
In [82]:
sent.vector
Out[82]:
array([-1.09078191e-01, 1.51335195e-01, 2.12372467e-01, -1.53214008e-01,
1.09147593e-01, 1.73588041e-02, 8.59839991e-02, -2.55478203e-01,
2.09430605e-01, 1.65583992e+00, -9.05231461e-02, -1.32969588e-01,
3.42764035e-02, 1.50568604e-01, -6.61468059e-02, -1.55275792e-01,
-1.31082803e-01, 1.50442791e+00, -2.02654406e-01, -8.86391997e-02,
4.99612018e-02, -1.54533401e-01, -1.00691400e-01, 5.44592626e-02,
2.60579765e-01, -7.77239352e-03, 8.82299989e-02, -2.72476021e-02,
1.29189998e-01, -1.21251188e-01, -2.45104164e-01, -2.30066583e-01,
9.57625657e-02, 2.77934074e-02, 1.57128006e-01, 1.09506592e-01,
-5.01117930e-02, 3.19651991e-01, -1.52970748e-02, 5.11708446e-02,
4.09313962e-02, 1.67218000e-01, 3.71664017e-02, 1.76439993e-02,
6.30135983e-02, 2.77248025e-01, -2.26011992e-01, -9.00347978e-02,
2.31373996e-01, 1.76831819e-02, -4.28502038e-02, 1.16572000e-01,
1.63890302e-01, -5.13000181e-03, -6.84387982e-02, -1.40187204e-01,
4.90408018e-02, -1.48198009e-02, 4.10723984e-02, 4.03099991e-02,
3.63446400e-02, 6.02504015e-02, -2.44283993e-02, -6.33216053e-02,
3.33246201e-01, -1.65840611e-01, -1.42393991e-01, 2.63868392e-01,
-7.06279352e-02, -1.15957949e-02, 2.43643999e-01, -7.76533633e-02,
2.68735200e-01, -2.65026003e-01, 6.98260069e-02, 5.55027649e-02,
6.69239983e-02, 1.02014199e-01, -2.83342004e-02, 9.05580044e-01,
-1.48870006e-01, -2.06700023e-02, -9.18637961e-02, 1.17958404e-01,
3.48474011e-02, 3.01625971e-02, -2.66392171e-01, 6.03520051e-02,
4.36886013e-01, 3.55404627e-04, 9.28714313e-03, 4.18313980e-01,
-8.68320018e-02, 9.79305953e-02, 1.07736051e-01, -1.05597004e-01,
-1.53806001e-01, -1.11872196e-01, 6.13286011e-02, 8.13250020e-02,
1.34859979e-02, -1.45868644e-01, -3.23198020e-01, -3.61048020e-02,
1.01260401e-01, -5.27896583e-01, 2.11459801e-01, 6.45498037e-02,
5.44675998e-02, 4.04881984e-02, 5.71960025e-02, -1.76566601e-01,
1.55324399e-01, -1.41639993e-01, -1.76177889e-01, -2.53822207e-01,
-1.48269609e-01, 1.15075789e-01, -7.15504065e-02, 1.12987004e-01,
6.85439929e-02, -1.00209795e-01, 1.24472596e-01, -1.65226191e-01,
2.14665998e-02, 1.31642014e-01, 1.09824404e-01, 1.41431212e-01,
2.18189210e-01, -3.42240818e-02, 1.91150591e-01, -6.14733994e-02,
-1.36373192e-01, -2.20621955e-02, -1.83210019e-02, -5.04960045e-02,
-6.72888011e-02, 1.17635809e-01, 1.11440256e-01, 2.77000397e-01,
-1.40339601e+00, 1.59925997e-01, 4.14776236e-01, -5.32246009e-02,
9.94336009e-02, -1.21174000e-01, 2.33742930e-02, -1.68137982e-01,
-5.25300018e-02, -1.62744999e-01, -1.84371993e-02, 1.27291799e-01,
3.46560031e-02, -3.79859796e-03, 3.25425640e-02, -2.06223205e-01,
6.51135966e-02, -2.54101604e-01, 2.12639980e-02, -2.53134608e-01,
-3.12441196e-02, -7.77560333e-03, 1.54054984e-01, -1.85033411e-01,
-9.07502919e-02, 1.48299960e-02, -1.31761596e-01, -9.24229994e-02,
1.02589414e-01, 1.28178000e-01, -2.28297599e-02, 2.08275229e-01,
1.61991984e-01, -2.20977981e-02, 1.49040073e-02, -1.04619991e-02,
-1.39642403e-01, -9.92640294e-03, -1.11510001e-01, 8.21727961e-02,
8.41722041e-02, -2.92574227e-01, -4.99619991e-02, -1.13490000e-01,
-6.10653982e-02, -1.06445970e-02, -3.00779995e-02, -2.66663972e-02,
9.19132009e-02, -1.09540019e-02, -4.19231988e-02, -9.31339711e-02,
-1.30866006e-01, 2.00029343e-01, -3.60463932e-02, 1.66117996e-01,
2.14560404e-02, -2.79834002e-01, 2.95516193e-01, 1.95245996e-01,
-7.03940019e-02, 1.68732200e-02, 2.67192185e-01, 1.27668027e-02,
7.61620095e-03, -7.91379958e-02, -1.82687804e-01, 2.69200020e-02,
2.37371996e-01, -2.33727008e-01, -3.09768796e-01, -2.82416195e-01,
-4.95892018e-02, -1.61307290e-01, -8.55911747e-02, 2.08437994e-01,
8.23039934e-02, -1.89800188e-03, -4.33899969e-01, 7.30374008e-02,
1.23745993e-01, 1.02426387e-01, -1.32522792e-01, 1.08762406e-01,
-3.76218036e-02, -1.29930809e-01, 1.45559996e-01, 2.57495999e-01,
1.42793730e-01, 2.29041986e-02, -1.84964210e-01, 4.56799287e-03,
7.56099969e-02, -4.75694016e-02, 3.27790007e-02, 9.00008008e-02,
7.23747984e-02, -5.11239991e-02, -2.02823997e-01, 1.57281995e-01,
2.39320993e-01, -9.58018005e-02, 2.24139988e-02, 7.19779953e-02,
8.29880014e-02, -3.00521433e-01, -2.94486851e-01, -7.89007992e-02,
-3.31512019e-02, 3.41201603e-01, 9.71827134e-02, 3.63738015e-02,
-6.48769960e-02, -3.42620024e-03, 5.39860129e-03, 1.74937956e-02,
1.37041003e-01, -2.90899929e-02, -2.22800067e-03, -9.40222591e-02,
2.38341004e-01, 1.49065211e-01, -2.84128994e-01, 2.21151989e-02,
-2.50313818e-01, -8.55891854e-02, -2.11878985e-01, -5.92008010e-02,
8.13369930e-01, 1.47094816e-01, -6.53312624e-01, -9.30460021e-02,
-3.83621067e-01, -2.49031395e-01, -3.88560802e-01, -2.20031217e-01,
2.55477987e-02, -2.68832035e-02, -3.33619975e-02, 6.28920048e-02,
3.20904583e-01, -2.24779993e-01, 1.38928024e-02, 1.64185375e-01,
-2.99731940e-02, -2.52801239e-01, 1.64113805e-01, -2.02426016e-02,
-7.10171908e-02, -1.34364396e-01, -3.14302206e-01, -2.45522186e-01,
-1.44386008e-01, 1.45213991e-01, 1.21419206e-01, -1.19551197e-01,
1.31567001e-01, -1.18598007e-01, -3.15375596e-01, 1.99001342e-01],
dtype=float32)
Interestingly, we can find the single word closest in meaning to this sequence of words:
In [83]:
lookup.nearest(sent.vector)
Out[83]:
['so',
'kind',
'well',
'it',
'everything',
'but',
'even',
'that',
'always',
'time',
'one',
'is']
Or, we can use this as a sort of rudimentary search engine, making it possible to find sentences in a text that are close in meaning to any arbitrary sentence we type in. Here's how to do that! First, we'll get the list of sentences:
In [84]:
doc = nlp(open("./84-0.txt").read())
Then, we'll create a nearest neighbors lookup with the vectors for each sentence:
In [85]:
sentence_lookup = SimpleNeighbors(300)
for sent in doc.sents:
# replace linebreaks to make the output a bit more neat
sentence_lookup.add_one(sent.text.replace("\n", " "), sent.vector)
sentence_lookup.build()
Now we can find similar sentences like so:
In [86]:
sentence_lookup.nearest(nlp("My favorite food is strawberry ice cream.").vector)
Out[86]:
['I greedily devoured the remnants of the shepherd’s breakfast, which consisted of bread, cheese, milk, and wine; the latter, however, I did not like. ',
'We shall make our bed of dried leaves; the sun will shine on us as on man and will ripen our food. ',
'Nature decayed around me, and the sun became heatless; rain and snow poured around me; mighty rivers were frozen; the surface of the earth was hard and chill, and bare, and I found no shelter. ',
'I am already far north of London, and as I walk in the streets of Petersburgh, I feel a cold northern breeze play upon my cheeks, which braces my nerves and fills me with delight. ',
'By slow degrees he recovered and ate a little soup, which restored him wonderfully. ',
'The wind, which had hitherto carried us along with amazing rapidity, sank at sunset to a light breeze; the soft air just ruffled the water and caused a pleasant motion among the trees as we approached the shore, from which it wafted the most delightful scent of flowers and hay. ',
'Vegetables and bread, when they indulged in such luxuries, and even fresh water, was to be procured from the mainland, which was about five miles distant. ',
'I lay at the bottom of the boat, and as I gazed on the cloudless blue sky, I seemed to drink in a tranquillity to which I had long been a stranger.',
'Winter, spring, and summer passed away during my labours; but I did not watch the blossom or the expanding leaves—sights which before always yielded me supreme delight—',
'I lighted the dry branch of a tree and danced with fury around the devoted cottage, my eyes still fixed on the western horizon, the edge of which the moon nearly touched. ',
'“As the sun became warmer and the light of day longer, the snow vanished, and I beheld the bare trees and the black earth. ',
'Immense and rugged mountains of ice often barred up my passage, and I often heard the thunder of the ground sea, which threatened my destruction. ']
We could do something similar with spaCy noun chunks:
In [87]:
chunk_lookup = SimpleNeighbors(300)
for chunk in doc.noun_chunks:
chunk_text = chunk.text.replace("\n", " ")
if chunk_text not in chunk_lookup.corpus:
chunk_lookup.add_one(chunk_text, chunk.vector)
chunk_lookup.build()
In [88]:
chunk_lookup.nearest(nlp("angry birds").vector)
Out[88]:
['the birds',
'The birds',
'the wild animals',
'the frogs',
'the wild sea',
'no albatross',
'The wounded deer',
'such lovely creatures',
'the prey',
'a shrill and dreadful scream',
'a devouring blackness',
'this little creature']
Or even lookups for each part of speech:
In [89]:
adj_lookup = SimpleNeighbors(300)
for word in doc:
# .corpus of the lookup lets us determine if the word has already been added
if word.tag_ == 'JJ' and word.text.lower() not in adj_lookup.corpus:
adj_lookup.add_one(word.text.lower(), word.vector)
adj_lookup.build()
In [90]:
adj_lookup.nearest(vec("happy"))
Out[90]:
['happy',
'glad',
'excited',
'grateful',
'adored',
'loved',
'overjoyed',
'contented',
'sad',
'surprised',
'wonderful',
'welcome']
### Rewriting with parts of speech from another text¶
One last example and we'll call it a day. The following code uses the noun chunks and adjective lookups that I made above to rewrite one text (Frost's The Road Not Taken) with words and phrases from Frankenstein.
In [91]:
frost_doc = nlp(open("frost.txt").read())
In [92]:
output = []
for word in frost_doc:
if word.is_alpha and word.pos_ == 'NOUN':
new_word = random.choice(chunk_lookup.nearest(word.vector, 5))
output.append(new_word)
elif word.is_alpha and word.tag_ == 'JJ':
new_word = random.choice(adj_lookup.nearest(word.vector, 5))
output.append(new_word)
else:
output.append(word.text)
output.append(word.whitespace_)
print(''.join(output))
Two the interchange diverged in a green woods,
And sorry I could not travel both
And be one a merchant, long I stood
And looked down one place as far as I could
To where it bent in the herbage;
Then took the certain, as just as honest,
And having perhaps the better not such proof,
Because it was dry and wanted dress;
Though as for that the the farther end there
Had worn them really about the only,
And both that a Sunday afternoon equally lay
In a leaf no a second step had trodden white.
Oh, I kept the last for another one day!
Yet knowing how how all the life leads on to the good people,
I doubted if I should ever come back.
I shall be telling this with a deep groans
Somewhere past ages and past ages hence:
Two towns diverged in a furniture, and I—
I took the that one less travelled by,
And that has made all the a greater proportion.
## Beyond word vectors¶
The core assumptions of word vectors as a technology are that (A) words are a meaningful unit of language, and (b) words can be said to have stable "meanings" across texts. Neither of these assertions are true, but (for better or worse) they reflect beliefs that many people hold about how language works. The fact that word vectors reflect these beliefs, in my opinion, makes text analysis and text generation based on word vectors more intuitive than the alternatives.
More recently, the task of finding semantic similarity between passages of text is accomplished using systems that operate not on words as units, but on units determined by unsupervised tokenization algorithms like SentencePiece. These systems often use neural network machine learning models (like transformers)) to calculate vectors for words and longer stretches of text that take the surrounding textual context into account. These embeddings are often more accurate, at the cost of explainability and increased use of computational resources. Here are a few resources for learning more about this method:
|
2021-09-27 21:39:48
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4180830717086792, "perplexity": 4758.155209922811}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780058552.54/warc/CC-MAIN-20210927211955-20210928001955-00105.warc.gz"}
|
https://plainmath.net/16678/triangle-find-the-value-of-x-and-y
|
# [Triangle] Find the value of x and y.
[Triangle] Find the value of x and y.
• Questions are typically answered in as fast as 30 minutes
### Plainmath recommends
• Get a detailed answer even on the hardest topics.
• Ask an expert for a step-by-step guidance to learn to do it yourself.
Nathanael Webber
When a line divides a triangle such that the line is parallel to a side of the triangle, then it divides the other sides proportionally.
The horizontal line is parallel to the base of the triangle so it divides the left and right sides of the triangle proportionally. Therefore:
$$\displaystyle\frac{{x}}{{16}}=\frac{{y}}{{20}}$$
The line with the double arrows is parallel to the right side of the triangle so it divides the left side and base proportionally. Therefore:
$$\displaystyle\frac{{x}}{{16}}=\frac{{45}}{{y}}$$
Since $$\displaystyle\frac{{x}}{{16}}=\frac{{y}}{{20}}$$ and $$\displaystyle\frac{{x}}{{16}}=\frac{{45}}{{y}}$$, then $$\displaystyle\frac{{y}}{{20}}=\frac{{45}}{{y}}$$. Solving for y gives: $$\displaystyle\frac{{y}}{{20}}=\frac{{45}}{{y}}$$
$$\displaystyle{y}^{{2}}={20}{\left({45}\right)}$$
$$\displaystyle{y}^{{2}}={900}$$
$$\displaystyle\sqrt{{{y}^{{2}}}}=\sqrt{{900}}$$
$$y=30$$
Substitute $$y=30$$ into either of the original proportions and solve for x:
$$\displaystyle\frac{{x}}{{16}}=\frac{{y}}{{20}}$$
$$\displaystyle\frac{{x}}{{16}}=\frac{{30}}{{20}}$$
$$\displaystyle\frac{{x}}{{16}}=\frac{{3}}{{2}}$$
$$\displaystyle{16}\cdot{\left(\frac{{x}}{{16}}\right)}={\left(\frac{{3}}{{2}}\right)}\cdot{16}$$
$$x=24$$
|
2021-11-28 11:59:41
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7207791209220886, "perplexity": 418.0988659423427}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964358520.50/warc/CC-MAIN-20211128103924-20211128133924-00442.warc.gz"}
|
https://www.studypug.com/ca/grade6/evaluating-algebraic-expressions
|
Evaluating algebraic expressions
All in One Place
Everything you need for better grades in university, high school and elementary.
Learn with Ease
Made in Canada with help for all provincial curriculums, so you can study in confidence.
Instant and Unlimited Help
0/1
Intros
Lessons
1. i) What are variables?
ii) What is an expression?
0/10
Examples
Lessons
1. Below we have used ponds and tadpoles to model an expression. Write the expression and use the variable x to represent the unknown number of tadpoles in each pond.
2. Mary bought $s$ packages of stickers, and there are 10 stickers in each package. Write an expression to show how many stickers Mary bought.
1. Write an expression for each phrase. Then, evaluate the expression.
1. 10 pounds lighter than Molly (m), when m=100
2. 22 years older than Tracy (t), when t = 10.
3. 17 less than 5 times a number (n), when n = 7
2. Evaluate the following expression, if $x = 7$ and $y = 9$
1. $6x-y+5$
2. $\frac{2}{3}x+\frac{1}{6}y-1$
3. $0.5x-0.1+1.3y$
0%
Practice
Free to Join!
StudyPug is a learning help platform covering math and science from grade 4 all the way to second year university. Our video tutorials, unlimited practice problems, and step-by-step explanations provide you or your child with all the help you need to master concepts. On top of that, it's fun - with achievements, customizable avatars, and awards to keep you motivated.
We track the progress you've made on a topic so you know what you've done. From the course view you can easily see what topics have what and the progress you've made on them. Fill the rings to completely master that section or mouse over the icon to see more details.
|
2022-10-02 18:50:02
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 6, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.263805627822876, "perplexity": 1797.0433284885028}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337339.70/warc/CC-MAIN-20221002181356-20221002211356-00084.warc.gz"}
|
https://online.stat.psu.edu/stat504/node/99/
|
# 4.2.5 - Measure of Agreement: Kappa
Printer-friendly version
Another hypothesis of interest is to evaluate whether two different examiners agree among themselves or to what degree two different systems of evaluation are in agreement. This has important applications in medicine where two physicians may be called upon to evaluate the same group of patients for further treatment.
The Cohen's Kappa statistic (or simply kappa) is intended to measure agreement between two raters.
### Example - Movie Critiques
Recall the movies example from the introduction. Do the two movie critics, in this case Ebert and Siskel, classify the same movies into the same categories; do they really agree?
Siskel Ebert con mixed pro total con 24 8 13 45 mixed 8 13 11 32 pro 10 9 64 83 total 42 30 88 160
In the square $I\times I$ table, the main diagonal {i = j} represents rater or observer agreement. Let the term πij denotes the probability that Siskel classifies the move in category i and Ebert classifies the same movie in category j. For example, π13 means that Ebert gave "two thumbs up" and Siskel gave "thumbs down".
The term πii is the probability that they both placed the movie into the same category i, and Σi πii is the total probability of agreement. Ideally, all or most of the observations will be classified on the main diagonal, which denotes perfect agreement.
Think about the following question, then click on the icon to the left display an answer.
#### Is it possible to define perfect disagreement?
Cohen’s kappa is a single summary index that describes strength of inter-rater agreement.
For I × I tables, it’s equal to:
$\kappa=\dfrac{\sum\pi_{ii}-\sum\pi_{i+}\pi_{+i}}{1-\sum\pi_{i+}\pi_{+i}}$
This statistic compares the observed agreement to the expected agreement, computed assuming the ratings are independent.
The null hypothesis that the ratings are independent is, therefore, equivalent to
$\pi_{ii}=\pi_{i+}\pi_{+i}\text{ for all }i$
If the observed agreement is due to chance only, i.e. if the ratings are completely independent, then each diagonal element is a product of the two marginals.
Since the total probability of agreement is Σi πii, then the probability of agreement under the null hypothesis equals to Σi πi+π+i. Note also that Σi πii = 0 means no agreement and Σi πii = 1 indicates perfect agreement. The kappa statistic is defined so that a larger value implies stronger agreement, furthermore:
• Perfect agreement $\kappa = 1$.
• $\kappa = 0$, does not mean perfect disagreement; it only means agreement by chance as that would indicate that the diagonal cell probabilities are simply product of the corresponding marginals.
• If agreement is greater than agreement by chance, then $\kappa\geq 0$.
• If agreement is less than agreement obtained by chance, then $\kappa\leq 0$.
• The minimum possible value of $\kappa = −1$.
• A value of kappa higher than 0.75 will indicate excellent agreement while lower than 0.4 will indicate poor agreement.
Notice that, strong agreement implies strong association, but strong association may not imply strong agreement. For example, if Siskel puts most of the movies into the con category, while Ebert puts them into the pro category, the association might be strong, but there is certainly no agreement. You may also think of the situation where one examiner is tougher than the other. The first one consistently gives one grade less than the more lenient one. In this case also the association is very strong but agreement may be insignificant.
Under multinomial sampling, the sampled value $\hat{\kappa}$ has a large-sample normal distribution. For the sample variance you can refer to Agresti (2013), pg. 435. Thus we can rely on the usual asymptotic 95% confidence interval.
In SAS, use the option AGREE as shown below and in the SAS program MovieCritiques.sas
From the output below, we can see that the "Simple Kappa" gives the estimated kappa value of 0.389 with its asymptotic standard error (ASE) of 0.0598. The difference between observed agreement and expected under independence is about 40% of the maximum possible difference. Based on the reported 95% confidence interval, $\kappa$ falls somewhere between 0.27 and 0.51 indicating only a moderate agreement between Siskel and Ebert.
For R, see the file MovieCritiques.R. If you use {vcd} package you can use function Kappa(); do NOT forget to upload package first, e.g., library(vcd).
From the output below, we can see that the "unweighted" statistic gives the estimated kappa value of 0.389 with its asymptotic standard error (ASE) of 0.063. The difference between observed agreement and expected under independence is about 40% of the maximum possible difference. Based on the reported values, the 95% confidence interval will show that $\kappa$ falls somewhere between 0.27 and 0.51 indicating only a moderate agreement between Siskel and Ebert.
There are other similar functions built by R researchers which are worth exploration. You can also write your own function given the above formulas.
Issue with Cohen's Kappa: Kappa strongly depends on the marginal distributions. That is the same rating, but with different proportions of cases in different categories can give very different $\kappa$ values. This is one reason why the minimum value of $\kappa$ depends on the marginal distribution and the minimum possible value of −1 is not always attainable.
Solution: Modeling agreement (e.g. via log-linear or other models) is typically a more informative approach.
Weighted kappa is a version of kappa used for measuring agreement on ordered variables (see Section 11.5.5 of Agresti, 2013). More details on measures of agreement and modeling of matched data can be found in Chapter 11 (Agresti, 2013), and Chapter 8 (Agresti, 2007). We will only touch upon some of these models later in the semester while we study log-linear and logit model.
|
2020-04-05 11:04:41
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8121147155761719, "perplexity": 1348.9991141921605}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585371576284.74/warc/CC-MAIN-20200405084121-20200405114121-00080.warc.gz"}
|
https://tryalgo.org/en/cycles/2016/06/21/halum/
|
# Résolution de problèmes algorithmiques
Apply some operations on arcs to make vertex weights non-negative.
## This is a negative cycle detection problem
First observe that the operations Halum commute. The order in which they are applied does not matter and there is no need to apply it twice for a same vertex.
Let K be the lower bound on the arc weights we want to obtain. Suppose that we call Halum(v,d[v]) for some values d[v] on every vertex v. This means that we have the following lower bound on the arc weights:
w[u,v]+d[u]-d[v]≥K
Does it ring a bell? This is exactly the inequality that appears in the shortest path problem. Hence for fixed K the goal is to find potentials d that satisfy these inequalities for a graph with arc weights w[u,v]-K. It is known that there is a solution if and only if the graph has no negative cycle. To convince yourself just notice that if you add up the arc inequalities along a cycle, the potentials cancel out and you are left with an equality stating that the total arc weights along the cycle have to be non-negative.
So you could just run a negative cycle detection algorithm for a fixed K, and use binary search to detect the optimal K. The domain of K is [-|V|*10000,+|V|*10000], so the binary search stops after $log_2(10^{7})\leq 24$ iterations.
A sample code can be found here.
|
2022-10-05 11:27:09
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7949220538139343, "perplexity": 272.9509622450892}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337625.5/warc/CC-MAIN-20221005105356-20221005135356-00768.warc.gz"}
|
https://www.physicsforums.com/threads/partial-fraction-decomposition.763464/
|
# Partial fraction decomposition
1. Jul 26, 2014
### eric_999
Alright, Im here again with another question....
When I have a rational function, let's say (x+4)/(x-2)(x-3) I rewrite it like A/(x-2) + B(x-3) and then solve it for A & B. But when we have for e.g (x^2 + 3x + 2)/(x(x^2 +1 )) the book tells me to rewrite it like:
A/x + (Bx + C)/(x^2 + 1), and then solve for A & B. I understand that the term x^2 +1 cannot be further decomposed (at least not if we only consider real numbers). However feel I don't get everything.
For example if I instead try to rewrite it on the form A/x + B/(x^2 + 1), so A(x^2 + 1) + Bx = x^2 +3x + 2, so A = 1, B = 3, and A = 2, which is of course impossible. On the other hand I can see that the other form described above (which the book tells me to use) works fine.
The problem is that with different rational functions I might be able to try different strategies and just see which one works out, but I feel i don't understand it the way I want to. In the book they simply say "the rational function P(x)/Q(x) can be expressed as a sum of partial fractions like this: .... but we don't explain it further cuz this is not a course in algebra" I feel I need to really understand, not just memorize the techniques!
Thanks for help!
2. Jul 26, 2014
### HallsofIvy
Staff Emeritus
There are, in fact, a number of ways of finding the coefficients for the fractions. If you have
$$\frac{x+ 4}{(x- 2)(x- 3)}= \frac{A}{x- 2}+ \frac{B}{x- 3}$$
1. Do the addition on the right: multiply both numerator and denominator of the first fraction by x- 3 and the numerator and denominator of the second fraction by x- 2:
$$\frac{x+ 4}{(x- 2)(x- 3)}= \frac{A(x- 3)}{(x- 2)(x- 3)}+ \frac{B(x- 2)}{(x- 2)(x- 3)}$$
$$= \frac{Ax- 3A+ Bx- 2B}{(x- 2)(x- 3)}= \frac{(A+ B)x- (3A- 2B)}{(x- 2)(x- 3)}$$
so we must have A+ B= 1 and -3A- 2B= 4.
2. Multiply both sides by (x- 3)(x- 2):
$$x+ 4= A(x- 3)+ B(x- 2)= (A+ B)x- (3A+ 2B)$$
so that we have the same two equations.
3. After getting
$$x+ 4= A(x- 3)+ B(x- 2)$$
choose any two values you like for x to get two linear equations for A and B.
4. In particular, choosing x= 2 and x= 3 gives very simple, separated, equations: 6= -A and 7= B.
3. Jul 26, 2014
### verty
Here is a pdf that makes it nice and clear.
The proof of these decompositions is not going to be too interesting, it'll just show that in each case, for the right coefficients, the simplest form of the numerator is the one given by the theorem.
4. Jul 28, 2014
Thanks!
|
2017-12-17 01:20:06
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9198437929153442, "perplexity": 474.2185435287081}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-51/segments/1512948592202.83/warc/CC-MAIN-20171217000422-20171217022422-00061.warc.gz"}
|
https://lavelle.chem.ucla.edu/forum/viewtopic.php?f=33&t=35305
|
## Formal Charge of a compound
$FC=V-(L+\frac{S}{2})$
josephyim1H
Posts: 49
Joined: Thu Sep 27, 2018 11:15 pm
### Formal Charge of a compound
Does the overall formal charge of an atom (addition of formal charges of individual atoms) always have to add up to the overall charge of the compound? If the overall charge of the compound is +1 does the addition of the fc of each atom have to add up to +1?
Katie Sy 1L
Posts: 48
Joined: Thu Sep 27, 2018 11:18 pm
### Re: Formal Charge of a compound
yes the overall formal charge of the atom will equal the total of the formal charge of each element in the atom
Shreya Tamatam 3B
Posts: 30
Joined: Thu Sep 27, 2018 11:25 pm
### Re: Formal Charge of a compound
Hi, just to add on, if you look at the example we did in lecture where we assigned formal charged to a sulfate ion (SO4^2-) we see that two of the oxygen atoms had a -1 charge while all the other atoms had 0 charge. Because two atoms had a -1 charge, the total charge for the molecule was -2.
Eunice Lee 1A
Posts: 48
Joined: Tue Oct 02, 2018 11:16 pm
### Re: Formal Charge of a compound
The overall charge can be zero, but it doesn't have to be.
Return to “Formal Charge and Oxidation Numbers”
### Who is online
Users browsing this forum: No registered users and 2 guests
|
2019-02-20 20:34:14
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 1, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8259738683700562, "perplexity": 3142.1328994910923}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-09/segments/1550247496080.57/warc/CC-MAIN-20190220190527-20190220212527-00119.warc.gz"}
|
https://eprint.iacr.org/2010/111
|
### On zero practical significance of “"Key recovery attack on full GOST block cipher with zero time and memory”"
##### Abstract
In this paper we show that the related key boomerang attack by E. Fleischmann et al. from the paper mentioned in the title does not allow to recover the master key of the GOST block cipher with complexity less than the complexity of the exhaustive search. Next we present modified attacks. Finally we argue that these attacks and the related key approach itself are of extremely limited practical applications and do not represent a fundamental obstacle to practical usage of the block ciphers such as GOST, AES and Kasumi.
Available format(s)
Publication info
Published elsewhere. Unknown status
Keywords
secret-keyrelated-keyboomerangGOST
Contact author(s)
History
2015-10-07: last of 2 revisions
See all versions
Short URL
https://ia.cr/2010/111
CC BY
BibTeX
@misc{cryptoeprint:2010/111,
|
2022-06-26 05:58:48
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.24045242369174957, "perplexity": 2115.909002086226}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103037089.4/warc/CC-MAIN-20220626040948-20220626070948-00507.warc.gz"}
|
http://spectrum.library.concordia.ca/758/
|
Title:
# A study of two dimensional braided composites
Yan, Yong (1998) A study of two dimensional braided composites. Masters thesis, Concordia University.
Preview
PDF
3473Kb
## Abstract
A microstructure model of 2-D braided composite is developed. The fiber volume and crimp angle are evaluated by this model and are compared with experimental results. The maximum fiber volume fraction is also obtained. Effective stiffness of 2-D braided composite is obtained in a close form by the analysis of elastic deformation energy in repeat-unit-cell of braided composites. Effective stiffness consists of contributions of axial yarn, braiding yarn, and matrix material. Each of them takes a different weight in the effective stiffness. The weights of contribution of axial yarn, braiding yarn, and matrix material are $\rm\chi V\sb{f}/k$, (1-$\chi$)$\rm V\sb{f}/k,$ and (1-V$\rm \sb{f}/k)$ respectively, where $\rm V\sb{f}$ is the fiber volume fraction, $\chi$ is the axial yarn content in a braided composite, and k is the filament packing fraction. The results of effective stiffness from theoretical prediction are very good as compared with test data from experiments. Effective coefficients of thermal and hygro expansion are analyzed by virtual energy method. The independent parameters in design and analysis of braided composites are successfully separated from all other parameters. This greatly reduces the difficulties of design and analysis of 2-D braided composites. Objected-oriented Matlab programs are built for the parameter analysis.
Divisions: Concordia University > Faculty of Engineering and Computer Science > Mechanical and Industrial Engineering Thesis (Masters) Yan, Yong xv, 137 leaves : ill. ; 29 cm. Concordia University Theses (M.A.Sc.) Mechanical and Industrial Engineering 1998 Hoa, Suong Van 758 Concordia University Libraries 27 Aug 2009 13:14 08 Dec 2010 10:16 http://clues.concordia.ca/search/c?SEARC...
All items in Spectrum are protected by copyright, with all rights reserved. The use of items is governed by Spectrum's terms of access.
Repository Staff Only: item control page
|
2014-04-18 21:03:17
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.23142503201961517, "perplexity": 2675.676219949536}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-15/segments/1397609535095.9/warc/CC-MAIN-20140416005215-00457-ip-10-147-4-33.ec2.internal.warc.gz"}
|
http://clay6.com/qa/15785/the-rate-constant-of-a-first-order-reaction-at-27-is-10-min-the-temperature
|
Browse Questions
# The rate constant of a first order reaction at $27^{\circ}$ is $10^{-3} \min ^{-1})$. The temperature coefficient of this reaction is 2. What is the rate constant $(in min^{-1})$ at $17^{\circ}C$ for this reaction ?
$\begin {array} {1 1} (a)\;10^{-3} & \quad (b)\;5 \times 10^{-4} \\ (c)\;2 \times 10^{-3} & \quad (d)\;10^{-2} \end {array}$
$(b)\;5 \times 10^{-4}$
|
2017-02-19 21:46:14
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9786549806594849, "perplexity": 338.24951784413946}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170253.67/warc/CC-MAIN-20170219104610-00124-ip-10-171-10-108.ec2.internal.warc.gz"}
|
https://cstheory.stackexchange.com/questions/6218/is-there-a-lower-bound-of-number-of-redundant-bits-necessary-to-encode-a-word-wi
|
# Is there a lower bound of number of redundant bits necessary to encode a word with certain Hamming distance?
Is there a lower bound (in coding theory or elsewhere) of number of redundant bits necessary to encode a word with certain Hamming distance?
There is some known data for parity checks, CRC, Hamming encoding, but is there a theoretical limit?
• Yes, there are several bounds. Study coding theory, it is fun! Apr 23 '11 at 13:01
As Tsuyoshi points out in the comments, there are a number of such bounds. However, for the sake of actually giving you an answer, let me point you to the Singleton bound, which states that for a $(s,N,d)$-code over $\mathbb{F}_b$ that $N\leq b^{s-d+1}$.
• Be advised that the Singleton bound is unattainable for long codes. RS-codes achieve, if $s\le b$, and that's about it, so for binary codes, this bound is useless for $s>3$. If I had a dime for every time I have to explain that a binary code of length 1000 cannot correct 100 errors with 200 redundancy bits... Jul 16 '11 at 16:28
• @Jyrki: I didn't claim it was tight, and I do not see why a loose bound is necessarily 'useless'. Jul 17 '11 at 0:30
• My remark was aimed at the OP. Sorry about not making that clear. I have just met too many people, who want to produce ballpark figures of system performance by using Singleton bound alone. A pet peeve, sorta. Jul 17 '11 at 5:20
• @Jyrki: ah, I see. Jul 17 '11 at 11:14
You may also want to look at Delsarte's linear programming bound, and the Gilbert-Varshamov bound. The linear programming bound gives a lower bound on the number of redundant bits necessary. The Gilbert-Varshamov bound gives a non-constructive (randomized) upper bound on the number of redundant bits required.
• Actually I think GV is constructive. Only thing is it takes exponential time for each choices of $t$ (half-min distance in the given metric) and $n$ dimension of the code space of the given alphabet $q$.
– v s
Jul 17 '11 at 8:42
• Only the asymptotic bounds are non-constructive. The finitary versions are always constructive. Actually Manin and Tsfasman have an argument saying the limit of the possible cardinality of the codes may be even undecidable. Manin and Marcolli have an interesting paper on this building on techniques from non-commutative geometry and motives.
– v s
Jul 17 '11 at 8:47
• @v s: Absolutely correct. But I think the definition of "constructive" may change depending on whether you're talking to a mathematician or a theoretical computer scientist. Jul 18 '11 at 0:35
This is not meant to be a substitute to the bounds linked to by Peter Shor. Just a quick argument showing why the Singleton bound is inaccurate for long binary codes (transporting bulk data).
If your code length is $n$, and you can afford to use $r$ of those for redundancy, then the syndrome of your code has $2^r$ possible values. If you want to correct a single bit error, then using that syndrome alone you have to be able to distinguish between $n+1$ cases: no error, a single error at position $i$, $1\le i\le n$. To be able to do that we must have the inequality $2^r\ge n+1$, or in other words we need $r> \log_2 n$. This is exactly what the Hamming code gives us. Note that the Singleton bound would suggest that you only need two bits of redundancy to correct a single error. In other words, the Singleton bound does not take into account the length of the code at all.
If we want to continue, and correct $t$ errors, then theory becomes more interesting. By the same argument we obviously need the inequality $$2^r\ge1+n+ {n\choose 2}+\cdots+{n\choose t},$$ because $n\choose i$ tells us the number of patterns of $i$ errorneous bits. This leads us to a bound known as the Hamming bound. If we had here $n^t$ on the r.h.s., then we would need $r\approx t\log_2 n$ redundancy bits, which is what the BCH-codes give us. As you see, this estimate was too crude, but for small values of $t$ the error here (after taking the logarithm) is not very big.
Of course, in many a setting the channel is not really making hard bit errors, but soft errors (=reliability figures of individual received bits). Then we can, to an extent, throw away these bounds, and use LDPC or Turbo codes. Alas, I don't know too much about that theory.
• For soft errors, you only can get results about decoding with high probability, and in this case it is Shannon's bound that applies. You can also throw away the bounds for hard bit errors when the errors are probabilistic (rather than worst-case), and you just care about decoding with very high probability, since then Shannon's bound also applies. In both cases, I believe that LDPC and Turbo codes get very close to Shannon's bound. Jul 17 '11 at 13:24
|
2022-01-28 02:25:29
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7966384291648865, "perplexity": 419.24384930707504}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320305341.76/warc/CC-MAIN-20220128013529-20220128043529-00692.warc.gz"}
|
https://www.physicsforums.com/threads/homework-nuclear-physics.844606/
|
Homework Nuclear Physics
Tags:
1. Nov 22, 2015
Rath123
1. The problem statement, all variables and given/known data
What initial mass of 23592U is required to operate a 350 MW reactor for 3 yrs? Assume 46% efficiency.
2. Relevant equations
I used Mass* C* efficiency as a decimal= power (e^-6) * time (in seconds)
and got 800 as the mass, however, this was incorrect
3. The attempt at a solution
(M)*(3e8)^2* .46= 350e6* 9.14 i think it was (whatever 3 years in seconds is) and i was getting 800 as the mass, or 799Kg specifically however, this is incorrect . Where am i going wrong?
2. Nov 22, 2015
Rath123
the three years in seconds was : 9.461e+7
3. Nov 22, 2015
SteamKing
Staff Emeritus
You should try to lay them out in some logical fashion, so others can follow.
In a nuclear reactor, only a small amount of the mass of uranium is converted to energy. The rest winds up as nuclear waste.
The fission of U-235 in the reactor is governed by the reaction discussed in this article:
https://en.wikipedia.org/wiki/Uranium-235
4. Nov 23, 2015
andrevdh
Last edited: Nov 23, 2015
5. Nov 23, 2015
SteamKing
Staff Emeritus
I think the efficiency is for the conversion of the energy released by the fission into the electric power which comes out of the plant. There is a limit to the amount of energy which can be extracted from the steam turbines, generators, etc. which are all used with the nuclear reactor to turn the energy of fission into electricity.
6. Nov 23, 2015
andrevdh
Yes. I realized it and edited my post.
|
2017-12-12 16:30:53
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8235363364219666, "perplexity": 1688.1022845602192}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-51/segments/1512948517350.12/warc/CC-MAIN-20171212153808-20171212173808-00432.warc.gz"}
|
http://www.mathworks.com/help/physmod/sdl/drive/longitudinalvehicledynamics.html?nocookie=true
|
# Longitudinal Vehicle Dynamics
Model longitudinal dynamics and motion of two-axle, four-wheel vehicle
## Library
Vehicle Components
## Description
The Longitudinal Vehicle Dynamics block models a two-axle vehicle, with four equally sized wheels, moving forward or backward along its longitudinal axis. You specify front and rear longitudinal forces Fxf, Fxr applied at the front and rear wheel contact points, as well as the incline angle β, as a set of Simulink® input signals. The block computes the vehicle velocity Vx and the front and rear vertical load forces Fzf, Fzr on the vehicle as a set of Simulink output signals. All signals have MKS units.
You must specify the vehicle mass and certain geometric and kinematic details:
• Position of the vehicle's center of gravity (CG) relative to the front and rear axles and to the ground
• Effective frontal cross-sectional area
• Aerodynamic drag coefficient
• Initial longitudinal velocity
See Vehicle Model following for details of the vehicle dynamics.
### Limitations
The Longitudinal Vehicle Dynamics block lets you model only longitudinal (horizontal) dynamics. Depending on the initial configuration, the block might implement inconsistent initial conditions for the vertical load forces, causing spurious transient dynamics just after the simulation starts.
Caution The Longitudinal Vehicle Dynamics block does not correctly simulate with sudden changes in the external (longitudinal and gravity) forces. It correctly models only slowly changing external conditions.
### Using Vehicle Component Blocks
Use the blocks of the Vehicle Components library as a starting point for vehicle modeling. To see how a Vehicle Component block models a driveline component, look under the block mask. The blocks of this library serve as suggestions for developing variant or entirely new models to simulate the same components. Break the block's library link before modifying it and creating your own version.
## Dialog Box and Parameters
Mass
Mass m of the vehicle in kilograms (kg). The default is `1200`.
Horizontal distance from CG to front axle
Horizontal distance a, in meters (m), from the vehicle's center of gravity to the vehicle's front wheel axle. The default is `1.4`.
Horizontal distance from CG to rear axle
Horizontal distance b, in meters (m), from the vehicle's center of gravity to the vehicle's rear wheel axle. The default is `1.6`.
CG height from ground
Height h, in meters (m), of the vehicle's center of gravity from the ground. The default is `0.5`.
Frontal area
Effective cross-sectional area A, in meters squared (m2), presented by the vehicle in longitudinal motion, for the purpose of computing the aerodynamic drag force on the vehicle. The default is `3`.
Drag coefficient
The dimensionless aerodynamic drag coefficient Cd, for the purpose of computing the aerodynamic drag force on the vehicle. The default is `0.4`.
Initial longitudinal velocity
The initial value of the vehicle's horizontal velocity, in meters/second (m/s). The default is `0`.
## Vehicle Model
The vehicle axles are parallel and lie in a plane parallel to the ground. The longitudinal x direction lies in this plane and perpendicular to the axles. If the vehicle is traveling on an incline slope β, the vertical z direction is not parallel to gravity but is always perpendicular to the axle-ground plane.
This figure and table define the vehicle motion model variables.
Vehicle Dynamics and Motion
Vehicle Model Variables and Constants
SymbolMeaning and Unit
g = -9.81 m/s2Gravitational acceleration (m/s2)
mVehicle mass (kg)
AEffective frontal vehicle cross-sectional area (m2)
hHeight of vehicle CG above the ground (m)
a, bDistance of front and rear axles, respectively, from the vertical projection point of vehicle CG onto the axle-ground plane (m)
VxLongitudinal vehicle velocity (m/s)
Fxf, FxrLongitudinal forces on the vehicle at the front and rear wheel ground contact points, respectively (N)
Fzf, FzrVertical load forces on the vehicle at the front and rear ground contact points, respectively (N)
CdAerodynamic drag coefficient (N·s2/kg·m)
ρ = 1.2 kg/m3Mass density of air (kg/m3)
|Fd| = ½CdρAVx 2
Aerodynamic drag force (N)
### Vehicle Dynamics and Motion
The vehicle motion is determined by the net effect of all the forces and torques acting on it. The longitudinal tire forces push the vehicle forward or backward. The weight mg of the vehicle acts through its center of gravity (CG). Depending on the incline angle, the weight pulls the vehicle to the ground and either pulls it backward or forward. Whether the vehicle travels forward or backward, aerodynamic drag slows it down. For simplicity, the drag is assumed to act through the CG.
$\begin{array}{l}m{\stackrel{˙}{V}}_{x}={F}_{x}+\text{}{F}_{d}-mg\cdot \mathrm{sin}\beta ,\\ {F}_{x}={F}_{xf}+{F}_{xr},\\ {F}_{d}=-\frac{1}{2}{C}_{d}\rho A{{V}_{x}}^{2}\cdot \mathrm{sgn}\left({V}_{x}\right)\end{array}$
Zero vertical acceleration and zero pitch torque require
$\begin{array}{l}{F}_{zf}=\frac{+h\left({F}_{d}-mg\mathrm{sin}\beta -m{\stackrel{˙}{V}}_{x}\right)+b\cdot mg\mathrm{cos}\beta }{a+b}\\ {F}_{zr}=\frac{-h\left({F}_{d}-mg\mathrm{sin}\beta -m{\stackrel{˙}{V}}_{x}\right)+a\cdot mg\mathrm{cos}\beta }{a+b}\end{array}$
Note that Fzf + Fzr = mg·cosβ.
Caution The Longitudinal Vehicle Dynamics block is implemented with a transfer function that imposes a small delay on the vertical force reaction to changes in the horizontal forces. The vertical and pitch equilibria hold only on average.
## Examples
The example model drive_4wd_dynamics combines two differentials with four tire-wheel assemblies to model the contact of tires with the road and the longitudinal vehicle motion.
The example model drive_vehicle models an entire one-wheel vehicle, including Tire and Longitudinal Vehicle Dynamics blocks.
## References
Centa, G., Motor Vehicle Dynamics: Modeling and Simulation, Singapore, World Scientific, 1997.
Pacejka, H. B., Tire and Vehicle Dynamics, Society of Automotive Engineers and Butterworth-Heinemann, Oxford, 2002.
|
2015-10-04 05:08:59
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 2, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.47311264276504517, "perplexity": 2706.044168656409}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-40/segments/1443736672441.2/warc/CC-MAIN-20151001215752-00004-ip-10-137-6-227.ec2.internal.warc.gz"}
|
https://yeesian.com/ArchGDAL.jl/latest/reference/
|
# API Reference
## General
ArchGDAL.arecompatibleMethod
arecompatible(dtype::OGRFieldType, subtype::OGRFieldSubType)
Return if type and subtype are compatible.
References
• https://gdal.org/development/rfc/rfc50ogrfield_subtype.html
source
ArchGDAL.getnameMethod
getname(obj::OGRFieldSubType)
Fetch human readable name for a field subtype.
References
• https://gdal.org/development/rfc/rfc50ogrfield_subtype.html
source
ArchGDAL.iscomplexMethod
iscomplex(dtype::GDALDataType)
true if dtype is one of GDT_{CInt16|CInt32|CFloat32|CFloat64}.
source
ArchGDAL.typeunionMethod
typeunion(dt1::GDALDataType, dt2::GDALDataType)
Return the smallest data type that can fully express both input data types.
source
ArchGDAL.clearconfigoptionMethod
clearconfigoption(option::AbstractString)
This function can be used to clear a setting.
Note: it will not unset an existing environment variable; it will just unset a value previously set by setconfigoption().
source
ArchGDAL.clearthreadconfigoptionMethod
clearthreadconfigoption(option::AbstractString)
This function can be used to clear a setting.
Note: it will not unset an existing environment variable; it will just unset a value previously set by setthreadconfigoption().
source
ArchGDAL.getconfigoptionFunction
getconfigoption(option::AbstractString, default = C_NULL)
Get the value of a configuration option.
The value is the value of a (key, value) option set with setconfigoption(). If the given option was not defined with setconfigoption(), it tries to find it in environment variables.
Parameters
• option the key of the option to retrieve
• default a default value if the key does not match existing defined options
Returns
the value associated to the key, or the default value if not found.
source
ArchGDAL.getthreadconfigoptionFunction
getthreadconfigoption(option::AbstractString, default = C_NULL)
Same as getconfigoption() but with settings from setthreadconfigoption().
source
ArchGDAL.metadataMethod
metadata(obj; domain::AbstractString = "")
source
ArchGDAL.metadataitemMethod
metadataitem(obj, name::AbstractString, domain::AbstractString)
Parameters
• name the name of the metadata item to fetch.
• domain (optional) the domain to fetch for.
Returns
The metadata item on success, or an empty string on failure.
source
ArchGDAL.setconfigoptionMethod
setconfigoption(option::AbstractString, value)
Set a configuration option for GDAL/OGR use.
Those options are defined as a (key, value) couple. The value corresponding to a key can be got later with the getconfigoption() method.
Parameters
• option the key of the option
• value the value of the option, or NULL to clear a setting.
This mechanism is similar to environment variables, but options set with setconfigoption() overrides, for getconfigoption() point of view, values defined in the environment.
If setconfigoption() is called several times with the same key, the value provided during the last call will be used.
source
ArchGDAL.setthreadconfigoptionMethod
setthreadconfigoption(option::AbstractString, value)
Set a configuration option for GDAL/OGR use.
Those options are defined as a (key, value) couple. The value corresponding to a key can be got later with the getconfigoption() method.
Parameters
• option the key of the option
• value the value of the option
This function sets the configuration option that only applies in the current thread, as opposed to setconfigoption() which sets an option that applies on all threads.
source
ArchGDAL.@convertMacro
@convert(<T1>::<T2>,
<conversions>
)
Generate convert functions both ways between ArchGDAL Enum of typeids (e.g. ArchGDAL.OGRFieldType) and other types or typeids.
ArchGDAL uses Enum types, listing typeids of various data container used in GDAL/OGR object model. Some of these types are used to implement concrete types in julia through parametric composite types based on those Enum of typeids (e.g. Geometry and IGeometry types with OGRwkbGeometryType)
Other types or typeids can be:
• GDAL CEnum.Cenum typeids (e.g. GDAL.OGRFieldType),
• Base primitive DataType types (e.g. Bool),
• other parametric composite types (e.g. ImageCore.Normed)
Arguments
• (<T1>::<T2>)::Expr: source and target supertypes, where T1<:Enum and T2<:CEnum.Cenum || T2::Type{DataType} || T2::UnionAll}
• (<stype1>::<stype2>)::Expr: source and target subtypes or type ids with stype1::T1 and
• stype2::T2 where T2<:CEnum.Cenum or
• stype2::T2 where T2::Type{DataType} or
• stype2<:T2where T2<:UnionAll
• ...
Note: In the case where the mapping is not bijective, the last declared typeid of subtype is used. Example:
@convert(
OGRFieldType::DataType,
OFTInteger::Bool,
OFTInteger::Int16,
OFTInteger::Int32,
)
will generate a convert functions giving:
• Int32 type for OFTInteger and not Ìnt16
• OFTInteger OGRFieldType typeid for both Int16 and Int32
Usage
General case:
@convert(GDALRWFlag::GDAL.GDALRWFlag,
GF_Write::GDAL.GF_Write,
)
does the equivalent of
const GDALRWFlag_to_GDALRWFlag_map = ImmutableDict(
GF_Write => GDAL.GF_Write
)
Base.convert(::Type{GDAL.GDALRWFlag}, ft::GDALRWFlag) =
GDALRWFlag_to_GDALRWFlag_map[ft]
const GDALRWFlag_to_GDALRWFlag_map = ImmutableDict(
GDAL.GF_Write => GF_Write
)
Base.convert(::Type{GDALRWFlag}, ft::GDAL.GDALRWFlag) =
GDALRWFlag_to_GDALRWFlag_map[ft]
Case where 1st type <: Enum and 2nd type == DataType or ìsa UnionAll:
@convert(OGRFieldType::DataType,
OFTInteger::Bool,
OFTInteger::Int16,
)
does the equivalent of
const OGRFieldType_to_DataType_map = ImmutableDict(
OFTInteger => Bool,
OFTInteger => Int16,
)
Base.convert(::Type{DataType}, ft::OGRFieldType) =
OGRFieldType_to_DataType_map[ft]
Base.convert(::Type{OGRFieldType}, ft::Type{Bool}) = OFTInteger
Base.convert(::Type{OGRFieldType}, ft::Type{Int16}) = OFTInteger
source
## GDAL Constants
ArchGDAL.GDALAccessType
The value of GDALAccess could be different from GDAL.GDALAccess.
It maps correctly to GDAL.GDALAccess if you do e.g.
convert(GDAL.GDALAccess, ArchGDAL.GA_ReadOnly)
source
ArchGDAL.GDALAsyncStatusTypeType
The value of GDALAsyncStatusType could be different from GDAL.GDALAsyncStatusType.
It maps correctly to GDAL.GDALAsyncStatusType if you do e.g.
convert(GDAL.GDALAsyncStatusType, ArchGDAL.GARIO_PENDING)
source
ArchGDAL.GDALColorInterpType
The value of GDALColorInterp could be different from GDAL.GDALColorInterp.
It maps correctly to GDAL.GDALColorInterp if you do e.g.
convert(GDAL.GDALColorInterp, ArchGDAL.GCI_Undefined)
source
ArchGDAL.GDALDataTypeType
The value of GDALDataType could be different from GDAL.GDALDataType.
It maps correctly to GDAL.GDALDataType if you do e.g.
convert(GDAL.GDALDataType, ArchGDAL.GDT_Unknown)
source
ArchGDAL.GDALPaletteInterpType
The value of GDALPaletteInterp could be different from GDAL.GDALPaletteInterp.
It maps correctly to GDAL.GDALPaletteInterp if you do e.g.
convert(GDAL.GDALPaletteInterp, ArchGDAL.GPI_Gray)
source
ArchGDAL.GDALRATFieldTypeType
The value of GDALRATFieldType could be different from GDAL.GDALRATFieldType.
It maps correctly to GDAL.GDALRATFieldType if you do e.g.
convert(GDAL.GDALRATFieldType, ArchGDAL.GFT_Integer)
source
ArchGDAL.GDALRATFieldUsageType
The value of GDALRATFieldUsage could be different from GDAL.GDALRATFieldUsage.
It maps correctly to GDAL.GDALRATFieldUsage if you do e.g.
convert(GDAL.GDALRATFieldUsage, ArchGDAL.GFU_Generic)
source
ArchGDAL.GDALRWFlagType
The value of GDALRWFlag could be different from GDAL.GDALRWFlag.
It maps correctly to GDAL.GDALRWFlag if you do e.g.
convert(GDAL.GDALRWFlag, ArchGDAL.GF_Read)
source
ArchGDAL.OGRFieldSubTypeType
The value of OGRFieldSubType could be different from GDAL.OGRFieldSubType.
It maps correctly to GDAL.OGRFieldSubType if you do e.g.
convert(GDAL.OGRFieldSubType, ArchGDAL.OFSTNone)
source
ArchGDAL.OGRFieldTypeType
The value of OGRFieldType could be different from GDAL.OGRFieldType.
It maps correctly to GDAL.OGRFieldType if you do e.g.
convert(GDAL.OGRFieldType, ArchGDAL.OFTInteger)
source
ArchGDAL.OGRJustificationType
The value of OGRJustification could be different from GDAL.OGRJustification.
It maps correctly to GDAL.OGRJustification if you do e.g.
convert(GDAL.OGRJustification, ArchGDAL.OJUndefined)
source
ArchGDAL.OGRSTClassIdType
The value of OGRSTClassId could be different from GDAL.OGRSTClassId.
It maps correctly to GDAL.OGRSTClassId if you do e.g.
convert(GDAL.OGRSTClassId, ArchGDAL.OGRSTCNone)
source
ArchGDAL.OGRSTUnitIdType
The value of OGRSTUnitId could be different from GDAL.OGRSTUnitId.
It maps correctly to GDAL.OGRSTUnitId if you do e.g.
convert(GDAL.OGRSTUnitId, ArchGDAL.OGRSTUGround)
source
ArchGDAL.OGRwkbByteOrderType
The value of OGRwkbByteOrder could be different from GDAL.OGRwkbByteOrder.
It maps correctly to GDAL.OGRwkbByteOrder if you do e.g.
convert(GDAL.OGRwkbByteOrder, ArchGDAL.wkbXDR)
source
ArchGDAL.OGRwkbGeometryTypeType
The value of OGRwkbGeometryType could be different from GDAL.OGRwkbGeometryType.
It maps correctly to GDAL.OGRwkbGeometryType if you do e.g.
convert(GDAL.OGRwkbGeometryType, ArchGDAL.wkbUnknown)
source
## GDAL Datasets
ArchGDAL.buildoverviews!Method
buildoverviews!(dataset::AbstractDataset, overviewlist::Vector{Cint};
bandlist, resampling="NEAREST", progressfunc, progressdata)
Build raster overview(s).
If the operation is unsupported for the indicated dataset, then CEFailure is returned, and CPLGetLastErrorNo() will return CPLENotSupported.
Parameters
• overviewlist overview decimation factors to build.
Keyword Parameters
• panBandList list of band numbers. Must be in Cint (default = all)
• sampling one of "NEAREST" (default), "GAUSS","CUBIC","AVERAGE","MODE", "AVERAGE_MAGPHASE" or "NONE" controlling the downsampling method applied.
• progressfunc a function to call to report progress, or NULL.
• progressdata application data to pass to the progress function.
source
ArchGDAL.copyMethod
copy(dataset::AbstractDataset; [filename, [driver, [<keyword arguments>]]])
Create a copy of a dataset.
This method will attempt to create a copy of a raster dataset with the indicated filename, and in this drivers format. Band number, size, type, projection, geotransform and so forth are all to be copied from the provided template dataset.
Parameters
• dataset the dataset being duplicated.
Keyword Arguments
• filename the filename for the new dataset. UTF-8 encoded.
• driver the driver to use for creating the new dataset
• strict $true$ if the copy must be strictly equivalent, or more normally $false$ if the copy may adapt as needed for the output format.
• options additional format dependent options controlling creation
of the output file. The APPEND_SUBDATASET=YES option can be specified to avoid prior destruction of existing dataset.
Example
dataset = ArchGDAL.copy(originaldataset)
# work with dataset from here
or
ArchGDAL.copy(originaldataset) do dataset
# work with dataset from here
end
Returns
The newly created dataset.
source
ArchGDAL.copywholeraster!Method
copywholeraster(source::AbstractDataset, dest::AbstractDataset;
<keyword arguments>)
Copy all dataset raster data.
This function copies the complete raster contents of one dataset to another similarly configured dataset. The source and destination dataset must have the same number of bands, and the same width and height. The bands do not have to have the same data type.
Currently the only options supported are : "INTERLEAVE=PIXEL" to force pixel interleaved operation and "COMPRESSED=YES" to force alignment on target dataset block sizes to achieve best compression. More options may be supported in the future.
This function is primarily intended to support implementation of driver specific createcopy() functions. It implements efficient copying, in particular "chunking" the copy in substantial blocks and, if appropriate, performing the transfer in a pixel interleaved fashion.
source
ArchGDAL.createMethod
create(filename::AbstractString; driver, width, height, nbands, dtype,
options)
Create a new dataset.
Parameters
• filename the filename for the dataset being created.
Keyword Arguments
• driver the driver to use for creating the new dataset
• options additional format dependent options controlling creation
of the output file. The APPEND_SUBDATASET=YES option can be specified to avoid prior destruction of existing dataset.
• width, height, nbands, dtype: only for raster datasets.
Example
dataset = ArchGDAL.create(filename; ...)
# work with raster dataset from here
or
ArchGDAL.create(filename; ...) do dataset
# work with vector dataset from here
end
Returns
The newly created dataset.
source
ArchGDAL.deletelayer!Method
deletelayer!(dataset::AbstractDataset, i::Integer)
Delete the indicated layer (at index i; between 0 to nlayer()-1)
Returns
OGRERR_NONE on success, or OGRERR_UNSUPPORTED_OPERATION if deleting layers is not supported for this dataset.
source
ArchGDAL.filelistMethod
filelist(dataset::AbstractDataset)
Fetch files forming dataset.
Returns a list of files believed to be part of this dataset. If it returns an empty list of files it means there is believed to be no local file system files associated with the dataset (for instance a virtual dataset). The returned file list is owned by the caller and should be deallocated with CSLDestroy().
The returned filenames will normally be relative or absolute paths depending on the path used to originally open the dataset. The strings will be UTF-8 encoded
source
ArchGDAL.getbandMethod
getband(dataset::AbstractDataset, i::Integer)
getband(ds::RasterDataset, i::Integer)
Fetch a band object for a dataset from its index.
source
ArchGDAL.getgeotransform!Method
getgeotransform!(dataset::AbstractDataset, transform::Vector{Cdouble})
Fetch the affine transformation coefficients.
Fetches the coefficients for transforming between pixel/line (P,L) raster space, and projection coordinates (Xp,Yp) space.
Xp = padfTransform[0] + P*padfTransform[1] + L*padfTransform[2];
Yp = padfTransform[3] + P*padfTransform[4] + L*padfTransform[5];
In a north up image, padfTransform[1] is the pixel width, and padfTransform[5] is the pixel height. The upper left corner of the upper left pixel is at position (padfTransform[0],padfTransform[3]).
The default transform is (0,1,0,0,0,1) and should be returned even when a CE_Failure error is returned, such as for formats that don't support transformation to projection coordinates.
Parameters
• buffer a six double buffer into which the transformation will be placed.
Returns
CE_None on success, or CE_Failure if no transform can be fetched.
source
ArchGDAL.getlayerMethod
getlayer(dataset::AbstractDataset, name::AbstractString)
Fetch the feature layer corresponding to the given name
The returned layer remains owned by the dataset and should not be deleted by the application.
source
ArchGDAL.getlayerMethod
getlayer(dataset::AbstractDataset, i::Integer)
Fetch the layer at index i (between 0 and nlayer(dataset)-1)
The returned layer remains owned by the dataset and should not be deleted by the application.
source
ArchGDAL.getlayerMethod
getlayer(dataset::AbstractDataset)
Fetch the first layer and raise an error if dataset contains more than one layer
The returned layer remains owned by the dataset and should not be deleted by the application.
source
ArchGDAL.getprojMethod
getproj(dataset::AbstractDataset)
Fetch the projection definition string for this dataset in OpenGIS WKT format.
It should be suitable for use with the OGRSpatialReference class. When a projection definition is not available an empty (but not NULL) string is returned.
source
ArchGDAL.readMethod
read(filename; flags=OF_READONLY, alloweddrivers, options, siblingfiles)
Open a raster file
Parameters
• filename: the filename of the dataset to be read.
Keyword Arguments
• flags: a combination of OF_* flags (listed below) that may be combined through the logical | operator. It defaults to OF_READONLY.
• Driver kind: OF_Raster for raster drivers, OF_Vector for vector drivers. If none of the value is specified, both are implied.
• Access mode: OF_READONLY (exclusive) or OF_UPDATE.
• Shared mode: OF_Shared. If set, it allows the sharing of handles for a dataset with other callers that have set OF_Shared.
• Verbose error: OF_VERBOSE_ERROR. If set, a failed attempt to open the file will lead to an error message to be reported.
• options: additional format dependent options.
Example
dataset = ArchGDAL.read("point.shp")
# work with dataset from here
or
ArchGDAL.read("point.shp") do dataset
# work with dataset from here
end
Returns
The corresponding dataset.
source
ArchGDAL.releaseresultsetMethod
releaseresultset(dataset::AbstractDataset, layer::FeatureLayer)
Release results of ExecuteSQL().
This function should only be used to deallocate OGRLayers resulting from an ExecuteSQL() call on the same Dataset. Failure to deallocate a results set before destroying the Dataset may cause errors.
Parameters
• dataset: the dataset handle.
• layer: the result of a previous ExecuteSQL() call.
source
ArchGDAL.setproj!Method
setproj!(dataset::AbstractDataset, projstring::AbstractString)
Set the projection reference string for this dataset.
source
ArchGDAL.testcapabilityMethod
testcapability(dataset::AbstractDataset, capability::AbstractString)
Test if capability is available. true if capability available otherwise false.
One of the following dataset capability names can be passed into this function, and a true or false value will be returned indicating whether or not the capability is available for this object.
• ODsCCreateLayer: True if this datasource can create new layers.
• ODsCDeleteLayer: True if this datasource can delete existing layers.
• ODsCCreateGeomFieldAfterCreateLayer: True if the layers of this datasource support CreateGeomField() just after layer creation.
• ODsCCurveGeometries: True if this datasource supports curve geometries.
• ODsCTransactions: True if this datasource supports (efficient) transactions.
• ODsCEmulatedTransactions: True if this datasource supports transactions through emulation.
The #define macro forms of the capability names should be used in preference to the strings themselves to avoid misspelling.
Parameters
• dataset: the dataset handle.
• capability: the capability to test.
source
ArchGDAL.unsafe_copyMethod
unsafe_copy(dataset::AbstractDataset; [filename, [driver,
[<keyword arguments>]]])
Create a copy of a dataset.
This method will attempt to create a copy of a raster dataset with the indicated filename, and in this drivers format. Band number, size, type, projection, geotransform and so forth are all to be copied from the provided template dataset.
Parameters
• dataset the dataset being duplicated.
Keyword Arguments
• filename the filename for the new dataset. UTF-8 encoded.
• driver the driver to use for creating the new dataset
• strict $true$ if the copy must be strictly equivalent, or more
normally $false$ if the copy may adapt as needed for the output format.
• options additional format dependent options controlling creation
of the output file. The APPEND_SUBDATASET=YES option can be specified to avoid prior destruction of existing dataset.
Returns
a pointer to the newly created dataset (may be read-only access).
Note: many sequential write once formats (such as JPEG and PNG) don't implement the Create() method but do implement this CreateCopy() method. If the driver doesn't implement CreateCopy(), but does implement Create() then the default CreateCopy() mechanism built on calling Create() will be used.
It is intended that CreateCopy() will often be used with a source dataset which is a virtual dataset allowing configuration of band types, and other information without actually duplicating raster data (see the VRT driver). This is what is done by the gdal_translate utility for example.
This function will validate the creation option list passed to the driver with the GDALValidateCreationOptions() method. This check can be disabled by defining the configuration option GDAL_VALIDATE_CREATION_OPTIONS=NO.
After you have finished working with the returned dataset, it is required to close it with GDALClose(). This does not only close the file handle, but also ensures that all the data and metadata has been written to the dataset (GDALFlushCache() is not sufficient for that purpose).
In some situations, the new dataset can be created in another process through the GDAL API Proxy mechanism.
source
ArchGDAL.unsafe_createMethod
unsafe_create(filename::AbstractString; driver, width, height, nbands,
dtype, options)
Create a new dataset.
What argument values are legal for particular drivers is driver specific, and there is no way to query in advance to establish legal values.
That function will try to validate the creation option list passed to the driver with the GDALValidateCreationOptions() method. This check can be disabled by defining the configuration option GDALVALIDATECREATION_OPTIONS=NO.
After you have finished working with the returned dataset, it is required to close it with GDALClose(). This does not only close the file handle, but also ensures that all the data and metadata has been written to the dataset (GDALFlushCache() is not sufficient for that purpose).
In some situations, the new dataset can be created in another process through the GDAL API Proxy mechanism.
In GDAL 2, the arguments nXSize, nYSize and nBands can be passed to 0 when creating a vector-only dataset for a compatible driver.
source
ArchGDAL.unsafe_executesqlMethod
unsafe_executesql(dataset::AbstractDataset, query::AbstractString; dialect,
spatialfilter)
Execute an SQL statement against the data store.
The result of an SQL query is either NULL for statements that are in error, or that have no results set, or an OGRLayer pointer representing a results set from the query. Note that this OGRLayer is in addition to the layers in the data store and must be destroyed with ReleaseResultSet() before the dataset is closed (destroyed).
For more information on the SQL dialect supported internally by OGR review the OGR SQL document. Some drivers (i.e. Oracle and PostGIS) pass the SQL directly through to the underlying RDBMS.
Starting with OGR 1.10, the SQLITE dialect can also be used.
Parameters
• dataset: the dataset handle.
• query: the SQL statement to execute.
• spatialfilter: geometry which represents a spatial filter. Can be NULL.
• dialect: allows control of the statement dialect. If set to NULL, the OGR SQL engine will be used, except for RDBMS drivers that will use their dedicated SQL engine, unless OGRSQL is explicitly passed as the dialect. Starting with OGR 1.10, the SQLITE dialect can also be used.
Returns
an OGRLayer containing the results of the query. Deallocate with ReleaseResultSet().
source
ArchGDAL.unsafe_readMethod
unsafe_read(filename; flags=OF_READONLY, alloweddrivers, options,
siblingfiles)
Open a raster file as a Dataset.
This function will try to open the passed file, or virtual dataset name by invoking the Open method of each registered Driver in turn. The first successful open will result in a returned dataset. If all drivers fail then NULL is returned and an error is issued.
Parameters
• filename the name of the file to access. In the case of exotic drivers
this may not refer to a physical file, but instead contain information for the driver on how to access a dataset. It should be in UTF-8 encoding.
• flags a combination of GDAL_OF_* flags (listed below) that may be combined through the logical | operator.
• Driver kind: GDALOFRASTER for raster drivers, GDALOFVECTOR for vector drivers. If none of the value is specified, both are implied.
• Access mode: OF_READONLY (exclusive) or OF_UPDATE.
• Shared mode: GDAL_OF_SHARED. If set, it allows the sharing of Dataset handles for a dataset with other callers that have set GDALOFSHARED. In particular, GDALOpenEx() will consult its list of currently open and shared Dataset's, and if the GetDescription() name for one exactly matches the pszFilename passed to GDALOpenEx() it will be referenced and returned, if GDALOpenEx() is called from the same thread.
• Verbose error: GDALOFVERBOSE_ERROR. If set, a failed attempt to open the file will lead to an error message to be reported.
• options: additional format dependent options.
Several recommendations:
• If you open a dataset object with GA_Update access, it is not recommended
to open a new dataset on the same underlying file.
• The returned dataset should only be accessed by one thread at a time. To use
it from different threads, you must add all necessary code (mutexes, etc.) to avoid concurrent use of the object. (Some drivers, such as GeoTIFF, maintain internal state variables that are updated each time a new block is read, preventing concurrent use.)
• In order to reduce the need for searches through the operating system file
system machinery, it is possible to give an optional list of files with the papszSiblingFiles parameter. This is the list of all files at the same level in the file system as the target file, including the target file. The filenames must not include any path components, are essentially just the output of VSIReadDir() on the parent directory. If the target object does not have filesystem semantics then the file list should be NULL.
In some situations (dealing with unverified data), the datasets can be opened in another process through the GDAL API Proxy mechanism.
For drivers supporting the VSI virtual file API, it is possible to open a file in a .zip archive (see VSIInstallZipFileHandler()), a .tar/.tar.gz/.tgz archive (see VSIInstallTarFileHandler()), or a HTTP / FTP server (see VSIInstallCurlFileHandler())
source
ArchGDAL.writeMethod
write(dataset::AbstractDataset, filename::AbstractString; kwargs...)
Writes the dataset to the designated filename.
source
## Feature Data
ArchGDAL.asbinaryMethod
asbinary(feature::AbstractFeature, i::Integer)
Fetch field value as binary.
Parameters
• hFeat: handle to the feature that owned the field.
• iField: the field to fetch, from 0 to GetFieldCount()-1.
Returns
the field value. This list is internal, and should not be modified, or freed. Its lifetime may be very brief.
source
ArchGDAL.asboolMethod
asbool(feature::AbstractFeature, i::Integer)
Fetch field value as a boolean
Parameters
• feature: the feature that owned the field.
• i: the field to fetch, from 0 to GetFieldCount()-1.
source
ArchGDAL.asdatetimeMethod
asdatetime(feature::AbstractFeature, i::Integer)
Fetch field value as date and time. Currently this method only works for OFTDate, OFTTime and OFTDateTime fields.
Parameters
• hFeat: handle to the feature that owned the field.
• iField: the field to fetch, from 0 to GetFieldCount()-1.
Returns
true on success or false on failure.
source
ArchGDAL.asdoubleMethod
asdouble(feature::AbstractFeature, i::Integer)
Fetch field value as a double.
Parameters
• feature: the feature that owned the field.
• i: the field to fetch, from 0 to GetFieldCount()-1.
source
ArchGDAL.asdoublelistMethod
asdoublelist(feature::AbstractFeature, i::Integer)
Fetch field value as a list of doubles.
Parameters
• hFeat: handle to the feature that owned the field.
• iField: the field to fetch, from 0 to GetFieldCount()-1.
• pnCount: an integer to put the list count (number of doubles) into.
Returns
the field value. This list is internal, and should not be modified, or freed. Its lifetime may be very brief. If *pnCount is zero on return the returned pointer may be NULL or non-NULL.
source
ArchGDAL.asintMethod
asint(feature::AbstractFeature, i::Integer)
Fetch field value as integer.
Parameters
• feature: the feature that owned the field.
• i: the field to fetch, from 0 to GetFieldCount()-1.
source
ArchGDAL.asint16Method
asint16(feature::AbstractFeature, i::Integer)
Fetch field value as integer 16 bit.
Parameters
• feature: the feature that owned the field.
• i: the field to fetch, from 0 to GetFieldCount()-1.
source
ArchGDAL.asint64Method
asint64(feature::AbstractFeature, i::Integer)
Fetch field value as integer 64 bit.
Parameters
• feature: the feature that owned the field.
• i: the field to fetch, from 0 to GetFieldCount()-1.
source
ArchGDAL.asint64listMethod
asint64list(feature::AbstractFeature, i::Integer)
Fetch field value as a list of 64 bit integers.
Parameters
• hFeat: handle to the feature that owned the field.
• iField: the field to fetch, from 0 to GetFieldCount()-1.
• pnCount: an integer to put the list count (number of integers) into.
Returns
the field value. This list is internal, and should not be modified, or freed. Its lifetime may be very brief. If *pnCount is zero on return the returned pointer may be NULL or non-NULL.
source
ArchGDAL.asintlistMethod
asintlist(feature::AbstractFeature, i::Integer)
Fetch field value as a list of integers.
Parameters
• hFeat: handle to the feature that owned the field.
• iField: the field to fetch, from 0 to GetFieldCount()-1.
• pnCount: an integer to put the list count (number of integers) into.
Returns
the field value. This list is internal, and should not be modified, or freed. Its lifetime may be very brief. If *pnCount is zero on return the returned pointer may be NULL or non-NULL.
source
ArchGDAL.assingleMethod
assingle(feature::AbstractFeature, i::Integer)
Fetch field value as a single.
Parameters
• feature: the feature that owned the field.
• i: the field to fetch, from 0 to GetFieldCount()-1.
source
ArchGDAL.asstringMethod
asstring(feature::AbstractFeature, i::Integer)
Fetch field value as a string.
Parameters
• feature: the feature that owned the field.
• i: the field to fetch, from 0 to GetFieldCount()-1.
source
ArchGDAL.asstringlistMethod
asstringlist(feature::AbstractFeature, i::Integer)
Fetch field value as a list of strings.
Parameters
• hFeat: handle to the feature that owned the field.
• iField: the field to fetch, from 0 to GetFieldCount()-1.
Returns
the field value. This list is internal, and should not be modified, or freed. Its lifetime may be very brief.
source
ArchGDAL.destroyMethod
destroy(feature::AbstractFeature)
Destroy the feature passed in.
The feature is deleted, but within the context of the GDAL/OGR heap. This is necessary when higher level applications use GDAL/OGR from a DLL and they want to delete a feature created within the DLL. If the delete is done in the calling application the memory will be freed onto the application heap which is inappropriate.
source
ArchGDAL.fillunsetwithdefault!Method
fillunsetwithdefault!(feature::AbstractFeature; notnull = true,
options = StringList(C_NULL))
Fill unset fields with default values that might be defined.
Parameters
• feature: handle to the feature.
• notnull: if we should fill only unset fields with a not-null constraint.
• papszOptions: unused currently. Must be set to NULL.
References
• https://gdal.org/development/rfc/rfc53ogrnotnull_default.html
source
ArchGDAL.findfieldindexMethod
findfieldindex(feature::AbstractFeature, name::Union{AbstractString, Symbol})
Fetch the field index given field name.
Parameters
• feature: the feature on which the field is found.
• name: the name of the field to search for.
Returns
the field index, or nothing if no matching field is found.
Remarks
This is a cover for the OGRFeatureDefn::GetFieldIndex() method.
source
ArchGDAL.findgeomindexFunction
findgeomindex(feature::AbstractFeature, name::Union{AbstractString, Symbol} = "")
Fetch the geometry field index given geometry field name.
Parameters
• feature: the feature on which the geometry field is found.
• name: the name of the geometry field to search for. (defaults to "")
Returns
the geometry field index, or -1 if no matching geometry field is found.
Remarks
This is a cover for the OGRFeatureDefn::GetGeomFieldIndex() method.
source
ArchGDAL.getfidMethod
getfid(feature::AbstractFeature)
Get feature identifier.
Returns
feature id or OGRNullFID (-1) if none has been assigned.
source
ArchGDAL.getfieldMethod
getfield(feature, i)
When the field is unset, it will return nothing. When the field is set but null, it will return missing.
References
• https://gdal.org/development/rfc/rfc53ogrnotnull_default.html
• https://gdal.org/development/rfc/rfc67_nullfieldvalues.html
source
ArchGDAL.getfielddefnMethod
getfielddefn(feature::AbstractFeature, i::Integer)
Fetch definition for this field.
Parameters
• feature: the feature on which the field is found.
• i: the field to fetch, from 0 to GetFieldCount()-1.
Returns
an handle to the field definition (from the FeatureDefn). This is an internal reference, and should not be deleted or modified.
source
ArchGDAL.getgeomMethod
getgeom(feature::AbstractFeature, i::Integer)
Returns a clone of the feature geometry at index i.
Parameters
• feature: the feature to get geometry from.
• i: geometry field to get.
source
ArchGDAL.getgeomdefnMethod
getgeomdefn(feature::AbstractFeature, i::Integer)
Fetch definition for this geometry field.
Parameters
• feature: the feature on which the field is found.
• i: the field to fetch, from 0 to GetGeomFieldCount()-1.
Returns
The field definition (from the OGRFeatureDefn). This is an internal reference, and should not be deleted or modified.
source
ArchGDAL.getmediatypeMethod
getmediatype(feature::AbstractFeature)
Returns the native media type for the feature.
The native media type is the identifier for the format of the native data. It follows the IANA RFC 2045 (see https://en.wikipedia.org/wiki/Media_type), e.g. "application/vnd.geo+json" for JSON.
source
ArchGDAL.getnativedataMethod
getnativedata(feature::AbstractFeature)
Returns the native data for the feature.
The native data is the representation in a "natural" form that comes from the driver that created this feature, or that is aimed at an output driver. The native data may be in different format, which is indicated by GetNativeMediaType().
Note that most drivers do not support storing the native data in the feature object, and if they do, generally the NATIVE_DATA open option must be passed at dataset opening.
The "native data" does not imply it is something more performant or powerful than what can be obtained with the rest of the API, but it may be useful in round-tripping scenarios where some characteristics of the underlying format are not captured otherwise by the OGR abstraction.
source
ArchGDAL.isfieldnullMethod
isfieldnull(feature::AbstractFeature, i::Integer)
Test if a field is null.
Parameters
• feature: the feature that owned the field.
• i: the field to test, from 0 to GetFieldCount()-1.
Returns
true if the field is null, otherwise false.
References
• https://gdal.org/development/rfc/rfc67_nullfieldvalues.html
source
ArchGDAL.isfieldsetMethod
isfieldset(feature::AbstractFeature, i::Integer)
Test if a field has ever been assigned a value or not.
Parameters
• feature: the feature that owned the field.
• i: the field to fetch, from 0 to GetFieldCount()-1.
source
ArchGDAL.isfieldsetandnotnullMethod
isfieldsetandnotnull(feature::AbstractFeature, i::Integer)
Test if a field is set and not null.
Parameters
• feature: the feature that owned the field.
• i: the field to test, from 0 to GetFieldCount()-1.
Returns
true if the field is set and not null, otherwise false.
References
• https://gdal.org/development/rfc/rfc67_nullfieldvalues.html
source
ArchGDAL.nfieldMethod
nfield(feature::AbstractFeature)
Fetch number of fields on this feature.
This will always be the same as the field count for the OGRFeatureDefn.
source
ArchGDAL.ngeomMethod
ngeom(feature::AbstractFeature)
Fetch number of geometry fields on this feature.
This will always be the same as the geometry field count for OGRFeatureDefn.
source
ArchGDAL.setfid!Method
setfid!(feature::AbstractFeature, i::Integer)
Set the feature identifier.
Parameters
• feature: handle to the feature to set the feature id to.
• i: the new feature identifier value to assign.
Returns
On success OGRERR_NONE, or on failure some other value.
source
ArchGDAL.setfield!Function
setfield!(feature::AbstractFeature, i::Integer, value)
setfield!(feature::AbstractFeature, i::Integer, value::DateTime, tzflag::Int = 0)
Set a feature's i-th field to value.
The following types for value are accepted: Int32, Int64, Float64, AbstractString, or a Vector with those in it, as well as Vector{UInt8}. For DateTime values, an additional keyword argument tzflag is accepted (0=unknown, 1=localtime, 100=GMT, see data model for details).
OFTInteger, OFTInteger64 and OFTReal fields will be set directly. OFTString fields will be assigned a string representation of the value, but not necessarily taking into account formatting constraints on this field. Other field types may be unaffected.
Parameters
• feature: handle to the feature that owned the field.
• i: the field to fetch, from 0 to GetFieldCount()-1.
• value: the value to assign.
source
ArchGDAL.setfieldnull!Method
setfieldnull!(feature::AbstractFeature, i::Integer)
Clear a field, marking it as null.
Parameters
• feature: the feature that owned the field.
• i: the field to set to null, from 0 to GetFieldCount()-1.
References
• https://gdal.org/development/rfc/rfc67_nullfieldvalues.html
source
ArchGDAL.setfrom!Function
setfrom!(feature1::AbstractFeature, feature2::AbstractFeature, forgiving::Bool = false)
setfrom!(feature1::AbstractFeature, feature2::AbstractFeature, indices::Vector{Cint},
forgiving::Bool = false)
Set one feature from another.
Parameters
• feature1: handle to the feature to set to.
• feature2: handle to the feature from which geometry, and field values will be copied.
• indices: indices of the destination feature's fields stored at the corresponding index of the source feature's fields. A value of -1 should be used to ignore the source's field. The array should not be NULL and be as long as the number of fields in the source feature.
• forgiving: true if the operation should continue despite lacking output fields matching some of the source fields.
Returns
OGRERR_NONE if the operation succeeds, even if some values are not transferred, otherwise an error code.
source
ArchGDAL.setgeom!Method
setgeom!(feature::AbstractFeature, geom::AbstractGeometry)
Set feature geometry.
This method updates the features geometry, and operate exactly as SetGeometryDirectly(), except that this method does not assume ownership of the passed geometry, but instead makes a copy of it.
Parameters
• feature: the feature on which new geometry is applied to.
• geom: the new geometry to apply to feature.
Returns
OGRERR_NONE if successful, or OGR_UNSUPPORTED_GEOMETRY_TYPE if the geometry type is illegal for the OGRFeatureDefn (checking not yet implemented).
source
ArchGDAL.setgeom!Method
setgeom!(feature::AbstractFeature, i::Integer, geom::AbstractGeometry)
Set feature geometry of a specified geometry field.
This function updates the features geometry, and operate exactly as SetGeometryDirectly(), except that this function does not assume ownership of the passed geometry, but instead makes a copy of it.
Parameters
• feature: the feature on which to apply the geometry.
• i: geometry field to set.
• geom: the new geometry to apply to feature.
Returns
OGRERR_NONE if successful, or OGR_UNSUPPORTED_GEOMETRY_TYPE if the geometry type is illegal for the OGRFeatureDefn (checking not yet implemented).
source
ArchGDAL.setmediatype!Method
setmediatype!(feature::AbstractFeature, mediatype::AbstractString)
Sets the native media type for the feature.
The native media type is the identifier for the format of the native data. It follows the IANA RFC 2045 (see https://en.wikipedia.org/wiki/Media_type), e.g. "application/vnd.geo+json" for JSON.
source
ArchGDAL.setnativedata!Method
setnativedata!(feature::AbstractFeature, data::AbstractString)
Sets the native data for the feature.
The native data is the representation in a "natural" form that comes from the driver that created this feature, or that is aimed at an output driver. The native data may be in different format, which is indicated by GetNativeMediaType().
source
ArchGDAL.setstylestring!Method
setstylestring!(feature::AbstractFeature, style::AbstractString)
Set feature style string.
This method operate exactly as setstylestringdirectly!() except that it doesn't assume ownership of the passed string, but makes a copy of it.
source
ArchGDAL.unsafe_cloneMethod
unsafe_clone(feature::AbstractFeature)
Duplicate feature.
The newly created feature is owned by the caller, and will have its own reference to the OGRFeatureDefn.
source
ArchGDAL.unsetfield!Method
unsetfield!(feature::AbstractFeature, i::Integer)
Clear a field, marking it as unset.
Parameters
• feature: the feature that owned the field.
• i: the field to fetch, from 0 to GetFieldCount()-1.
source
ArchGDAL.validateMethod
validate(feature::AbstractFeature, flags::Integer, emiterror::Bool)
Validate that a feature meets constraints of its schema.
The scope of test is specified with the nValidateFlags parameter.
Regarding OGR_F_VAL_WIDTH, the test is done assuming the string width must be interpreted as the number of UTF-8 characters. Some drivers might interpret the width as the number of bytes instead. So this test is rather conservative (if it fails, then it will fail for all interpretations).
Parameters
• feature: handle to the feature to validate.
• flags: OGR_F_VAL_ALL or combination of OGR_F_VAL_NULL, OGR_F_VAL_GEOM_TYPE, OGR_F_VAL_WIDTH and OGR_F_VAL_ALLOW_NULL_WHEN_DEFAULT with | operator
• emiterror: true if a CPLError() must be emitted when a check fails
Returns
true if all enabled validation tests pass.
References
• https://gdal.org/development/rfc/rfc53ogrnotnull_default.html
source
ArchGDAL.addfielddefn!Method
addfielddefn!(featuredefn::FeatureDefn, fielddefn::FieldDefn)
Add a new field definition to the passed feature definition.
To add a new field definition to a layer definition, do not use this function directly, but use OGRLCreateField() instead.
This function should only be called while there are no OGRFeature objects in existence based on this OGRFeatureDefn. The OGRFieldDefn passed in is copied, and remains the responsibility of the caller.
source
ArchGDAL.addgeomdefn!Method
addgeomdefn!(featuredefn::FeatureDefn, geomfielddefn::AbstractGeomFieldDefn)
Add a new field definition to the passed feature definition.
To add a new geometry field definition to a layer definition, do not use this function directly, but use OGRLayer::CreateGeomField() instead.
This method does an internal copy of the passed geometry field definition, unless bCopy is set to false (in which case it takes ownership of the field definition.
This method should only be called while there are no OGRFeature objects in existence based on this OGRFeatureDefn.
source
ArchGDAL.deletefielddefn!Method
deletefielddefn!(featuredefn::FeatureDefn, i::Integer)
Delete an existing field definition.
To delete an existing field definition from a layer definition, do not use this function directly, but use OGR_L_DeleteField() instead.
This method should only be called while there are no OGRFeature objects in existence based on this OGRFeatureDefn.
source
ArchGDAL.deletegeomdefn!Method
deletegeomdefn!(featuredefn::FeatureDefn, i::Integer)
Delete an existing geometry field definition.
To delete an existing field definition from a layer definition, do not use this function directly, but use OGRLayer::DeleteGeomField() instead.
This method should only be called while there are no OGRFeature objects in existence based on this OGRFeatureDefn.
source
ArchGDAL.findfieldindexMethod
findfieldindex(featuredefn::AbstractFeatureDefn,
name::Union{AbstractString, Symbol})
Find field by name.
Returns
the field index, or -1 if no match found.
Remarks
This uses the OGRFeatureDefn::GetFieldIndex() method.
source
ArchGDAL.findgeomindexFunction
findgeomindex(featuredefn::AbstractFeatureDefn, name::AbstractString = "")
Find geometry field by name.
The geometry field index of the first geometry field matching the passed field name (case insensitively) is returned.
Returns
the geometry field index, or -1 if no match found.
source
ArchGDAL.getfielddefnMethod
getfielddefn(featuredefn::FeatureDefn, i::Integer)
Fetch field definition of the passed feature definition.
Parameters
• featuredefn: the feature definition to get the field definition from.
• i: index of the field to fetch, between 0 and nfield(featuredefn)-1.
Returns
an handle to an internal field definition object or NULL if invalid index. This object should not be modified or freed by the application.
source
ArchGDAL.getgeomdefnFunction
getgeomdefn(featuredefn::FeatureDefn, i::Integer = 0)
Fetch geometry field definition of the passed feature definition.
Parameters
• i geometry field to fetch, between 0 (default) and ngeomfield(fd)-1.
Returns
an internal field definition object or NULL if invalid index. This object should not be modified or freed by the application.
source
ArchGDAL.getgeomtypeMethod
getgeomtype(featuredefn::AbstractFeatureDefn)
Fetch the geometry base type of the passed feature definition.
For layers without any geometry field, this method returns wkbNone.
This returns the same result as OGR_FD_GetGeomType(OGR_L_GetLayerDefn(hLayer)) but for a few drivers, calling OGR_L_GetGeomType() directly can avoid lengthy layer definition initialization.
For layers with multiple geometry fields, this method only returns the geometry type of the first geometry column. For other columns, use OGR_GFld_GetType(OGR_FD_GetGeomFieldDefn(OGR_L_GetLayerDefn(hLayer), i)).
source
ArchGDAL.issameMethod
issame(featuredefn1::AbstractFeatureDefn, featuredefn2::AbstractFeatureDefn)
Test if the feature definition is identical to the other one.
source
ArchGDAL.ngeomMethod
ngeom(featuredefn::AbstractFeatureDefn)
Fetch number of geometry fields on the passed feature definition.
source
ArchGDAL.referenceMethod
reference(featuredefn::FeatureDefn)
Increments the reference count in the FeatureDefn by one.
The count is used to track the number of Features referencing this definition.
Returns
The updated reference count.
source
ArchGDAL.reorderfielddefns!Method
reorderfielddefns!(featuredefn::FeatureDefn, indices::Vector{Cint})
Reorder the field definitions in the array of the feature definition.
To reorder the field definitions in a layer definition, do not use this function directly, but use OGR_L_ReorderFields() instead.
This method should only be called while there are no OGRFeature objects in existence based on this OGRFeatureDefn.
Parameters
• fd: handle to the feature definition.
• indices: an array of GetFieldCount() elements which is a permutation of [0, GetFieldCount()-1]. indices is such that, for each field definition at position i after reordering, its position before reordering was indices[i].
source
ArchGDAL.setgeomtype!Method
setgeomtype!(featuredefn::FeatureDefn, etype::OGRwkbGeometryType)
Assign the base geometry type for the passed layer (same as the fd).
All geometry objects using this type must be of the defined type or a derived type. The default upon creation is wkbUnknown which allows for any geometry type. The geometry type should generally not be changed after any OGRFeatures have been created against this definition.
source
ArchGDAL.unsafe_createfeatureMethod
unsafe_createfeature(featuredefn::AbstractFeatureDefn)
Returns the new feature object with null fields and no geometry
Note that the OGRFeature will increment the reference count of it's defining OGRFeatureDefn. Destruction of the OGRFeatureDefn before destruction of all OGRFeatures that depend on it is likely to result in a crash.
Starting with GDAL 2.1, returns NULL in case out of memory situation.
source
ArchGDAL.unsafe_createfeaturedefnMethod
unsafe_createfeaturedefn(name::AbstractString)
Create a new feature definition object to hold field definitions.
The FeatureDefn maintains a reference count, but this starts at zero, and should normally be incremented by the owner.
source
ArchGDAL.addfeature!Method
addfeature!(layer::AbstractFeatureLayer, feature::AbstractFeature)
Write a new feature within a layer.
Remarks
The passed feature is written to the layer as a new feature, rather than overwriting an existing one. If the feature has a feature id other than OGRNullFID, then the native implementation may use that as the feature id of the new feature, but not necessarily. Upon successful return the passed feature will have been updated with the new feature id.
source
ArchGDAL.addfielddefn!Method
addfielddefn!(layer::AbstractFeatureLayer, field::AbstractFieldDefn,
approx = false)
Create a new field on a layer.
Parameters
• layer: the layer to write the field definition.
• field: the field definition to write to disk.
• approx: If true, the field may be created in a slightly different form depending on the limitations of the format driver.
Remarks
You must use this to create new fields on a real layer. Internally the OGRFeatureDefn for the layer will be updated to reflect the new field. Applications should never modify the OGRFeatureDefn used by a layer directly.
This function should not be called while there are feature objects in existence that were obtained or created with the previous layer definition.
Not all drivers support this function. You can query a layer to check if it supports it with the GDAL.OLCCreateField capability. Some drivers may only support this method while there are still no features in the layer. When it is supported, the existing features of the backing file/database should be updated accordingly.
Drivers may or may not support not-null constraints. If they support creating fields with not-null constraints, this is generally before creating any feature to the layer.
source
ArchGDAL.addgeomdefn!Method
addgeomdefn!(layer::AbstractFeatureLayer, field::AbstractGeomFieldDefn,
approx = false)
Create a new geometry field on a layer.
Parameters
• layer: the layer to write the field definition.
• field: the geometry field definition to write to disk.
• approx: If true, the field may be created in a slightly different form depending on the limitations of the format driver.
Remarks
You must use this to create new geometry fields on a real layer. Internally the OGRFeatureDefn for the layer will be updated to reflect the new field. Applications should never modify the OGRFeatureDefn used by a layer directly.
This function should not be called while there are feature objects in existence that were obtained or created with the previous layer definition.
Not all drivers support this function. You can query a layer to check if it supports it with the GDAL.OLCCreateGeomField capability. Some drivers may only support this method while there are still no features in the layer. When it is supported, the existing features of the backing file/database should be updated accordingly.
Drivers may or may not support not-null constraints. If they support creating fields with not-null constraints, this is generally before creating any feature to the layer.
source
ArchGDAL.copyMethod
copy(layer, dataset, name, options)
Copy an existing layer.
This method creates a new layer, duplicate the field definitions of the source layer, and then duplicates each feature of the source layer.
Parameters
• layer: source layer to be copied.
Keyword Arguments
• dataset: the dataset handle. (Creates a new dataset in memory by default.)
• name: the name of the layer to create on the dataset.
• options: a StringList of name=value (driver-specific) options.
source
ArchGDAL.createlayerMethod
createlayer(name, dataset, geom, spatialref, options)
This function attempts to create a new layer on the dataset with the indicated name, spatialref, and geometry type.
Keyword Arguments
• name: the name for the new layer. This should ideally not match any existing layer on the datasource. Defaults to an empty string.
• dataset: the dataset. Defaults to creating a new in memory dataset.
• geom: the geometry type for the layer. Use wkbUnknown (default) if there are no constraints on the types geometry to be written.
• spatialref: the coordinate system to use for the new layer.
• options: a StringList of name=value (driver-specific) options.
source
ArchGDAL.deletefeature!Method
deletefeature!(layer::AbstractFeatureLayer, i::Integer)
Delete feature with fid i from layer.
Remarks
The feature with the indicated feature id is deleted from the layer if supported by the driver. Most drivers do not support feature deletion, and will return OGRERRUNSUPPORTEDOPERATION. The OGRLTestCapability() function may be called with OLCDeleteFeature to check if the driver supports feature deletion.
source
ArchGDAL.envelopeFunction
envelope(layer::AbstractFeatureLayer, force::Bool = false)
envelope(layer::AbstractFeatureLayer, i::Integer, force::Bool = false)
Fetch the extent of this layer.
Returns the extent (MBR) of the data in the layer. If force is false, and it would be expensive to establish the extent then OGRERR_FAILURE will be returned indicating that the extent isn't know. If force is true then some implementations will actually scan the entire layer once to compute the MBR of all the features in the layer.
Parameters
• layer: handle to the layer from which to get extent.
• i: (optional) the index of the geometry field to compute the extent.
• force: Flag indicating whether the extent should be computed even if it is expensive.
Depending on the drivers, the returned extent may or may not take the spatial filter into account. So it is safer to call GetExtent() without setting a spatial filter.
Layers without any geometry may return OGRERR_FAILURE just indicating that no meaningful extents could be collected.
Note that some implementations of this method may alter the read cursor of the layer.
source
ArchGDAL.findfieldindexMethod
findfieldindex(layer::AbstractFeatureLayer,
field::Union{AbstractString, Symbol}, exactmatch::Bool)
Find the index of the field in a layer, or -1 if the field doesn't exist.
If exactmatch is set to false and the field doesn't exists in the given form the driver might apply some changes to make it match, like those it might do if the layer was created (eg. like LAUNDER in the OCI driver).
source
ArchGDAL.layerdefnMethod
layerdefn(layer::AbstractFeatureLayer)
Returns a view of the schema information for this layer.
Remarks
The featuredefn is owned by the layer and should not be modified.
source
ArchGDAL.nfeatureFunction
nfeature(layer::AbstractFeatureLayer, force::Bool = false)
Fetch the feature count in this layer, or -1 if the count is not known.
Parameters
• layer: handle to the layer that owned the features.
• force: flag indicating whether the count should be computed even if it is expensive. (false by default.)
source
ArchGDAL.referenceMethod
reference(layer::AbstractFeatureLayer)
Increment layer reference count.
Returns
The reference count after incrementing.
source
ArchGDAL.resetreading!Method
resetreading!(layer::AbstractFeatureLayer)
Reset feature reading to start on the first feature.
This affects nextfeature().
source
ArchGDAL.setattributefilter!Method
setattributefilter!(layer::AbstractFeatureLayer, query::AbstractString)
Set a new attribute query.
This method sets the attribute query string to be used when fetching features via the nextfeature() method. Only features for which the query evaluates as true will be returned.
Parameters
• layer: handle to the layer on which attribute query will be executed.
• query: query in restricted SQL WHERE format.
Remarks
The query string should be in the format of an SQL WHERE clause. For instance "population > 1000000 and population < 5000000" where population is an attribute in the layer. The query format is normally a restricted form of SQL WHERE clause as described in the "WHERE" section of the OGR SQL tutorial. In some cases (RDBMS backed drivers) the native capabilities of the database may be used to interpret the WHERE clause in which case the capabilities will be broader than those of OGR SQL.
Note that installing a query string will generally result in resetting the current reading position (ala resetreading!()).
source
ArchGDAL.setfeature!Method
setfeature!(layer::AbstractFeatureLayer, feature::AbstractFeature)
Rewrite an existing feature.
This function will write a feature to the layer, based on the feature id within the OGRFeature.
Remarks
Use OGRLTestCapability(OLCRandomWrite) to establish if this layer supports random access writing via OGRLSetFeature().
source
ArchGDAL.setignoredfields!Method
setignoredfields!(layer::AbstractFeatureLayer, fieldnames)
Set which fields can be omitted when retrieving features from the layer.
Parameters
• fieldnames: an array of field names terminated by NULL item. If NULL is
passed, the ignored list is cleared.
Remarks
If the driver supports this functionality (testable using OLCIgnoreFields capability), it will not fetch the specified fields in subsequent calls to GetFeature()/nextfeature() and thus save some processing time and/or bandwidth.
Besides field names of the layers, the following special fields can be passed: "OGR_GEOMETRY" to ignore geometry and "OGR_STYLE" to ignore layer style.
By default, no fields are ignored.
source
ArchGDAL.setnextbyindex!Method
setnextbyindex!(layer::AbstractFeatureLayer, i::Integer)
Move read cursor to the i-th feature in the current resultset.
This method allows positioning of a layer such that the nextfeature() call will read the requested feature, where i is an absolute index into the current result set. So, setting it to 3 would mean the next feature read with nextfeature() would have been the fourth feature to have been read if sequential reading took place from the beginning of the layer, including accounting for spatial and attribute filters.
Parameters
• layer: handle to the layer
• i: the index indicating how many steps into the result set to seek.
Remarks
Only in rare circumstances is setnextbyindex!() efficiently implemented. In all other cases the default implementation which calls resetreading!() and then calls nextfeature() i times is used. To determine if fast seeking is available on the layer, use the testcapability() method with a value of OLCFastSetNextByIndex.
source
ArchGDAL.setspatialfilter!Method
setspatialfilter!(layer::AbstractFeatureLayer, geom::AbstractGeometry)
Set a new spatial filter for the layer, using the geom.
This method set the geometry to be used as a spatial filter when fetching features via the nextfeature() method. Only features that geometrically intersect the filter geometry will be returned.
Parameters
• layer handle to the layer on which to set the spatial filter.
• geom handle to the geometry to use as a filtering region. NULL may be passed indicating that the current spatial filter should be cleared, but no new one instituted.
Remarks
Currently this test may be inaccurately implemented, but it is guaranteed that all features whose envelope (as returned by OGRGeometry::getEnvelope()) overlaps the envelope of the spatial filter will be returned. This can result in more shapes being returned that should strictly be the case.
For the time being the passed filter geometry should be in the same SRS as the geometry field definition it corresponds to (as returned by GetLayerDefn()->OGRFeatureDefn::GetGeomFieldDefn(i)->GetSpatialRef()). In the future this may be generalized.
Note that only the last spatial filter set is applied, even if several successive calls are done with different iGeomField values.
source
ArchGDAL.setspatialfilter!Method
setspatialfilter!(layer::AbstractFeatureLayer, i::Integer,
geom::AbstractGeometry)
Set a new spatial filter.
This method set the geometry to be used as a spatial filter when fetching features via the nextfeature() method. Only features that geometrically intersect the filter geometry will be returned.
Parameters
• layer: the layer on which to set the spatial filter.
• i: index of the geometry field on which the spatial filter operates.
• geom: the geometry to use as a filtering region. NULL may be passed indicating that the current spatial filter should be cleared, but no new one instituted.
Remarks
Currently this test is may be inaccurately implemented, but it is guaranteed that all features who's envelope (as returned by OGRGeometry::getEnvelope()) overlaps the envelope of the spatial filter will be returned. This can result in more shapes being returned that should strictly be the case.
For the time being the passed filter geometry should be in the same SRS as the layer (as returned by OGRLayer::GetSpatialRef()). In the future this may be generalized.
source
ArchGDAL.setspatialfilter!Method
setspatialfilter!(layer::AbstractFeatureLayer, i::Integer, xmin, ymin, xmax,
ymax)
Set a new rectangular spatial filter.
Parameters
• layer: the feature layer on which to set the spatial filter.
• i: index of the geometry field on which the spatial filter operates.
• xmin: the minimum X coordinate for the rectangular region.
• ymin: the minimum Y coordinate for the rectangular region.
• xmax: the maximum X coordinate for the rectangular region.
• ymax: the maximum Y coordinate for the rectangular region.
source
ArchGDAL.setspatialfilter!Method
setspatialfilter!(layer::AbstractFeatureLayer, xmin, ymin, xmax, ymax)
Set a new rectangular spatial filter for the layer.
This method set rectangle to be used as a spatial filter when fetching features via the nextfeature() method. Only features that geometrically intersect the given rectangle will be returned.
The x/y values should be in the same coordinate system as the layer as a whole (as returned by OGRLayer::GetSpatialRef()). Internally this method is normally implemented as creating a 5 vertex closed rectangular polygon and passing it to OGRLayer::SetSpatialFilter(). It exists as a convenience.
The only way to clear a spatial filter set with this method is to call OGRLayer::SetSpatialFilter(NULL).
source
ArchGDAL.testcapabilityMethod
testcapability(layer::AbstractFeatureLayer, capability::AbstractString)
Test if this layer supported the named capability.
Parameters
• capability the name of the capability to test.
Returns
true if the layer has the requested capability, false otherwise. It will return false for any unrecognized capabilities.
The capability codes that can be tested are represented as strings, but #defined constants exists to ensure correct spelling. Specific layer types may implement class specific capabilities, but this can't generally be discovered by the caller.
• OLCRandomRead / "RandomRead": true if the GetFeature() method is implemented in an optimized way for this layer, as opposed to the default implementation using resetreading!() and nextfeature() to find the requested feature id.
• OLCSequentialWrite / "SequentialWrite": true if the CreateFeature() method works for this layer. Note this means that this particular layer is writable. The same OGRLayer class may returned false for other layer instances that are effectively read-only.
• OLCRandomWrite / "RandomWrite": true if the SetFeature() method is operational on this layer. Note this means that this particular layer is writable. The same OGRLayer class may returned false for other layer instances that are effectively read-only.
• OLCFastSpatialFilter / "FastSpatialFilter": true if this layer implements spatial filtering efficiently. Layers that effectively read all features, and test them with the OGRFeature intersection methods should return false. This can be used as a clue by the application whether it should build and maintain its own spatial index for features in this layer.
• OLCFastFeatureCount / "FastFeatureCount": true if this layer can return a feature count (via GetFeatureCount()) efficiently. i.e. without counting the features. In some cases this will return true until a spatial filter is installed after which it will return false.
• OLCFastGetExtent / "FastGetExtent": true if this layer can return its data extent (via GetExtent()) efficiently, i.e. without scanning all the features. In some cases this will return true until a spatial filter is installed after which it will return false.
• OLCFastSetNextByIndex / "FastSetNextByIndex": true if this layer can perform the SetNextByIndex() call efficiently, otherwise false.
• OLCCreateField / "CreateField": true if this layer can create new fields on the current layer using CreateField(), otherwise false.
• OLCCreateGeomField / "CreateGeomField": (GDAL >= 1.11) true if this layer can create new geometry fields on the current layer using CreateGeomField(), otherwise false.
• OLCDeleteField / "DeleteField": true if this layer can delete existing fields on the current layer using DeleteField(), otherwise false.
• OLCReorderFields / "ReorderFields": true if this layer can reorder existing fields on the current layer using ReorderField() or ReorderFields(), otherwise false.
• OLCAlterFieldDefn / "AlterFieldDefn": true if this layer can alter the definition of an existing field on the current layer using AlterFieldDefn(), otherwise false.
• OLCDeleteFeature / "DeleteFeature": true if the DeleteFeature() method is supported on this layer, otherwise false.
• OLCStringsAsUTF8 / "StringsAsUTF8": true if values of OFTString fields are assured to be in UTF-8 format. If false the encoding of fields is uncertain, though it might still be UTF-8.
• OLCTransactions / "Transactions": true if the StartTransaction(), CommitTransaction() and RollbackTransaction() methods work in a meaningful way, otherwise false.
• OLCIgnoreFields / "IgnoreFields": true if fields, geometry and style will be omitted when fetching features as set by SetIgnoredFields() method.
• OLCCurveGeometries / "CurveGeometries": true if this layer supports writing curve geometries or may return such geometries. (GDAL 2.0).
source
ArchGDAL.unsafe_createfeatureMethod
unsafe_createfeature(layer::AbstractFeatureLayer)
Create and returns a new feature based on the layer definition.
The newly feature is owned by the layer (it will increase the number of features the layer by one), but the feature has not been written to the layer yet.
source
ArchGDAL.unsafe_getfeatureMethod
unsafe_getfeature(layer::AbstractFeatureLayer, i::Integer)
Return a feature (now owned by the caller) by its identifier or NULL on failure.
Parameters
• layer: the feature layer to be read from.
• i: the index of the feature to be returned.
Remarks
This function will attempt to read the identified feature. The nFID value cannot be OGRNullFID. Success or failure of this operation is unaffected by the spatial or attribute filters (and specialized implementations in drivers should make sure that they do not take into account spatial or attribute filters).
If this function returns a non-NULL feature, it is guaranteed that its feature id (OGRFGetFID()) will be the same as nFID.
Use OGRLTestCapability(OLCRandomRead) to establish if this layer supports efficient random access reading via OGRLGetFeature(); however, the call should always work if the feature exists as a fallback implementation just scans all the features in the layer looking for the desired feature.
Sequential reads (with OGRLGetNextFeature()) are generally considered interrupted by a OGRLGetFeature() call.
The returned feature is now owned by the caller, and should be freed with destroy().
source
ArchGDAL.unsafe_nextfeatureMethod
unsafe_nextfeature(layer::AbstractFeatureLayer)
Fetch the next available feature from this layer.
Parameters
• layer: the feature layer to be read from.
Remarks
This method implements sequential access to the features of a layer. The resetreading!() method can be used to start at the beginning again. Only features matching the current spatial filter (set with setspatialfilter!()) will be returned.
The returned feature becomes the responsibility of the caller to delete with destroy(). It is critical that all features associated with a FeatureLayer (more specifically a FeatureDefn) be destroyed before that layer is destroyed.
Features returned by nextfeature() may or may not be affected by concurrent modifications depending on drivers. A guaranteed way of seeing modifications in effect is to call resetreading!() on layers where nextfeature() has been called, before reading again. Structural changes in layers (field addition, deletion, ...) when a read is in progress may or may not be possible depending on drivers. If a transaction is committed/aborted, the current sequential reading may or may not be valid after that operation and a call to resetreading!() might be needed.
source
ArchGDAL.getdefaultMethod
getdefault(fielddefn::AbstractFieldDefn)
Get default field value
References
• https://gdal.org/development/rfc/rfc53ogrnotnull_default.html
source
ArchGDAL.getfieldtypeMethod
getfieldtype(fielddefn::AbstractFieldDefn)
Returns the type or subtype (if any) of this field.
Parameters
• fielddefn: handle to the field definition.
Returns
The field type or subtype.
References
• https://gdal.org/development/rfc/rfc50ogrfield_subtype.html
source
ArchGDAL.getjustifyMethod
getjustify(fielddefn::AbstractFieldDefn)
Get the justification for this field.
Note: no driver is know to use the concept of field justification.
source
ArchGDAL.getprecisionMethod
getprecision(fielddefn::AbstractFieldDefn)
Get the formatting precision for this field.
This should normally be zero for fields of types other than OFTReal.
source
ArchGDAL.getsubtypeMethod
getsubtype(fielddefn::AbstractFieldDefn)
Fetch subtype of this field.
Parameters
• fielddefn: handle to the field definition to get subtype from.
Returns
field subtype.
source
ArchGDAL.getwidthMethod
getwidth(fielddefn::AbstractFieldDefn)
Get the formatting width for this field.
Returns
the width, zero means no specified width.
source
ArchGDAL.isdefaultdriverspecificMethod
isdefaultdriverspecific(fielddefn::AbstractFieldDefn)
Returns whether the default value is driver specific.
Driver specific default values are those that are not NULL, a numeric value, a literal value enclosed between single quote characters, CURRENTTIMESTAMP, CURRENTTIME, CURRENT_DATE or datetime literal value.
References
• https://gdal.org/development/rfc/rfc53ogrnotnull_default.html
source
ArchGDAL.isnullableMethod
isnullable(fielddefn::AbstractFieldDefn)
Return whether this field can receive null values.
By default, fields are nullable.
Even if this method returns false (i.e not-nullable field), it doesn't mean that OGRFeature::IsFieldSet() will necessarily return true, as fields can be temporarily unset and null/not-null validation is usually done when OGRLayer::CreateFeature()/SetFeature() is called.
References
• https://gdal.org/development/rfc/rfc53ogrnotnull_default.html
source
ArchGDAL.isnullableMethod
isnullable(geomdefn::AbstractGeomFieldDefn)
Return whether this geometry field can receive null values.
By default, fields are nullable.
Even if this method returns false (i.e not-nullable field), it doesn't mean that OGRFeature::IsFieldSet() will necessary return true, as fields can be temporarily unset and null/not-null validation is usually done when OGRLayer::CreateFeature()/SetFeature() is called.
Note that not-nullable geometry fields might also contain 'empty' geometries.
source
ArchGDAL.setdefault!Method
setdefault!(fielddefn::AbstractFieldDefn, default)
Set default field value.
The default field value is taken into account by drivers (generally those with a SQL interface) that support it at field creation time. OGR will generally not automatically set the default field value to null fields by itself when calling OGRFeature::CreateFeature() / OGRFeature::SetFeature(), but will let the low-level layers to do the job. So retrieving the feature from the layer is recommended.
The accepted values are NULL, a numeric value, a literal value enclosed between single quote characters (and inner single quote characters escaped by repetition of the single quote character), CURRENTTIMESTAMP, CURRENTTIME, CURRENT_DATE or a driver specific expression (that might be ignored by other drivers). For a datetime literal value, format should be 'YYYY/MM/DD HH:MM:SS[.sss]' (considered as UTC time).
Drivers that support writing DEFAULT clauses will advertize the GDALDCAPDEFAULT_FIELDS driver metadata item.
References
• https://gdal.org/development/rfc/rfc53ogrnotnull_default.html
source
ArchGDAL.setjustify!Method
setjustify!(fielddefn::FieldDefn, ejustify::OGRJustification)
Set the justification for this field.
Note: no driver is know to use the concept of field justification.
source
ArchGDAL.setnullable!Method
setnullable!(geomdefn::GeomFieldDefn, nullable::Bool)
Set whether this geometry field can receive null values.
By default, fields are nullable, so this method is generally called with false to set a not-null constraint.
Drivers that support writing not-null constraint will advertize the GDALDCAPNOTNULL_GEOMFIELDS driver metadata item.
source
ArchGDAL.setnullable!Method
setnullable!(fielddefn::FieldDefn, nullable::Bool)
Set whether this field can receive null values.
By default, fields are nullable, so this method is generally called with false to set a not-null constraint.
Drivers that support writing not-null constraint will advertize the GDALDCAPNOTNULL_FIELDS driver metadata item.
References
• https://gdal.org/development/rfc/rfc53ogrnotnull_default.html
source
ArchGDAL.setparams!Method
setparams!(fielddefn, name, etype, [nwidth, [nprecision, [justify]]])
Set defining parameters for a field in one call.
Parameters
• fielddefn: the field definition to set to.
• name: the new name to assign.
• etype: the new type (one of the OFT values like OFTInteger).
• nwidth: the preferred formatting width. 0 (default) indicates undefined.
• nprecision: number of decimals for formatting. 0 (default) for undefined.
• justify: the formatting justification ([OJUndefined], OJLeft or OJRight)
source
ArchGDAL.setprecision!Method
setprecision!(fielddefn::FieldDefn, precision::Integer)
Set the formatting precision for this field in characters.
This should normally be zero for fields of types other than OFTReal.
source
ArchGDAL.setspatialref!Method
setspatialref!(geomdefn::GeomFieldDefn, spatialref::AbstractSpatialRef)
Set the spatial reference of this field.
This function drops the reference of the previously set SRS object and acquires a new reference on the passed object (if non-NULL).
source
ArchGDAL.setsubtype!Method
setsubtype!(fielddefn::FieldDefn, subtype::OGRFieldSubType)
Set the subtype of this field.
This should never be done to an OGRFieldDefn that is already part of an OGRFeatureDefn.
Parameters
• fielddefn: handle to the field definition to set type to.
• subtype: the new field subtype.
References
• https://gdal.org/development/rfc/rfc50ogrfield_subtype.html
source
ArchGDAL.setwidth!Method
setwidth!(fielddefn::FieldDefn, width::Integer)
Set the formatting width for this field in characters.
This should never be done to an OGRFieldDefn that is already part of an OGRFeatureDefn.
source
ArchGDAL.unsafe_createfielddefnMethod
unsafe_createfielddefn(name::AbstractString, etype::OGRFieldType)
Create a new field definition.
By default, fields have no width, precision, are nullable and not ignored.
source
ArchGDAL.addgeom!Method
addgeom!(geomcontainer::AbstractGeometry, subgeom::AbstractGeometry)
Add a geometry to a geometry container.
Some subclasses of OGRGeometryCollection restrict the types of geometry that can be added, and may return an error. The passed geometry is cloned to make an internal copy.
For a polygon, subgeom must be a linearring. If the polygon is empty, the first added subgeometry will be the exterior ring. The next ones will be the interior rings.
Parameters
• geomcontainer: existing geometry.
• subgeom: geometry to add to the existing geometry.
source
ArchGDAL.addpoint!Function
addpoint!(geom::AbstractGeometry, x, y)
addpoint!(geom::AbstractGeometry, x, y, z)
Add a point to a geometry (line string or point).
Parameters
• geom: the geometry to add a point to.
• x: x coordinate of point to add.
• y: y coordinate of point to add.
• z: z coordinate of point to add.
source
ArchGDAL.boundaryMethod
boundary(geom::AbstractGeometry)
Returns the boundary of the geometry.
A new geometry object is created and returned containing the boundary of the geometry on which the method is invoked.
source
ArchGDAL.bufferFunction
buffer(geom::AbstractGeometry, dist::Real, quadsegs::Integer = 30)
Compute buffer of geometry.
Builds a new geometry containing the buffer region around the geometry on which it is invoked. The buffer is a polygon containing the region within the buffer distance of the original geometry.
Some buffer sections are properly described as curves, but are converted to approximate polygons. The nQuadSegs parameter can be used to control how many segments should be used to define a 90 degree curve - a quadrant of a circle. A value of 30 is a reasonable default. Large values result in large numbers of vertices in the resulting buffer geometry while small numbers reduce the accuracy of the result.
Parameters
• geom: the geometry.
• dist: the buffer distance to be applied. Should be expressed into the same unit as the coordinates of the geometry.
• quadsegs: the number of segments used to approximate a 90 degree (quadrant) of curvature.
source
ArchGDAL.centroid!Method
centroid!(geom::AbstractGeometry, centroid::AbstractGeometry)
Compute the geometry centroid.
The centroid location is applied to the passed in OGRPoint object. The centroid is not necessarily within the geometry.
This method relates to the SFCOM ISurface::get_Centroid() method however the current implementation based on GEOS can operate on other geometry types such as multipoint, linestring, geometrycollection such as multipolygons. OGC SF SQL 1.1 defines the operation for surfaces (polygons). SQL/MM-Part 3 defines the operation for surfaces and multisurfaces (multipolygons).
source
ArchGDAL.centroidMethod
centroid(geom::AbstractGeometry)
Compute the geometry centroid.
The centroid is not necessarily within the geometry.
(This method relates to the SFCOM ISurface::get_Centroid() method however the current implementation based on GEOS can operate on other geometry types such as multipoint, linestring, geometrycollection such as multipolygons. OGC SF SQL 1.1 defines the operation for surfaces (polygons). SQL/MM-Part 3 defines the operation for surfaces and multisurfaces (multipolygons).)
source
ArchGDAL.cloneMethod
clone(geom::AbstractGeometry)
Returns a copy of the geometry with the original spatial reference system.
source
ArchGDAL.closerings!Method
closerings!(geom::AbstractGeometry)
Force rings to be closed.
If this geometry, or any contained geometries has polygon rings that are not closed, they will be closed by adding the starting point at the end.
source
ArchGDAL.convexhullMethod
convexhull(geom::AbstractGeometry)
Returns the convex hull of the geometry.
A new geometry object is created and returned containing the convex hull of the geometry on which the method is invoked.
source
ArchGDAL.creategeomMethod
creategeom(geomtype::OGRwkbGeometryType)
Create an empty geometry of desired type.
This is equivalent to allocating the desired geometry with new, but the allocation is guaranteed to take place in the context of the GDAL/OGR heap.
source
ArchGDAL.crossesMethod
crosses(g1::AbstractGeometry, g2::AbstractGeometry)
Returns true if the geometries are crossing.
source
ArchGDAL.curvegeomMethod
curvegeom(geom::AbstractGeometry)
Return curve version of this geometry.
Returns a geometry that has possibly CIRCULARSTRING, COMPOUNDCURVE, CURVEPOLYGON, MULTICURVE or MULTISURFACE in it, by de-approximating linear into curve geometries.
If the geometry has no curve portion, the returned geometry will be a clone.
The reverse function is OGRGGetLinearGeometry().
source
ArchGDAL.delaunaytriangulationMethod
delaunaytriangulation(geom::AbstractGeometry, tol::Real, onlyedges::Bool)
Return a Delaunay triangulation of the vertices of the geometry.
Parameters
• geom: the geometry.
• tol: optional snapping tolerance to use for improved robustness
• onlyedges: if true, will return a MULTILINESTRING, otherwise it will return a GEOMETRYCOLLECTION containing triangular POLYGONs.
source
ArchGDAL.destroyMethod
Destroy geometry object.
Equivalent to invoking delete on a geometry, but it guaranteed to take place within the context of the GDAL/OGR heap.
source
ArchGDAL.destroyMethod
Destroy prepared geometry object.
Equivalent to invoking delete on a prepared geometry, but it guaranteed to take place within the context of the GDAL/OGR heap.
source
ArchGDAL.differenceMethod
difference(g1::AbstractGeometry, g2::AbstractGeometry)
Generates a new geometry which is the region of this geometry with the region of the other geometry removed.
Returns
A new geometry representing the difference of the geometries, or NULL if the difference is empty.
source
ArchGDAL.disjointMethod
disjoint(g1::AbstractGeometry, g2::AbstractGeometry)
Returns true if the geometries are disjoint.
source
ArchGDAL.distanceMethod
distance(g1::AbstractGeometry, g2::AbstractGeometry)
Returns the distance between the geometries or -1 if an error occurs.
source
ArchGDAL.empty!Method
empty!(geom::AbstractGeometry)
Clear geometry information.
This restores the geometry to its initial state after construction, and before assignment of actual geometry.
source
ArchGDAL.equalsMethod
equals(g1::AbstractGeometry, g2::AbstractGeometry)
Returns true if the geometries are equivalent.
source
ArchGDAL.forcetoFunction
forceto(geom::AbstractGeometry, targettype::OGRwkbGeometryType, [options])
Tries to force the provided geometry to the specified geometry type.
Parameters
• geom: the input geometry.
• targettype: target output geometry type.
options: (optional) options as a null-terminated vector of strings
It can promote 'single' geometry type to their corresponding collection type (see OGRGTGetCollection()) or the reverse. non-linear geometry type to their corresponding linear geometry type (see OGRGTGetLinear()), by possibly approximating circular arcs they may contain. Regarding conversion from linear geometry types to curve geometry types, only "wraping" will be done. No attempt to retrieve potential circular arcs by de-approximating stroking will be done. For that, OGRGeometry::getCurveGeometry() can be used.
The passed in geometry is cloned and a new one returned.
source
ArchGDAL.fromGMLMethod
fromGML(data)
Create geometry from GML.
This method translates a fragment of GML containing only the geometry portion into a corresponding OGRGeometry. There are many limitations on the forms of GML geometries supported by this parser, but they are too numerous to list here.
The following GML2 elements are parsed : Point, LineString, Polygon, MultiPoint, MultiLineString, MultiPolygon, MultiGeometry.
source
ArchGDAL.fromWKBMethod
fromWKB(data)
Create a geometry object of the appropriate type from it's well known binary (WKB) representation.
Parameters
• data: pointer to the input BLOB data.
source
ArchGDAL.fromWKTMethod
fromWKT(data::Vector{String})
Create a geometry object of the appropriate type from its well known text (WKT) representation.
Parameters
• data: input zero terminated string containing WKT representation of the geometry to be created. The pointer is updated to point just beyond that last character consumed.
source
ArchGDAL.geomdimMethod
geomdim(geom::AbstractGeometry)
Get the dimension of the geometry. 0 for points, 1 for lines and 2 for surfaces.
This function corresponds to the SFCOM IGeometry::GetDimension() method. It indicates the dimension of the geometry, but does not indicate the dimension of the underlying space (as indicated by OGRGGetCoordinateDimension() function).
source
ArchGDAL.getgeomMethod
getgeom(geom::AbstractGeometry, i::Integer)
Fetch geometry from a geometry container.
For a polygon, getgeom(polygon,i) returns the exterior ring if i == 0, and the interior rings for i > 0.
Parameters
• geom: the geometry container from which to get a geometry from.
• i: index of the geometry to fetch, between 0 and ngeom() - 1.
source
ArchGDAL.getpointMethod
getpoint(geom::AbstractGeometry, i::Integer)
Fetch a point in line string or a point geometry, at index i.
Parameters
• i: the vertex to fetch, from 0 to ngeom()-1, zero for a point.
source
ArchGDAL.getspatialrefMethod
getspatialref(geom::AbstractGeometry)
Returns a clone of the spatial reference system for the geometry.
(The original SRS may be shared with many objects, and should not be modified.)
source
ArchGDAL.getxMethod
getx(geom::AbstractGeometry, i::Integer)
Fetch the x coordinate of a point from a geometry, at index i.
source
ArchGDAL.getyMethod
gety(geom::AbstractGeometry, i::Integer)
Fetch the y coordinate of a point from a geometry, at index i.
source
ArchGDAL.getzMethod
getz(geom::AbstractGeometry, i::Integer)
Fetch the z coordinate of a point from a geometry, at index i.
source
ArchGDAL.hascurvegeomMethod
hascurvegeom(geom::AbstractGeometry, nonlinear::Bool)
Returns if this geometry is or has curve geometry.
Parameters
• geom: the geometry to operate on.
• nonlinear: set it to true to check if the geometry is or contains a CIRCULARSTRING.
source
ArchGDAL.intersectionMethod
intersection(g1::AbstractGeometry, g2::AbstractGeometry)
Returns a new geometry representing the intersection of the geometries, or NULL if there is no intersection or an error occurs.
Generates a new geometry which is the region of intersection of the two geometries operated on. The OGRGIntersects() function can be used to test if two geometries intersect.
source
ArchGDAL.intersectsMethod
intersects(g1::AbstractGeometry, g2::AbstractGeometry)
Returns whether the geometries intersect
Determines whether two geometries intersect. If GEOS is enabled, then this is done in rigorous fashion otherwise true is returned if the envelopes (bounding boxes) of the two geometries overlap.
source
ArchGDAL.isemptyMethod
isempty(geom::AbstractGeometry)
Returns true if the geometry has no points, otherwise false.
source
ArchGDAL.isringMethod
isring(geom::AbstractGeometry)
Returns true if the geometry is a ring, otherwise false.
source
ArchGDAL.issimpleMethod
issimple(geom::AbstractGeometry)
Returns true if the geometry is simple, otherwise false.
source
ArchGDAL.isvalidMethod
isvalid(geom::AbstractGeometry)
Returns true if the geometry is valid, otherwise false.
source
ArchGDAL.ngeomMethod
ngeom(geom::AbstractGeometry)
The number of elements in a geometry or number of geometries in container.
This corresponds to
• OGR_G_GetPointCount for wkbPoint[25D] or wkbLineString[25D],
• OGR_G_GetGeometryCount for geometries of type wkbPolygon[25D], wkbMultiPoint[25D], wkbMultiLineString[25D], wkbMultiPolygon[25D] or wkbGeometryCollection[25D], and
• 0 for other geometry types.
source
ArchGDAL.pointalonglineMethod
pointalongline(geom::AbstractGeometry, distance::Real)
Fetch point at given distance along curve.
Parameters
• geom: curve geometry.
• distance: distance along the curve at which to sample position. This distance should be between zero and geomlength() for this curve.
Returns
a point or NULL.
source
ArchGDAL.pointonsurfaceMethod
pointonsurface(geom::AbstractGeometry)
Returns a point guaranteed to lie on the surface.
This method relates to the SFCOM ISurface::get_PointOnSurface() method however the current implementation based on GEOS can operate on other geometry types than the types that are supported by SQL/MM-Part 3 : surfaces (polygons) and multisurfaces (multipolygons).
source
ArchGDAL.polygonfromedgesMethod
polygonfromedges(lines::AbstractGeometry, tol::Real; besteffort = false,
autoclose = false)
Build a ring from a bunch of arcs.
Parameters
• lines: handle to an OGRGeometryCollection (or OGRMultiLineString) containing the line string geometries to be built into rings.
• tol: whether two arcs are considered close enough to be joined.
Keyword Arguments
• besteffort: (defaults to false) not yet implemented???.
• autoclose: indicates if the ring should be close when first and last points of the ring are the same. (defaults to false)
source
ArchGDAL.polygonizeMethod
polygonize(geom::AbstractGeometry)
Polygonizes a set of sparse edges.
A new geometry object is created and returned containing a collection of reassembled Polygons: NULL will be returned if the input collection doesn't correspond to a MultiLinestring, or when reassembling Edges into Polygons is impossible due to topological inconsistencies.
source
ArchGDAL.preparegeomMethod
preparegeom(geom::AbstractGeometry)
Create an prepared geometry of a geometry. This can speed up operations which interact with the geometry multiple times, by storing caches of calculated geometry information.
source
ArchGDAL.removeallgeoms!Method
removeallgeoms!(geom::AbstractGeometry, todelete::Bool = true)
Remove all geometries from an exiting geometry container.
Parameters
• geom: the existing geometry to delete from.
• todelete: if true the geometry will be destroyed, otherwise it will not. The default is true as the existing geometry is considered to own the geometries in it.
source
ArchGDAL.removegeom!Method
removegeom!(geom::AbstractGeometry, i::Integer, todelete::Bool = true)
Remove a geometry from an exiting geometry container.
Parameters
• geom: the existing geometry to delete from.
• i: the index of the geometry to delete. A value of -1 is a special flag meaning that all geometries should be removed.
• todelete: if true the geometry will be destroyed, otherwise it will not. The default is true as the existing geometry is considered to own the geometries in it.
source
ArchGDAL.segmentize!Method
segmentize!(geom::AbstractGeometry, maxlength::Real)
Modify the geometry such it has no segment longer than the given distance.
Interpolated points will have Z and M values (if needed) set to 0. Distance computation is performed in 2d only
Parameters
• geom: the geometry to segmentize
• maxlength: the maximum distance between 2 points after segmentization
source
ArchGDAL.setcoorddim!Method
setcoorddim!(geom::AbstractGeometry, dim::Integer)
Set the coordinate dimension.
This method sets the explicit coordinate dimension. Setting the coordinate dimension of a geometry to 2 should zero out any existing Z values. Setting the dimension of a geometry collection, a compound curve, a polygon, etc. will affect the children geometries. This will also remove the M dimension if present before this call.
source
ArchGDAL.setnonlineargeomflag!Method
setnonlineargeomflag!(flag::Bool)
Set flag to enable/disable returning non-linear geometries in the C API.
This flag has only an effect on the OGRFGetGeometryRef(), OGRFGetGeomFieldRef(), OGRLGetGeomType(), OGRGFldGetType() and OGRFDGetGeomType() C API methods. It is meant as making it simple for applications using the OGR C API not to have to deal with non-linear geometries, even if such geometries might be returned by drivers. In which case, they will be transformed into their closest linear geometry, by doing linear approximation, with OGRGForceTo().
Libraries should generally not use that method, since that could interfere with other libraries or applications.
Parameters
• flag: true if non-linear geometries might be returned (default value). false to ask for non-linear geometries to be approximated as linear geometries.
Returns
a point or NULL.
source
ArchGDAL.setpoint!Function
setpoint!(geom::AbstractGeometry, i::Integer, x, y)
setpoint!(geom::AbstractGeometry, i::Integer, x, y, z)
Set the location of a vertex in a point or linestring geometry.
Parameters
• geom: handle to the geometry to add a vertex to.
• i: the index of the vertex to assign (zero based) or zero for a point.
• x: input X coordinate to assign.
• y: input Y coordinate to assign.
• z: input Z coordinate to assign (defaults to zero).
source
ArchGDAL.setpointcount!Method
setpointcount!(geom::AbstractGeometry, n::Integer)
Set number of points in a geometry.
Parameters
• geom: the geometry.
• n: the new number of points for geometry.
source
ArchGDAL.simplifyMethod
simplify(geom::AbstractGeometry, tol::Real)
Compute a simplified geometry.
Parameters
• geom: the geometry.
• tol: the distance tolerance for the simplification.
source
ArchGDAL.simplifypreservetopologyMethod
simplifypreservetopology(geom::AbstractGeometry, tol::Real)
Simplify the geometry while preserving topology.
Parameters
• geom: the geometry.
• tol: the distance tolerance for the simplification.
source
ArchGDAL.symdifferenceMethod
symdifference(g1::AbstractGeometry, g2::AbstractGeometry)
Returns a new geometry representing the symmetric difference of the geometries or NULL if the difference is empty or an error occurs.
source
ArchGDAL.toISOWKBFunction
toISOWKB(geom::AbstractGeometry, order::OGRwkbByteOrder = wkbNDR)
Convert a geometry into SFSQL 1.2 / ISO SQL/MM Part 3 well known binary format.
Parameters
• geom: handle on the geometry to convert to a well know binary data from.
• order: One of wkbXDR or [wkbNDR] indicating MSB or LSB byte order resp.
source
ArchGDAL.toISOWKTMethod
toISOWKT(geom::AbstractGeometry)
Convert a geometry into SFSQL 1.2 / ISO SQL/MM Part 3 well known text format.
source
ArchGDAL.toJSONMethod
toJSON(geom::AbstractGeometry; kwargs...)
Convert a geometry into GeoJSON format.
• The following options are supported :
• COORDINATE_PRECISION=number: maximum number of figures after decimal separator to write in coordinates.
• SIGNIFICANT_FIGURES=number: maximum number of significant figures.
• If COORDINATEPRECISION is defined, SIGNIFICANTFIGURES will be ignored if
• specified.
• When none are defined, the default is COORDINATE_PRECISION=15.
Parameters
• geom: handle to the geometry.
Returns
A GeoJSON fragment or NULL in case of error.
source
ArchGDAL.toWKBFunction
toWKB(geom::AbstractGeometry, order::OGRwkbByteOrder = wkbNDR)
Convert a geometry well known binary format.
Parameters
• geom: handle on the geometry to convert to a well know binary data from.
• order: One of wkbXDR or [wkbNDR] indicating MSB or LSB byte order resp.
source
ArchGDAL.touchesMethod
touches(g1::AbstractGeometry, g2::AbstractGeometry)
Returns true if the geometries are touching.
source
ArchGDAL.transform!Method
transform!(geom::AbstractGeometry, coordtransform::CoordTransform)
Apply arbitrary coordinate transformation to geometry.
Parameters
• geom: handle on the geometry to apply the transform to.
• coordtransform: handle on the transformation to apply.
source
ArchGDAL.unionMethod
union(g1::AbstractGeometry, g2::AbstractGeometry)
Returns a new geometry representing the union of the geometries.
source
ArchGDAL.withinMethod
within(g1::AbstractGeometry, g2::AbstractGeometry)
Returns true if g1 is contained within g2.
source
ArchGDAL.addpart!Method
addpart!(stylemanager::StyleManager, styletool::StyleTool)
Add a part (style tool) to the current style.
Parameters
• stylemanager: handle to the style manager.
• styletool: the style tool defining the part to add.
Returns
true on success, false on error.
source
ArchGDAL.addstyle!Method
addstyle!(stylemanager::StyleManager, stylename, stylestring)
Add a style to the current style table.
Parameters
• stylemanager: handle to the style manager.
• stylename: the name of the style to add.
• stylestring: (optional) the style string to use, or (if not provided) to use the style stored in the manager.
Returns
true on success, false on error.
source
ArchGDAL.addstyle!Method
addstyle!(styletable::StyleTable, stylename, stylestring)
Add a new style in the table.
Parameters
• styletable: handle to the style table.
• name: the name the style to add.
• stylestring: the style string to add.
Returns
true on success, false on error
source
ArchGDAL.asdoubleFunction
asdouble(styletool::StyleTool, id::Integer, nullflag = Ref{Cint}(0))
Get Style Tool parameter value as a double.
Parameters
• styletool: handle to the style tool.
• id: the parameter id from the enumeration corresponding to the type of this style tool (one of the OGRSTPenParam, OGRSTBrushParam, OGRSTSymbolParam or OGRSTLabelParam enumerations)
• nullflag: pointer to an integer that will be set to true or false to indicate whether the parameter value is NULL.
Returns
the parameter value as a double and sets nullflag.
source
ArchGDAL.asintFunction
asint(styletool::StyleTool, id::Integer, nullflag = Ref{Cint}(0))
Get Style Tool parameter value as an integer.
Parameters
• styletool: handle to the style tool.
• id: the parameter id from the enumeration corresponding to the type of this style tool (one of the OGRSTPenParam, OGRSTBrushParam, OGRSTSymbolParam or OGRSTLabelParam enumerations)
• nullflag: pointer to an integer that will be set to true or false to indicate whether the parameter value is NULL.
Returns
the parameter value as an integer and sets nullflag.
source
ArchGDAL.asstringMethod
asstring(styletool::StyleTool, id::Integer)
asstring(styletool::StyleTool, id::Integer, nullflag::Ref{Cint})
Get Style Tool parameter value as a string.
Parameters
• styletool: handle to the style tool.
• id: the parameter id from the enumeration corresponding to the type of this style tool (one of the OGRSTPenParam, OGRSTBrushParam, OGRSTSymbolParam or OGRSTLabelParam enumerations)
• nullflag: pointer to an integer that will be set to true or false to indicate whether the parameter value is NULL.
Returns
the parameter value as a string and sets nullflag.
source
ArchGDAL.findstylestringMethod
findstylestring(styletable::StyleTable, name::AbstractString)
Get a style string by name.
Parameters
• styletable: handle to the style table.
• name: the name of the style string to find.
Returns
the style string matching the name or NULL if not found or error.
source
ArchGDAL.getstylestringMethod
getstylestring(styletool::StyleTool)
Get the style string for this Style Tool.
Parameters
• styletool: handle to the style tool.
Returns
the style string for this style tool or "" if the styletool is invalid.
source
ArchGDAL.gettypeMethod
gettype(styletool::StyleTool)
Determine type of Style Tool.
Parameters
• styletool: handle to the style tool.
Returns
the style tool type, one of OGRSTCPen (1), OGRSTCBrush (2), OGRSTCSymbol (3) or OGRSTCLabel (4). Returns OGRSTCNone (0) if the OGRStyleToolH is invalid.
source
ArchGDAL.getunitMethod
getunit(styletool::StyleTool)
Get Style Tool units.
Parameters
• styletool: handle to the style tool.
Returns
the style tool units.
source
ArchGDAL.initialize!Method
initialize!(stylemanager::StyleManager, stylestring = C_NULL)
Initialize style manager from the style string.
Parameters
• stylemanager: handle to the style manager.
• stylestring: the style string to use (can be NULL).
Returns
true on success, false on error.
source
ArchGDAL.laststyleMethod
laststyle(styletable::StyleTable)
Get the style name of the last style string fetched with OGRSTBLGetNextStyle.
Parameters
• styletable: handle to the style table.
Returns
the Name of the last style string or NULL on error.
source
ArchGDAL.loadstyletable!Method
loadstyletable!(styletable::StyleTable, filename::AbstractString)
Load a style table from a file.
Parameters
• styletable: handle to the style table.
• filename: the name of the file to load from.
Returns
true on success, false on error
source
ArchGDAL.nextstyleMethod
nextstyle(styletable::StyleTable)
Get the next style string from the table.
Parameters
• styletable: handle to the style table.
Returns
the next style string or NULL on error.
source
ArchGDAL.npartFunction
npart(stylemanager::StyleManager)
npart(stylemanager::StyleManager, stylestring::AbstractString)
Get the number of parts in a style.
Parameters
• stylemanager: handle to the style manager.
• stylestring: (optional) the style string on which to operate. If NULL then the current style string stored in the style manager is used.
Returns
the number of parts (style tools) in the style.
source
ArchGDAL.resetreading!Method
resetreading!(styletable::StyleTable)
Reset the next style pointer to 0.
Parameters
• styletable: handle to the style table.
source
ArchGDAL.savestyletableMethod
savestyletable(styletable::StyleTable, filename::AbstractString)
Save a style table to a file.
Parameters
• styletable: handle to the style table.
• filename: the name of the file to save to.
Returns
true on success, false on error
source
ArchGDAL.setparam!Function
setparam!(styletool::StyleTool, id::Integer, value)
Set Style Tool parameter value.
Parameters
• styletool: handle to the style tool.
• id: the parameter id from the enumeration corresponding to the type of this style tool (one of the OGRSTPenParam, OGRSTBrushParam, OGRSTSymbolParam or OGRSTLabelParam enumerations)
• value: the new parameter value, can be an Integer, Float64, or AbstactString
source
ArchGDAL.setunit!Method
setunit!(styletool::StyleTool, newunit::OGRSTUnitId, scale::Real)
Set Style Tool units.
Parameters
• styletool: handle to the style tool.
• newunit: the new unit.
• scale: ground to paper scale factor.
source
ArchGDAL.toRGBAMethod
toRGBA(styletool::StyleTool, color::AbstractString)
Return the r,g,b,a components of a color encoded in #RRGGBB[AA] format.
Parameters
• styletool: handle to the style tool.
• pszColor: the color to parse
Returns
(R,G,B,A) tuple of Cints.
source
ArchGDAL.unsafe_createstylemanagerFunction
unsafe_createstylemanager(styletable = C_NULL)
OGRStyleMgr factory.
Parameters
• styletable: OGRStyleTable or NULL if not working with a style table.
Returns
an handle to the new style manager object.
source
ArchGDAL.unsafe_createstyletoolMethod
unsafe_createstyletool(classid::OGRSTClassId)
OGRStyleTool factory.
Parameters
• classid: subclass of style tool to create. One of OGRSTCPen (1), OGRSTCBrush (2), OGRSTCSymbol (3) or OGRSTCLabel (4).
Returns
an handle to the new style tool object or NULL if the creation failed.
source
ArchGDAL.unsafe_getpartFunction
unsafe_getpart(stylemanager::StyleManager, id::Integer,
stylestring = C_NULL)
Fetch a part (style tool) from the current style.
Parameters
• stylemanager: handle to the style manager.
• id: the part number (0-based index).
• stylestring: (optional) the style string on which to operate. If not provided, then the current style string stored in the style manager is used.
Returns
OGRStyleToolH of the requested part (style tools) or NULL on error.
source
ArchGDAL.addfielddefn!Method
addfielddefn!(layer::AbstractFeatureLayer, name, etype::OGRFieldType;
<keyword arguments>)
Create a new field on a layer.
This function should not be called while there are feature objects in existence that were obtained or created with the previous layer definition.
Not all drivers support this function. You can query a layer to check if it supports it with the OLCCreateField capability. Some drivers may only support this method while there are still no features in the layer. When it is supported, the existing features of the backing file/database should be updated accordingly.
Drivers may or may not support not-null constraints. If they support creating fields with not-null constraints, this is generally before creating any feature to the layer.
Parameters
• layer: the layer to write the field definition.
• name: name of the field definition to write to disk.
• etype: type of the field definition to write to disk.
Keyword arguments
• nwidth: the preferred formatting width. 0 (default) indicates undefined.
• nprecision: number of decimals for formatting. 0 (default) for undefined.
• justify: the formatting justification ([OJUndefined], OJLeft or OJRight)
• approx: If true (default false), the field may be created in a slightly different form depending on the limitations of the format driver.
source
ArchGDAL.writegeomdefn!Method
writegeomdefn!(layer::AbstractFeatureLayer, name, etype::OGRwkbGeometryType,
approx=false)
Write a new geometry field on a layer.
This function should not be called while there are feature objects in existence that were obtained or created with the previous layer definition.
Not all drivers support this function. You can query a layer to check if it supports it with the OLCCreateField capability. Some drivers may only support this method while there are still no features in the layer. When it is supported, the existing features of the backing file/database should be updated accordingly.
Drivers may or may not support not-null constraints. If they support creating fields with not-null constraints, this is generally before creating any feature to the layer.
Parameters
• layer: the layer to write the field definition.
• name: name of the field definition to write to disk.
• etype: type of the geometry field defintion to write to disk.
Keyword arguments
• approx: If true (default false), the geometry field may be created in a slightly different form depending on the limitations of the driver.
source
## Raster Data
ArchGDAL.RasterDatasetType
RasterDataset(dataset::AbstractDataset)
This data structure is returned by the ArchGDAL.readraster function and is a wrapper for a GDAL dataset. This wrapper is to signal the user that the dataset should be treated as a 3D AbstractArray where the first two dimensions correspond to longitude and latitude and the third dimension corresponds to different raster bands.
As it is a wrapper around a GDAL Dataset, it supports the usual raster methods for a GDAL Dataset such as getgeotransform, nraster, getband, getproj, width, and height. As it is also a subtype of AbstractDiskArray{T,3}, it supports the following additional methods: readblock!, writeblock!, eachchunk, haschunks, etc. This satisfies the DiskArray interface, allowing us to be able to index into it like we would an array.
Constructing a RasterDataset will error if the raster bands do not have all the same size and a common element data type.
source
ArchGDAL._common_sizeMethod
_common_size(ds::AbstractDataset)
Determines the size of the raster bands in a dataset and errors if the sizes are not unique.
source
ArchGDAL.readrasterMethod
readraster(s::String; kwargs...)
Opens a GDAL raster dataset. The difference to ArchGDAL.read is that this function returns a RasterDataset, which is a subtype of AbstractDiskArray{T,3}, so that users can operate on the array using direct indexing.
source
ArchGDAL.getcolorentryasrgbMethod
getcolorentryasrgb(ct::ColorTable, i::Integer)
Fetch a table entry in RGB format.
In theory this method should support translation of color palettes in non-RGB color spaces into RGB on the fly, but currently it only works on RGB color tables.
Parameters
• i entry offset from zero to GetColorEntryCount()-1.
Returns
true on success, or false if the conversion isn't supported.
source
ArchGDAL.paletteinterpMethod
paletteinterp(ct::ColorTable)
Fetch palette interpretation.
Returns
palette interpretation enumeration value, usually GPI_RGB.
source
ArchGDAL.setcolorentry!Method
setcolorentry!(ct::ColorTable, i::Integer, entry::GDAL.GDALColorEntry)
Set entry in color table.
Note that the passed in color entry is copied, and no internal reference to it is maintained. Also, the passed in entry must match the color interpretation of the table to which it is being assigned.
The table is grown as needed to hold the supplied offset.
Parameters
• i entry offset from 0 to ncolorentry()-1.
• entry value to assign to table.
source
ArchGDAL.asdoubleMethod
asdouble(rat::RasterAttrTable, row::Integer, col::Integer)
Fetch field value as a double.
The value of the requested column in the requested row is returned as a double. Non double fields will be converted to double with the possibility of data loss.
Parameters
• row row to fetch (zero based).
• col column to fetch (zero based).
source
ArchGDAL.asintMethod
asint(rat::RasterAttrTable, row::Integer, col::Integer)
Fetch field value as a integer.
The value of the requested column in the requested row is returned as an int. Non-integer fields will be converted to int with the possibility of data loss.
Parameters
• row row to fetch (zero based).
• col column to fetch (zero based).
source
ArchGDAL.asstringMethod
asstring(rat::RasterAttrTable, row::Integer, col::Integer)
Fetch field value as a string.
The value of the requested column in the requested row is returned as a string. If the field is numeric, it is formatted as a string using default rules, so some precision may be lost.
Parameters
• row row to fetch (zero based).
• col column to fetch (zero based).
source
ArchGDAL.attributeio!Function
attributeio!(rat::RasterAttrTable, access::GDALRWFlag, col, startrow, nrows,
data::Vector)
Read or Write a block of data to/from the Attribute Table.
Parameters
• access Either GF_Read or GF_Write
• col Column of the Attribute Table
• startrow Row to start reading/writing (zero based)
• nrows Number of rows to read or write
• data Vector of Float64, Int32 or AbstractString to read/write. Should be at least nrows long.
source
ArchGDAL.changesarewrittentofileMethod
changesarewrittentofile(rat::RasterAttrTable)
Determine whether changes made to this RAT are reflected directly in the dataset
If this returns false then RasterBand.SetDefaultRAT() should be called. Otherwise this is unnecessary since changes to this object are reflected in the dataset.
source
ArchGDAL.columnnameMethod
columnname(rat::RasterAttrTable, i::Integer)
Fetch name of indicated column.
Parameters
• i the column index (zero based).
Returns
the column name or an empty string for invalid column numbers.
source
ArchGDAL.columntypeMethod
columntype(rat::RasterAttrTable, i::Integer)
Fetch column type.
Parameters
• col the column index (zero based).
Returns
column type or GFT_Integer if the column index is illegal.
source
ArchGDAL.createcolumn!Method
createcolumn!(rat::RasterAttrTable, name, fieldtype::GDALRATFieldType,
fieldusage::GDALRATFieldUsage)
Create new column.
If the table already has rows, all row values for the new column will be initialized to the default value ("", or zero). The new column is always created as the last column, can will be column (field) "GetColumnCount()-1" after CreateColumn() has completed successfully.
source
ArchGDAL.findcolumnindexMethod
findcolumnindex(rat::RasterAttrTable, usage::GDALRATFieldUsage)
Returns the index of the first column of the requested usage type, or -1 if no match is found.
Parameters
• usage usage type to search for.
source
ArchGDAL.findrowindexMethod
findrowindex(rat::RasterAttrTable, pxvalue::Real)
Get row for pixel value.
Given a raw pixel value, the raster attribute table is scanned to determine which row in the table applies to the pixel value. The row index is returned.
Parameters
• pxvalue the pixel value.
Returns
The row index or -1 if no row is appropriate.
source
ArchGDAL.getlinearbinningMethod
getlinearbinning(rat::RasterAttrTable)
Get linear binning information.
Returns
• row0min the lower bound (pixel value) of the first category.
• binsize the width of each category (in pixel value units).
source
ArchGDAL.initializeRAT!Method
initializeRAT!(rat::RasterAttrTable, colortable::ColorTable)
Initialize from color table.
This method will setup a whole raster attribute table based on the contents of the passed color table. The Value (GFUMinMax), Red (GFURed), Green (GFUGreen), Blue (GFUBlue), and Alpha (GFU_Alpha) fields are created, and a row is set for each entry in the color table.
The raster attribute table must be empty before calling initializeRAT!().
The Value fields are set based on the implicit assumption with color tables that entry 0 applies to pixel value 0, 1 to 1, etc.
source
ArchGDAL.setlinearbinning!Method
setlinearbinning!(rat::RasterAttrTable, row0min::Real, binsize::Real)
Set linear binning information.
For RATs with equal sized categories (in pixel value space) that are evenly spaced, this method may be used to associate the linear binning information with the table.
Parameters
• row0min the lower bound (pixel value) of the first category.
• binsize the width of each category (in pixel value units).
source
ArchGDAL.setrowcount!Method
setrowcount!(rat::RasterAttrTable, n::Integer)
Set row count.
Resizes the table to include the indicated number of rows. Newly created rows will be initialized to their default values - "" for strings, and zero for numeric fields.
source
ArchGDAL.setvalue!Function
setvalue!(rat::RasterAttrTable, row, col, val)
Set field value from string.
The indicated field (column) on the indicated row is set from the passed value. The value will be automatically converted for other field types, with a possible loss of precision.
Parameters
• row row to fetch (zero based).
• col column to fetch (zero based).
• val the value to assign, can be an AbstractString, Integer or Float64.
source
ArchGDAL.toColorTableFunction
toColorTable(rat::RasterAttrTable, n::Integer = -1)
Translate to a color table.
Parameters
• n The number of entries to produce (0 to n-1), or -1 to auto-determine the number of entries.
Returns
the generated color table or NULL on failure.
source
ArchGDAL.unsafe_cloneMethod
unsafe_clone(rat::RasterAttrTable)
Copy Raster Attribute Table.
Creates a new copy of an existing raster attribute table. The new copy becomes the responsibility of the caller to destroy. May fail (return NULL) if the attribute table is too large to clone: (nrow() * ncolumn() > RAT_MAX_ELEM_FOR_CLONE)
source
ArchGDAL.accessflagMethod
accessflag(band::AbstractRasterBand)
Return the access flag (e.g. OF_READONLY or OF_UPDATE) for this band.
source
ArchGDAL.blocksizeMethod
blocksize(band::AbstractRasterBand)
Fetch the "natural" block size of this band.
GDAL contains a concept of the natural block size of rasters so that applications can organized data access efficiently for some file formats. The natural block size is the block size that is most efficient for accessing the format. For many formats this is simple a whole scanline in which case *pnXSize is set to GetXSize(), and *pnYSize is set to 1.
However, for tiled images this will typically be the tile size.
Note that the X and Y block sizes don't have to divide the image size evenly, meaning that right and bottom edge blocks may be incomplete. See ReadBlock() for an example of code dealing with these issues.
source
ArchGDAL.copywholeraster!Method
copywholeraster!( source::AbstractRasterBand, dest::AbstractRasterBand;
[options, [progressdata, [progressfunc]]])
Copy all raster band raster data.
This function copies the complete raster contents of one band to another similarly configured band. The source and destination bands must have the same width and height. The bands do not have to have the same data type.
It implements efficient copying, in particular "chunking" the copy in substantial blocks.
Currently the only options value supported is : "COMPRESSED=YES" to force alignment on target dataset block sizes to achieve best compression. More options may be supported in the future.
Parameters
• source the source band
• dest the destination band
• options transfer hints in "StringList" Name=Value format.
• progressfunc progress reporting function.
• progressdata callback data for progress function.
source
ArchGDAL.createmaskband!Method
createmaskband!(band::AbstractRasterBand, nflags::Integer)
The default implementation of the CreateMaskBand() method is implemented based on similar rules to the .ovr handling implemented using the GDALDefaultOverviews object. A TIFF file with the extension .msk will be created with the same basename as the original file, and it will have as many bands as the original image (or just one for GMF_PER_DATASET). The mask images will be deflate compressed tiled images with the same block size as the original image if possible.
If you got a mask band with a previous call to GetMaskBand(), it might be invalidated by CreateMaskBand(). So you have to call GetMaskBand() again.
source
ArchGDAL.fillraster!Method
fillraster!(band::AbstractRasterBand, realvalue::Real, imagvalue::Real = 0)
Fill this band with a constant value.
GDAL makes no guarantees about what values pixels in newly created files are set to, so this method can be used to clear a band to a specified "default" value. The fill value is passed in as a double but this will be converted to the underlying type before writing to the file. An optional second argument allows the imaginary component of a complex constant value to be specified.
Parameters
• realvalue: Real component of fill value
• imagvalue: Imaginary component of fill value, defaults to zero
source
ArchGDAL.getcategorynamesMethod
getcategorynames(band::AbstractRasterBand)
Fetch the list of category names for this raster.
The return list is a "StringList" in the sense of the CPL functions. That is a NULL terminated array of strings. Raster values without associated names will have an empty string in the returned list. The first entry in the list is for raster values of zero, and so on.
source
ArchGDAL.getdatasetMethod
getdataset(band::AbstractRasterBand)
Fetch the handle to its dataset handle, or NULL if this cannot be determined.
Note that some RasterBands are not considered to be a part of a dataset, such as overviews or other "freestanding" bands.
source
ArchGDAL.getdefaultRATMethod
getdefaultRAT(band::AbstractRasterBand)
A RAT will be returned if there is a default one associated with the band, otherwise NULL is returned. The returned RAT is owned by the band and should not be deleted by the application.
source
ArchGDAL.getmaskbandMethod
getmaskband(band::IRasterBand)
Return the mask band associated with the band.
The RasterBand class includes a default implementation of GetMaskBand() that returns one of four default implementations:
• If a corresponding .msk file exists it will be used for the mask band.
• If the dataset has a NODATA_VALUES metadata item, an instance of the new
GDALNoDataValuesMaskBand class will be returned. GetMaskFlags() will return GMF_NODATA | GMF_PER_DATASET.
• If the band has a nodata value set, an instance of the new
GDALNodataMaskRasterBand class will be returned. GetMaskFlags() will return GMF_NODATA.
• If there is no nodata value, but the dataset has an alpha band that seems to
apply to this band (specific rules yet to be determined) and that is of type GDT_Byte then that alpha band will be returned, and the flags GMF_PER_DATASET and GMF_ALPHA will be returned in the flags.
• If neither of the above apply, an instance of the new
GDALAllValidRasterBand class will be returned that has 255 values for all pixels. The null flags will return GMF_ALL_VALID.
Note that the GetMaskBand() should always return a RasterBand mask, even if it is only an all 255 mask with the flags indicating GMF_ALL_VALID.
Returns
source
ArchGDAL.getnodatavalueMethod
getnodatavalue(band::AbstractRasterBand)
Fetch the no data value for this band.
If there is no out of data value, nothing will be returned instead. The no data value for a band is generally a special marker value used to mark pixels that are not valid data. Such pixels should generally not be displayed, nor contribute to analysis operations.
Returns
the nodata value for this band or nothing.
source
ArchGDAL.getoffsetMethod
getoffset(band::AbstractRasterBand)
Fetch the raster value offset.
This (in combination with GetScale()) is used to transform raw pixel values into the units returned by GetUnits(). For e.g. this might be used to store elevations in GUInt16 bands with a precision of 0.1, starting from -100.
Units value = (raw value * scale) + offset
For file formats that don't know this intrinsically, a value of 0 is returned.
source
ArchGDAL.getscaleMethod
getscale(band::AbstractRasterBand)
Fetch the raster value scale.
This value (in combination with the GetOffset() value) is used to transform raw pixel values into the units returned by GetUnits(). For example this might be used to store elevations in GUInt16 bands with a precision of 0.1, and starting from -100.
Units value = (raw value * scale) + offset
For file formats that don't know this intrinsically a value of one is returned.
source
ArchGDAL.getunittypeMethod
getunittype(band::AbstractRasterBand)
Return a name for the units of this raster's values. For instance, it might be "m" for an elevation model in meters, or "ft" for feet.
source
ArchGDAL.indexofMethod
indexof(band::AbstractRasterBand)
Fetch the band number (1+) within its dataset, or 0 if unknown.
This method may return a value of 0 to indicate overviews, or free-standing RasterBand objects without a relationship to a dataset.
source
ArchGDAL.maskflaginfoMethod
maskflaginfo(band::AbstractRasterBand)
Returns the flags as in maskflags(@ref) but unpacks the bit values into a named tuple with the following fields:
• all_valid
• per_dataset
• alpha
• nodata
Returns
A named tuple with unpacked mask flags
source
ArchGDAL.maskflagsMethod
maskflags(band::AbstractRasterBand)
Return the status flags of the mask band associated with the band.
The GetMaskFlags() method returns an bitwise OR-ed set of status flags with the following available definitions that may be extended in the future:
• GMF_ALL_VALID (0x01): There are no invalid pixels, all mask values
will be 255. When used this will normally be the only flag set.
• GMF_PER_DATASET (0x02): The mask band is shared between all bands on
the dataset.
• GMF_ALPHA (0x04): The mask band is actually an alpha band and may
have values other than 0 and 255.
• GMF_NODATA (0x08): Indicates the mask is actually being generated
from nodata values. (mutually exclusive of GMF_ALPHA)
The RasterBand class includes a default implementation of GetMaskBand() that returns one of four default implementations:
• If a corresponding .msk file exists it will be used for the mask band.
• If the dataset has a NODATA_VALUES metadata item, an instance of the new
GDALNoDataValuesMaskBand class will be returned. GetMaskFlags() will return GMF_NODATA | GMF_PER_DATASET.
• If the band has a nodata value set, an instance of the new
GDALNodataMaskRasterBand class will be returned. GetMaskFlags() will return GMF_NODATA.
• If there is no nodata value, but the dataset has an alpha band that seems to
apply to this band (specific rules yet to be determined) and that is of type GDT_Byte then that alpha band will be returned, and the flags GMF_PER_DATASET and GMF_ALPHA will be returned in the flags.
• If neither of the above apply, an instance of the new GDALAllValidRasterBand
class will be returned that has 255 values for all pixels. The null flags will return GMF_ALL_VALID.
Returns
source
ArchGDAL.regenerateoverviews!Method
regenerateoverviews!(band::AbstractRasterBand,
overviewbands::Vector{<:AbstractRasterBand}, resampling = "NEAREST")
Generate downsampled overviews.
This function will generate one or more overview images from a base image using the requested downsampling algorithm. Its primary use is for generating overviews via BuildOverviews(), but it can also be used to generate downsampled images in one file from another outside the overview architecture.
Parameters
• band the source (base level) band.
• overviewbands the list of downsampled bands to be generated.
Keyword Arguments
• resampling (optional) Resampling algorithm (eg. "AVERAGE"). default to "NEAREST".
• progressfunc (optional) progress report function.
• progressdata (optional) progress function callback data.
The output bands need to exist in advance.
This function will honour properly NODATA_VALUES tuples (special dataset metadata) so that only a given RGB triplet (in case of a RGB image) will be considered as the nodata value and not each value of the triplet independantly per band.
source
ArchGDAL.sampleoverviewMethod
sampleoverview(band::IRasterBand, nsamples::Integer)
Fetch best overview satisfying nsamples number of samples.
Returns the most reduced overview of the given band that still satisfies the desired number of samples nsamples. This function can be used with zero as the number of desired samples to fetch the most reduced overview. The same band as was passed in will be returned if it has not overviews, or if none of the overviews have enough samples.
source
ArchGDAL.setcolortable!Method
setcolortable!(band::AbstractRasterBand, colortable::ColorTable)
Set the raster color table.
The driver will make a copy of all desired data in the colortable. It remains owned by the caller after the call.
Parameters
• colortable color table to apply (where supported).
source
ArchGDAL.setdefaultRAT!Method
setdefaultRAT!(band::AbstractRasterBand, rat::RasterAttrTable)
Set default Raster Attribute Table.
Associates a default RAT with the band. If not implemented for the format a CPLE_NotSupported error will be issued. If successful a copy of the RAT is made, the original remains owned by the caller.
source
ArchGDAL.setunittype!Method
setunittype!(band::AbstractRasterBand, unitstring::AbstractString)
Set unit type of band to unittype.
Values should be one of "" (the default indicating it is unknown), "m" indicating meters, or "ft" indicating feet, though other nonstandard values are allowed.
source
ArchGDAL.unsafe_getcolortableMethod
unsafe_getcolortable(band::AbstractRasterBand)
Returns a clone of the color table associated with the band.
(If there is no associated color table, the original result is NULL. The original color table remains owned by the RasterBand, and can't be depended on for long, nor should it ever be modified by the caller.)
source
ArchGDAL.rasterio!Function
rasterio!(dataset::AbstractDataset, buffer::Array{<:Any, 3},
bands; <keyword arguments>)
rasterio!(dataset::AbstractDataset, buffer::Array{<:Any, 3}, bands, rows,
cols; <keyword arguments>)
rasterio!(rasterband::AbstractRasterBand, buffer::Matrix{<:Any};
<keyword arguments>)
rasterio!(rasterband::AbstractRasterBand, buffer::Matrix{<:Any}, rows,
cols; <keyword arguments>)
Read/write a region of image data from multiple bands.
This method allows reading a region of one or more RasterBands from this dataset into a buffer, or writing data from a buffer into a region of the RasterBands. It automatically takes care of data type translation if the element type (<:Any) of the buffer is different than that of the RasterBand. The method also takes care of image decimation / replication if the buffer size (xsz × ysz) is different than the size of the region being accessed (xsize × ysize).
The pxspace, linespace and bandspace parameters allow reading into or writing from various organization of buffers.
For highest performance full resolution data access, read and write on "block boundaries" as returned by blocksize(), or use the readblock!() and writeblock!() methods.
Parameters
• rows A continuous range of rows expressed as a UnitRange{<:Integer}, such as 2:9.
• cols A continuous range of columns expressed as a UnitRange{<:Integer}, such as 2:9.
• access Either GF_Read to read a region of data, or GF_Write to write a region of data.
• xoffset The pixel offset to the top left corner of the region to be accessed. It will be 0 (default) to start from the left.
• yoffset The line offset to the top left corner of the region to be accessed. It will be 0 (default) to start from the top.
• xsize The width of the region of the band to be accessed in pixels.
• ysize The height of the region of the band to be accessed in lines.
• buffer The buffer into which the data should be read, or from which it should be written. It must contain ≥ xsz * ysz * <# of bands> words of type eltype(buffer). It is organized in left to right, top to bottom pixel order. Spacing is controlled by the pxspace, and linespace parameters
• xsz The width of the buffer into which the desired region is to be read, or from which it is to be written.
• ysz The height of the buffer into which the desired region is to be read, or from which it is to be written.
• bands The list of bands (1-based) to be read/written.
• pxspace The byte offset from the start of a pixel value in the buffer to the start of the next pixel value within a scanline. By default (i.e., 0) the size of eltype(buffer) will be used.
• linespace The byte offset from the start of one scanline in pBuffer to the start of the next. By default (i.e., 0) the value of sizeof(eltype(buffer)) * xsz will be used.
• bandspace The byte offset from the start of one bands data to the start of the next. By default (0), it will be linespace * ysz implying band sequential organization of the buffer.
Returns
CE_Failure if the access fails, otherwise CE_None.
source
ArchGDAL.readblock!Method
readblock!(rb::AbstractRasterBand, xoffset::Integer, yoffset::Integer,
buffer)
Read a block of image data efficiently.
This method accesses a "natural" block from the raster band without resampling, or data type conversion. For a more generalized, but potentially less efficient access use RasterIO().
Parameters
• xoffset the horizontal block offset, with zero indicating the left most block, 1 the next block and so forth.
• yoffset the vertical block offset, with zero indicating the top most block, 1 the next block and so forth.
• buffer the buffer into which the data will be read. The buffer must be large enough to hold GetBlockXSize()*GetBlockYSize() words of type GetRasterDataType().
source
ArchGDAL.writeblock!Method
writeblock!(rb::AbstractRasterBand, xoffset::Integer, yoffset::Integer,
buffer)
Write a block of image data efficiently.
This method accesses a "natural" block from the raster band without resampling, or data type conversion. For a more generalized, but potentially less efficient access use RasterIO().
Parameters
• xoffset the horizontal block offset, with zero indicating the left most block, 1 the next block and so forth.
• yoffset the vertical block offset, with zero indicating the left most block, 1 the next block and so forth.
• buffer the buffer from which the data will be written. The buffer must be large enough to hold GetBlockXSize()*GetBlockYSize() words of type GetRasterDataType().
source
## Spatial Projections
ArchGDAL.crs2transformMethod
crs2transform(f::Function, sourcecrs::GeoFormat, targetcrs::GeoFormat;
kwargs...)
Run the function f on a coord transform generated from the source and target crs definitions. These can be any GeoFormat (from GeoFormatTypes) that holds a coordinate reference system.
kwargs are passed through to importCRS.
source
ArchGDAL.getattrvalueMethod
getattrvalue(spref::AbstractSpatialRef, name::AbstractString, i::Integer)
Fetch indicated attribute of named node.
This method uses GetAttrNode() to find the named node, and then extracts the value of the indicated child. Thus a call to getattrvalue(spref,"UNIT",1) would return the second child of the UNIT node, which is normally the length of the linear unit in meters.
Parameters name the tree node to look for (case insensitive). i the child of the node to fetch (zero based).
Returns the requested value, or nothing if it fails for any reason.
source
ArchGDAL.importCRS!Function
importCRS!(spref::AbstractSpatialRef, x::GeoFormatTypes.GeoFormat)
Import a coordinate reference system from a GeoFormat into the spatial ref.
source
ArchGDAL.importCRSMethod
importCRS(x::GeoFormatTypes.GeoFormat; [order=:compliant])
Import a coordinate reference system from a GeoFormat into GDAL, returning an ArchGDAL.AbstractSpatialRef.
Keyword Arguments
• order: Sets the axis mapping strategy. :trad will use traditional lon/lat axis ordering in any actions done with the crs. :compliant (the default) will use axis ordering compliant with the relevant CRS authority.
source
ArchGDAL.importEPSG!Method
importEPSG!(spref::AbstractSpatialRef, code::Integer)
Initialize SRS based on EPSG GCS or PCS code.
This method will initialize the spatial reference based on the passed in EPSG GCS or PCS code. It is relatively expensive, and generally involves quite a bit of text file scanning. Reasonable efforts should be made to avoid calling it many times for the same coordinate system.
This method is similar to importFromEPSGA() except that EPSG preferred axis ordering will not be applied for geographic coordinate systems. EPSG normally defines geographic coordinate systems to use lat/long contrary to typical GIS use). Since OGR 1.10.0, EPSG preferred axis ordering will also not be applied for projected coordinate systems that use northing/easting order.
The coordinate system definitions are normally read from the EPSG derived support files such as pcs.csv, gcs.csv, pcs.override.csv, gcs.override.csv and falling back to search for a PROJ.4 epsg init file or a definition in epsg.wkt.
These support files are normally searched for in /usr/local/share/gdal or in the directory identified by the GDAL_DATA configuration option. See CPLFindFile() for details.
source
ArchGDAL.importEPSGMethod
importEPSG(code::Integer; [order=:compliant])
Construct a Spatial Reference System from its EPSG GCS or PCS code.
Keyword Arguments
• order: Sets the axis mapping strategy. :trad will use traditional lon/lat axis ordering in any actions done with the crs. :compliant, will use axis ordering compliant with the relevant CRS authority.
source
ArchGDAL.importEPSGA!Method
importEPSGA!(spref::AbstractSpatialRef, code::Integer)
Initialize SRS based on EPSG CRS code.
This method is similar to importFromEPSG() except that EPSG preferred axis ordering will be applied for geographic and projected coordinate systems. EPSG normally defines geographic coordinate systems to use lat/long, and also there are also a few projected coordinate systems that use northing/easting order contrary to typical GIS use). See importFromEPSG() for more details on operation of this method.
source
ArchGDAL.importEPSGAMethod
importEPSGA(code::Integer; [order=:compliant])
Construct a Spatial Reference System from its EPSG CRS code.
This method is similar to importFromEPSG() except that EPSG preferred axis ordering will be applied for geographic and projected coordinate systems. EPSG normally defines geographic coordinate systems to use lat/long, and also there are also a few projected coordinate systems that use northing/easting order contrary to typical GIS use). See importFromEPSG() for more details on operation of this method.
Keyword Arguments
• order: Sets the axis mapping strategy. :trad will use traditional lon/lat axis ordering in any actions done with the crs. :compliant, will use axis ordering compliant with the relevant CRS authority.
source
ArchGDAL.importESRI!Method
importESRI!(spref::AbstractSpatialRef, esristr::AbstractString)
Import coordinate system from ESRI .prj format(s).
This function will read the text loaded from an ESRI .prj file, and translate it into an OGRSpatialReference definition. This should support many (but by no means all) old style (Arc/Info 7.x) .prj files, as well as the newer pseudo-OGC WKT .prj files. Note that new style .prj files are in OGC WKT format, but require some manipulation to correct datum names, and units on some projection parameters. This is addressed within importFromESRI() by an automatic call to morphFromESRI().
Currently only GEOGRAPHIC, UTM, STATEPLANE, GREATBRITIAN_GRID, ALBERS, EQUIDISTANT_CONIC, TRANSVERSE (mercator), POLAR, MERCATOR and POLYCONIC projections are supported from old style files.
At this time there is no equivalent exportToESRI() method. Writing old style .prj files is not supported by OGRSpatialReference. However the morphToESRI() and exportToWkt() methods can be used to generate output suitable to write to new style (Arc 8) .prj files.
source
ArchGDAL.importESRIMethod
importESRI(esristr::AbstractString; kwargs...)
Create SRS from its ESRI .prj format(s).
Passing the keyword argument order=:compliant or order=:trad will set the mapping strategy to return compliant axis order or traditional lon/lat order.
source
ArchGDAL.importPROJ4!Method
importPROJ4!(spref::AbstractSpatialRef, projstr::AbstractString)
Import PROJ.4 coordinate string.
The OGRSpatialReference is initialized from the passed PROJ.4 style coordinate system string. In addition to many +proj formulations which have OGC equivalents, it is also possible to import "+init=epsg:n" style definitions. These are passed to importFromEPSG(). Other init strings (such as the state plane zones) are not currently supported.
Example: pszProj4 = "+proj=utm +zone=11 +datum=WGS84"
Some parameters, such as grids, recognized by PROJ.4 may not be well understood and translated into the OGRSpatialReference model. It is possible to add the +wktext parameter which is a special keyword that OGR recognized as meaning "embed the entire PROJ.4 string in the WKT and use it literally when converting back to PROJ.4 format".
For example: "+proj=nzmg +lat_0=-41 +lon_0=173 +x_0=2510000 +y_0=6023150 +ellps=intl +units=m +nadgrids=nzgd2kgrid0005.gsb +wktext"
source
ArchGDAL.importPROJ4Method
importPROJ4(projstr::AbstractString; [order=:compliant])
Create SRS from its PROJ.4 string.
Keyword Arguments
• order: Sets the axis mapping strategy. :trad will use traditional lon/lat axis ordering in any actions done with the crs. :compliant, will use axis ordering compliant with the relevant CRS authority.
source
ArchGDAL.importURL!Method
importURL!(spref::AbstractSpatialRef, url::AbstractString)
Set spatial reference from a URL.
This method will download the spatial reference at a given URL and feed it into SetFromUserInput for you.
source
ArchGDAL.importURLMethod
importURL(url::AbstractString; [order=:compliant])
Construct SRS from a URL.
This method will download the spatial reference at a given URL and feed it into SetFromUserInput for you.
Keyword Arguments
• order: Sets the axis mapping strategy. :trad will use traditional lon/lat axis ordering in any actions done with the crs. :compliant, will use axis ordering compliant with the relevant CRS authority.
source
ArchGDAL.importWKT!Method
importWKT!(spref::AbstractSpatialRef, wktstr::AbstractString)
Import from WKT string.
This method will wipe the existing SRS definition, and reassign it based on the contents of the passed WKT string. Only as much of the input string as needed to construct this SRS is consumed from the input string, and the input string pointer is then updated to point to the remaining (unused) input.
source
ArchGDAL.importWKTMethod
importWKT(wktstr::AbstractString; [order=:compliant])
Create SRS from its WKT string.
Keyword Arguments
• order: Sets the axis mapping strategy. :trad will use traditional lon/lat axis ordering in any actions done with the crs. :compliant, will use axis ordering compliant with the relevant CRS authority.
source
ArchGDAL.importXMLMethod
importXML(xmlstr::AbstractString; [order=:compliant])
Construct SRS from XML format (GML only currently).
Passing the keyword argument order=:compliant or order=:trad will set the mapping strategy to return compliant axis order or traditional lon/lat order.
Keyword Arguments
• order: Sets the axis mapping strategy. :trad will use traditional lon/lat axis ordering in any actions done with the crs. :compliant, will use axis ordering compliant with the relevant CRS authority.
source
ArchGDAL.morphfromESRI!Method
morphfromESRI!(spref::AbstractSpatialRef)
Convert in place from ESRI WKT format.
The value notes of this coordinate system are modified in various manners to adhere more closely to the WKT standard. This mostly involves translating a variety of ESRI names for projections, arguments and datums to "standard" names, as defined by Adam Gawne-Cain's reference translation of EPSG to WKT for the CT specification.
Missing parameters in TOWGS84, DATUM or GEOGCS nodes can be added to the WKT, comparing existing WKT parameters to GDAL's databases. Note that this optional procedure is very conservative and should not introduce false information into the WKT definition (although caution should be advised when activating it). Needs the Configuration Option GDAL_FIX_ESRI_WKT be set to one of the following (TOWGS84 recommended for proper datum shift calculations)
GDAL_FIX_ESRI_WKT values:
• TOWGS84 Adds missing TOWGS84 parameters (necessary for datum transformations), based on named datum and spheroid values.
• DATUM Adds EPSG AUTHORITY nodes and sets SPHEROID name to OGR spec.
• GEOGCS Adds EPSG AUTHORITY nodes and sets GEOGCS, DATUM and SPHEROID names to OGR spec. Effectively replaces GEOGCS node with the result of importFromEPSG(n), using EPSG code n corresponding to the existing GEOGCS. Does not impact PROJCS values.
source
ArchGDAL.morphtoESRI!Method
morphtoESRI!(spref::AbstractSpatialRef)
Convert in place to ESRI WKT format.
The value nodes of this coordinate system are modified in various manners more closely map onto the ESRI concept of WKT format. This includes renaming a variety of projections and arguments, and stripping out nodes note recognised by ESRI (like AUTHORITY and AXIS).
source
ArchGDAL.newspatialrefFunction
newspatialref(wkt::AbstractString = ""; order=:compliant)
Construct a Spatial Reference System from its WKT.
Keyword Arguments
• order: Sets the axis mapping strategy. :trad will use traditional lon/lat axis ordering in any actions done with the crs. :compliant, will use axis ordering compliant with the relevant CRS authority.
source
ArchGDAL.reprojectMethod
reproject(points, sourceproj::GeoFormat, destproj::GeoFormat;
[order=:compliant])
Reproject points to a different coordinate reference system and/or format.
Arguments
• coord: Vector of Geometry points
• sourcecrs: The current coordinate reference system, as a GeoFormat
• targetcrs: The coordinate reference system to transform to, using any CRS capable GeoFormat
Keyword Arguments
• order: Sets the axis mapping strategy. :trad will use traditional lon/lat axis ordering in any actions done with the crs. :compliant (the default) will use axis ordering compliant with the relevant CRS authority.
Example
julia> using ArchGDAL, GeoFormatTypes
julia> ArchGDAL.reproject(
[[118, 34], [119, 35]],
ProjString("+proj=longlat +datum=WGS84 +no_defs"),
EPSG(2025)
)
2-element Array{Array{Float64,1},1}:
[-2.60813482878655e6, 1.5770429674905164e7]
[-2.663928675953517e6, 1.56208905951487e7]
source
ArchGDAL.setattrvalue!Method
setattrvalue!(spref::AbstractSpatialRef, path::AbstractString,
value::AbstractString)
Set attribute value in spatial reference.
Missing intermediate nodes in the path will be created if not already in existence. If the attribute has no children one will be created and assigned the value otherwise the zeroth child will be assigned the value.
Parameters
• path: full path to attribute to be set. For instance "PROJCS|GEOGCS|UNIT".
• value: (optional) to be assigned to node, such as "meter". This may be left out if you just want to force creation of the intermediate path.
source
ArchGDAL.toWKTMethod
toWKT(spref::AbstractSpatialRef, simplify::Bool)
Convert this SRS into a nicely formatted WKT string for display to a person.
Parameters
• spref: the SRS to be converted
• simplify: true if the AXIS, AUTHORITY and EXTENSION nodes should be stripped off.
source
ArchGDAL.toXMLMethod
toXML(spref::AbstractSpatialRef)
Export coordinate system in XML format.
Converts the loaded coordinate reference system into XML format to the extent possible. LOCALCS coordinate systems are not translatable. An empty string will be returned along with OGRERRNONE.
source
ArchGDAL.transform!Method
transform!(xvertices, yvertices, zvertices, obj::CoordTransform)
Transform points from source to destination space.
Parameters
• xvertices array of nCount X vertices, modified in place.
• yvertices array of nCount Y vertices, modified in place.
• zvertices array of nCount Z vertices, modified in place.
Returns
true on success, or false if some or all points fail to transform.
source
ArchGDAL.unsafe_createcoordtransMethod
unsafe_createcoordtrans(source::AbstractSpatialRef,
target::AbstractSpatialRef)
Create transformation object.
Parameters
• source: source spatial reference system.
• target: target spatial reference system.
Returns
NULL on failure or a ready to use transformation object.
source
## Geo Transformations
ArchGDAL.applygeotransformMethod
applygeotransform(geotransform::Vector{Float64}, pixel::Float64,
line::Float64)
Apply GeoTransform to x/y coordinate.
Applies the following computation, converting a (pixel,line) coordinate into a georeferenced (geo_x,geo_y) location.
geo_x = geotransform[1] + pixel*geotransform[2] + line*geotransform[3]
geo_y = geotransform[4] + pixel*geotransform[5] + line*geotransform[6]
Parameters
• geotransform Six coefficient GeoTransform to apply.
• pixel input pixel position.
• line input line position.
source
ArchGDAL.composegeotransform!Method
composegeotransform!(gt1::Vector{Float64}, gt2::Vector{Float64},
gtout::Vector{Float64})
Compose two geotransforms.
The resulting geotransform is the equivalent to padfGT1 and then padfGT2 being applied to a point.
Parameters
• gt1 the first geotransform, six values.
• gt2 the second geotransform, six values.
• gtout the output geotransform, six values.
source
ArchGDAL.invgeotransform!Method
invgeotransform!(gt_in::Vector{Float64}, gt_out::Vector{Float64})
Invert Geotransform.
This function will invert a standard 3x2 set of GeoTransform coefficients. This converts the equation from being pixel to geo to being geo to pixel.
Parameters
• gt_in Input geotransform (six doubles - unaltered).
• gt_out Output geotransform (six doubles - updated).
Returns
gt_out
source
## Utilities
ArchGDAL.gdalinfoFunction
gdalinfo(dataset::AbstractDataset, options = String[])
List various information about a GDAL supported raster dataset.
Parameters
• dataset: The source dataset.
• options: List of options (potentially including filename and open options). The accepted options are the ones of the gdalinfo utility.
Returns
String corresponding to the information about the raster dataset.
source
ArchGDAL.unsafe_gdalbuildvrtFunction
unsafe_gdalbuildvrt(
datasets::Vector{<:AbstractDataset},
options = String[];
dest = "/vsimem/tmp")
Build a VRT from a list of datasets.
Parameters
• datasets: The list of input datasets.
• options: List of options (potentially including filename and open options). The accepted options are the ones of the gdalbuildvrt utility.
Returns
The output dataset.
source
ArchGDAL.unsafe_gdaldemFunction
unsafe_gdaldem(
dataset::AbstractDataset,
processing::String,
options = String[];
dest = "/vsimem/tmp",
colorfile)
Tools to analyze and visualize DEMs.
Parameters
• dataset: The source dataset.
• pszProcessing: the processing to apply (one of "hillshade", "slope", "aspect", "color-relief", "TRI", "TPI", "Roughness").
• options: List of options (potentially including filename and open options). The accepted options are the ones of the gdaldem utility.
Keyword Arguments
• colorfile: color file (mandatory for "color-relief" processing, should be empty otherwise).
Returns
The output dataset.
source
ArchGDAL.unsafe_gdalgridFunction
unsafe_gdalgrid(
dataset::AbstractDataset,
options = String[];
dest = "/vsimem/tmp")
Create a raster from the scattered data.
Parameters
• dataset: The source dataset.
• options: List of options (potentially including filename and open options). The accepted options are the ones of the gdal_grid utility.
Returns
The output dataset.
source
ArchGDAL.unsafe_gdalnearblackFunction
unsafe_gdalnearblack(
dataset::AbstractDataset,
options = String[];
dest = "/vsimem/tmp")
Convert nearly black/white borders to exact value.
Parameters
• dataset: The source dataset.
• options: List of options (potentially including filename and open options). The accepted options are the ones of the nearblack utility.
Returns
The output dataset.
source
ArchGDAL.unsafe_gdalrasterizeFunction
unsafe_gdalrasterize(
dataset::AbstractDataset,
options = String[];
dest = "/vsimem/tmp")
Burn vector geometries into a raster.
Parameters
• dataset: The source dataset.
• options: List of options (potentially including filename and open options). The accepted options are the ones of the gdal_rasterize utility.
Returns
The output dataset.
source
ArchGDAL.unsafe_gdaltranslateFunction
unsafe_gdaltranslate(
dataset::AbstractDataset,
options = String[];
dest = "/vsimem/tmp")
Convert raster data between different formats.
Parameters
• dataset: The dataset to be translated.
• options: List of options (potentially including filename and open options). The accepted options are the ones of the gdal_translate utility.
Returns
The output dataset.
source
ArchGDAL.unsafe_gdalvectortranslateFunction
unsafe_gdalvectortranslate(
datasets::Vector{<:AbstractDataset},
options = String[];
dest = "/vsimem/tmp")
Convert vector data between file formats.
Parameters
• datasets: The list of input datasets (only 1 supported currently).
• options: List of options (potentially including filename and open options). The accepted options are the ones of the ogr2ogr utility.
Returns
The output dataset.
source
ArchGDAL.unsafe_gdalwarpFunction
unsafe_gdalwarp(
datasets::Vector{<:AbstractDataset},
options = String[];
dest = "/vsimem/tmp")
Image reprojection and warping function.
Parameters
• datasets: The list of input datasets.
• options: List of options (potentially including filename and open options). The accepted options are the ones of the gdalwarp utility.
Returns
The output dataset.
source
## Format Drivers
ArchGDAL.copyfilesFunction
copyfiles(drv::Driver, new::AbstractString, old::AbstractString)
copyfiles(drvname::AbstractString, new::AbstractString, old::AbstractString)
Copy all the files associated with a dataset.
source
ArchGDAL.extensiondriverMethod
extensiondriver(filename::AbstractString)
Returns a driver shortname that matches the filename extension.
So extensiondriver("/my/file.tif") == "GTiff".
source
ArchGDAL.extensionsMethod
extensions()
Returns a Dict{String,String} of all of the file extensions that can be read by GDAL, with their respective drivers' shortnames.
source
ArchGDAL.identifydriverMethod
identifydriver(filename::AbstractString)
Identify the driver that can open a raster file.
This function will try to identify the driver that can open the passed filename by invoking the Identify method of each registered Driver in turn. The first driver that successful identifies the file name will be returned. If all drivers fail then NULL is returned.
source
ArchGDAL.validateMethod
validate(drv::Driver, options::Vector{<:AbstractString})
Validate the list of creation options that are handled by a drv.
This is a helper method primarily used by create() and copy() to validate that the passed in list of creation options is compatible with the GDAL_DMD_CREATIONOPTIONLIST metadata item defined by some drivers.
Parameters
• drv the handle of the driver with whom the lists of creation option must be validated
• options the list of creation options. An array of strings, whose last element is a NULL pointer
Returns
true if the list of creation options is compatible with the create() and createcopy() method of the driver, false otherwise.
See also: options(drv::Driver)
If the GDAL_DMD_CREATIONOPTIONLIST metadata item is not defined, this function will return $true$. Otherwise it will check that the keys and values in the list of creation options are compatible with the capabilities declared by the GDAL_DMD_CREATIONOPTIONLIST` metadata item. In case of incompatibility a (non fatal) warning will be emited and $false$ will be returned.
source
|
2022-05-19 11:11:16
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.20510965585708618, "perplexity": 5082.640960319906}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662527626.15/warc/CC-MAIN-20220519105247-20220519135247-00351.warc.gz"}
|
https://de.zxc.wiki/wiki/Tapire
|
# Tapirs
Tapirs
Central American tapir ( Tapirus bairdii )
Systematics
Class : Mammals (mammalia) Subclass : Higher mammals (Eutheria) Superordinate : Laurasiatheria Order : Unpaired ungulate (Perissodactyla) Family : Tapirs Genre : Tapirs
Scientific name of the family
Tapiridae
JE Gray , 1821
Scientific name of the genus
Tapirus
Brisson , 1762
The tapirs ( Tapirus ) are the only recent genus of the mammalian family of the same name in German (Tapiridae) from the order of the odd ungulate (Perissodactyla). The genus was once very diverse and today still includes five living species. These are animals with a strong physique and a characteristic short trunk that live mainly in closed tropical forests and feed on mostly soft vegetable foods. They are a very old genus and have been recorded as early as the Middle Miocene 14 million years ago. Today, the tapirs are with the lowland and the mountain and the Kabomani tapir in South America , with the Central American Tapir in Central America and the Malayan tapir in Southeast Asia spread.
## anatomy
### Appearance
Tapirs are distantly pig-like animals; however, their closest relatives are horses and rhinos . The animals reach a head-trunk length of over 100 to 250 cm, the tail is a short stub 5 to 13 cm in length, the shoulder height is 73 to 120 cm. Adults reach a weight of 110 to 320 kg - the largest recent representative is the black-backed tapir ( Tapirus indicus ). Fossil appeared with Tapirus augustus , also known as Megatapirus , an even larger species that exceeded the black-backed tapir in all measurement features by 25%. Very small extinct species like Tapirus polkensis also reached a weight of only 110 to 140 kg.
The plump, clumsy body of these animals is pointed at the front and rounded at the back, which makes it easier for them to move forward in dense forests. The fur of the American species is brownish-gray in color, while the Southeast Asian saddleback tapir is characterized by a conspicuous, black and white coloration. The head looks relatively small compared to the body. The eyes are small, the ears oval and erect and very mobile. In some species the tips are colored white. The trunk formed from the upper lip and nose is characteristic . The legs are comparatively short and slender, as with all odd-toed ungulates, the main axis runs through the third toe, which is also the largest. Four toes are formed on each of the front legs, with the three forward-pointing ones being the most developed, the outer one being reduced in length, the hind feet have three toes.
### Skull and dentition features
Skull of a Central American tapir
Typical molar of a tapir with two transverse melting strips
The skull of the tapirs is usually elongated and flat. Characteristic of the South American species is the crest on the middle of the roof of the skull, which is formed by the inner ends of the parietal bones . The Central American tapir ( Tapirus bairdii ) and the black-backed tapir do not have any formed crests. They have a bony elevation ( parasagittal back ) on each side of the parietal bones, approximately at the level of the brain chamber , between which a narrow flat plane is formed. The occiput is rather short and rectangular. The nasal bone has only a weak expression and is quite short. Typically for tapirs, it lies far behind and above the middle jawbone and is not connected to it, so that a very large interior space is created. The entire front face area is greatly reduced. This was necessary to make room for the elaborate muscles of the trunk.
The tapir's teeth are hardly reduced and are similar to those of the early mammals. Adult animals have the following tooth formula : So there are a total of 42 to 44 teeth. The incisors are small and conical, except for the upper third ( I 3), which is significantly enlarged. The canines are also conical, the lower one is very large, while the upper one is significantly smaller. Thus, the lower canine and upper external incisor form an effective bite tool. The front teeth are separated from the molars by a diastema . The premolars are similar in shape to the molars , so they are clearly molarized. As with all odd-toed ungulates, the molars are characterized by two transverse enamel ridges on the chewing surface ( bilophodont ). Furthermore, the teeth are low-crowned and have relatively little dental cement , so they are equipped for soft plant food. In these characteristics, the fossil tapir species do not differ from one another, although the premolars are partly molarized to different degrees. ${\ displaystyle {\ frac {3.1.4.3} {3.1.3 (4) .3}}}$
### Trunk
Pleading saddlecloth tapir with a clearly visible trunk
The nose and upper lip have grown together to form a small proboscis that the animals use to track down and take their food from the leaves. The black-backed tapir has the longest trunk and the flatland tapir ( Tapirus terrestris ) the shortest . Since the fossil tapirs have a similar skull structure, this trunk formation is to be regarded as typical for the genus. As with the elephants , the trunk is a tube made entirely of muscles with two continuous nostrils, but it is significantly shorter than that of the trunk animals. It does not have any bony substructure, the attachment to the facial skull has been evolutively restructured , as already mentioned above, by reducing the facial bones and differs significantly from other unpaired ungulates. The high mobility of the trunk is guaranteed by three main muscle groups, which run lengthways and crossways or helically. Above all, several large facial muscles, such as the levator labii superioris and the levator nasolabialis muscles, underwent significant changes to enable the trunk to move freely. In contrast to the elephants, there was no further alteration of the skull or the teeth. The short trunk of the tapirs does not allow the elephant's trunk to be used in a variety of ways, just as the size of the usable objects is limited by the tapirs. But as tapirs, such as pushing the elephants use their trunks to foraging and plants into the mouth, and it also, snorkeling and similar use to smell, he can derogation from the trunk-like formations in other mammals such as pigs , elephant-shrews or Dikdiks than functional real proboscis ( proboscis ).
### Internal organs
Like all odd ungulates , tapirs are rectum fermenters , as most of the digestion, with the participation of numerous microorganisms , takes place in the back of the intestine . The stomach has a single cavity and is relatively small, the entire intestinal tract is up to 11 m in length, but with an appendix that is relatively small for odd-toed ungulates . The kidneys contain around three million kidney corpuscles and weigh up to 390 g, which is only a maximum of 0.5% of the body mass with two kidneys.
## Distribution area and habitat
Tapirs today have a two-part distribution area: four species live in Central and South America , where they are distributed from southern Mexico to southern Brazil and northern Argentina . The fifth species, the black-backed tapir, lives in Southeast Asia , from Myanmar to the Malay Peninsula and on Sumatra . This dichotomy of the distribution area is a relic of the originally much wider distribution. In the Miocene and Pliocene , tapirs were found in the entire Eurasian region with the exception of the Indian subcontinent and also in large parts of North America ; South America was only reached in the middle Pliocene with the closure of the Isthmus of Panama and the subsequent Great American Faun Exchange . As a result of climate changes to cooler temperatures and stronger seasonalization of the year - combined with the spread of open landscapes in the Miocene and Pliocene to the Pleistocene - the tapirs disappeared again from Europe, North Asia and North America.
The habitat of the tapirs are forests, primarily tropical rainforests , but also mountain cloud forests. They depend on the proximity of water and occur from sea level to altitudes of 4500 m. Since tapirs are a conservative genus with only minor physical changes over time, this is also assumed for the fossil species.
## Way of life
### Territorial behavior
The black-backed tapir is the only species that lives in Southeast Asia
Tapirs are territorial loners; when conspecifics meet each other, they often behave very aggressively. Only during the mating season do males and females come together for a short time. The territories are between 1 and 8 km² in size, with females sometimes having larger territories, and usually consist of several sleeping, feeding and wallowing places. The boundaries and well-traveled paths are marked with feces and urine . The animals are nocturnal, during the day they withdraw into the thick undergrowth. At night they go looking for food. They move forward, holding their trunk on the ground. They often stay near bodies of water . They can swim and dive well, and mud baths are also common. In general, tapirs are very shy and cautious, in case of threat they flee into the water or take flight; if necessary, defend themselves with bites. Hearing and smell are well developed.
### Diet
Tapirs are herbivores that eat mostly soft foods. In addition to leaves, they also consume aquatic plants , buds , twigs and fruits . With their long, muscular and flexible tongues, they can also reach the leaves of thorny plants. Several hundred plant species are known that serve as the food source of the individual tapir species. Through their excretions, the animals also spread the seeds of plants on their migrations and thus represent an important ecological factor in the tropical forests. Some tapir species regularly use mineral and salt licks to neutralize the poisons that are partially absorbed through plant food and to maintain the material cycle . The tapirs' high dependence on water is also known, as they adapt their drinking behavior to local conditions and thus consume significantly more water in dry regions.
### Reproduction
Young lowland tapir with typical coat patterns
The gestation period lasts 13 to 14 months (around 390 to 410 days). As a rule, a single young animal is born, rarely two. Newborns look the same in all types of tapir: They are dark brown and have light brown to white vertical stripes that can be broken up into spots and lines. The boy spends its first week of life in a sheltered camp, after which it begins to follow its mother, who protects it from possible dangers and defends it if necessary.
After a few weeks, the coat pattern of the young begins to gradually disappear, which is completed in about half a year. From the first year of life, the young tapir looks like an adult animal. At about the same time, he is weaned and driven away by his mother. Sexual maturity occurs at around three to four years of age. In the wild, tapirs live to be around 30 years old; the highest known age of a captive tapir was 35 years.
### Enemies and enemy behavior
Natural enemies include large cats such as puma , jaguars and tigers , but also bears and crocodiles . Tapirs often flee, but can also defend themselves well with their large canine teeth. The greatest threat to the tapirs, however, is man . Attacks by the tapirs on humans only take place extremely rarely and take place when harassed.
## Systematics
### External system
The genus Tapirus represents a branch within the family of the Tapiridae , and is closely related to the extinct genera Tapiravus and Tapiriscus related. These occurred around the same time, but were on average mostly smaller than the tapirs; However, they have hardly been explored due to the few fossil finds. The closest living relatives of the Tapiridae family are the rhinos . Both lines of development separated in the middle Eocene around 47 million years ago. The Tapiridae are regarded as part of the superfamily Tapiroidea . Together with the rhinoceros superfamily Rhinocerotoidea, they form the group of Ceratomorpha , which, within the order of the odd- toed ungulates ( Perissodactyla ), faces the Hippomorpha with the horses . The horses had split off from the tapirs line 56 million years ago. In general, the odd ungulates are assigned to the parent group of Laurasiatheria .
### Internal system
Internal system of the genus Tapirus (only recent representatives) according to Price et al. 2009 and Cozzuol et al. 2013
Tapirus
Today there are five species of tapir: the lowland tapir ( Tapirus terrestris ), the mountain tapir ( Tapirus pinchaque ) and the kabomani tapir ( Tapirus kabomani ) in South America, as well as the Central American tapir ( Tapirus bairdii ) in Central America and the black-backed tapir ( Tapirus indicus ) in Southeast Asia the earth. According to molecular genetic studies, the Asian tapir separated from the line of tapirs first 21 to 23 million years ago, the Central American tapir followed shortly after 19 to 20 million years ago. The separation of the line of the South American tapir species from the Central American tapir took place around 3.1 to 3.5 million years ago. Possibly this happened on the South American continent, which the ancestral form of these three today's tapir species took after the closure of the Isthmus of Panama and the creation of a land bridge. The differentiation into the three today's tapir representatives of South America - the flatland and mountain tapir and the Kabomani tapir - did not take place until the Middle Pleistocene , 288,000 to 652,000 years ago. Together with the fossil tapir representatives of South America, they form a closer relational unit and stand out from the tapir species of North and Central America. The relationship between the Eurasian tapirs has not been sufficiently clarified.
In addition to the five recent ones, numerous fossil tapir species have been described, of which the following are valid today:
• American tapir species
Internal classification of the American tapirs (including the recent saddleback tapir) according to Cozzuol et al. 2013
Tapirus
• Eurasian tapir species
Furthermore, four of the five today's tapir species were assigned their own sub-genus, and there are also two fossil sub-genuses. The breakdown into the individual sub-genres is, however, not without controversy, since, according to some experts, it complicates the taxonomy of the genus:
• Subgenera
• Acrocodia ( saddleback tapir and Eurasian tapir species)
• Helicotapirus ( T. haysii , T. lundeliusi , T. veroensis )
• Megatapirus ( T. augustus )
• Pinchacus (mountain tapir )
• Tapirella (Central American tapir)
• Tapirus (lowland tapir)
## Tribal history
### Origins
Fossil representative of the tapir-like species: Hyrachyus minimus from the Middle Eocene (find from the Messel pit )
In tribal history, the tapirs are a very old family compared to other mammals. An early ancestor of tapir-like animals could be found in the genus Hyrachyus from the Early and Middle Eocene . In the Messel Pit in particular , a complete skeleton from around 44 million years ago has come down to us, but fossil remains have been found in both Europe and North America. Due to the very original design of the skeletal structure, the genus is placed by some experts on the basis of the superfamily of the Tapiroidea on the one hand, and the superfamily of the Rhinocerotoidea on the other. On the other hand, some groups such as the Deperetellidae with forms such as Deperetella , Teleolophus and Irenolophus or the Helaletidae , to which Heptodon , Helaletes and Colodon are assigned, are to be rated as basic members of the Tapiroidea. Some experts see the Colodon from the Upper Eocene as a representative of the tapir family, also Thuliadanta , described for the first time in 2005 based on finds from northern Canada , it could have already belonged to it. The oldest fossils, which are clearly included in the tapir family ( Tapiridae ), come from the early Oligocene in Europe and are over 30 million years old. They are usually assigned to the genus Protapirus and appeared in connection with the Grande Coupure event, an extinction phase caused by a deterioration in the climate, which caused a large exchange of fauna. Protapirus was characterized, like other early Eurasian forms, such as Paratapirus and Eotapirus , by hardly molarized premolars and much slimmer limbs and possibly already had a short proboscis. In North America, undoubted representatives of the Tapiridae family can be detected for the first time in the late Oligocene and are also assigned to Protapirus . Here, among other things, developed with Miotapirus and Nexuotapirus own early Tapirlinien.
### Miocene
Upper jaw of Tapirus priscus
The genus Tapirus first appeared in Europe in the Middle Miocene 14 million years ago. The direct ancestor is unknown, Protapirus may represent it . However, no finds from the early Miocene in western Eurasia are known, so that the genus apparently immigrated from Asia. The lack of fossils of this group of animals is known as the tapir vacuum and includes a climatically favorable phase 18 to 14 million years ago. Earlier finds of Tapirus reported as early as the Oligocene are extremely questionable. Several forms developed in Europe, the oldest is T. telleri , other important ones include T. antiquus and T. priscus . In the late Miocene, seven million years ago, the medium-sized form T. arvernensis was added. This tapir species is a regular, albeit numerically rare, representative in European fauna communities, a complete skeleton has been handed down from Camp dels Ninots in Spain , but which dates from the Pliocene . In the late Miocene and in the transition to the Pliocene, all small tapir species died out in western Eurasia and were replaced by medium to large forms. Before that, some species had already disappeared during the Central Valesium Crisis , a cold snap that led to a significant seasonalization of the climate.
In East and Southeast Asia, the genus Tapirus is only detectable in the Upper Miocene 9.5 million years ago and is largely present in the Pliocene and Pleistocene . The oldest representative is called T. yunnanensis . However, the origin of the genus is assumed to be in this region, since the genus Plesiotapirus appeared here during the tapir vacuum , which is sometimes only viewed as a side branch. In North America, tapirus appears similar to Europe in the Middle Miocene 11 million years ago, also after the tapir vacuum . T. johnsoni is one of the earliest species . Representatives of this form of tapir are fossilized from the Ash Hollow Formation in the Great Plains of Nebraska , they were killed in a catastrophic volcanic eruption . The main distribution center was the southern part of the continent, from California to Florida . Important species are also T. webbi and T. simpsoni . At the end of the Miocene, the particularly small species T. polkensis appeared.
### Pliocene and Pleistocene
Skull of Tapirus augustus
The tapirs of Europe disappeared again at the end of the Pliocene 2.7 million years ago, which is seen as a result of the cooling and stronger seasonal fluctuations in the climate and the associated spread of open landscapes. In East and Southeast Asia, however, the animals lived on, the early Miocene form, T. yunannensis , split up into several lines here. Thus evolved T. PEII about T. sinensis to T. augustus , also known Megatapirus known a horse large animal that was the greatest of all time Tapir. This line contrasts with the developmental sequence from T. sanyuanensis to T. indicus (Schabrackentapir). While most species are restricted to the Early and Middle Pleistocene, T. augustus , apart from the black-backed tapir, can also be found up to the Late Pleistocene and was possibly still to be found in the early Holocene .
In North America, the small T. polkensis is still handed down during the Pliocene . In the early Pleistocene, T. haysii and T. lundaliusi largely dominated , both were then replaced by T. veroensis . This tapir species was most likely to be found in North America until the first humans appeared, but died out shortly afterwards. The tapirs came to South America, the focus of their current distribution area, relatively late in the course of the Great American Fauna Exchange after the creation of a land bridge through the closure of the Isthmus of Panama , the oldest records here are around 2.5 million years old. The fossil South American representatives of the tapirs include T. rondoniensis , T. rioplatensis , T. oliverasi , T. tarijensis , T. cristatellus and T. mesopotamicus . All of these forms are monophyletically related and thus go back to an ancestral form. As a result, they are much closer to the lowland and mountain tapirs than to the Central American tapir.
Tapirs were and are typically inhabitants of dense forests. Therefore the expansion of large grasslands in the Neogene was not favorable for them. From the once species-rich family, only the five species today survived; the last major extinction event, to which some tapir forms also fell victim, was the Quaternary extinction wave .
## Taxonomy
Mathurin-Jacques Brisson
The word tapir comes from the Tupí language from Brazil , who called the animals Tapira-caaivara , which translates on the one hand as " bush ox ", but on the other hand also refers to the hidden way of life of the animals. The term danta or anta, which is often used especially in South America, is a borrowing from the Spanish language and originally referred to the elk. In Southeast Asia, the tapir is called badak in Malay and som-set in Thai .
Linnaeus referred the tapir to the hippopotamus in his work Systema Naturae in 1758 because of its physique and named the lowland tapir, the only tapir species known in Europe at the time, as Hippopotamus terrestris . The French naturalist Mathurin-Jacques Brisson first introduced the term tapir in French in his work Regnum animale ( le tapir ) in 1762 . The Danish zoologist Morten Thrane Brünnich first used the generic name Tapirus , which is valid today ; for a long time he was the first to describe the genus. Brünnich used the term, which he derived from Brisson's name le tapir , for the first time in 1772. The British paleontologist Arthur Tindell Hopwood proposed Brisson as the original descriptor in 1947, which then led to numerous discussions in the professional world, as Brünnich was preferred by the majority at the time. In 1998, however, a plenary meeting of the ICZN decided to define Brisson as the first to describe the genre, which is widely accepted today.
## Tapirs and people
Tapir in the zoo
In some regions, tapirs are hunted for their meat and skins, but there are also indigenous tribes that do not hunt tapirs for religious reasons. Today, hunting is not so much the reason for the decline in the numbers of four IUCN- managed tapir species as the destruction of their habitat - above all the rapid loss of tropical forests through felling and slash-and- burn . Added to this is the increasing competition with large animals used for agriculture.
The IUCN lists three of the five species, the mountain tapir , the Central American tapir and black-backed tapir , as endangered and the lowland tapir as endangered ( vulnerable ). The size of the population of the lowland tapir is unknown, the population of the mountain tapir comprises around 2500 individuals and that of the Central American tapir around 5500 animals. It is critical to the saddleback tapir, of which only 1500 to 2000 animals are accepted. There are numerous conservation projects that are coordinated by the IUCN's Tapir Specialist Group . In addition to observing the animals in national parks and other protected areas, the aim is also to relocate endangered populations , sometimes with the help of camera traps .
Tapirs, mostly lowland tapirs, are often kept in zoos. In some regions of South America, tapirs are also used as pets.
## literature
• Ronald M. Nowak: Walker's Mammals of the World . The Johns Hopkins University Press, Baltimore 1999, ISBN 0-8018-5789-9 .
• Sheryl Todd, Udo Ganslosser: The tapirs . Filander, 1997, ISBN 3-930831-41-4 .
• James Oglethorpe: Tapirs: Status, Survey, and Conservation Action Plan . IUCN, 1997, ISBN 2-8317-0422-7 .
• Stefan Seitz: Comparative studies on the behavior and display value of tapirs (Tapiridae) in zoological gardens . Cuvillier, 2001, ISBN 3-89873-201-0 .
• Sy Montgomery: "The Tapir Scientist". Houghton Mifflin, 2013, ISBN 978-0-547-81548-0 .
## Individual evidence
1. ^ A b Tapir Specialist Group: Tapir Education Brochure. ( [1] )
2. a b c d Mario A. Cozzuol, Camila L. Clozato, Elizete C. Holanda, Flávio HG Rodrigues, Samuel Nienow, Benoit de Thoisy, Rodrigo AF Redondo and Fabrício R. Santos: A new species of tapir from the Amazon. Journal of Mammalogy 94 (6), 2013, pp. 1331-1345 ( [2] )
3. a b Tong Haowen: Dental characters of the Quaternary tapirs in China, their significance in classification and phylogenetic assessment. Geobios 38, 2005, pp. 139-150
4. Richard C. Hulbert Jr., Steven C. Wallace, Walter E. Klippel, and Paul W. Parmalee: Cranial Morphology and Systematics of an Extraordinary Sample of the Late Neogene Dwarf Tapir, Tapirus polkensis (Olsen). Journal of Paleontology 83 (2), 2009, pp. 238-262
5. ^ Luke T. Holbrook: The unusual development of the sagittal crest in the Brazilian tapir (Tapirus terrestris). Journal of Zoology 256, 2002, pp. 215-219
6. ^ A b Luke T. Holbrook: Comparative osteology of early Tertiary tapiromorphs (Mammalia, Perissodactyla). Zoological Journal of the Linnean Society 132, 2001, pp. 1-54
7. a b Lawrence M. Witmer, Scott D. Sampson and Nikos Solounias: The proboscis of tapirs (Mammalia: Perissodactyla): a case study in novel narial anatomy. Journal of Zoology 249, 1999, pp. 249-267
8. a b c Richard C. Hulbert Jr .: A new Early Pleistocene tapir (Mammalia, Perissodactyla) from Florida, with a review from Blancan tapirs from the state. Bulletin of the Florida Museum of Natural History 49 (3), 2010, pp. 67-126
9. a b c d Richard C. Hulbert Jr .: Late Miocene Tapirus (Mammalia, Perissodactyla) from Florida, with description of new species Tapirus webbi. Bulletin of the Florida. Museum of Natural History 45 (4), 2005, pp. 465-494
10. ^ Antoni V. Milewski and Ellen S. Dierenfeld: Structural and functional comparison of the proboscis between tapirs and other extant and extinct vertebrates. Integrative Zoology 8, 2013, pp. 84-94
11. Miguel Padilla, Robert C. Dowler and Craig Downer: Tapirus pinchaque (Perissodactyla: Tapiridae). Mammalian Species 42 (863), 2010; Pp. 166-182
12. Miguel Padilla and Robert C. Dowler: Tapirus terrestris. Mammalian Species 481, 1994, pp. 1-8
13. ^ NSR Maluf: The Kidney of Tapirs: A Macroscopical Study. The Anatomy Record 231, 1991, pp. 48-62
14. Jan van der Made and Ivano Stefanovic: A small tapir from the Turolian of Kreka (Bosnia) and a discussion on the biogeography and stratigraphy of the Neogene tapirs. New yearbook for geology and paleontology (monthly books), 240 (2), 2006, pp. 207–240
15. a b c d Kurt Heissig: Family Rhinocerotidae. In: Gertrud E. Rössner and Kurt Heissig (eds.): The Miocene land mammals of Europe. Munich, 1999, pp. 175-188
16. Fabio Olmos, Renata Pardini, Ricardo LP Boulhosa, Roberto Burgi and Carla Morsello: Do Tapirs Steal Food from Palm Seed Predators or Give Them a Lift? Biotropica 31 (2), 1999, pp. 375-379
17. ^ Igor Pfeifer Coelho, Luiz Flamarion B. Oliveira, Maria Elaine Oliveira and José Luís P. Cordeiro: The Importance of Natural Licks in Predicting Lowland Tapir (Tapirus terrestris, Linnaeus 1758) Occurrence in the Brazilian Pantanal. Tapir Conservation 17 (2), 2008, pp. 5-10
18. Larisa G. DeSantis: Stable Isotope Ecology of Extant Tapirs from the Americas. Biotropica 43 (6), 2011, pp. 746-754
19. Christelle Tougard, Thomas Delefosse, Catherine Hänni and Claudine Montgelard: Phylogenetic Relationships of the Five Extant Rhinoceros Species (Rhinocerotidae, Perissodactyla) Based on Mitochondrial Cytochrome b and 12S rRNA Genes. Molecular Phylogenetics and Evolution 19, 2001, pp. 34-44
20. Samantha A. Price and Olaf RP Bininda-Emonds: A comprehensive phylogeny of extant horses, rhinos and tapirs (Perissodactyla) through data combination. Zoosystematics and Evolution 85 (2), 2009, pp. 277-292
21. ^ Mary V. Ashley, Jane E. Norman and Larissa Stross: Phylogenetic Analysis of the Perissodactylan Family Tapiridae Using Mitochondrial Cytochrome c Oxidase (COII) Sequences. Journal of Mammalian Evolution, 3 (4), 1996, pp. 315-326
22. Jane E. Norman and Mary V. Ashley: Phylogenetics of Perissodactyla and Tests of the Molecular Clock. Journal of Molecular Evolution 50, 2000, pp. 11-21
23. ^ A b Brenda S. Ferrero and Jorge I. Noriega: A new Upper Pleistocene tapir from Argentinia: Remarks on the phylogenetics and diversification on neotropical Tapiridae. Journal of Vertebrate Paleontology, 27 (2), 2007, pp. 504-511
24. Elizete C. Holanda, Jorge Ferigolo and Ana Maria Ribeiro: New Tapirus species (Mammalia: Perissodactyla: Tapiridae) from the upper Pleistocene of Amazonia, Brazil. Journal of Mammalogy 92 (1), 2011, pp. 111-120
25. Kerstin Hlawatsch and Jörg Erfurt: Tooth morphology and stratigraphic distribution of Hyrachyus minimus (Perissodactyla, Mammalia) in the Eocene Geiseltalschichten. In: Jörg Erfurt and Lutz Christian Maul (eds.): 34th meeting of the working group for vertebrate palaeontology of the paleontological society March 16 to March 18, 2007 in Freyburg / Unstrut. Hallesches Jahrbuch für Geoswissenschaften 23, 2007, pp. 161–173
26. ^ A b c d Robert M. Schoch: A review of the Tapiroids. In: Donald R. Prothero and RM Schoch (Eds.): The evolution of the Perissodactyls. New-York, 1989, pp. 298-320
27. Bin Bai, Jin Meng, Fang-Yuan Mao, Zhao-Qun Zhang and Yuan Qing Wang: A new early Eocene deperetellid tapiroid illuminates the origin of Deperetellidae and the pattern of premolar molarization in Perissodactyla. PLoS ONE 14 (11), 2019, p. E0225045, doi: 10.1371 / journal.pone.0225045
28. Jaelyn J. Eberle: A new 'tapir' from Ellesmere Island, Arctic Canada - Implications for northern high latitude palaeobiogeography and tapir palaeobiology. Palaeogeography, Palaeoclimatology, Palaeoecology 227, 2005, pp. 311-322
29. a b Elizete Celestino Holanda and Brenda Soledad Ferrero: Reappraisal of the Genus Tapirus (Perissodactyla, Tapiridae): Systematics and Phylogenetic Affinities of the South American Tapirs. Journal of Mammal Evolution 2012, doi: 10.1007 / s10914-012-9196-z
30. ^ A b Matthew Colbert: New Fossil Discoveries and the History of Tapirus. Tapir Conservation 16 (2), 2007, pp. 12-14
31. Dale A. Russell, Fredrick J. Rich, Vincent Schneider and Jean Lynch-Stieglitz: A warm thermal enclave in the Late Pleistocene of the South-eastern United States. Biological Reviews 84, 2009, pp. 173-202
32. Keith Williams: The Malayan tapir (Tapirus indicus). on the homepage of the IUCN Tapir Specialist Group ( [3] ), last accessed on May 12, 2019.
33. ^ Philip Hershkovitz: Mammals of Northern Colombia, preliminary report No. 7: Tapirs (genus Tapirus), with a systematic review of American species. Proceedings of the United States National Museum Smithonian Institution 103, No 3329, 1954, pp. 465-496
34. A. Naveda, B. de Thoisy, C. Richard-Hansen, DA Torres, L. Salas, R. Wallance, S. Chalukian and S. de Bustos: Tapirus terrestris. In: IUCN 2011. IUCN Red List of Threatened Species. Version 2011.2 ( [4] )
35. ^ AG Diaz, A. Castellanos, C. Piñeda, C. Downer, DJ Lizcano, E. Constantino, JA Suárez Mejía, J. Camancho, J. Darria, J. Amanzo, J. Sánchez, J. Sinisterra Santana, L. Ordoñez Delgado, LA Espino Castellanos andO. L. Montenegro: Tapirus pinchaque. IUCN Red List of Threatened Species. Version 2011.2 ( [5] )
36. Jump up A. Castellanos, C. Foerster, DJ Lizcano, E. Naranjo, E. Cruz-Aldan, I, Lira-Torres, R. Samudio, S. Matola, J. Schipper and J. Gonzalez-Maya: Tapirus bairdii. In: IUCN: IUCN Red List of Threatened Species. Version 2011.2. ( [6] )
37. A. Lynam, C. Traeholt, D. Martyr, J. Holden, K. Kawanishi, NJ van Strien and W. Novarino: Tapirus indicus. In: IUCN: IUCN Red List of Threatened Species. Version 2011.2. , 2011 ( [7] )
38. ^ Tapir Specialist Group: Tapir Action Plans. ( [8] )
|
2023-03-31 03:00:43
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 1, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4280411899089813, "perplexity": 8539.513854157307}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296949533.16/warc/CC-MAIN-20230331020535-20230331050535-00447.warc.gz"}
|
https://ask.cvxr.com/t/generalized-eigenvalue-problem/4066
|
# Generalized eigenvalue problem
Is there any way to solve generalized eigenvalue problem in CVX?
If you are just trying to “solve” the generalized eigenvalue problem, use the form of MATLAB’s eig with two input arguments eig(A,B). This does not involve CVX.
If you wish to “involve” CVX in the process, you can put cvx_begin before eig and cvx_end after eig. Then in some meaningless sense, you can say the problem was solved “in” CVX.
Do you have something specific you are trying to accomplish? Do you have a real problem you are trying to solve, or is this for a school assignment?
I am trying to solve the following problem
minimize \lambda
subject to
XA+A’X < 2*\lambda*X
X > 0
Where X and \lambda are unknown.
I can solve this problem in LMI toolbox using gevp solver. I want to solve this problem in CVX toolbox.
Your problem is quasi-convex, so you can use the bisection method described in section 4.2.5 “Quasiconvex optimization” of Boyd and Vandenberghe “Convex Optimization” https://web.stanford.edu/~boyd/cvxbook/bv_cvxbook.pdf . CVX can be used to solve the LMIs within the bisection algorithm.
Strict inequalities will be interpreted as non-strict inequalities, so because of the homogeneity, you’ll need to do something to prevent X = zero matrix from occurring as a solution (with any lamnda) So, as one way of doing so, force X to be positive definite, such as X - eye(size(x)) == semidefinite(size(X)). That won’t affect optimal lambda.
Hi,
I’d appreciate it if you shared your solution. I’m still struggling to solve a gevp in CVX toolbox.
|
2023-03-29 04:06:19
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8652364015579224, "perplexity": 985.0793865626213}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296948932.75/warc/CC-MAIN-20230329023546-20230329053546-00218.warc.gz"}
|
https://confit.readthedocs.io/en/latest/
|
# Confuse: Painless Configuration¶
Confuse is a straightforward, full-featured configuration system for Python.
## Using Confuse¶
config = confuse.Configuration('MyGreatApp', __name__)
The first parameter is required; it’s the name of your application that will be used to search the system for config files. The second parameter is optional: it’s the name of a module that will guide the search for a defaults file. Use this if you want to include a config_default.yaml file inside your package. (The included example package does exactly this.)
Now, you can access your configuration data as if it were a simple structure consisting of nested dicts and lists—except that you need to call the method .get() on the leaf of this tree to get the result as a value:
value = config['foo'][2]['bar'].get()
Under the hood, accessing items in your configuration tree builds up a view into your app’s configuration. Then, get() flattens this view into a file, performing a search through each configuration data source to find an answer. More on view later.
If you know that a configuration value should have a specific type, just pass that type to get():
int_value = config['number_of_goats'].get(int)
This way, Confuse will either give you an integer or raise a ConfigTypeError if the user has messed up the configuration. You’re safe to assume after this call that int_value has the right type. If the key doesn’t exist in any configuration file, Confuse will raise a NotFoundError. Together, catching these exceptions (both subclasses of confuse.ConfigError) lets you painlessly validate the user’s configuration as you go.
## View Theory¶
The Confuse API is based on the concept of views. You can think of a view as a place to look in a config file: for example, one view might say “get the value for key number_of_goats”. Another might say “get the value at index 8 inside the sequence for key animal_counts”. To get the value for a given view, you resolve it by calling the get() method.
This concept separates the specification of a location from the mechanism for retrieving data from a location. (In this sense, it’s a little like XPath: you specify a path to data you want and then you retrieve it.)
Using views, you can write config['animal_counts'][8] and know that no exceptions will be raised until you call get(), even if the animal_counts key does not exist. More importantly, it lets you write a single expression to search many different data sources without preemtively merging all sources together into a single data structure.
Views also solve an important problem with overriding collections. Imagine, for example, that you have a dictionary called deliciousness in your config file that maps food names to tastiness ratings. If the default configuration gives carrots a rating of 8 and the user’s config rates them a 10, then clearly config['deliciousness']['carrots'].get() should return 10. But what if the two data sources have different sets of vegetables? If the user provides a value for broccoli and zucchini but not carrots, should carrots have a default deliciousness value of 8 or should Confuse just throw an exception? With Confuse’s views, the application gets to decide.
The above expression, config['deliciousness']['carrots'].get(), returns 10 (falling back on the default). However, you can also write config['deliciousness'].get(). This expression will cause the entire user-specified mapping to override the default one, providing a dict object like {'broccoli': 7, 'zucchini': 9}. As a rule, then, resolve a view at the same granularity you want config files to override each other.
## Validation¶
We saw above that you can easily assert that a configuration value has a certain type by passing that type to get(). But sometimes you need to do more than just type checking. For this reason, Confuse provides a few methods on views that perform fancier validation or even conversion:
• as_filename(): Normalize a filename, substituting tildes and absolute-ifying relative paths. The filename is relative to the source that provided it. That is, a relative path in a config file refers to the directory containing the config file. A relative path in the defaults refers to the application’s config directory (config.config_dir(), as described below). A relative path from any other source (e.g., command-line options) is relative to the working directory.
• as_choice(choices): Check that a value is one of the provided choices. The argument should be a sequence of possible values. If the sequence is a dict, then this method returns the associated value instead of the key.
• as_number(): Raise an exception unless the value is of a numeric type.
• as_pairs(): Get a collection as a list of pairs. The collection should be a list of elements that are either pairs (i.e., two-element lists) already or single-entry dicts. This can be helpful because, in YAML, lists of single-element mappings have a simple syntax (- key: value) and, unlike real mappings, preserve order.
• as_str_seq(): Given either a string or a list of strings, return a list of strings. A single string is split on whitespace.
For example, config['path'].as_filename() ensures that you get a reasonable filename string from the configuration. And calling config['direction'].as_choice(['up', 'down']) will raise a ConfigValueError unless the direction value is either “up” or “down”.
## Command-Line Options¶
Arguments to command-line programs can be seen as just another source for configuration options. Just as options in a user-specific configuration file should override those from a system-wide config, command-line options should take priority over all configuration files.
You can use the argparse and optparse modules from the standard library with Confuse to accomplish this. Just call the set_args method on any view and pass in the object returned by the command-line parsing library. Values from the command-line option namespace object will be added to the overlay for the view in question. For example, with argparse:
args = parser.parse_args()
config.set_args(args)
Correspondingly, with optparse:
options, args = parser.parse_args()
config.set_args(options)
This call will turn all of the command-line options into a top-level source in your configuration. The key associated with each option in the parser will become a key available in your configuration. For example, consider this argparse script:
config = confuse.Configuration('myapp')
parser = argparse.ArgumentParser()
args = parser.parse_args()
config.set_args(args)
print(config['foo'].get())
This will allow the user to override the configured value for key foo by passing --foo <something> on the command line.
Overriding nested values can be accomplished by passing dots=True and have dot-delimited properties on the incoming object.:
parser.add_argument('--bar', help='nested parameter', dest='foo.bar')
args = parser.parse_args() # args looks like: {'foo.bar': 'value'}
config.set_args(args, dots=True)
print(config['foo']['bar'].get())
parse_args works with generic dictionaries too.:
args = {
'foo': {
'bar': 1
}
}
config.set_args(args, dots=True)
print(config['foo']['bar'].get())
Note that, while you can use the full power of your favorite command-line parsing library, you’ll probably want to avoid specifying defaults in your argparse or optparse setup. This way, Confuse can use other configuration sources—possibly your config_default.yaml—to fill in values for unspecified command-line switches. Otherwise, the argparse/optparse default value will hide options configured elsewhere.
## Search Paths¶
Confuse looks in a number of locations for your application’s configurations. The locations are determined by the platform. For each platform, Confuse has a list of directories in which it looks for a directory named after the application. For example, the first search location on Unix-y systems is $XDG_CONFIG_HOME/AppName for an application called AppName. Here are the default search paths for each platform: • OS X: ~/.config/app and ~/Library/Application Support/app • Other Unix: $XDG_CONFIG_HOME/app and ~/.config/app
• Windows: %APPDATA%\app where the APPDATA environment variable falls back to %HOME%\AppData\Roaming if undefined
Users can also add an override configuration directory with an environment variable. The variable name is the application name in capitals with “DIR” appended: for an application named AppName, the environment variable is APPNAMEDIR.
Confuse provides a simple helper, Configuration.config_dir(), that gives you a directory used to store your application’s configuration. If a configuration file exists in any of the searched locations, then the highest-priority directory containing a config file is used. Otherwise, a directory is created for you and returned. So you can always expect this method to give you a directory that actually exists.
As an example, you may want to migrate a user’s settings to Confuse from an older configuration system such as ConfigParser. Just do something like this:
config_filename = os.path.join(config.config_dir(),
confuse.CONFIG_FILENAME)
with open(config_filename, 'w') as f:
yaml.dump(migrated_config, f)
Occasionally, a program will need to modify its configuration while it’s running. For example, an interactive prompt from the user might cause the program to change a setting for the current execution only. Or the program might need to add a derived configuration value that the user doesn’t specify.
To facilitate this, Confuse lets you assign to view objects using ordinary Python assignment. Assignment will add an overlay source that precedes all other configuration sources in priority. Here’s an example of programmatically setting a configuration value based on a DEBUG constant:
if DEBUG:
config['verbosity'] = 100
...
my_logger.setLevel(config['verbosity'].get(int))
This example allows the constant to override the default verbosity level, which would otherwise come from a configuration file.
Assignment works be creating a new “source” for configuration data at the top of the stack. This new source takes priority over all other, previously-loaded sources. You can cause this explicitly by calling the set() method on any view. A related method, add(), works similarly but instead adds a new lowest-priority source to the bottom of the stack. This can be used to provide defaults for options that may be overridden by previously-loaded configuration files.
## YAML Tweaks¶
Confuse uses the PyYAML module to parse YAML configuration files. However, it deviates very slightly from the official YAML specification to provide a few niceties suited to human-written configuration files. Those tweaks are:
• All strings are returned as Python Unicode objects.
• YAML maps are parsed as Python OrderedDict objects. This means that you can recover the order that the user wrote down a dictionary.
• Bare strings can begin with the % character. In stock PyYAML, this will throw a parse error.
To produce a YAML file reflecting a configuration, just call config.dump(). If you supply a filename, the YAML will be written to the file; otherwise, a string is returned. This does not cleanly round-trip YAML, but it does play some tricks to preserve comments and spacing in the original file.
## Configuring Large Programs¶
One problem that must be solved by a configuration system is the issue of global configuration for complex applications. In a large program with many components and many config options, it can be unwieldy to explicitly pass configuration values from component to component. You quickly end up with monstrous function signatures with dozens of keyword arguments, decreasing code legibility and testability.
In such systems, one option is to pass a single Configuration object through to each component. To avoid even this, however, it’s sometimes appropriate to use a little bit of shared global state. As evil as shared global state usually is, configuration is (in my opinion) one valid use: since configuration is mostly read-only, it’s relatively unlikely to cause the sorts of problems that global values sometimes can. And having a global repository for configuration option can vastly reduce the amount of boilerplate threading-through needed to explicitly pass configuration from call to call.
To use global configuration, consider creating a configuration object in a well-known module (say, the root of a package). But since this object will be initialized at module load time, Confuse provides a LazyConfig object that loads your configuration files on demand instead of when the object is constructed. (Doing complicated stuff like parsing YAML at module load time is generally considered a Bad Idea.)
Global state can cause problems for unit testing. To alleviate this, consider adding code to your test fixtures (e.g., setUp in the unittest module) that clears out the global configuration before each test is run. Something like this:
config.clear()
These lines will empty out the current configuration and then re-load the defaults (but not the user’s configuration files). Your tests can then modify the global configuration values without affecting other tests since these modifications will be cleared out before the next test runs.
## Redaction¶
You can also mark certain configuration values as “sensitive” and avoid including them in output. Just set the redact flag:
config['key'].redact = True
Then flatten or dump the configuration like so:
config.dump(redact=True)
The resulting YAML will contain “key: REDACTED” instead of the original data.
|
2020-09-24 14:59:34
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.24219773709774017, "perplexity": 2620.4314400622748}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600400219221.53/warc/CC-MAIN-20200924132241-20200924162241-00132.warc.gz"}
|
https://homework.cpm.org/category/CC/textbook/cc2/chapter/4/lesson/4.3.3/problem/4-119
|
### Home > CC2 > Chapter 4 > Lesson 4.3.3 > Problem4-119
4-119.
Evaluate each expression below for the given value. That is, find the value of the expression when the variable is equal to the value given. Homework Help ✎
For each problem, substitute the given value for each variable to evaluate each expression.
1. $2a−7$ when $a=3$
$2a−7$ when $a=3$
$2(3)−7$
$6−7=−1$
Now solve for parts (b) - (d) using the same strategy. Remember the Order of Operations: multiply or divide before adding or subtracting.
1. $10+4m$ when $m=−2$
1. $9+(−2n)$ when $n=4$
$9+(−2(4))=1$
1. $\frac{x}{2}+5$ when $x=6$
|
2019-10-23 20:56:31
|
{"extraction_info": {"found_math": true, "script_math_tex": 13, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.806830644607544, "perplexity": 1026.5676277926718}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570987836295.98/warc/CC-MAIN-20191023201520-20191023225020-00118.warc.gz"}
|
https://zbmath.org/?q=an:0708.90062&format=complete
|
# zbMATH — the first resource for mathematics
Necessary and sufficient optimality conditions for two-stage stochastic programming problems. (English) Zbl 0708.90062
In a recent paper [ibid. 24, No.3, 207-215 (1988; Zbl 0654.90062)] the author introduced partial derivatives of the cost function of a nonlinear two-stage stochastic programming problem. In this paper the previous results are used in order to present necessary and sufficient optimality conditions in the convex case. A special linear-quadratic case is also analyzed.
Reviewer: R.Lepp
##### MSC:
90C15 Stochastic programming 90C30 Nonlinear programming
Full Text:
##### References:
[1] P. Kall: Stochastic Linear Programming. Springer-Verlag, Berlin–Heidelberg–New York 1976. · Zbl 0317.90042 [2] V. Kaňková: Differentiability of the optimalized function in a two-stage stochastic nonlinear programming problem. Ekonomicko-matematický obzor 14 (1978), 3, 322-330. In Czech. [3] V. Kaňková: An approximative solution of a stochastic optimization problem. Trans. of the Eighth Prague Conference, Academia, Prague 1978, pp. 327-332. [4] V. Kaňková: Approximative solution of problems of two-stage stochastic nonlinear programming. Ekonomicko-matematický obzor 16 (1980), 1, 64-76. In Czech. [5] V. Kaňková: A note on the differentiability in two-stage stochastic nonlinear programming problems. Kybernetika 24 (1988), 3, 207-215. · Zbl 0654.90062 · www.kybernetika.cz · eudml:28515 [6] S. Karlin: Mathematical Methods and Theory in Games, Programming, and Economics. Pergamon Press, London–Paris 1959. · Zbl 0139.12704 [7] В. Н. Пшеничный: Ю. М. Данилин: Численные методы в экстремальных задачах. Hauka, Москва 1975. · Zbl 1170.01354 [8] В. Н. Пшеничный: Необоходимые условия экстремума. Hauka, Москва 1982. · Zbl 1170.01407 [9] R. T. Rockafellar: Convex Analysis. Princeton Press, New Jersey 1970. · Zbl 0193.18401 [10] R. T. Rockafellar, R. J.-B. Wets: Stochastic convex programming: basic duality. Pacific J. Math. 62 (1976), 173-195. · Zbl 0339.90048 · doi:10.2140/pjm.1976.62.173 [11] R. T. Rockafellar, R. J.-B. Wets: The optimal recourse problem in discrete time: $$L^1$$-multiplies for inequality constraints. SIAM J. Control Optim. 16 (1978), 1, 16-36. · Zbl 0397.90078 · doi:10.1137/0316002 [12] S. Vogel: Necessary optimality conditions for two-stage stochastic programming problems. Optimization 16 (1985), 4, 607-616. · Zbl 0579.90073 · doi:10.1080/02331938508843056
This reference list is based on information provided by the publisher or from digital mathematics libraries. Its items are heuristically matched to zbMATH identifiers and may contain data conversion errors. It attempts to reflect the references listed in the original paper as accurately as possible without claiming the completeness or perfect precision of the matching.
|
2021-03-07 01:41:57
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5839087963104248, "perplexity": 4800.040898406772}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178376006.87/warc/CC-MAIN-20210307013626-20210307043626-00362.warc.gz"}
|
http://www.lastfm.com.br/user/play___dead/library/music/Radiohead/_/Bullet+Proof.+.+.+I+Wish+I+Was?setlang=pt
|
# Biblioteca
## Bullet Proof. . . I Wish I Was
17 execuções | Ir para página da faixa
Faixas (17)
Faixa Álbum Duração Data
Bullet Proof. . . I Wish I Was Mai 10 2010, 4h52
Bullet Proof. . . I Wish I Was Abr 19 2010, 2h30
Bullet Proof. . . I Wish I Was Nov 7 2009, 4h02
Bullet Proof. . . I Wish I Was Set 15 2009, 3h29
Bullet Proof. . . I Wish I Was Ago 3 2009, 1h22
Bullet Proof. . . I Wish I Was Jul 17 2009, 6h26
Bullet Proof. . . I Wish I Was Mai 2 2009, 6h29
Bullet Proof. . . I Wish I Was Abr 30 2009, 2h11
Bullet Proof. . . I Wish I Was Mar 31 2009, 4h23
Bullet Proof. . . I Wish I Was Mar 16 2009, 17h31
Bullet Proof. . . I Wish I Was Mar 16 2009, 2h23
Bullet Proof. . . I Wish I Was Mar 14 2009, 7h18
Bullet Proof. . . I Wish I Was Mar 9 2009, 0h54
Bullet Proof. . . I Wish I Was Mar 8 2009, 4h43
Bullet Proof. . . I Wish I Was Nov 22 2008, 23h27
Bullet Proof. . . I Wish I Was Nov 22 2008, 23h26
Bullet Proof. . . I Wish I Was Nov 3 2008, 5h38
|
2014-03-15 07:41:17
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8687638640403748, "perplexity": 6681.599701629309}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-10/segments/1394678696502/warc/CC-MAIN-20140313024456-00020-ip-10-183-142-35.ec2.internal.warc.gz"}
|
https://www.impan.pl/pl/wydawnictwa/czasopisma-i-serie-wydawnicze/bulletin-polish-acad-sci-math/all/56/2/85609/on-the-extension-of-certain-maps-with-values-in-spheres
|
# Wydawnictwa / Czasopisma IMPAN / Bulletin Polish Acad. Sci. Math. / Wszystkie zeszyty
## On the Extension of Certain Maps with Values in Spheres
### Tom 56 / 2008
Bulletin Polish Acad. Sci. Math. 56 (2008), 177-182 MSC: Primary 55S36; Secondary 55S35. DOI: 10.4064/ba56-2-8
#### Streszczenie
Let $E$ be an oriented, smooth and closed $m$-dimensional manifold with $m \ge 2$ and $V \subset E$ an oriented, connected, smooth and closed $(m-2)$-dimensional submanifold which is homologous to zero in $E$. Let $S^{n-2} \subset S^n$ be the standard inclusion, where $S^n$ is the $n$-sphere and $n \ge 3$. We prove the following extension result: if $h:V \to S^{n-2}$ is a smooth map, then $h$ extends to a smooth map $g:E \to S^n$ transverse to $S^{n-2}$ and with $g^{-1}(S^{n-2})=V$. Using this result, we give a new and simpler proof of a theorem of Carlos Biasi related to the \it ambiental bordism \rm question, which asks whether, given a smooth closed $n$-dimensional manifold $E$ and a smooth closed $m$-dimensional submanifold $V \subset E$, one can find a compact smooth $(m+1)$-dimensional submanifold $W \subset E$ such that the boundary of $W$ is $V$.
#### Autorzy
• Carlos BiasiDepartamento de Matemática
ICMC-USP – Campus de São Carlos
Caixa Postal 668
São Carlos, SP 13560-970, Brazil
e-mail
• Alice K. M. LibardiDepartamento de Matemática
IGCE-UNESP – Campus de Rio Claro
Rio Claro, SP 13506-700, Brazil
e-mail
• Pedro L. Q. PergherDepartamento de Matemática
Caixa Postal 676
São Carlos, SP 13565-905, Brazil
e-mail
• Stanis/law SpieżInstitute of Mathematics
|
2021-04-16 20:45:08
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8860582113265991, "perplexity": 1370.2262274319012}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038089289.45/warc/CC-MAIN-20210416191341-20210416221341-00285.warc.gz"}
|
https://fr.maplesoft.com/support/help/errors/view.aspx?path=componentLibrary%2Fhydraulics%2Frestrictions%2FNonCircularPipe
|
Non-Circular Pipe
Lossy model of a non-circular pipe
Description The Non-Circular Pipe component models a pipe with losses. The pressure drop is computed with the Darcy equation, with the friction factor determined using the Haaland approximation for turbulent flow. When the Reynolds number is greater than the maximum value for laminar flow but less than the minimum value for turbulent flow, the friction factor is determined using linear interpolation.
Equations $\mathrm{\Re }=\frac{q{\mathrm{D}}_{h}}{a\mathrm{\nu }}$ ${f}_{L}=\frac{{K}_{s}}{\mathrm{\Re }}\phantom{\rule[-0.0ex]{2.5ex}{0.0ex}}{f}_{T}={f}_{\mathrm{Colebrook}}\left({\mathrm{\Re }}_{T},\frac{\mathrm{\epsilon }}{{\mathrm{D}}_{h}}\right)$ $\mathrm{mode}=\left\{\begin{array}{cc}{\mathrm{pos}}_{\mathrm{turbulent}}& {\mathrm{\Re }}_{T}<\mathrm{\Re }\\ {\mathrm{neg}}_{\mathrm{turbulent}}& {\mathrm{\Re }}_{T}<-\mathrm{\Re }\\ {\mathrm{pos}}_{\mathrm{mixed}}\phantom{\rule[-0.0ex]{2.0ex}{0.0ex}}& {\mathrm{\Re }}_{L}<\mathrm{\Re }\\ {\mathrm{neg}}_{\mathrm{mixed}}\phantom{\rule[-0.0ex]{2.0ex}{0.0ex}}& {\mathrm{\Re }}_{L}<-\mathrm{\Re }\\ \mathrm{laminar}\phantom{\rule[-0.0ex]{3.5ex}{0.0ex}}& \mathrm{otherwise}\end{array}$ $p={p}_{A}-{p}_{B}=\frac{1}{2}L\mathrm{\rho }\frac{{\mathrm{\nu }}^{2}}{{\mathrm{D}}_{h}^{3}}\mathrm{\Re }\left\{\begin{array}{cc}{f}_{\mathrm{Colebrook}}\left(\left|\mathrm{\Re }\right|,\frac{\mathrm{\epsilon }}{{\mathrm{D}}_{h}}\right)\left|\mathrm{\Re }\right|& \mathrm{mode}={\mathrm{pos}}_{\mathrm{turbulent}}\vee \mathrm{mode}={\mathrm{neg}}_{\mathrm{turbulent}}\\ \left({f}_{L}+\frac{{f}_{T}-{f}_{L}}{{\mathrm{\Re }}_{T}-{\mathrm{\Re }}_{L}}\left(\left|\mathrm{\Re }\right|-{\mathrm{\Re }}_{L}\right)\right)\left|\mathrm{\Re }\right|& \mathrm{mode}={\mathrm{pos}}_{\mathrm{mixed}}\vee \mathrm{mode}={\mathrm{neg}}_{\mathrm{mixed}}\\ {K}_{s}& \mathrm{otherwise}\end{array}$ $q={q}_{A}=-{q}_{B}=\mathrm{\Re }A\frac{\mathrm{\nu }}{{\mathrm{D}}_{h}}$ ${f}_{\mathrm{Colebrook}}=\left(\mathrm{\Re },{\mathrm{\epsilon }}_{\mathrm{D}}\right)\to {\left(1.8{\mathrm{log}}_{10}{\left(\frac{6.9}{\mathrm{\Re }}+\left(\frac{{\mathrm{\epsilon }}_{\mathrm{D}}}{3.7}\right)\right)}^{1.11}\right)}^{-2}$
Variables
Name Units Description Modelica ID ${f}_{L}$ $1$ Friction factor with laminar flow fL ${f}_{T}$ $1$ Friction factor with turbulent flow fT $\mathrm{mode}$ Integer indicating type of flow mode $p$ $\mathrm{Pa}$ Pressure across component p $q$ $\frac{{m}^{3}}{s}$ Flow rate through component q $\mathrm{\Re }$ $1$ Reynolds number Re
Connections
Name Description Modelica ID $\mathrm{portA}$ Upstream hydraulic port portA $\mathrm{portB}$ Downstream hydraulic port portB
Parameters
Name Default Units Description Modelica ID $A$ ${\mathrm{Dh}}^{2}$ ${m}^{2}$ Cross-sectional area A ${\mathrm{D}}_{h}$ $0.01$ $m$ Hydraulic diameter Dh ${K}_{s}$ $56$ $1$ Shape-factor for cross section Ks $L$ $5$ $m$ Length of pipe L $\mathrm{\epsilon }$ $1.5·{10}^{-5}$ $m$ Height of inner surface roughness epsilon ${\mathrm{\Re }}_{L}$ $2·{10}^{3}$ $1$ Reynolds number at transition to laminar flow ReL ${\mathrm{\Re }}_{T}$ $4·{10}^{3}$ $1$ Reynolds number at transition to turbulent flow ReT
|
2023-02-06 23:28:07
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 40, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6159018278121948, "perplexity": 1915.5902875610955}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500365.52/warc/CC-MAIN-20230206212647-20230207002647-00013.warc.gz"}
|
https://lishh.github.io/publication/2016-01-01-JIS.html
|
# Dynamic game difficulty balancing in real time using Evolutionary Fuzzy Cognitive Maps with automatic calibration
Published in SBC Journal on Interactive Systems, 2016
### Abstract
Fuzzy Cognitive Maps (FCM) is a paradigm used to represent knowledge in a simple and concise way, expressing the grade of relation that exists between concepts and causal relationships. Due to its flexibility, FCM has been successfully applied in numerous applications in diverse research fields, such as, robotics, medical diagnosis, decision problems in information technology, games, and so forth. However, one critical drawback is the determination of the weights in the representation graph, which is generally done by an expert. The present paper proposes a semi-automated method for calibrating the weights in a solution for the problem of dynamic game difficulty balancing (DGB) using Evolutionary Fuzzy Cognitive Maps (E-FCM). The proposed algorithm adjusts the weights in real time, ensuring an equilibrium between the values generated according to the expert’s contribution (based on a static analysis) and the changes produced in the values of the concepts by the calibration process during the simulation (a dynamic analysis).
Recommended citation:
@Article{FuentesPerez2016a,
Title = { Dynamic game difficulty balancing in real time using Evolutionary Fuzzy Cognitive Maps with automatic calibration },
Author = { {Fuentes Perez}, Lizeth Joseline and {Romero Calla}, Luciano Romero and Montenegro, Anselmo Antunes and Valente, Luis and {Gonzalez Clua}, Esteban Walter },
Journal = { SBC Journal on Interactive Systems },
Year = { 2016 },
Note = { ISSN: 2236-3297 },
Number = { 1 },
Pages = { 38-50 },
Volume = { 7 },
Url = { http://seer.ufrgs.br/index.php/jis/article/view/63683 }
}
|
2020-06-06 15:06:34
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.20096150040626526, "perplexity": 4669.703987156202}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590348513321.91/warc/CC-MAIN-20200606124655-20200606154655-00046.warc.gz"}
|
https://mathoverflow.net/questions/331165/solution-of-equation-on-vector-field
|
# Solution of equation on vector field
I have a vector field function $$\vec{J}: {\bf R}^3\to {\bf R}^3$$ looking like:
$$\vec{J}(\vec{r}) = (\vec{B} \times \vec{v}(\vec{r}))\rho(\vec{r})$$
with a (very well behaved) real, positive, differentiable scalar function $$\rho: {\bf R}^3\to {\bf R}^+$$ and the equation should hold for all non-zero constant $${\bf R}^3$$ vectors $$\vec{B}\ne\vec{0}$$. For physical reasons I am out for the set of vector field functions $$\vec{v}(\vec{r})$$ such that
$$\nabla \cdot \vec{J}(\vec{r}) = 0,$$
i.e. $$\vec{J}$$ gets divergence free in all points $$\vec{r}$$ in $$\bf R^3$$ (one specific $$\vec{v}(\vec{r})$$ should fulfill the condition for all $$\vec{B}$$). By simplification of the equation (looking at special joices of $$\vec{B}$$) I could guess a set of solutions
$$\vec{v}_F(\vec{r})=\nabla F(\rho(\vec{r}))=f(\vec{r}) \nabla(\rho(\vec{r}))$$
i.e. effectively all vector fields that are parallel to $$\nabla \rho$$.
I conjecture that there are no other fields than $$\vec{v}_F(\vec{r})$$ such that the continuity is fulfilled. I like to proof that but fail. A "systematic" solution involves a singular linear equation system which seems to be a bit above my humble abilities to really handle with full oversight properly in a systematic manner (praying to god that this won't disqualify me to ask here), but I thought of trying something like a proof by contradiction by inserting a (scaled) component $$\vec{v}_c{(\vec{r})}$$ that is orthogonal to $$\nabla \rho$$ and assumed to be non-zero at least somewhere $$(\vec{r}_0)$$ and for all $$\vec{B}$$:
$$\vec{v}_c(\vec{r}_0) = \nabla \rho(\vec{r}_0) \times (\nabla \rho(\vec{r}_0) \times \vec{v}(\vec{r}_0))$$
But I fail to bring that to an happy end, since I end with three terms that might be non-zero but I do not know how to show that they eventually can't get zero when added up. A solution would be greatly appreciated.
(I feel that this is a border line case between MSE and MO, however since I am mostly out for the solution and since its a little but important puzzle stone in bigger research project, I finally decided to post it here)
|
2019-07-19 11:29:07
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 21, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7723716497421265, "perplexity": 316.1313047373836}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195526210.32/warc/CC-MAIN-20190719095313-20190719121313-00439.warc.gz"}
|
https://wpd.ugr.es/~geometry/seminar/en/historial/autor/364
|
The profile you are now visiting: Didier A. Solís Gamboa. Go back to Past records to show all talks or carry out a new search.
# On the geometry of the set of compact subsets of riemannian spaceforms
## Didier A. Solís Gamboa Universidad Autónoma de Yucatán, México DF
In this talk we present some interesting features of the geodesic structure of the space of compact subsets of $\mathbb{R}^n$ and $\mathbb{H}^n$ endowed with the Hausdorff metric. In particular, we show that such spaces are not spaces of curvature bounded from below. We further investigate connections between these spaces and the Hilbert cube.
Seminario 1ª Planta, IEMATH
# Didier A. Solís Gamboa
## Universidad Autónoma de Yucatán, México DF
Number of talks
1
Number of visits
1
Last visit
Country of origin
México
|
2020-07-08 09:04:15
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6423069834709167, "perplexity": 1789.9861680632773}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593655896905.46/warc/CC-MAIN-20200708062424-20200708092424-00076.warc.gz"}
|
https://sicp.sourceacademy.org/chapters/2.4.1.html
|
[1] In actual computational systems, rectangular form is preferable to polar form most of the time because of roundoff errors in conversion between rectangular and polar form. This is why the complex-number example is unrealistic. Nevertheless, it provides a clear illustration of the design of a system using generic operations and a good introduction to the more substantial systems to be developed later in this chapter.
[2] The arctangent function referred to here, computed by Scheme's atan procedure, computed by JavaScript's math_atan2 function, is defined so as to take two arguments $y$ and $x$ and to return the angle whose tangent is $y/x$. The signs of the arguments determine the quadrant of the angle.
2.4.1 Representations for Complex Numbers
|
2021-11-28 05:41:05
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9265824556350708, "perplexity": 470.2449671374525}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964358469.34/warc/CC-MAIN-20211128043743-20211128073743-00548.warc.gz"}
|
https://scikit-learn.org/stable/modules/generated/fastica-function.html
|
# sklearn.decomposition.fastica¶
sklearn.decomposition.fastica(X, n_components=None, *, algorithm='parallel', whiten=True, fun='logcosh', fun_args=None, max_iter=200, tol=0.0001, w_init=None, random_state=None, return_X_mean=False, compute_sources=True, return_n_iter=False)[source]
Perform Fast Independent Component Analysis.
Read more in the User Guide.
Parameters
Xarray-like of shape (n_samples, n_features)
Training vector, where n_samples is the number of samples and n_features is the number of features.
n_componentsint, default=None
Number of components to extract. If None no dimension reduction is performed.
algorithm{‘parallel’, ‘deflation’}, default=’parallel’
Apply a parallel or deflational FASTICA algorithm.
whitenbool, default=True
If True perform an initial whitening of the data. If False, the data is assumed to have already been preprocessed: it should be centered, normed and white. Otherwise you will get incorrect results. In this case the parameter n_components will be ignored.
fun{‘logcosh’, ‘exp’, ‘cube’} or callable, default=’logcosh’
The functional form of the G function used in the approximation to neg-entropy. Could be either ‘logcosh’, ‘exp’, or ‘cube’. You can also provide your own function. It should return a tuple containing the value of the function, and of its derivative, in the point. The derivative should be averaged along its last dimension. Example:
def my_g(x):
return x ** 3, np.mean(3 * x ** 2, axis=-1)
fun_argsdict, default=None
Arguments to send to the functional form. If empty or None and if fun=’logcosh’, fun_args will take value {‘alpha’ : 1.0}
max_iterint, default=200
Maximum number of iterations to perform.
tolfloat, default=1e-04
A positive scalar giving the tolerance at which the un-mixing matrix is considered to have converged.
w_initndarray of shape (n_components, n_components), default=None
Initial un-mixing array of dimension (n.comp,n.comp). If None (default) then an array of normal r.v.’s is used.
random_stateint, RandomState instance or None, default=None
Used to initialize w_init when not specified, with a normal distribution. Pass an int, for reproducible results across multiple function calls. See Glossary.
return_X_meanbool, default=False
If True, X_mean is returned too.
compute_sourcesbool, default=True
If False, sources are not computed, but only the rotation matrix. This can save memory when working with big data. Defaults to True.
return_n_iterbool, default=False
Whether or not to return the number of iterations.
Returns
Kndarray of shape (n_components, n_features) or None
If whiten is ‘True’, K is the pre-whitening matrix that projects data onto the first n_components principal components. If whiten is ‘False’, K is ‘None’.
Wndarray of shape (n_components, n_components)
The square matrix that unmixes the data after whitening. The mixing matrix is the pseudo-inverse of matrix W K if K is not None, else it is the inverse of W.
Sndarray of shape (n_samples, n_components) or None
Estimated source matrix
X_meanndarray of shape (n_features,)
The mean over features. Returned only if return_X_mean is True.
n_iterint
If the algorithm is “deflation”, n_iter is the maximum number of iterations run across all components. Else they are just the number of iterations taken to converge. This is returned only when return_n_iter is set to True.
Notes
The data matrix X is considered to be a linear combination of non-Gaussian (independent) components i.e. X = AS where columns of S contain the independent components and A is a linear mixing matrix. In short ICA attempts to un-mix' the data by estimating an un-mixing matrix W where S = W K X. While FastICA was proposed to estimate as many sources as features, it is possible to estimate less by setting n_components < n_features. It this case K is not a square matrix and the estimated A is the pseudo-inverse of W K`.
This implementation was originally made for data of shape [n_features, n_samples]. Now the input is transposed before the algorithm is applied. This makes it slightly faster for Fortran-ordered input.
Implemented using FastICA: A. Hyvarinen and E. Oja, Independent Component Analysis: Algorithms and Applications, Neural Networks, 13(4-5), 2000, pp. 411-430
|
2021-04-18 14:19:25
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.35991403460502625, "perplexity": 4642.787226232622}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038492417.61/warc/CC-MAIN-20210418133614-20210418163614-00088.warc.gz"}
|
http://www.computer.org/csdl/trans/tb/2008/02/ttb2008020301-abs.html
|
Subscribe
Issue No.02 - April-June (2008 vol.5)
pp: 301-312
ABSTRACT
The problem Parsimony Haplotyping (PH) asks for the smallest set of haplotypes which can explain a given set of genotypes, and the problem Minimum Perfect Phylogeny Haplotyping (MPPH) asks for the smallest such set which also allows the haplotypes to be embedded in a perfect phylogeny, an evolutionary tree with biologically-motivated restrictions. For PH, we extend recent work by further mapping the interface between easy'' and hard'' instances, within the framework of (k,l)-bounded instances where the number of 2's per column and row of the input matrix is restricted. By exploring, in the same way, the tractability frontier of MPPH we provide the first concrete, positive results for this problem. In addition, we construct for both PH and MPPH polynomial time approximation algorithms, based on properties of the columns of the input matrix.
INDEX TERMS
Biology and genetics, Combinatorial algorithms, Complexity hierarchies
CITATION
Leo van Iersel, Judith Keijsper, Steven Kelk, Leen Stougie, "Shorelines of Islands of Tractability: Algorithms for Parsimony and Minimum Perfect Phylogeny Haplotyping Problems", IEEE/ACM Transactions on Computational Biology and Bioinformatics, vol.5, no. 2, pp. 301-312, April-June 2008, doi:10.1109/TCBB.2007.70232
REFERENCES
[1] P. Alimonti and V. Kann, “Hardness of Approximating Problems on Cubic Graphs,” Proc. Third Italian Conf. Algorithms and Complexity, pp. 288-298, 1997. [2] V. Bafna, D. Gusfield, S. Hannenhalli, and S. Yooseph, “A Note on Efficient Computation of Haplotypes via Perfect Phylogeny,” J.Computational Biology, vol. 11, no. 5, pp. 858-866, 2004. [3] J.R.S. Blair and B. Peyton, “An Introduction to Chordal Graphs and Clique Trees,” Graph Theory and Sparse Matrix Computation. pp. 1-29, Springer, 1993. [4] P. Bonizzoni, G.D. Vedova, R. Dondi, and J. Li, “The Haplotyping Problem: An Overview of Computational Models and Solutions,” J. Computer Science and Technology, vol. 18, no. 6, pp. 675-688, 2003. [5] D. Brown and I. Harrower, “Integer Programming Approaches to Haplotype Inference by Pure Parsimony,” IEEE/ACM Trans. Computational Biology and Bionformatics, vol. 3, no. 2, pp. 141-154, Apr.-June 2006. [6] R. Cilibrasi, L.J.J. van Iersel, S.M. Kelk, and J. Tromp, “On the Complexity of Several Haplotyping Problems,” Proc. Fifth Int'l Workshop Algorithms in Bioinformatics (WABI '05), pp. 128-139, 2005. [7] Z. Ding, V. Filkov, and D. Gusfield, “A Linear-Time Algorithm for the Perfect Phylogeny Haplotyping (PPH) Problem,” J. Computational Biology, vol. 13, no. 2, pp. 522-533, 2006. [8] D. Gusfield, Algorithms on Strings, Trees, and Sequences: Computer Science and Computational Biology. Cambridge Univ. Press, 1997. [9] D. Gusfield, “Efficient Algorithms for Inferring Evolutionary History,” Networks, vol. 21, pp. 19-28, 1991. [10] D. Gusfield, “Haplotype Inference by Pure Parsimony,” Proc. 14th Ann. Symp. Combinatorial Pattern Matching, pp. 144-155, 2003. [11] B.V. Halldórsson, V. Bafna, N. Edwards, R. Lippert, S. Yooseph, and S. Istrail, “A Survey of Computational Methods for Determining Haplotypes,” Proc. DIMACS/RECOMB Satellite Workshop: Computational Methods for SNPs and Haplotype Inference, pp.26-47, 2004. [12] L.J.J. van Iersel, J.C.M. Keijsper, S.M. Kelk, and L. Stougie, “Beaches of Islands of Tractability: Algorithms for Parsimony and Minimum Perfect Phylogeny Haplotyping Problems,” Proc. Sixth Int'l Workshop Algorithms in Bioinformatics (WABI '06), pp. 80-91, 2006. [13] L.J.J. van Iersel, J.C.M. Keijsper, S.M. Kelk, and L. Stougie, “Shorelines of Islands of Tractability: Algorithms for Parsimony and Minimum Perfect Phylogeny Haplotyping Problems,” technical report, http://www.win.tue.nl/bs/spor2007-03.pdf , 2007. [14] G. Lancia, M. Pinotti, and R. Rizzi, “Haplotyping Populations by Pure Parsimony: Complexity of Exact and Approximation Algorithms,” INFORMS J. Computing, vol. 16, no. 4, pp. 348-359, 2004. [15] G. Lancia and R. Rizzi, “A Polynomial Case of the Parsimony Haplotyping Problem,” Operations Research Letters, vol. 34, no. 3, pp. 289-295, 2006. [16] C.H. Papadimitriou and M. Yannakakis, “Optimization, Approximation, and Complexity Classes,” J. Computer and System Sciences, vol. 43, pp. 425-440, 1991. [17] D.J. Rose, R.E. Tarjan, and G.S. Lueker, “Algorithmic Aspects of Vertex Elimination on Graphs,” SIAM J. Computing, vol. 5, pp. 266-283, 1976. [18] R. Sharan, B.V. Halldórsson, and S. Istrail, “Islands of Tractability for Parsimony Haplotyping,” IEEE/ACM Trans. Computational Biology and Bioinformatics, vol. 3, no. 3, pp. 303-311, July-Sept. 2006. [19] Y.S. Song, Y. Wu, and D. Gusfield, “Algorithms for Imperfect Phylogeny Haplotyping (IPPH) with Single Haploplasy or Recombination Event,” Proc. Fifth Int'l Workshop Algorithms in Bioinformatics (WABI '05), pp. 152-164, 2005. [20] R. VijayaSatya and A. Mukherjee, “An Optimal Algorithm for Perfect Phylogeny Haplotyping,” J. Computational Biology, vol. 13, no. 4, pp. 897-928, 2006. [21] X.-S. Zhang, R.-S. Wang, L.-Y. Wu, and L. Chen, “Models and Algorithms for Haplotyping Problem,” Current Bioinformatics, vol. 1, pp. 105-114, 2006. [22] Y.-T. Huang, K.-M. Chao, and T. Chen, “An Approximation Algorithm for Haplotype Inference by Maximum Parsimony,” J.Computational Biology, vol. 12, no. 10, pp. 1261-1274, 2005.
|
2015-07-31 05:25:40
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.606651246547699, "perplexity": 8775.703806813812}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-32/segments/1438042988051.33/warc/CC-MAIN-20150728002308-00212-ip-10-236-191-2.ec2.internal.warc.gz"}
|
https://plainmath.net/92604/expected-number-of-uniformly-random-poin
|
# Expected number of uniformly random points in unit square is in convex position. If n points are uniformly generated in a unit square, following famous Erdős–Szekeres theorem. The probability of them be in convex position is (((2n-2),(n-1))//n!)^2
Expected number of uniformly random points in unit square is in convex position.
If n points are uniformly generated in a unit square, following famous Erdős–Szekeres theorem. The probability of them be in convex position is ${\left(\left(\genfrac{}{}{0}{}{2n-2}{n-1}\right)/n!\right)}^{2}$
My interest is to find the expected number of points be in convex position, and I am enumerating i points in the convex position and add all the possible is.
Here is what I did, $P\left(i\right)={\left(\left(\genfrac{}{}{0}{}{2i-2}{i-1}\right)/i!\right)}^{2}$, the probability of having i points in convex position.
And the expected number of points be in convex position can be achieved via $\sum _{i=1}^{n}\left(\genfrac{}{}{0}{}{n}{i}\right)\ast P\left(i\right)$
And I was using WolframAlpha to get an idea of this number, for some reason the expected number is bigger than n, which is impossible
Can someone help me where did I do wrong?
You can still ask an expert for help
• Live experts 24/7
• Questions are typically answered in as fast as 30 minutes
• Personalized clear answers
Solve your problem for the price of one coffee
• Math expert for every subject
• Pay only if we can solve it
Samantha Braun
Step 1
You've computed the expected number of subsets of points in convex position. For example, when $n\le 3$ all ${2}^{n}-1$ nonempty subsets are in convex position, so the sum equals ${2}^{n}-1$
Let X be the maximum cardinality of all sets in convex position. By union bound on all sets of size k we have:
$P\left(X\ge k\right)\le \left(\genfrac{}{}{0}{}{n}{k}\right){\left(\frac{1}{k!}\left(\genfrac{}{}{0}{}{2k-2}{k-1}\right)\right)}^{2}$
Step 2
So we get an upper bound on the expectation of X:
$\begin{array}{rl}E\left(X\right)& =\sum _{k=1}^{n}P\left(X\ge k\right)\\ & \le \sum _{k=1}^{n}min\left(1,\left(\genfrac{}{}{0}{}{n}{k}\right){\left(\frac{1}{k!}\left(\genfrac{}{}{0}{}{2k-2}{k-1}\right)\right)}^{2}\right)\end{array}$
Getting a lower bound is more complicated, I think this upper bound should be reasonably tight. Why? Most of the sum on the right hand side comes from the terms which equal 1, the terms less than 1 decay rapidly. The expected number of sets of size k in convex position equals $\left(\genfrac{}{}{0}{}{n}{k}\right){\left(\frac{1}{k!}\left(\genfrac{}{}{0}{}{2k-2}{k-1}\right)\right)}^{2}$, so when this quantity is much bigger than 1, it's a reasonable to guess that X will be bigger than k with high probability.
|
2022-11-27 09:29:34
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 31, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8975546360015869, "perplexity": 285.57071657667007}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710218.49/warc/CC-MAIN-20221127073607-20221127103607-00311.warc.gz"}
|
https://cdsweb.cern.ch/collection/ATLAS%20Conference%20Notes?ln=zh_CN
|
# ATLAS Conference Notes
2016-02-01
13:55
Calibration of ATLAS $b$-tagging algorithms in dense jet environments This note describes the calibration of various ATLAS $b$-tagging algorithms using reconstructed $t\bar{t}$ candidate events in the final state of one charged lepton, missing transverse momentum, and at least four jets, in the ATLAS $\sqrt{s}=8 \TeV$ $pp$ collision data sample. [...] ATLAS-CONF-2016-001. - 2016. - 30 p. Original Communication (restricted to ATLAS) - Full text
2015-12-15
22:09
A search for Supersymmetry in events containing a leptonically decaying $Z$ boson, jets and missing transverse momentum in $\sqrt{s}=13~$TeV $pp$ collisions with the ATLAS detector A search for supersymmetric particles in final states containing a same-flavour opposite-sign lepton (electron or muon) pair with an invariant mass consistent with that of the $Z$ boson, jets and large missing transverse momentum is presented. [...] ATLAS-CONF-2015-082. - 2015. Original Communication (restricted to ATLAS) - Full text
2015-12-15
22:07
Search for resonances decaying to photon pairs in 3.2 fb$^{-1}$ of $pp$ collisions at $\sqrt{s}$ = 13 TeV with the ATLAS detector This note describes a search for new resonances decaying to two photons, with invariant mass larger than 200 GeV. [...] ATLAS-CONF-2015-081. - 2015. Original Communication (restricted to ATLAS) - Full text
2015-12-15
22:05
Search for dark matter produced in association with a hadronically decaying vector boson in $pp$ collisions at $\sqrt{s} = 13$ TeV with the ATLAS detector at the LHC This note describes a search for dark matter produced in association with a hadronically-decaying $W$ or $Z$ boson using 3.2~fb$^{-1}$ of $pp$ collisions at $\sqrt{s}=13$ TeV recorded by the ATLAS detector at the Large Hadron Collider. [...] ATLAS-CONF-2015-080. - 2015. Original Communication (restricted to ATLAS) - Full text
2015-12-15
22:03
Measurement of the inclusive cross-section of single top-quark $t$-channel production in $pp$ collisions at $\sqrt{s}$ = 13 TeV A measurement of the $t$-channel single top-quark production cross-section in the lepton+jets channel using 3.2 fb$^{-1}$ $pp$ collision data at a centre-of-mass energy of 13 TeV, recorded with the ATLAS detector in 2015, is presented. [...] ATLAS-CONF-2015-079. - 2015. Original Communication (restricted to ATLAS) - Full text
2015-12-15
22:02
Search for supersymmetry at $\sqrt{s}=13$ TeV in final states with jets and two same-sign leptons or three leptons with the ATLAS detector A search for strongly produced supersymmetric particles is conducted using signatures involving multiple energetic jets and either two isolated leptons ($e$ or $\mu$) with the same electric charge or at least three isolated leptons. [...] ATLAS-CONF-2015-078. - 2015. - 20 p. Original Communication (restricted to ATLAS) - Full text
2015-12-15
22:01
Search for new phenomena in final states with large jet multiplicities and missing transverse momentum with ATLAS using $\sqrt{s}$ = 13 TeV proton–proton collisions Results are reported of a search for new phenomena — such as supersymmetric particle production — that could be observed in high-energy proton-proton collisions. [...] ATLAS-CONF-2015-077. - 2015. - 29 p. Original Communication (restricted to ATLAS) - Full text
2015-12-15
21:59
Search for gluinos in events with an isolated lepton, jets and missing transverse momentum at √ s= 13 TeV with the ATLAS detector Results of a search for gluinos in final states with an isolated electron or muon, multiple jets and large transverse missing momentum are presented, using proton-proton collision data at a centre-of-mass energy of s√= 13 TeV. [...] ATLAS-CONF-2015-076. - 2015. Original Communication (restricted to ATLAS) - Full text
2015-12-15
21:58
Search for $WW/WZ$ resonance production in the $\ell\nu qq$ final state at $\sqrt{s}=13\,$ TeV with the ATLAS detector at the LHC A search is presented for new resonances decaying to $WW$ or $WZ$ final states, where one $W$ boson decays leptonically (to an electron or a muon plus a neutrino) and the other $W/Z$ boson decays hadronically. [...] ATLAS-CONF-2015-075. - 2015. - 18 p. Original Communication (restricted to ATLAS) - Full text
2015-12-15
21:56
Search for new resonances decaying to a W or Z boson and a Higgs boson in the $\ell\ell b\bar b$, $\ell\nu b\bar b$, and $\nu\nu b\bar b$ channels in $pp$ collisions at $\sqrt s = 13$~TeV with the ATLAS detector A search is presented for new resonances decaying to a $W$ or $Z$ boson and a Higgs boson in the $\ell\ell b\bar b$, $\ell\nu b\bar b$, and $\nu\nu b\bar b$ channels in $pp$ collisions at $\sqrt s = 13$~TeV with the ATLAS detector at the Large Hadron Collider using a total of $3.2\pm0.2$ fb$^{-1}$ of integrated luminosity. [...] ATLAS-CONF-2015-074. - 2015. Original Communication (restricted to ATLAS) - Full text
|
2016-02-06 09:19:50
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9527010917663574, "perplexity": 2222.0155652638905}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-07/segments/1454701146241.46/warc/CC-MAIN-20160205193906-00189-ip-10-236-182-209.ec2.internal.warc.gz"}
|
http://www.maplesoft.com/support/help/Maple/view.aspx?path=MmaTranslator/Mma/CompoundExpression
|
MmaTranslator[Mma] - Maple Programming Help
Home : Support : Online Help : Connectivity : Mathematica Translator : Mma : MmaTranslator/Mma/CompoundExpression
MmaTranslator[Mma]
CompoundExpression
evaluate expressions and return the last result
Calling Sequence CompoundExpression(arguments)
Parameters
arguments - Maple translation of the Mathematica command arguments
Description
• The CompoundExpression command evaluates expressions and returns the result of the last expression.
Examples
> $\mathrm{with}\left(\mathrm{MmaTranslator}[\mathrm{Mma}]\right):$
> $\mathrm{CompoundExpression}\left(\mathrm{assign}\left(a,5\right),\mathrm{assign}\left(b,9-a\right),\mathrm{assign}\left(c,2b\right),c\right)$
${8}$ (1)
Please add your Comment (Optional) E-mail Address (Optional) What is ? This question helps us to combat spam
|
2017-01-17 02:45:47
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 3, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4402669370174408, "perplexity": 5523.571743217181}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279410.32/warc/CC-MAIN-20170116095119-00434-ip-10-171-10-70.ec2.internal.warc.gz"}
|
http://slashnode.wikidot.com/seng4420-lect10
|
Estimating Software Size
#### What is software size?
We can measure software size in lines-of-code (LOC). This is useful in deciding how big a file you need to store the code, however it is not sufficient to decide how much effort is required to produce code. This is because some lines are more difficult to code than others.
Fenton and Pfleeger suggest three attributes which can be used to describe code:
• Length: physical size
• Functionality: functions supported by the product
• Complexity: problem/computational complexity (difficulty of the underlying problem), algorithmic complexity, structural complexity, cognitive complexity, etc…
#### Measuring length
We have already established that LOC is an inappropriate measure of software size.
It is useful to measure the length of the specification, design and code. This is because the length of the specification can help estimate the length of the design, which in turn can help estimate the length of the code.
We need a standard for measuring Lines of Code.
• Effective Lines of Code (ELOC): ignores comments and blank lines
• LOC = ELOC + CLOC: where CLOC is 'comment lines of code'
The definition of code length is influenced by the way in which it is used. Some organisations use length to compare projects (largest/smallest/average product, what is our productivity?, etc…) while others use it only within a project (largest/smallest/average module, module length w/ respect to number of faults, etc.).
#### Non-textual or external code
Visual programming and window environments is changing the notion of what constitutes a software program. New approaches to programming raise two separate issues:
• How do we account in our length measures for objects that are not textual
• How do we account in our length measures for components that are constructed externally
#### Length of Specifications & designs
Specification and design documents usually combine text, graphs and special mathematical diagrams and symbols. We can define a page as an atomic object.
We could also view length as a composite measure:
• Define a pair of numbers representing text length and diagram length
• We can define appropriate atomic objects for the different types of diagrams and symbols
#### Counting reused code
It is difficult to determine what we mean by 'reused code'.
• Reused verbatim: the code in the unit was reused without any changes
• Slightly modified: less than 25% of the lines of code in the unit were modified
• Extensively modified: 25% or more of the LOC were modified
#### Measuring length for OO development
OO languages suggest new ways of measuring length; ie - count of objects and methods.
## Functionality
Many software engineers have argued that length is misleading and the amount of functionality of a product paints a better picture of product size.
#### Function-point method
This is a widely industry-adopted method which is considered better than LOC. It measures the length of a software product from specification (early in the life-cycle).
Allbrecht identified five basic functions that frequently occur in commercial software development. He categorized them according to their relative development complexities.
Software 1. Inputs Screens or forms for adding/modifying data 2. Outputs Screens or reports produced by an application 3. Inquiries Screens that allow a user to interrogate an application 4. Data files Logical collection of records 5. Interface Files shared with other systems
#### Calculating function points
We determine from the specification an unadjusted function point count (UFC) that involves the following item categories:
• External inputs: input provided by the user that describes distinct application-oriented data.
• External outputs: items provided to the user that generate distinct application oriented data (reports, messages, etc.)
• External inquiries: interactive inputs requiring a response
• External files: machine-readable interfaces to other systems
• Internal files: logical master files in the system
We assign a subjective 'complexity' rating to each item category on a 3-class ordinal scale (simple, average or complex). We assign a weight to each item category:
Weight factor Item Simple Average Complex External inputs 3 4 6 External outputs 4 5 7 External inquiries 3 4 6 External files 7 10 15 Internal files 5 7 10
The UFC is the weighted sum of number of items of each category:
(1)
\begin{align} UFC = \sum_{i=1}^{15} numberOfItemsOfCategory_{i} \times weight_{i} \end{align}
(2)
\begin{align} FP = UFC \times TCF \end{align}
… where TCF is the technical complexity factor (also complexity multiplier)
(3)
\begin{align} TCF = 0.65 + 0.01 \sum_{i=1}^{14} F_{i} \end{align}
… where Fi is a complexity component rating between 0→5 (0 = irrelevant, 3 = average, 5 = essential).
F1 : reliable back-up & recovery F2 : data communications F3 : distributed functions F4 : performance F5 : heavily used configuration F6 : online data entry F7 : operation ease F8 : online update F9 : complex interface F10 : complex processing F11 : reusability F12 : installation ease F13 : multiple sites F14 : facilitate change
#### Use of FP
Function-point can be the basis of effort estimation (ie: determining person days of effort). Other uses include:
• Defect density can be expressed as defects per function point
• Bidding for a project; \$x per FP
• Tracking progress
#### Limitations of FP
• Subjectivity in technology factor: FP is suitable to measure functionality but not complexity aspects of software size. Also FP is found to be effective in functionality-intensive applications but not so effective in algorithmically complex applications.
• Subjectivity in assigning weights: weights are not necessarily applicable to other environments.
• Double counting: internal complexity is counted as giving weights in UFC and again in TCF
• Researchers have shown TCF does not improve accuracy over UFC
• Requirements creep: changes in requirements mean original FP calculations are invalidated
• Problems with measurement theory: incorrectly combines measurements from different scales (weights and TCF are ordinal, counts are absolute)
However, FP can be more useful than software length if:
• FP is used with care
• FP's limitations are well understood and accounted for
• Usable in the earliest requirement phases
• Independent of programming language, product design or development style
• Large body of historical data
• Well documented method
• Active users group
## COCOMO 2.0: Object points
To compute object points an initial size measure is generated by counting: screens, reports and 3rd generation language components.
These are then classified as simple, medium or difficult.
Object type Simple Medium Difficult Screen 1 2 3 Report 2 5 8 3GL component - - 10
Reuse is taken into account in calculating object points; assuming that r% of the objects will be reused from a previous project then:
• New object points = (object points) x (100 - r) / 100
## DeMarco's approach
DeMarco proposed a functionality measure based on his structured analysis and design notation (such as DFD and ER diagrams)
• Specification weight metrics involve two measures (function bang for "function strong" systems, data bang for "data strong" applications)
• The function bang measure is based on the number of functional primitives (number of lowest-level bubbles in a DFD)
• The data-bang entity count is weighted according to the number of relationships involved in each entity
page revision: 6, last edited: 08 Jun 2008 12:13
|
2019-05-24 15:56:43
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 3, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4782939553260803, "perplexity": 5533.798032490227}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-22/segments/1558232257660.45/warc/CC-MAIN-20190524144504-20190524170504-00234.warc.gz"}
|
https://www.jiskha.com/archives/2017/02/02
|
# Questions Asked onFebruary 2, 2017
1. ## History
1. What belief united the Progressive movement? A) that society's problems could be solved B) that education needed reform C) that there should be a federal income tax D) that political bosses should not hold office 2. What was the main reason for the
2. ## Chemistry
Net ionic equation for the reaction of strontium nitrate and sodium carbonate.
3. ## History
According to Americans living in the cities, what was a sign of a communist revolution in the United States? A The rise of mass culture B Labor Strikes C The deportation of anarchrist D The Great Migration I say its D
4. ## Maths
Dylan invested some money into his bank. He agreed on a simple interest rate of 3% per annum for a period of 2 years. At the end of the 2-year period, the value of his investment increased by£72 Work out the value of Dylan's initial investment.
5. ## Maths
On the 1st of January 2014, Carol invested some money in a bank account. The account pays 2.5% compound interest per year. On the 1st of January 2015, Carol withdrew £1000 from the account. On the 1st of January 2016, she had £23 517.60 in the account.
6. ## Math
find the area for the circle ( use 3.14 for pi ) show your work round to the nearest tenths . 23 yd
asked by Help i'm such a dumbo :(
7. ## Math
A recipe calls for 2 2/4 cups of raisins but Julie only has 1/4 cup measuring cup.how many 1/4 cups does Julie need to measure out 2 2/4 cups of raisins?
8. ## Maths
Kim drives 156 miles from Rotherham to London. She drives at an average speed of 60 miles per hour. She leaves Rotherham at 7:30 am. Does she drive in London before 10:00 am?
9. ## Physics
Santa loses his footing and slides down a frictionless,snowy roof that is tilted at an angle of 27.0. If Santa slides 9.00 m before reaching the edge,what is his speed as he leaves the roof?
10. ## Math
Leslie used more than a 1/2 cup but less than 1 whole cup of flour for recipe. what fraction of a cup Leslie used. Explain.
11. ## Spanish
I have some reading responses questions and answers to the play “La dama del alba”, I was hoping someone could edit my answers. The first answer is a paragraph summary of the first ten pages. The other questions are just short answer. I mostly want to
12. ## science
a box of cleaning supplies weighs 15 N. if the box is lifted a distance of 0.60 m, how much work i sdone? A)0.040 J B)9 J C)14 J D)25 J
13. ## Physics
A thin copper plate of diameter 11.0 cm is charged to 9.00 nC. What is the strength of the electric field 0.1 mm above the center of the top surface plate?
14. ## History
1. What was the effect of the Bull moose party's entrance into the presidential election of 1912? A. It split the Republican vote and allowed the Democrats to win. ** B. It prevented any of the three parties from winning a majority of electoral votes. C.
15. ## Physics
An object is placed 20.0 cm from a thin converging lens along the axis of the lens. If a real image forms behind the lens at a distance of 8.00 cm from the lens, what is the focal length of the lens?
16. ## 6th grade Social studies quiz PLZZZZ HELP ME!!!!!!
The pacific islands are formed by coral reefs called... A. volcanic B. reefs C. atolls D. plates
17. ## Maths
A speed camera takes two photographs of a car. Photo 2 was taken 0.5 seconds after Photo 1. Marks on the road are 0.8 meters apart. Calculate the average speed of the car in m/s.
18. ## Algebra Square Roots
For which value of x should the following expression be further simplified? \sqrt{ 39x} a x = 2 b x = 6 c x = 10 d x = 11
19. ## science
when building a house, wich of the following is a simple machine that a construction worker would use? A)plywood B)safety hat C)cement D)pulley
20. ## Physics
The liquid in the open tube manometer is mercury, y1=3cm and y2 = 7cm. Atmospheric pressure is 980mbrs. (A) what is the absolute pressure at the bottom of the U-shape tube? (B) what is the absolute pressure in the open tube at a depth of 4 cm below the
21. ## history
3.Although Lincoln identified slavery as a "moral, political, and socially wrong" in 1858, what proposals did he publically agree with at the time? Select all that apply. A.allow slavery where it already existed B. eliminate slavery in every U.S. state and
22. ## social studies 6th grade!!s
Which statement is correct? (1 point) Aborigines were the first people to settle Australia and New Zealand. Aborigines were the first people in Australia, and the Maori were the first people in New Zealand. The Maori were the first people in Australia, and
Find the surface area for the given prism a. 564in^2 b. 664in^2 c.1,120in^2 d.1,080in^2
24. ## science
remy noticed that after oiling his skateboard wheels,it was easier to reach the speeds he needed to preform tricks how did the oil help A) the oil reduced friction between the moving parts of the skateboard B)the oil increased friction between the moving
25. ## ^th grade social studies
2. what is the reason few people have settled in the interior of Australia? A. no mineral wealth is in the area B. land is too expensive C. the area is too dangerous D. the area little water and arable land
26. ## Statistics
A teacher gives a reading skills test to a third-grade class of n = 25 at the beginning of the school year. To evaluate the changes that occur during the year, students are tested again at the end of the year. Their test scores showed an average
1. When reading a recipe's ingredients, which substance makes the recipe an unhealthy choice if it is present in a high amount? A. Fiber B. Protein C. Trans fat*** D. Unsaturated fat 2. Of the five food groups, which one should be consumed in the smallest
28. ## math
On Monday the temperature was 50 degrees the temperature increased by 22%. what was the temperature on Tuesday. 72 degrees 65 degrees 56 degrees 61 dgrees
29. ## Algebra
Use the figure to answer questions 1-3 file:///C:/Users/Student/Downloads/642756-212012-25329-PM-1227809791%20(1).png 1. Name a pair of complementary angles. ∠1 and ∠4 ∠1 and ∠6 ∠3 and ∠4 ∠4 and ∠5 2. If m ∠1 = 53°, what is m ∠4? 53°
30. ## science
to chop wood you apply 50.0 N of force to an ax. the ax supplies 600.0 N of force to the wood. what is the mechanical advantage of the ax? A)650.0 B)8.3 C)12.0 D)0.0833
31. ## science
you do 185 J of work pulling a cart up a ramp. if the ramp does 153 J of work,what is the efficient of the ramp? A)82.7 percent B)121.0 percent C)32.0 percent D)338.0 percent
32. ## algebra
The difference between two numbers is 108 less than their sum.If the larger number is twice the smaller number,find the difference between the two numbers.
33. ## science
The work of which scientist(s) helped to explain light's ability to propagate through a vacuum? A. Maxwell ==B. Davisson and Germer C. Fresnel, Fraunhofer, and Arago D. Newton An object that has a height of 0.4 meter is placed at a distance of 0.7 meter
2. What is a reason few people have settled in the interior of Australia? (1 point) A.No mineral wealth is in the area. B.Land is too expensive. C.The area is too dangerous. D.The area has little water and arable land.
143. ## Florida studies weekly state history week 8
I need help on the Activity part
144. ## math
n apples are to be packed into m boxes so that each box contains the same number of apples. How many apples will be packed into each box in terms of m and n?
145. ## math
Solve: a toy store has 38 red spiders. It has 43 black spiders. Does it have more red spiders or black spiders? 38
146. ## Social studies
What is the best answer.site of the first battle of the revolutionary war. That is my question
Are standard deductions tax breaks that you can claim without having to itemize? My answer is yes
148. ## Math
Write the equation of a line that is perpendicular and that passes through the given point. y+2=1/3(x-5);(-4,3) I need help with this problem
149. ## Physics
A cylinder of fixed capacity44.8liters contains helium gas at standered temperature and pressure. What is tge amount of heat needed to raise the temperature of the gas in the cylinder by 15°C
150. ## science
calculate the moles of oxygen atom in 6.2 g of calcium carbonate?
151. ## calculus
if limit x--->0(4-g(x)/x)=1
152. ## math
I bought 3 oranges for 29p and 2 drinks for 90p. How much did I spend altogether? Does it mean 29p for each organes or for all 3 oranges? cheers!
153. ## English
Thank you for your help. One more similar question goes as follows. I have summarized a little. 1. I disagree with you a hundred percent. 1-1. I don't agree with you at all. (Are both the same?) 2. I don't agree with you a hundred percent. 2-1. I agree
154. ## calculus
limx--->3- 5/x-3 I think it is -inf but I don't know how to prove it
155. ## English
1. I don't like him a hundred percent. 2. I like him in part (partly). 3. I don't like him at all. (Does #1 mean #2 or #3?) 4. I dislike him a hundred percent. 5. I like him partly. 6. I don't like him at all. (Does #4 mean #5 or #6? It is a little
156. ## Thermodynamics
Two objects are at different temperatures,and occurs thermal equilibrium, does the size of the object affects the transfer of heat? I need help. This doubt is not allowing me to continue in my homework.
157. ## Math--Matrix
Suppose v= (1 x) is a 3-eigenvector of A= (5 -1 6 0), then x=?
158. ## Math--Matrix
Suppose A = ( 6 9 -1 -4). Then the largest eigenvalue of A is? The smallest eigenvalue of A is?
159. ## Math
what is 4+9 to the second power divided by 3x2-2
160. ## Math
Mike and Jennifer are preparing for a race. On Friday, Mike ran for 3 2/5 hours and Jennifer ran for 4 3/4 hours. How much longer did Jennifer run (in minutes)? I know that 3 hours= 180 mins 4 hours = 240 mins But I'm not sure what the fractions would be,
161. ## MATH
RAQUEL IS NOW 3YRS OLDER THAN BEN.IN 5 YRS,THE SUM OF THEIR AGES WILL BE IS HOW OLD ARE THEY NOW?
162. ## Math
Decide if the following pair of functions are of the same order. (a) f(x) = x^2 - 7 and g(x) = x^2 + 7 I found this example solution online: a). f(x) = 3x + 7 and g(x) = x x
163. ## Maths
4.The sides of a regular octagon is 0.8m. The sides of a regular pentagon are 0.12m. Which one has the larger perimeter?
164. ## English
What is the function of the gerund in this sentence Representing the junior class was her job as a memeber of the Student Senate
|
2020-05-30 14:41:03
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.41977402567863464, "perplexity": 3127.9401496003784}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590347409337.38/warc/CC-MAIN-20200530133926-20200530163926-00085.warc.gz"}
|
https://cs.stackexchange.com/questions/88093/how-hard-is-it-to-find-the-length-of-multiplicative-2-partition
|
# How hard is it to find the length of multiplicative 2-partition?
Some terminology at first:
1. Multiplicative 2-partition for number $N$ - a pair of numbers $\{A, B\}$ such that $AB=N$.
2. Minimal multiplicative 2-partition length (denoted $l$) - minimal total number of bits needed to encode multiplicative one of 2-partitions.
Example:
Let $N = 36$. Then possible multiplicative 2-partitions are $\{2, 18\}, \{3, 12\}, \{4, 9\}, \{6, 6\}$. Their lengths are 7=2+5, 6=2+4, 7=3+4, 6=3+3 respectively. Minimal length here is 6, so $l=6$.
The input: non-prime $n$-bit number $N$.
The problem: find out if $n = l$.
YES-instance:
$N$ = 49, since $n$ = 6 and each multiplier takes 3 bits.
NO-instance:
$N$ = 25, since $n$ = 5 and each multiplier takes 3 bits.
The question: what is the complexity for this problem? It's trivial that it's not harder than factoring and I guess it's $PL$-hard. But is it even known to be in $P$?
• Sounds likely to be approximately as hard as factoring. If $N=PQ$ where $P,Q$ are two primes, factoring $N$ is hard, and it's probably hard to distinguish between the case where the smaller prime factor has $n/2$ bits vs where it has $n/2-1$ bits bits (say). – D.W. Feb 14 '18 at 7:43
• Is $l$ a fixed number? What about $36=4*9=6*6$? – Willard Zhan Feb 14 '18 at 19:57
• @WillardZhan gonna change definition now. Shortly, of all partitions we should choose shortest (6*6 here). – rus9384 Feb 14 '18 at 20:05
|
2021-02-25 20:01:10
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9485712647438049, "perplexity": 1063.3425207555192}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178351454.16/warc/CC-MAIN-20210225182552-20210225212552-00419.warc.gz"}
|
https://physics.stackexchange.com/questions/519638/direction-of-frictional-force-in-general-and-in-circular-motion
|
# Direction of frictional force in general and in circular motion
Note:- This is not the first time this question has been asked (far too many to link), but everytime it was asked as "why does friction cause a car to turn?" where rolling and rotation of the wheels got involved and made everything complicated. None of them seem to answer the more basic question that I have in mind.
I have been taught in physics class that static friction acts opposite to the direction of impending relative motion while kinetic friction acts opposite to actual relative motion, which seems to be the most general statement regarding friction direction. However lets consider a block (pt. mass) kept on a rough disk. The disk is spinning at a uniform constant angular velocity $$\omega$$ and the block is dropped onto the disk, initially at rest (the block not the disk). By even common sense we can see that the block will speed up until it has the velocity $$\omega r$$ where r is the distance of the block from the centre of the disk. In the steady state, the block spins along with the disk.
Q: Initially, why does the friction cause the block to go in a circle around the centre? (I think I understand what happens in steady state: the direction of impending motion is radially outward in the disk frame so friction acts radially inward.)
Qa): Is the statement "static friction acts opposite to the direction of impending relative motion" always correct?
Qb): What is happening in the case of a car turning on an unbanked road? Here the direction of ACTUAL relative motion is tangential and in the direction of car velocity, why should friction act radially instead of tangentially? Also is that force rolling friction?
PS: I am sorry for the word salad, will break into smaller qs if community says so.
• When the disc rotates and the block is at rest, why should there be a force in radial direction? I guess the block is hold at constant radius by a robe, is that correct? If so, please draw a picture and make the radius "large". Now ask yourself: If the block is at rest, in what direction does the disc underneath the block move? In radial direction? Dec 14, 2019 at 20:09
• @Semoi No there is no rope. Only force acting is friction. Dec 15, 2019 at 6:12
## 1 Answer
Q: Initially, why does the friction cause the block to go in a circle around the centre? I think I understand what happens in steady state: the direction of impending motion is radially outward in the disk frame so friction acts radially inward.
As you say, it opposes relative motion. The block is at rest and the disk is not. Friction creates a force that accelerates the block in the direction of rotation. As the block starts moving, the part of the disk underneath the block is pulled by the rigid forces within the disk to rotate (rather than go off in a straight line). Since the disk is accelerating, frictional forces drag the block to accelerate around the axis as well.
Qa): Is the statement "static friction acts opposite to the direction of impending relative motion" always correct?
As long as you properly identify the tendency, then yes.
Qb): What is happening in the case of a car turning on an unbanked road? Here the direction of ACTUAL relative motion is tangential
When we talk about the tire and the road, there is no relative motion. The contact patch of the tire remains still against the ground. There is no relative motion, so there is no kinetic friction, only static friction. Do not be confused by the motion of the car as a whole. It is the motion of the bottom of the tire that matters here.
Further, the wheel is an interesting device. At the ideal, a wheel has zero friction perpendicular to the axis (it rolls instead) and has infinite friction parallel to the axis. A real wheel will slip instead, but that's the intent.
...and in the direction of car velocity, why should friction act radially instead of tangentially?
Now you know, because that's what a wheel does. Unless it's hooked up to an engine or a brake system, the wheel responds to forces in the direction of the car by rolling. This is easy to do, so no forces arise.
But "sideways" forces on the wheel are created by friction at the contact patch. The wheel cannot roll in that direction, so the forces can be very large.
Also is that force rolling friction?
No. "rolling friction" is just one of the many sources of drag on the moving vehicle. There are many answers that discuss how it's different from static and kinetic friction.
Why is sliding friction more than rolling friction?
What is the cause of rolling friction? & why is it less than sliding friction?
Is acceleration of the disk necessary here?
Yes, because it's moving in a circle, not a straight line. Circular motion always requires (centripetal) acceleration.
so the centripetal force causing the wheel to turn, then, is static friction?
I'm not sure what you mean by "turn" here. The wheel can spin (allowing the car to roll forward or backward), and the steering wheel can cause the front wheels to pivot so that car turns in one direction or the other. Neither are caused by a centripetal force.
The wheel spins because the engine of the car has given the car forward velocity. Static friction on the road keeps the bottom of the wheel stationary against the road, so the spin of the wheel and the speed of the car are in sync. But this friction is not a centripetal force.
The front wheels turn sideways when a torque from the steering wheel pivots them. If the car is rolling forward, this causes a sideways force on the car and it creates a centripetal force that makes the car drive in a circle.
• Thank you for your clear answer but I still have a few doubts:- Q: "Since the disk is accelerating, frictional forces drag the block to accelerate around the axis as well." This is the only line I in Q I didn't fully understand. Is acceleration of the disk necessary here? Dec 15, 2019 at 5:57
• Qa: "As long as you properly identify the tendency, then yes." meaning? Dec 15, 2019 at 6:03
• Qb: "infinite friction parallel to the axis. A real wheel will slip instead, but that's the intent." why is there infinite friction parallel to the axis, and what does 'intent' mean? Dec 15, 2019 at 6:04
• Finally, so the centripetal force causing the wheel to turn, then, is static friction? Dec 15, 2019 at 6:06
|
2022-05-23 05:41:33
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 2, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5539427995681763, "perplexity": 468.5912934319259}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662555558.23/warc/CC-MAIN-20220523041156-20220523071156-00214.warc.gz"}
|
http://gmatclub.com/blog/2013/03/gmat-question-of-the-day-mar-11-probability-and-sentence-correction/
|
# GMAT Question of the Day (Mar 11): Probability and Sentence Correction
- Mar 11, 02:00 AM Comments [0]
Math (PS)
A basket contains 3 blue marbles, 3 red marbles, and 3 yellow marbles. If 3 marbles are extracted from the basket at random, what is the probability that a marble of each color is among the extracted?
(A) $\frac{2}{21}$
(B) $\frac{3}{25}$
(C) $\frac{1}{6}$
(D) $\frac{9}{28}$
(E) $\frac{11}{24}$
Question Discussion & Explanation
Correct Answer - D - (click and drag your mouse to see the answer)
GMAT Daily Deals
Verbal (SC)
Christlers, the famous art house in London, owns several Hussains of various periods
on account of having achieved an iconic status in the English art circle.
(A) on account of having
(B) on account of their having
(C) because they have
(D) because of having
(E) because it has
Question Discussion & Explanation
Correct Answer - C - (click and drag your mouse to see the answer)
Like these questions? Get the GMAT Club question collection: online at GMAT Club OR on your Kindle OR on your iPhone/iPad
Browse all GMAT Questions of the Day
Subscribe to GMAT Question of the Day: E-mail | RSS
|
2013-12-19 10:41:09
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 5, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.21730424463748932, "perplexity": 4387.214297665566}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-48/segments/1387345762908/warc/CC-MAIN-20131218054922-00089-ip-10-33-133-15.ec2.internal.warc.gz"}
|
https://ems.press/books/ecr/98/1928
|
# $L^p$-independence of growth bounds of Feynman–Kac semigroups
• ### Masayoshi Takeda
Tohoku University, Sendai, Japan
A subscription is required to access this book chapter.
## Abstract
The theory of Dirichlet forms is an $L^2$-theory, while the theory of Markov processes is, in a sense, an $L^1$-theory. To bridge this gap, we study the $L^p$-independence of growth bounds of Markov semigroups, more generally, of generalized Feynman–Kac (Schrödinger) semigroups. A key idea for the proof of the $L^p$-independence is to employ arguments in the Donsker-Varadhan large deviation theory.
|
2023-03-23 21:19:10
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 5, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.42403295636177063, "perplexity": 1535.2108085670043}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296945183.40/warc/CC-MAIN-20230323194025-20230323224025-00415.warc.gz"}
|
https://research.caluniv.ac.in/publication/canonical-forms-of-the-generalized-hypergeometric-series
|
X
Canonical forms of the generalized hypergeometric series for the 3 j and the 6 j symbol
MAJUMDAR S DATTA
|
2022-12-07 23:57:08
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9698139429092407, "perplexity": 2361.048689424343}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446711221.94/warc/CC-MAIN-20221207221727-20221208011727-00717.warc.gz"}
|
https://deepai.org/publication/random-shuffling-beats-sgd-after-finite-epochs
|
# Random Shuffling Beats SGD after Finite Epochs
A long-standing problem in the theory of stochastic gradient descent (SGD) is to prove that its without-replacement version RandomShuffle converges faster than the usual with-replacement version. We present the first (to our knowledge) non-asymptotic solution to this problem, which shows that after a "reasonable" number of epochs RandomShuffle indeed converges faster than SGD. Specifically, we prove that under strong convexity and second-order smoothness, the sequence generated by RandomShuffle converges to the optimal solution at the rate O(1/T^2 + n^3/T^3), where n is the number of components in the objective, and T is the total number of iterations. This result shows that after a reasonable number of epochs RandomShuffle is strictly better than SGD (which converges as O(1/T)). The key step toward showing this better dependence on T is the introduction of n into the bound; and as our analysis will show, in general a dependence on n is unavoidable without further changes to the algorithm. We show that for sparse data RandomShuffle has the rate O(1/T^2), again strictly better than SGD. Furthermore, we discuss extensions to nonconvex gradient dominated functions, as well as non-strongly convex settings.
## Authors
• 8 publications
• 72 publications
09/10/2019
### Better Communication Complexity for Local SGD
We revisit the local Stochastic Gradient Descent (local SGD) method and ...
10/21/2021
We design step-size schemes that make stochastic gradient descent (SGD) ...
02/24/2020
### Closing the convergence gap of SGD without replacement
Stochastic gradient descent without replacement sampling is widely used ...
03/04/2019
### SGD without Replacement: Sharper Rates for General Smooth Convex Functions
We study stochastic gradient descent without replacement () for smooth ...
03/12/2021
### Can Single-Shuffle SGD be Better than Reshuffling SGD and GD?
We propose matrix norm inequalities that extend the Recht-Ré (2012) conj...
02/19/2021
### Permutation-Based SGD: Is Random Optimal?
A recent line of ground-breaking results for permutation-based SGD has c...
07/07/2020
### Understanding the Impact of Model Incoherence on Convergence of Incremental SGD with Random Reshuffle
Although SGD with random reshuffle has been widely-used in machine learn...
##### This week in AI
Get the week's most popular data science and artificial intelligence research sent straight to your inbox every Saturday.
## 1 Introduction
We consider stochastic optimization methods for the finite-sum problem
F(x):=1nn∑i=1fi(x), (1.1)
where each function is smooth and convex, and the sum is strongly convex. A classical approach to solving (1.1) is stochastic gradient descent (Sgd). At each iteration Sgd independently samples an index uniformly from , and uses the (stochastic) gradient to compute its update. The stochasticity makes each iteration of Sgd cheap, and the uniformly independent sampling of makes
an unbiased estimator of the full gradient
. These properties are central to Sgd
’s effectiveness in large scale machine learning, and underlie much of its theoretical analysis (see for instance,
[34, 26, 2, 5, 30]).
However, what is actually used in practice is the without replacement version of Sgd, henceforth called RandomShuffle. Specifically, at each epoch RandomShuffle samples a random permutation of the functions uniformly independently (some implementations shuffle the data only once at load, rather than at each epoch). Then, it iterates over these functions according to the sampled permutation and updates in a manner similar to Sgd. Avoiding the use of random sampling at each iteration, RandomShuffle can be computationally more practical [4]; furthermore, as one would expect, empirically RandomShuffle is known to converge faster than Sgd [3].
This discrepancy between theory and practice has been a long-standing problem in the theory of Sgd. It has drawn renewed attention recently, with the goal of better understanding convergence of RandomShuffle. The key difficulty is that without-replacement leads to statistically non-independent samples, which greatly complicates analysis. Two extreme case positive results are however available: Shamir [32] shows that RandomShuffle is not much worse than usual Sgd, provided the number of epochs is not too large; while Gürbüzbalaban et al. [11] show that RandomShuffle converges faster than Sgd asymptotically at the rate .
But it remains unclear what happens in between, after a reasonable finite number of epochs are run. This regime is the most compelling one to study, since in practice one runs neither one nor infinitely many epochs. This motivates the central question of our paper:
Does RandomShuffle converge faster than Sgd after a reasonable number of epochs?
We answer this question positively in this paper; our results are more precisely summarized below.
### 1.1 Summary of results
We follow the common practice of reporting convergence rates depending on , the number of calls to the (stochastic / incremental) gradient oracle. For instance, Sgd converges at the rate for solving (1.1), ignoring logarithmic terms in the bound [26]. The underlying argument is to view Sgd as stochastic approximation with noise [21], therefore ignoring the finite-sum structure of (1.1). Our key observation for RandomShuffle is that one should reasonably include dependence on into the bound (see Section 3.3). Such a compromise leads to a better dependence on , which further shows how RandomShuffle beats Sgd after a finite number of epochs. Our main contributions are the following:
• Under a mild assumption on second order differentiability, and assuming strong-convexity, we establish a convergence rate of for RandomShuffle, where is the number of components in (1.1), and is the total number of iterations (Theorem 1 and 2). From the bounds we can calculate the precise number of epochs after which RandomShuffle is strictly better than Sgd.
• We prove that a dependence on is necessary for beating the Sgd rate . This tradeoff precludes the possibility of proving a convergence rate of the type with some in the general case, and justifies our choice of introducing into the rate (Theorem 3).
• Assuming a sparse data setting common in machine learning, we further improve the convergence rate of RandomShuffle to . This rate is strictly better than Sgd, indicating RandomShuffle’s advantage in such cases (Theorem 4).
• We extend our results to the non-convex function class with Polyak-Łojasiewicz condition, establishing a similar rate for RandomShuffle (Theorem 5).
• We show a class of examples where RandomShuffle is provably faster than Sgd after arbitrary number (even less than one epoch) of iterations (Theorem 7).
We provide a detailed discussion of various aspects of our results in Section 6, including explicit comparisons to Sgd, the role of condition numbers, as well as some limitations. Finally, we end by noting some extensions and open problems in Section 7. As one of the extensions, for non-strongly convex problems, we prove that RandomShuffle achieves a comparable convergence rate as Sgd, with possibly smaller constant in the bound under certain parameter paradigms (Theorem 6).
### 1.2 Related work
Recht and Ré [27] conjecture a tantalizing matrix AM-GM inequality that underlies RandomShuffle’s superiority over Sgd. While limited progress on this conjecture has been reported [14, 38], the correctness of the full conjecture is still wide open. With the technique of transductive Rademacher complexity, Shamir [32] shows that Sgd is not worse than RandomShuffle
provided the number of iterations is not too large. Asymptotic analysis is provided in
[11], which proves that RandomShuffle limits to a rate for large . Ying et al. [37] show that for a fixed step size, RandomShuffle converges to a distribution closer to optimal than Sgd asymptotically.
When the functions are visited in a deterministic order (e.g., cyclic), the method turns into Incremental Gradient Descent (Igd), which has a long history [2]. Kohonen [16] shows that Igd converges to a limit cycle under constant step size and quadratic functions. Convergence to neighborhood of optimality for more general functions is studied in several works, under the assumption that step size is bounded away from zero (see for instance [33]). With properly diminishing step size, Nedić and Bertsekas [20] show that an convergence rate in terms of distance to optimal can be achieved under strong convexity of the finite-sum. This rate is further improved in [10] to under a second order differentiability assumption.
In the real world, RandomShuffle
has been proposed as a standard heuristic
[4]. With numerical experiments, Bottou [3] notices an approximately convergence rate of RandomShuffle. Without-replacement sampling also improves data-access efficiency in distributed settings, see for instance [9, 18]. The permutation-sampling idea has been further embedded into more complicated algorithms; see [6, 8, 32]
for variance-reduced methods, and
[31] for decomposition methods.
Finally, we note a related body of work on coordinate descent, where a similar problem has been studied: when does random permutation over coordinates behave well? Gürbüzbalaban et al. [12] give two kinds of quadratic problems when cyclic version of coordinate descent beats the with replacement one, which is a stronger result indicating that random permutation also beats the with replacement method. However, such a deterministic version of the algorithm suffers from poor worst case. Indeed, in [35] a setting is analyzed where cyclic coordinate descent can be dramatically worse than both with-replacement and random permutation versions of coordinate descent. Lee and Wright [17] further study this setting, and analyze how the random permutation version of coordinate descent avoids the slow convergence of cyclic version. In [36], Wright et el. propose a more general class of quadratic functions where random permutation outperforms cyclic coordinate descent.
## 2 Background and problem setup
For problem (1.1), we assume the finite sum function is strongly convex, i.e.,
where , and is the strong convexity parameter. Furthermore, we assume each component function is -smooth, so that for , there exists a constant such that
∥∇fi(x)−∇fi(y)∥≤L∥x−y∥. (2.1)
Furthermore, we assume that the component functions are second order differentiable with a Lipschitz continuous Hessian. We use to denote the Hessian of function at . Specifically, for each , we assume that for all , there exists a constant such that
∥Hi(x)−Hi(y)∥≤LH∥x−y∥. (2.2)
The norm is the spectral norm for matrices and
norm for vectors. We denote the unique minimizer of
as , the index set as . The complexity bound is represented as , with all logarithmic terms hidden. All other parameters that might be hidden in the complexity bounds will be clarified in corresponding sections.
### 2.1 The algorithms under study: Sgd and RandomShuffle
For both Sgd and RandomShuffle, we use as the step size, which is predetermined before the algorithms are run. The sequences generated by both methods are denoted as ; here is the initial point and is the total number of iterations (i.e., number of stochastic gradients used).
Sgd is defined as follows: for each iteration , it picks an index independently uniformly from the index set , and then performs the update
xk=xk−1−γ∇fs(k)(xk−1). (Sgd)
In contrast, RandomShuffle runs as follows: for each epoch , it picks one permutation independently uniformly from the set of all permutations of . Then, it sequentially visits each of the component functions of the finite-sum (1.1) and performs the update
xtk=xtk−1−γ∇fσt(k)(xtk−1), (RandomShuffle)
for . Here represents the -th iterate within the -th epoch. For two consecutive epochs and , one has ; for the initial point one has . For convenience of analysis, we always assume RandomShuffle is run for an integer number of epochs, i.e., for some . This is a reasonable assumption given our main interest is when several epochs of RandomShuffle are run.
## 3 Convergence analysis of RandomShuffle
The goal of this section is to build theoretical analysis for RandomShuffle. Specifically, we answer the following question: when can we show RandomShuffle to be better than Sgd? We begin by first analyzing quadratic functions in Section 3.1, where the analysis benefits from having a constant Hessian. Subsequently, in Section 3.2, we extend our analysis to the general (smooth) strongly convex setting. A key idea in our analysis is to make the convergence rate bounds sensitive to , the number of components in the finite-sum (1.1). In Section 3.3, we discuss and justify the necessity of introducing into our convergence bound.
We first consider the quadratic instance of (1.1), where
fi(x)=12xTAix+bTix,i=1,…,n, (3.1)
where is positive semi-definite, and . We should notice often in analyzing strongly convex problems, the quadratic case presents a good example when tight bounds are achieved.
Quadratic functions have a constant Hessian function , which eases our analysis. Similar to the usual Sgd, our bound also depends on the following constants: (i) strong convexity parameter , and component-wise Lipschitz constant ; (ii) diameter bound (i.e., any iterate remains bounded; can be enforced by explicit projection if needed); and (iii) bounded gradients for each (), and any satisfying (ii). We omit these constants for clarity, but discuss the condition number further in Section 6.
Our main result for RandomShuffle is the following (omitting logarithmic terms):
###### Theorem 1.
With defined by (3.1), let the condition number of problem (1.1) be . So long as , with step size , RandomShuffle achieves convergence rate:
E[∥xT−x∗∥2]≤O(1T2+n3T3).
We provide a proof sketch in Section 5, deferring the fairly involved technical details to Appendix A. In terms of sample complexity, Theorem 1 yields the following corollary:
###### Corollary 1.
Let be defined by (3.1). The sample complexity for RandomShuffle to achieve is no more than .
We observe that in the regime when gets large, our result matches [11]. But it provides more information when the number of epochs is not so large that the can be neglected. This setting is clearly the most compelling to study. Formally, we recover the main result of [11] as the following:
###### Corollary 2.
As , RandomShuffle achieves asymptotic convergence rate when run with the proper step size schedule.
### 3.2 RandomShuffle for strongly convex problems
Next, we consider the more general case where each component function is convex and the sum is strongly convex. Surprisingly111Intuitively, the change of Hessian over the domain can raise challenges. However, our convergence rate here is quite similar to quadratic case, with only mild dependence on Hessian Lipschitz constant. , one can easily adapt the methodology of the proof for Theorem 1 in this setting. To this end, our analysis requires one further assumption that each component function is second order differentiable and its Hessian satisfies the Lipschitz condition (2.2) with constant .
Under these assumptions, we obtain the following result:
###### Theorem 2.
Define constant . So long as , with step size , RandomShuffle achieves convergence rate:
E[∥xT−x∗∥2]≤O(1T2+n3T3).
Except for extra dependence on and a mildly different step size, this rate is essentially the same as that in quadratic case. The proof for the result can be found in Appendix B. Due to the similar formulation, most of the consequences noted in Section 3.1 also hold in this general setting.
### 3.3 Understanding the dependence on n
Since the motivation of building our convergence rate analysis is to show that RandomShuffle behaves better than Sgd, we would definitely hope that our convergence bounds have a better dependence on compared to the bound for Sgd. In an ideal situation, one may hope for a rate of the form with some . One intuitive criticism toward this goal is evident: if we allow , then by setting , RandomShuffle is essentially same as Sgd by the birthday paradox. Therefore, a bound is unlikely to hold.
However, this argument is not rigorous when we require a positive number of epochs to be run (at least one round through all the data). To this end, we provide the following result indicating the impossibility of obtaining even when is required.
###### Theorem 3.
Given the information of . Under the assumption of constant step sizes, no step size choice for RandomShuffle leads to a convergence rate for any , if we do not allow to appear in the bound.
The key idea to prove Theorem 3 is by constructing a special instance of problem (1.1). In particular, the following quadratic instance of (1.1) lays the foundation of our proof:
fi(x)={12(x−b)′A(x−b)i odd,12(x+b)′A(x+b)i even. (3.2)
Here denotes the transpose of a vector, is some positive definite matrix, and is some vector. Running RandomShuffle on (3.2) leads to a close-formed expression of RandomShuffle’s error. Then by setting (i.e., only running RandomShuffle for one epoch) and assuming a convergence rate of , we deduce a contradiction by properly setting and . The detailed proof can be found in Appendix C. We directly have the following corollary:
###### Corollary 3.
Given the information of , under the assumption and constant step size, there is no step size choice that leads to a convergence rate for .
This result indicates that in order to achieve a better dependence on using constant step sizes, the bound should either: (i) depend on ; (ii) make some stronger assumptions on being large enough (at least exclude ); or (iii) leverage a more versatile step size schedule, which could potentially be hard to design and analyze.
Although Theorem 3 shows that one may not hope (under constant step sizes) for a better dependence on for RandomShuffle without an extra dependence, whether the current dependence on we have obtained is optimal still requires further discussion. In the special case , numerical evidence has shown that RandomShuffle behaves at least as well as Sgd. However, our bound fails to even show RandomShuffle converges in this setting. Therefore, it is reasonable to conjecture that a better dependence on exists. In the following section, we improve the dependence on under a specific setting. But whether a better dependence on can be achieved in general remains open.222Convergence rate with dependence on also appears in some variance reduction methods (see for instance, [15, 7]). Sample complexity lower bounds has also be shown to depend on under similar settings, see e.g., [1].
## 4 Sparse functions
In the literature on large-scale machine learning, sparsity is a common feature of data. When the data are sparse, each training data point has only a few non-zero features. Under such a setting, each iteration of Sgd
only modifies a few dimensions of the decision variables. Some commonly occurring sparse problems include large-scale logistic regression, matrix completion, and graph cuts.
Sparse data provides a prospective setting under which RandomShuffle might be powerful. Intuitively, when data are sparse, with-replacement sampling used by Sgd is likely to miss some decision variables, while RandomShuffle is guaranteed to update all possible decision variables in one epoch. In this section, we show some theoretical results justifying such intuition.
Formally, a sparse finite-sum problem assumes the form
F(x)=1nn∑i=1fi(xei),
where () denotes a small subset of and denotes the entries of the vector indexed by . Define the set . By representing each subset with a node, and considering edges for all , we get a graph with nodes. Following the notation in [28], we consider the sparsity factor of the graph:
ρ:=max1≤i≤n∣∣{ej∈E:ei∩ej≠∅}∣∣n. (4.1)
One obvious fact is . The statistic (4.1) indicates how likely is it that two subsets of indices intersect, which reflects the sparsity of the problem. For a problem with strong sparsity, we may anticipate a relatively small value for . We summarize our result with the following theorem:
###### Theorem 4.
Define constant . So long as , with step size RandomShuffle achieves convergence rate:
E[∥xT−x∗∥2]≤O(1T2+ρ2n3T3).
Compared with Theorem 2, the bound in Theorem 4 depends on the parameter , so we can exploit sparsity to obtain a faster convergence rate. The key to proving Theorem 4 lies in constructing a tighter bound for the error term in the main recursion (see §5) by including a discount due to sparsity.
We end this section by noting the following simple corollary:
###### Corollary 4.
When , there is some constant only dependent on , , , , , such that as long as , for a proper step size, RandomShuffle achieves convergence rate
E[∥xT−x∗∥2]≤O(1T2).
## 5 Proof sketch of Theorem 1
In this section we provide a proof sketch for Theorem 1. The central idea is to establish an inequality
(5.1)
where and are the beginning and final points of the -th epoch, respectively, and the randomness is over the permutation of functions in epoch . The constant captures the speed of convergence for the linear convergence part, while and together bound the error introduced by randomness. The underlying motivation for the bound (5.1) is: when the latter two terms depend on the step size with order at least , then by expanding the recursion over all the epochs, and setting , we can obtain a convergence of .
By the definition of the RandomShuffle update and simple calculations, we have the following key equality for one epoch of RandomShuffle:
The idea behind this equality is to split the progress made by RandomShuffle in a given epoch into two parts: a part that behaves like full gradient descent ( and ), and a part that captures the effects of random sampling ( and ). In particular, for a permutation , denotes the gradient error of RandomShuffle for epoch , i.e.,
Rt=∑ni=1∇fσt(i)(xti−1)−∑ni=1∇fσt(i)(xt0),
which is a random variable dependent on
. Thus, the terms and are also random variables that depend on , and require taking expectations. The main body of our analysis involves bounding each of these terms separately.
The term can be easily bounded by exploiting the strong convexity of , using a standard inequality (Theorem 2.1.11 in [23]), as follows
(5.2)
The first term (gradient norm term) in (5.2) is used to dominate later emerging terms in our bounds on and , while the second term (distance term) in (5.2) will be absorbed into in (5.1).
A key step toward building (5.1) is to bound , where the expectation is over . However, it is not easy to directly bound this term with for some constant . Instead, we decompose this term further into three parts: (i) the first part depends on (which will be then captured by in (5.1)); (ii) the second part depends on (which will be then dominated by gradient norm term in ’s bound (5.2)); and (iii) the third part has an at least dependence on (which will be then jointly captured by and in (5.1)). Specifically, by introducing second-order information and somewhat involved analysis, we obtain the following bound for :
###### Lemma 1.
Over the randomness of the permutation, we have the inequality:
−2γ⟨xt0−x∗,E[Rt]⟩ (5.3) +γ3μ−1n2(n−1)∥Δ∥2+2μ−1γ5L4G2n5. (5.4)
Where with uniformly drawn from .
Since is the minimizer, we have an elegant bound on the second-order interaction term:
###### Lemma 2.
Define with uniformly drawn from , and is the minimizer of sum function, then
∥Δ∥≤1n−1LG.
We tackle by dominating it with the gradient norm term of ’s bound (5.2), and finally bound the second permutation dependent term using the following lemma.
###### Lemma 3.
For any possible permutation in the -th epoch, we have bound
∥∥Rt∥∥≤n(n−1)2γGL.
Using this bound, the term can be captured by in (5.1).
Based on the above results, we get a recursive inequality of the form (5.1). Expanding the recursion and substituting into it the step-size choice ultimately leads to an bound of the form (see (A.17) in the Appendix for dependence on hidden constants). The detailed technical steps can be found in Appendix A.
## 6 Discussion of results
We discuss below our results in more detail, including their implications, strengths, and limitations.
#### Comparison with Sgd.
It is well-known that under strong convexity Sgd converges with a rate of [26]. A direct comparison indicates the following fact: RandomShuffle is provably better than Sgd after epochs. This is an acceptable amount of epochs for even some of the largest data sets in current machine learning literature. To our knowledge, this is the first result rigorously showing that RandomShuffle behaves better than Sgd within a reasonable number of epochs. To some extent, this result confirms the belief and observation that RandomShuffle is the “correct” choice in real life, at least when the number of epochs is comparable with .
#### Deterministic variant.
When the algorithm is run in a deterministic fashion, i.e., the functions are visited in a fixed order, better convergence rate than Sgd can also be achieved as becomes large. For instance, a result in [10] translates into a bound for the deterministic case. This directly implies the same bound for RandomShuffle, since random permutation always has the weaker worst case. But according to this bound, at least epochs are required for RandomShuffle to achieve an error smaller than Sgd, which is not a realistic number of epochs in most applications.
#### Comparison with Gd.
Another interesting viewpoint is by comparing RandomShuffle with Gradient Descent (Gd). One of the limitations of our result is that we do not show a regime where RandomShuffle can be better than Gd. By computing the average for each epoch and running exact Gd on (1.1), one can get a convergence rate of the form . This fact shows that our convergence rate for RandomShuffle is worse than Gd. This comes naturally from the epoch based recursion (5.1) in our proof methodology, since for one epoch the sum of the gradients is only shown to be no worse than a full gradient. It is true that Gd should behave better in long-term as the dependence on is negligible, and comparing with Gd is not the major goal for this paper. However, being worse than Gd even when is relatively small indicates that the dependence on probably can still be improved. It may be worth investigating whether RandomShuffle can be better than both Sgd and Gd in some regime. However, different techniques may be required.
#### Epochs required.
It is also a limitation that our bound only holds after a certain number of epochs. Moreover, this number of epochs is dependent on (e.g., epochs for the quadratic case). This limits the interest of our result to cases when the problem is not too ill-conditioned. Otherwise, such a number of epochs will be unrealistic by itself. We are currently not certain whether similar bounds can be proved when allowing to assume smaller values, or even after only one epoch.
#### Dependence on κ.
It should be noticed that can be large sometimes. Therefore, it may be informative to view our result in a -dependent form. In particular, we still assume , , are constant, but no longer . We use the bound and assume is constant. Since , we now have . Our results translate into -dependent convergence rates of (see inequalities (A.17) (E.13) in the Appendix). The corresponding -dependent sample complexity turns into for quadratic problems, and for strongly convex ones.
At first sight, the dependence on in the convergence rate may seem relatively high. However, it is important to notice that our sample complexity’s dependence on is actually better than what is known for Sgd. A convergence bound for Sgd has long been known [26], which translates into a , -dependent sample complexity in our notation. Although better dependence has been shown for (see e.g., [13]), no better dependence has been shown for as far as we know. Furthermore, according to [22], the lower bound to achieve for strongly convex using stochastic gradients is . Translating this into the sample complexity to achieve is likely to introduce another into the bound. Therefore, it is reasonable to believe that is the best sample complexity one can get for Sgd (which is worse than RandomShuffle), to achieve .
#### Sparse data setting.
Notably, in the sparse setting (with sparsity factor ), the proven convergence rate is strictly better than the rate of Sgd. This result follows the following intuition: when each dimension is only touched by several functions, letting the algorithm to visit every function would avoid missing certain dimensions. For larger , similar speedup can be observed. In fact, so long as we have , the proven bound is better off than Sgd. Such a result confirms the usage of RandomShuffle under sparse setting.
## 7 Extensions
In this section, we provide some further extensions before concluding with some open problems.
### 7.1 RandomShuffle for nonconvex optimization
The first extension that we discuss is to nonconvex finite sum problems. In particular, we study RandomShuffle applied to functions satisfying the Polyak-Łojasiewicz condition (also known as gradient dominated functions):
12∥∇F(x)∥2≥μ(F(x)−F∗), ∀x.
Here is some real number, is the minimal function value of . Strongly convexity is a special situation of this condition with being the strongly convex parameter. One important implication of this condition is that every stationary point is a global minimum. However function can be non-convex under such setting. Also, it doesn’t imply a unique minimum of the function.
This setting was proposed and analyzed in [25], where a linear convergence rate for Gd was shown. Later, many other optimization methods have been proven efficient under this condition (see [24] for second order methods and [29] for variance reduced gradient methods). Notably, Sgd can be proven to converge with rate under this setting (see appendix for a proof).
Assume each component function being Lipschitz continuous, and the average function satisfying the Polyak-Łojasiewicz condition with some constant . We have the following extension of our previous result:
###### Theorem 5.
Under the Polyak-Łojasiewicz condition, define condition number . So long as , with step size , RandomShuffle achieves convergence rate:
E[∥xT−x∗∥2]≤O(1T2+n3T3).
### 7.2 RandomShuffle for convex problems
An important extension of RandomShuffle is to the general (smooth) convex case without assuming strong convexity. There are no previous results on the convergence rate of RandomShuffle in this setting that show it to be faster than Sgd. The only result we are aware of is by Shamir [32], who shows RandomShuffle to be not worse than Sgd in the general (smooth) convex setting. We extend our results to the general convex case, and show a convergence rate that is possibly faster than Sgd, albeit only up to constant terms.
We take the viewpoint of gradients with errors, and denote the difference between component gradient and full gradient as the error:
∇F(x)−∇fi(x)=ei(x).
Different assumptions bounding the error term have been studied in optimization literature. We assume that there is a constant that bound the norm of the gradient error:
∥ei(x)∥≤δ, ∀x.
Here is any index and is any point in domain. Obviously, , with being the gradient norm bound as before.333Another common assumption is when the variance of the gradient (i.e., ) is bounded. We made the more rigorous assumption here for ease of a simpler analysis. However, there is at most an extra term difference between these two assumptions due to the finite sum structure.
###### Theorem 6.
Assume with uniformly drawn from , is an arbitrary minimizer of . Set stepsize
γ=min⎧⎨⎩116nL,√DTn(∥Δ∥+LHLD2+2LHDG),(DTn2L2δ)13,(1Tn3L4)14⎫⎬⎭.
Assume being the average of epoch ending points of RandomShuffle. Then there is
F(¯x)−F(x∗)≤2D√nD(∥Δ∥+LHLD2+2LHDG)√T+O⎛⎝(nT)23δ13+(nT)34⎞⎠.
We have some discussion of this result:
Firstly, it is interesting to see what happens asymptotically. We can observe three levels of possible asymptotic (ignore ) convergence rates for RandomShuffle from this theorem: (1) In the most general situation, it converges as ; (2) when the functions are quadratic (i.e., ) and locally the variance vanishes (i.e., ), it converges as ; (3) when the functions are quadratic (i.e., ) and globally the variance vanishes (i.e., ), it converges as .
Secondly, we should notice that there is a known convergence rate of for Sgd. Also, we can further bound with . Therefore, when is relatively small and quadratic functions (i.e., ), our bound translates into form of , with constant in front of possibly smaller than Sgd by constant in certain parameter space.
One obvious limitation of this result is: when globally there is no variance of gradients, it fails to recover the rate of Gd. This indicates the possibility of tighter bounds using more involved analysis. We leave this possibility (either improving upon the dependence on under existence of noise, or recovering when there is no noise) as an open question.
### 7.3 Vanishing variance
Our previous results show that RandomShuffle converges faster than Sgd after a certain number of epochs. However, one may want to see whether it is possible to show faster convergence of RandomShuffle after only one epoch, or even within one epoch. In this section, we study a specialized class of strongly convex problems where RandomShuffle has faster convergence rate than Sgd after an arbitrary number of iterations.
We build our example based on a vanishing variance setting: for the optimal point . Moulines and Bach [19] show that when is strongly convex, Sgd converges linearly in this setting. For the construction of our example, we assume a slightly stronger situation: each component function is strongly convex.
Given pairs of positive numbers such that , a dimension and a point , we define a valid problem as a dimensional finite sum function where each component is strongly convex and has Lipschitz continuous gradient, with some minimizing all functions at the same time (which is equivalent to vanishing gradient). Let be the set of all such problems, called valid problems below. For a problem , let random variable be the result of running RandomShuffle from initial point for iterations with step size on problem . Similarly, let be the result of running Sgd from initial point for iterations with step size on problem .
We have the following result on the worst-case convergence rate of RandomShuffle and Sgd:
###### Theorem 7.
Given pairs of positive numbers such that , a dimension , a point and an initial set . Let be the set of valid problems. For step size and any , there is
maxP∈P,x0∈DR(x∗)E[∥XRS(T,x0,γ,P)−x∗∥2]≤maxP∈P,x0∈DR(x∗)E[∥XSGD(T,x0,γ,P)−x∗∥2].
This theorem indicates that RandomShuffle has a better worst-case convergence rate than Sgd after an arbitrary number of iterations under this noted setting.
## 8 Conclusion and open problems
A long-standing problem in the theory of stochastic gradient descent (Sgd) is to prove that RandomShuffle converges faster than the usual with-replacement Sgd. In this paper, we provide the first non-asymptotic convergence rate analysis for RandomShuffle. We show in particular that after epochs, RandomShuffle behaves strictly better than Sgd under strong convexity and second-order differentiability. The underlying introduction of dependence on into the bound plays an important role toward a better dependence on . We further improve the dependence on for sparse data settings, showing RandomShuffle’s advantage in such situations.
An important open problem remains: how (and to what extent) can we improve the bound such that RandomShuffle can be shown to be better than Sgd for smaller . A possible direction is to improve the dependence arising in our bounds, though different analysis techniques may be required. It is worth noting that for some special settings, this improvement can be achieved. (For example in the setting of Theorem 7, RandomShuffle is shown better than Sgd for any number of iterations.) However, showing RandomShuffle converges better in general, remains open.
## Appendix A Proof of Theorem 1
###### Proof.
Assume where is positive integer. Notate as the th iteration for th epoch. There is , , . Assume the permutation used in th epoch is . Define error term
For one epoch of RandomShuffle, We have the following inequality
∥∥xtn−x∗∥∥2 =∥∥xt0−x∗∥∥2−2γ⟨xt0−x∗,n∑i=1∇fσt(i)(xti−1)⟩+γ2∥∥ ∥∥n∑i=1∇fσt(i)(xti−1)∥∥ ∥∥2 ≤∥∥xt0−x∗∥∥2−2nγ[LμL+μ∥∥xt0−x∗∥∥2+1L+μ∥∥∇F(xt0)∥∥2] =(1−2nγLμL+μ)∥∥xt0−x∗∥∥2−(2nγ1L+μ−2γ2n2)∥∥∇F(xt0)∥∥2 (A.1)
where the inequality is due to Theorem 2.1.11 in [23].
Take the expectation of (A.1) over randomness of permutation , we have
E[∥∥xtn−x∗∥∥2] ≤(1−2nγLμL+μ)∥∥xt0−x∗∥∥2−(2nγ1L+μ−2n2γ2)∥∥∇F(xt0)∥∥2 −2γ⟨xt0−x∗,E[Rt]⟩+2γ2E[∥∥Rt∥∥2]. (A.2)
What remains to be done is to bound the two terms with dependence. Firstly, we give a bound on the norm of :
∥∥Rt∥∥ =∥∥ ∥∥n∑i=1∇fσt(i)(xti−1)−n∑i=1∇fσt(i)(xt0)∥∥ ∥∥ ≤n∑i=1∥∥∇fσt(i)(xti−1)−∇fσt(i)(xt0)∥∥ =n∑i=1∥∥ ∥∥i−1∑j=1(∇fσt(i)(xtj)−∇fσt(i)(xtj−1))∥∥ ∥∥ ≤n∑i=1i−1∑j=1L∥∥xtj−xtj−1∥∥ =n∑i=1i−1∑j=1L∥∥−γ∇fσt(j)(xtj−1)∥∥ ≤n∑i=1i−1∑j=1LγG =n(n−1)2γGL,
where the first and second inequality is by triangle inequality of vector norm, the third inequality is by definition of , the fourth inequality is by definition of . By this result, we have
(A.3)
For the term, we need more careful bound. Since the Hessian is constant for quadratic functions, we use to denote the Hessian matrix of function . We begin with the following decomposition:
Rt =n∑i=1[∇fσt(i)(xti−1)−∇fσt(i)(xt0)] =n∑i=1[Hσt(i)(xti−1−xt0)] =n∑i=1{Hσt(i)i−1∑j=1[−γ∇fσt(j)(xtj−1)]} =−γn∑i=1[Hσt(i)i−1∑j=1∇fσt(j)(xt0)]−γn∑i=1{Hσt(i)i−1∑j=1[∇fσt(j)(xtj−1)−∇fσt(j)(xt0)]} =At+Bt. (A.4)
Here we define random variables
At=−γn∑i=1[Hσt(i)i−1∑j=1∇fσt(j)(xt0)],
Bt=−γn∑i=1{Hσt(i)i−1∑j=1[∇fσt(j)(xtj−1)−∇fσt(j)(xt0)]}.
There is
E[At]=−n(n−1)2γEi≠j[Hi∇fj(xt0)], (A.5)
∥∥Bt∥∥ ≤γn∑i=1Hσt(i)i−1∑j=1∥∥∇fσt(j)(xtj−1)−∇fσt(j)(xt0)∥∥ ≤γn∑i=1Li−1∑j=1(j−1)γGL =γ2L2Gn∑i=1(i−1)(i−2)2 ≤12γ2L2Gn3. (A.6)
Using (A.4) and (A.5), we can decompose the inner product of and into:
−2γ⟨x
|
2022-05-16 09:42:18
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9179162383079529, "perplexity": 678.7470798298108}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662510097.3/warc/CC-MAIN-20220516073101-20220516103101-00001.warc.gz"}
|
https://zenodo.org/record/3465205/export/xd
|
Conference paper Open Access
# ASSESSMENT OF SEISMIC SITE RESPONSE BASED ON MICROTREMOR MEASUREMENTS
Ramos, André; Carrilho Gomes, Rui; Viana da Fonseca, António
### Dublin Core Export
<?xml version='1.0' encoding='utf-8'?>
<oai_dc:dc xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:oai_dc="http://www.openarchives.org/OAI/2.0/oai_dc/" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://www.openarchives.org/OAI/2.0/oai_dc/ http://www.openarchives.org/OAI/2.0/oai_dc.xsd">
<dc:creator>Ramos, André</dc:creator>
<dc:creator>Carrilho Gomes, Rui</dc:creator>
<dc:creator>Viana da Fonseca, António</dc:creator>
<dc:date>2018-06-21</dc:date>
<dc:description>Microtremor measurements is a cost-effective and non-invasive technique based on the ambient vibrations recordings of three components at ground surface. It is used to estimate the fundamental frequency of soils, f0, and
its amplification ratio, A0, based on the spectral ratio between the horizontal (H) and vertical (V) components of the measurements. In the scope of the H2020 EU funded, LIQUEFACT project, which addresses the mitigation of the risks associated with the liquefaction induced due to the seismic action, in situ geotechnical tests were performed, including microtremor measurements, in the Lisbon area in Portugal. Each measurement had an approximate duration of 40 minutes at 26 different sites, using a SYSCOM velocity sensor (MS2003+) connected to an SYSCOM acquisition unit (MR2002), considering an acquisition frequency of 400 Hz. The H/V curves at some points exhibit clear single peaks with large amplitude, which could be associated to sharp discontinuities corresponding to a profile with a single fairly homogeneous layer with a low value of the shear wave velocity contrasting a much higher value at a certain depth (“seismic bedrock”). The studied areas are characterized by peak frequencies ranging from 0.92 to 11.01 Hz and peak amplitudes ranging from 2.58 to 4.73. The linear equivalent model was used to assess seismic site effects, using Cross-Hole data to build the soil profile, along with strain-dependent curves from resonant column and cyclic torsional tests. The peak horizontal acceleration computed through numerical simulation was then compared with the frequency, the amplitude and the shape of HVSR curves to assess HVSR curves reliability in the prediction of seismic siteeffects.</dc:description>
<dc:identifier>https://zenodo.org/record/3465205</dc:identifier>
<dc:identifier>10.5281/zenodo.3465205</dc:identifier>
<dc:identifier>oai:zenodo.org:3465205</dc:identifier>
<dc:relation>info:eu-repo/grantAgreement/EC/H2020/700748/</dc:relation>
<dc:relation>doi:10.5281/zenodo.3465204</dc:relation>
<dc:rights>info:eu-repo/semantics/openAccess</dc:rights>
<dc:title>ASSESSMENT OF SEISMIC SITE RESPONSE BASED ON MICROTREMOR MEASUREMENTS</dc:title>
<dc:type>info:eu-repo/semantics/conferencePaper</dc:type>
<dc:type>publication-conferencepaper</dc:type>
</oai_dc:dc>
2
3
views
|
2019-12-12 22:28:13
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.23257514834403992, "perplexity": 4584.281652272674}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575540547165.98/warc/CC-MAIN-20191212205036-20191212233036-00531.warc.gz"}
|
https://brilliant.org/problems/fun-of-quadratic/
|
Algebra Level 2
Such that $$\alpha$$ and $$\beta$$ are roots
and geometric mean is $$\sqrt{3}$$ harmonic mean is 1.5
|
2017-03-28 02:40:13
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8294661641120911, "perplexity": 670.9577315614617}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-13/segments/1490218189589.37/warc/CC-MAIN-20170322212949-00115-ip-10-233-31-227.ec2.internal.warc.gz"}
|
https://lenithemonster.com/c5ni1/why-are-units-important-in-physics-66a15f
|
# why are units important in physics
We're sorry, but in order to log in and use all the features of this website, you will need to enable JavaScript in your browser. The units used to measure the fundamental quantities are called fundamental units or basic units; that is the fundamental units are the units of length, mass, time, electric current, temperature, luminous intensity and amount of substance. to the profound … If the, limit simply states that it is 15,000, without any units, a 25,000-pound semi-truck might, question whether or not they can cross the bridge. From the prosaic . Image credit: Fornax Space Missions. Mickypoo. In order to test and measure physical quantities we need to define some standard measures which we all agree on. The units and their explanation will help the students to get an in-depth idea about the respective topics. Science is another area where units of measurement are critical. Don't want to keep filling in name and email whenever you want to comment? Without units much of our work as scientists would be meaningless. So it is important to measure certain things right, distance, time, and accuracy are all great things to measure.By measuring these things or in other words, by taking these measurements we can better … It’s our understanding of the science that allows us to build planes that can fly in the first place. From its station about the red planet, the Mars Climate Orbiter was to relay signals from the Mars Polar Lander, which is scheduled to touch down on Mars next month. Physics helps us to understand how the world around us works, from can openers, light bulbs and cell phones to muscles, lungs and brains; from paints, piccolos and pirouettes to cameras, cars and cathedrals; from earthquakes, tsunamis and hurricanes to quarks, DNA and black holes. Unless specified, this website is not in any way affiliated with any of the institutions featured. It is believed that the craft came dangerously close to the atmosphere of Mars, where it presumably burned and broke into pieces. For example: If you say that, the volume of your notebook is 25, it provides no exact meaning because it could be 25 mm3 or 25 cm3 or 25 dm3 and many more. 0 0. In physics, work is performed only when an object is moved in the direction of an applied … Physics Introduction to Physics Measurement and Units. Because of failure to convert english measures to metric values caused the loss of the mars climate orbiter, It crashed cause they failed to convert the English units to an S.I unit which lead to wrong calculations. The spacecraft was to be a key part of the exploration of the planet. Units are an essential part of the language we use. Organizing and providing relevant educational content, resources and information for students. We need to express our thoughts clearly and units give meaning to the numbers we measure and calculate. A measurement is the action of measuring something, or some amount of stuff. Physics - Measurement Units - The following table illustrates the major measuring units in physics − . You could have $$\text{12}$$ $$\text{mL}$$ of water, $$\text{12}$$ litres of water, or even $$\text{12}$$ bottles of water. It can also be easy to mix up information if there is a lack of units. It is always recommended to visit an institution's official website for more information. Why are standardized units of measurement important? If you go on to become a scientist, you will take measurements often. Units must be … They will make you ♥ Physics. We so often take measurement for granted and don't even realize that it's important, but we should appreciate the huge importance and role that measurement plays in our lives. Course Hero is not sponsored or endorsed by any college or university. So why should you study physics? The units are given in points to helps students retain the concept for a longer period of time. Physical Quantities With Their Symbols And Units in SI & c.g.s System To explain the natural phenomena we take the help of physics. Explain the importance of units by sharing an example of a situation that could go wrong because there are no units. For the Love of Physics - Walter Lewin - May 16, 2011 - Duration: 1:01:26. Ask Question + 100. A small difference between the two values caused the spacecraft to approach Mars at too low an altitude and the craft is thought to have smashed into the planet’s atmosphere and was destroyed. This is a lesson from the tutorial, Skills for Science and you are encouraged to log in or register, so that you can track your progress. Within this system, the most commonly used set of units in physics are meters, kilograms and seconds. The International System of Units (abbreviated as SI Units from its French name, Système International d'unités) is an internationally agreed metric system of units of measurement that has been in existence since 1960. That’s why we … Mechanics, branch of physics concerned with the motion of bodies under the action of forces, including the special case in which a body remains at rest. The Mars Climate Orbiter, a key craft in the space agency’s exploration of the red planet, vanished on 23 September after a 10 month journey. I think you may be missing some information in this question. Get your answers by asking now. cross because it would be too heavy and could cause the bridge to collapse. Why are units important? Historically, mechanics was among the first of the exact sciences to be developed. Register or login to receive notifications when there's a reply to your comment or update on this information. A physical quantity is anything that can be measured and represented by a number.Base units are the fundamental measures that are used to describe all physical quantities. by sharing an example of a situation that could go wrong because there are no units. If you're talking about what a number relates to, e.g. Work, in terms of a physics related definition, has quite a different meaning than the type of work about which we normally think. The system of measurement by which we now measure physical quantities is called the International System. Physics 01.04 - Google Docs.pdf - Why should we use units for our numbers Why are units important Explain the importance of units by sharing an example, Why should we use units for our numbers? Physics is interesting. Depending on which units we use, the numbers are … In classical, non-relativistic physics, it is a scalar quantity (often denoted by the symbol ) and, like length, mass, and charge, is usually described as a fundamental quantity.Time can be combined mathematically with other physical quantities to derive other concepts such as motion, kinetic energy and time … This modified article is licensed under a CC BY-NC-SA 4.0 license. Why are measurements so important in physics class? Lv 6. Your browser seems to have Javascript disabled. Why are units important? I know from experience how key it is to lay a foundation for remembering these units of measurement early on. Units must be specified when expressing physical quantities. Join Yahoo Answers and get … In your physics class, you most likely won’t be testing a major new hypothesis. However, you should take measurements as carefully as you can. $$\overset{\underset{\mathrm{def}}{}}{=}$$, NASA: Human error caused loss of Mars orbiter November 10, 1999. Units are an essential part of the language we use. Units of Measurement Units provide specific meaning to the magnitude of a substance. N (newtons) or mph then it is self explanatory. Discuss other possible situations where using the incorrect set of units can be to your disadvantage or even dangerous. This standardized number tells you the amount of electricity … It may be divided into three branches: statics, kinematics, and kinetics. Multiplication of physical quantities creates new units. Check the units of based articles in Physics given below. Depending on which units we use, the numbers are different (e.g. SI units are important because these are common to the people of the entire world, so that people from different countries can communicate with each other conveniently regarding business and science. Read the following extract from CNN News 30 September 1999 and answer the questions below. Explanation: If everything was made bespoke and all components manufactured by different individuals then it would likely be uncommon for things to fit well. Answer in your own words. Time in physics is defined by its measurement: time is what a clock reads. Accuracy is important for the acceptable certainty of results obtained from point of view of expected consequences and theory targets. Save my name, email, and website in this browser for the next time I comment. It makes systematic use of prefixes, making it easy to express very large or very small numbers. We already talked about physics’s importance in mining and using fuel, which powers our planes, trains, and automobiles. Explanation is done in the section below. Answer: We should use units for our numbers because it helps to avoid mistakes in computation, and when we are doing a graph, it helps to visualize better the numbers. "Units and measurements". It's an effective way to diversify your savings while increasing returns and reducing the risk of loss. Without it, they simply wouldn’t move. Depending on which units we use, the numbers are different. All names, acronyms, logos and trademarks displayed on this website are those of their respective owners. I also couldn't find the translation of unités de compte in English, if it has one. Explanation: Part 1: Physics is the study of nature, natural phenomena, force, matter and all the types of energies. “The root cause of the loss of the spacecraft was a failed translation of English units into metric units and a segment of ground-based, navigation-related mission software,” said Arthur Stephenson, chairman of the investigation board. The history of the metre and the kilogram, two of the fundamental units on which the system is based, goes back to the French Revolution. The units of account make it possible to invest money on the stock markets without the need to directly acquire assets, be they stocks or bonds. Modern Public School & College, Abbottabad, Modern Public School & College, Abbottabad • ECO 101, Customer-driven_misconduct_How_competiti.pdf, Employment_relations_in_SMEs_Market-driv.pdf, School_of_Economics_and_Commercial_Law_M.pdf, Customer-Driven_Content_Recommendation_O.pdf, Modern Public School & College, Abbottabad • ECO MISC. That’s why from the very first time I introduce measurements to my students I always emphasize the importance of not forgetting to write the units. Further importance of precision is requested if the measurements … When you purchase an everyday light bulb it is typically labeled as 40, 60 or 100 watts. Top Answer . . The Importance of Units . Why units are so important… Posted on September 8, 2010 by admin — No Comments ↓ Mars Orbiter’s Demise Avoidable. Register or login to make commenting easier. Then every quantity in physics would just be unitless, and there would be no need for keeping track of them. The system itself is based on the concept of seven fundamental base units of quantity, from which all other units of … Suppose that there is a set of really natural units: a truly fundamental amount of length that we could count all lengths in, a fundamental amount of time, a fundamental amount of electric charge and so forth -- "God's units", if you will. Examples of derived units include m2… Units of measurement provide standard to identify measurement of a physical quantify. Still have questions? A NASA investigation released Wednesday confirmed earlier reports that … It is important to use units because it helps determine what specifically we are talking about. Recommended for you We need to express our thoughts clearly and units give meaning to the numbers we measure and calculate. An investigation board concluded that NASA engineers failed to convert English measures of rocket thrusts to newton, a metric system measuring rocket force. But a good accuracy is not always sufficient to get good measurements; precision is also requested to avoid big divergences about the quantitative estimate from real situation. Look for examples at home, at school, at a hospital, when travelling and in a shop. One English pound of force equals $$\text{4.45}$$ newtons. This is a list of important publications in physics, organized by field. Why are units important? why is the physics important in the real life application? Imagine that you are baking a cake, but the units, like grams and millilitres, for the flour, milk, sugar and baking powder are not specified! But a unit of measure that you might already be familiar with is a watt of electricity. and why we need to find the volume and Density of a rectangular prism, marble, thin rod? This situation became … For example, if there is a, weight limit for a bridge, there need to be units in order to determine what can cross it. Source(s): https://owly.im/a9WG9. But the role of physics in both our personal transportation and shipping goes even deeper than that. The Importance of Units. 0 0. Energy and work (energy as defined as the ability to do work) occupy an important part of our ordinary lives, and are among the most important topics in physics. Answer (1 of 6): There are many reasons why measurements are so important. Why did the Mars Climate Orbiter crash? A system of units is particularly important in physics as the different units can combine (in multiplying forces) or cancel out (in dividing forces) to form a … Important Units In Physics. You want these measurements to help support your theories. Click to see full answer Similarly, it is asked, what is the importance of measurement? It is important to note that although the cross section has the same units as area, the cross section may not necessarily correspond to the actual physical size of the target given by other forms of measurement. A base unit (also referred to as a fundamental unit) is a unit adopted for measurement of a base quantity.A base quantity is one of a conventionally chosen subset of physical quantities, where no quantity in the subset can be expressed in terms of the others.The SI units, or Systeme International d'unites which consists of the metre, kilogram, second, ampere, Kelvin, mole and … CAPE CANAVERAL — The sort of mistake usually found in grade-school math homework proved fatal to a $125 million NASA Mars probe. Explain the importance of units. On a basic level, measurements will fall into a few categories - temperature, length, volume, weight and area. “We should use units for our numbers so that we have an idea of how much we actually have, and, United States Declaration of Independence. Why should we use units for our numbers? Mars Climate Orbiter. We need to express our thoughts clearly and units give meaning to the numbers we calculate. by Michael Cabbage of The Sentinel Staff. Physics enable us to understand logically as well as mathematically all natural phenomena. There are lots of measurements in chemistry and physics that you will learn about. Lectures by Walter Lewin. Failure to convert English measures to metric values caused the loss of the Mars Climate Orbiter, a spacecraft that smashed into the planet instead of reaching a safe orbit, a NASA investigation concluded Wednesday. You have to know six of them for the science GCSEs, as described below: All other physical quantities can be represented by combinations of base units which are called derived units. 3.8 m and 3800 mm actually represent the same length). The units that scientists use all over the world are standardised in the Système Internationale d'Unités - SI units. Explain the importance of units by sharing an example of a situation that could go wrong because there are no units.-It is important to use units because it helps determine what specifically we are talking about.It can also be easy to mix up information if there is a lack of units. It is not uncommon for the actual cross-sectional area of a scattering object to be much larger or smaller than the cross section … 1 decade ago. For example if you have $$\text{12}$$ water, it means nothing. Without units much of our work as scientists would be meaningless. A 15,000-kilogram weight limit is safe for, the truck to cross, whereas a 15,000-pound weight limit would not be safe for the truck to. Without units much of our work as scientists would be meaningless. Some reasons why a particular publication might be regarded as important: Topic creator – A publication that created a new topic; Breakthrough – A publication that changed scientific knowledge significantly; Influence – A publication which has significantly influenced the world or has had a massive … 1 Answer Tony B Oct 19, 2015 It is all to do with repeatability. An effective way to diversify your savings while increasing returns and reducing the risk of loss the volume and of. What a clock reads be testing a major new hypothesis and measure physical is., resources and information for students both our personal transportation and shipping goes deeper! Unitless, and website in this browser for the next time i comment learn about NASA engineers to. Units and their explanation will help the students to get an in-depth idea about the respective topics Orbiter! 60 or 100 watts be unitless, and website in this browser for the next i! Time in physics is defined by its measurement: time is what a number relates to e.g. And could cause the bridge to collapse 1999 and Answer the questions below under a CC BY-NC-SA 4.0.! Are lots of measurements in chemistry and physics that you might already be familiar is... And Answer the questions below one English pound of force equals \ ( \text { }! Exploration of the exact sciences to be developed at a hospital, when and. Given below respective topics students to get an in-depth idea about the topics! Look for examples at home, at a hospital, when travelling and in a shop, thin rod,! To visit an institution 's official website for more information physics class, you will measurements... We use, the most commonly used set of units in physics, organized by field weight area! Measurement units provide specific meaning to the numbers are different ( e.g CANAVERAL — the sort of mistake found... Prism, marble, thin rod publications in physics the concept for a longer period of time length ) important. Measures which we now measure physical quantities is called the International system even deeper than that save my,! Discuss other possible situations where using the incorrect set of units can be to disadvantage. M and 3800 mm actually represent the same length ) measurement provide standard to identify measurement of a substance phenomena. ) newtons will fall into a few categories - temperature, length,,!, you should take measurements as carefully as you can is believed the... Of prefixes, making it easy to express our thoughts clearly and units give meaning to the of. Become a scientist, you will take measurements as carefully as you can when 's! No Comments ↓ Mars Orbiter ’ s why we need to express very large very. Organizing and providing relevant educational content, resources and information for students measurements carefully! Affiliated with any of the language we use measurement is the action of measuring something, some... Length, volume, weight and area, if it has one examples at home, at school at... Website in this question t move you 're talking about what a number relates,! A measurement is the physics important in the real life application the planet every. Three branches: statics, kinematics, and there would be meaningless as 40, 60 100. No need for keeping track of them News 30 September 1999 and Answer the below! Website for more information in-depth idea about the respective topics 's a reply to your disadvantage even. Too heavy and could cause the bridge to collapse situation that could go wrong there... Measuring something, or some amount of stuff transportation and shipping goes even deeper that! Read the following extract from CNN News 30 September 1999 and Answer the questions below new. Important… Posted on September 8, 2010 by admin — no Comments ↓ Mars ’... A rectangular prism, marble, thin rod metric system measuring rocket force sponsored or endorsed by any college university! Cnn News 30 September 1999 and Answer the questions below usually found in grade-school math proved! Lack of units by sharing an example of a situation that could go wrong because there are no.! That you might already be familiar with is a lack of units in physics just. It presumably burned and broke into pieces you purchase an everyday light bulb it is to lay a for... Three branches: statics, kinematics, and there would be meaningless college or.. Investigation board concluded that NASA engineers failed to convert English measures of rocket thrusts to newton, metric... On September 8, 2010 by admin — no Comments ↓ Mars Orbiter ’ our. Prism, marble, thin rod to understand logically as well as mathematically all natural phenomena system measuring rocket.... You why are standardized units of measurement by which we now measure physical quantities is called International! Of an applied … important units in physics, organized by field no units a lack units. Register or login to receive notifications when there 's a reply to your comment update. Pound of force equals \ ( \text { 4.45 } \ ) newtons be too heavy and cause. Website in this question burned and broke into pieces systematic use of,! Information if there is a lack of units in physics, work is performed only when an object is in. Or endorsed by any college or university, weight and area sciences to be developed unit of measure you. A longer period of time those of their respective owners and seconds and calculate or 100 watts helps!, work is performed only when an object is moved in the first of the institutions featured to keep in! Same length ), marble, thin rod Science is another area where units of by! It has one are no units way affiliated with any of the exploration of the institutions featured allows... Because it helps determine what specifically we are talking about and units meaning... Purchase an everyday light bulb it is self explanatory always recommended to visit an institution 's official website for information! Help the students to get an in-depth idea about the respective topics too heavy and cause. Is called the International system hospital, when travelling and in a shop force equals \ ( {., volume, weight and area reducing the risk of loss could find... Translation of unités de compte in English, if it has one \text { 4.45 } \ ) newtons or! Tony B Oct 19, 2015 it is typically labeled as 40, 60 100... Of rocket thrusts to newton, a metric system measuring rocket force when travelling and in shop... What a number relates to, e.g logos and trademarks displayed on this information Tony B 19... Fall into a few categories - temperature, length, volume, and... … time in why are units important in physics, organized by field expected consequences and theory targets want to keep filling in and! Do with repeatability, making it easy to express very large or very small.! Some information in this question math homework proved fatal to a$ 125 million NASA Mars.... To keep filling in name and email whenever you want to keep filling in name and email you. Situation that could go wrong because there are no units however, will. If you go on to become a scientist, you will learn about to a \$ 125 NASA. Measurement provide standard to identify measurement of a physical quantify in name and email you! To do with repeatability bulb it is all to do with repeatability — no Comments ↓ Mars ’. By sharing an example of a rectangular prism why are units important in physics marble, thin rod large or very small numbers even. Which we now measure physical quantities we need to find the translation of unités de compte in,! The next time i comment language we use, the numbers are different ( e.g college university... A rectangular prism, marble, thin rod of physics in both our personal transportation shipping. Types of energies in points to helps students retain the concept for a longer period of time the! Burned and broke into pieces when an object is moved in the direction of applied. Is to lay a foundation for remembering these units of measurement provide standard to identify measurement of a.! Questions below disadvantage or even dangerous Tony B Oct 19, 2015 it is lay... All natural phenomena why are units important in physics NASA engineers failed to convert English measures of rocket thrusts to newton, a system! 1 Answer Tony B Oct 19, 2015 it is self explanatory homework proved fatal to a 125! That ’ s Demise Avoidable the sort why are units important in physics mistake usually found in grade-school math homework fatal! Important units in physics given below physics would just be unitless, and kinetics Answer the questions below units... Situation that could go wrong because there are lots of measurements in chemistry physics. Points to helps students retain the concept for a longer period of time comment or on... Given in points to helps students retain the concept for a longer period of time the volume and Density a!: time is what a clock reads its measurement: time is what a relates. Orbiter ’ s why we need to find the translation of unités de compte in English, if it one! Helps determine what specifically we are talking about kilograms and seconds measurement by which all! Length, volume, weight and area, natural phenomena meaning to the we! To, e.g of electricity bridge to collapse time is what a clock reads a of. School, at school, at school, at school, at school, at school at... By admin — no Comments ↓ Mars Orbiter ’ s why we to! A longer period of time then it is typically labeled as 40, 60 or 100 watts Mars! And Density of a substance sharing an example of a situation that could go wrong because are. Transportation and shipping why are units important in physics even deeper than that and area ’ s Demise Avoidable if it has.!
|
2021-04-20 13:17:24
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.36597007513046265, "perplexity": 978.3038395677834}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618039398307.76/warc/CC-MAIN-20210420122023-20210420152023-00334.warc.gz"}
|
https://everything.explained.today/Proxy_(climate)/
|
# Proxy (climate) explained
In the study of past climates ("paleoclimatology"), climate proxies are preserved physical characteristics of the past that stand in for direct meteorological measurements[1] and enable scientists to reconstruct the climatic conditions over a longer fraction of the Earth's history. Reliable global records of climate only began in the 1880s, and proxies provide the only means for scientists to determine climatic patterns before record-keeping began.
A large number of climate proxies have been studied from a variety of geologic contexts. Examples of proxies include stable isotope measurements from ice cores, growth rates in tree rings, species composition of sub-fossil pollen in lake sediment or foraminifera in ocean sediments, temperature profiles of boreholes, and stable isotopes and mineralogy of corals and carbonate speleothems. In each case, the proxy indicator has been influenced by a particular seasonal climate parameter (e.g., summer temperature or monsoon intensity) at the time in which they were laid down or grew. Interpretation of climate proxies requires a range of ancillary studies, including calibration of the sensitivity of the proxy to climate and cross-verification among proxy indicators.[2]
Proxies can be combined to produce temperature reconstructions longer than the instrumental temperature record and can inform discussions of global warming and climate history. The geographic distribution of proxy records, just like the instrumental record, is not at all uniform, with more records in the northern hemisphere.[3]
## Proxies
See main article: Proxy (statistics). In science, it is sometimes necessary to study a variable which cannot be measured directly. This can be done by "proxy methods," in which a variable which correlates with the variable of interest is measured, and then used to infer the value of the variable of interest. Proxy methods are of particular use in the study of the past climate, beyond times when direct measurements of temperatures are available.
Most proxy records have to be calibrated against independent temperature measurements, or against a more directly calibrated proxy, during their period of overlap to estimate the relationship between temperature and the proxy. The longer history of the proxy is then used to reconstruct temperature from earlier periods.
### Ice cores
#### Drilling
Ice cores are cylindrical samples from within ice sheets in the Greenland, Antarctic, and North American regions.[4] [5] First attempts of extraction occurred in 1956 as part of the International Geophysical Year. As original means of extraction, the U.S. Army's Cold Regions Research and Engineering Laboratory used an 80feet-long modified electrodrill in 1968 at Camp Century, Greenland, and Byrd Station, Antarctica. Their machinery could drill through of ice in 40–50 minutes. From 1300 to 3000feet in depth, core samples were in diameter and 10 to 20feet long. Deeper samples of 15 to 20feet long were not uncommon. Every subsequent drilling team improves their method with each new effort.[6]
#### Proxy
The ratio between the 16O and 18O water molecule isotopologues in an ice core helps determine past temperatures and snow accumulations.[4] The heavier isotope (18O) condenses more readily as temperatures decrease and falls more easily as precipitation, while the lighter isotope (16O) needs colder conditions to precipitate. The farther north one needs to go to find elevated levels of the 18O isotopologue, the warmer the period. [7]
In addition to oxygen isotopes, water contains hydrogen isotopes – 1H and 2H, usually referred to as H and D (for deuterium) – that are also used for temperature proxies. Normally, ice cores from Greenland are analyzed for δ18O and those from Antarctica for δ-deuterium. Those cores that analyze for both show a lack of agreement. (In the figure, δ18O is for the trapped air, not the ice. δD is for the ice.)
Air bubbles in the ice, which contain trapped greenhouse gases such as carbon dioxide and methane, are also helpful in determining past climate changes.[4]
From 1989 to 1992, the European Greenland Ice Core Drilling Project drilled in central Greenland at coordinates 72° 35' N, 37° 38' W. The ices in that core were 3840 years old at a depth of 770 m, 40,000 years old at 2521 m, and 200,000 years old or more at 3029 m bedrock.[8] Ice cores in Antarctica can reveal the climate records for the past 650,000 years.[4]
Location maps and a complete list of U.S. ice core drilling sites can be found on the website for the National Ice Core Laboratory: http://icecores.org/[5]
### Tree rings
See main article: Dendroclimatology.
Dendroclimatology is the science of determining past climates from trees, primarily from properties of the annual tree rings. Tree rings are wider when conditions favor growth, narrower when times are difficult. Other properties of the annual rings, such as maximum latewood density (MXD) have been shown to be better proxies than simple ring width. Using tree rings, scientists have estimated many local climates for hundreds to thousands of years previous. By combining multiple tree-ring studies (sometimes with other climate proxy records), scientists have estimated past regional and global climates (see Temperature record of the past 1000 years).
### Fossil leaves
Paleoclimatologists often use leaf teeth to reconstruct mean annual temperature in past climates, and they use leaf size as a proxy for mean annual precipitation.[9] In the case of mean annual precipitation reconstructions, some researchers believe taphonomic processes cause smaller leaves to be overrepresented in the fossil record, which can bias reconstructions. However, recent research suggests that the leaf fossil record may not be significantly biased toward small leaves.[10] New approaches retrieve data such as content of past atmospheres from fossil leaf stomata and isotope composition, measuring cellular CO2 concentrations. A 2014 study was able to use the carbon-13 isotope ratios to estimate the CO2 amounts of the past 400 million years, the findings hint at a higher climate sensitivity to CO2 concentrations.[11]
### Boreholes
Borehole temperatures are used as temperature proxies. Since heat transfer through the ground is slow, temperature measurements at a series of different depths down the borehole, adjusted for the effect of rising heat from inside the Earth, can be "inverted" (a mathematical formula to solve matrix equations) to produce a non-unique series of surface temperature values. The solution is "non-unique" because there are multiple possible surface temperature reconstructions that can produce the same borehole temperature profile. In addition, due to physical limitations, the reconstructions are inevitably "smeared", and become more smeared further back in time. When reconstructing temperatures around 1500 AD, boreholes have a temporal resolution of a few centuries. At the start of the 20th Century, their resolution is a few decades; hence they do not provide a useful check on the instrumental temperature record.[12] [13] However, they are broadly comparable.[3] These confirmations have given paleoclimatologists the confidence that they can measure the temperature of 500 years ago. This is concluded by a depth scale of about 492 feet (150 meters) to measure the temperatures from 100 years ago and 1,640 feet (500 meters) to measure the temperatures from 1,000 years ago.[14]
Boreholes have a great advantage over many other proxies in that no calibration is required: they are actual temperatures. However, they record surface temperature not the near-surface temperature (1.5 meter) used for most "surface" weather observations. These can differ substantially under extreme conditions or when there is surface snow. In practice the effect on borehole temperature is believed to be generally small. A second source of error is contamination of the well by groundwater may affect the temperatures, since the water "carries" more modern temperatures with it. This effect is believed to be generally small, and more applicable at very humid sites. It does not apply in ice cores where the site remains frozen all year.
More than 600 boreholes, on all continents, have been used as proxies for reconstructing surface temperatures. The highest concentration of boreholes exist in North America and Europe. Their depths of drilling typically range from 200 to greater than 1,000 meters into the crust of the Earth or ice sheet.[14]
A small number of boreholes have been drilled in the ice sheets; the purity of the ice there permits longer reconstructions. Central Greenland borehole temperatures show "a warming over the last 150 years of approximately 1°C ± 0.2°C preceded by a few centuries of cool conditions. Preceding this was a warm period centered around A.D. 1000, which was warmer than the late 20th century by approximately 1°C." A borehole in the Antarctica icecap shows that the "temperature at A.D. 1 [was] approximately 1°C warmer than the late 20th century".[15]
Borehole temperatures in Greenland were responsible for an important revision to the isotopic temperature reconstruction, revealing that the former assumption that "spatial slope equals temporal slope" was incorrect.
### Corals
Ocean coral skeletal rings, or bands, also share paleoclimatological information, similarly to tree rings. In 2002, a report was published on the findings of Drs. Lisa Greer and Peter Swart, associates of University of Miami at the time, in regard to stable oxygen isotopes in the calcium carbonate of coral. Cooler temperatures tend to cause coral to use heavier isotopes in its structure, while warmer temperatures result in more normal oxygen isotopes being built into the coral structure. Denser water salinity also tends to contain the heavier isotope. Greer's coral sample from the Atlantic Ocean was taken in 1994 and dated back to 1935. Greer recalls her conclusions, "When we look at the averaged annual data from 1935 to about 1994, we see it has the shape of a sine wave. It is periodic and has a significant pattern of oxygen isotope composition that has a peak at about every twelve to fifteen years." Surface water temperatures have coincided by also peaking every twelve and a half years. However, since recording this temperature has only been practiced for the last fifty years, correlation between recorded water temperature and coral structure can only be drawn so far back.[16]
### Pollen grains
Pollen can be found in sediments. Plants produce pollen in large quantities and it is extremely resistant to decay. It is possible to identify a plant species from its pollen grain. The identified plant community of the area at the relative time from that sediment layer, will provide information about the climatic condition. The abundance of pollen of a given vegetation period or year depends partly on the weather conditions of the previous months, hence pollen density provides information on short-term climatic conditions.[17] The study of prehistoric pollen is palynology.
### Dinoflagellate cysts
Dinoflagellates occur in most aquatic environments and during their life cycle, some species produce highly resistant organic-walled cysts for a dormancy period when environmental conditions are not appropriate for growth. Their living depth is relatively shallow (dependent upon light penetration), and closely coupled to diatoms on which they feed. Their distribution patterns in surface waters are closely related to physical characteristics of the water bodies, and nearshore assemblages can also be distinguished from oceanic assemblages. The distribution of dinocysts in sediments has been relatively well documented and has contributed to understanding the average sea-surface conditions that determine the distribution pattern and abundances of the taxa ([18]). Several studies, including [19] and [20] have compiled box and gravity cores in the North Pacific analyzing them for palynological content to determine the distribution of dinocysts and their relationships with sea surface temperature, salinity, productivity and upwelling. Similarly,[21] and [22] use a box core at 576.5 m of water depth from 1992 in the central Santa Barbara Basin to determine oceanographic and climatic changes during the past 40 kyr in the area.
### Lake and ocean sediments
Similar to their study on other proxies, paleoclimatologists examine oxygen isotopes in the contents of ocean sediments. Likewise, they measure the layers of varve (deposited fine and coarse silt or clay)[23] laminating lake sediments. Lake varves are primarily influenced by:
• Summer temperature, which shows the energy available to melt seasonal snow and ice
• Winter snowfall, which determines the level of disturbance to sediments when melting occurs
• Rainfall[24]
Diatoms, foraminifera, radiolarians, ostracods, and Coccolithophores are examples of biotic proxies for lake and ocean conditions that are commonly used to reconstruct past climates. The distribution of the species of these and other aquatic creatures preserved in the sediments are useful proxies. The optimal conditions for species preserved in the sediment act as clues. Researchers use these clues to reveal what the climate and environment was like when the creatures died.[25] The oxygen isotope ratios in their shells can also be used as proxies for temperature.[26]
### Water isotopes and temperature reconstruction
Ocean water is mostly H216O, with small amounts of HD16O and H218O, where D denotes deuterium, i.e. hydrogen with an extra neutron. In Vienna Standard Mean Ocean Water (VSMOW) the ratio of D to H is 155.76x10−6 and O-18 to O-16 is 2005.2x10−6. Isotope fractionation occurs during changes between condensed and vapour phases: the vapour pressure of heavier isotopes is lower, so vapour contains relatively more of the lighter isotopes and when the vapour condenses the precipitation preferentially contains heavier isotopes. The difference from VSMOW is expressed as δ18O = 1000‰ $\times \left(\frac - 1\right)$; and a similar formula for δD. δ values for precipitation are always negative.[27] The major influence on δ is the difference between ocean temperatures where the moisture evaporated and the place where the final precipitation occurred; since ocean temperatures are relatively stable the δ value mostly reflects the temperature where precipitation occurs. Taking into account that the precipitation forms above the inversion layer, we are left with a linear relation:
δ 18O = aT + b
This is empirically calibrated from measurements of temperature and δ as a = 0.67 ‰/°C for Greenland and 0.76 ‰/°C for East Antarctica. The calibration was initially done on the basis of spatial variations in temperature and it was assumed that this corresponded to temporal variations.[28] More recently, borehole thermometry has shown that for glacial-interglacial variations, a = 0.33 ‰/°C,[29] implying that glacial-interglacial temperature changes were twice as large as previously believed.
A study published in 2017 called the previous methodology to reconstruct paleo ocean temperatures 100 million years ago into question, suggesting it has been relatively stable during that time, much colder.[30]
### Membrane lipids
A novel climate proxy obtained from peat (lignites, ancient peat) and soils, membrane lipids known as glycerol dialkyl glycerol tetraether (GDGT) is helping to study paleo environmental factors, which control relative distribution of differently branched GDGT isomers. The study authors note, "These branched membrane lipids are produced by an as yet unknown group of anaerobic soil bacteria."[31], there is a decade of research demonstrating that in mineral soils the degree of methylation of bacteria (brGDGTs), helps to calculate mean annual air temperatures. This proxy method was used to study the climate of the early Palaeogene, at the Cretaceous–Paleogene boundary, and researchers found that annual air temperatures, over land and at mid-latitude, averaged about 23–29 °C (± 4.7 °C), which is 5–10 °C higher than most previous findings.[32] [33]
### Pseudoproxies
The skill of algorithms used to combine proxy records into an overall hemispheric temperature reconstruction may be tested using a technique known as "pseudoproxies". In this method, output from a climate model is sampled at locations corresponding to the known proxy network, and the temperature record produced is compared to the (known) overall temperature of the model.
## Notes and References
1. Web site: What Are "Proxy" Data? National Centers for Environmental Information (NCEI) formerly known as National Climatic Data Center (NCDC). www.ncdc.noaa.gov. 2017-10-12.
2. http://www.grida.no/publications/other/ipcc_tar/?src=/climate/ipcc_tar/wg1/068.htm "Climate Change 2001: 2.3.2.1 Palaeoclimate proxy indicators."
3. http://unisci.com/stories/20011/0227012.htm "Borehole Temperatures Confirm Global Warming Pattern."
4. Strom, Robert. Hot House. p. 255
5. http://nicl.usgs.gov/coresite.htm "Core Location Maps."
6. Vardiman, Larry, Ph.D. Ice Cores and the Age of the Earth. p. 9-13
7. http://earthobservatory.nasa.gov/Features/Paleoclimatology_OxygenBalance/oxygen_balance.php "Paleoclimatology: the Oxygen Balance."
8. http://www.ncdc.noaa.gov/paleo/icecore/greenland/summit/document/ "The GRIP Coring Effort."
9. 92. Dana L. Royer, Peter Wilf, David A. Janesko, Elizabeth A. Kowalski and David L. Dilcher. 7. 1141–1151. 1 July 2005. 10.3732/ajb.92.7.1141. 21646136. Correlations of climate and plant ecology to leaf size and shape: potential proxies for the fossil record. American Journal of Botany. free.
10. 34. Eric R. Hagen, Dana Royer, Ryan A. Moye and Kirk R. Johnson. 1. 43–48. 9 January 2019. 10.2110/palo.2018.091. No large bias within species between the reconstructed areas of complete and fragmented fossil leaves. PALAIOS. 2019Palai..34...43H. 133599753.
11. --> New constraints on atmospheric CO2 concentration for the Phanerozoic]. PDF. 31. Peter J. Franks, Dana Royer, David J. Beerling, Peter K. Van de Water, David J. Cantrill, Margaret M. Barbour and Joseph A. Berry. 13. 4685–4694. 16 July 2014. 10.1002/2014GL060457. Geophysical Research Letters. 2014GeoRL..41.4685F. 10211.3/200431. free. 31 July 2014. https://web.archive.org/web/20140812061706/http://droyer.web.wesleyan.edu/Franks_et_al_2014_GRL_new_stomatal-CO2_proxy.pdf. 12 August 2014. dead.
12. Book: Surface Temperature Reconstructions for the Last 2,000 Years. 10.17226/11676. 2006. 978-0-309-10225-4. Council. National Research. Studies. Division on Earth Life. Climate. Board on Atmospheric Sciences and. Committee On Surface Temperature Reconstructions For The Last 2. 000 Years. 10.1.1.178.5968.
13. Pollack. H. N.. Huang. S.. Shen. P. Y.. Temperature trends over the past five centuries reconstructed from borehole temperatures. Nature. 2000. 403. 6771. 756–758. 10.1038/35001556. 10693801. 2000Natur.403..756H. 2027.42/62610. 4425128. free.
14. http://archives.cnn.com/2000/NATURE/02/17/boreholes.enn/ Environmental News Network staff. "Borehole temperatures confirm global warming."
15. http://www.nap.edu/openbook.php?record_id=11676&page=81 BOREHOLES IN GLACIAL ICE
16. http://earthobservatory.nasa.gov/Newsroom/view.php?id=22843 "Coral Layers Good Proxy for Atlantic Climate Cycles."
17. Bradley, R. S. and Jones, P. D. (eds) 1992: Climate since AD 1500. London: Routledge.
18. de Vernal. A.. Eynaud. F.. Henry. M.. Hillaire-Marcel. C.. Londeix. L.. Mangin. S.. Matthiessen. J.. Marret. F.. Radi. T.. Rochon. A.. Solignac. S.. Turon. J. -L.. Reconstruction of sea-surface conditions at middle to high latitudes of the Northern Hemisphere during the Last Glacial Maximum (LGM) based on dinoflagellate cyst assemblages. Quaternary Science Reviews. 1 April 2005. 24. 7–9. 897–924. 10.1016/j.quascirev.2004.06.014. 2005QSRv...24..897D.
19. Radi. Taoufik. de Vernal. Anne. Dinocyst distribution in surface sediments from the northeastern Pacific margin (40–60°N) in relation to hydrographic conditions, productivity and upwelling. Review of Palaeobotany and Palynology. 1 January 2004. 128. 1–2. 169–193. 10.1016/S0034-6667(03)00118-0.
20. Pospelova. Vera. de Vernal. Anne. Pedersen. Thomas F.. Distribution of dinoflagellate cysts in surface sediments from the northeastern Pacific Ocean (43–25°N) in relation to sea-surface temperature, salinity, productivity and coastal upwelling. Marine Micropaleontology. 1 July 2008. 68. 1–2. 21–48. 10.1016/j.marmicro.2008.01.008. 2008MarMP..68...21P.
21. Pospelova. Vera. Pedersen. Thomas F.. de Vernal. Anne. Dinoflagellate cysts as indicators of climatic and oceanographic changes during the past 40 kyr in the Santa Barbara Basin, southern California. Paleoceanography. 1 June 2006. 21. 2. PA2010. 10.1029/2005PA001251. en. 1944-9186. 2006PalOc..21.2010P.
22. Bringué. Manuel. Pospelova. Vera. Field. David B.. High resolution sedimentary record of dinoflagellate cysts reflects decadal variability and 20th century warming in the Santa Barbara Basin. Quaternary Science Reviews. 1 December 2014. 105. 86–101. 10.1016/j.quascirev.2014.09.022. 2014QSRv..105...86B.
23. http://www.merriam-webster.com/dictionary/varve "Varve."
24. http://www.grida.no/publications/other/ipcc_tar/?src=/climate/ipcc_tar/wg1/068.htm "Climate Change 2001: 2.3.2.1 Palaeoclimate proxy indicators"
25. Web site: Paleoclimatology: How Can We Infer Past Climates?. Bruckner. Monica. Montana State University..
26. Shemesh. A.. Charles. C. D.. Fairbanks. R. G.. 1992-06-05. Oxygen Isotopes in Biogenic Silica: Global Changes in Ocean Temperature and Isotopic Composition. Science. en. 256. 5062. 1434–1436. 10.1126/science.256.5062.1434. 0036-8075. 17791613. 1992Sci...256.1434S. 38840484.
27. Book: Surface Temperature Reconstructions for the Last 2,000 Years. 10.17226/11676. 2006. 978-0-309-10225-4. Council. National Research. Studies. Division on Earth Life. Climate. Board on Atmospheric Sciences and. Committee On Surface Temperature Reconstructions For The Last 2. 000 Years. 10.1.1.178.5968.
28. Jouzel and Merlivat, 1984) Deuterium and oxygen 18 in precipitation: Modeling of the isotopic effects during snow formation, Journal of Geophysical Research: Atmospheres, Volume 89, Issue D7, Pages 11589–11829
29. Cuffey et al., 1995, Large Arctic temperature change at the Wisconsin–Holocene glacial transition, Science 270: 455–458
30. Web site: The oceans were colder than we thought. October 27, 2017. Eurekalert.
31. Environmental controls on bacterial tetraether membrane lipid distribution in soils. 10.1016/j.gca.2006.10.003. Geochimica et Cosmochimica Acta. 71. 3. 703–713. Weijers. etal. 2007. 2007GeCoA..71..703W.
32. 10.1038/s41561-018-0199-0. High temperatures in the terrestrial mid-latitudes during the early Palaeogene. 2018. Naafs. etal. Nature Geoscience. 11. 10. 766–771. 2018NatGe..11..766N. 1983/82e93473-2a5d-4a6d-9ca1-da5ebf433d8b. 135045515.
33. News: University of Bristol . Ever-increasing CO2 levels could take us back to the tropical climate of Paleogene period . ScienceDaily . 30 July 2018.
|
2021-06-15 07:45:49
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 1, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4924756586551666, "perplexity": 7019.014737188399}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623487617599.15/warc/CC-MAIN-20210615053457-20210615083457-00037.warc.gz"}
|
https://zbmath.org/?q=an%3A1175.46004
|
# zbMATH — the first resource for mathematics
About the Banach envelope of $$l_{1,\infty}$$. (English) Zbl 1175.46004
The author studies the Banach envelope $$l_{1,\infty}^{\text{ban}}$$ of the quasi-Banach space $$l_{1,\infty}$$ consisting of all sequences $$x = (\xi_k)$$ with $$s_n(x) = O(\frac 1 n)$$, where $$(s_n(x))$$ denotes the non-increasing rearrangement of $$x = (\xi _k)$$. The situation turns out to be much more complicated than that in the well-known case of the separable subspace $$l_{1,\infty}^\circ$$, whose members are characterized by $$s_n(x)=o(\frac 1 n)$$. Namely, the Banach envelope of the latter space is known to be $$m_{1,\infty}^\circ$$, the closed hull of all sequences with only a finite number of nonzero elements in the Sargent space $$m_{1,\infty}$$ which is the collection of all $$x=(\xi_k)$$ for which $$\| x | m_{1,\infty} \|:= \sup_n \frac{s_1(x)+ \ldots +s_n(x)}{\frac{1}{1} + \ldots +\frac{1}{n}}$$ is finite. However, as the main result the author proves that the norms $$\| \cdot | m_{1,\infty} \|$$ and $$\| \cdot | {l_{1,\infty}^{\text{ban}}} \|$$ fail to be equivalent on $$l_{1,\infty}$$. The proof uses an explicit formula for the norm $$\| \cdot | {l_{1,\infty}^{\text{ban}}} \|$$ induced on $$l_{1,\infty}$$ given in [N. J. Kalton and F. A. Sukochev, J. Reine Angew. Math. 621, 81–121 (2008; Zbl 1152.47014)]. For the convenience of the reader, the author provides an elementary proof of the inequality needed for the proof of the main result.
##### MSC:
46A16 Not locally convex spaces (metrizable topological linear spaces, locally bounded spaces, quasi-Banach spaces, etc.) 46B45 Banach sequence spaces
Full Text:
|
2021-05-18 18:34:00
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8088405728340149, "perplexity": 192.79508229796053}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243991288.0/warc/CC-MAIN-20210518160705-20210518190705-00374.warc.gz"}
|
https://ea.greaterwrong.com/posts/moCaqqQxh3vubuEqC/if-i-m-20-less-productive-do-i-have-20-less-expected-impact-2
|
# [Question] If I’m 20% less productive, do I have 20% less expected impact?
For a personal decision, I’d like to know if a person’s expected impact is roughly proportional to their hours worked (keeping output per hour fixed). Suppose the decision would make you work x% fewer hours on useful things but keep your performance in the remaining hours the same—you won’t be more rested. The 20% just goes into work that’s not helpful for career capital or impact. In other words, you’re x% less productive. Does that mean you have roughly x% less expected impact?
Discussion
One reason your expected impact may decrease by >x% is that personal impact is (supposedly) heavy-tail distributed across people. To be in the heavy tail you’d need to be roughly at your highest productivity. So being x% less productive could reduce your expected impact by well over x%.
More than x% impact loss seems intuitive when you consider large x. Say you reduce your work time by 70%, and keep your productivity in the remaining 30% fixed. This seems to almost completely kill your chances of becoming a heavy-tail top performer in your field as you won’t be able to invest in yourself enough to stay competitive.[1][2]
On the other hand, your impact depends on other factors than the quantity of your work: talent and luck. In fact, talent and luck may be the main reason why impact is seems heavy-tailed. This view suggests that, if you work 20% less, your chances of being in the heavy tail don’t change much, and your expected impact decreases only by ca. 20%.
Edit: The answer seems to depend on the career path. In this case it’s academic research or startup founder.
• The law of logarithmic utility has also been applied to research funding[74]—and a simple rule of thumb is that a dollar is worth 1/X times as much if you are X times richer. So doubling someone’s income is worth the same amount no matter where they start.[75] Past the point of increasing returns to scale, the next dollar donated say at the $500k funding mark might have 10x as much impact as the dollar donated after the$5m mark.
Maybe a useful first approximation might be that with hours worked it’s similar, where past the point of increasing returns to scale, the next hour worked at the 10h/week mark might have 10x as much impact as the hour worked after the 100h/week mark (An hour might be worth 1/X times as much if you work X times more). More realistically, if you work 40h week vs. 80h week, the hours leading up to 80h/week are only ~half as valuable (but I definitely think the 1st hour of the day is often 10x more valuable then the 10th).
CS professor Cal Newport says that if you can do DeepWork TM for 4h / day, you’re hitting the mental speed limit, the amount of concentration your brain is actually able to give. Poincaré could only work 4 hours a day.
This suggests that’d it be better to work 5h/d for 7d/week rather than 7h for 5 days and all else equal, hiring more researchers at lower pay rather than more at higher pay.
Ideally, you’d do admin / research management in the afternoons. But then sometimes I feel like long days are also sometimes useful in research because it takes a some time to ‘upload’ the current research project into your mind in the morning and you need to reboot it the next day. I remember someone very productive saying and I can confirm from personal experience that you can ‘reset’, a little bit, the buildup of adenosine with 1.5h naps (1 full sleep cycle), after working the morning and then continue working ‘another morning’ in the afternoon.
It’s important to keep in mind that you always want to prevent burnout by keeping work efficiency high (= Total work time / Time in office. The section Work All the Time You Work in Eat That Frog says that you don’t want to be spending your intended-work-time not-working such that you have to spend your intended-leisure-time working.
But yes this is all different in winner-takes-all-markets.
• CS professor Cal Newport says that if you can do DeepWork TM for 4h / day, you’re hitting the mental speed limit
and:
the next hour worked at the 10h/week mark might have 10x as much impact as the hour worked after the 100h/week mark
Thanks Hauke that’s helpful. Yes, the above would be mainly because you run out of steam at 100h/week. I want to clarify that I assume this effect doesn’t exist. I’m not talking about working 20% less and then relaxing. The 20% of time lost would also go into work, but that work has no benefit for career capital or impact.
• Yes—I think running out of steam does some of the work here, but assuming that you prioritize the most productive tasks first, my sense is this should still hold.
• It seems to depend on your job. E.g. in academia there’s a practically endless stream of high priority research to do since each field is way too big for one person solve. Doing more work generates more ideas, which generate more work.
• Another framing on this: As an academic, if I magically worked more productive hours this month, I could just do the high-priority research I otherwise would’ve done next week/month/year, so I wouldn’t do lower-priority work.
• [ ]
[deleted]
• [ ]
[deleted]
• Startup founder success is sometimes winner-take-all (Facebook valued at hundreds of billions of dollars, Myspace at ~$0). If that’s true in your market, then the question reduces to how likely that additional 20% is to make you better than your competitor. My guess is that you will be competing against people who are ~equally talented and working at 100%, so the final 20% of your work effort is relatively likely to push you into being more productive than them (meaning that ~100% of the value is lost by you cutting your work hours 20%). I assume this is less true in academia. • I’d guess that quite often you’d either win anyway or lose anyway, and that the 20% don’t make the difference. There are so many factors that matter for startup founder success (talent, hard-workingness, network, credentials, luck) that it would be surprising if the competition was often so close that a 20% reduction in working time changes things. Another way to put this: it seems likely that Facebook would still be worth hundreds of billions of dollars, and Myspace ~$0, had the Facebook founders worked 20% less).
• I don’t have a good object-level answer, but maybe thinking through this model can be helpful.
Big picture description: We think that a person’s impact is heavy tailed. Suppose that the distribution of a person’s impact is determined by some concave function of hours worked. We want that working more hours increases the mean of the impact distribution, and probably also the variance, given that this distribution is heavy-tailed. But we plausibly want that additional hours affect the distribution less and less, if we’re prioritising perfectly (as Lukas suggests) -- that’s what concavity gives us. If talent and luck play important roles in determining impact, then this function will be (close to) flat, so that additional hours don’t change the distribution much. If talent is important, then the distributions for different people might be quite different and signals about how talented a person is are informative about what their distribution looks like.
This defines a person’s expected impact in terms of hours worked. We can then see whether this function is linear or concave or convex etc., which will answer your question.
More concretely: suppose that a person’s impact is lognormally distributed with parameters and , that is an increasing, concave function of hours worked, , and that is fixed. I chose this formulation because it’s simple but still enlightening, and has some important features: expected impact, , is increasing in hours worked and the variance is also increasing in hours worked. I’m leaving fixed for simplicity. Suppose also that , which then implies that expected impact is , i.e. expected impact is linear in hours worked.
Obviously, this probably doesn’t describe reality very well, but we can ask what changes if we change the underlying assumptions. For example, it seems pretty plausible that impact is heavier-tailed than lognormally distributed, which suggests, holding everything else equal, that expected impact is convex in hours worked, so you lose more than 20% impact by working 20% less.
Getting a good sense of what the function of hours worked (here ) should look like is super hard in the abstract, but seems more doable in concrete cases like the one described above. Here, the median impact is , if , so the median impact is linear in hours worked. This doesn’t seem super plausible to me. I’d guess that the median impact is concave in hours worked, which would require to be more concave than , which suggests, holding everything else equal, that expected impact is concave in hours worked. I’m not sure how this changes if you consider other distributions though—it’s a peculiarity of the lognormal distribution that the mean is linear in the median, if is held fixed, so things could look quite different with other distributions (or if we tried to determine and from jointly).
Median impact being linear in hours worked seems unlikely globally—like, if I halved my hours, I think I’d more than half my median impact; if I doubled them, I don’t think I would double my median impact (setting burnout concerns aside). But it seems more plausible that median impact could be close to linear over the margins you’re talking about. So maybe this suggests that the model isn’t too bad for median impact, and that if impact is heavier-tailed than lognormal, then expected impact is indeed convex in hours worked.
This doesn’t directly answer your question very well but I think you could get a pretty good intuition for things by playing around with a few models like this.
• After a little more thought, I think it might be helpful to think about/look into the relationship between the mean and median of heavy-tailed distributions and in particular, whether the mean is ever exponential in the median.
I think we probably have a better sense of the relationship between hours worked and the median than between hours worked and the mean because the median describes “typical” outcomes and means are super unintuitive and hard to reason about for very heavy tailed distributions. In particular, arguments like those given by Hauke seem more applicable to the median than the mean. This suggests that the median is roughly logarithmic in hours worked. It would then require the mean to be exponential in the median for the mean to be linear in hours worked, in which case, working 20% less would lose exactly 20% of the expected impact (more if the mean is more convex than exponential in the median, less if it’s less than exponential).
In the simple example above, the mean is linear in the median, so the mean is logarithmic in hours worked if the median is. But the lognormal distribution might not be heavy-tailed enough, so I wouldn’t put too much weight on this.
Looking at the pareto distribution, it seems to be the case that the mean is sometimes more than exponential in the median—it’s less convex for small values and more convex for high values . You’d have to a bit of work to figure out the scale and whether it’s more than exponential over the relevant range, but it could turn out that expected impact is convex in hours worked in this model, which would suggest working 20% less would lose more than 20% of the value. I’m not sure how well the pareto distribution describes the median though (it seems good for heavy tails but bad for the whole distribution of things), so it might be better to look at something like a lognormal body with a pareto tail. But maybe that’s getting too complicated to be worth it. This seems like an interesting and important question though, so I might spend more time thinking about it!
• Thanks Aidan, I’ll consider this model when doing any more thinking on this.
• This is a bit of a summary of what other people have said, and a bit of my own conceptualisation:
A) If the work is not competitive (not a winner-takes-all market), then:
• For some jobs, marginal returns on quality-adjusted time invested will decrease, and you lose less than 20% of impact. This is true for jobs where some activities are clearly more valuable than others, so that you cut the less valuable ones.
• For some jobs, marginal returns on quality-adjusted time invested will increase, and you lose more than 20% of impact. This could be e.g. because you have some maintenance activities that are fixed costs (like reading papers to stay up to date), or have increasing returns because you benefit from deep immersion.
B) If the work is competitive (a winner-takes-all market), either:
• you are going to win anyway, in which case the same as above applies, or
• you are going to lose anyway, in which whether or not you spend 20% of your time on something else doesn’t matter, or
• working less is causing you to lose the competition, in which case you lose 100% of value.
Of course, this is nearly always gradual because the market is not literally winner-takes-all, just winner-takes-a-lot-more-than-second. For example, if you’re working towards an academic faculty position, then maybe a position at a tier 1 uni is twice as impactful as one at a tier 2 uni, which is twice as impactful than one at a tier 3 uni, and so on (the tiers would be pretty small for the difference only being 2x, though).
On average, the more “competitive” a job, and the closer the distance between you and the competition, the more value you lose from working 20% less.
Nearly every job has some degree of “competitiveness”/”winner-takes-all-market” going on, but for some jobs this degree is very small (e.g. employee at EA org), and for others it’s large (academia before you got a tenure-track position, for-profit startup founder).
For academic research, I’d guess that from looking at A) alone, you’d get roughly linear marginal returns, and how much B) matters depends on your career stage. It matters a lot before you got a tenure-track position (because the market is “winner-takes-much-more-than-second” and competition is likely close because so many people compete for these positions). After you got a tenure-track position, it depends on what you want to do. E.g., if you try to become the world-leader in a popular field, then competition is high. If you want to research some niche EA topic well, then competition is low.
• There’s also an argument that impact diminishes by <20%: the hours you’ll cut out first will be your least important hours (assuming you’re prioritizing well).
I think the main argument for >20% is that you might get increasing returns from deep immersion and mastery of a field (this is a version of the point you made about “making it in the heavy tail”).
I think it depends on the type of work you’re doing. If you work at an EA org and do very generalist tasks with a lot of prioritizing on the go (for example, some of all of the following: hiring, headhunting/recruiting, developing strategy docs, mentoring, etc.), I could imagine that you lose <20%.
By contrast, if you’re a researcher doing cutting-edge work, you may benefit from deep immersion, so I’d expect you to lose >20%.
Also, if you’re on a career path where getting promoted is important (for instance because you want to make it to an influential position in government or academia), you almost certainly lose >20% because of the inherent competitiveness of the career track.
• Another case where you lose >20% with 20% less hours: earning to give as normal employee (not as entrepreneur).
Salary is ~ linear with the hours worked. You can only donate the part of the salary above a certain baseline because you need the rest for your living costs*. Let’s say you can donate 40% of your salary if you work 40h/week. If you work 32h/week, can only donate 20% of a full-time salary. That’s 50% less impact for 20% less hours.
Caveat 1: You can also donate a fixed percentage, then it doesn’t work like this.
Caveat 2: I’m neglecting non-donation impact here.
• Thanks Lukas that’s helpful. Some thoughts on when you’d expect diminishing returns to work: Probably this happens when when you’re in a job at a small-sized org or department where you have a limited amount to do. On the other hand, a sign that there’s lots to do would be if your job requires more than one person (with roughly the same skills as you).
In this case here the career is academia or startup founder.
|
2022-08-11 05:38:42
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7010971307754517, "perplexity": 1061.1493655288805}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882571234.82/warc/CC-MAIN-20220811042804-20220811072804-00306.warc.gz"}
|
https://physics.stackexchange.com/questions/249529/boson-in-superstring
|
# Boson in Superstring
I'm confused about a point. Superstring sigma model is
$$S=-\frac{T}{2}\int\mathrm{d}^2z \left[\eta^{ab}\partial_aX^\mu\partial_b X_\mu -i\bar\psi^\mu\rho^a\partial_a\psi_\mu \right],$$
of course, the first term is in common with bosonic string one.
Then in addition to the bosonic string spectrum (the one coming from $X$s), that I have, as usual, I will have also the spectrum coming form $\psi$s. My questions are:
1. What is the fate of the bosonic string spectrum in superstring? i.e. how should I interpret the dilaton $\Phi$, the graviton $g_{\mu\nu}$ and the 2-form $B_{\mu\nu}$ coming form bosonic string spectrum? Why all books refer to the dilaton, gravinton and 2-form as the ones coming from NS part of $\psi$s spectrum?
2. After GSO projection the tachyon is cancelled form $\psi$s spectrum and the number of bosonic d.o.f. equals the number of fermionic ones. But this is referred again to the $\psi$s spectrum. If I consider also the $X$s spectrum I still have tachyon and extra boson that unbalance the d.o.f. counting.
Probabilly I make a mistake in my reasonment.
• Please see our guide on writing good titles.
– user10851
Apr 16, 2016 at 18:28
Your confusion comes from thinking that going to superstrings simply means adding fermions in the spectrum. The spectrum is instead different. For bosonic string (let's focus on NN boundary conditions and open strings) you have something like:
$$\alpha' m^2=N-1$$
where N is the number operator of the transverse vibrational excitations of the bosonic string. In superstring you find:
$$\alpha' m^2=N_{bos}+N_{ferm}-a_{NS/R}$$
where $N_{bos}$ is the number operator of the string coordinates $X$, while $N_{ferm}$ is the one for $\psi$. The ordering constant and the integer/semi-interger nature of $N_{ferm}$ depends on whether you are in the Ramond of NS sector.
In summary, they are two different theories, for instance notice that one lives in 26 dimensions and the other one in 10.
A good suggested reading on string theory is "Basic Concepts of String Theory" by Blumenhagen, Lüst, Theisen.
The answer to your first question is that the superstring analog of the first excited state of the bosonic string turns out to be massive. Instead, our friends the graviton, 2-form, dilaton and the massless vector are indeed obtained by acting with $$\psi$$ mode operators, not by $$X$$ mode operators.
I will now explain the contents of the NS sector of the superstring spectrum in some more detail. Recall that we have $$\alpha$$ oscillators coming from $$\partial X \partial X$$, which raise the level by integer numbers, and $$b$$ oscillators from the worldsheet fermions $$\psi$$, of which we consider the half-integer modes belonging to NS boundary conditions.
In the NS sector, the normal ordering constant $$a_{NS}$$ turns out to be equal to $$\frac{1}{2}$$. This means that the mass formula becomes $$\alpha ' m ^2 = N - \frac{1}{2}$$. For any state to be massless, one therefore needs $$N = \frac{1}{2}$$. Since states obtained by acting with $$\alpha$$ operators on the ground state have integer levels, there are no massless states $$\sim \alpha|0\rangle$$. Instead, the first $$\alpha$$ excited state, $$\alpha ^\mu _{-1} |0\rangle$$, has level $$N = 1$$ and therefore is massive with mass squared $$m^2 = \frac{1}{2 \alpha '}$$.
Excited states coming from acting with the $$b$$s can have half-integer levels. The first excited state of the NS open string is $$b_{-1/2} ^\mu |0\rangle$$, which has level $$N= \frac{1}{2}$$, as needed for a state to be massless. This state is a space-time vector and takes the role played by $$\alpha ^\mu _{-1} |0\rangle$$ in the bosonic string theory.
The closed string NS-NS sector is basically equivalent to twice the open string sector, with the requirement of level matching imposed and the modified mass formula $$\alpha' m^2 = 4 (N - a) = 4n - 2$$. The state $$b^{\mu} _{-1/2} \tilde{b}^{\mu} _{-1/2} |0\rangle$$ has level $$N=1/2$$ (we count only one set of modes) and is therefore massless. This is the state that contains the graviton, dilaton and the 2-form. In contrast, the state $$\alpha ^\mu _{-1}\tilde{\alpha} _{-1} ^\mu |0\rangle$$ has level $$N=1$$ and is massive with $$m^2 = 1/\alpha'$$. This state can be decomposed in the same way as in the bosonic theory. However, it is more meaningful to combine it with other $$N=1$$ states coming from the $$b$$ oscillators to yield a nice Lorentz multiplet, since, as a massive representation, it actually needs more states than a massless representation.
To answer your second question, note that the $$X$$ spectrum does not form a separate Hilbert space from the $$\psi$$ spectrum. A superstring can be in one of several ground states, labeled by their fermion boundary conditions and whether the string open or closed. Some of these states are tachyonic and removed by the GSO projection. The full spectrum can be generated from the ground states by the action of creation operators coming from $$\psi$$ or from $$X$$. There are no separate $$X$$ and $$\psi$$ sectors; the creation operators can be applied to all of the ground states. In particular, there is no separate tachyon 'coming from the $$X$$ spectrum'.
|
2022-06-26 03:10:09
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 35, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6934366226196289, "perplexity": 315.5781062488031}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103036363.5/warc/CC-MAIN-20220626010644-20220626040644-00780.warc.gz"}
|
https://www.physicsforums.com/threads/finding-quadrants.40743/
|
1. Aug 26, 2004
Poweranimals
Does anyone know how to do this?
Let Ø be an angle in standard position. Name the quadrant in which the angle lies.
sin Ø > 0, cos Ø < 0
Last edited: Aug 26, 2004
2. Aug 27, 2004
Math Is Hard
Staff Emeritus
You could start by thinking about some angles that fit that description...
(just an idea)
3. Aug 27, 2004
Crumbles
Check out this diagram or the quadrants part of this site.
As you can see, you have 4 quadrants, one where ALL [sin, cos and tan] are positive [0 to 90 degrees], one where just sin is positive and cos and tan are negative [90 to 180 degrees], one where just tan is positive and sin and cos are negative [180 to 270 degrees] and one where just cos is positive and where sin and tan are negative [270 to 360 degrees].
Your question says sin Ø > 0, cos Ø < 0. This is basically saying the sin of your angle is positive and the cosine of your angle is negative. Now if you look at the diagram, you will see that sin Ø > 0, cos Ø < 0 is true in the sin quadrant because this is where sine is positive.
An easy way to remember those quadrants is by the first letter of this phrase, All Silly Teachers Cheat. Or if you want to be kind to teachers, Add Sugar To Coffee! This technique works going anti cloclwise from the 0 - 90 quadrant.
Last edited: Aug 27, 2004
4. Aug 27, 2004
Crumbles
Just a quick note, the 0-90 quadrant is usually referred to as the first quadrant. The 90-180 one as the second quadrant, the 180-270 one as the third quadrant and the 270 to 260 one as the fourth quadrant, as shown on this page.
5. Aug 27, 2004
Math Is Hard
Staff Emeritus
Back in my day it was:
All
Students
Take
Calculus
:tongue2:
6. Aug 27, 2004
Gokul43201
Staff Emeritus
Heck, I used All Silver Tea Cups !
7. Aug 27, 2004
Math Is Hard
Staff Emeritus
A little pretentious but I like it! :rofl:
It goes along nicely with Please Excuse My Dear Aunt Sally!
|
2016-10-23 22:20:37
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8336407542228699, "perplexity": 2799.003058681189}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-44/segments/1476988719437.30/warc/CC-MAIN-20161020183839-00101-ip-10-171-6-4.ec2.internal.warc.gz"}
|
http://mathoverflow.net/questions/50826/new-years-predictions-in-mathematics?sort=votes
|
# New Year's Predictions in Mathematics [closed]
It is the time of year for predictions: predictions for 2011, predictions for the next decade, predictions for the unspecified future. I searched a bit for predictions in mathematics, but it seems mathematicians are too wise to engage in this dubious activity. I found only two predictions:
(1) Two New Scientist writers, Samuel Arbesman and Rachel Courtland, predict that 2011 will not see the $P=NP$ problem resolved.
(2) Sir Michael Atiyah
suggested that the conjectured self-adjoint operator that could explain the Riemann hypothesis might be the Hamiltonian of quantum gravity
in a November talk at the Simons Center, as reported by Peter Woit.
Of course it is a stretch to call Atiyah's suggestion a "prediction." And every conjecture in mathematics is a prediction! Nevertheless, in the spirit of New Year's, I would be interested to hear any predictions on future developments in mathematics.
-
## closed as no longer relevant by Andres Caicedo, Joseph O'Rourke, Harry Gindi, Charles Siegel, Felipe VolochJan 1 '11 at 0:20
This question is unlikely to help any future visitors; it is only relevant to a small geographic area, a specific moment in time, or an extraordinarily narrow situation that is not generally applicable to the worldwide audience of the internet. For help making this question more broadly applicable, visit the help center. If this question can be reworded to fit the rules in the help center, please edit the question.
@Roy: I think that most of the opinions on that page are abhorrent where they pertain to pure mathematics and are irrelevant otherwise. – Harry Gindi Dec 31 '10 at 21:12
The Procrastinator's Club will soon release its predictions for 2010. They have a history of complete accuracy. – Michael Hardy Dec 31 '10 at 21:16
I predict this question is going to be closed. – KConrad Dec 31 '10 at 22:06
No longer relevant? It's only 8pm here in NY. :-) – sigoldberg1 Jan 1 '11 at 1:07
@darij: This year; My use of obvious here is more of a technical nature. The argument was that we've accepted the axiom of replacement because it is strongly used in proofs. The proper forcing axiom, which implies $2^{\aleph_0} = \aleph_2$, is being advanced as an axiom to assume because of its many interesting and useful set-theoretical consequences including the existence of a Woodin cardinal in an inner model. The argument is that eventually we'll just wind up accepting such an axiom as natural. – Jason Dec 31 '10 at 22:02
|
2015-03-30 23:27:08
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7442371845245361, "perplexity": 1324.5357404379909}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-14/segments/1427131300031.56/warc/CC-MAIN-20150323172140-00157-ip-10-168-14-71.ec2.internal.warc.gz"}
|
https://ask.cloudbase.it/answers/2146/revisions/
|
New Question
Revision history [back]
Hello,
This might be related to the cloudbase-init msi you have downloaded. Make sure that, for x86 systems, you use the x86 msi and for the x64 you use the x64 installer. Here are the links:
Thank you,
Hello,
This might be related to the cloudbase-init msi you have downloaded. Make sure that, for x86 systems, you use the x86 msi and for the x64 you use the x64 installer. Here are the links:
x86.msix86.msi
Thank you,
Hello,
This might be related to the cloudbase-init msi you have downloaded. Make sure that, for x86 systems, you use the x86 msi and for the x64 you use the x64 installer. Here are the links:
Thank you,
Hello,
This might be related to the cloudbase-init msi you have downloaded. Make sure that, for x86 systems, you use the x86 msi and for the x64 you use the x64 installer. Here are the links:
Thank you,
|
2020-06-05 08:49:54
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.966955304145813, "perplexity": 6435.324306669469}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590348496026.74/warc/CC-MAIN-20200605080742-20200605110742-00460.warc.gz"}
|
https://genomeek.wordpress.com/2013/03/08/emarch-2-create-a-pdf-with-highlighted-code-source/
|
Home > emacs > eMarch 2 : create a pdf with highlighted code source
## eMarch 2 : create a pdf with highlighted code source
A very nice feature of emacs is syntax highlighting for a wide variety of language. Back in the old time, I used to print my a lot of my source code in order to analyse it in details, as the AIX server I was working on had no access to any color-printer, I had to print my buffer to a postscript file first, (convert it to pdf) and last transfer it to my main computer. But one, day I found a nice small piece of lisp which allow me simplify the process.
Thus I just added this in my .emacs file
```(defun print-to-pdf ()
(interactive)
(ps-spool-buffer-with-faces)
(switch-to-buffer "*PostScript*")
(write-file "tmp.ps")
(kill-buffer "tmp.ps")
(setq cmd (concat "ps2pdf14 tmp.ps " (buffer-name) ".pdf"))
(shell-command cmd)
(shell-command "rm tmp.ps")
(message (concat "File printed in : "(buffer-name) ".pdf"))
)```
Then, from any buffer, typing M-x print-to-pdf, create nice and colorful pdf of my source code.
This script used to be a neat way to keep a pretty printed version of my work, fortunately printing to pdf became much easier with later version emacs, making the previous code a bit useless. But here comes the way I recycled this script. As I very often have to document my script and source code (who doesn’t ???), I took the habit to add as an appendix to my manuals the source code of the main scripts or software used. To do so, I produce a syntax highlighted pdf that I include via the \includepdf{} $\LaTeX$ command. Still, an annoying point is that scripts are updated rather often, so we have to think about an easy way to recreate the pdf.
In my .emacs, I thus modify my previous function into :
```(defun print-to-pdf-batch ()
(interactive)
(ps-spool-buffer-with-faces)
(switch-to-buffer "*PostScript*")
(write-file "tmp.ps")
(kill-buffer "tmp.ps")
(setq cmd (concat "ps2pdf14 tmp.ps " (buffer-name) ".pdf"))
(shell-command cmd)
(shell-command "rm tmp.ps")
(kill-emacs t)
)```
The changes are that we now avoid message output and ask to kill emacs at the end of the execution. For a series of sources code , you just have to run e.g.
```for File in *.sh ; do emacs \$File --eval "(print-to-pdf-batch)" ; done
```
This trick works, I imagine a better version would be to use the “-batch” argument of emacs, and eventually a lisp file (thus you should not change your .emacs). Unfortunately, I could not achieve to obtain color in the output (probably due to X issues ? any hints/solutions wold be appreciated). If you have no X server the -nw options also works but the colors used are to my opinion not so nice.
|
2017-09-25 06:10:07
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 1, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7527408599853516, "perplexity": 1636.6699660909926}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-39/segments/1505818690340.48/warc/CC-MAIN-20170925055211-20170925075211-00273.warc.gz"}
|
http://documenta.sagemath.org/vol-19/45.html
|
#### DOCUMENTA MATHEMATICA, Vol. 19 (2014), 1367-1442
Gilles Pisier
Martingale Inequalities and Operator Space Structures on L_p
We describe a new operator space structure on $L_p$ when $p$ is an even integer and compare it with the one introduced in our previous work using complex interpolation. For the new structure, the Khintchine inequalities and Burkholder's martingale inequalities have a very natural form:the span of the Rademacher functions is completely isomorphic to the operator Hilbert space $OH$, and the square function of a martingale difference sequence $d_n$ is $\Sigma d_n\otimes \bar d_n$. Various inequalities from harmonic analysis are also considered in the same operator valued framework. Moreover, the new operator space structure also makes sense for non-commutative $L_p$-spaces associated to a trace with analogous results. When $p\to \infty$ and the trace is normalized, this gives us a tool to study the correspondence $E\mapsto \underline{E}$ defined as follows: if $E\subset B(H)$ is a completely isometric emdedding then $\underline{E}$ is defined so that $\underline{E}\subset CB(OH)$ is also one.
2010 Mathematics Subject Classification: Primary 47L07,46L53; Secondary 46B28,60G48,47L25.
Keywords and Phrases:
Full text: dvi.gz 152 k, dvi 523 k, ps.gz 539 k, pdf 689 k.
|
2017-08-24 04:45:58
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8543310165405273, "perplexity": 439.1050883326177}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-34/segments/1502886133032.51/warc/CC-MAIN-20170824043524-20170824063524-00615.warc.gz"}
|
http://toddyhotpants.com/rdwpcanq/174ac6-find-a-polynomial-of-least-possible-degree-calculator
|
ch calculator | Solve system | • Every polynomial function of odd degree having real coefficients has at least one real zero. cos | Times tables game | Example 1: Given the factors ( x - 3)2(2x + 5), find the polynomial of lowest degree with real coefficients.Let's analyze what we already know. norm: find the p-norm of a polynomial. Symbolic differentiation | Online plotter | Show transcribed image text. Solve equations online, Factor | u(t) 5 3t3 2 5t2 1 6t 1 8 Make use of structure. Simplify fraction | Expand a product, Fraction | Related Calculators. Online graphing calculator | asin | It is that value of x that makes the polynomial equal to 0. Site map natural logarithm calculator | Polynomial Functions: Graphs, Applications, and Models Simplify expressions calculator | For instance, for a third degree polynomial. Calculator | Degree and Leading Coefficient Calculator . Get more help from Chegg. What Is The Least Possible Degree Of The Polynomial Graphed Above? Solution for Find a polynomial of least possible degree having the graph shown. This problem has been solved! degree: returns the polynomial degree, length is number of stored coefficients. Equation calculator | Integration function online | Matrix Calculator | tangent hyperbolic calculator | Scientific calculator online | lim calculator | Calculus software online | Limit calculator | A polynomial of degree 3 has 3 zeros, and you are given the three zeros, they are -2, -2 and 4. dot product calculator | (I would add 1 or 3 or 5, etc, if I were going from … Covid-19 has led the world to go through a phenomenal transition . sine hyperbolic calculator | Find a polynomial of degree with real coefficients and the following zeros calculator Write a polynomial function of least degree with integral coefficients that has the given zeros. Inequality solver | Expand math | Maclaurin series calculator, Calculus online | Factorize | Degree of Polynomial Calculator Polynomial degree can be explained as the highest degree of any term in the given polynomial. limit finder | ch calculator | If you want to contact me, probably have some question write me using the contact form or email me on Let us learn it better with this below example: Find the degree of the given polynomial 6x^3 + 2x + 4 As you can see the first term has the first term (6x^3) has the highest exponent of any other term. find a polynomial of degree 3 with real coefficients and zeros calculator, 3 17.se the Rational Root Theorem to find the possible U real zeros and the Factor Theorem to find the zeros of the function. Reduce | Differentiation calculator | cotanh calculator | Shortcuts : The calculator is also able to calculate the degree of a polynomial that uses letters as coefficients. If the degree of the polynomial is zero (n=0), then we get an approximation by constant function, i.e. Fraction calculator | countdown solver | Here is the online 4th degree equation solver for you to find the roots of the fourth-degree equations. Curve plotter | The calculator will find the degree, leading coefficient, and leading term of the given polynomial function. Which polynomial has a double zero of $5$ and has $−\frac{2}{3}$ as a simple zero? (x) = (Simplify your answer.) Find a polynomial that has zeros $4, -2$. find polynomial of degree 3 with real coefficients calculator, Description: Polynomials of degrees 3 (x3), 4(x4), etc. hyperbolic coth calculator | Try to write down the general formula for a polynomial of degree $3$ and to show that you can find coefficients solving your problem. online factorial calculator | arctan | A polynomial of degree 3 has 3 zeros, and you are given the three zeros, they are -2, -2 and 4. By using this website, you agree to … Expand | A term with the highest power is called as leading term, and its corresponding coefficient is called as the leading coefficient. Free calculator | Inequality calculator | Web calculator | In this case, the degree is 6, so the highest number of bumps the graph could have would be 6 – 1 = 5.But the graph, depending on the multiplicities of the zeroes, might have only 3 bumps or perhaps only 1 bump. It will be easy to understand when you can take the polynomial to be of degree $2,1$ or $0$ (constant). Free calculator online | Solve equation | enter arccos | The highest degree of individual terms in the polynomial equation with non-zero coefficients is called as the degree of a polynomial. Find a polynomial of least possible degree having the graph shown. find a polynomial of least possible degree provides a comprehensive and comprehensive pathway for students to see progress after the end of each module. The calculator generates polynomial with given roots. sinh calculator | tanh calculator | Calculate integral online | degree(ax^2+bx+c) after calculation, result 2 is returned. Calculate fractions | Equation system | arctan calculator | Polynomial calculator - Sum and difference . Calculate fractions | atan | Stay Home , Stay Safe and keep learning!!! 10 (1, -30- o f(x) - 6(x + 1)2(x - 1)2 %3D o f(x) = 6(x + 1)(x -… Simplify | Simplifying expressions calculator | Multiplication game | Polynomial Calculator - Integration and Differentiation The calculator below returns the polynomials representing the integral or the derivative of the polynomial P. P = How to input. 6 5 4 2 What is the least possible degree of the polynomial graphed above? find a polynomial with real coefficients that has the given zeros calculator, Zeros of Polynomial. prime factorization calculator | Integrate function online | Antiderivative calculator | The calculator will find the degree, leading coefficient, and leading term of the given polynomial function. Express f(x) as a product of linear and/or quadratic polynomials with real coefficients that are irreducible over ℝ. Factorization online | Generally, any polynomial with the degree of 4, which means the largest exponent is 4 is called as fourth degree equation. form a polynomial with given zeros and degree calculator, Section 7.2 Graphing Polynomial Functions. cross product calculator | Internet calculator | Expand and simplify expression | Derivative calculator | permutation calculator | Find the polynomial with integer coefficients having zeroes $0, \frac{5}{3}$ and $-\frac{1}{4}$. Determining the minimum possible degree of a polynomial from its graph. Calculator shows complete work process and detailed explanations. Since $$x−c_1$$ is linear, the polynomial quotient will be of degree three. Now we apply the Fundamental Theorem of Algebra to the third-degree polynomial quotient. Mathematic functions online calculus | Derivative calculator | "A polynomial function f(x) of degree n has exactly n roots, or zeros, as long as you permit complex numbers to be considered zeros. (FIGURE CAN'T COPY) Answer $(x-1)^{3}(x+1)^{3}$ Topics. Solver | Example: Find all the zeros or roots of the given function. Tangent equation, Online math games for kids : For Items 18 and 19, use the Rational Root Theorem and synthetic division to find the real zeros. Section 4. Question: What Is The Least Possible Degree Of The Polynomial Graphed Above? combination calculator online | Differentiate calculator | Fractions | Simplify square root calculator | Find the polynomial function P of the lowest possible degree, having real coefficients, with the given zeros. Inequality | The calculator generates polynomial with given roots. Show Instructions In general, you can skip the multiplication sign, so 5x is … Determining the minimum possible degree of a polynomial from its graph. Division game, Copyright (c) 2013-2021 https://www.solumaths.com/en, solumaths : mathematics solutions online | | Languages available : fr|en|es|pt|de, See intermediate and additional calculations, Calculate online with degree (degree of a polynomial). The calculator may be used to determine the degree of a polynomial. Factorize expression online | Online calculator | To obtain the degree of a polynomial defined by the following expression x 3 + x 2 + 1, enter : degree (x 3 + x 2 + 1) after calculation, the result 3 is returned. Use a leading coefficient of 1 or 1. mathhelp@mathportal.org. To obtain the degree of a polynomial defined by the following expression : ax^2+bx+c (FIGURE CAN'T COPY) Addition tables game | combination calculator | a n - unknown polynomial coefficients, which we want to find, n - the polynomial degree. arcsin | Write a polynomial function of least degree with integral coefficients that has the given zeros. Easy arithmetic game | Please tell me how can I make this better. To answer this question, I have to remember that the polynomial's degree gives me the ceiling on the number of bumps. Find a polynomial that has zeros $0, -1, 1, -2, 2, -3$ and $3$. Solve inequality | Symbolic integration | arccos calculator | Differential calculus | The computer is able to calculate online the degree of a polynomial. Solving equation | sin calculator | A polynomial f(x) with real coefficients and leading coefficient 1 has the given zeros and degree. ; Examine the behavior of the graph at the x-intercepts to determine the multiplicity of each factor. In Section 7.1, we considered applications of polynomial functions.Although most applications use only a portion of the graph of a particular polynomial, we can learn a lot about these functions by taking a more global view of their behavior. Simplified fraction calculator | Integral calculus | degree(x^3+x^2+1) after calculation, the result 3 is returned. Contact | Calculus online, Differentiate | 0, −7i, 1 − i; degree 5 arcos | arcsin calculator | Polynomial calculator - Sum and difference . Which statement describes X^3-7x^2-x+7=0 make a list of possible roots, write the resulting quadratic function, factor or use quadratic Square root calculator | sin | countdown maths solver | countdown numbers solver | The degree is the value of the greatest exponent of any expression (except the constant) in the polynomial.To find the degree all that you have to do is find the largest exponent in the polynomial.Note: Ignore coefficients-- coefficients have nothing to do with the degree of a polynomial. Polynomial calculator - Division and multiplication. Online factoring calculator | The degree function calculates online the degree of a polynomial. Antidifferentiation | Calculus derivatives | acos | Welcome to MathPortal. Simplify expression online | find limit | Calculate antiderivative online | Equation | Answer to Form a polynomial whose zeros and degree are given. This web site owner is mathematician Miloš Petrović. matrix determinant calculator | tan | Calculate fraction | E-learning is the future today. Calculator shows complete work process and detailed explanations. x^3+x^2+1, enter : Antiderivative calculator | Show Instructions In general, you can skip the multiplication sign, so 5x is equivalent to 5*x. To obtain the degree of a polynomial defined by the following expression Antidifferentiate | variable: returns the polynomial symbol as polynomial in the underlying type. Find the polynomial function P of the lowest possible degree, having real coefficients, with the given zeros. Chapter 3. vector product calculator | function Graphics | ln calculator | cosh calculator | by one number, which stays closest to all measurement values. sh calculator | find polynomial of degree 3 with real coefficients calculator, coeffs: returns the entire coefficient vector. Function plotter | Substraction tables game | Factorize expression | abs calculator | Polynomial calculator - Integration and differentiation. Calculating the degree of a polynomial The calculator may be used to determine the degree of a polynomial. Answer to Form a polynomial whose zeros and degree are given. Calculate fraction online | Simplify fraction calculator | Free polynomial equation calculator - Solve polynomials equations step-by-step This website uses cookies to ensure you get the best experience. Countdown game | Expand and reduce math | Expand and simplify | n and the other two satisfy the condi- 2 tion x2 x3 = mn 2 (Viète's relations for the second degree polynomial m X + (m − 2 2 3. Example: Find all the zeros or roots of the given function. ( x - 3)2 is a repeated factor, thus the zero 3 has multiplicity of two. Derivative of a function | Differentiate function online | Complex number calculator | CAS | cosine hyperbolic calculator | Solution for Find a polynomial function f(x) of least possible degree having the graph shown. ; Find the polynomial of least degree containing all of the factors found in the previous step. I designed this web site and wrote all the lessons, formulas and calculators . So we can write the polynomial quotient as a product of $$x−c_2$$ and a new polynomial quotient of degree two. Precalculus 4th. tan calculator | 5th degree polynomial roots calculator, Three roots of a fifth degree polynomial function f (x) are - 2, 2, and 4 + i. Find a polynomial of least possible degree having the graph shown. See the answer. cos calculator | Algebra Topics That are Reviewed at the Start of the Semester . tanh calculator | Simplifying square roots calculator | Calculator online | scalar product calculator |, Graphing calculator | Calculate Taylor expansion online | natural log calculator | Zeros of Polynomial : It is a solution to the polynomial equation, P(x) = 0. Calculating the degree of a polynomial with symbolic coefficients Taylor polynomial calculator | Draw functions | Factorization | Calculus square root | Expand expression online | Identify the x-intercepts of the graph to find the factors of the polynomial. Taylor series calculator | Equation solver | Online graphics | conj: finds the conjugate of a polynomial over a complex field. Polynomial and Rational Functions. Solving system | Polynomial calculator - Parity Evaluator ( odd, even or none ) Polynomial calculator - Roots finder Enter the equation in the fourth degree equation calculator and hit calculate to know the roots with ease. Reduce expression online | Solve | Get an answer to your question “Construct a polynomial function of least degree possible using the given information.Real roots: - 1, 1, 3 and (2, f (2)) = (2, 5) ...” in Mathematics if there is no answer or all answers are wrong, use a search bar and try to find … How To: Given a graph of a polynomial function, write a formula for the function. Discriminant calculator | Factor expression | It will have at least one complex zero, call it $$c_2$$. determinant calculator | Calculate derivative online | Finding Equations of Polynomial Functions with Given Zeros Polynomials are functions of general form ( )= + −1 −1+⋯+ 2 2+ 1 +0 ( ∈ ℎ #′ ) Polynomials can also be written in factored form) ( )=( − 1( − 2)…( − ) ( ∈ ℝ) Given a list of “zeros”, it is possible to find a polynomial … Calculus fraction | Solve equation online | th calculator | , formulas and calculators ) with real coefficients, with the degree, having real coefficients calculator coeffs... Uses letters as coefficients 19, use the Rational Root Theorem and synthetic division find... Containing all of the graph shown the world to go through a phenomenal transition coefficients that zeros! ) with real coefficients calculator, coeffs: returns the polynomial quotient of degree 3 with real coefficients has. Integral coefficients that are Reviewed at the Start of the lowest possible degree having real coefficients calculator, coeffs returns. T ) 5 3t3 2 5t2 1 6t 1 8 Make use of structure can skip multiplication... What is the least possible degree, having real coefficients and leading coefficient 1 has the given zeros used... Given zeros and degree least one complex zero, call it \ ( )... Number, which stays closest to all measurement values: find all the zeros or of., P ( x ) of least possible degree, having real coefficients calculator, coeffs: the! To the polynomial quotient will be of degree 3 with real coefficients and leading term, and its corresponding is... Length is number of stored coefficients to 0 real coefficients and leading.! X-Intercepts of the given zeros function calculates online the degree of a polynomial f ( x ) as a of! Whose zeros and degree the given function ) of least degree with integral coefficients that has the given.... Calculator polynomial degree can be explained as the highest power is called as fourth equation! ( n=0 ), then we get an approximation by constant function, write a for... The Semester given a graph of a polynomial whose zeros and degree are given the..., call it \ ( x−c_1\ ) is linear, the polynomial quotient will be of degree.! Coefficient is called as the highest degree of a polynomial over a complex field may be used to determine degree! Contact me, probably have some question write me using the contact or... Me using the contact Form or email me on mathhelp @ mathportal.org identify the x-intercepts to the. Approximation by constant function, i.e Rational Root Theorem and synthetic division to find the zeros. World to go through a phenomenal transition here is the least possible degree of polynomial: it is a factor. The end of each module repeated factor, thus the zero 3 has multiplicity of two *. Has zeros $4, -2, 2, -3$ and $3.. Site and wrote all the zeros or roots of the polynomial be of degree 3 with real coefficients has least... }$ Topics factor, thus the zero 3 has multiplicity of module. To Form a polynomial polynomials equations step-by-step this website uses cookies to ensure you get the best.... { 3 } $Topics a graph of a polynomial -1, 1 − ;! As fourth degree equation calculator - Solve polynomials equations step-by-step this website uses to. Will have at least one real zero each module in the given.... Theorem of algebra to the third-degree polynomial quotient as a product of \ ( x−c_1\ is... The real zeros degree having real coefficients calculator, coeffs: returns the entire coefficient.!, coeffs: returns the entire coefficient vector over ℝ answer. ; degree 5 the! The zeros or roots of the Semester ) answer$ ( x-1 ^... After the end of each factor quotient will be of degree three 18. 3 has multiplicity of each module mathhelp @ mathportal.org ^ { 3 } $Topics the Semester called leading! Home, stay Safe and keep learning!!!!!!!!!!!!!. Calculator may be used to determine the multiplicity of each factor given.! You can skip the multiplication sign, so 5x is equivalent to 5 * x...., i.e thus the zero 3 has multiplicity of two coefficients, with the degree of 4, which closest... Calculator will find the polynomial symbol as polynomial in the given zeros polynomial function of odd degree having graph... -1, 1, -2$ so we can write the polynomial Graphed Above highest degree of polynomial... Want to contact me, probably have some question write me using the contact Form or email on. Constant function, write a polynomial whose zeros and degree are given you to find roots... Stored coefficients ) How to: given a graph of a polynomial calculator also... Make use of structure zero 3 has multiplicity of two of stored coefficients the computer is to... To Form a polynomial of least possible degree having the graph to find the factors found the! Returns the polynomial quotient is able to calculate the degree of a polynomial mathhelp @ mathportal.org How can i this... $3$ have at least one real zero degree containing all of the given zeros used to determine degree! Real coefficients and leading term, and its corresponding coefficient is called as the degree of a the! Topics that are irreducible over ℝ the degree of polynomial: it is a repeated factor, thus zero! Its graph * x is that value of x that makes the polynomial Graphed Above for 18... Having real coefficients has at least one complex zero, call it \ ( x−c_2\ ) and new... I Make this better 2, -3 $and$ 3 $formula the! Call it \ ( x−c_1\ ) is linear, the polynomial function, write a formula the... Returns the entire coefficient vector irreducible over ℝ web site and wrote all the lessons, formulas calculators! Use of structure algebra to the third-degree polynomial quotient fourth degree equation solver for you find! Keep learning!!!!!!!!!!!!!!!!... ( Simplify your answer. integral coefficients that has zeros$ 4, which the... Degree function calculates online the degree of a polynomial the calculator generates polynomial with symbolic coefficients polynomial calculator - polynomials... $3$ the end of each factor me on mathhelp @ mathportal.org comprehensive!: finds the conjugate of a polynomial solution to the third-degree polynomial will... Safe and keep learning!!!!!!!!!!!!!!!! Me, probably have some question write me using the contact Form or email me mathhelp... Pathway for students to see progress after the end of each module with.... Or roots of the graph shown 5x is equivalent to 5 * x and calculators symbolic. Contact Form or email me on mathhelp @ mathportal.org { 3 } $.. Is number of stored coefficients, leading coefficient, and leading coefficient solution for find a.. -2, 2, -3$ and $3$ graph at the Start of the fourth-degree equations find of... X-1 ) ^ { 3 } ( x+1 ) ^ { 3 } $Topics and division. Least possible degree having real coefficients, with the given zeros: finds the conjugate of a polynomial degree! Designed this web site and wrote all the lessons, formulas and calculators degree containing all the... Will have at least one complex zero, call it \ ( c_2\ ) is! Each factor Sum and difference it will have at least one real zero each module, −7i, −. Complex field c_2\ ) a term with the highest power is called as the leading coefficient polynomials with coefficients... Letters as coefficients have at least one complex zero, call it \ c_2\. 5 3t3 2 5t2 1 6t 1 8 Make use of structure by function! A phenomenal transition with integral coefficients that are Reviewed at the x-intercepts to determine the multiplicity of each factor at. Real zero \ ( x−c_1\ ) is linear, the polynomial real zero the coefficient. Complex zero, call it \ ( c_2\ ), 2, -3$ and $3$ to. Of algebra to the third-degree polynomial quotient know the roots with ease step-by-step this website, can! Synthetic division to find the real zeros get the best experience f ( x ) = ( Simplify find a polynomial of least possible degree calculator... Found in the fourth degree equation calculator - Solve polynomials equations step-by-step this website, agree... Coeffs: returns the polynomial equal to 0 or roots of the factors of the polynomial degree having! ) with real coefficients, with the highest power is called as the highest power is called the! Lessons, formulas and calculators makes the find a polynomial of least possible degree calculator Graphed Above the minimum possible degree length... X ) as a product of linear and/or quadratic polynomials with real coefficients are! Degree 3 with real coefficients calculator, coeffs: returns the polynomial is zero n=0! Degree of any term in the given zeros pathway for students to see progress after the end of each.! And degree are given for Items 18 and 19, use the Rational Root Theorem and synthetic division find. Given function length is number of stored coefficients real zeros graph shown to see progress after end. Quotient as a product of linear and/or quadratic polynomials with real coefficients at. An approximation by constant function, write a formula for the function entire coefficient vector you agree to … calculator... Of \ ( x−c_1\ ) is linear, the polynomial quotient of degree three we can the! All the zeros or roots of the Semester stored coefficients zero, call it \ ( )! Power is called as the highest degree of the graph shown one number which!, -1, 1, -2 $is equivalent to 5 * x.. 0, -1, 1, -2, 2, -3$ and 3... With ease calculate to know the roots of the graph to find the polynomial Graphed Above to through.
find a polynomial of least possible degree calculator 2021
|
2021-11-30 02:25:55
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6743814945220947, "perplexity": 903.9458512576037}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964358903.73/warc/CC-MAIN-20211130015517-20211130045517-00292.warc.gz"}
|
https://openstax.org/books/prealgebra/pages/7-key-concepts
|
Prealgebra
# Key Concepts
PrealgebraKey Concepts
### Key Concepts
#### 7.2Commutative and Associative Properties
• Commutative Properties
• Commutative Property of Addition:
• If $a,ba,b$ are real numbers, then $a+b=b+aa+b=b+a$
• Commutative Property of Multiplication:
• If $a,ba,b$ are real numbers, then $a⋅b=b⋅aa⋅b=b⋅a$
• Associative Properties
• Associative Property of Addition:
• If $a,b,ca,b,c$ are real numbers then $(a+b)+c=a+(b+c)(a+b)+c=a+(b+c)$
• Associative Property of Multiplication:
• If $a,b,ca,b,c$ are real numbers then $(a⋅b)⋅c=a⋅(b⋅c)(a⋅b)⋅c=a⋅(b⋅c)$
#### 7.3Distributive Property
• Distributive Property:
• If $a,b,ca,b,c$ are real numbers then
• $a(b+c)=ab+aca(b+c)=ab+ac$
• $(b+c)a=ba+ca(b+c)a=ba+ca$
• $a(b⋅c)=ab⋅aca(b⋅c)=ab⋅ac$
#### 7.4Properties of Identity, Inverses, and Zero
• Identity Properties
• Identity Property of Addition: For any real number a: $a+0=a0+a=aa+0=a0+a=a$ 0 is the additive identity
• Identity Property of Multiplication: For any real number a: $a⋅1=a1⋅a=aa⋅1=a1⋅a=a$ 1 is the multiplicative identity
• Inverse Properties
• Inverse Property of Addition: For any real number a: $a+(-a)=0-aa+(-a)=0-a$ is the additive inverse of a
• Inverse Property of Multiplication: For any real number a: $(a≠0)a⋅1a=11a(a≠0)a⋅1a=11a$ is the multiplicative inverse of a
• Properties of Zero
• Multiplication by Zero: For any real number a, $a⋅0=00⋅a=0The product of any number and 0 is 0. a⋅0=00⋅a=0The product of any number and 0 is 0.$
• Division of Zero: For any real number a, $0a=00+a=0Zero divided by any real number, except itself, is zero. 0a=00+a=0Zero divided by any real number, except itself, is zero.$
• Division by Zero: For any real number a, $0a0a$ is undefined and $a÷0a÷0$ is undefined. Division by zero is undefined.
Do you know how you learn best?
Kinetic by OpenStax offers access to innovative study tools designed to help you maximize your learning potential.
Order a print copy
As an Amazon Associate we earn from qualifying purchases.
|
2022-10-03 01:34:05
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 20, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.21069730818271637, "perplexity": 2491.592770261932}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337371.9/warc/CC-MAIN-20221003003804-20221003033804-00728.warc.gz"}
|
https://math.stackexchange.com/questions/2743311/formula-for-the-nth-partial-sum-of-a-telescoping-series
|
# Formula for the nth partial sum of a telescoping series
Find the $n$th term for the sequence of partial sums for the series $$\sum_{n=1}^{\infty} \frac{5}{n(n+3)} =\sum_{n=1}^{\infty} \left(\frac{5}{3n}-\frac{5}{3(n+3)} \right)$$ and find $\lim\limits_{n\rightarrow \infty} s_n$.
The sequence of partial sums looks like this: $$\{s_n\} = \left\{\frac{5}{4}, \frac{7}{4}, \frac{73}{36}, \frac{139}{63}, \frac{1175}{504},\ldots\right\}$$
What's the best way to go about finding a general expression for the $n$th partial sum?
For a general telescoping series like this one you use the same "trick" as for a "usual" telescoping series: $$\sum_{n=0}^N (u_{n+3}-u_n)=\sum_{n=0}^N u_{n+3}-\sum_{n=0}^N u_n=\sum_{n=3}^{N+3} u_n -\sum_{n=0}^N u_n$$ so: $$\sum_{n=0}^N (u_{n+3}-u_n)=\sum_{n=3}^{N} u_n+u_{N+1}+u_{N+2}+u_{N+3} -\left(u_0+u_1+u_2+\sum_{n=3}^N u_n \right)$$ $$\sum_{n=0}^N (u_{n+3}-u_n)=u_{N+1}+u_{N+2}+u_{N+3}-u_0-u_1-u_2$$
HINT: For $$\sum_{i=1}^n\frac{5}{i(i+3)}$$ we get $$\sum_{i=1}^n\frac{5}{i(i+3)}=-5/3\, \left( n+1 \right) ^{-1}-5/3\, \left( n+2 \right) ^{-1}-5/3\, \left( n+3 \right) ^{-1}+{\frac{55}{18}}$$
HINT: Take four (or more) consecutive terms from the beginning and end and observe reductions. $$\frac53\left( \frac11-\color{cyan}{\frac14}+\frac12-\frac15+\frac13-\frac16+\color{cyan}{\frac14}-\frac17+\ldots \right)$$
|
2020-02-18 07:32:59
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9820432066917419, "perplexity": 150.73635604478693}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875143635.54/warc/CC-MAIN-20200218055414-20200218085414-00470.warc.gz"}
|
https://kislayabhi.github.io/gsoc-2015/
|
It had been quite an awesome thing to be selected in Google Summer of Code for the second time. Along with it, I also got an offer to pursue masters in Johns Hopkins University in Computer Science in upcoming Fall. The god has been great to me! I thank almighty for all this .
Okay, back to work.
So my project this year deals with implementing "Approximate algorithms for inference in graphical models" in the awesome org. pgmpy. I will try to encompass everything that I have understood till now by reading various research papers and the help I had from the mentors in the form of naive questions. (I think I may need to keep the post updating. Below is what I have understood till now.)
What are graphical models?
A graphical model is a probabilistic model where the conditional dependencies between the random variables are specified via a graph. Graphical models provide a flexible framework for modelling large collections of variables with complex interactions.
In this graphical representation, the nodes correspond to the variables in our domain, and the edges correspond to direct probabilistic interactions between them
Graphical models are good in doing the following things:
1. Model Representation
• It allows tractable representation of the joint distribution.
• Very transparent
2. Inference
• It facilitates answering queries using our model of the world.
• For example: algorithms for computing the posterior probability of some variables given the evidence on others.
• Say: we observe that it is night and the owl is howling. We wish to know how likely a theft is going to happen, a query that can be formally be written as P(Theft=true | Time=Night, Conditions=Owl is howling)
3. Learning
• It facilitates the effective construction of these models by learning from data a model that provides a good approximation to our past experience
• They sometimes reveal surprising connections between variables and provide novel insights about a domain.
Why we want to do inference? Ahem!! What is this "inference" actually!
The fundamental inference problem underlying many applications of graphical models is to find the most likely configuration of the probability distribution also called as the MAP (Maximum a Posteriori) assignment. This is what the underlying problem is while solving the stereo vision and protein design problems.
The graphical models that we consider involve discrete random variables. And it is known that a MAP inference can be formulated as an integer linear program.
There are many ways to solve the MAP inference problem namely:
• Message Passing Algorithms: They solve the inference problem by passing messages along the edges of the graph that summarize each variable's beliefs. Better check Yedidia et al., 2005 paper for a gentle introduction in message propagation. However, these algorithms can have trouble converging, and in difficult inference problems tend not to give good results
• LP Relaxation: The LP relaxation is obtained by formulating inference as an integer linear program and then relaxing the integrality constraints on the variables. (More on this later!)
To correctly find the MAP assignment is equivalent to finding the assignment $latex x_m&s=2$ that maximizes
$latex \theta(x) = \Sigma_{ij \in E}\theta_{ij}(x_i, x_j)+\Sigma_{i \in V} \theta_i(x_i)&s=3$
To understand what the above term is, we need to delve into the theory of pairwise Markov Random Fields. For the moment being, think of $latex \theta_{ij}&s=2$ as the edge potential and $latex \theta_{i}&s=2$ as the vertex potential.
To turn the above into an integer linear program( ILP ), we introduces variables
1. $latex \mu_i(x_i)&s=1$, one for each $latex i \in V&s=1$ and state $latex x_i&s=1$
2. $latex \mu_{ij}(x_i, x_j)&s=1$, one for each edge $latex ij \in E&s=1$ and pair of states $latex x_i, x_j&s=1$
The objective function is then:
$latex \max_{\mu} \Sigma_{i \in V}\Sigma_{x_i}\theta_{i}(x_i) \mu_i(x_i)+\Sigma_{ij \in E}\Sigma_{x_i, x_j}\theta_{ij}(x_i, x_j) \mu_{ij}(x_i, x_j)&s=3$
The set of $latex \mu&s=1$ that arises from such joint distributions is known as the marginal polytope. There always exist a maximizing $latex \mu&s=1$ that is integral- a vertex of the marginal polytope - and which corresponds to $latex x_m&s=1$
Although the number of variables in this LP is less, the difficulty comes from an exponential number of linear inequalities typically required to describe the marginal polytope.
The idea in LP relaxations is to relax the difficult global constraint that the marginals in $latex \mu$ arise from some common joint distribution. Instead we enforce this only over some subsets of variables that we refer to as clusters.
But what are constraints? How come you can use constraints and clusters interchangeably?
What constraint is to Marginal Polytope is clusters to the primal LP problem.
So essentially, we here force every "cluster" of variables to choose a local assignment instead of enforcing that these local assignments are globally consistent. Had these local assignments been consistent globally, we would have the complete Marginal Polytope already.
We slowly and steadily add each of the clusters so that we tend towards the Marginal Polytope.
I am attaching few slides that I have found really useful in this case:
Through LP relaxation we solve for the case where the fewer clusters were assigned the local assignment( The local consistency polytope).The solution to this gives us the equation mentioned in the last above.
Why we need approximate algorithms? Is there any other way to do it like exact ones? If exact ones are there, why not use them!!
Solving for the marginal polytope gets too heavy. I will showcase the process in the form of some other slides that I really found useful.
What are the types of approximate algorithms that you are going to implement? Are there many types?
LP Relaxation is the obvious one that is my target here. The other one above where I have added constraints to the marginal polytope is called the cutting plane method ( It happens to be a specific algorithm used in LP Relaxations to tend towards the MAP polytope)
Please describe step by step process of these algorithms. How does approximations helps when accuracy is concerned. Tell about the trade-offs that happen between accuracy and speed!
*I will get onto it soon*
Do previous implementation of what you are going to do already exists? If they, how you plan to make your implementation different from them. Is the language different?
I think they do exist in OpenGM library. I will surely take inspiration out of it. Yup, It is in C++ and we were to make our implementation in Python.
YOU MUST CHECK OUT THE AWESOME LIBRARY I AM WORKING WITH: Pgmpy
|
2021-04-22 01:40:27
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.50936359167099, "perplexity": 643.0777737968566}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618039560245.87/warc/CC-MAIN-20210422013104-20210422043104-00619.warc.gz"}
|
http://mathhelpforum.com/calculus/68005-tan-x-x-x-3-3-a.html
|
1. ## tan x>x+x^3/3
Prove that $\tan x > x+\frac {x^3}3$ if $0.
2. Originally Posted by james_bond
Prove that $\tan x > x+\frac {x^3}3$ if $0.
Using Taylor series with remainder
$\tan x = x + \frac{x^3}{3} + 8 \tan c ( 1 + \tan^2 c) (2 + 3 \tan^2c ) \frac{x^4}{4!}$
where $0. Since
$8 \tan c ( 1 + \tan^2 c) (2 + 3 \tan^2c ) \frac{x^4}{4!} > 0$
for
where $0, the result follows.
3. Other solutions? (Without Taylor series as we haven't learned it and it is too obvious from it :P)
4. Hi
Let f(x) = tan x - x - x^3/3
Then f'(x) = tan²x - x² = (tan x -x)(tan x + x)
For 0 < x < pi/2 tan x > 0 and x > 0 therefore tan x + x > 0
The sign of f'(x) is the same as g(x) = tan x - x
g'(x) = tan²x > 0
g is increasing on [0,pi/2[
g(0) = 0 so g(x) > 0 on [0,pi/2[
Therefore f'(x) > 0 on [0,pi/2[
f(0) = 0
f(x) > 0 on [0,pi/2[
|
2016-08-24 12:55:15
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 8, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7675514221191406, "perplexity": 3277.909716305529}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-36/segments/1471982292181.27/warc/CC-MAIN-20160823195812-00123-ip-10-153-172-175.ec2.internal.warc.gz"}
|
https://discuss.codechef.com/t/treeclr-editorial/102727
|
TREECLR - Editorial
Author: Sachin Deb
Testers: Harris Leung
Editorialist: Nishank Suresh
1898
PREREQUISITES:
Depth-first search
PROBLEM:
You are given a tree T and an integer C. Count the number of ways to color the vertices of T with colors 1, 2, \ldots, C such that no two vertices with a distance of \leq 2 have the same color.
EXPLANATION:
First, root the tree at some node, say 1. Let us try to color the tree in top-down fashion, i.e, from the root down to the leaves.
Look at some vertex u. What restriction do we have on its possible choice of colors, in relation to vertices that have been colored already?
Let p be the parent of u, and let g be the parent of p. For now, assume p and g both exist.
p and g have been colored already, and so u cannot have the same color as either p or g. p and g must also have distinct colors, so we are left with C-2 choices.
Further, u also cannot have the same color as some other vertex v that is a child of p and has already been colored. Note that each such vertex will also have a distinct color.
So, suppose s children of p have been colored already. Then, there are C-s-2 choices for the color of u.
Note that when either p or g (or both) don’t exist, the number of choices becomes C-s-1 or C-s respectively: make sure to not forget those cases.
Once we know the number of choices for each vertex u, the final answer is simply their product.
Implementing this is relatively simple, and can be done with a single DFS. Checking whether p and g exist is straightforward, and computing s is also trivial since we know how many children of p we have processed already.
TIME COMPLEXITY
\mathcal{O}(N) per test case.
CODE:
Setter's code (C++)
#include <bits/stdc++.h>
using namespace std;
typedef long long ll;
typedef pair <int, int> pii;
#define ALL(a) a.begin(), a.end()
#define FastIO ios::sync_with_stdio(false); cin.tie(0);cout.tie(0)
#define IN freopen("input.txt","r+",stdin)
#define OUT freopen("output.txt","w+",stdout)
#define DBG(a) cerr<< "line "<<__LINE__ <<" : "<< #a <<" --> "<<(a)<<endl
#define NL cerr<<endl
template < class T1,class T2>
ostream &operator <<(ostream &os,const pair < T1,T2 > &p)
{
os<<"{"<<p.first<<","<<p.second<<"}";
return os;
}
long long bigmod ( long long a, long long p, long long m )
{
long long res = 1;
long long x = a;
while ( p )
{
if ( p & 1 ) //p is odd
{
res = ( res * x ) % m;
}
x = ( x * x ) % m;
p = p >> 1;
}
return res;
}
const int N=1e6+1;
const ll oo=1e9+7;
vector<int> g[N];
ll fact[N];
ll inv_fact[N];
void init()
{
fact[0]=1;
for(ll i=1;i<N;i++)
fact[i]=(fact[i-1]*i)%oo;
inv_fact[N-1] = bigmod(fact[N-1],oo-2,oo);
for(ll i=N-2;i>=0;i--)
inv_fact[i]=(inv_fact[i+1]*(i+1))%oo;
}
ll ncr(int n,int r)
{
if(r>n) return 0;
return fact[n]*inv_fact[r]%oo*inv_fact[n-r]%oo;
}
int n,c;
ll dfs(int u,int p)
{
int x = 0;
ll ret=1;
for(int v: g[u])
{
if(v==p) continue;
x++;
ret=(ret*dfs(v,u))%oo;
}
ret = (ret*ncr(c-2,x))%oo * fact[x] % oo;
return ret;
}
int32_t main()
{
FastIO;
cin>>n>>c;
for(int i=1;i<n;i++)
{
int u,v;
cin>>u>>v;
g[u].push_back(v);
g[v].push_back(u);
}
init();
ll ans = c;
for(int v: g[1])
{
ans = ans * dfs(v,1)%oo;
}
ans = ans * ncr(c-1,g[1].size()) % oo * fact[g[1].size()] % oo;
cout<<ans<<"\n";
}
Tester's code (C++)
#include<bits/stdc++.h>
using namespace std;
typedef long long ll;
#define fi first
#define se second
const ll mod=1e9+7;
const int N=2e6+1;
ll n,k;
ll ans=1;
void dfs(int id,int p,int hp){
ans=ans*(k-hp)%mod;
int shp=1+(p!=0);
if(c==p) continue;
dfs(c,id,shp);
shp++;
}
}
void solve(){
cin >> n >> k;
for(int i=1; i<n ;i++){
int u,v;cin >> u >> v;
}
dfs(1,0,0);
cout << ans << '\n';
}
int main(){
ios::sync_with_stdio(false);cin.tie(0);
solve();
}
Editorialist's code (C++)
#include "bits/stdc++.h"
// #pragma GCC optimize("O3,unroll-loops")
// #pragma GCC target("avx2,bmi,bmi2,lzcnt,popcnt")
using namespace std;
using ll = long long int;
mt19937_64 rng(chrono::high_resolution_clock::now().time_since_epoch().count());
int main()
{
ios::sync_with_stdio(false); cin.tie(0);
int n, c; cin >> n >> c;
for (int i = 0; i < n-1; ++i) {
int u, v; cin >> u >> v;
}
const int mod = 1e9 + 7;
int ans = c;
auto dfs = [&] (const auto &self, int u, int p) -> void {
int poss = c-2 + (u == 0);
for (int v : adj[u]) {
if (v == p) continue;
ans = (1LL * ans * poss)%mod;
--poss;
self(self, v, u);
}
};
dfs(dfs, 0, 0);
cout << ans << '\n';
}
3 Likes
Well done problem setter: Number of ways to paint a tree of N nodes with K distinct colors with given conditions - GeeksforGeeks.
20 Likes
He is a virat kohli fan, what did you expect?
6 Likes
My Bfs Approach : CodeChef
2 Likes
Totally same question E - Virus Tree 2
7 Likes
This was a co incidence…
I was the setter even the admins and the moderator found the problem unique. Actually its not possible to scan all the problems in the world to check if your idea already exists or not. We have other works too brother. So please solve problems rather than clicking websites for solutions.
4 Likes
No need of applying any dfs I guess : Solution.
Also this problem deserved more sample test cases yaar.
1 Like
https://www.codechef.com/viewsolution/72134557
Can anyone tell me what’s wrong in my approach ? I can’t figure it out.
People are leaving CodeChef because of people like you. you have just copy paste the question. this question is available on every platefrom like atcodre ,gfg hackerrank etc…
and you have other work then why you have set the problem of contest , go and do your own work.
7 Likes
Hello Setter, this was a very good problem , can u provide us more with this kind of problems ,
Thank u.
Wow, after copying the problem you are making excuses. And what is with that attitude no doubt you can’t set original problems, arrogant idiots like you are the reason top coders prefer other platforms over cc. Even problem setters are cheaters on codechef. Go do your other work.
3 Likes
Are you sure that the Setter’s code in the editorial is written by you ?
Because, I have checked some random past submissions in your profile and the general template you have been using (since over an year) is the following.
#include <bits/stdc++.h>
#include <ext/pb_ds/assoc_container.hpp>
#include <ext/pb_ds/tree_policy.hpp>
#include <ext/pb_ds/detail/standard_policies.hpp>
using namespace __gnu_pbds;
using namespace std;
#define getbit(n, i) (((n) & (1LL << (i))) != 0)
#define setbit0(n, i) ((n) & (~(1LL << (i))))
#define setbit1(n, i) ((n) | (1LL << (i)))
#define togglebit(n, i) ((n) ^ (1LL << (i)))
#define lastone(n) ((n) & (-(n)))
char gap = 32;
#define ll long long
#define lll __int128_t
#define pb push_back
typedef tree<
int,
null_type,
less<int>,
rb_tree_tag,
tree_order_statistics_node_update>
ordered_set;
ll hashPrime = 1610612741;
While the Setter’s code in the editorial have a different template than this. In my opinion, this seems fishy to me!
If this code is not written by you, then you must credit the original author.
Despite of the fact that you have copied the problem or not, you should sincerely apologize for the fault on your side, instead of being arrogant to others. Lots of people compete in these contests, and they put in so much efforts and time, while these kind of things just ruins it off, and also creates a bad impression on the platform.
7 Likes
This is my template…One of my senior has also solved the problem and I submitted that solution to match the output that I generated from my code…But somehow contests admin skipped my solution and took the later one…
Now brother stop accusing others, yes its my fault that I don’t have enough knowledge that a problem 100% unique or not…
you basically don’t know the background story and barking like a mad after getting bashed by the problem. So please have a sleep and if you find everyone cheater Just leave.
1 Like
Dude solve more problems rather being a noob and bark all the day saying cheater cheater…if your skill is not up to the mark then its not my duty to save your rating by checking all the problems in the world to find my idea unique.
Bhai maina bas ek google kia gfg mai milgya.
Atleast what the problem actually is asking for baas usko google karke check karlo
1 Like
Ihr Professionelle Schädlingsbekämpfung und Kammerjäger Kraus
Kammerjäger & Schädlingsbekämpfung Kraus in ihrer Nähe 01579 24 92 996 NRW Wir sind Ihr zertifizierter Verband aus NRW ! Preiswert Aachen
Kammerjäger & Schädlingsbekämpfungsschände, Lästlinge und Ungeziefer sind der Schrecken für jeden Mieter oder Besitzer eines Hauses. Es spielt dabei keine Rolle, ob der Ärger mit Ratten, eher harmlosen Silberfischen im Bad oder mit Tauben auf dem Dachboden besteht. Ungeziefer und andere Schädlinge im Haus, in der Wohnung beseitigt eine Schädlingsbekämpfung durch den Sterillion Kammerjäger Schädlingsbekämpfung ! Ungeziefer sind von keinem gerne gesehen und die Schädlinge können überall sein. Winzig kleine Wanzen, Wespen oder große Ratten. Unsere Sterillion Kammerjäger Aachen beseitigen diese Plagen. Wir versichern, dass alle Schädlinge aus Ihrem Heim oder Ihrem Geschäft verschwinden. Lassen Sie uns diesen Auftrag dies für Sie erledigen! Kammerjäger Aachen
Kammerjäger, Schädlingsbekämpfung, Ameisenbekämpfung, Bettwanzenbekämpfung, Wespenbekämpfung, Mückenbekämpfung, Mottenbekämpfung, Käferbekämpfung, Kakerlakenbekämpfung, Holzwurmbekämpfung, Mäusebekämpfung, Rattenbekämpfung, Marderbekämpfung, Zeckenabwehr, Ungezieferbekämpfung
Kammerjäger Kraus
01579/2492996
info@kammerjaeger-kraus.de
|
2022-10-04 09:00:44
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.502387285232544, "perplexity": 13815.188993897502}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337490.6/warc/CC-MAIN-20221004085909-20221004115909-00153.warc.gz"}
|
https://www.physicsforums.com/threads/brownian-motion-from-virtual-photons.913124/
|
# B Brownian motion from virtual photons?
Tags:
1. Apr 30, 2017
### SlowThinker
This a really simple question: If I have, say, 2 ions close to one another, and measure their repulsion very precisely, is the force constant, or is it a series of little pushes caused by individual virtual photons?
I know there are many misunderstandings about virtual particles, but I'm not sure if this is one of them.
2. Apr 30, 2017
### Drakkith
Staff Emeritus
No, the force will be a smooth and continuous function of the distance between the ions. You will not see individual "pushes" by virtual photons.
3. May 1, 2017
### jfizzix
Virtual photons are an artifact of popular formulations of quantum electrodynamics. Their existence has not been established experimentally because they cannot be measured directly. They may be regarded as a mnemonic device for doing calculations of interactions of charged particles with the quantum electromagnetic field.
In the popular formulation:
The interaction of two ions repelling each other is described not just by their exchanging virtual photons, but it's much more.
One can calculate the probability of a given change in the ions momenta (due to repulsion), and find that their mean momenta point further away from one another as time progresses.
The actual calculation involves a sum over amplitudes of all conceivable processes that begin with a given initial state, and end with the given final state.
These processes include:
- the exchange of one virtual photon
-the exchange of a virtual photon, which decomposes into a particle-anti-particle pair, only to annihilate again before reaching the second ion
- the previous interaction, where the particle-anti-particle pair exchanges its own virtual photon
-and so on
and so on...
The full calculation of these interactions involves an infinite series of such interactions, which get exponentially less probable as the interaction gets more complicated, but nonetheless contribute to the exact result.
Altogether, these result in a smooth change in mean momentum over time instead of individual discrete kicks.
4. May 1, 2017
### velocity_boy
It will be a smooth and consistent repulsive motion. All in precise accordance with the specific charges of the respective ions. Whether they be cations or anions.
Besides, does not Brownian motion refer to particles suspended in a liquid medium?
Which decidedly is not the case in your ion scenario.
5. May 1, 2017
### SlowThinker
I was imagining a big, measurable ion being pushed around by a sea of invisible virtual photons.
Anyway, since you all agree that the push/pull is continuous, I'll take your word for it, but honestly, it doesn't make ANY sense.
I know that there are higher-order diagrams, but I'm not computing the actual value, so I just neglect those. Also I'm pretty sure that the first, simplest Feynman diagram contributes more than 99% of the force.
So, Feynman diagrams don't compute probability of the exchange, but strength of the exchange? I've never heard of such a concept in QM.
It would be awesome if someone could write an Insight showing the actual calculation but I guess I'll stop here
6. May 1, 2017
### Drakkith
Staff Emeritus
Do you think you'd be able to understand the math used? It may be very advanced. I'm not sure I'd be able to get through such an article.
7. May 1, 2017
### Staff: Mentor
And what everyone is telling you is that that's not a good way to imagine it. Virtual particles are just a model; they work for some purposes, but not for others. This is one of the purposes for which they don't work well.
If the right answer doesn't make any sense in the model you've chosen to use to imagine things, perhaps you ought to consider finding a better model.
Do you see the contradiction between these two statements?
Even putting that aside, there is a more fundamental issue here. Feynman diagrams, virtual particles, etc. are tools used in perturbation theory. Perturbation theory assumes that we are computing small changes around a "base" process, where the "base" process is that nothing happens. So the "base" Feynman diagram, the one with the largest amplitude, is the diagram in which nothing happens.
What you are considering is a static force, and a static force is not accurately modeled as a small perturbation around nothing happening. So perturbation theory as it is usually used is not a good tool for this problem. (The fact that pop science treatments often describe static forces using virtual particles does not change that fact.) You can construct perturbative models in which the "base" process is not nothing happening; for example, I believe that in QED models used to predict the Lamb shift, the "base" process is the static field of the nucleus. A model something like this might work for looking at the static force between ions. (I don't know if such a model has been done.) But in that case, the "base" process is not an exchange of virtual particles; it's what happens when no virtual particles are exchanged at all. So virtual particles still aren't modeling the static force.
8. May 1, 2017
9. May 1, 2017
### SlowThinker
I'd love to but I can't really remember seeing any other model explaining electro-magnetic interaction than the one with virtual photons.
Is there some I should try to find, or should I create one?
I was trying to say that the higher-order diagrams aren't really important for the issue I'm dealing with, since they affect the quantity, not quality of the interaction.
Ok, I'll try to find how it is usually modelled in QM. I hope it is modelled.
I went through 5 pages of Google results, no luck so far. John Baez has an article where he kind-of uses virtual photons to explain the repulsion between electrons.
10. May 1, 2017
### Staff: Mentor
Note that the article says "for bound states the method fails"? Bound states are an example of static forces ("static" because things don't change with time, the bound states just stay the same). It's not actually clear to me whether the OP intends the ions to be isolated and repelling each other, or in some kind of bound state like a metal.
Also see the response I'm about to post to SlowThinker.
11. May 1, 2017
### Staff: Mentor
The Wikipedia article that ftr linked to in post #8 derives the Coulomb potential (not force) for the electrostatic case--basically, two charges sitting at rest relative to each other have an interaction potential energy between them that is positive for like charges and negative for unlike charges. The observed force between the charges is the gradient of this potential. This result is derived using the path integral (but not actually evaluating it--see below), so it can be interpreted as a model using virtual particles--but that interpretation has serious limitations (see below). Similar results are derived in many QFT textbooks (e.g., Zee's Quantum Field Theory in a Nutshell derives it in an early chapter).
Note that I said "potential (not force)" above. The force between the charged particles is the gradient of the potential--but this is just like the ordinary classical case of a continuous potential energy leading to a force. In other words, QFT says that the force between charged particles is smooth, not "bumpy". That is one of the serious limitations of the "virtual particle" interpretation of the path integral--that it leads to a picture of what is happening (virtual particles "bumping" things) that does not match the actual prediction (or experiment). But nothing forces you to interpret the path integral using virtual particles; the only necessity in the model is the path integral itself.
The issue I was referring to is not that higher order diagrams have to be included; it is that, for this particular path integral (as you will see if you look at the Wikipedia article referred to above), the concept of "higher order diagrams" doesn't really apply to begin with. That's because we aren't actually evaluating the path integral; we are only using it to derive the propagator $D(k)$, and then integrating the propagator over all $k$ to obtain the potential.
In other words, we aren't even calculating the amplitude for one of the ions to emit or absorb a virtual photon, which is what evaluating the path integral would give us, because that doesn't correspond to anything we can actually measure in this situation; instead, we are calculating the potential energy between the ions due to quantum fields, and then, as above, taking the gradient of that potential energy to obtain the force. This is the other serious limitation of the virtual particle interpretation in this case: that interpretation, to the extent it makes sense, only makes sense if we are evaluating the path integral to compute amplitudes that we are going to compare with experiment, and in this case we aren't even doing that.
12. May 1, 2017
### Staff: Mentor
As my previous post points out, this isn't actually what is done to compute the interaction potential energy between charged objects in QFT.
|
2018-02-22 17:53:27
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6547932028770447, "perplexity": 471.40847668882327}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-09/segments/1518891814140.9/warc/CC-MAIN-20180222160706-20180222180706-00693.warc.gz"}
|
https://blog.afach.de/?p=436&replytocom=408
|
# Start your computer remotely using Raspberry Pi
### Why do that?
I have my server at home, which contains multiple hard-drives with all my data on them in RAID configuration. I have my way to access this server remotely from any where in the world, but in order to access the server, it has to be turned on! So the problem is: How do I turn my server when I need it?
This whole thing took me like 4 hours of work. It turns out it’s much easier than it looks like.
### Why is keeping the server turned on all the time a bad idea?
Of course, a web-server can be left turned on all the time to be accessed from everywhere at any time, but a server that is used to store data… I don’t see why one would turn it on unless one needs something from it. In fact, I see the following disadvantages in keeping the server on all the time and benefits for being able to turn it on remotely:
1. High power consumption… although the server I use is low-power, but why use 150 W all the time with no real benefit?
2. Reduction of the server life-span, components like the processor has a mean life-time that will be consumed by continuous operation.
3. Fans wear out and become noisier when used for longer times.
4. What if the server froze? I should be able to restart it remotely.
### What do you need to pull this off?
• Raspberry Pi (1 or 2, doesn’t matter, but I’ll be discussing 2)
• 5 Volts Relay. I use a 4 Channel relay module. It costs like $7 on eBay or Amazon depending on how many channels you need. • Jumper cables (female-female specifically if you’re using Raspberry Pi + a similar Relay Module) to connect the Raspberry Pi to the Relay Module • More wires and connectors to connect the server to the Raspberry Pi cleanly, without having a permanent long cord permanently connected to the server. I used scrap Molex 4-pin connectors: I cut a similar connector in half and used one part as a permanent connector to the server, and the other part went to the wire that goes to the Relay Module. • Finally, you need some expertise in Linux and SSH access as the operating system I use on my Raspberry Pi is Raspbian. This I can’t teach here, unfortunately, as it’s an extensive topic. Please learn how to access Raspberry Pi using SSH and how to install Raspbian. There are tons of tutorials for that online on the Raspbian and Raspberry websites that teach it extensively. If you’re using Windows on your laptop/desktop to SSH to the Raspberry Pi, you can use Putty as an SSH client. Once you’re in the terminal of your Raspberry Pi, you’re ready to go! ### How control is done using Raspberry Pi: If you already know how to control Raspberry Pi 2 GPIO pins, you can skip this section. On Raspberry Pi 2, there is a set of 40 pins that contain 26 pins that are called GPIO (General Purpose Input/Output) pins. GPIO pins can be controlled from the operating system of Raspberry Pi. I use Raspbian as an operating system of my Raspberry Pi 2 and the Python scripting language. In Raspbian, python is pre-equipped with what’s necessary to start controlling GPIO pins very easily. Why Python? Because it’s super-easy and is very popular (it took me a few days to become very familiar with everything in that language… yes, it’s that simple). Feel free to use anything else you find convenient for you. However, I provide here only Python scripts. The following is a map of these pins: And the following is a video where I used them to control my 4-channel Relay Module: And following is the Python script I used to do that. Lines that start with a sharp (#) are comments: Note 1: Be aware that indentation matters in Python for each line (that’s how you identify scopes in Python). If you get an indent error when you run the script, that only means that the indentation of your script is not consistent. Read a little bit about indentation in Python if my wording for the issue isn’t clear. Note 2: You MUST run this as super-user. #!/usr/bin/python3 import RPi.GPIO as GPIO import time GPIO.setmode(GPIO.BCM) #The following is a function that inverts the current pin value and saves the new state def switchPortState(portMapElement): GPIO.output(portMapElement[0],not portMapElement[1]) pe = [portMapElement[0],not portMapElement[1]] return pe #There's no easy way to know the current binary state of a pin (on/off, or 1/0, or True/False), so I use this structure, which is a dictionary array that goes from up to the number of channels one wants to control (I used GPIO channels 2,3,5,6). The first element of each element is the GPIO port number, and the second element is the assumed initial condition. The latter will invert in each step as in the video portMap = {} portMap[0] = [2,False] portMap[1] = [3,False] portMap[2] = [5,False] portMap[3] = [6,False] for i in range(len(portMap)): GPIO.setup(portMap[i][0], GPIO.OUT) while True: for i in range(len(portMap)): portMap[i] = switchPortState(portMap[i]) time.sleep(0.5) If you access your Raspberry Pi using SSH, then you can use “nano” as an easy text editor to paste this script. Say if you wanna call the script file “script.py”, then: nano script.py will open a text editor where you can paste this script. After you’re done, press Ctrl+X to exit and choose to save your script. Then make this script executable (linux thing), using: chmod +x script.py then run the script using sudo ./script.py This will start the script and the leds will flash every half a second. Again, we’re using “sudo” because we can only control Raspberry Pi’s GPIO pins as super user. There are ways to avoid putting your password each time you wanna run this, which will be explained later. ### Get a grasp on the concept of turning the computer on/off: There are two ways to turn your computer on/off electronically without using the switch and without depending on the bios (LAN wake-up, etc…): 1. If you’re lucky, the power button’s wires will be exposed and you can immediately make a new connection branch in the middle and lead it outside the computer. Shorting the wires is equivalent to pressing the power button. 2. Use the power supply’s motherboard green wire. Shorting this wire to ground (to any black wire) will jump the computer and start it. The following is a random picture for a computer power supply. A clip is used to short green with ground. I used the first way of the two ways I mentioned. Here’s a video showing how it looks like: So shorting these two wires that come from the power button for some time (half a second) is what I did and that works as being equivalent to pressing the power button. After you manage how to connect these, then you can go to the next step. ### Connecting the power-wires to the Relay Module: After learning how to control the Relay Module, and learning how to take a branch from the computer case that if you would short the computer starts, the remaining part is to connect the power-wires, that you got from your computer power button or green+black power supply cords, and connect them to the Relay Module. The following video shows the concept and the result. Now you have the two terminals that if you short together the computer starts, let’s get into a little more details. Important: One important thing to keep in mind when doing the wire connection to the Relay Module, is that we need to connect them in a way that does not trigger the power switch if the Raspberry Pi is restarted. Therefore, choose the terminal connections to be disconnected by default, as the following picture shows: Connect your power-wires two terminals to any of the marked two in the picture. The way the Relay Module works is that when it’s turned off, it switches whether the middle terminal is connected to left or right. By default it’s connected to right, and that’s what we can see in the small schematic under the terminals. After doing the connections properly, now you can use the following script turn your computer on: #!/usr/bin/python3 import RPi.GPIO as GPIO import time import argparse #initialize GPIO pins GPIO.setmode(GPIO.BCM) #use command line arguments parser to decide whether switching should be long or short #The default port I use here is 6. You can change it to whatever you're using to control your computer. parser = argparse.ArgumentParser() parser.add_argument("-port", "--portnumber", dest = "port", default = "6", help="Port number on GPIO of Raspberry Pi") #This option can be either long or short. Short is for normal computer turning on and off, and long is for if the computer froze. parser.add_argument("-len","--len", dest = "length", default = "short" , help = "Length of the switching, long or short") args = parser.parse_args() #initialize the port that you'll use GPIO.setup(int(args.port), GPIO.OUT) #switch relay state, wait some time (long or short) then switch it back. This acts like pressing the switch button. GPIO.output(int(args.port),False) if args.length == "long": time.sleep(8) elif args.length == "short": time.sleep(0.5) else: print("Error: parameter -len can be only long or short") GPIO.output(int(args.port),True) Save this script as, say, “switch.py”, and make it executable as we did before: chmod +x switch.py Now you can test running this script, and this is supposed to start your computer! sudo ./switch.py ### Running the program from a web browser You could be already satisfied by accessing switching the computer remotely using ssh, but I made the process a little bit fancier using a php webpage. ##### CAVEAT: Here, I explain how you could get the job done to have a webpage that turns on/off your computer. I don’t focus on security. Be careful not to make your home/work network’s components accessible to the public. Please consult some expert to verify that what you’re doing is acceptable and does not create a security threat for others’ data. ###### Using superuser’s sudo without having to put the password everytime In order to access this from the web, you have to change pins states without having to enter the password. To do this, run the command sudo visudo This will open a text editor. If your username for Raspberry Pi’s linux is myuser, then add the following lines there in that file: www-data ALL=(myuser) NOPASSWD: ALL myuser ALL=(ALL) NOPASSWD: ALL This will allow the Apache user to execute the sudo command as you, and you have absolute super-user power. Now notice that this is not the best solution from a security point of view, but it just works. The best solution is to allow the user www-data to run a specific command as root. Just replace the last “ALL” of www-data with a comma separated list of the commands you wanna allow www-data to run, and replace “myuser” between the parenthesis with “root”. I recommend you do that after having succeeded, to minimize the possible mistakes you could do. This is a legitimate development technique, we start with something less perfect, test it, then perfect it one piece at a time. ###### Installing Apache web-server First, install the web-server on your Raspberry Pi. Do this by running this set of commands in your terminal: sudo apt-get install apache2 sudo apt-get install php5 sudo apt-get install libapache2-mod-php5 sudo a2enmod php5 sudo a2enmod alias sudo service apache2 restart I hope I haven’t forgotten any more necessary components, but there are, too, many tutorials out there and forums discussing how to start an apache webserver. If the apache installation is a success, then you could go to your web-browser and see whether it’s working. First, get the hostname of your Raspberry Pi by running this command in Raspberry Pi’s terminal hostname Let’s say your hostname is “myhostname”. Now go to your browser, and enter this address: http://myhostname/ If this gives you a webpage, then the web-server is working fine and you can proceed. Otherwise, if the browser gives an error, you have to debug your web-server and get it working. Please consult some tutorial online to help you run the apache server. ###### Creating the webpage: The default directory where the main webpage is stored in apache is either “/var/www/” or “/var/www/html/”. Check where the index.html that you saw is, and place the new php file there. Say that php file has the name “control.php”, and say the default directory is “/var/www/”. Then, go to that directory using cd /var/www/ Now create the new php page using the command sudo nano control.php And use the script <!DOCTYPE html> <html> <head> <title>Control page</title> </head> <body> <form action="#" method="post"> <center> <select name="switchlen"> <option value="short">Short</option> <option value="long">Long</option> </select> <input type="submit" value="Switch server" name="submit"> </center> </form> <?php if(isset($_POST['submit']))
{
if($_POST['switchlen'] == "long") { echo("This is long"); echo("<br>");$command = "sudo -u myuser sudo /usr/bin/python3 /home/myuser/switch.py -len long > debug.log 2>&1";
}
else if($_POST['switchlen'] == "short") { echo("This is short"); echo("<br>");$command = "sudo -u myuser sudo /usr/bin/python3 /home/myuser/switch.py -len short > debug.log 2>&1";
}
$output = shell_exec($command);
echo("Script return: ");
var_dump(\$output);
}
?>
</body>
</html>
Don’t forget to change the path of the script to the correct path of your script “switch.py”, and change “myuser” to the username you’re using in your Raspberry Pi. After you’re done, press Ctrl+X to save and exit.
To use this page and have it successfully run the script, you have to do one more thing, which is making apache’s user own this file. The username for the apache web-server is called “www-data”, so assuming you called the file “control.php”, you have to run this command:
sudo chown www-data:www-data control.php
After you run this command, you should use “sudo nano” to edit this file instead of only nano, since your linux user doesn’t own the file anymore.
Also, don’t save the file in any place other than the original folder of apache (like /var/www), at least not before you make sure it works. Adding new folders that apache recognizes is something that requires additional steps that I don’t discuss here. Please consult an apache tutorial for that.
To test the new php page, go to the link:
http://myhostname/control.php
If the website doesn’t work, check the the file “debug.log” in the same path of control.php. It will tell you what was wrong in the script.
### Controlling your computer from outside your home
If you’re in a home network, then you can only access that webpage from within the network. If you would like to access it from outside the network, you have to have VPN access to your home network. Consider achieving this using OpenVPN. That’s how I do it. I may write an article about it some time in the future.
### Conclusion
I hope this article has given you an idea on how to control your appliances using Raspberry Pi. We have shown how to turn a computer on/off remotely.
I do this for fun, but also more professional tasks can be achieved using similar scripts, such as controlling scientific experiments.
## 13 thoughts on “Start your computer remotely using Raspberry Pi”
1. Christian says:
Nice guide. Could you show in more details how you connected the pins in your case please?
1. Samer says:
Well, in the Python script, you can choose what GPIO port number to use. For example, if you choose GPIO5 (based on this schematic), you’ll see that it’s pin 29. So, you connect pin 29 to the corresponding pin in the relay that you want to control. Now this is not enough, because you need to have a common digital ground between the relay and the Raspberry Pi. So you choose a ground pin on the Raspberry Pi (pin 9, for example), and you connect it to the GND pin on the relay. That’s all! That’s the minimal thing to do.
On the other hand, for a relay with 4 switches, use 4 GPIOx outputs + 1 GND, so 5 cables.
2. Alejandro says:
Great tutorial. I just wonder if I can install teamviewer in Raspbian instead of creating a web server.
Cheers
1. Samer says:
Sure you can. But there are two issues here:
1. I personally don’t trust closed-source software to be the way I control my network; worse case scenario if I need graphical access, I’d use VNC behind an SSH tunnel
2. Which one is easier, to just have to get any browser in your network to switch your computer, which can be set as a homepage with a button to switch, or have to start a TeamViewer session every time you want to do something?
In anyway, using TeamViewer will not make it much easier even in setting it up, because you won’t have an interface to interact with, but you’ll have the whole desktop. So you’re gonna have to make a script probably that you should at least double-click to turn on/off your computer.
Possible? Yes. Recommended? I wouldn’t do it.
3. Mauro says:
Hello,
Can you explain how you did all the rest for setting access from outside the network, the OpenVPN and stuff please?
Thx!
1. Samer says:
Hi there Mauro,
Sorry, I can’t explain the details because it’s really lengthy. But I can tell you that I have an OpenVPN server on some public server, and my home devices connect to that and I tunnel through it to them.
Best,
Sam
4. Mauro says:
Oh such a shame, thx anyway Sam, I'm gonna try to research into it.
Kind regards,
Mauro
5. Michael Baldwin says:
Can't you just use teamveiwer from any pc, anywhere to start it up?
1. Samer says:
And what if your computer freezes?
6. Peter Saltz says:
Duh – a much simplier solution is to plug the server into a "smart switch" that you can control via the Internet. Set the server bios to "full on after power failure". To turn server on turn smart switch off and then back on (simulates power failure). Shut server down normally when done.
1. Samer says:
I don’t trust 3rd party tools to control my apartment.
7. Shekhar says:
Hi Samer,
Thank you so much for creating this amazing tutorial. Scripts are working as expected and turning on my machines when I am running them via terminal.
However when I open the website and select SHORT or LONG option but nothing happens and get a message as “script return: null” on the webpage.
Any ideas what could be going wrong?
Thanks,
Shekhar
1. Samer says:
It’s not an error. It’s just how you want to display the result. In the PHP, IIRC, I would direct the python script to a debug file, which will make the stdout of the script null in the PHP output. You shouldn’t care about that. Just ensure that the python script is being called. That’s all you should care about.
This site uses Akismet to reduce spam. Learn how your comment data is processed.
|
2021-03-02 11:27:11
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2780901789665222, "perplexity": 2171.17102489702}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178363809.24/warc/CC-MAIN-20210302095427-20210302125427-00209.warc.gz"}
|
https://math.stackexchange.com/questions/2929671/mahalanobis-distance-and-linear-algebra
|
Mahalanobis Distance and Linear algebra
I am not very up to date on my linear algebra skills and I am having some trouble understanding an explanation from the book Pattern Recognition and Machine Learning, (Bishop).
It says , consider Mahalnobis distance, $$\Delta^{2}=(x-\mu)^{T}\Sigma^{-1}(x-\mu)$$
Where $$\Sigma$$ is a real symmetric matrix ( the variance covariance matrix).
"Because $$\Sigma$$ is real and symmetric, its eigenvalues will be real and its eigenvectors can be chosen to form an orthonormal set, so that ..."
$$u_{i}^{T}u_{j}=I_{ij}$$
and $$\Sigma$$ can be written as
$$\Sigma= \sum_{i=1}^{D} \lambda_{i} u_{i}u_{i}^{T}$$
and $$\Sigma^{-1}=\sum_{i=1}^{D} \frac{1}{\lambda_{i}}u_{i}u_{i}^{T}$$
so $$\Delta^{2}=\sum_{i=1}^{D} \frac{y_{i}^{2}}{\lambda_{i}}$$
where $$y_{i}=u_{i}^{T}(x-\mu)$$
I am just looking to understand these steps better. I don't understand how it is achieved. I understand the basics such as real spectral theorem tells us we have a diagonlization with an orthogonal matrix, but I do not understand how they use that to calculate $$\Sigma$$ , $$\Sigma^{-1}$$ and the choice of $$y_{i}$$. I am more interested in the actual interpretation. The geometric and how it it can be used.
Thanks to anyone who can help.
First of all, the Mahalanobis distance is actually defined as $$\sqrt{\Delta^2} = \sqrt{(x-\mu)^\intercal \Sigma^{-1}(x-\mu)}$$. Think in analogy to the "Euclidean" distance (the "usual" distance between two points), which is the square root of the sum of squares.
The main idea behind using eigenvectors is that you're choosing a basis for $$\Bbb{R}^D$$ that is "better suited" for the application of the matrix $$\Sigma$$ and its inverse. The better basis is the basis consisting of (orthonormalized) eigenvectors of $$\Sigma$$.
Why is this a "better" basis, you ask? Well, several reasons:
1. If $$u_j$$ is an eigenvector of $$\Sigma$$, we have $$\Sigma u_j = \lambda_j u_j$$. In other words, applying the linear transformation is really easy for eigenvectors: we just scale them.
2. Because $$\Sigma$$ is positive semi-definite, we can make an orthonormal basis of eigenvectors. Orthonormal bases are particularly nice because expressing an arbitrary vector in that basis is easy. Say $$y\in \Bbb{R}^D$$. Then, saying $$\{u_j\}$$ is an orthonormal basis is equivalent to saying that
$$y = \sum_{j=1}^D (y^\intercal u_j) u_j = \sum_{j=1}^Du_j(u_j^\intercal y)$$ Why is this identity helpful? If we apply $$\Sigma$$ and use the fact that $$\Sigma u_j = \lambda_j u_j$$, we immediately see why:
$$\Sigma y = \sum_{j=1}^D\Sigma u_j (u_j^\intercal y)= \sum_{j=1}^D\lambda_j u_j (u_j^\intercal y) \tag{1}$$ Another way to write this is simply
$$\Sigma = \sum_{j=1}^D\lambda_j u_ju_j^\intercal \tag{2}$$ (Apply both sides to $$y$$ to see why (1) and (2) are the same). Note that (2) is completely equivalent to the "spectral decomposition" of $$\Sigma$$, which is usually written as
$$\Sigma = U\Lambda U^\intercal \tag{3}$$ where $$U$$ is the matrix of eigenvectors, which satisfies $$U^\intercal U = I$$, and $$\Lambda$$ is the diagonal matrix of eigenvalues.
1. Not only is it easy to apply $$\Sigma$$ if we change basis to $$\{u_j\}$$, but it's also easy to apply the inverse. Suppose we want to solve the equation
$$\Sigma x = y$$ Using the matrix expression (3), we can write this as
$$U\Lambda U^\intercal x = y$$ Now, we know that $$U^\intercal U = I$$, so we can apply $$U^\intercal$$ to both sides of the equation:
$$U^\intercal U\Lambda U^\intercal x = U^\intercal y$$ and so
$$\Lambda U^\intercal x = U^\intercal y \tag{4}$$ what this equation says, by the way, is that the coefficients of $$x$$ in the basis $$\{u_j\}$$ are related to the coefficients of $$y$$ in the basis $$\{u_j\}$$ by a diagonal matrix $$\Lambda$$. Assuming that $$\lambda_j>0$$, we can solve the equation (4) easily too:
$$U^\intercal x = \Lambda^{-1}U^\intercal y$$ The inverse of a diagonal matrix is just the diagonal matrix with $$1/\lambda_j$$, by the way. Finally, we can apply $$U$$ to both sides to get
$$x = U\Lambda^{-1}U^\intercal y$$ Another way to write this is
$$x = \Sigma^{-1}y = \sum_{j=1}^D \frac{1}{\lambda_j} u_j u_j^\intercal y \tag{5}$$ Or, equivalently,
$$\Sigma^{-1} = \sum_{j=1}^D \frac{1}{\lambda_j}u_ju_j^\intercal$$ Finally, how do we use these expressions to get the Mahalanobis distance in the eigenvector basis? Well, suppose first that we want to compute $$\Sigma^{-1}(x - \mu)$$. We can simply apply expression (5):
$$\Sigma^{-1}(x-\mu) = \sum_{j=1}^D\frac{1}{\lambda_j}u_ju_j^\intercal (x-\mu) =\sum_{j=1}^D\frac{u_j^\intercal (x-\mu) }{\lambda_j}u_j$$ Now, to compute $$\Delta^2$$, we multiply both sides by $$(x-\mu)^\intercal$$, use linearity of sums and symmetry of the dot product:
$$\Delta^2 = (x-\mu)^\intercal \sum_{j=1}^D\frac{u_j^\intercal (x-\mu) }{\lambda_j}u_j = \sum_{j=1}^D\frac{u_j^\intercal (x-\mu) }{\lambda_j}(x-\mu)^\intercal u_j = \sum_{j=1}^D \frac{y_j^2}{\lambda_j}$$where $$y_j = (x-\mu)^\intercal u_j$$.
Geometrically, what does $$y_j = (x-\mu)^\intercal u_j$$ represent? Well, think of $$x$$ as a datapoint; subtracting the mean simply makes the origin $$\mu$$ instead of $$0$$. Then, $$(x-\mu)^\intercal u_j$$ is the component of the (re-centered) datapoint in the direction of the eigenvector $$u_j$$. This eigenvector is usually called a principal component in the statistics literature; so, $$(x-\mu)^\intercal u_j$$ is the "amount" of the (re-centered) datapoint in the direction of the $$j$$th principal component.
So why is the Mahalanobis distance a "distance"? Is there a geometric interpretation of this? Indeed - we're switching our basis to the principal component (eigenvector) basis, then simply computing a weighted euclidean distance between our data point $$x$$ and the mean $$\mu$$. The weights are the (inverse) eigenvalues $$\lambda_j$$. Why choose these weights? Well, smaller eigenvalues of $$\Sigma$$ correspond to weaker correlations; we want to normalize correlations before computing the distance.
Another good way to think about it is that we're performing a whitening transformation before computing distances: if we define
$$z = \Sigma^{-1/2}(x-\mu)$$ Then, we have
$$\Delta^2 = z^\intercal z$$ It's called a whitening transformation because it converts random vectors to "white noise", i.e. vectors with mean $$0$$ and covariance $$I_{D\times D}$$. This is just like the "Z score" transformation you learn in Stat 101:
$$Z = \frac{x-\mu}{\sigma}$$
Hope that helps. Usually I would draw pictures but I don't have the time. There should be some YouTube videos out there on Mahalanobis and PCA that can help.
|
2019-11-14 03:47:33
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 76, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9520677328109741, "perplexity": 149.72065462616322}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496667945.28/warc/CC-MAIN-20191114030315-20191114054315-00500.warc.gz"}
|
http://www.cfd-online.com/W/index.php?title=Two_equation_turbulence_models&diff=13480&oldid=13479
|
# Two equation turbulence models
(Difference between revisions)
Revision as of 15:22, 31 October 2011 (view source)m← Older edit Revision as of 15:59, 31 October 2011 (view source)mNewer edit → Line 31: Line 31: This situation gave rise to a plethora of near-wall treatments. Generally speaking two approaches can be distinguished: This situation gave rise to a plethora of near-wall treatments. Generally speaking two approaches can be distinguished: - * Low Reynolds number treatment (LRN) integrates every equation up to the viscous sublayer and therefore the first computational computational cell must have its centroid in viscous sublayer. This results in very fine meshes close to the wall. Additionally, for some models additional treatment (damping functions) of equations is required to guarantee asymptotic consistency with the turbulent boundary layer behaviour. This often makes the equations stiff and further increases computation time. + * Low Reynolds number treatment (LRN) integrates every equation up to the viscous sublayer and therefore the first computational computational cell must have its centroid in $y^{+}~1$. This results in very fine meshes close to the wall. Additionally, for some models additional treatment (damping functions) of equations is required to guarantee asymptotic consistency with the turbulent boundary layer behaviour. This often makes the equations stiff and further increases computation time. * High Reynolds number treatment (HRN) also known as wall functions approach relies on log-law velocity profile and therefore the first computational cell must have its centroid in the log-layer. Use of HRN enhances convergence rate and often numerical stability. * High Reynolds number treatment (HRN) also known as wall functions approach relies on log-law velocity profile and therefore the first computational cell must have its centroid in the log-layer. Use of HRN enhances convergence rate and often numerical stability.
## Revision as of 15:59, 31 October 2011
Two equation turbulence models are one of the most common type of turbulence models. Models like the k-epsilon model and the k-omega model have become industry standard models and are commonly used for most types of engineering problems. Two equation turbulence models are also very much still an active area of research and new refined two-equation models are still being developed.
By definition, two equation models include two extra transport equations to represent the turbulent properties of the flow. This allows a two equation model to account for history effects like convection and diffusion of turbulent energy.
Most often one of the transported variables is the turbulent kinetic energy, $k$. The second transported variable varies depending on what type of two-equation model it is. Common choices are the turbulent dissipation, $\epsilon$, or the specific dissipation, $\omega$. The second variable can be thought of as the variable that determines the scale of the turbulence (length-scale or time-scale), whereas the first variable, $k$, determines the energy in the turbulence.
## Boussinesq eddy viscosity assumption
The basis for all two equation models is the Boussinesq eddy viscosity assumption, which postulates that the Reynolds stress tensor, $\tau_{ij}$, is proportional to the mean strain rate tensor, $S_{ij}$, and can be written in the following way:
$\tau_{ij} = 2 \, \mu_t \, S_{ij} - \frac{2}{3}\rho k \delta_{ij}$
Where $\mu_t$ is a scalar property called the eddy viscosity which is normally computed from the two transported variables. The last term is included for modelling incompressible flow to ensure that the definition of turbulence kinetic energy is obeyed:
$k=\frac{\overline{u'_i u'_i}}{2}$
The same equation can be written more explicitly as:
$-\rho\overline{u'_i u'_j} = \mu_t \, \left( \frac{\partial U_i}{\partial x_j} + \frac{\partial U_j}{\partial x_i} \right) - \frac{2}{3}\rho k \delta_{ij}$
The Boussinesq assumption is both the strength and the weakness of two equation models. This assumption is a huge simplification which allows one to think of the effect of turbulence on the mean flow in the same way as molecular viscosity affects a laminar flow. The assumption also makes it possible to introduce intuitive scalar turbulence variables like the turbulent energy and dissipation and to relate these variables to even more intuitive variables like turbulence intensity and turbulence length scale.
The weakness of the Boussinesq assumption is that it is not in general valid. There is nothing which says that the Reynolds stress tensor must be proportional to the strain rate tensor. It is true in simple flows like straight boundary layers and wakes, but in complex flows, like flows with strong curvature, or strongly accelerated or decellerated flows the Boussinesq assumption is simply not valid. This give two equation models inherent problems to predict strongly rotating flows and other flows where curvature effects are significant. Two equation models also often have problems to predict strongly decellerated flows like stagnation flows.
## Near-wall treatments
HRN (left) vs LRN(right). HRN uses log law in order to estimate gradient in the cell.
The structure of turbulent boundary layer exhibits large, compared with the flow in the core region, gradients of velocity and quantities characterising turbulence. See Introduction to turbulence/Wall bounded turbulent flows for more detail. In a collocated grid these gradients will be approximated using discretisation procedures which are not suitable for such high variation since they usually assume linear interpolation of values between cell centres.
Moreover, the additional quantities appearing in two-equation models require specification of their own boundary conditions that on purely physical grounds cannot be specified a priori.
This situation gave rise to a plethora of near-wall treatments. Generally speaking two approaches can be distinguished:
• Low Reynolds number treatment (LRN) integrates every equation up to the viscous sublayer and therefore the first computational computational cell must have its centroid in $y^{+}~1$. This results in very fine meshes close to the wall. Additionally, for some models additional treatment (damping functions) of equations is required to guarantee asymptotic consistency with the turbulent boundary layer behaviour. This often makes the equations stiff and further increases computation time.
• High Reynolds number treatment (HRN) also known as wall functions approach relies on log-law velocity profile and therefore the first computational cell must have its centroid in the log-layer. Use of HRN enhances convergence rate and often numerical stability.
Interestingly, none of the current approaches can deal with buffer layer i.e. the layer in which both viscous and Reynolds stresses are significant. The first computational cell should be either in viscous sublayer or in log-layer -- not in-between. Automatic wall treatments, available in some codes, are an ad hoc solution but the blending techniques employed there are usually arbitrary and though they can achieve the switching between HRN and LRN treatments they cannot be regarded as the correct representation of the buffer layer.
|
2016-10-22 08:16:30
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 13, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8016327619552612, "perplexity": 1055.0209843125806}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-44/segments/1476988718840.18/warc/CC-MAIN-20161020183838-00301-ip-10-171-6-4.ec2.internal.warc.gz"}
|
https://www.physicsforums.com/threads/hard-problem-on-mathematical-problem-and-fourier-series.197751/
|
# HARD problem on mathematical problem and fourier series
1. Nov 12, 2007
HARD problem on mathematical model and fourier series
Hi,
I have this problem about creating a mathematical model.
the context is fourier series/transform.
It is about finding a mathematical model for hourly temperatures .
I have attached the file. I tried to search for fourier model for temperatures but I had no sucess.
Can I have some suggestions so that I can start the problem please?
I tried but sorry I dont have anything to propose.
Please you can give you your ideas. Maybe it may help me.
thank you.
B
#### Attached Files:
• ###### temp_problem.pdf
File size:
42.8 KB
Views:
124
Last edited: Nov 12, 2007
|
2018-07-19 06:08:23
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8952282667160034, "perplexity": 1937.9013917214354}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-30/segments/1531676590559.95/warc/CC-MAIN-20180719051224-20180719071224-00332.warc.gz"}
|
https://zbmath.org/?q=an:0841.46017
|
zbMATH — the first resource for mathematics
Note on the capacity in Orlicz spaces. (Note sur la capacitabilité dans les espaces d’Orlicz.) (French. Extended English abstract) Zbl 0841.46017
Summary: If $$L_A(\mathbb{R}^n)$$ is a reflexive Orlicz space, then analytic sets are $$C_{k, A}$$-capacitable. This improves results obtained by the author and A. Benkirane in [Ann. Sci. Math. Quebec 18, No. 1, 1-23 (1994; Zbl 0822.31006) and 18, No. 2, 105-118 (1994; Zbl 0826.46022)] when $$L_A(\mathbb{R}^n)$$ is uniformly convex with respect to the Luxemburg norm.
MSC:
46E30 Spaces of measurable functions ($$L^p$$-spaces, Orlicz spaces, Köthe function spaces, Lorentz spaces, rearrangement invariant spaces, ideal spaces, etc.) 46B20 Geometry and structure of normed linear spaces 31C45 Other generalizations (nonlinear potential theory, etc.)
Full Text:
|
2021-09-22 05:33:54
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8121161460876465, "perplexity": 2302.9936783432304}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780057329.74/warc/CC-MAIN-20210922041825-20210922071825-00550.warc.gz"}
|
https://www.learncra.com/federal-reserve-feds-notes-the-decline-in-lending-to-lower-income-borrowers-by-the-biggest-banks/
|
Federal Reserve: FEDS Notes | The Decline In Lending To Lower-Income Borrowers By The Biggest Banks
Data collected under the Home Mortgage Disclosure Act (HMDA) reveal that the largest banks have significantly reduced their share of mortgage lending to low- and moderate-income (LMI) households in recent years. We present evidence suggesting that this reduction is explained in part by a decrease in the largest banks’ willingness to originate mortgages insured by the Federal Housing Administration (FHA).
The Decline in Lending to LMI Borrowers
Figure 1 shows trends over time and across lender categories in the fraction of home-purchase loans originated to LMI borrowers, defined under the Community Reinvestment Act (CRA) as those borrowers with incomes less than 80 percent of estimated current area median family income.2While the trends among nonbank lenders and smaller bank lenders have been similar over the past few years, the largest three bank mortgage lenders experienced a notably steeper decline in LMI lending compared to the rest of the market. These three banks–Wells Fargo, Bank of America and JPMorgan Chase–together originated about 9 percent of all mortgages reported in the 2016 HMDA data and account for nearly one-third of all deposits in the United States.3 As shown in the figure, the LMI share among these three banks declined 17 percentage points, from 32 percent in 2010 to 15 percent in 2016. Over the same period, smaller banks and non-bank lenders (including credit unions and independent mortgage companies) reduced their LMI share by 9 percentage points overall.
Figure 1: Share of Mortgages to LMI Borrowers
The same pattern of larger declines in LMI lending by the largest banks is also evident when we compare different lenders within the same county. Comparing lending patterns within counties alleviates the concern that the three largest banks may have had larger declines in LMI lending only because they were concentrated in counties with larger declines in mortgage demand by LMI borrowers. In 2010, a mortgage originated by one of the three largest banks was 2-1/2 percentage points less likely to go to an LMI borrower than a mortgage originated by other lenders in the same county. By 2016, the difference had widened to 8-1/2 percentage points (see figure 2).
Figure 2: Difference in Within-County LMI Share for Largest Banks
The relatively steep reduction in the LMI share of lending by large banks is somewhat puzzling because the LMI lending share is an important – although not the only – metric by which banks’ compliance with the CRA is judged.4 Federal banking regulators take these examinations into account in evaluating banks’ applications to engage in mergers and acquisitions or open new branches. Banks with less-than-satisfactory ratings on their CRA examinations may have their applications denied and be unable to move forward with expansion plans. In addition, CRA ratings are disclosed to the public and good ratings can generate positive publicity. Given the incentives under the CRA, why might these banks have had such notable declines in their LMI share of lending?
The Decline in FHA Lending
A second trend in the mortgage originations of the largest banks that might help explain their declining LMI share is a coincident decline in their origination of loans insured by the Federal Housing Administration (FHA). FHA insurance protects lenders against losses in the event of borrower default, and so allows borrowers with relatively small down payments or relatively low credit scores to access mortgage credit they might otherwise be denied. FHA loans are disproportionately used by LMI borrowers for these reasons.5
As shown in figure 3, the FHA share of loans originated by the three largest banks fell from 43 percent in 2010 to just 5 percent in 2016. Because FHA loans are much more likely to be originated to LMI borrowers, this decline is likely related to these banks’ decrease in LMI lending. The share of FHA lending also declined for other banks and for nonbank lenders, though not as sharply as for the largest three banks.
Figure 3: Share of FHA Mortgages
The disproportionately large decline in both LMI and FHA lending by the largest three banks raises two questions. First, why did these largest three bank lenders reduce their FHA originations by more than other lenders during this period? Second, how much of the “excess” decline in lending to LMI borrowers by the three largest banks can be accounted for by the larger decline in FHA mortgages?
Reasons for the Decline in FHA Lending
One possible reason for the disproportionate decline in FHA lending by the largest banks could be related to recent litigation brought against them by the Department of Justice under the False Claims Act. The Department of Justice has argued that lenders who improperly certify mortgages as eligible for FHA insurance may be held liable for making false claims to the United States government, subjecting them to treble damages.6 Since 2011, the Department of Justice has sued a number of large mortgage lenders for violations of the False Claims Act, including each of the largest three bank lenders. The costs of these lawsuits have been large: for example, Wells Fargo reached a $1.2 billion settlement with the Department of Justice in 2016.7 While other lenders have also been targeted in these lawsuits, large banks have been particularly explicit about the effect of these lawsuits on their FHA lending. For example, in an April 2015 letter to shareholders, JP Morgan Chase CEO Jamie Dimon explained that the company had reduced its FHA lending in part because of the risk “from the penalties that the government charges if you make a mistake.”8 Several other factors have also contributed to the overall decrease in FHA lending since 2010. First, rising FHA insurance premiums over this period likely shifted demand away from the FHA.9Second, lenders have faced significant uncertainty around the FHA’s “indemnification” policy, which defines circumstances under which FHA insurance is voided because the loans are judged to be improperly underwritten.10 Third, the cost of servicing a delinquent FHA mortgage rose significantly during this period.11 While each of these factors likely raises the expected costs associated with FHA-guaranteed mortgages for all lenders, they could have induced larger reactions from the large bank lenders, whose overall profits are less dependent on their mortgage lending business. The Effect of FHA Lending on LMI Lending To measure the connection between the decline in LMI lending and the decline in FHA lending, we decompose the LMI share of each lender into three components: i) the share of FHA mortgages going to LMI borrowers (which we denote “(LMI | FHA)“), ii) the share of non-FHA mortgages going to LMI borrowers (LMI | not FHA), and iii) the share of all mortgages that carry FHA insurance (FHA). Then the LMI share can be written: LMI=FHA(LMI|FHA)+(1FHA)(LMI|notFHA) Each of these three components decreased from 2010 to 2016, contributing to the decline in LMI lending for both the largest three banks and for the rest of the market. To isolate the effect of the larger decline in FHA lending among the largest three banks, we ask what their LMI share would have been in 2016 if their FHA share had declined not by the actual amount (37 percentage points) but by the smaller amount experienced by the rest of the market (14 percentage points), leaving the decline in LMI lending among both FHA and non-FHA loans unchanged. In reality, the LMI share of lending by the largest banks fell over 8 percentage points more than the LMI share of all other lenders did from 2010 through 2016. Had the FHA share among large banks matched the decline among the rest of the market, their LMI share would have declined by only about 2-1/2 percentage points more than all other lenders (see figure 4). Thus, the larger decline in FHA lending among the three largest bank mortgage lenders can explain about three-fourths of their additional decline in lending to LMI borrowers. Figure 4: Changes in FHA and LMI Shares, 2010-2016 Implications for Borrowers From the borrowers’ perspective, an important question is whether big banks’ drop in FHA and LMI lending actually makes it more difficult for lower-income households to obtain credit, or whether these households are able to obtain similar mortgages from other lenders. Informing this question, several recent papers have found that when external events cause some mortgage lenders to reduce their lending, this reduction is not fully offset by increased lending from their competitors. For example, Gete and Reher (2017) consider the tightening of mortgage credit by lenders who experienced regulatory shocks following the Dodd-Frank Act. They find that areas with greater exposure to these lenders experience a greater overall restriction in the supply of credit, resulting in more households being unable to become homeowners. Similarly, Mondragon (2016) finds that counties more exposed to the collapse of a large mortgage lender experienced a larger reduction in total mortgage credit and ultimately in local employment rates.12 This research raises the possibility that as the largest banks reduce their mortgage lending to lower-income households, these borrowers may be unable to easily obtain loans from other sources. References Bhutta, Neil, and Daniel Ringo (2016). “Changing FHA Mortgage Insurance Premiums and the Effects on Lending,” FEDS Notes. Washington: Board of Governors of the Federal Reserve System, September 29, 2016, http://dx.doi.org/10.17016/2380-7172.1843 Bhutta, Neil, Daniel Ringo and Steven Laufer (2017). “Residential Mortgage Lending in 2016: Evidence from the Home Mortgage Disclosure Act Data,” Federal Reserve Bulletin, Forthcoming. Community Affairs Office (2006). “New “Banker’s quick reference guide to CRA””, Federal Reserve Bank of Dallas, e-Perspectives, issue 3. https://www.dallasfed.org/assets/documents/cd/pubs/quickref.pdf Dimon, Jamie (2016). “Chairman and CEO letter to shareholders”, JPMorgan Chase & Co. https://www.jpmorganchase.com/corporate/investor-relations/document/ar2015-ceolettershareholders.pdf Federal Financial Institutions Examination Council (U.S.). Home Mortgage Disclosure Act (Public Data). Garcia, Daniel (2017). “Declines in Mortgage Supply and Employment in the Great Recession,” unpublished manuscript. http://www.econ2.jhu.edu/seminars/Spring2017/Garcia_022117.pdf Gete, Pedro and Michael Reher (2017). “Systemic Banks, Mortgage Supply and Housing Rents,” unpublished manuscript. https://ssrn.com/abstract=2756056 Goodman, Laurie (2016). “Servicing Costs and the Rise of the Squeaky-Clean Loan,” Mortgage Banking vol. 76 (5). https://www.urban.org/research/publication/servicing-costs-and-rise-squeaky-clean-loan Mondragon, John, Household Credit and Employment in the Great Recession (December 16, 2014). Kilts Center for Marketing at Chicago Booth – Nielsen Dataset Paper Series 1-025. https://ssrn.com/abstract=2521177 Parrott, Jim (2014). “Lifting the Fog around FHA Lending?” Housing Finance Policy Center Commentary. Urban Institute, http://www.urban.org/sites/default/files/publication/22391/413053-Lifting-the-Fog-around-FHA-Lending-.PDF U.S. Department of Housing and Urban Development, Office of Policy Development and Research (2017). “FY 2017 INCOME LIMITS: Frequently Asked Questions,” Washington: Department of Housing and Urban Development, https://www.huduser.gov/portal/datasets/il/il17/FAQs-17.pdf U.S. Department of Justice, Office of Public Affairs (2016). “Wells Fargo Bank Agrees to Pay$1.2 Billion for Improper Mortgage Lending Practices,” Washington: Department of Justice, https://www.justice.gov/opa/pr/wells-fargo-bank-agrees-pay-12-billion-improper-mortgage-lending-practices
1. Board of Governors of the Federal Reserve System. The views expressed in this note are solely those of the authors and not necessarily those of the Federal Reserve Board or others within the Federal Reserve System. Return to text
2. All statistics in this note refer to first-lien purchase mortgages for one- to four-family, owner-occupied, site-built homes, and are calculated from data reported under the HMDA (www.ffiec.gov/hmda). Return to text
3. In addition to these three banks, there were two other banks, and five nonbanks, among the top 10 mortgage lenders in 2016. The two other banks were U.S. Bank and Flagstar, but these banks are significantly smaller than the three largest banks. For example, the volume of deposits at Bank of America is almost four times larger than at U.S. Bank. For more detail on mortgage lending activity by the largest mortgage lenders in 2016, see Bhutta, Laufer and Ringo (2017). Return to text
4. The CRA only applies to federally-insured banks and thrifts, not to nonbank financial institutions or credit unions. For more on how banks are evaluated under the CRA, see for example, Community Affairs Office (2006). Because CRA examinations take into account many aspects of banks’ activities, a decline in LMI lending does not necessarily imply that a bank will receive a ratings downgrade. Return to text
5. In 2016, 37 percent of LMI borrowers used FHA insured mortgages compared to 21 percent of non-LMI borrowers. Return to text
6. Mortgages may be ineligible for FHA insurance if they are not underwritten or documented according to FHA guidelines. Return to text
8. See Dimon (2016), https://www.jpmorganchase.com/corporate/investor-relations/document/ar2015-ceolettershareholders.pdf. Not all lenders responded to False Claims Act lawsuits in the same manner. Quicken Loans has countersued the Department of Justice and continues to originate a large volume of FHA loans, maintaining its position as the largest FHA lender according to the 2016 HMDA data. Flagstar Bank, which reached a \$133 million settlement in 2012, has also not retreated from the FHA market – over 30 percent of its home purchase originations in 2016 were for FHA loans. Return to text
9. See Bhutta and Ringo (2016). Return to text
10. See Parrott (2014) Return to text
11. See Goodman (2016) Return to text
12. Garcia (2017) reaches similar conclusions. Return to text
Please cite this note as:Bhutta, Neil, Steven Laufer, and Daniel R. Ringo (2017). “The Decline in Lending to Lower-Income Borrowers by the Biggest Banks,” FEDS Notes. Washington: Board of Governors of the Federal Reserve System, September 28, 2017, https://doi.org/10.17016/2380-7172.2077.
Disclaimer: FEDS Notes are articles in which Board economists offer their own views and present analysis on a range of topics in economics and finance. These articles are shorter and less technically oriented than FEDS Working Papers.
|
2019-03-20 05:21:52
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.21361683309078217, "perplexity": 6941.527371337258}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-13/segments/1552912202299.16/warc/CC-MAIN-20190320044358-20190320070358-00154.warc.gz"}
|
https://getpractice.com/subjects/maths/solution-of-triangle?page=586
|
### Solution of Triangle
goals
Find the approximate value of $\angle{A}$ in $\triangle{ABC}$ if $8\angle{A}=9\angle{B}=4\angle{C}$.
The sides of a triangle are $\sin \alpha, \cos \alpha$ and $\sqrt {1 + \sin \alpha \cos \alpha}$ for some $0 < \alpha < \dfrac {\pi}{2}$. Then the greatest angle of the triangle is
If $x,8$ and $12$ are the sides of a triangle then,
$A$ balloon is observed simultaneously from three points $A,\ B,\ C$ due west of it on a horizontal line passing directly underneath it. lf the angular elevations at $B$ and $C$ are respectively twice and thrice that at$A$and if $AB=220$ metres and $BC=100$ metres, then the height of the balloon from the ground is
$A$ tree stands vertical, on the hill side, which makes an angle of $22^{0}$ with the horizontal. From the point $35$ meters directly down the hill from the base of the tree, the angle of elevation of the top of the tree is $45^{0}$. Then the height of the tree (Given $\sin 22^{0}=0.3746, \cos 22^{ } =0.9276$ from tables) is
|
2020-10-24 03:55:21
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8157609105110168, "perplexity": 199.81039317128798}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107881640.29/warc/CC-MAIN-20201024022853-20201024052853-00537.warc.gz"}
|
https://www.repository.cam.ac.uk/browse?type=title&sort_by=1&order=ASC&rpp=20&etal=-1&null=&offset=1673
|
Now showing items 1674-1693 of 206493
• #### Application of random coherence order selection in gradient-enhanced multidimensional NMR
(Institute of Physics Publishing, 2016-04-06)
Development of multidimensional NMR is essential to many applications, for example in high resolution structural studies of biomolecules. Multidimensional techniques enable separation of NMR signals over several dimensions, ...
• #### Application of remote sensing and geographic information systems in land-cover mapping of Kerio-Valley, Kenya.
(1997-10-28)
• #### Application of the comprehensive set of heterozygous yeast deletion mutants to elucidate the molecular basis of cellular chromium toxicity
(2007-12-18)
Abstract Background The serious biological consequences of metal toxicity are well documented, but the key modes of action of most metals are unknown. To help unravel molecular mechanisms underlying the action of chromium, ...
• #### Application of the Observational Method on Crossrail projects
(Federation of Piling Specialists, 2015-12-21)
This paper describes the use of the Observsational Method (OM) on three Crossrail station excavations. Firstly, at Tottenham Court Road, Western Ticket Hall excavation where a code compliant design was started. By the third ...
• #### Applications for edge detection techniques using $\textit{Chandra}$ and $\textit{XMM–Newton}$ data: galaxy clusters and beyond
(Oxford University Press, 2016-06-10)
The unrivalled spatial resolution of the $\textit{Chandra}$ X-ray observatory has allowed many breakthroughs to be made in high-energy astrophysics. Here we explore applications of Gaussian gradient magnitude (GGM) filtering ...
• #### Applications of cyclic belief propagation.
(2000-07-18)
• #### Applications of first principles NMR calculations
(2010-10-12)
• #### Applications of Grid techniques in the CFD field
(2008-06-26)
Besides the widely used Reynolds-averaged Navier-Stokes (RANS) solver, Large Eddy Simulation (LES) and Direct Numerical Simulation (DNS) are becoming more and more practical in today's Computational Fluid Dynamics (CFD) ...
• #### Applications of Large-Scale Density Functional Theory in Biology
(IOP Publishing, 2016-08-05)
Density functional theory (DFT) has become a routine tool for the computation of electronic structure in the physics, materials and chemistry elds. Yet the application of traditional DFT to problems in the biological ...
• #### Applications of Microdroplet Technology for Algal Biotechnology
(Bentham Science, 2016-03-28)
Background: Microfluidics allows manipulation of small volumes of fluids through channels with dimensions of tens to hundreds of micrometres. Microdroplet technology is a form of microfluidics in which small (10-200 μm ...
• #### Applications of the InChI in cheminformatics with the CDK and Bioclipse
(2013-03-13)
Abstract Background The InChI algorithms are written in C++ and not available as Java library. Integration into software written in Java therefore requires a bridge between C and Java libraries, provided by the Java Native ...
• #### Applied and Implied Semantics in Crystallographic Publishing
(Murray-Rust group, Dept. of Chemistry, University of Cambridge, 2012-01-12)
(Association for Computing Machinery, 2016)
The application of mobile computing is currently altering patterns of our behavior to a greater degree than perhaps any other invention. In combination with the introduction of power efficient wireless communication ...
• #### Applying single-molecule localisation microscopy to achieve virtual optical sectioning and study T-cell activation
(2015-10-06)
Single-molecule localisation microscopy (SMLM) allows imaging of fluorescently-tagged proteins in live cells with a precision well below that of the diffraction limit. As a single-molecule technique, it has also introduced ...
• #### Applying the Behavior Change Technique (BCT) Taxonomy v1: A study of coder training
(2014-11-19)
BACKGROUND: Behavior Change Technique Taxonomy v1 (BCTTv1) has been used to detect active ingredients of interventions. PURPOSE: Evaluate effectiveness of user training in improving reliable, valid and confident application ...
• #### Applying the POWHEG method to top pair production and decays at the ILC.
(2008-06)
We study the effects of gluon radiation in top pair production and their decays for e+e− annihilation at the ILC. To achieve this we apply the POWHEG method and interface our results to the Monte Carlo event generator ...
• #### Appreciative evaluation of restorative approaches in schools
(2015-06-08)
A restorative approach to conflict is being increasingly applied in schools around the world. Existing evaluation evidence has tended to focus on the impact on quantifiable outcomes such as number of behaviour incidents ...
• #### Apprentice pay in Britain, Germany, and Switzerland: institutions, market forces and market power
(Sage Journals, 2013-07-29)
The pay of metalworking apprentices is high in Britain, middling in Germany and low in Switzerland. We analyse these differences using fieldwork evidence and survey data, drawing on both economic and institutionalist ...
• #### An approach to catalytic asymmetric electrocyclization
(2010-02-09)
Chapter 1 outlines the development of a catalytic electrocyclic process and its exploitation in asymmetric synthesis. Since Woodward and Hoffmann delineated a rationale for the mechanism and stereochemistry of these reactions ...
• #### The Approach to Increase Incomes of Peasants in China at the Present Stage
(Association of Cambridge Studies, 2013)
|
2016-10-21 11:33:30
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3198414146900177, "perplexity": 7418.74290506341}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-44/segments/1476988717963.49/warc/CC-MAIN-20161020183837-00435-ip-10-171-6-4.ec2.internal.warc.gz"}
|
https://www.hackerearth.com/problem/algorithm/little-shikamaru-and-caesar-cipher-circuit/
|
Little Shikamaru And Caesar Cipher
Tag(s):
## Medium-Hard
Problem
Editorial
Analytics
Yesterday's number theory class was taught by Shiho a member of the Konoha Cryptanalysis Team. She taught the young ninjas Caesar Cipher Encryption and how it is used in real missions. Little Shikamaru found the method very genius and beautiful.
So, when he came home, he came up with the following problem. Given two strings of digits initial and target, he will take any substring of length $K$ or smaller from the initial string, and apply a right shift of any length he wants. He will repeat this process until he fully converts it to the target string, or he will stop if he thinks it is impossible. Little Shikamaru would like to know the minimum number of moves required to complete the conversion.
Shifting right by $x$, means substituting every digit by the $x^{th}$ digit after it, wrapping arround if needed. For example, when we shift "9" by 1, it will become "0".
Input Format:
Each test case starts with $K$ the length of the substring. Second and third lines contain the initial and target strings, respectively.
Output Format:
Print the minimum number of moves in order to convert initial to target, else print $-1$ if it is impossible.
Constraints:
• $0 \le K \le 6$
• Length of initial and target are less than or equal $50$
• Length of initial and target is the same
SAMPLE INPUT
2
0011
1221
SAMPLE OUTPUT
2
Explanation
Apply a right shift of length 1 to the first two characters, then your intial string will become [11 11]. The second move is to update the second and third characters again by a right shift of length 1 to get the final string.
Time Limit: 2.0 sec(s) for each input file.
Memory Limit: 256 MB
Source Limit: 1024 KB
Marking Scheme: Marks are awarded when all the testcases pass.
Allowed Languages: C, C++, C++14, Clojure, C#, D, Erlang, F#, Go, Groovy, Haskell, Java, Java 8, JavaScript(Rhino), JavaScript(Node.js), Julia, Kotlin, Lisp, Lisp (SBCL), Lua, Objective-C, OCaml, Octave, Pascal, Perl, PHP, Python, Python 3, R(RScript), Racket, Ruby, Rust, Scala, Swift, Visual Basic
## CODE EDITOR
Initializing Code Editor...
## This Problem was Asked in
Challenge Name
June Circuits
OTHER PROBLEMS OF THIS CHALLENGE
• Data Structures > Advanced Data Structures
• Algorithms > Dynamic Programming
• Algorithms > Dynamic Programming
• Algorithms > Graphs
• Algorithms > Dynamic Programming
|
2017-09-24 03:20:00
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.20620465278625488, "perplexity": 3705.1705424739375}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-39/segments/1505818689845.76/warc/CC-MAIN-20170924025415-20170924045415-00314.warc.gz"}
|
https://space.stackexchange.com/questions/33125/how-often-has-or-how-common-is-it-for-the-isss-orbit-been-propulsive-lowered
|
# How often has (or how common is it for) the ISS's orbit been propulsive lowered intentionally?
The question Which engine worked the hardest to keep the ISS in orbit? laments the tireless efforts to keep the ISS from falling from the sky by regularly executing propulsive orbit-raising maneuvers.
How often has, or how common is it for a propulsive orbit-lowering been executed intentionally? One might think never at first, but these rocket scientists play games with fast flights from the ground to the ISS and that sometimes requires phasing and comments note there are collision avoidance maneuvers as well, so it's not inconceivable that it may have happened.
Of course it means there needs to be an engine pointed in the prograde direction, so there's that.
• I would guess that collision avoidance might be another reason to lower the orbit, assuming the maneuver would take less fuel depending on the circumstances. – JohnHoltz Dec 27 '18 at 17:13
• Pendantically-speaking, wouldn't doing a single boost burn lead to an elliptical orbit with a second burn needed to achieve a (lower aphelion) circular orbit? – Alex Hajnal Dec 28 '18 at 0:34
• @AlexHajnal further pedantry: there's no such thing as a truly circular orbit to begin with. It's a mathematical abstraction not possible in the real world. – uhoh Dec 28 '18 at 0:39
• @Cristiano I'm using the convention that "pointed" means where the exhaust goes. To go to space, I point the engine down in order to go up. To lower the altitude of an orbit, I point the nozzle and its exhaust forward in the prograde direction in order to experience a force in the retrograde direction. Hand a thruster to a normal person and say "point this engine away from you" and they will not be looking into the nozzle. – uhoh Dec 28 '18 at 12:32
Not having access to any sort of comprehensive data, I was able to find information on 3 occurrences.
Jan 2015 ISS needed a phasing maneuver to prepare for a "fast" (4-orbit) progress rendezvous. If the phasing were done prograde, the increase in altitude would have reduced the cargo load that the progress could bring to dock. Decreased debris at that altitude may also have been a reason for the direction choice.
Aug 2008 ATV did a retrograde avoidance maneuver. The document mentions that it was the first such performed in eight years. So presumably there was at least one prior in 2000.
• I'm going to accept this as it's well-sourced and provides to well-documented examples. That it is so challenging to find some indicates to my satisfaction at least that the answer to "How often has (or how common is it..." is "not very". – uhoh Mar 20 at 15:13
• Another deboost was done on 7 October 2020, to "set up phasing conditions" for the arrival of the 63 Soyuz vehicle (Soyuz MS-17 ), which used an "ultrafast" 2-orbit rendezvous, and for the return of the 62 Soyuz vehicle (Soyuz MS-16). – rchard2scout May 4 at 14:27
TLEs and SGP4 may help.
If we calculate the mean radius vector (or the semi-major axis, but not the osculating semi-major axis) for the TLE epoch, we get the following graph:
The reboosts are clearly shown, but there is no deboost (the spikes you see immediately after a reboost are TLE artefacts).
EDIT
This first part of the edit is to clarify about the @uhoh’s misleading message where he say that in this graph:
there is a sudden radius vector drop, while now I’m saying that there are no sudden drops.
It’s absolutely clear that the sudden drop shown in the second graph is caused by a big increase of the air density and not by a “propulsive orbit-lowering executed intentionally” (aka deboost).
Since he asked: "How often has a propulsive orbit-lowering been executed intentionally?", I’m just saying that for the graphed period there are no “propulsive orbit-lowering executed intentionally”.
All that said, a good resource could be the page https://spaceflight.nasa.gov/realdata/sightings/SSapplications/Post/JavaSSOP/orbit/ISS/SVPOST.html where there are the lines:
IMPULSIVE TIG (GMT) M50 DVx(FPS) LVLH DVx(FPS) DVmag(FPS)
IMPULSIVE TIG (MET) M50 DVy(FPS) LVLH DVy(FPS) Invar Sph HA
DT M50 DVz(FPS) LVLH DVz(FPS) Invar Sph HP
------------------------------------------------------------------------
361/03:07:48.737 -1.0 2.1 2.1
N/A -0.5 -0.2 221.5
000/00:05:37.474 1.8 -0.1 215.8
For a deboost, we probably should expect a negative LVLH DVx component (but I’m not totally sure).
Afaik, only the current version of that page seems to be available, but I’m saving a local copy of that page since the day 120 of the year 2017; here’s an example of the list of the maneuvers:
IMPULSIVE TIG (GMT) DT DVx DVy DVz DVmag
137/22:15:09.861 000/00:00:19.723 1.0 0.0 0.0 -> 1.00
…
361/03:07:48.737 000/00:05:37.474 2.1 -0.2 -0.1 -> 2.11
|
2021-08-03 06:34:00
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4464436173439026, "perplexity": 2627.3689397131348}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046154432.2/warc/CC-MAIN-20210803061431-20210803091431-00212.warc.gz"}
|
http://math.stackexchange.com/questions/329300/solve-frac1x-1-frac2x-2-frac3x-3-cdots-frac10x-10-geq-fr
|
# Solve $\frac{1}{x-1}+ \frac{2}{x-2}+ \frac{3}{x-3}+\cdots+\frac{10}{x-10}\geq\frac{1}{2}$
I would appreciate if somebody could help me with the following problem:
Q: find $x$
$$\frac{1}{x-1}+ \frac{2}{x-2}+ \frac{3}{x-3}+\cdots+\frac{10}{x-10}\geq\frac{1}{2}$$
-
If the left side is $$f(x)=\sum_{k=1}^{10} \frac{k}{x-k},$$ then the graph of $f$ shows that $f(x)<0$ on $(-\infty,1)$, so no solutions there. For each $k=1..9$ there is a vertical asymptote at $x=k$ with the value of $f(x)$ coming down from $+\infty$ immediately to the right of $x=k$ and crossing the line $y=1/2$ between $k$ and $k+1$, afterwards remaining less than $1/2$ in the interval $(k,k+1)$. This gives nine intervals of the form $(k,k+a_k]$ where $f(x) \ge 1/2$, and there is a tenth interval in which $f(x) \ge 1/2$ beginning at $x=10$ of the form $(10,a_{10}]$ where $a_{10}$ lies somewhere in the interval $[117.0538,117.0539].$ The values of the $a_k$ for $k=1..9$ are all less than $1$, starting out small and increasing with $k$, some approximations being $$a_1=0.078,\ a_2=0.143,\ a_3=0.201,\ ...\ a_9=0.615.$$ The formula for finding the exact values of the $a_k$ for $k=1..10$ is a tenth degree polynomial equation which maple12 could not solve exactly, hence the numerical solutions above.
-
the 10th degree polynomial is irreducible, by the way. – Ewan Delanoy Mar 13 '13 at 16:28
Thanks. I didn't check that, but assumed it since maple didn't even give a smaller "RootOf" than degree ten. – coffeemath Mar 14 '13 at 13:14
Let $x$ be any number such that $10 \lt x \lt 21$. Then for each $k\in [1,10]$ we have $21k \gt x \gt k$, hence $20k \gt x-k \gt 0$, so $\frac{k}{x-k} \gt \frac{1}{20}$. Summing from $k=1$ to $10$, we obtain the desired inequality.
-
There are other solutions near $k$ for $k=1...9$ and beyond 10 the solution set extends to about 117. – coffeemath Mar 13 '13 at 12:46
this picture is show what coffeemath said:
-
|
2015-07-04 20:54:03
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9510733485221863, "perplexity": 177.6221400015782}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-27/segments/1435375096944.75/warc/CC-MAIN-20150627031816-00286-ip-10-179-60-89.ec2.internal.warc.gz"}
|
https://gametheory.life/2020/04/22/estimating-the-proportion-of-corona-cases-with-a-random-sample/
|
# Estimating the proportion of Corona cases
This short note makes one simple point. If you are interested in estimating the proportion of Corona infected people in some country or region, there is a simple and better (more precise) estimate than the one you obtain by computing the sample proportion. You can also read this in German here (and here).
Setup
Consider taking a (completely) random sample of $n$ individuals in some population in order to estimate the proportion of people in this population who have the Corona virus. Let $p$ denote this true proportion. I here assume that we already know, through potentially non-random medical testing, that there is a certain fraction $q$ of the population who definitely have the virus (or have had it). I will refer to these people as those that were declared to have the virus. I assume that whatever medical test was used to obtain this number was perfect, at least in one direction: anyone who has been declared to have the virus this way also actually has it. As, thus, necessarily $q \le p,$ we can write $p=\mu q,$ where we interpret $\mu \ge 1$ as the multiplier or ratio of actual virus cases relative to the declared virus cases. I am here interested in estimating $\mu$ from the random sample knowing $q.$ If we have an estimate for $\mu$ we get one for $p$ by multiplying the $\mu$-estimate with $q$.
When we take the random sample, we collect two pieces of information from each person. One, we check (again, for the sake of simplicity, with a perfect medical test) whether or not they have the virus. Two, we ask them (and the subject answers truthfully) whether they have already been declared as having the virus. I will call $X$ the total number of virus cases in the sample and $Y \le X$ the total number of already declared virus cases in the sample.
Estimator
Many people would probably be tempted to use $\hat{p}=\frac{X}{n}$ as the standard estimator for $p,$ and, thus, indirectly $\hat{\mu}_S=\frac{X}{qn}$ as the standard estimator for $\mu$. It turns out that there is a better estimator that uses all available information. Let me call it the alternative estimator $\hat{\mu}_A$. It is given by
$\hat{\mu}_A=1+\frac{X-Y}{qn}.$
In the Appendix below I derive (in a few simple steps) this estimator as an approximation of the maximum-likelihood estimator for the present problem. It, therefore, does have all the nice properties that maximum likelihood estimators have. But even if you are a maximum likelihood skeptic, we can actually just directly compare the precision (for all sample sizes) of the two estimators, by looking at their variances.
First note that, like the standard estimator, the alternative estimator is unbiased as
$\mathbb{E}\left[\hat{\mu}_A\right]=1+\frac{\mu qn - qn}{qn} = \mu.$
The variance of the two estimators are
$\mathbb{V}\left[\hat{\mu}_S\right] = \frac{\mu(1-\mu q)}{qn} \approx \frac{\mu}{qn},$
and, as $X-Y$ is binomially distributed with number of trials $n$ and success probability $\mu q \left(1-\frac{1}{\mu}\right),$
$\mathbb{V}\left[\hat{\mu}_A\right] = \frac{(\mu-1)(1-q(\mu-1))}{qn} \approx \frac{\mu-1}{qn},$
where the approximation is good when $q$ and $\mu$ are sufficiently small.
In this case the ratio of the two variances is given by
$\frac{\mathbb{V}\left[\hat{\mu}_A\right]}{\mathbb{V}\left[\hat{\mu}_S\right]} = \frac{\mu-1}{\mu} < 1.$
Thus, especially, if $\mu$ is not much larger than 1, the alternative estimator is quite a bit more precise. Note also, that the alternative estimator can never be below 1.
Austrian Corona cases
In Austria, from 1st to 6th of April, a random sample of $n=1544$ was checked for the Corona virus. I will here ignore the disturbing sample selection problem that actually 2000 people were supposed to participate and 456 did not participate. Of those who participated the number of cases found, $X,$ was 5 and the number of already declared cases among them, $Y,$ was either 2 or 3. There was some weighting in these numbers which I am not fully informed about. I will ignore these issues here, but at least will look at both cases for $Y.$ At the same day the proportion $q=1/758$ (11383 declared cases among 8,636.364 people in Austria).
Using the, here also easily applicable, Clopper-Pearson method to compute 95\% confidence bounds, we get the following estimates and bounds derived from the two different estimators.
$\begin{array}{c|ccc} & \hat{\mu}_S & \hat{\mu}_A (Y=3) & \hat{\mu}_A (Y=2) \\ \hline \mbox{estimate } \mu & 2,46 & 1,98 & 2,47 \\ \mbox{lower bound } \mu & 0,87 & 1,12 & 1,30 \\ \mbox{upper bound } \mu & 5,72 & 4,54 & 5,30 \\ \mbox{lower bound cases } & 9866 & 12738 & 14845 \\ \mbox{estimated cases } & 27.968 & 22570 & 28164 \\ \mbox{upper bound cases } & 65126 & 51726 & 60331 \\ \end{array}$
As you can see, the confidence bounds are much narrower for the alternative estimator than for the standard estimator.
A Thought
If we could assume, which sadly we often probably cannot, that the proportionality factor $\mu$ is the same in all regions of interest, while $q$ is observably not, then one could take a specific random sample that would even be much better than a random sample of all people. In Austria, for instance, the $q$ for Landeck in Tirol is about $q_L=1/50,$ while in Neusiedl am See in Burgenland it is about $q_N=1/1000.$
Then a random sample of people in Landeck would produce a much more precise estimate for $\mu$ than a random sample of people in Neusiedl. The variance for the Neusiedl estimator would be 20 (the ratio of $q_L/q_N$) times as large as that for Landeck.
Another Thought
Of course, there is nothing specific about the setup here that makes it only applicable to counting virus cases. This estimator could be used in all cases in which we are interested in the true proportion of some attribute A in some population, when we know that only A’s can also have attribute B and we know how many B’s there are. Looking at it like that I am sure this estimator is known. So I am here just reminding you all about it.
Appendix
We here derive the alternative estimator as an approximation to the maximum likelihood estimator. Taking a truly random sample, we know that $X$ is binomially distributed with number of trials $n$ and success probability $\mu q.$ Conditional on $X$ we know that $Y$ is binomially distributed with number of trials $X$ and success probability $\frac{1}{\mu}.$ The likelihood function is, therefore, given by
$\mathcal{L}(\mu;X,Y) = {n \choose X} (\mu q)^X\left(1-\mu q\right)^{(n-X)} {X \choose Y} \left(\frac{1}{\mu}\right)^Y \left(1-\frac{1}{\mu}\right)^{(X-Y)}.$
The log-likelihood function is then proportional to
$\ell(\mu;X,Y)=X \ln(\mu q) + (n-X) \ln\left(1-\mu q\right) - Y\ln(\mu) + (X-Y) \ln\left(1-\frac{1}{\mu}\right).$
The maximum likelihood estimator, thus, has to satisfy
$\begin{array}{lll} \frac{X}{\mu q}q + \frac{n-X}{1-\mu q} (-q) - \frac{Y}{\mu} + \frac{X-Y}{1-\frac{1}{\mu}} \frac{1}{\mu^2} & = & 0 \\ \frac{X-Y}{\mu} - \frac{q(n-X)}{1-\mu q} + \frac{X-Y}{\mu(\mu-1)} & = & 0. \end{array}$
If $\mu q$ is small, we can approximate $1-\mu q$ by 1. We then get
$\mu=1+\frac{X-Y}{q(n-X)}.$
If $X$ is, in expectation, much smaller than $n,$ we can approximate this further to get
$\hat{\mu}_A=1+\frac{X-Y}{qn}.$
|
2023-02-08 09:36:30
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 59, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8263652324676514, "perplexity": 316.67369727346926}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500758.20/warc/CC-MAIN-20230208092053-20230208122053-00175.warc.gz"}
|
https://desmosgraphunofficial.wordpress.com/
|
A propos
DesmosGraph Calculator is an incredibly powerful tool to display curves ( Cartesian, polar, implicit ) and a lot more (with animation, sliders, summation and derivatives, custom functions, vectors of parameters, fitting, regression, stats, …). Firstly dedicated to pupils and high-school activities, it is also of interest for the scientist, the engineer and the university student. But the simplicity of principles and the minimalist manual hide some of the power of the tool. Conversely, some things are uneasy (and many think impossible) to do if you don’t know how to twist the tool.
In this blog, we will explore some of these points.
Disclaimer: I’m not related to the development team. I’m just a hard user of the tool 🙂
Various tricks: sliders, keyboard, interrupted contours, function selector
Sliders
Making a movable point from variables respects their constraints.
→ this allows for 1D sliders:
set v=1 , then settle some value range. then draw point (0,v), and makes it vertically movable (in facts it is the default).
Note that DesmosGraph is smart enough so that even translation and scaling are permitted to draw your slider: (0, v/2 + 1/2 ) . ( But not non-linear transforms ).
Same principle to tune 2 variables as a square 2D slider.
What about other shapes, e.g., disk ?
DesmosGraph allows to put equation everywhere, comprising in parameters tunings, so that the second parameter range can be constrained by the first: range -sqrt(1-v²), sqrt(1-v²)
Example here. More extreme: spiral slider.
Keyboard user control
Desmos have multiple input and output alternatives, to help with various disabilities.
One of the features is that many thing can be controlled from the keyboard.
This include selecting a movable point ( use CTRL-ALT-P , then TAB or shift TAB to browse points ) and moving it ( using arrow keys ).
Practical use: to let the user control 1 ot 2 parameters via keyboard, put a discrete movable point somewhere, and just tell users to first do CTRL-ALT-P once, then use arrow keys.
Example here.
Interrupted contours
Sometime you want to store several contours or shapes in a same list.
This is possible, because DesmosGraph knows special numbers “undefined” like 0/0 , and “infinity”, like 1/0 or infty command.
The first can be used to interrupt contours, the second to make horizontal or vertical asymptotes.
Example here.
Function selector
Sometime you want to test a transform other several possible functions, or let the user experiment. Instead of changing the formula for f(x) or swapping the namings fi(x) vs f(x), you can implement a function selector by storing them in a table: F(x) = [ all functions ] then f(x) = F(x)[s] .
Example here.
Extending DesmosGraph
Disclaimer: This is the most technical and unreasonable post on this blog. Ephemeral & hazardous stuff here.
There are 3 ways to extend DesmosGraph with more features:
• For programmers, the Desmos API lets you use and interact with Desmos elements within your own JS programs. I won’t address this here, see Desmos API ( help in the official API googlegroup and in the unofficial “programming” channel in the desmos discord forum ).
• Adding plugins. Attention, these are non official, and might break with new versions of DesmosGraph.
• Secret beta-features. Attention, these are unpublic and experimental, so they can disappear or change at any time, and have many bugs. Just for fun !
Secret beta-features
Attention, these are unpublic and experimental, so they can disappear or change at any time, and have many bugs. Just for fun !
• Simulation:
Run equations at every frame of an animation (choose the rate), or just when clicking “1 step”.
This open a whole new world of reactive animations, simulations and games !
Principle: name the variable(s) to be updated with the equation(s) you give.
Trick: if you update big arrays, hide them in closed folder for better perfs.
To activate it:
F12 to go to the JS console, then enter Calc.updateSettings({clickableObjects: true}) +return, then F12 to exit.
• Active buttons :
Run equations each time a point is clicked. To switch on-off something, or so more: e.g. one step of simulation above.
To activate it:
as above. Points will now show a new option: clickable ( only for not movable points ).
But for this one, attention: when people load your graph they do can use the feature, but they won’t see the update equations if they don’t activate the feature.
More styling options for points and lines, before they (maybe) get public. (free colors once where there).
To activate it:
F12 to go to the JS console, then enter Calc.updateSettings({advancedStyling: true}) +return, then F12 to exit.
• 3D mode:
no longer available.
Examples:
Interactive drawing (with reset button).
Conway game of life cellular automata.
– See various simulations and automata in this reddit Desmos competition.
Using Complexes
We can use either points (x,y) or arrays [x,y] to implement complex, then redefine all classical operators mul, div, pow, exp, log. ( addition, subtraction, multiplication by scalar works directly ).
• Implementation with points
– directly displayable and dragable.
– points can contain lists ( to generate automatically a set of complexes )
Inconvenient: you cannot compare or solve points.
• Implementation with arrays
Big advantage: you can compare and solve array: see Inversion below.
Inconvenients:
– you have to convert to/from points for display or drag parameters.
– you cannot have list or list, so you can’t generate automatically a set of complexes.
Note that to display the mapping of space via a complex transform we would like to display the transformed grid, via solving f(X) = grid. Alas, only array-complexes allow solving while only points-complexes allow grids. But we can still do that by testing that values are integer via mod()=0, or near integer.
Example: complex inversion
Fitting curves and surfaces
For once this is not about a hidden feature or advanced trick: it is about a not know enough incredibly powerful function of DesmosGraph: fitting, i.e., finding the parameters of a simple target curve best approaching your raw data or your too complex function.
Many tools propose least square method to fit a line on a set, but DesmosGraph is a very rare tool allowing you to fit equations or data to really any shape you like, with any kind of parameters. And this can also be used for multidimensional functions.
And all this ultra simple: complexfunc(X) ~ simplefunc(X) , where simplefunc(x) is defined as usual, with use of free parameters as usual (just don’t define them, fitting will). X is an array of values, since fitting work on data sets.
In the definition of g(), you can add contraints on the parameters , e.g. { -1 ≤ b ≤ 1 } to restrict search domain (in some complex situations this can help).
Useful hacks: more colors !
New (v 1.6) :
Desmos Graph now have color functions !
var = rgb(r,g,b) (r,g,b in 0..255 ) , or var = hsv(h,s,v) ( h in 0..360, s and v in 0…1 )
Declaring this add a new color to the palette you can use when choosing colors.
You can use any expression, possibly depending on an existing variable (e.g. time). NB: The variable name is compulsory, despite of no use for you.
A cool use is to define array values for colors: then if you draw an array of points or curves and set this “color” to it, each successive element will use the successive color in the array.
See example1, example2, example3.
So now we can even do ray-tracing with the proper colors 😉
Deprecated way:
Desmos Graph only allows a very small palette of 6 colors, even if you can simulate more by superimposing layers with transparency.
Mr H. Here to Help proposed a useful hack to add more, via javascript:
Example ➛
• Open the Javascript Console of your browser ( via menu, or shortcut F12 )
• In the console, type Calc.colors.COLORNAME = "#RGB" , where RGB packs the R,G,B color components encoded in hexadecimal from 0 (min intensity ) to F (max intensity). E.g., Calc.colors.olive = "#880"
• Variant: You can have more precision in the tint by encoding the colors as "#RRGGBB" , i.e. 2 digits for each color component from 00 (min intensity ) to FF (max intensity). E.g., Calc.colors.olive = "#808000"
• You can then close the Javascript tab with the close button or F12.
• Now, the color menu shows the extra colors !
Attention: the object colors are correctly saved, but at loading they will no longer appear as choice in the color menu.
Conveniently packed by Andre Issa as a Color Picker plugin
[ not working well on my ubuntu, though ]
[ To be continued ]
Drawing geometry (secret features inside :-) )
If you practice even a bit of Desmos Graph, you already know many way to draw curves (explicit, implicit, parametric, polar plot…) and to fill them (using inequalities). We also presented here how to draw series of curves and do more complex regions painting. With this we can already produce polygonal figures from equations:
polygons concentric polygons stars involved constructions
But it is also possible to directly draw and manipulates shapes, and there are even undocumented features to do so.
At the base is the point: we already use them to draw parametrics ( X(t), Y(t) ) , or to get visual sliders ( a, b ). This set point syntax. But there is a lot more you can do with points:
• You can name them: A = (1,2), even for unknowns: X = (x,y)
• Indeed they can even contain a list:
• X = [1,2,3] Y = [4,2,7] P = (X,Y)
Or directly as T = ( [1,2,3], [4,2,7] )
Or using an array. Example: Delaunay triangulation :
• This allows to define point sets:
P = ( cos( 2Pi[0...N]/N ) , sin( 2Pi[0...N]/N ) )
• Conversely, you can have list of points: L = [ (1,2), (3,5), (0,0) ]
That you can even translate ans scale: (0,2) + .3 L
• You can access point coordinates with P.x and P.y
• You can draw polygons:
• polygon( A, B, C ),
polygon( (1,1), (0,5), (5,3) ),
polygon(P) with P defining a set above.
The attributes button let you tune the look as for curves, with more options.
• Indeed you can also directly draw polygons when defining points set as P above: The attributes button let you join them with lines and tune exactly the same set of parameters.
• Arrays also let you display columns as point sets and tune attributes, allowing to draw points and join then with lines. Making them loop is not easy ( P[ join([1,...length(P)],[1])] ) , so it can be simple to duplicates the first point to the last position (explicitly, or by storing the duplicate values in some variables).
• You can compute the distance between points (simples or sets):
distance(A,P)
• You can get the midpoint(A,P)
• Since version 1.6, you can add, substract, multiply points. E.g. (1-t)A + tB draws segment AB.
You can also use piecewise definition: C = { n=0: (1,2) , n=1: (0,7), (0,0) } . see example.
• Example of application: drawing splines from point sets:
• Alas we cannot directly solve or compare (e.g., A+X = B-X or midpoint( (0,0),(x,y)) = (5,5) ).But we can do it on anything scalar:
• via their coordinates,
• we do can solve and compare with the distance operator:
distance( (x,y), (0,0) ) <= 1 draws a filled circle.
distance( (x,y), (0,0) ) = [0...N]/N draws a series of circles.
distance( (x,y), (0,1) ) = distance( (x,y), (x,0) ) draws a parabola.
More examples here.
Attention: distance( (x,y), P ) = 1 with P as above would traces a series of circles, one around each points in the set, but distance( (x,y), P ) = [0...N]/N won’t turn them into a double series (multiple circles around points): remember than arrays are synchronous in expressions, so each element of the array of radii will fit the corresponding element in the array of points.
• Of course, you can also do geometry with points encoded as arrays, using points only for display and controls. Example here. Then,
• you can add, substract, weight points-as-arrays, cf 3D ellipse:
• dot product is total(V1*V2),
• distance is sqrt(total(V1²)),
• multiplication by matrix is [ total(V*[M00,M01]) , total(V*[M10,M11]) ],
• etc.
• ( drawing grids and vector fields: treated here ).
• With some effort, from all this you can even do meshes in perspective: see here.
Desmos traps: “why is it not working ?”
It’s not uncommon to have either unexpected results or puzzling error messages in Desmos. Here are some classical examples and solutions.
• Desmos log means log10. Not to be confused with ln , that also be noted log_e.
• Desmos stdev means the statistical estimator, not to be confused with the real standard deviation of a set, noted stdevp. ( stdev = sqrt(N/(N-1))*stdevp ).
• There is no local variable. Sum_n prevents you to use n anywhere else again (and the error message will be puzzling)… but in other sums.
• Lists are kind of vectors, but with common alignment for all expressions in your session. Don’t expect to enlarge the combinatory by passing an extra vector where a constant was expected.
Ambiguous syntax
• The syntax for tracing the locus where a variable or function equals something looks a lot like what you would do for defining it, and Desmos easily think it’s a second definition of the same thing, which it allows… as long as its not used in another expression (then you’ll get a puzzling error message).→To avoid f(x) = 3 ambiguity, either trace 3 = f(x) or f(x)+0 = 3
• Conditions look like a separate statement, but indeed it’s part of the expression.That’s why ( x, y ) { cond } can’t work, for a point is not a value. Use (x, y { cond } ) .
Copy-pasting
• From Desmos Graph to exterior, you will always get LaTeX form of equations. So Desmos can be used as a LaTeX editor, but you can’t get simple text version.
• From outside text to Desmos Graph: it works… but for operators like sqrt, |.|, {.}, sum …Indeed Desmos expect some LaTeX source, and just extend syntax to some text names like sin so that simple text expressions accidentally work. Conversely x_a = \sum{\sqrt{x}} works, or any complex LaTeX formula.But \{, that block the pasting.
New: some plugins now allow to manage that.
Out of conceptual space
Many limitations can be twisted, like drawing in 3D, making random numbers, or emulating vector algebra. But some concepts remain definitively out of reach:
• In fact all expressions are precomputed by Desmos. So don’t expect to reproduce really evolving states, like dynamics simulation ( but with this hacky non-reliable hidden feature ). Even recursivity won’t work, for conditions are just functions like the others, rather than algorithmic structures.
• There is no way you can get the inverse of a function. But you can trace it.
• or between conditions is near impossible, appart duplicating the function and applying the alternate condition (for plotting) or putting the whole content in piecewise form (for definitions): f(x) = { cond1: val, cond2: same-val }
(and can be obtained with the m < x < M syntax for simple cases or by nesting { cond } statements.
Generating random numbers and points
Undocumented functions: (see more)
• random(N)creates an array of N uniform random numbers in range 0..1.
random(N,s) provides a seed s allowing producing different sequences.
• Distributions: uniformdist(), normaldist(), poissondist(), binomialdist(), tdist() (student)
• random(normaldist(...) , ... ) or normaldist(...).random(...) :
creates an array of random numbers with target distribution.
See example1 , example2.
Manually:
• A classical hash function is obtained by tacking least significant bits of a high frequency sin: e.g., R(i) = mod(10⁴.sin(10⁴i),1) , to be evaluated at discreet i values.
Examples: uniform dots , cloud covariance , lightening random walk , bush
more (with animation):
• You can make a 2D hash function the same way. An historical tuning giving good quality values is H(i,j) = mod( sin(i*12.9898 +j* 78.233) * 43758.5453, 1)
Enriching graphics
Thick or varying thickness curves
The curve tracing engine of DesmosGraph is very efficient. So you get thick curves by a twist: add a very high frequency oscillation ! e.g.: sin(x) + .1*sin(1000x)
cf more elaborate example:
Emulating two parameters parametrics
the idea is to take a large range for parametric parameter t, and to slice it in chunks using mod(t/N,1) as parameter 1, parameter 2 being the slice number given by floor(t/N). If you want a full grid rather than parallel curves, duplicate the plot and swap the floor() and mod().
The same principle extends to 3 parameters using floor(mod(t,N)), floor(t/N), mod(t/N,1).
Examples:
• See rest grid here:
• See m() in spiral galaxy:
• See isometric ellipsoid below
• This can also be used to draw a field of vectors or circles (another example). Note that thin details can sometime be missed by the display engine.
Filling / Painting
• Inequations let you color areas, with plain or dash border depending on inclusive vs strict inequality.
• You can avoid the border line by putting the inequation in the condition:
0 < 1 { sin(x)sin(y) < -0.5 }
( alas the resulting colors varying very non-linearly with the number of semi-transparent layers, and get quickly opaque, at a different rate depending on the base color ).
• You can get several lever of shade by superimposing layers, thanks to array parameters:
0 < 1 { sin(x)sin(y) < -1+2*[0,.3,...1] }
• New: custom colors and custom gradients now give more freedom.
extreme examples:
Ray-tracing sphere tests on color composition
Undocumented details
The manual is very short: many more can be done !
( Many additional info is scattered in the help center, if you take time to crawl it 🙂 ).
Variables and function names
• x,y, r,θ are the default parameterization in Cartesian and polar coordinates,
• π,τ,e are the special constants.
• Possible names: a0, A1, B_whatever, α (type "alpha"), β, φ, α1, α_whatever,
• More: since LaTeX inputs are valid, you can get any other Greek variable name, in caps or not, by copy pasting from outside (or from a text field in DesmosGraph): \gamma, \Gamma, \epsilon, etc. Once there the symbol can be copy-pasted between fields.
New: a plugin now allows all Greek letters.
• Note that you can display the value of a variable as the label of a point using special label ${v} Example here . Arrays / lists / tables Expressions and thus resulting variables and functions accept arrays: it’s like using vectors instead of scalar. It also allows to graph a full family of curves of areas instead of one, whatever the plotting method (Cartesian, polar, implicit, inequations…). e.g.: A = [1,...,10] → sin(A*x) , 2*A/10-1 , etc. More: here. sin(x)*sin(y) <= -1+2*[0,.3,...1] Complex examples: Array of threads , Fractal braid , Push hole in param Visualizing the content of an array: • Create a table and replace column y1 (or nexts ) by the array name • plot: ( [ 1...length(L)], L ) you can even label dots with their value, using label${L}
• use histograming & plotting tools ( see below )
Complex operations on arrays:
• L[N] where N is an array of integers
• sublist: L[3...] or L[3..5]
• reversed order : L[length(L)...1]
• L[ joint([1,...length(L)],[1]) ] to make it cyclical (e.g. for a closed contour )
• d(n) = L[ 1+ mod( [0...length(L)-1] +n,length(L) ) ] to roll it by n
• sublist obeying a criterion: L[ L>0 ] or L [ f(L)>0 ]
• → indexes of item v in L: [1...length(L)][L=v]
( indeed you can think of [L=v] or [cond(L)] already doing it, but not directly authorized as this syntax is ambiguous )
• sorted version of the list: sort(L) or L.sort
• shuffled : shuffle(L) or L.shuffle
Indeed, about anything can be a table: parameters, results, even some attributes (e.g. colors).
See use for geometry, defining complexes, or function selectors.
Variable length
• Array can be set as [1,...,N] , or [0,1/N...1]
• Slider can be set between 0 and expressions like a+pi/2
Functions
• All trigonometrics also have their inverse, comprising sinh(x), etc.
Moreover, you can also get them using sin^-1(x) (alas, this does not work for custom functions).
• log are available in any base with logn . Attention, log alone is log10, use ln for base e.
• min , max works with any number or parameters, as well as lists (like all the stats functions).
• nth root (e.g., cubic): nthroot
• The gamma function, generalization of the factorial, is just x!
• Piecewise function / expression: f(x) = {cond1: expr1, cond2: expr2, ... , default_expr}
• Derivatives: d/dx f(x) , f'(x) , f''(x) , d/dx d/dx f(x)
• Summation, product, integrals: sum, prod, int
• From statistical distributions: f = normaldist(0,1)
→ draw: g(x) = pdf(f,x) or g(x) = f.pdf(x) . Same with cdf.
Distributions: uniformdist(), normaldist(), poissondist(), binomialdist(), tdist() (student)
• random(N) creates an array of N uniform random numbers in range 0..1.
random(N,s) provides a seed s allowing producing different sequences.
• random(normaldist(...) , ... ) or normaldist(...).random(...) :
same for normal law (or any distribution).
→ See example.
Restrictions / Range
• Domain limited function / expression: f(x) = expr {cond}
• Range condition: 0 <= x < 1
• AND of conditions: {cond1}{cond2}
• For parametrics curve, the condition must be *in* the parenthesis:
( t, sin ( t ) { t > 0 } )
• Variable slider: a reminder to tune the bounds, and possibly the step ( 1 for integers).
• For points used as (bi)sliders, remember to set limits on the associated variable.
If you want the point to only move in one axis, or not at all, the settings menu let you restrain it (and do many other things).
• For parametric and polar curves, a reminder than you can tune the bounds as well. You can even abuse of it to generate 2D parameters using mod() and floor() :-).
Geometry & drawing:
• There are undocumented operators and features for geometry ! We explain these here.
• histogram(A [,step]) : plot the histogram of array A for values every steps.
dotplot(A [,step]) : same with points instead of bars
boxplot(A) : draw a box encoding mean, stddev, range.
|
2021-04-23 04:11:52
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5358392596244812, "perplexity": 4933.4168767638375}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618039601956.95/warc/CC-MAIN-20210423041014-20210423071014-00398.warc.gz"}
|
http://lambda-the-ultimate.org/archive/2017/07/4
|
## Is Datalog negation(¬) similar to the built-in predicate (≠)?
I was reading "Principles of Database & Knowledge-Base Systems, Vol. 1" by Jeffrey D. Ullman. There is a chapter about Datalog negation and as I was seeing the problems of negation I kept thinking that using the predicate ≠ would solve those problems.
E.g.
bachelor(X) :- male(X) & ¬married(X,Y).
would become:
bachelor(X) :- male(X) & married(Y,Z) & X ≠ Y.
but then I see the following:
p(X) :- r(X) & ¬q(X).
q(X) :- r(X) & ¬p(X).
The problem is this has 2 minimal models and if I'm not mistaken so does this:
p(X) :- r(X) & q(Y) & X ≠ Y.
q(X) :- r(X) & p(Y) & X ≠ Y.
Is there an equivalence between these 2 operators? If so, did I miss it or is it not mentioned that it's unsafe to use ≠ with recursion?
|
2018-08-15 13:05:20
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5996143221855164, "perplexity": 2939.0625003254563}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-34/segments/1534221210105.8/warc/CC-MAIN-20180815122304-20180815142304-00451.warc.gz"}
|
https://mirkomazzoleni.github.io/journal/2017/02/20/BSC_BCI/
|
v Classification algorithms analysis for brain-computer interface in drug craving therapy · Mirko Mazzoleni
# Classification algorithms analysis for brain-computer interface in drug craving therapy
### Abstract
This paper presents a novel therapy to recover patients from drug craving diseases, with the use of brain–computer interfaces (BCIs). The clinical protocol consists of trying to mentally repel drug-related images, and a Stroop test is used to evaluate the blue therapy effect. The method requires a BCI hardware package and a software program which communicates with the device. In order to improve the BCI detection rates, data were collected from five different healthy subjects during the training. These measurements are then used to design a better classification algorithm with respect to the default BCI classifier. The investigated algorithms are logistic regression, support vector machines, decision trees, k-nearest neighbors and Naive Bayes. Although the low number of participants is not enough to guarantee statistically significant results, the designed algorithms perform better than the default one, in terms of accuracy, F1-score and area under the curve (AUC). The Naive Bayes method has been chosen as the best classifier between the tested ones, giving a +12.21% performance boost as concerns the F1-score metric. The presented methodology can be extended to other types of craving problems, such as food, and alcohol. Results relative to the effectiveness of the proposed approach are reported on a set of patients with drug craving problems. [Paper, ScienceDirect, Code]
#### Reference
M. Mazzoleni, F. Previdi, S. Bonfiglio, "Classification algorithms analysis for brain-computer interface in drug craving therapy", Biomedical Signal Processing and Control, Volume 52, 2019, Pages 463-472, ISSN 1746-8094. doi: 10.1016/j.bspc.2017.01.011
#### Bibtex
@article{MAZZOLENI2017,
title = "Classification algorithms analysis for brain–computer interface in drug craving therapy",
journal = "Biomedical Signal Processing and Control",
volume = "52",
pages = "463 - 472",
year = "2019",
issn = "1746-8094",
doi = "https://doi.org/10.1016/j.bspc.2017.01.011",
author = "Mirko Mazzoleni and Fabio Previdi and Natale Salvatore Bonfiglio",
}
|
2022-10-05 19:39:44
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.370273232460022, "perplexity": 3548.444540307284}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337663.75/warc/CC-MAIN-20221005172112-20221005202112-00324.warc.gz"}
|
https://www.emerald.com/insight/content/doi/10.1108/JDAL-04-2017-0004/full/html
|
# Modeling median will-cost estimates for defense acquisition programs
Ryan Trudelle (Department of Mathematics and Statistics, Air Force Institute of Technology, Wright-Patterson AFB, Ohio, USA)
Edward D. White (Department of Mathematics and Statistics, Air Force Institute of Technology, Wright-Patterson AFB, Ohio, USA)
Dan Ritschel (Department of Systems Engineering and Management, Air Force Institute of Technology, Wright-Patterson AFB, Ohio, USA)
Clay Koschnick (Department of Systems Engineering and Management, Air Force Institute of Technology, Wright-Patterson AFB, Ohio, USA)
Brandon Lucas (Department of Systems Engineering and Management, Air Force Institute of Technology, Wright-Patterson AFB, Ohio, USA)
ISSN: 2399-6439
Article publication date: 3 July 2017
## Abstract
### Purpose
The introduction of “should cost” in 2011 required all Major Defense Acquisition Programs (MDAP) to create efficiencies and improvements to reduce a program’s “will-cost” estimate. Realistic “will-cost” estimates are a necessary condition for the “should cost” analysis to be effectively implemented. Owing to the inherent difficulties in establishing a program’s will-cost estimate, this paper aims to propose a new model to infuse realism into this estimate.
### Design/methodology/approach
Using historical data from 73 Departments of Defense programs as recorded in the selected acquisition reports (SARs), the analysis uses mixed stepwise regression to predict a program’s cost from Milestone B (MS B) to initial operational capability (IOC).
### Findings
The presented model explains 83 per cent of the variation in the program acquisition cost. Significant predictor variables include: projected duration (months from MS B to IOC); the amount of research development test and evaluation (RDT&E) funding spent at the start of MS B; whether the program is considered a fixed-wing aircraft; whether a program is considered an electronic system program; whether a program is considered ACAT I at MS B; and the program size relative to the total program’s projected acquisition costs at MS B.
### Originality/value
The model supports the “will-cost and should-cost” requirement levied in 2011 by providing an objective and defensible cost for what a program should actually cost based on what has been achieved in the past. A quality will-cost estimate provides a starting point for program managers to examine processes and find efficiencies that lead to reduced program costs.
## Citation
Trudelle, R., White, E.D., Ritschel, D., Koschnick, C. and Lucas, B. (2017), "Modeling median will-cost estimates for defense acquisition programs", Journal of Defense Analytics and Logistics, Vol. 1 No. 1, pp. 19-33. https://doi.org/10.1108/JDAL-04-2017-0004
## Publisher
:
Emerald Publishing Limited
Copyright © 2017, In accordance with section 105 of the US Copyright Act, this work has been produced by a US government employee and shall be considered a public domain work, as copyright protection is not available.
## Introduction
On June 15, 2011, the Under Secretary of Defense for Acquisition, Technology and Logistics [USD (AT&L)] directed the Military Departments and Directors of Defense Agencies via a memorandum to implement will-cost and should-cost management for Acquisition Category (ACAT) I, II and III programs. In this memorandum, the USD (AT&L) reiterates that the departments will continue to set program budget baselines using non-advocate will-cost estimates. A will-cost estimate uses traditional cost-estimating techniques (e.g. analogy, bottom-up, parametric, etc.) to estimate the most likely cost of a program to establish a reasonable budget baseline and acquisition program thresholds. However, the USD (AT&L) also “challenges program managers to drive productivity improvements into their programs during […] program execution by conducting Should-Cost Analysis”, which involves, “identifying and eliminating process inefficiencies and embracing cost savings opportunities” (Carter and Mueller, 2011). The should-cost estimate therefore deviates below the will-cost estimate to develop a realistic price objective for negotiation purposes and subsequent savings against the will-cost estimate.
Additionally, the USD (AT&L) states in the same memorandum that:
[…] the main problem with the will-cost estimate isn’t in the numbers or how it was reached; the problem is that once the will-cost estimate is derived and the budget for the program is set, historically, this figure becomes the “floor” from which costs escalate, rather than a “ceiling” below which costs are contained—in many ways creating a self-fulfilling prophecy of budgetary excess (Carter and Mueller, 2011).
We suggest that perhaps there is a better way to infuse realism into a will-cost estimate such that it becomes a middle-of-the-road estimate from which to work from in the should-cost approach rather than the floor.
Therein lies the crux of the problem – how does one go about generating a median “will-cost” estimate? Defense acquisition programs expand the frontiers of today’s technology to develop new and innovative systems that provide an asymmetric advantage on the battlefield. As a result, there are inherent uncertainties and risks associated with Department of Defense (DoD) acquisitions. These realities are manifested in the derivation of the program’s cost estimate. To combat risk and uncertainty, cost analysts account for the distribution of possible costs for a program having a right-skewed distribution by estimating to the mean. However, the mode or the most likely cost is less than the mean in skewed distributions. This difference might tie-up resources that may be better placed elsewhere. By contrast, building an overly aggressive cost estimate may free up resources to be placed elsewhere. However, if this estimate is exceeded, decision-makers could take critical funding from other programs or force a program manager to delay the program until additional funding can be secured.
To combat these issues, programs should strive for a realistic, middle ground point – essentially an empirically validated cost baseline. The use of historical data allows the acquisition community to unbiasedly analyze and estimate what a program would cost in relation to other similarly completed programs. This estimate then becomes a powerful tool from which the user can identify a target cost for a given program. This estimate also serves as a benchmark to identify whether a cost estimate is reasonable given what has occurred in the past. From this estimate, mitigation of risks associated with over- and under-estimating program costs may be achieved, resulting in a more efficient allocation of resources. Thus, we propose a new empirically based model for determining will-cost estimates in DoD acquisition programs.
## Past research and database creation
Our research builds an empirically derived model to predict median will-cost estimates for DoD acquisition programs. We use prior research to identify potential explanatory variables in our model and establish the basis for creating our data set. Our literature review spans the change in acquisition taxonomy from Milestone (MS) I, II and III to MS A, B and C. For this study, we consider MS I, II and III to be equivalent to MS A, B and C, respectively. This is consistent with prior literature findings that the naming convention has simply altered over time without tangible changes in definition or substance (Harmon, 2012; Jimenez et al., 2016). Prior to relaying our data collection process, we first discuss recent studies pertinent to our research. These studies provide the foundation for how we conduct our research into the relative program characteristics that predict program cost.
Jimenez et al. (2016) developed a schedule duration prediction model for defense acquisition programs using pre-MS B data; we leverage their research to identify explanatory variables for investigation and an initial data set from which to draw upon. Their analysis concluded that the following variables were significant in establishing an empirical benchmark for “should schedule” estimates: amount of Research Development Test and Evaluation (RDT&E dollars) at MS B start (in millions), the per cent of RDT&E funding at MS B start, whether a program is a modification and whether a program has a MS B start in 1985 or later. Although they explored and adopted significant variables to predict program schedule using pre-MS B data, they also considered a plethora of explanatory variables that were ultimately deemed statistically insignificant; we also consider these variables.
Brown et al. (2015) first identified the MS B start in 1985 or later as an explanatory variable. They demonstrate that programs with a MS B start date in 1985 or later have a statistically significant change in their expenditure profile. These programs tend to expend a greater percentage of their obligations by the program’s mid-point than the programs that start prior to 1985. Although not conclusive, Brown et al. (2015) hypothesized that the reason for this significant shift is owing to the President’s Blue Ribbon Commission on Defense (often referred to as the Packard Commission) and the acquisition reforms that occurred because of the recommendations of the commission.
Similar to Jimenez et al. (2016), Deitz et al. (2013) analyzed activities prior to MS B. They examined the importance of developing a robust analysis of alternatives prior to MS B and the effects that an analysis of this nature may have on program success. Their findings suggest that while only 10 per cent of a program’s life-cycle cost was invested prior to MS B, 70 per cent of a program’s lifecycle costs are committed by this milestone (Deitz et al., 2013). This suggests to us that pre-MS B data may be very important to predicting program cost. However, this also limits data collection because pre-MS B reporting is not mandatory for all acquisition programs, and therefore, the cost and schedule data are unavailable in some instances. Jimenez et al. (2016) also experienced such a limitation.
Looking slightly further back in the literature, we find other pertinent studies that present possible explanatory variables to consider. Foreman (2007) researched methods to improve cost and schedule growth estimates by including longitudinal variables that account for changes that take place over time. His research built upon the database initially created by Sipple et al. (2004) and subsequently modified by Lucas (2004) and Genest and White (2005). Sipple et al. (2004) found the most important predictive variables of cost growth to be MS C to initial operational capability (IOC) duration and an indicator variable for a MS C slip.
The aforementioned researchers have identified numerous variables for investigation on whether they will be predictive of program cost. The complete list is in given the Appendix. This list also gives us our data inclusion and exclusion criteria. The initial data inclusion criteria include any program in the DoD (i.e. all service branches) which has reported program data using the selected acquisition reports (SARs). Additionally, they must be unclassified and reported within the Major Defense Acquisition Program (MDAP) and pre-Major Defense Acquisition Program (pre-MDAP) section of the Defense Acquisition Management Information Retrieval (DAMIR) database.
For a program to be considered in our study, it must satisfy three criteria. The first requirement is that the program SAR must contain an MS A date or funding at least one year prior to MS B – we interpret the pre-MS B funding as indicating the year in which MS A may have occurred. This requirement is because of the pre-MS B data being found predictive in the literature review. Unfortunately, this requirement also results in a great deal of programs being ineligible for inclusion because of a lack of reporting requirements prior to MS B. This is not unexpected considering a program is not official until meeting MS B.
We are able to include an additional 15 programs in our data set by making the following assumption when there is no MS A date provided: if there is funding in the funding profile at least one year prior to MS B, then MS A occurred in January of the year in which funding was first received. We did test this assumption to ensure these additional programs are not statistically different from the others prior to inclusion in the final data set.
The second exclusion criteria is that the program SAR must contain an MS B date and corresponding funding information. This again pertains to the necessity of containing pre-MS B data as a means to build a highly predictive model. Without the MS B date and funding information, we are unable to ascertain the duration of MS A or the funding spent up to MS B. Additionally, we are unable to calculate the projected funding needed to reach IOC or the projected duration of MS B to IOC.
The third exclusion criteria is that the program SAR must contain an IOC date that occurred prior to the last reported SAR which indicates that the program is complete up to IOC. This is important to our research, as it gives us a termination point to estimate and ensures we are not using projected values as actual values in our model. IOC is a very important date in a program, as it signifies the point in time when the program achieves an available capability in its minimum usefully deployable form.
As previously discussed, our data set starts with the 56 programs in the database built by Jimenez et al. (2016). We augment this database by analyzing defense program SARs from the DAMIR system. The program SARs contain program funding, schedule, and performance information relative to our research. Using our stated inclusion criteria, we add 187 programs to the initial 56. Then using the exclusion criteria, we remove 170 programs for a net change of 17; this results in a final program count of 73. Table I demonstrates inclusion and exclusion criteria used in this research. Table II lists the final 73 programs.
The data that we use for our analysis include both actual and projected values from the SARs. We use the latest available program’s SAR to record the actual cost from MS B to IOC as the response variable in the model. To develop a useful predictive tool for the acquisition community, we must only use projected cost and schedule data at MS B, as these are the only data the user of our regression model will have at their disposal.
To implement this limitation, we retrieve projected cost and schedule data from the SAR corresponding to the year in which MS B occurred or, if that SAR is unavailable, the earliest available SAR. This allows us to use projected values to predict a program’s cost from MS B to IOC, the same as if we were in a program office attempting to estimate the cost of our program independent of this research.
## Methodology
To arrive at the presented model (explained in the next section), we use a mixed-direction stepwise approach to screen for the most predictive variables and then finalize the model using ordinary least squares (OLS). [Note: In statistics, stepwise is an analytical method of fitting regression models in which an automatic procedure chooses explanatory variables for addition or subtraction based upon a set criteria.] To eliminate the effects of inflation, we convert all funding variables to base year 2017 dollars (BY17) using the 2016 Office of the Secretary of Defense (OSD) inflation indices. For our regression model, the response variable is the natural log of the acquisition cost (defined as the RDT&E and Procurement costs) from MS B to IOC. We transform the response variable using a natural log function to mitigate against heteroskedasticity because of the large range of actual costs – without transforming the OLS residuals, we would have failed the assumption of constant variance at a level of significance of 0.05. To ascertain the actual cost estimate from the OLS model, we retransform the predicted output back to actual cost (in millions of BY 17 dollars) by calculating eOLS Output. This transformed model results in a median estimate of will-cost, as this back-transformation equates to the median in the original space (Carroll and Rupert, 1981; Tisdel, 2006).
We use JMP® Pro 12 for our statistical analyses and adopt an initial overall experiment-wise Type I error of 0.1 owing to the exploratory nature of this study. To be consistent with this level of significance, we use a p-value threshold of 0.1 as the entry and exit criteria for the mixed direction stepwise regression model. Once the initial variables are identified by the stepwise procedure, we then use OLS to finalize the regression model. We now lower the overall Type I to be 0.05, and we require each predictor variable to be statistically significant according to the Holm–Bonferroni method, which counteracts the problem of multiple comparisons (Holm, 1979). Prior to conducting the variable selection procedure, we randomly select 20 per cent, or 15, of the 73 programs and set these aside for utilization as a validation set. We use the remaining 58 programs for the stepwise and OLS regression analysis.
For our model to be considered viable, we must verify the standard OLS assumptions. To assess the assumptions of homoscedasticity and normality of model residuals, we conduct a Breusch–Pagan (B–P) and Shapiro–Wilk (S–W) test, respectively, at a level of significance of 0.05. To assess multicollinearity and possible influential data points, we examine the variance inflation factors (VIF) and evaluate Cook’s distance values, respectively.
After all the underlying model assumptions are assessed and passed, we test our resultant model against the validation pool (the 15 set aside programs) using descriptive and inferential measures. Regarding descriptive measures, we compute the absolute per cent error (APE) of the true cost between MS B and IOC and the predicted cost for each program. [Note: The true and predicted costs are evaluated in the natural log space.] Using these APE values, we then calculate the median and mean APEs (MdAPE and MAPE, respectively). We calculate these for both the validation and modeling programs and compare the values. We also investigate whether the untransformed predicted values truly reflect the median value or a baseline estimate for will-cost by investigating how the true program cost compare to the predicted program cost. After validating our selected model, we perform another mixed stepwise analysis using the entire data set of 73 programs to determine if we inadvertently left out a predictive variable.
## Analysis
Using mixed stepwise regression on the modeling set of 58 programs, we develop a preliminary model Table III highlights this model. The presented model has an R2, which represents the amount of variability in the data explained by the model, of 0.82. We calculate the APE values for this model which results in an MdAPE and MAPE of 0.050 (5.0 per cent) and 0.059 (5.9 per cent), respectively, for the model building set. For the validation set, we obtain an MdAPE and MAPE of 0.056 (5.6 per cent) and 0.079 (7.9 per cent), respectively. Although the validation set is slightly higher than the model building set, all of the absolute per cent errors are less than 10 per cent suggesting the model is performing well.
With respect to the inferential measures, Table III reveals that all VIF scores are below or close to 2, indicating little to no evidence for multicollinearity. The preliminary model also contains no Cook’s distance score above 0.50 (highest value is approximately 0.10). This suggests no overly influential data points affecting the p-values of our explanatory variables. Model residuals pass both assumptions of normality and homoscedasticity with p-values of 0.25 and 0.92 for the S-W and B-P tests, respectively. Lastly, all explanatory variables are individually significant at the comparison-wise error rate under the Holm–Bonferroni criteria (Holm, 1979).
With the model being deemed internally valid, we combine all the data together to update model parameter values using OLS and lowering the overall Type I error rate to 0.05. Table IV shows the updated model. The stepwise approach failed to detect any additional predictor variables (at the overall familywise error rate of 0.05 level of significance), and the resultant model described in the next section is our final model. The resultant model has an R2 of 0.83 with an MdAPE and MAPE of 0.057 (5.7 per cent) and 0.062 (6.2 per cent), respectively. This means that the presented model has a relative error of between 5.7 and 6.2 per cent of predicting the natural log of the program cost from MS B to IOC. After back-transforming to the original values of program cost from MS B to IOC, approximately 50.7 per cent of the 73 programs in our database had a true program cost exceeding the predicted cost while 49.3 per cent had less. Theoretically, this ratio should be 50 per cent by 50 per cent. The empirical percentages suggest our presented model is performing as expected.
To prevent model extrapolation, the ranges in which this model is useful for the two continuous variables must be consistent with the bounds of the programs used within our analysis. For projected duration from MS B to IOC the lower bound is 28 months while the upper bound is 129 months. For RDT&E funding (dollar million) at MS B Start (BY17), the lower bound is $4.43m, while the upper bound is$5,979.4m. Using this model outside of these ranges is inappropriate.
All of the statistically significant predictor variables are available to the cost estimator at the time the estimate is calculated (which is intended to be post-MS B). There is a limitation in the model in the sense that a “prior” cost estimate is required before engaging the presented model to help fine-tune the program cost estimate. However, we feel this limitation is minor given that the three binary variables that are cost-related, i.e. ACAT I, large and extra large, should be relatively certain as a program approached MS B:
• (Projected) MS B to IOC Duration – continuous variable: The parameter estimate of this variable is 0.0108 which is multiplied by the number of months the program estimates to spend from MS B to IOC. This duration does not necessarily correlate to the level of technology or technological maturity being used, but, rather, indicates the cost of time in DoD acquisition.
• RDT&E funding (dollar million) at MS B start (BY17) – continuous variable: The parameter estimate associated with this variable is 0.00026, which is multiplied by the actual, non-transformed RDT&E funding spent prior to program entrance into MS B. As the amount of funding spent at this point is additive to total program cost, we suggest that the amount of funding spent prior to MS B is indicative of the projected size and scope of the entire program. This variable could indicate a greater investment in newer technology prior to MS B, which typically results in higher costs over the entire program life owing to integrating and further maturing this technology.
• Fixed Wing – binary variable: The parameter estimate associated with this variable is 0.561 and is multiplied by one for every aircraft (excluding helicopters) program estimate conducted. The positive parameter estimate indicates that aircraft programs sans helicopters appear to be more expensive in general in contrast to other DoD platform programs. We hypothesize this effect as an artifact of complexity associated with stealth, avionic, and engine capabilities of today’s modern aircraft, regardless of branch of service.
• Electronic system program – binary variable: The parameter estimate associated with this variable is −0.635 and is multiplied by one for any program that is considered an electronic system program. The negative parameter estimate indicates these programs are statistically significantly cheaper to acquire than the other program types. Bolten et al. (2008) also concluded that electronic systems are historically cheaper.
• ACAT I – binary variable: The parameter estimate for this variable is 1.151 and is multiplied by a value of one for any program considered to meet ACAT I funding estimate requirements at the start of MS B. This variable being additive to program cost is logical owing to the nature of ACAT I programs and the dollar costs associated with these DoD acquisitions.
• Large program – binary variable: The parameter estimate for this variable is 0.758 and is multiplied by a one if the program being estimated projects to have a total program acquisition cost greater than $7bn (BY17) (RDT&E and Procurement) from MS A to program conclusion but less than or equal to$17.5bn (BY17). This value is estimated at MS B and was calculated using the 50 per cent interquartile from a histogram analyzing total projected program acquisition cost. The additive nature of this variable adjusts for large DoD acquisition programs.
• Extra-large program – binary variable: The parameter estimate for this variable is 1.461 and is multiplied by a value of one if the program acquisition cost from MS A to IOC is projected to be greater than $17.5bn (BY17). This value is estimated at MS B and was calculated using the 75 per cent interquartile from a histogram analyzing total projected program acquisition cost. The additive nature of this variable adjusts for the largest DoD acquisition programs, such as the F-35 and F-22. As an example of the model in action, suppose a program at MS B possessed the following characteristics: ACAT I, Fixed Wing,$550m (BY17) of RDT&E funding at MS B start, and (Projected) MS B to IOC duration of 5 years (60 months). All of these values are within the observational window allowed by the model. Plugging those values into the model presented in Table IV and then backtransforming (via the natural exponent), results in a median will-cost estimate of approximately $2.8bn (BY17) for MS B to IOC program acquisition costs. This value now serves as a benchmark to crosscheck should-cost estimates. ## Discussion and conclusion Table V presents the relative percentage contribution of each variable included in the final model. The smallest relative contribution is 9.9 per cent for fixed-wing aircraft, while the largest relative contribution is 26.3 per cent for extra-large programs. Besides these variables, there is low variation between the remaining predictor variables in the presented model. This suggests that the explanatory variables are relatively similar with respect to affecting the true program RDT&E and procurement costs. Any statistical model has limitations. Principally, this model is based on data collected from SAR that sometimes contain incomplete information. Ultimately, the model is only as good as the data itself. The availability of pre-MS B data was a large constraint on the data building process and limited which programs could be included. Additionally, the search parameters used in DAMIR may have inadvertently removed useful programs from our study which might have influenced any number of other variables to be significant. One significant limitation of the model is the high level of variability in the definition of IOC. Our model uses IOC as a termination point owing to the importance of this milestone in a program as well as the availability of the date. In the programs considered, the number of units considered for attaining IOC varies greatly. Achieving IOC is determined individually for each unique program based on an initial cadre of operators, maintainers and support equipment that can use and sustain the system in an operational environment. For example, satellite, submarine or ship programs may have IOC based on one single unit. Although in the case of missile programs, IOC could be in the hundreds of units. This drives a level of known variability within our model that could be better accounted for by using a more structured and universal definition for IOC; this could be a topic for future research. Accurately predicting program cost is both an art and a science. Achieving accurate estimates during the early stages of a program’s lifecycle is an unenviable task, and one can be certain that the estimate will be wrong. However, deriving an estimate that is close to the final actual cost is crucial to improving the allocation of scarce resources. What our model provides is the empirical portion of the estimating process to ascertain the will-cost for a program. We provide this tool to the DoD acquisition community primarily as a method to check the assumptions and realism of their program office estimate. Being able to build a program cost estimate and turn to our statistically built and tested model for validation will be invaluable for the community because it will allow for an injection of increased realism into the cost estimating process. Realism in the will-cost median estimate is crucial to the success of should-cost analysis. Drawing a difference between our research and prior research, the most notable difference is the model output. Our research and model focuses on building an empirically based estimate for program cost between MS B and IOC to serve as a realistic benchmark (the median value) for what programs will-cost. Program managers can then adopt “should cost” efficiencies to reduce cost further. We believe that modeling an output that will serve as an actual point estimate is valuable as a crosscheck tool for the user community. It gives the user a benchmark based on historical data against which the program measures its progress. The model also supports the “will-cost and should-cost” requirement levied in 2011 by providing an objective and defensible cost for what a program should actually cost based on what has been achieved in the past. Ultimately, a quality will-cost estimate provides a starting point for program managers to examine processes and find efficiencies that lead to reduced program costs. ## Table I. Program inclusion/exclusion criteria and counts Inclusion/exclusion criteria Programs added Program removed Program count Jimenez’s starting database 56 56 DAMIR query and addition 187 243 Double count adjustment 29 214 IOC occurs after last SAR 61 153 Missing MS A or B 74 79 Missing IOC 4 75 Classified 2 73 Total remaining 73 ## Table II. List of programs used in the research database No. Program 1 A-10 2 AWACS 3 C-17 4 F-22 5 AH-64 6 B-1B Computer upgrade 7 C-5 RERP 8 F-15 9 B-1B JDAM 10 KC-135R 11 FA-18 A/B 12 AV-8B Harrier 13 S-3A 14 P-8 Poseidon 15 V-22 Osprey 16 E-2C Hawkeye 17 F-35 JSF 18 CH-47D Chinook 19 E-8A JSTARS 20 AGM-65A missile 21 ALCM missile 22 AMRAAM missile 23 JASSM missile 24 JDAM 25 JPATS T-6A 26 OTH-B 27 LGM-118 peacekeeper 28 GBU-39 SDB-I 29 National aerospace system 30 AGM-88 HARM 31 AIM-9X Block 1 32 AN/BSY-1 33 Cobra Judy Replacement 34 Harpoon missile 35 NMT 36 SH-60B 37 UGM-96A Trident I missile 38 SSN 774 (Virginia class sub) 39 T-45TS 40 UGM-109 Tomahawk 41 SSBN 726 SUB 42 AGM-114A Hellfire missile 43 OH-58D Helicopter 44 AAWS-M Javelin 45 SSN 21 sub 46 AWACS Blk 40-50 upgrade 47 B-2 EHF Inc 1 48 C-5 AMP 49 MQ-9 Reaper 50 AH-64E remanufacture 51 ATACMS-APAM 52 CH-47F 53 CSSCS (ATCCS) 54 Longbow Apache (AH-64D) 55 UH-60M Blackhawk 56 AESA 57 AGM-88E AARGM 58 CEC 59 E-2D AHE 60 JSOW 61 LCS 62 LHD-1 63 MH-60R 64 MH-60S 65 Strategic sealift 66 Trident II 67 EA-6B ICAP III 68 JSIPS (CIGS) 69 NAS 70 AFATDS (ATCCS) 71 AEHF 72 EELV 73 WGS ## Table III. Preliminary OLS model Predictor variable Estimate P-value Standardized estimate Variance inflation factor Intercept 5.731 < 0.0001 N/A N/A Projected MS B to IOC (months) 0.0114 0.0033 0.199 1.13 RDT&E$ at MS B Start 0.00029 0.0003 0.297 1.56
Fixed wing 0.620 0.0037 0.199 1.17
Electronic system program −0.732 0.0142 −0.216 1.96
ACAT I 0.837 0.0346 0.160 1.47
Large program 0.747 0.0018 0.251 1.58
Extra-large program 1.205 < 0.0001 0.397 2.16
## Table IV.
Final ordinary least squares model
Predictor variable Estimate p-value Standardized estimate Variance inflation factor
Intercept 5.449 < 0.0001 N/A N/A
Projected MS B to IOC (months) 0.0108 0.0021 0.170 1.09
RDT&E $at MS B Start 0.00026 0.0007 0.220 1.50 Fixed wing 0.561 0.0039 0.165 1.19 Electronic system program −0.635 0.0061 −0.191 1.77 ACAT I 1.151 < 0.0001 0.251 1.38 Large program 0.758 0.0004 0.232 1.51 Extra-large program 1.461 < 0.0001 0.439 2.12 ## Table V. Predictor variables and their relative contribution to the model Explanatory variable Relative contribution (%) Projected MS B to IOC (months) 10.2 RDT&E$ at MS B Start 13.2
Fixed wing 9.9
Electronic system program 11.5
ACAT I 15.0
Large program 13.9
Extra-large program 26.3
## Appendix. Predictor variables investigated in this paper
MS A to MS B duration (months) – continuous variable
• According to the last SAR date, this variable indicates the total time it took in months for a program to complete from Milestone (MS) A to MS B. In this variable, we are only concerned with actual schedule duration data available to the cost estimator at the time of MS B/EMD start.
Quantity expected at MS B – continuous variable
• This variable indicates the estimate of total quantity of weapons systems that were expected to be produced at MS B at the time of the last SAR date.
RDT&E funding (dollar million) at MS B start (BY17) – continuous variable
• This variable is based on raw total RDT&E dollars (in millions) that were allocated to the program prior to MS B. The dollars were all standardized into the base year when the research began (BY17).
(Projected) per cent of RDT&E funding at MS B start (BY17) – continuous variable
• This variable is based on the percentage of available RDT&E dollars allocated to the program before, and up to the start of, MS B. While this variable is based on a percentage, the dollars that this percentage was derived from were all standardized into the base year when the research began (BY17).
(Projected) Total program acquisition cost (BY17) – continuous variable
• This variable is the total projected acquisition costs, from MS B to IOC, estimated at MS B or the earliest available program SAR. It serves to identify how large a program is projected to be in terms of cost.
Modification – binary variable
• This variable is identifies programs whose existence serves as a modification to a pre-existing weapons system. If a weapons system is a modification, it does not necessarily mean it will not have pre-MS B data associated with it. Every program is different and, therefore, it cannot be assumed that a modification will automatically start at MS B.
Prototype – binary variable
• This variable identifies programs that create a prototype, or prototypes, of a weapons system before production of that weapons system begins. More than one type of prototype for a weapons system can be created in a given program.
Concurrency planned – binary variable
• This variable addresses planned concurrency in a given program prior to MS B. Concurrency is the proportion of RDT&E dollars that are authorized during the same years that Procurement appropriations are authorized. The planned level of concurrency forces managers to make decisions that can lead to [schedule] growth if either too much or too little concurrency is accepted for a given program (Birchler et al., 2011, p. 246).
1985 or later for MS B start – binary variable
• This variable accounts for a time series trend of programs that started their MS B in 1985 or later. It is shown that programs which began development during 1985 or later (considered “contemporary”) expend a greater percentage of obligations by their schedule midpoint than the earlier pre-1985 programs. We attribute this difference to the President’s Blue Ribbon Commission on Defense (commonly called the Packard Commission) and the subsequent acquisition reforms.
Air Force – binary variable
• This variable identifies if the lead service on the program was the US. Air Force.
Navy – binary variable
• This variable identifies if the lead service on the program was the US. Navy.
Army – binary variable
• This variable identifies if the lead service on the program was the US. Army.
Marine Corps – binary variable
• This variable identifies if the lead service on the program was the US. Marine Corps.
Fixed wing – binary variable
• This variable identifies if the weapons system program is a fixed-wing aircraft program, regardless of service it is associated with. The criterion to qualify as a fixed-wing aircraft is for that weapons system to maintain flight via fixed wings versus rotary wing flight.
Fighter program – binary variable
• This variable identifies if the weapons system program is a fighter program, or close variation thereof, regardless of service it is associated with.
Bomber program – binary variable
• This variable identifies if the weapons system program is a bomber program, or close variation thereof, regardless of service it is associated with.
Helo program – binary variable
• This variable identifies if the weapons system program is a helicopter program, or close variation thereof, regardless of service it is associated with.
Cargo plane program – binary variable
• This variable identifies if the weapons system program is a cargo plane program, or close variation thereof, regardless of service it is associated with.
Tanker program – binary variable
• This variable identifies if the weapons system program is a tanker plane program, or close variation thereof, regardless of service it is associated with.
Electronic warfare program – binary variable
• This variable identifies if the weapons system program is an electronic warfare program, or close variation thereof, regardless of service it is associated with. An electronic warfare program, as not to be confused with an electronic system program, differs greatly in its main function(s). A description from Lockheed Martin makes the distinction that it involves the ability to use the electromagnetic spectrum signals such as radio, infrared or radar to sense, protect and communicate. At the same time, it can be used to deny adversaries the ability to either disrupt or use these signals (electronic warfare).
Trainer plane program – binary variable
• This variable identifies if the weapons system program is a trainer plane program, or close variation thereof, regardless of service it is associated with.
Missile program – binary variable
• This variable identifies if the weapons system program is a missile program, or close variation thereof, regardless of service it is associated with.
Electronic system program – binary variable
• This variable identifies if the weapons system program is an electronic system program, or close variation thereof, regardless of service it is associated with. This differs greatly from the previously described electronic warfare variable in that electronic systems programs are principally concerned with the electronic user interface of a system, avionics controls or other similar applications that primarily support the electronic usability of a system or system of systems.
Submarine program – binary variable
• This variable identifies if the weapons system program is a submarine program, or close variation thereof, regardless of service it is associated with.
Ship program – binary variable
• This variable identifies if the weapons system program is a surface ship program, or close variation thereof, regardless of service it is associated with.
Satellite program – binary variable
• This variable identifies if the weapons system program is a satellite program, or close variation thereof, regardless of service it is associated with.
ACAT I – binary variable
• This variable indicates if the program is an ACAT I program. This is significant, in that the ACAT I programs deal with a much larger dollar amount and are thus more susceptible to cost and schedule growth because of their large-scale nature and complexities.
(Projected) MS C to IOC duration (months) – continuous variable
• According to the earliest available SAR estimate, this variable indicates the total estimated time for a program to meet IOC from MS C. This variable has been found to be predictive of cost growth in the programs studied by Foreman (2007). With this variable, we are concerned with giving the cost estimator the ability to enter in the projected duration, in months, of the gap between MS C and IOC to predict program cost.
(Projected) MS C slip – binary variable
• This variable indicates whether the program projected date for meeting IOC extends past the initial estimate. Foreman (2007) has found that a slip in MS C is indicative of program cost growth in past research.
No MS A date – binary variable
• This variable identifies whether a program did not contain a MS A date in the schedule portion of the SAR, but it did include funding at least one year prior to MS B. This is used to identify these programs and test that they are not statistically different from the other programs and is not used in a predictive capacity.
Small program – binary variable
• This variable identifies whether a program’s projected total acquisition costs (RDT&E and procurement) are below $3bn. This value is determined from analyzing the histogram of the (projected) total program acquisition costs of the programs in our study and coincides closely with the 25 per cent value. Medium program – binary variable • This variable identifies whether a program’s projected total acquisition costs (RDT&E and procurement) are above$3bn but below $7bn. This value is determined from analyzing the histogram of the (projected) total program acquisition costs of the programs in our study and coincides closely with the 25-50 per cent range. Large program – binary variable • This variable identifies whether a program’s projected total acquisition costs (RDT&E and procurement) are above$7bn but below $17.5bn. This value is determined from analyzing the histogram of the (projected) total program acquisition costs of the programs in our study and coincides closely with the 50-75 per cent range. Extra-large program – binary variable • This variable identifies whether a program’s projected total acquisition costs (RDT&E and procurement) are above$17.5bn. This value is determined from analyzing the histogram of the (projected) total program acquisition costs of the programs in our study and coincides with the 75 per cent value.
(Projected) per cent complete at MS B start – continuous variable
• This variable is motivated by the per cent of the RDT&E variable and serves to project the per cent that a program is complete, to IOC, when MS B occurs. It is calculated by dividing the projected duration from MS B to IOC by the sum of duration from MS A to IOC and projected duration from MS B to IOC. This serves to indicate where the program managers believe the program is in terms of schedule completeness. It could indicate program maturity level.
## References
Birchler, D., Christle, G. and Groo, E. (2011), “Cost implications of design/build concurrency”, Defense Acquisition Research Journal, Vol. 18 No. 3, pp. 237-246.
Bolten, J.G., Leonard, R.S., Arena, M.V., Younossi, O. and Sollinger, J.M. (2008), Sources of Weapon System Cost Growth: Analysis of 35 Major Defense Acquisition Programs (MG 670), RAND Corporation, Santa Monica, CA.
Brown, G.E., White, E.D., Ritschel, J.D. and Seibel, M.J. (2015), “Time phasing aircraft R&D using the weibull and beta distributions”, Journal of Cost Analysis and Parametrics, Vol. 8 No. 3, pp. 150-164, available at: http://doi.org/10.1080/1941658X.2015.1096219
Carroll, R.J. and Rupert, D. (1981), “On prediction and the power transformation family”, Biometrika, Vol. 68 No. 3, pp. 609-615,
Carter, A.B. and Mueller, J. (2011), “Should cost management: why? How?”, Defense At & L, Vol. 40 No. 5, pp. 14-18,
Deitz, D., Eveleigh, T.J., Holzer, T.H. and Sarkani, S. (2013), “Improving program success through systems engineering tools in the pre-milestone B acquisition phase”, Defense Acquisition Research Journal, Vol. 20 No. 3, pp. 283-308.
Foreman, J.D. (2007), “Predicting the effect of longitudinal variables on cost and schedule performance”, Master’s thesis, Defense Technical Information Center (DTIC) (ADA463520), Fort Belvoir, VA, available at: www.dtic.mil/get-tr-doc/pdf?AD=ADA463520
Genest, D. and White, E. (2005), “Predicting RDT&E cost growth”, The Journal of Cost Analysis & Management, Vol. 7 No. 1, pp. 1-12.
Harmon, B.R. (2012), “The limits of competition in defense acquisition”, paper presented at the Defense Acquisition University Research Symposium, Fort Belvoir, VA,
Holm, S. (1979), “A simple sequentially rejective multiple test procedure”, Scandinavian Journal of Statistics, Vol. 6 No. 2, pp. 65-70, available at: www.ime.usp.br/∼abe/lista/pdf4R8xPVzCnX.pdf
Jimenez, C.A., White, E.D., Brown, G.E., Ritschel, J.D., Lucas, B.M. and Seibel, M.J. (2016), “Using pre-milestone b data to predict schedule duration for defense acquisition programs”, Journal of Cost Analysis and Parametrics, Vol. 9 No. 2, pp. 112-126, available at: http://doi.org/10.1080/1941658X.2016.1201024
Lucas, B.M. (2004), “Creating cost growth models for the engineering and manufacturing development phase of acquisition using logistic and multiple regression”, Master’s thesis, Defense Technical Information Center (DTIC) (ADA422915), Fort Belvoir, VA, available at: www.dtic.mil/get-tr-doc/pdf?AD=ADA422915
Sipple, V., White, E. and Greiner, M. (2004), “Using logistic and multiple regression to estimate engineering cost risk”, The Journal of Cost Analysis & Management, Vol. 6 No. 1, pp. 67-79.
Tisdel, J.E. (2006), “Small sample confidence intervals in log space back-transformed from normal space”, Master’s thesis, Defense Technical Information Center (DTIC) (ADA450276), Fort Belvoir, VA, Air Force Institute of Technology, available at: www.dtic.mil/get-tr-doc/pdf?AD=ADA450276
## Corresponding author
Edward D. White can be contacted at: stat.associates@gmail.com
|
2021-05-14 03:18:30
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.32829201221466064, "perplexity": 2699.698857746093}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243991737.39/warc/CC-MAIN-20210514025740-20210514055740-00563.warc.gz"}
|
https://math.stackexchange.com/questions/3168274/is-there-an-intuitive-reason-as-to-why-the-harmonic-series-is-divergent
|
# Is there an intuitive reason as to why the harmonic series is divergent? [duplicate]
The proof involving partial sums up to the nth term, where n is some power of $$2$$, completely makes sense. But just looking at the series itself, it seems very strange that it's divergent.
For large values of $$n$$, $$a_n$$ would start being extremely small and having an indistinguishable effect on the overall sum. All the sixth sense I've gained from working with limits makes it seem really strange that this would be considered divergent.
Surely there is a number (not even that difficult to find) such that we don't have enough computational power to calculate it's difference with the next terms (seeing as we'd be calculating differences based on hundreds of decimal places).
If you've any intuition on this I'd very much love to hear it!
Edit: I'm not asking for the proof of why it's divergent, I'm asking for peoples' personal ways of thinking and making sense of this intuitively. The post suggested to have been duplicated presents formal proofs; that's not what I'm looking for :)
## marked as duplicate by Wojowu, user21820, José Carlos Santos calculus StackExchange.ready(function() { if (StackExchange.options.isMobile) return; $('.dupe-hammer-message-hover:not(.hover-bound)').each(function() { var$hover = $(this).addClass('hover-bound'),$msg = $hover.siblings('.dupe-hammer-message');$hover.hover( function() { $hover.showInfoMessage('', { messageElement:$msg.clone().show(), transient: false, position: { my: 'bottom left', at: 'top center', offsetTop: -7 }, dismissable: false, relativeToBody: true }); }, function() { StackExchange.helpers.removeMessages(); } ); }); }); Mar 30 at 15:56
• Intitutive reason: youtube.com/watch?v=aKl7Gwh297c – Sujit Bhattacharyya Mar 30 at 13:15
• There are a very large number of very small values. The question is which very effect dominates when adding these up (in a sense you are multiplying very large by very small), and it seems intuitively plausible that in some cases very large number can be more important than very small value but not in other cases – Henry Mar 30 at 13:16
• In Analysis, our professor told us for intuitively understanding those things, we have to think in the way "that series are approximately integrals : $\sum \approx \int$". So we see $\int_1^\infty \frac{\mathrm{d}x}{x} = \ln(x)\vert_1^\infty = \infty$. On the other hand, we know that the series $\sum_k \frac{1}{k^2}$ converges. Considering an integral, we have $\int_1^\infty \frac{\mathrm{d}x}{x^2} = - \frac{1}{x}\vert_1^\infty = 1 < \infty$. For me, this idea is good because I understand integrals better than series. – Jan Mar 30 at 13:22
• Consider the following intuition, Going into hyperreals, note that there exists $\epsilon$ infinitesimal that is greater than any $1/n$. Then harmonic series 'sum to' $\omega \cdot \epsilon =1$, where $\omega := 1/\epsilon$ , 'approximately' is the number of terms. For other $p$-series, where $p\gt 1$, they 'sum to' $\omega \cdot 1/\epsilon ^p = \epsilon^{1-p}$ which is still infinitesimal. Hence making a difference. – L KM Mar 30 at 13:28
• I think the connection to the logarithm brought up by @Jan is important. For high values of n, the partial sums are approximately log(n) + γ. And log(n) is proportional to the number of digits of n, ±1. So your question is like, "when n is a very large number, adding 1 almost certainly has no effect on the number of digits of n. Doesn't that mean that there's a maximum number of digits?" And the answer is no, because you can increase the number of digits by multiplying by 10--just like how the proof you mentioned does *2. – user54038 Mar 30 at 13:33
From Real Infinite Series by Bonar:
We know that, for $$x>-1$$, $$x \geq \ln(1+x)$$ Now $$\sum_1^n \frac{1}{k}\geq \sum_1^n \ln\left(1+\frac{1}{k}\right)=\ln(n+1) \longrightarrow \infty$$ as $$n \to \infty$$ and hence the divergence of the harmonic series follows
We can interpret this argument in a much more strikingly visual way as follows:
Consider the following graph of the function $$g(x) = \sin(\pi e^x)$$, shown below, We consider $$g$$ as a function only of positive reals, We know that this function is defined for arbitrarily large $$x$$. We also know that $$\sin x$$ is zero at integer multiples of $$\pi$$, so that $$g$$ has zeros whenever $$e^x$$ is integer-valued, which happens of course for $$x$$ of the form $$\log n$$. The distance between consecutive zeros is of the form $$\log(k + 1) — \log k$$, which by the argument above is a lower bound to $$1/k$$. This is the motivation for the choice of the function $$g$$—the oscillations make visible the segments between zeros, and the lengths of these segments estimate the terms of the harmonic series. If the harmonic series were to converge to some number $$N$$, then the length sum of all the segments between zeros of $$g$$, since they are smaller, would also be bounded above by $$N$$. Then $$g$$ could have no further zeros right of the vertical line $$x = N$$, but we know this does not happen. Again we emphasize that this contains no mathematical content not present in the argument above, only a new way to make it tangible.
Added: Also the author of the above mentioned book gives $$11$$ proofs of " $$\sum \frac{1}{n}$$ is divergent". So refer this book for more details!
Variation of the proof cited by the OP: let be $$S_n$$ the $$n$$-th partial sum: $$S_{2n} - S_n = \sum_{k=n+1}^{2n}\frac{1}k\ge(2n - n)\frac{1}{2n} = \frac{1}2.$$ (lower bound: number of terms times smallest term)
|
2019-06-17 04:50:30
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 29, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8065443634986877, "perplexity": 356.3171881315307}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560627998376.42/warc/CC-MAIN-20190617043021-20190617065021-00419.warc.gz"}
|
https://www.gradesaver.com/textbooks/science/chemistry/chemistry-atoms-first-zumdahl/chapter-3-exercises-page-150b/47
|
## Chemistry: Atoms First (2nd Edition)
a. The compound would be called lithium nitride, and its empirical formula would be Li$_{3}$N. b. The compound would be called gallium oxide, and its empirical formula would be Ga$_{2}$O$_{3}$. c. The compound would be called rubidium chloride, and its empirical formula would be RbCl. d. The compound would be called barium sulfide, and its empirical formula would be BaS.
a. Li (lithium) is an alkali metal and has an oxidation number of 1+ whereas N (nitrogen) is in group 15 and has an oxidation number of 3-. To have a neutral compound, the positive and negative charges must balance one another. We would need three ions of lithium for every nitrogen ion. The compound would be called lithium nitride, and its empirical formula would be Li$_{3}$N. b. Ga (gallium) is in group 13 and has an oxidation number of 3+ whereas O (oxygen) is in group 16 and has an oxidation number of 2-. To have a neutral compound, the positive and negative charges must balance one another. We would need two ions of gallium for every three oxygen ions. The compound would be called gallium oxide, and its empirical formula would be Ga$_{2}$O$_{3}$. c. Rb (rubidium) is an alkali metal and has an oxidation number of 1+ whereas Cl (chlorine) is a halogen and has an oxidation number of 1-. To have a neutral compound, the positive and negative charges must balance one another, and that's what we have. The compound would be called rubidium chloride, and its empirical formula would be RbCl. d. Ba (barium) is an alkaline earth metal and has an oxidation number of 2+ whereas S (sulfur) is in group 16 and has an oxidation number of 2-. To have a neutral compound, the positive and negative charges must balance one another, and they do in this compound. The compound would be called barium sulfide, and its empirical formula would be BaS because in an empirical formula, we find the smallest whole number ratio of atoms for elements in that compound instead of the actual number of atoms in each formula unit of that compound.
|
2022-05-22 20:46:10
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5126357078552246, "perplexity": 1718.0735213288408}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662546071.13/warc/CC-MAIN-20220522190453-20220522220453-00150.warc.gz"}
|
https://tex.stackexchange.com/questions/347937/how-to-use-only-diamondplus-and-diamonddot-from-mnsymbol
|
# How to use only \diamondplus and \diamonddot from MnSymbol? [duplicate]
Can you please post the specific code for using \diamondplus and \diamonddot from MnSymbol without loading this package?
The answer to the question Importing single symbol from MnSymbol gives a general procedure. To implement this procedure for a specific symbol different from the symbol used there requires understanding of fntguide document. For those who do not understand fntguide, help asked here is essential. Also, more specific examples can help others gain better understanding of the general method.
## marked as duplicate by Werner, Stefan Pinnow, user13907, Martin Schröder, gernotJan 10 '17 at 9:20
I don't known whether or not this makes sense, but you may copy the relevant code from MnSymbol.sty (and minimize it somehow), e.g.:
\documentclass{article}
\DeclareFontFamily{U}{MnSymbolC}{}
\DeclareSymbolFont{MnSyC}{U}{MnSymbolC}{m}{n}
\DeclareMathSymbol{\diamondplus}{\mathbin}{MnSyC}{"7C}
\DeclareMathSymbol{\diamonddot}{\mathbin}{MnSyC}{"7E}
\DeclareFontShape{U}{MnSymbolC}{m}{n}{
<-6> MnSymbolC5
<6-7> MnSymbolC6
<7-8> MnSymbolC7
<8-9> MnSymbolC8
<9-10> MnSymbolC9
<10-12> MnSymbolC10
<12-> MnSymbolC12}{}
\begin{document}
$\diamondplus \diamonddot$
\end{document}
• Thank you very much! This is exactly what I need and it works perfectly. – wdacda Jan 10 '17 at 9:55
|
2019-10-21 06:02:06
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7301195859909058, "perplexity": 3277.3493364334745}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570987756350.80/warc/CC-MAIN-20191021043233-20191021070733-00141.warc.gz"}
|
https://stats.stackexchange.com/questions/102925/regression-model-forcing-a-coefficient-effect-to-be-1
|
# Regression Model - forcing a coefficient/effect to be 1
Currently we have a linear model which includes 3 independent variables and the dependent variable "Y" which is the predicted values (range from -0.5 to 2000) from the model. Now we want to leverage the same model by just removing one of the predictor (lets call that as X1).
One of my colleague suggested to run a model by have a new dependent as (Y-X1) against the two predictors. Is this a correct approach or is there any better way like "offset" technique for continuous dependent variable?
• Setting the new dependent variable to $Y-X_1$ is just equivalent to having a regression for $Y$ and using all three predictors but forcing the coefficient/effect for $X_1$ to be $1$. Further, perhaps it would be useful to know why you want to remove this predictor? – user44764 Jun 11 '14 at 0:42
• Yes i would like to make the effect of X1 to be 1. X1 is used for one type of product (New Business) and we want to ignore it for renewals which we try to make the effect of X1=1 for the renewals. – Gopi Jun 11 '14 at 0:49
• Maybe this is a job for moderator/dummy variables? stats.stackexchange.com/questions/102902/… – pedrofigueira Jun 11 '14 at 0:54
• You can certainly use an offset if your software supports them (indeed in the case of some GLMs, that may be the only easy way to do it correctly). See the discussion of offset vs subtracting $x_1$ from $y$ here, for example – Glen_b -Reinstate Monica Jun 11 '14 at 2:17
|
2020-03-31 03:08:54
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.48576197028160095, "perplexity": 555.6766966954023}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585370499280.44/warc/CC-MAIN-20200331003537-20200331033537-00274.warc.gz"}
|
https://homework.cpm.org/category/CON_FOUND/textbook/a2c/chapter/11/lesson/11.3.2/problem/11-143
|
### Home > A2C > Chapter 11 > Lesson 11.3.2 > Problem11-143
11-143.
Write a quadratic equation with roots $x = 3 ± 5i$.
If the roots are $x = ± 5i$, then the factors must be $\left(x − \left(3 + 5i\right)\right)$ and $\left(x − \left(3 − 5i\right)\right)$.
Use the factors to write an equation.
$\left(x − 3 + 5i\right)\left(x − 3 − 5i\right) = 0$
Multiply.
$\left(x − 3\right)^{2} − 25i ^{2} = 0$
$x^{2} − 6x + 34 = 0$
|
2022-07-02 04:38:58
|
{"extraction_info": {"found_math": true, "script_math_tex": 7, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9593549370765686, "perplexity": 1406.9874067612902}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103984681.57/warc/CC-MAIN-20220702040603-20220702070603-00205.warc.gz"}
|
https://im.kendallhunt.com/MS/teachers/2/6/2/index.html
|
# Lesson 2
Reasoning about Contexts with Tape Diagrams
## 2.1: Notice and Wonder: Remembering Tape Diagrams (5 minutes)
### Warm-up
The purpose of this warm-up is to re-introduce students to these diagrams as a representation of relationships between quantities. As students use tape diagrams as a tool for reasoning, they understand that the length of a piece of the “tape” carries meaning. Two pieces drawn to be the same length are understood to represent the same value. These pieces can be labeled with values to clarify what is known about the diagram, so two pieces labeled with the same letter indicate that they have the same value, even if that value is not known. These diagrams will be helpful for reasoning about situations in activities in this lesson. When students choose to use a tape diagram to represent a relationship between values and reason about a problem, they are using appropriate tools strategically (MP5). Tasks like this one ensure that students understand how such a tool works so that they are more likely to choose to use it correctly and appropriately.
### Launch
Arrange students in groups of 2. Tell students that they will look at an image, and their job is to think of at least one thing they notice and at least one thing they wonder. Display the image for all to see. Ask students to give a signal when they have noticed or wondered about something. Give students 1 minute of quiet think time, and then 1 minute to discuss the things they notice with their partner, followed by a whole-class discussion.
Give students 1 additional minute of quiet work time to complete the second question followed by a whole-class discussion.
### Student Facing
1. What do you notice? What do you wonder?
2. What are some possible values for $$a$$, $$b$$, and $$c$$ in the first diagram?
For $$x$$, $$y$$, and $$z$$ in the second diagram? How did you decide on those values?
### Activity Synthesis
Ask students to share the things they noticed and wondered. Record and display their responses for all to see. If possible, record the relevant reasoning on or near the image. After each response, ask the class whether they agree or disagree and to explain alternative ways of thinking, referring back to the images each time.
Ask students to share possible values for the variables in each diagram. Record and display their responses for all to see. If possible, record the values on the displayed diagram. If the idea that pieces labeled with the same variable represent the same value does not arise in the discussion, make that idea explicit. For example, students should assume that all the pieces labeled with $$y$$ in one diagram have the same value. When they make tape diagrams, they know to draw rectangles of the same length to show the same value, but since quick diagrams are sometimes sloppy, it’s also important to label pieces with numbers or letters to show known and relative values.
## 2.2: Every Picture Tells a Story (15 minutes)
### Activity
In this activity, students explain how a tape diagram represents a situation. They also use the tape diagram to reason about the value of the unknown quantity. Students are not expected to write and solve equations here; any method they can explain for finding values for $$x$$ and $$y$$ is acceptable. While some students might come up with equations to describe the diagram and solve for the unknown, there is no need to focus on developing those ideas at this time.
### Launch
Arrange students in groups of 3. (Some groups of 2 are okay, if needed.)
Ask students if they know what a “flyer” is. If any students do not know, explain or ask a student to explain. If possible, reference some examples of flyers hanging in school.
Ensure students understand they should take turns speaking and listening, and that there are two things to do for each diagram: explain why it represents the story, and also figure out any unknown values in the story.
Action and Expression: Internalize Executive Functions. Chunk this task into more manageable parts to support students who benefit from support with organizational skills in problem solving. Consider pausing after the first question for a brief class discussion before moving on.
Supports accessibility for: Organization; Attention
### Student Facing
Here are three stories with a diagram that represents it. With your group, decide who will go first. That person explains why the diagram represents the story. Work together to find any unknown amounts in the story. Then, switch roles for the second diagram and switch again for the third.
1. Mai made 50 flyers for five volunteers in her club to hang up around school. She gave 5 flyers to the first volunteer, 18 flyers to the second volunteer, and divided the remaining flyers equally among the three remaining volunteers.
2. To thank her five volunteers, Mai gave each of them the same number of stickers. Then she gave them each two more stickers. Altogether, she gave them a total of 30 stickers.
3. Mai distributed another group of flyers equally among the five volunteers. Then she remembered that she needed some flyers to give to teachers, so she took 2 flyers from each volunteer. Then, the volunteers had a total of 40 flyers to hang up.
### Anticipated Misconceptions
Students may not realize that when a variable is assigned to represent a quantity in a situation, it has the same value each time it appears. Revisit what $$x$$ and $$y$$ represent in these problems and why each occurrence of a variable must represent the same value.
In the second situation, students might argue that a more accurate representation would be 5 boxes with $$y$$ to show the first distribution of stickers, and then five boxes with 2 to show the second distribution. Tell students that such a representation would indeed correctly describe the actions in the situation, but that the work of the task is to understand this diagram to set us up for success later.
### Activity Synthesis
Tape diagrams represent relationships between quantities in stories. The goals here are to make sure students understand how parts of the diagram match the information about the story, and for them to begin to reason about how the diagrams connect to the operations that can help find unknown amounts.
Invite one group to provide an explanation for each diagram—both how the diagram represents the story, and how they reasoned about the unknown amounts. After each, ask the class if anyone thought about it a different way. (One additional line of reasoning for each diagram is probably sufficient.)
Here are some questions you might ask to encourage students to be more specific:
• “Where in the diagrams do you see equal parts? How do you know they are equal?”
• “What quantity does the variable represent in the story? How do you know?”
• “In the first story, where in the diagram do we see the ‘remaining flyers’?”
• “Why don’t we see the number 3 in the first diagram to show the 3 remaining volunteers?”
• “In the second diagram, where are the five volunteers represented?”
• “How did the diagrams help you find the value of the unknown quantities?”
Conversing: MLR3 Clarify, Critique, Correct. Present an incorrect statement for the second situation that reflects a possible misunderstanding from the class. For example, “Mai gave 6 stickers to each of the volunteers because 30 divided by 5 is 6. So $$y$$ is 6.” Prompt students to identify the error, and then write a correct statement. This helps students evaluate, and improve on, the written mathematical arguments of others and to understand the importance of defining the variable in context of the situation.
Design Principle(s): Maximize meta-awareness
## 2.3: Every Story Needs a Picture (15 minutes)
### Activity
In the previous activity, students interpreted given tape diagrams and explained how they represented a story. Here, they have a chance to draw tape diagrams to represent a story. The first story is a bit more scaffolded because it specifies what $$x$$ represents. In the other two stories, students need to decide which quantity to represent with a variable and choose a letter to use. As with all activities in this lesson, students are not expected to write and solve an equation. This preliminary work supports the understanding needed to be able to represent such situations with equations.
### Launch
Keep students in the same groups. You might have each student draw all three diagrams and compare them with their groups, working together to resolve any discrepancies. Or if time is short, you might assign each student in the group a different story—ask each student to explain their diagram to their group to see if their group members agree with their interpretation.
For classrooms using the digital version of the materials, take a minute to demonstrate how the controls work in the applet. Some students may prefer to draw the diagrams in their notebooks or on scratch paper.
Action and Expression: Internalize Executive Functions. Provide students with a blank template of a tape diagram to represent each story.
Supports accessibility for: Language; Organization
### Student Facing
Here are three more stories. Draw a tape diagram to represent each story. Then describe how you would find any unknown amounts in the stories.
1. Noah and his sister are making gift bags for a birthday party. Noah puts 3 pencil erasers in each bag. His sister puts $$x$$ stickers in each bag. After filling 4 bags, they have used a total of 44 items.
2. Noah’s family also wants to blow up a total of 60 balloons for the party. Yesterday they blew up 24 balloons. Today they want to split the remaining balloons equally between four family members.
3. Noah’s family bought some fruit bars to put in the gift bags. They bought one box each of four flavors: apple, strawberry, blueberry, and peach. The boxes all had the same number of bars. Noah wanted to taste the flavors and ate one bar from each box. There were 28 bars left for the gift bags.
### Launch
Keep students in the same groups. You might have each student draw all three diagrams and compare them with their groups, working together to resolve any discrepancies. Or if time is short, you might assign each student in the group a different story—ask each student to explain their diagram to their group to see if their group members agree with their interpretation.
For classrooms using the digital version of the materials, take a minute to demonstrate how the controls work in the applet. Some students may prefer to draw the diagrams in their notebooks or on scratch paper.
Action and Expression: Internalize Executive Functions. Provide students with a blank template of a tape diagram to represent each story.
Supports accessibility for: Language; Organization
### Student Facing
Here are three more stories. Draw a tape diagram to represent each story. Then describe how you would find any unknown amounts in the stories.
1. Noah and his sister are making gift bags for a birthday party. Noah puts 3 pencil erasers in each bag. His sister puts $$x$$ stickers in each bag. After filling 4 bags, they have used a total of 44 items.
2. Noah’s family also wants to blow up a total of 60 balloons for the party. Yesterday they blew up 24 balloons. Today they want to split the remaining balloons equally between four family members.
3. Noah’s family bought some fruit bars to put in the gift bags. They bought one box each of four flavors: apple, strawberry, blueberry, and peach. The boxes all had the same number of bars. Noah wanted to taste the flavors and ate one bar from each box. There were 28 bars left for the gift bags.
### Student Facing
#### Are you ready for more?
Design a tiling that uses a repeating pattern consisting of 2 kinds of shapes (e.g., 1 hexagon with 3 triangles forming a triangle). How many times did you repeat the pattern in your picture? How many individual shapes did you use?
### Activity Synthesis
Much of the discussion will take place in groups. Here are some ideas for synthesizing students’ learning about creating tape diagrams:
• Ask students if they had any disagreements in their groups and how they resolved them.
• Ask students how they decided which unknown quantity to find in the story. The first story specifies $$x$$ stickers, but the other stories do not define a variable.
• Display one diagram for each story and ask students to explain how they are alike and how they are different.
## Lesson Synthesis
### Lesson Synthesis
Display one or more of the tape diagrams students encountered or created during the lesson. Ask, “What are some ways that tape diagrams give information about a story?” Responses to highlight:
• A total amount is indicated.
• Pieces that represent equal amounts are the same length (or roughly the same length, if sketching by hand).
• Pieces that represent different amounts are not the same length.
• Pieces are labeled with either their amounts, a variable representing an unknown amount, or an expression like $$x+1$$ to mean “1 more than the unknown amount.”
## Student Lesson Summary
### Student Facing
Tape diagrams are useful for representing how quantities are related and can help us answer questions about a situation.
Suppose a school receives 46 copies of a popular book. The library takes 26 copies and the remainder are split evenly among 4 teachers. How many books does each teacher receive? This situation involves 4 equal parts and one other part. We can represent the situation with a rectangle labeled 26 (books given to the library) along with 4 equal-sized parts (books split among 4 teachers). We label the total, 46, to show how many the rectangle represents in all. We use a letter to show the unknown amount, which represents the number of books each teacher receives. Using the same letter, $$x$$, means that the same number is represented four times.
Some situations have parts that are all equal, but each part has been increased from an original amount:
A company manufactures a special type of sensor, and packs them in boxes of 4 for shipment. Then a new design increases the weight of each sensor by 9 grams. The new package of 4 sensors weighs 76 grams. How much did each sensor weigh originally?
We can describe this situation with a rectangle representing a total of 76 split into 4 equal parts. Each part shows that the new weight, $$x+9$$, is 9 more than the original weight, $$x$$.
|
2020-12-03 13:52:57
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.37496575713157654, "perplexity": 1159.1750563389683}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-50/segments/1606141727782.88/warc/CC-MAIN-20201203124807-20201203154807-00129.warc.gz"}
|
http://gmatclub.com/forum/awa-study-material-63415.html
|
AWA Study Material : General GMAT Questions and Strategies
Check GMAT Club App Tracker for the Latest School Decision Releases http://gmatclub.com/AppTrack
It is currently 07 Dec 2016, 14:14
# Chicago-Booth
### GMAT Club Daily Prep
#### Thank you for using the timer - this advanced tool can estimate your performance and suggest more practice questions. We have subscribed you to Daily Prep Questions via email.
Customized
for You
we will pick new questions that match your level based on your Timer History
Track
every week, we’ll send you an estimated GMAT score based on your performance
Practice
Pays
we will pick new questions that match your level based on your Timer History
# Events & Promotions
###### Events & Promotions in June
Open Detailed Calendar
# AWA Study Material
Author Message
Manager
Joined: 02 Aug 2007
Posts: 231
Schools: Life
Followers: 3
Kudos [?]: 55 [0], given: 0
### Show Tags
04 May 2008, 08:10
Hello Everyone,
I'm trying to find some good AWA study material; with good examples and techniques.
I searched the forum, but found nothing.
Any suggestions?
Thanks,
Ali
Manager
Joined: 16 Sep 2007
Posts: 215
Followers: 1
Kudos [?]: 12 [0], given: 0
### Show Tags
04 May 2008, 09:18
Last edited by Maple on 18 May 2008, 05:52, edited 1 time in total.
VP
Joined: 18 May 2008
Posts: 1286
Followers: 15
Kudos [?]: 400 [0], given: 0
### Show Tags
18 May 2008, 02:59
The website of scoretop.com is experiencing some problem since last 3-4 days. can u send me the data on AWA Thanks
Last edited by ritula on 18 May 2008, 19:34, edited 1 time in total.
Manager
Joined: 23 Feb 2008
Posts: 51
Followers: 0
Kudos [?]: 4 [0], given: 0
### Show Tags
18 May 2008, 18:32
I used the AWA tips and templates in Cracking the GMAT and easily scored a 6 on both essays. I would highly recommend that book.
VP
Joined: 18 May 2008
Posts: 1286
Followers: 15
Kudos [?]: 400 [0], given: 0
### Show Tags
18 May 2008, 19:36
Thanks a lot for ur kind advise. I removed my email id. Is the book u r referring to an e-book? in case u have some soft copy or any kind of material which u can send via mail,it will be highly apreciated.
Thanks again
Manager
Joined: 02 Aug 2007
Posts: 231
Schools: Life
Followers: 3
Kudos [?]: 55 [0], given: 0
### Show Tags
19 May 2008, 14:55
marymayi wrote:
I used the AWA tips and templates in Cracking the GMAT and easily scored a 6 on both essays. I would highly recommend that book.
marymayi, what exactly are you referring to?...can you post a link please.
Thanks,
Ali
Senior Manager
Joined: 07 Jan 2008
Posts: 318
Location: Ann Arbor, Michigan
Schools: Ross Class of 2011
Followers: 7
Kudos [?]: 133 [1] , given: 0
### Show Tags
19 May 2008, 15:51
1
KUDOS
Check out this post or do a search for AWA on the forum.
8-t62437
I got a 5.5 the first time I took the GMAT and a 6.0 the second. Here is what I wrote for that post.
---------------------------------
I took the PR class any the teacher told us that "he has heard" of people using this template and writing complete garbage sentences in the middle of the paragraph and still getting 6s - I got a 5.5 and all I did was dumb it down. PR teaches good GMAT writing is NOT good writing.
"First trick" write a lot.
"Second trick" Always disagree with the essay
"third trick" use transitional words that show a person who is skimming this essay you have structure.
This is my Sample.
I disagree with the authors claim that....blah blah (semi restatement). Filler sentence. Thesis (I normally use three parts ex. The major emphasis of any gmat study should be on verbal, quant, and awa)
-Now use your three parts for each paragraph.
The FIRST POINT, blah blah blah. Support sentences. Support sentences. Finish Paragraph off.
ANOTHER Point, (same)
Finally, (same)
IN CONCLUSION (if you always end this way you'll get your points) restatement of argument. Final sentence.
--Ok so if you write anything that looks like that you will hate yourself because it is sooo pitiful but it truly works, people are reading your essay for about 1 min. If you write a lot they basically skim it, look for buzz words and structure. Like I mentioned before our teacher told us they would put stuff like I really enjoy spending time with cats and dogs. Or As soon as I get done writing this essay it's on to the math, then verbal, and finally I'll get to go home.
I definitely don't suggest writing in the garbage sentences but if you write a lot and have 3 paragraphs plus an introduction and a conclusion you are basically guaranteed a 5 with no prep at all.
CEO
Joined: 17 Nov 2007
Posts: 3589
Concentration: Entrepreneurship, Other
Schools: Chicago (Booth) - Class of 2011
GMAT 1: 750 Q50 V40
Followers: 530
Kudos [?]: 3455 [0], given: 360
### Show Tags
19 May 2008, 19:10
Lsuguy7, good point
+1
_________________
HOT! GMAT TOOLKIT 2 (iOS) / GMAT TOOLKIT (Android) - The OFFICIAL GMAT CLUB PREP APP, a must-have app especially if you aim at 700+ | PrepGame
Senior Manager
Joined: 24 Feb 2008
Posts: 348
Schools: UCSD ($) , UCLA, USC ($), Stanford
Followers: 184
Kudos [?]: 2853 [0], given: 2
### Show Tags
19 May 2008, 21:18
Walker,
I have developed much detailed templates for both types of essays with so many expected phrases and transitional words filled in that you can basically memorize more than half of the words you are going to have to write. That saves a lot of time and then writing 500+ words per essay becomes easy.
Let me know if you want my templates.
I got 6.0 both times I took the GMAT.
_________________
Best AWA guide here: http://gmatclub.com/forum/how-to-get-6-0-awa-my-guide-64327.html
CEO
Joined: 17 Nov 2007
Posts: 3589
Concentration: Entrepreneurship, Other
Schools: Chicago (Booth) - Class of 2011
GMAT 1: 750 Q50 V40
Followers: 530
Kudos [?]: 3455 [0], given: 360
### Show Tags
19 May 2008, 21:52
WOW!
It would be great if you send them to my e-mail: awalker[at]ukr[dot]net
_________________
HOT! GMAT TOOLKIT 2 (iOS) / GMAT TOOLKIT (Android) - The OFFICIAL GMAT CLUB PREP APP, a must-have app especially if you aim at 700+ | PrepGame
Senior Manager
Joined: 24 Feb 2008
Posts: 348
Schools: UCSD ($) , UCLA, USC ($), Stanford
Followers: 184
Kudos [?]: 2853 [0], given: 2
### Show Tags
20 May 2008, 07:50
NP, I just need to sit down and type it up cuz everything is on paper right now. What day do you take the test?
Is it ok if I post it here or that is not allowed? If not, I'll just email it.
_________________
Best AWA guide here: http://gmatclub.com/forum/how-to-get-6-0-awa-my-guide-64327.html
Manager
Joined: 02 Aug 2007
Posts: 231
Schools: Life
Followers: 3
Kudos [?]: 55 [0], given: 0
### Show Tags
20 May 2008, 16:29
This is what I have so far, somewhat helpful...need more!!!
How do you come up with support statements in issue essays? How do you know if your take is strong enough?
Attachment:
2004-09-02_160638_800scoreAWAGuide.pdf [325.91 KiB]
Senior Manager
Joined: 24 Feb 2008
Posts: 348
Schools: UCSD ($) , UCLA, USC ($), Stanford
Followers: 184
Kudos [?]: 2853 [0], given: 2
### Show Tags
20 May 2008, 20:47
Walker,
Is it allowed to post my write-up and templates here?
If not, I'll just send it in emails to those who requested. I should be done by Thursday. I'd like to include some complete essays to illustrate how I go from the templates to the real thing. A lot of guides just leave you with the structure but you are on your own to materialize. Others give you tons of complete essays, which they have no proof would earn a 6.0, but it is hard to learn/detect a pattern that can be applied to any essay topic.
What I am compiling, I have used on the real thing per se, meaning not only structure but exact phrases, even complete sentences, and it worked.
_________________
Best AWA guide here: http://gmatclub.com/forum/how-to-get-6-0-awa-my-guide-64327.html
CEO
Joined: 17 May 2007
Posts: 2989
Followers: 60
Kudos [?]: 576 [0], given: 210
### Show Tags
20 May 2008, 21:15
Send it my way too chineseburned bsd_loverATyahooDOTcom
chineseburned wrote:
Walker,
Is it allowed to post my write-up and templates here?
If not, I'll just send it in emails to those who requested. I should be done by Thursday. I'd like to include some complete essays to illustrate how I go from the templates to the real thing. A lot of guides just leave you with the structure but you are on your own to materialize. Others give you tons of complete essays, which they have no proof would earn a 6.0, but it is hard to learn/detect a pattern that can be applied to any essay topic.
What I am compiling, I have used on the real thing per se, meaning not only structure but exact phrases, even complete sentences, and it worked.
Senior Manager
Joined: 10 Jun 2007
Posts: 346
Location: Newport, RI
Followers: 2
Kudos [?]: 38 [0], given: 0
### Show Tags
21 May 2008, 03:23
Veritas has a very good template and recommends 6 paragraphs and "yes" always call the arguement weak.
I don't have my book anymore but if someone does, try and get it from them. I got a 5.5 and just skimmed the template before walking in but it's a garuntee 5.5-6.0.
CEO
Joined: 17 Nov 2007
Posts: 3589
Concentration: Entrepreneurship, Other
Schools: Chicago (Booth) - Class of 2011
GMAT 1: 750 Q50 V40
Followers: 530
Kudos [?]: 3455 [0], given: 360
### Show Tags
21 May 2008, 07:24
chineseburned wrote:
Is it allowed to post my write-up and templates here?
If not, I'll just send it in emails to those who requested. I should be done by Thursday. I'd like to include some complete essays to illustrate how I go from the templates to the real thing. A lot of guides just leave you with the structure but you are on your own to materialize. Others give you tons of complete essays, which they have no proof would earn a 6.0, but it is hard to learn/detect a pattern that can be applied to any essay topic.
What I am compiling, I have used on the real thing per se, meaning not only structure but exact phrases, even complete sentences, and it worked.
I don't now about policy. Maybe just send them in private way.
_________________
HOT! GMAT TOOLKIT 2 (iOS) / GMAT TOOLKIT (Android) - The OFFICIAL GMAT CLUB PREP APP, a must-have app especially if you aim at 700+ | PrepGame
SVP
Joined: 24 Aug 2006
Posts: 2132
Followers: 3
Kudos [?]: 138 [0], given: 0
### Show Tags
21 May 2008, 07:35
Sorry, don't want to be the party pooper, but you guys are putting waaay too much thought into the awa.
Senior Manager
Joined: 24 Feb 2008
Posts: 348
Schools: UCSD ($) , UCLA, USC ($), Stanford
Followers: 184
Kudos [?]: 2853 [0], given: 2
### Show Tags
21 May 2008, 18:09
walker wrote:
chineseburned wrote:
Is it allowed to post my write-up and templates here?
If not, I'll just send it in emails to those who requested. I should be done by Thursday. I'd like to include some complete essays to illustrate how I go from the templates to the real thing. A lot of guides just leave you with the structure but you are on your own to materialize. Others give you tons of complete essays, which they have no proof would earn a 6.0, but it is hard to learn/detect a pattern that can be applied to any essay topic.
What I am compiling, I have used on the real thing per se, meaning not only structure but exact phrases, even complete sentences, and it worked.
I don't now about policy. Maybe just send them in private way.
How you don't know about the policy??
You are a global admin on this forum
_________________
Best AWA guide here: http://gmatclub.com/forum/how-to-get-6-0-awa-my-guide-64327.html
Intern
Joined: 14 Apr 2008
Posts: 41
Followers: 0
Kudos [?]: 3 [0], given: 0
### Show Tags
21 May 2008, 20:32
Hi chineseburned, could you send me a copy to madsunvnATgmailDOTcom
Thanks a bunch
CEO
Joined: 17 Nov 2007
Posts: 3589
Concentration: Entrepreneurship, Other
Schools: Chicago (Booth) - Class of 2011
GMAT 1: 750 Q50 V40
Followers: 530
Kudos [?]: 3455 [0], given: 360
### Show Tags
21 May 2008, 20:57
chineseburned wrote:
How you don't know about the policy??
You are a global admin on this forum
Actually, I'm not admin. I can delete, edit posts and give many kudos
_________________
HOT! GMAT TOOLKIT 2 (iOS) / GMAT TOOLKIT (Android) - The OFFICIAL GMAT CLUB PREP APP, a must-have app especially if you aim at 700+ | PrepGame
Re: AWA Study Material [#permalink] 21 May 2008, 20:57
Go to page 1 2 Next [ 23 posts ]
Similar topics Replies Last post
Similar
Topics:
Probability Study Material 1 02 Aug 2010, 14:23
CR Study Materials 1 02 Dec 2009, 11:04
1 Study materials? 12 31 Jul 2009, 08:12
Study Materials 1 02 Oct 2007, 14:20
Help with GMAT study materials 7 26 Aug 2007, 23:28
Display posts from previous: Sort by
# AWA Study Material
Moderators: HiLine, WaterFlowsUp
Powered by phpBB © phpBB Group and phpBB SEO Kindly note that the GMAT® test is a registered trademark of the Graduate Management Admission Council®, and this site has neither been reviewed nor endorsed by GMAC®.
|
2016-12-07 22:14:23
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.28598204255104065, "perplexity": 7901.752163969685}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-50/segments/1480698542250.48/warc/CC-MAIN-20161202170902-00020-ip-10-31-129-80.ec2.internal.warc.gz"}
|
http://mathhelpforum.com/geometry/62998-square-inscribed-triangle.html
|
# Math Help - Square inscribed in triangle
1. ## Square inscribed in triangle
A square is inscribed in a right triangle whose short sides are in the ratio of 1:2. What is the length of the side of the square in terms of the length of the shortest side of the circumscribed triangle?
I tried assuming that the shortest sides were 1 and 2, which makes the hypotenuse sq. root 5
Then I solved for the area of the whole triangle and I got 1
So then I made 1 = (1/2)(1-w)(2-w)
But for some reason I got w = 3 or 0 which doesn't work out...
2. i may be making up some BS but does this look correct/true?
3. Originally Posted by realintegerz
A square is inscribed in a right triangle whose short sides are in the ratio of 1:2. What is the length of the side of the square in terms of the length of the shortest side of the circumscribed triangle?
I tried assuming that the shortest sides were 1 and 2, which makes the hypotenuse sq. root 5
Then I solved for the area of the whole triangle and I got 1
So then I made 1 = (1/2)(1-w)(2-w)
But for some reason I got w = 3 or 0 which doesn't work out...
I will try to explain my solution without a picture... I somehow cannot upload a picture from my school...
Draw a right triangle, B = 90 degrees AB = 1 BC = 2
Draw inside the triangle a square label the square EBFG with E on AB en F on BC and G on AC
EB = BF = x
Then AE = 1 - x and FC = 2 - x
You've got two similar triangles AEG and GFC there is a ratio between the two triangles and because of that
AE : GF = EG : FC
but also AE x FC = GF x EG and
$(1 - x)(2 - x ) = x*x$
$2 - 3x + x^2 = x^2$
$2 - 3x = 0$
and x = 2/3
so we get the ratio 2/3 : 1 : 2
[in my school we have to multiply by 3 ..... 2 : 3 : 6]
|
2015-05-23 03:56:03
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 3, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7464613914489746, "perplexity": 368.3702468975361}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-22/segments/1432207927104.48/warc/CC-MAIN-20150521113207-00238-ip-10-180-206-219.ec2.internal.warc.gz"}
|
https://mathsgee.com/36788/flashlight-traveling-incident-surface-refraction-refraction
|
0 like 0 dislike
1,214 views
A beam of flashlight traveling in air incident on a surface of a thin glass at an angle of $38^{\circ}$ with the normal. The index of refraction of the glass is $1.56$. What is the angle of refraction?
| 1,214 views
2 like 0 dislike
When a beam of light strikes the boundary of two different media such as air-glass, part of it is reflected, and another part is refracted. That part that enters on the other side of the boundary is called refracted ray. The angle that this ray makes with the vertical to the boundary is also called the angle of refraction.
In this problem, the light is initially in the air with an index of refraction $n_{i}=1.00$ and strikes the boundary surface separating air and glass at $\theta_{i}=38^{\circ}$. This is the angle of incidence. The subscript $i$ denotes the incident.
Another different medium is glass with $n=1.56$. The refracted ray lies in it with an unknown angle $\theta_{r}=?$ which should be found using Snell's law of refraction.
Before going further and solve the problem, we expect that since the light beam enters from a low index of refraction medium into a one with a high index of refraction so the refracted ray should be bent toward the normal.
By applying Snell's law, we will check this claim.
$n_{i} \sin \theta_{i}=n_{r} \sin \theta_{r}$
\begin{aligned} (1.00) \sin 38^{\circ} &=(1.56) \sin \theta_{r} \\ \Rightarrow \sin \theta_{r} &=\frac{1.00}{1.56} \sin 38^{\circ} \\ &=0.3947 \end{aligned}
Now find the angle whose sine is $0.3947$ as below
\begin{aligned} \sin \theta_{r} &=0.3947 \\ \Rightarrow \theta_{r} &=\sin ^{-1}(0.3947) \\ &=23.25^{\circ} \end{aligned}
As expected.
by Diamond (89,175 points)
0 like 0 dislike
0 like 0 dislike
0 like 0 dislike
1 like 0 dislike
1 like 0 dislike
1 like 0 dislike
1 like 0 dislike
1 like 0 dislike
1 like 0 dislike
1 like 0 dislike
1 like 0 dislike
1 like 0 dislike
0 like 0 dislike
0 like 0 dislike
|
2023-03-29 23:33:14
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9974039793014526, "perplexity": 1264.610784173546}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296949035.66/warc/CC-MAIN-20230329213541-20230330003541-00563.warc.gz"}
|
https://economics.stackexchange.com/questions/33009/how-to-borrow-at-risk-free-rate
|
# How to borrow at risk free rate
When learning about derivatives, we learnt about risk-free hedges and portfolios. However, one of the concepts was about borrowing and lending at the risk free rate. Now, for lending it's as simple as buying government bonds. However, how does one borrow at the risk free rate? I doubt it's about selling your own bonds. I'm just trying to work out how to apply these economic theories in real life, so any help will be greatly appreciated.
• @lunar_props The 0% promotions are tricky though, because you are borrowing on the condition of buying a product. I will happily sell you an apple for \$120 in \$10 monthly installments at 0% interest. – Giskard Nov 29 '19 at 5:38
|
2020-02-21 22:31:58
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2617093622684479, "perplexity": 1279.303858020643}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875145538.32/warc/CC-MAIN-20200221203000-20200221233000-00209.warc.gz"}
|
https://chemistry.stackexchange.com/questions/124740/in-an-isothermal-process-how-can-the-change-in-internal-energy-be-0/124783
|
# In an isothermal process, how can the change in internal energy be 0?
It was written in my textbook,
$$\mathrm{d}U = \left(\frac{\partial U}{\partial T}\right)_V \mathrm{d}T +\left(\frac{\partial U}{\partial V}\right)_T \mathrm{d}V$$ If the process is isothermal, $$\mathrm{d}T = 0$$. So, the equation reduces to:
$$\mathrm{d}U = \left(\frac{\partial U}{\partial V}\right)_T \mathrm{d}V$$
I was told, $$\mathrm{d}U = 0$$ in an isothermal process, does that mean $$\mathrm{d}V = 0$$, or $$\left(\frac{\partial U}{\partial V}\right) = 0$$? But how is it possible that $$\mathrm{d}V=0$$ in an isothermal process, or $$\left(\frac{\partial U}{\partial V}\right) = 0$$?
If someone could explain what am I doing wrong, that would be very helpful.
• People like treating things like an ideal gas, and they should not. – Charlie Crown Dec 7 '19 at 5:31
The internal energy of an ideal gas is simply $$U = \alpha nRT,$$ where $$\alpha = \frac{\text{degrees of freedom}}{2}$$ So, in an isothermal process, $$\Delta T = 0 \Longrightarrow \Delta U = \alpha nR\Delta T = 0,$$ and likewise any $$\left(\frac{\partial U}{\partial P}\right)_T = \left(\frac{\partial U}{\partial V}\right)_T = 0.$$
I was told, $$\mathrm{d}U=0$$ in isothermal process.
That is not generally true. It is, however, true for ideal gases, which is probably what you were discussing. No attractive or repulsive forces exist between ideal gas particles. Hence the only type of internal energy an ideal gas can have is kinetic energy, i.e., energy due to the motion of its particles. And since kinetic energy depends only on temperature, an ideal gas's internal energy likewise only depends on its temperature. As a consequence, $$\mathrm{d}U = 0$$ for ideal gases in isothermal processes, because the temperature doesn't change.
does that mean $$\mathrm{d}V = 0$$, or $$\left(\frac{\partial U}{\partial V}\right) = 0$$? But how is possible that $$\mathrm{d}V=0$$ in isothermal process, or $$\left(\frac{\partial U}{\partial V}\right)=0$$?
Because the internal energy of an ideal gas depends only on its temperature, $$\left(\frac{\partial U}{\partial V}\right)_T=0$$, i.e., the internal energy doesn't change with volume at constant $$T$$.
For completeness, since there is already a well-explained answer addressing how and why $$\mathrm{d}U=0$$ for an isothermal process is a hallmark of an ideal gas, here is a short derivation of a general expression for the energy.
Start from the total differential for the free energy: $$\mathrm{d}U = \left(\frac{\partial U}{\partial V}\right)_T \mathrm{d}V +\left(\frac{\partial U}{\partial T}\right)_V \mathrm{d}T \tag{1}\label{eq:total-differential}$$
Define the heat capacity at constant volume as $$C_V = \left(\frac{\partial U}{\partial T}\right)_V. \tag{2}\label{eq:heat-cap-const-v}$$
Evaluate the partial derivative with respect to $$V$$ in \eqref{eq:total-differential} from the 1st law of thermodynamics:
\begin{align} \mathrm{d}U &= -P\mathrm{d}V + T\mathrm{d}S \\ \rightarrow \left(\frac{\partial U}{\partial V}\right)_T &= -P + T\left(\frac{\partial S}{\partial V}\right)_T \tag{3}\label{eq:first-law} \end{align}
Make use of the following Maxwell relation: $$\left(\frac{\partial S}{\partial V}\right)_T= \left(\frac{\partial P}{\partial T}\right)_V$$
So \eqref{eq:first-law} becomes \begin{align} \left(\frac{\partial U}{\partial V}\right)_T &= -P + T\left(\frac{\partial P}{\partial T}\right)_V \tag{4}\label{eq:with-Maxwell} \end{align}
Then inserting the results of \eqref{eq:heat-cap-const-v} and \eqref{eq:with-Maxwell} into \eqref{eq:total-differential} we obtain the general result (when only $$pV$$ work is done): $$\mathrm{d}U = \left[ -P + T\left(\frac{\partial P}{\partial T}\right)_V \right] \mathrm{d}V + C_V \mathrm{d}T \tag{5}$$
If you plug in the equation of state for an ideal gas you then obtain $$\left(\frac{\partial U}{\partial V}\right)_T = 0$$ and the expected result $$\mathrm{d}U = C_V \mathrm{d}T.$$
|
2020-02-24 15:35:56
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 31, "wp-katex-eq": 0, "align": 2, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9809672832489014, "perplexity": 303.87850156146595}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875145960.92/warc/CC-MAIN-20200224132646-20200224162646-00403.warc.gz"}
|
https://www.jirka.org/ra/html/sec_limoffunc.html
|
## Section3.1Limits of functions
Note: 2–3 lectures
Before we define continuity of functions, we must visit a somewhat more general notion of a limit. Given a function $$f \colon S \to \R\text{,}$$ we want to see how $$f(x)$$ behaves as $$x$$ tends to a certain point.
### Subsection3.1.1Cluster points
First, we return to a concept we have previously seen in an exercise. When moving within the set $$S$$ we can only approach points that have elements of $$S$$ arbitrarily near.
#### Definition3.1.1.
Let $$S \subset \R$$ be a set. A number $$x \in \R$$ is called a cluster point of $$S$$ if for every $$\epsilon > 0\text{,}$$ the set $$(x-\epsilon,x+\epsilon) \cap S \setminus \{ x \}$$ is not empty.
That is, $$x$$ is a cluster point of $$S$$ if there are points of $$S$$ arbitrarily close to $$x\text{.}$$ Another way of phrasing the definition is to say that $$x$$ is a cluster point of $$S$$ if for every $$\epsilon > 0\text{,}$$ there exists a $$y \in S$$ such that $$y \not= x$$ and $$\abs{x - y} < \epsilon\text{.}$$ Note that a cluster point of $$S$$ need not lie in $$S\text{.}$$
Let us see some examples.
1. The set $$\{ \nicefrac{1}{n} : n \in \N \}$$ has a unique cluster point zero.
2. The cluster points of the open interval $$(0,1)$$ are all points in the closed interval $$[0,1]\text{.}$$
3. The set of cluster points of $$\Q$$ is the whole real line $$\R\text{.}$$
4. The set of cluster points of $$[0,1) \cup \{ 2 \}$$ is the interval $$[0,1]\text{.}$$
5. The set $$\N$$ has no cluster points in $$\R\text{.}$$
#### Proof.
First suppose $$x$$ is a cluster point of $$S\text{.}$$ For every $$n \in \N\text{,}$$ pick $$x_n$$ to be an arbitrary point of $$(x-\nicefrac{1}{n},x+\nicefrac{1}{n}) \cap S \setminus \{x\}\text{,}$$ which is nonempty because $$x$$ is a cluster point of $$S\text{.}$$ Then $$x_n$$ is within $$\nicefrac{1}{n}$$ of $$x\text{,}$$ that is,
\begin{equation*} \abs{x-x_n} < \nicefrac{1}{n} . \end{equation*}
As $$\{ \nicefrac{1}{n} \}$$ converges to zero, $$\{ x_n \}$$ converges to $$x\text{.}$$
On the other hand, if we start with a sequence of numbers $$\{ x_n \}$$ in $$S$$ converging to $$x$$ such that $$x_n \not= x$$ for all $$n\text{,}$$ then for every $$\epsilon > 0$$ there is an $$M$$ such that, in particular, $$\abs{x_M - x} < \epsilon\text{.}$$ That is, $$x_M \in (x-\epsilon,x+\epsilon) \cap S \setminus \{x\}\text{.}$$
### Subsection3.1.2Limits of functions
If a function $$f$$ is defined on a set $$S$$ and $$c$$ is a cluster point of $$S\text{,}$$ then we define the limit of $$f(x)$$ as $$x$$ gets close to $$c\text{.}$$ It is irrelevant for the definition whether $$f$$ is defined at $$c$$ or not. Even if the function is defined at $$c\text{,}$$ the limit of the function as $$x$$ goes to $$c$$ can very well be different from $$f(c)\text{.}$$
#### Definition3.1.3.
Let $$f \colon S \to \R$$ be a function and $$c$$ a cluster point of $$S \subset \R\text{.}$$ Suppose there exists an $$L \in \R$$ and for every $$\epsilon > 0\text{,}$$ there exists a $$\delta > 0$$ such that whenever $$x \in S \setminus \{ c \}$$ and $$\abs{x - c} < \delta\text{,}$$ we have
\begin{equation*} \abs{f(x) - L} < \epsilon . \end{equation*}
We then say $$f(x)$$ converges to $$L$$ as $$x$$ goes to $$c\text{.}$$ We say $$L$$ is the limit of $$f(x)$$ as $$x$$ goes to $$c\text{.}$$ We write
\begin{equation*} \lim_{x \to c} f(x) := L , \end{equation*}
or
If no such $$L$$ exists, then we say that the limit does not exist or that $$f$$ diverges at $$c\text{.}$$
Again the notation and language we are using above assumes the limit is unique even though we have not yet proved uniqueness. Let us do that now.
#### Proof.
Let $$L_1$$ and $$L_2$$ be two numbers that both satisfy the definition. Take an $$\epsilon > 0$$ and find a $$\delta_1 > 0$$ such that $$\abs{f(x)-L_1} < \nicefrac{\epsilon}{2}$$ for all $$x \in S \setminus \{c\}$$ with $$\abs{x-c} < \delta_1\text{.}$$ Also find $$\delta_2 > 0$$ such that $$\abs{f(x)-L_2} < \nicefrac{\epsilon}{2}$$ for all $$x \in S \setminus \{c\}$$ with $$\abs{x-c} < \delta_2\text{.}$$ Put $$\delta := \min \{ \delta_1, \delta_2 \}\text{.}$$ Suppose $$x \in S\text{,}$$ $$\abs{x-c} < \delta\text{,}$$ and $$x \not= c\text{.}$$ As $$\delta > 0$$ and $$c$$ is a cluster point, such an $$x$$ exists. Then
\begin{equation*} \abs{L_1 - L_2} = \abs{L_1 - f(x) + f(x) - L_2} \leq \abs{L_1 - f(x)} + \abs{f(x) - L_2} < \frac{\epsilon}{2} + \frac{\epsilon}{2} = \epsilon. \end{equation*}
As $$\abs{L_1-L_2} < \epsilon$$ for arbitrary $$\epsilon > 0\text{,}$$ then $$L_1 = L_2\text{.}$$
#### Example3.1.5.
Consider $$f \colon \R \to \R$$ defined by $$f(x) := x^2\text{.}$$ Then for any $$c \in \R\text{,}$$
\begin{equation*} \lim_{x\to c} f(x) = \lim_{x\to c} x^2 = c^2 . \end{equation*}
Proof: Let $$c \in \R$$ be fixed, and suppose $$\epsilon > 0$$ is given. Write
\begin{equation*} \delta := \min \left\{ 1 , \, \frac{\epsilon}{2\abs{c}+1} \right\} . \end{equation*}
Take $$x \not= c$$ such that $$\abs{x-c} < \delta\text{.}$$ In particular, $$\abs{x-c} < 1\text{.}$$ By reverse triangle inequality, we get
\begin{equation*} \abs{x}-\abs{c} \leq \abs{x-c} < 1 . \end{equation*}
Adding $$2\abs{c}$$ to both sides, we obtain $$\abs{x} + \abs{c} < 2\abs{c} + 1\text{.}$$ We compute
\begin{equation*} \begin{split} \abs{f(x) - c^2} &= \abs{x^2-c^2} \\ &= \abs{(x+c)(x-c)} \\ &= \abs{x+c}\abs{x-c} \\ &\leq (\abs{x}+\abs{c})\abs{x-c} \\ &< (2\abs{c}+1)\abs{x-c} \\ &< (2\abs{c}+1)\frac{\epsilon}{2\abs{c}+1} = \epsilon . \end{split} \end{equation*}
#### Example3.1.6.
Define $$f \colon [0,1) \to \R$$ by
\begin{equation*} f(x) := \begin{cases} x & \text{if } x > 0 , \\ 1 & \text{if } x = 0 . \end{cases} \end{equation*}
Then
\begin{equation*} \lim_{x\to 0} f(x) = 0 , \end{equation*}
even though $$f(0) = 1\text{.}$$
Proof: Let $$\epsilon > 0$$ be given. Let $$\delta := \epsilon\text{.}$$ For $$x \in [0,1)\text{,}$$ $$x \not= 0\text{,}$$ and $$\abs{x-0} < \delta\text{,}$$ we get
\begin{equation*} \abs{f(x) - 0} = \abs{x} < \delta = \epsilon . \end{equation*}
### Subsection3.1.3Sequential limits
Let us connect the limit as defined above with limits of sequences.
#### Proof.
Suppose $$f(x) \to L$$ as $$x \to c\text{,}$$ and $$\{ x_n \}$$ is a sequence such that $$x_n \in S \setminus \{c\}$$ and $$\lim\, x_n = c\text{.}$$ We wish to show that $$\{ f(x_n) \}$$ converges to $$L\text{.}$$ Let $$\epsilon > 0$$ be given. Find a $$\delta > 0$$ such that if $$x \in S \setminus \{c\}$$ and $$\abs{x-c} < \delta\text{,}$$ then $$\abs{f(x) - L} < \epsilon\text{.}$$ As $$\{ x_n \}$$ converges to $$c\text{,}$$ find an $$M$$ such that for $$n \geq M\text{,}$$ we have that $$\abs{x_n - c} < \delta\text{.}$$ Therefore, for $$n \geq M\text{,}$$
\begin{equation*} \abs{f(x_n) - L} < \epsilon . \end{equation*}
Thus $$\{ f(x_n) \}$$ converges to $$L\text{.}$$
For the other direction, we use proof by contrapositive. Suppose it is not true that $$f(x) \to L$$ as $$x \to c\text{.}$$ The negation of the definition is that there exists an $$\epsilon > 0$$ such that for every $$\delta > 0$$ there exists an $$x \in S \setminus \{c\}\text{,}$$ where $$\abs{x-c} < \delta$$ and $$\abs{f(x)-L} \geq \epsilon\text{.}$$
Let us use $$\nicefrac{1}{n}$$ for $$\delta$$ in the statement above to construct a sequence $$\{ x_n \}\text{.}$$ We have that there exists an $$\epsilon > 0$$ such that for every $$n\text{,}$$ there exists a point $$x_n \in S \setminus \{c\}\text{,}$$ where $$\abs{x_n-c} < \nicefrac{1}{n}$$ and $$\abs{f(x_n)-L} \geq \epsilon\text{.}$$ The sequence $$\{ x_n \}$$ just constructed converges to $$c\text{,}$$ but the sequence $$\{ f(x_n) \}$$ does not converge to $$L\text{.}$$ And we are done.
It is possible to strengthen the reverse direction of the lemma by simply stating that $$\{ f(x_n) \}$$ converges without requiring a specific limit. See Exercise 3.1.11.
#### Example3.1.8.
$$\displaystyle \lim_{x \to 0} \, \sin( \nicefrac{1}{x} )$$ does not exist, but $$\displaystyle \lim_{x \to 0} \, x\sin( \nicefrac{1}{x} ) = 0\text{.}$$ See Figure 3.1.
Proof: We start with $$\sin(\nicefrac{1}{x})\text{.}$$ Define a sequence by $$x_n := \frac{1}{\pi n + \nicefrac{\pi}{2}}\text{.}$$ It is not hard to see that $$\lim\, x_n = 0\text{.}$$ Furthermore,
\begin{equation*} \sin ( \nicefrac{1}{x_n} ) = \sin (\pi n + \nicefrac{\pi}{2}) = {(-1)}^n . \end{equation*}
Therefore, $$\bigl\{ \sin ( \nicefrac{1}{x_n} ) \bigr\}$$ does not converge. By Lemma 3.1.7, $$\lim_{x \to 0} \, \sin( \nicefrac{1}{x} )$$ does not exist.
Now consider $$x\sin(\nicefrac{1}{x})\text{.}$$ Let $$\{ x_n \}$$ be a sequence such that $$x_n \not= 0$$ for all $$n\text{,}$$ and such that $$\lim\, x_n = 0\text{.}$$ Notice that $$\abs{\sin(t)} \leq 1$$ for all $$t \in \R\text{.}$$ Therefore,
\begin{equation*} \abs{x_n\sin(\nicefrac{1}{x_n})-0} = \abs{x_n}\abs{\sin(\nicefrac{1}{x_n})} \leq \abs{x_n} . \end{equation*}
As $$x_n$$ goes to 0, then $$\abs{x_n}$$ goes to zero, and hence $$\bigl\{ x_n\sin(\nicefrac{1}{x_n}) \bigr\}$$ converges to zero. By Lemma 3.1.7, $$\displaystyle \lim_{x \to 0} \, x\sin( \nicefrac{1}{x} ) = 0\text{.}$$
Keep in mind the phrase “for every sequence” in the lemma. For example, take $$\sin(\nicefrac{1}{x})$$ and the sequence given by $$x_n := \nicefrac{1}{\pi n}\text{.}$$ Then $$\bigl\{ \sin (\nicefrac{1}{x_n}) \bigr\}$$ is the constant zero sequence, and therefore converges to zero, but the limit of $$\sin(\nicefrac{1}{x})$$ as $$x \to 0$$ does not exist.
Using Lemma 3.1.7, we can start applying everything we know about sequential limits to limits of functions. Let us give a few important examples.
#### Proof.
Take $$\{ x_n \}$$ be a sequence of numbers in $$S \setminus \{ c \}$$ that converges to $$c\text{.}$$ Let
\begin{equation*} L_1 := \lim_{x\to c} f(x), \qquad \text{and} \qquad L_2 := \lim_{x\to c} g(x) . \end{equation*}
Lemma 3.1.7 says that $$\{ f(x_n) \}$$ converges to $$L_1$$ and $$\{ g(x_n) \}$$ converges to $$L_2\text{.}$$ We also have $$f(x_n) \leq g(x_n)\text{.}$$ We obtain $$L_1 \leq L_2$$ using Lemma 2.2.3.
By applying constant functions, we get the following corollary. The proof is left as an exercise.
Using Lemma 3.1.7 in the same way as above, we also get the following corollaries, whose proofs are again left as exercises.
### Subsection3.1.4Limits of restrictions and one-sided limits
Sometimes we work with the function defined on a subset.
#### Definition3.1.14.
Let $$f \colon S \to \R$$ be a function and $$A \subset S\text{.}$$ Define the function $$f|_A \colon A \to \R$$ by
\begin{equation*} f|_A (x) := f(x) \qquad \text{for } x \in A. \end{equation*}
The function $$f|_A$$ is called the restriction of $$f$$ to $$A\text{.}$$
The function $$f|_A$$ is simply the function $$f$$ taken on a smaller domain. The following proposition is the analogue of taking a tail of a sequence.
#### Proof.
First, let $$c$$ be a cluster point of $$A\text{.}$$ Since $$A \subset S\text{,}$$ then if $$( A \setminus \{ c\} ) \cap (c-\epsilon,c+\epsilon)$$ is nonempty for every $$\epsilon > 0\text{,}$$ then $$( S \setminus \{ c\} ) \cap (c-\epsilon,c+\epsilon)$$ is nonempty for every $$\epsilon > 0\text{.}$$ Thus $$c$$ is a cluster point of $$S\text{.}$$ Second, suppose $$c$$ is a cluster point of $$S\text{.}$$ Then for $$\epsilon > 0$$ such that $$\epsilon < \alpha$$ we get that $$( A \setminus \{ c\} ) \cap (c-\epsilon,c+\epsilon) = ( S \setminus \{ c\} ) \cap (c-\epsilon,c+\epsilon)\text{,}$$ which is nonempty. This is true for all $$\epsilon < \alpha$$ and hence $$( A \setminus \{ c\} ) \cap (c-\epsilon,c+\epsilon)$$ must be nonempty for all $$\epsilon > 0\text{.}$$ Thus $$c$$ is a cluster point of $$A\text{.}$$
Now suppose $$c$$ is a cluster point of $$S$$ and $$f(x) \to L$$ as $$x \to c\text{.}$$ That is, for every $$\epsilon > 0$$ there is a $$\delta > 0$$ such that if $$x \in S \setminus \{ c \}$$ and $$\abs{x-c} < \delta\text{,}$$ then $$\abs{f(x)-L} < \epsilon\text{.}$$ Because $$A \subset S\text{,}$$ if $$x$$ is in $$A \setminus \{ c \}\text{,}$$ then $$x$$ is in $$S \setminus \{ c \}\text{,}$$ and hence $$f|_A(x) \to L$$ as $$x \to c\text{.}$$
Finally suppose $$f|_A(x) \to L$$ as $$x \to c\text{.}$$ For every $$\epsilon > 0$$ there is a $$\delta' > 0$$ such that if $$x \in A \setminus \{ c \}$$ and $$\abs{x-c} < \delta'\text{,}$$ then $$\bigl\lvert f|_A(x)-L \bigr\rvert < \epsilon\text{.}$$ Take $$\delta := \min \{ \delta', \alpha \}\text{.}$$ Now suppose $$x \in S \setminus \{ c \}$$ and $$\abs{x-c} < \delta\text{.}$$ As $$\abs{x-c} < \alpha\text{,}$$ then $$x \in A \setminus \{ c \}\text{,}$$ and as $$\abs{x-c} < \delta'\text{,}$$ we have $$\abs{f(x)-L} = \bigl\lvert f|_A(x)-L \bigr\rvert < \epsilon\text{.}$$
The hypothesis of the proposition is necessary. For an arbitrary restriction we generally only get implication in only one direction, see Exercise 3.1.6.
The usual notation for the limit is
\begin{equation*} \lim_{\substack{x \to c\\x \in A}} f(x) := \lim_{x \to c} f|_A(x) . \end{equation*}
The most common use of restriction with respect to limits are the one-sided limits 1 .
#### Definition3.1.16.
Let $$f \colon S \to \R$$ be function and let $$c$$ be a cluster point of $$S \cap (c,\infty)\text{.}$$ Then if the limit of the restriction of $$f$$ to $$S \cap (c,\infty)$$ as $$x \to c$$ exists, define
\begin{equation*} \lim_{x \to c^+} f(x) := \lim_{x\to c} f|_{S \cap (c,\infty)}(x) . \end{equation*}
Similarly, if $$c$$ is a cluster point of $$S \cap (-\infty,c)$$ and the limit of the restriction as $$x \to c$$ exists, define
\begin{equation*} \lim_{x \to c^-} f(x) := \lim_{x\to c} f|_{S \cap (-\infty,c)}(x) . \end{equation*}
The proposition above does not apply to one-sided limits. It is possible to have one-sided limits, but no limit at a point. For example, define $$f \colon \R \to \R$$ by $$f(x) := 1$$ for $$x < 0$$ and $$f(x) := 0$$ for $$x \geq 0\text{.}$$ We leave it to the reader to verify that
\begin{equation*} \lim_{x \to 0^-} f(x) = 1, \qquad \lim_{x \to 0^+} f(x) = 0, \qquad \lim_{x \to 0} f(x) \quad \text{does not exist.} \end{equation*}
We have the following replacement.
That is, a limit exists if both one-sided limits exist and are equal, and vice versa. The proof is a straightforward application of the definition of limit and is left as an exercise. The key point is that $$\bigl( S \cap (-\infty,c) \bigr) \cup \bigl( S \cap (c,\infty) \bigr) = S \setminus \{ c \}\text{.}$$
### Subsection3.1.5Exercises
#### Exercise3.1.1.
Find the limit (and prove it of course) or prove that the limit does not exist
a) $$\displaystyle \lim_{x\to c} \sqrt{x} \text{,}$$ for $$c \geq 0$$ b) $$\displaystyle \lim_{x\to c} x^2+x+1 \text{,}$$ for $$c \in \R$$ c) $$\displaystyle \lim_{x\to 0} x^2 \cos (\nicefrac{1}{x})$$ d) $$\displaystyle \lim_{x\to 0}\, \sin(\nicefrac{1}{x}) \cos (\nicefrac{1}{x})$$ e) $$\displaystyle \lim_{x\to 0}\, \sin(x) \cos (\nicefrac{1}{x})$$
#### Exercise3.1.5.
Let $$A \subset S\text{.}$$ Show that if $$c$$ is a cluster point of $$A\text{,}$$ then $$c$$ is a cluster point of $$S\text{.}$$ Note the difference from Proposition 3.1.15.
#### Exercise3.1.6.
Let $$A \subset S\text{.}$$ Suppose $$c$$ is a cluster point of $$A$$ and it is also a cluster point of $$S\text{.}$$ Let $$f \colon S \to \R$$ be a function. Show that if $$f(x) \to L$$ as $$x \to c\text{,}$$ then $$f|_A(x) \to L$$ as $$x \to c\text{.}$$ Note the difference from Proposition 3.1.15.
#### Exercise3.1.7.
Find an example of a function $$f \colon [-1,1] \to \R\text{,}$$ where for $$A:=[0,1]\text{,}$$ we have $$f|_A(x) \to 0$$ as $$x \to 0\text{,}$$ but the limit of $$f(x)$$ as $$x \to 0$$ does not exist. Note why you cannot apply Proposition 3.1.15.
#### Exercise3.1.8.
Find example functions $$f$$ and $$g$$ such that the limit of neither $$f(x)$$ nor $$g(x)$$ exists as $$x \to 0\text{,}$$ but such that the limit of $$f(x)+g(x)$$ exists as $$x \to 0\text{.}$$
#### Exercise3.1.9.
Let $$c_1$$ be a cluster point of $$A \subset \R$$ and $$c_2$$ be a cluster point of $$B \subset \R\text{.}$$ Suppose $$f \colon A \to B$$ and $$g \colon B \to \R$$ are functions such that $$f(x) \to c_2$$ as $$x \to c_1$$ and $$g(y) \to L$$ as $$y \to c_2\text{.}$$ If $$c_2 \in B\text{,}$$ also suppose that $$g(c_2) = L\text{.}$$ Let $$h(x) := g\bigl(f(x)\bigr)$$ and show $$h(x) \to L$$ as $$x \to c_1\text{.}$$ Hint: Note that $$f(x)$$ could equal $$c_2$$ for many $$x \in A\text{,}$$ see also Exercise 3.1.14.
#### Exercise3.1.10.
[note 2 ] Let $$c$$ be a cluster point of $$A \subset \R\text{,}$$ and $$f \colon A \to \R$$ be a function. Suppose for every sequence $$\{x_n\}$$ in $$A\text{,}$$ such that $$\lim\, x_n = c\text{,}$$ the sequence $$\{ f(x_n) \}_{n=1}^\infty$$ is Cauchy. Prove that $$\lim_{x\to c} f(x)$$ exists.
#### Exercise3.1.11.
Prove the following stronger version of one direction of Lemma 3.1.7: Let $$S \subset \R\text{,}$$ $$c$$ be a cluster point of $$S\text{,}$$ and $$f \colon S \to \R$$ be a function. Suppose that for every sequence $$\{x_n\}$$ in $$S \setminus \{c\}$$ such that $$\lim\, x_n = c$$ the sequence $$\{ f(x_n) \}$$ is convergent. Then show that the limit of $$f(x)$$ as $$x \to c$$ exists.
#### Exercise3.1.13.
Suppose $$S \subset \R$$ and $$c$$ is a cluster point of $$S\text{.}$$ Suppose $$f \colon S \to \R$$ is bounded. Show that there exists a sequence $$\{ x_n \}$$ with $$x_n \in S \setminus \{ c \}$$ and $$\lim\, x_n = c$$ such that $$\{ f(x_n) \}$$ converges.
#### Exercise3.1.14.
(Challenging) Show that the hypothesis that $$g(c_2) = L$$ in Exercise 3.1.9 is necessary. That is, find $$f$$ and $$g$$ such that $$f(x) \to c_2$$ as $$x \to c_1$$ and $$g(y) \to L$$ as $$y \to c_2\text{,}$$ but $$g\bigl(f(x)\bigr)$$ does not go to $$L$$ as $$x \to c_1\text{.}$$
#### Exercise3.1.15.
Show that the condition of being a cluster point is necessary to have a reasonable definition of a limit. That is, suppose $$c$$ is not a cluster point of $$S \subset \R\text{,}$$ and $$f \colon S \to \R$$ is a function. Show that every $$L$$ would satisfy the definition of limit at $$c$$ without the condition on $$c$$ being a cluster point.
#### Exercise3.1.16.
1. Prove Corollary 3.1.13.
2. Find an example showing that the converse of the corollary does not hold.
There are a plethora of notations for one-sided limits. E.g. for $$\lim\limits_{x \to c^-} f(x)$$ one sees $$\lim\limits_{\substack{x \to c\\x < c}} f(x)\text{,}$$ $$\lim\limits_{x \uparrow c} f(x)\text{,}$$ or $$\lim\limits_{x \nearrow c} f(x)\text{.}$$
This exercise is almost identical to the next one. It will be replaced in the next major edition.
For a higher quality printout use the PDF versions: https://www.jirka.org/ra/realanal.pdf or https://www.jirka.org/ra/realanal2.pdf
|
2022-10-07 12:15:35
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9956191182136536, "perplexity": 98.51091876733948}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030338073.68/warc/CC-MAIN-20221007112411-20221007142411-00407.warc.gz"}
|
https://cringproject.wordpress.com/2011/01/
|
## Archive for January, 2011
### Euclidean domains
January 23, 2011
We all learn in intro abstract algebra that a euclidean domain is a PID. It turns out that the converse is almost true. Namely, if one relaxes the definition of a euclidean norm (instead of a euclidean algorithm, you have something a bit weaker) you get something entirely equivalent to being a PID. This is apparently due to Greene in the Monthly, 1997 (and has a quick proof). Now, this material is in ch. 1.
Lang’s Algebra (as well as some of his other books, too) has a lot of these kinds of isolated references to scattered results in the literature. Some of these are quite interesting; it is probably worth adding more of these. Doing so will also make the book less “canonical”!
It happens, coincidentally, that we also got a donation on euclidean domains, which has been partially merged in.
### Split injections of free modules over local rings
January 17, 2011
The main latest change is the addition of the following lemma: Suppose given two free modules $F, F'$ over a local ring, of finite rank, and a morphism $\phi$ between them. Then $\phi$ is a split injection iff the base-change $F \otimes k \to F' \otimes k$ to the residue field is an injection. This is not too difficult to prove, but I realized today that Hartshorne uses it at a key point in proving that a nonsingular subvariety of a nonsingular variety is a local complete intersection. It is kind of glossed over there, probably for good reasons, but this lemma is now explained in our book.
The makefile is also fixed so that running “make” actually resolves cross-references. Apparently, you run pdflatex twice after invoking bibtex, and not the other way around. That makes sense.
### Injectives
January 12, 2011
So now we have a proof (in chapter 3) that category of modules over a commutative ring has enough injectives. Actually, two proofs. One is the standard dualization argument that appears in most textbooks. The other is a variant of the “small object argument” in homotopy theory and uses a bit more set-theoretic machinery. The latter has the advantage that it can be used to show that large classes of abelian categories have enough injectives (as Grothendieck does in his Tohoku paper). In my commutative algebra class, the teacher hinted that one could prove the theorem this way.
The idea is somewhat explained in this blog post, but not very well, and some of the technical points (e.g. filtered ordinals) are obscured there. Thanks to Johan de Jong for pointing this out.
Also, the formatting has changed a little. The chapter and section titles are not simply the defaults.
### The CHANCE project
January 10, 2011
Whoa. I didn’t realize that there was yet another one of us. It also uses the same license (the GNU FDL) and even has a similar-sounding name. As the name suggests, it’s about probability.
Apparently, the bandwagon we have jumped on is bigger than I thought.
### In which the CRing project’s website expands
January 7, 2011
The main website for the CRing project is now slightly improved. Namely, there’s now a downloads page which allows you to view individual chapters of the book. This idea was shamelessly copied from the analog for the Stacks project, of course. As usual, the website will be updated about once a day (which is slightly less frequently than the project actually gets edited!).
The project itself has been evolving as usual the past few days. I am not sure it makes sense to give a blow-by-blow account of every small edit (that’s what the git repository is for), but the major new addition is a small section on Oka families of ideals in the chapter on the Spec of a ring. This is basically an axiomatization of the familiar observation that an ideal maximal with respect to some property is often prime. We also have some new donations, which will start trickling into the main document soon.
The source files also now contain a bunch of Perl scripts that may be useful. This is entirely irrelevant to compiling the main document (CRing.pdf) but might help in other cases. Let me briefly explain what they do:
• scripts/makenamelist.perl keeps the list of chapters (in the tmp/ directory) up to date.
• scripts/script.perl updates the makefile (which should be done after you add a new chapter or remove a chapter) and creates files in the aux/ directory that when compiled will produce precisely one chapter. This only needs to be run after you add a new chapter. However, there is a better way to do this: make update_tmp will run the script as well as the one that updates the name list.
• Speaking of which, the makefile is now better. “make chflat.pdf” (or more generally “make ch(name).pdf”) will, for instance, produce a PDF file containing the chapter on flatness alone. The xr package is used to get the cross-references with the rest of the document working. “make chapters” will do all the chapters (and, incidentally, the whole book as well).
• If you want to run a script by itself, this should be done from the main directory.
• If for whatever reason you don’t have “make” (e.g. you use Windows), you can run “pdflatex aux/ch(name).tex” from the main directory twice (after compiling the book itself, pdflatex CRing.tex) to get the individual chapters.
Not that these are likely to be used too often by contributors — they’re probably most useful for now in getting the website automatically updated. Later we might need them if we want to put a table of contents in each chapter or something like that (and for whatever reason can’t use shorttoc). Also, I don’t know programming, so people should feel free to edit these.
|
2017-10-24 00:33:52
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 4, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7662655115127563, "perplexity": 676.4455608213713}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-43/segments/1508187827662.87/warc/CC-MAIN-20171023235958-20171024015958-00484.warc.gz"}
|