Quadratic and Linear WL Placement
Using Quadratic Programming: Gordian & GordianL
Shantanu Dutt
ECE Dept., Univ. of Illinois at Chicago
Acknowledgements: Adapted from the slides
“Gordian Placement Tool: Quadratic and Linear Problem Formulation”
by Ryan Speelman, Jason Gordon, and Steven Butt
UCLA, EE 201A, 5-6-04
Papers Covered
• J. Kleinhans, G. Sigl, F. Johannes, K. Antreich,
"GORDIAN: VLSI Placement by Quadratic
Programming and Slicing Optimization", IEEE
Trans. on CAD, pp 356-365, 1991.
• G. Sigl, K. Doll and F.M. Johannes, "Analytical
placement: A Linear or Quadratic Objective
Function?", Proc. DAC, pp 427-432, 1991.
Quadratic Problem Formulation
• Find approximate positions for blocks (global placement).
• Try to minimize the sum of squared wire length.
• Sum of squared wire length is quadratic in the
cell coordinates.
• The global optimization problem is formulated as
a quadratic program.
• It can be proved that the quadratic program is
convex, and as such, can be solved in
polynomial time
Quadratic Problem Formulation
Let (x_i, y_i) = coordinates of the center of cell i
    w_ij = weight of the net between cell i and cell j
    x, y = solution vectors
Cost of the net between cell i and cell j:
    w_ij [ (x_i - x_j)^2 + (y_i - y_j)^2 ]
Total cost:
    (1/2) x^T Q x + d_x^T x + (1/2) y^T Q y + d_y^T y + const
Constants in the total-cost equation are derived from information
about chip constraints, such as fixed modules.
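To make this concrete, here is a minimal one-dimensional sketch (Python/NumPy; the three-cell netlist, weights, and pad positions are hypothetical) that assembles Q and d from two-pin net weights and fixed-pad connections, then solves the stationarity condition Qx = -d:

import numpy as np

# Hypothetical 1-D instance: 3 movable cells, two-pin nets (i, j, weight),
# plus connections to fixed pads (cell, pad position, weight).
nets = [(0, 1, 1.0), (1, 2, 2.0)]
pads = [(0, 0.0, 1.0), (2, 10.0, 1.0)]

n = 3
Q = np.zeros((n, n))
d = np.zeros(n)
for i, j, w in nets:            # Laplacian-style entries for movable pairs
    Q[i, i] += w; Q[j, j] += w
    Q[i, j] -= w; Q[j, i] -= w
for i, p, w in pads:            # fixed pads add to the diagonal and to d
    Q[i, i] += w
    d[i] -= w * p

# The quadratic cost is minimized where its gradient vanishes: Q x = -d
x_star = np.linalg.solve(Q, -d)
print(x_star)

Note that scaling Q and d by the same constant (eg, the 1/2 factor in the cost) does not change the minimizer, so it can be folded in either way.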
Quadratic Problem Formulation
• Look closer at the one-dimensional problem
– Cost = (1/2) x^T C x + d^T x
• At the i-th level of optimization, the placement
area is divided up into at most q ≤ 2^i regions
• The centers of these regions impose constraints
on the global placement of the modules:
A^(i) x = u^(i)
• The entries of the matrix A^(i) are all 0 except for
one nonzero entry per row, corresponding to the region
that a given module belongs to
Quadratic Problem Formulation
• Combine the objective function and the linear constraints
to obtain the linearly constrained quadratic programming
problem (LQP)
• Since the objective is convex over the constrained
solution space, the LQP has a unique
global minimum (x*)
• Gordian does not use partitioning to reduce the
problem size, but to restrict the freedom of
movement of the modules
• Decisions in the partitioning steps place modules
close to their final positions, so good partitioning is crucial
• Decisions are made based on global placement
constraints, but also need to take into account the
number of nets crossing the new cut line
F_p and F_p' are the new partition areas; alpha is the area ratio, usually 0.5;
C_p is the sum of the weights of the nets that cross the partition
Improving Partitioning
• Variation of cut direction and position
– Going through a sorted list of module coordinates,
you can calculate Cp for every value of α by drawing
the partition line after each module in sequence
• Module Interchange
– Take a small set of modules in the partition and apply
a min-cut approach
• Repartitioning
– In the beginning steps of global optimization, modules
are usually clustered around the centers of their regions
– If regions are cut near the center, placing a module
on either side of the region could be fairly arbitrary
– Apply a heuristic: if two modules overlap near a cut,
they are merged into one of the regions
Final Placement
• A final placement is the last, but possibly most important,
step in the GORDIAN Algorithm
• After the main body of the GORDIAN algorithm finishes,
which is the alternating global optimization and
partitioning steps, each of the blocks containing k or fewer
modules needs to be optimized.
• For the standard-cell design the modules are collected
in rows; for the macro-cell design an area optimization is
performed, packing the modules into a compact slicing structure.
Standard Cell Final Placement
• In Standard Cell Designs the Modules are approximately the same
height but can vary drastically in width.
• The region area is determined by the widths of the channels between
the rows and by the lengths of the rows.
• The goal is to obtain narrow widths between rows by having equally
distributed low wiring density and rows with equal length.
• Creating rows of about equal length is necessary for a low-area
design. This is done by estimating the number of feed-throughs in
each row and making rows with many feed-throughs shorter than
average to allow for the feed-through blocks that will be needed. In
the end the row lengths should not vary from the average by more
than 1-5%.
• A final row-length optimization is performed by interchanging selected
modules in nearby rows that have y-coordinates close to the cut line.
Linear or Quadratic Objective
- Gordian used a quadratic objective function as the cost
function in the global optimization step
- Is a linear objective function better?
- What are the tradeoffs for each?
- What are the results of using a linear objective function
compared with using a quadratic one?
Comparison of Linear and Quadratic
Objective Function
Quadratic objective function:
Min Σ_{nets n_i} (l_av + d_i)^2
where net n_i has length l_i = l_av + d_i
(d_i = deviation from the average length l_av)
• +ve and -ve deviations add up
• Thus the above formulation also minimizes the
deviations d_i (in addition to l_av)

Linear objective function:
Min Σ_{nets n_i} (l_av + d_i)
• +ve and -ve deviations cancel each other
• Thus the above formulation only minimizes l_av
Comparison of Linear and Quadratic
Objective Function
- Minimization of the quadratic objective function tends to
make very long nets
- Minimization of the linear objective function results in
shorter nets overall
Comparison cont’d
- Quadratic objective function leads to more routing in
this standard cell circuit example
- This observation is the motivation to explore linear
objective functions in further detail for placement
GordianL
• Retains the basic strategy of the Gordian
algorithm by alternating global placement and
partitioning steps
• Modifications include the objective function
for global placement and the partitioning
- Linear objective function
- Iterative partitioning
Model for the Linear Objective
- All modules μ connected by net v are in the set M_v
- The pin coordinates are (x_μv, y_μv)
- The module center coordinates are (x_μ, y_μ),
with the relative pin coordinates being (x̃_μv, ỹ_μv),
so that x_μv = x_μ + x̃_μv
- The coordinates of the net nodes are always in the center of their
connected pins, meaning x_v = (1/|M_v|) Σ_{μ∈M_v} x_μv
Linear Objective Function
Quadratic objective function:
Φ_q(x) = Σ_v Σ_{μ∈M_v} (x_μv - x_v)^2
Linear objective function:
Φ_l(x) = Σ_v Σ_{μ∈M_v} |x_μv - x_v|
- Quadratic objective functions have been used in the past because
they are continuously differentiable and therefore easy to
minimize by solving a linear equation system.
- Linear objective functions have been minimized by linear
programming with a large number of constraints
- This is much more expensive in terms of computation time
- An adjustment to the function needs to be made
Quadratic Programming for the
Linear Objective Function
- We can rewrite the linear objective with each term expressed as a
quadratic divided by its own magnitude in the previous solution:
Φ^(k)(x) = Σ_v Σ_{μ∈M_v} (x_μv^(k) - x_v^(k))^2 / |x_μv^(k-1) - x_v^(k-1)|
- The above is iterated until |Φ^(k) - Φ^(k-1)| < ε
- Thus in the k-th iteration we are solving a quadratic program whose
weights are taken from the (k-1)-th solution
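This reweighting is essentially iteratively reweighted least squares. A minimal one-dimensional sketch (Python/NumPy; the netlist and pad positions are hypothetical, and an epsilon floor guards the division):

import numpy as np

# Hypothetical 1-D instance: movable-movable nets and fixed-pad nets
nets = [(0, 1), (1, 2)]
pads = [(0, 0.0), (2, 10.0)]
n, eps = 3, 1e-3
x = np.zeros(n)                 # starting placement

for _ in range(50):
    C = np.zeros((n, n))
    d = np.zeros(n)
    for i, j in nets:
        w = 1.0 / max(abs(x[i] - x[j]), eps)   # 1/|previous length|
        C[i, i] += w; C[j, j] += w
        C[i, j] -= w; C[j, i] -= w
    for i, p in pads:
        w = 1.0 / max(abs(x[i] - p), eps)
        C[i, i] += w
        d[i] -= w * p
    x_new = np.linalg.solve(C, -d)
    if np.max(np.abs(x_new - x)) < 1e-6:       # stop when weights stabilize
        break
    x = x_new
print(x)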
Quadratic Programming for the
Linear Objective Function
- Experiments show that the area after final routing is better
if the common reweighting factor is replaced by a net-specific factor
Quadratic Programming cont’d
-The advantages of this approach are:
1. The summation reduces the influence of nets with many
connected modules and emphasizes the majority of nets
connecting only two or three modules.
2. The force on modules close to the net node is reduced;
this helps in optimizing a WL metric close to HPBB, in which
only the coordinates of the boundary cells of the BB (those that are
far from the "centroid" or "net node" coordinates) matter.
- To solve the problem an iterative solution
method is constructed with iteration count k
for the modified objective
The quadratic programming problem
can now be solved by a conjugate
gradient method with preconditioning
by incomplete Cholesky factorization
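A rough sketch of that final solve (Python/SciPy; the 3x3 matrix is a toy stand-in, and since SciPy has no built-in incomplete Cholesky, a simple diagonal (Jacobi) preconditioner is used for illustration):

import numpy as np
from scipy.sparse import csr_matrix, diags
from scipy.sparse.linalg import cg

# Small hypothetical SPD placement matrix and right-hand side b = -d
Q = csr_matrix(np.array([[ 2.0, -1.0,  0.0],
                         [-1.0,  3.0, -2.0],
                         [ 0.0, -2.0,  3.0]]))
b = np.array([0.0, 0.0, 10.0])

# Diagonal preconditioner as a simple stand-in for incomplete Cholesky
M = diags(1.0 / Q.diagonal())
x, info = cg(Q, b, M=M)     # info == 0 indicates convergence
print(x, info)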
Iterative Partitioning
- Modules in a region are bipartitioned
iteratively instead of in one step
- The module set is partitioned into two subsets
such that the area constraint is satisfied
- Also, to distribute the modules better over the whole placement area,
positioning constraints fix the center of gravity of the modules in each
subset on the center coordinate of its region, i.e.
(Σ_μ a_μ x_μ) / (Σ_μ a_μ) = u_p (the region center)
Iterative Partitioning
- The modified iterative partitioning
forces the modules more and more
away from the center of the region
- The second iteration step partitions the remaining middle set
into two further sets
- The iterative process finishes when the remaining set becomes empty
- The number of modules assigned to the two sets is
determined by the area constraint
Conclusions
- The Gordian algorithm does well with large numbers of modules
- Global optimization is combined with partitioning schemes
- The choice of the objective function is crucial to an analytical
placement method
- GordianL yields area improvements of up to 20% after final routing
- The main reason for this improvement was the length reduction of
nets connecting only two and three pins
Internal simulation capabilities include energy minimization and molecular dynamics.
Molecular Mechanics
Two different algorithms, the conjugate gradient and variable metric methods, are implemented in Direct Force Field for energy minimization. The conjugate gradient method is based on the
Polak-Ribiere method, which is similar to the Fletcher-Reeves method. The variable metric method is also called the quasi-Newton method. Both methods involve calculating the derivatives of the potential energy function.
Restrained energy minimization can be performed to explore energy profiles. This is done by adding energy terms on selected internal coordinates to the total energy function. The restraint function
is a harmonic function in which the reference value and force constant can be specified. The restraint energy value is subtracted from the total energy in the results reported.
Molecular Dynamics
The Verlet velocity algorithm is implemented in Direct Force Field. The available ensembles are:
• NVE (microcanonical)
• NVT (canonical)
• NPT (isothermal-isobaric)
The temperature can be controlled using two methods:
• Direct Velocity Scaling
• Andersen Stochastic Method
Pressure changes can be accomplished by changing the coordinates of the particles and the size of the unit cell in periodic boundary conditions. Berendsen's method couples the system to a pressure
"bath" to maintain the pressure at a certain target.
Pain-Topics News/Research UPDATES
Part 15 – NNT, NNH, and Harm-to-Benefit Ratios
Pain research reports typically convey an array of statistical analyses, but among the more useful, and at the same time least frequently presented, is the Number-Needed-to-Treat (NNT) for benefit or
harm from a therapy or intervention. In simplest terms, NNT helps healthcare providers, and their patients, to assess the extent to which a pain treatment is likely to be helpful or harmful in
specific ways. This article in the series “Making Sense of Pain Research” explains why and how to calculate and use NNT statistics, which are simple in concept yet can be challenging to properly interpret.
The concept of NNT was proposed 25 years ago as a useful measure of the clinical effects and consequences of a treatment [Laupacis et al. 1988]. It estimates the effort that practitioners must expend
in treating patients to help attain a good outcome (eg, pain relief) or to avoid an undesirable consequence (eg, adverse effect) of a therapy or intervention [DiCenso 2001; McAlister 2008].
Additionally, NNT is a meaningful way of expressing the magnitude of a treatment effect in contrast to either a control (eg, placebo or no treatment) or comparative intervention.
Furthermore, knowing the NNT helps to determine whether likely treatment benefits will overcome associated harms and costs. For example, it could be reasonable to treat 2 patients with a new,
relatively safe, but very expensive analgesic to achieve 50% pain relief in one of those patients (NNT=2), when compared with an older, safe, but much less expensive analgesic requiring that 4
patients must be treated for one of them to achieve the same level of pain relief (NNT=4).
In overview, NNT is a statistical measure of effect that helps to answer several clinical questions of importance to practitioners and their patients:
• How many patients need to be treated with a therapy or intervention for one of them to benefit in some way?
• How many patients need to be treated for one of them to be harmed (eg, adverse effect) in some way?
• How many patients might benefit from treatment for each one who experiences some harmful effect?
When an NNT reflects an undesirable event (eg, adverse effect) it is usually denoted as NNH, or Number-Needed-to-Harm. Some authors suggest that, whether signifying benefit or harm, the concept of
NNT remains the same and should be designated as either “NNT-B” (NNT to benefit) or “NNT-H” (NNT to harm) [Cates 2005]. This makes sense; but NNT-B or NNT-H are rarely used, and it is most important
that researchers unambiguously communicate in their reports what is intended by an NNT or NNH and its significance.
In the following discussions, the term “NNT” is used primarily; although, the calculations of NNH are exactly the same as for NNT. In both cases, smaller numbers imply either greater benefit or
greater harm, depending on the context. A correctly described NNT or NNH should always specify the treatment and its outcome of interest, the comparator, the duration of time necessary for achieving
the outcome, and the 95% Confidence Interval with P-value for the statistic [Moore 2009].
The NNT may be reported for individual clinical trials or in meta-analyses that combine multiple studies. When the NNT is not provided by researchers, it often can be calculated from data in the
report or estimated from other statistical measures of effect, as described below. However, while NNT is a relatively straightforward measure of effect, there are subtleties and nuances of this
statistic that readers of pain research literature should understand for proper interpretation and use.
NNT – An Essential Measure of Treatment Effect
The Number-Needed-to-Treat (NNT) was first discussed in Part 6 of this series [here], which considered bottom-line treatment effects presented as “risk statistics” in reports of clinical trials.
Relative Risk (RR), Relative Risk Reduction (RRR), and Absolute Risk Reduction (ARR) were explained in some detail (and readers may find it helpful to review the earlier material in Part 6).
The notion of “risk” in statistical parlance refers to the probability of an event occurring that is either beneficial or harmful depending on what sort of outcomes are being measured in the research
study. For calculating NNT in clinical trials, the most important metric is the ARR — or, its counterpart, Absolute Risk Increase, ARI — both of which might be most simply described as the Absolute
Risk Difference (ARD) between 2 groups that are being compared: an experimental (eg, therapy or intervention) group and a control (eg, placebo, comparator, or no treatment) group [Citrome 2007;
Sackett et al. 1997].
The ARD is simply the difference between the probability or rate of an event of interest occurring in the control group (Control Event Rate, or CER) and the rate in the experimental group (
Experimental Event Rate, or EER). Thus, ARD = CER-EER [Citrome 2007, 2008; McAlister 2008; Sackett et al. 1997]. The ARD can reflect an increase or decrease in event rates, depending on study design
and what is being measured.
Since the CER and EER are expressed as probabilities — ie, the proportion, or the percentage converted to decimal form — values for each can range from 0.0 to 1.0. And, the difference between the
two, or ARD, can range in value from -1.0 to +1.0, with 0.0 indicating no difference between groups. The clinical meaning of positive vs negative ARDs must be interpreted within the framework of the
individual study design (discussed below).
The NNT is then simply calculated as the inverse (reciprocal) of the Absolute Risk Difference, or NNT = 1/ARD. Mathematically, this indicates how many persons must receive the experimental treatment
rather than the control/comparator treatment for 1 of them to realize the effect, whether it is beneficial or harmful (ie, NNH).
Example: Suppose hypothetically that in a large study (total n=5000) the Relative Risk of hip fractures due to falls in elderly patients with knee osteoarthritis was 60% (RR=0.60) in the
group prescribed a new analgesic compared with those prescribed an older drug during a one-year period. The Relative Risk Reduction would be 40% (RRR = 1–0.60 = 0.40). This sounds clinically
important and worthwhile since, compared with the older drug, the newer analgesic reduced fracture occurrences by 40%.
However, in absolute terms, suppose that during the year of study 20/2500 or 0.80% of patients in the control group taking the older analgesic experienced falls/fractures (CER), compared with
12/2500 or 0.48% taking the new drug (EER). This is still a 40% reduction in risk, and possibly statistically significant, but the absolute risk difference (ARD, or CER-EER) is only 0.32%,
and the NNT=313 (ie, 1/0.0032=312.5).
That is, for every 313 elderly patients with osteoarthritis prescribed the new analgesic rather than the older drug during a one-year period, 1 additional patient might be spared a hip
fracture due to a fall. This effect also may be expressed as a frequency: 1 out of 313 elderly patients taking the new analgesic rather than the older one for a year might avoid hip fracture.
Either way, this benefit of the new analgesic would not seem very impressive, unless it also has much lower cost and/or offers important other advantages over the older drug.
The smaller the NNT the greater the effect size, or fewer patients needed to be treated for benefit or harm. Depending on study design and context, calculated NNTs can be positive or negative in
value; although, they always are presented as positive whole numbers. Calculations resulting in numbers with fractions (eg, 312.5 in the above example) are rounded to the nearest whole number, since
fractions of patients are meaningless in this context.
To be conservative, most authors recommend that the NNT should always be rounded upward; eg, NNT=3.6 would become 4 [Citrome 2008]. Although, in some cases small fractions might be more appropriately
rounded downward; for example, if 3.1 is rounded up to 4, instead of down to 3, it might significantly understate the NNT effect size (ie, the larger NNT artificially portrays a smaller effect).
Research report authors should indicate how rounding was done since it can make a difference when NNT or NNH values are small in size.
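For readers who want to script the calculation, here is a minimal sketch (Python; the function name is ours) using the hip-fracture example above and the conservative round-up convention:

import math

def nnt(cer, eer):
    """NNT from control and experimental event rates (rounded up)."""
    ard = cer - eer                  # absolute risk difference
    return math.ceil(1.0 / abs(ard))

# Hip-fracture example: CER = 20/2500, EER = 12/2500
print(nnt(20 / 2500, 12 / 2500))     # -> 313 (1/0.0032 = 312.5)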
NNT Requirements, Limitations & Caveats
Unfortunately, when learning about research statistics, it seems almost axiomatic that no estimate of treatment effect size is ever so simple that it cannot be complicated beyond the comprehension of
average readers of the medical literature. So it is with NNT; essentially a simple concept, but with important requirements, limitations, and cautions regarding its calculation and interpretation.
When wrongly construed — whether intentionally or not — various factors can bias the presentation of NNT, or NNH, in a report to make a pain treatment unduly appear either much more or much less beneficial than it really is.
A. Study Design & Context Are Critical
NNTs are only useful when the evidence on which they are based fulfills criteria of good quality in terms of methodology, sample size, accuracy, reliability, and validity [Moore 2009].
Additionally, the study’s design and purpose must be taken into account.
For example, “risk” is usually thought of as being negative, but statistically it can represent events that are either favorable (eg, pain relief) or unfavorable (eg, adverse effects) depending
on the study design. As noted above, the Absolute Risk Difference (ARD), can reflect either an increase or decrease in “risks” per se, which may be desirable or undesirable depending on the
context of what is being measured.
Example: If the outcome of interest is pain relief with a new analgesic, one might expect more event occurrences in the treatment than in the control (eg, placebo) group — the corresponding
ARD (CER-EER) would be negative, even though the outcome is favorable. In the case of a treatment that reduces an undesirable outcome (eg, occurrence of a side effect), there would be more
events in the control group and the ARD would be both positive and favorable.
Just to add another layer of potential confusion, some biostatisticians suggest calculating the ARD as EER–CER, rather than CER–EER as shown above. In this case, the absolute value remains the
same but the sign, plus or minus, is reversed. In all cases, when converting the ARD to an NNT (1/ARD) the plus or minus sign is usually discarded, but it is still important to keep in mind
whether the NNT represents more events in the control or experimental group.
A single study or trial can have a number of NNTs representing different therapeutic endpoints of interest and representing benefits or harms. Furthermore, when comparisons of NNTs are made
across studies, as in meta-analyses, it is essential that there is clinical homogeneity — ie, the data have been derived for the same treatments and outcomes, assessed against similar
comparators, in similar patient populations at the same stage of disease, and followed for the same duration of time [Anon 2003; McAlister 2008]. Otherwise, conclusions about the similarity of
NNTs, or NNHs, can be misleading.
B. NNTs Require Dichotomous Data
An NNT (or NNH) can only be calculated from dichotomous or binary data; that is, those in which subjects are categorized as either achieving a specified endpoint or not doing so [Moore 2009]. For
example, the proportion (eg, percentage) of subjects achieving ≥50% pain relief from baseline as a result of a therapy versus those who do not reach this endpoint. Or, the percentage of subjects
who experience an adverse effect of importance versus those who do not.
Continuous or scaled data representing a range of individual responses — such as differences between groups in overall changes in average pain scores on a 100-point visual analog scale — cannot
be portrayed as NNTs unless the data are transformed or categorized into yes/no types of outcomes. For example, within each group, categorizing changes in pain scores below a certain point as a
favorable response and the rest as unfavorable.
The need for dichotomous data is sometimes neglected by researchers and readers of the literature. It can be very tempting to just subtract a mean percentage outcome on an endpoint measure in the
treatment group from that in the control group and treat it like an ARD (Absolute Risk Difference) to calculate an NNT — which would be incorrect.
C. Categories Can Be Misleading
A typical NNT may represent the proportion of subjects achieving 50% pain relief due to a therapy, but it does not depict that many patients might have actually fared much better, say 75% or 90%
relief; or, at the other extreme, much worse (eg, only 10% or 25% pain relief). Or, an NNH may indicate the proportion of subjects who experienced an adverse effect compared with those who did
not, but it does not suggest if it was a singular event for most patients or a more frequent problem in many of them.
D. Timeframes Matter
The interpretation of NNTs must take into account the timeframes during which events were observed [DiCenso 2001, McAlister 2008]. Most studies in the pain field are rather short term, involving
weeks rather than months or years, even though in clinical practice it often may take some time for beneficial or harmful effects to fully emerge.
It is essential that statements describing NNTs specify the time periods of observation. An NNT of 3 may seem favorable, but does this require 4 weeks of treatment, 6 months, a year, or longer?
And, there could be important clinical implications for a treatment requiring only 4 weeks versus several months for 1 in 3 patients to benefit.
Along with that, NNTs across similar studies of the same treatment, but with different timeframes, cannot be accurately compared with each other [Moore 2009]. For example, in two
identically-designed studies, with one lasting 4 weeks and the other 6 months, NNTs of 3 in each study favoring some treatment may not be truly equivalent; largely, because each NNT was observed
during different timeframes.
Furthermore, an NNT cannot usually be simply divided by some number to denote a shorter time period. For example, an NNT=12 for treatment during 4 years is not the same as an NNT=3 for 1 year (12
/4); unless it is known that the rates of events (CER and EER) are consistently and equally distributed over the total 4-year timeframe. In most cases, events may occur either early or late
during an observation period; so, if it is desirable to estimate NNTs at various time points a time-to-event analysis — eg, Cox proportional hazards or Kaplan-Meier survival analysis — must be
applied to the data.
In that regard, NNTs for benefit or harm can be calculated from epidemiological data collected over time and reflected in incidence or prevalence statistics [discussed in Part 14 of this series
here]. It is important, however, that time periods of data collection are taken into account, with an understanding that rates of events may vary at different time points and cumulatively over
time. This complicates data interpretation and can be a source of bias in how NNTs are presented in research reports that ignore time-based effects.
Another important caveat is that epidemiological data reported as patient-years must be used cautiously for calculating NNTs, since NNTs portray effects in terms of individual patients (not
patient-years) during the time period of observation [Suissa 2009]. For example, data expressed as 200 patient-years could reflect 200 patients treated for 1 year, 100 patients treated during 2
years, 50 patients treated for 4 years, and so on.
Example: Part 14 in this series described a study assessing benefits of a zoster-virus vaccine for decreasing the incidence of painful herpes zoster (shingles) and postherpetic neuralgia
(PHN) in older adults [Oxman et al. 2005]. The researchers reported that, overall, during their 5-year study the vaccine significantly (p<0.001) reduced the shingles incidence rate from 11.12
per 1000 person-years in the placebo group to 5.42 per 1000 person-years in the vaccine group. In the report, they did not present NNTs for the data.
A later article describing this study noted that the NNT was 175; ie, “1 case of shingles is avoided for every 175 people vaccinated” [Thakur and Philip 2012]. This was probably calculated
from the overall incidence rates per 1000 person-years: ARD = 0.00542 (vaccine) – 0.01112 (placebo) = –0.0057; NNT=1/0.0057≈175. However, this was faulty in that the NNT calculated this way
reflects patient-years rather than individual patients, and the authors did not qualify their NNT by specifying the total observation period, or 5 years in this case.
Other authors described the NNT to prevent 1 case of shingles over 3 years as 58 [Fashner and Bell 2011]. They did not indicate how this was calculated; however, if they erroneously assumed
that the NNT of 175 represented a single year, then spreading that out instead over each of 3 years (175/3) would be an NNT≈58.
A more valid calculation can be made from the cumulative incidence curves (Figure, from Oxman et al. 2005), which illustrate how cumulative incidence proportions in the placebo and vaccine groups gradually diverged over time.
From the curves it can be estimated that cumulative incidence rates at 3 years were about 3.2% in the placebo group and 1.5% for vaccine, or an NNT≈59 (1/[0.032–0.015]; rounded up). This is
close to the number from Fashner and Bell [2011] noted above, but more validly calculated.
Similarly, at the end of the study the cumulative incidences were about 5.3% (placebo) and 2.5% (vaccine), or an NNT≈36 (1/[0.053–0.025]). That is, during a 5-year period 1 of 36 persons
treated with the zoster vaccine rather than placebo could be spared from developing shingles. This is remarkably different from and more favorable than the NNT of 175 suggested in some
reports [eg, Thakur and Philip 2012].
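The corrected arithmetic is easy to verify in code (Python; the incidence values are the approximations read from the curves above):

import math

nnt_3yr = math.ceil(1 / (0.032 - 0.015))   # -> 59, close to Fashner and Bell's 58
nnt_5yr = math.ceil(1 / (0.053 - 0.025))   # -> 36 over the full 5-year study
print(nnt_3yr, nnt_5yr)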
At the very least, the above example shows how NNT calculations can differ depending on the subtle ways that data are presented and interpreted. Unfortunately, interpretations in the pain
literature sometimes are incorrect and critical readers need to understand enough about NNTs to check them for accuracy and validity.
Deriving NNTs from Other Effect-Size Estimates
As discussed above, an NNT derived from an Absolute Risk Difference (ARD) is a clinically intuitive measure of treatment effect, provided certain constraints are taken into account. One reason NNTs
may not be more frequently presented in pain research reports is because other estimates of effect size — Odds Ratio (OR), Risk Ratio (RR), Relative Risk Reduction (RRR), or Standardized Mean
Difference (SMD) — that may seem impressive could turn out to be less clinically remarkable if converted to NNT [Kraemer and Kupfer 2006; Moore 2009]. In the hypothetical example above regarding a
new analgesic for osteoarthritis in elderly patients, a seemingly important 40% RRR in fractures due to falls turned out to have a lackluster NNT of 313.
Regrettably, authors reporting other effect-size statistics often do not provide all of the necessary event-rate data to derive an Absolute Risk Difference (ARD) between groups for directly
calculating the NNT (or NNH). So, what can be done? Fortunately, conversion formulas have been devised that allow transforming other effect-size estimates into NNTs, as described below.
[Caveat: Readers with “statistiphobia” to some extent may wish to skip the remainder of this section. The important point is that NNTs can be calculated from other measures of effect — whether OR,
RR, RRR, or SMD — but only if report authors are willing and able to do so for helping readers to better understand the clinical impact of their study outcomes. Otherwise, readers are left to do
the conversions on their own, using the approaches below.]
An important requirement of the conversion formulas below is having a known or an estimated value for event/risk rates in the control or comparator group. The focus in the formulas on rates in the
group not receiving the new/experimental treatment of interest is because that group is used as the reference for comparison purposes when calculating and defining the NNT (or NNH).
If data suggesting the Control Event Rate (CER) are presented somewhere in the research report, then these numbers can be directly plugged into the formulas below. If not, it is necessary to estimate
a parameter called the “Patient Expected Event Rate,” or PEER. This also is sometimes called an “Assumed Control Risk,” or ACR (eg, in the Cochrane Handbook, Higgins and Green 2011).
Numerically, PEER is the indirectly estimated risk of an event in the control group of a particular study, or it can be the background prevalence in an untreated population. Essentially, it
represents the assumed event rate for an outcome measure in patients who do not receive the experimental therapy or intervention of interest; ie, those who either receive placebo, a comparator or
conventional treatment, or no treatment.
How does one go about determining the PEER? This can require investigation, since some authors recommend finding similar trials in the literature that may or may not test the exact same experimental
treatment, but do involve the same clinical health condition and provide outcome data for a control group of patients (eg, those receiving placebo or no treatment). Review articles or product
information sometimes can be helpful in this regard or, depending on the circumstances, there may be baseline population prevalence data available for estimating event rates in persons receiving
usual care or no treatment for a particular condition.
Example: In a large, cross-sectional, data-mining study spanning one year, researchers used prescription of medications for erectile dysfunction and/or testosterone replacement as a surrogate
measure of sexual dysfunction/hypogonadism in men receiving long-term opioid therapy for chronic back pain [Deyo et al. 2013; also discussed in a Pain-Topics UPDATE here]. Patients receiving
higher daily opioid doses (≥120mg morphine-equivalents) exhibited greater evidence of sexual dysfunction than those who did not receive opioid analgesics; Odds Ratio (OR) = 1.58.
The report authors do not provide specific event-rate data in patients not receiving opioids (comparator group); however, according to review articles, the prevalence of symptomatic hypogonadism
in the general male population is typically about 5% to 6%. Using these data as the PEER, along with the OR in the study, the NNH can be calculated (see formula below) to yield values of 32 to
37. That is, for every 32 to 37 men treated with high-dose opioid analgesics during one year, rather than not being treated with those agents, 1 more patient than normally expected may
experience symptoms of sexual dysfunction/hypogonadism.
1. Calculating NNT from a Risk Ratio (RR)
It is not uncommon for research reports in the pain field to provide RRs (Risk Ratios, or Relative Risks), along with their Confidence Intervals and P-values. Converting RR or RRR to NNT can be
done with the aid of simple formulas [Chatellier et al. 1996; DiCenso 2001; Higgins and Green 2011; McAlister et al. 2000; Sackett et al. 1997].
When the CER in known or a PEER can be estimated, the following formula is used with an RR:
NNT from RR = 1/([CER or PEER] x [1-RR])
If the Relative Risk Ratio (RRR) is given instead of RR, and since the RRR is equal to [1-RR] in the above formula, NNT also can be calculated as follows:
NNT from RRR = 1/([CER or PEER] x RRR)
Whether the NNT pertains to a benefit or harm (ie, NNH) depends on the research design. These calculations are relatively easy to perform; however, there also is a convenient Microsoft Excel
worksheet in our Pain-Topics PTCalcs program for doing this [NNT-from-RR available here]. Using the worksheet facilitates easily testing different CER or PEER values to see how NNT results might
vary. However, as noted above, if the RR or RRR is not statistically significant (ie, if the 95% CI range crosses 1.0, or p>0.05) any NNT also will be non-significant.
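The RR conversion is equally scriptable (Python; function name ours). With the values from the earlier hypothetical hip-fracture example (RR=0.60, CER=0.008) it reproduces the NNT of 313:

import math

def nnt_from_rr(rr, peer):
    """NNT from a risk ratio and a control/assumed event rate (CER or PEER)."""
    return math.ceil(abs(1.0 / (peer * (1.0 - rr))))

print(nnt_from_rr(0.60, 0.008))   # -> 313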
2. Calculating NNT from an Odds Ratio (OR)
Odds Ratios (ORs) were discussed in Part 7 of this series [here], and these estimates of effect can be confusing or difficult to interpret. Still, in many pain research reports only the ORs for
outcomes of interest are indicated and it would be more clinically useful if these also could be converted to NNTs. There are formulas that readers can use to convert an OR to NNT (or NNH), at
least approximately [Anon 2012; Higgins and Green 2011; Kraemer and Kupfer 2006; McAlister et al. 2000; Sackett et al. 1996, 1997].
First, the best possible (ie, lowest) NNT may be calculated from an Odds Ratio alone by the following equation:
Minimal NNT from OR = [(√OR)+1] / [(√OR)–1]
Using this formula, although the actual NNT may be larger, it will not be smaller than this value.
Second, if the CER is known or a PEER can be estimated, a more accurate NNT can be calculated as follows:
NNT from OR = (1 – [CER or PEER] x [1–OR]) / ([CER or PEER] x [1–OR] x [1 – (CER or PEER)])
In all cases, if the OR pertains to a harm of some sort (eg, chance of an adverse effect), then the same formulas are used for calculating NNH. The calculations are rather tedious, so there is a
Microsoft Excel worksheet in the Pain-Topics PTCalcs program for doing these [NNT-from-OR available here]. In this worksheet, values for OR and CER or PEER can be simply inserted to derive
results, and a range of values can be easily tested.
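As an alternative to the worksheet, here is a sketch of both OR-based formulas (Python; function names ours), checked against the Deyo et al. example (OR=1.58, PEER of 5% to 6%):

import math

def minimal_nnt_from_or(odds_ratio):
    """Best-possible (lowest) NNT from an odds ratio alone."""
    s = math.sqrt(odds_ratio)
    return (s + 1.0) / (s - 1.0)

def nnt_from_or(odds_ratio, peer):
    """More accurate NNT when the CER or PEER is known."""
    num = 1.0 - peer * (1.0 - odds_ratio)
    den = peer * (1.0 - odds_ratio) * (1.0 - peer)
    return abs(num / den)

print(nnt_from_or(1.58, 0.05))   # ~37.4
print(nnt_from_or(1.58, 0.06))   # ~31.6 -> roughly "32 to 37" as in the text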
Table at right (adapted from Anon 2012; McAlister et al. 2000; Sackett et al. 1996) displays NNTs (in lighter gray boxes) derived from a selection of ORs, and CERs or PEERs. As usual, the NNTs
might represent NNHs, depending on study design and context. Furthermore, an OR >1.0 may represent either benefit or harm, and similarly for values <1.0, depending on how outcome variables are
being measured and presented in the study.
It should be noted, as pointed out in Part 7 of this series, when risks of events are relatively small — that is, few events occurring in either treatment or control groups relative to the total
sizes of the populations or patient groups being studied — Odds Ratios and Risk Ratios become approximately equal in size. This may be especially evident in large-scale trials using
epidemiological data or medical records databases and, in such cases, the NNT for the OR might be derived most simply by the formula above for converting RR to NNT (insert the OR as if it were an RR).
3. Calculating NNT from a Standardized Mean Difference (SMD)
The Standardized Mean Difference (SMD) is an effect size measurement previously discussed in Part 8 of this series [here]. It also is known as “Cohen’s d” and is very useful for gauging the
clinical importance of outcomes in pain research reports.
However, SMDs are derived from continuous data, so they do not fit the requirement that NNTs must be calculated from dichotomous data. Transforming SMDs into NNTs is a somewhat complex and
indirect process, which has been described in the Cochrane Handbook for Systematic Reviews of Interventions [Higgins and Green 2011, here]:
1. First, the SMD is converted to the natural logarithm of the Odds Ratio: lnOR = (π/√3) x SMD (≈1.81 x SMD)
2. This lnOR is then converted to the OR base value: OR = e^lnOR (=2.718^lnOR)
3. Then, the OR is used in the formula above for converting OR to NNT. To represent CER or PEER, a value is used that signifies the proportion of subjects in the control/comparator group who have
improved by some amount from baseline in the continuous outcome variable — ie, control responder Proportion Improved.
These calculations can be difficult; so, again, a Pain-Topics PTCalcs Excel worksheet is available to make the process easier, using the SMD provided in a research report and an estimated value
for the Proportion Improved [NNT-from-SMD available here].
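The three steps chain together in a few lines (Python; function name ours). With a Proportion Improved of 0.40, SMDs of 0.2, 0.5, and 0.8 reproduce the NNTs of roughly 12, 5, and 3 discussed later in this article:

import math

def nnt_from_smd(smd, prop_improved):
    ln_or = math.pi / math.sqrt(3.0) * smd       # step 1: SMD -> lnOR
    odds_ratio = math.exp(ln_or)                 # step 2: lnOR -> OR
    num = 1.0 - prop_improved * (1.0 - odds_ratio)   # step 3: OR -> NNT
    den = prop_improved * (1.0 - odds_ratio) * (1.0 - prop_improved)
    return abs(num / den)

for smd in (0.2, 0.5, 0.8):
    print(smd, math.ceil(nnt_from_smd(smd, 0.40)))   # -> 12, 5, 3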
Table at right [adapted from Higgins and Green 2011 (here)] displays NNTs derived from select SMDs, assuming different Proportions Improved in the control or comparator group. Resulting NNTs must
be regarded as approximations, since there is an assumption that the underlying continuous variable being assessed has a logistic distribution with equal standard deviations in the control and
treatment groups. As with the PEER described above, Proportion Improved needs to be guesstimated from study data or other sources when report authors do not provide sufficient information for a
more accurate determination.
Assessing the Significance of NNTs
As with all other measures of effect, both the statistical significance and clinical significance of NNTs must be considered. Both, Confidence Intervals (CIs) and P-values can be calculated for NNTs
and should be indicated by report authors as measures of statistical significance and strength of evidence [DiCenso 2001]. Prior UPDATES in this series discussed P-values [here] and Confidence
Intervals [here].
If the CI and/or P-value are not provided for an NNT, then one needs to look at the data that went into calculating the NNT (or NNH if that is the focus). As noted above, if any of those measures —
eg, RR, RRR, EER, CER, or ARD — were not statistically significant, then the NNT would not be significant either and probably should not have been featured in the study report.
As with CIs for other statistics, the narrower the range of the confidence limits the more precision and strength of evidence can be assumed in the NNT. There are complex formulas for calculating
confidence limits for an NNT, but in many cases, some authors suggest simply inverting the confidence limits (if known) of the Absolute Risk Difference, or ARD [Altman 1998; Bender 2000, Cates 2005;
Citrome 2007].
Example: Given an ARD=0.40 and its 95% CI = 0.30 to 0.50, then the NNT=1/0.40 and the confidence limits for the NNT = 1/0.50 to 1/0.30; that is, NNT=3; 95% CI = 2 to 4 (numbers rounded up).
If the CI for the ARD is not statistically significant to begin with — ie, the range includes both positive and negative numbers and crosses the null value of 0.0 — then the CI for the NNT also will
be nonsignificant and cannot be accurately calculated with this method.
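A sketch of the inversion (Python), using the ARD example above:

import math

ard, ci_lo, ci_hi = 0.40, 0.30, 0.50
nnt = math.ceil(1 / ard)                                 # -> 3
nnt_ci = (math.ceil(1 / ci_hi), math.ceil(1 / ci_lo))    # -> (2, 4)
print(nnt, nnt_ci)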
Having adequate study sample sizes is important for deriving NNTs that have favorably narrow confidence limits. An important reason is that random effects tend to be irregular in small, short-term
studies, but predictability increases as study size and duration increase. This reflects the “Law of Large Numbers” and is an important phenomenon in all research designs [Hazewinkel 2001]. For
example, if an NNT is 10 for a favorable treatment effect we might expect to have 1,000 successes if 10,000 patients are treated; however, if only a dozen patients are treated we may observe 0, 1, 3,
or some other number of favorable outcomes due to random variation or chance.
The relationships have been graphically portrayed by Citrome [2008, Figure at right]. In general, it is proposed that single-digit NNT values represent outcomes that could be meaningful in everyday
clinical practice and somewhat compelling. Larger values, 50 or 100 or more, usually suggest differences in outcome and subsequent NNTs, or NNHs, that are clinically unimportant.
However, the numbers must be considered within context of the study. Some interventions with large NNTs still may be important if they prevent severe adverse events (eg, death, stroke, etc.).
Conversely, a small, significant NNT for a mild benefit (eg, 10% pain relief) or to prevent only a nuisance adverse effect (eg, dry mouth) may not be of clinical importance to most patients.
In terms of clinically significant effect sizes, Standardized Mean Differences (SMD, or Cohen’s d) of 0.2, 0.5, and 0.8 are generally considered small, medium, and large effects, respectively.
Citrome [2008, Figure at right] and others [Kraemer and Kupfer 2006] have proposed that the comparable NNTs are roughly 9, 4, and 3, respectively.
However, those NNT values do not fully account for potential variability in NNTs depending on event rates in control/comparator groups. The Table at right, using the approach suggested above from
the Cochrane Handbook [Higgins and Green, 2011] for calculating NNTs from SMDs — and a Proportion Improved value of 40% — shows that small, medium, and large effect sizes would roughly correspond to
NNTs of 12, 5, and 3, respectively. The Table also shows comparable estimates for OR and RR, if CER or PEER values of 0.40 are used.
This 40% value for Proportion Improved seems like a practical starting point, since long ago it was observed in pain-treatment studies that >30% of patients in placebo-control groups often
demonstrate meaningful improvement [Beecher 1955; also discussed in UPDATE here]. However, unless the specific CER or Proportion Improved is known, it is recommended that a range of values should be
tested when calculating the likely NNT (or NNH).
It must be remembered that the size of the NNT alone provides an effect-size estimate of the likelihood of either a benefit or harm as measured in a trial, but this does not directly indicate if the
clinical importance of the effect itself being measured is marginal, minimal, moderate, or substantial — that is, whether a treatment is worthy of use [McAlister 2008]. Nor does the NNT suggest if
the effect occurs early in treatment or later, or if it continues or fades over time. These are qualities that must be evaluated separately as part of the research study design.
Example: A study that uses ≥20% improvement in pain during 4-weeks as a threshold for therapeutic success may produce an impressively low NNT favoring the treatment over some control condition.
However, in terms of clinical significance, one still needs to question whether that level of pain relief is important to patients and if the effects continue beyond the relatively short-term
period of observation. If 50% or greater improvement had been used as the treatment endpoint threshold in a much longer-term trial we might be more confident in the clinical importance of a low NNT.
In sum, there are statistical measures that can assess the probability of an NNT being a chance or random finding (P-value), and the strength of the evidence in terms of the width of the Confidence
Interval. However, the clinical significance of an NNT needs to be assessed on a case-by-case basis taking into account study design and the thresholds used to define endpoints.
There can be a fairly narrow range of NNTs that are of clinical importance in pain medicine. For example, reviews of analgesics for acute pain, providing ≥50% relief within 4 to 6 hours, have found
NNTs commonly ranging from 2 to 5 (numbers rounded up) and with confidence intervals overlapping considerably [Moore et al. 2011, also see UPDATE here]. Only rarely do analgesics still in use exhibit
an NNT≥10 (eg, codeine), which is generally considered clinically unacceptable.
Harm-to-Benefit Comparisons: NNH/NNT
For any therapy or intervention there are likely to be trade-offs between possible harms and potential benefits, so considering in isolation only the NNT or an NNH associated with a treatment tells
only part of the story [DiCenso 2001]. It also can be helpful to understand the relationship of these effects to each other, since a successful new treatment would have a low NNT and a high NNH in
comparison with another therapy or intervention.
Therefore, in looking at the reported outcomes of a clinical trial, an important question is: “How many patients might benefit from a treatment for each one who experiences some sort of harm?” In
answer to that, a metric called “Likelihood to be Helped or Harmed,” or LHH, has been suggested [Citrome and Weiss-Citrome 2012]. The formula for calculating this is simply: LHH = NNH/NNT.
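The computation itself is trivial; the hypothetical NNT and NNH below are the same worked numbers used in item (e) near the end of this article:

def lhh(nnh, nnt):
    """Likelihood to be Helped or Harmed: patients helped per one harmed."""
    return nnh / nnt

print(lhh(60, 20))   # -> 3.0, ie, 3 patients helped for each 1 harmed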
Shah and colleagues [2012; also discussed in UPDATE here] conducted a review and meta-analysis to estimate NNT and NNH values for pharmacotherapies used in treating irritable bowel syndrome (IBS)
with diarrhea. Calculated NNTs depicted event rates in patients who “responded” favorably to therapy. The NNHs were based on study discontinuations by patients due to combined adverse effects
associated with each therapy.
An interesting feature of this report by Shah et al. was a comparative analysis of 3 agents, providing a Likelihood to be Helped or Harmed analysis, or what the authors called “Benefit-to-Harm
Ratios” (see Table, numbers were not rounded).
In this analysis, compared with tricyclic antidepressants and alosetron, rifaximin was clearly the most beneficial with very few discontinuations (ie, large NNH) and an LHH, or NNH/NNT ratio of
846. That is, there was only 1 discontinuation of the drug due to adverse effects (harm) for every 846 patients who favorably responded to the therapy (benefit). For the other two agents,
approximately 1 patient discontinued therapy for every 3 who benefitted. However, if only the NNTs were considered, rifaximin would not have appeared to be the most advantageous among the 3
With better reporting of research outcomes in the pain management literature, providing adequate benefits and harms data, an LHH analysis might be applied to a variety of therapies. However, while
NNT and NNH are clinically useful and intuitive measures of effect size — allowing assessments of both clinical efficacy and tolerability — there are some important points regarding LHH to consider,
for example [Citrome 2007 and 2008; Citrome and Weiss-Citrome 2012]:
a. An LHH, or harm-to-benefit ratio (NNH/NNT) much greater than 1.0 is obviously preferred, since it denotes that many patients benefit for each one harmed in some way. At a ratio of 1.0 there is an
equal trade-off between benefits and harms; less than 1.0 would denote that harm exceeds benefit.
b. The clinical significance of LHH ratios must be considered in context. For example, a drug might have an unfavorably small LHH ratio, say 3, when it comes to dry mouth, but this might be viewed
as somewhat inconsequential since it is a minor side effect that is unlikely to influence treatment failures. Conversely, what seems like a desirably large LHH ratio, such as 500, involving a
serious adverse effect like stroke or heart failure might still be of great consequence for influencing a decision against a therapy.
c. An important limitation is that an LHH value, itself, may not account for the relative impact of time-to-event outcomes and duration of an effect. For example, an adverse effect may be less
troublesome if it arises early during treatment and is short-lived; or, a beneficial effect could be preferred if it occurs early in treatment and continues for some time.
d. Interpreting the clinical relevance of an NNT, NNH, or NNH/NNT can be somewhat subjective and may be most helpful when comparing those metrics across multiple therapies for the same disorder.
This is clearly demonstrated in the Shah et al. study of IBS therapies above, in which alosetron and tricyclic antidepressants appear comparable in terms of their LHH ratios, but rifaximin is
clearly superior on that measure.
e. It should be noted that in some of the earlier literature [McAlister et al. 2000] the LHH was described as a “benefit-to-risk ratio” calculated as [1/NNT]:[1/NNH]. However, this seems to be a
more complex way of accomplishing the same objective. For example, with NNT=20 and NNH=60, this becomes 0.05:0.017, or 3:1. Whereas, simply taking NNH/NNT would yield 60/20, or 3. Appropriate
interpretation of the two approaches arrives at essentially the same understanding of LHH.
Despite the data requirements, caveats, and limitations, calculations of NNT, NNH, and NNH/NNT (LHH) can be very useful for assessing and selecting optimum pain therapies; however, this is not always
possible with current pain management research reports. For example, Shah and colleagues [2012] found that their analytical approach was impractical when it came to treatments for IBS with
constipation due to missing data in available studies. Whether inadvertently or otherwise, research report authors often do not provide adequate data for calculating the sort of clinically meaningful
comparisons afforded by NNT, NNH, and LHH.
Clinicians, in consultation with patients, need to decide when treatment effects are sufficiently large and beneficial to more than offset harms or costs of a therapy or intervention. NNT, NNH, and
LHH are statistical tools that can help put those effects into a meaningful context; although, they are only part of the total assessment when such decisions must be made. At the same time, some
researchers have found that more appropriately conservative decisions are made when data are presented in terms of NNT, NNH, and/or LHH than as SMD, OR, RR, RRR, or other measures of effect [Moore et al.].
Unfortunately, effect sizes expressed as NNT or NNH are commonly omitted from research reports. Of greatest concern, sometimes when study data are converted to these measures — rather than SMD, OR,
RR, RRR, etc. — what appeared to be an advantageous therapy or intervention is revealed as being of much lesser consequence. Therefore, the burden of discovering the true quality of evidence
presented in pain research reports often rests with educated consumers of the literature who understand the nuances of effects expressed as NNT, NNH, and/or LHH metrics and know how to calculate them.
> Altman DG. Confidence intervals for the number needed to treat. BMJ. 1998(Nov);317:1309-1312.
> Anon. Calculating and using NNTs. Bandolier (Oxford University). 2003(Feb); online [here].
> Anon. Number Needed to Treat (NNT). Centre for Evidence-Based Med (Oxford Univ). 2012; online [here].
> Beecher HK. The powerful placebo. JAMA. 1955;159(17):1602-1606.
> Bender R. Improving the calculation of confidence intervals for the number needed to treat. In: Hasman A, et al. eds. Medical Infobahn for Europe. IOS press, 2000 [PDF here].
> Cates C. NNT - No need to be confused. UPDATE newsletter. 2005(Sep 15), online [PDF here].
> Chatellier G, Zapletal E, Lemaitre D, Menard J, Degoulet P. The number needed to treat: A clinically useful nomogram in its proper context. BMJ. 1996;312:426-429.
> Citrome L, Weiss-Citrome A. A Systematic Review of Duloxetine for Osteoarthritic Pain: What is the Number Needed to Treat, Number Needed to Harm, and Likelihood to be Helped or Harmed? Postgrad
Med. 2012(Jan);124(1):83-93 [access here].
> Citrome L. Compelling or irrelevant? Using number needed to treat can help decide. Acta Psychiatr Scand. 2008;117(6):412-419 [article here].
> Citrome L. Show me the evidence: Using number needed to treat. Southern Med J. 2007;100(9):881-884 [article here].
> Deyo RA, Smith DHM, Johnson ES, et al. Prescription Opioids for Back Pain and Use of Medications for Erectile Dysfunction. Spine. 2013(May 15);38(11):909-915 [abstract here].
> DiCenso A. Clinically useful measures of the effects of treatment. Evid Based Nurs. 2001;4:36-39 [available here].
> Fashner J, Bell AL. Herpes zoster and postherpetic neuralgia: prevention and management. Am Fam Physician. 2011;83(12):1423-1437.
> Higgins JPT, Green S, eds. Cochrane Handbook for Systematic Reviews of Interventions Version 5.1.0 [updated March 2011]. The Cochrane Collaboration, 2011 [available here].
> Hazewinkel M, ed. Law of large numbers, in Encyclopedia of Mathematics. New York: Springer; 2001.
> Kraemer HC, Kupfer DJ. Size of treatment effects and their importance to clinical research and practice. Biol Psychiatry. 2006;59:990-996.
> Laupacis A, Sackett DL, Roberts RS. An assessment of clinically useful measures of the consequences of treatment. NEJM. 1988;318:1728-1733.
> McAlister FA. The “number needed to treat” turns 20 — and continues to be used and misused. CMAJ. 2008;179(6):549-553 [article here].
> McAlister FA, Straus SE, Guyatt GH, et al. User’s Guides to the Medical Literature: XX. Integrating Research Evidence With the Care of the Individual Patient. JAMA. 2000;283(21):2829-2836.
> Moore RA, Derry S, Eccleston C, Kalso E. Expect analgesic failure; pursue analgesic success. BMJ. 2013 (May);346:f2690 [abstract].
> Moore RA, Derry S, McQuay HJ, Wiffen PJ. Single dose oral analgesics for acute postoperative pain in adults. Cochrane Database of Systematic Reviews. 2011;9(CD008659) [available here].
> Moore RA, Eccleston C, Derry S, et al. “Evidence” in chronic pain – establishing best practice in the reporting of systematic reviews. PAIN. 2010;150:386-389.
> Moore RA. What is an NNT? Bandolier (Oxford Univ). 2009, online [PDF here].
> Oxman MN, Levin MJ, Johnson GR, et al. A vaccine to prevent herpes zoster and postherpetic neuralgia in older adults. NEJM. 2005;352(22):2271-2284 [abstract here].
> Sackett DL, Deeks JJ, Altman DG. Down with odds ratios! Evidence-Based Med. 1996(Sep/Oct);1(6):164-166.
> Sackett DL, Richardson WS, Rosenberg W, Haynes RB. Evidence-Based Medicine: How to Practice & Teach EBM. New York, NY: Churchill Livingstone; 1997.
> Shah E, Kim S, Chong K, et al. Evaluation of Harm in the Pharmacotherapy of Irritable Bowel Syndrome. Am J Med. 2012(Apr);125(4):381-393 [abstract here].
> Suissa S. Calculation of number needed to treat [letter]. NEJM. 2009;361:424-425.
> Thakur R, Philips AG. Treating herpes zoster and postherpetic neuralgia: an evidence-based approach. J Fam Prac. 2012;61(9):S9-S15.
6 comments:
Hopefully Numbers Needed to Treat for benefit or harm from a therapy would be the safest way to treat those needing help the most. I would say it's time to update this way of coming to a
conclusion, seeing as this concept was proposed 25 years ago. But let's not complicate things or make things any more confusing, and get help to those needing it the most.
I believe that medical science invents all these useless acronyms (UA) so that they can keep the general population (KGPO) out of the mix, therefore becoming a more "elite group" (EG). Therefore, UA, KGPO, and EG = more arrogance; hence they deserve to get paid more, and listen less.
It seriously would take me an hour to memorize all those acronyms, when in reality this entire paper was only a boast of someone's woefully inadequate ego.
@Anonymous in Fla. -- All of the acronyms in the entire Series on "Making Sense of Pain Research" are defined, and are universally used throughout the medical research literature. We did not make them up to impress or exclude anyone. If you are struggling with the acronyms, we respectfully suggest that you either re-read the Series articles or stay away from the research literature.
I think what Anonymous in Florida was trying to say was (and I am well versed in all of the acronyms, medical research, etc.) that they are in pain, and presenting such research in an indigestible way is not beneficial to the patient, only to the researcher. This might help bridge the gap.
I appreciate the above comments. However, it must be noted that this series on “Making Sense of Pain Research” is written for healthcare providers who need to better understand the science of
evidence-based pain medicine -- so they won’t be fooled by the research and can better serve their patients. While patients certainly are not excluded from reading the articles, we accept the
fact that many of them may find the subject matter overly challenging.
Often, within individual UPDATES articles discussing specific research studies, we do try to offer explanations that will make sense to patients as well as the intended audience of healthcare
providers. At least we try…. but may not always succeed. --SBL
This is a really useful series. Science is not for the elite. It is not a useless exercise. Patients deserve the best thinking we have on these topics instead of knee-jerk or emotional reactions to patient characteristics that have little to do with root cause. Even the patients need to get on this train and learn how to drive these tools. Health care literacy is an essential skill these days.
Milwaukee Makerspace
FEAR is “LOVE” for our times! While “Love” may have been an appropriate sentiment from 1964 to 1970, when the 2D and 3D versions were made, I think that the revised text is more appropriate for the 2000s and 2010s. Fear is 8” tall and 4” deep, and while not a monumental outdoor sculpture, FEAR appears fairly sizable on a tabletop.
Fear, which is solid aluminum and weighs over 7 lbs, was cast last Thursday with quite a few other pieces. The great thing about having an aluminum foundry at the Makerspace is that the whole thing
cost about $7! - $4 for propane, $1 for Styrofoam, and $3 for some Rotozip bits. If FEAR were cast in bronze, it would weigh over 20 lbs, which would cost $200 for the metal alone. As it is, we
melted down old heat sinks, stock cutoffs and hard drive frames, so the metal is essentially free.
In the spirit of Indiana, who made his own font, I drew FEAR up in Inkscape using Georgia Bold, but I increased the height of the serifs a bit. Shane helped me with the file manipulation and G-code generation (Thanks!), so I could use the CNC router to cut FEAR out of Styrofoam. I exported FEAR's hairline-thickness outline as .dxf so I could bring it into CamBam to generate the G-code. The outer contour of FEAR was selected, and the following settings were chosen:
• General -> Enabled -> True
• General -> Name -> Outside
• Cutting Depth -> Clearance Plane -> 0.125 (inches)
• Cutting Depth -> Depth Increment -> 1.05 (inches)
• Cutting Depth -> Target Depth -> -1.05 (inches)
• Feedrates -> Cut Feedrate -> 300 (inches per minute)
• Options -> Roughing/Finishing -> Finishing
• Tool -> Tool Diameter -> 0.125 (inches)
• Tool -> Tool Profile -> End Mill
Identical settings were chosen for the inner contours of FEAR, with the exception of General -> Name -> Inside. Then, I just selected “Generate G-code.” Check out the real-time video of Makerspace
CNC router running the G-code and cutting out the 1” thick Styrofoam (Owens Corning Foamular 150).
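If you haven't generated G-code with CamBam before, the output of a profile operation looks roughly like the hypothetical fragment below. The coordinates and preamble here are invented for illustration; only the clearance/plunge/contour structure mirrors the settings above.
G20            (inch units)
G90            (absolute coordinates)
G0 Z0.125      (rapid up to the clearance plane)
G0 X1.0 Y1.0   (rapid to the start of the outer contour)
G1 Z-1.05 F300 (feed down to the target depth in a single increment)
G1 X5.0 Y1.0   (cut along the contour at the cut feedrate)
G1 X5.0 Y3.0
G1 X1.0 Y3.0
G1 X1.0 Y1.0
G0 Z0.125      (retract to the clearance plane before the next contour)
M30            (end of program)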
After cutting four 1” thick pieces, they were stacked and glued together. I buried the foam FEAR in petrobond, and then attached Styrofoam sprues and vents. For a more complete explanation of the
quick lost-styrofoam casting process, check out this post. Stay tuned for details of our next Aluminum pour, which will be in January in the New Milwaukee Makerspace!
Makerspace Aluminum Casting Foundry
I arrived at the Makerspace on Thursday without an idea of what I would cast in metal, and in less than two hours I was removing my piece from the steaming petrobond! Check out the fruit of two hours
of labor cast in metal!
That’s right! The Milwaukee Makerspace had its first (and second) aluminum pour on Thursday! Thanks to the hard work of several members, the Makerspace now has a fully functional aluminum casting
foundry. The custom built propane and diesel powered furnace melted an entire #16 crucible of aluminum in less than 20 minutes. Check out Brant’s video to see our fearless foundry foreman leading
the two pours!
To get the foundry running quickly, we’ve started out by using a lost-styrofoam casting method. That is, styrofoam is carved into the desired shape and then a sprue and vents are attached with hot
glue(!). This assembly is placed in a wooden form, and is surrounded by tightly packed petrobond, an oil bonded, reusable sand. Then, the molten aluminum is poured directly onto the styrofoam
sprue. The styrofoam is instantly vaporized by the 1250 degree Fahrenheit aluminum, which fills the void in the petrobond formerly occupied by the styrofoam. The air and perhaps even some of the
styrofoam residue escapes from the mold through the vents. We’ll be phasing in bonded sand and lost wax casting soon, so stay tuned for those details.
Eventually we’ll be having aluminum casting classes; however, we’re definitely going to be having aluminum pours on alternate Thursday evenings for the next few months. Join our mailing list /
google group to get more details. Metal pours are spectacular to watch, so feel free to stop by to see the action around 7 or 8 pm, or join the Makerspace and participate!
Lasers + Whisky = Delightful Wedding Gift
One of our members got married yesterday, and I crafted a fine gift for him and his wife at the Makerspace. The happy couple enjoys whisky, and I thought that providing a tour might be a nice idea.
The tour starts at inexpensive bourbon, moves through wheated whiskies, and on to rye. The tour continues in Scotland with some easy to enjoy Sherry cask finish bottlings, and then moves on to rare,
Islay and finally mature bottlings (25 Year old Talisker!).
I found some old mahogany baseboard that had some aging varnish on one side and some old caulking on another. After cutting two 18″ long sections, a few minutes of belt-sanding had them looking great. I used a 1 1/4″ Forstner drill bit to bore 0.3″ deep pockets for the bottles to fit in. I used one of our two laser cutters to etch the name/age/proof of each whisky sample on top, plus a congratulatory message on the reverse side. To bring out the rich orangey-red mahogany color, I wiped on beeswax/mineral oil. Check it out close up, while imagining the symbolism of things
getting better with age!
Arduino-Powered Surround Sound Synthesizer
The Makerspace Eight Speaker Super Surround Sound System(MESSSSS) has been supplying music to the Makerspace for quite a while now, but I identified a problem even before the system was fully
installed. Stereo recordings played back on two speakers are great if you’re in the “sweet spot.” If not, traditional approaches to 5.1 audio improve things, but all rely on there being a single
“front of the room.” Unfortunately, it’s not clear which side of the 3000 square foot Makerspace shop is the front, and with four pairs of speakers in the room, even stereo imaging is difficult.
Fortunately, I’ve just completed the Makerspace Eight Speaker Super Surround Sound System’s Enveloping Surround Sound Synthesizer (MESSSSSESSS). The MESSSSSESSS takes stereo recordings and
distributes sound to the eight speakers in an entirely fair and user-configurable way, thereby eliminating the need for a "front of the room." Now listeners can be arbitrarily distributed throughout a
room, and can even be oriented in random directions, while still receiving an enveloping surround sound experience!
The MESSSSSESSS user interface is somewhat simpler than most surround sound processors, as it consists of only four switches and one knob. Somewhat inspired by StrobeTV, the simplest mode references
questionable quadraphonic recordings, in that the music travels sequentially from speaker to speaker, chasing around the room either clockwise or counterclockwise at a rate selected by the knob. With
the flip of a switch, sound emanates from the eight speakers in a random order. Things get considerably less deterministic after flipping the Chaos Switch, adjusting the Chaos Knob, and entering
Turbo Mode: It's best to visit Milwaukee Makerspace to experience the madness for yourself. I'm legally obligated to recommend that first-time listeners be seated for the experience.
The MESSSSSESSS is powered entirely by an Arduino Uno’s ATmega328 that was programmed with an Arduino and then plugged into a socket in a small, custom board that I designed and etched at the
Makerspace. The ATmega328 outputs can energize relays that either do or don’t pass the audio signal to the four stereo output jacks. Care was taken to use diodes to clamp any voltage spikes that
may be created as the relays switch, thus preventing damage to the ATmega328 outputs.
As shown by the minimal part count above, using the ATmega328 "off the Arduino" is quite easy: Just connect pins 1 (the square one), 7 and 20 to 5 volts, and connect pins 8 and 22 to ground. Then, add a 22uF cap and small bypass cap between power and ground, and a ceramic resonator to pins 9 and 10. You can even use an old cellphone charger as the power supply. Boom. That's it. The real
benefits of making your own boards are having a well integrated system, and cost, as the Atmel chip is $4.50 while a whole Arduino is $30. Also visible in the photo are a programming header and the
two ribbon cables that route all the signals to and from the board.
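As a rough idea of what the chase-mode firmware could look like, here is a minimal Arduino-style sketch. The pin assignments, timing range, and structure are my own guesses for illustration, not the actual MESSSSSESSS source.
// Hypothetical sketch of the simplest MESSSSSESSS mode: chase the
// audio around four relay-switched speaker pairs at a knob-set rate.
const int relayPins[4] = {2, 3, 4, 5};  // assumed relay driver outputs
const int ratePin = A0;                 // assumed rate ("impatience") knob

void setup() {
  for (int i = 0; i < 4; i++) pinMode(relayPins[i], OUTPUT);
}

void loop() {
  static int active = 0;
  // Map the knob to a dwell time between 50 ms and 2 s per speaker pair.
  int dwell = map(analogRead(ratePin), 0, 1023, 50, 2000);
  for (int i = 0; i < 4; i++)
    digitalWrite(relayPins[i], i == active ? HIGH : LOW);
  delay(dwell);
  active = (active + 1) % 4;  // advance to the next pair around the room
}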
The Rock Trailer
This trailer was conceived and created by DavidO of Korporate-Media.com, and alludes to The Tool at Hand project. Thanks to DavidO and all the participants!
Audiophile Headphones
Sick of thin bass when listening to your favorite music over headphones? Missing that cinematic surround sound experience when you are on the go? Craving the visceral bass impact of live concerts?
Trying to get to 11, but your headphones are stalled out at 6.283? Move over anemic earbuds, there’s a new product in town: BIGheadphones: Bass Impact Gear’s new headphone product, available in two
versions: Premium 5.1 (shown below in a user trial) and Mega Premium 7.2 (coming soon).
Reviewers are raging about the unprecedented dynamics, midrange clarity, and sound stage:
“Perhaps it was in the region of articulation and musical dynamics that this system impressed the most. The dynamic bloom from soft to extremely loud was exquisite, and so clearly delineated that
listeners could unravel musical phrases down into the concert hall’s noise floor and below.” – The Audio Critic
“BIGheadphones speak with an organic integrity. They are hewn from the living wood—endangered old growth Amazonian timber… I wept openly when forced to return the demo model.”– Stereophile
“BIGheadphones make critical listening a joy rather than a strain. I was flabbergasted by their brilliant pitch certainty. The midrange sounds were open, clear, and stunningly present. Playback
performance like this makes use of the word transparent not only forgivable, but mandatory.” –Audiophilia
“The 5.1 has an innate flair for speed and control that is incomparable. The command of bass dynamics moves beyond effortlessness to nonchalance. My eyeballs were vibrating! My hands are still
shaking as I write this review.” –Sound and Vision
“…the most important innovation in audio reproduction since the permanent magnet.” –Acta Acustica
“W.O.W.” –Bose listening panel
Reviewers agree that BIGheadphones are a huge leap in audio reproduction technology, larger than vacuum tubes, Stroh violins, carbon microphones and Edison cylinders combined.
Relative to planar speakers, typical box speakers are unable to develop the proper surface loudness or intensity typical of large instruments such as the piano. This audio feat poses no challenge
for BIGheadphones. Computationally modeled and optimized by a small and highly trained team of expert acoustical engineers over a period of 13 years, BIGheadphones were inspired by ingeniously
thinking “inside the box,” not outside the box. At the obsolete exterior listening position, a typical loudspeaker rarely generates even a realistic classical music concert level, but inside that
same speaker, the sound pressure levels can quite easily exceed the 115 dB of a stadium rock concert. This realization was the BIG breakthrough, but was only the beginning of the struggle pursued by
our elite acoustical research team. Our uberengineers had to break the chains of common design practice to breathe the refreshing mountain air of inside-the-box acoustics, where nearly everything is opposite.
To illustrate, achieving loud bass external to a speaker typically requires the box be a very large size. However, inside the box, the bass response is naturally flat to the lowest frequencies, and
the smaller the box the louder and more impactful it becomes. Further, our astute engineers shrewdly realized that the stop-band and pass-band inside and outside the box are also opposite, as
illustrated in the enlightening plot below of the subwoofer section of BIGheadphones. The Blue curve shows the hyposonic level inside, extending well below 10 infrasonic Hz, while the Red curve shows
the meager sound pressure level in the more traditional listening position two meters in front of them. Notice how the passband outside the box begins at 2kHz, whereas the passband inside the box
ends at 2 kHz. How many other speaker systems can boast of a subwoofer response that is flat over more than three orders of magnitude? Now that’s innovation! And this is just the customer-average
response—the bigger your head the broader the bandwidth that you can brag about to your audiophile friends.
The observant reader has already noticed that this plot shows BIGheadphone’s output level is a mere 142 dB – only 22 dB above the threshold of pain. Note though that this is with a paltry 1 Watt
input – in reality, they are capable of 17 dB higher output with the optional high output amplifier add-on kit, though this reduces the playback time to under 36 hours per charge. And that’s just
the subwoofer! The industry-leading, consciousness-altering bass response shown above is augmented by five horn loaded, carbon fiber reinforced porcelain dome, 2” diameter neodymium tweeters with
single crystal silver edge wound voice coils. With this critical addition, the frequency response of the BIGheadphones extends from below 10 Hz to 31 kHz and beyond! Get your BIGheadphone audition
today at your local Hi-Fi retailer! “BIGheadphones, the last audible note in audio reproduction!”
(Not available in France.)
Thanks to the editors at RSW, Inc.
Giant, Ominous Wind Chimes
A while back I bought five 4.5 foot long aluminum tubes because the price was so low that I couldn’t resist. They are 3.25 inches in (outer) diameter, and have a wall thickness of 0.1 inches.
Recently, I decided to make them into the longest and loudest wind chimes I’ve ever heard. The longest tube rings for over a minute after being struck by the clapper. After thinking for a while
about which notes I should tune the tubes to, I found that fairly large chimes are commercially available, but they are tuned to happy, consonant intervals. I consulted a few musically savvy friends
(Thanks Brian and Andrew!) to gather some more ideas for interesting intervals on my chosen theme of “Evil & Ominous.” I ended up with quite a few ideas, and with Andrew’s help, I sampled the sound
of the longest tube being struck, and recorded mp3’s of each set of notes to simulate the sound of the chimes ringing in the wind. I ended up with something delightful: D4, G#4, A4, C#5 and D5
(which are 294 Hz, 415 Hz, 440 Hz, 554 Hz, and 587 Hz). That's right, there are two consonant intervals (the octave and the perfect fifth), but look at all those minor seconds and tritones: delightfully ominous.
Then the science started: How to determine the tube lengths to achieve the desired notes? How to suspend the chimes so they sound the best, and are the loudest? Where should the clapper strike the
chimes in order to produce the loudest sound or the best timbre?
Wind chimes radiate sound because they vibrate transversely like a guitar string, not because they support an internal acoustic standing wave like an organ pipe. Pages 152 & 162 of Philip Morse’s
book “Vibration and Sound” show that the natural frequencies, v, of hanging tubes are given by the following expression:
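The expression is the standard Euler-Bernoulli result for a free-free tube; written in terms of the quantities named in the next paragraph (this is my reconstruction of the formula, not a quotation from Morse), it reads:
$\nu_n = \frac{\beta_n^2}{4\pi l^2}\sqrt{\frac{Q(a^2+b^2)}{\rho}}, \qquad \beta_1 \approx 4.730,\ \beta_2 \approx 7.853,\ \beta_3 \approx 10.996$
As a sanity check, 6061-T6 aluminum (Q of roughly 69 GPa, density 2700 kg/m^3) and the tube dimensions above give about 288 Hz for the 133.1 cm tube, within a few percent of the measured fundamental quoted below.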
Pretty simple, right? One only needs to know rho and Q, the density and Young’s modulus of aluminum, l, the length of the tube, a & b, the inner and outer radius of the tube, and the beta of each
tube mode of interest. Don’t worry though, there is a simpler way. If all of the tubes have identical diameter and are made of the same material (6061-T6 Aluminum!), the equation indicates that
the natural frequency of a hanging tube scales very simply as the inverse of the tube length squared.
Using the above relationship (frequency ~ 1/(length*length)) to compute the ratios of tube lengths based on the ratio of frequencies produces:
Length of D4 tube = 1.000 * Length of D4 tube
Length of G#4 tube = 0.841 * Length of D4 tube
Length of A4 tube = 0.817 * Length of D4 tube
Length of C#5 tube = 0.728 * Length of D4 tube
Length of D5 tube = 0.707 * Length of D4 tube
The longest tube is 133.1 cm (52.40 inches) long, so all the tubes were scaled relative to it. Note that the frequencies are slightly different than the notes I was aiming for, but absolute pitch is
only a requirement when playing with other instruments.
~D4 = 293.66 Hz = 133.1 cm = 280.3 Hz
~G#4 = 415.3 Hz = 111.9 cm = 396.4 Hz
~A4 = 440.0 Hz = 108.7 cm = 420.0 Hz
~C#5 = 554.37 Hz = 96.9 cm = 529.1 Hz
~D5 = 587.33 Hz = 94.1 cm = 560.6 Hz
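A few lines of Python reproduce the table above from the inverse-square scaling law; the 133.1 cm reference length and the target frequencies are the values given in the text.
import math

ref_length_cm = 133.1   # measured length of the D4 reference tube
ref_freq = 293.66       # target frequency of the reference note

targets = {"D4": 293.66, "G#4": 415.30, "A4": 440.00,
           "C#5": 554.37, "D5": 587.33}

for note, f in targets.items():
    # f ~ 1/L^2  =>  L = L_ref * sqrt(f_ref / f)
    length = ref_length_cm * math.sqrt(ref_freq / f)
    print(f"{note}: {length:.1f} cm")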
How accurately do these tubes need to be cut? For example, how important is it to cut the tube length to within 1 mm? This can be calculated simply, using the above equation. A length of 108.7cm gives 420.0 Hz, whereas a length of 108.8cm gives 419.2 Hz. This spread is about 0.8 Hz, which is a fairly small number, but these small intervals are often expressed in cents, or hundredths of a half-step. This 1 mm length error gives a frequency shift of only about 3 cents. Does this matter? Well, the difference in pitch of a major third in just and equal-tempered tuning is 14 cents, which is definitely noticeable, so a cut held to within 1 mm, or about 0.8 Hz, stays comfortably inside that tolerance.
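The cents arithmetic is easy to check; nothing here is project-specific.
import math

def cents(f1, f2):
    # 1200 cents per octave, so the interval between two frequencies
    # is 1200 times the base-2 log of their ratio.
    return 1200 * math.log2(f2 / f1)

print(cents(419.2, 420.0))   # ~3.3 cents for the 1 mm length error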
The tubes were rough-cut to 2 mm longer than the desired length on a bandsaw to allow the ends to be squared up in case the cut was slightly crooked. The resonance frequency was then measured by
playing the desired frequency from a speaker driven by a sine wave generator with a digital display. I then struck the tube and listened for (and counted) the beats. If two beats per second are
heard, the frequency of the tube is 2 Hz different than the frequency played through the speaker. With this method using minimal equipment, I quickly experimentally measured the resonance frequency
to less than 0.5 Hz (one beat every two seconds), which is ~10 cents. I then fine tuned the tube length using a belt sander, and measured the resonance frequency several times while achieving the
correct length. In reality though, if I missed my target lengths I’d only be adding a little more beating and dissonance, which might have only added to the overall ominous timbre.
How to suspend the tubes? Looking at the mode shapes of the tube for guidance, I suspended the tubes by drilling a hole through the tube at one of its vibrational nodes, and running a plated steel
cable through it. Check out the plot below from Blevins’ New York Times Bestselling book “Formulas for Natural Frequency and Mode Shape.”
This plot shows a snapshot of the tube’s deflection as a function of position along the tube. Imagine that the left side of the tube is at 0, and the right side of the tube is at L. This plot shows
the first three mode shapes of a “straight slender free-free beam,” which my 1.33 meter long, 83mm diameter tube qualifies as. Just like a guitar string, this tube has multiple overtones (higher
modes, or harmonics) that can be excited to varying degree depending where the clapper strikes the tube. The guitar analog of this is the timbre difference one hears when picking (striking) the
string closer to or further from the end of the string (the bridge). This plot also shows where the tube should be suspended – from the locations where the tube has no motion in its first,
fundamental mode. Those two places, a distance of 0.224L from the tube's ends, are circled in red. When striking the tube suspended from either of these locations, the tube rings the loudest and
for the longest time duration (as compared with any other suspension location). Similarly, when striking the tube in the location noted by the red arrow (the midpoint of the tube), the tube rings
the loudest. I won’t get into more math and fancy terms like “modal participation factor,” but it is true that suspending the tube from the circled red locations also results in the lack of
excitation of the third mode (which has a motional maximum at this location). Similarly, striking the tube at its midpoint results in the lack of excitation of the second mode, due to its motional
minimum at this location.
Thanks to David for the Ominous Photo. An Ominous Chime video will soon follow.
Making Carbonated Mineral Water
I really like the refreshing taste of San Pellegrino, but dislike that this water is bottled in Europe, shipped over water and delivered to me in Milwaukee, where we also have water. San Pellegrino
costs about $1.75 per liter, and comes in recyclable bottles. The homemade version I’ve been making for the last four months costs less than one penny per liter, and is made in my kitchen in reusable
bottles. The cost of the equipment was less than $150, which paid for itself after I’d carbonated my first 100 liters of water.
The equipment required is relatively simple: An aluminum tank that contains 5Lbs of CO2, a gas regulator, a hose ending with a locking Schrader air chuck, a plastic bottle, a bottle cap with a
Schrader valve stem mounted in it and two hose clamps. All of these items are visible in the photos below.
The aluminum tank and gas regulator are available locally at restaurant or homebrew supply stores, or online from places like beveragefactory.com or coppertubingsales.com. Prices at these latter two
places are $85 – $100 for the pair. I filled the CO2 tank for $9 at a local beer retailer. I purchased the locking chrome plated air chuck, the stainless steel hose barb connected to it, the hose
clamps, and the steel wire reinforced hose from a local hardware store for $15. The Schrader valve stems were purchased from a local auto parts store – they are fully chrome plated, and are sold as
replacement car tire valve stems for $2 each.
I initially used standard industrial air hose fittings instead of Schrader valves, but ran into several problems. Only one side of this type of fitting seals when the mating fittings are
disconnected. This means that after a liter is carbonated and the hose is detached from the plastic bottle, either all the CO2 in the hose leaks out, or some of the CO2 leaks out of the bottle.
Also, inexpensive industrial air fittings are either made of steel or bronze and begin to corrode due to exposure to the carbonic acid formed when the water is carbonated. Chrome plated Schrader
valves have neither of these problems, and are even less expensive than industrial air fittings.
The carbonation process is also simple. I fill a plastic San Pellegrino bottle 80% to 85% full of Brita filtered water chilled to ~36 degrees (standard refrigerator temperature), I squeeze all the
air out of the bottle and tighten the plastic cap with the Schrader valve onto it. I fully open the CO2 tank valve, set the gas regulator valve to 55 PSI (typical commercial waters are carbonated to
about 20 PSI), squeeze the locking Schrader air chuck, and lock it onto the bottle. CO2 immediately begins to flow, and inflates the bottle instantly. An audible hiss continues as the CO2
pressurizes the bottle, which I shake vigorously for 20 to 25 seconds, after which time the CO2 hiss has stopped. The hose is then disconnected from the bottle, and the water is carbonated!
All these details are important to successful carbonation. The empty space in the bottle (the 15% to 20% of the bottle that doesn’t contain water) is critical to allowing the CO2 to get and stay in
suspension. The amount of CO2 that is soluble in water increases with colder temperatures. Squeezing out all the air allows for more CO2 to fit in the bottle. Shaking the bottle increases the rate
at which the CO2 dissolves in the water. All of these factors make for more fizzy water (which is the goal, right?).
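For a rough sense of why temperature and pressure both matter: Henry's law says the amount of dissolved CO2 is proportional to the CO2 pressure above the water. The solubility figures below are approximate handbook values I am supplying for illustration; they do not come from this article.
# Rough Henry's-law estimate of dissolved CO2, in grams per liter.
# Approximate solubility at 1 atm of CO2: ~3.3 g/L near 0 C,
# ~1.7 g/L near 20 C (illustrative handbook numbers).
def dissolved_co2_g_per_l(gauge_psi, s_1atm=3.0):
    absolute_atm = (gauge_psi + 14.7) / 14.7  # gauge -> absolute pressure
    return s_1atm * absolute_atm

print(dissolved_co2_g_per_l(55))  # ~14 g/L at 55 PSI, fridge-cold water
print(dissolved_co2_g_per_l(20))  # ~7 g/L at a more typical 20 PSI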
The taste of San Pellegrino can be more accurately replicated with the addition of minerals. With the addition of 1/8 tsp of Magnesium sulfate (Epsom salts) and 1/8 tsp of calcium chloride, one
achieves the 210mg/L of Calcium and 60mg/L of Magnesium that San Pellegrino has! Both of these minerals are wine/beer brewing additives, and can be purchased from local homebrew supply stores.
Check here for more mineral additive possibilities, and the book “The Good Water Guide” for the mineral composition of most commercial waters on Earth. I find that carbonating to 55 PSI rather than
a more reasonable 20 to 25 PSI makes for so much more joy that I (and my kidneys) don’t miss the extra minerals.
If you want to make this setup at home, please follow these safety guidelines. Several of them are very important, as a gas cylinder is somewhat dangerous: its internal pressure is between 700 and 800 PSI, depending on temperature. Carrying the cylinder by its valve is a bad idea. The tank should be secured at all times so it doesn't tip over and damage the valve. When it is
transported, it should always be upright and it shouldn’t be left in a car sitting in the sun, as the internal pressure will increase hundreds of PSI. The regulator you purchase should have a
pressure safety valve which releases at ~60 PSI to vent excess pressure and prevent your plastic bottle from exploding. Similarly, your hose should be rated for higher than the pressure you intend
to carbonate to. You should never carbonate in glass bottles.
I measured the pH of my 55 PSI carbonated water, and found it to be 4.6, whereas the pH of Coke is a much more acidic 3.2, as shown below. The pH of my water prior to carbonation was a perfectly
neutral 7.0.
Basic and Premium StrobeTV
Do you ever want to watch a movie at home, but can’t decide between your favorite two? Maybe your favorite TV show is about to start, but you’re still watching a film on DVD? Perhaps you and a
friend can’t agree on which film to watch? Well, you need not fret if you’re an early adopter of StrobeTV – a new device that enables the viewing of TWO films simultaneously on a single television by
simply alternating between them! The basic version features two knobs for fine-tuning your experience. One sets “impatience,” which is how long you’ll view and listen to one film before switching
to the second film. This can be set from once every 10 seconds down to mere fractions of a second! The second knob controls "preference," or the relative amount of time spent watching the first or second
film – perhaps you want to view the first film 60% of the time, and the second 40%. Or, might it be 25% first and 75% second?
If you’re the proud owner of the Premium StrobeTV, a world of customization is achievable using the eight three position switches and two knobs. The upper row of switches configures the experience
you’ll enjoy for an amount of time set by the upper knob. When this time expires, your experience is set by the lower row of switches for the amount of time selected by the lower knob. Naturally,
the process repeats, endlessly alternating between your two chosen forms of entertainment – be that TV, DVD, Netflix, gaming consoles, etc. Perhaps you feel you’ll miss out on important action while
you’re watching one film? Premium StrobeTV allows you to listen to the left and right audio channels of the second film, while you view the image from the first film! Maybe you’re more comfortable
reserving the left speaker for playing the left channel audio for the film you’re not currently viewing? Or maybe the audio alternates? With StrobeTV, the possibilities are virtually endless.
Premium StrobeTV has an enhanced switching range, from once per 15 seconds to over forty times per second, ensuring you won’t miss a single detail!
Premium StrobeTV features an ingenious 2 meter long cable that allows the controls to be at your fingertips, while the wiring remains hidden away. Note that Premium StrobeTV, like its more
economical sibling, allows for the switching of four signals: right audio, composite video, left audio, and a fourth signal of your choice!
Mechanized Cylindrical Sign Build for Parade Joy (Update 4)
We’re coming to the end of our South Shore Frolics Parade Float builds! This has been an incredible process. Last night we put the graphics on the cylindrical sign, and stood back to enjoy the glory
of our handiwork.
Thanks to Tom, Kevin, Matt N., Bob, Mike, Shane, Elizabeth, Adam, Sean, Kristin, Amanda, Jason, Aaron, and anyone I might have missed who helped get this together!
I pose next to the completed thing. Intense!!!
"Give any three of the criteria that go into selecting the range."
July 21st 2010, 11:41 AM
"Give any three of the criteria that go into selecting the range."
A two part question:
1.) Give the range of $y = \cos^{-1}x$. (My answer: All real numbers)
2.) Give any 3 of the criteria that go into selecting the range.
I was under the impression that all there was to the range was maximum y-value minus minimum y value... so how can there be two or more criteria that go into selecting the range?
July 21st 2010, 12:56 PM
A two part question:
1.) Give the range of $y = \cos^{-1}x$. (My answer: All real numbers)
2.) Give any 3 of the criteria that go into selecting the range.
I was under the impression that all there was to the range was maximum y-value minus minimum y value... so how can there be two or more criteria that go into selecting the range?
$y = \cos^{-1}(x)$
domain is $[-1,1]$
range is $[0,\pi]$
this should be in your textbook, along with a graph of the inverse cosine function.
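For reference, the standard criteria for picking $[0,\pi]$ are:
1.) cosine restricted to $[0,\pi]$ is one-to-one, so an inverse function exists;
2.) the restricted cosine still attains every value in $[-1,1]$, so $\cos^{-1}x$ is defined for every $x$ in $[-1,1]$;
3.) the restriction is a single continuous interval, which makes $\cos^{-1}$ a continuous function.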
July 21st 2010, 01:03 PM
Okay, so listing: 1.) look at the graph, 2.) reference the table of values, or 3.) look at the trigonometric function and its amplitude, and calculate the range (ex. $f(x)=3\cos x$ has range $[-3,3]$)
July 21st 2010, 07:15 PM
I was also wondering, if judging from whether it is an arcsin/arccos vs a sin/cos would qualify as one example. Would love to be able to have unique examples for class tomorrow! :D
Robert P. Munafo, 2012 Apr 20.
The values of Z[n] that are calculated during an iteration process. The iterates form a sequence of points (also called the point's orbit, or the critical orbit of the point's Julia set), with one member for each positive integer n. The sequence is defined by the recurrence relation
Z[0] = 0
Z[n+1] = Z[n]^2 + C
where C is the point for which the iteration is being performed.
If the values of Z[n] diverge to infinity by getting progressively larger and larger, the point C is not in the Mandelbrot set.
If the values converge on a single value or a finite repeating set of N values, the point is in the Mandelbrot Set and is said to have period N. The set of N values is the limit cycle.
If the values follow a chaotic, non-repeating pattern and never diverge to infinity, the point is in the Mandelbrot Set and also on the boundary. Not all points on the boundary have chaotic iteration, however. The Misiurewicz points are the best examples. See also accuracy.
Typically in practice one needs to limit how many times the calculation Z[n+1] = Z[n]^2 + C is performed, using a maximum dwell value of some kind. See the page on algorithms for more information
about how to write a Mandelbrot program.
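A minimal escape-time loop, written here in Python for concreteness (the dwell limit and the escape radius of 2 are the conventional choices; this is a sketch, not code from Mu-ency):
def mandel_dwell(c, max_dwell=1000):
    """Iterate Z[n+1] = Z[n]^2 + c from Z[0] = 0. Return the iteration
    at which |Z| exceeds 2 (the point escapes), or None if the dwell
    limit is reached and c is presumed to be in the set."""
    z = 0
    for n in range(max_dwell):
        z = z * z + c
        if abs(z) > 2:
            return n
    return None

print(mandel_dwell(complex(-0.5, 0.5)))  # interior point -> None
print(mandel_dwell(complex(1.0, 0.0)))   # escapes quickly -> 2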
Much more informative pictures can be produced with the extra information provided by the distance estimator algorithm.
Julia sets can be plotted via the inverse iteration method.
See also inverse Mandelbrot iteration.
revisions: 20020529 oldest version on record; 20111208 add link to DEM/M; 20120420 link to maximum dwell
From the Mandelbrot Set Glossary and Encyclopedia, by Robert Munafo, (c) 1987-2012.
Virtually every Prolog system has library(lists), but the set of provided predicates is diverse. There is a fair agreement on the semantics of most of these predicates, although error handling
may vary.
This library provides commonly accepted basic predicates for list manipulation in the Prolog community. Some additional list manipulations are built-in. See e.g., memberchk/2, length/2.
The implementation of this library is copied from many places. These include: "The Craft of Prolog", the DEC-10 Prolog library (LISTRO.PL) and the YAP lists library. Some predicates are reimplemented
based on their specification by Quintus and SICStus.
True if Elem is a member of List. The SWI-Prolog definition differs from the classical one. Our definition avoids unpacking each list element twice and provides determinism on the last element.
E.g. this is deterministic:
member(X, [One]).
Gertjan van Noord
List1AndList2 is the concatenation of List1 and List2
Concatenate a list of lists. Is true if ListOfLists is a list of lists, and List is the concatenation of these lists.
ListOfLists must be a list of possibly partial lists
True iff Part is a leading substring of Whole. This is the same as append(Part, _, Whole).
Is true when List1, with Elem removed, results in List2.
Semi-deterministic removal of first element in List that unifies with Elem.
True if XList is unifiable with YList apart from a single element at the same position that is unified with X in XList and with Y in YList. A typical use for this predicate is to replace an element, as shown in the example below. All possible substitutions are performed on backtracking.
?- select(b, [a,b,c,b], 2, X).
X = [a, 2, c, b] ;
X = [a, b, c, 2] ;
See also
selectchk/4 provides a semidet version.
Semi-deterministic version of select/4.
True if Y follows X in List.
Delete matching elements from a list. True when List2 is a list with all elements from List1 except for those that unify with Elem. Matching Elem with elements of List1 uses \+ Elem \= H, which implies that Elem is not changed.
See also
select/3, subtract/3.
There are too many ways in which one might want to delete elements from a list to justify the name. Think of matching (= vs. ==), delete first/all, be deterministic or not.
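A quick illustration of the all-occurrences behaviour at the top level:
?- delete([a,b,a,c], a, X).
X = [b, c].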
True when Elem is the Index'th element of List. Counting starts at 0.
type_error(integer, Index) if Index is not an integer or unbound.
Is true when Elem is the Index'th element of List. Counting starts at 1.
Select/insert element at index. True when Elem is the N'th (0-based) element of List and Rest is the remainder (as in by select/3) of List. For example:
?- nth0(I, [a,b,c], E, R).
I = 0, E = a, R = [b, c] ;
I = 1, E = b, R = [a, c] ;
I = 2, E = c, R = [a, b] ;
?- nth0(1, L, a1, [a,b]).
L = [a, a1, b].
As nth0/4, but counting starts at 1.
Succeeds when Last is the last element of List. This predicate is semidet if List is a list and multi if List is a partial list.
There is no de-facto standard for the argument order of last/2. Be careful when porting code or use append(_, [Last], List) as a portable alternative.
True when Length is the number of elements in the proper list List. This is equivalent to
proper_length(List, Length) :-
    is_list(List),
    length(List, Length).
Is true when List1 and List2 are lists with the same number of elements. The predicate is deterministic if at least one of the arguments is a proper list. It is non-deterministic if both
arguments are partial lists.
Is true when the elements of List2 are in reverse order compared to List1.
True when Xs is a permutation of Ys. This can solve for Ys given Xs or Xs given Ys, or even enumerate Xs and Ys together. The predicate permutation/2 is primarily intended to generate
permutations. Note that a list of length N has N! permutations, and unbounded permutation generation becomes prohibitively expensive, even for rather short lists (10! = 3,628,800).
If both Xs and Ys are provided and both lists have equal length the order is |Xs|^2. Simply testing whether Xs is a permutation of Ys can be achieved in order |Xs|*log(|Xs|) using msort/2 as illustrated below with the semidet predicate is_permutation/2:
is_permutation(Xs, Ys) :-
msort(Xs, Sorted),
msort(Ys, Sorted).
The example below illustrates that Xs and Ys being proper lists is not a sufficient condition to use the above replacement.
?- permutation([1,2], [X,Y]).
X = 1, Y = 2 ;
X = 2, Y = 1 ;
type_error(list, Arg) if either argument is not a proper or partial list.
Is true if List2 is a non-nested version of List1.
Ending up needing flatten/2 often indicates, like append/3 for appending two lists, a bad design. Efficient code that generates lists from generated small lists must use difference lists, often possible through grammar rules for optimal readability.
True when Max is the largest member in the standard order of terms. Fails if List is empty.
See also
- compare/3
- max_list/2 for the maximum of a list of numbers.
True when Min is the smallest member in the standard order of terms. Fails if List is empty.
See also
- compare/3
- min_list/2 for the minimum of a list of numbers.
Sum is the result of adding all numbers in List.
True if Max is the largest number in List. Fails if List is empty.
True if Min is the smallest number in List. Fails if List is empty.
List is a list [Low, Low+1, ... High]. Fails if High < Low.
- type_error(integer, Low)
- type_error(integer, High)
True if Set is a proper list without duplicates. Equivalence is based on ==/2. The implementation uses sort/2, which implies that the complexity is N*log(N) and the predicate may cause a
resource-error. There are no other error conditions.
True when Set has the same elements as List in the same order. The left-most copy of duplicate elements is retained. List may contain variables. Elements E1 and E2 are considered duplicates iff
E1 == E2 holds. The complexity of the implementation is N*log(N).
List is type-checked.
Ulrich Neumerkel
See also
sort/2 can be used to create an ordered set. Many set operations on ordered sets are order N rather than order N**2. The list_to_set/2 predicate is more expensive than sort/2 because it involves, in addition to a sort, three linear scans of the list.
Up to version 6.3.11, list_to_set/2 had complexity N**2 and equality was tested using =/2.
True if Set3 unifies with the intersection of Set1 and Set2. The complexity of this predicate is |Set1|*|Set2|
True if Set3 unifies with the union of Set1 and Set2. The complexity of this predicate is |Set1|*|Set2|
True if all elements of SubSet belong to Set as well. Membership test is based on memberchk/2. The complexity is |SubSet|*|Set|.
Delete all elements in Delete from Set. Deletion is based on unification using memberchk/2. The complexity is |Delete|*|Set|.
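Typical top-level behaviour of the unordered-set operations, following from the definitions above (note that union/3 keeps the elements found only in Set1 in front):
?- intersection([a,b,c], [b,c,d], X).
X = [b, c].
?- union([a,b,c], [b,c,d], X).
X = [a, b, c, d].
?- subtract([a,b,c,d], [b,d], X).
X = [a, c].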
Math Help
January 12th 2006, 02:37 AM #1
Jan 2006
hi :)
hi, I am new here. I am a 14-year-old kid and I am entirely attracted to math; I always read math books online, but I have many problems since I don't really know the meanings of many symbols. Can anyone introduce a place where I can learn more about math symbols?
I do not think you can really learn what these symbols mean by reading about them. I think the best way to learn about these symbols is that study the branch of math which uses them.
Here are some:
Caution: even though this website explains these symbols, it does not mean you will understand them all, because, just like I said, the best (perhaps only) way to learn about these symbols is to study the branch of math which uses them.
Well, depending on the symbol you might be able to pick it up easily. If you see something like this, $x^2$, that notation is easy to learn. However if you run into one of these, $\int$ it's
gonna take some time.
thx for the link
i have studied it, but i can't understand all of them
but i will just go on with my studying, thx a lot
i am studying an algebra book these days; it ends with precalculus algebra. what do u guys recommend me to study after the book?
Maybe Trigonometry?
well.................i thought that trigonometry exists because of the Pythagorean Theorem...
and it won't help much in my study. in fact, should i study calculus after precalculus algebra?
Life of a Developer
Ever since I first played with a fractal explorer, it must have been at least 15 years ago, I've been intrigued by fractals. At first, they seemed like magic to me, but as I learned more maths, I can understand the why, but not always the how.
A couple of years ago I set out to write a fractal exploration application called Frakter. To be honest, it sucked pretty much, but it was an attempt. The nice things about it where:
• Fractals where kept in external scripts, so that the application could be extended.
• Colouring was done using special colour maps that could be loaded and saved.
• It used a background thread to render the fractals.
• It had a zooming history that one could type right into to find interesting ranges in the complex numbers realm.
Now and then I visit Paul Bourke's great collection of fractals (and loads of other stuff too). Each time, I feel an urge to write something that can handle everything that he shows.
I want to be able to fit all different types of fractals into a class tree that I can write a GUI around. The data carried from the fractals to the UI would be enough to do basic 2D rendering, 3D
rendering, psychedelic colouring, and so on.
I've had a couple of half hearted attempts this far, but now I started from the other direction, i.e. starting with the fractal types I never get to otherwise.
Right now I've implemented IFS and L-systems. The implementation is not even an attempt at being efficient and uses loads and loads and loads of memory, but it works. The next step is to create a colouring class that one can inherit to do the funky stuff. Last value, current value, current iteration, and fractal-specific category are the "input values" I plan to use. Right now, fractal-specific category is "last used transformation set" when dealing with IFS; for L-systems, it is zero.
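A sketch of what such a colouring base class could look like; the class and attribute names are mine, since the post does not show any code.
# Hypothetical colouring interface for the fractal explorer described
# above; subclasses turn per-point iteration data into an RGB colour.
class Colouring:
    def colour(self, last_value, current_value, iteration, category):
        # Return an (r, g, b) tuple for one rendered point.
        raise NotImplementedError

class IterationBands(Colouring):
    # Classic escape-time banding: cycle a palette by iteration count.
    def __init__(self, palette):
        self.palette = palette

    def colour(self, last_value, current_value, iteration, category):
        return self.palette[iteration % len(self.palette)]

bands = IterationBands([(0, 0, 0), (255, 128, 0), (255, 255, 255)])
print(bands.colour(0j, 1 + 1j, 7, 0))   # prints (255, 128, 0)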
Contradiction? Matthew/Mark vs Luke - Last Supper Chronology?
Dec 21st 2011, 06:45 PM #1
Citizen **
Join Date
Apr 2011
Contradiction? Matthew/Mark vs Luke - Last Supper Chronology?
I'm having a difficult time with the accounts in Matthew/Mark vs Luke at the Last Supper. My first assumption is all four gospels are written chronologically - meaning the events are written in
order (A happened, then B, then C).
Matthew 26:21-25 has Jesus declaring and condemning a betrayer. Then the institution of communion. Then the "won't partake" statement after communion.
Mark 14:18-21 is exactly the same (except for slight wording and the absence of Judas asking if he is the betrayer as is in Matthew) - communion is instituted and then the "won't partake"
statement afterward.
Luke, though, presents the "won't partake" statement BEFORE communion and Jesus declares and condemns the betrayer AFTER communion.
My question is: how is it possible to reconcile these specific details in the three gospels, while still maintaining the individual timelines? Most harmonies I've looked at simply adopt one
timeline from Mark, Matthew or Luke, then plug in the other gospels to make it fit.
I thought inerrancy meant there was no incorrect information in the bible - that it was perfect? If all four gospels are chronological, how could all four timelines be correct? Specifically, how
can Matthew/Mark be correct while Luke has these events in opposite order?
Thanks for any help.
Re: Contradiction? Matthew/Mark vs Luke - Last Supper Chronology?
There's your problem.
The Gospels, as Greco-Roman biography, are not strictly chronological like modern biographies. Nothing wrong with this, it's just the nature of the genre.
I thought inerrancy meant there was no incorrect information in the bible - that it was perfect? If all four gospels are chronological, how could all four timelines be correct? Specifically, how
can Matthew/Mark be correct while Luke has these events in opposite order?
You can still hold to inerrancy while believing that the Gospel writers sometimes arrange their events in different orders for whatever reason.
Re: Contradiction? Matthew/Mark vs Luke - Last Supper Chronology?
The Gospels, as Greco-Roman biography, are not strictly chronological like modern biographies. Nothing wrong with this, it's just the nature of the genre.
To add to this: Many Christians today take the modernist and postmodernist perspectives that if the gospel doesn't present events in a strictly chronological order, it thus means that the author
(s) were lying or deceiving their audiences. But in that culture and time, no one would have thought of it that way. It needs to be recognized by Christians (and critics) today, that it would
never be considered as deceptive in ancient times for one (or all four) of the gospel authors to rearrange closely-knit historical events if it served the purpose of the story they're presenting.
Re: Contradiction? Matthew/Mark vs Luke - Last Supper Chronology?
Thank you BrckBrln and markedward for your responses. Everyone else I have asked about this just dances around the issue itself or simply responds with, "there isn't a contradiction" but without
explaining why.
This is exactly what I was thinking, the mindset that I have been approaching the sections with. This is because I was taught that the bible was inerrant - not that it was infallible. These were
from a combination of teachers: Southern Baptists, Independent-Fundamental Baptists and Calvary Chapel Adherents. I was told the bible was a supernatural message originating from God, and written
by 40 men who were moved to do so by the Holy Spirit. Inerrancy (as it was explained to me) is the perfection of Scripture, the idea that every sentence, word, even the letters are in the perfect
place, and have several meanings (such as the gospels each having certain types of words that are divisible by 7 proving that each gospel was written first - i.e. supernatural design).
I was taught that the bible is inerrant in the originals and infallible in the copies and modern translations - meaning the inerrancy was lost through our twisting of Scripture, but God still
preserves the message that saves.
So, what books or websites would you recommend that deal with this concept specifically: that in their culture and time it was commonplace to write theologically and not chronologically?
Thanks again for the help.
Re: Contradiction? Matthew/Mark vs Luke - Last Supper Chronology?
A quick browse through google provided this link:
Re: Contradiction? Matthew/Mark vs Luke - Last Supper Chronology?
If you place a coffee cup in the center of a table, and seat 4 people around it, one on each side, and ask them to describe the cup, 1 will tell you the cup has a handle on the right side, while
another will say there is a handle on the left side, another in the middle, and one won't mention a handle at all. All four are correct, and do not contradict each other. 4 perspectives.
Generally the gospel that is used for chronological purposes is Luke, because Luke states that his gospel 'sets in order':
It seemed good to me also, having had perfect understanding of all things from the very first, to write unto thee in order, most excellent Theophilus, Luke 1:3
Also, Luke was a Gentile, not a Hebrew, and he was a physician. Those two things suggest that his thinking was more linear, like ours, and thus he would naturally write chronologically.
The others however were Hebrew, and chronology was not nearly as important to them.
In studying the 4 gospels, one will eventually realize that each presents Jesus in a different light.
One shows Him as a suffering servant, another as a king, one shows him as son of God, and the 4th as son of man.
The things each author points out is directly related to his general 'theme', and thus you find some things in one gospel but not another. John for instance writes a small bit about the very
beginning of Jesus' ministry, then skips all the way to the 3rd year, and writes mostly about that timeframe. He admits that many more things could have been written about Jesus, about what He
did, so that the world could not contain the books, which tells us he was very selective in choosing what to write about.
Those four perspectives, by the way, coincide with the 4 colors found in the tabernacle and temples, which were scarlet, blue, white, and purple, the 4 camps with their standards (flags) which
contained pictures of 4 creatures, around the tabernacle in the wilderness, and the 4 creatures around the throne of God in Revelation, which are a bullock, an eagle, a man, and a lion.
So how do they correspond?
Blue, like the sky or heavens - eagle, whose abode is the heavens - Christ the Son of God, Whose abode is heaven
Scarlet, like blood - bullock, the sacrifice and servant - Christ the Suffering Servant and Lamb of God
White, purity - man - Christ the Son of Man, the Last Adam
Purple, royalty - lion, king of the creatures - Christ The Coming King
This tends to tie much of the Bible together, from Genesis and Exodus and the prophets to the Gospels and Revelation, showing that there is actually only One Author of it all, who used a number
of different vessels to write His singular message.
Re: Contradiction? Matthew/Mark vs Luke - Last Supper Chronology?
If you place a coffee cup in the center of a table, and seat 4 people around it, one on each side, and ask them to describe the cup, 1 will tell you the cup has a handle on the right side, while
another will say there is a handle on the left side, another in the middle, and one won't mention a handle at all. All four are correct, and do not contradict each other. 4 perspectives.
I would agree with you. But what I'm talking about is, take those 4 people and instead of them describing a static object (a coffee cup) have them recount a specific sequence of events (say, the
order of events that occurred in the upper room). It would be possible that persons 1, 2, and 4 recounted relatively the same events in the same order, but person 3, though he had all the events listed, had them listed in a different sequence.
We know it is impossible that sequence A, B, C, D, E could have happened in two distinct ways simultaneously. B has to come after A and before C, otherwise it would not be 'B'. So either 1, 2, and 4 are incorrect and 3 has the correct sequence, or 3 is wrong and 1, 2, 4 are correct, or 1, 2, 3, and 4 are all incorrect. There is no way that 1, 2, 3, and 4 could all be correct, since this is a sequence of events and there is only 1 possible outcome that could have happened.
Perspective does play a role, which is why some things are mentioned by John and Matthew that are not mentioned by Mark and Luke, etc. These different pieces fit into the puzzle without problems. The difficulty is when Luke describes the same events that Matthew and Mark describe, but in a different order of occurrence. This can't be correct. Someone got it wrong (or, one or more gospel writers never intended to write chronologically in the first place, as has been asserted).
If all 4 are chronological, then this is a direct hit to the idea of inerrancy, since this is the belief that even the words are perfect (in the originals).
Re: Contradiction? Matthew/Mark vs Luke - Last Supper Chronology?
Thank you very much for the book recommendation. Actually has this one at the library, so I will be reading it next.
A quick browse through google provided this link:
http://books.google.com/books?id=trsvDbHlpdUC&pg=PA51&dq=ancient+Hebrew+non-linear+writing+styles&hl=en&sa=X&ei=mZHyTtz6FaShiQKHrpDODg&sqi=2&ved=0CGcQ6AEwBw#v=onepage&q=ancient Hebrew non-linear writing styles&f=false
Re: Contradiction? Matthew/Mark vs Luke - Last Supper Chronology?
Huh? I have never heard of anything like this, and I must admit it sounds a little hokey. Could you elaborate on what you are talking about here?
Re: Contradiction? Matthew/Mark vs Luke - Last Supper Chronology?
Is it possible that Jesus spoke it twice?
Some of the disciples didn't hear it, and then Jesus had to repeat it.
Re: Contradiction? Matthew/Mark vs Luke - Last Supper Chronology?
Going by this theory, I tried an idea out to see if it might work. Let's say these are the events in question: the betrayal part, the breaking of bread part, and the fruit of the vine part. Matthew and
Mark both have it in the same order: betrayal-bread-vine. But Luke has it in the opposite order: vine-bread-betrayal. If Jesus repeated both the betrayal and the vine events then it would look
like this:
betrayal - vine - bread - vine - betrayal.
vine - betrayal - bread - betrayal - vine
I chose the former because it looked more plausible to me (but you can try out the sequences yourself). Here's what it would look like using the books of Luke and Matthew:
When it was evening, he reclined at table with the twelve. And as they were eating, he said, “Truly, I say to you, one of you will betray me.” And they were very sorrowful and began to say to him
one after another, “Is it I, Lord?” He answered, “He who has dipped his hand in the dish with me will betray me. The Son of Man goes as it is written of him, but woe to that man by whom the Son
of Man is betrayed! It would have been better for that man if he had not been born.” Judas, who would betray him, answered, “Is it I, Rabbi?” He said to him, “You have said so.” Matthew 26:20-25
When the hour had come, He reclined, and the apostles with Him. And He said to them, "I have earnestly desired to eat this Passover with you before I suffer; for I say to you, I shall never again
eat it until it is fulfilled in the kingdom of God." And when He had taken a cup and given thanks, He said, "Take this and share it among yourselves; for I say to you, I will not drink of the
fruit of the vine from now on until the kingdom of God comes." Luke 22:14-18
Now as they were eating, Jesus took bread, and after blessing it broke it and gave it to the disciples, and said, “Take, eat; this is my body.” And he took a cup, and when he had given thanks he
gave it to them, saying, “Drink of it, all of you, for this is my blood of the covenant, which is poured out for many for the forgiveness of sins. Matthew 26:26-28
I tell you I will not drink again of this fruit of the vine until that day when I drink it new with you in my Father’s kingdom.” Matthew 26:29
"But behold, the hand of the one betraying Me is with Mine on the table. For indeed, the Son of Man is going as it has been determined; but woe to that man by whom He is betrayed!" And they began
to discuss among themselves which one of them it might be who was going to do this thing. Luke 22:21-23
However, this would mean that Jesus gave out the cup of the fruit of the vine twice... either it was given out twice, or the first time it was given to one part of the group and the second time it
was given to the other part of the group.
Re: Contradiction? Matthew/Mark vs Luke - Last Supper Chronology?
Hi David,
Thanks for this attempt. The concept is interesting, but I don't see how it actually works out. In order to have Jesus saying the Kingdom statement and the betrayer statement twice, it also
requires that they all questioned who the betrayer was twice.
There are three verses or passages that are out of order in Luke. Luke 22:15-18, 19-20, and 23.
The order in Luke is: Kingdom, Communion, Betrayer Declared, Betrayer Condemned, All Question Themselves.
The order in Matthew/Mark is: Betrayer Declared, All Question Themselves, Betrayer Condemned, Communion, Kingdom (Matthew and Mark add the Betrayer dipping with Hand between Questioning and
Condemnation; Matthew adding Judas' question "Is it I" after Condemnation).
It's interesting that Kingdom, Communion is at the beginning in Luke but at the end in Matthew/Mark only reversed (Communion, Kingdom). Luke has Betrayer Declared, Betrayer Condemned, All
Question, while Matthew/Mark reverses Betrayer Condemned, All Question to All Question, Betrayer Condemned. It's almost as if these two accounts (Luke vs Matthew/Mark) are a mirror image of each
other, though this falls apart as you look at the point at which the mirror resides, as it is not a perfect reflection: Betrayer Condemned and Question order in Luke is reversed in Matthew/Mark,
leaving Betrayer Declared in the same position.
So, at whatever position we shift Luke down to align a specific event with Matthew/Mark, there are additional events that then conflict. Even if we allow for the repetition, there is still
difficulty in placing Judas from John into the time frame.
When I read through the individual accounts, they all really seem to be referencing the same events (not events repeated several times). Just from a logical standpoint, I would have to ask why
Jesus would be repeating himself so much. Why are the disciples also repeating themselves? It doesn't seem to jibe.
The JW publications state that "apparently Luke was not written in chronological order." Of course, they adhere to Matthew/Mark's chronology because they don't want Judas at communion (because
they say only the 144,000 can take communion).
I am leaning toward the gospel writers not adhering to chronology (but not because of the JW explanation), simply because there seems to be no way to resolve the conflict without violating at
least one individual timeline, or accepting that Scripture is not inerrant (which would be fine if, in fact, Scripture were only infallible, but I need to do more research on the subject before I
make a decision).
Re: Contradiction? Matthew/Mark vs Luke - Last Supper Chronology?
I've done some research on the chronology and harmony of the Gospels. I have found that there's a critical assumption concerning the timing that is often made which causes confusion when trying
to correlate the records.
This site shows detailed quotes, descriptions, charts and diagrams http://4Gospels.info . It includes information on the Last Supper.
Besides harmonizing the 4 accounts, the site makes the following observation: By starting with the chronology in the Gospel of John, noting Yahushua's attendance at the Feasts, and correlating
this to the Synoptic Gospels, we discover that his public ministry was approximately a year long, with no significant gaps in the record.
Re: Contradiction? Matthew/Mark vs Luke - Last Supper Chronology?
Hi GitRDunn,
I can absolutely elaborate.
Chuck Missler, in his Matthew commentary, describes it as a code of authentication. He says it is:
An automatic security monitor, watching over every single letter of the text, that doesn't rust or wear out, running continually over several thousand years... the Fingerprint signature of the
Author; a noncompromisable design. For instance, "sevens" in the Bible occur in over 600 passages; some overt, some structural, and some hidden. Are these underlying Heptadic structures used as a
signature?
He goes on to provide this challenge (which he states is how the genealogy of Jesus Christ is written in Matthew 1:1-11 in Greek):
• The number of words must be divisible by 7, evenly.
• The number of letters must also be divisible by 7, evenly.
• The number of vowels and the number of consonants must also be
divisible by 7.
• The number of words that begin with a vowel must be divisible by 7.
• The number of words that begin with a consonant must be divisible
by 7.
• The number of words that occur more than once must be divisible
by 7.
• The number of words that occur in more than one form must be
divisible by 7.
• The number of words that occur in only one form must be divisible
by 7.
• The number of nouns shall be divisible by 7.
• Only 7 words shall not be nouns.
• The number of names shall be divisible by 7.
• Only 7 other kinds of nouns are permitted.
• The number of male names shall be divisible by 7.
• The number of generations shall be divisible by 7.
He goes on to describe the chances of such patterns and discusses Ivan Panin's work in discovering some of the mathematical design underlying the Greek and Hebrew Scriptures.
Now, I don't mention this because I necessarily BELIEVE these claims to be true. I mention it because Missler is a staunch supporter of inerrancy - as he contends the bible is a "supernatural
message system from beyond our time domain that was created to resist hostile jamming."
If there is any credibility at all to these mathematical calculations, though, it would stand to reason that the text would be utterly perfect (in the originals). As such, then, there would be a
way to reconcile the four accounts into one timeline (that is, of course, if the original writers intended a timeline to begin with).
As a side note, I have tested some of Missler's claims concerning the Torah codes and they do not necessarily come out as he describes them. A response I got back from his website states that he
acknowledges this in a book he later wrote. I have never tested these claims of Matthew's genealogy. But that wasn't really the point I was getting at.
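For what it's worth, counts like the ones in that list are easy to test mechanically on any text you can paste in; here is a rough Python sketch (purely illustrative: it ignores Greek vowel/consonant classification and the grammatical categories, and the names are my own, not Missler's or Panin's):

def check_sevens(text):
    # Check a few of the countable claims for divisibility by 7.
    words = text.split()
    letters = [c for w in words for c in w if c.isalpha()]
    counts = {
        "words": len(words),
        "letters": len(letters),
        "words occurring more than once":
            sum(1 for w in set(words) if words.count(w) > 1),
    }
    return {name: (n, n % 7 == 0) for name, n in counts.items()}

print(check_sevens("paste the passage to be tested here"))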
Re: Contradiction? Matthew/Mark vs Luke - Last Supper Chronology?
That is certainly an interesting idea, although I don't know that I agree. Which language is this supposed to work in? If it's in English, which translation is it supposed to work for? It just
seems a little farfetched to me. Thank you for explaining, though.
Revisiting Quantum Notions of Stress
Submitted by Pradeep Sharma on Sat, 2009-08-29 08:17.
I plan to submit the attached paper on quantum mechanical definition of stress in the next few weeks. Comments and feedback are welcome. Fair amount of work has been done on stress definition in the
context of classical molecular dynamics (also attracting some controversies). In contrast, there appear to be several open issues in the quantum case. Hopefully, the attached paper provides a
starting point.
Abstract: An important aspect of multiscale modeling of materials is to link continuum concepts such as fields to the underlying discrete microscopic behavior in a seamless manner. With the growing
importance of atomistic calculations to understand material behavior, reconciling continuum and discrete concepts is necessary to interpret molecular and quantum mechanical simulations. In this work,
we provide a quantum mechanical framework to a distinctly continuum quantity: mechanical stress. While the concept of the global macroscopic stress tensor in quantum mechanics has been well
established, there still exist open issues when it comes to a spatially varying local quantum stress tensor. We attempt to shed some light on this topic by establishing a general quantum mechanical
operator based approach to continuity equations and from those, introduce a local quantum mechanical stress tensor. Further, we elucidate the analogies that exist between (classical) molecular
dynamics based stress definition and the quantum stress. Our derivations seem to suggest that the local quantum mechanical stress may not be an observable in quantum mechanics and therefore traces
the non-uniqueness of the atomistic stress tensor to the gauge arbitrariness of the quantum mechanical state-function. Lastly, the virial stress theorem (of empirical molecular dynamics) is
re-derived in a transparent manner that elucidates the analogy between the quantum mechanical and the classical global stress.
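As a point of reference for the molecular dynamics side of that analogy, here is a minimal sketch of the classical virial stress computation for a set of point particles. The function name and the pairwise-force interface are illustrative assumptions, not taken from the paper, and the sign convention on the kinetic term varies across the literature.

import numpy as np

def virial_stress(positions, velocities, masses, pair_force, volume):
    # positions, velocities: (N, 3) arrays; masses: (N,) array
    # pair_force(r_ij) -> (3,) force on particle i due to j, with r_ij = r_i - r_j
    n = len(positions)
    sigma = np.zeros((3, 3))
    # Kinetic contribution: -(1/V) * sum_i m_i v_i (outer) v_i
    for m, v in zip(masses, velocities):
        sigma -= m * np.outer(v, v)
    # Pair (virial) contribution: (1/2V) * sum_{i != j} r_ij (outer) f_ij
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            r_ij = positions[i] - positions[j]
            sigma += 0.5 * np.outer(r_ij, pair_force(r_ij))
    return sigma / volume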
Dear Pradeep:
Interesting work and I'm glad it's finally completed. I have some minor comments.
1) In Eq.(12), you don't define T.
2) In Eq.(17), can \epsilon_{\alpha\beta} be non-symmetric?
3) In Eq. (34), you motivate/define stress given the force f. This is something that has been considered by many as you mention throughout the paper. Again, as you mention, this is used in
electromagnetism to define Maxwell's "stress" tensor. I don't have any objections but is this what we do in continuum mechanics? I think the answer is no. We start with traction, something that
represents the mutual mechanical interaction of two pieces of the body in contact. Then as a consequence of balance of linear momentum Cauchy's theorem is proved; a stress tensor exists (traction is
linear in unit normal vector and given the unit normal, stress tensor is the operator acting on the unit normal giving the traction). In other words, we don't start with closed surfaces and the total
force acting on them; interaction of two bodies in contact on a (small and not closed) surface is considered. In electromagnetism, the total force acting on a subbody is obtained and then one looks
for something divergence of which is the force. I'm not sure if Maxwell "stress" is a real stress in the sense of continuum mechanics, though it can be a useful quantity. In the discrete setting, it
is true that there is this gauge invariance but I'm not sure if this is the best way of approaching the stress problem, though at this time I don't have a better alternative in mind. Just something
to think about/discuss.
4) If you're thinking about a mechanics journal, perhaps a short appendix on basics of quantum mechanics will make the paper self-contained.
5) Some minor typos: i) Page 10, six lines before Eq.(34), "stress .field" should read "stress field" I think. ii) Page 11, three lines after Eq. (36), "should be chose" should read "should be
chosen". iii) On the same page, right after Eq. (38), I think "and whenever" should be deleted. iv) Page 18, in the last line of the second paragraph, "then stress" should read "than stress".
Arash, thanks very much for looking through the paper and for your comments. We (unfortunately) changed symbols around Equation (12) for the stress tensor. I will correct this and the typos you kindly
pointed out.
Regarding epsilon, it can be unsymmetric, but we consider a symmetric tensor to avoid worrying about rotation down the road.
Regarding your point #3: unless I have misinterpreted your remark, I am not quite sure how to avoid the force-stress identity we used, or whether starting by the traction route would get us a different
answer. I will be curious to know a bit more about your thoughts on this....
I am still unsure of which journal to send this to. Once I have refined it a bit further, if a mechanics editor is willing to consider the paper, I will explore with the editor whether an Appendix of
the type you suggest would be appropriate. I personally think it would be good, but lately I have had a few such appendices removed by referees as being superfluous.
matrices, inverses, transposes, r.r.e.f...
Ok.. I have a series of problems..
I have a square matrix, P. I also have a function, F, that computes the augmented matrix <m x n matrix | m x m identity matrix>
I have to find F(P), F(Transpose(P)), F(P . Transpose(P)), F(Transpose (P) . P)
I also have to find their row reduced echelon forms and comment on their results
obviously some of these matrices are square, and so the row reduced echelon form of F finds their inverse.. but what about for those that arent square?
what are the comments I should be making?
many thanks
a couple extra thoughts/questions..
I am aware that if you have a square matrix, H, and you row reduce the augmented matrix <H | I>, you end up with <I | Inverse(H)>.
is there a result for non square matrices?
And is there a result if you perform this with a non square matrix and its transpose - are the results related?
and what about if you carry this out on H . Transpose(H)
and then Transpose(H) . H of course these are square so you will end up with the inverse of each, but are these inverses related?
many thanks
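One way to experiment with these questions before committing to an interpretation is to compute a few cases symbolically; here is a minimal sketch using SymPy (the matrix P below is an arbitrary example, not from the assignment):

import sympy as sp

P = sp.Matrix([[1, 2], [3, 4]])  # any square matrix

def F(M):
    # Augment M (m x n) with the m x m identity: <M | I_m>
    return M.row_join(sp.eye(M.rows))

for M in (P, P.T, P * P.T, P.T * P):
    rref, pivots = F(M).rref()   # row reduced echelon form of <M | I>
    sp.pprint(rref)

For a square invertible M, the right-hand block of the result is Inverse(M), which is one pattern worth commenting on when you compare the four outputs.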
Math Review
Exponential and Scientific Notation
Exponential notation is a convenient way to express large and small numbers. Numbers can be expressed as a coefficient multiplied by a base raised to an exponent (power).
A × B^N
A = coefficient
B = base
N = exponent
The coefficient is not always written explicitly if it happens to be equal to one. The exponent represents the number of times the base number is multiplied by itself. If the exponent is negative
the expression represents a reciprocal. The examples below all have A = 1.
2^3 = 2×2×2 = 8
2^-3 = 1/2^3 = 1/8 = 0.125
5^4 = 5×5×5×5 = 625
5^-4 = 1/5^4 = 1/625 = 0.0016
10^6 = 10×10×10×10×10×10 = 1000000
10^-6 = 1/10^6 = 1/1000000 = 0.000001
Expressing numbers in the binary system with base-2 is sometimes convenient for work in computer science. Other bases may be useful in particular specialized fields as well. In daily life the
decimal system with base-10 is used most often. Thus using exponential notation with base-10 would offer the most advantages and convenience in many applications. Numbers expressed in exponential
notation with base-10 using integer exponents are said to be written in scientific notation. In general, exponents need not be integers. Recall that roots are equivalent to fractional exponents.
In chemistry, as in daily life, the decimal system is used most commonly. More importantly, extremely large and small numbers are frequently encountered in chemistry. Scientific notation is
commonly used by chemists for these reasons. Any number may be expressed in scientific notation using the following form:
A × 10^N
A = coefficient, any number from 1-10
N = exponent, any integer (positive, negative, or zero)
Below are several examples that demonstrate the equivalence of scientific notation and standard decimal notation. As the examples show, the larger the magnitude of the exponent the more
convenient it is to use scientific notation and avoid writing many zeros.
Standard decimal notation Scientific notation
300,000,000 3×10^8
6,000,000,000,000 6×10^12
6,700,000,000,000 6.7×10^12
602,000,000,000,000,000,000,000 6.02×10^23
0.000000000000000000009 9×10^-21
0.00000005 5×10^-8
0.0000000500 5.00×10^-8
How were these equivalent expressions determined? What is needed is an algorithm, or procedure, for doing the conversion of numbers to and from scientific notation.
Converting to Scientific Notation
1) Move the decimal point so that it follows the first non-zero digit to generate a number from 1-10. Count the number of places moved to the left or right.
2) Write the number generated in step one multiplied by 10^B where B is the number of places the decimal point was moved.
3) Make sure the sign of the exponent is correct. If the decimal was moved left, the sign is positive. If the decimal was moved right, the sign is negative.
Below are two examples. In each example, the degree symbol ° is used to represent the final position of the decimal point. The original position of the decimal is indicated to show how many
places the decimal was moved. Spaces are used in place of commas in large numbers. This notation will be used throughout this section.
A) Convert 140,000 to scientific notation.
Step 1) 1°40 000. (moved 5 places to the left)
Step 2) 1.4×10^5
Step 3) Exponent is positive because the decimal was moved left. The answer is 1.4×10^5
B) Convert 0.00082 to scientific notation.
Step 1) 0.0008°2 (moved 4 places to the right)
Step 2) 8.2×10^-4
Step 3) Exponent is negative because the decimal was moved right.
The answer is 8.2×10^-4
Converting from Scientific Notation
Procedure to convert from scientific notation to a standard decimal number:
1) Move the decimal point as many places as indicated by the exponent. If the exponent is positive, move the decimal point to the right. If the exponent is negative, move the decimal point to the
left. Add zeros to act as place holders if necessary after moving the decimal point.
2) Write the number generated in step one without the power of ten.
A) Convert 2.85×10^4 to standard decimal notation.
Step 1) 2.8500° ×10^4 (moved 4 places to the right)
Step 2) 28 500
The answer is 28,500
B) Convert 1.61×10^-19 to standard decimal notation.
Step 1) 0°0000000000000000001.61×10^-19 (moved 19 places to the left)
Step 2) 0.000000000000000000161
The answer is 0.000000000000000000161
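These procedures are mechanical enough to code directly; below is a small Python sketch of the first one (floating-point rounding may disturb the last digits of the coefficient):

def to_scientific(x):
    # Return (coefficient, exponent) with 1 <= |coefficient| < 10 (or x == 0).
    exponent = 0
    while abs(x) >= 10:
        x /= 10          # decimal point moved left: exponent increases
        exponent += 1
    while 0 < abs(x) < 1:
        x *= 10          # decimal point moved right: exponent decreases
        exponent -= 1
    return x, exponent

print(to_scientific(140000))   # (1.4, 5)
print(to_scientific(0.00082))  # approximately (8.2, -4)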
Engineering Notation
Any number can be expressed in scientific notation using the appropriate exponent, or power of 10. When using extremely large and extremely small numbers, it is often convenient to limit the
exponent to a multiple of three. When this is done the notation is called engineering notation. The result is that when written in engineering notation, the number of thousands, millions,
trillions, millionths, billionths, etc. is readily apparent.
A × 10^N A = coefficient, any number from 1-1000
N = exponent, integer that is a multiple of three
Examples: 5 million = 5,000,000 = 5×1,000,000 = 5×10^6
15.6 trillion = 15,600,000,000,000 = 15.6×1,000,000,000,000 = 15.6×10^12
2 thousandths = 0.002 = 2×0.001 = 2×10^-3
350 millionths = 0.000350 = 350×10^-6
Note that 15.6×10^12 (engineering notation) is the same as 1.56×10^13 (scientific notation).
The procedures for converting a number to/from engineering notation are essentially the same as those for converting to/from scientific notation. The difference is that A can be a number from
1-1000 and the exponent B will always be a multiple of three (i.e. the decimal point is always moved three places at a time).
Most chemistry textbooks use scientific notation rather than engineering notation, but engineering notation does have two advantages. When using metric units, the only prefixes used when the
magnitude of the exponent is three or larger are multiples of three (e.g. kilo, mega, giga, micro, nano, etc.). Another advantage of engineering notation is that it can make certain calculations
easier when done by hand. When using a calculator, the calculator can generally be set to a mode that gives answers in either scientific or engineering notation so no additional work by students
is necessary.
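The engineering form is just as mechanical; a sketch that keeps the exponent a multiple of three:

def to_engineering(x):
    # Return (coefficient, exponent) with 1 <= |coefficient| < 1000
    # (or x == 0) and the exponent a multiple of three.
    coefficient, exponent = x, 0
    while abs(coefficient) >= 1000:
        coefficient /= 1000
        exponent += 3
    while 0 < abs(coefficient) < 1:
        coefficient *= 1000
        exponent -= 3
    return coefficient, exponent

print(to_engineering(15600000000000))  # (15.6, 12)
print(to_engineering(0.000350))        # approximately (350, -6)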
Changing Exponents within Scientific or Engineering Notation
In general, a number can be written in exponential notation in any number of equivalent ways with any number of different coefficients. When the coefficient, A, is from 1-1000 it is called
engineering notation. When the coefficient is from 1-10 it is called scientific notation. The coefficient can also be from 0-1. This would be acceptable in generic exponential notation. In
solving particular problems it is often convenient to be able to change the exponent.
Procedure to convert from exponential notation to scientific notation:
1) Move the decimal point to generate a number from 1-10. Count the number of places moved to the left or right.
2) Adjust the exponent. If the decimal was moved left, increase the exponent by the number of places. If the decimal was moved right, decrease the exponent.
A) Convert 36.9×10^6 to scientific notation.
Step 1) 3°6.9×10^6 (moved 1 place to the left)
Step 2) 3.69×10^7
The answer is 3.69×10^7
B) Convert 0.0014×10^-3 to scientific notation.
Step 1) 0.001°4×10^-3 (moved 3 places to the right)
Step 2) 1.4×10^-6
The answer is 1.4×10^-6
Mathematical Operations using Scientific Notation
Any mathematical operation that can be performed on a number expressed in standard decimal notation can be performed on a number written in any form of exponential notation, including scientific
and engineering notation. It is often true that these operations are more easily performed in scientific notation, especially when doing calculations by hand. Multiplication, division, and
logarithms are generally more convenient using scientific notation. Addition and subtraction are often less convenient.
The concept order of magnitude refers to the exponent of a number written in exponential notation. One million is 10^6, and any number of the form A×10^6 is in the millions. The order of magnitude
is equal to six (though a salary in the millions is called a "seven figure" salary). If the value of A is an integer number of millions it will have six zeros, e.g. 2×10^6, 3×10^6 and 4×10^6
each have six zeros. If A is a decimal, there will be fewer than six zeros, e.g. 2.5×10^6 and 9.8×10^6 each have five zeros. Calculating a logarithm provides information about order of magnitude.
Multiplication or Division using Scientific Notation
Multiplication and division operations performed on numbers expressed in scientific notation are often simpler to perform than on standard numbers. When very large and small numbers are being
used, scientific notation allows one to multiply or divide small numbers less than ten and then add or subtract the exponents.
Procedure to multiply numbers expressed in scientific notation
1) Regroup so that the coefficients are together and the exponents are together
2) Multiply the coefficients. Add the exponents to generate the new exponent
3) If needed, change the exponent so that the coefficient is from 1-10 (Section 4.2a)
A) Calculate z = (3.0×10^6) × (2.5×10^3)
Step 1) z = (3.0×2.5) × (10^6×10^3)
Step 2) z = 7.5×10^(6+3)
The answer is 7.5×10^9
B) Calculate z = (4.0×10^3) × (3.0×10^4)
Step 1) z = (4.0×3.0) × (10^3×10^4)
Step 2) z = 12×10^(3+4) = 12×10^7
Step 3) z = 12×10^7 = 1°2.×10^7 = 1.2×10^8
The answer is 1.2×10^8
Procedure to divide numbers expressed in scientific notation
1) Regroup so that the coefficients are together and the exponents are together
2) Divide the coefficients. Subtract the exponents to generate the new exponent
3) If needed, change the exponent so that the coefficient is from 1-10 (Section 4.2a)
A) Calculate z = (8.0×10^6) / (2.0×10^4)
Step 1) z = (8.0/2.0) × (10^6/10^4)
Step 2) z = 4.0×10^(6-4)
The answer is 4.0×10^2
B) Calculate z = (2.8×10^8) / (5.0×10^3)
Step 1) z = (2.8/5.0) × (10^8/10^3)
Step 2) z = 0.56×10^(8-3) = 0.56×10^5
Step 3) z = 0.56×10^5 = 0.5°6×10^5 = 5.6×10^4
The answer is 5.6×10^4
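Multiplication and division in this form reduce to coefficient arithmetic plus exponent bookkeeping; a sketch for multiplication (it assumes both inputs are already in scientific notation, so a single renormalization step is enough):

def sci_multiply(a, b):
    # a and b are (coefficient, exponent) pairs with coefficients in [1, 10).
    coefficient = a[0] * b[0]
    exponent = a[1] + b[1]       # add the exponents
    if abs(coefficient) >= 10:   # renormalize, as in Step 3 above
        coefficient /= 10
        exponent += 1
    return coefficient, exponent

print(sci_multiply((4.0, 3), (3.0, 4)))  # (1.2, 8)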
Addition or Subtraction using Scientific Notation
Addition and subtraction operations performed on numbers expressed in scientific notation are often less simple to perform than on standard numbers. Both numbers need to have a common exponent.
Usually the number with the smaller exponent is converted to have the larger exponent. After exponents have been changed, the addition or subtraction can proceed normally.
Procedure to add or subtract numbers expressed in scientific notation
1) If needed, change one exponent so that both are equal
2) Align the decimals and add or subtract
3) If needed, change the exponent so that the coefficient is from 1-10
A) Calculate (8.00×10^6) + (2.0×10^4)
Step 1) 2.0×10^4 = 0°02.0×10^4 = 0.020×10^6
Step 2) 8.00 ×10^6
+ 0.020 ×10^6
The answer is 8.02×10^6
B) Calculate (2.8×10^4) - (5×10^3)
Step 1) 5×10^3 = 0°5.×10^3 = 0.5×10^4
Step 2) 2.8×10^4
- 0.5×10^4
The answer is 2.3×10^4
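The same bookkeeping works for addition and subtraction once the exponents are matched; a sketch:

def sci_add(a, b):
    # Rewrite the number with the smaller exponent, then add coefficients.
    (c1, e1), (c2, e2) = a, b
    if e1 < e2:
        c1, e1 = c1 / 10**(e2 - e1), e2
    elif e2 < e1:
        c2, e2 = c2 / 10**(e1 - e2), e1
    return c1 + c2, e1

print(sci_add((8.00, 6), (2.0, 4)))  # (8.02, 6)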
Metric and SI units
Metric and SI units are based on decimal conversions. Metric prefixes represent specific powers of 10. Thus, measurements written in scientific notation or engineering notation can easily be
converted from base units to units with a prefix and vice versa. All the prefixes corresponding to exponents equal to three or larger have equivalents that are multiples of three.
Below is a table of some of the common metric prefixes with the numerical equivalents.
Prefix Numerical equivalent Equivalent power of 10
Giga (G) 1,000,000,000 10^9
Mega (M) 1,000,000 10^6
Kilo (k) 1,000 10^3
Hecto (h) 100 10^2
Deka (da) 10 10^1
- 1 10^0
Deci (d) 0.1 10^-1
Centi (c) 0.01 10^-2
Milli (m) 0.001 10^-3
Micro (µ) 0.000001 10^-6
Nano (n) 0.000000001 10^-9
Scientific and engineering notation can be used to give a convenient equivalent expression for metric units that include a prefix. The prefix and ×10^N can directly substitute for each other. Below
are some examples.
1.5 kg = 1.5×10^3 g
2.54×10^-2 m = 2.54 cm
454 nm = 454×10^-9 m
8.4×10^-3 L = 8.4 mL
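A sketch of that prefix substitution (the table only covers the prefixes from giga down to nano, and "u" stands in for the micro sign):

prefixes = {9: "G", 6: "M", 3: "k", 0: "", -3: "m", -6: "u", -9: "n"}

def with_prefix(x, base_unit):
    # Express x (given in the base unit) using a multiple-of-3 prefix.
    coefficient, exponent = x, 0
    while abs(coefficient) >= 1000 and exponent < 9:
        coefficient, exponent = coefficient / 1000, exponent + 3
    while 0 < abs(coefficient) < 1 and exponent > -9:
        coefficient, exponent = coefficient * 1000, exponent - 3
    return f"{coefficient:g} {prefixes[exponent]}{base_unit}"

print(with_prefix(1.5e3, "g"))   # 1.5 kg
print(with_prefix(454e-9, "m"))  # 454 nm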
Cypress, CA
Find a Cypress, CA Calculus Tutor
...After receiving a B.S. in Nutritional Sciences and a B.A. in Integrative Biology from UC Berkeley in 2009, I moved to New York City to teach middle school science in the South Bronx through
Teach for America, a well-respected non-profit organization that works towards addressing educational inequ...
31 Subjects: including calculus, chemistry, English, reading
...I started tutoring as a favor for a friend and have found that tutoring is one of the most rewarding experiences I can have. Many of my students have gone from D's with no understanding to A's
with the ability to peer tutor their classmates. It is always wonderful to enter a student's home and ...
11 Subjects: including calculus, geometry, probability, algebra 1
With over 20 years as an Engineer, in the aerospace industry, and dual Master's degrees in Applied Mathematics and Mechanical Engineering, I have a knowledge base that is hard to beat. I'm
passionate about knowledge and will go above and beyond to help you understand not only the material, ...
9 Subjects: including calculus, geometry, statistics, algebra 1
...I am looking to earn some extra money in order to pay for textbooks and classes. I prefer to tutor in math. I took up to AP-Calculus BC in high school and scored a four out of five on the
AP-Calculus AB test.
22 Subjects: including calculus, reading, English, geometry
...I find most students require instruction for the verbal section like an athlete needs conditioning exercises--mnemonics to aid their vocabulary skills, practice analogies, a refreshment of
primer concepts in proper grammar. To that end, my role is usually that of a strict coach, monitoring the...
13 Subjects: including calculus, physics, algebra 1, algebra 2
What to expect from pre-calc?
So next year (my last year!) I'm doing a pre-calculus course, sort of. It's not strictly pre-calculus, but it's the closest our school system has to offer AFAIK.
Anyway, since I know the majority of you have been through high-school and stuff already, I was wondering what exactly to expect from this. From what I gather from my teachers, it's basically the
hardest math course I can do in our school. But I haven't exactly been told anything specific about the course, other than that I can't use a calculator (I think). Although I'm pretty sure you
can't use a calculator for calculus anyway, but what about the other stuff in the course?
Course Description:
Mathematics 3207 is considered to be part of the Advanced Math program. It is the last course in a series of four which make up the senior high advanced stream. We recommend that students have
either Math 2205 or Math 3205 completed before taking this course. Topics include Sequences and Series, Functions with an emphasis on Graphing and an Introduction to Calculus, Trigonometry,
Complex Numbers, and essential Algebra necessary for success in post-secondary Mathematics. Students should have access to a graphing calculator.
Functions with an emphasis on Graphing
After a while these get annoying. Basically, studying functions and the shapes that their graphs result in.
f(x) = x <------------ line
Circles, curves of such, etc. etc.. I imagine you already know this kind of stuff.
Introduction to Calculus,
Introduction to the real stuff....
Trigonometry
Cos, sin, tan.... If you've programmed with them, and understand it, that'll help. Combine it into formulas and such. I can never remember these well.
Complex Numbers
i is a number such that i^2 = -1.
No real number does that, but the arithmetic built on i works out consistently. Enjoy it.
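If you want to poke at them before the course starts, most languages have complex numbers built in. A quick check in Python, for instance:

z = 1j                       # Python spells i as j
print(z * z)                 # (-1+0j), i.e. i squared really is -1
print(abs(3 + 4j))           # 5.0, the distance from the origin
print((3 + 4j).conjugate())  # (3-4j)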
essential Algebra necessary for success in post-secondary Mathematics.
Probably linear equations and how to solve systems of them. If you haven't covered Gauss-Jordan stuff, yet, this'll be it.
Students should have access to a graphing calculator.
Or a C compiler and a strange and bizarre love of ASCII art.
It's not so much any of those concepts giving you trouble as much as putting them all together. I'm relatively lazy with math, and can't be bothered to do much of it, and I don't usually remember
everything. Heck, if I ever need to write down math-related notes, I usually write down pseudocode-programming-style notations. For example, instead of a traditional "not equals" sign, I
literally write !=.
I probably got a bunch of mistakes in this post anyway that real math folks will correct!
But overall, if you're good at it, then this should be fun. As with programming, just remember to always do the homework assignments.
BTW, I was under the impression you were already in college/university. Guess I need to check profiles more often.
Sounds like the equivalent of an English AS level maths. If you can understand trigonometry, which you have probably done already, then basic calculus should be nothing to worry about. Just a
couple of simple rules to differentiate or integrate plus some other stuff.
For algebra you can expect to do linear equations, simultaneous linear equations, quadratics, and perhaps polynomials. They should let you use a calculator for most things, but I wouldn't bother
with a graphics calculator; they are a rip off, and there are plenty of programs that do that stuff for you. Or even better, code your own
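In that spirit, here is a bare-bones sketch of a homemade ASCII grapher in Python (it assumes f maps into [-1, 1]):

import math

def ascii_plot(f, x0, x1, rows=20, width=60):
    # One text row per sample of f on [x0, x1].
    for r in range(rows):
        x = x0 + (x1 - x0) * r / (rows - 1)
        col = int((f(x) + 1) / 2 * (width - 1))  # map [-1, 1] to [0, width)
        print(" " * col + "*")

ascii_plot(math.sin, 0, 2 * math.pi)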
I was studying this sort of stuff last year as an external student. It was fun to start off with, but when I got a list of all the things I needed to learn it started to dawn on me that it was
just a matter of memorizing procedures, which kinda killed the fun. So anyway, I went off to code a slew of unfinished games instead, and now I wish I had stayed and done the maths. It's good
stuff to know. Good luck with it
That sucks if you can't use a calculator on the tests, but it makes sense I guess. I know my TI-89 helped way too much in calculus (now that I think about it, it gave me an unfair advantage, but
oh well).
I'd still highly recommend the TI-89. If nothing else, you can use it to check answers when you're doing homework, since it can do symbolic operations like solving equations and differentiating/
integrating. You might also need to have another scientific (non-graphing) calculator if they allow that on tests.
Just don't let the calculator be a crutch! For the last two calculus courses I took, we weren't allowed to use any calculator on the tests, and I completely screwed up my derivatives on the final
(which was the easiest part of the exam)
"Think not but that I know these things; or think
I know them not: not therefore am I short
Of knowing what I ought."
-John Milton, Paradise Regained (1671)
"Work hard and it might happen."
Calculus is the study of certain kinds of operations on FUNCTIONS. Unlike the math you've already learned, the basic objects of calculus are not numbers, but FUNCTIONS. So to understand calculus,
you need to understand functions, and this is the purpose of pre-calc.
A calculator isn't going to help you much. You are learning concepts, not performing tedious calculations on paper that would be better done by a computer. You'll probably be learning about
functions, series (both convergent and divergent, and why it matters), infinitesimals, and complex numbers (as the course description says). These are things you have to comprehend in your mind.
The calculator is no use there.
And don't let anybody scare you or tell you "calculus is hard." The fact is, calculus is easier than many other branches of "advanced" mathematics. It's a fact that if you are repeatedly told
that you're not going to "get" something, you ARE going to have trouble with it, no matter how smart you are. Just go into this telling yourself "this stuff is easier than long division" and
don't let anybody tell you otherwise. It's just MORE stuff, not HARDER stuff.
Of all the things you are taught in that class, the most likely to be neglected is graphing. The ability to visualize functions in your mind will put you way ahead of others in future mathematics
classes you take. (Also, the development of the ability will make your brain smarter. At math, anyway.) So make sure you become a ninja at that.
I'd still highly recommend the TI-89. If nothing else, you can use it to check answers when you're doing homework, since it can do symbolic operations like solving equations and differentiating/
integrating. You might also need to have another scientific (non-graphing) calculator if they allow that on tests.
This person hasn't used an HP graphing calculator (probably :-) ). Get an HP-50 and put it in RPN mode. It is a true hacker's calculator. Unless you've already got a graphing calculator...
Edit: inserted 'probably'
Here's what I gather from it :
Sequences and Series
Sequences we've pretty much been doing since we were kids. It's a straightforward concept, and since this is intro, it's likely you won't learn any of the hardcore stuff involving sequences (aka
limits, infs, sups and such), but rather the definition of what a sequence is, as well as maybe some of the more famous sequences like the Fibonacci sequence.
Functions with an emphasis on Graphing and an Introduction to Calculus
Pretty much what MacGyver said, although "Introduction to calculus" can mean a lot of things, so it's hard to say what that specifically means.
Trigonometry
cos, sin, tan, sec, cotan, cosec. All pretty straightforward stuff, and if you've been on this board long enough (which I believe you have), you've probably seen them somewhere before. You
probably won't learn much more. This should be the easiest part of the course (except maybe the trig proofs)
Complex Numbers
Complex numbers are just a superset of the real numbers (aka the real numbers + some more numbers), and were essentially created in order to maintain the Fundamental Theorem of Algebra. They're
interesting little buggers, but they can be tricky at first if you haven't learned anything about vectors before. But they're as "real" as the numbers -2 and 1/4.
essential Algebra necessary for success in post-secondary Mathematics
This doesn't mean very much, and more likely hints to the fact that you'll have to use algebra in the other subjects. If anything, it could be a slight intro to vectors.
Hope that's of help.
Teacher: "You connect with Internet Explorer, but what is your browser? You know, Yahoo, Webcrawler...?" It's great to see the educational system moving in the right direction
Don't expect anything. Well, I guess you could expect to take the SAT Subject Test Math IIC soon. If you want. It has some good credit in the whole college application process you might soon be going through.
That sucks if you can't use a calculator on the tests, but it makes sense I guess. I know my TI-89 helped way too much in calculus (now that I think about it, it gave me an unfair advantage, but
oh well).
Just don't let the calculator be a crutch! For the last two calculus courses I took, we weren't allowed to use any calculator on the tests, and I completely screwed up my derivatives on the final
(which was the easiest part of the exam)
I always found calculators were of no use in calculus. Since it often involves manipulation of unknown values, often markers will mark 100% for the solution and 0% for the answer. I had a
TI-8-something (the graphing one) and found it no use in any of my calculus or algebra courses. I'm thinking you're better off without a calculator at all for this course.
And what brewbuck said is absolutely right. I've taken 3 engineering calculus courses and although they were rigorous, if you kept your head in the game, you could always logically work a problem
out. Now I'm taking a probability and stats course (which, for lack of a better term, I like to call "memorization math") and I am getting absolutely reamed just because of all the differing
cases (i.e. fifty different formulas for fifty different situations). Now I'm starting to wish I was back in calc, where you could take just one or two formulae and re-arrange them to solve
anything you want.
To expand on Happy's answers:
Sequences we've pretty much been doing since we were kids. It's a straightforward concept, and since this is intro, it's likely you won't learn any of the hardcore stuff involving sequences (aka
limits, infs, sups and such), but rather the definition of what a sequence is as well as maybe some of the more famous sequences like the Fibonacci sequence.
My pre-calc class had sequences and we were expected to find limits, but not inf and sup and lim sup (tasty!). So yours might. They will probably cover geometric series and finding their sum via
the formula.
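That closed-form sum is easy to sanity-check numerically, e.g. for the first n terms of a + ar + ar^2 + ...:

a, r, n = 3.0, 0.5, 10
closed_form = a * (1 - r**n) / (1 - r)        # the formula, valid for r != 1
brute_force = sum(a * r**k for k in range(n))
print(closed_form, brute_force)               # both 5.994140625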
cos, sin, tan, sec, cotan, cosec. All pretty straightforward stuff, and if you've been on this board long enough (which I believe you have), you've probably seen them somewhere before. You
probably won't learn much more. This should be the easiest part of the course (except maybe the trig proofs)
You might see new ways of looking at these, too (rather than just geometrically), particularly as periodic functions, and also how they are related to points on the unit circle. You'll have to be
able to recognize things like what their period is. Under graphing you might be expected to graph these, and also functions of the form f(x) = sin(ax+b). This could lead to polar coordinates and
polar representations of complex numbers. And you'll probably learn the joys of trig identities
This is almost equivalent to my Pure Math 30 course during my high school years. You don't really have to worry about it; just do all of the practice problems and you are gonna have a high mark.
Even though it is said to be pre-calculus, most of the time these things are not gonna show up again until Calculus 2. This course is not gonna be hard if you consider yourself not bad at math
already; it is just a preparation for you to go into the math courses in post secondary. This is a course that you need to pay attention in, though, as it's probably harder than your grade 10
and grade 11 math.
The three topics that are probably the most important are:
Trig - you are using it a lot in the math courses during your post secondary
Graphs - a very important part of calculus; know it well before going to university. My friends had a hard time catching up (especially if you are in Electrical, Computer or Software
engineering) during Signals and Transforms. It is also very significant in vector calculus.
Series - this one, my friend, haunted me for a long time in Vector Calculus and Differential Calculus, so it's a good start if you know this well too.
As I said, just do some practice problems and you are gonna be fine
I hate Maths. In this semester I have to learn Integrals (double, triple)
cos, sin, tan, sec, cotan, cosec. All pretty straightforward stuff, and if you've been on this board long enough (which I believe you have), you've probably seen them somewhere before. You
probably won't learn much more. This should be the easiest part of the course (except maybe the trig proofs)
cos, sin and tan are pretty simple. But sec, cosec and cotan I've had a bit of trouble with. And especially when dealing with radians and exact values and stuff. Stuff like tan(2pi/6)/sec(pi/3)
cos(4pi/5). I can get it into degrees of sin and cos just fine, but simplifying radicals kills me.
Just don't let the calculator be a crutch!
Too late
Rashakil Fol
Unless you've already got a graphing calculator...
We were required to get one in Grade 9. I have a TI-83 Plus.
Oddly enough, I haven't tried coding any of this stuff. Probably because when I get home from school, I don't feel like writing my day in C++
As far as I can tell from all your descriptions, the only thing I haven't seen before is complex numbers. I think nearly everything else we've touched on already this year. I think we might have
been supposed to do limits, but I'm not sure (I saw it on last year's final exam, I think). I know there was a section dropped from the year because we ran out of time. Statistics was the unit I
missed.
period and amplitude
Find the amplitude and period of:
y=5sin(2x) + 4sin(7x)
I got 8.886 for the amplitude.
How do I get the period?
you have to consider 2 different periods:
$p_1 = \frac{2\pi}2=\pi$ and
$p_2 =\frac{2\pi}7=\frac27 \cdot \pi$
You get the period of the combined function if the equation
$n \cdot p_1 = k \cdot p_2~, n,k~\in~\mathbb{Z}$ is true. Therefore:
$n \cdot \pi= k \cdot \frac27 \cdot \pi$
$\frac nk = \frac27$. The smallest positive integers satisfying this are $n = 2$ and $k = 7$. With $n = 2$, $n \cdot p_1 = 2\pi$, and with $k = 7$, $k \cdot p_2 = 2\pi$; thus the period of your function is $2\pi$
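You can also check both numbers numerically; a quick Python sketch (the sample density is arbitrary):

import math

f = lambda x: 5 * math.sin(2 * x) + 4 * math.sin(7 * x)

# Peak of |f| over one full period; should land near the 8.886 quoted above.
xs = [2 * math.pi * k / 200000 for k in range(200000)]
print(round(max(abs(f(x)) for x in xs), 3))

# Period check: f(x + 2*pi) should reproduce f(x).
print(all(abs(f(x + 2 * math.pi) - f(x)) < 1e-9 for x in xs[::1000]))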
Brazilian Journal of Chemical Engineering
Print version ISSN 0104-6632
Braz. J. Chem. Eng. vol.24 no.1 São Paulo Jan./Mar. 2007
FLUID DYNAMICS HEAT, MASS TRANSFER AND OTHER TOPICS
Evaluation of cyclone geometry and its influence on performance parameters by computational fluid dynamics (CFD)
W. P. Martignoni^I; S. Bernardo^II; C. L.Quintani^III
^IPetróleo Brasileiro S.A., PETROBRAS, AB-RE/TR/OT, Phone: +(55) (21) 3224-4255, Fax: +(55) (21) 3224-1767, Av. República do Chile 65, Sala 2102, Centro, CEP: 20031-912, Rio de Janeiro - RJ, Brazil.
E-mail: martignoni@petrobras.com.br
^IIInstituto Nacional da Propriedade Industrial (INPI), DIRPA / DQUIM II, Phone: +(55) (21) 9300-0623, Praça Mauá 07, Centro, Rio de Janeiro - RJ, Brazil. E-mail: sergiob@inpi.gov.br
^IIIUniversidade Estadual de Campinas, UNICAMP, Faculdade de Engenharia Química (FEQ), PQGe, Phone: +(55) (19) 35213855, Av. Albert Einstein, 500, Cidade Universitária Zeferino Vaz, CEP: 13083-970,
Cx. P. 6066, Campinas - SP, Brazil. E-mail: clquintani@yahoo.com.br
Cyclone models have been used without relevant modifications for more than a century. Most of the attention has been focused on finding new methods to improve performance parameters. Recently, some
studies were conducted to improve equipment performance by evaluating geometric effects on designs. In this work, the effect of cyclone geometry was studied through the creation of a symmetrical
inlet and a volute scroll outlet section in an experimental cyclone and comparison to an ordinary single tangential inlet. The study was performed for gas-solid flow, based on an experimental study
available in the literature, where a conventional cyclone model was used. Numerical experiments were performed by using CFX 5.7.1. The axial and tangential velocity components were evaluated using
RSM and LES turbulence models. Results showed that these new designs can improve the cyclone performance parameters significantly and very interesting details were found on cyclone fluid dynamics
properties using RSM and LES.
Keywords: CFD; Cyclones; Performance; RSM; LES.
Cyclones are widely used for removal dust of gaseous flows in industrial processes. Cyclone dust collectors have been used in many industrial facilities to collect solid particles from gas-solid
flows and to reduce air pollution originating in chimney smoke from chemical plant drier equipment (Ogawa, 1997). Currently, with new engineering applications of cyclones as dryers, reactors and
particularly in the removal of high-cost catalysts from gases in petroleum refineries, industries require a greater understanding of turbulent gas flows, which could lead to rigorous procedures
capable of accurately predicting efficiency, velocity and pressure fields (Meier and Mori, 1999).
There are many types of cyclones for the purpose of solid particle separation. However, the following are the most typical: returned flow or reversed flow, axial flow and rotary flow with tangential
injection of the second gas flow into the cyclone body. The standard kind of cyclone (composed of tangential inlet pipe to the main body for generating the rotational gas flow inside the equipment)
has an exit pipe, cone and dust bunker. Ogawa and Hikichi (1981) proposed that the solid particles entering the cyclone immediately bifurcate into two layers of dust due to the eddy current based on
the secondary flow on the upper cover surface in the coaxial space between cyclone body and exit pipe. One of them goes around the coaxial space on the upper cover surface and rotates around the exit
pipe with the gas flow. The other rotates and descends along the surface of the cyclone body. Then, on the surface of the cone, the dust layer, which is pressed onto the cone surface by the
centrifugal force, descends aided by gravitational force and descending airflow in the boundary layer. Lastly, these dust layers are deposited in the dust bunker (Zhou and Soo, 1990). However, some
of the deposited dust rolls up from this dust layer by the secondary flow in the boundary and flows through the exit pipe. Centrifugal effects, which are responsible for collecting fine particles,
depend directly on the tangential velocity of the solid particles. Therefore, the tangential velocity of the gas flow, which relates to the pressure drop, must be increased in order to increase
cyclone efficiency. These processes are the mechanism of separation of solid particles in cyclones.
The historical transition of cyclones development can be found in Crawford (1976), Storch (1979) and Ogawa (1984), where many old and interesting types of cyclones are discussed. The most standard
construction of the returned flow type is composed of a cylindrical body with a fixed diameter and a conical part. Physical models or families of cyclones are established when a set of dimensions is
fixed in relation to the diameter. There are various cyclone models in the literature, but the most famous are those of Stairmand (1951) and Lapple (1951). These cyclones were developed through
experimental tests with the aim of optimizing performance. However, according to Dirgo and Leith (1985), there is no theoretical basis to assure that a specific model has all high performance
characteristics. The advantage of using these cyclone models is that their performance properties are supported by many studies found in the literature.
Since its conception over a century ago, many researchers have contributed to the large volume of work on improving the efficiency of cyclones by introducing new design and operation variables (Jo et
al., 2000). However, in most cases, the improvement in efficiency is marginal and in some cases it is associated with complex structure and additional operating costs (Gregg, 1995).
A good understanding of the fluid dynamics in a cyclone is required in order to make further design improvements. Analytical techniques do not permit variations in the geometry to be readily
assessed. Computational fluid dynamics (CFD) models provide an economical means of understanding the complex fluid dynamics and how it is influenced by changes in both design and operating conditions.
The first application of CFD techniques to cyclone simulation was presented by Boysan et al. (1982). After this pioneering work, several studies were devoted to modifying the turbulence models in order
to improve the prediction of velocity and pressure fields.
If the inlet duct is ignored, the cyclone shape is almost axisymmetric, and a number of previous CFD models used this feature in order to simplify the model to a two-dimensional case (Duggins and Frith,
1987). While this greatly reduces computational time, a two-dimensional model is limited, since the inlet duct location will break the flow pattern symmetry. Furthermore, such simplifications cannot
be used to assess changes in the inlet design or offset vortex finders (Witt et al., 1999). The recent increase in computing power and grid generation capabilities have allowed the latest CFD models
to include the full three-dimensional shape and to be used for evaluating design modifications. The lack of high-quality measurements of the flow field in cyclones has limited the validation of past
models (Witt et al., 1999).
In this work, some effects of an additional symmetrical inlet and a scroll outlet section are presented. The starting point was based on some works available in the literature (Patterson and Munz,
1989; Zhao et al., 2004 and Bernardo, 2005). Numerical simulations of gas-solid flow phase were carried out using CFX 5.7.1, a CFD code available on the market. For the turbulence model, the RSM and
LES formulations were tested using a refined grid and data on fluid dynamics properties and performance parameters (collection efficiency and pressure drop) were obtained.
The conservation equations of the phases involved in the flow can be written in a generalized form in this work. In an Eulerian-Eulerian model, the Reynolds-averaged equations have been used, as follows:
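In a standard two-fluid sketch (with $f_\alpha$ denoting the volume fraction of phase $\alpha$ and $\mathbf{M}_\alpha$ the interphase momentum transfer, symbols adopted here), the continuity and momentum balances read

$$\frac{\partial}{\partial t}\left(f_\alpha \rho_\alpha\right) + \nabla\cdot\left(f_\alpha \rho_\alpha \mathbf{u}_\alpha\right) = 0 \qquad (1)$$

$$\frac{\partial}{\partial t}\left(f_\alpha \rho_\alpha \mathbf{u}_\alpha\right) + \nabla\cdot\left(f_\alpha \rho_\alpha \mathbf{u}_\alpha \mathbf{u}_\alpha\right) = -f_\alpha \nabla p + \nabla\cdot\left[f_\alpha\left(\mu_\alpha \nabla \mathbf{u}_\alpha - \rho_\alpha \overline{\mathbf{u}'_\alpha \mathbf{u}'_\alpha}\right)\right] + f_\alpha \rho_\alpha \mathbf{g} + \mathbf{M}_\alpha \qquad (2)$$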
where the subscript $\alpha$ represents the generic phase (solid or gas), $\rho_\alpha$ is the density of generic phase $\alpha$, $t$ represents the time and $\mu_\alpha$ is the viscosity. Finally,
$\mathbf{u}_\alpha$ represents the velocity vector, defined by the Reynolds-averaged decomposition $\mathbf{u} = \bar{\mathbf{u}} + \mathbf{u}'$ into mean and fluctuating parts (see Meier and Mori, 1999).
In equation (2), the averaging of the fluctuating velocities gives rise to the Reynolds stress terms $-\rho_\alpha \overline{\mathbf{u}'_\alpha \mathbf{u}'_\alpha}$.
These equations are applicable to incompressible and transient cyclone flow in 3D coordinate systems. The flow can be considered isothermal, thus the energy conservation equation can be neglected.
Numerical methods have been developed to solve the equations presented above, whose complexity is significantly increased due to the Reynolds stress terms on the right-hand side of Equation (2). For
strongly swirling flows the standard k-ε turbulence model is known to have limitations (Meier and Mori, 1999). In order to obtain values for the Reynolds stress terms, a turbulence model, known as
the Reynolds stress model (RSM), was used here. To compare the results obtained with RSM, another turbulence model, known as the large eddy simulation (LES) model, was also used. These models will be
described in the next section.
Reynolds Stress Model
This model is based on transport equations for all components of the Reynolds stress tensor and the respective dissipation rate. It is suitable for strongly anisotropic flows. This model does not
use the eddy viscosity hypothesis. An equation for the transport of Reynolds stresses in the fluid is solved for the individual stress components.
The differential equations, given for each component of the Reynolds stresses, were developed and their solution provides each stress component, allowing anisotropy in the turbulent stress terms.
More details on this development can be found in Meier and Mori (1999), Bernardo (2005), Bernardo et al. (2006) and the ANSYS® CFX®-5.7™ Users Guide.
Here, $\phi_{ij}$ is the pressure-strain correlation, $k$ is the turbulent kinetic energy, $\varepsilon$ is the dissipation rate of turbulent kinetic energy and $P$ the exact production term. $P$ and $k$ are given by
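(in the standard form of the exact production term of the stress-transport equations, sketched here)

$$P_{ij} = -\left(\overline{u'_i u'_k}\,\frac{\partial U_j}{\partial x_k} + \overline{u'_j u'_k}\,\frac{\partial U_i}{\partial x_k}\right), \qquad P = \frac{1}{2}\,P_{kk}, \qquad k = \frac{1}{2}\,\overline{u'_i u'_i}.$$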
As the turbulence dissipation appears in the individual stress equations, an additional equation for $\varepsilon$ is still required:
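A sketch of its standard high-Reynolds-number form (with $\nu_t$ the eddy viscosity and $\sigma_\varepsilon$ a model constant, symbols assumed here) is

$$\frac{\partial \varepsilon}{\partial t} + U_k\,\frac{\partial \varepsilon}{\partial x_k} = \frac{\varepsilon}{k}\left(c_{\varepsilon 1}\,P - c_{\varepsilon 2}\,\varepsilon\right) + \frac{\partial}{\partial x_k}\left[\left(\nu + \frac{\nu_t}{\sigma_\varepsilon}\right)\frac{\partial \varepsilon}{\partial x_k}\right].$$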
In these equations, the anisotropic diffusion coefficients of the original models are replaced by an isotropic formulation, which increases the robustness of RSM. The model constants were obtained
from the ANSYS® CFX®-5.7™ Users Guide and are presented:
$c_s = 0.22$; $c_{\varepsilon 1} = 1.45$; $c_{\varepsilon 2} = 1.9$; $C_{\mu RS} = 0.1152$.
Large Eddy Simulation
Large eddy simulation (LES) starts from the same set of differential equations, but treats the momentum equations by filtering, decomposing the variables into a large-scale (resolved) part and a
small-scale (unresolved) part. The LES model was intended primarily for research purposes and for single-phase, single-component, non-reacting flow simulations (ANSYS® CFX®-5.7™ Users Guide).
LES is an approach which solves for the large-scale fluctuating flows and uses "subgrid" scale turbulence models for the small-scale motion. With these methods, time-dependent equations are
solved for the turbulent motion either with no approximations and all large scales resolved, or with the equations filtered in some way to remove very fine time and length scales (ANSYS® CFX®-5.7™
Users Guide).
Any flow variable f can be written as:
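(in the standard filtered decomposition)

$$f = \bar{f} + f',$$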
where $f'$ is the small-scale part, and
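the filtered part $\bar{f}$ is sketched here as the usual finite-volume average over a cell of volume $\mathrm{Vol}$:

$$\bar{f}(\mathbf{x},t) = \frac{1}{\mathrm{Vol}}\int_{\mathrm{Vol}} f(\mathbf{x}',t)\,d\mathbf{x}'.$$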
After volume averaging and neglecting density fluctuations, the filtered Navier-Stokes equation becomes
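A sketch for constant density $\rho$ and kinematic viscosity $\nu$:

$$\frac{\partial \bar{u}_i}{\partial t} + \frac{\partial}{\partial x_j}\,\overline{u_i u_j} = -\frac{1}{\rho}\,\frac{\partial \bar{p}}{\partial x_i} + \nu\,\frac{\partial^2 \bar{u}_i}{\partial x_j\,\partial x_j}.$$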
As described above, turbulence models seek to solve a modified set of transport equations by introducing averaged and fluctuating components. For example, a velocity $U$ may be divided into an average component and a fluctuating component.
The nonlinear transport term in the filtered equation can be developed as
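(a sketch of the standard expansion, with term numbers matching the discussion below)

$$\overline{u_i u_j} = \underbrace{\overline{\bar{u}_i\,\bar{u}_j}}_{(1)} + \underbrace{\overline{\bar{u}_i\,u'_j}}_{(2)} + \underbrace{\overline{u'_i\,\bar{u}_j}}_{(3)} + \underbrace{\overline{u'_i\,u'_j}}_{(4)}.$$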
In time averaging the terms (2) and (3) vanish, but when using volume averaging this is no longer true. Introducing the subgrid-scale (SGS) stresses, $\tau_{ij}$, as
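(in the standard definition)

$$\tau_{ij} = \overline{u_i u_j} - \bar{u}_i\,\bar{u}_j,$$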
we can rewrite the filtered Navier Stokes equations as
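which take the sketched form

$$\frac{\partial \bar{u}_i}{\partial t} + \frac{\partial}{\partial x_j}\left(\bar{u}_i\,\bar{u}_j\right) = -\frac{1}{\rho}\,\frac{\partial \bar{p}}{\partial x_i} + \frac{\partial}{\partial x_j}\left(\nu\,\frac{\partial \bar{u}_i}{\partial x_j} - \tau_{ij}\right).$$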
The LES approaches require fine grids and small time steps, particularly for wall-bounded flows. However, they can give details on the structure of turbulent flows, such as pressure fluctuations,
which cannot be obtained from the RANS formulation.
a) Smagorinsky Model
The Smagorinsky model can be thought of as combining the Reynolds averaging assumptions given by $L_{ij} + C_{ij} = 0$ (with $L_{ij}$ the Leonard stresses and $C_{ij}$ the cross terms) with a
mixing-length-based eddy viscosity model for the Reynolds SGS tensor. It is thereby assumed that the SGS stresses are proportional to the modulus of the strain rate tensor,
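in the standard eddy-viscosity form (sketch):

$$\tau_{ij} - \frac{1}{3}\,\delta_{ij}\,\tau_{kk} = -2\,\nu_{SGS}\,\bar{S}_{ij}, \qquad \bar{S}_{ij} = \frac{1}{2}\left(\frac{\partial \bar{u}_i}{\partial x_j} + \frac{\partial \bar{u}_j}{\partial x_i}\right).$$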
To close the equation, we need a model for the SGS viscosity $\nu_{SGS}$. Based on dimensional analysis, the SGS viscosity can be expressed as
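(the dimensional relation, sketched)

$$\nu_{SGS} \propto l\;q_{SGS},$$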
where $l$ is the length scale of the unresolved motion (usually the grid size $\Delta = (\mathrm{Vol})^{1/3}$) and $q_{SGS}$ is the velocity scale of the unresolved motion.
In the Smagorinsky model, based on an analogy to the Prandtl mixing length model, the velocity scale is related to the gradients of the filtered velocity:
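namely (sketch)

$$q_{SGS} = \Delta\,\left|\bar{S}\right|, \qquad \left|\bar{S}\right| = \left(2\,\bar{S}_{ij}\,\bar{S}_{ij}\right)^{1/2}.$$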
This yields the Smagorinsky model for the SGS viscosity:
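(in its standard form)

$$\nu_{SGS} = \left(C_S\,\Delta\right)^{2}\left|\bar{S}\right|,$$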
where $C_S$ is the Smagorinsky constant. The value of the Smagorinsky constant for isotropic turbulence with an inertial range spectrum is
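commonly quoted (with $C_K \approx 1.5$ the Kolmogorov constant, value assumed here) as

$$C_S = \frac{1}{\pi}\left(\frac{2}{3\,C_K}\right)^{3/4} \approx 0.18.$$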
For practical calculations the value of $C_S$ is changed depending on the type of flow and mesh resolution. Its value is found to vary between 0.065 (channel flows) and 0.25; often a value
of 0.1 is used. Equations (8)-(22) cited here are taken from the ANSYS® CFX®-5.7™ Users Guide.
The equations presented in part 2 were solved numerically using the commercial CFD code CFX, in which the control volume method is used to discretize the transport equations. The pressure-velocity
coupling algorithm SIMPLEC (SIMPLE Consistent) and the higher upwind interpolation scheme were used in all numerical experiments. More details on these schemes can be found in Patankar (1980).
Time steps of 0.001 seconds and a total simulation time of 12 seconds were used. A transient run was performed, using the steady-state results for initial conditions, with four iterations for each
time step. Numerical experiments were carried out with an accuracy of 10^-5 for the Euclidean norm of the mass source in the pressure-velocity coupling. A tight convergence criterion could be
achieved using these conditions.
Computational Grid and Boundary Conditions
Table 1 shows the geometric properties of the cyclone used by Patterson and Munz (1989). It was the starting point for this study. Figure 1 shows the cyclone described in Table 1.
Starting from the geometry proposed by Patterson and Munz (1989), some modifications of this geometry that were proposed in the literature (Zhao et al., 2004) were added. The first was the creation
of a symmetrical tangential inlet by splitting the inlet into two opposite ones, and the second was the substitution of inlet and outlet ducts by volute ones. All other cyclone parameters were
maintained unchanged. The study was carried out using Patterson and Munz (1989) as reference to validate the model proposed in this work.
Figures 2 and 3 show these cases. The case proposed in Figure 2 was based on Zhao et al. (2004), and that proposed in Figure 3 represents a scale-down of an industrial application. For all cases
presented in Figures 1, 2 and 3, grids with tetrahedral elements were built using Ansys ICEM CFD 5.0. Table 2 shows the characteristics of the grids.
The effect of grid refinement had previously been evaluated in the simulation process.
Because the LES formulation was applied, the grids presented in Table 2 are very refined compared to those of former simulations. Martignoni et al. (2005) used the same cases as those presented in
Table 2, but with much less refined grids (around 30% of the cells of the present ones). It was found that the LES formulation requires very refined grids to obtain the best results for this type
of study.
As boundary conditions, data used by Patterson and Munz (1989) and applied in the cases studied in this work are shown in Table 3.
Furthermore, the numerical computation ignored particle size distribution and used an average particle size of 10 µm, obtained from a grade efficiency curve provided by Patterson and Munz (1989). The
solid phase was considered an inviscid fluid with an inlet volume fraction of $6.13\times10^{-5}$.
For case 2 (symmetrical inlet) the total flow was divided in two; the inlet velocity in this case was therefore 7.6 m/s at each of the two inlets. The boundary conditions for inflow velocities and volume fraction at
the cyclone inlet were assumed to be uniform. The boundary conditions for the solid phase were similar to those for the gas phase, except at the wall: the "no slip" condition was used for the gas
phase and the "free slip" condition for the solid phase. The outlet boundary condition was set at atmospheric pressure.
Turbulence Model Validation for 3-D Flows
Turbulence models applied in the present work were validated by Bernardo et al. (2005), Bernardo (2005) and Bernardo et al. (2006). In these studies, the authors carried out numerical simulations
involving the RSM and LES formulations in the cyclone proposed by Patterson and Munz (1989). Additional details can be seen in the literature cited.
Convergence and Stability for Solution
Simulation tests were carried out for a transient state. The Eulerian-Eulerian formulation was used for both continuous and dispersed phase flow to perform the numerical simulations. To study the
dispersed-phase particle trajectories, the Lagrangian approach was used. In order to guarantee the convergence and stability of the simulation, we verified the relation between pressure drop and
overall collection efficiency with real simulated time. This test provided a way of tracking the progress of real time during the simulation. Figure 4 shows how the performance parameters behave with
time, for the initial 5 s of real simulated time. We can see in Figure 4 that when the real time reaches 1.3 s, the performance parameters reach a stable value and do not change with time. This means
that the numerical solution was stabilized and convergence was reached. Figure 4 refers to case 2 (symmetrical inlet section), but the same behavior was also observed for the other cases described in
Table 2, when both the RSM and the LES turbulence models were applied.
Qualitative Results: Fluid Dynamics Profiles
In this section, the numerical results for the gas-solid flow in the three types of cyclones are presented. The initial total velocity for both gas and solid phases was 15.2 m/s. Figure 5 shows the
maps of tangential velocity for case 1. In this figure, we can see that the LES turbulence model was able to resolve the profile of the gas flow inside the cyclone: the profile does not appear
continuous, but winds in discontinuous layers, while the RSM formulation did not show this important characteristic. Bernardo et al. (2005) observed this LES capability when working with an inclined
inlet section in the cyclone proposed by Patterson and Munz (1989).
For case 2 (symmetrical inlet) and case 3 (double volute cyclone) it was observed that the LES formulation did not have the same characteristics for the tangential velocity profile for gas phase in
gas-solid flow as those in case 1. The geometric characteristics proposed for cases 2 and 3 could be responsible for the effect. In comparison with the cyclone geometry presented in case 1 (see
Figures 1 and 5), cases 2 and 3 modified respectively the inlet and the outlet track for gas flow inside the same cyclone. For these cases, there were no differences between the maps of tangential
velocity for the RSM and LES formulations. In all numerical simulations, tests were carried out using very refined grids for both the LES and the RSM turbulence models. Thus, the absence of
fluctuations observed for cases 2 and 3 confirms a relation between geometric characteristics and gas flow profile patterns. Figure 6 shows maps of tangential velocity for all three cases analyzed,
using the LES turbulence model. In this figure, we can see exactly the influence of cyclone geometry on tangential velocity profiles.
Regarding the tangential velocity profile, there is no data available in Patterson and Munz (1989) for case 1 (conventional cyclone). We extracted points for tangential velocity from another
study (Patterson and Munz, 1996), where the same conventional cyclone was used under the same operational conditions. In that work the authors used the axial position 12 cm below the top of the
cylindrical cyclone body. Figure 7 shows the numerical solutions obtained using the LES turbulence model for distributions of tangential velocity.
There is no experimental data available for comparison between numerical and experimental results. All results shown in Figure 7 are from numerical tests.
Figure 7 shows the capability of the turbulence model to represent the radial distributions of tangential velocity throughout the cyclone, with a good representation of the swirling flow and a
tangential velocity peak resembling a Rankine curve, typical of flows in this kind of apparatus. The geometric modifications caused lower values of tangential velocity for the gas
phase in gas-solid flow, mainly for the symmetrical inlet option. When Figures 6 and 7 are analyzed together, we can conclude that the two proposed geometric modifications of the conventional cyclone
used by Patterson and Munz (1989) were able to modify fluid dynamics patterns for gas flow inside this equipment. This will be reflected by the performance parameters. Figure 8 shows maps of pressure
for numerical solutions in the three cyclones.
In Figure 8 we can observe that the total pressure drop decreases from the conventional cyclone to both new geometric design cyclones. In particular, the type with the symmetrical inlet has the lowest
pressure drop of the proposed models. These facts are in agreement with the tendency of the tangential velocity profiles presented in Figure 7. The reduction in pressure drop obtained when
modifications of conventional cyclone geometry are applied is a very important point to consider for improving the performance parameter of cyclones. Quantitative results for total pressure drop will
be presented in the next section.
Quantitative Results: Cyclone Performance Parameters
In Table 4 the performance parameters obtained for numerical simulations of all geometric types of cyclones studied in this work are presented. The overall collection efficiency of these cyclones was
calculated based on work proposed in the literature. More details about this equation can be found in Bernardo (2005).
The solid phase was characterized by an average diameter with an inviscid behaviour. This average diameter was obtained from a grade efficiency curve by Patterson and Munz (1989). In order to predict
the overall collection efficiency of the solid phase, it was assumed that all solid particles had the same diameter and that there was no interaction between the particles.
Experimental data presented in Table 4 were obtained from Patterson and Munz (1989) for the conventional cyclone. The numerical results for this case, represented by the conventional cyclone line in
Table 4, showed good agreement with the experimental data.
The proposed modifications of the geometry of the conventional cyclone increased the overall cyclone collection efficiency. There are no significant differences between the quantitative results
obtained with the RSM and the LES turbulence models. Bernardo et al. (2005) had observed this fact and concluded that the LES formulation is very useful to detect microscopic turbulent structures,
while RSM does not detect them. Results in this work confirmed that the LES formulation contributes qualitative detail to the fluid dynamics profiles of the flow inside cyclones. However, the LES
formulation required very refined grids; therefore, the computing costs to carry out the simulations were higher.
From these results we can see that the overall cyclone performance parameters studied here (pressure drop and collection efficiency) are influenced by cyclone geometric parameters. These parameters
are significantly improved and offer an alternative in the study of cyclone design.
A new inlet and outlet design applied to conventional cyclones, including a double inlet section and a scroll inlet and outlet section, was presented and analyzed in this work, using very refined
grids and the RSM and LES formulations as turbulence modeling.
Turbulence characteristics in the gas flow profiles were observed for the LES formulation, but not for the RSM formulation. In the study of new designs, these effects, indicating a relation between
inlet or outlet cyclone geometry and gas flow profiles inside the cyclones, need to be considered.
Very good performance parameter results were obtained. It was verified that the overall cyclone collection efficiency increased and the pressure drop decreased for both formulations, but the design
with two symmetrical inlets showed a larger reduction in pressure drop than the other model.
All results indicate that these ideas can provide an alternative method for studying fluid dynamics inside cyclones and improve performance parameters. The next step in this work is to apply the
proposed design procedure to different types of cyclones, especially industrial ones.
Bernardo, S., Mori, M., Peres, A.P. and Dionísio, R.P. (2006). 3-D Computational Fluid Dynamics of Gas and Gas-Particle Flows in a Cyclone with Different Inlet Section Angles. Powder Technology, vol. 162, issue 3, pp. 190-200.
Bernardo, S., Peres, A.P. and Mori, M. (2005). Computational Study of Cyclone Flow Fluid Dynamics using a Different Inlet Section Angle. Thermal Engineering (RETERM), vol. 4, issue 1, p. 18.
Bernardo, S. (2005). Estudo dos Escoamentos Gasoso e Gás-Sólido em Ciclones pela Aplicação de Técnicas de Fluidodinâmica Computacional. Ph.D. Thesis, UNICAMP, Campinas-SP, Brazil, 273 p.
Boysan, F., Ayers, W.H. and Swithenbank, J.A. (1982). Fundamental Mathematical Modeling Approach to Cyclone Design. Inst. of Chemical Engineers, vol. 60, p. 222.
Crawford, M. (1976). Air Pollution Control Theory. McGraw-Hill.
Dirgo, J. and Leith, D. (1985). Performance of Theoretically Optimized Cyclones. Filtration and Separation, March/April, p. 119.
Duggins, R.K. and Frith, P.C.W. (1987). Turbulence Anisotropy in Cyclones. Filtration and Separation, Nov-Dec, p. 394.
Gregg, W.W. (1995). High Efficiency Cyclones for Powder Processing Applications. Adv. Filtration and Separation Technology, vol. 9, p. 240.
Jo, Y., Tien, C. and Ray, M.B. (2000). Development of a Post Cyclone to Improve the Efficiency of Reverse Flow Cyclones. Powder Technology, n. 113, p. 97.
Lapple, C.E. (1951). Processes use many collector types. Chemical Engineering, May, p. 144.
Martignoni, W.P., Bernardo, S. and Quintani, C.L. (2005). Evaluation of Geometric Modifications at an Experimental Cyclone using Computational Fluid Dynamics (CFD). Proceedings of the 2nd CFD Oil, Rio de Janeiro, RJ.
Meier, H.F. and Mori, M. (1999). Anisotropic Behavior of the Reynolds Stress in Gas and Gas-Solid Flows in Cyclones. Powder Technology, vol. 101, p. 108.
Ogawa, A. and Hikichi, T. (1981). Theory of Cut-Size of a Rotary Flow Dust Collector. Bulletin of JSME, vol. 24, n. 188, p. 340.
Ogawa, A. (1984). Estimation of the Collection Efficiencies of the Three Types of the Cyclones Dust Collectors from the Standpoint of the Flow Patterns in the Cylindrical Cyclone Dust Collectors. Bulletin of JSME, vol. 27, n. 223, p. 64.
Ogawa, A. (1997). Mechanical Separation Process and Flow Patterns of Cyclone Dust Collectors. Ind. Applied Mech. Rev., vol. 50, n. 3, p. 97.
Patankar, S.V. (1980). Numerical Heat Transfer and Fluid Flow. Hemisphere Pub. Co., New York.
Patterson, P.A. and Munz, R.J. (1989). Cyclone Collection Efficiencies at Very High Temperatures. The Canadian Journal of Chemical Engineering, vol. 67, April, p. 321.
Patterson, P.A. and Munz, R.J. (1996). Gas and Particle Flow Patterns at Room and Elevated Temperatures. The Canadian Journal of Chemical Engineering, vol. 74, April, p. 213.
Stairmand, C.J. (1951). The Design and Performance of Cyclone Separators. Trans. Inst. Chem. Eng., vol. 29, p. 356.
Storch, O. (1979). Industrial Separators for Gas Cleaning. Elsevier.
Witt, P.J., Mittoni, L.J., Wu, J. and Shepherd, I.C. (1999). Validation of a CFD Model for Predicting Gas Flow in a Cyclone. Proceedings of CHEMECA99, Australia.
Zhao, B., Shen, H. and Kang, Y. (2004). Development of a Symmetrical Spiral Inlet to Improve Cyclone Separator Performance. Powder Technology, vol. 145, issue 1, pp. 47-50.
Zhou, L.X. and Soo, S.L. (1990). Theory of Gas-Solid Flow and Collection of Solids in a Cyclone Separator. Powder Technology, vol. 63, p. 45.
ANSYS® CFX®-5.7™ Users Guide.
(Received: December 12, 2005 ; Accepted: December 5, 2006)
* To whom correspondence should be addressed
|
{"url":"http://www.scielo.br/scielo.php?pid=S0104-66322007000100008&script=sci_arttext","timestamp":"2014-04-18T06:43:07Z","content_type":null,"content_length":"68915","record_id":"<urn:uuid:b1a5bd22-c09c-41cd-b50a-de10cd75515f>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00144-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Math Help
prove that in any group, an element and its inverse have the same order.
let $a \in G$ and let $n\in \mathbb{Z}^+$ be the order of $a$, i.e., $n$ is the smallest positive integer such that $a^n=e$. Then $(a^{-1})^n = (a^n)^{-1} = e^{-1} = e$. Suppose there exists a positive integer $m < n$
such that $(a^{-1})^m = e$. Then $\underbrace{a^{-1}a^{-1}\cdots a^{-1}}_{m \mbox{ copies}} = e$ implies, after multiplying both sides by $a^m$, that $\underbrace{aa\cdots a}_{m \mbox{ copies}} = a^m = e$, which
contradicts the minimality of $n$. Hence, the order of $a^{-1}$ is also $n$.
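For anyone who likes a machine-checked version, the same fact is available in Lean 4's Mathlib (the lemma name `orderOf_inv` is assumed from current Mathlib naming; a sketch):

    import Mathlib

    -- The order of an element of a group equals the order of its inverse.
    example {G : Type*} [Group G] (a : G) : orderOf a⁻¹ = orderOf a :=
      orderOf_inv a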
|
{"url":"http://mathhelpforum.com/advanced-algebra/50709-prove.html","timestamp":"2014-04-18T00:35:22Z","content_type":null,"content_length":"33822","record_id":"<urn:uuid:1c8ff027-e316-40ab-adb9-e72d7c41e466>","cc-path":"CC-MAIN-2014-15/segments/1397609532374.24/warc/CC-MAIN-20140416005212-00306-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Random Matrix Theory: Foundations and Applications
Short description of the event:
The conference is a continuation of the "Cracow Matrix sequel":
2005 - Applications of Random Matrices to Economy and other Complex Systems
2007 - Random Matrix Theory: From fundamental Physics to Applications
2010 - Random Matrices, Statistical Physics and Information Theory
Traditionally, the aim of the conference is to get together mathematicians, physicists, statisticians and other practitioners of random matrix theory, in order to promote recent results and to
establish interdisciplinary links for applications.
We would also like to draw your attention to the synchronized event: 16th Workshop "Non-commutative harmonic analysis: random matrices, representation theory and free probability, with applications,"
to be held at the International Conference Centre of the Mathematical Institute of Polish Academy of Sciences in Bedlewo (near Poznań, Poland), during the period July 6-12 2014, just immediately
after the Cracow conference.
Early list of Invited Speakers
• Gernot Akemann
• Jinho Baik
• Jean-Paul Blaizot
• Arup Bose
• Yang Chen
• Merouane Debbah
• Thomas Guhr
• Eugene Kanzieper
• Satya Majumdar
• Jamal Najim*
• Alexandru Nica
• Raj Rao Nadakuditi*
• Gregory Schehr
• Alexander Soshnikov
• Roland Speicher
• Jac Verbaarschot*
• Pierpaolo Vivo
• Jiun-Chau Wang
• Konstantin Zarembo
• Jean-Bernard Zuber*
*To be confirmed.
|
{"url":"http://www.euro-math-soc.eu/node/4362","timestamp":"2014-04-20T21:45:53Z","content_type":null,"content_length":"14074","record_id":"<urn:uuid:d9ef6cc4-76e1-4051-81d4-68cbf1fa14ee>","cc-path":"CC-MAIN-2014-15/segments/1397609539230.18/warc/CC-MAIN-20140416005219-00109-ip-10-147-4-33.ec2.internal.warc.gz"}
|
The Boomerang Attack
Recently, a means of improving the flexibility of differential cryptanalysis was discovered by David A. Wagner. Called the boomerang attack, it allows the use of two unrelated characteristics for
attacking two halves of a block cipher.
This diagram shows how the attack might work if everything goes perfectly for a particular initial block. The numbered points in the diagram show the steps involved in the attack.
1. Start with a random block of plaintext. Based on the characteristic known for the first half of the cipher, if we XOR a certain vector with it, called d1 (equal to 00100000 in the diagram), the
result after half-enciphering the two plaintext blocks, before and after the XOR, will differ by c1 (equal to 00110110 in the diagram), if what we wish to learn about the key happens to be true.
2. Since the characteristic applies only to the first half of the cipher, the results after the whole block cipher won't be related. Take those two results, and XOR each one with d2 (equal to
01001011 in the diagram), which is the vector corresponding to the characteristic for the second half of the cipher. In each case, XORing d2 with a ciphertext block is expected to change the
result after deciphering halfway by c2 (equal to 00010000 in the diagram), again, if something is true of the key.
3. With two intermediate results that differ by c1, if each one has c2 XORed to it, the two results of the XOR will still differ by c1. Since this difference now relates to the first half
characteristic, it can be seen in the final output, thus indicating the truth or otherwise of two hypotheses about the key.
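To make the three steps concrete, here is a toy sketch in Python (an illustration only, not Wagner's construction): both halves of the hypothetical cipher are deliberately affine over GF(2), namely a bit rotation followed by a key XOR, so each half's characteristic holds with probability one and the boomerang always comes back with difference d1.

    # Toy boomerang quartet on 8-bit blocks (hypothetical cipher for illustration).
    # A rotation is linear over GF(2), so differences propagate deterministically:
    # E0 turns input difference d1 into c1 = rotl(d1, 1), and decrypting through
    # E1 turns ciphertext difference d2 into c2 = rotr(d2, 3).

    MASK = 0xFF

    def rotl(x, r):
        return ((x << r) | (x >> (8 - r))) & MASK

    def rotr(x, r):
        return ((x >> r) | (x << (8 - r))) & MASK

    def encrypt(x, k0, k1):
        x = rotl(x, 1) ^ k0      # first half E0
        return rotl(x, 3) ^ k1   # second half E1

    def decrypt(y, k0, k1):
        x = rotr(y ^ k1, 3)      # invert E1
        return rotr(x ^ k0, 1)   # invert E0

    k0, k1 = 0x5A, 0xC3                  # arbitrary key halves
    d1, d2 = 0b00100000, 0b01001011      # the differences from the diagram

    for p1 in range(256):                # the property holds for every start block here
        p2 = p1 ^ d1                                       # step 1: plaintext pair
        c3 = encrypt(p1, k0, k1) ^ d2                      # step 2: shift both
        c4 = encrypt(p2, k0, k1) ^ d2                      #         ciphertexts by d2
        p3, p4 = decrypt(c3, k0, k1), decrypt(c4, k0, k1)  # step 3: decrypt
        assert p3 ^ p4 == d1             # the boomerang returns difference d1
    print("boomerang returned d1 for all 256 starting blocks")

In a real cipher the halves are nonlinear, so the relation holds only for the fraction of quartets predicted by the product of the characteristics' probabilities.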
This increases the potential effectiveness of differential cryptanalysis, because one can make use of characteristics that do not propagate through the complete cipher. Also, certain kinds of added
complexities, such as a bit transpose in the middle of the cipher, do not serve as a barrier to this method, since two values differing by an XOR with some value merely differ by an XOR with some
other value after a bit transpose.
However, it has its limitations. It only produces a result if both characteristics are present; it does not allow testing for each characteristic independently. Even so, it seems to double the number
of rounds a cipher needs to be considered secure.
Since at one end of a sequence of rounds, the precise difference between blocks that is required for the characteristic must be input, it isn't possible directly to cascade this method to break a
block cipher into four or more pieces.
Note that any single Feistel round has a large family of "characteristics" that hold with 100% probability but tell nothing about the key: any pattern that leaves unchanged the half that is input
to the F-function, while applying an XOR to the half that is XORed with the F-function's output, qualifies. One of the things this method can therefore do is allow attacks against the
first or last 15 rounds of DES to be used against 16-round DES. Hence, if by some other trick a block cipher with 16 rounds could be broken into 16 pieces like this, one could test for an informative
characteristic that applied to any single round.
The Boomerang Amplifier Attack
A technique called the boomerang amplifier attack works like this: instead of considering the pairs of inputs, differing by the XOR required for the characteristic of the first few rounds, as
completely independent, one could note that it would be quite likely that somehow, taking two such pairs at a time, one could obtain any desired XOR difference between two such pairs by the birthday
paradox. This allows a boomerang attack to be mounted with only chosen plaintext, instead of adaptive chosen ciphertext as well.
I wondered if one could use the boomerang amplifier technique noted above to allow breaking a block cipher up into three pieces instead of two.
First, you start by enciphering a large number of chosen plaintext pairs, differing by the XOR amount required for the characteristic of the first piece. By the birthday paradox, there will be a good
chance of some pair of two of those pairs, somewhere among that number, which differ by the right amount to engage the differential characteristic of the middle piece.
I then take all the outputs of this process, and XOR them by the quantity required to engage, upon decipherment, the characteristic of the third piece.
Doing so ensures that the corresponding two pairs of blocks also has the XOR amount for the characteristic of the middle piece, this time in the reverse direction, as can be seen more clearly when we
look at the following diagram of the upwards journey by itself.
Unfortunately, though, the thing about a differential characteristic is that it only refers to the XOR between two blocks, and not the values of the blocks.
If a characteristic implies that A xor B equals X xor Y, and equals the characteristic, then it is true that A xor X and B xor Y are equal, but the value to which both of them are equal could have
any value. Hence, we have not preserved any structure that implies that we will have the correct differential for the first piece, during decipherment.
Well, we can still apply the differential for the first piece, and then continue in the reverse order again.
But we run into the same problem; we have no characteristic preserved on output. So it appears that breaking a block cipher into three parts is hopeless. But then we notice that, by iterating in this
fashion over our large number of input pairs, we can indefinitely preserve the characteristic in the middle.
This would only work if the characteristics involved had probability one, or very nearly one. Assuming that somehow this could be overcome, though, since one has produced a large number of pairs, in
the same spot within our large number of pairs, that have the middle differential activated, if one of the elements of each of two pairs differs from the same element in another cycle by the right
amount for the top differential, then the one connected with it by the middle differential will also match, not the other member of the same pair, and this is how the two pairs involved with the
middle differential can finally be distinguished.
But the birthday paradox just says that, to find two matching values for a block having N possible values, you only need a number of blocks proportional to the square root of N. Using the birthday
paradox twice means that the complexity of this attack is proportional to the square root of N times itself, in other words to N, and so this attack, even if it were possible, has a complexity
equivalent to that of the codebook attack: just use as chosen plaintext every possible input block, and record the result.
|
{"url":"http://www.quadibloc.com/crypto/co4512.htm","timestamp":"2014-04-18T05:30:47Z","content_type":null,"content_length":"8311","record_id":"<urn:uuid:d1629c3e-4ef2-45eb-965d-26b0f07cee78>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00579-ip-10-147-4-33.ec2.internal.warc.gz"}
|
IMU-Net 25: September 2007
A Bimonthly Email Newsletter from the International Mathematical Union
Editor: Mireille Chaleyat-Maurel, Université René Descartes, Paris, France
1. EDITORIAL
Dear Reader,
First of all, let me say that it is a great honour for me to serve on
the Executive Committee of the IMU, even more since I am the first
Spanish mathematician who has such privilege.
Two are my main concerns in IMU: one is the relation of mathematics
with other disciplines, and for IMU this means to increase and to
stress relations and common programs with the other scientific unions
which form ICSU, the International Council for Science
http://www.icsu.org/. Mathematicians are inclined to stay within
their own territories; this is good for doing nice mathematics, but
not for gaining more public appreciation. Furthermore, our
implication with IUPAP (Physics), IUTAM (Mechanics) or IAU
(Astronomy), among other ICSU's, is so natural and has a long history.
Moreover, the needs of mathematical modelling for the modern
scientific challenges like climate change, biodiversity or sustainable
development make necessary this interdisciplinary collaboration. We
should as well recall UNESCO resolution 29 C/DR126, with occasion of
the World Mathematical Year 2000, considering the central importance
of mathematics and its applications in today's world regarding
science, technology, communications, economics and numerous other fields.
The second concern is related with Mathematical Education. In many
countries, one can observe an increasing gap between the community
of research mathematicians and the community of secondary-school
mathematics teachers and education researchers. For this reason, the
General Assembly of IMU, held last year in Santiago de Compostela,
approved the following Resolution 8: "The General Assembly of the IMU
reaffirms the importance of the issues treated by ICMI. It recognizes
the importance of continuing and strengthening the relationship of IMU
with ICMI and urges the increased involvement of research
mathematicians in mathematical education at all levels." This is just
the beginning of a new trend initiated by IMU and I am sure that in
the coming years we will collect the fruits of such politics.
A different aspect is the relation among mathematics and statistics.
In my view, Probability and Statistics are emergent fields showing a
lot of activity in the last years; remember the recent Abel prize
(Srinivasa S. R. Varadhan), the Gauss Prize (Kiyoshi Itô) or the 2006
Fields Medal awarded to Wendelin Werner. A closer collaboration with
the International Statistical Institute and the Bernoulli Society for
Mathematical Statistics and Probability should be encouraged and I am
sure that IMU will do.
IMU should be the reference point for the many organizations in all
the countries in these two issues, relations with other sciences and
approaching of the "different" mathematical communities.
Manuel de León
Member, IMU Executive Committee
-> back to contents
2. IMU ON THE WEB
Scan this book! Not even wrong. The WDML vision.
Should we goggle at Google? "In the race to digitize the public
domain, is the future of the library at stake? For all the potential
of Web.2.0 technologies, our literary future still rests on what we
make of our past, specifically, the centuries of ideas and human
thought recorded in the miles of print books sitting on library
shelves around the world." LibraryJornal.com (August 15, 2007)
reports the views of the Open Content Alliance.
In the very first IMU on the Web (March 2004) the CEIC commented sadly
that some actions may have unintended consequences that will actually
increase the strain on the journals system. The recent holus bolus
resignation of yet an another editorial board (that of the journal
"K-theory") gives rise to useful commentary that warrants thoughtful
reading and some prompt action in advising your university library ...
The vision of the World Digital Mathematics Library includes our being
able to click on any citation and to be led seamlessly to that
reference and thence onto its bibliography and then ... . Total
costless seamlessness is still distant but the vision is being
realised by the mathematical sciences review journals as clearly
manifested in the June 2007 update of the Jahrbuch Database forming
part of the Electronic Research Archive for Mathematics ... .
... find relevant URLs and more on these matters at
Alf van der Poorten (alfATmaths.usyd.edu.au), member of the CEIC
-> back to contents
3. INTERNATIONAL CONGRESS OF MATHEMATICIANS 2006 (ICM 2006)
There are new material in the ICM2006 web site (http://www.icm2006.org):
- the proceedings
- the videos
-> back to contents
4. INTERNATIONAL CONGRESS OF MATHEMATICIANS 2010 (ICM 2010)
Program Structure
The ICM 2010 Program Committee is going to meet in early October to
define the program structure of the International Congress to take
place in Hyderabad in August 2010. If you have suggestions for the PC
please write to Hendrik Lenstra, the PC Chair, at
hwlicm@math.leidenuniv.nl immediately.
See also
for further information.
-> back to contents
5. INTERNATIONAL CONGRESS ON MATHEMATICAL EDUCATION (ICME 11)
The International Congress on Mathematical Education is held every
four years under the auspices of the International Commission on
Mathematical Instruction (ICMI).
The 11th International Congress on Mathematics Education [ICME-11]
will be held in Monterey, Mexico, 6-13 July 2008.
For information, go to http://icme11.org/
-> back to contents
6. FIFTH EUROPEAN CONGRESS OF MATHEMATICS (5ECM)
The Fifth European Congress of Mathematics will be held in Amsterdam
(The Netherlands) 14-18 July 2008.
The calls for nominations for the EMS Prizes and for the Felix Klein Prize
are still open.
For more information see:
-> back to contents
7. ATLE SELBERG (1917-2007)
Renowned Norwegian mathematician Atle Selberg, Professor Emeritus at
the Institute for Advanced Study in Princeton, died on 6 August 2007.
Among his many honors was the Fields Medal (1950) for his elementary
proof of the prime number theorem.
-> back to contents
8. SUBSCRIBING TO IMU-NET
There are two ways of subscribing to IMU-Net:
1. Click on http://www.mathunion.org/IMU-Net
with a Web browser and go to the Subscribe button
to subscribe to IMU-Net online.
2. Send an e-mail to
imu-net-request@mathunion.org with the Subject: subscribe
In both cases you will get an e-mail to confirm your subscription so
that misuse will be minimized. IMU will not use the list of IMU-Net
addresses for any purpose other than sending IMU-Net, and will not make
it available to others.
Previous issues can be seen at:
-> back to contents
|
{"url":"http://www.mathunion.org/index.php?id=1078&L=0&type=98","timestamp":"2014-04-21T12:47:00Z","content_type":null,"content_length":"12430","record_id":"<urn:uuid:039d3dcf-3cf5-4069-87b8-8e68705e1932>","cc-path":"CC-MAIN-2014-15/segments/1398223210034.18/warc/CC-MAIN-20140423032010-00074-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Maywood, CA SAT Math Tutor
Find a Maywood, CA SAT Math Tutor
...I can tailor my lessons around your child's homework or upcoming tests and stay synchronized. Your child's skills will be improved in a few sessions. I am organized, professional and friendly.
14 Subjects: including SAT math, Spanish, reading, ESL/ESOL
...Beyond memorizing facts, I help children to see how their lessons apply to real life. I work with parents so life-long learning habits are established. I have tutored elementary students from
the following Los Angeles area schools: John Thomas Dye School, Brentwood School, Crossroads, St.
24 Subjects: including SAT math, chemistry, writing, geometry
...My rate is flexible and I am available on weekends. I am most capable of tutoring in Trigonometry. I break down the concepts into smaller parts.
31 Subjects: including SAT math, reading, chemistry, English
...I am also currently serving as a mentor/adviser for pre-health students at Duke University, so questions about the application process are welcome too! I have studied math and science all my
life and really enjoy helping others. I am calm, friendly, easy to work with and have been tutoring for many years.
13 Subjects: including SAT math, chemistry, physics, geometry
...They are essential for higher math. I have been to lots of exams as a student. However, I have been to more exams as a tutor or supplemental instructor.
11 Subjects: including SAT math, calculus, statistics, geometry
|
{"url":"http://www.purplemath.com/Maywood_CA_SAT_Math_tutors.php","timestamp":"2014-04-18T01:05:03Z","content_type":null,"content_length":"23685","record_id":"<urn:uuid:3125cd14-fcfa-4bba-ae22-874bfbee95af>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00209-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Is a simply connected Ricci-flat Kaehler manifold a Calabi-Yau manifold?
I have the following question: Let $(M,\omega, J)$ be a simply connected Kaehler manifold with Ricci-flat Kaehler metric. How can one show that $M$ is a Calabi-Yau manifold. By Calabi-Yau manifold I
mean that there exists a holomorphic $(n,0)-$form $\Omega$ such that the following equation is satisfied: $\frac{\omega^{n}}{n!} = (-1)^{\frac{n(n-1)}{2}}(\frac{i}{2})^{n} \Omega \wedge \bar{\Omega}
$. Should one put the assumption on $M$ to be compact? But what kind of compactness? With or without boundary? Is this necessary? Does this also work without any compactness assumption? Where can I
find a proof of this? Is there any reference? Or is this question too trivial? I hope that someone has the answer and also hope for a lot of replies. Thanks in advance.
Miguel B.
complex-geometry dg.differential-geometry cv.complex-variables at.algebraic-topology
Since $(M^{2n},J,\omega)$ is Kähler, its holonomy is contained in $U(n) \subset SO(2n)$. If in addition it is Ricci-flat, then the holonomy is contained in $SU(n)$ since the Ricci-form is
essentially the determinant. Now $SU(n)$ leaves invariant a nonzero $(n,0)$-form and hence, by the holonomy principle, there is a parallel nonzero $(n,0)$-form $\Omega$. In particular, $\Omega$ is
holomorphic. If properly normalised, you get the equation in the question, since both $\omega^n$ and $\Omega \wedge \bar\Omega$ are nonzero $(n,n)$-forms. – José Figueroa-O'Farrill Oct 18 '12 at
where does one use here simply conectedness? and does one use any compacness of $M$ ? – Miguel Oct 18 '12 at 9:09
3 Dear José, don't we need simple connectedness to know that the reduced holonomy group is the entire holonomy group for this argument to work? I think a finite quotient of a simply connected C-Y
manifold may not have any nonzero holomorphic $(n,0)$-form, while still admitting Ricci-flat metrics. – Gunnar Magnusson Oct 18 '12 at 9:15
You can relax the simply-connected requirement in odd dimensions. The universal covering space of a Ricci-flat Kähler manifold is Calabi-Yau (just apply the argument from Spiro's answer). A simple
application of the Atiyah-Bott fixed point formula then gives that in odd dimensions, every Ricci-flat Kähler manifold is Calabi-Yau, and in even dimensions, it is either Calabi-Yau, or it has
fundamental group $\mathbb{Z}_2$. – Rhys Davies Oct 19 '12 at 10:22
@Rhys: what you say can't be correct, think about a bielliptic surface (finite quotient of a complex $2$-torus). It does not have trivial canonical bundle (so it is not "Calabi-Yau" in the OP's
sense), but the fundamental group is not $\mathbb{Z}_2$. Also, I hope that by "odd dimension" you mean "odd complex dimension", since the manifolds are Kahler. – YangMills Oct 19 '12 at 13:52
1 Answer
José is correct, with the caveat that Gunnar mentioned - you need simple-connectedness to know that reduced holonomy = holonomy. Below I expand a bit more on the details. [Thanks to Tim
Perutz for catching errors in the initial version of this answer.]
Notice that the OP did not ask for $\Omega$ to be parallel or even closed. The following is true: If $(M, J, g, \omega)$ is Ricci-flat Kaehler, then the image of the first Chern class $c_1
(M)$in $H^2 (M, \mathbb R)$ vanishes, so that if $\pi_1(M) = 0$, then $H^2(M, \mathbb Z)$ has no torsion, and thus the canonical bundle $\Lambda^{n, 0} (M)$ is topologically trivial. So
there exists a nowhere vanishing smooth $(n,0)$-form $\Omega$ that trivializes the canonical bundle. By consideration of type, $\Omega \wedge \overline \Omega$ is a nonvanishing $(n,n)
$-form, so by rescaling $\Omega$ by a nowhere vanishing complex valued function, one gets for "free" the identity that
$$ \frac{\omega^n}{n!} = (-1)^{\frac{n(n-1)}{2}} \Omega \wedge \overline \Omega.$$
up vote
2 down Since $\Omega$ is type $(n,0)$ and the complex structure is integrable, then $\Omega$ will be holomorphic (and thus the canonical bundle is holomorphically trivial) if and only if it is
vote closed. Since $M$ is Ricci-flat, the Bochner theorem tells you that an $(n,0)$ form is closed if and only if it is parallel, which would give you holonomy contained in $SU(n)$.
Compactness is needed to go the other way: Yau's theorem says that if $M$ is compact Kaehler and $c_1 (M) = 0$, then there exists a unique Ricci flat Kaehler metric in each Kaehler class.
There are noncompact examples where uniqueness fails. I don't know as much as I should about the literature on existence in the noncompact case, but the papers of Tian-Yau should have the
A good elementary reference is Chapter 6 of Compact Manifolds with Special Holonomy by Dominic Joyce.
Spiro, could you clarify how you conclude that $c_1$ is trivial? I can only see its triviality in de Rham cohomology, which doesn't see the torsion. If I understood correctly, you're not
presupposing simple connectivity. – Tim Perutz Oct 18 '12 at 13:40
@Tim: You are correct, we need simple connectivity to ensure that $\Lambda^{(n,0)}(M)$ is topologically trivial, since that ensures that $H^2(M, \mathbb Z)$ has no torsion. I will edit my
post again. Thanks for catching that. – Spiro Karigiannis Oct 18 '12 at 15:38
what do you mean by topologically trivial ? if the first chern class vanishes doesnt then follow that the cannonical bundle is trivial, hence we get a holomorphic $(n,0)-$form that
trivializes it? – Miguel Oct 19 '12 at 14:13
@Spiro: your sentence "which would give you holonomy $SU(n)$" is not literally correct, all you get is that the holonomy is contained in $SU(n)$ but it could be strictly smaller, such as
$SU(p)\times SU(q)$ for $p+q=n$ (reducible case), $Sp(n/2)$ (hyperkahler case), etc. – YangMills Oct 19 '12 at 15:42
@YangMills: Yes, of course, I should have been more clear. I will edit that. – Spiro Karigiannis Oct 20 '12 at 12:42
|
{"url":"http://mathoverflow.net/questions/109987/is-a-simply-connected-ricci-flat-kaehler-manifold-a-calabi-yau-manifold","timestamp":"2014-04-18T03:01:59Z","content_type":null,"content_length":"66597","record_id":"<urn:uuid:27ef10a6-3ce6-42de-939b-945ebfa90a91>","cc-path":"CC-MAIN-2014-15/segments/1397609532480.36/warc/CC-MAIN-20140416005212-00338-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Posts by weng
Total # Posts: 9
college physics
a 1.50 kg. water is inside a cylindrical jar 4 cm. in diameter. what is the pressure at the interior bottom of the jar?
college physics
a glass container was filled with oil with a density of 0.80 g/cm^3 up to 9 cm. in height. 1) what is the pressure at the base of the container? 2) what is the pressure if depth is 3 cm. from the
surface of the oil? 3) what is the pressure if depth is 4 cm. above the base?
college physics
a glass container was filled with oil with a density of 0.80 g/cm^3 up to 9 cm. in height. 1) what is the pressure at the base of the container? 2) what is the depth if 3 cm. from the surface of the
oil? 3) what is the depth if 4 cm. above the base?
college physics
what is the mass of an 80 ml. acetone when the density of the acetone is 0.792 g/ml.?
a 1.50 kg. water is inside a cylindrical jar 4 cm. in diameter. what is the pressure at the interior bottom of the jar?
what is the inverse square law equation?
A boulder weighs 800N at the surface of the earth. What would be its weight at a distance of three earth's radii from the center of the earth?
A boulder weighs 800N at the surface of the earth. What would be its weight at a distance of three earth's radii from the center of the earth?
what is the value of the acceleration due to gravity of the earth at an altitude twice the radius of the earth?
|
{"url":"http://www.jiskha.com/members/profile/posts.cgi?name=weng","timestamp":"2014-04-21T03:25:12Z","content_type":null,"content_length":"7672","record_id":"<urn:uuid:f928a2d5-1ef3-48bf-ba06-72735bea51d2>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00468-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Shore Acres, WA Math Tutor
Find a Shore Acres, WA Math Tutor
...My name is Julianne, and I have recently relocated to the Pacific Northwest from the Southeast with my husband. I have a strong background in math and science, including a B.S. in physics and
a M.S. in nuclear engineering, as well as a deep interest in the language and mysticism of math. I woul...
5 Subjects: including prealgebra, precalculus, algebra 1, geometry
...I attended BYU in Provo UT where I received a BS degree in Zoology, then joining the military after graduation. While in the military I obtained an MS degree in Information Management and
graduated from several master's level military schools specializing in safety, military history, technical w...
48 Subjects: including ACT Math, prealgebra, algebra 1, Spanish
...In the mean time - I am excited to be able to work with so many fantastic students! I have been tutoring and teaching both professionally and through non-profit organizations for many years
now in Seattle, Hawaii and Chicago. I have worked with students K-12 in the Seattle Public Schools, Chicago Public Schools, and Seattle area independent schools.
27 Subjects: including precalculus, reading, study skills, trigonometry
...My own scores are a 170 Verbal and 169 Quantitative. I have a degree in Linguistics from the University of Washington and have a passion for grammar. I have helped many students revise their
writing and develop their own proofreading skills.
32 Subjects: including prealgebra, LSAT, algebra 1, algebra 2
...Learning terms and their definitions is a critical step in understanding biology. Not only are many classes centered around definitions, but by knowing what things are called and why they are
so named, a student often immediately understands the overall concept. Beyond terminology, I focus on how key processes build upon one another to facilitate life.
22 Subjects: including geometry, ACT Math, SAT math, algebra 2
|
{"url":"http://www.purplemath.com/Shore_Acres_WA_Math_tutors.php","timestamp":"2014-04-17T13:02:51Z","content_type":null,"content_length":"24100","record_id":"<urn:uuid:2467dbe3-fb23-47a0-ab8e-45963bbd94a6>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00356-ip-10-147-4-33.ec2.internal.warc.gz"}
|
The Undergraduate Program
The Mathematics Department serves a very large number of undergraduate students. We aim for excellence in our teaching of algebra, precalculus, and calculus, for students in every major that requires
mathematics. In addition to our teaching staff, we work with Learning Support Services to provide instructional assistance (tutoring), we provide the Mathematics Placement Exam to place students in
an appropriate course, and Physical & Biological Sciences Mathematics Advising at Undergraduate Affairs (PBSci UGA Advising Calendar). In addition to serving the campus with mathematical resources,
we support a large number of undergraduate mathematics majors.
The Mathematics Department offers a Major program leading to a B.A. in Mathematics, as well as a Minor in Mathematics. A joint Economics/Mathematics B.A. is now possible, and administered through the
Economics Department. Within the Mathematics Major, there are three tracks: Pure Mathematics, Mathematics Education, and Computational Mathematics.
The Pure Mathematics track is designed for students who value the study of mathematics, not only for application, but also for its own sake. Pure mathematicians focus on the big how and why questions
of mathematics, and attempt to find new formulae and methods while utilizing insights from a tradition of thousands of years. The Pure Mathematics track is recommended for those interested in
graduate study in pure mathematics, and those who seek a rigorous education that involves not only rote computational skills but also rigorous explanations of how mathematics works. To give a
well-rounded education in mathematics, the Pure Mathematics track requires an introduction to proof class, and a balance of advanced coursework in algebra, analysis, and geometry. Majors who seek
graduate study at top institutions often go beyond the required courses to enroll in graduate courses as well.
Our Math Education track is specially designed to prepare students for a career in K-12 mathematics education. It shares the other tracks' rigorous approach to advanced mathematics, but requires coursework that is
particularly relevant to the K-12 classroom: number theory, classical geometry, and the history of mathematics. In addition, the Math Education track requires experience in Supervised Teaching. Many
Math Education majors also participate in CalTeach, to enhance their experience and directly connect with local schools.
Our Computational Mathematics track offers flexibility to students who are interested in mathematics together with its applications – students in Computational Mathematics pick up skills in
statistics, computer science, and mathematical modeling, among other topics. Much of the coursework for the Computational Mathematics track is offered outside of the Mathematics Department, offering
an interdisciplinary experience.
The Mathematics Minor provides an excellent foundation in mathematics, which can serve a student well in careers that require quantitative analysis. The Mathematics Minor can also provide an
enjoyable supplement for students who love mathematics but have already decided to pursue another major. The Mathematics Minor fits particularly well for students pursuing a quantitative major such
as physics. We encourage students to consider a combination of major and minor involving mathematics with physics, economics, computer science, and environmental science, for example.
All major degree programs have introductory requirements, major requirements, and a comprehensive requirement (or senior capstone). These requirements are specifically detailed in the UCSC General
Catalog or at PBSci Undergraduate Affairs.
|
{"url":"http://www.math.ucsc.edu/undergraduate/index.html","timestamp":"2014-04-20T08:15:15Z","content_type":null,"content_length":"13260","record_id":"<urn:uuid:99045f8c-6729-467f-87ea-ee8ab8784309>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00286-ip-10-147-4-33.ec2.internal.warc.gz"}
|
License plates are made using 2 letters followed by 3 digits. How many different plates can be made if repetition of letters and digits is allowed?
Number of results: 46,948
A bike club wants to make license plates for bikes. Each license plate has a letter followed by a digit. How many different license plates can be made?
Monday, April 18, 2011 at 9:04pm by JORDAN
one more please!! a bike club wants to make license plates for the neighborhood bikes. each license plate has a letter followed by a digit. how many different license plates can be made??? explain
please so i can understand how to find the anser THANKS!!!!
Monday, October 28, 2013 at 10:56pm by jj
utah license plates have 3 numbers followed by 3 letters. how many different license plates of this kind can be made
Monday, November 8, 2010 at 6:35pm by Koray
Basic Concepts of Probability and Counting 1. License plates are made using 3 letters followed by 2 digits. How many different plates can be made if repetition of letters and digits is allowed? (26^
3)(10^2) = (17576)(100) = 1,757,600
Tuesday, March 16, 2010 at 3:14pm by Looking for someone to check my answer
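[Editorial aside, not part of the original thread] The worked answer above is a direct application of the multiplication principle. A quick Python check of that arithmetic, plus the no-repetition variants (variable-free, just the counts):

    # 3 letters then 2 digits, repetition allowed:
    print(26**3 * 10**2)      # 1757600 -- matches the answer above
    # 2 letters then 3 digits (the question at the top of the page):
    print(26**2 * 10**3)      # 676000
    # Same patterns with repetition NOT allowed:
    print(26*25*24 * 10*9)    # 1404000
    print(26*25 * 10*9*8)     # 468000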
License plates are made using 2 letters followed by 3 digits. How many different plates can be made if repetition of letters and digits is allowed?
Thursday, February 3, 2011 at 12:06am by Myrna
Math 118
License plates are made using 2 letters followed by 2 digits. How many plates can be made if repetition of letters and digits is allowed?
Tuesday, July 12, 2011 at 11:04am by ericka
License plates are made using 3 letters followed by 3 digits. How many plates can be made if repetition of letters and digits is allowed?
Wednesday, January 27, 2010 at 8:53pm by Rmz
College probability and statistics
The current license plates in New York State consist of three letters followed by four digits. ( i ) How many possible distinct license plates can there be? ( ii ) How many possible distinct license
plates could there be if the letters O and I and the digits, 0 and 1 were ...
Saturday, October 12, 2013 at 5:30pm by Melissa
In Python - A state creates license plates, each containing two letters followed by three digits. The first letter must be a vowel (A,E,I,O,U), and duplicate letters and digits are allowed. The
second letter can be any letter of the alphabet. The state only uses capital ...
Monday, April 22, 2013 at 9:22am by Sheryl
Math for teachers
A certain state makes license plates with any three digits followed by any three letters. How many different license plates can be made? 17) ______ A) 17,576,000 B) 9 C) 11,232,000 D) 12,812,904
Sunday, February 20, 2011 at 10:14pm by cassie
Many states offer personalized license plates. California, for example, allows personalized plates with seven spaces for numerals or letters, or one of the following four symbols. What is the total
number of license plates possible using this counting scheme? (Assume that each...
Wednesday, May 16, 2012 at 9:55pm by Monic
Many states offer personalized license plates. California, for example, allows personalized plates with seven spaces for numerals or letters, or one of the following four symbols. What is the total
number of license plates possible using this counting scheme? (Assume that each...
Wednesday, August 15, 2012 at 8:37pm by Anonymous
pre calculus
A state makes auto license plates that have two letters (excluding I, 0, and Q) followed by four digits of which the first digit is not zero. How many different license plates are possible?
Tuesday, March 30, 2010 at 10:37am by ron
Many states offer personalized license plates. California, for example, allows personalized plates with seven spaces for numerals or letters, or one of the following four symbols. What is the total
number of license plates possible using this counting scheme? (Assume that each...
Saturday, November 26, 2011 at 5:54pm by MONICA
A state creates license plates that each contain two letters followed by three digits. The first letter must be a vowel ( A, E, I, O, U ) and duplicate letters and digits are allowed. How many
different license plates are possible?
Tuesday, November 6, 2012 at 7:40pm by Anonymous
Math - Please Help Me!
How many ways can an IRS auditor select 3 of 11 tax returns for an audit? There are 8 members on a board of directors. If they must form a subcommittee of 3 members, how many different subcommittees
are possible? License plates are made using 3 letters followed by 3 digits. ...
Saturday, December 5, 2009 at 7:01pm by Ashley
Math Word Problems
How many ways can an IRS auditor select 3 of 11 tax returns for an audit? There are 8 members on a board of directors. If they must form a subcommittee of 3 members, how many different subcommittees
are possible? License plates are made using 3 letters followed by 3 digits. ...
Saturday, December 5, 2009 at 6:38pm by Ashley
Math for Elementary Teachers II
How many 4-digit license plates can be made by using the digits 0, 2, 4, 6, and 8 if repetitions are allowed? if repetitions are not allowed?
Monday, March 29, 2010 at 12:44am by Michael
How many 4-digit license plates can be made by using the digits 0, 2, 4, 6, and 8 if repetitions are allowed? if repetitions are not allowed?
Thursday, April 1, 2010 at 11:37am by Tyga
Some fleet car license plates contain 4 numbers followed by 3 letters and then another 3 numbers. If this were the standard fleet car license plate and there are no other constraints, the number of
plates that could be manufactured is
Friday, April 11, 2014 at 12:37am by Jessica
NY has 560 cars for 1000 residents. The current population of NY is 12,929,000 and rate of growth of the population is 0.02%. NY license plates consists of 4 letters followed by 3 digits. What year
will they need to change the way they make license plates?
Thursday, April 30, 2009 at 2:02am by Xi
how many different license plates can be made consisting of three letters by four digits?
Wednesday, January 4, 2012 at 11:47pm by luckybee
Suppose that the format for license plates in a certain state is two letters followed by four numbers. How many different plates can be made if repetitions of numbers and letters are allowed except
that no plate can have four zeros?
Friday, September 30, 2011 at 12:14am by Meghan
Computer Programming
I am trying to put the following in Python Code. I know how to do it mathematically, but code - not sure where to start. A state creates license plates that each contain two letters followed by three
digits. The first letter must be a vowel ( A, E, I, O, U ) and duplicate ...
Sunday, April 21, 2013 at 5:19pm by Sheryl
A state's license plate contains 3 letters followed by 3 digits. How many different license plates are possible? Explain your answer.
Sunday, April 19, 2009 at 6:25pm by sara
How many different license plates could be made if they have one or two letters followed by 1, 2, or 3 digits? Repetitions are allowed.
Friday, February 5, 2010 at 6:18pm by anonymous
How many different license plates are possible using the letters A and B,but there can never be 2 A's next to each other.
Monday, August 30, 2010 at 7:32pm by sue
A license plate is to have 3 digits followed by 2 uppercase letters. What is the number of different license plates possible if repetition of numbers and letters is not permitted?
Monday, January 7, 2013 at 3:30pm by tony
college math
a license plate is to consist of three digits followed by two uppercase letters. determine the number of different license plates possible. the first digit cannot be zero and repetition is not
permitted.
survey of math
A license plate is to consist of 2 letters followed by 3 digits. Determine the number of different license plates possible if repetition of letters and numbers is not permitted?
Thursday, November 29, 2012 at 12:58am by anjanette
suppose a license plate for a car has 4 letters followed by 1 number and 3 more letters. The letters and numbers are chosen randomly. a. how many license plates are possible? b. compare the
probability that a license plate spells math4you with the probability that the first 4 ...
Tuesday, February 26, 2008 at 4:03pm by Mackenzie
A car license plate consists of 6 characters. The first 3 characters are letters excluding I, O, Q and U. The last 3 characters are any of the numerals 0 to 9 inclusive. How many different license
plates are possible? Please show your work.
Monday, May 23, 2011 at 7:31pm by corey
A car license plate consists of 6 characters. The first 2 characters are any letter. The last 4 characters are odd numbers from 1 to 9 inclusive. How many different license plates are possible?
Please show your work.
Monday, May 23, 2011 at 7:33pm by corey
A state's license plates consist of four letters followed by three numerals, and 241 letter arrangements are not allowed. How many plates can the state issue?
Wednesday, March 6, 2013 at 10:20pm by Bo
How many license plates with 3 decimal digits followed by 3 letters do not contain both the number 0 and the letter O if in building the plates you are not allowed to repeat any letter or any digit?
Tuesday, July 12, 2011 at 11:04am by reggie
In Utah, a license plate consists of 3 digits followed by 3 letters. The letters I, O, and Q are not used, and each digit or letter may be used more than once. How many different license plates are
possible?
math check my work
a license plate has 2 letters followed by 4 digits.how many license plates are possible if the digits and letters can be repeated? f-8 g-92 h-676000 i-6760000 i think it is i am i correct thank you
for helping me
Wednesday, April 14, 2010 at 7:41pm by anna
math please check my work
a license plate has 2 letters followed by 4 digits.how many license plates are possible if the digits and letters can be repeated? f-8 g-92 h-676000 i-6760000 i think it is i(the 4th one) am i
correct thank you for helping me
Wednesday, April 14, 2010 at 8:07pm by anna
A parallel-plate capacitor is made from two plates 12.0cm on each side and 4.50mm apart. Half of the space between these plates contains only air, but the other half is filled with Plexiglas of
dielectric constant 3.40. An 18.0 V battery is connected across the plates. What is...
Thursday, March 3, 2011 at 10:07pm by Rachel
A state creates license plates that each contain two letters followed by three digits. The first letter must be a vowel and duplicate letters and digits are allowed. How many different plates are possible?
Tuesday, August 20, 2013 at 2:56am by JJ
What is the probability of getting a license plate that has a repeated letter or digit if you live in a state in which license plates have one numeral followed by three letters followed by three
numerals? (Round your answer to one decimal place.)
Wednesday, June 12, 2013 at 10:56pm by Ronald
260 license plates
Monday, April 18, 2011 at 9:04pm by Kathryn
A license plate is to consist of two letters followed by three digits. How many different license plates are possible if the first letter must be a vowel, and repetition of letters is not permitted,
but repetition of digits is permitted? The first position can be filled in 5 ...
Sunday, March 25, 2007 at 9:49pm by Carlos
It was noticed that some cars have license plates with 3 letters followed by three numbers while other license plates have 3 numbers followed by 3 numbers if neither letters nor numbers can repeat
what is the probability of getting the letter A in the first letter slot and ...
Monday, January 18, 2010 at 11:53am by eric
Math - Fractions
Susan has p clean plates. After using 2/3 of the clean plates for a dinner party, Susan used 1/5 of the remaining plates to store the leftovers. In terms of p, how many plates are still clean and
Saturday, September 7, 2013 at 7:03pm by Venus
How many different license plates could be made if they have one or two letters followed by 1,2,or 3 digits? Reptetitions are allowed. I figured that 26 would be for 1 letter and 52 for 2 letters so
(26+52) but not sure how to figure the numbers part.
Tuesday, November 30, 2010 at 9:52am by m
I think I've seen this on NC license plates: First in Flight.
Saturday, November 10, 2007 at 7:52pm by Ms. Sue
It says to list all possible license plates, not the total number.
Monday, April 22, 2013 at 9:22am by Anonymous
9th grade - Civics
Name the first state to issue license plates for their automobiles.
Wednesday, April 28, 2010 at 7:28pm by Malik
Suppose a license plate requires three letters ,followed by three digits.how many license plates are there if the letters are all different and the digits are all different
Sunday, September 16, 2012 at 11:26am by nancy
How many license plates consisting of 2 letters (upper case) followed by two digits are possible?
Tuesday, April 19, 2011 at 5:09am by Alaini
How many license plates consisting of 2 letters (upper case) followed by two digits are possible?
Tuesday, May 31, 2011 at 8:30am by FALONGO
how many different license plates are possible with two letters followed by three digits between 0 and 9
Sunday, September 9, 2012 at 9:29am by nouf
a state's license plate consist of 3 letters followed by 4 numerals, and 246 letter arrangements are not allowed. how many plates can be issued
Wednesday, August 14, 2013 at 3:25pm by Anonymous
This is the part that will calculate the total number of license plates possible.
    def main():
        num_of_poss_plates = 5 * 26 * 10 * 10 * 10
        print('The number of possible plates are: ', num_of_poss_plates)
    main()
Monday, April 22, 2013 at 9:22am by Sheryl
survey of math
I will assume that two of the same letter (such as BB) and two or three of the same digit (such as 211 or 111) can appear on a single license plate. What I assume is meant is that no two license
plates can be the same. In that case, the answer is (26^2)*1000 = 676,000 If not, ...
Thursday, November 29, 2012 at 12:58am by drwls
Ray is setting the table for a birthday dinner. He needs to set 12 places at a round table. He has 3 different kinds of plates: white plates, blue plates, and gold plates. How can he set the table so that
no two of the same kind of plates are next to each other? Thank you for your ...
Thursday, January 29, 2009 at 7:14pm by Smiley
A salesman bought some plates at $50.00 each. If he sold all of them for $600.00 and made a profit of 20% on the transaction, how many plates did he buy?
Thursday, April 26, 2012 at 4:00am by isaac
A salesman bought some plates at $50.00 each. If he sold all of them for $600.00 and made a profit of 20% on the transaction, how many plates did he buy?
Thursday, April 26, 2012 at 4:01am by isaac
A salesman bought some plates at $50.00 each. If he sold all of them for $600.00 and made a profit of 20% on the transaction, how many plates did he buy?
Thursday, April 26, 2012 at 4:08am by isaac
A)If a license plate consists of 2 letters followed by 3 digits or 3 letters followed by 2 digits, how many license plates can be created? B)You are given an exam with only 9 questions. if each
question has 4 choices, howmany different possible answers can you have for the ...
Sunday, January 18, 2009 at 6:03pm by Kennedy
physics (question in English)
(Some changes were made to the Google translate version for clarity) An electron enters the space between the plates of a plane-parallel capacitor, parallel to the plates and in the middle of the gap
between them. At what minimum potential difference between the plates will ...
Saturday, May 28, 2011 at 7:29am by drwls
How many possible license plates are possible in which two letters are followed by at least 1 digit and no more than 4 digits?
Thursday, February 17, 2011 at 8:27pm by ashley
Problem 21.79 The plates of a 3.2nF parallel-plate capacitor are each 0.25 m^2 in area. Part A - How far apart are the plates if there's air between them? Express your answer using two significant
figures. = Part B - If the plates are separated by a Teflon sheet, how thick is ...
Sunday, September 11, 2011 at 5:11pm by Paige
Problem 21.79 The plates of a 3.2 nF parallel-plate capacitor are each 0.25 m^2 in area. Part A - How far apart are the plates if there's air between them? Express your answer using two significant
figures. Part B - If the plates are separated by a Teflon sheet, how thick is ...
Tuesday, September 13, 2011 at 10:06pm by Paige
if a license plate consists of 2 letters followed by 4 digits, how many different plates could be created having at least one letter or digit repeated? I know its not 6,760,000
Friday, July 30, 2010 at 9:35pm by donna
The charged plates of a parallel plate capacitor each have a charge density of sigma C/m^2. Using Gauss's law, compute the electric field between the plates.
Friday, March 9, 2012 at 10:35am by pakilina
A parallel-plate capacitor made of circular plates of radius 55 cm separated by 0.25 cm is charged to a potential difference of 1000 Volts by a battery. Then a sheet of tantalum pentoxide is pushed
between the plates, completely filling the gap between them. How much ...
Thursday, January 28, 2010 at 4:07pm by Paul
Mr. Hash bought some plates at a yard sale. After arriving home he found that 2/3 of the plates were chipped, 1/2 were cracked, 1/4 were both chipped and cracked. Only 2 plates were w/o chips or
cracks. How many plates did he buy in all?
Monday, October 24, 2011 at 9:52am by unknown
Mr Hash bought some plates at a yard sale. After arriving home he found that 2/3 of the plates were chipped, 1/2 were cracked, 1/4 were both chipped and cracked only 2 plates were w/o chips or
cracks. How many plates did he buy in all?
Friday, November 2, 2012 at 12:47pm by dell
Supercapacitors, with capacitance of 1.00F or more, are made with plates that have a spongelike structure with a very large surface area. Determine the surface area of a supercapacitor that has
capacitance of 1.0F and an effective separation between the plates of d=1.0 mm So ...
Wednesday, September 28, 2011 at 10:12pm by Caroline
Reading: Summary
Good! I'd combine the second and third sentences -- He started building houses 12 years ago and is still building them in Huntsville, Texas. You might also want to include using license plates for
roofing. For the last paragraph, you can include his age somewhere else and ...
Thursday, September 3, 2009 at 9:12pm by Ms. Sue
how many 4 character license plates are possible with 2 letters of the alphabet followed by 2 digits, it repetitions are allowed? if repetitions are not allowed?
Wednesday, July 29, 2009 at 6:53pm by Diana
How many 5-character license plates are possible with 2 letters from the alphabet followed by 3 digits, if repetitions are allowed? if repetitions are not allowed?
Wednesday, September 30, 2009 at 10:13pm by Clair
MATH Prob.
How many 4-character license plates are possible with 2 letters from the alphabet followed by 2 digits, if repetitions are allowed? if repetitions are not allowed?
Wednesday, August 12, 2009 at 12:19am by Twg
Math/ Probability
How many 4-character license plates are possible with 2 letters from the alphabet followed by 2 digits, if repetitions are allowed? if repetitions are not allowed?
Wednesday, May 19, 2010 at 7:26pm by Felica
I believe there are 4^25 possible combinations. consider that for each question there are 4 possible answers. It's like figuring the number of license plates where there are 25 letters, each of which
may be A-D.
Friday, January 24, 2014 at 3:09am by Steve
physics need help
A parallel-plate capacitor made of circular plates of radius 55 cm separated by 0.25 cm is charged to a potential difference of 1000 Volts by a battery. Then a sheet of tantalum pentoxide is pushed
between the plates, completely filling the gap between them. How much ...
Friday, January 29, 2010 at 2:26pm by paul
IR chemistry - instrumental anaylsis
Your explanation is long but not too instructive; I'm not exactly sure what you are asking. As long as you are examining the IR spectrum in the region of about 2-15 microns, then using salt plates is
ok for looking at molecules that absorb in that region. (When you salt salt ...
Monday, October 7, 2013 at 7:15pm by DrBob222
a pair of parallel plates is charge by a 30.3V battery, if the field between the plates is 1050N/C how far apart are the plates? answer in meters
Sunday, May 19, 2013 at 4:00pm by angela
How many 4-character license plates are possible with 2 letters from the alphabet followed by 2 digits, if repetitions are allowed? If repetitions are not allowed? could someone please help me with
the formula, I don't understand it yet
Sunday, November 29, 2009 at 12:32am by Punkie
An electron in a computer monitor enters midway between two parallel oppositely charged plates. The initial speed of the electron is 6.15 x 10^7 m/s and its vertical deflection is 4.70 mm. (a) What
is the magnitude of the electric field between the plates? (b) Determine the ...
Thursday, January 13, 2011 at 4:31pm by Isabell
if you are making 34 trophies plates that 3 1/4" long and 1" wide what size sheet of brass would be needed if plates are aligned with 2 plates per horizontal line
Saturday, February 2, 2013 at 11:18pm by Zona
Joe has $10,000 to purchase a used car. If the sales tax is 7% and the fee for title and license plates is $200, what is the maximum amount Joe can spend for a car?
Wednesday, November 11, 2009 at 1:06pm by mc
Joe has $10,000 to purchase a used car. If the sales tax is 7% and the fee for title and license plates is $200, what is the maximum amount Joe can spend for a car?
Monday, January 11, 2010 at 3:30pm by LeAnne
IR chemistry - instrumental anaylsis
Yea I know I was having a difficult time trying to figure out how to explain it. Basically I was using NaCl plates with the salt form of a drug although I wasn't supposed to. I was only supposed to
use the free base form with the salt plates so I was wondering if testing the ...
Monday, October 7, 2013 at 7:15pm by Instrumental
A parallel-plate capacitor is constructed from two circular metal plates or radius R. The plates are separated by a distance of 1.2mm. 1. What radius must the plates have if the capacitance of this
capacitor is 1.1 uF? 2. If the separation between the plates is increased, ...
Sunday, February 20, 2011 at 3:38pm by Jon
A parallel-plate capacitor is constructed from two circular metal plates or radius R. The plates are separated by a distance of 1.2mm. 1. What radius must the plates have if the capacitance of this
capacitor is 1.1 uF? 2. If the separation between the plates is increased, ...
Monday, February 21, 2011 at 1:39pm by Jon
12th grade
A certain parallel plate capacitor consists of two identical aluminium plates, each of area 2 times 10^-4 m^2. The plates are separated by a distance of 0.03 mm, with air occupying the space between the
plates. 1-CALCULATE THE CAPACITANCE OF THE CAPACITOR. AND 2-CALCULATE THE ...
Sunday, October 10, 2010 at 7:27pm by Masixole4Lydia
A certain parallel capacitor consists of two identical aluminium plates, each of area 2 times 10^-4 m^2. The plates are separated by a distance of 0.03 mm, with air occupying the space between the
plates. CALCULATE THE CAPACITANCE OF THE CAPACITOR. AND ALSO CALCULATE THE CHARGE ...
Monday, October 11, 2010 at 5:51pm by Masixole4Lydia
Given that the Number of Grantees of a Driver's License in a small country in the month of April are .... 1,200 for professionsl 3,350 for non-professional 2,450 for students If you want to make a
study of the correlation between driving habits and number of vehicular ...
Sunday, September 2, 2012 at 5:48am by Bianca
Suppose a license plate contains four digits. The probability of a license plate having an even or odd digit is the same. What is the probability of having a license plate with all even digits?
Saturday, November 7, 2009 at 9:52pm by melrose
A parallel-plate capacitor is constructed with circular plates of radius 0.056 m. The plates are separated by 0.25 mm, and the space between the plates is filled with dielectric with dielectric
constant k. When the charge on the capacitor is 1.2 uc the potential difference ...
Thursday, February 2, 2012 at 3:14am by Cooper
Senior citizens will pay $1.50 for a license fee for pets who have not been neutered. Senior citizens who have not been neutered will pay $1.50 for their pet's license. Note: Neutered and altered
mean about the same thing -- surgery has made them incapable of having babies. I'...
Wednesday, December 12, 2007 at 7:52pm by Ms. Sue
Two large circular metal plates are parallel and nearly touching, only 4.1 mm apart. The two plates are connected to the opposite terminals of a 9 V battery. (a) What is the average electric field
strength in the space between the plates? V/m (b) What is the electric force on ...
Tuesday, November 9, 2010 at 1:14am by dede
johnny's best lifelong memory was when he got his driver's license. during his teenage years, a driving license signified his passage to adulthood. (johnny knew that a driver's license was like
a_____ to adulthood. 1)road 2)passage 3)road 4)all(1,2,3) my answer is #2 passage
Monday, June 14, 2010 at 10:36pm by NOOR
yes and i think that around the charged parallel plates..the electric field charge is zero so would the answer be between the charged parallel plates? I can't visualize the question...about the
parallel plates..are they one negative and poistive plate..or both positive, both ...
Friday, February 5, 2010 at 7:00pm by Sandra
Two plates of area 5.00 × 10-3 m2 are separated by a distance of 1.50 × 10-4 m. If a charge separation of 6.40 × 10-8 C is placed on the two plates, calculate the potential difference (voltage)
between the two plates. Assume that the separation distance is small in comparison ...
Tuesday, June 4, 2013 at 10:31pm by Anonymous
The potential difference between the accelerating plates of a TV set is about 30 kV. If the distance between the plates is 1.1 cm, find the magnitude of the uniform electric field in the region
between the plates. (in N/C)
Thursday, May 17, 2012 at 4:06pm by andrew
Calculate the acceleration of the electrons enter the plates.Which are two parallel metal plates 12 mm apart, An electric field is produced between the plates, with the top plate held at a potential
of 120 V and the lower plate earthed.
Tuesday, June 11, 2013 at 6:55pm by Ying
|
{"url":"http://www.jiskha.com/search/index.cgi?query=License+plates+are+made+using+2+letters+followed+by+3+digits.+How+many+different+plates+can+be+made+if+repetition+of+letters+and+digits+is+allowed%3F","timestamp":"2014-04-21T10:28:12Z","content_type":null,"content_length":"42007","record_id":"<urn:uuid:000a4738-e382-477b-b1c6-6f04d09c5e48>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00591-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Book Review of "Elements of Automata Theory"
Book Review of “Elements of Automata Theory”
During summer 2010 I started reading a book titled Elements of Automata Theory by Jacques Sakarovitch
Book : Elements of Automata Theory by Jacques Sakarovitch
Reviewer : Shiva Kintali
During my undergrad I often found myself captivated by the beauty and depth of automata theory. I wanted to read one book on automata theory and say that I “know” automata theory. A couple of years
later I realized that it is silly to expect such a book. The depth and breadth of automata theory cannot be covered by a single book.
My PhD thesis is heavily inspired by automata theory. I had to read several (fifty year old) papers and books related to automata theory to understand several fundamental theorems. Unfortunately, the
concepts I wanted to learn are scattered in multiple books and old research papers, most of which are hard to find. When I noticed that Prof. Bill Gasarch is looking for a review of Elements of
Automata Theory, I was very excited and volunteered to review it, mainly because I wanted to increase my knowledge about automata theory. Given my background in parsing technologies and research
interests in space-bounded computation I wanted to read this book carefully. This book is around 750 pages long and it took me around one year to (approximately) read it. It is very close to my
expectations of the one book on automata theory.
First impressions : Most of the books on automata theory start with the properties of regular languages, finite automata, pushdown automata, context-free languages, pumping lemmas, Chomsky hierarchy,
decidability and conclude with NP-completeness and the P vs NP problem. This book is about “elements” of automata theory. It focuses only on finite automata over different mathematical structures. It
studies pushdown automata only in the context of rational subsets in the free group. Yes, there is 750 pages worth literature studying only finite automata.
This book is aimed at people enthusiastic to know the subject rigorously and not intended as a textbook for automata theory course. It can also be used by advanced researchers as a desk reference.
There is no prerequisite to follow this book, except for a reasonable mathematical maturity. It can be used as a self-study text. This book is a direct translation of its french original.
This book is divided into five major chapters. The first three chapters deal with the notions of rationality and recognizability. A family of languages is rationally closed if it is closed under the
rational operations (union, product and star). A language is recognizable if there exists a finite automaton that recognizes it. The fourth and fifth chapters discuss rationality in relations. Chapter
0 acts as an appendix of several definitions of structures such as relations, monoids, semirings, matrices, graphs and concepts such as decidability. Following is a short summary of the five major
chapters. There are several deep theorems (for example, Higman’s Theorem) studied in this book. I cannot list all of them here. The chapter summaries in the book have more details.
Chapter 1 essentially deals with the basic definitions and theorems required for any study of automata theory. It starts with the definitions of states, transitions, (deterministic and
nondeterministic) automaton, transpose, ambiguity and basic operations such as union, cartesian product, star, quotient of a language. Rational operators, rational languages and rational expressions
are defined and the relation between rationality and recognizability is established leading to the proof of Kleene’s theorem. String matching (i.e., finding a word in a text) is studied in detail as
an illustrative example. Several theorems related to the star height of languages are proved, as is a fundamental theorem stating that 'the language accepted by a two-way automaton is rational'. The
distinction between Moore and Mealy machines is introduced.
Chapter 2 deals with automata over the elements of an arbitrary monoid and the distinction between rational set and recognizable set in this context. This leads to a better understanding of Kleene’s
theorem. The notion of morphism of automata is introduced and several properties of morphisms and factorisations are presented. Conway’s construction of universally minimal automaton is explained and
the importance of well quasi-orderings is explained in detail. Based on these established concepts, McNaughton’s theorem (which states that the star height of a rational group language is computable)
is studied with a new perspective.
Chapter 3 formalizes the notion of “weighted” automata that count the number of computations that make an element be accepted by an automaton, thus generalizing the previously introduced concepts in
a new dimension. Languages are generalized to formal series and actions are generalized to representations. The concepts and theorems in this chapter makes the reader appreciate the deep connections
of automata theory with several branches of mathematics. I personally enjoyed reading this chapter more than any other chapter in this book.
Chapter 4 builds an understanding of the relations realized by different finite automata in the order they are presented in chapters 1, 2 and 3. The Evaluation Theorem and the Composition Theorem
play a central role in understanding this study. The decidability of the equivalence of transducers (with and without weights) is studied. This chapter concludes with the study of deterministic and
synchronous relations.
Chapter 5 studies the functions realized by finite automata. Deciding functionality, sequential functions, uniformisation of rational relations by rational functions, semi-monomial matrix
representation, translations of a function and uniformly bounded functions are studied.
There are exercises (with solutions) at the end of every section of every chapter. These exercises are very carefully designed and aid towards better understanding of the corresponding concepts.
First time readers are highly encouraged to solve (or at least glance through) these exercises. Every section ends with Notes and References mentioning the corresponding references and a brief
historical summary of the chapter.
Overall I found the book very enlightening. It has provided me new perspectives on several theorems that I assumed I understood completely. Most of the concepts in this book are new to me and I had
no problems following the concepts and the corresponding theorems. The related exercises made these topics even more fun to learn. It was a joy for me to read this book and I recommend this book for
anyone who is interested in automata theory (or more generally complexity theory) and wants to know the fundamental theorems of theory of computing. If you are a complexity theorist, it is worthwhile
to look back at the foundations of theory of computing to better appreciate its beauty and history. This book is definitely unique in its approach and the topics chosen. Most of the topics covered
are either available in very old papers or not accessible at all. I applaud the author for compiling these topics into a wonderful free-flowing text.
This book is nicely balanced between discussions of concepts and formal proofs. The writing is clear and the topics are organized very well from the most specific to the most general, making it a
free-flowing text. On the other hand, it is very dense and requires lots of motivation and patience to read and understand the theorems. The author chose a rigorous way of explaining rationality and
recognizability. Sometimes you might end up spending a couple of hours to read just two pages. Such is the depth of the topics covered. Beginners might find this book too much to handle. I encourage
beginners to read this book after taking an introductory automata theory course. This is definitely a very good reference text for researchers in the field of automata theory.
In terms of being used in a course, I can say that a graduate level course can be designed from a carefully chosen subset of the topics covered in this book. The exercises in the book can be readily
used for such a course.
This is an expensive book, which is understandable based on the author’s efforts to cover several fundamental topics (along with exercises) in such a depth. If you think it is expensive, I would
definitely suggest that you get one for your library.
10 thoughts on “Book Review of “Elements of Automata Theory””
1. Thanks for the nice review and taking the time to explore this!
(i) are there open problems in “pure” finite automata theory as pursued by this book?
(ii) are there problem solving nuggets, suitable e.g. for your site TrueShelf?
□ @Andy, I will write a new post about the open problems in finite automata theory.
2. This is a great review.
While Automata Theory is not my field, I will keep this book in mind. Do you have any suggestions for a must read introduction to Automate Theory?
I will retweet the link on Twitter.
□ The following books are very good :
* Theory of Computing: A Gentle Introduction
* Introduction to the Theory of Computation
* Introduction to Automata Theory, Languages, and Computation (3rd Edition)
3. i hv very much interest in automata and compilers field. Could you plz give me the details of book which i shud read from starting to grab some knowledge abouth these 2 subjects.
□ The following books on automata theory and theory of computation are very good :
* Theory of Computing: A Gentle Introduction
* Introduction to the Theory of Computation
* Introduction to Automata Theory, Languages, and Computation (3rd Edition)
The following compiler books are good :
* Compilers: Principles, Techniques, and Tools (2nd Edition)
* Writing Compilers and Interpreters: A Software Engineering Approach
* Engineering a Compiler, Second Edition
* Writing Compilers and Interpreters
☆ Thank u very much Sir. I have read Ullman’s compilers: Principles, Techniques, and Tools (2nd Edition) and it’s an awesum book…
i’ll definitely try reading these books
4. Hello Mr. Kintali. I am from India. I found this book in my institute’s library but this is so amazing book that I want to buy it. Unfortunately I couldn’t find it online. I generally buy from
flipkart.com . Most of the time I buy Indian editions which are cheaper than original print. I don’t know if there is any Indian edition of this book but I would prefer it to original print. Are
you aware of such listing anywhere online where i can pay off in INR . Please reply. Thanks
|
{"url":"http://kintali.wordpress.com/2012/07/14/book-review-of-elements-of-automata-theory/","timestamp":"2014-04-20T20:54:45Z","content_type":null,"content_length":"86148","record_id":"<urn:uuid:83401ecb-566d-4d7c-ba4c-257c281394b9>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00310-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Rates of Return - Internal Rate of Return - Series 65 | Investopedia
Series 65
Quantitative Methods of Evaluating Businesses and Investments - Rates of Return - Internal Rate of Return
Different measures can be used when discussing potential rates of return.
Internal Rate of Return (IRR)
The IRR is essentially the interest rate that makes the net present value of all cash flows equal zero. It represents the return a company would earn if it expanded or invested in itself rather than
elsewhere. The internal rate of return used in time value of money calculations cannot be found directly by formula. It can be approximated by trial and error, but in the real world it is simply found
by inputting present value, future value, and the number of compounding periods into a financial calculator.
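To make the trial-and-error idea concrete, here is a minimal Python sketch (an editorial illustration, not exam material; the cash flows and function names are made up). It bisects on the rate until the NPV crosses zero:

    def npv(rate, cashflows):
        # cashflows[0] occurs now, cashflows[t] at the end of period t
        return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

    def irr(cashflows, lo=-0.99, hi=10.0, tol=1e-9):
        # Assumes npv changes sign exactly once on [lo, hi]
        while hi - lo > tol:
            mid = (lo + hi) / 2
            if npv(lo, cashflows) * npv(mid, cashflows) <= 0:
                hi = mid
            else:
                lo = mid
        return (lo + hi) / 2

    # Pay 1,000 today, receive 500 at the end of each of three years:
    print(round(irr([-1000, 500, 500, 500]), 3))  # about 0.234, i.e. 23.4%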
Several measures of return can be selected for such a calculation:
• Real return - also known as inflation-adjusted return. By adjusting the stated (nominal) return of an investment to take inflation into account, the investor will have a more realistic assessment
of return. So, if an investor were to earn 8% on an investment and inflation is 3%, the real rate of return would be approximately 5% (excluding any fees). Learn more about this in the section on
Bond Yields.
• Risk-adjusted return - this calculation allows an investor to determine if the amount of return received is commensurate with the risk taken. There are several methods to measure risk-adjusted
return that incorporate either beta (a measure of a portfolio's market risk) or standard deviation (a measure of a portfolio's total risk) and the risk-free return (typically measured by the
current rate on short-term Treasury bills). The most common method of measuring risk-adjusted return is the Sharpe Ratio, which is calculated by subtracting the risk-free rate of return from the
rate of return for a portfolio and dividing the result by the standard deviation of the portfolio returns.
□ Beta is a measure of volatility or systematic risk relative to the market as a whole. If beta = 1, the security's price will move with the market. If beta < 1, the security will move to a
lesser extent than the market. If beta > 1, the security will move at a greater pace than the market.
• Standard Deviation is a statistical concept that measures the dispersion of a set of data from its mean (average). So, if the average return for an investment over the last 5 years was 11.5%, and
the yearly returns for those 5 years were 9.5%, 8.5%, 13.9%, 9.1% and 16.5%, standard deviation would measure how the return for each of those 5 years differed from the mean (a worked sketch
follows this list). Standard deviation is a measure of total risk for an individual security or an overall portfolio. Beta, on the other hand, measures only its systematic risk relative to the market.
Note that you will not have to calculate standard deviation in your upcoming Series 65 exam.
• Total return - incorporates the rate of return from all sources, including appreciation (or depreciation), dividends and interest. This is the actual rate of return an investment provided over a
certain period of time.
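An editorial sketch of the standard deviation and Sharpe Ratio calculations referenced above, using the five yearly returns from the standard-deviation bullet and a hypothetical 3% risk-free rate (population standard deviation shown; a sample calculation would divide by n - 1):

    returns = [9.5, 8.5, 13.9, 9.1, 16.5]        # yearly returns, in %
    mean = sum(returns) / len(returns)           # 11.5
    var = sum((r - mean) ** 2 for r in returns) / len(returns)
    std_dev = var ** 0.5                         # about 3.15
    risk_free = 3.0                              # hypothetical T-bill rate
    sharpe = (mean - risk_free) / std_dev        # about 2.70
    print(round(std_dev, 2), round(sharpe, 2))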
Look Out!
Look for questions on both the definition of total return and the inflation component of real return.
Hint: Any answers that involve risk are normally incorrect.
Exam Tips and Tricks
Consider this sample question:
1. Which of the following statements is least accurate with respect to how certain factors may impact internal rate of return (IRR)?
a. If the required return exceeds the project's IRR, the project should be accepted.
b. The higher the expected cash flows, the higher the IRR will be.
c. IRR may be regarded as the expected return on a project or an investment.
d. As the cost increases, the IRR will decrease, holding everything else constant.
The correct answer is "a". The IRR of the project is also the return expected from it. Therefore, if the required return exceeds the project's IRR (or expected return), the project should be
rejected because it is not expected to generate return to compensate for the risk.
|
{"url":"http://www.investopedia.com/exam-guide/series-65/quantitative-methods/rates-of-return-internal-rate-of-return.asp","timestamp":"2014-04-16T08:36:44Z","content_type":null,"content_length":"87146","record_id":"<urn:uuid:6042a8b7-afe1-4088-8f95-c32adc5737dc>","cc-path":"CC-MAIN-2014-15/segments/1398223206770.7/warc/CC-MAIN-20140423032006-00393-ip-10-147-4-33.ec2.internal.warc.gz"}
|
A perpetuity is a stream of payments, or a type of annuity, that starts payments on a fixed date, and such payments continue forever, or perpetually. Often preferred stock which pays a dividend is
considered a form of perpetuity. However, one must assume that the firm does not go bankrupt and is not otherwise impeded from making timely payments. The formula for evaluating a perpetuity is
relatively straightforward: it is simply the expected income stream divided by a discount factor or market rate of interest. It reflects the expected present value of all payments, and is comparable
to a perpetual bond or consol in this respect. If a preferred issue pays a $2.50 quarterly dividend and the annual interest rate is 5 percent, then one would expect to be willing to pay 2.50/.0125, or
$200 per share. Here, the 5 percent interest rate was adjusted for a simple quarterly disbursement (.05/4 = .0125).
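A minimal numeric sketch of the worked example (editorial, with the per-period adjustment done exactly as described above):

    quarterly_dividend = 2.50          # per-share payment each quarter
    quarterly_rate = 0.05 / 4          # 5% annual -> 0.0125 per quarter
    pv = quarterly_dividend / quarterly_rate
    print(pv)                          # 200.0 per share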
Similar financial terms
No similar financial terms found in the dictionary.
|
{"url":"http://www.bizterms.net/term/Perpetuity.html","timestamp":"2014-04-20T10:52:25Z","content_type":null,"content_length":"18221","record_id":"<urn:uuid:6603c6b4-e1df-450e-9a74-90891f125bbc>","cc-path":"CC-MAIN-2014-15/segments/1397609538423.10/warc/CC-MAIN-20140416005218-00375-ip-10-147-4-33.ec2.internal.warc.gz"}
|
st: RE: "fcast graph" does not recognize an already variable in the data
[Date Prev][Date Next][Thread Prev][Thread Next][Date Index][Thread Index]
st: RE: "fcast graph" does not recognize an already variable in the data set after commands: "var" and " fcast compute"
From "Demetriou, Eleftherios" <Eleftherios.Demetrio@techhealth.com>
To <statalist@hsphsun2.harvard.edu>
Subject st: RE: "fcast graph" does not recognize an already variable in the data set after commands: "var" and " fcast compute"
Date Thu, 20 Aug 2009 14:44:45 -0400
Thank you Nick.
True, the first error is self-explanatory. I was referring to the second
error, but I added it here in case the two error messages are
related.
I did what you suggested, plus I created a new data set that includes
only the variables mentioned in the VAR command.
In addition I shortened the names of the variables to two characters
in case there is any confusion there.
But I still face the same problem.
-----Original Message-----
From: Demetriou, Eleftherios
Sent: Thursday, August 20, 2009 1:58 PM
To: 'statalist@hsphsun2.harvard.edu'
Subject: "fcast graph" does not recognize an already variable in the
data set after commands: "var" and " fcast compute"
Hi there
I would really appreciate any help on where I should look after I
receive an error message that I did not expect.
. var dlbms dlcms, exog(feb-dec) lags(1/12)
. fcast compute dlbmsvar
asymptotic standard error not available with exogenous variables
. fcast graph dlbmsvardlbms, observed
variable dlbmsvardlbms_LB not found
Note: the dlbmsvardlbms variable is the one created after I executed the
"fcast compute" command. In other words, the variable exists in the data set.
Thank you
* For searches and help try:
* http://www.stata.com/help.cgi?search
* http://www.stata.com/support/statalist/faq
* http://www.ats.ucla.edu/stat/stata/
|
{"url":"http://www.stata.com/statalist/archive/2009-08/msg01001.html","timestamp":"2014-04-18T21:01:44Z","content_type":null,"content_length":"8703","record_id":"<urn:uuid:6404dbaa-78c3-44a6-a759-43742572cea2>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00249-ip-10-147-4-33.ec2.internal.warc.gz"}
|
compare -text
Compares two Infernal covariance models. Returns the common MaxiMin score and the offending RNA sequence. High scores point toward low discriminative power of the two models. Based on:
"Christian Höner zu Siederdissen, and Ivo L. Hofacker. 2010. Discriminatory power of RNA family models. Bioinformatics 26, no. 18: i453-59."
http://bioinformatics.oxfordjournals.org/content/26/18/i453.long Version 0.0.1.5
SizeCompare is a small library providing size comparison functions for standard Haskell data-types. Size comparison runs in O(min(n,m)) for both arguments, possibly faster. Instead of measuring both
containers and comparing the results, SizeCompare iteratively deconstructs both sides of the equality equation until a conclusion can be made. A common expression like length xs > 0 runs in O(n) in the
length of the list. SizeCompare runs in O(1) in this particular case: xs |>| 0. This is still an initial version of the library and updates may follow after some more profiling. Version 0.1
|
{"url":"http://www.haskell.org/hoogle/?hoogle=compare+-text","timestamp":"2014-04-16T06:23:06Z","content_type":null,"content_length":"10287","record_id":"<urn:uuid:11c349f0-6b24-4729-82f8-a029d00c9706>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00428-ip-10-147-4-33.ec2.internal.warc.gz"}
|
The first eigenvalue of a branching process matrix
Let $M$ be the real square matrix of a typed branching process, such that $M_{ij}$ is the expected value of offspring of type $j$ emanating from type $i$.
We know that if the first eigenvalue of $M$ is smaller than 1, then all types will be extinct, and if it is larger than 1, then with positive probability some types won't get extinct.
Is there some interpretation of the eigenvalue other than that? (its actual value.)
pr.probability st.statistics markov-chains
1 The second "smaller" should be "larger". – Federico Poloni Apr 22 '13 at 7:22
1 if it is larger than 1, then there are types that won't get extinct... is not accurate: rather, there is a positive probability that some types will not get extinct. – Did Apr 22 '13 at 7:57
1 Answer
You can see this process as a dynamical system or a Markov chain without normalization. If the matrix is irreducible, starting from every initial distribution of number of individuals
$w_0$, the process will "converge" (in some suitable sense) to $w_{k}=\alpha \lambda^k v$, for some $\alpha\in\mathbb{R}$, and $(\lambda,v)$ the Perron eigenpair.
Thus, in the stationary limit, the ratios among the number of individuals of different types at each time step $k$, $(w_k)_i/(w_k)_j$, are the ratios of components of the Perron vector,
$v_i/v_j$, while the number of individuals is multiplied by $\lambda$ at each iteration. So $\lambda$ is a growth factor for the number of individuals at each iteration.
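[Editorial sketch, not part of the original answer] A small numerical illustration with NumPy; the mean matrix here is made up, and the eigenvector convention below assumes expected counts evolve as $w_{k+1} = w_k M$ (row vectors), so the limiting type ratios come from the left Perron vector:

    import numpy as np

    # Hypothetical mean matrix: M[i, j] = expected number of type-j
    # offspring of a single type-i individual.
    M = np.array([[0.6, 0.7],
                  [0.4, 0.9]])

    eigvals, eigvecs = np.linalg.eig(M.T)  # left eigenvectors of M
    i = int(np.argmax(eigvals.real))
    lam = eigvals.real[i]                  # Perron eigenvalue
    v = np.abs(eigvecs[:, i].real)
    print(lam)                             # about 1.3 (> 1: supercritical)
    print(v / v.sum())                     # limiting type proportions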
Conditionally on non-extinction. – Did Apr 22 '13 at 7:58
You are right, sorry for forgetting this part! – Federico Poloni Apr 22 '13 at 9:02
|
{"url":"http://mathoverflow.net/questions/128285/the-first-eigenvalue-of-a-branching-process-matrix","timestamp":"2014-04-18T19:05:30Z","content_type":null,"content_length":"56371","record_id":"<urn:uuid:41041add-0118-47b1-9bbc-a3b56e070a89>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00247-ip-10-147-4-33.ec2.internal.warc.gz"}
|
99 questions/46 to 50
From HaskellWiki
These are Haskell translations of Ninety Nine Lisp Problems.
If you want to work on one of these, put your name in the block so we know someone's working on it. Then, change n in your block to the appropriate problem number, and fill in the <Problem
description>,<example in lisp>,<example in Haskell>,<solution in haskell> and <description of implementation> fields.
1 Logic and Codes
2 Problem 46
<Problem description>
<example in lisp>
Example in Haskell:
<example in Haskell>
<description of implementation>
3 Problem 47
<Problem description>
<example in lisp>
Example in Haskell:
<example in Haskell>
<description of implementation>
4 Problem 48
<Problem description>
<example in lisp>
Example in Haskell:
<example in Haskell>
<description of implementation>
5 Problem 49
An n-bit Gray code is a sequence of n-bit strings constructed according to certain rules. For example,
n = 1: C(1) = ['0','1'].
n = 2: C(2) = ['00','01','11','10'].
n = 3: C(3) = ['000','001','011','010','110','111','101','100'].
Find out the construction rules and write a predicate with the following specification:
% gray(N,C) :- C is the N-bit Gray code
Can you apply the method of "result caching" in order to make the predicate more efficient, when it is to be used repeatedly?
Example in Haskell:
P49> gray 3
gray :: Int -> [String]
gray 1 = ["0", "1"]
gray (n+1) = let xs = gray n in map ('0':) xs ++ map ('1':) (reverse xs)
It seems that the Gray code can be defined recursively: to determine the Gray code for n, we take the Gray code for n-1 and prepend a 0 to each word, then take the Gray code for n-1 again,
reverse it, and prepend a 1 to each word. At last we have to append these two lists. (Wikipedia seems to approve this.)
Instead of defining gray n in terms of gray (n-1) as above, we could also write the recursive case as gray (n+1) using an n+k pattern,
which leads to the same results (though n+k patterns are no longer standard Haskell).
6 Problem 50
<Problem description>
<example in lisp>
Example in Haskell:
<example in Haskell>
<description of implementation>
|
{"url":"http://www.haskell.org/haskellwiki/index.php?title=99_questions/46_to_50&oldid=9223","timestamp":"2014-04-18T03:43:54Z","content_type":null,"content_length":"21442","record_id":"<urn:uuid:ee05e3ab-029b-4a6e-afa3-bb6be95d95b2>","cc-path":"CC-MAIN-2014-15/segments/1397609532480.36/warc/CC-MAIN-20140416005212-00000-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Overbrook Hills, PA Statistics Tutor
Find an Overbrook Hills, PA Statistics Tutor
...Also, I have tutored students in ODE's for over ten years. I worked for close to three years as a pension actuary and have passed the first three exams given by the Society of Actuaries, which
rigorously cover such topics as calculus, probability, interest theory, modeling, and financial derivat...
19 Subjects: including statistics, calculus, geometry, algebra 1
...Hard work pays off! I have been an SAT, ACT, PSAT tutor for over 10 years. My first career was in business as a Vice President, consultant, and trainer.
35 Subjects: including statistics, English, reading, chemistry
I have been working as a statistician at the University of Pennsylvania since 1991, providing assistance to researchers in various areas of health behavior. I am proficient in several statistical
packages, including SPSS, STATA, and SAS. One of my particular strengths is the ability to explain sta...
1 Subject: statistics
I received my MEd in Middle School Math this May from Lesley University in MA, and waiting for my certification for NJ teacher license. I was trained in adolescent and cognitive psychology and
have a very strong practical Mathematics background. I have served as an educator in various roles, both part time and full time, spanning across middle school and elementary school classroom
9 Subjects: including statistics, geometry, algebra 1, algebra 2
...I can assist with any proofreading needs or help you child learn to read. My goal is to serve you and your learning needs! There is no one-size-fits-all when it comes to education.
20 Subjects: including statistics, reading, algebra 2, biology
|
Math Trick
It's really not hard to stump me since math was one of my worst subjects.
Personally I would like to know who came up with this.
1. Grab a calculator
2. Key in the first three digits of your phone number (NOT YOUR AREA CODE)
3. Multiply by 80
4. Add 1
5. Multiply by 250
6. Add the last 4 digits of your phone number.
7. Add the last 4 digits of your phone number again.
8. Subtract 250
9. Divide number by 2.
Do you recognize the answer?
"panic" only comes from having real expectations
Re: Math Trick
i truly hate the evil mind that figured that out
Re: Math Trick
i truly hate the evil mind that figured that out
Probably working with a government grant too.
"panic" only comes from having real expectations
Re: Math Trick
Sounds like a good way to get a cute girl's phone number... I'll have to remember this one
Re: Math Trick
That's cool!
Whomever thought of this needs to find something to do.
Re: Math Trick
My number is supposed to be unlisted.
Makes all the routine posts.
Re: Math Trick
It's a simple algebraic equation and will work for a myriad of numbers. There's one for birthdays, etc. The SABR guys should be able to point you to the equation. I've seen it somewhere before but
can't put my hands on it.
Re: Math Trick
works for me
Re: Math Trick
Here's a simpler one:
Key in the first 7 digits of your telephone number.
Recognize it?
For every action there is an equal and opposite criticism.
Re: Math Trick
250(80a + 1) + 2b - 250 = 20000a + 2b, and half of that is 10000a + b
a = first 3 digits
b = last 4 digits
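A quick brute-force check of that algebra (an illustrative Python sketch of my own, not from the thread) confirms the steps always reproduce the seven-digit number:

# Phone-number trick: for a 3-digit prefix a and 4-digit suffix b,
# ((a*80 + 1)*250 + b + b - 250) / 2 simplifies to 10000*a + b.
for a in range(200, 1000):            # sample 3-digit prefixes
    for b in (0, 1234, 9999):         # sample 4-digit suffixes
        result = ((a * 80 + 1) * 250 + b + b - 250) // 2
        assert result == 10000 * a + b
print("trick verified for all tested numbers")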
Re: Math Trick
max venable
Here's a simpler one:
Key in the first 7 digits of your telephone number.
Recognize it?
Let's make some noise!
Re: Math Trick
max venable
Here's a simpler one:
Key in the first 7 digits of your telephone number.
Recognize it?
"I saw Wedding Crashers accidentally. I bought a ticket for Grizzly Man and went into the wrong theater. After an hour, I figured I was in the wrong theater, but I kept waiting. That’s the thing
about bear attacks. They come when you least expect it."-Dwight K. Schrute
Re: Math Trick
"Man, what would we ever do without water? We'd like....famish or something."
Re: Math Trick
Here is another interesting one called "Your age by chocolate":
1. First of all, pick the number of times a week that you would like to
have chocolate (more than once but less than 10)
2. Multiply this number by 2! (just to be bold)
3. Add 5
4. Multiply it by 50 -- I'll wait while you get the calculator
5. If you have already had your birthday this year add 1756 ....
If you haven't, add 1755.
6. Now subtract the four digit year that you were born.
You should have a three digit number now.
The first digit of this was your original number
(i.e., how many times you want to have chocolate each week).
The next two numbers are YOUR AGE! (Oh YES, it is!!!!!)
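The chocolate trick is the same idea: the "+5 then x50" contributes 250, and 250 + 1756 = 2006, the year this thread was posted, so the constants encode the current year and go up by one each calendar year. A Python sketch of my own showing the mechanics:

# "Age by chocolate": the hundreds digit is the chocolate count,
# the last two digits are the age (valid while the age is under 100).
def age_by_chocolate(times_per_week, birth_year, had_birthday, year=2006):
    n = times_per_week * 2 + 5
    n = n * 50 + (1756 if had_birthday else 1755) + (year - 2006)
    return n - birth_year   # equals 100*times_per_week + age

print(age_by_chocolate(7, 1980, had_birthday=True))   # 726 -> "7" and age 26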
Re: Math Trick
max venable
Here's a simpler one:
Key in the first 7 digits of your telephone number.
Recognize it?
Go Gators!
|
Short proof by induction
I need to prove the following for one of my solutions in order to solve a problem:
For $n \in \mathbb{N}$, if $n \geq 3^3$ then $3^n > 79n^2$.
Thank you in advance.
Have you checked that $3^{3^3} > 79\cdot(3^3)^2$?
Then, let $n \geq 3^3$ be an integer, and assume that $3^n > 79n^2$ (induction hypothesis).
Is it true that, if $n \geq 3^3$, then $2n \leq n^2$ and $1 \leq n^2$?
If it's the case, $79n^2 + 79(2n) + 79 \leq 79n^2 + 79n^2 + 79n^2 = 3(79n^2)$.
What can we conclude using the induction hypothesis?
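Putting the hints together (this concluding step is my own summary; it was not spelled out in the thread): the base case holds since $3^{27} \approx 7.6\times 10^{12}$ while $79\cdot 27^2 = 57591$, and for the inductive step, $3^{n+1} = 3\cdot 3^{n} > 3(79n^{2}) \geq 79n^{2} + 79(2n) + 79 = 79(n+1)^{2}$, which is exactly the claim for $n+1$.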
|
Untersuchungen über das logische
, 2000
"... In this article we describe the data model of the MBase system, a webbased, ..."
, 1994
"... This paper presents PROVERB, a text planner for argumentative texts. PIOVERB's main fimtm'e is that it combines global hierarchical plannillg alld illlphmncd organization of text with respect to
local derivation relations in a complementary way. The Ibmmr splits the task of presenting a particnlar p ..."
Cited by 21 (7 self)
This paper presents PROVERB, a text planner for argumentative texts. PROVERB's main feature is that it combines global hierarchical planning and unplanned organization of text with respect to
local derivation relations in a complementary way. The former splits the task of presenting a particular proof into subtasks of presenting subproofs. The latter simulates how the next intermediate
conclusion to be presented is chosen under the guidance of the local focus.
- Journal of Universal Computer Science , 1999
"... Real-world applications of automated theorem proving require modern software environments that enable modularisation, networked inter-operability, robustness, and scalability. These requirements
are met by the Agent-Oriented Programming paradigm of Distributed Artificial Intelligence. We argue that ..."
Cited by 19 (10 self)
Real-world applications of automated theorem proving require modern software environments that enable modularisation, networked inter-operability, robustness, and scalability. These requirements are
met by the Agent-Oriented Programming paradigm of Distributed Artificial Intelligence. We argue that a reasonable framework for automated theorem proving in the large regards typical mathematical
services as autonomous agents that provide internal functionality to the outside and that, in turn, are able to access a variety of existing external services. This article describes...
, 2000
"... We report on an experiment in exploring properties of residue classes over the integers with the combined effort of a multi-strategy proof planner and two computer algebra systems. An
exploration module classifies a given set and a given operation in terms of the algebraic structure they form. It th ..."
Cited by 18 (11 self)
We report on an experiment in exploring properties of residue classes over the integers with the combined effort of a multi-strategy proof planner and two computer algebra systems. An exploration
module classifies a given set and a given operation in terms of the algebraic structure they form. It then calls the proof planner to prove or refute simple properties of the operation. Moreover, we
use different proof planning strategies to implement various proving techniques: from naive testing of all possible cases to elaborate techniques of equational reasoning and reduction to known cases.
- Annals of Pure and Applied Logic
"... Abstract. We develop a general algebraic and proof-theoretic study of substructural logics that may lack associativity, along with other structural rules. Our study extends existing work on
(associative) substructural logics over the full Lambek Calculus FL (see e.g. [36, 19, 18]). We present a Gent ..."
Cited by 4 (1 self)
Abstract. We develop a general algebraic and proof-theoretic study of substructural logics that may lack associativity, along with other structural rules. Our study extends existing work on
(associative) substructural logics over the full Lambek Calculus FL (see e.g. [36, 19, 18]). We present a Gentzen-style sequent system GL that lacks the structural rules of contraction, weakening,
exchange and associativity, and can be considered a non-associative formulation of FL. Moreover, we introduce an equivalent Hilbert-style system HL and show that the logic associated with GL and HL
is algebraizable, with the variety of residuated lattice-ordered groupoids with unit serving as its equivalent algebraic semantics. Overcoming technical complications arising from the lack of
associativity, we introduce a generalized version of a logical matrix and apply the method of quasicompletions to obtain an algebra and a quasiembedding from the matrix to the algebra. By applying
the general result to specific cases, we obtain important logical and algebraic properties, including the cut elimination of GL and various extensions, the strong separation of HL, and the finite
generation of the variety of residuated lattice-ordered groupoids with unit. 1.
"... Abstract. On the face of it, Hilbert’s Program was concerned with proving consistency of mathematical systems in a finitary way. This was to be accomplished by showing that that these systems
are conservative over finitistically interpretable and obviously sound quantifier-free subsystems. One propo ..."
Cited by 3 (2 self)
Abstract. On the face of it, Hilbert’s Program was concerned with proving consistency of mathematical systems in a finitary way. This was to be accomplished by showing that these systems are
conservative over finitistically interpretable and obviously sound quantifier-free subsystems. One proposed method of giving such proofs is Hilbert’s epsilon-substitution method. There was, however, a
second approach which was not reflected in the publications of the Hilbert school in the 1920s, and which is a direct precursor of Hilbert’s first epsilon theorem and a certain “general consistency
result.” An analysis of this so-called “failed proof” lends further support to an interpretation of Hilbert according to which he was expressly concerned with conservativity proofs, even though
his publications only mention consistency as the main question. §1. Introduction. The aim of Hilbert’s program for consistency proofs in the 1920s is well known: to formalize mathematics, and to give
finitistic consistency proofs of these systems and thus to put mathematics on a “secure foundation.” What is perhaps less well known is exactly how Hilbert thought this should be carried out. Over
ten years before Gentzen developed sequent calculus formalizations
- In Proc. of the IJCAR 2001 Workshop: Future Directions in Automated Reasoning , 2001
"... Proof Planning is a technique for automated (and interactive) theorem proving that searches for proof plans at the level of abstract methods. Proof methods consist of a chunk of mathematically
motivated, recurring patterns of calculus level inferences with additional pre- and post-conditions tha ..."
Cited by 2 (2 self)
Proof Planning is a technique for automated (and interactive) theorem proving that searches for proof plans at the level of abstract methods. Proof methods consist of a chunk of mathematically
motivated, recurring patterns of calculus level inferences with additional pre- and post-conditions that model their applicability conditions.
"... Abstract. Modus Ponens says that if you know A and you know that A implies B, then you know B. This is a basic rule that we take for granted ..."
Abstract. Modus Ponens says that if you know A and you know that A implies B, then you know B. This is a basic rule that we take for granted
"... Abstract. A theory of many-sorted implicative conceptual systems (abbreviated msic-systems) is outlined. Examples of msic-systems include legal systems, normative systems, systems of rules and
instructions, and systems expressing policies and various kinds of scienti…c theories. In computer science, ..."
Abstract. A theory of many-sorted implicative conceptual systems (abbreviated msic-systems) is outlined. Examples of msic-systems include legal systems, normative systems, systems of rules and
instructions, and systems expressing policies and various kinds of scientific theories. In computer science, msic-systems can be used in, for instance, legal information systems, decision support
systems, and multi-agent systems. In this essay, msic-systems are approached from a logical and algebraic perspective aiming at clarifying their structure and developing effective methods for
representing them. Of special interest are the most narrow links or joinings between different strata in a system, that is between subsystems of different sorts of concepts, and the intermediate
concepts intervening between such strata. Special emphasis is put on normative systems, and the role that intermediate concepts play in such systems, with an eye on knowledge representation issues.
In this essay, normative concepts are constructed out of descriptive concepts using operators. An architecture for a norm-regulated multi-agent system is suggested, containing a scheme for how normative
positions will restrict the set of actions that the agents are permitted to choose from.
"... A catalogue record for this book is available from the British Library ..."
|
Multilevel image reconstruction with natural pixels
- Trans. Numer. Anal , 2001
"... . This paper considers problems of distributed parameter estimation from data measurements on solutions of partial differential equations (PDEs). A nonlinear least squares functional is
minimized to approximately recover the sought parameter function (i.e., the model). This functional consists of a ..."
Cited by 39 (13 self)
. This paper considers problems of distributed parameter estimation from data measurements on solutions of partial differential equations (PDEs). A nonlinear least squares functional is minimized to
approximately recover the sought parameter function (i.e., the model). This functional consists of a data fitting term, involving the solution of a finite volume or finite element discretization of
the forward differential equation, and a Tikhonov-type regularization term, involving the discretization of a mix of model derivatives. We develop a multigrid method for the resulting constrained
optimization problem. The method directly addresses the discretized PDE system which defines a critical point of the Lagrangian. The discretization is cell-based. This system is strongly coupled when
the regularization parameter is small. Moreover, the compactness of the discretization scheme does not necessarily follow from compact discretizations of the forward model and of the regularization
term. We therefore employ a Marquardt-type modification on coarser grids. Alternatively, fewer grids are used and a preconditioned Krylov-space method is utilized on the coarsest grid. A collective
point relaxation method (weighted Jacobi or a Gauss-Seidel variant) is used for smoothing. We demonstrate the efficiency of our method on a classical model problem from hydrology. 1.
- Computing , 2000
"... In this paper, we consider a multigrid application in digital image processing. Here, the problem is to find a map, which transforms an image T into another image R such that the grey level of
the different images are nearly equal in every picture-element. This problem arises in the investigation of ..."
Cited by 12 (2 self)
In this paper, we consider a multigrid application in digital image processing. Here, the problem is to find a map, which transforms an image T into another image R such that the grey level of the
different images are nearly equal in every picture-element. This problem arises in the investigation of human brains. The complete inverse problem is ill posed in the sense of Hadamard and nonlinear,
so the numerical solution is quite difficult. We solve the inverse problem by a Landweber iteration. In each minimization step an approximate solution for the linearized problem is computed with a
multigrid method as an inner iteration. Finally, we present some experimental results for synthetic and real images.
, 2000
"... This paper considers problems of distributed parameter estimation from data measurements on solutions of dierential equations. A nonlinear least squares functional is minimized to approximately
recover the sought parameter function (i.e., the model). This functional consists of a data tting term, ..."
This paper considers problems of distributed parameter estimation from data measurements on solutions of differential equations. A nonlinear least squares functional is minimized to approximately
recover the sought parameter function (i.e., the model). This functional consists of a data fitting term, involving the solution of a finite volume or finite element discretization of the forward
differential equation, and a Tikhonov-type regularization term, involving the discretization of a mix of model derivatives. The grid spacing of the model discretization, as well as the relative weight
of the entire regularization term, affect the sort of regularization achieved. We investigate a number of questions arising regarding their relationship, including the degree of nonlinearity of the
least squares functional. We also investigate the correct scaling of the regularization matrix, where we rigorously associate the practice of using unscaled regularization matrices with
approximations of...
"... Abstract. In this paper we introduce a multigrid method for sparse, possibly rankdeficient and inconsistent least squares problems arising in the context of tomographic image reconstruction. The
key idea is to construct a suitable AMG method using the Kaczmarz algorithm as smoother. We first present ..."
Abstract. In this paper we introduce a multigrid method for sparse, possibly rank-deficient and inconsistent least squares problems arising in the context of tomographic image reconstruction. The key
idea is to construct a suitable AMG method using the Kaczmarz algorithm as smoother. We first present some theoretical results about the correction step and then show by our numerical experiments
that we are able to reduce the computational time to achieve the same accuracy by using the multigrid method instead of the standard Kaczmarz algorithm. 1
|
Riverdale Pk, MD Algebra 2 Tutor
Find a Riverdale Pk, MD Algebra 2 Tutor
...My tutoring approach includes the following: (1) talk to students on a peer-level to better understand why they are experiencing difficulties in their subject; (2) work with students to
improve the skills that will help them succeed; and (3) help students develop a sense of autonomy and empowerme...
17 Subjects: including algebra 2, reading, writing, biology
...I have also been a math tutor through college, teaching up to Calculus-level classes. My tutoring style can adapt to individual students and will teach along with class material so that
students can keep their knowledge grounded. I have a Master's degree in Chemistry and I am extremely proficient in mathematics.
11 Subjects: including algebra 2, chemistry, geometry, organic chemistry
...There is nothing more rewarding than being a part of someone growing and improving themselves, and helping a person to set lofty goals and then meet them is a truly exhilarating experience. I
have been a part of this as a peer tutor, tutor to younger students, and then as a coach of young people...
30 Subjects: including algebra 2, reading, chemistry, physics
...I have developed fun activities for students to actually have fun while they are learning. I look forward to helping your child become a huge success. In fact, several of my past students have
earned As and Bs on their exams and major tests after scoring Ds, Es, and Fs.
18 Subjects: including algebra 2, reading, writing, calculus
...You may not use math a lot in your day to day life, but solving mathematical problems will increase your logical thinking and reasoning. I have experience teaching students with various
backgrounds in math. If you are already smart in math, I can give you further guidance.
12 Subjects: including algebra 2, calculus, prealgebra, precalculus
|
IGNOU Coaching For BCA, MCA, MBA Jaipur- 9680422112
Course Code : MCS-033
Course Title : Advanced Discrete Mathematics
Assignment Number : MCA(3)/033/Assign/2012
Maximum Marks : 100
Weightage : 25%
Last Dates for Submission : 15 October, 2012 (For July 2012 Session); April, 2013 (For January 2013 Session)
There are FIVE questions of total 80 marks in this assignment. Answer all
questions. 20 Marks are for viva-voce. You may use illustrations and diagrams to
enhance explanations. Please go through the guidelines regarding assignments given
in the Programme Guide for the format of presentation.
Question 1: (a) Using a Karnaugh map, simplify
X' = A'BC'D' + ABCD + ABCD' + ABCD' (5 Marks)
(b) Describe Konigsberg's 7 bridges problem and Euler's
solution to it. (5 Marks)
(c) Show that the sum of the degrees of all vertices of a
graph is twice the number of edges in the graph. (5 Marks)
Question 2: (a) Let G be a non directed graph with 12 edges. If G has 5
vertices each of degree 3 and the rest have degree less
than 3, what is the minimum number of vertices G
can have? (5 Marks)
(b) What is graph coloring? Explain k-edge coloring with
an example. (5 Marks)
(c) Let f(n) = 5 f(n/2) + 3 and f(1) = 7. Find f(2^k) where k
is a positive integer. Also estimate f(n) if f is an increasing
function. (5 Marks)
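[A worked sketch for reference, not part of the original assignment sheet: putting n = 2^k gives f(2^k) = 5 f(2^(k-1)) + 3 with f(2^0) = 7; unrolling yields f(2^k) = 7*5^k + 3(5^(k-1) + ... + 5 + 1) = (31*5^k - 3)/4. Since f is increasing, f(n) = Theta(5^(log2 n)) = Theta(n^(log2 5)).]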
Question 3: (a) Define r-regular graph. Give an example of 3-regular
graph. (5 Marks)
f is bijective function with Range of f as the
(5 Marks)
(c) What are isomorphic graphs? Are the graphs given below isomorphic?
Explain why? (7 Marks)
[Graphs (i) and (ii) appear here as figures in the original assignment.]
(d) What is a connected graph? Construct a graph with chromatic number 5.
(4 Marks)
Question 4:
(a) Solve the following recurrence relations (9 Marks)
i) = + n, = 2
using substitution method
ii) 9
iii) =
(b) Write a short note on the Tower of Hanoi problem. How can it be
solved using recursion? (4 Marks)
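(A minimal recursive sketch in Python, added here for illustration; the assignment itself asks only for the note:)

# Tower of Hanoi: move n disks from peg src to peg dst via aux.
# Mirrors the recurrence T(n) = 2*T(n-1) + 1, so T(n) = 2^n - 1 moves.
def hanoi(n, src='A', dst='C', aux='B'):
    if n == 0:
        return
    hanoi(n - 1, src, aux, dst)           # park n-1 disks on the spare peg
    print(f"move disk {n}: {src} -> {dst}")
    hanoi(n - 1, aux, dst, src)           # stack them onto the moved disk

hanoi(3)   # prints the 7 moves for three disks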
Question 5:
(a) Show that for subgraph H of a graph G (4 Marks)
∆ (H) ≤ ∆ (G)
(b) What are divide-and-conquer relations? Explain with an example. (4 Marks)
(c) Find a power series associated with the problem where we have to
find the number of ways to select 10 people to form an expert committee
from 6 Professors and 12 Associate Professors. (4 Marks)
(d) "A tree is a bipartite graph": justify the statement with an example. (4 Marks)
IGNOU Coaching for BCA, MCA, MBA in Jaipur
Regional Director,
IGNOU Regional Centre,
Sector - 7,
Patel Marg,
Rajasthan - 302020
Ph :+91-0141-2785763 / 2785750
Fax :+91-0141-2784043
|
Lake Highlands, Dallas, TX
Dallas, TX 75230
Topflight Test Prep for the SAT, GRE and GMAT
...As both a university instructor and a test prep tutor, I am devoted to helping students master the daunting process of standardized testing. I teach the very best strategies, tips and shortcuts
for the SAT, GRE and GMAT. I want each of my students to have the clearest...
Offering 5 subjects including GMAT
|
Plane affine constructions and the TikZ calc library
LaTeX - Graphics, Figures & Tables
Written by Hugues Vermeiren
Sunday, 17 June 2012 18:32
The TikZ calc library enables you to build elaborate geometrical figures. More precisely, this library produces objects, mainly points, from primitive ones. Primitive objects are defined by their
coordinates (Cartesian or polar), but a wise rule is to use as few computations on the coordinates as possible. This is where the calc library comes in, and on this page we describe some simple
applications of this powerful tool. We will limit ourselves to affine constructions and to figures containing essentially points and lines.
- A contribution to the LaTeX and Graphics contest -
The calc library is invoked, in the preamble, by the command \usetikzlibrary{calc}.
In affine geometry, we are essentially concerned with parallelism and ratios of lengths; length itself has no meaning in affine geometry. Vector equality is also a great concern. So, with just a
few tools, we should be able to build about any plane affine construction. Of course, now and then we'll have to construct the intersections of lines.
The length ratio
Build the point P such that AP = λ·AB . P is given by \coordinate (P) at ($(A)!λ!(B)$);
The midpoint M of [AB] is then \coordinate (M) at ($(A)!0.5!(B)$); The symmetric S of A with respect to B is \coordinate (S) at ($(A)!2!(B)$); or \coordinate (S) at ($(B)!-1!(A)$);
If you want to draw the line AB a bit further than points A and B, you'll type
\coordinate (A) at (-1,-1);
\coordinate (B) at (1,0);
\coordinate (A') at ($(A)!-0.3!(B)$);
\coordinate (B') at ($(A)!1.3!(B)$);
\draw (A')--(B');
\fill (A) circle (0.5mm) node[above]{$A$};
\fill (B) circle (0.5mm) node[below]{$B$};
As this is often used, it is a good idea to make a macro out of it. Something like this:
\newcommand{\longline}[2]{% \newcommand wrapper reconstructed; the macro name is an assumption
\coordinate (ATemp) at ($(#1)!-0.2!(#2)$);
\coordinate (BTemp) at ($(#1)!1.2!(#2)$);
\draw (ATemp)--(BTemp);}
Or this
\newcommand{\longerline}[4]{% \newcommand wrapper reconstructed; the macro name is an assumption
\coordinate (ATemp) at ($(#1)!{-#3}!(#2)$);
\coordinate (BTemp) at ($(#1)!{1+#4}!(#2)$);
\draw (ATemp)--(BTemp);}
Parameters #3 and #4 determine (in %), the extent to wich the line is prolonged in both direction.
An easy construction is the one illustrating Ceva's theorem:
\coordinate[label=below:$A$] (A) at (-1,0);
\coordinate[label=right:$B$] (B) at (2.5,0.5);
\coordinate[label=above:$C$] (C) at (0,3);
\coordinate[label=right:$A_1$] (A1) at ($(B)!0.3!(C)$);
\coordinate[label=left:$B_1$] (B1) at ($(C)!0.8!(A)$);
\coordinate[label=above left:$I$]
(I) at (intersection of A--A1 and B--B1);
\coordinate[label=below:$C_1$]% this label line reconstructed; placement assumed
(C1) at (intersection of C--I and A--B);
\draw (A)--(B)--(C)--cycle;
\draw (A)--(A1) (B)--(B1) (C)--(C1);
\foreach \p in {A,B,C,A1,B1,C1,I}
\fill (\p) circle (0.5mm);
The fourth point of a parallelogram and vector equality
Determine the point D such that AD = BC . Who would say "I'll never need that!"? D is simply determined by the line of code: \coordinate (D) at ($(A)+(C)-(B)$);
\coordinate (A) at (-1,1);
\coordinate (B) at (1.5,-0.5);
\coordinate (C) at (1,1);
\coordinate (D) at ($(A)+(C)-(B)$);
\draw (A)--(B)--(C)--(D)--cycle;
\foreach \p in {A,B,C,D}
\fill (\p) circle (0.5mm) node [above right]{$\p$};
Here is an exercise that intensively uses this construction.
O is any point inside triangle ABC. I,J and K are such that OABI, OBCJ and OCAK are parallelograms. Prove that O is the center of gravity of triangle IJK.
\coordinate[label=above:$A$] (A) at (-1,-1);
\coordinate[label=below:$B$] (B) at (1.5,-1.2);
\coordinate[label=left:$C$] (C) at (0.2,1);
\coordinate[label=above:$O$] (O) at (0,0);
\coordinate[label=above:$I$] (I) at ($(B)+(O)-(A)$);
\coordinate[label=above:$J$] (J) at ($(C)+(O)-(B)$);
\coordinate[label=below:$K$] (K) at ($(A)+(O)-(C)$);
\coordinate[label=above:$K'$] (K') at ($(I)!0.5!(J)$);
\coordinate[label=left:$I'$] (I') at ($(J)!0.5!(K)$);
\coordinate[label=above:$J'$] (J') at ($(K)!0.5!(I)$);
\draw[dashed] (O)--(A)--(B)--(I)--cycle;
\draw[dashed] (O)--(B)--(C)--(J)--cycle;
\draw[dashed] (O)--(C)--(A)--(K)--cycle;
\draw (I)--(I') (J)--(J') (K)--(K');
\draw[thick] (I)--(J)--(K)--cycle;
\foreach \p in {A,B,C,I,J,K,O,I',J',K'}
\fill (\p) circle (0.5mm);
The sum of two vectors can be described by this:
\coordinate[label=left:$A$] (A) at (-1,0);
\coordinate[label=right:$B$] (B) at (0.75,1);
\coordinate[label=above:$C$] (C) at (1.5,2);
\coordinate[label=below:$D$] (D) at (3,-1);
\coordinate[label=right:$X$] (X) at ($(B)+(D)-(C)$);
\draw[->,>=triangle 45] (A)--(B)
node[midway,above left]{$\vec{u}$};% label reconstructed; name and placement assumed
\draw[->,>=triangle 45] (C)--(D)
node[midway,above right]{$\vec{v}$};
\draw[->,>=triangle 45,thick] (A)--(X)
node[midway,below right]{$\vec{u}+\vec{v}$};% label reconstructed; placement assumed
\draw[dashed] (B)--(X) (B)--(C) (D)--(X);
\foreach \p in {A,B,C,X} \fill (\p) circle (0.5mm);
In this figure the arrow tips (the option '>=triangle 45' of the \draw command) are obtained using the arrows library: \usetikzlibrary{arrows}.
As an exercise, we might propose this:
Given the parallelograms ABCD and A'B'C'D', prove that the midpoints A'', B'', C'', D'' of segments [AA'], [BB'], [CC'] and [DD'] are the vertices of a parallelogram (possibly degenerate).
The image should look like this:
Remember that only points A,B,C and A',B',C' are defined by their coordinates. The rest is determined by the calc library.
In order to draw the parallel to line BC through point A, you can define D such that AD = CB and D' such that AD'=BC. Drawing line DD' places A in the middle of segment [DD'].
\coordinate (A) at (-2,1);
\coordinate (B) at (1,-0.5);
\coordinate (C) at (-1,-1);
\coordinate (TempA) at ($(A)+(B)-(C)$);
\coordinate (TempB) at ($(A)-(B)+(C)$);
\draw (TempA)--(TempB);
\foreach \p in {A,B,C}
\fill (\p) circle (0.5mm) node [above right]{$\p$};
A macro that draws the parallel to BC through point A could look like this:
\newcommand{\parallelline}[5]{% \newcommand wrapper reconstructed; the macro name is an assumption
\coordinate (ATemp) at ($(#1)+(#3)-(#2)$);
\coordinate (BTemp) at ($(#1)!{-#4}!(ATemp)$);
\coordinate (CTemp) at ($(#1)!{1+#5}!(ATemp)$);
\draw (BTemp)--(CTemp);}
Points A, B and C correspond respectively to parameters #1, #2 and #3.
Thales Configuration
As an application of the intercept theorem (Thales' theorem), we propose the following:
\coordinate[label=below:$A$] (A) at (-1,-1);
\coordinate[label=right:$B$] (B) at (1.5,-0.5);
\coordinate[label=above:$C$] (C) at (0,2);
\def\r{0.3}% ratio used below; the original definition line was lost, the value 0.3 is assumed
\coordinate[label=below:$X$] (X1) at ($(A)!\r!(B)$);
\coordinate (X2) at ($(A)!\r!(C)$);
\coordinate (X3) at ($(B)!\r!(C)$);
\coordinate (X4) at ($(B)!\r!(A)$);
\coordinate (X5) at ($(C)!\r!(A)$);
\coordinate (X6) at ($(C)!\r!(B)$);
\coordinate (P) at ($(X6)!0.9!(X1)$);
\draw (A)--(B)--(C)--cycle;
\draw[->,>=triangle 45]
(X1)--(X2)--(X3)--(X4)--(X5)--(X6)--(P);% path reconstructed from the point definitions; each leg is parallel to a side of ABC
\foreach \p in {A,B,C,X1,X2,X3,X4,X5,X6}
\fill (\p) circle (0.6mm);
Starting from point X and following a path each time parallel to one side of triangle ABC, do we necessarily return to point X?
The parabola as envelope
To close this short presentation we propose a more elaborate construction.
On one arm of an angle the arbitrary segment e and, on the other, the segment f are marked off n times in succession from the vertex of the angle, and the segment endpoints are numbered, beginning
from the vertex 0, 1, 2, ..., n and n, n-1, ..., 2, 1, 0 respectively.
Prove that the lines joining the points with the same number envelop a parabola.
A proof can be found, for example, in "100 Great Problems of Elementary Mathematics, Their History and Solution" by Heinrich Dörrie (Dover, 1958).
This is the kind of construction all of us have done, with paper and pencil, sometime when we were young in a classroom. It is relatively easy to realize with the calc library and without any
analytical geometry. Of course, as this is a repetitive task, we'll need some loop instruction (TikZ provides a very efficient loop structure: the \foreach instruction).
\def\n{20} % nb. subdivisions OA and OB
\def\m{8} % nb. of tangents outside [OA], [OB]
\def\la{10}% distance OA
\def\lb{10}% dist. OB
\def\a{-10}% angle OA
\def\b{75}% angle OB
\coordinate (O) at (0,0);
\coordinate (A) at (\a:\la);
\coordinate (B) at (\b:\lb);
\draw[thick] ($(O)!2.5!(B)$)--($(B)!1.5!(O)$);
\draw[thick] ($(O)!2.5!(A)$)--($(A)!1.5!(O)$);
\foreach \i in {1,...,\n}{
\draw[red] ($(O)!{\i/\n}!(A)$)--($(O)!{1-\i/\n+1/\n}!(B)$);
}
\foreach \i in {1,...,\m}{
\coordinate (X) at ($(O)!{1+\i/\n}!(A)$);
\coordinate (Y) at ($(O)!{-\i/\n}!(B)$);
\draw[blue] ($(X)!-0.8!(Y)$)--(Y);
\coordinate (X) at ($(O)!{1+\i/\n}!(B)$);
\coordinate (Y) at ($(O)!{-\i/\n}!(A)$);
\draw[green] ($(X)!-0.8!(Y)$)--(Y);
}
\fill (A) circle (2pt)node[above]{$A$};
\fill (B) circle (2pt)node[right]{$B$};
Drawing the initial lines (the red ones) won't convince many of us that the envelope is indeed an arc of a parabola; it could be any hyperbolic segment, for example. Fortunately, the construction can
be completed with the blue and the green lines of the figure, which are drawn in the same way as the red lines. The picture is then pedagogically much more convincing, even if a rigorous proof has
still to be delivered.
|
Rank of the character group of a maximal $K$-torus for semisimple and adjoint algebraic groups
I've been trying to understand some of the idiosyncrasies associated to algebraic groups over non-algebraically closed fields $K$ of characteristic $p > 0$.
Let $G$ is a connected almost absolutely simple adjoint algebraic group over $K$ and $\widetilde{G}$ is the simply connected cover of $G$ also defined over $K$ with central isogeny $\pi : \widetilde
{G} \rightarrow G$. Let $\widetilde{T} \subseteq \widetilde{G}$ be a maximal $K$-torus and let $T= \pi(\widetilde{T}) \subseteq G$.
Is it possible for the $\mathbb{Z}$-rank of the $K$-defined character groups $X_K(\widetilde{T})$ and $X_K(T)$ to be different? In other words, is the $K$-split part of $\widetilde{T}$ always the
same rank as the $K$-split part of $T$?
2 Answers
The map $\widetilde{T} \rightarrow T$ is an isogeny, and an isogeny between tori over any field $K$ induces a finite-index inclusion between the associated geometric character groups as
${\rm{Gal}}(K_s/K)$-lattices, hence an isomorphism on the associated rational vector spaces, so the same dimension for spaces of Galois-invariants; i.e., the same rank for $K$-rational character groups.
However, this is a central isogeny, and that is really the key to the good behavior.
For isogenies $f:G \rightarrow G'$ between connected semisimple $K$-groups such that $\ker f$ is not central in $G$ (which can only occur in positive characteristic $p$, such as Frobenius
isogenies $F_{G/K}:G \rightarrow G^{(p)}$) it can happen that $G'$ has larger $K$-rank than $G$. This might sound paradoxical if you are not familiar with it, since you might reason that
if $T' \subset G'$ is a maximal $K$-torus (say containing a maximal split $K$-torus of $G'$) then the identity component of the underlying reduced scheme of $f^{-1}(T')$ seems to be a
smooth $K$-subgroup scheme of $G$, hence a torus mapping onto $T'$ via an isogeny, so it has the same $K$-rank as $T'$ by the first paragraph above. That reasoning is valid provided that
$f^{-1}(T')_{\rm{red}}$ really is a smooth $K$-subgroup scheme of $G$. For perfect $K$ such conditions hold (since the underlying reduced scheme of a finite type $K$-group scheme is a
smooth $K$-subgroup scheme for such $K$). But it can fail when $K$ is imperfect.
For example, if $K$ is a local function field of characteristic $p$ and $A$ is a central division algebra of dimension $p^2$ over $K$ then $G := {\rm{SL}}_1(A)$ is a $K$-anisotropic form
of ${\rm{SL}}_p$ but $G^{(p)}$ is $K$-split since $A^{(p)}$ is $K$-split (by local class field theory). So $F_{G/K}:G \rightarrow G^{(p)}$ is an isogeny from a $K$-anisotropic absolutely
simple semisimple $K$-group onto a $K$-split one. And of course this is a non-central isogeny. I suspect that this is the kind of phenomenon you were trying to find when formulating the OP.
[EDIT: I should probably have explained why in the case of central isogenies there are no surprises. That is, if $f:G \rightarrow G'$ is a central isogeny between connected reductive
$K$-groups for a field $K$ (i.e., the scheme-theoretic kernel $\ker f$ centralizes $G$ in the functorial sense) and if $T' \subset G'$ is a maximal $K$-torus then the scheme-theoretic
preimage $T := f^{-1}(T')$ is a maximal $K$-torus of $G$ (in particular, smooth and connected, even if $f$ is inseparable or has disconnected kernel). The reason is that to prove the
$K$-group scheme $T$ is a maximal $K$-torus it suffices to do so after a ground field extension, since by Grothendieck's theorem on the "geometric maximality" of maximal tori over the
ground field in a smooth connected affine $K$-group we do not lose the maximality hypothesis on $T'$ after a ground field extension. Hence, we may assume $K$ is algebraically closed.
With $K = \overline{K}$, all choices of $T'$ are $G'(K)$-conjugate and the map $G(K) \rightarrow G'(K)$ is surjective, so it suffices to treat the case of a single $T'$. Ah, but then we
simply choose a maximal $K$-torus $S$ in $G$, so $T' := f(S)$ is a maximal $K$-torus in $G' = f(G)$, and thus the problem is to show that the inclusion $S \subset f^{-1}(f(S))$ of
$K$-group schemes is an equality. Since $f$ is necessarily faithfully flat, so $G' = G/(\ker f)$ as fppf group sheaves, it suffices to show that $\ker f \subset S$ as subfunctors of $G$.
Since $\ker f$ is central in $G$ by hypothesis, so it centralizes $S$, and hence it suffices to show that the scheme-theoretic centralizer $Z_G(S)$ of $S$ is equal to $S$. We know equality
on $K$-points by the classical theory, so one just has to show that the group scheme $Z_G(S)$ is smooth, which is to say that it has the "expected" tangent space (i.e., that of $S$). This
is a problem on dual numbers, and is proved in section 9 of Chapter III of Borel's textbook in a more classical language.]
Since the question is formulated in the language of the papers by Borel-Tits, I'd recommend $\S 22$ of Borel's GTM 126 Linear Algebraic Groups (on central isogenies) as a reasonable
reference. See especially 22.6-22.7. Borel and Tits developed their language for a connected reductive group defined over an arbitrary field, avoiding the greater generality of scheme theory
but also being well aware of the advantages which scheme language can provide when the field of definition is imperfect. (SGA3 and the more recent writings by Milne and Conrad-Gabber-Prasad
provide an alternative route into the complexities of working over function fields or over more general rings.)
As long as your isogeny is central, there is not much difficulty in comparing an adjoint group with any covering group. Otherwise, as pointed out by user36938, life is more complicated. It's
certainly essential to make appropriate choices of language and source materials when dealing with these kinds of questions. In any case, the basic facts have been written down pretty
clearly a long time ago. Applying these facts to groups over function fields and the like remains a challenge.
|
MathGroup Archive: September 1995 [00109]
[Date Index] [Thread Index] [Author Index]
Re: Some Easy (hopefully) Questions
• To: mathgroup at christensen.cybernetics.net
• Subject: [mg2038] Re: Some Easy (hopefully) Questions
• From: David Harvatin <dtharvat at sprint.uccs.edu>
• Date: Sat, 16 Sep 1995 01:41:41 -0400
• Organization: University of Colorado at Boulder
Rob Carscadden <carscadd at pps.pubpol.duke.edu> wrote:
>I'm trying to run a model using Mathematica. I've scoured all our
>libraries and all the books are gone. Working with all too basic
>Mathematica By Example, I haven't been able to find how to do some rather
>easy things.
>The first thing I want to do is find a max (I need to plug the max and
>where it occurs into another equation). I have a function, and I have
>taken both the first and second derivatives. I solve for f'(x) = 0. But
>here's where the trouble comes in. I want to plug these points back into
>my f(x). How can I do this. Look at the following example:
>possmax = Solve[(x - 1)^2 == 0,x] // N
>I get {{x -> 1.},{x ->1.}}
>and when I try
>I get {x ->1.}
>but I want the number 1 instead.
>I know this is probably extremely basic, but due to lack of available
>documentation here, I cannot figure it out.
>P.S. another useful trick to know would be how many objects are in a
>list. My function is a bit more complicated (compositions, logs etc.), and I
>don't know exactly how many solutions I will have from the Solve[ ] statement.
>Rob Carscadden
>carscadd at pps.duke.edu
>49% of all Statistics are made up on the spot!
To answer your first question, suppose you want to put both solutions into an expression called
exp1 that is also a function of x :
exp1 = 5 x -3. The ReplaceAll operator (forward slash then period) will replace all
occurrences of x in exp1 with the values you obtain for possmax if you do the following :
exp1 /. possmax
Your answer will be in the form of a list, with each element of the list corresponding to one
of your x values in possmax. If you want only one solution, then do the following :
exp1 /. possmax[[1]] or
exp1 /. possmax[[2]]
To answer your second question, use the Length command :
Length[possmax] gives you an answer of 2. For a list such as : list = { {1,2},{3,4} },
Length[list] also gives an answer of 2.
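(For readers outside Mathematica, the same solve/substitute/count workflow can be sketched in Python with SymPy; this is an illustrative analogue of my own, not part of the original reply. Note SymPy returns plain values rather than {x -> 1.} rules:)

import sympy as sp

x = sp.symbols('x')
possmax = sp.solve((x - 1)**2, x)            # -> [1]
exp1 = 5*x - 3
print([exp1.subs(x, s) for s in possmax])    # substitute each solution: [2]
print(len(possmax))                          # number of solutions, like Length[]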
Good luck. I had to learn Mathcad without the benefit of the book, so I know how you feel.
BTW, unlike Mathcad, Stephen Wolfram's MMA book is readily available in most large bookstores
for around $50.
Dave Harvatin
dtharvat at sprint.uccs.edu
|
Homework Help
Posted by cesilia on Tuesday, October 26, 2010 at 8:06pm.
what is the longest multiplication combination for 120
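(No answer was posted with this question; the intended answer is almost certainly the prime factorization, since a product of primes is the longest possible chain of factors greater than 1: 120 = 2 x 2 x 2 x 3 x 5, five factors. A quick Python sketch, mine rather than the site's:)

def prime_factors(n):
    # Repeatedly divide out the smallest factor; the result is the
    # longest multiplication combination of factors greater than 1.
    factors, d = [], 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:
        factors.append(n)
    return factors

print(prime_factors(120))   # [2, 2, 2, 3, 5]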
|
negative base
The use of a negative base to represent numbers gives rise to some intriguing possibilities. Consider "negadecimal," for example, in which the base is minus 10 instead of the familiar positive 10. In
this system, the number 365 is equivalent to the decimal number 5 + (6 × -10) + (3 × -10 × -10) = 245, while 35 in negadecimal is equivalent to 5 + (3 × -10) = -25, in ordinary decimal. This points
to an interesting fact: the negadecimal representation of any positive or negative decimal number is written with unsigned digits and therefore doesn't need to be accompanied by a sign. The Polish UMC-1, of which a
few dozen were built in the late 1950s and early 1960s, is the only computer ever to use "negabinary" (base -2) arithmetic.
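A short conversion routine makes the sign-free property easy to check (an illustrative Python sketch, not part of the original entry):

# Write a (possibly negative) integer in base -10, "negadecimal".
def to_negadecimal(n):
    if n == 0:
        return "0"
    digits = []
    while n != 0:
        n, r = divmod(n, -10)
        if r < 0:                 # keep each digit in 0..9 by borrowing
            n, r = n + 1, r + 10
        digits.append(str(r))
    return "".join(reversed(digits))

print(to_negadecimal(245))    # "365": 5 + 6*(-10) + 3*(-10)^2
print(to_negadecimal(-25))    # "35":  5 + 3*(-10), no minus sign needed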
|
Trapezoid Rule is really ruling, need to be free, help!
I am sitting here stuck on these three problems. Please help me!
1. Find an approximation for the area in the first quadrant between the x-axis and the curve y=4-(x-2)^2 using 4 equally spaced intervals and a Left Hand Riemann Sum.
I'm not sure about the left hand part. But I've tried and I come up with 10 and 10.666.
2. I have graphed the function y=1/x, where x goes from 0 to 5. Using the trapezoid rule with 4 equal intervals, I must approximate the area under the curve between x=1 and x=3.
Please help.
Okay, this is what I have for the second one so far. But for some reason I keep missing it from here. I know it's just a matter of calculating but I just can't get it right.
f(x) = 1/x
a = 1
b = 3
n = 4
T = [(b - a)/(2n)]*[f(x0) + 2f(x1) + 2f(x2) + ... + 2f(xn-1) + f(xn)]
T = [(3 - 1)/(2*4)]*[f(1) + 2f(3/2) + 2f(2) + 2f(5/2) + f(3)]
T = (1/4)[1 + 2(2/3) + 2(1/2) + 2(2/5) + (1/3)]
Now for the next trapezoid problem (ruler).
If the trapezoid rule is used with 5 intervals, what is the integral of dx/(1 + x^2), with limits 0 to 1?
Please, please help!
I am sitting here stuck on these three problems. Please help me!
1. Find an approximation for the area in the first quadrant between the x-axis and the curve y=4-(x-2)^2 using 4 equally spaced intervals and a Left Hand Riemann Sum.
I'm not sure about the left hand part. But I've tried and I come up with 10 and 10.666.
You are being asked to estimate:
$<br /> \int_0^4 y(x) dx=\int_0^4 4-(x-2)^2\ dx<br />$
using 4 equal sub-intervals of $[0,4]$. These sub-intervals are:
$[0,1], [1,2], [2,3]$ and $[3,4]$.
As we are instructed to use left hand Riemann sums we will approximate the
function on each of these subintervals by its value at the left hand point of
the sub-interval, so:
$<br /> \int_0^4 y(x) dx\approx (y(0)\times \delta x) +(y(1) \times \delta x)$$+ (y(2) \times \delta x) + (y(3) \times \delta x)<br />$,
where $\delta x$ is the width of the intervals in this case $\delta x=1$
$<br /> \int_0^4 y(x) dx\approx (0\times \delta x) +(3 \times \delta x)$$+ (4 \times \delta x) + (3 \times \delta x)=10<br />$.
This may be compared with the result of doing the integral analytically, which
gives an area of $32/3 \approx 10.667$.
-----------------------------------------------
2. I have graphed the function y=1/x, where x goes from 0 to 5. Using the trapezoid rule with 4 equal intervals, I must approximate the area under the curve between x=1 and x=3.
You need to approximate,
You are using 4 equal intervals thus, $n=4$.
Next, you need equal widths which is $\Delta x=\frac{b-a}{n}=\frac{2}{4}=.5$.
Now, by the trapezoidal rule,
$\frac{1}{2}[f(a)+2f(a+\Delta x)+2f(a+2\Delta x)+$$2f(a+3\Delta x)+f(a+4\Delta x)]$$\Delta x$
Substituting your values ($a=1$, $\Delta x=.5$) and evaluating $f(x)=\frac{1}{x}$ at each node,
$\frac{1}{2}[1+1.33+1+.8+.33].5\approx 1.11$
Now, compare to the actual value. This is what I am afraid of: I do not know if you have studied natural logarithms. In that case, skip this part.
The actual value of,
$\int^3_1\frac{dx}{x}=\ln 3$
because this is the very definition of the natural logarithm. But $\ln 3\approx 1.09$
Notice that your error from using only 4 subintervals is only .02
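(A quick numerical check of both answers; an illustrative Python sketch, not from the thread:)

# Composite trapezoid rule on [a, b] with n equal sub-intervals.
def trapezoid(f, a, b, n):
    h = (b - a) / n
    s = 0.5 * (f(a) + f(b)) + sum(f(a + i * h) for i in range(1, n))
    return s * h

print(trapezoid(lambda x: 1 / x, 1, 3, 4))          # ~1.1167 vs ln 3 ~ 1.0986
print(trapezoid(lambda x: 1 / (1 + x*x), 0, 1, 5))  # ~0.7837 vs pi/4 ~ 0.7854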
|
MathGroup Archive: February 2009 [00519]
[Date Index] [Thread Index] [Author Index]
Re: testing if a point is inside a polygon
• To: mathgroup at smc.vnet.net
• Subject: [mg96475] Re: testing if a point is inside a polygon
• From: Daniel Lichtblau <danl at wolfram.com>
• Date: Sat, 14 Feb 2009 03:10:43 -0500 (EST)
• References: <gmp0mt$btn$1@smc.vnet.net> <gn3bv9$q4n$1@smc.vnet.net>
On Feb 13, 2:45 am, Daniel Lichtblau <d... at wolfram.com> wrote:
> On Feb 9, 4:31 am, Mitch Murphy <mi... at lemma.ca> wrote:
> > is there a way to test whether a point is inside a polygon? ie.
> > PointInsidePolygonQ[point_,polygon_] -> True or False
> > i'm trying to do something like ...
> > ListContourPlot[table,RegionFunction->CountryData["Canada","Polygon"]]
> > to create what would be called a "clipping mask" in photoshop.
> > cheers,
> > Mitch
> The application seems to involve many such tests rather than a one-off
> sort of query, so it is probably acceptable to spend time in
> preprocessing, if it allows us to avoid O(n) work per query (n being
> the number of segments). I'll show an implementation that bins the
> segments according to x coordinates. Specifically, a bin will contain
> a segment if either starting or terminating x coordinate of the
> segment corresponds to x values in the bin, or else the entire bin's
> range is between the starting and terminating values. Note that a
> given segment might be in multiple bins. So long as we bin "sensibly",
> all bins should have but a smallish fraction of the total number of
> segments.
> The preprocessing is a bit on the slow side because I do nothing
> fancy. I guess one could use Sort and some smarts to make it faster
> (which might be a good idea, if the plan is to do this for many
> different objects, e.g. all countries).
> Anyway, we create the bins of segments and also keep track of min and
> max x and y values (these we use as a cheap way of ruling out points
> not in an obvious bounding rectangle).
> [...]
I missed a couple of useful optimizations. We can use Compile both in
preprocessing and in the predicate query itself.
polyToSegmentList[poly_, nbins_] := Module[
{xvals, yvals, minx, maxx, miny, maxy, segments, flatsegments,
segmentbins, xrange, len, eps},
{xvals, yvals} = Transpose[Flatten[poly, 1]];
{minx, maxx} = {Min[xvals], Max[xvals]};
{miny, maxy} = {Min[yvals], Max[yvals]};
segments = Map[Partition[#, 2, 1, {1, 1}] &, poly];
flatsegments = Flatten[segments, 1];
xrange = maxx - minx;
len = xrange/nbins;
eps = 1/nbins*len;
segmentbins = Table[
getSegsC[j, minx, len, eps, flatsegments]
, {j, nbins}];
{{minx, maxx}, {miny, maxy}, segmentbins}]
getSegsC = Compile[
 {{j, _Integer}, {minx, _Real}, {len, _Real}, {eps, _Real}, {segs,
   _Real, 3}},
 Module[{lo, hi},
  lo = minx + (j - 1)*len - eps;
  hi = minx + j*len + eps;
  Select[segs, (* Select[segs, ...] is assumed; the wrapper was missing in the archived text *)
   Module[{xlo, xhi}, {xlo, xhi} = Sort[{#[[1, 1]], #[[2, 1]]}];
     lo <= xlo <= hi || lo <= xhi <= hi || (xlo <= lo && xhi >= hi)
    ] &]]];
With this we can preprocess the polygons for Canada, using 1000 bins,
in around a minute on my machine.
In[346]:= canpoly = First[CountryData["Canada", "Polygon"]];
In[347]:= nbins = 1000;
Timing[{{xmin, xmax}, {ymin, ymax}, segmentbins} =
polyToSegmentList[canpoly, nbins];]
Out[337]= {55.3256, Null}
To repeat from the last note, there are almost certainly smarter ways
to do the preprocessing, so as to make it faster. For most
applications I can think of that would be more trouble than it is
worth, so it's not something I have attempted.
For the predicate evaluation we can do:
pointInPolygon[{x_, y_}, bins_, xmin_, xmax_, ymin_, ymax_] :=
 Catch[Module[ (* Catch and Module wrappers are assumed; they were missing in the archived text but are implied by the Throw below *)
  {nbins = Length[bins], bin},
  If[x < xmin || x > xmax || y < ymin || y > ymax, Throw[False]];
  bin = Ceiling[nbins*(x - xmin)/(xmax - xmin)];
  If[EvenQ[countIntersectionsC[bins[[bin]], x, y, ymin - 1.]], False,
   True]]]
countIntersectionsC = Compile[
 {{segs, _Real, 3}, {x, _Real}, {yhi, _Real}, {ylo, _Real}},
 Module[{tally = 0, yval, xlo, xhi, y1, y2},
  Do[ (* Do[...] and the final tally are assumed; implied by the iterator below *)
   {{xlo, y1}, {xhi, y2}} = segs[[j]];
   If[(x < xlo && x < xhi) || (x > xlo && x > xhi), Continue[]];
   yval = y1 + (x - xlo)/(xhi - xlo)*(y2 - y1);
   If[ylo < yval < yhi, tally++];
   , {j, Length[segs]}];
  tally]];
With this we can now process around 10,000 points in a second. I
selected the region so that most would be inside; this was so that we
would not gain speed due to a high percentage failing the basic
rectangle test.
In[352]:= n = 10000;
pts = Transpose[{RandomReal[{-115, -55}, {n}],
RandomReal[{45, 75}, {n}]}];
In[354]:= Timing[
 inout = Map[pointInPolygon[#, segmentbins, xmin, xmax, ymin, ymax] &,
    pts];]
Out[354]= {1.04284, Null}
In[355]:= Take[inout, 20]
Out[355]= {True, True, True, True, False, True, True, True, True, \
True, False, True, True, False, False, False, False, True, True, True}
I should make a few remarks about the speed/complexity. If we split
the n polygon segments into m bins, for m somewhat smaller than n,
then for "good" geographies we expect something like O(n/m). Figure
most segments appear in no more than two bins, most appear in only one
bin, and no bin has more than, say, three times as many segments as
any other.
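For a concrete sense of scale (illustrative numbers of my own, not measurements): with n = 50,000 segments and m = 1,000 bins, a single query examines on the order of n/m = 50 segments rather than all 50,000, roughly a thousandfold saving per point, paid for once by the preprocessing pass.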
To achieve better algorithmic complexity I believe one would need to
use something more complicated, along the lines of a triangulation.
With such a data structure, efficiently implemented, it might be only O(log(n)) to see whether a new point falls into a triangle, and, if so,
whether that triangle is inside or outside the boundary. I doubt such
a tactic would be applicable for a polygonal set of the size in
question here.
If the goal is, say, to color all pixels inside Canada differently
than those outside (in effect, to query every pixel in the bounding
rectangle), then it would be faster simply to find all boundary points
for a given vertical. This transforms from a problem of query on
points to one of splitting segments (a query on lines, so to speak).
Much more efficient, I would guess.
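For what it's worth, here is a minimal sketch of that segment-splitting query (Python rather than Mathematica, with hypothetical names; degenerate cases such as the vertical passing exactly through a vertex are ignored):

def crossings(segments, x0):
    # segments: list of ((x1, y1), (x2, y2)) pairs; x0: the vertical line.
    ys = []
    for (x1, y1), (x2, y2) in segments:
        if (x0 < x1) == (x0 < x2):   # both endpoints on the same side of x0
            continue
        # Linear interpolation gives the y of the crossing point.
        ys.append(y1 + (x0 - x1) / (x2 - x1) * (y2 - y1))
    return sorted(ys)

# Pixels whose y lies between ys[2k] and ys[2k+1] are inside the boundary.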
Daniel Lichtblau
Wolfram Research
|
Re: CR/PR questions
From: Jeremy Carroll <jjc@hpl.hp.com> Date: Mon, 5 May 2003 22:14:13 +0300 To: www-webont-wg@w3.org Message-Id: <200305052214.13579.jjc@hpl.hp.com>
> If your contention is that there are some things for which current
> systems can handle, but optimizations will be needed for larger
> community acceptance, I probably wouldn't disagree, but also wouldn't
> think these hold up the move to PR - optimizations generally come
> later in the day for these sorts of specs - what's more, there are
> already a couple of implementations where it appears to me that OWL
> reasoners can call out to other processes for doing arithmetic (or
> anything else for which there is a well-known process that is believed
> to be logically consistent) and thus it is a question of these being
> cited and more integrated.
One of my contentions is that there are *easy* OWL DL tests that no currently
implemented system can solve.
This points to one of two things:
- work that needs to be done by implementors to get their systems to do more
- work needed by the editors to make it clear that just because you can do
some reasoning in your head that does not mean that you should expect an OWL
reasoner to do it. (OWL Full has some health warnings about no complete
reasoners but I am not aware of any for OWL DL).
*can* means before the heat death of the universe, rather than in some longer
time frame.
The tests I have been working on today fit that pattern.
These can be done in my head, because I can reason about the cardinalities of
classes. An OWL DL reasoner which does not have this ability is severely
challenged. Either we should make it clear that users should not expect
reasoners to have this ability, or we should have tests that exercise these
abilities, and wait in CR until reasoners can pass them.
I take the well known pattern of 1-to-N relationships from UML, and encode
them in OWL, as relationships between finite sets.
An easy (soluble) example is:
I take a singleton class. I take a 1-to-2 relationship between that and a
second class. I take a 1-to-3 relationship between that and a third class,
now I add a 1-to-6 relationship between the first class and the third class.
This is consistent.
If I increase the numbers to beyond what one can count on one's fingers, I
will challenge most systems severely: 200, 300, 60000 - not in themselves
unreasonable numbers. For me, doing it in my head, it does not make the
problem much harder.
If I replace the 6 with a 5 then the file is inconsistent.
Now, I can replace the original singleton set with an infinite set, and
because infinity*2*3 = infinity*5 the ontology is now consistent.
Using a maxCardinality constraint with some large number I can instead insist
that the first class is finite, though I don't really know its size. Since the size
of the original class has no bearing on the arithmetic, it does not matter.
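To spell out the counting argument (my own notation, not part of the test files): if the first class has finite size n, the 1-to-2 relationship forces the second class to size 2n, the 1-to-3 relationship forces the third to size 3*2n = 6n, and the direct 1-to-6 relationship independently demands 6n, so everything is consistent. Replacing 6 by 5 demands 6n = 5n, i.e. n = 0, which the singleton enumeration rules out, so the ontology becomes inconsistent; if instead the first class is allowed to be infinite, 2*3*infinity = 5*infinity and consistency returns.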
The tests I have been working on are found in
In the abstract syntax these are:
individualValuedProp( #p-N-to-1
inverse( #invP-1-to-N )
domain( #cardinality-N )
range( #only-d )
Functional )
individualValuedProp( #q-M-to-1
inverse( #invQ-1-to-M)
domain( #cardinality-N-times-M )
range( #cardinality-N )
Functional )
individualValuedProp( #r-N-times-M-to-1
inverse( #invR-1-to-N-times-M)
domain( #cardinality-N-times-M )
range( #only-d )
Functional )
EnumeratedClass( #only-d { #d } )
EquivalentClass( #only-d
restriction( #invP-1-to-N cardinality=2 )
restriction( #invR-1-to-N-times-M cardinality=6 )
EquivalentClass( #cardinality-N
restriction( #p-N-to-1 someValuesFrom(#only-d) )
restriction( #invQ-1-to-M cardinality=3 )
EquivalentClass( #cardinality-N-times-M
restriction( #r-N-times-M-to-1 someValuesFrom(#only-d) )
restriction( #q-M-to-1 someValuesFrom(#cardinlaity-N) )
906 is the same but with the numbers increased to 20, 30 600,
907 ditto but 200, 300, 60000
908 is also consistent, it seems similar and wrong at first blush:
individualValuedProp( #p-N-to-1
inverse( #invP-1-to-N )
domain( #cardinality-N )
range( #infinite )
Functional )
individualValuedProp( #q-M-to-1
inverse( #invQ-1-to-M)
domain( #cardinality-N-times-M )
range( #cardinality-N )
Functional )
individualValuedProp( #r-N-times-M-to-1
inverse( #invR-1-to-N-times-M)
domain( #cardinality-N-times-M )
range( #infinite )
Functional )
EquivalentClass( #infinite
restriction( #invP-1-to-N cardinality=2 )
restriction( #invR-1-to-N-times-M cardinality=5 )
EquivalentClass( #cardinality-N
restriction( #p-N-to-1 someValuesFrom(#infinite) )
restriction( #invQ-1-to-M cardinality=3 )
EquivalentClass( #cardinality-N-times-M
restriction( #r-N-times-M-to-1 someValuesFrom(#infinite) )
restriction( #q-M-to-1 someValuesFrom(#cardinlaity-N) )
The number 6 has been replaced by 5, and the class only-d has been renamed as
infinite; and the enumerated class axiom has vanished. This system has
infinite models only. I would be disappointed if FACT can't do this one.
Test 910 is simply an inconsistent variant of 907.
Test 909 is more interesting, in that it is based on 908, but with the class
named #infinite renamed as #finite and then limited in size using another
property and a maxCardinality constraint.
individualValuedProp( #p-N-to-1
inverse( #invP-1-to-N )
domain( #cardinality-N )
range( #finite )
Functional )
individualValuedProp( #q-M-to-1
inverse( #invQ-1-to-M)
domain( #cardinality-N-times-M )
range( #cardinality-N )
Functional )
individualValuedProp( #r-N-times-M-to-1
inverse( #invR-1-to-N-times-M)
domain( #cardinality-N-times-M )
range( #finite )
Functional )
EquivalentClass( #finite
restriction( #invP-1-to-N cardinality=2 )
restriction( #invR-1-to-N-times-M cardinality=5 )
restriction( #f-K-to-1 someValuesFrom( #only-d ) )
EquivalentClass( #cardinality-N
restriction( #p-N-to-1 someValuesFrom(#infinite) )
restriction( #invQ-1-to-M cardinality=3 )
EquivalentClass( #cardinality-N-times-M
restriction( #r-N-times-M-to-1 someValuesFrom(#infinite) )
restriction( #q-M-to-1 someValuesFrom(#cardinlaity-N) )
individualValuedProp( #f-K-to-1
inverse( #invF-1-to-K)
domain( #only-d )
range( #finite )
Functional )
EnumeratedClass( #only-d { #d } )
EquivalentClass( #only-d
restriction( #invF-1-to-K maxCardinality=10000000000 )
Received on Monday, 5 May 2003 16:14:04 GMT
This archive was generated by hypermail 2.2.0+W3C-0.50 : Monday, 7 December 2009 10:58:00 GMT
|
Texture on a disk
01-25-2013, 08:23 AM #1
I have defined a disk with a triangle fan and I want to put a texture on it. Unfortunately, I can't figure out how to define the texture coordinates.
Can anyone help me?
Thanks in advance for the time you will spend trying to help me.
It depends on how you want to texture. Do you want to drape a square raster over the circle?
Then the center of your disk has UV of (0.5, 0.5), and the other vertices lie on a circle where the vertex (radius,0,0) maps to UV (1, 0.5).
Sorry but I don't understand your answer.
My disk is made of n triangles, and triangle i has the center of the disk as the first vertex, (radius*cos((i-1)*a), radius*sin((i-1)*a)) as the second vertex and (radius*cos(i*a), radius*sin(i*a)) as the
third vertex. Can you please explicitly tell me what the texture coordinates should be to have the disk drawn in my texture image pasted over my 3D disk?
If my maths is right, at each vertex the UV will be ((cos((i-1)*a)+1)/2, (sin((i-1)*a)+1)/2)
Then your math is wrong...
I don't think so
Code :
double a = DEGREES2RADIANS(45);
float ox = 200;   // disk centre x
float oy = 200;   // disk centre y
float r = 100;    // disk radius
// the fan centre would go first, e.g. glTexCoord2f(0.5f, 0.5f); glVertex2f(ox, oy);
for (int i = 0; i < 7; i++)
{
    float u = (float(cos((i-1)*a))+1.0)/2.0;   // texture coords in [0,1]
    float v = (float(sin((i-1)*a))+1.0)/2.0;
    float cx = float(cos((i-1)*a)) * r;        // rim vertex offset from centre
    float cy = float(sin((i-1)*a)) * r;
    // then e.g. glTexCoord2f(u, v); glVertex2f(ox + cx, oy + cy);
}
|
Cantor's Diagonalization
July 14th 2008, 01:07 AM
Hi folks. (Hi)
Got a Set question here:
"Use Cantor's diagonalization method to show that the set of all infinite strings of the letters {a,b} is not countable."
So here's what I have already tried:
I assumed it was denumerable to begin with and I want to produce a number that's not on the list in order to prove it's actually not countable.
First I figure I'll depict the string in a list... and I want to have 3 elements to do a proper diagonalization, but I couldn't get it to work well since I only have a and b...
S1 = L11 L12 L13
S2 = L21 L22 L23
S3 = L31 L32 L33
So, X = X1 X2 ...
if L11 = a, X1 = a, otherwise X1 = b
if L22 = a, X2 = a, otherwise X2 = b
Honestly I'm not really sure what to do at this point. (Worried)
July 14th 2008, 01:13 AM
The way I see it, you just list all the strings. Since there are countably many, we can index them using the natural numbers.
Let $s_{ij}$ denote the 'j'th letter of the 'i'th string in your list.
Now create a new string x such that $x_{j} = \text{NOT}(s_{jj})$, where $x_j$ denotes the jth letter of the string.(Note that NOT(a) = b and NOT(b) = a)
Now claim that x cannot equal any of the strings in the list.
July 14th 2008, 01:54 AM
Hi Isomorphism,
Thank you for your help with this. :)
I think I mostly understand what you're saying, but how do I claim that X is not equal to any of the strings in the countable list?
Do I have to prove that other than differentiating it by saying its jth letter is NOT the jth letter of S ?
Thanks again :)
July 14th 2008, 02:50 AM
Well it's simple. For two strings to be identical, every position's letter must be equal to the corresponding position's letter. Consider the ith string in the list. It definitely differs from x in the
ith position, since we constructed x such that $s_{ii} \neq x_i$. This means $x$ definitely differs in at least one position from $s_i$. But the choice of "i" was arbitrary, so this must
hold for every natural number 'i'.
But this argument implies that we have created a new string that is not in the list. Thus our initial assumption that the number of strings is countably many is wrong.
July 14th 2008, 04:02 AM
What you have done is not right: the idea is that X should differ from Si at the i-th position, but your construction has X agree with Si at the i-th position.
You need:
So, X = X1 X2 ...
if L11 = a, X1 = b, otherwise X1 = a
if L22 = a, X2 = b, otherwise X2 = a
Then since X differs at some position from every S, it cannot be among the S's; hence the assumption of denumerability must be false.
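(As an illustrative aside, my own and not from the thread: the same flip is easy to demonstrate in code on a finite list, where the output provably differs from string i at position i.)

def diagonal(strings):
    flip = {"a": "b", "b": "a"}   # NOT(a) = b, NOT(b) = a
    return "".join(flip[s[i]] for i, s in enumerate(strings))

strings = ["abab", "bbaa", "aaab", "babb"]
x = diagonal(strings)             # "baba"
assert all(x[i] != s[i] for i, s in enumerate(strings))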
July 14th 2008, 08:19 AM
Great, thanks for your help, Isomorphism and RonL.
I understand now! (Rock)
|
Help with analysis of binary max heaps implemented as trees
February 5th 2013, 02:14 PM #1
If I have a binary max heap implemented as a tree, where every node has a parent, left, right and key value. How can I determine the running times of it's operations like insert, delete min,
merge? I'm not sure where to start...
Re: Help with analysis of binary max heaps implemented as trees
Alright, my comp sci knowledge might be a little rusty, so anyone can correct me if I miss some details.
So basically big O notation gives the number of computations (the algorithm complexity) for an input of size n.
For example: a for loop over a set of size n performs n operations (one for each element), so the loop is O(n).
Likewise, when you are dealing with a tree, the number of nodes is roughly exponential in the depth: a complete binary tree of depth n has roughly $2^n$ nodes.
So if you want to do an insert, you basically attach the child node at the deepest level of the tree and compare it with its parent. If the child node is greater than its parent, swap the parent
and child. Repeat this procedure until the child node reaches its proper place, where its parent node is greater than it. What is the maximum number of times this can happen?
Basically until the node you attached becomes the root of the tree. So essentially the depth at which the node sits is the maximum number of comparisons that can happen. Since a tree has
exponentially many nodes, log(# of nodes) gives a good estimate of the depth: if we have $2^n$ nodes then $\log_2(2^n) = n$ is the depth of the tree. So the number of comparisons, with n now
the number of nodes in the tree, is log(n).
So insert is O(log(n)).
Use the same sort of reasoning for delete, min, and merge.
So basically, if the number of operations you have to do is roughly equal to the depth of the tree, it is always O(log(n)).
Last edited by jakncoke; February 5th 2013 at 03:33 PM.
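A minimal sketch of the sift-up step described above (my own Python, with illustrative names; the nodes carry parent/left/right/key as in the original question):

class Node:
    def __init__(self, key):
        self.key = key
        self.parent = self.left = self.right = None

def sift_up(node):
    # One comparison (and possibly one key swap) per level, so the work is
    # bounded by the depth of the tree: O(log n) for n nodes.
    while node.parent is not None and node.key > node.parent.key:
        node.key, node.parent.key = node.parent.key, node.key
        node = node.parent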
|
The Lie structure of a commutative ring with derivation
Jordan, D.A. and Jordan , C. R. (1978). The Lie structure of a commutative ring with derivation. Journal of the London Mathematical Society, 2(18) pp. 39–49.
In this paper we shall not view
Item Type: Journal Article
Copyright Holders: 1978 London Mathematical Society
ISSN: 1469-7750
Extra Information: MR number MR0573088
Academic Unit/Department: Mathematics, Computing and Technology > Mathematics and Statistics
Item ID: 31532
Depositing User: Camilla Jordan
Date Deposited: 09 Feb 2012 12:15
Last Modified: 09 Feb 2012 12:15
URI: http://oro.open.ac.uk/id/eprint/31532
|
semiring with zero- and nonzero test
Let $\mathcal{S}=(S,\oplus,\otimes,0,1)$ be a commutative semiring and define functions $\nu:S\to \lbrace 0,1\rbrace$ and $\bar\nu:S\to \lbrace 0,1\rbrace$ as: $$ \text{$\nu(s)=0$ if $s=0$; and $\nu(s)=1$ otherwise} $$ and $$ \text{$\bar\nu(s)=1$ if $s=0$; and $\bar\nu(s)=0$ otherwise}. $$ Consider $\mathcal{S}$ extended with $\nu$ and $\bar\nu$, that is, $(S,\oplus,\otimes,\nu,\bar\nu,0,1)$.
I have the following questions:
1. Are such extended semirings known and studied?
2. Can these algebraic structures be described by identities?
Any comments are welcome!
2 Answers
I don’t know about your first question; but for the second one, the answer is no — these structures can’t be axiomatised by algebraic identities.
If they could be, then any product of such structures, with the natural induced operations, would again be one. But this is not the case: if $S$, $T$ are any such structures with $0 \neq 1$ in each of them, then the resulting operation $\nu_{S \times T}$ on their product will satisfy $\nu_{S \times T}(0_S,1_T) = (0_S,1_T)$, which is equal to neither $0_{S \times T}$ nor $1_{S \times T}$. So $\nu_{S \times T}$ does not satisfy the desired defining property.
The big picture here is Birkhoff’s HSP theorem: a class of algebraic structures, over a fixed language, can be axiomatised by algebraic identities if and only if it is closed under arbitrary products and subobjects (in categorical language: under all limits), and under direct images along homomorphisms.
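(A tiny illustrative check of this counterexample, my own code rather than the answer's, using the Boolean semiring {0,1} for both factors:)

def nu(s):
    return 0 if s == 0 else 1

# In S x T, nu acts componentwise; the product's 0 is (0,0) and its 1 is (1,1).
pair = (0, 1)
print(tuple(nu(c) for c in pair))   # (0, 1): neither (0, 0) nor (1, 1)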
Peter LeFanu Lumsdaine has answered your second question. I would add that though your exact class of structures is not described by an equational theory, some related classes are, and others by Horn sentences, which are of the form: a conjunction of identities implies an identity. You might be interested in quasivarieties, which are defined by Horn sentences and which contain your semiring.
A discriminator term in an algebra is a term which satisfies a similar kind of behaviour: $t(x,y,z,w)$ returns $z$ if $x=y$ and returns $w$ otherwise. Varieties which have algebras with such a term are called discriminator varieties. I believe your semiring generates a discriminator variety, and that the literature you seek may be one or at most two citation links away from the literature on discriminator varieties.
Gerhard "Ask Me About System Design" Paseman, 2011.07.29
|
the first resource for mathematics
Fuzzy linear programming models for NPD using a four-phase QFD activity process based on the means-end chain concept.
(English) Zbl 1175.90435
Summary: Quality function deployment (QFD) is a customer-driven approach in processing new product development (NPD) to maximize customer satisfaction. Determining the fulfillment levels of the
“hows”, including design requirements (DRs), part characteristics (PCs), process parameters (PPs) and production requirements (PRs), is an important decision problem during the four-phase QFD
activity process for new product development. Unlike previous studies, which have only focused on determining DRs, this paper considers the close link between the four phases using the means-end
chain (MEC) concept to build up a set of fuzzy linear programming models to determine the contribution levels of each “how” for customer satisfaction. In addition, to tackle the risk problem in NPD
processes, this paper incorporates risk analysis, which is treated as the constraint in the models, into the QFD process. To deal with the vague nature of product development processes, fuzzy
approaches are used for both QFD and risk analysis. A numerical example is used to demonstrate the applicability of the proposed model.
90C70 Fuzzy programming
90C05 Linear programming
|
Optimizing parameters for an oscillator – Video
January 10, 2013
By FelixS
Here’s a video of how the modFit function from the FME package optimizes parameters for an oscillation. A Nelder-Mead optimizer (R function optim) finds the best-fitting parameters for an undamped
oscillator. The minimum was found after 72 iterations; the true parameter eta was -0.05:
Evolution of parameters in optimization process from Felix Schönbrodt on Vimeo.
More on estimating parameters of differential equations is coming later on this blog!
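In the meantime, here is a rough analogue of that fit in Python/SciPy rather than R (the post uses FME::modFit; the oscillator form, initial state, and noise-free data below are my own illustrative assumptions; only the parameter name eta and its true value -0.05 come from the post):

import numpy as np
from scipy.integrate import odeint
from scipy.optimize import minimize

def oscillator(state, t, eta):
    x, v = state
    return [v, eta * v - x]          # eta is the damping parameter

t = np.linspace(0, 30, 300)
data = odeint(oscillator, [1.0, 0.0], t, args=(-0.05,))[:, 0]  # "true" eta

def cost(p):
    fit = odeint(oscillator, [1.0, 0.0], t, args=(p[0],))[:, 0]
    return np.sum((fit - data) ** 2)

res = minimize(cost, x0=[-0.5], method="Nelder-Mead")
print(res.x)                          # should recover eta close to -0.05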
Things I’ve learned:
• ffmpeg does not like pngs. They are internally converted to jpg at very low quality, and I could not find a way to improve this. Lesson learned: export high-quality jpgs from your R plots instead.
• Use a standard frame rate for the output file (i.e., 24, 25, or 30 fps)
• My final ffmpeg command: ffmpeg -r 10 -i modFit%03d.jpg -r 25 -b:v 5000K modelFit.avi
□ -r 10: Use 10 pictures / second as input
□ -i modFit%03d.jpg: defines the names of the input files, modFit001.jpg, modFit002.jpg, …
□ -r 25: Set framerate of output file to 25 fps
□ -b:v 5000K: set bitrate of video to a high value
□ modelFit.mp4: video name and encoding type (mp4)
|
Veritas Prep Blog
Posted on March 13, 2014
The SAT’s upcoming redesign has some interesting elements – the return to the 1600 scale, the elimination of obscure-for-obscure’s-sake vocabulary, etc. – but perhaps the most noteworthy facet of the
announcement is its continuation of a positive trend in standardized tests: a push for more “authentic” material.
Read More...
The Science of Higher Education
Posted on March 12, 2014
Effective teachers at all levels know that students learn differently from one another: Some learn best by hearing information, others are hands-on (or “kinesthetic”) learners, and still others are
visual learners. But did you know that one of those groups makes up nearly two-thirds of the population? And do you know which teaching approach is most effective for each preferred learning style?
Do you know which is your own preferred learning style?
Read More...
SAT Tip of the Week: Essential Skills for the Test
Posted on March 12, 2014
So much time and energy is spent in preparing for the SAT. Many consider it the gatekeeper to their college acceptance. It is a way to distinguish oneself on a level playing field from all others
who are attempting to gain admission to college, but what is the SAT really? Is it an IQ test? Is it a college prep test? Does one really have to succeed on the SAT to do well in college?
Read More...
U.S. News MBA Rankings for 2015
Posted on March 11, 2014
U.S. News & World Report, which maintains arguably the most influential graduate school rankings in the world, has just released its new business school rankings for 2015. It’s far too easy for
applicants to get caught up in the rankings, and to obsess over the fact that a school dropped three spots from one year to the next, but reality is that MBA rankings matter. They influence how
recruiters look at schools, they serve as a signal to applicants and affect what caliber of applicants each school receives, and they give you an idea of where you stand relative to your target
schools. You should never end your business school selection process with the rankings, but the reality is that you will probably start the process by seeing where schools sit in the MBA rankings.
Read More...
GMAT at the Movies: Diagnosis and Surgery of GMAT Problems with Doc Hollywood
Posted on March 11, 2014
In this series we return to classic movies to learn fundamental strategies for GMAT Success.
There are two facets to each quantitative problem – (1) deciding what to do and (2) then actually doing the math. I refer to these respectively as the “diagnosis” and “surgery.”
A Good Diagnosis Avoids Unnecessary Surgery
Read More...
School Profile: Why Students Are so Happy at Claremont McKenna College
Posted on March 11, 2014
Claremont McKenna College, a young private liberal arts school founded in 1946, is part of the Claremont University Consortium that also includes Pomona College, Scripps College, Pitzer College,
Harvey Mudd College, and two graduate schools. It is ranked #18 on the Veritas Prep Elite College Rankings. Claremont College takes a pragmatic pre-professional approach to academics, preparing
students for global leadership roles.
Read More...
All About Negative Remainders on the GMAT
Posted on March 10, 2014
I could have sworn that I had discussed negative remainders on my blog but the other day I was looking for a post discussing it and much as I would try, I could not find one. I am a little surprised
since this concept is quite useful and I should have covered it in detail while discussing divisibility. Though we did have a fleeting discussion of it here.
Read More...
GMAT Tip of the Week: Started From the Bottom, Now We Here
Posted on March 7, 2014
As Hip Hop Month rolls along in the GMAT Tip space, we’ll pass the torch from classic artists to the future, today letting Drake take the mic.
Read More...
How to Navigate the Waitlist for MBA Admission
Posted on March 7, 2014
Earlier this week, we talked about what it means to be on the waitlist. Today, we’ll go into more detail on what you can do if you’re on the waitlist. Despite the name “waitlist,” there are several
things you can do besides simply wait for your dream school to call. From a strategic standpoint, sitting in a state of limbo gives you the opportunity to improve your profile or status as a
candidate, and such improvements can and should be communicated to the admissions committees.
Read More...
The Importance of Timing on the GMAT
Posted on March 6, 2014
One of the main goals of the GMAT is to determine whether or not you can analyze a situation in front of you and determine the information needed to solve the question. In this way, the GMAT is
testing the same skills required to solve a business case. The numbers in front of you are not important, but your method of solving the question is. Crunching numbers and measuring hypotenuses are
not useful skills in business; you’ll have a calculator (or an abacus) to do that. Understanding how to approach and solve problems is the true skill being tested.
Read More...
11 Tips for Success on SAT Test Day
Posted on March 6, 2014
One of the most common (and frustrating) questions SAT instructors hear from their enthusiastic but sometimes misguided students is this: is there a secret to dominating the SAT? As nice as it would
be if there were some long-guarded secret word or ritual that a student could invoke to dominate this test, there simply is no single secret. The SAT is a skills test and requires students to
practice the skills it values. There are, however, a few tools that are useful on all tests that take the form of the SAT and can be used to gain an advantage. Here are 11 tips to help you on test day.
Read More...
New SAT Coming in 2016: What Will Change?
Posted on March 5, 2014
Today the College Board, the organization behind the SAT, announced sweeping changes to the standardized exam that will launch in the spring of 2016. As College Board president David Coleman promised
last year when he announced that a new SAT was coming, the changes are meant to make the SAT less “coachable” and to make it more relevant to what is taught in high school classrooms. The changes
also make the SAT much more like the ACT (the SAT’s chief competitor), although you won’t see any mention of that in the College Board’s publicity announcements for the new SAT.
Read More...
What It Means to Be on the Waitlist for MBA Admission
Posted on March 5, 2014
This time of year, many applicants find themselves stuck in the waitlist process at one or more schools, which can be a very slow and painful waiting period. Not only are you competing for fewer
and fewer seats, you are doing so against everyone on the waitlist all the way back from round one as well as any fresh, new applicants from the final rounds.
Read More...
School Profile: Traditions of Bowdoin College
Posted on March 4, 2014
Bowdoin College, located in the coastal town of Brusnwick, Maine, is a small liberal arts college ranked #14 on the Veritas Prep Elite College Rankings.The exclusive school boasts famous alumni such
as Henry Wadsworth Longfellow and Nathaniel Hawthorne. More recently, current assistant professor of computer science, Daniela Oliveira was awarded the prestigious Presidential Early Career Award for
Scientists and Engineers by the White House for her research in computer security. This is an extraordinary honor for a small liberal arts college competing against large research universities.
Read More...
What to Avoid and What to Focus on in GMAT Reading Comprehension
Posted on March 4, 2014
In this series we return to classic movies to learn fundamental strategies for GMAT Success.
“A man and a woman meet aboard a luxury ocean liner. She already has a fiancé, but still the two fall in love. The ship sinks and the woman lives, but the man dies.”
Read More...
What Are the SAT Subject Tests and When Should You Take Them?
Posted on March 3, 2014
Hey all you Juniors! June 7th, 2014 is probably the best time for you to take SAT Subject Tests on the subjects you are taking this school year. Why wait until the fall, when you will have had all
summer to forget the chemistry, biology, US History, or whatever other classes you are taking right now?
Read More...
Is This GMAT Question Suspect?
Posted on March 3, 2014
I came across a discussion on one of our questions where the respondents were split over whether it is a strengthen question or weaken! Mind you, both sides had many supporters but the majority was
with the incorrect side. You must have read the write up on ‘support’ in your Veritas Prep CR book where we discuss how question stems having the word ‘support’ could indicate either strengthen or
inference questions. I realized that we need a write up on the word ‘suspect’ too so here goes.
Read More...
GMAT Tip of the Week: Learning Math from Mathers
Posted on February 28, 2014
March has traditionally been “Hip Hop Month” in the GMAT Tip of the Week space, so with March only hours away and winter weather gripping the world, let’s round up to springtime and start Hip Hop
March a few hours early, this time borrowing a page from USC-Marshall Mathers. There are plenty of GMAT lessons to learn from Eminem. He’s a master, as are the authors of GMAT Critical Reasoning, of
“precision in language“. He flips sentence structures around to create more interesting wordplay, a hallmark of Sentence Correction authors. But what can one of the world’s greatest vocal wordsmiths
teach you about quant?
Read More...
Want the Best in ACT Prep? Now We Offer It!
Posted on February 28, 2014
It was just 18 months ago when we shook up the college test prep space by announcing Veritas Prep SAT 2400. Since then, thousands of high school students and their parents have discovered what makes
Veritas Prep special when it comes to tackling standardized tests: The best instructors rigorously applying a proven system for success that any student can learn.
Read More...
How My Love of Music Helped Me Get into College
Posted on February 27, 2014
Ever since I saw their first concert freshman year, I wanted to join my high school jazz band. I loved the sound and the energy the band had on stage; they looked like they were having a great
time. However, I was a classic AP track student with a packed schedule; I managed to squeeze in marching/concert band first period, but for my first two years I just had to watch them play from the
balcony where the concert band sat.
Read More...
Is the GMAT Hard?
Posted on February 27, 2014
As a GMAT instructor, I get asked a lot of questions about the exam. Most of these questions are about what can be done to prepare for the exam and what to concentrate on, but one of the simplest
questions I get asked all the time is simply: “Is the GMAT hard?” Sadly, the answer is not very clean cut for a given prospective student, but I’ve spent enough time thinking about this test that I
now have a definite answer that I think captures the heart of what is being tested. My answer is simply this:
Read More...
What Is the GMAT?
Posted on February 26, 2014
After more than a decade of being in business, Veritas Prep has worked with tens of thousands of people who need to take the GMAT for one reason or another. But few actually take the time to truly
understand what the GMAT is all about, or why they’re really taking it (aside from the fact that it’s required for admissions to their desired graduate school).
Read More...
SAT Tip of the Week: 4 Ways to Score Above 2200
Posted on February 26, 2014
Picture in your mind the kind of person that gets a 2200 or above score on the SAT. You are probably picturing some Harvard bound wunderkind who attended the finest prep schools and excelled at all
of them, or perhaps a bookish recluse whose entire life has been spent pursuing academia.
Friends, I am not those people, but I still managed to score in the 99th percentile on the SAT. I’m not a genius (ask the neighbors whose mailbox I destroyed because I was in reverse when I thought
I was in drive), and I had a relatively normal upbringing in the public schools of North Carolina. I also did not do particularly well on the PSAT, which is generally an indicator of strength on the
Read More...
School Profile: The Innovation and Diversity of Brown University
Posted on February 25, 2014
Brown University is ranked number seventeen on the Veritas Prep College Ranking list. The quaint campus of this research school is located in the middle of the historic town of Providence, Rhode
Island. Brown was founded in 1764 making it the seventh oldest school in the U.S; it offers a wide variety of degrees in seventy concentrations. This university is known for having the spirit of
openness; they have proven this on more than one occasion, starting with becoming the first school to accept students from all religious backgrounds.
Read More...
Should I Apply for My MBA in Round 3 or Wait Until Round 1?
Posted on February 24, 2014
Round three is commonly thought of as the most competitive round, where applicants vie for the few remaining seats in coveted programs along with some of the most highly qualified candidates of the
season. Because these well qualified candidates know they will be desirable to the adcoms, they often wait until the last possible minute, since there appears to be a correlation between highly
successful business achievers and the lack of free time on their schedules to complete applications.
Read More...
An Official Question on Absolute Values
Posted on February 24, 2014
Now that we have discussed some important absolute value properties, let’s look at how they can help us in solving official questions.
Read More...
GMAT Tip of the Week: Synchronizing Twizzles in Critical Reasoning
Posted on February 21, 2014
As the Sochi Olympics enter their final weekend, we all have our lists of things we’ll miss and not miss from this sixteen-day celebration of snow and ice. We’ll almost all miss the hashtag #
sochiproblems, the cutaway shots of a scowling Vladimir Putin, the bro show of American snowboarders and TJ Oshie, and the debate over whether the skating judges conspired to give Russia the team
gold and the US the ice dancing gold.
Read More...
5 Ways to Prepare Yourself for the SAT
Posted on February 21, 2014
The majority of your work should be finished a week leading up to the exam. You’ve already poured over mountains of vocabulary, towers of practice exams, and piles of practice problems (I love
alliteration!). You know your triangles, you know the answer is always in the passage, and you know to check your pronouns for a clear and appropriate referent. Now, the only thing left is to take
the actual exam and apply all the knowledge you have spent the last months cultivating. So what should be done the week of the exam to make sure that you apply all your knowledge effectively?
Read More...
3 Reasons to Wait Until Round 1 to Apply for Your MBA
Posted on February 20, 2014
Most of the top U.S. business schools accept students in two or three rounds. Applicants are not always sure in which round to apply, and when they make a decision, they usually underestimate the
time it takes to put together a solid application.
Applying for an MBA is not like applying for a job. A well-rounded application not only needs quantitative data such as undergraduate grades and test scores, but also needs an accurate depiction of
your qualitative traits, which are usually shown through your essays, letters of recommendation, CV and extracurricular activities.
Read More...
3 Ways to Increase Your GMAT Score to a 760
Posted on February 20, 2014
Everyone who takes the GMAT wants to get a good score. The exact definition of “good” varies from student to student and from college recruiter to college recruiter. However no one can argue that
scoring in the top 1% of all applicants can be considered anything less than a good score. Getting into your local university’s business program may not require a terrific score, but it can’t hurt to
have one.
Read More...
The Symbiosis Between Education and Income
Posted on February 19, 2014
It’s no secret that earning a college degree or a graduate degree can lead to a higher-paying job. But do you realize just how big the difference can be? We’ve broken it down to show you what kinds
of jobs — and how much pay — you can expect when you earn a degree. You should never choose a major or a line of work solely for the pay, but keep these stats in mind if you’re wondering whether or
not you should go back to school.
Read More...
SAT Tip of the Week: Simplify Hard Questions in the Reading Section
Posted on February 19, 2014
The Reading Section is often considered the most difficult section of the SAT. Here’s a game-changing tip from a SAT 2400 tutor that’s guaranteed to boost your score.
Often, students find the Reading Section to be the trickiest section of the SAT because of the sheer amount of information they have to remember. In an earlier blog post, I discussed how making
targeted summaries can help students process the information in a passage. Although this strategy is a lifesaver for many, questions that reference specific details from the passages can still throw
students off. These include questions that ask what the author of one passage would think of a quoted line from the other passage, such as the one below:
Read More...
4 Practical Suggestions to Avoid Multitasking and Raise Your GMAT Score
Posted on February 18, 2014
In the first two parts of this article we learned that multitasking causes a host of problems that can be particularly detrimental to GMAT scores. Research shows that multitasking makes it very
difficult for a person to focus, damages the short-term memory, makes it hard to sort the relevant from the irrelevant, and slows down the transition from one task or way of thinking to another.
Read More...
School Profile: The University of Pennsylvania and the Toast Zamboni
Posted on February 18, 2014
The University of Pennsylvania is located in Philadelphia, the City of Brotherly Love, and began as a charity school in 1740. Under the influence of Benjamin Franklin, the school developed its roots
in training students to become leaders in public service, business, and government. The private university has fewer than 10,000 undergraduates and ranks #13 on the Veritas Prep Elite 61 list of
Read More...
Properties of Absolute Values on the GMAT - Part II
Posted on February 17, 2014
We pick up this post from where we left the post of last week in which we looked at a few properties of absolute values in two variables. There is one more property that we would like to talk about
today. Thereafter, we will look at a question based on some of these properties.
Read More...
GMAT Tip of the Week: If You Can't Be With the Sentence You Love, Love the One You're With
Posted on February 14, 2014
Happy Valentine’s Day, a day when we honor the soulmate, that one special someone, the concept of true love and destiny. Valentine’s Day is about finding “the one” and never letting go, and this day
itself is about being with that one you love, your one true destiny.
But if you think your destiny includes Harvard, Stanford, or Wharton, your Sentence Correction strategy should be a lot less “Endless Love” and a lot more “Love the One You’re With”. As Crosby,
Stills, Nash and Young sing directly about the art of GMAT Sentence Correction:
Read More...
How to Overcome the Disadvantages of Applying to Business School in Round 3
Posted on February 13, 2014
If you have decided you will take the plunge and apply to school in Round three, there are a few things you need to know. First and foremost, you must realize the odds of admission go dramatically
down in round three because of the relatively few number of slots that remain. This is simple mathematics—the lower the seat count, the more competitive it is to get one of them—think of it as
musical chairs with way more people than chairs.
Read More...
Forget Your Prior Knowledge When Solving GMAT Critical Reading Questions
Posted on February 13, 2014
The GMAT is an exam that students generally study for over a few months, but it can be argued that students have been preparing for it their entire lives. From mastering addition in elementary school
to understanding geometric properties and reading Shakespeare sonnets, your whole life has arguably been a prelude to your success on the GMAT. You might not need everything you’ve ever learnt on
this one exam, but you will already have been exposed to everything you need to be successful.
Read More...
7 Ways to Score Above 700 on the SAT Reading Section
Posted on February 12, 2014
“NOT READING!” I can hear the cries of thousands of young SAT test takers as they get to this section of their SAT. “This section is impossible! And subjective! And you can’t study for it!” Dear
student, you are wrong on all accounts! Not only is this section as objective as any other section of the SAT, but it can also be dominated like the other sections by taking into advisement a few
simple steps:
Read More...
How Multitasking Can Hurt Your GMAT Score: Part II
Posted on February 12, 2014
If you read part 1 of this article you know that multitasking can result in attention difficulties and problems with productivity. You may not think that all of this talk about decreased productivity
and being distracted would apply to the GMAT; after all there is no chance to update your Facebook status and “tweet” during the test right? So this must have no impact. However, when it does come
time to concentrate on just one thing – for example, the GMAT – researchers have found that multitaskers have more trouble tuning out distractions than people who focus on one task at a time.
Read More...
|
A Second-order Butterworth Lowpass Filter With ...
A second-order Butterworth lowpass filter with a cutoff frequency of 8000 rad/s will have an H(s) described by the pole-zero plot shown in Figure P10-9(b), where the output signal is the voltage
across the terminals of the capacitor.
a) Calculate analytically the quantity |H(j2πf)|/H(0) from the pole-zero plot, and plot it as a function of f.
b) Find values of L and C to realize H(s), using the form of the circuit drawn in Fig. P10-9(a).
All resistor values are in ohms.
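A hedged numerical sketch for part (a), assuming the standard second-order Butterworth form H(s) = w0^2 / (s^2 + sqrt(2)*w0*s + w0^2) with w0 = 8000 rad/s (the actual Figure P10-9 is not reproduced here):

import numpy as np

w0 = 8000.0                               # cutoff, rad/s
f = np.logspace(1, 5, 400)                # frequency axis in Hz
s = 1j * 2 * np.pi * f
H = w0**2 / (s**2 + np.sqrt(2) * w0 * s + w0**2)
ratio = np.abs(H)                         # H(0) = 1, so this is |H(j2*pi*f)|/H(0)
# Sanity check: at f = w0/(2*pi) ~ 1273 Hz the ratio should be 1/sqrt(2) ~ 0.707.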
Electrical Engineering
|
algebra problem
September 13th 2005, 07:34 PM #1
help please
solve the equation for the given variable.
ax + by = c solve for y..
a little stuck, could use some help thanks..
Hi, this is how you do it :)
this is your question
ax + by = c solve for y..
Look how simple this is...
First you isolate the terms that have "y" on one side of the equation. (Note that by terms I mean the parts of the equation that are separated by addition or subtraction.)
To do that, move "ax" to the other side so that you have:
by = c - ax
"ax" is now negative because when it was on the other side it was positive
Now move b to the other side. Think of it this way (I like this way better because it will give you a greater understanding of fractions and equations, which will help you much more in the future
than the other way): on one side you have b * everything (which is y), so on the other side you will have 1/b * everything (which is c - ax). Simply put, because "b" was in the numerator on one side, to move
it to the other side you send it to the denominator. The alternate way would be to say that since you have b*y, you just have to divide both sides by "b" so that it will be canceled out on the
left side and leave y alone.
Thus, you will have y = (c - ax)/b which is your answer...
Last edited by Luke; September 13th 2005 at 08:23 PM.
thank you!
|
Relatively Prime Permutations
Euler's Totient function, φ(n), is used to determine the number of numbers less than n that are relatively prime to n. For example, φ(6) = 2, because only 1 and 5 are relatively prime with 6.
Interestingly, φ(63) = 36, and this is the first example of a number which produces a permutation of the value of its Totient function.
Given that p is prime, prove that p will not be a permutation of φ(p), and prove that p^2 will not be a permutation of φ(p^2).
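A quick check of the worked example in the statement (a sketch using a direct-count totient, fine at this scale):

```python
from math import gcd

def phi(n):
    """Euler's totient via direct counting."""
    return sum(1 for k in range(1, n) if gcd(k, n) == 1)

def is_digit_permutation(a, b):
    return sorted(str(a)) == sorted(str(b))

print(phi(6))                                       # 2
print(phi(63), is_digit_permutation(63, phi(63)))   # 36 True
```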
Problem ID: 240 (10 Aug 2005) Difficulty: 3 Star
|
{"url":"http://mathschallenge.net/view/relatively_prime_permutations","timestamp":"2014-04-16T16:57:41Z","content_type":null,"content_length":"4577","record_id":"<urn:uuid:43c1f6c9-5de7-4353-b6e1-e87ad3a3aaf7>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00328-ip-10-147-4-33.ec2.internal.warc.gz"}
|
The Greatest Wide Receivers Ever Version 2.0: Part I – Methodology
Regular readers know that one of my projects this off-season is to come up with a better way to grade wide receivers. I first attempted to rank every wide receiver four years ago. That study, which I will reproduce this week, has some positives and negatives. My goal is to eventually come up with four or five different ranking systems, so consider the series this week to be the first of several ranking systems to come.
The first step in this system is to combine the three main stats — receptions, receiving yards and receiving touchdowns — into one stat: Adjusted Catch Yards. We know that a passing touchdown is
worth about 20 yards, so I’m crediting a receiver with 20 yards for every touchdown reception. Next, we need to decide on an appropriate bonus for each reception.
We want to give receivers credit for receptions because, all else being equal, a receiver with more receptions is providing more value because he’s likely generating more first downs. I looked at all
receivers over a 12-year period who picked up at least 30 receiving first downs. I then used the number of receptions and receiving yards those players had as inputs, and performed a regression to
estimate how many first downs should be expected. The best-fit formula was:
Receiving first downs = 4.9 + 0.255 * Receptions + 0.019 * Receiving Yards
There is nothing magical about the number 30, although the R^2 was pretty strong at 0.81 and both variables were highly significant. If we use 40 receptions, the R^2 is still strong (0.69) and
best-fit formula is:
Receiving first downs = 10.0 + 0.261 * Receptions + 0.016 * Receiving Yards
There is no doubt that receptions are highly correlated with receiving first downs. So what do we do now? The coefficient on receptions is 13.2 times the coefficient on yards in the first equation
and 16.6 times larger in the second formula. So if first downs were the only thing that mattered, we’d give a bonus of between 13 and 17 yards for each receptions. But, of course, first downs aren’t
the only things that matter, since receivers do more than just catch passes that result in first downs. Gaining seven yards on third-and-six is great. But gaining eight yards on third-and-six is
better. Gaining twenty is better still. You can think of 13 as the upper limit on the size of the bonus we should give for each catch, but in reality, receptions are quite a bit less valuable than that.
But take a step back and think about what the value is of a first down. Possession of the ball is generally worth about four points. If you have 1st-and-10 at the 50, you are in a state of +2.0
expected points, which means that if you fumbled the snap and your opponent recovered, they would be in that same +2.0 position. So we know that by losing possession, you lose four points. We know
that first-and-10 from your own 1 is worth -0.53 expected points, and 1st-and-goal from your opponent’s 1 is worth 5.96 points, which means the 98 yards in between is worth 6.5 points; that means a
point is roughly equal to 15 yards. So if possession is worth 4 points, then it is also worth 60 yards. If you average 35 net yards on a punt, the value of a first down that comes on third down is
then a net loss of 25 yards to the punting team. The regression says a catch is worth roughly 0.25 first downs, so that would make a catch worth 0.25*25, or about 6.25 yards.
Those are, of course, broad, sweeping generalities. Some catches are worth much, much more than others. But we can’t, at this point, track down all of Harold Carmichael’s third down catches and try
to assess the value of each one. So we have to estimate. And it’s worth remembering that not all first down catches come on third downs, so even if a first down may be worth 6.25 yards on third down,
that doesn’t mean the average first down is that valuable. In the end we are left having to make a pretty rough estimate, but I’m happy to give that number a slight haircut and make each reception
worth five yards.
So we have our formula for Adjusted Catch Yards: 5 * Receptions + Receiving Yards + 20 * Receiving Touchdowns. What’s next?
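As a quick illustration, here is the formula in code (a minimal sketch; the example numbers are Marques Colston's 2012 line from the table below):

```python
def adjusted_catch_yards(receptions, rec_yards, rec_tds):
    """ACY = 5 * receptions + receiving yards + 20 * receiving TDs."""
    return 5 * receptions + rec_yards + 20 * rec_tds

# Marques Colston, 2012: 83 catches, 1,154 yards, 10 TDs
print(adjusted_catch_yards(83, 1154, 10))  # 1769
```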
If you’re a frequent reader of this site, then you know we can’t simply use counting stats for wide receivers. Having a 1200 yard season is more impressive when your team throws 400 passes than when
it throws 550 passes. It’s also a lot more valuable. On the other hand, we don’t want to give too much credit to just the “high rate” guys. How do we find a middle ground?
First, I divided each wide receiver’s ACY by his team’s total number of pass attempts (sacks included). Once I have an ACY/A average for each receiver, I then had to come up with a baseline. I
decided to use the worst starter method, so this means the 32nd best wide receiver in modern times; in other eras, the baseline also equals WR N, where N equals the number of teams in the league.
Let’s use a real example to show how the formula works.
Most people would say Marques Colston had a much better year in 2012 than Sidney Rice. Colston caught 83 passes for 1,154 yards and 10 touchdowns, which looks a lot better than Rice’s 50-748-7 stat
line. But the Saints had 697 pass attempts while Seattle only had 438 pass plays, making it hard to compare the receivers based on raw numbers. Rice averaged 2.60 ACY/A, narrowly edging Colston’s
2.54 average. In 2012, the thirty-second ranked wide receiver in ACY/A was Anquan Boldin at 2.37. Rice gets credit for averaging 0.23 ACY/A over the baseline for 438 plays, so he’s credited with
being 102 ACY over average. Colston was 0.17 ACY/A over the baseline for 697 passes, giving him 120 ACY over average. So Colston does get credit for beating out Rice, but not nearly as much as the
raw numbers would indicate. Here are the top 32 wide receivers in 2012:
Rank Name Year Tm G Rec RecYd TD ACY TmAtt ACY/TmAtt Value
1 Brandon Marshall 2012 CHI 16 118 1508 11 2318 529 4.38 1067
2 Andre Johnson 2012 HOU 16 112 1598 4 2238 582 3.85 861
3 Calvin Johnson 2012 DET 16 122 1964 5 2674 769 3.48 855
4 A.J. Green 2012 CIN 16 97 1350 11 2055 586 3.51 669
5 Demaryius Thomas 2012 DEN 16 94 1434 10 2104 609 3.45 664
6 Michael Crabtree 2012 SFO 16 85 1105 9 1710 477 3.58 582
7 Vincent Jackson 2012 TAM 16 72 1384 8 1904 592 3.22 504
8 Wes Welker 2012 NWE 16 118 1354 6 2064 668 3.09 484
9 Dez Bryant 2012 DAL 16 92 1382 12 2082 694 3 441
10 Roddy White 2012 ATL 16 92 1351 7 1951 643 3.03 430
11 Reggie Wayne 2012 IND 16 106 1355 5 1985 669 2.97 403
12 Victor Cruz 2012 NYG 16 86 1092 10 1722 559 3.08 400
13 Steve Smith 2012 CAR 16 73 1174 4 1619 526 3.08 375
14 Percy Harvin 2012 MIN 9 62 677 3 1047 290 3.61 362
15 Eric Decker 2012 DEN 16 85 1064 13 1749 609 2.87 309
16 Steve Johnson 2012 BUF 16 79 1046 6 1561 541 2.89 281
17 Julio Jones 2012 ATL 16 79 1198 10 1793 643 2.79 272
18 Pierre Garcon 2012 WAS 10 44 633 4 933 297 3.14 231
19 Brian Hartline 2012 MIA 16 74 1083 1 1473 541 2.72 193
20 Dwayne Bowe 2012 KAN 13 59 801 3 1156 418 2.76 166
21 Randall Cobb 2012 GNB 15 80 954 8 1514 571 2.65 164
22 Danario Alexander 2012 SDG 10 37 658 7 983 361 2.73 130
23 Marques Colston 2012 NOR 16 83 1154 10 1769 697 2.54 120
24 Sidney Rice 2012 SEA 16 50 748 7 1138 438 2.6 102
25 Mike Williams 2012 TAM 16 63 996 9 1491 592 2.52 91
26 Golden Tate 2012 SEA 15 45 688 7 1053 411 2.56 82
27 Danny Amendola 2012 STL 11 63 666 3 1041 407 2.56 78
28 Cecil Shorts 2012 JAX 14 55 979 7 1394 557 2.5 78
29 Davone Bess 2012 MIA 13 61 778 1 1103 440 2.51 63
30 Jordy Nelson 2012 GNB 12 49 745 7 1130 457 2.47 50
31 Antonio Brown 2012 PIT 13 66 787 5 1217 496 2.45 43
32 Anquan Boldin 2012 BAL 15 65 921 4 1326 561 2.37 0
You might notice that Minnesota’s Percy Harvin ranks 14th in value added. Harvin only played in 9 games, but he ranked 3rd in ACY/Team Attempt. For players who played in fewer than 16 games (during
the 16 game era), I used a pro-rated number of team attempts for those players. Minnesota had 515 pass attempts last year, so we assume that they threw 56% (9/16) of their passes in the games Harvin
was active. Therefore, in the team attempts column for Harvin, he’s credited with 290 team attempts. Of course, Harvin is also then only credited for being above average for 290 plays, too.
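Putting the baseline and pro-rating steps together, here is a sketch of the value calculation (the 2.37 baseline is Boldin's 2012 ACY/A; small rounding differences from the table are expected):

```python
def value_over_baseline(acy, team_att, baseline=2.37):
    """ACY above the worst-starter baseline over a team's pass attempts."""
    return (acy / team_att - baseline) * team_att

def prorated_attempts(season_att, games_played, team_games=16):
    """Pro-rate team attempts for players who missed games."""
    return season_att * games_played / team_games

att = prorated_attempts(515, 9)   # ~290 attempts for Harvin
print(round(att), round(value_over_baseline(1047, att)))
```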
Tomorrow I’ll present a list of some of the best seasons ever, and on Wednesday, we’ll look at the career list. Let me close with some obvious flaws in this system.
1) Post-season stats are excluded. It’s not difficult to add playoff numbers, but for now, I’d rather refine the system before including those numbers.
2) Rushing data, passing data and fumble data were also excluded for the same reason. No data exists on blocking ability, so that is obviously left out of this system, too.
3) The quality of the quarterback, offensive line, and the system a team runs all heavily impact a receiver’s numbers. So does playing in Buffalo compared to playing in New Orleans. These are all
important factors but I chose to leave them out of the system and let each reader subjectively tweak a player upward or downward based on their own thoughts.
4) Wide receivers who play with other great wide receivers are probably harmed in this system, at least slightly. That hurts people like Isaac Bruce and Torry Holt or Anquan Boldin and Larry
Fitzgerald. In some ways, this system does a better job of measuring “value added” than actual talent, and a superstar receiver on a team of scrubs probably can add more value than a star receiver on
a team full of stars.
{ 6 comments… read them below or add one }
Regarding #4. This gets to a point I made earlier about what the methodology actually measures: how well a receiver performs given their context or situation; there are too many variables regarding indoors/outdoors, QB and team quality, etc., to say "who's the best" by any quantitative metric. And the work being done with this method of measuring receiver quality makes me wish Larry Fitzgerald would get traded to someone with a decent QB before his career dwindles to nothingness.
I have no idea what to make of the fact that Davone Bess and Brian Hartline both make the top 32.
Re the pro-rating: I don't know how big of a difference this makes, but I'm not sure I understand the straight-up pro-rating approach. We're dealing with top receivers in the league, so their absence will presumably make their teams pass less. When Harvin isn't there, handing off to Adrian Peterson is more tempting to the Vikings, right? The run-pass value equilibrium is pushed towards the run when your receiver is sidelined.
How complicated would it be to simply go back and see how many times the Vikings actually passed in the games Harvin actually appeared? Certainly there's a tradeoff of precision, data availability and coding complexity to consider.
Basically the trade-off is too annoying. It’s the right way, of course, but it’s a bit of a pain to code and I don’t think worth it. The other issue is that many times older receivers would
simply not get a catch in a game, but that doesn’t mean they were hurt. At some point (Maybe GWROAT III) I will go back and do it, though.
I'm totally fine with this, for the record. Sometimes the perfect way is simply too laborious – that's just a fact of life.
Also fine: I really enjoy how you come back and answer questions and respond to debates even if the post is a couple of days old (I had fallen behind on my reading). It's definitely something that keeps me hanging around the blog.
Thanks Danish. Hope you stick around for a long time!
|
{"url":"http://www.footballperspective.com/the-greatest-wide-receivers-ever-version-2-0-part-i-methodology/","timestamp":"2014-04-17T12:41:49Z","content_type":null,"content_length":"65155","record_id":"<urn:uuid:7509117d-e76f-4328-928c-e9760455ddda>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00126-ip-10-147-4-33.ec2.internal.warc.gz"}
|
A nonparametric test of market timing
Journal of Empirical Finance 10 (2003) 399 – 425
A nonparametric test of market timing
Wei Jiang*
Finance and Economics Division, Columbia Business School, 3022 Broadway, New York, NY 10027, USA
In this paper, we propose a nonparametric test for market timing ability and apply the analysis to
a large sample of mutual funds that have different benchmark indices. The test statistic is formed to
proxy the probability that a manager loads on more market risk when the market return is relatively
high. The test (i) only requires the ex post returns of funds and their benchmark portfolios; (ii)
separates the quality of timing information a money manager possesses from the aggressiveness with
which she reacts to such information; and (iii) is robust to different information and incentive
structures, as well as to underlying distributions. Overall, we do not find superior timing ability
among actively managed domestic equity funds for the period of 1980 – 1999. Further, it is difficult
to predict funds’ timing performance from their observable characteristics.
© 2003 Elsevier Science B.V. All rights reserved.
JEL classification: G1; C1
Keywords: Mutual funds; Market timing; Nonparametric test; U-statistics
1. Introduction
Based on the theory of market efficiency with costly information, there has been ample
research work on measuring professional money managers’ performance. The emphasis
has been on one of the two basic abilities: securities selectivity and market timing. The
former tests whether a fund manager’s portfolio outperforms the benchmark portfolio in
risk-adjusted terms (Jensen, 1972; Gruber, 1996; Ferson and Schadt, 1996; Kothari and
Warner, in press). The latter tests whether a fund manager can out-guess the market by
moving in and out of the market (Treynor and Mazuy, 1966; Henriksson and Merton,
1981; Admati et al., 1986; Bollen and Busse, 2001).
* Tel.: +1-212-854-9679; fax: +1-212-316-9180.
E-mail address: wj2006@columbia.edu (W. Jiang).
0927-5398/03/$ - see front matter © 2003 Elsevier Science B.V. All rights reserved.
Measures of market timing have fallen into one of the two categories: portfolio- and
return-based methods. The former tests whether money managers successfully allocate
monies among different classes of assets (e.g., equity versus cash) to capitalize on market
ascendancy and/or to avoid downturns. If we could observe the portfolio composition of
mutual funds at the same frequency as we observe the returns, we could infer funds’
market timing by testing whether the portfolio holdings anticipate market moves. Graham
and Harvey (1996) empirically test market timing using investment newsletters’ asset
allocation recommendations.
Holdings, however, are often not available (especially in academic studies), which
limits the market timing analysis to the returns of funds and benchmark portfolios only.
The return-based method, on the other hand, only requires data on the ex post returns of
funds and the relevant market indices. The two most popular methods along this line are
those proposed by Treynor and Mazuy (1966) (henceforth ‘‘TM’’) and Henriksson and
Merton (1981) (henceforth ‘‘HM’’).
Most of the work on mutual fund performance measurement extends the CAPM or
multi-factor analysis of securities and portfolios to mutual funds. There has been
controversy over using such a metric to evaluate mutual fund performance. The static
α–β analysis misses the diversified and dynamic aspects of managed portfolios (Admati et
al., 1986; Ferson and Schadt, 1996; Becker et al., 1999; Ferson and Khang, 2001). A fund
manager may vary her portfolio’s exposure to the market or other risk factors, or alter the
fund’s correlation to the benchmark index in response to the incentive she faces (Chevalier
and Ellison, 1997). Consequently, the systematic part of the fund's risk can be misestimated when its manager is trying to time the market, and existing measures may
incorrectly attribute performance to funds, or fail to attribute superior returns to an
informed manager (Grinblatt and Titman, 1989). To address these issues, there has been a
great deal of study on capturing the effect of conditioning information on timing
performance measures (Ferson and Schadt, 1996; Becker et al., 1999; Ferson and Khang,
2001), controlling for spurious timing arising from not holding the benchmark
(Jagannathan and Korajczyk, 1986; Breen et al., 1986), decomposing abnormal performance into selectivity and timing (Admati et al., 1986; Grinblatt and Titman, 1989), and
minimizing the loss of test power due to sampling frequencies (Goetzmann et al., 2000;
Bollen and Busse, 2001).
In this paper, we develop an independent test to measure the market timing ability of
portfolio managers without resorting to the estimation of α's or β's. The test is based on
the simple idea that a successful market timer’s fund rises significantly when the market
rises and falls slightly when the market drops. The nonparametric test has the following
properties. First, it is easy to implement because it only requires the ex post returns of
funds and their benchmark portfolios. Second, the test statistic is not affected by the
manager’s risk aversion because it separates the quality of timing information a fund
manager possesses from the aggressiveness of the reaction to such information. Third, the
test is more robust to different information and incentive structures, as well as to timing
frequencies and underlying distributions, than existing timing measures. Finally, the
method developed in this paper is readily applicable to analyzing the market timing
ability of financial advisors or newsletters (Graham and Harvey, 1996), or the timing
behavior of individual investors (Odean, 1998; Barber and Odean, 2000).
The rest of the paper is organized as follows: Section 2 presents the nonparametric statistic of market timing and compares it with the TM and HM methods. Section 3 applies the method to a data set of mutual funds with different benchmark indices. Section 4 concludes.
2. Model
2.1. Market timing test statistics
We assume that a money manager’s timing information is independent of her
information about individual securities. This is a fairly standard assumption in the
performance measurement literature (e.g., see Admati et al., 1986; Grinblatt and Titman,
1989).1 With independent selectivity and timing, we have the following market model of
fund returns (all returns are expressed in excess of the risk-free rate):
$$r_{i,t+1} = a_i + b_{i,t}\, r_{m,t+1} + e_{i,t+1}, \qquad (1)$$

where i is the subscript for individual funds throughout this paper, $b_{i,t}$ is a random variable adapted to the information available to the manager at time t, and $r_m$ represents the return of the relevant market (which can be a subset of the total market) in which the mutual fund invests. It is the benchmark portfolio return against which the fund is evaluated. In the simplest case, a market timer decides on $b_t$ at date t and invests $b_t$ percent in the market portfolio and the rest in bonds until date t + 1. Eq. (1) represents the return process from such a timing strategy.
For a triplet $\{r_{m,t_1}, r_{m,t_2}, r_{m,t_3}\}$ sampled from any three periods such that $r_{m,t_1} < r_{m,t_2} < r_{m,t_3}$, an informed manager should, on average, maintain a higher average b in the $[r_{m,t_2}, r_{m,t_3}]$ range than in the $[r_{m,t_1}, r_{m,t_2}]$ range. The b estimates for both ranges (given two observations for each range) are $(r_{i,t_3} - r_{i,t_2})/(r_{m,t_3} - r_{m,t_2})$ and $(r_{i,t_2} - r_{i,t_1})/(r_{m,t_2} - r_{m,t_1})$, respectively. Accordingly, we propose using the probability

$$\theta = 2\,\Pr\!\left(\frac{r_{i,t_3} - r_{i,t_2}}{r_{m,t_3} - r_{m,t_2}} > \frac{r_{i,t_2} - r_{i,t_1}}{r_{m,t_2} - r_{m,t_1}} \,\Big|\, r_{m,t_1} < r_{m,t_2} < r_{m,t_3}\right) - 1 \qquad (2)$$
as a statistic of market timing ability. We motivate this market timing measure as follows.
A manager’s timing ability is determined by the relevance and accuracy of her
information. Let $\hat{r}_{m,t+1} = E(r_{m,t+1} \mid I_t)$ be the manager's prediction about the next-period market return based on $I_t$, her information set (both public and private) at time t. If $I_t$ is not informative at all, then the conditional distribution equals the unconditional one, that is, $f(r_{m,t+1} \mid \hat{r}_{m,t+1}) = f(r_{m,t+1})$, where $f(\cdot)$ stands for the probability density function. In this case, the conditional forecast equals the unconditional one and the manager would not be able to tell when the market will enjoy relatively high returns. More specifically, for two
Correlated timing and selectivity information would in general cause technical difficulties in separating
abnormal performance due to timing from that due to selectivity. For a detailed discussion, see Grinblatt and
Titman (1989).
periods, $t_1 \neq t_2$, the following parameter takes the value of zero in the absence of timing ability:

$$m = \Pr(\hat{r}_{m,t_1+1} > \hat{r}_{m,t_2+1} \mid r_{m,t_1+1} > r_{m,t_2+1}) - \Pr(\hat{r}_{m,t_1+1} < \hat{r}_{m,t_2+1} \mid r_{m,t_1+1} > r_{m,t_2+1}) = 2\Pr(\hat{r}_{m,t_1+1} > \hat{r}_{m,t_2+1} \mid r_{m,t_1+1} > r_{m,t_2+1}) - 1. \qquad (3)$$
At the other extreme, if the forecast is always perfect, that is, $\hat{r}_{m,t+1} \equiv r_{m,t+1}$, then m attains its upper bound of one. Symmetrically, m = −1 represents perfectly perverse market timing. Therefore, the value of $m \in [-1, 1]$ indicates the fund manager's market timing ability: the more accurate the information $I_t$, the higher the value of m. The next step is to find a relationship between the manager's forecast ($\hat{r}_{m,t+1}$) and her action ($b_t$) so that $\theta$ defined in Eq. (2) is a valid proxy of m.
Suppose the manager receives a favorable signal that leads to a high $\hat{r}_{m,t+1}$. How much market exposure ($b_t$) the manager would like to take apparently depends on two factors: the precision of the forecast and the aggressiveness with which she uses her own information. The first part concerns natural ability, while the latter can be affected by the manager's risk aversion. Grinblatt and Titman (1989) show that an investor who has independent timing and selectivity information and non-increasing absolute risk aversion would increase $b_t$ in Eq. (1) as information about the market becomes more favorable, or $\partial b_t / \partial \hat{r}_{m,t+1} > 0$. Combining $\partial b_t / \partial \hat{r}_{m,t+1} > 0$ with Eq. (3), we see that the following probability is greater than zero if and only if the manager possesses superior timing ability:

$$2\Pr(b_{t_1} > b_{t_2} \mid r_{m,t_1+1} > r_{m,t_2+1}) - 1. \qquad (4)$$
From the analysis above, therefore, superior timing ability m > 0 (defined in Eq. (3)) translates into $\theta > 0$ (defined in Eq. (2)) if a manager loads on more market risk when signals about future market returns are more favorable. Eq. (2) is testable because the sample analogue of $\theta$ can be formed. Under the null hypothesis of no timing ability, the b has no correlation with the market return, in which case the statistic $\theta$ assumes the neutral value of zero. Intuitively, an uninformed manager would move the market exposure of her portfolio in the right direction as often as she would do in the wrong direction. Note that a triplet $\{r_{i,t_1}, r_{i,t_2}, r_{i,t_3}\}$ is convex vis-à-vis the market return if and only if $(r_{i,t_3} - r_{i,t_2})/(r_{m,t_3} - r_{m,t_2}) > (r_{i,t_2} - r_{i,t_1})/(r_{m,t_2} - r_{m,t_1})$. Therefore, $\theta$ measures the probability that the fund returns bear a convex relation with the market returns in excess of that of a concave relation.
The HM method tests whether the probability $\Pr(\hat{r}_{m,t+1} > 0 \mid r_{m,t+1} > 0) + \Pr(\hat{r}_{m,t+1} < 0 \mid r_{m,t+1} < 0)$ is greater than one. When the HM model is the correct specification, our measure picks up the manager's timing ability among a subset of triplets where at least two observations of market returns are of opposite signs. In general, our measure allows the manager to make finer forecasts and uses more information in the return data by looking at all triplets $\{r_{m,t_1+1}, r_{m,t_2+1}, r_{m,t_3+1}\}$ for $t_1 \neq t_2 \neq t_3$.
Non-increasing absolute risk aversion requires that the investor's risk aversion measured by $-u''(w)/u'(w)$ be non-increasing in the wealth level w. Commonly used utility functions, such as the exponential, power, and log utilities, all meet this criterion.
The sample analogue to $\theta$ becomes a natural candidate as a statistic. It is a U-statistic with kernel of order three:

$$\hat{\theta}_n = \binom{n}{3}^{-1} \sum_{r_{m,t_1} < r_{m,t_2} < r_{m,t_3}} \operatorname{sign}\!\left(\frac{r_{i,t_3} - r_{i,t_2}}{r_{m,t_3} - r_{m,t_2}} - \frac{r_{i,t_2} - r_{i,t_1}}{r_{m,t_2} - r_{m,t_1}}\right), \qquad (5)$$

where n is the sample size and $\operatorname{sign}(\cdot)$ is the sign function that assumes value 1 (−1) if the argument is positive (negative) and equals zero if the argument is zero. By the property of U-statistics, $\hat{\theta}_n$ is a $\sqrt{n}$-consistent and asymptotically normal estimator for $\theta$ (Serfling, 1980; Abrevaya and Jiang, 2001). That is, $\sqrt{n}\,(\hat{\theta}_n - \theta) \to N(0, \sigma^2_{\hat{\theta}})$ when $n \to \infty$. Further, $\hat{\theta}_n$, as defined in Eq. (5), is the least variance estimator among all unbiased estimators for the population coefficient $\theta$.

Abrevaya and Jiang (2001) provide the asymptotic distribution of the $\hat{\theta}_n$ statistic. Let $z_{t_j} \equiv (r_{i,t_j}, r_{m,t_j})$, $j = \{1, 2, 3\}$, and denote the kernel function of $\hat{\theta}_n$ by

$$h(z_{t_1}, z_{t_2}, z_{t_3}) = \operatorname{sign}\!\left(\frac{r_{i,t_3} - r_{i,t_2}}{r_{m,t_3} - r_{m,t_2}} - \frac{r_{i,t_2} - r_{i,t_1}}{r_{m,t_2} - r_{m,t_1}}\right) \quad \text{for } r_{m,t_1} < r_{m,t_2} < r_{m,t_3}. \qquad (6)$$

A consistent estimator of the standard error of $\hat{\theta}_n$ is derived in Abrevaya and Jiang (2001):

$$\hat{\sigma}^2_{\hat{\theta}} = \frac{9}{n} \sum_{t_1} \left[\binom{n-1}{2}^{-1} \sum_{t_2 < t_3;\; t_2, t_3 \neq t_1} h(z_{t_1}, z_{t_2}, z_{t_3}) - \hat{\theta}_n\right]^2.$$

Simulation results in Abrevaya and Jiang (2001) show that the size of the test is very accurate if we use the bootstrap method in standard error estimation for sample sizes below 50 and use the asymptotic formula for larger sample sizes.
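To make the estistic concrete, here is a brute-force sketch of the reconstructed Eq. (5), looping over all triplets (an illustration of the formula as reconstructed above, not the author's code; O(n³), which is manageable for monthly samples):

```python
from itertools import combinations
import numpy as np

def theta_hat(fund_ret, mkt_ret):
    """Triplet-based timing statistic per the reconstructed Eq. (5)."""
    r_i = np.asarray(fund_ret, dtype=float)
    r_m = np.asarray(mkt_ret, dtype=float)
    n = len(r_m)
    total = 0.0
    for a, b, c in combinations(range(n), 3):
        # Order the triplet so the market returns are increasing.
        t1, t2, t3 = sorted((a, b, c), key=lambda t: r_m[t])
        slope_hi = (r_i[t3] - r_i[t2]) / (r_m[t3] - r_m[t2])
        slope_lo = (r_i[t2] - r_i[t1]) / (r_m[t2] - r_m[t1])
        total += np.sign(slope_hi - slope_lo)  # +1 convex, -1 concave
    return total / (n * (n - 1) * (n - 2) / 6.0)

# Under the null of a constant beta, theta_hat should be near zero.
rng = np.random.default_rng(0)
m = rng.normal(0.005, 0.04, 60)
f = 0.8 * m + rng.normal(0.0, 0.01, 60)
print(round(theta_hat(f, m), 3))
```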
2.2. Properties
The new market timing measure ($\theta$) has a ready interpretation as the probability that a
fund manager takes relatively more systematic risk in a higher return period than in a low
return one. Since the seminal work of Treynor and Mazuy (1966) and Henriksson and
Merton (1981), there has been much work extending these measures in order to relax their
restrictive behavioral and distribution assumptions while retaining their intuitive appeal,
ease of implementation, and minimal data requirements.5 In this subsection, we discuss the
Using 1000 simulations, rejection rates at the 5% significance level are between 4.5% and 5.5% for all error distributions considered.
Goetzmann et al. (2000) have an excellent review of the research that addresses the limitations of the TM and HM timing measures.
contribution of the nonparametric timing measure on these grounds and point out its
limitations. The fund subscript i will henceforth be omitted where there is no confusion.
2.2.1. Information structure and behavioral assumptions
The nonparametric measure allows a more flexible specification of a fund manager’s
response to information. We require $b_t$ to be a non-decreasing function of $\hat{r}_{m,t+1}$, that is,
the manager sets a higher b for the fund when her forecast of the next-period market return
is more favorable. Grinblatt and Titman (1989) show that sufficient conditions for this to
hold are i.i.d. random noise in market returns, independent selectivity and timing
information, and non-increasing absolute risk aversion. This requirement is less stringent
than those of the TM and HM measures, which require a linear or binary response function
by the manager. The i.i.d. assumption, however, rules out heteroscedasticity in returns and
hence volatility timing by money managers. We will relax this assumption and discuss the
possible impact of volatility timing in a later section.
In general, a fund manager’s reaction to information depends on her risk aversion
(which could be affected by the incentive she faces) as well as her natural ability. The
functional form of such a response is difficult to specify without being somewhat arbitrary.
For example, the TM measure uses the following quadratic regression of a fund’s returns:
$$r_{t+1} = a + b\, r_{m,t+1} + c\,[r_{m,t+1}]^2 + e_{t+1}, \qquad (7)$$

where superior timing shows up in a positive coefficient $c_i$. As analyzed in Admati et al.
(1986), the return process of Eq. (7) comes out of a linear response by the fund manager in
the form of:
$$b_t = \bar{b} + k\,[\hat{r}_{m,t+1} - E(r_m)]. \qquad (8)$$
The linear response function is consistent with the manager’s acting as if she were
maximizing the expected utility of a CARA preference. However, such an assumption is
questionable if the fund manager maximizes the utility related to her own payoff under the
incentive she faces instead of the fund’s total return. The deviation from maximizing a
CARA preference is large when there is non-linearity in the incentive, explicitly or
implicitly, in the forms of benchmark evaluation (Admati and Pfleiderer, 1997), option
compensation (Carpenter, 2000), or non-linear flow-to-performance responses by fund
investors (Chevalier and Ellison, 1997).
The HM measure, on the other hand, assumes that a manager takes only two b values: a high b when she expects the market return to exceed the risk-free rate and a low b otherwise. The binary-b strategy results in the following return model:

$$r_{t+1} = a + b\, r_{m,t+1} + c\,[r_{m,t+1}]^{+} + e_{t+1}, \qquad (9)$$

where $[r_{m,t+1}]^{+} = \max(0, r_{m,t+1})$. The coefficient on $[r_{m,t+1}]^{+}$ represents the value added by effective timing, which is equivalent to a call option on the market portfolio where the exercise price equals the risk-free rate. Such a specification, while intuitive, is highly restrictive as well. After all, there is no reason to expect a uniform reaction to information by all fund managers. In comparison, the nonparametric measure offers more flexibility. It only requires the reaction function to be non-decreasing in the manager's forecast of the market return.
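For comparison, both timing regressions of Eqs. (7) and (9) reduce to ordinary least squares; a minimal sketch on simulated excess returns (illustrative only):

```python
import numpy as np

def tm_hm_gammas(fund_ret, mkt_ret):
    """OLS timing coefficients from the TM (Eq. 7) and HM (Eq. 9) models."""
    r = np.asarray(fund_ret, dtype=float)
    m = np.asarray(mkt_ret, dtype=float)
    ones = np.ones_like(m)
    X_tm = np.column_stack([ones, m, m ** 2])            # quadratic term
    X_hm = np.column_stack([ones, m, np.maximum(m, 0)])  # option-like term
    c_tm = np.linalg.lstsq(X_tm, r, rcond=None)[0][2]
    c_hm = np.linalg.lstsq(X_hm, r, rcond=None)[0][2]
    return c_tm, c_hm

rng = np.random.default_rng(1)
m = rng.normal(0.005, 0.04, 120)
f = 0.8 * m + rng.normal(0.0, 0.01, 120)  # constant beta: both ~ 0
print(tm_hm_gammas(f, m))
```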
When a linear reaction function is the correct specification, the nonparametric measure gives the same result as the TM measure. In the TM model, the manager's private signal, $y_t$, is generated according to

$$y_t = r_{m,t+1} + g_t, \qquad (10)$$

where $g_t$ is a normal random variable that is independent of $r_{m,t+1}$ and is i.i.d. across time. Timing ability is represented by the inverse of the variance of the noise term. For any two $g_{t_1}$ and $g_{t_2}$ from two periods $t_1 \neq t_2$, we can calculate Eq. (3) as follows:

$$m = 2\Pr(g_{t_2} - g_{t_1} < r_{m,t_1+1} - r_{m,t_2+1} \mid r_{m,t_1+1} > r_{m,t_2+1}) - 1 = 2\,E\!\left[\Phi\!\left(\frac{r_{m,t_1+1} - r_{m,t_2+1}}{\sqrt{2}\,\sigma_g}\right) \Big|\; r_{m,t_1+1} > r_{m,t_2+1}\right] - 1, \qquad (11)$$

where $\Phi(\cdot)$ stands for the cumulative probability function of the standard normal distribution. It is easy to see that m is monotonically increasing in $1/\sigma_g$, the precision of the private signal. An infinitely noisy signal ($\sigma_g = \infty$) leads to m = 0 (no timing) and a perfect signal ($\sigma_g = 0$) implies m = 1 (perfect timing). Therefore, the nonparametric measure will identify a good timer who adopts the TM timing strategy.
2.2.2. Ability versus response
A fund manager’s market timing performance relies on both the quality of her private
information (ability) and the aggressiveness with which the manager reacts to her
information (response). This constitutes a dichotomy that is difficult to decompose.
Except for special cases, existing performance measures are not able to extract the
information-related component of performance. As Grinblatt and Titman (1989) point
out, it would be better if performance measures (in addition to detecting abnormal
performance) could ‘‘also select the more informed of two [managers]’’. An investor
should be more concerned with the quality of the manager’s information than with the
manager’s aggressiveness because the investor can choose the proportion of her wealth
invested in the fund in response to the manager’s ability.
The TM and HM measures reflect both aspects of market timing. We see that the estimated $\hat{c}_{TM}$ in the TM regression will pick up the coefficient in the linear reaction function (the k term in Eq. (8)). Hence, more aggressive funds can show up with higher $\hat{c}_{TM}$. The $\hat{c}_{HM}$ coefficient in the HM model is an unbiased estimate for the product $D(b_H - b_L)$, where D is the probability defined in footnote 4, and $b_H$ ($b_L$) is the manager's target b when the predicted market excess return is positive (negative). Thus, both ability (the D term) and aggressiveness (the $b_H - b_L$ term) are reflected in the estimated timing. The nonparametric statistic, on the other hand, measures how often a manager correctly ranks a market movement and appropriately acts on it, instead of measuring how aggressively she acts on it. We see that, in the linear
response case (as in Eq. (8)), the k coefficient cancels out in the nonparametric measure because

$$\theta = 2\Pr\!\left(k\,\frac{\hat{r}_{m,t_3+1} - \hat{r}_{m,t_2+1}}{r_{m,t_3} - r_{m,t_2}} > k\,\frac{\hat{r}_{m,t_2+1} - \hat{r}_{m,t_1+1}}{r_{m,t_2} - r_{m,t_1}} \,\Big|\, r_{m,t_1} < r_{m,t_2} < r_{m,t_3}\right) - 1 = 2\Pr\!\left(\frac{\hat{r}_{m,t_3+1} - \hat{r}_{m,t_2+1}}{r_{m,t_3} - r_{m,t_2}} > \frac{\hat{r}_{m,t_2+1} - \hat{r}_{m,t_1+1}}{r_{m,t_2} - r_{m,t_1}} \,\Big|\, r_{m,t_1} < r_{m,t_2} < r_{m,t_3}\right) - 1.$$

Thus, our measure largely reflects the information quality component of performance. Based on this analysis, we also see that there is great complementarity between the nonparametric method and the two other methods. Used together in empirical work, they can offer a more complete picture of the market timing performance of fund managers.
2.2.3. Conditional information
The nonparametric measure can be extended to the context of conditional market
timing. The literature on conditional performance evaluation stresses the importance of
distinguishing performance that merely reflects publicly available information (as captured
by a set of instrumental variables) from performance that can be attributed to better
information. The conditional market timing approach (see, e.g., Ferson and Schadt, 1996;
Graham and Harvey, 1996; Becker et al., 1999; Ferson and Khang, 2001) assumes that
investors can time the market on their own using readily available public information, or
that by trading on other accounts they can undo any perverse timing that is predicted from
the public information. Under such circumstances, the real contribution of a fund manager
would be successful timing on the residual part of market returns that is not predictable
from public information.
Let $\tilde{r}_{m,t_j}$ and $\tilde{r}_{i,t_j}$, $j = 1, 2, 3$, be the residuals of market returns and the fund return that cannot be explained by lagged instrumental variables. The following statistic then proxies the probability that a fund manager loads on more market risk when the market return is higher, controlled for public information in both market and fund returns:

$$\tilde{\theta}_n = \binom{n}{3}^{-1} \sum_{\tilde{r}_{m,t_1} < \tilde{r}_{m,t_2} < \tilde{r}_{m,t_3}} \operatorname{sign}\!\left(\frac{\tilde{r}_{i,t_3} - \tilde{r}_{i,t_2}}{\tilde{r}_{m,t_3} - \tilde{r}_{m,t_2}} - \frac{\tilde{r}_{i,t_2} - \tilde{r}_{i,t_1}}{\tilde{r}_{m,t_2} - \tilde{r}_{m,t_1}}\right). \qquad (12)$$
Theoretically, $\theta$ in Eq. (2) and $\tilde{\theta}$ in Eq. (12) can have different magnitudes or even different signs because the probabilities are conditional on different states. That is, a
manager who successfully times the unpredicted part of the market return can show up as a
mis-timer on the gross market return if we do not control for public information. Both
public and private information can be used to enhance portfolio returns, but a truly
informed manager should have superior market timing based on information beyond that
which is readily available to the public.
2.2.4. Statistical robustness
Breen et al. (1986) point out that heteroscedasticity can significantly affect the
conclusions of the HM tests. Jagannathan and Korajczyk (1986) and Goetzmann et al.
(2000) demonstrate the bias of the HM measure due to skewness. The asymptotic distribution of the $\hat{\theta}_n$ statistic, on the other hand, is unaffected by heteroscedasticity or skewness. Further, $\hat{\theta}_n$ in Eq. (5) is the least variance estimator among all unbiased estimators of $\theta$ in Eq. (2). The simulation results shown in Abrevaya and Jiang (2001)
demonstrate that the nonparametric test has accurate size even for small samples and is
robust (in terms of both the value of the statistic and its standard error) to outliers, non-
normality, and heteroscedasticity that are common in financial data.6 However, we do
require the errors in Eq. (1) to be serially uncorrelated. As we will be using monthly return
data for our empirical test, this assumption is not a serious concern. However, the statistic
can be biased when applied to high-frequency data.7
The nonparametric method also offers a timing measure that has little correlation with the
estimation error in the standard selectivity measures. TM or HM type regression models
would produce a spurious negative correlation between estimated selectivity and timing
because of the negatively correlated sampling errors between the two estimates (Jagannathan
and Korajczyk, 1986; Coggin, 1993; Kothari and Warner, in press). Our simulation shows
that a significant negative correlation between the two estimated abilities will occur in the
TM or HM models (or between the selectivity measure from one model and the timing
measure from the other) even when the correlation is non-existent. Coggin et al. (1993) and
Goetzmann et al. (2000) have similar results. On the other hand, the correlation between $\hat{\theta}_n$ and the selectivity measures from standard regression models is close to the truth.
2.2.5. Model specification and potential bias
In this section, we discuss three specification issues that can affect the consistency and
power of market timing tests: the separability of timing from selectivity; the difference
between the frequencies at which data are sampled and at which the manager times the
market; the relationship between market timing and volatility timing. The nonparametric
measure is more robust to model specifications than the TM and HM measures, though it
does not overcome all the biases.
A manager can enhance portfolio returns by selecting securities and by timing the
market. Decomposing returns in this fashion, however, is empirically difficult (Admati et
al., 1986; Grinblatt and Titman, 1989; Coggin et al., 1993; Kothari and Warner, in press).
Our measure relies on two common assumptions to avoid detecting spurious timing
because of selectivity issues. The first assumption is that a portfolio manager’s information
on the selectivity side (movement of individual securities) is independent of her
For example, Bollen and Busse (2001) test the hypothesis that fund returns are normally distributed and
reject normality at the 1% level. They also conjecture that the relative skewness of market and fund returns is
driven by the crash of 1987 and other smaller crashes in the sample.
When applying the measure to high-frequency data, we would recommend the following modification in forming $\hat{\theta}_n$: use only triplet observations $\{r_{m,t_1+1}, r_{m,t_2+1}, r_{m,t_3+1}\}$ that are at least k periods apart, where k is the lag of possible serial correlation, and rescale the statistic by the number of triplets actually used, denoted m. For any finite k, $m \to \infty$ when $n \to \infty$.
information on the timing side (market movement). In practice, this requires that each
individual security constitutes only a small portion of a diversified portfolio and has a
negligible impact on the whole market (the manager does not select ‘‘too many’’ stocks at
one time, either); or the fund manager must act on selectivity at a much lower frequency
than on market timing (so that the manager keeps roughly constant the composition of her
risky portfolio when trying to time the market). The second assumption is that the portfolio
does not contain derivatives. Jagannathan and Korajczyk (1986) show that buying call options, for example, can induce spurious timing ability. Koski and Pontiff (1999) find that 21% of the 679 domestic equity funds in their sample hold derivative securities, but detailed information about their derivative holdings is not available. Our measure, like the TM and HM measures, cannot distinguish market timing from option-related spurious timing.
For most timing measures, biases arise when the econometrician observes return data at
a frequency different from the frequency at which the manager times the market.
Goetzmann et al. (2000) show that monthly evaluation of daily timers using the HM
measure is biased severely downward. At the same time, a major component of timing
skill would show up as security-selection skill. Bollen and Busse (2001) show that the
results of standard timing tests are sensitive to the frequency of data used. Ferson and
Khang (2001) point out that an ‘‘interim trading bias’’ can arise when expected returns are
time varying and managers trade between return observation dates. The major source of
bias is the mis-specification of the regressor [rm]+ in the HM equation that should take
different values depending on the actual timing frequency rather than uniform frequencies
(such as monthly). Goetzmann et al. (2000) suggest replacing the monthly option value
[rm]+ with its accumulated daily option value when daily data of fund returns are not
readily available. Simulations show that the nonparametric measure is more robust to the
difference between timing frequency and sampling frequency because it does not rely on a
regression involving a potentially unknown regressor [rm]+ measured at the ‘‘right’’
frequency. Ferson and Khang (2001) use conditional portfolio weights to control for
interim trading bias as well as for trading on public information. Since our measure does
not use portfolio weights, it can potentially be subject to such bias.
The third model specification issue comes from the fact that the manager might be
timing market volatilities as well as market returns. Busse (1999) shows that funds attempt
to decrease market exposure when market volatility is high. Laplante (2001) shows that
observed mutual fund positions are not informative about future market volatility. If
volatility and expected return are uncorrelated, then our market timing measure remains
consistent in the presence of volatility timing. If the correlation is positive, the market
timing measure would underestimate the information quality of a successful volatility
timing manager.8 The opposite is true when the relation is negative. Research on the
relationship between the expected return and volatility (see, e.g., Breen et al., 1989;
Glosten et al., 1993; Busse, 1999) finds that the relation between return and volatility is
weak, both conditionally and unconditionally. If this is the case, the manager's timing on return and volatility is likely to be weakly related.
If the manager tries to time the volatility, she may reduce market exposure even when the expected return is
high, if high-expected return tends to go with high volatility.
Document Outline
• A nonparametric test of market timing
□ Introduction
□ Model
☆ Market timing test statistics
☆ Properties
○ Information structure and behavioral assumptions
○ Ability versus response
○ Conditional information
○ Statistical robustness
○ Model specification and potential bias
☆ Simulations
□ Testing the market timing of mutual funds
☆ Data
☆ Do funds out-guess the market?
☆ Some related questions
○ Does experience matter?
○ Do small funds fare better?
○ Is high turnover rate justified as timing?
○ Do investor flows affect market timing?
□ Conclusion
□ Acknowledgements
□ References
|
{"url":"http://pdfcast.org/pdf/a-nonparametric-test-of-market-timing","timestamp":"2014-04-21T04:34:06Z","content_type":null,"content_length":"75068","record_id":"<urn:uuid:474a77a7-f977-4821-9851-9d05285899b4>","cc-path":"CC-MAIN-2014-15/segments/1398223203235.2/warc/CC-MAIN-20140423032003-00246-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Derivative of an exponential when the exponent is a function
What is the value of $\frac{dy}{dx}$ if $y = 2^{\sin(2^{\sin(2^x)})}$?

$y = 2^{\sin 2^{\sin 2^{x}}}$. Let $u = 2^{x}$, then $y = 2^{\sin 2^{\sin u}}$. Let $v = \sin u$, then $y = 2^{\sin 2^{v}}$. Let $w = 2^v$, then $y = 2^{\sin w}$. Finally let $z = \sin w$, then $y = 2^{z}$.

$\frac{dy}{dx} = \frac{dy}{dz} \frac{dz}{dw} \frac{dw}{dv} \frac{dv}{du} \frac{du}{dx}$

$\frac{dy}{dx} = \ln(2)\, 2^{z} \cos(w) \ln(2)\, 2^{v} \cos(u) \ln(2)\, 2^x = \ln(2)\, 2^{\sin w} \cos(2^{v}) \ln(2)\, 2^{\sin u} \cos(2^{x}) \ln(2)\, 2^{x} = \ln(2)\, 2^{\sin 2^{v}} \cos(2^{\sin u}) \ln(2)\, 2^{\sin 2^{x}} \cos(2^{x}) \ln(2)\, 2^{x} = \ln(2)\, 2^{\sin 2^{\sin 2^x}} \cos(2^{\sin 2^{x}}) \ln(2)\, 2^{\sin 2^{x}} \cos(2^{x}) \ln(2)\, 2^{x}$
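A quick symbolic sanity check of the final expression with SymPy:

```python
import sympy as sp

x = sp.symbols('x')
y = 2 ** sp.sin(2 ** sp.sin(2 ** x))
dy = sp.diff(y, x)

# Compare against the hand-derived product of chain-rule factors.
ln2 = sp.log(2)
hand = (ln2 * 2**sp.sin(2**sp.sin(2**x)) * sp.cos(2**sp.sin(2**x))
        * ln2 * 2**sp.sin(2**x) * sp.cos(2**x) * ln2 * 2**x)
print(sp.simplify(dy - hand))  # 0
```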
|
{"url":"http://mathhelpforum.com/calculus/94866-derivative-exponential-when-exponent-function.html","timestamp":"2014-04-18T21:16:33Z","content_type":null,"content_length":"38520","record_id":"<urn:uuid:248d8c62-6a93-473a-b850-a88efddc2c82>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.9/warc/CC-MAIN-20140416005215-00616-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Quantum Physics for Scientists and Technologists: Fundamental Principles and Applications for Biologists, Chemists, Computer Scientists, and Nanotechnologists
ISBN: 978-0-470-29452-9
544 pages
April 2011
Read an Excerpt
Quantum Physics for Scientists and Technologists
is a self-contained, comprehensive review of this complex branch of science. The book demystifies difficult concepts and views the subject through non-physics fields such as computer science,
biology, chemistry, and nanotechnology. It explains key concepts and phenomena in the language of non-physics majors and with simple math, assuming no prior knowledge of the topic.
This cohesive book begins with the wavefunction to develop the basic principles of quantum mechanics such as the uncertainty principle and wave-particle duality. Comprehensive coverage of quantum
theory is presented, supported by experimental results and explained through applications and examples without the use of abstract and complex mathematical tools or formalisms. From there, the book:
• Takes the mystery out of the Schrodinger equation, the fundamental equation of quantum physics, by applying it to atoms
• Shows how quantum mechanics explains the periodic table of elements
• Introduces the quantum mechanical concept of spin and spin quantum number, along with Pauli's Exclusion Principle regarding the occupation of quantum states
• Addresses quantum states of molecules in terms of rotation and vibration of diatomic molecules
• Explores the interface between classical statistical mechanics and quantum statistical mechanics
• Discusses quantum mechanics as a common thread through different fields of nanoscience and nanotechnology
Each chapter features real-world applications of one or more quantum mechanics principles. "Study Checkpoints" and problems with solutions are presented throughout to make difficult concepts easy to
understand. In addition, pictures, tables, and diagrams with full explanations are used to present data and further explain difficult concepts.
This book is designed as a complete course in quantum mechanics for senior undergraduates and first-year graduate students in non-physics majors. It also applies to courses such as modern physics,
physical chemistry and nanotechnology. The material is also accessible to scientists, engineers, and technologists working in the fields of computer science, biology, chemistry, engineering, and nanotechnology.
About the Author.
About the Tech Editor.
Periodic Table of the Elements.
Fundamental Physical Constants.
Important Combinations of Physical Constants.
Preface: Science, Technology, and Quantum Physics: Mind the Gap.
1 First, There Was Classical Physics.
1.1 Introduction.
1.2 Physics and Classical Physics.
1.3 The Classical World of Particles.
1.4 Physical Quantities.
1.5 Newton's Laws of Motion.
1.6 Rotational Motion.
1.7 Superposition and Collision of Particles.
1.8 Classical World of Waves.
1.9 Reflection, Refraction, and Scattering.
1.10 Diffraction and Interference.
1.11 Equation of Wave Motion.
1.12 Light: Particle or Wave?
1.13 Understanding Electricity.
1.14 Understanding Magnetism.
1.15 Understanding Electromagnetism.
1.16 Maxwell's Equations.
1.17 Confinement, Standing Waves, and Wavegroups.
1.18 Particles and Waves: The Big Picture.
1.19 The Four Fundamental Forces of Nature.
1.20 Unification: A Secret to Scientific and Technological Revolutions.
1.21 Special Theory of Relativity.
1.22 Classical Approach.
1.23 Summary.
1.24 Additional Problems.
2 Particle Behavior of Waves.
2.1 Introduction.
2.2 The Nature of Light: The Big Picture.
2.3 Black-Body Radiation.
2.4 The Photoelectric Effect.
2.5 X-Ray Diffraction.
2.6 The Compton Effect.
2.7 Living in the Quantum World.
2.8 Summary.
2.9 Additional Problems.
3 Wave Behavior of Particles.
3.1 Introduction.
3.2 Particles and Waves: The Big Picture.
3.3 The de Broglie Hypothesis.
3.4 Measuring the Wavelength of Electrons.
3.5 Quantum Confinement.
3.6 The Uncertainty Principle.
3.7 Wave-Particle Duality of Nature.
3.8 Living in the Quantum World.
3.9 Summary.
3.10 Additional Problems.
4 Anatomy of an Atom.
4.1 Introduction.
4.2 Quantum Mechanics of an Atom: The Big Picture.
4.3 Dalton's Atomic Theory.
4.4 The Structure of an Atom.
4.5 The Classical Collapse of an Atom.
4.6 The Quantum Rescue.
4.7 Quantum Mechanics of an Atomic Structure.
4.8 Classical Physics or Quantum Physics: Which One Is the True Physics?
4.9 Living in the Quantum World.
4.10 Summary.
4.11 Additional Problems.
5 Principles and Formalism of Quantum Mechanics.
5.1 Introduction.
5.2 Here Comes Quantum Mechanics.
5.3 Wave Function: The Basic Building Block of Quantum Mechanics.
5.4 Operators: The Information Extractors.
5.5 Predicting the Measurements.
5.6 Put It All into an Equation.
5.7 Eigenfunctions and Eigenvalues.
5.8 Double Slit Experiment Revisited.
5.9 The Quantum Reality.
5.10 Living in the Quantum World.
5.11 Summary.
5.12 Additional Problems.
6 The Anatomy and Physiology of an Equation.
6.1 Introduction.
6.2 The Schrödinger Wave Equation.
6.3 The Schrödinger Equation for a Free Particle.
6.4 Schrödinger Equation for a Particle in a Box.
6.5 A Particle in a Three-Dimensional Box.
6.6 Harmonic Oscillator.
6.7 Understanding the Wave Functions of a Harmonic Oscillator.
6.8 Comparing Quantum Mechanical Oscillator with Classical Oscillator.
6.9 Living in the Quantum World.
6.10 Summary.
6.11 Additional Problems.
7 Quantum Mechanics of an Atom.
7.1 Introduction.
7.2 Applying the Schrödinger Equation to the Hydrogen Atom.
7.3 Solving the Schrödinger Equation for the Hydrogen Atom.
7.4 Finding the Electron.
7.5 Understanding the Quantum Numbers.
7.6 The Significance of Hydrogen.
7.7 Living in the Quantum World.
7.8 Summary.
7.9 Additional Problems.
8 Quantum Mechanics of Many-Electron Atoms.
8.1 Introduction.
8.2 Two Challenges to Quantum Mechanics: The Periodic Table and the Zeeman Effect.
8.3 Introducing the Electron Spin.
8.4 Exclusion Principle.
8.5 Understanding the Atomic Structure.
8.6 Understanding the Physical Basis of the Periodic Table.
8.7 Completing the Story of Angular Momentum.
8.8 Understanding the Zeeman Effect.
8.9 Living in the Quantum World.
8.10 Summary.
8.11 Additional Problems.
9 Quantum Mechanics of Molecules.
9.1 Introduction.
9.2 A System of Molecules in Motion.
9.3 Bond: The Atomic Bond.
9.4 Diatomic Molecules.
9.5 Rotational States of Molecules.
9.6 Vibrational States of Molecules.
9.7 Combination of Rotations and Vibrations.
9.8 Electronic States of Molecules.
9.9 Living in the Quantum World.
9.10 Summary.
9.11 Additional Problems.
10 Statistical Quantum Mechanics.
10.1 Introduction.
10.2 Statistical Distributions.
10.3 Maxwell–Boltzmann Distribution.
10.4 Molecular Systems with Quantum States.
10.5 Distribution of Vibrational Energies.
10.6 Distribution of Rotational Energies.
10.7 Distribution of Translational Energies.
10.8 Quantum Statistics of Distinguishable Particles: Putting It All Together.
10.9 Quantum Statistics of Indistinguishable Particles.
10.10 Planck’s Radiation Formula.
10.11 Absorption, Emission, and Lasers.
10.12 Bose–Einstein Condensation.
10.13 Living in the Quantum World.
10.14 Summary.
10.15 Additional Problems.
11 Quantum Mechanics: A Thread Runs through It all.
11.1 Introduction.
11.2 Nanoscience and Nanotechnology.
11.3 Nanoscale Quantum Confinement of Matter.
11.4 Quick Overview of Microelectronics.
11.5 Quantum Computing.
11.6 Quantum Biology.
11.7 Exploring the Interface of Classical Mechanics and Quantum Mechanics.
11.8 Living in the Quantum World.
11.9 Summary.
11.10 Additional Problems.
Paul Sanghera, PhD, is an educator, scientist, technologist, and entrepreneur. He has worked at world-class laboratories such as CERN in Europe and Nuclear Lab at Cornell, where he participated in
designing and conducting experiments to test the quantum theories and models of subatomic particles. Dr. Sanghera is the author of several bestselling books in the fields of science, technology, and
project management as well as the author/coauthor of more than 100 research papers on the subatomic particles of matter published in reputed European and American research journals.
"The book presents a rich, self-contained, cohesive, concise, yet comprehensive picture of quantum
mechanics for senior undergraduate and first-year graduate students, nonphysicists majors,
and for those professionals at the forefront of biology, chemistry, engineering, computer science, materials science, nanotechnology, or related fields." (Zentralblatt MATH, 2011)
See More
Buy Both and Save 25%!
Quantum Physics for Scientists and Technologists: Fundamental Principles and Applications for Biologists, Chemists, Computer Scientists, and Nanotechnologists (US $130.00)
-and- Single-photon Devices and Applications (US $87.50)
Total List Price: US $217.50
Discounted Price: US $163.12 (Save: US $54.38)
Cannot be combined with any other offers. Learn more.
|
{"url":"http://www.wiley.com/WileyCDA/WileyTitle/productCd-0470294523,subjectCd-LS00.html","timestamp":"2014-04-17T22:03:30Z","content_type":null,"content_length":"57969","record_id":"<urn:uuid:1c71710e-eb91-40fa-a950-3457d32dde9c>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00167-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Unable to solve this limit problem
October 27th 2009, 06:17 AM #1
Junior Member
Oct 2009
[Solved]Unable to solve this limit problem
$\lim \frac{(cos x)^\frac{1}{x} - 1}{x}$
Hmm, how should I start doing it? Is it possible to use L'Hôpital's rule?
Of course you can use L'Hôpital, since you have an indeterminate $\frac{0}{0}$. Just note that in order to differentiate the numerator it may be a good idea to write
$\displaystyle{\left(\cos x\right)^\frac{1}{x}=e^{\frac{1}{x}\ln \cos x}}$
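For anyone wanting to confirm the value (the thread implies the limit is taken as $x \to 0$), a quick SymPy check:

```python
import sympy as sp

x = sp.symbols('x')
expr = (sp.cos(x) ** (1 / x) - 1) / x
print(sp.limit(expr, x, 0))  # -1/2
```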
|
{"url":"http://mathhelpforum.com/calculus/110802-unable-solve-limit-problem.html","timestamp":"2014-04-20T23:48:52Z","content_type":null,"content_length":"34839","record_id":"<urn:uuid:568e796b-c722-433a-98cc-7a67ffd5a024>","cc-path":"CC-MAIN-2014-15/segments/1397609539337.22/warc/CC-MAIN-20140416005219-00185-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Kronecker product
Definition. Let $A=(a_{ij})$ be an $n\times n$ matrix and let $B$ be an $m\times m$ matrix. Then the Kronecker product of $A$ and $B$ is the $mn\times mn$ block matrix

$$A\otimes B = \left(\begin{array}{ccc}a_{11}B&\cdots&a_{1n}B\\ \vdots&\ddots&\vdots\\ a_{n1}B&\cdots&a_{nn}B\end{array}\right).$$
The Kronecker product is also known as the direct product or the tensor product [1].
Fundamental properties [1, 2]
• 1 H. Eves, Elementary Matrix Theory, Dover publications, 1980.
• 2 T. Kailath, A.H. Sayed, B. Hassibi, Linear estimation, Prentice Hall, 2000
tensor product (for matrices), direct product
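As a quick illustration of the block structure in the definition, NumPy's `kron` computes exactly this product:

```python
import numpy as np

A = np.array([[1, 2],
              [3, 4]])
B = np.array([[0, 1],
              [1, 0]])

# Block (i, j) of the result is a_ij * B, matching the definition above.
K = np.kron(A, B)
print(K.shape)  # (4, 4)
print(K)
```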
The entry for "Kronecker Product" or alternatively "Tensor Product"
shows formulas for the trace, rank, and determinant of the product
in terms of those for its factors.
Are there corresponding formulas for the other invariants, and in
particular, can the characteristic equation of the product be
related to the characteristic equations of its factors?
At worst, I suppose they could be deduced by knowing all the roots.
- hvm
this is possible, if not all that illuminating. recall that the $k$th coefficient of the characteristic polynomial of A is $(-1)^k {\rm tr}(\wedge^k A)$. Thus, for $A\otimes B$ we get ${\rm tr}(\wedge^n A\otimes B)={\rm tr}(\wedge^n A)+{\rm tr}(\wedge^{n-1} A\otimes B)+\cdots={\rm tr}(\wedge^n A)+{\rm tr}(\wedge^{n-1} A){\rm tr}(B)+\cdots$.
Alright, why not try to make it more illuminating? Those
wedgies are determinants, the trace takes sums, and the final
form looks like a convolution. But I'm suspicious of anything
that starts off with something depending only on A; the
determinant of the tensor product doesn't look like that,
although the trace does. Call those wedgies, which are the
symmetric functions of the roots, sigma-k. Then Sigma-2 (for
the tensor product) would be sigma-2-A + sigma-1-A * sigma-1-B
+ sigma-2-B. Is that correct?
Is it possible to run this in Mathematica(TM) and get a human-readable result?
|
{"url":"http://planetmath.org/kroneckerproduct","timestamp":"2014-04-17T21:35:01Z","content_type":null,"content_length":"111284","record_id":"<urn:uuid:7c82688f-6076-49b9-b1d3-6ba9e56d40ba>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00347-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Poker Problem - Probability of Getting a Pair After the Flop
November 2nd 2008, 06:19 AM #1
Nov 2008
I'm having trouble with a very specific problem.
Given the following assumptions:
1. Start with a standard 52-card deck of playing cards.
2. Deal five cards from the deck.
3. These cards are "visible", i.e., known to the solver of the problem.
4. None of the five visible cards are of the same rank, i.e., there are no pairs of fives, sixes, jacks, queens, etc.
Given these assumptions, what is the probability that, if two additional cards are dealt from the remaining 47-card deck, that the combined seven dealt cards will contain a five card hand that
contains a pair (e.g., two fives or two sixes).
Here's how I've analyzed the problem so far.
Let C(n, k) be the binomial coefficient "n choose k". After the initial five cards have been dealt, there are 47 cards left in the deck, and there are C(47, 2) = 1081 ways to deal the two
additional cards.
However, only 15 of the remaining 47 cards would "pair up" with one of the five cards that have already been dealt. (Three remaining cards of the same rank for each of the five originally dealt
cards.) In order for the combined seven dealt cards to contain a five card hand with a pair, then the last two dealt cards would have to contain one or more of these 15 cards.
The probability of this happening is computed by choosing each of the 15 cards, pairing them with each of the remaining 46 cards, then dividing that product by the total number of outcomes for
the final two cards, or 15*46/1081 = 63.83%.
My problem is that I've also written some code to simulate this problem and produce a monte carlo result, and the numbers don't match! The results of my monte carlo simulation are approximately
58.7% and are fairly consistent.
If any of you sees a flaw in the mathematical analysis I've presented here, I would really appreciate a corrective post!
Suppose the five dealt cards are 2,3,4,5,6 and the two additional cards are 7,7.
Also, suppose the two additional cards are 6,6: how do you count that?
Re: Poker Problem - Probability of Getting a Pair After the Flop
I think my analysis covers both these cases. In the former case, the 7,7 deal of the final two cards isn't included in the 15*46 potential "hits" that would result in a successful outcome. In the
latter case, the 6,6 deal is considered in the potential hits since, once I pick a 6 as the first card in the pair of additional cards, I put the other two 6s back in the deck as potential second
cards in the pair of additional cards. That's why I multiply 15 by 46 and not by, for example, 47-3 = 44...I need to account for the fact that, any of the fifteen ranked cards that could result
in a five card hand with a pair could be paired with one of the other two cards with the same rank (remember the first card with that rank is already in the first five dealt cards).
Does that make sense?
Captain Black,
Thanks again for the reply, I obviously didn't read it carefully enough! I was missing the "7 7" case, and with that observation I've been able to solve the problem:
There are three favorable combinations of the last two cards to consider:
1. Some combination of the fifteen cards that would pair up with the cards already showing. There are C(15,2) = 105 such combinations.
2. Some combination of one of those 15 cards with some other card. There are 15 * (47 - 15) = 480 such combinations.
3. As you observed, a pair of cards, neither of which pair up with the first five cards. There are 13-5=8 remaining unseen ranks, and 4 cards for each rank, so the number of these combinations
is 8 * C(4,2) = 8 * 6 = 48.
So the total number of "hit" possibilities is 105 + 480 + 48 = 633. Therefore the hit probability is 633 / 1081 = 58.557%, which perfectly matches my new and improved monte carlo simulation.
Thanks again for the help!
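A quick computational sanity check of this count (a Python sketch, not part of the original thread):

```python
from math import comb
import random

# Exact count of the favourable two-card completions derived above.
hits = comb(15, 2) + 15 * (47 - 15) + 8 * comb(4, 2)
print(hits, hits / comb(47, 2))           # 633  0.58556...

# Monte Carlo check, conditioning on the first five cards being unpaired.
random.seed(0)
deck = [(rank, suit) for rank in range(13) for suit in range(4)]
valid = success = 0
while valid < 100_000:
    random.shuffle(deck)
    if len({rank for rank, _ in deck[:5]}) < 5:
        continue                          # problem assumes no pair among the 5
    valid += 1
    ranks7 = [rank for rank, _ in deck[:7]]
    if len(set(ranks7)) < 7:              # a repeated rank gives a pair (or better)
        success += 1
print(success / valid)                    # approximately 0.585
```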
Hi Regenerator,
The problem with the computation 15 * 46 / 1081 is that the 15 and 46 have some cards in common. This results in over-counting.
Let's see if we can avoid this problem, starting by defining more precisely what is meant by "a pair". A hand might include exactly one pair, two pairs, or three of a kind. Any of these
combinations might be said to include a pair. Let's analyze each of the three possibilities separately.
Three of a kind: There are 5 choices for the rank of the card, since it must match one of the first 5 cards dealt, and then there are $\binom{3}{2} = 3$ ways to choose the 2 cards. So
$P(\text{three of a kind}) = 5 * 3 / \binom{47}{2} = 0.01388$.
Two pairs: There are $\binom{5}{2} = 10$ ways to choose the two ranks from among the first 5 cards dealt, and then there are 3 * 3 = 9 ways to choose the cards, so
$P(\text{two pairs}) = 10 * 9 / \binom{47}{2} = 0.08326$.
Exactly one pair: This can happen in two ways. First, we could match exactly one of the first 5 cards dealt. There are 5 ways to choose the rank and 3 choices for the card, and then there are
47-15 = 32 choices for the non-matching card. Second, we could draw a pair which does not match any of the previous cards. There are 8 ways to choose the rank and $\binom{4}{2} = 6$ ways to
choose the two cards. So
$P(\text{exactly one pair}) = \frac{5 * 3 * 32 + 8 * 6}{\binom{47}{2}} = 0.4884$.
If you are interested in any one of these mutually exclusive possibilities then the total probability is 0.01388 + 0.08326 + 0.4884 = 0.5855, which is close to your simulation result.
Last edited by awkward; November 3rd 2008 at 04:09 PM. Reason: corrected typo
Possible approach to the matching pair problem
Consider this as a NEARLY binomial problem; nearly binomial because we are not sampling with replacement. Define p(success) = 15/47, because there are 15 cards that can pair the 5 known cards.
Now use binomial probability with:
number of trials = 2 (2 more cards drawn)
number of successes = 0 (we will use the complement of zero successes for at least one success)
probability of success on one trial = 15/47 = 0.319
Then P(zero successes) = (32/47)^2 = 0.463, so P(at least one success, i.e. a matched pair) = 1 - 0.463 = 0.537, or 53.7%.
This answer underestimates the Monte Carlo answer of 58.7% by about as much as your analysis overestimates the MC answer.
|
{"url":"http://mathhelpforum.com/discrete-math/57051-poker-problem-probability-getting-pair-after-flop.html","timestamp":"2014-04-20T21:58:14Z","content_type":null,"content_length":"55477","record_id":"<urn:uuid:b2f09e45-c231-49c9-96b3-c9c95ceb19ea>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00028-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Which polygon has an interior measure of 900°?
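For reference, the interior angle sum of an $n$-sided polygon is $(n-2)\cdot 180^\circ$, so setting $(n-2)\cdot 180^\circ = 900^\circ$ gives $n = 7$: a heptagon.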
|
{"url":"http://openstudy.com/updates/50f7fcf4e4b027eb5d99e2a0","timestamp":"2014-04-19T17:19:36Z","content_type":null,"content_length":"46643","record_id":"<urn:uuid:6ab88cd9-d140-447a-8a51-84ec7001541e>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00148-ip-10-147-4-33.ec2.internal.warc.gz"}
|
TI-89, Multiplying Binomials
April 3rd 2008, 12:35 PM #1
Feb 2008
TI-89, Multiplying Binomials
How would I get the TI-89 to solve for (y+5)(3y-2)? When I enter this in its bare form it just spews out the question I put in.
Well, it depends what you want to do with that oO
You can try to use the function develop( )
Or if it's really an equation, you need =0 or something like that. Then, use solve(«equation»[ and «further equations»], «variable(s)»)
What's the function develop? What I want to do is get the answer laid out in the book, which is 3y^2+13y-10. Like I said, this is multiplying the binomials, so think of it in terms of the FOIL method.
develop means develop the expression, the contrary of factor.
It may give you what you want...
Check that there is no variable name y in you ti
If there’s an abbreviation for function development then I’m unaware of it. It didn’t work. Nor did putting a =0 at the end of it or using “solve” work. Finally, the variables have been cleared
and there hasn’t been a change in the answer it gives.
I'm sorry, it's expand()
I figured out my mistake with expand. I didn't have it in double parentheses. Thanks for the help.
Here's a screen capture from my TI-92. It works essentially the same as the 89. If you want to solve for y, use the 'solve' function.
Last edited by galactus; November 24th 2008 at 05:38 AM.
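For readers without the calculator at hand, the same expansion can be checked in a computer algebra system; a sketch using SymPy (my choice of tool, not something mentioned in the thread):

```python
from sympy import symbols, expand

y = symbols('y')
print(expand((y + 5) * (3 * y - 2)))   # 3*y**2 + 13*y - 10
```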
|
{"url":"http://mathhelpforum.com/calculators/33116-t-89-multiplying-binomials.html","timestamp":"2014-04-18T06:23:12Z","content_type":null,"content_length":"50050","record_id":"<urn:uuid:f28a567e-089c-4217-a1ea-5fdd1a98fc88>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00637-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Kevin Wilda
Debra G. re: How I Teach Zero and Negative Exponents
I purchased this item, but I am unable to download it.
Any suggestions?
Thank you
Let me email it to you. I will go checking for what is wrong. Can you send me your email?
Did you get my reply?
Lois C. re: Absolute Value Equations Worksheet #1
Where's the answer key?
I haven't made one. In class I just took 5 minutes and made one by hand. If I make an electronic one I will be sure to let you know.
Elias Robles III
(TpT Seller)
I purchased "Writing Inequalities Worksheet (Black & White)" For some reason I comes out blurry on my end. Can you email me the product.
Elias Robles III
(TpT Seller)
I purchased the "Write the Inequality" worksheet but the picture is blurry.
If you can send me an email and a little more description about the product I will send it to you and hopefully it will not be blurry this time.
J L.
How do I email you? I don't see your email address listed anywhere to receive my corrected version? Thanks, Julie
Mine is kwilda4939@yahoo.com. Send me an email to there and I will reply back with the corrected version.
J L.
I just recently purchased your "I have who has transversals" activity. While the answer key is correct the blue card #12 on the activity cards only has one black dot. Could you please email me a
corrected pdf file of this activity. Thank you, Julie
Sure no problem. Shoot me an email.
Christine D. re: I have who has game for writing inequalities
Can I share this activity in my trainings? I will not give it to them, but let them know where they can find it.
Go for it!
Robin W. re: Smartboard Teachers Edition for 7th Grade Math Teachers
Are you going to create smartboard lessons aligned to common core? I am trying to convince our administration to purchase your math items & will consider if the common core standards are attached.
I would love to do that as time allows.
Thanks for your response! You can send the black an white versions of the supplementary and complementary angles to afields@uc.k12.in.us. Thanks again!
What is another skill I can send you? Since you caught one of my mistakes I will send you another skill while I'm at it.
Sorry to resend this question, but I did not get a response. I thought maybe it got lost is all of the other questions.
Could you please revise the supplementary angles worksheet? Item #20 has the statement "Who has 136 degrees?" on it. Also, do I have to buy the supplementary and complimentary black and white
worksheet if I have already purchased both in color? I tried printing them in black and white as you suggested, but the backgrounds are too dark to print out nicely. Would love for you to email me
the black and white versions. Please let me know. Thanks so much!
Just send me your email and it will be on the way.
Shanna N. re: Math Properties Worksheet
Hi Kevin,
Just checking to see if you got my request for a key for your properties worksheet.
Hi Shanna,
The only key I have is one that I hand made. When I make a key electronically I will try and send you one. Thanks for asking.
Shanna N.
So Sorry Kevin,
There were technical problems, and my student teacher kept hitting submit....:)
Shanna N. re: Math Properties Worksheet
Good morning.....I gave this assignment to my student teacher today. She asked if there was a key...is there??? Thanks so much!
Shanna Nason
6th grade Montana..
Buyer re: Supplementary Angles Worksheet
Could you please revise the supplementary angles worksheet? Item #20 has the statement "Who has 136 degrees?" on it. Also, do I have to buy the supplementary and complimentary black and white
worksheet if I have already purchased both in color? I tried printing them in black and white as you suggested, but the backgrounds are too dark to print out nicely. Would love for you to email me
the black and white versions. Please let me know. Thanks so much!
Buyer re: I have who has game for measuring angles with a protractor
Hi Kevin!
Totally want to use this for my Gr. 5 class this week. Just had a question: how exactly does the game work? Once I hand one card to each student, and they measure and fill in their angle... what's next? Do they all wander around the room trying to find the other person? How do they go about finding the other angle mentioned on their card?
Sorry for the late response. I'm moving to a new computer and it has been a hassle. What I do is tell them to measure the angle and try to be as precise as they can. I didn't want to make the picture
too close to the same so each card should be at least 7 degrees apart from each other. You start the game by picking a person at random to read the bottom of their card. Tell the class that if they
hear an angle measurement that is within 4 degrees up or down from the one just mentioned then it is you that they are referring to. The game continues to "loop" around the room reading the bottom of
a card and then answering it by reading the top of the next card. The person starting the game will be the one to end the game. I hope this helps. I have several of these games I sell by the way. If
you are still unsure just send me another question.
Jill J.
Writing inequalities Worksheet (Black and White) Thank you!
On the way!
Jill J.
I would definitely appreciate that. Thank you so much! My email is hazeleyesinnc@gmail.com
Ok great. I have several inequalities products. Could you describe it for me and I will be glad to send it your way.
Jill J.
I bought your inequality worksheet. I was wondering if you had it saved as a pdf or on word? It didn't save as one and it is really blurry....love the worksheet.
It should be a pdf. If you need me to email you a copy I can.
Debi Gault
(TpT Seller) re: Smartboard Lesson for Transversals
Do the features from this lesson work with promethean software?
I have never used a promethean board but what I've been told is yes they should be compatible. I would love to hear back from you if you find out anything definite.
Caryn Loves Math
(TpT Seller) re: Math Properties Worksheet
Does this item included a key?
I sure do. Do you have an email I could send it to?
I am a homeschool mom to a 9th grader - we need all of the help we can get in Geometry. Not my strong suit at all either.
The books do not show how to do the work either. My son failed this in public school. We need the credit, but I need worksheets with answers to be able to track progress. I am willing to buy what we need to understand the subject (worksheets, quizzes, tests, etc.) and use video tutorials to understand the material. So far, we are not finding the worksheets, tests and quizzes we need.
does this come with an answer key?
If you can send me your email I will make you one.
Jill J.
Okay, thanks!
Jill J.
Could you add 5 more cards to go with your I Have Who Has game with coordinate planes/ordered pairs? I have 29 students in one class and would love to use this with them to review and practice the
I could, but it would take me quite a while. I have the same issue with one of my classes that has 30. What I do is have 6 of them do the game in pairs and then take turns on who is in pairs.
Imaginary Friend
(TpT Seller)
Hi Mr. Wilda,
I was just checking out some of your items. They look great! Thought you deserved to know!! Thanks for posting them.
Kay Bennett
Williamsburg, VA
Thank you Kay! I try hard to create good products. Let me know if there is something you can't find and I will create it!
Sandra Olmstead
(TpT Seller) re: I have who has game for area of an irregular shape
Hey Kevin, My download only included the "answer page" and not the actual cards for the I have Who has game. Please send me the cards so that I can use this with my class this week. Thank you.
See if they download this time. I'm not sure why they didn't before. Thanks and let me know if you have more trouble.
Jennifer Lambert
(TpT Seller)
The alien pic looks fine when I open it. However, when I view a print preview or actually print, it doesn't. Some of the text is outside the margins/printable area on the page. If you don't mind,
please email it to me. Do I need to do anything special when I print it?
Thanks for making this fun stuff! I grew up here in Blackwell. I bet our paths have crossed before.
Jennifer Lambert...jlambert@blackwell.k12.ok.us
Let me know if this fixes it. I bet you're right about our paths. I'm probably a lot older than you. I graduated in 79. Check your email.
Jennifer Lambert
(TpT Seller)
Hi Kevin. I'm having trouble with your Coordinate Plane Alien Pic. When I print, some gets left off because it is too large for the page/margins. The left margin is the worst; not all the x coordinates are there. On the right margin, the coordinates are there, but the 2 & 3 are missing after the word "Shape." Is there a way to make it smaller so that it will all fit within the margins & print better?
Jennifer Lambert
Blackwell Middle School
Blackwell, OK
Hi Jennifer, it is good to hear from you. I'm not sure what is going on with your graph. Does it look OK when you open it? I just opened it and it looks OK on my end. I could email it to you again if you think that would help. By the way, I was born and raised in Perry! Let me know if you want me to put it in an email to you. My students are crazy about these graphs, and that is why I have created so many of them.
Buyer re: Smartboard Attendance Program
can you customize the dots and add the students name if you prefer to have them instead of their number?
Absolutely! That is exactly what I do with my classes. I thought I would put it out there with numbers; that way, people who were interested and capable could do that, and those who aren't could use it as is. Thanks for the question.
Teaching is a career change for me. I spent almost 20 years in the engineering field. I am now going into my 9th year of teaching math in the middle school.
My style is hands-on. I love using manipulatives and the smartboard. 95% of what we do in class has been created and designed by me.
Yet to be added
Associates in drafting and design from Northern Oklahoma College. Bachelor's degree from University of Central Oklahoma.
I can create most any type of worksheet. I love designing the graphs where students plot the points, connect them and a design is created. If I haven't already made what you want let me know and I
will create it for you! Smartboard lessons are another personal favorite of mine and I have several for sale.
PreK, Kindergarten, 1^st, 2^nd, 3^rd, 4^th, 5^th, 6^th, 7^th, 8^th, 9^th, 10^th, 11^th, 12^th, Higher Education, Adult Education, Homeschool, Staff
Math, Algebra, Applied Math, Arithmetic, Basic Operations, Fractions, Geometry, Graphing, Measurement, Numbers, Order of Operations, Mathematics, For All Subject Areas, Word Problems, Basic Math,
Mental Math
|
{"url":"http://www.teacherspayteachers.com/Store/Kevin-Wilda/Products/Alphabetically/%20Keyword/Page-1/TYPE-19","timestamp":"2014-04-20T12:23:53Z","content_type":null,"content_length":"335189","record_id":"<urn:uuid:65291dc7-f0e1-43a9-bc56-06dbcc59acd0>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00019-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Does a Dehn twist in the mapping class group of a cobordism give a BV-operator in string topology?
In her article Higher string topology operations, Godin constructs, for each surface $S$ with $n$ incoming and $m \geq 1$ outgoing boundary circles, an operation $H_\ast(BMod(S); \det^{\otimes d}) \otimes H_\ast(LM)^{\otimes n} \to H_\ast(LM)^{\otimes m}$, where $Mod(S)$ is the mapping class group $\pi_0(Diff^+(S; \partial S))$.
As an example, she claims that the generator of $H_1(BMod(Cylinder))$ (the twisting is trivial here) gives the BV-operator, where of course the generator of $H_1(BMod(Cylinder))$ corresponds to the generator of $Mod(Cylinder)$ given by the Dehn twist around one of the boundary components. My first question is: how does it follow from Godin's construction that this generator acts as the BV-operator?
In general, the mapping class group of a surface with boundary is generated by a finite number of Dehn twists (e.g. A primer on mapping class groups v.4.02, page 131). My second question is: Do all
of these have an action in string topology similar to a BV-operator?
Finally, in many cases (e.g. the pair of pants) $BMod(S)$ is an H-space, being the classifying space of an abelian group (thanks to Chris Schommer-Pries for pointing out a mistake here originally). This means there is an induced product in homology, and also a shifted product on twisted homology if the twisting is trivial (section 4.5 of Godin's article tells us this is the case if at most one boundary component is completely free). My third question is: how does this product interact with the string topology operations?
@skupers: You said: "BMod(S) is an H-space, being the classifying space of a group...". This is false; perhaps you meant something else? A topological group G is an H-space, but its classifying space is not generally going to be an H-space. What could the product be? If G=A is abelian then you get a map $$BA \times BA \to BA$$ making $BA$ a topological group. You don't have this for general groups G. – Chris Schommer-Pries Aug 6 '10 at 18:54
Yes, apparently I did mean abelian group. Thanks for pointing that out, I'll edit it. At least in the case of the pair of pants, we know that the mapping class group is abelian: it is $\mathbb{Z}^3$, though. – skupers Aug 6 '10 at 20:51
@skupers : The mapping class group of a compact orientable surface with boundary is abelian only for the sphere, the disc, the annulus, and the pair of pants, which doesn't seem to me like "many
cases". – Andy Putman Aug 6 '10 at 22:04
One way to think of (some of) these loops in $BMod(S)$ is as loops in $M_{g,n}$ encircling a point in the boundary of Deligne-Mumford space (a stable nodal curve). Non-triviality of the resulting operations on loopspace homology would be an obstruction to extending the string topology TCFT to D-M space. This would be of interest, for instance, in the picture of $H_{-\ast}(LM)$ as symplectic cohomology $SH^\ast(T^\ast M)$. – Tim Perutz Aug 7 '10 at 1:01
|
{"url":"http://mathoverflow.net/questions/34766/does-a-dehn-twist-in-the-mapping-class-group-of-an-cobordism-give-a-bv-operator","timestamp":"2014-04-17T07:49:40Z","content_type":null,"content_length":"53954","record_id":"<urn:uuid:0610ce39-27b1-4e6b-8298-84050ca32a9e>","cc-path":"CC-MAIN-2014-15/segments/1397609527423.39/warc/CC-MAIN-20140416005207-00022-ip-10-147-4-33.ec2.internal.warc.gz"}
|
flux through a sphere
Here's the question:
A fluid has density 2 and velocity field
Find the rate of flow outward through the sphere
So far I've found n, which is
and F dot n gives z^2.
I converted to spherical coordinates and z^2 is equal to 4cos^2[phi].
My integral is set up as:
4*(int[0-2pi] int[0-pi] (cos^2[phi]*sin[phi]dphi dtheta.
The first integral is -1/3cos^3[phi] from 0-pi which is 1/3 - - 1/3 = 2/3
The second integral gives 2/3*2*pi, so the entire thing is 4*2/3*2*pi.
I thought I was just supposed to multiply that by 2 (the density) but that's not the right answer. Can someone tell me what I did wrong or what I'm supposed to do with the density?
Thanks a lot.
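One likely gap in the working above (assuming the sphere has radius $2$, which matches the substitution $z = 2\cos\phi$): the surface element on a sphere of radius $R$ is $dS = R^2 \sin\phi \, d\phi \, d\theta$, so a factor of $R^2 = 4$ is missing, and the density $\rho = 2$ simply multiplies the flux integral at the end:
$$\text{rate of flow} = \rho \iint_S \mathbf{F}\cdot\mathbf{n}\, dS = 2 \int_0^{2\pi}\!\int_0^{\pi} (4\cos^2\phi)(4\sin\phi)\, d\phi\, d\theta = 2 \cdot 16 \cdot \tfrac{2}{3} \cdot 2\pi = \frac{128\pi}{3}.$$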
|
{"url":"http://www.physicsforums.com/showthread.php?t=146493","timestamp":"2014-04-21T14:53:05Z","content_type":null,"content_length":"22998","record_id":"<urn:uuid:53b9361f-e078-45d4-ac85-b4b6e7565ea5>","cc-path":"CC-MAIN-2014-15/segments/1397609540626.47/warc/CC-MAIN-20140416005220-00663-ip-10-147-4-33.ec2.internal.warc.gz"}
|
M_2(k) as a central extension
Does there exist a field $k$ and a subring $R$ of $S = M_2(k)$ such that $R$ is not finitely generated over its center, $S=kR$ and $1_R = 1_S$? ($S$ is the algebra of $2 \times 2$ matrices over $k$.)
I think the answer is "yes". Let $A$ be a non-Noetherian integral domain (for example a polynomial ring in infinitely many variables over a field), let $I$ denote a non-finitely-generated ideal, and
let $k$ be the field of fractions of $A$. Let $R$ denote the ring of $2\times 2$ matrices with coefficients in $A$ and with bottom left hand entry in $I$.
I think this ticks all the boxes. For example $kR=M_2(k)$, because I can scale any element of $M_2(k)$ until it's in $M_2(A)$ and then again so that all entries are in $I$.
However, I don't think $R$ can be finitely-generated over its centre (which is easily checked to be $A$). For if $r_1,r_2,\ldots,r_n$ are finitely many elements of $R$ then the ring they generate
over $A$ will be contained in the $2\times 2$ matrices with coefficients in $A$ and bottom left hand entry in $J$, the finitely-generated ideal generated by the bottom left hand entries of the $r_i$,
and this is a proper subset of $I$.
This question has been explored in the context of polynomial identity rings (PI-rings). Your hypothesis implies that $R$ is a prime PI-ring of PI-degree 2 (see below). However, the main structure
theorems of PI-theory, Kaplansky's Theorem, Posner's Theorem, Artin-Procesi Theorem and central polynomials, are in the opposite direction. They show that if a ring $R$ satisfies a polynomial
identity plus a suitable further hypothesis, then $R$ is, or almost is, a finite module over its center, or at least has a large center.
One example is the strong form of Posner's theorem using central polynomials: Let $R$ be a prime PI-ring with center $C$, and let $T = C - \{0\}$. Then for some integer $n$, $T^{-1}R$ is a central
simple algebra of finite dimension $n^2$ over its center $T^{-1}C$.
The theory of central simple algebras says that if $A$ is finite dimensional central simple over a field $L$, then its dimension over $L$ is a square $n^2$, and there is a unit preserving $L$-algebra
embedding into $M_n(k)$ for some extension field $k$ of $L$.
Combining this theory with Posner's theorem gives: A ring $R$ (with unit) is a prime PI-ring if and only if there is a field $k$, an integer $n$, and a unit preserving ring embedding $R \to M_n(k)$
such that $kR = M_n(k)$, where $R$ is identified with its image in $M_n(k)$.
The field $k$ is not unique, but the integer $n$ is. It is called the PI-degree of $R$. Thus your hypothesis: "$R$ is a subring of $S = M_2(k)$, where $k$ is a field, $S = kR$, and $1_R = 1_S$",
implies that $R$ is a prime PI-ring of PI-degree 2.
One result showing that a prime PI-ring is close to being a finite module over its center is a theorem of mine (p. 174 in Drensky-Formanek, "Polynomial Identity Rings"): If $R$ is a prime PI-ring of
PI-degree $n$ with center $C$, then there is a $C$-module embedding of $R$ into a free $C$-module of rank $n^2$.
As for your question, there is an example due to Cauchon (p. 228 in Rowen, "Polynomial Identities in Ring Theory") of a Noetherian prime PI-ring of PI-degree 2 which is not a finite module over its
center. Cauchon also proved that if a prime PI-ring has the ascending chain condition on two-sided ideals, then it has the ascending chain condition on left and right ideals. In other words, left
Noetherian, right Noetherian, and ACC on two-sided ideals are equivalent for prime PI-rings.
|
{"url":"http://mathoverflow.net/questions/46574/m-2k-as-a-central-extension/54056","timestamp":"2014-04-20T06:34:48Z","content_type":null,"content_length":"55206","record_id":"<urn:uuid:15069f4d-7c29-4b0a-a12f-f06ca6ac96e2>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00521-ip-10-147-4-33.ec2.internal.warc.gz"}
|
June 2000
Different ways of looking at numbers
There are all sorts of ways of writing numbers. We can use arithmetics with different bases, fractions, decimals, logarithms, powers, or simply words. Each is more convenient for one purpose or
another and each will be familiar to anyone who has done some mathematics at school. But, surprisingly, one of the most striking and powerful representations of numbers is completely ignored in the
mathematics that is taught in schools and it rarely makes an appearance in university courses, unless you take a special option in number theory. Yet continued fractions are one of the most revealing
representations of numbers. Numbers whose decimal expansions look unremarkable and featureless are revealed to have extraordinary symmetries and patterns embedded deep within them when unfolded into
a continued fraction. Continued fractions also provide us with a way of constructing rational approximations to irrational numbers and discovering the most irrational numbers.
Every number has a continued fraction expansion but if we restrict our ambition only a little, to the continued fraction expansions of "almost every" number, then we shall find ourselves face to face
with a simple chaotic process that nonetheless possesses unexpected statistical patterns. Modern mathematical manipulation programs like Mathematica have continued fraction expansions as built in
operations and provide a simple tool for exploring the remarkable properties of these master keys to the secret life of numbers.
The Nicest Way of Looking at Numbers
Introducing continued fractions
Consider the quadratic equation

$$x^2 - bx - 1 = 0, \qquad (1)$$

where $b$ is a positive integer. Dividing by $x$, we can rewrite it as

$$x = b + \frac{1}{x}.$$

Now substitute the expression for $x$ given by the right-hand side into the denominator on the right:

$$x = b + \cfrac{1}{b + \cfrac{1}{x}}.$$

We can continue this incestuous procedure indefinitely, to produce a never-ending staircase of fractions that is a type-setter's nightmare:

$$x = b + \cfrac{1}{b + \cfrac{1}{b + \cfrac{1}{b + \cdots}}}$$

This staircase is an example of a continued fraction. If we return to equation 1 then we can simply solve the quadratic equation to find the positive solution for $x$:

$$x = \frac{b + \sqrt{b^2 + 4}}{2}.$$

Picking $b = 1$, this becomes the golden mean, $\phi = \frac{1 + \sqrt{5}}{2} = 1.61803\ldots$, so

$$\phi = 1 + \cfrac{1}{1 + \cfrac{1}{1 + \cfrac{1}{1 + \cdots}}}$$

This form inspires us to define a general continued fraction of a number $x$ as

$$x = a_0 + \cfrac{1}{a_1 + \cfrac{1}{a_2 + \cfrac{1}{a_3 + \cdots}}} \qquad (7)$$

where the positive integers $a_i$ are the partial quotients of the continued fraction expansion (cfe). To avoid the cumbersome notation we write an expansion of the form equation 7 as

$$x = [a_0; a_1, a_2, a_3, \ldots].$$
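Programs like Mathematica compute these expansions directly (via the built-in ContinuedFraction); for readers without such a system, here is a minimal Python sketch of the same idea (the helper names are my own, not from the article):

```python
from math import floor, pi

def cfe(x, n):
    """First n partial quotients of the continued fraction of x."""
    quotients = []
    for _ in range(n):
        a = floor(x)
        quotients.append(a)
        frac = x - a
        if frac == 0:              # rational input: the expansion terminates
            break
        x = 1 / frac               # floating point limits the usable depth
    return quotients

def convergents(quotients):
    """Rational convergents p_k/q_k via the standard recurrence."""
    p_prev, q_prev = 1, 0
    p, q = quotients[0], 1
    convs = [(p, q)]
    for a in quotients[1:]:
        p_prev, q_prev, p, q = p, q, a * p + p_prev, a * q + q_prev
        convs.append((p, q))
    return convs

print(cfe(pi, 6))                  # [3, 7, 15, 1, 292, 1]
print(convergents(cfe(pi, 4)))     # [(3, 1), (22, 7), (333, 106), (355, 113)]
```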
Continued fractions first appeared in the works of the Indian mathematician Aryabhata in the 6th century. He used them to solve linear equations. They re-emerged in Europe in the 15th and 16th
centuries and Fibonacci attempted to define them in a general way. The term "continued fraction" first appeared in 1653 in an edition of the book Arithmetica Infinitorum by the Oxford mathematician,
John Wallis. Their properties were also much studied by one of Wallis's English contemporaries, William Brouncker, who along with Wallis, was one of the founders of the Royal Society. At about the
same time, the famous Dutch mathematical physicist, Christiaan Huygens made practical use of continued fractions in building scientific instruments. Later, in the eighteenth and early nineteenth
centuries, Gauss and Euler explored many of their deep properties.
How long is a continued fraction?
Continued fractions can be finite in length or infinite, as in our example above. Finite cfes are unique so long as we do not allow a quotient of $1$ in the final term, since $[a_0; a_1, \ldots, a_n, 1] = [a_0; a_1, \ldots, a_n + 1]$.
If cfes are finite in length then they can be evaluated level by level (starting at the bottom) and will always reduce to a rational fraction; for example, the cfe $[2; 3, 1, 4] = 2 + \cfrac{1}{3 + \cfrac{1}{1 + \cfrac{1}{4}}} = \frac{43}{19}$. Irrational numbers, by contrast, have infinite cfes, for instance

$\sqrt{2} = [1; 2, 2, 2, 2, \ldots]$
$e = [2; 1, 2, 1, 1, 4, 1, 1, 6, 1, 1, 8, \ldots]$
$\pi = [3; 7, 15, 1, 292, 1, 1, 1, 2, 1, 3, \ldots]$
$\phi = [1; 1, 1, 1, 1, \ldots]$

These examples reveal a number of possibilities. All of the expansions except that for $\pi$ display a simple pattern, even though nothing of the sort is visible in their decimal expansions. The expansion of $e$ was first calculated by Roger Cotes, the Plumian Professor of Experimental Philosophy at Cambridge, in 1714.
Continued fractions allow us to probe an otherwise hidden order within the realm of numbers. If we had written the number $\sqrt{2}$ as a decimal, $1.41421356\ldots$, nothing special would catch the eye; unfolded into a continued fraction it becomes the perfectly regular $[1; 2, 2, 2, \ldots]$.
Some Useful Applications
Approximating Pi
If we chop off an infinite cfe after a finite number of steps then we will create a rational approximation to the original irrational. For example, in the case of $\pi = [3; 7, 15, 1, 292, \ldots]$, the successive truncations give the approximations $3$, $22/7$, $333/106$ and $355/113$.
The more terms we retain in the cfe, the better the rational approximation becomes. In fact, the cfe provides the best possible rational approximations to a general irrational number. Notice also that if a large number occurs in the expansion of quotients, then truncating the cfe before that will produce an exceptionally good rational approximation: truncating the cfe of $\pi$ just before the quotient $292$ gives $355/113 = 3.14159292\ldots$, which agrees with $\pi$ to six decimal places. Later on we shall see that, in some sense, it is probable that most cfe quotients are small numbers ($1$ appears about $42\%$ of the time and $2$ about $17\%$).
Pythagorean musical scales
The ancient Pythagoreans discovered that the division of the string of a musical instrument by a ratio determined by small integers resulted in an appealing relationship. For example, a half length gives a frequency ratio of $2:1$, the octave, and a two-thirds length gives $3:2$, the perfect fifth. A scale built from fifths should have $n$ fifths matching $m$ octaves, that is $(3/2)^n \approx 2^m$.
Taking logarithms to the base $2$, this asks for a rational approximation $m/n \approx \log_2(3/2) = 0.584962\ldots$, whose cfe is $[0; 1, 1, 2, 2, 3, 1, \ldots]$. The convergent $7/12$ is the reason for the twelve-note division of the octave used in Western music.
If we used the next cf approximant we would get $24/41$, a microtonal scale of $41$ notes to the octave.
Gears without tears
Huygens was building a mechanical model of the solar system and wanted to design the gear ratios to produce a proper scaled version of the planetary orbits. So, for example, in Huygens' day it was thought that the time required for the planet Saturn to orbit the Sun is about $29.43$ years; more precisely, that the ratio of Saturn's orbital period to the Earth's year is $77708431/2640858$. The cfe of this ratio begins $[29; 2, 2, 1, \ldots]$, and truncating it gives the excellent convergent $206/7 = 29.4286\ldots$, which Huygens could realise with a pair of gears carrying $206$ and $7$ teeth.
A schematic of Huygens' gear train
One of Ramanujan's tricks revealed
The remarkable Indian mathematician Srinivasa Ramanujan was famous for his uncanny intuition about numbers and their inter-relationships. Like mathematicians of past centuries he was fond of striking formulae and would delight in revealing (apparently from nowhere) extraordinarily accurate approximations (can you show that $\pi \approx (2143/22)^{1/4}$?),
which is good to about one part in $10^9$.
By using the rational approximation that comes from truncating the cfe $\pi^4 = [97; 2, 2, 3, 1, 16539, 1, \ldots]$ just before the huge quotient $16539$, we get $\pi^4 \approx [97; 2, 2, 3, 1] = 97 + \tfrac{9}{22} = \tfrac{2143}{22}$, which is the secret behind the trick.
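A quick numerical check of the trick (a sketch; exact fractions evaluate the truncated staircase):

```python
from fractions import Fraction
from math import pi

# [97; 2, 2, 3, 1], i.e. the cfe of pi^4 truncated before the quotient 16539
approx = 97 + 1 / (2 + 1 / (2 + 1 / (3 + Fraction(1, 1))))
print(approx)                  # 2143/22
print(float(approx) ** 0.25)   # 3.1415926525826463
print(pi)                      # 3.141592653589793
```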
Ramanujan was also interested in other varieties of nested expansion. In 1911 he asked in an article in the Journal of the Indian Mathematical Society what the value was of the following strange formula, an infinite nested continued root:
$$\sqrt{1 + 2\sqrt{1 + 3\sqrt{1 + 4\sqrt{1 + \cdots}}}}$$
A few months went by and no one could supply an answer. Ramanujan revealed that the answer is simply $3$.
Applied mathematicians have found that by approximating functions by continued function expansions, called Padé approximants, they often obtain far more accurate low-order approximations than by
using Taylor series expansions. By truncating them at some finite order, they end up with an approximation that is represented by the ratio of two polynomials.
Rational approximations - how good can they get?
Minding your p's and q's
The rational fractions which are obtained by chopping off a cfe at order $k$ are called the convergents of the cf. We denote the $k$th convergent of $x = [a_0; a_1, a_2, \ldots]$ by $p_k/q_k = [a_0; a_1, \ldots, a_k]$.
How quickly do the convergents approach the number they converge to? The cfe also allows us to gauge the simplicity of an irrational number, according to how easily it is approximable by a rational fraction. A classical theorem of Hurwitz states that for every irrational $x$ there are infinitely many rationals $p/q$ with
$$\left| x - \frac{p}{q} \right| < \frac{1}{\sqrt{5}\, q^2},$$
where the statement becomes false if $\sqrt{5}$ is replaced by any larger constant, the extreme case being supplied by the golden mean.
Thus the cfe shows that the golden mean stays farther away from the rational numbers than any other irrational number. Moreover, for any quadratic irrational (a number, like $\sqrt{2}$, that solves a quadratic equation with integer coefficients) the cfe is eventually periodic, a theorem of Lagrange.
If the cfe is finite then the number is rational.
There are many other interesting properties of cfes, but one might have thought that there could not be any very strong properties or patterns shared by the cfes of all numbers, because they can behave in any way that you wish: pick any finite or infinite list of integers you like and it will form the quotients of some real number. Yet if we confine attention to almost every (a.e.) real number – omitting a set of 'special' numbers which have zero probability of being chosen at random from all the real numbers – then there are remarkable general properties shared by all their cfes.
The Patterns Behind Almost Every Number
Gauss's other probability distribution
The general pattern of cfes was first discovered in 1812 by the great German mathematician Carl Friedrich Gauss (1777-1855), but (typically) he didn't publish his findings. Instead, he merely wrote to Pierre Laplace in Paris telling him what he had found: that for typical continued fraction expansions, the probability of a quotient taking the value $k$ tends, far along the expansion, to
$$P(k) = \log_2\left[1 + \frac{1}{k(k+2)}\right].$$
A complete proof was only supplied over a century later, by R. O. Kuzmin (1928) and, in sharper form, by the French mathematician Paul Lévy (1929). If we consider the infinite cfe of a.e. real number then, in the limit that the position $n$ in the expansion tends to infinity, the probability that the $n$th quotient equals $k$ is exactly Gauss's $P(k)$.
This has some important features. First, check that, because it is a probability distribution, if we take the sum over all values of $k$ we get unity:
$$\sum_{k=1}^{\infty} \log_2\left[\frac{(k+1)^2}{k(k+2)}\right] = 1,$$
because the product inside the logarithms telescopes.
If we make $k$ large, then $P(k) \approx 1/(k^2 \ln 2)$: large quotients are rare, and small ones dominate, with $P(1) = 0.415$, $P(2) = 0.170$ and $P(3) = 0.093$. The fall-off is so slow, however, that the average quotient size is infinite.
Lévy's constant
Paul Lévy showed that when we confine attention to almost every continued fraction expansion then we can say something equally surprising and general about the rational convergents. We have already seen in equations 21-24 that the rational approximations to real numbers improve as some constant times $1/q_n^2$ as the order $n$ increases. Lévy proved that, for a.e. real number, the denominators of the convergents grow at a universal exponential rate:
$$\lim_{n \to \infty} q_n^{1/n} = L,$$
where the Lévy constant,
$$L = e^{\pi^2 / (12 \ln 2)} = 3.27582\ldots$$
Khinchin's constant
Then the Russian mathematician Aleksandr Khinchin proved the third striking result about the quotients of almost any cfe. Although the arithmetic mean, or average, of the quotients does not have a finite value, the geometric mean does. Indeed, it has a finite value that is universal for the cfes of almost all real numbers. He showed that as $n \to \infty$,
$$(a_1 a_2 \cdots a_n)^{1/n} \to K,$$
where Khinchin's constant,
$$K = \prod_{k=1}^{\infty} \left[1 + \frac{1}{k(k+2)}\right]^{\log_2 k} = 2.685452\ldots$$
Thus the geometric mean quotient value is about $2.68$ for almost every real number.
If we list the appearance of different values of $k$ among the first few hundred quotients of a typical cfe and compare the observed frequencies with the prediction $P(k)$, we see that there is already quite good convergence to the predicted values of $P(1) = 0.415$, $P(2) = 0.170$, $P(3) = 0.093$, and so on.
Remarkably, if you calculate the cfe of Khinchin’s constant itself you will find that its terms also have a geometric mean that approaches Khinchin’s constant as their number approaches infinity.
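These statistical laws are easy to probe experimentally. The sketch below draws a "typical" number to 300 digits as an exact fraction (so the quotients come out free of floating-point error) and tallies its cfe; it is a rough experiment, not a proof, and with only a few hundred quotients the agreement is approximate:

```python
import random
from fractions import Fraction
from math import exp, log

random.seed(1)
x = Fraction(random.randrange(1, 10**300), 10**300)   # a "typical" x in (0, 1)

quotients = []
while x and len(quotients) < 400:
    x = 1 / x
    a = int(x)                   # next partial quotient
    quotients.append(a)
    x -= a                       # keep the fractional part (Gauss map step)

n = len(quotients)
print(exp(sum(log(a) for a in quotients) / n))   # geometric mean: near K = 2.685...
print(quotients.count(1) / n)                    # share of 1s: near log2(4/3) = 0.415
```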
A notable exception
The most important number that is not a member of the club of "almost every number" whose geometric mean quotient approaches Khinchin's constant is $e = [2; 1, 2, 1, 1, 4, 1, 1, 6, 1, 1, 8, \ldots]$: its quotients follow a regular pattern and their geometric mean grows without bound. Rational numbers, whose cfes are finite, and quadratic irrationals such as $\sqrt{2} = [1; 2, 2, 2, \ldots]$, whose quotients are eventually periodic, are also exceptions.
Chaotic Numbers
Numbers as chaotic processes
The operation of generating the infinite list of cfe quotients from a.e. real number is a chaotic process. Suppose the real number we wish to expand is $x_0$, with $0 < x_0 < 1$; then the quotients emerge from iterating the map
$$x_{n+1} = T(x_n) = \frac{1}{x_n} - \left\lfloor \frac{1}{x_n} \right\rfloor, \qquad (33)$$
which records the quotient $a_{n+1} = \lfloor 1/x_n \rfloor$ at each step and passes on the fractional part.
Sometimes we write $T(x) = \{1/x\}$, where the braces denote "take the fractional part of".
The function $T(x)$, known as the Gauss map, consists of infinitely many hyperbolic branches, one above each interval $(\frac{1}{k+1}, \frac{1}{k})$:
Graph 1: The function T(x) (equation 33).
If we apply this mapping over and over again from almost any starting value given by a real number with an infinite cfe, then the output of values of $x_n$ is spread over the interval $(0,1)$ with the probability density
$$p(x) = \frac{1}{\ln 2} \cdot \frac{1}{1 + x}. \qquad (34)$$
Again, as with any probability distribution, we can check that
$$\int_0^1 p(x)\, dx = \frac{1}{\ln 2} \int_0^1 \frac{dx}{1+x} = \frac{\ln 2}{\ln 2} = 1.$$
Graph 2: The probability distribution p(x) (equation 34).
What is chaos?
In order for a mapping like $T(x)$ to be chaotic, neighbouring starting values must separate exponentially fast, on average, as the map is iterated; the mean separation rate is the Lyapunov exponent $\lambda$.
We shall take the mean value of $\ln |T'(x)|$, weighted by the probability density $p(x)$:
$$\lambda = \int_0^1 \ln |T'(x)| \, p(x)\, dx.$$
For our mapping, $|T'(x)| = 1/x^2$, and the integral can be evaluated exactly:
$$\lambda = \frac{2}{\ln 2} \int_0^1 \frac{\ln(1/x)}{1+x}\, dx = \frac{\pi^2}{6 \ln 2} = 2.37314\ldots,$$
which is positive, so the process is indeed chaotic; notice also that $\lambda = 2 \ln L$, linking the chaos of the map to Lévy's constant.
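As a rough numerical check (a sketch; floating-point iteration of a chaotic map reproduces only the statistics of an orbit, not the orbit itself):

```python
import random
from math import log, pi

random.seed(0)
x = random.random()
total, steps = 0.0, 200_000
for _ in range(steps):
    total += -2.0 * log(x)       # ln|T'(x)| = 2 ln(1/x) at the current point
    x = 1.0 / x
    x -= int(x)                  # Gauss map step
    if x == 0.0:                 # hit a rational through rounding; restart
        x = random.random()

print(total / steps)             # estimated Lyapunov exponent, roughly 2.37
print(pi**2 / (6 * log(2)))      # exact value: 2.3731...
```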
Continued fractions provide one of the simplest exactly soluble chaotic mappings. Notice how the probability distribution is very smooth and simple, even though the succession of outputs from iterations of the map to generate successive quotients is chaotically unpredictable.
The cfe of a real number can be generalised in a natural way, replacing $1/x$ by other functions, to create what is called the $f$-expansion of a real number.
Continued Fractions in the Universe
Continued fractions appear in the study of many chaotic systems. If a problem of dynamics reduces to the motion of a point bouncing off the walls of a non-circular enclosure, of which the game of
billiards is a classic example, then continued fraction expansions of the numbers fixing the initial conditions will describe many aspects of the dynamics as a sequence of collisions occurs. A
striking example of this sort has been discovered in the study of solutions in the general theory of relativity, which describe the behaviour of possible universes as we follow them back to the start
of their expansion, or follow the behaviour of matter as it plummets into the central singularity of a black hole. In each of these cases, a chaotic sequence of tidal oscillations occurs, whose
statistics are exactly described by the continued fraction expansion of numbers that specify their starting conditions. Even though the individual trajectory of a particle falling into the black hole
singularity is chaotically unpredictable, it possesses statistical regularities that are determined by the general properties of cfes. The constants of Khinchin and Lévy turn out to characterise the
behaviour of these problems of cosmology and black hole physics.
Continued fractions are also prominent in other chaotic orbit problems. Numbers whose cfes end in an infinite string of $1$s are called noble numbers. The golden mean is the "noblest" of all because all of its quotients are $1$; being the hardest numbers to approximate by rationals, the noble numbers label the orbits of perturbed Hamiltonian systems that are the most resistant to breaking up into chaos.
Continued fractions are a forgotten part of our mathematical education but their properties are vital guides to approximation and important probes of the complexities of dynamical chaos. They appear
in a huge variety of physical problems. I hope that this article has given a taste of their unexpected properties.
Further Reading:
• J.D. Barrow, "Chaotic Behaviour in General Relativity", Physics Reports 85, 1 (1982).
• G.H. Hardy and E.M. Wright, An Introduction to the Theory of Numbers, Oxford University Presss, 4th ed. (1968).
• A.Y. Khinchin, Continued Fractions, University of Chicago Press (1961).
• C.D. Olds, Continued Fractions, Random House, NY (1963).
• M. Schroeder, Number Theory in Science and Communication, 2nd edn., Springer (1986).
• D. Shanks and J.W. Wrench, "Khinchin's Constant", American Mathematics Monthly 66, 276 (1959)
• J.J. Tattersall, Elementary Number Theory in Nine Chapters, Cambridge University Press (1999).
About the author
John D. Barrow is a Professor in the Department of Applied Mathematics and Theoretical Physics at the University of Cambridge.
He is the Director of our own Millennium Mathematics Project.
|
{"url":"http://plus.maths.org/content/comment/reply/2165","timestamp":"2014-04-21T05:06:47Z","content_type":null,"content_length":"130054","record_id":"<urn:uuid:059ea6c6-3642-4e00-9380-cb7cb433efaf>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00410-ip-10-147-4-33.ec2.internal.warc.gz"}
|
optimization problem
March 25th 2010, 02:33 PM
optimization problem
find the length and width of a rectangle that has the given perimeter and a maximum area
data given:
perimeter = 80 meters
any help?
2l+2w=80 <--- put in terms of either variable???
I got L = (80 - 2w)/2 = 40 - w
and I plugged this into the equation for the area of a rectangle...
Area = L*W
Area = (40 - w)*w
Then I differentiate Area = (40 - w)*w, find the critical numbers and get the maximum..... am I doing this correctly, lol?
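For reference, completing the calculation: $A(w) = (40 - w)w = 40w - w^2$, so $A'(w) = 40 - 2w = 0$ at $w = 20$, and $A''(w) = -2 < 0$ confirms a maximum. Hence $w = 20$ m and $l = 40 - w = 20$ m: the rectangle of maximum area with perimeter 80 m is a 20 m by 20 m square.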
|
{"url":"http://mathhelpforum.com/calculus/135672-optimization-problem-print.html","timestamp":"2014-04-21T11:39:26Z","content_type":null,"content_length":"4637","record_id":"<urn:uuid:dc94c363-5a85-455b-9136-462b7c2b3576>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00003-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Second Summer School on Formal Techniques
May 27- June 1, 2012
Menlo College, Atherton, CA
Formal verification techniques such as model checking, satisfiability, and static analysis have matured rapidly in recent years. This school, the second in the series, will focus on the principles
and practice of formal verification, with a strong emphasis on the hands-on use and development of this technology. It primarily targets graduate students and young researchers who are interested in
using verification technology in their own research in computing as well as engineering, biology, and mathematics. Students at the school will have the opportunity to experiment with the tools and
techniques presented in the lectures.
The first Summer Formal school (SSFT11) was held in May 2011. This year, the school starts on Sun May 27 with a background course on Logic in Computer Science taught by Natarajan Shankar (SRI). The
course is optional but highly recommended - it covers the prerequisites for the main lectures.
N. Shankar: Speaking Logic
• Leonardo de Moura (MSR Redmond) and Bruno Dutertre (SRI):
Satisfiability Modulo Theories
[Slides: Introduction, Theory Solving, Quantifiers, Applications]
Abstract: We present an overview of the theory and practice of satisfiability solving modulo theories (SMT), covering the basics of propositional satisfiability, theory-specific solvers, and
their integration into an SMT solver; the effective ways of applying SMT technology in formal verification and constraint solving; and the current research challenges in scaling SMT technology to
handle larger problems and a broader range of problems.
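For a flavour of what such a solver does in practice, here is a minimal sketch using the Python bindings of Z3 (de Moura's solver; whether the school's exercises use this exact interface is my assumption):

```python
from z3 import Ints, Solver, sat

# Is there an integer solution to x + 2y == 7 with x > 2 and y > 0?
x, y = Ints('x y')
s = Solver()
s.add(x + 2 * y == 7, x > 2, y > 0)
if s.check() == sat:
    print(s.model())   # e.g. [y = 2, x = 3]
```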
• Sumit Gulwani (MSR):
Dimensions in Program Synthesis
[Slides: Lecture 1 (ppt), (pdf), Lecture 2 (ppt), (pdf), Lecture 3 (ppt), (pdf)]
Abstract: Program Synthesis is the task of searching for a program in some underlying domain that matches the user's intent. These lectures will describe three key dimensions in program
synthesis: (a) User Interaction Model: The user may specify the intent using examples, demonstrations, logical specifications, keywords, natural language, sketch, or partial/inefficient programs.
A key challenge here is to design a user interaction model that can deal with inherent ambiguities in under-specification. (b) Search technique: We will cover techniques that have been developed
in various communities including use of SAT/SMT solvers (formal methods community), version space algebras (machine learning community), and A*-style goal-directed heuristics (AI community). (c)
Underlying domain of programs that can range from straight-line programs to programs with restricted form of loops/conditionals/events. These concepts will be illustrated via a variety of
applications in algorithm/software development (e.g., Bitvector algorithms, Program inverses) and end-user programming (e.g., Spreadsheet/drawing macros, Smartphone scripts). We will also present
some surprising applications in the area of automating education, including solution generation, problem generation, automated grading, and content creation in a variety of math and science subjects.
• Daniel Kroening (Oxford):
Verifying Concurrent Programs [Slides: Verifying Concurrent Programs, Predicate Abstraction]
Abstract: Concurrent software model checking is one of the most challenging problems facing the verification community today. Not only does software generally suffer from data state explosion; concurrent software in particular is also susceptible to state explosion due to the need to track arbitrary thread interleavings, whose number grows exponentially with the number of executing threads.
Predicate abstraction was introduced as a way of dealing with data state explosion: the program state is approximated via the values of a finite number of predicates over the program variables.
Predicate abstraction turns C programs into finite-state Boolean programs, which can be model checked. Predicate abstraction is typically embedded into a counterexample-guided abstraction
refinement (CEGAR) framework. The feasibility of the overall approach was convincingly demonstrated for sequential software by the success of the SLAM project at Microsoft, which was able to
discover numerous control-dominated errors in low-level operating system code. We will provide an introduction to predicate abstraction, CEGAR, and then discuss the extensions required to deal
with concurrent software.
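To make "finite-state Boolean programs, which can be model checked" concrete, here is a small sketch of my own (a toy, not from the lecture materials): once predicate abstraction has reduced a program to k boolean variables, reachability of an error state can be decided by breadth-first search over at most 2^k abstract states.

import java.util.*;

// Toy Boolean program: abstract state = (b0, b1); succ() plays the role
// of the (possibly nondeterministic) abstract transition relation that
// predicate abstraction would produce. We check whether the error
// state b0 && b1 is reachable from (false, false).
public class BoolProgReach {
    static List<int[]> succ(int b0, int b1) {
        List<int[]> next = new ArrayList<>();
        next.add(new int[]{1 - b0, b1});          // a statement that flips predicate 0
        if (b0 == 1) next.add(new int[]{b0, 1});  // a guarded update of predicate 1
        return next;
    }

    public static void main(String[] args) {
        Deque<int[]> work = new ArrayDeque<>();
        Set<Integer> seen = new HashSet<>();
        work.add(new int[]{0, 0});
        while (!work.isEmpty()) {
            int[] st = work.poll();
            if (!seen.add(st[0] * 2 + st[1])) continue;   // already explored
            if (st[0] == 1 && st[1] == 1) { System.out.println("error reachable"); return; }
            work.addAll(succ(st[0], st[1]));
        }
        System.out.println("error unreachable");
    }
}

In a real CEGAR loop, an "error reachable" answer on the abstraction would then be checked against the concrete program and, if spurious, used to refine the predicates.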
• Ken McMillan:
Abstraction, Decomposition, and Relevance
Abstract: Proofs of large and complex systems can be daunting. We can make complex proofs manageable by a process of decomposition and abstraction. We first break the proofs of the system's
specification down into a collection of simpler properties. Then for each property, we abstract away details about the system that are not relevant to that particular property, thus simplifying
the proof effort. In these lectures we will consider:
□ How to decompose large proofs into simple properties
□ How to apply abstraction to eliminate irrelevant detail
□ How to determine automatically what information is relevant to a given property.
We will see that the problem of relevance can be approached by generalizing from particular cases, applying the principle of Occam's razor. This idea will be illustrated using a method called
Craig Interpolation that allows us to construct simple proofs in various forms that abstract away from irrelevant detail.
• Corina Pasareanu, Dimitra Giannakopoulou, Neha Rungta, Peter Mehlitz, and Oksana Tkachuk:
Verifying Components in the Right Context [slides: (Pasareanu): Compositional Verification, (Mehlitz,Tkachuk): PathFinder Part 1, Part 2, Part 3]
Abstract: Model checking is typically applied to the verification of critical components in large systems. However, components are fundamentally "open"; their behavior is dependent on the
environment in which they operate, i.e., on the external events and on values defined outside the component but referenced inside. A key challenge in component verification is modeling the
component environment. In particular, a good environment model (the "right context") should enable sufficiently precise yet tractable verification.
We will present techniques for automatically creating component environments, including abstraction and learning approaches, among others. We will discuss environment implementations based on
various formalisms, ranging from event scripting languages and framework abstractions, to temporal logic specifications and finite state automata. Environment modeling and component verification
will be illustrated within the Java PathFinder verification tool-set.
Invited Talks
• Aaron Bradley:
IC3 and Beyond: Incremental, Inductive Verification [Slides: IC3]
Abstract: In "Temporal Verification of Reactive Systems: Safety," Zohar Manna and Amir Pnueli discuss two strategies for strengthening an invariant property to be inductive: "(1) Use a stronger
assertion, or (2) Conduct an incremental proof, using previously established invariants." They "strongly recommend" the use of the second approach "whenever applicable," its advantage being
"modularity." Yet they note that it is not always applicable, as a conjunction of assertions can be inductive while none of its components, on its own, need be inductive. In manual proofs, an
experienced verifier follows this recommendation by iterating the following steps until a complete proof is discovered: first, identify a reason why the current lemmas are insufficient to prove
the desired property; second, develop a new lemma to address that reason.
However, until IC3, successful algorithmic approaches to model checking followed the first approach. For example, k-induction strengthens the property by unrolling; an interpolating model checker
iterates an interpolant-based approximate post-image computation until convergence, refining through further unrolling. While both rely on the conflict-clause learning of the underlying SAT
solver, neither uses induction to discover intermediate lemmas as Manna and Pnueli recommend. Only their ultimate convergence relies on induction.
IC3 was a result of asking the question: if an incremental strategy is better for humans, might it not be better for algorithms as well? The fundamental issue in applying the second approach
directly, though, is the situation in which two or more essential lemmas are mutually inductive but not independently so. They must be discovered together. IC3 addresses this problem by
introducing stepwise-relative inductive lemma generation: if the reason that the current lemma set is insufficient cannot be addressed with an inductive lemma, address it with a lemma that holds
for a certain number of transitions, and keep looking. IC3 thus smoothly transitions between Manna's and Pnueli's second approach, when possible, and their first approach, when necessary.
In this talk, we discuss how incremental, inductive verification (IIV) has motivated new algorithms for model checking safety properties (IC3), omega-regular properties (FAIR), and CTL properties.
• Alex Aiken:
New Applications of Underapproximations in Static Analysis
• Dawn Song:
BitBlaze-WebBlaze-DroidBlaze: Automatic Security Analysis in Binary, Web and Android
Abstract: I will present the BitBlaze project, describing how we build a unified binary program analysis platform and use it to provide novel solutions to computer security problems, including
automatic vulnerability discovery and defense, in-depth malware analysis, and automatic extraction of security models for analysis and verification. The BitBlaze Binary Analysis Infrastructure is
a fusion of static and dynamic analysis techniques and enables a set of powerful, novel symbolic reasoning techniques on program binaries. I will also talk about BitTurner, the first public
cloud-based service for automatic test case generation and security audit powered by dynamic symbolic execution on program binaries.
I will give an overview of the WebBlaze project, aiming at designing and developing new techniques and tools to improve web security, including automatic dynamic symbolic execution on JavaScript
for in-depth vulnerability detection in rich web applications. Finally, I will describe some ongoing efforts in DroidBlaze, an automatic security analysis infrastructure for Android apps. More
information about BitBlaze and WebBlaze is available at http://bitblaze.cs.berkeley.edu and http://webblaze.cs.berkeley.edu.
Registration is now closed
Questions on any aspect of the school can be posted here.
The Riemann Zeta Function
Here is a view of the Riemann Zeta function graphed from x=1.2 to 10. You will notice a sharp spike as x goes toward 1, where it shoots off to infinity. The Riemann Zeta function at x=1 is the harmonic series. Since *everybody* knows the harmonic series diverges, so does the Riemann Zeta function at x=1. As x gets larger, the function approaches 1 quickly. This function directly determines the statistical properties of the distribution of prime numbers, so mathematicians go wild studying everything about it.
If you can prove that the only non-trivial solutions to the equation Zeta(z) = 0 occur on the line Re(z) = 1/2 (aka the Riemann Hypothesis), then you get a million bucks.
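If you want to poke at the function numerically rather than take the graph's word for it, partial sums of the defining series are enough to see both behaviors (a quick sketch; the class name is mine):

// Partial sums of zeta(s) = sum over n of 1/n^s, valid for s > 1.
public class ZetaDemo {
    static double zeta(double s, int terms) {
        double sum = 0.0;
        for (int n = 1; n <= terms; n++) sum += Math.pow(n, -s);
        return sum;
    }
    public static void main(String[] args) {
        System.out.println(zeta(2.0, 1_000_000));  // close to pi^2/6 = 1.6449...
        System.out.println(zeta(1.2, 1_000_000));  // converges slowly near the spike at x = 1
        System.out.println(zeta(10.0, 100));       // already very nearly 1
    }
}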
A Moving Grid Framework for Geometric Deformable Models
Geometric deformable models based on the level set method have become very popular in the last decade. To overcome an inherent limitation in accuracy while maintaining computational efficiency,
adaptive grid techniques using local grid refinement have been developed for use with these models. This strategy, however, requires a very complex data structure, yields large numbers of contour
points, and is inconsistent with the implementation of topology-preserving geometric deformable models (TGDMs). In this paper, we investigate the use of an alternative adaptive grid technique called
the moving grid method with geometric deformable models. In addition to the development of a consistent moving grid geometric deformable model framework, our main contributions include the
introduction of a new grid nondegeneracy constraint, the design of a new grid adaptation criterion, and the development of novel numerical methods and an efficient implementation scheme. The overall
method is simpler to implement than using grid refinement, requiring no large, complex, hierarchical data structures. It also offers an extra benefit of automatically reducing the number of contour
vertices in the final results. After presenting the algorithm, we demonstrate its performance using both simulated and real images.
Keywords: Adaptive grid method, Geometric deformable model, Deformation moving grid, Topology preservation, Level set method
Work, Heat, Energy. and the First Law
From WikiEducator
Note: This section needs a very basic knowledge of calculus. To see a simplified version go HERE
□ Understand the Concepts of Work, Heat, and Energy
□ Make some Observations of the nature of energy
□ Conclude from these the First Law of Thermodynamics
Work: Force acting through a distance
Therefore, work, W, is
dW = − F dl
where F is force and dl is the distance through which the force acts. The reason for the minus sign is explained below.
In most applications of thermodynamics we are mainly interested in mechanical work due to pressure of a fluid. Since pressure is force per unit area, force is simply pressure times area:
F = PA
If we consider a volume, V, then the distance l is V/A.
If we then assume a constant area then we can take the A inside the differential and:
dW = − P dV
Integrating gives
W = − ∫ P dV
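As a numerical sanity check of W = −∫P dV (my own sketch; the units and values are chosen arbitrarily): for one mole of an ideal gas (P = nRT/V) compressed isothermally, a midpoint-rule sum should match the analytic answer −nRT ln(V2/V1).

// Numerically check W = -(integral of P dV) for an isothermal ideal gas.
public class WorkIntegral {
    public static void main(String[] args) {
        double n = 1.0, R = 8.314, T = 300.0;   // moles, J/(mol K), kelvin
        double V1 = 0.024, V2 = 0.012;          // m^3: the gas is compressed, so W > 0
        int steps = 100_000;
        double dV = (V2 - V1) / steps, W = 0.0;
        for (int i = 0; i < steps; i++) {
            double V = V1 + (i + 0.5) * dV;     // midpoint rule
            W += -(n * R * T / V) * dV;
        }
        System.out.println("numeric  W = " + W + " J");
        System.out.println("analytic W = " + (-n * R * T * Math.log(V2 / V1)) + " J");
    }
}

The positive result for compression agrees with the sign convention below: work transferred to the system is positive.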
Energy: Capacity to do work
Internal Energy, U: The total energy of the system^[1]
Heat: Energy transferred due to a temperature difference
Adiabatic process: No heat transfer between a system and its surroundings
Exothermic process: A process which releases heat
Endothermic process: A process which absorbs heat
Heat is denoted by the symbol, Q
Heat and work are not properties
It is important to note that heat and work are not intrinsic properties of a system. They refer only to energy which is transferred. We cannot say, for example, that a brick has 15 J of heat. It may, however, have 15 J of energy.
Sign convention and notation
Heat and work are considered positive if they are transferred from the surroundings to the system.^[2] This is the reason for the negative sign in the work equations above.
Δ is used to indicate finite change (for example, ΔU)
d is used to indicate differential change (for example, dU)
However, we do not use ΔQ or ΔW for finite changes in heat or work, since Q and W only refer to energy being transferred; we simply use Q or W. We do still use dQ and dW for differential changes.
The laws of thermodynamics are based on observations of the natural world. The first law is based on two observations concerning energy:
1. Energy can be transferred between a system and its surroundings by only two ways: work and heat
2. The total energy of a system and its surroundings is always constant (The conservation of energy)
First Law
These two observations can be combined into the First Law of Thermodynamics:
The internal energy of a system is constant unless changed by doing work or by heating
Mathematical Statement
Mathematically, the change in internal energy is the sum of the work and heat entering or leaving the system:
ΔU = Q + W
dU = dQ + dW
1. ↑ Note that some references say the internal energy is the energy due to internal vibrations, etc.; in other words, energy other than kinetic or potential energy. However, since we are interested only in energy differences, the definition used here is equivalent and is easier to understand.
2. ↑ It is important to note that previously engineering used a different convention: Heat was the same, but work was considered positive if it was transferred from the system to the surroundings.
3.3 The Dot Product
Given two vectors v and w whose components are elements of ℝ, with the same number of components, we define their dot product, written as v · w or (v, w), as the sum of the products of corresponding components: Σᵢ vᵢwᵢ.
Obvious facts: the dot product is linear in v and in w and is symmetric between them.
We define the length of v to be the positive square root of (v, v); the length of v is usually denoted by |v|.
Wonderful Fact: the dot product is invariant under rotation of coordinates.
Exercises 3.1 Prove this statement. Solution
As a consequence of this fact, in evaluating v · w, we can rotate coordinates so that the first basis vector is in the direction of v and the second one is perpendicular to it in the plane of v and w.
Then v will have first two coordinates (|v|, 0) and if the angle between v and w is θ, w will have (|w| cos θ, |w| sin θ) as its similarly defined coordinates.
The dot product v · w therefore is |v| |w| cos θ, in this coordinate system (that is, with these basis vectors), and hence in any coordinate system obtained by rotations from it.
The fact that the dot product is linear in each of its arguments is extremely important and valuable. It means that you can apply the distributive law in either argument to express the dot product of a sum or difference as the sum or difference of the dot products.
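A short computational companion to these definitions (a sketch, not part of the original text): the dot product, the cosine of the angle via v · w = |v||w| cos θ, and the projection of v on w, which the next paragraphs define.

// Dot product, angle, and projection for small vectors.
public class DotDemo {
    static double dot(double[] v, double[] w) {
        double s = 0;
        for (int i = 0; i < v.length; i++) s += v[i] * w[i];
        return s;
    }
    public static void main(String[] args) {
        double[] v = {1, 2, 2}, w = {3, 0, 4};
        double cos = dot(v, w) / Math.sqrt(dot(v, v) * dot(w, w)); // cos(theta)
        double k = dot(v, w) / dot(w, w);                          // scalar for projection
        System.out.println("v.w = " + dot(v, w) + ", cos(theta) = " + cos);
        System.out.println("projection of v on w = (" + k * w[0] + ", " + k * w[1] + ", " + k * w[2] + ")");
    }
}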
Exercises 3.2 Express the square of the area of a parallelogram with sides v and w in terms of dot products. Solution
The dot product of v and w divided by the magnitude of w, which is |v| cos θ, is called the component of v in the direction of w.
The vector in the w direction having magnitude and sign of |v| cos θ is called the projection of v on w.
The vector obtained by subtracting the projection of v on w from v is called the projection of v perpendicular to w or normal to w. (By definition this projection has zero component in the direction of w, and is therefore normal to w.)
3.3 Express the square of the component of v in the direction of w in terms of dot products. Solution
3.4 Express the component of v perpendicular to w in terms of dot products. Solution
3.5 Write out (v − w) · (v − w) using the linearity of the dot product in each of its arguments. What famous law does this establish? Solution
3.6 Express the projection of v on w in terms of dot products and the vector w. Solution
Proofs Without Words
Proofs Without Words: Exercises in Visual Thinking, Volume 1
Proofs without words are generally pictures or diagrams that help the reader see why a particular mathematical statement may be true, and how one could begin to go about proving it. While in some
proofs without words an equation or two may appear to help guide that process, the emphasis is clearly on providing visual clues to stimulate mathematical thought. The proofs in this collection are
arranged by topic into five chapters: Geometry and algebra; Trigonometry, calculus and analytic geometry; Inequalities; Integer sums; and Sequences and series. Teachers will find that many of the
proofs in this collection are well suited for classroom discussion and for helping students to think visually in mathematics.
References from web pages
Proofs Without Words and Words Without Proofs
Dr. Roger Nelsen. Professor of Mathematics. Lewis & Clark College. ANNUAL BULLITT LECTURE. 2007. Visualization in Mathematics:. Proofs Without Words and ...
www.math.louisville.edu/ Bullitt/ Bullitt2007.pdf
JSTOR: Heron's Formula via Proofs without Words
Heron's Formula via Proofs without Words. Roger B. Nelsen. The College Mathematics Journal, Vol. 32, No. 4, 290-292. Sep., 2001. ...
links.jstor.org/ sici?sici=0746-8342(200109)32%3A4%3C290%3AHFVPWW%3E2.0.CO%3B2-H
Euler’s Triangle Inequality via Proofs Without Words
1. Euler’s Triangle Inequality via Proofs Without Words. ROGER B. NELSEN. Lewis & Clark College. Portland, OR 97219. nelsen@lclark.edu ...
www.lclark.edu/ ~mathsci/ euler.pdf
Interactive Mathematics Miscellany and Puzzles
PROOFS WITHOUT WORDS:. EXERCISES IN VISUAL THINKING. ROGER B. NELSEN. Introduction. see (se) v., saw, seen, seeing. -vt. 5. to perceive (things) mentally; ...
www.cut-the-knot.org/ books/ pww/ intro.shtml
Proofs without Words -- Mudd Math Fun Facts
www.math.hmc.edu/ funfacts/ ffiles/ 10001.1-4-8.shtml
Roger B. Nelsen, Proofs without Words, MAA, 0-88385-700-6, 1993
zakuski.utsa.edu/ ~gokhman/ ftp/ / courses/ notes/ sum.pdf
ingentaconnect Euler's Triangle Inequality via Proofs Without Words
Euler's Triangle Inequality via Proofs Without Words. Author: Nelsen, Roger B. Source: Mathematics Magazine, Volume 81, Number 1, February 2008 , pp. ...
www.ingentaconnect.com/ content/ maa/ mm/ 2008/ 00000081/ 00000001/ art00008
Proof without Words -- from Wolfram mathworld
Nelsen, rb Proofs Without Words: Exercises in Visual Thinking. Washington, DC: Math. Assoc. Amer., 1997. Nelsen, rb "Proof Without Words: Sums of Integers ...
mathworld.wolfram.com/ ProofwithoutWords.html
ISMAA 2006 Abstracts
Friday, April 7. th. ISMAA Abstracts. ISMAA 2006 Abstracts. (Note: Student Abstracts are on a separate handout). Friday, April 7. 12:45-1:45p.m. ...
ismaa.knox.edu/ ismaa2006abstracts.pdf
Geometric Series Part 3
Geometric Series Proofs: An Annotated Bibliography. We have seen a geometric proof and a classic algebraic proof for the sum of the geometric series, ...
www41.homepage.villanova.edu/ robert.styer/ Bouncingball/ geometric_series_3.htm
calculation of (a,b,c,d) in 34!
calculation of (a,b,c,d) in 34!
(1) If [equation not rendered], then the value of [expression not rendered] is
(2) If [equation not rendered], then the value of [expression not rendered] is
Last edited by jacks (2013-10-14 15:49:37)
Re: calculation of (a,b,c,d) in 34!
I have solved (1):
19! contains a factor of 9, so the R.H.S. must be divisible by 9.
For divisibility by 9, the sum of the digits must be divisible by 9,
so a = 1.
But I do not understand how to solve (2).
Re: calculation of (a,b,c,d) in 34!
Hi jacks
There are a few thing you will need to do there. First, you should find the number of 0's with which 34! ends. You will find out that both a and b are 0. Then you use divisibility by 9 and 27 to get
c and d.
The limit operator is just an excuse for doing something you know you can't.
“It's the subject that nobody knows anything about that we can all talk about!” ― Richard Feynman
“Taking a new step, uttering a new word, is what people fear most.” ― Fyodor Dostoyevsky, Crime and Punishment
Re: calculation of (a,b,c,d) in 34!
In mathematics, you don't understand things. You just get used to them.
I have the result, but I do not yet know how to get it.
All physicists, and a good many quite respectable mathematicians are contemptuous about proof.
Re: calculation of (a,b,c,d) in 34!
Hi bobbym
I did it off the top of my head and I remember a similar problem where the answer was found the way I described above, so I just guessed that was it. Looks like I'll have to do a bit more thinking.
Re: calculation of (a,b,c,d) in 34!
I am not seeing any quick way for either pair, are you?
Re: calculation of (a,b,c,d) in 34!
I thought about using 4 divisibility rules, but I do not think that will do much.
Re: calculation of (a,b,c,d) in 34!
If we could get 4 congruences that could be solved with the CRT. I just do not see how.
Re: calculation of (a,b,c,d) in 34!
Well, a + b + c + d ≡ 7 (mod 9). And we can easily calculate it mod 11.
Re: calculation of (a,b,c,d) in 34!
You mean
Re: calculation of (a,b,c,d) in 34!
4, actually.
Re: calculation of (a,b,c,d) in 34!
The numbers are 0, 3, 5, 2 which sum to 10. Which is 1 mod 9.
Re: calculation of (a,b,c,d) in 34!
Actually, the problem is that there are 3 digits missing in post #1. ab basically stands for 64352 in that number.
Last edited by anonimnystefy (2013-10-14 20:07:18)
Re: calculation of (a,b,c,d) in 34!
I am not following you.
Re: calculation of (a,b,c,d) in 34!
Compare these two:
Re: calculation of (a,b,c,d) in 34!
That is true, I did not notice that.
Re: calculation of (a,b,c,d) in 34!
I guess we'll have to wait for the OP to clear the question out.
But, either way, if it were like we initially thought, I don't know how we can get the answer...
Re: calculation of (a,b,c,d) in 34!
I agree, I do not even have a question like that in my notes and an internet search came up dry.
Re: calculation of (a,b,c,d) in 34!
I've seen a similar one with 28!, but not this one.
Re: calculation of (a,b,c,d) in 34!
Found only this: http://forum.math.uoa.gr/viewtopic.php? … p;p=211113 which suggests that your answer is still the correct one.
Re: calculation of (a,b,c,d) in 34!
That post is like 3 years old but has the same username!
See you later am going offline for a bit.
Re: calculation of (a,b,c,d) in 34!
I remember doing this question in a BMO1 paper, I think.
Re: calculation of (a,b,c,d) in 34!
I thought that they would not want any modulo-computation methods so I did not use them.
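(Not from the thread: since the puzzle's equations did not survive the page rendering, here is a direct way to settle the 34! question with exact integer arithmetic, printing the factorial together with the mod 9 and mod 11 digit checks the posters were discussing.)

import java.math.BigInteger;

// Compute 34! exactly, then the digit sum (mod 9 test) and the
// alternating digit sum taken from the units digit (mod 11 test).
public class Factorial34 {
    public static void main(String[] args) {
        BigInteger f = BigInteger.ONE;
        for (int k = 2; k <= 34; k++) f = f.multiply(BigInteger.valueOf(k));
        String s = f.toString();
        System.out.println("34! = " + s);
        int digitSum = 0, altSum = 0, sign = 1;
        for (int i = s.length() - 1; i >= 0; i--) {
            int d = s.charAt(i) - '0';
            digitSum += d;
            altSum += sign * d;
            sign = -sign;
        }
        System.out.println("digit sum mod 9        = " + digitSum % 9);
        System.out.println("alternating sum mod 11 = " + ((altSum % 11) + 11) % 11);
    }
}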
The Confoundry
Integer triangle
What's the smallest integer n (greater than 1) such that the area of the triangle with side lengths n, n+1, and n+2 is a whole number divisible by 20?
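(A brute-force check, if you would rather spoil the puzzle than solve it by hand; it uses Heron's formula in the form 16·area² = (a+b+c)(−a+b+c)(a−b+c)(a+b−c), which for sides n, n+1, n+2 becomes 3(n+1)²(n+3)(n−1). It needs Java 9+ for BigInteger.sqrt; the search bound is my arbitrary choice.)

import java.math.BigInteger;

// Smallest n > 1 with integer area divisible by 20 for sides n, n+1, n+2.
public class IntegerTriangle {
    public static void main(String[] args) {
        BigInteger FOUR = BigInteger.valueOf(4), TWENTY = BigInteger.valueOf(20);
        for (long n = 2; n <= 100_000; n++) {
            BigInteger q = BigInteger.valueOf(3)
                .multiply(BigInteger.valueOf(n + 1).pow(2))
                .multiply(BigInteger.valueOf(n + 3))
                .multiply(BigInteger.valueOf(n - 1));   // q = 16 * area^2
            BigInteger r = q.sqrt();                    // floor square root
            if (r.multiply(r).equals(q) && r.mod(FOUR).signum() == 0) {
                BigInteger area = r.divide(FOUR);       // area = sqrt(q) / 4
                if (area.mod(TWENTY).signum() == 0) {
                    System.out.println("n = " + n + ", area = " + area);
                    return;
                }
            }
        }
    }
}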
Attempt Info
• Correct Attempts: 17
• Incorrect Attempts: 8
• Ungraded Attempts: 0
• Total Attempts: 25
History Of The Theory Of Numbers - I
congruences modulo p are of the form a + b√n, where n is a fixed quadratic non-residue of p, while a, b are integers. But the cube root of a non-cubic residue is not reducible to this form a + b√n. The p+1 sets of integral solutions of y^2 − nz^2 ≡ a (mod p) yield the p+1 real or imaginary roots x = y + z√n of x^(p+1) ≡ a (mod p). The latter congruence has primitive roots if a = 1.
Th. Schönemann^64 built a theory of congruences without the use of Euclid's g.c.d. process. He began with a proof by induction that if a function is irreducible modulo p and divides a product AB modulo p, it divides A or B. Much use is made of the concept norm Nf_φ of f(x) with respect to φ(x), i.e., the product f(β_1)...f(β_m), where β_1, ..., β_m are the roots of φ(x) = 0; the norm is thus essentially the resultant of f and φ. The norm of an irreducible function with respect to a function of lower degree is shown by induction to be not divisible by p. Hence if f is irreducible and Nf_φ ≡ 0 (mod p), then f is a divisor of φ modulo p. A long discussion shows that if a_1, ..., a_n are the roots of an algebraic equation f(x) = x^n + ... = 0 and if f(x) is irreducible modulo p, then ∏_{i=1}^{n} {x − φ(a_i)} is a power of an irreducible function modulo p.
If a is a root of f(x) and f(x) is irreducible modulo p, and if φ(a) = ψ(a) + p·χ(a), we write φ ≡ ψ (mod p, a); then φ(x) − ψ(x) is divisible by f(x) modulo p. If the product of two functions of a is ≡ 0 (mod p, a), one of the functions is ≡ 0.
If f(x) = x^n + ... is irreducible modulo p and if f(a) = 0, then
f(x) ≡ (x − a)(x − a^p)...(x − a^(p^(n−1))),  a^(p^n − 1) ≡ 1 (mod p, a),
x^(p^n − 1) − 1 ≡ ∏ {x − φ(a)} (mod p, a),
where φ is a polynomial of degree n − 1 in a with coefficients chosen from 0, 1, ..., p − 1, such that not all are zero. There exist φ(p^n − 1) primitive roots modulis p, a, i.e., functions of a belonging to the exponent p^n − 1.
Let F(x) be irreducible modulis p, a, i.e., have no divisor of degree ≥ 1 modulis p, a. Let F(β) = 0, algebraically. Two functions of β with coefficients involving a are called congruent modulis p, a, β if their difference is the product of p by a polynomial in a, β. It is proved that
F(x) ≡ (x − β)(x − β^(p^n))...(x − β^(p^(n(m−1)))),  β^(p^(nm) − 1) ≡ 1 (mod p, a, β).
If ν < n, n being the degree of f(x), and if the function whose roots are the (p^ν − 1)th powers of the roots of f(x) is ≢ 0 (mod p) for x = 1, then f(x) is irreducible modulo p. Hence if m is a divisor of p − 1 and if g is a primitive root of p, and if k is prime to m, then x^m − g^k is irreducible modulo p.
If ν < m, m being the degree of F(x), and if the function whose roots are the (p^(nν) − 1)th powers of the roots of F(x) is ≢ 0 (mod p, a) for x = 1, then F(x) is irreducible modulis p, a.
^64 Grundzüge einer allgemeinen Theorie der höhern Congruenzen, deren Modul eine reelle Primzahl ist, Progr., Brandenburg, 1844. Same in Jour. für Math., 31, 1846, 269-325.
Package: r-base (2.10.1-2) [universe]
Please consider filing a bug or asking a question via Launchpad before contacting the maintainer directly.
Original Maintainer (usually from Debian):
It should generally not be necessary for users to contact the original maintainer.
GNU R statistical computation and graphics system
R is a system for statistical computation and graphics. It consists of a language plus a run-time environment with graphics, a debugger, access to certain system functions, and the ability to run
programs stored in script files.
The design of R has been heavily influenced by two existing languages: Becker, Chambers & Wilks' S and Sussman's Scheme. Whereas the resulting language is very similar in appearance to S, the
underlying implementation and semantics are derived from Scheme.
The core of R is an interpreted computer language which allows branching and looping as well as modular programming using functions. Most of the user-visible functions in R are written in R.
It is possible for the user to interface to procedures written in the C, C++, or FORTRAN languages for efficiency, and many of R's core functions do so. The R distribution contains functionality for
a large number of statistical procedures and underlying applied math computations. There is also a large set of functions which provide a flexible graphical environment for creating various kinds of
data presentations.
Additionally, over a thousand extension "packages" are available from CRAN, the Comprehensive R Archive Network, many also as Debian packages, named 'r-cran-<name>'.
This package is a metapackage which eases the transition from the pre-1.5.0 package setup with its larger r-base package. Once installed, it can be safely removed and apt-get will automatically
upgrade its components during future upgrades. Providing this packages gives a way to users to then only install r-base-core if they so desire.
Other Packages Related to r-base
• depends • recommends • suggests
dep: r-base-core (>= 2.10.1-2)
GNU R core of statistical computation and graphics system
dep: r-recommended (= 2.10.1-2)
GNU R collection of recommended packages [metapackage]
rec: r-base-html
GNU R html docs for statistical computing system functions
rec: r-doc-html
GNU R html manuals for statistical computing system
sug: ess
Emacs mode for statistical programming and data analysis
sug: r-doc-info
GNU R info manuals statistical computing system
or r-doc-pdf
GNU R pdf manuals for statistical computing system
Download r-base
Download for all available architectures
Architecture Package Size Installed Size Files
all 32.8 kB 76.0 kB [list of files]
Simple root finding and one-dimensional integral algorithms were implemented in previous posts. These algorithms can be used to estimate cumulative probabilities and quantiles. Here, take the normal distribution as an example.
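A self-contained sketch of the same idea in Java (the original post used R; the names here are mine): estimate the standard normal CDF by Simpson's rule, then invert it for quantiles by bisection, which is exactly "root finding plus one-dimensional integration".

// Normal CDF via Simpson's rule; quantile via bisection on the CDF.
public class NormalQuantile {
    static double pdf(double x) { return Math.exp(-x * x / 2) / Math.sqrt(2 * Math.PI); }

    static double cdf(double x) {                  // integrate the pdf from -8 to x
        int n = 2000;
        double a = -8.0, h = (x - a) / n, s = pdf(a) + pdf(x);
        for (int i = 1; i < n; i++) s += (i % 2 == 1 ? 4 : 2) * pdf(a + i * h);
        return s * h / 3;
    }

    static double quantile(double p) {             // solve cdf(q) = p by bisection
        double lo = -8, hi = 8;
        for (int i = 0; i < 60; i++) {
            double mid = (lo + hi) / 2;
            if (cdf(mid) < p) lo = mid; else hi = mid;
        }
        return (lo + hi) / 2;
    }

    public static void main(String[] args) {
        System.out.println("P(Z <= 1.96)    ~ " + cdf(1.96));       // about 0.975
        System.out.println("quantile(0.975) ~ " + quantile(0.975)); // about 1.96
    }
}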
Relationship Between SAT & College Retention
Here is a quick analysis of the relationship between SAT score and student retention. The data is from the Integrated Postsecondary Education Data System (IPEDS) and analyzed using R. This was a quick analysis and I would be careful about making any strong conclusions. The source for running this analysis along with some additional graphics that…
40 Fascinating Blogs for the Ultimate Statistics Geek!
I am happy to report that ByteMining is listed on “40 Fascinating Blogs for the Ultimate Statistics Geek“! Some of the ones that I frequently read, or are written by Twitter friends/followers (in no
particular order): R-bloggers, an aggregate site containing blog posts tagged as posts about R. High quality content. Statistical modeling, causal inference and social science. This one is...
R and Google Visualization API: Fish harvests
I recently gathered fish harvest data from the U.S. National Oceanic and Atmospheric Administarion (NOAA), which I downloaded from Infochimps. The data is fish harvest by weight and value, by species
for 21 years, from 1985 to 2005. Here is a link to a google document of the data I used below: https://spreadsheets.google.com/ccc?key=0Aq6aW8n11tS_dFRySXQzYkppLXFaU2F5aC04d19ZS0E&hl=en##############
### Fish harvest data...
In case you missed it: December Roundup
In case you missed them, here are some articles from December of particular interest to R users. A Facebook employee created a beautiful visualization of social connections around the world, which
made a lot of news on the Web. The creator, Paul Butler, explained how he did it using R. With sponsorship from Revolution Analytics, the R/Finance conference in...
Parsing and plotting time series data
This morning I came across a post which discusses the differences between scala, ruby and python when trying to analyse time series data. Essentially, there is a text file consisting of times in the
format HH:MM and we want to get an idea of its distribution. Tom discusses how this would be a bit clunky
Survival paper (update)
In a recent post, I discussed some statistical consultancy I was involved with. I was quite proud of the nice ggplot2 graphics I had created. The graphs nicely summarised the main points of the
paper: I’ve just had the proofs from the journal, and next to the graphs there is the following note: It is
I'm writing a new package that will create nice publication quality graphics of genome information. It's really an adaptor sitting between the biomaRt and ggplot2 packages. Here is the code so far:##
this function integrates 3 steps to creating a g...
I have just remembered a package called 'prettyR' that pretty much does what it says. It makes R code more readable.. so an example from my forthcoming genomeplot package (see forthcoming blog
entry): ## this function integrates 3 steps to creat...
Introducing the Lowry Plot
Here at the Health and Safety Laboratory* we’re big fans of physiologically-based pharmacokinetic (PBPK) models (say that 10 times fast) for predicting concentrations of chemicals around your body
based upon an exposure. These models take the form of a big system of ODEs. Because they contain many equations and consequently many parameters (masses of organs
Gaussianity Measures for Detecting the Direction of Causal Time Series
José Miguel Hernández-Lobato, Pablo Morales-Mombiela, Alberto Suárez
We conjecture that the distribution of the time-reversed residuals of a causal linear process is closer to a Gaussian than the distribution of the noise used to generate the process in the forward
direction. This property is demonstrated for causal AR(1) processes assuming that all the cumulants of the distribution of the noise are defined. Based on this observation, it is possible to design a
decision rule for detecting the direction of time series that can be described as linear processes: The correct direction (forward in time) is the one in which the residuals from a linear fit to the
time series are less Gaussian. A series of experiments with simulated and real-world data illustrate the superior results of the proposed rule when compared with other state-of-the-art methods based
on independence tests.
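A small simulation in the spirit of the abstract (my own sketch; the parameter choices are arbitrary and the fit uses the true AR coefficient rather than an estimated one): drive an AR(1) process with centered exponential noise, then compare the excess kurtosis of forward and time-reversed residuals. The reversed residuals mix many innovations together, so their excess kurtosis should come out noticeably closer to the Gaussian value 0.

import java.util.Random;

// Forward vs. time-reversed AR(1) residuals under non-Gaussian noise.
public class TimeReversal {
    static double excessKurtosis(double[] e) {
        double m = 0, v = 0, k = 0;
        int n = e.length;
        for (double x : e) m += x / n;
        for (double x : e) { double d = x - m; v += d * d / n; k += d * d * d * d / n; }
        return k / (v * v) - 3.0;
    }

    public static void main(String[] args) {
        Random rng = new Random(42);
        int n = 200_000;
        double a = 0.8;
        double[] x = new double[n];
        for (int t = 1; t < n; t++) {
            double eps = -Math.log(1.0 - rng.nextDouble()) - 1.0; // centered Exp(1) noise
            x[t] = a * x[t - 1] + eps;
        }
        double[] fwd = new double[n - 1], bwd = new double[n - 1];
        for (int t = 1; t < n; t++) {
            fwd[t - 1] = x[t] - a * x[t - 1];   // forward residuals = the true noise
            bwd[t - 1] = x[t - 1] - a * x[t];   // residuals of the time-reversed fit
        }
        System.out.println("excess kurtosis, forward : " + excessKurtosis(fwd));
        System.out.println("excess kurtosis, reversed: " + excessKurtosis(bwd));
    }
}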
Matches for: Author/Editor=(Hardt_Robert)
The calculus of variations is a beautiful subject with a rich history and with origins in the minimization problems of calculus. Although it is now at the core of many modern mathematical fields, it
does not have a well-defined place in most undergraduate mathematics curricula. This volume should nevertheless give the undergraduate reader a sense of its great character and importance.
Interesting functionals, such as area or energy, often give rise to problems for which the most natural solution occurs by differentiating a one-parameter family of variations of some function. The
critical points of the functional are related to the solutions of the associated Euler-Lagrange equation. These differential equations are at the heart of the calculus of variations and its
applications to other subjects. Some of the topics addressed in this book are Morse theory, wave mechanics, minimal surfaces, soap bubbles, and modeling traffic flow. All are readily accessible to
advanced undergraduates.
This book is derived from a workshop sponsored by Rice University. It is suitable for advanced undergraduates, graduate students and research mathematicians interested in the calculus of variations
and its applications to other subjects.
Undergraduates, graduate students and research mathematicians interested in the calculus of variations and its applications to other subjects.
"The book is recommended to an audience of undergraduate students as well as to teachers looking for inspiration for their own lectures."
-- EMS Newsletter
"This work is a beautiful collection of six papers written by well known specialists in the Calculus of Variations. ... All these papers are very well written and they illustrate the fruitful
interplay between pure and applied mathematics."
-- Zentralblatt MATH
• F. Jones -- Calculus of variations: What does "variations" mean?
• R. Forman -- How many equilibria are there? An introduction to Morse theory
• S. J. Cox -- Aye, there's the rub. An inquiry into why a plucked string comes to rest
• F. Morgan -- Proof of the double bubble conjecture
• M. Wolf -- Minimal surfaces, flat cone spheres and moduli spaces of staircases
• B. L. Keyfitz -- Hold that light! Modeling of traffic flow by differential equations
Training pace, % of VO2 max and training intensity
OK so here's another question I have wondered over the past few weeks and am willing to share with you guys to see if it makes any sense at all.
Now we know that we can have an estimated VO2 Max from a race result, using the McMillan calculators and such.
Say I get a reading of 50 for a recent race result.
If I go out on a training run and plug the resulting pace and distance into the McMillan calculator, and get say a VO2 max value of 45, is this relation (45/50) meaningful in some way ? Ie can you
calculate this quotient to get a measure of training intensity, and say for instance that it should be below 70 or 80% or whatever if it's to be an easy training pace, or over 90 or 95% if it's to be
a proper tempo effort...
No, estimating VO2max from a race result is sketchy enough as it is. A VO2max estimate from a sub-maximal effort is totally meaningless.
So Mikeymike, you are saying it doesn't correlate the way % of max heart rate might?
Work on stretching and flexibility and stay healthy for Boston 2014 (have a BQ -9:00 time)
I'm not sure I understand the question. I am not saying anything about correlation or heart rate.
Measuring VO2max requires a maximum effort and a ventilator to measure oxygen and CO2 concentrations. Estimating VO2max from a race effort can be somewhat useful for training purposes, or for
comparing races at different distances. But I can't see how plugging a sub-maximal effort into a VO2max calculator would produce anything useful.
I guess the question is, when you see things like "80% of VO2Max", does that refer to pace (in which case the answer to the poster's question would seem to be yes), or more literally to oxygen uptake
rate (pace at which you're using 80% of your maximum uptake)? I would think the former, because otherwise things like 110% of VO2Max -- which I have seen -- wouldn't make any sense.
That said, determining appropriate training paces by looking at estimated VO2Max seems to be an unnecessarily roundabout way of going about it.
It is true that 110% of VO2max is meaningless, but I think what's meant in that construction where it's > 100% is 110% of vVO2max or 110% of the velocity at VO2max (definitely going big time
anaerobic). So, if vVO2 max is about 5 minutes per mile for someone, 5.67 meters/second, than 110% would be about 6.2 m/s, whatever that works out to in pace. In contradiction, I think what is
usually intended by 80% VO2max is actually the opposite: pace at 80% VO2max, not 80% of VO2max pace.
It is actually possible to estimate %VO2max from %HRmax if HRmax is known (and I think most people probably don't actually know what it is), but the key word there is "estimate" because there can be
many confounding factors for HR as a measure of effort.
As to the OP's original question.... I don't know! It's an intriguing one. I'd have to do the proof for myself (or at least a few example problems
all running goals are under review by the executive committee.
I do agree with this. I nerd out on this stuff a lot lately, but I don't actually USE it to determine training paces. For that I have McMillan to thank, not to mention some experience of my own,
Jeff, and some stubbornness (probably running reps sometimes faster than I should).
This is my understanding also, although I usually substitute effort for pace. (Being a trail runner on hilly trails makes pace a weird concept, and I've never done a flat, non-snow race to have a valid input to any of the calculators.)
On estimating %VO2max from %HRmax, this is the formula that I think I've seen the most:
%MHR = 0.6463 x %VO2max + 37.182
But, like most regressions, it may or may not be meaningful for an individual.
As to the OP's original question.... I don't know! It's an intriguing one. I'd have to do the proof for myself (or at least a few example problems).
4000 ft of uphill then the same downhill in about 7 mi will ensure that I won't have *that* sleep problem tonight.
"So many people get stuck in the routine of life that their dreams waste away. This is about living the dream." - Cave Dog
Isn't what the OP is asking the basis for Daniels' "Points" system of estimating paces? He uses something called VDOT to emphasize that it is NOT the same as VO2max.
Not as far as I can tell. Daniels calculates VDOT from a max (race) effort and all training paces are calculated from the resulting value. Whether you want to talk about VO2max or VDOT, I still can't
see how you could take a sub maximal effort, plug it into a VDOT calculator, compare it to your "actual" VDOT, and get anything meaningful from it.
It's like comparing an estimate of an estimate to the actual estimate.
On your deathbed, you won't wish that you'd spent more time at the office. But you will wish that you'd spent more time running. Because if you had, you wouldn't be on your deathbed.
The thing about the VO2Max number that RA assigns to workout entries is that RA seems to factor in the distance and pace. So if your "true" estimated VO2Max is 50.0 and RA says today's workout entry
has a VO2Max value of 40.0, I'm not sure what you can take away from the numerical "fact" that you ran the workout at "80% of VO2Max" (whatever that means).
"Ie can you calculate this quotient to get a measure of training intensity, and say for instance that it should be below 70 or 80% or whatever if it's to be an easy training pace, or over 90 or 95%
if it's to be a proper tempo effort..."
I think he's asking whether it is valid to use a maximal race effort as a baseline to determine relative intensities of other training runs and at about what pace they should be run. There's a
spreadsheet of Daniels-based calculations that shows this, I think, exactly. I've never studied Daniels's stuff but this sure looks like what the OP is asking about, doesn't it? (Screenshot omitted.)
And I don't think that's what he/she is asking because before the part you quoted, there is this:
If I go out on a training run and plug the resulting pace and distance into the McMillan calculator, and get say a VO2 max value of 45, is this relation (45/50) meaningful in some way ?
How to get a Dehn-twist presentation of a periodic map of a Riemann surface?
Let $f:S\to S$ be a given periodic map of order $n$, and $(m_i,\lambda_i, \sigma_i)$ be the valency of the multiple point $p_i$ of $f$ ( $i=1,\cdots,r$ ).
A classical result says such $f$ is isotopic to a product of Dehn twists. It is trivial when $n=1$. Now we assume that $n>1$. I want to know how to get such a Dehn twist presentation.
For a pseudo-periodic map, a similar Dehn twist presentation implies the Picard-Lefschetz formula for the monodromy of a singular fiber (semistable or non-semistable). In fact, I wish to compute the monodromy of a non-semistable fiber.
For hyperelliptic periodic maps, Ishizaka provided a method.
ag.algebraic-geometry at.algebraic-topology mapping-class-groups
1 How is the map given in the first place? – Igor Rivin Jan 1 '11 at 2:48
The above question is answered in some detail by my answer to the following: mathoverflow.net/questions/142365/… – Sam Nead Jan 13 at 21:38
So, I am voting to close. – Sam Nead Jan 13 at 21:40
@SamNead: Do you think it is reasonable to close an old question as a duplicate of a much newer question? – Stefan Kohl Jan 13 at 23:44
@Stefan: Well, the alternative is to cut-and-paste my answer. But I thought that would be poor form. – Sam Nead Jan 15 at 22:28
search results
Results 1 - 2 of 2
1. CJM 2010 (vol 62 pp. 787)
An Explicit Treatment of Cubic Function Fields with Applications
We give an explicit treatment of cubic function fields of characteristic at least five. This includes an efficient technique for converting such a field into standard form, formulae for the field
discriminant and the genus, simple necessary and sufficient criteria for non-singularity of the defining curve, and a characterization of all triangular integral bases. Our main result is a
description of the signature of any rational place in a cubic extension that involves only the defining curve and the order of the base field. All these quantities only require simple polynomial
arithmetic as well as a few square-free polynomial factorizations and, in some cases, square and cube root extraction modulo an irreducible polynomial. We also illustrate why and how signature
computation plays an important role in computing the class number of the function field. This in turn has applications to the study of zeros of zeta functions of function fields.
Keywords:cubic function field, discriminant, non-singularity, integral basis, genus, signature of a place, class number
Categories:14H05, 11R58, 14H45, 11G20, 11G30, 11R16, 11R29
2. CJM 1997 (vol 49 pp. 283)
The $2$-rank of the class group of imaginary bicyclic biquadratic fields
A formula is obtained for the rank of the $2$-Sylow subgroup of the ideal class group of imaginary bicyclic biquadratic fields. This formula involves the number of primes that ramify in the field,
the ranks of the $2$-Sylow subgroups of the ideal class groups of the quadratic subfields and the rank of a $Z_2$-matrix determined by Legendre symbols involving pairs of ramified primes. As
applications, all subfields with both $2$-class and class group $Z_2 \times Z_2$ are determined. The final results assume the completeness of D. A. Buell's list of imaginary fields with small class number.
Categories:11R16, 11R29, 11R20
Trig Waves question ???
Plain old cos x has a period of 2π. The variable (t) inside of cos is multiplied by 5, which compresses the period. So, the answer is 2π/5.
The shift is the quantity subtracted from the variable inside cos, divided by the coefficient of t, giving 3π/10. The reason is that you can factor the coefficient out: cos(bt − c) = cos(b(t − c/b)), so the graph is shifted by c/b, here (3π/2)/5 = 3π/10. I also verified it with a graphing program.
El que pega primero pega dos veces.
MathGroup Archive: December 2001 [00156]
RE: Simple questions about Mathematica
• To: mathgroup at smc.vnet.net
• Subject: [mg31892] RE: [mg31888] Simple questions about Mathematica
• From: "David Park" <djmp at earthlink.net>
• Date: Mon, 10 Dec 2001 06:14:31 -0500 (EST)
• Sender: owner-wri-mathgroup at wolfram.com
Trying to match square roots with replacement rules is always a bit of a
problem because Mathematica represents them by Power[expr,1/2] in the
numerator and Power[expr,-1/2] in the denominator. So it is better to use
Power to begin with. So this works...
expr = 2/(3*Sqrt[a^2 + b^2]) + (1 - Sqrt[a^2 + b^2])^2 +
E^(-a + Sqrt[a^2 + b^2]);
expr /. Power[a^2 + b^2, n : (-1/2 | 1/2)] -> Power[x, n]
E^(-a + Sqrt[x]) + (1 - Sqrt[x])^2 + 2/(3*Sqrt[x])
where I have used the Alternatives pattern matching construction for n. The
replacement rule could also be written as:
expr /. (a^2 + b^2)^n:(1/2 | -(1/2)) -> x^n
As for running Mathematica in a "batch" mode, I always use Mathematica
interactively, and I think that is the best way to use it. But I never have
occasion to run really time consuming problems. Perhaps someone else will
give you advice on that.
David Park
djmp at earthlink.net
> -----Original Message-----
> From: Stephen Gray [mailto:stevebg at adelphia.net]
> Sent: Sunday, December 09, 2001 6:07 AM
> To: mathgroup at smc.vnet.net
> Subject: [mg31892] [mg31888] Simple questions about Mathematica
> 1. Suppose I solve some equations and get a complicated
> expression containing multiple identical elements such as
> sqrt(a^2+b^2). To make the expression smaller, clearer, and
> easier to deal with, I want to substitute say R=sqrt(a^2+b^2)
> everywhere in the main expression and have R available as
> a symbol from then on. I see nothing in Help or anywhere
> else about how to do this. But it's such a standard thing to
> do that there must be an easy answer.
> 2. I'm not sure how a "batch" file is supposed to be
> prepared and fed to Mathematica. That is, I want to
> prepare a bunch of operations and variables in advance
> and feed it in, preferably having Mathematica give me
> line-by-line output as it would if I were typing each line
> in one by one.
> As a new user I really need answers to these questions.
> Thanks in advance for any information.
First Person Camera Control with LWJGL
This tutorial will show you a way of implementing a first person camera control system in Java using the Lightweight Java Game Library (LWJGL).
We will make a class called FPCameraController that has 3 properties:
position – A vector that will store the x, y and z co-ords of the camera.
yaw – A float that will store the yaw (y axis rotation) of the camera.
pitch – A float that will store the pitch (x axis rotation) of the camera.
We do not need to store the roll (z rotation), as a first person camera will not roll, but you could add it if you want your camera to tilt (some games use tilt to peek around corners).
import org.lwjgl.opengl.GL11;
import org.lwjgl.util.vector.Vector3f;

//First Person Camera Controller
public class FPCameraController
{
    //3d vector to store the camera's position in
    private Vector3f position = null;
    //the rotation around the Y axis of the camera
    private float yaw = 0.0f;
    //the rotation around the X axis of the camera
    private float pitch = 0.0f;
Now we will make the constructor that will take 3 float values as parameters: x, y and z. They will be the starting location of the camera.
    //Constructor that takes the starting x, y, z location of the camera
    public FPCameraController(float x, float y, float z)
    {
        //instantiate position Vector3f to the x y z params.
        position = new Vector3f(x, y, z);
    }
Next we will make yaw and pitch methods; these will be used to control the rotation of the camera. They will both take a float parameter, amount. This value will be the y movement of the mouse for the pitch method and the x movement of the mouse for the yaw method.
    //increment the camera's current yaw rotation
    public void yaw(float amount)
    {
        //increment the yaw by the amount param
        yaw += amount;
    }

    //increment the camera's current pitch rotation
    public void pitch(float amount)
    {
        //increment the pitch by the amount param
        pitch += amount;
    }
Next we will make the walking methods, typically bound to the WASD keys. The methods need to calculate how far the camera moves along the x and z axes, because the movement is relative to the current yaw (y rotation) of the camera. For example, if you have turned the camera 45 degrees to the right, then moving forward shifts you partly along the x axis and partly along the z axis.
This calculation is done using basic trigonometry. If we know the distance we want to move and the angle we want to move at (yaw), we can calculate how far to move along the x and z axes like so:
x = distance * sin(yaw)
z = distance * cos(yaw)
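As a quick sanity check of these formulas (an added note): at a yaw of $45^\circ$,

\[ x = d \sin 45^\circ \approx 0.707\,d, \qquad z = d \cos 45^\circ \approx 0.707\,d, \]

so a forward step of distance $d$ is split evenly between the two axes, while at a yaw of $0$ the whole step goes into $z$, as you would expect.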
    //moves the camera forward relative to its current rotation (yaw)
    public void walkForward(float distance)
    {
        position.x -= distance * (float)Math.sin(Math.toRadians(yaw));
        position.z += distance * (float)Math.cos(Math.toRadians(yaw));
    }

    //moves the camera backward relative to its current rotation (yaw)
    public void walkBackwards(float distance)
    {
        position.x += distance * (float)Math.sin(Math.toRadians(yaw));
        position.z -= distance * (float)Math.cos(Math.toRadians(yaw));
    }

    //strafes the camera left relative to its current rotation (yaw)
    public void strafeLeft(float distance)
    {
        position.x -= distance * (float)Math.sin(Math.toRadians(yaw-90));
        position.z += distance * (float)Math.cos(Math.toRadians(yaw-90));
    }

    //strafes the camera right relative to its current rotation (yaw)
    public void strafeRight(float distance)
    {
        position.x -= distance * (float)Math.sin(Math.toRadians(yaw+90));
        position.z += distance * (float)Math.cos(Math.toRadians(yaw+90));
    }
Next we will write a method that translates and rotates the modelview matrix so that we look through the camera.
    //translates and rotates the matrix so that it looks through the camera
    //this does basically what gluLookAt() does
    public void lookThrough()
    {
        //rotate the pitch around the X axis
        GL11.glRotatef(pitch, 1.0f, 0.0f, 0.0f);
        //rotate the yaw around the Y axis
        GL11.glRotatef(yaw, 0.0f, 1.0f, 0.0f);
        //translate to the position vector's location
        GL11.glTranslatef(position.x, position.y, position.z);
    }
This class would be used in the game's main loop, with the lookThrough() method run before anything is rendered and the movement and rotation methods run based on key presses and mouse movement. This is what a game loop might look like using this class.
public void gameLoop()
{
    //(assumes the usual LWJGL imports: org.lwjgl.opengl.Display,
    // org.lwjgl.input.Mouse, org.lwjgl.input.Keyboard and org.lwjgl.Sys)
    FPCameraController camera = new FPCameraController(0, 0, 0);
    float dx = 0.0f;
    float dy = 0.0f;
    float dt = 0.0f; //length of frame
    float lastTime = 0.0f; // when the last frame was
    float time = 0.0f;
    float mouseSensitivity = 0.05f;
    float movementSpeed = 10.0f; //move 10 units per second

    //hide the mouse and grab it so it cannot leave the window
    Mouse.setGrabbed(true);

    // keep looping till the display window is closed or the ESC key is down
    while (!Display.isCloseRequested() &&
           !Keyboard.isKeyDown(Keyboard.KEY_ESCAPE))
    {
        time = Sys.getTime();
        dt = (time - lastTime)/1000.0f;
        lastTime = time;

        //distance in mouse movement from the last getDX() call.
        dx = Mouse.getDX();
        //distance in mouse movement from the last getDY() call.
        dy = Mouse.getDY();

        //control camera yaw from x movement of the mouse
        camera.yaw(dx * mouseSensitivity);
        //control camera pitch from y movement of the mouse
        camera.pitch(dy * mouseSensitivity);

        //when passing in the distance to move we multiply movementSpeed
        //by dt as a time scale: on a slow frame you move further than on
        //a fast frame, so on a slow computer you move just as fast as on
        //a fast computer
        if (Keyboard.isKeyDown(Keyboard.KEY_W))//move forward
            camera.walkForward(movementSpeed*dt);
        if (Keyboard.isKeyDown(Keyboard.KEY_S))//move backwards
            camera.walkBackwards(movementSpeed*dt);
        if (Keyboard.isKeyDown(Keyboard.KEY_A))//strafe left
            camera.strafeLeft(movementSpeed*dt);
        if (Keyboard.isKeyDown(Keyboard.KEY_D))//strafe right
            camera.strafeRight(movementSpeed*dt);

        //set the modelview matrix back to the identity
        GL11.glLoadIdentity();
        //look through the camera before you draw anything
        camera.lookThrough();

        //you would draw your scene here.

        //draw the buffer to the screen
        Display.update();
    }
}
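The loop above leaves a placeholder where the scene is drawn. Purely as an illustration (this quad is my invention, not part of the original tutorial), here is a minimal piece of immediate-mode geometry that could sit at the "draw your scene here" comment, so there is something to fly around:

    //draw a flat 10x10 green quad below the camera as a visual reference
    //(any geometry works here; this is just the simplest thing that renders)
    GL11.glBegin(GL11.GL_QUADS);
    GL11.glColor3f(0.0f, 0.6f, 0.0f);
    GL11.glVertex3f(-5.0f, -1.0f, -5.0f);
    GL11.glVertex3f(-5.0f, -1.0f,  5.0f);
    GL11.glVertex3f( 5.0f, -1.0f,  5.0f);
    GL11.glVertex3f( 5.0f, -1.0f, -5.0f);
    GL11.glEnd();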
If there are any problems, questions and or suggestions about this tutorial please leave a comment.
17 Comments
1. Thanks so much for this tutorial! I’ve been searching Google for what seems to be years! Just a question, could gluLookAt be used to do the same thing as you’ve done in lookThrough(); ?
Thanks again,
2. Arn’t you suppost to translate, then rotate?
3. Thanks, that was very helpful. Could you please upload a simple example of this?
4. I’m having problems with this. It seems to be a bit more like orbit and less of a look around thing. I’m using gluPerspective with blocks of width 1, maybe it would work better with bigger
5. Hey mate, I have a working example I used to test it when I made the tutorial, I will try dig that up for you!
6. I have implemented a solution similar to this, though I added vertical movement:
public void walkForward(float distance) {
    position.x -= distance * (float) Math.sin(Math.toRadians(yaw));
    position.y += distance * (float) Math.tan(Math.toRadians(pitch));
    position.z += distance * (float) Math.cos(Math.toRadians(yaw));
}

public void walkBackwards(float distance) {
    position.x += distance * (float) Math.sin(Math.toRadians(yaw));
    position.y -= distance * (float) Math.tan(Math.toRadians(pitch));
    position.z -= distance * (float) Math.cos(Math.toRadians(yaw));
}

public void flyUp(float distance) {
    position.y -= distance;
}
I have two problems. First, when I get the camera into certain orientations, it will jitter wildly when glRotatef() attempts to do the rotation. If you imagine standing in the middle of a cube
and looking at any of the corners, this is where the camera jitters. Second, and more trivial – tan likes to shoot off to inf depending on how much I pitch. I have simply been bounding tan in
that case.
Any ideas? Great example though
7. I should mention – the jitter only happens when I am quite far from the origin of my world. It’s got to be a lack of precision on glTranslatef and/or glRotatef as you leave the origin. I wonder
if I can move the origin with me.. hmm.
8. The Camera seems to zoom in when I turn up and zoom out when I go back. I need a camera which will tell me where my player is in the world (its blocks are 1 wide in the VBOs)
9. Sorry for the issues I have caused it seems that I was translating to 100 before looking through the camera. It works fine now.
10. I have a really weird problem. The mouse movements returned by getDX and getDY alternate between large and small values. So, if I move my mouse fairly smoothly, I might get this series of DX’s:
181, 123, 178, 127, 180, 119 and so on. Notice how they alternate between large and small.
When my mouse isn’t moving, it is as you would expect; all zeros. Needless to say, this issue causes jittery turning on the screen.
11. While everything works great i’m having one problem: When i push my mouse up i get a negative number from DY. When i push my mouse down i get a positive. This results in a really weird way of
looking up or down (inverted)
12. Can I have the Source? I have a problem and I want to see what I’m doing wrong. Can you post a download link plz?
13. I'm trying to use this code and when I press W, S, A, D it doesn't move at all. It's getting caught up here:
time = Sys.getTime();
dt = (time - lastTime)/1000.0f;
lastTime = time;
dt is always 0 and time is returned as a constant value (1.35410601E12). Since the distance is calculated as movementSpeed*dt, it's always going to return 0 and not move at all.
What could be a possible solution?
14. Thank you. This is the most straightforward tutorial I’ve found so far.
15. Hey Gasper: there's a mistake in his code, the time shouldn't be stored as a float, but as a long! After that, it'll work! ;) (A sketch of this fix appears below, after the comments.)
Maybe you’d want to change that in the tut, Lloyd? Thanks, it’s been a great help!
16. I thought I was lost when all my own attempts at creating a first person camera failed, but your tutorial helped me out of this misery. My ideas were way too complex or, should I say, confused.
17. Thanks man, I hit a rough patch on this one. It’s been way too long since I took linear algebra for me to remember the appropriate transforms, and I kept hitting a rough spot when I tried to turn
left and right. This solved all of that.
Have you considered submitting this to the main LWJGL tutorial site? A lot of younger programmers, particularly those that don’t have the slightest clue what a matrix transform is, could really
use it.
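To spell out the timing fix suggested in comments 13 and 15 (this sketch is mine, not code from the original page): keep the timestamps as long values and convert only the small per-frame difference to float.

    long lastTime = 0; // timestamp of the previous frame
    long time;         // timestamp of the current frame
    float dt;          // frame length in seconds

    time = Sys.getTime();
    //the long subtraction stays exact; only the small delta becomes a float
    //(like the tutorial, this assumes Sys.getTimerResolution() is about 1000)
    dt = (time - lastTime) / 1000.0f;
    lastTime = time;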
For a summer fundraiser, Rob is doing manual labor to help several families do home improvement projects. He is tracking the amount of time he spends each week with a spreadsheet. If his spreadsheet shows that during the past two weeks, he spent a total of 11,040 minutes working, how many days and hours has he worked?
(a) 6 days and 6 hours
(b) 6 days and 16 hours
(c) 7 days and 7 hours
(d) 7 days and 16 hours
Asker: @LukeBlueFive can you help me with this?

LukeBlueFive: Sure. There are 60 minutes in an hour: \[1 hour = 60 minutes\] So this means: \[11,040 minutes \times \frac{1hour}{60 minutes}\] Will give you the equivalent number of hours. Furthermore, there are 24 hours in a day: \[1 day = 24 hours\] So this means you need to multiply the previous result by: \[\frac{1 day}{24 hours}\] To get the equivalent number of days.

Asker: 66240 and 264090, thats what i am getting :( so sorry

LukeBlueFive: You seem to be multiplying instead of dividing... guess I should've made my equations clearer. Let me rewrite them: \[\frac{11,040 minutes}{60 minutes}\] Will give you the number of hours. Then, divide the following from that result: \[24 hours\] To get the number of days.

LukeBlueFive: You divide since there are more minutes in a day than hours, and more hours than days in a year... if that makes any sense.

Asker: 7.666667 i got final answer this

Asker: 184 dividing 11040/60 184 / 24 got that 7.6666667

LukeBlueFive: Okay, he worked for over 7 days, but how much over? To find out, multiply the .6666667 part by 24 to get the number of hours that is equal to 2/3 of a day.

LukeBlueFive: Then you'll have the answer: 7 days and ? hours

Asker: But i m getting 184 when i multiply

Asker: Sorry got 16, you meant just multiply .666667 with 24

Asker: Thank you so sooooooooooo much @LukeBlueFive !!!!!!!!!! your totally amazing !!!!!

Asker: So sorry for troubling alot for this probs, actually worst at math

LukeBlueFive: You're too kind, but you're welcome. Hope this made sense.

Asker: yes :) thank you so much, n um so sorry again for trouble

LukeBlueFive: That's what I'm here for. =D

Asker: Lol thanks alot, your really great and truly genius !!! :)

LukeBlueFive: lol, I've just seen a lot of math over the years... ^_^;
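For the record, the arithmetic the thread converges on, written out compactly (this summary is an addition, not part of the exchange):

\[ \frac{11{,}040\ \text{minutes}}{60\ \text{minutes/hour}} = 184\ \text{hours}, \qquad 184 = 7 \times 24 + 16, \]

so Rob worked 7 days and 16 hours, answer (d).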
simple game, need help with loops and math
the game is: there are two players. one player selects how many chips there are in the pile to start with. then he takes some chips, then player two takes some, then back to player 1, and so on
until there is only 1 chip left. caveats are that each player can only take up to half of the available amount, and must take more than one. this is my code so far. i know it's sloppy and needs
some neatening up and better wording, but i want to get it functioning properly first before i worry about all that. so i need the program to keep executing until there's just one chip, and
whenever someone enters an invalid amount (say more than half of the available amount) they are repeatedly asked to input a different number. right now the two main problems i'm having are that
it's not repeating itself, it's stopping (i'll point out where), and the problem with the math. ok here's the code and i'll have some comments at the bottom.
#include <iostream>
#include <iomanip>
#include <string>
#include <math.h>
using namespace std;
int main()
int a, b, c, d, pile_size, turn_size, num_chips, initial_number_of_chips, chips_left;
string player_1, player_2;
cout << "\nRules: The game starts with a pile of chips. Eaech player may only take at\n";
cout << "most half of the chips. The player that gets the last chip wins. Good luck."
<< "\nPlayer 1 please enter your first name:";
cin >> player_1;
cout << "\nPlayer 2 please enter your first name:";
cin >> player_2;
cout <<"\nThe current player is" << player_1;
cout << "\n How many chips you would like to start with, " << player_1;
cin >> initial_number_of_chips;
while (a>1)
cout<< "\nPlayer 1 it is your turn, how many chips would you like to take?";
while (a>(initial_number_of_chips/2) || a<1)
cout<<"\nplease enter a different value";
while (a <=initial_number_of_chips/2 || a>=1)
cout<<"\n There are" <<initial_number_of_chips-a<< "left";
cout<<"\n"<<player_2<<"it is your turn, please select how many chips you would like";
while (a >(initial_number_of_chips-a)/2 || a<1)
cout<<"\nplease enter a different value";
while (a <=(initial_number_of_chips-a)/2 || a>1)
cout<<"\nThere are"<<initial_number_of_chips-a<<"left"; // at this point, it keeps repeating the loop for player 2 and not jumping back up to player 1
return 0;
now with the math issue... i need the number of chips to be continually decreasing as the players take chips out of the pile. but i can't think of a way to do that. right now, every time either
player inputs a value for a, the previous a is then overwritten, and the number of chips available basically resets.
example: start with 200 chips.
player 1 takes 50
there are 150 left
player 2 takes 70
now it will go back to the initial 200 and subtract 70, giving me 130 left.
i know this is a lot, but can anyone give me some tips? i'm a beginner at this and this is my first time trying a program with multiple loops in it, and it's not coming easy. also don't worry
about all the int variables, i've been messing around with different things and just haven't deleted the ones i'm not using yet.
What's happening here is the result of initial_chips - a is calculated, displayed, and then thrown away, never to be reused. Have you learned the assignment statement?
No, I haven't heard of that. I will look into it though, thanks for the tip.
alright i've changed my code a bit, still not getting what i want, but getting closer
#include <iostream>
#include <iomanip>
#include <string>
#include <math.h>
using namespace std;
int main()
int a, b, c, d, pile_size, turn_size, num_chips, initial_number_of_chips, chips_left;
string player_1, player_2;
cout << "\nRules: The game starts with a pile of chips. Eaech player may only take at\n";
cout << "most half of the chips. The player that gets the last chip wins. Good luck."
<< "\nPlayer 1 please enter your first name:";
cin >> player_1;
cout << "\nPlayer 2 please enter your first name:";
cin >> player_2;
cout <<"\nThe current player is" << player_1;
cout << "\n How many chips you would like to start with, " << player_1;
cin >> initial_number_of_chips;
cout<< "\nPlayer 1 it is your turn, how many chips would you like to take?";
while (a>(initial_number_of_chips/2) || a<1)
cout<<"\nplease enter a different value";
if (a <=initial_number_of_chips/2 || a>=1)
cout<<"\n There are" <<initial_number_of_chips-a<< "left";
while (num_chips>1)
cout<<"\n"<<player_2<<"it is your turn, please select how many chips you would like";
while (a >(num_chips)/2 || a<1)
cout<<"\nplease enter a different value"; //right here it's continually asking for a different value, i can't get past this step. should i use an if instead?
if (a <=(num_chips)/2 || a>1)
cout<<"\nThere are"<<initial_number_of_chips-a<<"left";
cout<<"\n"<<player_1<<"it is your turn. please select how many chips you would like";
while (a>(num_chips)/2 || a<1)
cout<<"\nPlease enter a different value";
if (a<=(num_chips)/2|| a>=1)
cout<< "test";
return 0;
the test at the bottom means nothing, just seeing if i could make it there.
so still, my main issue is with the math functions not working correctly. i can't even test the loops completely because i don't have the math stuff down correctly. any ideas? since i'm using
"num_chips" to store how many chips are still available, i need to be able to keep subtracting from that without it resetting like it has been.
It does work: if you enter 20, 5, and then 5 for player 2, for example, it will run. But it is flawed, because if a user enters an incorrect value, the line where you subtract how many chips there are is still invoked, and this falsely alters the number of remaining chips even when the player has made an invalid choice.
You should use a temporary variable to store the number of chips remaining, then if the choice is valid set the actual chips remaining to the temp value.
Also, you subtract chips after the initial choice and then go into the loop; you need control in the loop so that the test is against the actual number of remaining chips, not the number left over after an invalid choice was made.
Last edited by rogster001; 02-08-2010 at 07:24 AM.
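For reference, a minimal sketch of the validate-then-commit pattern rogster001 describes. It is written in Java rather than the thread's C++ (the structure carries over line for line), the names (chipsLeft, take) are made up for the example, and player alternation is left out to keep it short:

    import java.util.Scanner;

    public class ChipGame {
        public static void main(String[] args) {
            Scanner in = new Scanner(System.in);
            int chipsLeft = 200; // pile size; would come from player 1 in the real game

            while (chipsLeft > 1) {
                System.out.print("Chips left: " + chipsLeft + ". How many do you take? ");
                int take = in.nextInt();

                // validate first: at least 1, at most half the pile
                if (take < 1 || take > chipsLeft / 2) {
                    System.out.println("please enter a different value");
                    continue; // re-prompt without touching chipsLeft
                }

                // only commit the subtraction once the choice is known to be valid
                chipsLeft -= take;
            }
            System.out.println("One chip left - game over.");
        }
    }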
[FOM] mathematics as phenomena
Harvey Friedman friedman at math.ohio-state.edu
Fri Feb 3 02:50:27 EST 2006
On 1/28/06 10:47 AM, "Timothy Y. Chow" <tchow at alum.mit.edu> wrote:
> Neil Tennant wrote:
>> So, that was "thumbing of the nose" #1 on the part of CMs to
>> metamathematicians and foundationalists (MFs).
>> "Thumbing of the nose" #2 concerned set theory.
> Joe Shipman wrote:
>> What Friedman is criticizing is the determination of most
>> mathematicians to regard Godel's Incompleteness phenomenon as a
>> curiosity that is not relevant to mathematics as a whole, rather than
>> as a challenge to get involved in the METAmathematical pursuit if
>> identifying new axioms to be accepted as true.
> While I certainly find Friedman's theorems in this area extremely
> interesting, I am also somewhat troubled by the extent to which some of
> these efforts (I do not speak specifically of Friedman here) seem to be
> motivated by annoyance at the fact that "f.o.m. don't get no respect."
It is true that I wear two quite different hats about this.
1. Scientific Hat. I treat not only mathematics itself but mathematical
practice as a phenomenon which is deep and rich and obviously worthy of
intense study for its own sake and other reasons. Clearly there is an
informal notion, with considerable objectivity, of what is mathematically
important, or natural, or beautiful, or interesting. Of course, there is not
anything like universal agreement in many cases. But there is definitely an
important objectivity surrounding these notions as used by practicing
mathematicians. As I have said on the FOM before, I continually check my
intuition about these things with practicing mathematicians. There are also
the usual subfields and subsubfields of mathematics, for which there is
considerable objectivity as to what falls in what category or categories. It
is also true that mathematicians generally are not engaged in serious
research as to what these various terms mean. I would guess that only very
few mathematicians would have the ability to say something illuminating
about what these notions mean, and that this would involve much more
extramathematical considerations than they are normally comfortable with.
2. Academic Profession Hat. I have no doubt that f.o.m. is in a very awkward
position in academia, falling into a special place between mathematics,
philosophy, and computer science. I also have no doubt that Godel's place at
the top of mathematical thinkers (notice I said thinkers, not
mathematicians) is permanent, and that f.o.m.'s place among mathematical
subjects (subjects with mathematical methodology) was at the top in the
1930's, and also at the top in the 2000's. Also I have no doubt that the
present perceived view of the importance, relevance, beauty, value, promise,
etcetera, of f.o.m. is profoundly wrong and reflects poorly on the
intellectual judgment of the people involved in those determinations. But
still there are objective reasons that allow this attitude to persist, that
are worthy of intense study, and so this feeds into 1 above.
> Honestly, what do I care if someone thumbs his nose at me? There seems to
> be something psychologically unhealthy about channeling enormous amounts
> of effort into winning someone else's acceptance or changing someone
> else's behavior.
You are not talking about me, because of 1 above. Also with regard to 2
above, there is present danger that f.o.m. as we now know it, will die as a
viable profession, at least in the most visible parts of the academic
community - and perhaps elsewhere. People have, in the past, turned the
other cheek, while they are systematically slaughtered. I am not known as a cheek turner.
> For example, it seems to me that one likely outcome of Friedman's results
> is that large cardinal axioms will join the axiom of choice in the
> category of axioms that one learns may be relevant sometimes and which one
> goes ahead and assumes if necessary. However, "core mathematicians" will
> *still* exhibit no interest in engaging directly in f.o.m. and the "search
> for new axioms."
I remember conversations I had with a well known analyst at MIT many years
ago - whose name I forget, but would like to remember - who was Chairman
there and also represented the AMS in Washington for a while. I think he is
no longer with us.
I was talking to him in the 70's and early 80's when I really clearly
crystallized the program of getting incompleteness for "real, good, concrete mathematics."
I remember vividly him telling me that most of the very top mathematicians
he knows over the years were attracted to the subject substantially because
truth and proof were not an issue - and he named some preeminent names. He
graphically asserted that these people would be visibly shaken by any
suitable good concrete independence result, and would very strongly want to
try to return math to the good old days where the rules of the game were not
an issue, and where everybody thought there were no logical difficulties
with real concrete interesting math. The only satisfactory way to do that is
for them to study the new axioms.
On the other hand, I have been in contact with a Fields Medalist who would
like to say something more along the lines that you are suggesting. That "we
will simply use more hypotheses and state them in the theorems."
But this was before I got into Pi01 and Pi00 (not yet posted), where it
might be more awkward to take this tack. (This particular famous
mathematician probably would not change his/her mind.) After all, the idea
of absolute truth for Pi01 and especially Pi00 sentences where the number of
objects under discussion is something like 8!! or 8!!!, is more ingrained in the mathematical community.
Obviously the best guess is that an appropriate expansion of what I am doing
will have a spectrum of reaction, but with a considerable number of the
leading mathematicians far far more concerned about f.o.m. than they are
today. For example, many of them would readily participate in Symposia about
"WHAT POSITION SHOULD BE TAKE ON THE NEW USES OF THE NEW AXIOMS?", with
perhaps, even one of the focal points being: should we just neutrally add
hypotheses and not fret about any absolute truth of Pi01 and Pi00, or should
we try to recover the lost paradise that we had, illusory as it might have
been? Other critical issues might be: is there STILL some way of siphoning
off these new results as not good mathematics, that can be distinguished
from real mathematics?
Experiments will be conjured up. E.g., I might draw up a list of 10
mathematical statements, only some of which are provable in ZFC and others
are not. Ask mathematicians in such a meeting to pick out which ones are
provable and which are not.
The drama of such an experiment would of course crucially depend on how far
I get with this development. I am not there yet. But I can imagine
experiments like this that just might convince a very wide range of people
that they can't tell the difference in terms of "naturalness", "beauty", and so on.
> If new axioms turn up in the course of studying "core
> mathematics," then one will pay due attention to them, but studying f.o.m.
> for its own sake will still be regarded as deviant behavior. In other
> words, large cardinal axioms will be welcomed into the mainstream, but
> f.o.m. won't.
I doubt this for the following reason. I think history shows that there are
major swings over long periods of time, and that ideas that were undervalued
sometimes get overvalued. This happens all the time in the stock market.
The mathematicians will have to face just how this state of affairs came
about. In a vacuum? No. By studying f.o.m. for its own sake. Of course,
future reactions depend a lot on how far the program goes.
> I see little point in the Sisyphean task of trying to get f.o.m. respected
> in the same way that the more glamorous areas of mathematics are.
> Indeed, there is some danger that this "evangelical" motivation will draw
> effort away from the more meaningful task of formulating and pursuing
> productive agendas within f.o.m. itself. (Though I don't think this
> potential problem has actually materialized yet in practice.)
But on the other hand, if no attention is paid to the f.o.m. respect issue,
then there may not be an environment in which someone can be paid to work
single mindedly on such things for 40 years straight...
Harvey Friedman
8. Is line L parallel to line M? Explain.
(a) Yes; alternate interior angles are congruent.
(b) No; alternate interior angles are not congruent.
(c) No; corresponding angles are not congruent.
(d) Yes; corresponding angles are congruent.
Helper: Oooo good question. Here, let me copy your drawing, and show you: [drawing] You see those two angles that I circled? If line L were parallel to line M, then those two angles would be EQUAL, because they are CORRESPONDING angles. But they're not equal, you see. One of them is 124, and the other is 118. That's how we know that L is not parallel to M.
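In symbols (an added note, not part of the original reply): if $L \parallel M$, the two marked corresponding angles would have to be equal, but

\[ 124^\circ \neq 118^\circ, \]

so by the converse of the corresponding angles postulate, $L$ is not parallel to $M$.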
Helper: I realize that you're new to OpenStudy. This is the Language & Culture section. If you have another math question, you can try posting it in the Mathematics section, where you'll get answers much sooner, and in more detail.

Asker: So the answer would be "No; corresponding angles are not congruent."?

Helper: Yes, that's right.

Asker: Thank you! Could you help me with this problem, too? Please?

Asker: 12. Write a sequence of transformations that maps triangle ABC onto triangle A''B''C''.

Helper: It's been a few years since I've done this kind of problem. I'm sorry, I wish I could help. Try posting it in the math section. It's got thousands of members. You'll more likely find help there.
|