Multiple-input gates
Inverters and buffers exhaust the possibilities for single-input gate circuits. What more can be done with a single logic signal but to buffer it or invert it? To explore more logic gate
possibilities, we must add more input terminals to the circuit(s).
Adding more input terminals to a logic gate increases the number of input state possibilities. With a single-input gate such as the inverter or buffer, there can only be two possible input states:
either the input is "high" (1) or it is "low" (0). As was mentioned previously in this chapter, a two input gate has four possibilities (00, 01, 10, and 11). A three-input gate has eight
possibilities (000, 001, 010, 011, 100, 101, 110, and 111) for input states. The number of possible input states is equal to two to the power of the number of inputs: number of states = 2^(number of inputs).
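The count of input states can be checked by brute-force enumeration; a quick Python sketch:

```python
from itertools import product

# Each input is either 0 or 1, so an n-input gate sees 2**n input states.
for n in (1, 2, 3):
    states = list(product((0, 1), repeat=n))
    print(n, "inputs:", len(states), "states")
    assert len(states) == 2 ** n
# 1 inputs: 2 states
# 2 inputs: 4 states
# 3 inputs: 8 states
```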
This increase in the number of possible input states obviously allows for more complex gate behavior. Now, instead of merely inverting or amplifying (buffering) a single "high" or "low" logic level,
the output of the gate will be determined by whatever combination of 1's and 0's is present at the input terminals.
Since so many combinations are possible with just a few input terminals, there are many different types of multiple-input gates, unlike single-input gates, which can only be inverters or buffers. Each basic gate type will be presented in this section, showing its standard symbol, truth table, and practical operation. The actual TTL circuitry of these different gates will be explored in subsequent sections.
The AND gate
One of the easiest multiple-input gates to understand is the AND gate, so-called because the output of this gate will be "high" (1) if and only if all inputs (first input and the second input and . .
.) are "high" (1). If any input(s) are "low" (0), the output is guaranteed to be in a "low" state as well.
In case you might have been wondering, AND gates are made with more than three inputs, but this is less common than the simple two-input variety.
A two-input AND gate's truth table looks like this:

A B | Output
0 0 | 0
0 1 | 0
1 0 | 0
1 1 | 1
What this truth table means in practical terms is shown in the following sequence of illustrations, with the 2-input AND gate subjected to all possibilities of input logic levels. An LED
(Light-Emitting Diode) provides visual indication of the output logic level:
It is only with all inputs raised to "high" logic levels that the AND gate's output goes "high," thus energizing the LED for only one out of the four input combination states.
The NAND gate
A variation on the idea of the AND gate is called the NAND gate. The word "NAND" is a verbal contraction of the words NOT and AND. Essentially, a NAND gate behaves the same as an AND gate with a NOT
(inverter) gate connected to the output terminal. To symbolize this output signal inversion, the NAND gate symbol has a bubble on the output line. The truth table for a NAND gate is as one might
expect, exactly opposite to that of an AND gate:
As with AND gates, NAND gates are made with more than two inputs. In such cases, the same general principle applies: the output will be "low" (0) if and only if all inputs are "high" (1). If any
input is "low" (0), the output will go "high" (1).
The OR gate
Our next gate to investigate is the OR gate, so-called because the output of this gate will be "high" (1) if any of the inputs (first input or the second input or . . .) are "high" (1). The output of
an OR gate goes "low" (0) if and only if all inputs are "low" (0).
A two-input OR gate's truth table looks like this:

A B | Output
0 0 | 0
0 1 | 1
1 0 | 1
1 1 | 1
The following sequence of illustrations demonstrates the OR gate's function, with the 2-inputs experiencing all possible logic levels. An LED (Light-Emitting Diode) provides visual indication of the
gate's output logic level:
A condition of any input being raised to a "high" logic level makes the OR gate's output go "high," thus energizing the LED for three out of the four input combination states.
The NOR gate
As you might have suspected, the NOR gate is an OR gate with its output inverted, just like a NAND gate is an AND gate with an inverted output.
NOR gates, like all the other multiple-input gates seen thus far, can be manufactured with more than two inputs. Still, the same logical principle applies: the output goes "low" (0) if any of the
inputs are made "high" (1). The output is "high" (1) only when all inputs are "low" (0).
The Negative-AND gate
A Negative-AND gate functions the same as an AND gate with all its inputs inverted (connected through NOT gates). In keeping with standard gate symbol convention, these inverted inputs are signified
by bubbles. Contrary to most people's first instinct, the logical behavior of a Negative-AND gate is not the same as that of a NAND gate. Its truth table, in fact, is identical to that of a NOR gate:
The Negative-OR gate
Following the same pattern, a Negative-OR gate functions the same as an OR gate with all its inputs inverted. In keeping with standard gate symbol convention, these inverted inputs are signified by
bubbles. The behavior and truth table of a Negative-OR gate is the same as for a NAND gate:
The Exclusive-OR gate
The last six gate types are all fairly direct variations on three basic functions: AND, OR, and NOT. The Exclusive-OR gate, however, is something quite different.
Exclusive-OR gates output a "high" (1) logic level if the inputs are at different logic levels, either 0 and 1 or 1 and 0. Conversely, they output a "low" (0) logic level if the inputs are at the
same logic levels. The Exclusive-OR (sometimes called XOR) gate has both a symbol and a truth table pattern that is unique:
There are equivalent circuits for an Exclusive-OR gate made up of AND, OR, and NOT gates, just as there were for NAND, NOR, and the negative-input gates. A rather direct approach to simulating an
Exclusive-OR gate is to start with a regular OR gate, then add additional gates to inhibit the output from going "high" (1) when both inputs are "high" (1):
In this circuit, the final AND gate acts as a buffer for the output of the OR gate whenever the NAND gate's output is high, which it is for the first three input state combinations (00, 01, and 10).
However, when both inputs are "high" (1), the NAND gate outputs a "low" (0) logic level, which forces the final AND gate to produce a "low" (0) output.
Another equivalent circuit for the Exclusive-OR gate uses a strategy of two AND gates with inverters, set up to generate "high" (1) outputs for input conditions 01 and 10. A final OR gate then allows
either of the AND gates' "high" outputs to create a final "high" output:
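Both equivalent circuits can be verified exhaustively against the XOR definition. A small Python sketch (the wiring follows the two descriptions above; `1 - x` stands in for a NOT gate):

```python
from itertools import product

for a, b in product((0, 1), repeat=2):
    xor = a ^ b
    # Circuit 1: OR gate feeding a final AND, inhibited by a NAND
    # whenever both inputs are high.
    circuit1 = (a | b) & (1 - (a & b))
    # Circuit 2: two AND gates with inverted inputs, combined by an OR.
    circuit2 = ((1 - a) & b) | (a & (1 - b))
    assert circuit1 == xor and circuit2 == xor
print("both circuits match XOR on all four input states")
```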
Exclusive-OR gates are very useful for circuits where two or more binary numbers are to be compared bit-for-bit, and also for error detection (parity checking) and code conversion (binary to Gray and vice versa).
The Exclusive-NOR gate
Finally, our last gate for analysis is the Exclusive-NOR gate, otherwise known as the XNOR gate. It is equivalent to an Exclusive-OR gate with an inverted output. The truth table for this gate is
exactly opposite to that of the Exclusive-OR gate:
As indicated by the truth table, the purpose of an Exclusive-NOR gate is to output a "high" (1) logic level whenever both inputs are at the same logic levels (either 00 or 11).
• REVIEW:
• Rule for an AND gate: output is "high" only if first input and second input are both "high."
• Rule for an OR gate: output is "high" if input A or input B are "high."
• Rule for a NAND gate: output is not "high" if both the first input and the second input are "high."
• Rule for a NOR gate: output is not "high" if either the first input or the second input are "high."
• A Negative-AND gate behaves like a NOR gate.
• A Negative-OR gate behaves like a NAND gate.
• Rule for an Exclusive-OR gate: output is "high" if the input logic levels are different.
• Rule for an Exclusive-NOR gate: output is "high" if the input logic levels are the same.
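The review rules lend themselves to an exhaustive check; a short Python sketch encoding each two-input gate and the two negative-input equivalences:

```python
from itertools import product

# Two-input gates, written directly from the review rules above.
AND  = lambda a, b: a & b
OR   = lambda a, b: a | b
NAND = lambda a, b: 1 - (a & b)
NOR  = lambda a, b: 1 - (a | b)
XOR  = lambda a, b: a ^ b
XNOR = lambda a, b: 1 - (a ^ b)

for a, b in product((0, 1), repeat=2):
    # A Negative-AND gate (AND with both inputs inverted) behaves like NOR.
    assert (1 - a) & (1 - b) == NOR(a, b)
    # A Negative-OR gate (OR with both inputs inverted) behaves like NAND.
    assert (1 - a) | (1 - b) == NAND(a, b)
    # XOR is high exactly when the inputs differ; XNOR when they match.
    assert XOR(a, b) == int(a != b)
    assert XNOR(a, b) == int(a == b)
print("all review rules hold")
```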
Related Links
Do oil prices help forecast US real GDP?
There has been much interest since the 1970s in the question of whether lagged oil price changes help forecast US real GDP growth (Hamilton 2009). This question has taken on new urgency following the
large fluctuations in the price of oil in recent years. There is interest not only in the question of possible asymmetries depending on whether the price of oil goes up or down, but also in the idea
that increases in the price of oil beyond certain time-varying thresholds may trigger recessions. In a recent study together with Robert Vigfusson, I examine how successful a number of linear and
nonlinear models of this type are in reducing the out-of-sample prediction mean-squared error (MSPE) of US real GDP growth (Kilian and Vigfusson 2012).
A useful reference point for this debate is the ability of oil prices to improve on simple univariate autoregressive forecasts of US real GDP growth at horizons up to two years. It can be shown that
there are at best small out-of-sample MSPE reductions when forecasting cumulative US real GDP growth from bivariate linear VAR models that include the percent change in the price of oil in addition
to real GDP growth. This finding is robust to whether the price of oil is specified in nominal or in real terms and whether the oil price is treated as exogenous or as endogenous with respect to US
real GDP. One possible explanation for this result is that the predictive relationship in question is nonlinear. Indeed this possibility has been discussed at length in the existing literature, but
the out-of-sample forecasting performance of these nonlinear models has never been evaluated systematically. In fact, suitable econometric models have been developed only very recently.
In this context, Hamilton (2003) made the case that the predictive relationship between oil prices and US real GDP is nonlinear in that (1) oil price increases matter only to the extent that they
exceed the maximum oil price in recent years and that (2) oil price decreases do not matter at all. He provided in-sample evidence that including appropriately defined lagged net increases in the
price of oil in an autoregression for real GDP growth helps predict US real GDP growth one quarter ahead. This evidence is backed up by our study looking at more recent data (Kilian and Vigfusson
2011). Evidence of in-sample predictability, as documented in these studies, however, need not translate into out-of-sample gains in forecast accuracy, which is the ultimate question of interest to
policymakers and applied forecasters.
To resolve this question, it is necessary to evaluate and compare a wide range of out-of-sample forecasting models for US real GDP based on nonlinear transformations of the price of oil that are
asymmetric in oil price increases and decreases. A striking result of this comparison is that, among the many alternative asymmetric models that have been suggested in the literature, only a
multivariate generalisation of the predictive model proposed by Hamilton (2003) produces systematic MSPE reductions at longer horizons. There is no evidence in support of forecasting models based on
the one-year net oil price increase, models based on the uncensored percentage oil price increase, or models based on large percentage increases in the price of oil, in contrast.
The performance of the three-year net increase model in some cases is impressive. For example, based on the three-year net increase in the US refiners’ acquisition cost for crude oil imports, the
MSPE reductions are between 19% and 26% at the one-year horizon and between 17% and 18% at the two-year horizon. Similar results are obtained with some other oil price series as well. At the
one-quarter horizon, however, the results are less clear cut and depend on the precise definition of the oil price variable.
To date much of the perceived empirical success of the three-year net oil price increase specification has been attributed to the fact that this oil price measure is asymmetric, with little attention
to the fact that this definition also embodies other nonlinearities. In this regard, it can be shown that reductions in the MSPE at least as large as for the three-year net oil price increase model
can be obtained based on an alternative forecasting model that is symmetric in the three-year net oil price increases and decreases. The results for this net oil price change model specification
suggest that the asymmetry embodied in the three-year net oil price increase measure is irrelevant for out-of-sample forecasting, if not harmful. This result is consistent with the fact that all
other asymmetric specifications considered appear inferior to forecasting models that are symmetric in the price of oil.
The three-year net oil price change model not only tends to be at least as accurate as the corresponding three-year net oil price increase model, but it is more robust to the definition of the oil
price variable, more robust across forecast horizons, and more robust to changes in the forecast evaluation period. In short, if there are nonlinearities that matter for forecasting they appear
related to how far the current oil price deviates from its most recent extreme values, not to whether the price of oil increased or decreased relative to that threshold. This evidence directly
addresses the common concern among many policy makers that the feedback from oil prices to the economy may become stronger once the price of oil passes certain possibly time-varying thresholds.
Furthermore, a number of alternative and equally economically plausible symmetric nonlinear specifications (including models that focus on large oil price changes or models that control for time
variation in the oil share) cannot replicate the forecasting success of the three-year net oil price change model.
A question of obvious interest is how much of the decline in US real GDP growth during 2008/09 could have been forecast with the help of the three-year net oil price change model. Based on the
four-quarter-ahead forecast, further analysis shows that the three-year net oil price change model anticipated about one third of the observed decline in US real GDP in 2008, while linear models
essentially failed to predict any decline. These results appear much more plausible than the corresponding forecasts from the three-year net oil price increase model, which imply that virtually all
of the 2008 recession could have been forecast one year in advance and that the financial crisis played no role in the 2008 recession. The latter economically implausible result can be traced to overfitting problems in small samples.
In fact, a similar – if much less severe – overfitting problem also afflicts the three-year net oil price change model. The apparent overfitting may be countered with some simple ad hoc
adjustments of the model coefficients. With these corrections, the three-year net change model would have forecast only about 15% of the observed cumulative decline in US real GDP in 2008 one year in
advance, which is still much larger than the decline implied by linear VAR forecasts, but more in line with other nonlinear symmetric forecasting models.
These results reinforce a growing body of work that has questioned the role of asymmetries in the relationship between the price of oil and the US economy, while drawing attention to a previously
undocumented type of threshold nonlinearity in the predictive relationship between the price of oil and US real GDP. The question of how important these threshold effects are deserves further study
on extended samples and on other time series. The preliminary findings in this regard discussed here have potentially important implications for applied forecasters, but also for economists
interested in modelling the transmission of oil price shocks. For example, there is no theoretical model to date that would rationalise the type of the threshold effects embodied by three-year net
oil price change models.
Hamilton, JD (2003), “What Is an Oil Shock?”, Journal of Econometrics, 113:363-398.
Hamilton, JD (2009), “Oil prices and the economic recession of 2007-2008”, VoxEU.org, 16 June.
Kilian, L and RJ Vigfusson (2011), “Are the Responses of the U.S. Economy Asymmetric in Energy Price Increases and Decreases?”, Quantitative Economics, 2(4):419-453.
Kilian, L and RJ Vigfusson (2012), “Do Oil Prices Help Forecast U.S. Real GDP? The Role of Nonlinearities and Asymmetries”, CEPR Discussion Paper No. 8980.
Linear programming basics
A short explanation of what linear programming is, plus some basic knowledge you need to know.
A linear programming problem is mathematically formulated as follows:
• A linear function to be maximized or minimized
maximize c1 x1 + c2 x2
• Problem constraints of the following form
a11 x1 + a12 x2 <= b1
a21 x1 + a22 x2 <= b2
a31 x1 + a32 x2 <= b3
• Default lower bounds of zero on all variables.
The problem is usually expressed in matrix form, and then becomes:
maximize C^T x
subject to A x <= B
x >= 0
So a linear programming model consists of one objective, which is a linear function that must be maximized or minimized, together with a number of linear inequalities or constraints.
C^T, A and B are constant matrices. x is the vector of variables (unknowns). All of them are real, continuous values.
Note the default lower bounds of zero on all variables x. People tend to forget this built-in default. If no negative (or negative-infinite) lower bound is explicitly set on a variable, it can and will take only positive (zero included) values.
The inequalities can be <=, >= or =
In practice, because all numbers are real values, <= is treated the same as < and >= the same as >
Also note that both objective function and constraints must be linear equations. This means that no variables can be multiplied with each other.
This formulation is called the Standard form. It is the usual and most intuitive form of describing a linear programming problem.
minimize 3 x1 - x2
subject to -x1 + 6 x2 - x3 + x4 >= -3
7 x2 + 2 x4 = 5
x1 + x2 + x3 = 1
x3 + x4 <= 2
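As a sanity check, the example above can be handed to any LP solver. The sketch below uses SciPy's `linprog` rather than lp_solve itself; note the `>=` constraint is multiplied by -1 to fit `linprog`'s `A_ub x <= b_ub` convention, and the default bounds are already `x >= 0`:

```python
from scipy.optimize import linprog

# minimize 3 x1 - x2  over x1..x4 >= 0
c = [3, -1, 0, 0]
# -x1 + 6 x2 - x3 + x4 >= -3   becomes   x1 - 6 x2 + x3 - x4 <= 3
A_ub = [[1, -6, 1, -1],
        [0,  0, 1,  1]]          # x3 + x4 <= 2
b_ub = [3, 2]
A_eq = [[0, 7, 0, 2],            # 7 x2 + 2 x4 = 5
        [1, 1, 1, 0]]            # x1 + x2 + x3 = 1
b_eq = [5, 1]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq)
print(res.fun)                   # -5/7: optimum at x2 = 5/7, x3 = 2/7, x1 = x4 = 0
```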
Sometimes, these problems are formulated in the canonical form. All inequalities are converted to equalities by adding an extra variable where needed:
maximize C^T x
subject to A x = B
x >= 0
Above example can then be written as:
minimize 3 x1 - x2
subject to -x1 + 6 x2 - x3 + x4 - s = -3
7 x2 + 2 x4 = 5
x1 + x2 + x3 = 1
x3 + x4 + t = 2
So everywhere an inequality was specified, an extra variable is introduced and subtracted (if it was >=) or added (if it was <=) to the constraint. These extra variables also take only positive (or zero) values. They are called slack or surplus variables.
lp_solve adds these variables automatically to its internal structure. The formulator doesn't have to do it, and it is even better not to: there will be fewer variables in the model, so it solves more quickly.
See Formulation of an lp problem in lpsolve for a practical example.
The right hand side (RHS), the B-vector, must be a constant matrix. Some people see this as a problem, but it isn't. The RHS can always be brought to the left by a simple operation:
A x <= B
Is equal to:
A x - B <= 0
So if B is not constant, just do that.
Basic mathematics also states that if a constraint is multiplied by a negative constant, the inequality changes direction. For example:
5 x1 - 2 x2 >= 3
If multiplied by -1, it becomes:
-5 x1 + 2 x2 <= -3
If the objective is multiplied by -1, then maximization becomes minimization and the other way around. For example:
minimize 3 x1 - x2
Can also be written as:
maximize -3 x1 + x2
The result will be the same, but with the sign changed.
Minima and maxima on single variables are special cases of restrictions. They are called bounds. The optimization algorithm can handle these bounds more efficiently than other restrictions: they consume less memory, and the algorithm is faster with them. As already specified, there is by default an implicit lower bound of zero on each variable. Only when another lower bound is explicitly set is the default of 0 overruled. This other bound can also be negative. There is no default upper bound on variables. Almost all solvers support bounds on variables, including lp_solve.
Frequently it happens that both a less-than and a greater-than restriction must be set on the same equation. Instead of adding two extra restrictions to the model, it is faster and uses less memory to add only one restriction with either the less-than or greater-than condition, and to put the other inequality on that same constraint by means of a range. Not all solvers support this feature, but lp_solve does.
Integer and binary variables
By default, all variables are real. Sometimes it is required that one or more variables be integer. It is not possible to just solve the model as is and then round to the nearest integer solution. At best, the result may fulfill all constraints, but you cannot be sure of that, nor can you be sure it is the optimal solution. Problems with integer variables are called integer or discrete programming problems. If all variables are integer, it is called a pure integer programming problem; otherwise it is a mixed integer programming problem. A special case of integer variables are binary variables: variables that can only take 0 or 1 as a value. They are used quite frequently to model discontinuous conditions. lp_solve can handle integer and binary variables. Binary variables are defined as integer variables with a maximum (upper bound) of 1 on them. See integer variables for a description of them.
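The rounding caveat is easy to demonstrate with a toy model. This sketch again uses SciPy's `linprog` (its `integrality` option, available in SciPy 1.9+ with the HiGHS method, plays the role of lp_solve's branch-and-bound) rather than lp_solve:

```python
from scipy.optimize import linprog

# maximize x1 + x2  subject to  2 x1 + 2 x2 <= 3,  x >= 0
c = [-1, -1]                     # linprog minimizes, so negate the objective
A_ub, b_ub = [[2, 2]], [3]

relaxed = linprog(c, A_ub=A_ub, b_ub=b_ub, method="highs")
print(-relaxed.fun)              # 1.5 -- the real-valued optimum

# Requiring both variables to be integer: the optimum drops to 1.0.
# Simply rounding the relaxed solution up would violate the constraint.
integer = linprog(c, A_ub=A_ub, b_ub=b_ub, method="highs", integrality=[1, 1])
print(-integer.fun)              # 1.0
```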
Semi-continuous variables
Semi-continuous variables are variables that must take a value between their minimum and maximum, or zero. So these variables are treated the same as regular variables, except that a value of zero is also accepted, even if a minimum bigger than zero is set on the variable. See semi-continuous variables for a description of them.
Special ordered sets (SOS)
A specially ordered set of degree N is a collection of variables where at most N variables may be non-zero. The non-zero variables must be contiguous (neighbours) sorted by the ascending value of
their respective unique weights. In lp_solve, specially ordered sets may be of any cardinal type 1, 2, and higher, and may be overlapping. The number of variables in the set must be equal to, or
exceed the cardinal SOS order. See Special ordered sets (SOS) for a description of them.
lp_solve uses the simplex algorithm to solve these problems. To solve the integer restrictions, the branch and bound (B&B) method is used.
Other resources
Another very useful and free paper about linear programming fundamentals and advanced features, with several problems discussed and modeled, is Applications of optimization with Xpress-MP. It describes linear programming and modeling with the commercial solver Xpress-MP, but is just as useful for other solvers like lp_solve. In case this link no longer works, try finding it via Google.
Difficult integration
May 15th 2011, 10:42 AM #1
I was helping my friend with her integration problems, and both she and I got stuck on this problem. Can anyone give us a hand? I'd really appreciate it. We were trying to integrate $\frac{\sqrt{x^2+1}}{x^2}$. We tried the following substitution. Let $x=\tan\theta$; then $dx=\sec^2\theta\,d\theta$. So we have $\int \frac{\sqrt{\tan^2\theta +1}\,\sec^2\theta}{\tan^2\theta}\,d\theta=\int \frac{\sec^3\theta}{\tan^2\theta}\,d\theta$. Then, since $\tan^2\theta=\sec^2\theta-1$, we substituted this into the expression, but got nowhere.
For problems like these, I almost always throw my hands in the air and convert to sine and cosine! $\sec\theta= \frac{1}{\cos\theta}$ and $\tan\theta= \frac{\sin\theta}{\cos\theta}$, so $\frac{\sec^3\theta}{\tan^2\theta}= \frac{1}{\cos^3\theta}\cdot\frac{\cos^2\theta}{\sin^2\theta}= \frac{1}{\cos\theta\sin^2\theta}$.
Now, that has an odd power of cosine, even though it is in the denominator, so I would multiply both numerator and denominator by $\cos\theta$ in order to get
$\frac{\cos\theta}{\cos^2\theta\sin^2\theta}= \frac{\cos\theta}{(1- \sin^2\theta)\sin^2\theta}$. Make the substitution $u= \sin\theta$, so that $du= \cos\theta\,d\theta$, and that gives the integral $\int \frac{du}{u^2(1-u^2)}$,
which can be integrated by partial fractions.
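The partial-fraction step can be carried out by hand or by a computer algebra system; a quick SymPy sketch of the decomposition (the exact form SymPy prints may differ cosmetically):

```python
import sympy as sp

u = sp.symbols('u')
integrand = 1 / (u**2 * (1 - u**2))
decomposed = sp.apart(integrand, u)
print(decomposed)   # equivalent to 1/u**2 + 1/(2*(1 - u)) + 1/(2*(1 + u))
# The decomposition must agree with the original integrand.
assert sp.simplify(decomposed - integrand) == 0
```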
Hello, jackie!
$\text{Integrate: }\:\int\frac{\sqrt{x^2+1}}{x^2}\,dx$
$\text{We tried this substitution: }\:x=\tan\theta \quad\Rightarrow\quad dx=\sec^2\!\theta\,d\theta$
$\text{So we have: }\:\int \frac{\sqrt{\tan^2\!\theta +1}\,\sec^2\!\theta}{\tan^2\!\theta}\,d\theta \;=\;\int \frac{\sec^3\!\theta}{\tan^2\!\theta}\,d\theta$
$\text{Then since }\tan^2\!\theta\,=\,\sec^2\!\theta-1,\,\text{ we substituted this, but got nowhere.}$
I did it like this . . .
. . $\frac{\sec^3\!\theta}{\tan^2\!\theta} \;=\;\frac{\sec\theta\cdot\sec^2\!\theta}{\tan^2\!\theta} \;=\;\frac{\sec\theta(\tan^2\!\theta + 1)}{\tan^2\!\theta} \;=\;\sec\theta + \frac{\sec\theta}{\tan^2\!\theta}$
. . . $=\;\sec\theta + \frac{1}{\cos\theta}\cdot\frac{\cos^2\!\theta}{\sin^2\!\theta} \;=\; \sec\theta + \frac{\cos\theta}{\sin^2\!\theta}$
. . . $=\;\sec\theta + \frac{1}{\sin\theta}\cdot\frac{\cos\theta}{\sin\theta} \;=\;\sec\theta + \csc\theta\cot\theta$
$\text{The integral becomes:}$
. . $\int (\sec\theta + \csc\theta\cot\theta)\,d\theta \;=\; \ln|\sec\theta + \tan\theta| - \csc\theta + C$
$\text{Back-substitute: }\:\tan\theta \,=\,x,\;\;\sec\theta \,=\,\sqrt{x^2+1},\;\;\csc\theta \,=\,\frac{\sqrt{x^2+1}}{x}$
$\text{Therefore: }\;\ln\left|\sqrt{x^2+1} + x\right| - \frac{\sqrt{x^2+1}}{x} + C$
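For what it's worth, the final answer can be machine-checked by differentiation; a short SymPy sketch (recall $\ln(x+\sqrt{x^2+1}) = \operatorname{arcsinh} x$):

```python
import sympy as sp

x = sp.symbols('x', positive=True)
F = sp.log(x + sp.sqrt(x**2 + 1)) - sp.sqrt(x**2 + 1) / x   # proposed antiderivative
integrand = sp.sqrt(x**2 + 1) / x**2
# .equals() tests equality (sampling numerically) more robustly than
# symbolic simplification alone.
assert sp.diff(F, x).equals(integrand)
print("F'(x) == sqrt(x^2+1)/x^2  -- verified")
```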
Thanks a lot for your help, HallsofIvy and Soroban. My friend and I followed HallsofIvy's hint and got the answer. Later today we saw Soroban's answer, and my friend likes the way you did the problem. Anyway, we were glad to do this problem both ways. We really appreciate both of your help.
(1-(|x|-1)^2)^0.5=-3(1-(|x|/2)^0.5)^0.5 - Wolfram|Alpha
Stumblers Who Commented On This Page
Awww . . . I heart Wolfram Alpha.
Posted on Aug 3, 2010
Posted on May 27, 2010
Mathematical fun, me likey!! I love how beautiful and meaningful math can be beyond its "normal" applications
Posted on Apr 23, 2010
Okaaaay, so it's kind of cheesy. It's also very appropriate today (:
Posted on Jan 27, 2010
Tested. 'Tworks
Posted on Jan 27, 2010
Interesting, first time I saw it.
Posted on Dec 28, 2009
so x is equal to positive or negative two. both of those are less than three.. :D
Posted on Dec 27, 2009
Die in a fire.
Posted on Dec 22, 2009
stumbled this one the other day. http://www.wolframalpha.com/input/?i=%281-%28|x|-1%29^2%29^0.5%3D-2.5%281-%28|x|%2F2%29^0.5%29^0.5
Posted on Dec 21, 2009
Fun with curves.
Posted on Dec 15, 2009
Random [uniform?] sudokus [corrected]
May 19, 2010
By xi'an
As the discrepancy [from 1] in the sum of the nine probabilities seemed too blatant to be attributed to numerical error given the problem scale, I went and checked my R code for the probabilities and
found a choose(9,3) instead of a choose(6,3) in the last line… The fit between the true distribution and the observed frequencies is now much better
but the chi-square test remains suspicious of the uniform assumption (or again of my programming abilities):
> chisq.test(obs,p=pdiag)
Chi-squared test for given probabilities
data: obs
X-squared = 16.378, df = 6, p-value = 0.01186
since a p-value of 1% is a bit in the far tail of the distribution.
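For readers outside R, the same test is available in SciPy. A sketch with made-up counts (the post's actual `obs` and `pdiag` vectors are not reproduced here):

```python
from scipy.stats import chisquare

# Hypothetical observed counts over five equally likely categories.
obs = [18, 22, 20, 19, 21]
expected = [sum(obs) / len(obs)] * len(obs)   # uniform: 20 per category

stat, p = chisquare(obs, f_exp=expected)
print(f"X-squared = {stat:.3f}, df = {len(obs) - 1}, p-value = {p:.4f}")
# A p-value near 1% (as in the post) would sit far enough in the tail
# to cast doubt on the uniformity assumption.
```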
sheaves of representations on galois groups, can there be interesting cohomology?
Consider a field $K$ (of characteristic 0, say) and its absolute Galois group $G_K^{ab} = Gal(\overline{K}/K)$, given the Krull topology: $U_E(\sigma) = \sigma Gal(\overline{K}/E)$ form a basis of the topology, ranging over $\sigma \in G_K^{ab}$ and $E/K$ finite Galois.
Fix a group $G$ and denote by $R_E$ its representation ring over $E$, and by $R_E^\sigma \subset R_E$ the elements of $R_E$ fixed by $\sigma$.
We can construct a sheaf $\mathcal{F}$ on $G_K^{ab}$ by setting $\mathcal{F}(U_E(\sigma)) = R_E^\sigma$. It is a simple exercise to verify the axioms.
One might hope that the sheaf cohomology of $\mathcal{F}$ encodes information about the splitting behaviour of representations of G over various ground fields, but this is not the case: $G_K^{ab}$ is
known to be totally disconnected, hausdorff and compact. It is a theorem [1, 5.1] that $H^r(G_K^{ab}, \mathcal{F}) = 0$ for $r > 0$. Furthermore the $U_E(\sigma)$ are actually clopen, so most useful
subsets I can think of are also compact, hence their cohomology is equally uninteresting.
Is there a way to produce a useful cohomology along these lines?
Here "useful" essentially means "non-trivial", and "along these lines" basically "involving the galois action on $R_E$ for various $E$".
[1] http://www.jstor.org/stable/2035693
ag.algebraic-geometry rt.representation-theory galois-theory
2 Answers
This is just a shot in the dark, but you might want to consider $G_K$-equivariant cohomology, with conjugation action on the space. At the very least, the equivariance will add some
up vote 1 down contribution from the group cohomology.
Equivariant cohomology indeed could be what I want. Leaves open the quest to actually find a space to act on. I have to think about this. – Tom Bachmann Mar 13 '10 at 9:21
Right now, it appears that your space is G_K itself, so conjugation is a natural choice of action. – S. Carnahan♦ Mar 13 '10 at 18:43
not sure what you're saying. When I look at the action of G_K on G_K, where are my representations going to come into play? – Tom Bachmann Mar 13 '10 at 20:36
Have you tried looking at Iwasawa's paper "Sheaves over Number Fields", http://www.jstor.org/stable/1970190? He does not really do what you would like to, because his base space is the divisors of the number field, with some funny topology. But by class field theory it could give examples on the push-forward of your sheaf on the maximal abelian quotient of your Galois group (I am assuming your field is a global one, which might be out of your interest - sorry).
Huh. I just saw this question pop up on the front page and thought "This guy asked something quite similar to what I asked a long time ago.". Then I realised it's my own question :D.
Thank you for providing the reference, I will look into it. – Tom Bachmann Apr 11 '12 at 11:57
Seems a funny way to re-discover one's own question :D. Have you made any progress, in the meanwhile? Filippo – Filippo Alberto Edoardo Apr 11 '12 at 12:12
I didn't really think about it again until today. – Tom Bachmann Apr 11 '12 at 17:41
Marysville, WA Calculus Tutor
Find a Marysville, WA Calculus Tutor
...With respect to my educational background and work experience, I'm a Physiology major, and I just graduated from the University of Washington. I'm currently a Math and Science tutor at a
school. In college, I completed math through Calculus 3 and am proficient in Advanced Trigonometry, Calculus 1 and 2.
26 Subjects: including calculus, chemistry, physics, geometry
...I have a Bachelor's in Aerospace Engineering, and a Master's and PhD in Aeronautical Engineering, plus I've attained my High School Teaching Certificate for Physics and Math, and taught High
School Physics and Physical Science for the last 13 years. Just moved to the Edmonds area from Ohio, and ...
12 Subjects: including calculus, physics, geometry, algebra 1
...Algebra may take up to 40% of the SAT math test. Building a solid understanding of prealgebra is one of the initial steps in preparing for the next level of mathematics. I will help students establish the foundation for upcoming challenges.
15 Subjects: including calculus, geometry, algebra 1, algebra 2
...Organic Chemistry was covered in my high school. When I attended Washington State University, I was placed directly into O-Chem; where I earned A's in lecture and lab. Immediately following
the class, I was selected by the WSU Chem department to be an O-chem peer-tutor (a tutorial instructor). I hold a BS in Chem E (GPA 3.7) with minors in Mathematics and Material Science
62 Subjects: including calculus, chemistry, English, physics
...Received a 5 on the AP Chemistry Exam in 2003 as a junior in high school. High school chemistry student of the year as recognized by the American Chemical Society in 2003. I use Biostatistics in my everyday job as a research scientist at the University of Washington. Additionally, I have taken undergraduate and graduate level Biostatistics courses with success.
17 Subjects: including calculus, chemistry, physics, geometry
Related Marysville, WA Tutors
Marysville, WA Accounting Tutors
Marysville, WA ACT Tutors
Marysville, WA Algebra Tutors
Marysville, WA Algebra 2 Tutors
Marysville, WA Calculus Tutors
Marysville, WA Geometry Tutors
Marysville, WA Math Tutors
Marysville, WA Prealgebra Tutors
Marysville, WA Precalculus Tutors
Marysville, WA SAT Tutors
Marysville, WA SAT Math Tutors
Marysville, WA Science Tutors
Marysville, WA Statistics Tutors
Marysville, WA Trigonometry Tutors
Telescope Reviews: Beautiful, intriguing, elegant ideas
Otto Piechowski (12/29/12 12:12 AM): Beautiful, intriguing, elegant ideas

I would like to know what scientific ideas you have found to be stunningly beautiful, breath-taking, elegant, intriguing, inspiring.

I'll start. Kepler's Second Law of Planetary Motion. This is the one which says that if the orbit of a planet around a heavier body (i.e., the sun) is an ellipse, then the line from the planet to that body sweeps out equal areas in equal amounts of time, whether the span considered is near perihelion or near aphelion.

The idea I am trying to convey, which most of the readers here already know, is much more clearly stated with an accompanying diagram. Also, some of you probably know how to just use language better to express this seminal idea more simply and clearly.

Back to the point; I think this idea is stunning. The geometry of Euclidean space is such, and the nature of gravity within that space is such, that it sweeps out equal areas. Who would have thought....!!
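The equal-areas idea can be checked numerically. Below is a minimal sketch (Python; the function name, the orbit parameters, and the unit choice GM = 1 are my own, not from the thread): integrate an eccentric orbit and compare the area swept out over two equal time intervals, one starting at perihelion and one later along the orbit.

```python
import math

def simulate(x, y, vx, vy, dt, steps):
    """Leapfrog-integrate a planet about a fixed sun at the origin (GM = 1)
    and accumulate the area swept out by the sun-planet line."""
    area = 0.0
    for _ in range(steps):
        r3 = (x * x + y * y) ** 1.5
        vx += 0.5 * dt * (-x / r3)          # half kick
        vy += 0.5 * dt * (-y / r3)
        nx, ny = x + dt * vx, y + dt * vy   # drift
        # triangle swept during this step: |r x dr| / 2
        area += abs(x * (ny - y) - y * (nx - x)) / 2.0
        x, y = nx, ny
        r3 = (x * x + y * y) ** 1.5
        vx += 0.5 * dt * (-x / r3)          # half kick
        vy += 0.5 * dt * (-y / r3)
    return area, (x, y, vx, vy)

# Eccentric orbit (e = 0.5) starting at perihelion, r = 1.
start = (1.0, 0.0, 0.0, math.sqrt(1.5))
area1, state = simulate(*start, dt=0.001, steps=1000)   # interval near perihelion
_, state = simulate(*state, dt=0.001, steps=7000)       # drift along the orbit
area2, _ = simulate(*state, dt=0.001, steps=1000)       # equal interval, elsewhere
```

Both intervals span the same amount of time, so by Kepler's second law `area1` and `area2` agree; the swept area per unit time is just half the (conserved) angular momentum.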
Pess (12/29/12 04:49 AM), replying to Otto Piechowski:

Pesse ( E=MC^2 ) Mist
FirstSight (12/29/12 10:28 AM), replying to Pess:

relativistic mass:

m(r) = m(0)/sqrt(1 - v^2/c^2)

Also very elegant is how e = mc^2 is directly derived from the above expression for relativistic mass; first restate the equation for relativistic mass as:

m(r) = m(0)*(1 - v^2/c^2)^(-1/2)

...using the binomial theorem to expand the above expression in a power series:

m(r) = m(0)*(1 + 1/2(v^2/c^2) + 3/8(v^4/c^4) + ...)

...which, when v is small, converges rapidly to:

m(r) = m(0) + 1/2 m(0)v^2(1/c^2)

...now, multiply both sides by c^2:

m(r)c^2 = m(0)c^2 + 1/2 m(0)v^2

...the last term on the right side is ordinary kinetic energy; the first term on the right side is the intrinsic energy of a body at rest. The term on the left is usually encapsulated as simply 'e', which incorporates both the intrinsic "rest" energy and kinetic energy expressions on the right.

*thanks to Feynman's "Lectures on Physics", pp 15-8 through 15-11 for providing this clear, and surprisingly straightforward mathematical explanation.

Edited by FirstSight (12/29/12 11:50 PM)
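The expansion above is easy to sanity-check numerically. A small sketch (Python; the constants and names are mine) compares the full relativistic energy with the two-term approximation m(0)c^2 + (1/2)m(0)v^2 at a modest speed:

```python
import math

C = 299_792_458.0   # speed of light, m/s
M0 = 1.0            # rest mass, kg

def relativistic_energy(v):
    """E = m(r) c^2 with m(r) = m(0) / sqrt(1 - v^2/c^2)."""
    return M0 * C ** 2 / math.sqrt(1.0 - (v / C) ** 2)

def two_term_approx(v):
    """Rest energy plus ordinary kinetic energy: m(0) c^2 + (1/2) m(0) v^2."""
    return M0 * C ** 2 + 0.5 * M0 * v ** 2

v = 0.1 * C                      # 10% of light speed
exact = relativistic_energy(v)
approx = two_term_approx(v)
# The neglected series terms start at (3/8) m(0) v^4 / c^2, so even at
# v = 0.1c the two values agree to better than one percent.
```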
scopethis (12/29/12 02:48 PM), replying to FirstSight:

Og...inventing the wheel...
ColoHank (12/29/12 04:31 PM), replying to scopethis:

The Mayan calendar...
mountain monk (12/29/12 08:55 PM), replying to ColoHank:

FirstSight: I would say that the beauty of your example lies in the mathematics, not in the science, and that the line between the two is in itself interesting. I cannot think, offhand, of a scientific idea that is beautiful without it receiving a mathematical formulation. This is one of the main points of Penrose's The Road to Reality, section 34.2 (page 1014). I can think of beautiful experiments--Newton's prism--but scientific ideas...? On the other hand, mathematics is filled with beauty---IMHO. Thanks for your example.

Dark skies.
Otto Piechowski (12/29/12 09:43 PM):

Making fire burn downwards.

In his Nicomachean Ethics, Aristotle asserted that persons are not born morally good or bad but are taught good behavior and bad behavior. This idea was the birth of all subsequent moral education in the western world. If good behavior could be learned, it could then be taught.

To make this point Aristotle contrasted moral values with things that happened "by nature". He pointed out that things which occurred "by nature" always happened the same way regardless of how many attempts were made to change their "way"; their "habit". One example he used was that although a person could be taught to be generous, a flame could never be taught to burn downwards.

However, in his 1860-61 Christmas lectures to children at the Royal Institution of Great Britain entitled The Chemical History of a Candle, Michael Faraday described and executed an experiment in which the fire of a candle was induced to burn downwards.

This novel idea, experiment and description in no way affect Aristotle's ethical ideas. However they do evidence an improvement of modern science compared to the more deductive methodology of ancient science: the importance of testing assumptions such as "fire always burns upwards".
Mike Casey (12/29/12 10:25 PM), replying to Otto Piechowski:

If true, the discovery of the Higgs boson.
Mister T (12/30/12 07:02 AM), replying to Mike Casey:

NOBODY would be here discussing this if ice didn't float....
Dave Mitsky (12/30/12 01:39 PM), replying to Mister T:

Two equations from Sir Isaac certainly got the ball rolling: F = ma and F = Gm1m2/d^2

Dave Mitsky
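Those two equations combine directly: setting ma = GMm/d^2 for a test mass at the Earth's surface gives the familiar g of about 9.8 m/s^2. A quick sketch (Python; the constants are standard textbook values, the function name is mine):

```python
G = 6.674e-11        # gravitational constant, N m^2 / kg^2
M_EARTH = 5.972e24   # mass of the Earth, kg
R_EARTH = 6.371e6    # mean radius of the Earth, m

def gravitational_force(m1, m2, d):
    """Newton's law of universal gravitation: F = G m1 m2 / d^2."""
    return G * m1 * m2 / d ** 2

# F = ma with m = 1 kg, so the force equals the acceleration numerically.
g = gravitational_force(M_EARTH, 1.0, R_EARTH)
```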
Rick Woods (12/30/12 03:31 PM), replying to Dave Mitsky:

Natural Selection is one.
star drop (12/30/12 03:39 PM), replying to Rick Woods:

e^(i*pi) = -1
Ravenous (12/31/12 10:41 AM), replying to star drop:

Quote:
    e^(i*pi) = -1

This is the one I was going to reply with too.

The Euler identity. A simple looking equation but it involves three fundamental constants - e, pi and i. (And 1 if you count that as a fundamental constant.)

I don't understand most of the maths behind it, but it's one of the most thought-provoking single lines of mathematics known...
deSitter (12/31/12 01:34 PM), replying to star drop:

e^(i*pi) = -1

You can argue, this is the most important formula in all of science and engineering.
Rick Woods (12/31/12 02:44 PM), replying to deSitter:

Quote:
    You can argue, this is the most important formula in all of science and engineering.

I always thought that honor belonged to the formula for beer...
scopethis (12/31/12 04:12 PM), replying to Rick Woods:

the square root of a negative number..
EJN (12/31/12 05:53 PM), replying to Otto Piechowski: [attachment]
Humble (12/31/12 08:26 PM), replying to EJN:

And there was light.
deSitter (01/01/13 04:24 PM), replying to Rick Woods:

Quote:
    You can argue, this is the most important formula in all of science and engineering.
    I always thought that honor belonged to the formula for beer...

Zounds, I had forgot the beer.
Really need help understanding this limit/derivative question... The limit \[\lim_{x \rightarrow 5\pi}\frac{\cos x +1}{x-5\pi}\] represents the derivative of some function f(x) at some number a. Find f and a. I don't even understand the wording of this question...
Remember the limit definition of a derivative?\[\large f'(x)=\lim_{h \rightarrow 0}\frac{f(x+h)-f(x)}{h}\]Well there is also another definition that we see less often, of this form,\[\large f'(x) =\lim_{x \rightarrow a}\frac{f(x)-f(a)}{x-a}\]This is the one that we want to analyze.
If we compare this to the limit we were given, we can see that it looks like our A value will be 5pi, yes? \[\large \lim_{x \rightarrow 5\pi}\frac{\cos x +1}{x-5\pi}\]
The top is still a little tricky though, we have to sort it out.
wait the second one just looks like the mean value theorem? it oddly looks like \[\frac{f(a)-f(b)}{a-b} = f'(c)\]
oh nevermind me theres a limit sorry im a bit tired
heh, yah it looks similar :)
\[\large \cos(5\pi)=?\]
So yeah, but we're looking for the original function is that it or looking for the derivative? and cos5pi would be... .96 ? But I think i did it in degree so it might be wrong.
That's one of your special angles that you're going to want to remember. It will produce the same value as Pi. 5pi is Pi with an extra spin around the circle. -1 yes?
With 2 extra spins* my bad.
I'll ask you one thing before we continue, when taking pi in a derivative, do we ALWAYS use it in radians? I mean sometimes it works when I don't put it in my calculator as a radian
If you're dealing with Pi, then yes you need to be in radians :o You could convert to degrees if radians are confusing you though.
i mean pi i mean cos and sin... wow I' m sounding stupid
5pi is coterminal with 180 degrees.
so in the question am I using L'hospital rule or finding the limit? that's where I'm lost. the question is really confusing . And alright thanks for explaining the pi. :)
No L'Hop. We're relating a weird looking limit back to the Limit Definition of a Derivative. We need to match up the pieces so we can see what the original function was. So far we've established
that our A value is 5pi. If you're unsure about that, compare the form of our limit with the Definition,\[\large \large f'(x)=\lim_{x \rightarrow a}\frac{f(x)-f(a)}{x-a}\qquad\qquad \rightarrow \
qquad \qquad \lim_{x \rightarrow 5\pi}\frac{\cos x +1}{x-5\pi}\]
See the a?
Yeah it's represented by 5pi?
so i have to equate both limit? and finding my h?
no h, we're using the second definition that i posted, not the one involving h.
It's another form of the limit definition. It comes up less often.
My bad I understood we had to manipulate it back to the other form
Then when its ask to find F, am I suppose to find cosx?
sorry, the wording is really throwing me off
If we can show that the limit matches the DEFINITION, then we can show what our F is. So we've established that 5pi matches the A we're looking for. We've also shown that cos(5pi)=-1. If we can
somehow find a -1 in the top of that fraction, we can make it look like the Definition.
Oh wow. I understand. This was a very basic question but I have never seen that definition in my entire class. Thanks a lot! :)
\[\large \frac{\cos x+1}{x-5\pi} \qquad =\qquad \frac{\cos x-(-1)}{x-5\pi} \qquad = \qquad \frac{\cos x-(\cos 5\pi)}{x-5\pi}\]
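A quick numeric check of that conclusion (Python; the scaffolding is mine): with f(x) = cos x and a = 5π, the limit should equal f'(a) = -sin(5π) = 0, and the difference quotient indeed shrinks toward 0 as x approaches 5π.

```python
import math

A = 5 * math.pi

def quotient(x):
    """The expression from the problem, (cos x + 1)/(x - 5*pi)."""
    return (math.cos(x) + 1.0) / (x - A)

# Sampling ever closer to 5*pi: the values head to f'(5*pi) = -sin(5*pi) = 0.
samples = [quotient(A + h) for h in (1e-1, 1e-2, 1e-3)]
```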
Make sense? :) k cool!
Thanks a lot you're patient!
8  Groundwater Flow Modeling Study of the Love Canal Area, New York

JAMES W. MERCER, CHARLES R. FAUST, and LYLE R. SILKA
GeoTrans, Inc.

ABSTRACT Increasing awareness of the problems presented by hazardous waste sites is leading toward an increased interest in, and application of, groundwater models. During the fall of 1980, a groundwater modeling study was conducted at the Love Canal area, Niagara Falls, New York. Flow models were used to aid in data reduction and analysis and to attempt prediction of groundwater movement. Both slug tests and aquifer tests were analyzed. The conceptual framework for the hydrogeologic units underlying Love Canal consists of a shallow water-table system of silts and fine sands and a deeper confined system in the Lockport Dolomite. The intervening confining layers consist of lacustrine and glacial clays. Modeling in the dolomite focused on characterizing the aquifer and assessing the potential for its contamination. Best judgment indicates that for the Lockport to be contaminated, the confining bed would have to be breached. Analysis of remedial action for the Lockport Dolomite indicates that three interceptor wells at the south end of the canal, pumped at only 32.3 m³/day, would reverse the flow of groundwater to the river and provide an adequate halt to migration of potential contaminants to the river.

INTRODUCTION

The U.S. Environmental Protection Agency identified a need to assess the groundwater hydrology of the Love Canal area, Niagara Falls, New York. As part of this assessment, groundwater flow models were used to aid in data reduction and analysis and to attempt prediction of groundwater movement and contaminant migration. The modeling effort was started on August 20, 1980, and completed December 1, 1980. The objectives were to (1) devise a conceptual framework, (2) assist in data collection, (3) design and analyze aquifer tests, (4) predict system behavior, and (5) assess uncertainty. The technical approach involved the use of groundwater flow models, which were used to help interpret and predict the behavior of groundwater flow and convective transport at Love Canal. Since hydrodynamic dispersion is neglected, arrival times may be slightly underestimated. This chapter summarizes some of the work performed during this study. Results are presented for the aquifer test analysis and for modeling the Lockport Dolomite aquifer. The shallow system and remedial action modeling and analysis are presented elsewhere (Mercer et al., 1981; Silka and Mercer, 1982).

BACKGROUND

The Love Canal site is located on the east side of Niagara Falls, New York. The landfill at Love Canal was operated for nearly 30 yr and occupied a surface area of approximately 16 acres with the south end 400 m from the upper Niagara River near Cayuga Island. The canal varies from about 3 to 11 m in depth with the original soil cover varying from 0 to 1.8 m in thickness (Leonard et al., 1977). Figure 8.1 shows the typical strata at the Love Canal site. The soil layers are underlain by glacial till, which in turn is underlain by bedrock consisting of the Lockport Dolomite.

[FIGURE 8.1 Typical strata in Love Canal landfill area (modified from Conestoga-Rovers & Associates, 1978). Units shown, from land surface down: clayey silt fill, silty sand, soft silty clay, and clay/till layers, with unit boundaries at roughly 1.8-2.5 ft (0.6-0.8 m), 4.0-5.5 ft (1.2-1.7 m), 7.5-8.5 ft (2.3-2.6 m), 10.5-11.5 ft (3.2-3.5 m), 19.0-27.0 ft (5.8-8.2 m), and 34.0-42.0 ft (10.4-12.8 m); the Lockport Dolomite below ranges in thickness from approximately 100 to 150 ft (30 to 46 m).]

In general terms, the groundwater hydrology includes (1) a shallow system that is seasonally saturated and consists of the silt fill and silty sand, which is underlain by (2) beds of confining material composed of clay and till that overlie (3) the Lockport Dolomite, which is underlain by the relatively impermeable (4) Rochester Shale.

Lockport Dolomite

The Lockport Dolomite is overlain by leaky confining beds and underlain by the relatively impermeable Rochester Shale (Johnston, 1964) and is fairly
continuous in the Niagara Falls area. Both artesian and water-table conditions occur in this fractured system, with the upper 3 to 4.6 m being the most permeable. Hydrogeologically, the dolomite in the Niagara Falls area probably is bounded on the south by the upper Niagara River (see Figure 8.2). It is bounded toward the west by the lower Niagara River gorge. The dolomite thins northward, where it is bounded by its outcrops in the Niagara escarpment. Under natural conditions, recharge occurs at the contact with the upper Niagara River near the falls and at an elevation high that is just south of the Niagara escarpment. Discharge occurs as seepage faces and springs at the lower Niagara River, along the Niagara escarpment, and along parts of the upper Niagara River away from the falls. A generalized potentiometric map developed from historic records for the Lockport Dolomite (Johnston, 1964) is shown in Figure 8.2. The contours are highly idealized because the data were either (1) absent, (2) representative of several layers within the dolomite, or (3) collected over a two-year span during 1961-1962. A well hydrograph in the area indicates that flow is quasi-steady state. "Quasi" is used because seasonal variations of about ±0.6 m are believed to be imposed on the steady-state condition in the upper part of the dolomite. Furthermore, since this study occurred in the fall, it is expected that the water levels represent seasonal lows.

[FIGURE 8.2 Generalized potentiometric surfaces for the Lockport Dolomite: sketch map of the Niagara Falls area (sea-level datum) showing the Niagara escarpment, the rivers, the study area, and Love Canal.]
Shallow System

The shallow system at Love Canal is located in the upper units of silty sand and silt fill. It is probably bounded toward the north and west by creeks and toward the south by the Little Niagara River. Before remedial actions were taken, groundwater flow was probably toward the surface drainage, with the overall flow toward the south and the upper Niagara River. The soils in this area consist of the Canadaigua-Raynham-Rhinebeck association characterized by somewhat poorly drained to very poorly drained soils having a dominantly medium- to fine-texture subsoil (U.S. Department of Agriculture, 1972). These soils are silty loam to silty clay loam (ML to CL in the Unified Soil Classification). Previous work in the area has typified the soils as shown in Figure 8.1 (Conestoga-Rovers & Associates, 1978). Underlying the lacustrine sediments are glacial tills. The shallow system can be summarized as follows:

1. Silty sand and silt fill; approximately 3.7 m thick; hydraulic conductivity is greater than or equal to 10- m/sec (Hart, 1978).
2. Hard clay, transition clay, soft clay; 3.4 m thick; hydraulic conductivity is 10^-10 to 10^-11 m/sec (Leonard et al., 1977).
3. Glacial till; 4.6 m thick; hydraulic conductivity is probably similar to that of clays (Glaubinger et al., 1979).

In addition to these units, storm-sewer and sanitary-sewer excavations as well as swales may act as conduits. Ebert (1979) describes the swales as old drainage ways up to 3 m deep and 12 m wide in their original state. Many of these old drainage ways have been filled with miscellaneous material.

AQUIFER TESTING ANALYSIS

Aquifer testing was designed, and subsequently modified, to characterize certain aspects of the groundwater system
at the Love Canal site. These included the following:

1. Determination of the horizontal hydraulic conductivity and storage coefficient of unconsolidated glacial units and the Lockport Dolomite;
2. Determination of the variation of hydraulic conductivity with depth in the dolomite;
3. Determination of the hydraulic connection between the more permeable upper zones of the shallow system and the dolomite, that is, the vertical component of hydraulic conductivity in the till and tight lacustrine sediments; and
4. Determination of the interconnection (vertical hydraulic conductivity) of permeable layers in the Lockport Dolomite.

The tests used at the site included constant-pressure tests and constant-discharge tests in the Lockport Dolomite and a falling-head test in the overburden and till units. Except for the constant-discharge test, the results of the testing are difficult to quantify. Therefore, only the constant-discharge test in the dolomite is described here and is presented for illustrative purposes. The basic solution used for the constant-discharge pumping
presented for illustra- tive purposes. The basic solution used for the constant-discharge pumping 111 test is the Theis (1935) solution 4rrTJ: u (8.1) where s is the drawdown (L); r is the distance
from pumped well to observation well (L>; Q is the discharge rate (Loft); t is the time after start of pumping; T is the transmissivity (L2lt); and S is the storage coefficient (dimensionless).
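The Theis relation is straightforward to evaluate. Below is a minimal sketch (Python; the function names and the sample pumping rate and well distance are mine, though T and S are in the range this chapter reports for the Lockport Dolomite), computing the well function W(u) from its standard convergent series:

```python
import math

EULER_GAMMA = 0.5772156649015329

def well_function(u, terms=60):
    """Theis well function W(u) = integral_u^inf (e^-y / y) dy, via the
    series W(u) = -gamma - ln(u) + sum_{k>=1} (-1)^(k+1) u^k / (k * k!),
    which converges quickly for the small u typical of pumping tests."""
    total = -EULER_GAMMA - math.log(u)
    for k in range(1, terms + 1):
        total += (-1) ** (k + 1) * u ** k / (k * math.factorial(k))
    return total

def theis_drawdown(Q, T, S, r, t):
    """Drawdown s = Q/(4*pi*T) * W(u) with u = r^2 S / (4 T t), SI units."""
    u = r * r * S / (4.0 * T * t)
    return Q / (4.0 * math.pi * T) * well_function(u)

# Example with T and S near the values fitted later in this chapter, an
# assumed pumping rate of 0.01 m^3/s, and an observation well 30 m away.
s = theis_drawdown(Q=0.01, T=0.0014, S=1.5e-4, r=30.0, t=3600.0)
```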
Although the long-term pumping test in the dolomite at the site was designed to run at a constant-discharge rate, the actual pumping rate declined during the test. An approximate solution for these
conditions can be obtained by using the principle of superposition in conjunction with the basic solution. The procedure involves representing the variable pumping rate by a series of pumping periods having constant rates. The approximate solution is then given by

s = [1/(4*pi*T)] * sum over j = 1 to m of (Q_j - Q_{j-1}) W(u_j),   (8.2)

where j is the particular pumping period, m is the total number of pumping periods, and W(u) is the exponential integral in Eq. (8.1). The procedure is further detailed by Earlougher (1977). Eqs. (8.1) and (8.2) provide forward solutions to the ground-
ground- water response. In this field test the inverse solution is re- quired; that is, from observed water-level changes, the hy- drologic parameters T and S need to be determined. In order to solve
Eq. (8.2), a least-squares minimization technique is used. That is, to find T and S. n ~ [sO(ri, ti) —sC(ri, ti>]2 (8. 3) i = 1 is minimized, where sO is the observed drawdown, sc is the calculated
drawdown, i refers to a particular observation, and n is the total number of observations. Data were used from 12 observation wells and the pumping well. Measurements were continued at three
observation wells for 2 h after the pump was shut down. With all of these data, there are several alternative ways to partition them for analysis. Two obvious ways are (1) match all the observations
using one transmissivity value and one storage coefficient value (Case A), and (2) match the data for each well independently, calculating a separate T and S for each well (Case B). Both methods were
used in this analysis. Comparing the results of both cases provides a measure of validity in the analysis. The results of matching the data are presented in Table 8.1. Using all available data and
matching all the wells with one T and S led to values of 0.0014 m^2/sec and 1.49 x 10^-4, respectively. The mean deviation between observed values and calculated values using the above T and S was
0.12 m. Fitting the individual well data led to better matches of the data. For this case the transmissivity values were between 0.001 and 0.0035 m2/sec and the storage coefficient values were
between 0.343 x 10^-4 and 3.12 x 10^-4. As noted, the matches on individual wells were better; mean deviations for each well were between 0.010 and 0.064 m. Other results from aquifer testing are not presented but may be found in Mercer et al. (1981). The results of the aquifer
testing partially fulfilled the original test objectives. The following conclusions were drawn from the above analysis:

TABLE 8.1 Summary of Results of Pumping and Recovery Test Analysis

Matching Group                 Well   T (m^2/sec)   S x 10^-4   Mean Deviation, m
Case A (all wells together)     -     0.0014        1.490       0.119
Case B (individual wells)       38    0.0035        2.370       0.010
                                44    0.0031        1.650       0.014
                                48    0.0025        0.343       0.019
                                50    0.0018        0.483       0.032
                                56    0.0019        0.825       0.016
                                67    0.0017        1.750       0.025
                                68    0.0010        1.330       0.019
                                71    0.0019        1.500       0.020
                                79    0.0016        0.428       0.033
                                80    0.0014        1.290       0.025
                                86    0.0017        3.120       0.049
                                89    0.0010        2.000       0.064
          Average                     0.0020        1.420

1. The 22-h discharge test in the Lockport Dolomite provided an average field transmissivity of 0.0014 m^2/sec and stor-
age coefficient of 1.5 x 10^-4. These values are consistent with other values determined for the Lockport Dolomite in the Niagara vicinity.
2. Because many of the observation wells were completed only about 1 m into the dolomite, and because they responded quickly to the pumping from a well screened at a deeper level, the upper permeable zones of the dolomite appear to have significant vertical permeability.
3. The Lockport Dolomite is heterogeneous but less so than would normally be anticipated for carbonate aquifers.
4. The packer test results for the dolomite were inconclusive. Consequently, the regional observations of Johnston (1964) regarding the variation of hydraulic conductivity with depth are still assumed applicable to the site. Examination of the core description also supports Johnston's contention that the primary water-bearing zones are located in the upper zones of the Lockport Dolomite.
5. The slug tests in the overburden wells provided an estimate of the hydraulic conductivity of the lacustrine sediments and till. Both values are on the order of 3.04 x 10^-10 m/sec and indicate relatively impermeable material.
6. The shallow material tested at the slug test site was also relatively impermeable (on the order of 3 x 10^-10 m/sec). However, this unit was quite clayey. Because the shallow silty-sandy units are highly variable, this one estimate is probably not representative of the shallow system at the site.
7. No estimates of storage properties for the overburden wells could be determined from the slug tests.

LOCKPORT DOLOMITE MODEL

[...] the
aquifer thickness. The boundary condition at the bottom is probably no-flow, since below the first 4.6 m parts of the Lockport Dolomite are relatively impermeable. At the top, the boundary condition is probably head-controlled flux representing leakage through the confining beds. A groundwater flow model that handles these areal flow conditions is that presented by Trescott et al. (1976). Important assumptions include the following:
1. Groundwater flow and aquifer parameters in the Lockport Dolomite are vertically averaged.
2. Quasi-steady-state flow is assumed; that is, although there are seasonal variations, the system over an extended period of time does not change hydrologically from the seasonally averaged surface. This assumption is based on the few well hydrographs available in the Niagara Falls area.
3. The aquifer in the Lockport Dolomite is assumed to be under leaky artesian conditions everywhere.
4. The aquifer transmissivity near the escarpment is assumed equal to 4.58 × 10⁻⁵ m²/sec. Because of the analysis of tests at the Love Canal site, and because of the higher aquifer transmissivity near the river (Johnston, 1964), a zone bordering the upper Niagara River was assumed to have a transmissivity of 4.58 × 10⁻⁴ m²/sec. This value is about one third the value obtained from the aquifer test analysis yet slightly greater than previously reported values. This value was selected because the aquifer test yielded a local value, whereas the lower value used in the model is more representative of a larger area. Transmissivity is assumed isotropic but nonhomogeneous.
5. Water moves vertically into or out of the Lockport Dolomite through the confining layer.
6. The confining bed is assumed to be 7.6 m thick and is composed of clay and till.
7. The confining-bed hydraulic conductivity is assumed to be 10⁻¹⁰ m/sec (Leonard et al., 1977). Because of the better drained soils near the Niagara escarpment (U.S. Department of
Agriculture, 1972), this value was increased in that area to 3.25 × 10⁻⁹ m/sec. Confining-bed hydraulic conductivity is also isotropic but nonhomogeneous.
8. There are not enough wells
reported by Johnston (1964)
to construct a potentiometric surface for the silty sand and silt fill of the shallow system; wells that are in Johnston (1964) indicate that water levels are approximately 3 m below land surface; therefore, values determined from a topographic map were used and 3 m subtracted to produce a shallow-system potentiometric surface. In the Love Canal area, this resulted in heads that were 172 m above mean sea level.
9. The heads in the shallow system represent an average value and neglect seasonal variations or imposed stresses.
10. The rock underlying the permeable part of the dolomite is considered impermeable.
11. The scale of the Lockport Dolomite model is regional, covering most of the area in Figure 8.2.

The area of interest was subdivided into rectangular blocks composing the finite-difference grid shown in Figure 8.3. The grid consists of 21 columns and 23 rows. The northern boundary is considered no-flow because it is located along the middle of a recharge area near the Niagara escarpment, i.e., a groundwater divide. Recharge is through the confining bed. The eastern boundary is approximated as no-flow because it follows a flow line. The southern boundary is treated as constant head and corresponds approximately with the upper Niagara River. The western boundary follows approximately the covered conduits of the
pump-storage project and is considered constant head.

Calibration

Calibration of the Lockport Dolomite model consists of matching the observed steady-state potentiometric surface in Figure 8.2. For steady-state flow conditions, the storage term can be eliminated during model calibration. Also, leakage through
the confining bed is considered to be under steady-state conditions. The computed potentiometric surface is shown in Figure 8.4. As may be seen, the match is good on a regional scale, with the hydraulic gradient in the Love Canal area being toward the south and southwest. In terms of spatial distribution, leakage into the Lockport occurred near the topographic high in the northern part of the study area. Leakage out occurred toward the escarpment and at lower elevations toward the upper Niagara River. In the Love Canal area, leakage was generally into the Lockport Dolomite.
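The kind of block-centered, steady-state leakage calculation described above can be sketched in a few lines. Everything below is an illustrative assumption (square grid, single transmissivity zone, grid spacing, and a uniform leakage rate); it is a sketch of the finite-difference approach, not the Trescott et al. (1976) code used in the study:

```python
# Sketch of a steady-state finite-difference head solver with one constant-head
# boundary (the river) and no-flow boundaries elsewhere.  All parameter values
# below are illustrative assumptions, not the calibrated model inputs.
N = 21                  # cells per side (the real grid was 21 columns by 23 rows)
DX = 500.0              # grid spacing, m (assumed)
T = 4.58e-4             # transmissivity, m^2/sec (river-zone value quoted in the text)
W = 3.0e-12             # uniform leakage through the confining bed, m/sec (assumed)

# h[i][j]: head above the river datum; row 0 is the constant-head river boundary.
h = [[0.0] * N for _ in range(N)]

for _ in range(5000):                       # Gauss-Seidel sweeps to convergence
    for i in range(1, N):                   # row 0 stays fixed (constant head)
        for j in range(N):
            # no-flow boundaries are handled by mirroring the interior neighbor
            up    = h[i - 1][j]
            down  = h[i + 1][j] if i < N - 1 else h[i - 1][j]
            left  = h[i][j - 1] if j > 0     else h[i][j + 1]
            right = h[i][j + 1] if j < N - 1 else h[i][j - 1]
            h[i][j] = (up + down + left + right + W * DX * DX / T) / 4.0

mid = N // 2
profile = [h[i][mid] for i in range(N)]     # head profile away from the river
```

With uniform leakage and no-flow side boundaries the converged heads rise monotonically away from the constant-head river boundary, which is the qualitative behavior the calibration reproduces.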
Thus, in the Love Canal area, the leakage is downward (annually averaged), using the value of 172 m for the head in the shallow system at the Love Canal site. The downward flow in the blocks representing the canal area, however, is very low, with rates ranging from 0.14 to 0.07 mm/yr. The head difference between the dolomite and shallow system is small, especially near the south end of the canal, and, as will be discussed later, the direction of leakage can be easily reversed. As for constant-head nodes, flow was into the dolomite at the pump-storage reservoir and, in general, was out through the western boundary. For the southern boundary, the upper Niagara River was gaining from the dolomite near Love Canal and east; toward the west and near the pump-storage intake, the upper Niagara River was generally losing to the dolomite.

FIGURE 8.3 Finite-difference grid for the Lockport Dolomite model.

FIGURE 8.4 Computed potentiometric surface for the Lockport Dolomite.

Figure 8.5 shows a comparison of the hydraulic head computed from the steady-state match with the measured values, that is, in just the Love Canal area (section A-A' on Figure 8.4). Even on a local scale, this match is good, with the computed values being slightly lower than the observed. This difference may be the result of the head used in the shallow system as well as the constant-head value of 172 m that was used for the upper Niagara River near Love Canal. The observed profile near the river in Figure 8.5 is dashed, indicating that limited data were available there. This also indicates our
uncertainty in using 172 m as the constant-head value.

Sensitivity Analysis

A detailed sensitivity analysis was performed on the Lockport Dolomite model. The following were considered: (1) the condition at the western boundary, (2) aquifer transmissivity, (3) confining-bed hydraulic conductivity, (4) river stage at the southern boundary, and (5) water level in the shallow system.

FIGURE 8.5 Magnification of mile 0 to 1.0 (0 to 1.6 km) along section A-A'.

Details are presented in Mercer et al. (1981); only the results are presented in Table
8.2.

Predictions Assuming Remedial Action

Under natural conditions, flow in the Lockport Dolomite appears to be at steady state. If remedial action for the dolomite is deemed necessary at some time in the future, the flow field will undoubtedly be disrupted. This will cause transient flow in the dolomite, which can also be simulated. The steady-state model described in the previous section was modified by varying values for the confining-bed specific storage and the aquifer storage coefficient. For the storage coefficient, a value of 1.5 × 10⁻⁴ was used by Mercer et al. (1981) as determined from aquifer test analysis. A value of 2.6 × 10⁻³/m for plastic to stiff clay was estimated for the specific storage of the confining bed (Domenico, 1972). The computed steady-state hydraulic-head distribution in Figure 8.4 was used as the initial condition in the transient model.

If remedial action is necessary for the Lockport Dolomite, installation of interceptor wells is a likely alternative to be considered. To evaluate the effectiveness of this remedial option, interceptor wells were incorporated into the Lockport Dolomite model at the south end of Love Canal. Three wells were placed at the southwest, southcentral, and southeast ends of the canal, since the flow gradient in the Lockport Dolomite is toward the south and southwest. Wells in these locations should intercept any solute that enters the dolomite beneath the canal. The pumping rates of the three wells were set at 7.6 L/min each. This amounts to a total withdrawal of 32.3 m³/day. The transient simulation lasted only 6.7 days, after which time the hydraulic heads in the dolomite came to a new steady state. These low pumpages are sufficient to cause a reversal in the hydraulic-head gradient. That is, the flow is no longer toward the upper Niagara River, which means the wells would
TABLE 8.2 Summary of Sensitivity Runs for Dolomite

Run  Description                                                                Effect
1    Lockport Dolomite steady state using constant-head boundary toward the     "Best" comparison with observed data
     west and best estimate of parameters
2    Same as run 1 except with impermeable boundary toward the west             Minor changes in heads at Love Canal site
3    Same as run 1 with aquifer transmissivity increased by 50%                 Slight increase in heads at Love Canal site
4    Same as run 1 with aquifer transmissivity decreased by 50%                 Slight decrease in heads at Love Canal site
5    Same as run 1 with confining-bed hydraulic conductivity increased by 50%   Slight decrease in heads at Love Canal site
6    Same as run 1 with confining-bed hydraulic conductivity decreased by 50%   Slight increase in heads at Love Canal site
7    Same as run 1 with river stage increased by 0.3 m                          About 1-ft increase in heads at Love Canal site
8    Same as run 1 with river stage decreased by 0.3 m                          About 1-ft decrease in heads at Love Canal site
9    Same as run 1 with heads in the shallow system in the Love Canal area      Gradients through the confining bed were reversed;
     lowered by 0.3 m                                                           flow was out of the dolomite
10   Same as run 1 with confining-bed hydraulic conductivity increased to       Created groundwater mound in the dolomite
     that of the dolomite in the grid block representing the south end of
     Love Canal

The computed drawdown (at a well radius of 7.6 cm) would be approximately 0.82 m, so that the assumptions of confined, artesian conditions in the dolomite are still valid. This new steady-state solution is dependent on
the assumption of a constant-head boundary at the upper Niagara River. The hydraulic connection of the river and the Lockport Dolomite is uncertain. If the connection is present to a lesser degree than assumed for the model, then the gradient would still be reversed by this pumpage; however, steady state may not be reached so quickly.

Contamination Travel Times and Uncertainty Analysis

If a contaminant is assumed to have entered the Lockport Dolomite, the travel time for the contaminant to reach the upper Niagara River can be computed. The uncertainty in travel time depends on the accuracy of our knowledge of the hydrogeologic system. The interstitial velocity of flowing groundwater can be written as

v_i = (K/φ)(dh/ds),   (8.4)

where v_i is the interstitial velocity, K is the hydraulic conductivity, φ is the effective porosity, and dh/ds is the hydraulic gradient. Also note that for convenience the negative sign in Eq. (8.4) has been omitted. For steady uniform flow, travel time (t) is simply distance (L) divided by interstitial velocity.

If all reactions between solute and the rock in which the groundwater is flowing are considered to be simple equilibrium linear sorption reactions, then the amount of solute present on the rock will be directly proportional to the amount of solute present in the fluid. This proportionality constant is the distribution coefficient, k_d, and from it one can calculate the rate of movement of solute in a flowing groundwater system relative to the rate of flow of the transporting water itself according to the expression

water velocity / solute velocity = 1 + (ρ/φ)k_d,   (8.5)

where ρ is the aquifer bulk density. Adjusting velocities and travel times for this retardation effect results in the following expression for solute travel times:

t = [Lφ / (K dh/ds)] [1 + (ρ/φ)k_d].   (8.6)

Although this is a relatively simple equation, there are considerable uncertainties associated with φ, K, and k_d, which lead to uncertainties in the resulting calculated travel times. In this analysis, best estimated travel times to reach surface water for solute with various sorption properties are calculated. The uncertainty of this estimate is evaluated using Monte Carlo simulation techniques. The Monte Carlo approach is used here even though for some of the extreme cases shown a direct analysis can be performed. The direct
analysis is discussed later. Note that it is immaterial as to how the contaminant entered the groundwater in the Lockport Dolomite. The following best estimates are selected for evaluating Eq. (8.6):

L = 200 m, distance from the south end of Love Canal to the river;
K = 3.0 × 10⁻⁴ m/sec (from aquifer test match, that is, 1.4 × 10⁻³ m²/sec / 4.6 m, where a permeable thickness of 4.6 m is assumed);
dh/ds = 1.52 × 10⁻⁴ (from measured hydraulic head);
ρ = 2.5 g/cm³, common limestone density (Clark, 1966);
φ = 0.02, effective porosity (estimated for fractured limestone from Winograd and Thordarson, 1975);
k_d = 0 to 10 mL/g (estimated from Apps et al., 1977).

These values are best estimates from observed data, the aquifer-test analysis, and our hydrologic judgment. Eq. (8.6) gives a travel time for a perfect tracer (k_d = 0) or for the water itself of 1005 days. That is, if the clays were breached or if solute were transported through the clays, on entering the Lockport Dolomite, it is estimated to take 1005 days for the solute moving with the water to reach the river.

In order to assess the statistical properties in the predicted results, it is first necessary to specify the statistical properties of the uncertain parameters. In this case the parameters in Eq.
(8.6) that have the greatest uncertainty are the hydraulic conductivity, the porosity, and the distribution coefficient. In the following analysis, φ and K will be assumed to vary according to specified frequency distributions. Sensitivity analysis may be performed to evaluate the uncertainty in k_d. There is also uncertainty in the simple uniform flow model and in the hydraulic gradient; however, this uncertainty will not be evaluated.

Freeze (1975) presented a large body of both direct and indirect evidence that supports a log-normal frequency distribution for hydraulic conductivity. This distribution refers to the variance of hydraulic conductivity in space. The situation described by Eq. (8.6) is that of a constant, but uncertain, hydraulic conductivity. To evaluate this uncertainty, we assume the same distribution. If the hydraulic conductivity is log-normally distributed, a new parameter y = log K can be defined that is normally distributed and can be described by a mean value, μ_y, and a standard deviation, σ_y, that is, N[μ_y, σ_y]. For this application, μ_y = 1.9365 and σ_y = 0.5, that is,

K = 10^(1.9365 ± 0.5) ft/day,   (8.7)

which is the value obtained from the aquifer-test analysis, with the standard deviation of one-half log unit. Freeze (1975) gives a range of hydraulic conductivity data for fractured rock with standard deviations ranging from 0.20 to 1.56, with a mean of 0.6785. These values of standard deviations indicate a larger spread of values for hydraulic conductivity than that determined from the aquifer test.
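As a quick check on that log-mean, converting the aquifer-test conductivity (transmissivity of 1.4 × 10⁻³ m²/sec over an assumed 4.6 m of permeable thickness) into the feet-per-day units used in the simulations reproduces the quoted value of 1.9365 to within rounding. This is purely an arithmetic check on the text's numbers, not part of the original analysis:

```python
import math

T = 1.4e-3          # transmissivity from the aquifer test, m^2/sec
b = 4.6             # assumed permeable thickness, m
K_m_per_sec = T / b
K_ft_per_day = K_m_per_sec * 86400.0 / 0.3048   # m/sec -> ft/day

mu_y = math.log10(K_ft_per_day)   # close to the 1.9365 quoted in the text
```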
Because the values in Freeze (1975) are more comprehensive, our value of one-half log unit was estimated from his data. We use feet per day to compute travel times in terms of days.

For the first simulation, Case 1, we estimated porosity to be 2 percent, that is, φ = 0.02. Although in theory a value for porosity may be calculated from the storage coefficient obtained from the aquifer tests, in this case it was not possible. The storage coefficient determined from the aquifer test indicates that the aquifer compressibility is more important than the water compressibility. Consequently, the storage coefficient is relatively insensitive to porosity and could not be determined.

Values of K were chosen from a log-normal probability distribution. This was done by recognizing that the values of y_i = log K_i come from a normal probability distribution. The normal distribution generator is

y = σ_y S_n + μ_y,   (8.8)

where S_n is a random number taken from a normal distribution with a zero mean and a standard deviation of one, N[0,1]. To obtain S_n, we use a random number, R*_n, uniformly distributed on the interval (0,1]. R*_n is used to compute S_n (also called the random normal deviate, by Ralston and Wilf, 1967):

S_n = (-2 ln R*_n)^(1/2) sin 2πR*_(n+1).   (8.9)

Using the value of y from Eq. (8.8), hydraulic conductivity is computed from

K = 10^y   (8.10)

and used in Eq. (8.6) to compute
the travel time to the upper Niagara River. To check convergence, we ran the Monte Carlo simulations for 3200 and 6400 events. No significant difference appeared to exist, and the 6400-event distribution was used. A plot of the fraction of events in each interval versus the logarithm of travel time for a perfect tracer (k_d = 0) is shown in Figure 8.6. Case 1 refers to the case where the "best estimate" for porosity is used. Note that if porosity were decreased by an order of magnitude, the plot would shift one log unit to the left.

The spread of the plotted travel times reflects the confidence with which we are able to specify them, given the precision of our estimate of the hydraulic conductivity, K. A range of 2 standard deviations on each side of the mean encompasses the 95 percent confidence interval. Based on the tabulated mean and standard deviation in Table 8.3, this means that there is a probability of less than 0.05 that the travel time of a tracer (nonretarded element) will be greater than 10,000 days or less than 100 days.

In case 2, we analyze the uncertainty in both hydraulic conductivity and porosity, but assume they are uncorrelated. The same log-normal distribution for hydraulic conductivity is used, but porosity also is assumed log-normally distributed with a standard deviation of 0.5 log unit. That is,

x = log φ,   (8.11)

where N[μ_x, σ_x] and μ_x = -1.70 and σ_x = 0.5. This corresponds to a mean porosity of 0.02. The values of porosity are thus chosen from a log-normal probability distribution, recognizing that the values of x_i = log φ_i come from a normal probability distribution. The normal generator is

x = σ_x S_(n+1) + μ_x,   (8.12)

where S_(n+1) is a random number taken from N[0,1].

FIGURE 8.6 Histogram of travel times in days of solute from the south end of Love Canal to the upper Niagara River through the Lockport Dolomite. Values are computed by Monte Carlo simulation for known porosity, uncertain hydraulic conductivity, and k_d = 0.
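The sampling scheme of Eqs. (8.6) through (8.10) can be sketched as below. The sample count and the Box-Muller generator follow the text; the seed is arbitrary, and everything is kept in SI units rather than the feet-per-day units of the original, which does not change the log-spread. This is a sketch under those assumptions, not the authors' code:

```python
import math
import random

# Best estimates from the text (Eq. 8.6)
L = 200.0            # m, south end of Love Canal to the river
PHI = 0.02           # effective porosity
DH_DS = 1.52e-4      # hydraulic gradient
RHO = 2.5            # g/cm^3, aquifer bulk density
MU_Y = math.log10(1.4e-3 / 4.6)   # log10 of K (m/sec) from the aquifer test
SIGMA_Y = 0.5        # half a log unit, as in Eq. (8.7)

def travel_time_days(K, k_d=0.0):
    """Eq. (8.6): t = (L*phi)/(K*dh/ds) * (1 + (rho/phi)*k_d), in days."""
    t_sec = (L * PHI) / (K * DH_DS) * (1.0 + (RHO / PHI) * k_d)
    return t_sec / 86400.0

def normal_pair(rng):
    """Eqs. (8.9) and (8.13): a Box-Muller pair of standard normal deviates."""
    r1 = 1.0 - rng.random()          # uniform on (0, 1]
    r2 = rng.random()
    m = math.sqrt(-2.0 * math.log(r1))
    return m * math.sin(2.0 * math.pi * r2), m * math.cos(2.0 * math.pi * r2)

def monte_carlo(k_d=0.0, n=6400, seed=1):
    """Log mean and log standard deviation of travel time over n events."""
    rng = random.Random(seed)
    logs = []
    while len(logs) < n:
        for s in normal_pair(rng):
            K = 10.0 ** (SIGMA_Y * s + MU_Y)   # Eqs. (8.8) and (8.10)
            logs.append(math.log10(travel_time_days(K, k_d)))
    mean = sum(logs) / n
    sigma = math.sqrt(sum((x - mean) ** 2 for x in logs) / n)
    return mean, sigma
```

For k_d = 0 this yields a log-mean near 3.0 and a log standard deviation near 0.5, matching the Case 1 entries of Table 8.3 and the roughly 1005-day best-estimate travel time; for k_d = 0.1 the mean shifts to about 4.13, reflecting the retardation factor in Eq. (8.5).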
Groundwater Flow Modeling Study 117 TABLE 8.3 Value of Log Mean and Log Standard Deviation of Travel Times in Days of Solute from the South End of Love Canal to the Upper Niagara River through the
Lockport Dolomite for Several Values of Distribution Coefficients and Varying Uncertainty Assumptions about Hydrologic Parametersa Distribution Coefficient k,/ (mL/g) 0.0 Monte Carlo Case 1 Direct
Analysis Case 2 Case 1 Case 2 0.0 0.1 .0 0.0 mean sigma mean sigma mean sigma mean sigma mean sigma 3.00 0.503 3.35 0.501 4.13 0.502 5.10 0.503 6.11 0.504 3.00 0.721 3.42 0.545 4.16 0.497 5.10 0.502
6.08 0.506 3.00 0.5 4.10 0.5 5.10 0.5 6.10 0.5 3.00 0.707 4.10 0.5 5.10 0.5 6.10 0.5 "Values determined by both Monte Carlo and direct analysis, as shown. cietermined from Ralston and Wilf (1967) as
S_(n+1) = (-2 ln R*_n)^(1/2) cos 2πR*_(n+1).   (8.13)

Eqs. (8.9) and (8.13) provide a corresponding pair of random normal deviates with zero mean and unit variance for R*_n and R*_(n+1).

Monte Carlo simulations for case 2 are not shown. The most probable travel time again is 3.00 log days, but the standard deviation now is 0.707 log unit. For this case, the probability is less than 0.05 that the travel time of a tracer will be greater than 25,942 days or less than 38 days. This broad spread in travel times in case 2 (σ = 10^0.707) is a result of our assumption that porosity and hydraulic conductivity are completely uncorrelated. There is unquestionably some correlation between porosity and conductivity. However, the amount of correlation is unknown; therefore, there exists uncertainty as to whether the standard deviation of the appropriate travel time is closer to 10^0.5 or 10^0.707. We assume that the travel-time standard deviation is adequately represented by that resulting from uncertainty in the conductivity alone, i.e., 10^0.5, since it is probable that porosity is highly correlated with hydraulic conductivity.

The preceding discussion and the results displayed in Figure 8.6 were restricted to solutes that are not retarded, i.e., to tracers. Such solutes have k_d values of zero, so that the parenthetic expression within Eq. (8.6) equals 1. Many solutes are likely to be retarded, having nonzero k_d values. To account for retardation, additional sets of Monte Carlo simulations were made using Eq. (8.6) and k_d values of 0.01, 0.1, 1.0, and 10.0. Two runs were made for each k_d value, corresponding to the conductivity and porosity choices of the two cases described above. The results of this sensitivity analysis are given in the form of tabulated log means and standard deviations of travel time in Table 8.3. The mean values of Table 8.3 show how retardation increases travel times for larger values of k_d. The standard deviation is the same and approximately equal to 10^0.5. This follows from direct analysis of the two extreme situations in which φ >> ρk_d and φ << ρk_d. For either extreme, Eq. (8.6) is linear in terms of logarithms. Because it is assumed that K and
highly likely, a possibility that should not be overlooked is that fissured zones exist in the confining bed. Therefore, in the second case, we assume that the flow through the confining layer is mainly through fractures. This is a clear possibility because fissured clay in the upper shallow sediments in the Love Canal area were described by Owens (1979) and have been reported in other areas with similar geology (Freeze and Cherry, 1979). If fissured, the clay would behave more like a fractured medium, and an effective porosity of 0.0001 would be appropriate. Using this value and the 3.0-m thickness, the travel time for a tracer through the confining bed would be about 9 yr. Note that adsorption would increase this value.

Of these two extreme cases, the one leading to a longer travel time is more likely. This is because the sediments comprising the confining bed are generally observed to be very moist. The moisture is expected to cause the clay to swell, hence causing the fractures to heal or close. This is supported by Owens (1979), who observed that in the Love Canal area the fissured clays grade to soft moist clays at about 2.7- to 3.4-m depth. In addition, Freeze and Cherry (1979) point out that fracture zones in till and glacio-lacustrine clay tend to be less permeable with depth and that highly fractured zones usually occur only within several meters of the ground surface.

The implication of an expected long travel time through the confining bed is significant. If contamination is found in the Lockport Dolomite, based on the above discussion, four possible explanations, in order of plausibility, are the following:
1. The confining bed was breached during original construction or during modification for disposal;
2. Contamination was caused by leakage from an upper zone because of a poorly sealed well;
3. The confining bed is significantly fractured; or
4. Solvents or some other free-phase organic may have significantly degraded the integrity of the confining bed.

CONCLUSIONS

In many modeling studies, a significant product is the increase in understanding of the hydrologic system. By setting up the model and conducting sensitivity analyses, the investigator gains insight into the behavior of the system and is enabled to make improved predictions of the system's response. This has occurred with the Love Canal study. As with any modeling study, the worth of the results is dependent on the input. Although there have been numerous studies conducted at Love Canal, the collection of hydrologic data at the canal has been accumulated over only a brief historic interval; therefore, many conclusions must be presented with a note of caution. Reliance on these predictions must be in accordance with the limiting assumptions used in the models.

This is not to say that the results and predictions are meaningless. In addition to providing the only means of gaining any understanding of the hydrologic system, the results of modeling can be used to indicate additional data required to improve predictions or strengthen conclusions.

The major conclusions from the Lockport modeling are a mixture of strong supportable conclusions and predictions that require additional data input to gain confidence. Of importance to
the potential for contamination of the Lockport Dolomite. This hinges on whether the confining clay beds have been breached under the canal. The vertical Darcian flow velocity through the confining
bed is low, with rates on the order of 2.5 x 1O-5 m/yr. The direction of flow depends on the local gradient between the shallow system and the dolomite. It could be in either direction, and as the
heads fluctuate seasonally, the gradient may reverse. Assuming a downward gradient through the confining bed, and that the confining bed was not breached and does not contain fracture zones, it would
take a nonadsorbing solute on the order of hundreds to thousands of years to reach the dolomite. If ad- sorption occurs, travel times will be even longer. If contami- nants traceable to the chemicals
disposed in the canal are found in the dolomite, the most likely explanation is that the confining layer was breached, given the long travel times calculated for the confining bed. If the confining
beds were breached, down- ward flow could produce a groundwater high in the dolomite. Since the hydraulic heads in the two systems are nearly equal, however, it is not possible, based on hydrologic
evidence alone, to determine whether the confining bed was breached. Thus, additional data would help to resolve this issue. Namely, chem- ical analyses of Lockport Dolomite groundwater should be ob-
tained to determine if contaminants are present, and longer- term historical data on the head distributions in the shallow and Lockport systems are needed to better identify mounds and gradients
between the two systems. If contaminants were to enter the Lockport Dolomite at the south end of Love Canal, and if there were no adsorption, the mean travel time to the upper Niagara River for the
solute would be 1000 days. This is dependent on assumptions con- cerning the flow properties. Nevertheless, the gradient in the dolomite toward the river may be reversed by placing inter- ceptor
wells near the south end of Love Canal. This can be accomplished with a total pumpage as low as 32.3 m3/day. ACKNOWLEDGMENT The work on which this Love Canal study is based was per- formed under
Subcontract No. 1-619-026-222-003D to GCA/ Technology Division pursuant to U.S. Environmental Protec- tion Agency Contract No. 68-02-3168, Technical Service Area 3, Work Assignment No. 26. REFERENCES
Apps, J. A., J. Lucas, A. K. Mathur, and L. Tsao (1977). Theoretical and Experimental Evaluation of Waste Transport in Selected Rocks: 1977 Annual Report of LBL Contract No. 45901 AK, Report LBL-6022, Lawrence Berkeley Laboratory, Berkeley, Calif., 139 pp.
Clark, S. P., Jr. (1966). Handbook of Physical Constants, revised edition, Geol. Soc. Am. Mem. 97, 587 pp.
Conestoga-Rovers & Associates (1978). Project Statement Love Canal Remedial Action Project, City of Niagara Falls.
Domenico, P. A. (1972). Concepts and Models in Groundwater Hydrology, McGraw-Hill, New York, 405 pp.
Earlougher, R. C., Jr. (1977). Advances in well test analysis, Soc. Petrol. Eng. Monogr. 5, Henry L. Doherty series, 264 pp.
Ebert, C. H. V. (1979). Unpublished memo dated 3/7/79 entitled Comments on the Love Canal Pollution Abatement Plan (no. 3), Dept. of Geography, SUNY at Buffalo.
Freeze, R. A. (1975). A stochastic conceptual analysis of one-dimensional ground-water flow in nonuniform homogeneous media, Water Resour. Res. 11, 725-741.
Freeze, R. A., and J. A. Cherry (1979). Groundwater, Prentice-Hall, Englewood Cliffs, N.J., 604 pp.
Glaubinger, R. S., P. M. Kohn, and R. Remirez (1979). Love Canal aftermath: Learning from a tragedy, Chem. Eng. 86(23), 86-92.
Hart, F. C., Associates, Inc. (1978). Draft Report: Analysis of a Ground-water Contamination Incident in Niagara Falls, New York, prepared for U.S. Environmental Protection Agency, Contract No. 68-01-3897.
Johnston, R. H. (1964). Ground water in the Niagara Falls area, New York, State of New York Conservation Department Water Resources Commission Bulletin GW-53, 93 pp.
Leonard, R. P., P. H. Werthman, and R. C. Ziegler (1977). Characterization and abatement of ground-water pollution from Love Canal chemical landfill, Niagara Falls, N.Y., Calspan Rep. No. ND-6097-M-1, Buffalo, New York.
Mercer, J. W., C. R. Faust, and L. R. Silka (1981). Ground-Water Flow Modeling Study of the Love Canal Area, New York, Final Rep., January 2, 1981, prepared for GCA/Technology Division, Subcontract No. 1-619-026-222-003D, U.S. Environmental Protection Agency Contract No. 68-02-3168.
Owens, D. W. (1979). Soils report, northern and southern sections Love Canal, Attachment VIII to Earth Dimensions Report.
Ralston, A., and H. S. Wilf (1967). Mathematical Methods for Digital Computers, Vol. II, John Wiley, New York, 287 pp.
Silka, L. R., and J. W. Mercer (1982). Evaluation of remedial actions for ground-water contamination at Love Canal, New York, in Proceedings of the 3rd National Conference on the Management of Uncontrolled Hazardous Waste Sites, Washington, D.C.
Theis, C. V. (1935). The relation between the lowering of the piezometric surface and the rate and duration of discharge of a well using ground-water storage, Trans. Am. Geophys. Union 16, 519-524.
Trescott, P. C., G. F. Pinder, and S. P. Larson (1976). Finite-difference model for aquifer simulation in two dimensions with results of numerical experiments, in Techniques of Water Resources Investigations of the United States Geological Survey, Book 7, Chap. C1, 116 pp.
U.S. Department of Agriculture (1972). Soil Survey of Niagara County, New York, U.S. Government Printing Office, Washington, D.C., 0-459-901.
Winograd, I. J., and W. Thordarson (1975). Hydrogeologic and hydrochemical framework, South-Central Great Basin, Nevada-California, with special reference to the Nevada Test Site, U.S. Geol. Surv. Prof. Pap. 712-C, C1-C126.
|
{"url":"http://www.nap.edu/openbook.php?record_id=1770&page=109","timestamp":"2014-04-17T19:01:59Z","content_type":null,"content_length":"82692","record_id":"<urn:uuid:f9f19301-340b-453c-9045-ad34916d2715>","cc-path":"CC-MAIN-2014-15/segments/1397609530895.48/warc/CC-MAIN-20140416005210-00279-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Ashburn, VA Math Tutor
Find an Ashburn, VA Math Tutor
...Thank you. I am a native Chinese speaker. I speak both Mandarin and the Shanghai dialect. I majored in English Linguistics at Shanghai International Studies University and taught English at Fudan University in Shanghai, China.
12 Subjects: including prealgebra, SAT math, precalculus, reading
...Strong reading, writing, and math skills are the foundation for successful completion of the verbal and quantitative skills, reading, mathematics, and language portions of the HSPT. (Please let
me know if the school requires any of the optional tests for science, mechanical aptitude, or religion....
32 Subjects: including algebra 1, algebra 2, biology, chemistry
I am a recent graduate from George Mason University. I earned my B.S. in mathematics with a concentration in actuarial science and minors in economics and data analysis. While I typically excel in
all areas of mathematics, I have a particular penchant for statistics.
21 Subjects: including algebra 1, algebra 2, calculus, probability
...Whether you need help planning your research project, cleaning and managing your data, entering your data, analyzing your data, writing up your results, or learning statistical software, I can
help guide you.I am a long-time, high-level user of Microsoft Windows, with particular expertise in Word...
6 Subjects: including statistics, SPSS, Microsoft Excel, Microsoft Word
...Assembled, cleaned, and stocked laboratory equipment and prepared laboratory specimens. Evaluated papers and instructed students. 2011 - Present Research and Educational Consultant,
Grantwriter, Adjunct Professor & Professional Presenter - Provide short-term and on-going projectized services to...
64 Subjects: including geometry, organic chemistry, discrete math, MCAT
discrete time dynamics transitional graph question
March 21st 2010, 03:56 AM #1
Hi, a bit stuck on the second part of this university question..
Two permutations of prime period n may be considered to be equivalent
if one can be obtained from the other by reversing the orientation of
the line (i.e. using a conjugacy m(x) = -x). If m = (a1.....an) is
a permutation of prime period n then denote by M = (b1.... bn) the
permutation obtained by reversing the orientation of the line. What is
bk as a function of ak?
I think I've done this part OK, just stating that M is the inverse of m; however, I'm clueless as to the second part of the question.
For each m which is an element of Q sketch the transition graph associated with the corre-sponding periodic orbit with the labelling S1 = [x1, x2], S2 = [x2, x3], S3 = [x3, x4] and S4 = [x4, x5].
thanks for looking
Follow Math Help Forum on Facebook and Google+
Function homework help
September 27th 2012, 01:40 PM #1
Sep 2012
Function homework help
Let f be the function given by f(x)= 2x/sqrt(x^2 + x + 1)
A) find the domain and justify
B) write an equation for each horizontal asymptote
C) does the graph cross any horizontal asymototes? Show work
So I started this and got
A) domain is all reals because denominator doesn't have restrictions
B) y=2 and y=-2 however I found these graphically and need to know how to do that algebraically
C) yes it crosses y=-2 (I also found this graphically and need it algebraically)
Thanks so much in advance!!!
Re: Function homework help
Let f be the function given by f(x)= 2x/sqrt(x^2 + x + 1)
A) find the domain and justify
B) write an equation for each horizontal asymptote
C) does the graph cross any horizontal asymototes? Show work
So I started this and got
A) domain is all reals because denominator doesn't have restrictions
B) y=2 and y=-2 however I found these graphically and need to know how to do that algebraically
C) yes it crosses y=-2 (I also found this graphically and need it algebraically)
Thanks so much in advance!!!
To find the horizontal asymptotes you need to take the limits as $\lim_{x \to \pm \infty}f(x)$
First here is a hint to remember
$\sqrt{x^2}=|x| \ne x$ in general, so be extra careful when taking the limit as $x \to -\infty$
Recall the function definition of the absolute value is
$|x|=\begin{cases} x, \text{ if } x \ge 0 \\ -x, \text{ if } x < 0\end{cases}$
Here is the positive one
$f(x)=\frac{2x}{\sqrt{x^2(1+\frac{1}{x}+\frac{1}{x^2})}}$
Since we are going to positive infinity we get that $|x|=x$, so
$\lim_{x \to \infty}f(x)=\lim_{x \to \infty}\frac{2}{\sqrt{1+\frac{1}{x}+\frac{1}{x^2}}}=2$
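The limits above can be sanity-checked numerically; here is a minimal sketch in plain Python (the helper name f is ours, not from the thread):

```python
import math

def f(x):
    # f(x) = 2x / sqrt(x^2 + x + 1); the radicand (x + 1/2)^2 + 3/4 is
    # always positive, so the domain is all reals (part A).
    return 2*x / math.sqrt(x*x + x + 1)

# Approach +/- infinity numerically to estimate the horizontal asymptotes.
print(f(1e9))   # very close to  2  -> asymptote y = 2
print(f(-1e9))  # very close to -2  -> asymptote y = -2
```

This only illustrates the limits; the algebraic argument above (factoring x^2 out of the radical and using |x|) is what justifies them.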
Last edited by MaxJasper; September 27th 2012 at 01:59 PM.
Re: Function homework help
A) Correct. Inside the square root you have, after completing the square, $(x+1/2)^2 + 3/4$, which is never 0 (bad for division) or negative (bad for square root).
B) Correct. Algebraically, you evaluate the two limits: $\lim_{x \rightarrow \infty} f(x)$ and $\lim_{x \rightarrow -\infty} f(x)$. Do you know how to do that?
C) This is two problems, because you have two lines (horizontal), and for each of them you're being asked if the line intersects the graph of the function.
Let's first look at the line y=-2. Suppose P is a point of intersection. What does that mean? It means that P is on the graph of the line y=-2, and P is on the graph of the function f. What does
that mean? If P = (a, b), then it means that b = f(a). That's what the graph of the function y=f(x) is - points with coordinates looking like (x, f(x)) where x is in the function's domain. The
same thing goes for the horizontal line y =-2. P = (a, b) is on that line means that P's y-coordinate, b, is the result of plugging P's x-coordinate, a, into the equation of the line.
OK, P = (a, b) is an intersection point of the graph of f and the graph of the horizontal line y=-2.
Therefore b = 2(a)/sqrt(a^2 + a + 1) (plugging P into the formula for the graph of f)
and b = -2 (plugging P into the formula for the graph of y = -2).
Note that when you "plug a in for x" in the formula for the horizontal line, nothing happens, because there is no x in that formula! It's the constant function that gives back a value -2 no
matter which x value you give it. So when you plugged a in for x, it produced -2. Plugging b in for y and you get: b = -2. That's it.
Now solve: b = 2(a)/sqrt(a^2 + a + 1) and b = -2.
Well, you've solved for b pretty easily! Plug that into the other equation,and then solve for a.
Solve for a: -2 = 2a/sqrt(a^2 + a + 1). Do you know how to do this?
When you're done, assuming it even has a solution, you'll need to plug a back into f to see if in fact f(a) = b = -2. It might not. In other words, CHECK your answer. This isn't checking for a
mistake, but for a legitimate math purpose that you'd need to do even if you were computationally infallible!
Now check the other horizontal asymptote, y = 2. If you repeat everything above for this case, you'll end up with:
Solve for a: 2 = 2a/sqrt(a^2 + a + 1).
Notice how similar it is to the previous problem when y=-2? They're the same except for a minus sign. To solve either equation you'll have to square it eventually (to undo that square root), and
as soon as you square it you'll lose the information as to whether it started out being the first equation, or the second. You won't know whether the a you end up solving for applies to the y=2
line or the y=-2 line! The only way to tell will be to check by evaluating f(a)! This is why that check isn't optional. Squaring both sides of an equation always loses information about whether
you had a + or a - prior to the squaring.
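The extraneous-root phenomenon described above can be checked directly: squaring either equation gives 4(a^2 + a + 1) = 4a^2, i.e. a = -1, for both sign choices. A small check (the helper name f is ours):

```python
import math

def f(x):
    return 2*x / math.sqrt(x*x + x + 1)

# Squaring 2a/sqrt(a^2+a+1) = +/-2 yields a = -1 for BOTH equations,
# because squaring discards the sign. Plugging back in settles which
# asymptote is actually crossed:
a = -1
print(f(a))  # -2.0: the graph crosses y = -2 at x = -1, and never crosses y = 2
```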
The Drcom-Client Open Source Project
Under the influence of the replacement control, the system will pass from the state (t, 0) to the state (t + 1, 1). Connect these points by a segment, on which we put costs equal to the cost of repair in the reporting year plus the cost of operation: f(t) + g(t, 0);
2. Suppose the system is in the state (t, f), t > 0. Then, as before, the "keep" control leads to (t + 1, f + 1) at a cost of g(t, f); the "replace" control leads to (t + 1, 1), at a cost of f(t) + g(t, 0).
Using Tables 3 and 4, mark the segments corresponding to the possible state transitions in Fig. 10. For example, over the segment connecting the points (3, 2) and (4, 1) is the number 5.1; it is the sum of the repair costs, 2.3 (see Table 3), and the operating costs for the 4th year given that after the repair the age was 0 years, 2.8 (see Table 4).
Constrained optimization draw on the resulting network.
The optimization of the 6-th step. The final states of the system are known - the point (6; t). Analyze how to get to each end state. For this, we consider all the possible states that occur after
the 5th step.
State (5, 0). From it you can get to (6, 0), incurring costs of 10 (operation only), or to state (6, 1) with costs of 9.1 (repair plus subsequent operation). It follows that if the penultimate step
led to the point (5, 0), we should go to the point (6, 1) (we mark this direction with an arrow), and the minimal (unavoidable) costs of this transition, equal to 9.1, are entered in the circle at
point (5, 0).
State (5, 1). From it you can get to the point (6, 1) at a cost of 3 + 4.1 = 7.1, or to the point (6, 2) at a cost of 4.4. We choose the second transition, mark it with an arrow, and enter the minimal cost in the circle at point (5, 1).
Arguing in the same way for each state of the penultimate step, we find for any outcome of the 5th step the conditionally optimal control on the 6th step; we mark it in Fig. 10 with arrows and, in
addition, enter the minimal remaining costs of the last step in the appropriate circle.
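The backward-induction procedure sketched above can be expressed in a few lines of code. The excerpt's cost tables are garbled, so the cost functions below are made-up illustrative numbers, not the values from Tables 3 and 4; the state is (year t, machine age), and each year we either keep the machine or replace it.

```python
N = 6  # planning horizon in years

def op_cost(age):
    # Operating cost for one year with a machine of this age (hypothetical).
    return 2.0 + 0.9 * age

def replace_cost(age):
    # Cost of replacing a machine of this age (hypothetical).
    return 6.0 - 0.3 * age

best = {}  # best[(t, age)] = (minimal cost-to-go, chosen action)

def solve(t, age):
    """Minimal total cost from year t to the horizon, starting at this age."""
    if t == N:
        return 0.0
    if (t, age) not in best:
        keep = op_cost(age) + solve(t + 1, age + 1)
        repl = replace_cost(age) + op_cost(0) + solve(t + 1, 1)
        best[(t, age)] = min((keep, "keep"), (repl, "replace"))
    return best[(t, age)][0]

total = solve(0, 0)  # start of year 0 with a brand-new (age 0) machine
print(round(total, 2))
```

Working backward from the final year, as in the text, produces the same table: `best` records the conditionally optimal control for every state.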
Electrical resistance of an arc as a function of current - an engineering question...
04-11-2013, 06:32 AM #1
Senior Member
Join Date
Aug 2009
I have an arc between a tungsten electrode and a work piece. The length of the arc remains the same. If I vary the current over a reasonable range (so as not to deform the tungsten nor to reduce
the arc to instability) does the electrical resistance of the arc change?
The reason I ask is that I am attempting to quantify the heat input from a pulsed TIG arc as compared to a constant current arc.
p.s. If the resistance does change a bonus point to anyone who can point me to a formula for quantifying the resistance
For the calculation of the heat input (Q), the relationship used for constant
current process would be Q=(VxI/s)η, where V is the voltage, I
is the current, s is the welding speed and η is the efficiency of utilization of the heat generated.
The calculation of heat input for the pulsed-current
process is done by computing the mean current using the relationship Im = (Ip·tp + Ib·tb)/tT, where Im is the mean current, Ip is the peak current, Ib is the base current, tp is the time on peak pulse,
tb is the time on base current, and tT is the total time.
Does this help???
Owner Operator of JNT Mobile Welding & Repair LLC
Millermatic 350P Aluma Pro
Dynasty 200DX
Maxstar 150 STL
Trailblazer 302
Suitecase 12RC
Extreme 12VS
Extreme 8VS
Spoolmatic 30A
Miller HF251D-1
Passport Plus
Spoolmate 100
Hypertherm Powermax 45 and 85
Ingersoll Rand Engine Driven Compressor
Dake 75 ton H-Frame Press
JD Squared Model 32 Bender
Miller Digital Elite
Thanks jpence38,
I had not at this point considered the efficiency of utilization of the heat generated. An opportunity for further refinement. My original question related to what you refer to as the heat
generated, I am afraid I have to disagree on the pulse formula you provided. As I see it...
P = I x E (power or in this case heat generated = current times potential or voltage)
E = I x R (where R is resistance) so that
P = I^2 x R (think of "I squared R" losses in a conductor)
My formula for pulsed current would be:
P = Ip^2 x Tp x Rp + Ib^2 x Tb x Rb (where the p parameters are peak and b parameters are base or background)
If I wish to determine the heat dissipated by my torch when in pulse mode and I assume that the resistance of the torch does not change I find:
Iequiv^2 x R = Ip^2 x Tp x R + Ib^2 x Tb x R
and thus I can divide through by R to get:
Iequiv = SQRT(Ip^2 x Tp + Ib^2 x Tb)
Am I in error in my thinking???
If I set my Dynasty 200DX to its default pulse parameters where background amps are 25% of peak amps and peak time is 40 percent I compute that my WP9 torch, rated at 125 amp continuous, can thus
carry a peak amperage of about 190.
If the arc resistance stays the same between 190 amps and 48 amps (and the utilization efficiency is also the same) I would conclude that these pulse parameters would be adding heat to the
weldment at the same rate as a 125 amp continuous current.
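Under the equal-resistance assumption, the equivalent continuous current worked out above can be checked numerically (the function and variable names here are ours):

```python
import math

def i_equiv(ip, ib, tp):
    # Heat-equivalent (I^2 R) current for a two-level pulse train,
    # assuming the arc resistance is the same at both levels:
    # Iequiv = sqrt(Ip^2 * tp + Ib^2 * tb), with tb = 1 - tp.
    tb = 1.0 - tp
    return math.sqrt(ip**2 * tp + ib**2 * tb)

# Dynasty default pulse settings quoted above: background = 25% of peak,
# peak time = 40%. Then Iequiv = Ip * sqrt(0.4 + 0.25^2 * 0.6), so a torch
# rated 125 A continuous could carry a peak of roughly:
factor = math.sqrt(0.4 + 0.25**2 * 0.6)
print(round(125 / factor))                # about 189 A peak (the "about 190" above)
print(round(i_equiv(190, 47.5, 0.4), 1))  # about 125.7 A equivalent
```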
Perhaps I need to setup some experiments to gather data. The resistance I can determine by measuring the voltage drop across the arc. The efficiency will be a little more challenging. But I have
some ideas.
Thanks again,
When I am looking at pulse welding, I generally want to use the time weighted average current.
Average current = Peak current * Time at peak + Background current * background time.
Time would be expressed as a fraction.
In the example given, with 190 amps peak, 25% background current, and 40% time high, I compute 48 amps as the background current.
Average current = 190 * 0.4 + 48 * 0.6 = 76 + 28.8 = 104.8, or about 105 amps average.
To use this formula, one would need to assume that the voltage in the arc does not vary significantly with the current. But I have not verified that this assumption is correct.
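The two figures of merit in this thread disagree, which is the crux of the discussion: the time-weighted average current and the heat-equivalent (RMS) current of the same pulse train differ noticeably. A quick comparison (variable names ours):

```python
import math

ip, tp = 190.0, 0.4  # peak current (A) and fraction of time at peak
ib, tb = 48.0, 0.6   # background current (A) and remaining fraction

i_avg = ip * tp + ib * tb                   # time-weighted average current
i_rms = math.sqrt(ip**2 * tp + ib**2 * tb)  # heat-equivalent (I^2 R) current

print(round(i_avg, 1))  # 104.8
print(round(i_rms, 1))  # 125.8
```

If arc voltage were truly constant, heat input would track the average current; if the arc behaved like a fixed resistance, it would track the RMS value. The real arc is somewhere in between, which is what the proposed experiment would measure.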
One of the problems with Ken's view is that he is looking as the arc as if it was an ideal resistive load. I doubt very much that this is the case.
The experiment is really simple to perform. Set up the torch with some kind of fixture, to insure that the arc length is constant. Vary the current, and plot the voltage. However, the high
frequency voltage is likely to confuse or even damage most sophisticated voltmeters. This is a case where a really basic voltmeter might work better.
Thanks Richard. I am not sure of the resistance characteristics of an arc, thus my original question. On the other hand, the heat lost to resistance is definitely a function of the SQUARE of the
current. That is why utility power is transmitted at high voltages (low currents) and then transformed into low(er) voltages at the point of use. I maintain that the heat dissipated by a TIG
torch is a time weighted average of the square of the peak and background currents.
My experimental rig as I envision it is as follows:
Starting with a piece of aluminum scrap 5" in diameter and about 6" long (I would use a block of copper but I happen to have this) I will drill a hole and insert a piece of 1/8" tungsten a couple
or 3 inches into the block with 1/8" or so sticking out. This will be my work piece/heat sink. I will then make a jig to hold the TIG torch about 1/8" away from the tungsten in the work piece
with a lever to allow me to bring the electrode in the torch into contact with the one in the work piece. I will then set the power source for panel amp control and lift arc start. That will
prevent the HF from zapping the volt meter as you quite rightly pointed out. I should then be able to start and maintain a consistent arc while measuring the voltage at different amperages.
If I decide to get really creative I could thermally insulate the aluminum with some fiberglass or such, and drill another hole in it to accommodate a thermistor temperature sensor. I could then run
the arc at a given current for a period of time, allow the temperature of the aluminum to reach uniformity, and take a reading. Cool it back down, try another current, etc. This would allow
me to see how much of the arc heat was deposited in the weldment.
I think that your overall plan is good. However, I suspect that an arc would generate enough RF to confuse many voltmeters. If you see erratic results, then try another voltmeter. I suspect that
an analog voltmeter would work best.
I'm not sure I understand the point of the original question, or what use you intend to put the results to, but...
A DC arc follows Ohm's law: V = R x i
An AC arc can be approximated by using Ohm's law on the RMS (root mean square) values. Use a true-RMS digital multimeter to read them directly.
Power (in Watts) is easy to calculate using W = V x i = R x i^2
A digital meter (even a cheapo) will always be more accurate & have less impact on the circuit than an analog (even an expensive one). As long as the actual working voltage doesn't exceed what
the meter is rated for, it won't fry.
But the power you calculate depends on exactly where you take your other measurements. If you measure voltage across the power supply, you'll be calculating power from the PS. For the power
(heat) lost by the arc, measure voltage from the tip of the torch to the work surface. For heat due to current flow in the work, measure voltage from the arc to the ground electrode. But there
will be heat lost by the arc & absorbed by the work which would be very difficult to calculate (even for NASA).
If you want to know how hot your work piece is getting, use a cheap non-contact (IR) thermometer.
Last edited by Steve83; 04-12-2013 at 10:09 AM.
Walk softly & carry a BIG SIX ! ! !
MM211 + SM100
I have an arc between a tungsten electrode and a work piece. The length of the arc remains the same. If I vary the current over a reasonable range (so as not to deform the tungsten nor to reduce
the arc to instability) does the electrical resistance of the arc change?
The reason I ask is that I am attempting to quantify the heat input from a pulsed TIG arc as compared to a constant current arc.
p.s. If the resistance does change a bonus point to anyone who can point me to a formula for quantifying the resistance
The arc has an effective _negative_ resistance; typically, voltage decreases with increasing current (opposite of a true resistor). This is why an arc without a welding machine to regulate it is
unstable, it will consume as much current as it can take until the source blows up. Unfortunately the value depends on many variables so it is not really simple to calculate; and yes, it is not a
straight line (for example, it shoots very high when voltage gets down near the ionization potential of the gas).
The total power at any instant is still V*I, so if you succeed in measuring both as you're proposing you will indeed find what you want. Interested to hear the answer.
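The negative differential resistance described above is often illustrated with the classic Ayrton form V = a + b/I for a fixed arc length. The constants below are arbitrary placeholders, not measured TIG values:

```python
# Ayrton-style V-I curve for a fixed arc length: V = a + b/I.
# Voltage falls as current rises, so the differential resistance
# dV/dI = -b/I^2 is negative, even though V/I stays positive.
a, b = 10.0, 400.0  # hypothetical constants for one fixed arc length

for i_amps in (50, 100, 150, 200):
    v = a + b / i_amps
    print(i_amps, "A ->", round(v, 1), "V, V/I =", round(v / i_amps, 3), "ohm")
```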
I'd be interested in the results you obtain.
Can you please post a video showing your setup and measuring the voltage of the arc?
Kinematics and Snell's Law
The phase factors of all three waves must be equal on the actual boundary itself, hence
(k · x)|_{z=0} = (k′ · x)|_{z=0} = (k″ · x)|_{z=0}
as a kinematic constraint for the wave to be consistent. That is, this has nothing to do with ``physics'' per se, it is just a mathematical requirement for the wave description to work. Consequently
it is generally covered even in kiddy-physics classes, where one can derive Snell's law just from pictures of incident waves and triangles and a knowledge of the wavelength shift associated with the
speed shift with a fixed frequency wave.
k sin θ_i = k sin θ_l = k′ sin θ_r,
which is both Snell's Law and the Law of Reflection, where we use the index of refraction, defined by n = √(μϵ/μ₀ϵ₀) = c/v.
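Snell's law as derived here is easy to evaluate numerically; here is a small sketch (function name ours), including the total-internal-reflection case where no real transmitted angle exists:

```python
import math

def refract_angle(n1, n2, theta_i_deg):
    # Snell's law: n1 sin(theta_i) = n2 sin(theta_t).
    # Returns the transmitted angle in degrees, or None when the incident
    # angle exceeds the critical angle (total internal reflection).
    s = (n1 / n2) * math.sin(math.radians(theta_i_deg))
    if abs(s) > 1.0:
        return None
    return math.degrees(math.asin(s))

print(round(refract_angle(1.0, 1.5, 30.0), 2))  # air -> glass: about 19.47 degrees
print(refract_angle(1.5, 1.0, 45.0))            # glass -> air past ~41.8 deg: None
```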
Robert G. Brown 2007-12-28
|
{"url":"http://phy.duke.edu/~rgb/Class/phy319/phy319/node39.html","timestamp":"2014-04-20T14:43:01Z","content_type":null,"content_length":"6749","record_id":"<urn:uuid:fc9bc029-dc8c-4af6-adf8-18b36d5711a4>","cc-path":"CC-MAIN-2014-15/segments/1397609538787.31/warc/CC-MAIN-20140416005218-00552-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Comparing Complex Numbers to Clifford Algebra
1 Comparing Complex Numbers to Clifford Algebra
As the saying goes, learning proceeds from the known to the unknown.
There are at least two possibilities:
1) If you are starting from scratch, a nice logical path would be to learn things in the following order:
• Plain old numbers (scalars).
• Vectors.
• Clifford Algebra in two dimensions.
• Complex numbers can be understood as a subset of Clifford Algebra in two dimensions, as discussed below.
• Clifford Algebra in higher dimensions.
• Quaternions can be understood as a subset of Clifford algebra in three dimensions. Also note that Pauli spin matrices are isomorphic to quaternions. In particular, starting from complex numbers (or
otherwise), learning about Clifford Algebra is probably easier and better than trying to figure out quaternions and/or spin matrices directly.
2) If, however, you already have experience with complex numbers (and vectors), you can use the correspondences discussed below to jump-start your understanding of Clifford Algebra.
This document is also available in PDF format. You may find this advantageous if your browser has trouble displaying standard HTML math symbols.
Complex numbers: A complex number has a real part and an imaginary part.
Clifford algebra: A multivector has a scalar part, a vector part, a bivector part, et cetera. For an explanation of these concepts, related concepts, terminology, et cetera, including the geometric and pictorial representation of these objects, see reference 1, reference 2, reference 3, and reference 4.

Complex numbers: Adding a real number to an imaginary number is like adding apples and oranges. But there's nothing wrong with that. People do it all the time.
Clifford algebra: Adding a scalar and a vector plus a bivector et cetera is like adding apples and oranges plus pears et cetera. But there's nothing wrong with that. You can't safely compare apples and oranges, but that's a separate issue.

Complex numbers: Due to a quirk in the terminology, the imaginary part of a complex number does not refer to an imaginary number, but refers instead to the real number multiplying i. If z = x+iy (where x and y are real) the imaginary part of z is y.
Clifford algebra: The terminology for Clifford Algebra does not share this quirk. The vector part is a vector. The bivector part is a bivector.

Complex numbers: Consider the subset of complex numbers where the imaginary part is restricted to be zero. This corresponds to the plain old scalars.
Clifford algebra: Consider the subset of multivectors where everything but the grade=0 part is restricted to be zero. This corresponds to the plain old scalars.

Complex numbers: Multiplication is associative and distributes over addition.
Clifford algebra: Multiplication is associative and distributes over addition.

Complex numbers: The ordinary product of two complex numbers p and q is written p q without any special operator symbol.
Clifford algebra: The geometric product of two multivectors P and Q is written P Q without any special operator symbol. This geometric product is primary and fundamental. Other operations, including the dot product and the wedge product, will be defined in terms of the geometric product.

Complex numbers: We postulate the existence of the imaginary unit (i), whereupon all the other complex numbers can be created by multiplication and addition.
Clifford algebra: We postulate the existence of some number of vectors (γ[1], γ[2] …), whereupon all the other multivectors can be created by multiplication and addition.

Complex numbers: The imaginary unit (i) does not correspond to a Clifford-Algebra vector, but rather a bivector: x+iy corresponds to x + γ[1]γ[2] y.
Clifford algebra: Consider the Clifford Algebra in two dimensions and restrict attention to the subset of multivectors where the grade=1 part is zero. This subset is closed under multiplication. This subset is isomorphic to the complex numbers.

Complex numbers: Multiplication is commutative.
Clifford algebra: Multiplication of vectors is commutative if the vectors are collinear. Multiplication of vectors is anticommutative (pq = −qp) if the vectors are orthogonal. In general multiplication is neither commutative nor anticommutative. Most things in the real world are non-commutative. Putting on your socks doesn't commute with putting on your shoes.

Complex numbers: The complex number system doesn't have vectors, just grade=0 real things and grade=2 imaginary things. The imaginary unit (i) is not constructed from vectors but exists by fiat.
Clifford algebra: A blade of grade r is defined to be the product of r mutually-orthogonal vectors. If you have a bunch of vectors but don't know for sure that they are orthogonal, you can express the wedge product in terms of permuted geometric products:

q[1]∧q[2]∧q[3]⋯q[r] := (1/r!) ∑_π sign(π) q[π(1)] q[π(2)] q[π(3)]⋯ q[π(r)]        (1)

where the sum runs over all r! possible permutations π, and sign(π) is +1 for even permutations and −1 for odd permutations. This will be a blade of grade r if the vectors are linearly independent; otherwise it will be zero.

So we see that the wedge product is the completely antisymmetric product. For a discussion of the physical interpretation, in terms of the area of parallelograms and the volume of parallelepipeds, see reference 5.

The wedge product is associative and distributes over addition.

We have not assumed the existence of a right-handed basis. Indeed we have not assumed the existence of a basis of any kind.

Complex numbers: Some complex numbers are pure real. Some complex numbers are pure imaginary.
Clifford algebra: We say a multivector is homogeneous if it is a blade or a sum of blades all of the same grade. In D=3 or less, every homogeneous multivector is a blade. In D=4 and higher, you can have things like γ[1]γ[2] + γ[3]γ[4], which is homogeneous but not a blade.

Complex numbers: We can select out the real part ℜ(z) or the imaginary part ℑ(z) for any complex number z.
Clifford algebra: We can select out the grade=r part ⟨M⟩[r] for any multivector M.

Complex numbers: We know how to form the complex conjugate of a complex number: (2+5i)* = (2−5i).
Clifford algebra: We know how to form the reverse of a multivector: for every term that is a product of vectors, write the factors in reverse order: (2+5γ[1]γ[2])∼ = (2+5γ[2]γ[1]).

Complex numbers: Given two complex numbers p and q, their wedge product is p∧q = ½[(pq)−(pq)*]. This is pure imaginary, and constitutes the high-grade piece of the ordinary product. This has norm |p||q|sin(θ), where θ is the angle between the two vectors, which agrees with the ideas in reference 5.
Clifford algebra: Quite generally, the wedge product will be the high-grade part of the geometric product. That is, if A has grade=r and B has grade=s, then A∧B = ⟨A B⟩[r+s]. This is a consequence of the previous definitions, since only the high-grade piece will survive the antisymmetrization. It is often much easier to pick out the high-grade piece by eye than to actually carry out the sum indicated in equation 1. (If you expand all vectors in terms of components, using an orthogonal basis, it is particularly easy to be certain of the grade of any given term.)

Complex numbers: Given two complex numbers p and q, their dot product is p·q = ½[(pq)+(pq)*]. This is pure real, and constitutes the low-grade piece of the ordinary product. This has norm |p||q|cos(θ), where θ is the angle between the two vectors.
Clifford algebra: Quite generally, we define the dot product as follows: the dot product of a scalar with anything is zero. Otherwise, the dot product of two multivectors is the low-grade piece of the geometric product. That is, if A has grade=r and B has grade=s, then A·B = ⟨A B⟩[|r−s|].

Complex numbers: The ordinary product can be written as the sum of the wedge product and the dot product: p q = p∧q + p·q.
Clifford algebra: If either P or Q is a vector, then P Q = P∧Q + P·Q. In general, though, dot and wedge don't exhaust the possibilities. If P has grade r and Q has grade s, the geometric product will contain contributions of every grade from |r−s| up to r+s, counting by twos.

Complex numbers: The product of a complex number with its conjugate is a real scalar.
Clifford algebra: The product of a blade with its reverse is automatically a scalar. We assume all scalars are real, because anything you could ever want to do with complex numbers can be done within the Clifford Algebra formalism.

Complex numbers: We use this to define the squared norm of a complex number: if z = x+iy, then

|z|^2 := z z* = x^2 + y^2        (2)

Clifford algebra: We use this to define the squared norm of a multivector: if M = a + bγ[1] + cγ[2]γ[3], where a, b, and c are scalars, then

||M||^2 := ⟨M M∼⟩[0] = a^2 + b^2 + c^2        (3)
2 References
|
{"url":"http://www.av8n.com/physics/complex-clifford.htm","timestamp":"2014-04-20T05:49:20Z","content_type":null,"content_length":"25165","record_id":"<urn:uuid:7e1a76c5-1fc5-4777-b28f-a317a0c66496>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00458-ip-10-147-4-33.ec2.internal.warc.gz"}
|
MATH M441 3228 Introduction to Partial Differential Equations with Applications I
Mathematics | Introduction to Partial Differential Equations with Applications I
M441 | 3228 | Jin
P: M301 or M303, M311, and M343. Derivation and methods of solution of classical partial differential equations of mathematical physics.

H447 Summer Institute in Mathematical Models (4 cr.) (S/F grading) P: M303, M365. Introduction to mathematical models and computer tools for modeling. Mathematical topics include games, graphs, queues, growth processes, and optimization. Emphasis on small group problem solving and on topics which can be incorporated into the high school curriculum.
|
{"url":"http://www.indiana.edu/~deanfac/blfal99/math/math_m441_3228.html","timestamp":"2014-04-20T03:18:26Z","content_type":null,"content_length":"1110","record_id":"<urn:uuid:8355a2f0-b16e-4e2a-9dcc-7444531e1f97>","cc-path":"CC-MAIN-2014-15/segments/1397609537864.21/warc/CC-MAIN-20140416005217-00055-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Re: CFGs vs. "declare variable before use"
"mefrill" <mefrill@yandex.ru>
4 Jun 2005 15:07:34 -0400
From comp.compilers
| List of all articles for this month |
From: "mefrill" <mefrill@yandex.ru>
Newsgroups: comp.compilers
Date: 4 Jun 2005 15:07:34 -0400
Organization: http://groups.google.com
References: 05-05-21605-06-020
Keywords: syntax, theory
Posted-Date: 04 Jun 2005 15:07:34 EDT
> I am presuming that you mean to pick an element z from L; then show
> (as you did) that we cannot apply the pumping lemma to the element z
> (as you defined it); therefore we can conclude that L is not context
> free. (I am a bit confused by your statement "it is not hard to prove
> z cannot belong to L", because clearly, z is in L).
I meant "z cannot belong to L if L is defined by a KF-grammar". My thinking was the following: if L is defined by a KF-grammar, then the pumping lemma should apply to L. And I show that this is not true by selecting a string z which is (we know) in L, but which cannot belong to L, as proven from the pumping lemma. This is the standard method of proof by contraposition: to prove A-->B you prove !B-->!A. Here A is "L satisfies the declare-before-use rule" and B is "L is not a KF-language". We suppose !B = "L IS a KF-language" and prove !A = "L DOES NOT satisfy the declare-before-use rule". To do this we take the simplest model of a language with the "declare before use" rule. Each language with such a rule has structure like "declaration w; statements using w", where w is an identifier. We drop everything unnecessary and get the "ww" language. What model could be simpler? For the proof I used the property that the language L1={0^n 1^n 0^n 1^n : n belongs to N} is contained in L={ww}. It is the simplest way to prove it. But it is clear that this is only a convenient proof and does not reflect WHY the "declare before use" rule does not allow a language to be KF. To understand this, I suggest you think about a simple fact: if we want ID to be in the grammar, we must drop the abstract ID (coming from lex) and include in our KF-grammar for the language the set of rules that generate ID. This set must always generate ID in this way: generate the pair "declare ID" with some number of "using ID", and then move each "using ID" further along in the program text. So the problem really concerns the ww language. And the problem really is the generation of the second w, which is context-dependent on the first w. The simplest way to prove this "context dependence" is using the pumping lemma. Try to think and find something else; but it seems to be harder.
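The containment the post relies on - that every string 0^n 1^n 0^n 1^n lies in {ww} - is easy to check mechanically. A small Python sketch (the helper is_ww is illustrative, not from the post):

```python
def is_ww(s):
    # membership test for L = {ww}: even length and matching halves
    n = len(s)
    return n % 2 == 0 and s[:n // 2] == s[n // 2:]

# the witness family 0^n 1^n 0^n 1^n sits inside {ww}, with w = 0^n 1^n
for n in range(1, 8):
    z = "0" * n + "1" * n + "0" * n + "1" * n
    assert is_ww(z)
```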
Post a followup to this message
Return to the comp.compilers page.
Search the comp.compilers archives again.
|
{"url":"http://compilers.iecc.com/comparch/article/05-06-025","timestamp":"2014-04-19T22:13:14Z","content_type":null,"content_length":"7486","record_id":"<urn:uuid:68738c0f-356f-4352-b93b-4a1f5d8a2746>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00390-ip-10-147-4-33.ec2.internal.warc.gz"}
|
[Haskell-cafe] Variants of a recursive data structure
Niklas Broberg niklas.broberg at gmail.com
Thu Aug 3 11:08:28 EDT 2006
If you want the non-labelledness to be guaranteed by the type system,
you could combine a GADT with some type level hackery. Note the flags
to GHC - they're not that scary really. :-)
In the following I've used the notion of type-level booleans (TBool) to flag whether or not an expression could contain a label. A
term of type Exp TFalse is guaranteed to not contain any labels, a
term of type Exp TTrue is guaranteed *to* contain at least one label
somewhere in the tree, and a term Exp a could contain a label, but
doesn't have to.
{-# OPTIONS_GHC -fglasgow-exts -fallow-overlapping-instances
-fallow-undecidable-instances #-}
module Exp where
data TTrue
data TFalse
class TBool a
instance TBool TTrue
instance TBool TFalse
class (TBool a, TBool b, TBool c) => Or a b c
instance Or TFalse TFalse TFalse
instance (TBool x, TBool y) => Or x y TTrue
data TBool l => Exp l where
Num :: Int -> Exp TFalse
Add :: Or a b c => Exp a -> Exp b -> Exp c
Label :: String -> Exp a -> Exp TTrue
type SimpleExp = Exp TFalse
unlabel :: Exp a -> SimpleExp
unlabel n@(Num _) = n
unlabel (Add x y) = Add (unlabel x) (unlabel y)
unlabel (Label _ x) = unlabel x
On 8/3/06, Klaus Ostermann <ostermann at informatik.tu-darmstadt.de> wrote:
> Hi all,
> I have a problem which is probably not a problem at all for Haskell experts,
> but I am struggling with it nevertheless.
> I want to model the following situation. I have ASTs for a language in two
> variants: A "simple" form and a "labelled" form, e.g.
> data SimpleExp = Num Int | Add SimpleExp SimpleExp
> data LabelledExp = LNum Int String | LAdd LabelledExp LabelledExp String
> I wonder what would be the best way to model this situation without
> repeating the structure of the AST.
> I tried it using a fixed point operator for types like this:
> data Exp e = Num Int | Add e e
> data Labelled a = L String a
> newtype Mu f = Mu (f (Mu f))
> type SimpleExp = Mu Exp
> type LabelledExp = Mu Labelled Exp
> The "SimpleExp" definition works fine, but the LabeledExp definition doesn't
> because I would need something like "Mu (\a -> Labeled (Exp a))" where "\"
> is a type-level lambda.
> However, I don't know how to do this in Haskell. I'd need something like the
> "." operator on the type-level. I also wonder whether it is possible to
> curry type constructors.
> The icing on the cake would be if it would also be possible to have a
> function
> unlabel :: LabeledExp -> Exp
> that does *not* need to know about the full structure of expressions.
> So, what options do I have to address this problem in Haskell?
> Klaus
> _______________________________________________
> Haskell-Cafe mailing list
> Haskell-Cafe at haskell.org
> http://www.haskell.org/mailman/listinfo/haskell-cafe
More information about the Haskell-Cafe mailing list
|
{"url":"http://www.haskell.org/pipermail/haskell-cafe/2006-August/017126.html","timestamp":"2014-04-18T00:28:15Z","content_type":null,"content_length":"6455","record_id":"<urn:uuid:645f5ebc-cdf2-464f-a2de-a43be444baf4>","cc-path":"CC-MAIN-2014-15/segments/1398223206770.7/warc/CC-MAIN-20140423032006-00303-ip-10-147-4-33.ec2.internal.warc.gz"}
|
How do I calculate the volume of a tetrahedron?
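The answers in this thread were not captured; for reference, the standard approach is the scalar triple product: for vertices a, b, c, d, the volume is |det[b−a, c−a, d−a]|/6. A Python sketch (the vertex coordinates are illustrative):

```python
def tet_volume(a, b, c, d):
    # V = |det[b-a, c-a, d-a]| / 6 (scalar triple product)
    u = [b[i] - a[i] for i in range(3)]
    v = [c[i] - a[i] for i in range(3)]
    w = [d[i] - a[i] for i in range(3)]
    det = (u[0] * (v[1] * w[2] - v[2] * w[1])
         - u[1] * (v[0] * w[2] - v[2] * w[0])
         + u[2] * (v[0] * w[1] - v[1] * w[0]))
    return abs(det) / 6.0

# a right-corner tetrahedron on the unit axes has volume 1/6
print(tet_volume((0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1)))
```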
{"url":"http://openstudy.com/updates/50fbaf7fe4b0860af51e5489","timestamp":"2014-04-16T17:20:38Z","content_type":null,"content_length":"39612","record_id":"<urn:uuid:de97690d-a87c-4d19-8c8d-aa3a30f25084>","cc-path":"CC-MAIN-2014-15/segments/1397609524259.30/warc/CC-MAIN-20140416005204-00245-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Click on a 'View Solution' below for other questions:
Find the common factors of 8 and 12. View Solution
Find the common factors of 8 and 20. View Solution
Find the GCF of 44 and 48. View Solution
Find the GCF of 48 and 72. View Solution
Find the GCF of 15 and 20. View Solution
Lauren has 56 pink flowers and 63 yellow flowers. Find the greatest number of identical bouquets she can make, using both the flowers such that no flower is left. View Solution
What are the factors of 79? View Solution
Find the GCF of 60 and 75. View Solution
Find the GCF of 60 and 57. View Solution
Find the GCF of 32 and 40. View Solution
Find the GCF of 48 and 54. View Solution
Identify the GCF of 60 and 70. View Solution
Identify the GCF of 57 and 95. View Solution
Annie has 32 pencils and 64 balloons. Find the greatest number of identical pairs she can make using both the pencils and the balloons to distribute among the children. View Solution
Find the GCF of 90 and 108. View Solution
Sunny has 48 crayons and 72 plain cartoon pictures. Find the GCF of the number of crayons and the number of cartoon pictures. View Solution
Is 15 the GCF of 60 and 75? View Solution
Is 3 the GCF of 18 and 21? View Solution
What are the factors of 52? View Solution
Find the common factors of 15 and 18. View Solution
Find the GCF of 33 and 30. View Solution
Find the common factors of 6 and 12. View Solution
Find the GCF of 40 and 48. View Solution
Find the GCF of 45 and 60. View Solution
Find the GCF of 22 and 24. View Solution
Find the GCF of 18 and 24. View Solution
Find the GCF of 24 and 32. View Solution
Identify the GCF of 50 and 60. View Solution
Identify the GCF of 16 and 64. View Solution
The prime factors of 48 and 96 are in the figure. Which of the following choices is the product of the factors in the intersection of the circles? View Solution
Laura has 24 pencils and 40 cards. Find the greatest number of identical pairs she can make using both the pencils and the cards to distribute among the children. View Solution
Find the GCF of 25 and 30. View Solution
Find the GCF of 54 and 108. View Solution
Dennis has 96 crayons and 120 plain cartoon pictures. Find the GCF of the number of crayons and the number of cartoon pictures. View Solution
Sheela has 56 pink flowers and 63 yellow flowers. Find the greatest number of identical bouquets she can make, using both the flowers such that no flower is left. View Solution
The GCF of two prime numbers is ________. View Solution
Is 15 the GCF of 75 and 90? View Solution
Is 6 the GCF of 12 and 18? View Solution
What is the GCF of 68 and 56? View Solution
What is the GCF of 20 and 10? View Solution
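All of the GCF exercises above can be checked with Euclid's algorithm; a short Python sketch, run against a few of the listed problems:

```python
def gcf(a, b):
    # Euclid's algorithm: repeatedly replace (a, b) with (b, a mod b)
    while b:
        a, b = b, a % b
    return a

print(gcf(44, 48))  # 4
print(gcf(48, 72))  # 24
print(gcf(56, 63))  # 7 -> 7 identical bouquets in the flower problems
```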
|
{"url":"http://www.icoachmath.com/solvedexample/sampleworksheet.aspx?process=/__cstlqvxbefxaxbgdfbxkhmjf&.html","timestamp":"2014-04-16T19:41:21Z","content_type":null,"content_length":"79040","record_id":"<urn:uuid:9df226a3-36d2-465d-ac4a-e9bfbf9495d3>","cc-path":"CC-MAIN-2014-15/segments/1397609524644.38/warc/CC-MAIN-20140416005204-00622-ip-10-147-4-33.ec2.internal.warc.gz"}
|
La Jolla SAT Math Tutor
Find a La Jolla SAT Math Tutor
...I have also worked with my younger brother's friends as they progressed through their education. I was able to help them pass their classes and exams and ultimately go to various Universities.
I learned the different learning styles of each of the students, and did my best to work with them in ...
42 Subjects: including SAT math, reading, English, Spanish
...According to the CollegeBoard website, only about 4% of students improve their SAT scores by at least 100 points. After graduating from college, I earned GRE scores of 164V/158Q, placing me in the top 3% of test-takers. During two of my years at college, I worked with the Office of Admissions at ...
54 Subjects: including SAT math, Spanish, English, reading
...I use all the concepts that you will see in a high school and entry level college chemistry course quite regularly at both school and work. I have experience with both the AP Chemistry course
as well the SAT II subject tests as I managed to attain a perfect score on each of them. I obtained a minor in physics in college and was able to receive a top score on both the AP Physics B & C
19 Subjects: including SAT math, chemistry, calculus, writing
...I look forward to working with you! I have formally taught 2 years of 10th grade math. I lived in France for 5 years, minored in French in college and lived in West Africa for 2 years, teaching math in French at a local high school.
14 Subjects: including SAT math, French, geometry, ESL/ESOL
...Today I have hundreds of hours of experience, with the majority in Algebra and Statistics, and I would be comfortable well into college math. During the learning process, small knowledge gaps
from past courses tend to reappear as roadblocks down the line. By identifying and correcting these problems, I help students become effective independent learners for both current and future
14 Subjects: including SAT math, calculus, physics, geometry
|
{"url":"http://www.purplemath.com/La_Jolla_SAT_math_tutors.php","timestamp":"2014-04-16T19:05:23Z","content_type":null,"content_length":"24070","record_id":"<urn:uuid:377ff009-12ea-43cc-82f9-e8a225cefcfc>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00512-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Like equation solving, finding a function's minimum is in many cases impossible analytically, or is computationally infeasible (it would take too long to compute). To tackle these problems mathematicians invented a family of minimization methods called iterative solvers. The gradient descent method is one of them. The idea is very simple - the opposite direction to the gradient is the steepest descent direction, so in order to find a minimum we should just "follow the gradient".
There is still an open problem here - the gradient tells us the direction we should go, but does not tell us how far to go. This parameter is called the step size, and there are many methods to compute or estimate the optimal step size. In this tutorial we will use a constant step size. Another problem is when we should stop: clearly, stopping only when we reach the minimum is impractical, as it might take forever. There are many stopping criteria, and in this tutorial we will use the simplest one - a fixed number of iterations.
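The same idea, stripped of any library, fits in a few lines of Python (the quadratic objective and its hand-coded gradient are illustrative; the C# version below uses compiled terms and automatic differentiation instead):

```python
def gradient_descent(grad, x0, step_size, iterations):
    x = list(x0)
    for _ in range(iterations):
        g = grad(x)
        # step against the gradient, the steepest-descent direction
        x = [xi - step_size * gi for xi, gi in zip(x, g)]
    return x

# f(x, y, z) = (x-2)^2 + (y+4)^2 + (z-1)^2, minimized at (2, -4, 1)
grad = lambda v: [2 * (v[0] - 2), 2 * (v[1] + 4), 2 * (v[2] - 1)]
print(gradient_descent(grad, [0.0, 0.0, 0.0], step_size=0.01, iterations=1000))
```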
The code
We will create a method that, given a compiled term, an initial guess, the step size, and the number of iterations, attempts to find the minimum of the function represented by the compiled term.
static double[] GradientDescent(ICompiledTerm func, double[] init, double stepSize, int iterations)
{
    // clone the initial argument
    var x = (double[])init.Clone();

    // perform the iterations
    for (int i = 0; i < iterations; ++i)
    {
        // compute the gradient
        var gradient = func.Differentiate(x).Item1;

        // perform a descent step
        for (int j = 0; j < x.Length; ++j)
            x[j] -= stepSize * gradient[j];
    }

    return x;
}
Now we can use our small function to find the minimum of a function.
static void Main(string[] args)
{
    var x = new Variable();
    var y = new Variable();
    var z = new Variable();

    // f(x, y, z) = (x-2)² + (y+4)² + (z-1)²
    // the min should be (x, y, z) = (2, -4, 1)
    var func = TermBuilder.Power(x - 2, 2) + TermBuilder.Power(y + 4, 2) + TermBuilder.Power(z - 1, 2);
    var compiled = func.Compile(x, y, z);

    // perform optimization
    var vec = new double[3];
    vec = GradientDescent(compiled, vec, stepSize: 0.01, iterations: 1000);

    Console.WriteLine("The approx. minimizer is: {0}, {1}, {2}", vec[0], vec[1], vec[2]);
}
The output is:
{The approx. minimizer is: 1.99999999663407, -3.99999999326813, 0.999999998317033}
Which is pretty close to the real minimizer (2, -4, 1)
|
{"url":"http://autodiff.codeplex.com/wikipage?title=Optimization%20-%20simple%20gradient%20descent&referringTitle=Documentation","timestamp":"2014-04-18T15:53:12Z","content_type":null,"content_length":"33835","record_id":"<urn:uuid:42e3e459-9f64-4f53-9450-a484cb91031a>","cc-path":"CC-MAIN-2014-15/segments/1397609533957.14/warc/CC-MAIN-20140416005213-00145-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Ideal Problem
March 6th 2012, 06:54 AM
Ideal Problem
Let I={ (2x,3x) such that x is in Z mod 6} in R= Z mod 6 x Z mod 6
The book says this isn't an ideal, but I'm not seeing it.
If r=(a,b) and k=(2x,3x) where a,b,x are in Z mod 6, then rk=(a·2x, b·3x). But couldn't I commute it into (2ax, 3bx) where ax, bx are in Z mod 6? Or am I not allowed to move the 2, since it's not explicitly in Z mod 6?
EDIT: Nevermind, I figured it out. ax must equal bx, but it isn't so.
|
{"url":"http://mathhelpforum.com/advanced-algebra/195660-ideal-problem-print.html","timestamp":"2014-04-20T07:07:09Z","content_type":null,"content_length":"3367","record_id":"<urn:uuid:3f882b21-14fb-4722-837e-ee8fe8b73df0>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00135-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Patent US20080199093 - Apparatus and method for reversible data hiding for JPEG images
Reversible, also called invertible or lossless, image data hiding can imperceptibly hide data in digital images and can reconstruct the original image without any distortion after the hidden data has
been extracted out. Among the various digital image formats, Joint Photographic Experts Group (JPEG) formats are by far used the most often nowadays. Hence, how to reversibly hide data in a JPEG
image file is important and useful for many applications including authentication, secure data systems, and covert communications. For example, linking a group of data for some purpose to a cover
image in a reversible way is particularly critical for medical images, high accuracy images, images used for legal purpose and other environments in which the original image is of great importance.
Furthermore, the invented technology is expected to be able to apply to the I-frame of Motion Picture Experts Group (MPEG) video for various applications mentioned above.
Reversible data hiding by using the histogram shifting techniques has been reported in the literature. Reversible data hiding was first applied to the histogram of an image in the spatial domain in
Z. Ni, Y. Q. Shi, N. Ansari, W. Su, “Reversible data hiding,” IEEE International Symposium on Circuits and Systems (ISCAS03), Bangkok, Thailand, May 2003; and A. van Leest, M. van der Veen, and F.
Bruekers, “Reversible image watermarking,” Proceedings of IEEE International Conference on Image Processing (ICIP), II-731-4 vol. 3, September 2003. In addition, the technique was applied to the
histogram of DCT domain and integer wavelet transform domain. In general, the histogram shifting technique has achieved dramatically improved performance in terms of embedding capacity versus visual
quality of stego image measured by peak signal noise ratio (PSNR). However, none of the above-discussed lossless data hiding methods apply to JPEG images.
In fact, there are not many reversible data hiding techniques that have been developed for JPEG images to date. Some background art techniques are reported in J. Fridrich, M. Goljan and R. Du,
“Invertible authentication watermarking for JPEG images,” Proceedings of IEEE Information Technology and Computing Conference (ITCC), pp. 223-227, Las Vegas, Nev., USA, April 2001; J. Fridrich, M.
Goljan, and R. Du, “Lossless data hiding for all image formats,” Proc. of SPIE, Electronic Imaging 2002, Security and Watermarking of Multimedia Contents IV, vol. 4675, San Jose, Calif., pp. 572-583,
2002; and J. Fridrich, M. Goljan, Q. Chen, and V. Pathak, “Lossless data embedding with file size preservation,” Proc. SPIE Electronic Imaging 2004, Security and Watermarking of Multimedia Contents,
San Jose, Calif., January 2004.
In the first two background art references cited above, the least significant bit plane of some selected JPEG mode coefficients is losslessly compressed, thus leaving space for reversible data
embedding. Consequently the payload is rather limited. In the third background art reference cited above, the run-length encoded alternating current (AC) coefficients are modified to losslessly embed
data into JPEG images, aiming at keeping the size of JPEG file after lossless data hiding remaining unchanged. However, the payload is still rather limited (i.e., the highest payload in various
experimental results reported in the third paper is 0.0176 bits per pixel (bpp)).
Embodiments of the invention are directed at overcoming the foregoing and other difficulties encountered by the background arts. In particular, embodiments of the invention provide a novel technique
based on histogram pairs applied to some mid- and lower-frequency JPEG quantized 8×8 block discrete cosine transform (DCT) coefficients (hereinafter referred to as JPEG coefficients).
Embodiments of the invention provide methods using histogram pair techniques that are applied to the mid- and lower-frequency coefficients of 8×8 blocks DCT. Experimental results are presented below
that demonstrate effectiveness of these methods. The data embedding capacity ranges from 0.0004, to 0.001, 0.1, up to 0.5 bits per pixel (bpp) for one-time data embedding, while the visual quality of
images with hidden data measured by both subjective and objective PSNR remains high. The increase of size of image files due to data hiding is not noticeable, and the shape of histogram of the mid-
and lower-frequency coefficients of DCT remains similar. It works for various JPEG Q-factors.
FIG. 1A is an exemplary high-level flow diagram of: (a) a method for lossless data embedding and (b) a method for data extraction from JPEG images.
FIG. 1B is an exemplary flow diagram of: (a) a method for lossless data embedding in JPEG images and (b) a method for data extraction from JPEG images.
FIG. 1C is an exemplary detailed block diagram of methods for lossless data embedding and data extraction from JPEG image files.
FIG. 2A is an exemplary spatial representation of image data and a histogram of the image data, with a threshold T=2.
FIG. 2B is an exemplary spatial representation of image data and a histogram of the image data after histogram shifting to form a histogram pair.
FIG. 2C is an exemplary spatial representation of image data and a histogram of the image data after embedding bit sequence D=[0,1,1].
FIG. 2D is an exemplary bit sequence D=[0,1,0,0,1,0,1,1,0] embedded in two loops.
FIG. 2E is a 5×5 image data embedding example (to be embedded bit sequence is D=[1 10 001].
FIG. 2F is histograms associated with FIG. 2E.
FIG. 3A is another exemplary spatial representation of image data and its histogram of the image data, with a threshold T=2 and S=−2.
FIG. 3B is the image and histogram after histogram shifting to form two histogram pairs.
FIG. 3C is the image and histogram after the bit sequence D=[0 1 10] has been embedded.
FIG. 4A is an exemplary set of selected JPEG coefficients for data embedding {16, 36}.
FIG. 4B is another exemplary set of selected JPEG coefficients for data embedding {4, 36}.
FIG. 4C is yet another exemplary set of selected JPEG coefficients for data embedding {16, 49}.
FIG. 5 is an exemplary plot of selected JPEG coefficients in a zigzag scan from 16 to 36: {16, 36} for various images.
FIG. 6 is an exemplary plot of selected JPEG coefficients in a zigzag scan from 4 to 36: {4, 36} for various images.
FIG. 7 is an exemplary plot of JPEG coefficients in a zigzag scan from 16 to 49: {16, 49} for various images.
FIG. 8 is an exemplary plot of PSNR versus achieved by the lossless data hiding method of embodiments of the invention with three commonly used JPEG images in regular form.
FIG. 9 is an exemplary plot of PSNR versus payload achieved by the lossless data hiding method of embodiments of the invention with three commonly used JPEG images in log (log (x)) form.
FIG. 10 is an exemplary plot of PSNR versus payload achieved by the lossless data hiding method of embodiments of the invention with three commonly used JPEG images (with small payload).
FIG. 11 is an exemplary plot of PSNR versus payload achieved by the lossless data hiding method of embodiments of the invention with three commonly used JPEG images (with large payload).
FIG. 12 is an original 512×512 “Lena” JPEG image with Q-factor 80.
FIG. 13 is the “Lena” JPEG image after embedding 100 bits (0.0004 bits per pixel (bpp)).
FIG. 14 is the “Lena” JPEG image after embedding 5000 bits (0.0191 bpp).
FIG. 15 is the “Lena” JPEG image after embedding 26,214 bits (0.1 bpp).
FIG. 16 is the “Lena” JPEG image after embedding 131,072 bits (0.5 bpp).
FIG. 17 is an original 512×512 “Baboon” JPEG image with Q-factor 80.
FIG. 18 is the “Baboon” JPEG image after embedding 100 bits (0.0004 bpp).
FIG. 19 is the “Baboon” JPEG image after embedding 5000 bits (0.00191 bpp).
FIG. 20 is the “Baboon” JPEG image after embedding 26,214 bits (0.1 bpp).
FIG. 21 is the “Baboon” JPEG image after embedding 131,072 bits (0.5 bpp).
FIG. 22 is an original 512×512 “Barbara” JPEG image with Q-factor 80.
FIG. 23 is the “Barbara” JPEG image after embedding 100 bits (0.0004 bpp).
FIG. 24 is the “Barbara” JPEG image after embedding 5000 bits (0.00191 bpp).
FIG. 25 is the “Barbara” JPEG image after embedding 26,214 bits (0.1 bpp).
FIG. 26 is the “Barbara” JPEG image after embedding 131,072 bits (0.5 bpp).
FIG. 27. PSNR with hidden data versus Q-factors with 1000 bits embedded into 512×512 Lena image with various Q-factors (JPEG coefficient region {4, 36}).
Embodiments of the invention relate to data hiding. Data hiding techniques can be used for purposes such as copyright protection, authentication, annotation, and steganography. Reversible data hiding's defining characteristic is its reversibility, or losslessness. Reversible data hiding is mainly used for medical images (for legal consideration) and for military, remote-sensing, and high-energy physics images (for high accuracy). There is a need in the art for lossless data hiding methods that can be applied to JPEG images, allow for a sizable payload, and maintain the size of the JPEG file after lossless data hiding.
The principles of histogram pair based lossless data embedding are used in embodiments of the invention. A histogram, h(x), is the number of occurrences (i.e., the frequency) of feature x within a
set of samples X. In embodiments of the invention, the samples X are some selected JPEG quantized 8×8 DCT coefficients where the feature x is the JPEG coefficients' value. The x is either positive,
or negative integer, or zero, such as xε{−2, −1, 0, 1, 2, 3}. A histogram pair is defined as a part of the histogram, denoted by h=[m, n], where m and n are, respectively, the frequencies of two
immediately neighboring feature values xε{a, b} with a<b i.e., b=a+1, and one of the two frequencies (m and n) is 0.
Histogram pairs can be formulated via a process called histogram expansion. For example, via expanding, the histogram pair h=[m, 0] can be produced (note: the underline is used to mark the histogram
pair). The feature value whose frequency (i.e., h value) is not 0 is called the feature's original position. The feature value whose h value is 0 is called the feature's expansion position. For
embodiments of the invention, it is defined that when the feature value x is greater than or equal to 0, the histogram pair is of the format h=[m, 0], which means h(a)=m and h(b)=0, when the feature
value x is less than 0, the histogram pair is of h=[0, n], which means h(a)=0 and h(b)=n.
After the histogram pair is produced, lossless data embedding is possible. The data embedding rule may be as follows:
a. If the to-be-embedded bit is 0, the feature's original position is used; and
b. if the to-be-embedded bit is 1, the feature's expansion position is used.
Alternative embodiments of the invention may be implemented with the elements of the above rule reversed. Examples of embodiments of the invention with the above rules are discussed in the
following paragraphs. It is observed that after data embedding the histogram becomes more flat. When the histogram is completely flat, it is impossible to further embed data.
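A minimal Python sketch of this rule may help (the function names and the toy coefficient sequence are illustrative; the patent applies the rule to selected mid- and low-frequency JPEG coefficients and handles the negative side symmetrically):

```python
def embed(coeffs, bits, T):
    # capacity of this (positive-side) pass = count of coefficients equal to T
    assert len(bits) == sum(1 for c in coeffs if c == T)
    it = iter(bits)
    out = []
    for c in coeffs:
        if c > T:
            out.append(c + 1)         # histogram shift: vacates the bin at T+1
        elif c == T:
            out.append(T + next(it))  # bit 0 -> stay at T, bit 1 -> move to T+1
        else:
            out.append(c)
    return out

def extract(marked, T):
    bits, coeffs = [], []
    for c in marked:
        if c in (T, T + 1):
            bits.append(c - T)        # read the hidden bit
            coeffs.append(T)          # both values originated from T
        elif c > T + 1:
            coeffs.append(c - 1)      # undo the histogram shift
        else:
            coeffs.append(c)
    return bits, coeffs

marked = embed([0, 1, 2, 1, 3, 1, 0, 2], [1, 0, 1], T=1)
print(marked)                         # [0, 2, 3, 1, 4, 2, 0, 3]
print(extract(marked, T=1))           # ([1, 0, 1], [0, 1, 2, 1, 3, 1, 0, 2])
```

Extraction reads every value at T or T+1 as a hidden bit and collapses it back to T, then shifts values above T+1 back down, so the original sequence is recovered exactly.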
FIG. 1A is an exemplary high-level flow diagram for a (a) method for lossless data embedding and (b) a method of data extraction from JPEG images. In the method for data embedding, as shown in part
(a) of FIG. 1A, Step 11 is inputting an original JPEG image. Step 12 of the method involves entropy decoding the original JPEG image and determining JPEG quantized block Discrete Cosine Transform
(DCT) coefficients from the entropy decoded original JPEG image. In Step 13, a payload for embedding in the entropy decoded JPEG image, which is provided in Step 14, is supplied; lossless data
embedding of the payload in the entropy decoded original JPEG image occurs; and entropy encoding the data embedded entropy decoded original JPEG image. At Step 15, a JPEG image with hidden data is
the output of the method for data embedding.
In the method for data extracting, as shown in part (b) of FIG. 1A, Step 25 is inputting a JPEG image with hidden data. Entropy decoding the JPEG image with hidden data and determining JPEG quantized
block Discrete Cosine Transform (DCT) coefficients from the entropy decoded JPEG image with hidden data are performed in Step 24. Step 23 is data extracting a payload from the entropy decoded JPEG
image with hidden data and entropy encoding the payload and an original JPEG image without hidden data. The original JPEG image without hidden data and a payload of extracted data is outputted in
Step 22 and Step 21, respectively.
FIG. 1B is an exemplary flow diagram of: (a) a method for lossless data embedding in JPEG images and (b) a method for data extraction from JPEG images. In the method for lossless data embedding of in
part (a) of FIG. 1B, assume the length of the to-be-embedded data is L. P is a value assumed by JPEG coefficients, which is used for data embedding. We can consider the selected P=T as the “starting
point” for data embedding, and P=S as the stopping point. Payload can be measured either in number of bits, L, or bits per pixel (bpp). The term bpp is more general, since for the same L, if the
image size is different, the bpp will be difference. For, say, a 512×512 images, 0.1 bpp means 26,214 bits (see the big tables later in the report), if consider a 256×256 image, 0.1 bpp means L=
0.25×26,214 bits. For all of three commonly used 512×512 images, when we embed 0.1 bpp payload, the PSNR is 36 dB, meaning acceptable visual quality. Hence, roughly speaking, embedding 0.1 bpp up to
0.2 bpp has no problem with our proposed method.
In Step 31, a threshold T>0 is set and P←T, chosen so that the number of mid- and low-frequency JPEG coefficients within the range [−T, T] is greater than L. In Step 32, in the JPEG coefficient histogram, move the portion of the histogram with coefficient values greater than P to the right-hand side by one unit to make the histogram at P+1 equal to zero (P+1 is then called a zero-point). Also in Step 32, according to whether the to-be-embedded bit is 0 or 1, embed data into P or P+1, respectively. Step 33 determines whether the method for data embedding is finished. If the answer at Step 33 is "NO" (i.e., some of the to-be-embedded bits have not been embedded at this point) and the answer to P>0 is "NO" in Step 34, let P←(−P−1) in Step 36 and move the histogram (i.e., less than P) to the left-hand side by one unit to leave a zero-point at the value (P−1). Also in Step 36, according to whether the to-be-embedded bit is 0 or 1, embed data into P or (P−1), respectively. Then continue the method for embedding by returning to Step 32 and embedding the remaining to-be-embedded data.
Alternatively, if the answer at Step 33 is “NO” (i.e., some of the to-be-embedded bits have not been embedded at this point) and the answer to P>0 is “YES” in Step 34, let P←(−P) in Step 35 and move
the histogram (i.e., less than P) to the left-hand side by one unit to leave a zero-point at the value (P−1). Also in Step 35, according to whether the to-be-embedded bit is 0 or 1, embeds data into
P or (P−1), respectively. Then continue the method for embedding by returning to Step 32 and embedding the remaining to-be-embedded data.
Alternatively, if the answer at Step 33 is “YES” (i.e., all the data has been embedded), then stop the method for embedding and record the value P as the stop value S (i.e., let S←P) in Step 37.
In the method for data extraction in part (b) of FIG. 1B, assume the stop position S of data embedding is positive. In Step 41 of part (b) of FIG. 1B, set P←S. In Step 42, decode with the stopping value P and the value (P+1) and extract all the data until P+1 becomes a zero-point. In addition, in Step 42, move the histogram of DCT coefficients (greater than P+1) towards the left-hand side by one unit to eliminate the zero-point. If the amount of extracted data is less than C, set P←(−P−1). Continue to extract data until (P−1) becomes a zero-point. Then move the histogram (less than P−1) to the right-hand side by one unit to eliminate the zero-point.
Step 43 determines whether the method for data extraction is finished (i.e., is the amount of extracted data less than C?). If the answer at Step 43 is “NO” (i.e., some of the to-be-extracted bits have not been extracted at this point) and the answer to P>0 is “YES” in Step 44, let P←(−P−1) in Step 45 and move the histogram (i.e., less than P−1) to the right-hand side by one unit to eliminate the zero-point. Then continue the method for extracting by returning to Step 42 and extracting the remaining to-be-extracted data.
Alternatively, if the answer at Step 43 is “NO” (i.e., some of the to-be-extracted bits have not been extracted at this point) and the answer to P>0 is “NO” in Step 44, let P←(−P) in Step 46 (P becomes positive) and move the histogram (i.e., greater than P+1) to the left-hand side by one unit to eliminate the zero-point. Then continue the method for extracting by returning to Step 42 and extracting the remaining to-be-extracted data.
Alternatively, if the answer at Step 43 is “YES” (i.e., all the data has been extracted), then stop the method for extracting. Histogram shifting makes the histogram flatter, and thereby embeds data into the JPEG image file. Consider the horizontal axis of a histogram as representing the value of a set of selected mid- and lower-frequency coefficients of an 8×8 block DCT that are integer-valued after JPEG quantization.
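The embedding and extraction flows above can be condensed into a short round-trip sketch. This is a minimal, illustrative re-implementation on a plain list of integer coefficients (function names and the simplified stop handling are our assumptions, not the patented code); bits are embedded at P = T, −T, T−1, −(T−1), … until exhausted, and extraction walks the same threshold sequence backwards from the stop value S:

```python
def hp_embed(coeffs, bits, T):
    """Histogram-pair embedding; returns (stego coefficients, stop value S)."""
    x, i, P = list(coeffs), 0, T
    while True:
        if P > 0:
            x = [v + 1 if v > P else v for v in x]   # zero-point at P+1
            mark = P + 1
        else:
            x = [v - 1 if v < P else v for v in x]   # zero-point at P-1
            mark = P - 1
        for j, v in enumerate(x):                    # embed into the pair <P, mark>
            if v == P and i < len(bits):
                if bits[i] == 1:
                    x[j] = mark
                i += 1
        if i >= len(bits):
            return x, P                              # P is recorded as S
        P = -P if P > 0 else -P - 1                  # T -> -T -> T-1 -> ...

def hp_extract(stego, S, T):
    """Reverse process; returns (extracted bits, recovered coefficients).
    Trailing bits beyond the known payload length are discarded by the caller."""
    x, P, bits = list(stego), S, []
    while True:
        seg, mark = [], (P + 1 if P > 0 else P - 1)
        for j, v in enumerate(x):
            if v == P:
                seg.append(0)
            elif v == mark:
                seg.append(1)
                x[j] = P
        if P > 0:
            x = [v - 1 if v > P + 1 else v for v in x]   # undo the shift
        else:
            x = [v + 1 if v < P - 1 else v for v in x]
        bits = seg + bits                    # earlier loops' bits come first
        if P == T:
            return bits, x
        P = -P - 1 if P > 0 else -P          # walk the sequence backwards
```

For example, `hp_embed([2, 1, 0, -1, -2, 3], [1, 0, 1], 2)` yields the stego list `[4, 2, 0, -1, -2, 5]` with S = 1, and `hp_extract` reverses it exactly.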
Consider one selected pair of points in a histogram. Denote the length of the to-be-embedded bit stream by L, a selected point on the horizontal axis by T, and its histogram value by h(T). If L≦h(T), this single point T is enough for data embedding. The whole histogram can be divided into three parts: (1) a central part; (2) a to-be-embedded part; and (3) an end part. The central part is the portion of the histogram whose values are less than T; it is kept intact during data embedding. The to-be-embedded part is the histogram pair whose values will change according to the to-be-embedded bits. The end part is the portion of the histogram whose values are greater than T; it will be shifted outwards before data embedding.
FIG. 1C is an exemplary detailed block diagram of a method for lossless data hiding and a method for data extraction from JPEG images. In FIG. 1C, data to be embedded 101 and a JPEG file 103 are configured to be provided to a data embedding function 105 and a JPEG bit stream function 107, respectively. The JPEG bit stream function 107 is configured to provide inputs to a first entropy decoding function 109. The data embedding function 105 and the first entropy decoding function 109 are configured to provide inputs to a JPEG coefficient function 111. The JPEG coefficient function 111 is configured to provide inputs to an entropy coding function 113. The entropy coding function 113 is configured to provide inputs to a JPEG bit stream with hidden data function 115. The JPEG bit stream with hidden data function 115 is configured to provide inputs to a JPEG file with hidden data 117, a JPEG image with hidden data 127 and a second entropy decoding function 123. The second entropy decoding function 123 is configured to provide inputs to a second JPEG coefficient function 121. The second JPEG coefficient function 121 is configured to provide inputs to a data recovering function 125 and a second entropy coding function 119. The data recovering function 125 is configured to provide extracted data 133. The second entropy coding function 119 is configured to provide inputs to a JPEG bit stream recovering function 129. The JPEG bit stream recovering function 129 is configured to provide inputs to a recovered original image 131 and an original JPEG file 137. The JPEG image with hidden data 127 and the recovered original image 131 are configured to provide inputs to a JPEG image display 135.
FIG. 2A to FIG. 2C give a simple example illustrating a histogram pair. Assume a simple image as in the left part of FIG. 2A; the right part of FIG. 2A is its histogram. We assume the threshold T is the value 2. Then the central part is the value 0, which is kept intact during data embedding. The end part is the value 3, which is shifted to the right-hand side by one unit before data embedding in order to leave the histogram value at x=3 empty for data embedding. After this end-part histogram shifting, the new image and its corresponding histogram are shown in FIG. 2B. Now h(2) and h(3) become a histogram pair. For one histogram pair T and T+1, the rule for data embedding is: if the to-be-embedded bit is 0, the value is kept at T; if the to-be-embedded bit is 1, the value becomes T+1. Now assume the to-be-embedded bits are [0,1,1]; we scan the image from left to right and from top to bottom. Each time we meet a pixel value T, we check the next to-be-embedded bit and change the pixel value according to that bit. In this way, after data embedding, the embedded image and its histogram are presented in FIG. 2C.
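The single-pair rule above (shift the end part right to empty T+1, then map each scanned pixel equal to T to T or T+1 according to the bit) can be sketched as follows. This is an illustrative toy on a 1-D pixel list with made-up values, not the patented implementation:

```python
def embed_single_pair(pixels, bits, T):
    """Embed len(bits) <= h(T) bits using the single histogram pair <T, T+1>."""
    out = [v + 1 if v > T else v for v in pixels]   # shift end part: h(T+1) = 0
    it = iter(bits)
    for j, v in enumerate(out):
        if v == T:
            b = next(it, None)
            if b is None:
                break
            if b == 1:
                out[j] = T + 1                      # bit 1 -> expansion position
    return out

def extract_single_pair(stego, T, L):
    """Recover the L embedded bits and the original pixels."""
    bits, out = [], list(stego)
    for j, v in enumerate(out):
        if v == T:
            bits.append(0)
        elif v == T + 1:
            bits.append(1)
            out[j] = T
    out = [v - 1 if v > T + 1 else v for v in out]  # undo the end-part shift
    return bits[:L], out
```

With T = 2 and bits [0,1,1], a pixel row [0, 2, 0, 2, 3, 2] becomes [0, 2, 0, 3, 4, 3], and extraction returns both the bits and the original row.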
Consider the case of multiple selected pairs of points in the histogram. If the length of the to-be-embedded bit stream L>h(T), then one T (i.e., one histogram pair) is not enough for data embedding, and we need multiple Ts (multiple histogram pairs) to embed the data. These Ts are positive and negative in turn, such as [T,−T,T−1,−(T−1),T−2,−(T−2), . . . , S]. As in the case of a single T, the histogram is divided into three parts: (1) a central part; (2) a to-be-embedded part; and (3) an end part. As discussed above, the central part is the histogram whose values are less than T and is kept intact during data embedding. The to-be-embedded part consists of the histogram pairs whose values will change according to the to-be-embedded bits. The end part is the histogram whose values are greater than T and will be shifted towards the outer ends before data embedding. After histogram shifting, the histogram pairs are [<h(T),h(T+1)=0>, <h(−T−1)=0,h(−T)>, <h(T−1),h(T)=0>, <h(−T)=0,h(−(T−1))>, <h(T−2),h(T−1)=0>, <h(−(T−1))=0,h(−(T−2))>, . . . ]. When the embedding process stops, if S is negative, the last histogram pair is <h(S−1)=0,h(S)>. If S is positive, the last histogram pair is <h(S),h(S+1)=0>.
As an example of reversible data embedding using a single histogram pair, assume the samples are X=[a,a,a,a], i.e., the number of samples is M=4, and the feature values xε{a,b} are greater than 0. There is one histogram pair h=[4,0]. Suppose that the to-be-embedded binary sequence is D=[1,0,0,1], whose length is L=4.
During data embedding, we scan the sequence X=[a,a,a,a] in a certain order, say, from left to right. When we meet the first a, since we want to embed bit 1, we change a to its expansion position, b. For the next two to-be-embedded bits, since they are bit 0, we keep a in its original position, i.e., we do not change a. For the last to-be-embedded bit 1, we change a to b. Therefore, after the four-bit embedding, we have X=[b,a,a,b], and the histogram is now h=[2,2]. The embedding capacity is C=L=4. Data extraction, or histogram pair recovery, is the reverse of the above-mentioned embedding process. After extracting the data D=[1,0,0,1], the histogram pair becomes [4,0] and we recover X=[a,a,a,a] losslessly. Note that after data embedding, the histogram has changed from h=[4,0] to h=[2,2]; the histogram is completely flat and hence we cannot embed any more data.
As another example of reversible data embedding, we examine the method using two loops. Given a 3×3 image, the feature values are xε{a,b,c,d}, where the features are all greater than 0. According to the scan order, say, from left to right and from top to bottom, the samples X become X=[a,a,a,a,a,a,a,a,a], the total number of samples is M=9, and the histogram is h=[9,0,0,0], as shown in FIG. 2D(a). The histogram pair is h=[9,0]. The to-be-embedded bit sequence is D=[0,1,0,0,1,0,1,1,0] and L=9.
In the first data embedding loop, “Loop 1,” since the first to-be-embedded bit is 0, we use the original feature position a (meaning no change for the first a); the second bit is 1, so we use the expansion position (meaning a is changed to b). In this way, we embed 9 bits in total; after data embedding, the samples become X=[a,b,a,a,b,a,b,b,a], refer to FIG. 2D(b). After the first embedding loop, the histogram h=[9,0,0,0] becomes h=[5,4,0,0]. The payload is C[1]=L=9 bits.
For the second data embedding loop, “Loop 2,” we first expand: the histogram pair h=[4,0] is shifted towards the right-hand side by one position, producing the histogram with two histogram pairs h=[5,0,4,0]; the samples become X=[a,c,a,a,c,a,c,c,a], refer to FIG. 2D(c). The second embedding loop uses the two histogram pairs in h=[5,0,4,0], xε{a,b,c,d}, separately in order to avoid conflict. That is, it first uses the histogram pair with the larger absolute feature values, then the histogram pair with the smaller absolute feature values. In this example, we first embed data into the right histogram pair, then into the left histogram pair. The to-be-embedded bit sequence D=[0,1,0,0,1,0,1,1,0] is separated into two parts accordingly. That is, we first embed the front portion of the data, D[1]=[0,1,0,0], into the histogram pair at the right side, h=[4,0], xε{c,d}, resulting in the corresponding samples X[1]=[c,d,c,c]. Then we embed the remaining data, D[2]=[1,0,1,1,0], into the left histogram pair, h=[5,0], xε{a,b}, resulting in the corresponding samples X[2]=[b,a,b,b,a]. After Loop 2, the histogram becomes h=[2,3,3,1] and the samples become X=[b,c,a,b,d,b,c,c,a], FIG. 2D(d). The embedding capacity in Loop 2 is C[2]=L=9 bits.
The total capacity after the two embedding loops is C=18 bits. After the two embedding loops, the histogram changes from h=[9,0,0,0] to h=[2,3,3,1]. It is observed that the histogram has changed from rather sharp ([9,0,0,0]) to relatively flat ([2,3,3,1]).
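The two-loop walk-through can be checked mechanically by mapping a, b, c, d to 0, 1, 2, 3. The helper below (our naming) embeds a bit stream into the occurrences of a pair's lower value, exactly as in Loop 1 and Loop 2 above:

```python
def embed_pair(x, low, bits):
    """Embed bits into occurrences of `low` via the histogram pair <low, low+1>."""
    it = iter(bits)
    for j, v in enumerate(x):
        if v == low:
            b = next(it, None)
            if b is None:
                break
            if b == 1:
                x[j] = low + 1            # expansion position
    return x

D = [0, 1, 0, 0, 1, 0, 1, 1, 0]
X = embed_pair([0] * 9, 0, D)             # Loop 1: pair <a, b>
X = [v + 1 if v == 1 else v for v in X]   # expand: shift b to c
X = embed_pair(X, 2, D[:4])               # Loop 2, right pair <c, d>
X = embed_pair(X, 0, D[4:])               # Loop 2, left pair <a, b>
```

Counting the values 0..3 in the final X reproduces the histogram [2, 3, 3, 1] and the sample sequence [b, c, a, b, d, b, c, c, a] stated above.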
The principles of thresholding are discussed in the following paragraphs. Histogram pair based lossless data hiding seeks not only a higher embedding capacity but also a higher visual quality of the stego images, measured by, say, PSNR (peak signal-to-noise ratio). For instance, we may embed data with sufficient payload for annotation (such as a caption) or for security (such as authentication) with reversibility as well as the highest possible PSNR of the stego image with respect to the cover image.
In the background art, it was thought that one way to improve the PSNR is to use only the part of the JPEG coefficients with small absolute values. In doing so, we need the so-called thresholding technique. The thresholding method first sets a threshold T and then embeds data into those JPEG coefficients x with |x|≦T. That is, it does not embed data into the JPEG coefficients with |x|>T. In addition, it makes sure that the small JPEG coefficients after data embedding will not conflict (will not be confused) with the large JPEG coefficients with |x|>T. That is, for the JPEG coefficients satisfying |x|≦T, histogram pair based data embedding is applied. It requires that after data embedding, the coefficients with −T≦x≦T remain separable from the coefficients with |x|>T. Simple thresholding divides the whole histogram into two parts: 1) the data-to-be-embedded region, where the absolute values of the JPEG coefficients are small; and 2) the region with no data to be embedded, named the end regions, where the absolute values of the JPEG coefficients are large.
Our experimental work has indicated that the smallest threshold T does not necessarily lead to the highest PSNR for a given data embedding capacity. Instead, it is found that for a given data embedding capacity there is an optimum value of T. This can be justified as follows. If a smaller threshold T is selected, the number of coefficients with |x|>T will be larger. This implies that more coefficients with |x|>T need to be moved away from 0 in order to create histogram pair(s) to losslessly embed data. This may lead to a lower PSNR and more side information (hence a smaller embedding capacity). Therefore, in embodiments of the invention, optimum histogram pair lossless embedding is used, and the best threshold T for a given data embedding capacity is selected to achieve the highest PSNR. The optimum parameters and experimental results are discussed further below.
FIG. 3A to FIG. 3C give an example of multiple histogram pairs. FIG. 3A shows the original image and the corresponding histogram. Since the bit-stream length is 4, the histogram value h(T)=3 of one single T (T=2) is not sufficient. Hence the new threshold sequence is [T, −T], i.e., T=2 and S=−2 in FIG. 3A, producing two histogram pairs. After shifting the end parts of the histogram outwards, the new image and histogram are presented in FIG. 3B. This produces two histogram pairs, <h(2)=3,h(3)=0> and <h(−3)=0,h(−2)=1>. As in the case of one histogram pair, after data embedding, the embedded image and its corresponding histogram are shown in FIG. 3C.
The following is a discussion of the maximum data embedding capacity. When the stop point S is negative, the capacity is:

$\sum_{x=-T}^{S} h(x) + \sum_{x=-S}^{T} h(x).$

This produces 2(T−|S|+1) histogram pairs. When the stop point S is positive, the capacity is:

$\sum_{x=-T}^{-S-1} h(x) + \sum_{x=S}^{T} h(x).$

This produces 2(T−|S|+1)−1 histogram pairs. When the stop point S is 0, the capacity is:

$\sum_{x=-T}^{-1} h(x) + \sum_{x=0}^{T} h(x) = \sum_{x=-T}^{T} h(x).$

This produces 2T+1 histogram pairs. When T covers all the histogram values, the capacity is largest; it equals the integral of the whole histogram.
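The three capacity formulas collapse into one rule: for S ≧ 0 the used points are −T…−S−1 and S…T, and for S < 0 they are −T…S and −S…T. A small sketch (our naming; h is a value-to-count map):

```python
def capacity(h, T, S):
    """Maximum payload for start threshold T and stop value S."""
    if S >= 0:
        xs = list(range(-T, -S)) + list(range(S, T + 1))      # -T..-S-1 and S..T
    else:
        xs = list(range(-T, S + 1)) + list(range(-S, T + 1))  # -T..S and -S..T
    return sum(h.get(v, 0) for v in xs)
```

For a flat histogram with one coefficient at each value −3…3 and T = 3, S = 0 gives capacity 7 (all 2T+1 points), while S = −3 gives 2 (only ±3).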
The maximum PSNR is discussed in the following paragraphs. When the threshold T is small, the capacity is also small. Experimental results demonstrate that a larger threshold T can increase the PSNR. Hence, if the length of the to-be-embedded bit stream is fixed, we can find the highest PSNR and its corresponding threshold T through experiments.
An example of histogram pair based lossless data embedding is discussed in the following paragraphs. In this example, the to-be-embedded bit sequence D=[1 10 001] has six bits and will be embedded into an image by using the proposed histogram pair scheme with threshold T=3 and stop value S=2. The dimensionality of the image is 5×5, as shown in FIG. 2E(a). The image has 12 distinct feature (grayscale) values, i.e., xε{−5,−4,−3,−2,−1,0,1,2,3,4,5,6}. The grayscale values of this image have the histogram h[0]=[0,1,2,3,4,6,3,3,1,2,0,0] (as shown in the 1st row of FIG. 2F). As said before, for x≧0 the histogram pair is of the form h=[m,0], and for x<0 the histogram pair is of the form h=[0,n]. The second row of FIG. 2F is the expanded image histogram, h[1] (expanded); it has three histogram pairs. The first histogram pair is at the far right-hand side, h=[1,0]; the second histogram pair is at the left-hand side, h=[0,2]; the third histogram pair is at the right-hand side near the center, h=[3,0]. The third row of FIG. 2F is the image histogram after data embedding, h[2] (bits embedded).
FIG. 2F and Table 1 use a red square to mark the third histogram pair. The first histogram pair [1,0] is used to embed the 1st bit, 1; the second histogram pair [0,2] is used to embed the next two bits, 1,0; and the third histogram pair [3,0] is used to embed three bits: 0,0,1. During expanding, we first make h(4)=0, then make h(−4)=0, and finally make h(3)=0. During each zero-point creation, histogram shifting towards one of the two (left and right) ends is carried out; the resultant histogram becomes h[1]=[1,0,2,3,4,6,3,3,0,1,0,2] (refer to FIG. 2E(b) and the 2nd row of FIG. 2F). Three histogram pairs are thus produced: at the right-most, h=[1,0]; at the left, h=[0,2]; and at the right (near the center), h=[3,0].
After data embedding with bit sequence D=[1 10 001] in the selected scanning order (from right to left and from top to bottom), the histogram becomes h[2]=[1,1,1,2,4,6,3,2,1,0,1,2] (refer to FIG. 2E(c) and the 3rd row of FIG. 2F). The three histogram pairs have changed: at the right-most from h=[1,0] to h=[0,1], at the left from h=[0,2] to h=[1,1], and at the right (near the center) from h=[3,0] to h=[2,1].
TABLE 1
Example of histogram pair based data embedding with T = 3, S = 2, D = [1 10 001]

x                     −5  −4  −3  −2  −1   0   1   2   3   4   5   6
h[0]                   0   1   2   3   4   6   3   3   1   2   0   0
h[1]                   1   0   2   3   4   6   3   3   0   1   0   2
h[2] (bits embedded)   1   1   1   2   4   6   3   2   1   0   1   2
embedding (ordering): no embedding at x = −5; [1 0] embedded (second) at x = −4, −3; no embedding for −2 ≦ x ≦ 1; [001] embedded (third) at x = 2, 3; [1] embedded (first) at x = 4, 5; no embedding at x = 6
After data embedding, not only the image pixel values but also the three histogram pairs have been changed. For example, embedding the last three bits 0,0,1 causes the histogram pair at the right-hand side (near the center) to change from h=[3,0] to h=[2,1], and three image pixel values marked with small rectangles (in red) to change from [2,2,2] to [2,2,3] (refer to FIG. 2E(c) and the 3rd row of FIG. 2F). Through this example, it becomes clear that the threshold can also be viewed as the starting point of histogram pair lossless data hiding.
Formulae of lossless data hiding based on histogram pairs are discussed in the following paragraphs. The proposed method divides the whole histogram into three parts: (1) the part where data is to be embedded; (2) the central part, where no data is embedded and the absolute value of the coefficients is small; and (3) the end part, where no data is embedded and the absolute value of the coefficients is large. The whole embedding and extraction procedure can be expressed by the formulae in Table 2 below.
In Table 2, T is the selected threshold, i.e., the start position; S is the stop position; x denotes feature (JPEG coefficient) values before embedding; x′ denotes feature values after embedding; u(S) is the unit step function (u(S)=1 when S≧0; u(S)=0 when S<0); and └x┘ rounds x down to the largest integer not larger than x.
TABLE 2
Formulae of lossless data hiding based on histogram pairs

To-be-embedded region (right side, positive or zero):
  embedding: x′ = 2x + b − |S|, for |S| ≦ x ≦ T;
  recovering: x = └(x′ + |S|)/2┘, b = x′ + |S| − 2x, for |S| ≦ x′ ≦ 2T + 1 − |S|.
To-be-embedded region (left side):
  embedding: x′ = 2x − b + |S| + u(S), for −T ≦ x ≦ −|S| − u(S);
  recovering: x = └(x′ − |S| − u(S) + 1)/2┘, b = 2x + |S| + u(S) − x′, for −2T − 1 + |S| + u(S) ≦ x′ ≦ −|S| − u(S).
Central part (small absolute value):
  embedding: x′ = x, for −|S| − u(S) < x < |S|;
  recovering: x = x′, for −|S| − u(S) < x′ < |S|.
Right end part:
  embedding: x′ = x + T + 1 − |S|, for x > T;
  recovering: x = x′ − T − 1 + |S|, for x′ > 2T + 1 − |S|.
Left end part:
  embedding: x′ = x − T − 1 + |S| + u(S), for x < −T;
  recovering: x = x′ + T + 1 − |S| − u(S), for x′ < −2T − 1 + |S| + u(S).
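The formulae in Table 2 can be exercised directly. The sketch below (our naming) implements the per-region embedding map and its inverse; for every x in the embedded regions and both bit values, recovery returns the original x and b:

```python
def _u(S):
    return 1 if S >= 0 else 0            # unit step function u(S)

def hp_forward(x, bit, T, S):
    """Map a coefficient x (consuming `bit` in the embedded regions) to x'."""
    c, u = abs(S), _u(S)
    if c <= x <= T:                      # right to-be-embedded region
        return 2 * x + bit - c
    if -T <= x <= -c - u:                # left to-be-embedded region
        return 2 * x - bit + c + u
    if x > T:                            # right end part
        return x + T + 1 - c
    if x < -T:                           # left end part
        return x - T - 1 + c + u
    return x                             # central part

def hp_inverse(xp, T, S):
    """Recover (x, bit) from x'; bit is None outside the embedded regions."""
    c, u = abs(S), _u(S)
    if c <= xp <= 2 * T + 1 - c:
        x = (xp + c) // 2
        return x, xp + c - 2 * x
    if -2 * T - 1 + c + u <= xp <= -c - u:
        x = (xp - c - u + 1) // 2        # Python // is floor division
        return x, 2 * x + c + u - xp
    if xp > 2 * T + 1 - c:
        return xp - T - 1 + c, None
    if xp < -2 * T - 1 + c + u:
        return xp + T + 1 - c - u, None
    return xp, None
```

With T = 3, S = 2 this reproduces Table 3 below: x = 2 maps to x′ = 2 or 3, x = 3 to 4 or 5, x = −3 to −3 or −4, the end values 4 and −4 shift to 6 and −5, and every case inverts exactly.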
Moreover, the formulae corresponding to the above example are listed in Table 3 below.
TABLE 3
Formulae of the third example (T = 3, S = +2, 6-bit data D = [1 10 001])

Right to-be-embedded region:
  embedding: x′ = 2x + b − 2, for 2 ≦ x ≦ 3 (b = 0: x = [2, 3] → x′ = [2, 4]; b = 1: x = [2, 3] → x′ = [3, 5]);
  recovering: x = └(x′ + 2)/2┘, b = x′ + 2 − 2x, for 2 ≦ x′ ≦ 5.
Left to-be-embedded region:
  embedding: x′ = 2x − b + 3, for −3 ≦ x ≦ −3 (b = 0: x = [−3] → x′ = [−3]; b = 1: x = [−3] → x′ = [−4]);
  recovering: x = └(x′ − 2)/2┘, b = 2x + 3 − x′, for −4 ≦ x′ ≦ −3.
Central part:
  embedding: x′ = x, for −3 < x < 2 (x = x′ = [−2, −1, 0, 1]);
  recovering: x = x′, for −3 < x′ < 2.
Right end part:
  embedding: x′ = x + 2, for x > 3 (x = [4] → x′ = [6]);
  recovering: x = x′ − 2, for x′ > 5.
Left end part:
  embedding: x′ = x − 1, for x < −3 (x = [−4] → x′ = [−5]);
  recovering: x = x′ + 1, for x′ < −4.
The selection of the JPEG coefficients (i.e., JPEG quantized 8×8 block DCT coefficients) used for lossless data hiding is discussed in the following paragraphs. In order to make data embedding less perceivable and make the hidden data more robust, we may choose lower- and mid-frequency coefficients to embed data in the implementation of our invented technology. Among all of the JPEG coefficients, we determined the part of the JPEG coefficients that gives the best performance. In particular, we scan all of the JPEG quantized 8×8 block DCT coefficients in the zigzag way to produce the histogram, and data is embedded through the histogram pair based scheme as described above.
FIG. 4A to FIG. 4C show three regions of selected JPEG coefficients for data embedding. Experimental results in terms of PSNR of the stego images with respect to thresholds T when embedding 500 bits into these different parts of the JPEG coefficients are shown in FIG. 5, FIG. 6, and FIG. 7, respectively. Data was embedded in the region {16, 36} in FIG. 4A, {4, 36} in FIG. 4B, and {16, 49} in FIG. 4C of the DCT coefficients. Since the histograms of the DCT coefficients in {16, 36} and {16, 49} are more concentrated, the maximum threshold T value shown for those regions is taken as 15. On the other hand, for {4, 36} (i.e., FIG. 4B), the maximum threshold T value shown is taken as 40. In general, the region {4, 36} appears to bring out better performance.
The results of our analysis of lossless data hiding in JPEG image files produced by embodiments of the invention are discussed below. Experiments on JPEG images with Q-factor equal to 80 were performed to evaluate the performance of embodiments of the invention. In particular, the test images used are Lena.jpg (512×512), Barbara.jpg (512×512) and Baboon.jpg (512×512). The data is embedded in the JPEG coefficients in the region {4, 36}. The experimental results are presented in Table 4 below.
TABLE 4
Lossless data hiding in three commonly used 512 × 512 JPEG Images with
Q-factor 80 and the selected region of JPEG coefficients R = {4, 36}.
Payload 0.0004 0.0011 0.0019 0.003 0.0038 0.0114
bpp (bits) (100) (300) (500) (800) (1000) (3000)
Lena PSNR 58.4707 53.7209 52.6613 51.0275 49.8880 44.9344
T(begin) 12 9 5 6 5 2
S(stop) 12 −9 5 −6 −5 −2
time(sec) 0.182 0.373 0.179 0.382 0.375 0.342
JPEG(bit) 37956 37980 38024 38119 38120 38913
Baboon PSNR 58.2083 53.1640 51.6615 49.4362 48.4424 43.2057
T(begin) 19 16 14 8 10 5
S(stop) 19 −16 −14 8 −10 −5
time(sec) 0.192 0.44 0.363 0.194 0.387 0.414
JPEG(bit) 78696 78675 78771 78735 78776 79387
Barbara PSNR 58.3384 53.3831 51.2756 48.5166 48.3079 43.5953
T(begin) 13 11 9 7 7 4
S(stop) 13 −11 −9 −7 −7 −4
time(sec) 0.18 0.372 0.552 0.364 0.404 0.387
JPEG(bit) 48361 48406 48406 48483 48555 48678
Payload 0.0191 0.1 0.2 0.3 0.4 0.5
bpp (bits) (5000) (26214) (52429) (78643) (104858) (131072)
Lena PSNR 44.4357 36.5374 32.5016 30.9442 29.7305 27.6371
T(begin) 2 2 0 0 1 6
S(stop) −2 −1 0 0 0 0
time(sec) 0.367 0.745 0.187 0.204 0.597 2.474
JPEG(bit) 38913 42884 52486 57733 60792 65090
Baboon PSNR 41.5645 33.2079 27.8094 27.1292 24.6459 21.8980
T(begin) 4 1 1 1 3 13
S(stop) −4 −1 0 0 0 0
time(sec) 0.417 0.671 0.623 0.592 1.571 5.392
JPEG(bit) 79387 84610 91659 94485 98173 100922
Barbara PSNR 40.2392 33.8112 31.8271 30.5232 28.5581 24.8663
T(begin) 1 0 0 0 1 7
S(stop) 1 0 0 0 0 0
time(sec) 0.176 0.171 0.193 0.204 0.6120 2.941
JPEG(bit) 49903 57623 62525 67062 70639 74112
For the 512×512 JPEG Lena image with different Q-factors (30, 40, 50, 60, 70, 80, 90, 100), the experimental results when 1000 bits are embedded into the JPEG coefficients in the region {4, 36} are listed in Table 5 below.
TABLE 5
Experimental results with 1000 bits embedded into the 512 × 512 Lena JPEG
Image (0.0038 bpp) with various Q-factors (the JPEG coefficient region
selected for data embedding is R = {4, 36}).

Q-factor                                      30      40      50      60      70      80      90      100
PSNR                                     41.7598 44.5994 45.3630 46.0381 47.9433 49.8880 52.9991 58.2008
T                                              2       2       2       2       4       5       7       9
S                                             −2       2       2       2      −4      −5      −7       9
Time (sec)                                 0.337   0.179   0.185  0.1841   0.364   0.375   0.392   0.358
Original image file size (bits)           15,159  18,027  20,919  24,076  29,294  37,937  59,197  162,247
Image file size after embedding (bits)    15,423  18,216  21,142  24,359  29,403  38,120  59,419  162,437
Increase after data embedding (bits)         264     189     223     283     109     183     222     190
Increase after data embedding (%)           1.7%    1.0%    1.1%    1.2%    0.4%    0.5%    0.4%    0.1%
In the background art reference entitled “Lossless data embedding with file size preservation” by Fridrich et al., discussed above, the highest payload reported in their experimental results on 50 images was 0.0176 bpp. In contrast, as the experimental results in Table 4 and Table 5 indicate, for all three commonly used images, embodiments of the invention can easily embed 0.0191, 0.1, 0.2, 0.3, 0.4, and 0.5 bpp. That is, embodiments of the invention can embed much more data into JPEG images than the background art.
In addition, embodiments of the invention can keep data-size increases unnoticeable (e.g., when compared with the original JPEG image before data embedding). Specifically, when embedding 1000 bits into commonly used JPEG images with Q-factors ranging from 30 to 100, the image size increase after embedding ranges from 1.7% (264 bits) to 0.1% (190 bits).
Further, the PSNR versus payload of these three images is shown in FIG. 8 to FIG. 11, and FIG. 12 to FIG. 26 show some images after embedding different amounts of data (i.e., small payload and large payload). These figures indicate that embodiments of the invention work well with JPEG images with Q-factor equal to 80.
The advantages of embodiments of the invention over the background art include, but are not limited to:
a. the histogram-pair-based lossless data hiding technique can be applied to JPEG quantized 8×8 block DCT coefficients, and to the I-frames of MPEG videos;
b. the selection of an optimum threshold and an optimum JPEG coefficient region for data embedding can further improve the PSNR of stego images for a given payload;
c. the histogram-pair JPEG image lossless data hiding technique does not noticeably increase the size of the JPEG image file;
d. specifically, when 1000 bits are embedded into the 512×512 Lena JPEG image (0.0038 bpp) with a Q-factor ranging from 30 to 100, the PSNR of the resultant stego images ranges from 41 dB to 58 dB;
e. further, before and after the data embedding, the increase of the JPEG file size ranges from 1.7% to 0.1%;
f. the amount of image-file-size increase ranges from 264 bits to 190 bits, which indicates satisfactory performance; and
g. compared to the background art, it appears that embodiments of the invention can achieve a higher payload.
Moreover, FIG. 27 shows a plot of the PSNR of images with hidden data, relative to the original images, versus varying Q-factors. In particular, FIG. 27 indicates that embodiments of
the present invention also work well for different Q-factors.
It will, of course, be understood that, although particular embodiments have just been described, the claimed subject matter is not limited in scope to a particular embodiment or implementation. For
example, one embodiment may be in hardware, such as implemented to operate on a device or combination of devices, for example, whereas another embodiment may be in software. Likewise, an embodiment
may be implemented in firmware, or as any combination of hardware, software, and/or firmware, for example. Likewise, although claimed subject matter is not limited in scope in this respect, one
embodiment may comprise one or more articles, such as a storage medium or storage media. This storage media, such as, one or more CD-ROMs and/or disks, for example, may have stored thereon
instructions, that when executed by a system, such as a computer system, computing platform, or other system, for example, may result in an embodiment of a method in accordance with claimed subject
matter being executed, such as one of the embodiments previously described, for example. As one potential example, a computing platform may include one or more processing units or processors, one or
more input/output devices, such as a display, a keyboard and/or a mouse, and/or one or more memories, such as static random access memory, dynamic random access memory, flash memory, and/or a hard
drive. For example, a display may be employed to display one or more queries, such as those that may be interrelated, and/or one or more tree expressions, although, again, claimed subject matter is
not limited in scope to this example. Likewise, an embodiment may be implemented as a system, or as any combination of components such as computer systems, mobile and/or other types of communication
systems and other well known electronic systems.
In the preceding description, various aspects of claimed subject matter have been described. For purposes of explanation, specific numbers, systems and/or configurations were set forth to provide a
thorough understanding of claimed subject matter. However, it should be apparent to one skilled in the art having the benefit of this disclosure that claimed subject matter may be practiced without
the specific details. In other instances, well known features were omitted and/or simplified so as not to obscure the claimed subject matter. While certain features have been illustrated and/or
described herein, many modifications, substitutions, changes and/or equivalents will now occur to those skilled in the art. It is, therefore, to be understood that the appended claims are intended to
cover all such modifications and/or changes as fall within the true spirit of claimed subject matter.
April 20th 2009, 07:07 PM #1
Slope Help?
The function f is differentiable and has values as shown in the table above. Both f and f ' are strictly increasing on the interval [0,5]. Which of the following could be the value of f ' (3)?
a. 20
b. 27.5
c. 29
d. 30
e. 30.5
I've already ruled out choice a and b, but between c, d, and e, I don't know which one is correct.
solved.... maybe
When I did a quadratic regression, and derived the formula, I got: $4.79\overline{54}\,x + 1.106\overline{81}$
so, once you put in 3, you get roughly $29.879\overline{54}$
So, I'd say it's D.
But to have a more clear answer, if you take the slope between points (3,45) and (2.8,39.2); you get 29.
Then with points (3.1,48.05) and (3,45); you get 30.5
Since f ' is strictly increasing (and each secant slope equals f ' at some intermediate point, by the Mean Value Theorem), f ' (3) must lie strictly between 29 and 30.5, which leaves option D.
hope this helps.
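The bracketing argument above can be checked with a short script (a sketch; the three points (2.8, 39.2), (3, 45), (3.1, 48.05) are the table values quoted in the post):

```python
# f' is strictly increasing, and by the Mean Value Theorem each secant
# slope equals f' at some intermediate point, so f'(3) is trapped
# between the secant on [2.8, 3] and the secant on [3, 3.1].
def secant(p, q):
    """Slope of the line through points p and q."""
    return (q[1] - p[1]) / (q[0] - p[0])

lower = secant((2.8, 39.2), (3.0, 45.0))   # = f'(c) for some c in (2.8, 3)
upper = secant((3.0, 45.0), (3.1, 48.05))  # = f'(c) for some c in (3, 3.1)
print(lower, upper)   # 29.0 < f'(3) < 30.5, so only choice (d) 30 fits
```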
Click on a 'View Solution' below for other questions:
Find the common factors of 8 and 12. View Solution
Find the common factors of 8 and 20. View Solution
Find the GCF of 44 and 48. View Solution
Find the GCF of 48 and 72. View Solution
Find the GCF of 15 and 20. View Solution
Lauren has 56 pink flowers and 63 yellow flowers. Find the greatest number of identical bouquets she can make, using both the flowers such that no flower is left. View Solution
What are the factors of 79? View Solution
Find the GCF of 60 and 75. View Solution
Find the GCF of 60 and 57. View Solution
Find the GCF of 32 and 40. View Solution
Find the GCF of 48 and 54. View Solution
Identify the GCF of 60 and 70. View Solution
Identify the GCF of 57 and 95. View Solution
Annie has 32 pencils and 64 balloons. Find the greatest number of identical pairs she can make using both the pencils and the balloons to distribute among the children. View Solution
Find the GCF of 90 and 108. View Solution
Sunny has 48 crayons and 72 plain cartoon pictures. Find the GCF of the number of crayons and the number of cartoon pictures. View Solution
Is 15 the GCF of 60 and 75? View Solution
Is 3 the GCF of 18 and 21? View Solution
What are the factors of 52? View Solution
Find the common factors of 15 and 18. View Solution
Find the GCF of 33 and 30. View Solution
Find the common factors of 6 and 12. View Solution
Find the GCF of 40 and 48. View Solution
Find the GCF of 45 and 60. View Solution
Find the GCF of 22 and 24. View Solution
Find the GCF of 18 and 24. View Solution
Find the GCF of 24 and 32. View Solution
Identify the GCF of 50 and 60. View Solution
Identify the GCF of 16 and 64. View Solution
The prime factors of 48 and 96 are in the figure. Which of the following choices is the product of the factors in the intersection of the circles? View Solution
Laura has 24 pencils and 40 cards. Find the greatest number of identical pairs she can make using both the pencils and the cards to distribute among the children. View Solution
Find the GCF of 25 and 30. View Solution
Find the GCF of 54 and 108. View Solution
Dennis has 96 crayons and 120 plain cartoon pictures. Find the GCF of the number of crayons and the number of cartoon pictures. View Solution
Sheela has 56 pink flowers and 63 yellow flowers. Find the greatest number of identical bouquets she can make, using both the flowers such that no flower is left. View Solution
The GCF of two prime numbers is ________. View Solution
Is 15 the GCF of 75 and 90? View Solution
Is 6 the GCF of 12 and 18? View Solution
What is the GCF of 68 and 56? View Solution
What is the GCF of 20 and 10? View Solution
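The GCF questions above can all be checked with the Euclidean algorithm; here is a short Python sketch (not part of the worksheet):

```python
# Euclid's algorithm: gcd(a, b) = gcd(b, a mod b), repeated until the
# remainder is 0. The last nonzero value is the greatest common factor.
def gcf(a, b):
    while b:
        a, b = b, a % b
    return a

# A few of the worksheet answers:
print(gcf(44, 48))   # 4
print(gcf(48, 72))   # 24
print(gcf(56, 63))   # 7 -> greatest number of identical bouquets
```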
Items tagged with plane
I want to write the equation of the plane passing through the three points A, B, C in the form ax + by + cz + d = 0, where a, b, c, d are integers; a > 0 if a <> 0; b > 0 if a = 0 and b <> 0; ...; and igcd(a, b, c) = 1.
If the coordinates of the vertices A, B, C are all integers, for example A(2,2,2), B(1,2, -1), C(1,-1,-4), I tried,
point(B,1,2, -1):
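As a cross-check outside Maple, here is a small Python sketch (mine, not from the thread) that builds the normalized integer plane equation from three non-collinear integer points:

```python
from math import gcd

# Plane through A, B, C: the normal is the cross product AB x AC; divide
# by the gcd of the components and fix the sign so the first nonzero
# coefficient is positive (matching the conventions in the question).
# Assumes the three points are not collinear.
def plane(A, B, C):
    u = [B[i] - A[i] for i in range(3)]
    v = [C[i] - A[i] for i in range(3)]
    n = [u[1] * v[2] - u[2] * v[1],
         u[2] * v[0] - u[0] * v[2],
         u[0] * v[1] - u[1] * v[0]]
    g = gcd(gcd(abs(n[0]), abs(n[1])), abs(n[2]))
    n = [c // g for c in n]
    if next(c for c in n if c != 0) < 0:   # enforce the sign convention
        n = [-c for c in n]
    d = -sum(n[i] * A[i] for i in range(3))
    return n[0], n[1], n[2], d

print(plane((2, 2, 2), (1, 2, -1), (1, -1, -4)))  # (3, 1, -1, -6)
```

For the example points this gives 3x + y − z − 6 = 0, which indeed vanishes at A, B, and C.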
Divergence of a series
March 20th 2011, 04:01 PM #1
Here's a challenge problem I just made up:
Let $\{a_n\}$ be a sequence of positive real numbers such that $\sum a_n = \infty$. Show that
$\frac{a_1}{a_0}+\frac{a_2}{a_0+a_1}+\frac{a_3}{a_0 +a_1+a_2}+ \dots = \infty$.
Let $S_n = \sum_{k=0}^{n}a_k$.
We want to show that $\sum_{k=0}^n\frac{\Delta S_k}{S_k}$ diverges as $n\to \infty$.
(I chose this notation to suggest the similarity with $\int_{[a,\infty)} \tfrac{f'(x)}{f(x)}\,dx = +\infty$, if $\displaystyle\lim_{x\to\infty} f(x) = +\infty$.)
We assume that $\frac{\Delta S_k}{S_k} \to 0$, otherwise the series would diverge trivially.
Let $L_k = \log S_k$, then $\Delta L_k = \log S_{k+1} - \log S_k = \log \left( 1 +\frac{\Delta S_k}{S_k} \right) =\frac{\Delta S_k}{S_k} + O\left( \left(\frac{\Delta S_k}{S_k}\right)^2 \right)$
We observe that $\sum_{k=0}^n \Delta L_k = \log S_{n+1} - \log S_0$ diverges since $\log S_n \to \infty$, and now by the limit comparison test (both differences are always positive), we get that
$\sum_{k=0}^n\frac{\Delta S_k}{S_k}$ diverges too.
Notation: $\Delta a_n = a_{n+1}-a_n$
Very nice Paul!
Here's my solution:
It's well known that if $x_n>0$ and $\sum {x_n}$ converges , then $\prod (1+x_n)$ converges also. Let $A_n = \sum_{k=0}^n a_k$. We have
$\prod_{k=1}^n (1+\frac{a_k}{A_{k-1}}) = \prod_{k=1}^n\frac{A_k}{A_{k-1}} = A_n/a_0$. Now since $A_n \to \infty$, we see that $\prod (1+\frac{a_k}{A_{k-1}})$ diverges, which implies that $\sum \frac{a_k}{A_{k-1}}$ diverges.
Your solution is the same as what came to my mind when I saw the question.
For those who might want to see a reference for the relation between the sum and the product used by Bruno J, please see [Theorem 7.4.6, 1].
I quote the result for convenience.
Lemma. Let $\{x_{n}\}_{n\in\mathbb{N}}$ be a sequence of nonnegative numbers, then
$\sum_{n\in\mathbb{N}}x_{n}$ and $\prod_{n\in\mathbb{N}}(1+x_{n})$ converge or diverge together.
[1] L.S. Hahn and B. Epstein, Classical Complex Analysis, Jones and Bartlett Publishers, Inc., London, 1996.
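As a quick numerical illustration (a sketch, not from the thread): take $a_n = 1$, so $S_n = n+1$ and the series $\sum \frac{a_k}{A_{k-1}}$ is the harmonic series, growing like $\log S_n$ exactly as in the first argument:

```python
import math

# Take a_n = 1 for every n, so S_n = n + 1 and the general term is
# a_k / S_{k-1} = 1 / k: the series is the harmonic series.
n = 1000
partial = sum(1.0 / k for k in range(1, n + 1))
print(partial)          # ~7.485: grows without bound, like log S_n
print(math.log(n + 1))  # ~6.909: the logarithm it tracks
```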
Calculate the total present value of the tax shield, Financial Management
Nortel is considering the purchase of a new call routing system. The system will cost $50M to purchase, an additional $7M to install, and will last for 30 years. The CCA rate associated with the
system is 6%, the firm's marginal tax rate is 20%, and the firm's WACC is 9%. a. Using Excel, create a CCA table, as in class, for the 30-year life of the asset. Assume that the asset is sold for its
UCC at the end of year 30.
b. Add a column that shows the value of the annual tax shield.
c. Add a column that shows the PV of the annual tax shield.
d. Calculate the total PV of the tax shield in Excel.
e. Calculate the PV of the tax shield using the PVCCA formula.
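For a cross-check of the spreadsheet, here is a Python sketch of parts (a)–(e). It assumes declining-balance CCA with the Canadian half-year rule in year 1, and that selling the asset for its UCC in year 30 simply ends the tax shield stream; the course may use a different convention:

```python
C = 50e6 + 7e6                 # installed cost: purchase + installation
d, T, k = 0.06, 0.20, 0.09     # CCA rate, marginal tax rate, WACC
years = 30

ucc, pv_shield = C, 0.0
for t in range(1, years + 1):
    cca = ucc * d * (0.5 if t == 1 else 1.0)  # (a) CCA column
    shield = cca * T                           # (b) annual tax shield
    pv_shield += shield / (1 + k) ** t         # (c) PV of each shield
    ucc -= cca

print(round(pv_shield))        # (d) total PV of the 30-year tax shield

# (e) Closed-form PVCCA perpetuity formula (half-year rule); it comes out
# slightly larger because it also counts shields accruing after year 30.
pvcca = C * d * T / (d + k) * (1 + 0.5 * k) / (1 + k)
print(round(pvcca))
```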
Posted Date: 2/15/2013 7:36:42 AM | Location : United States
Hastings On Hudson Calculus Tutor
...I myself felt that same way throughout high school and part of my freshman year of college. I figured out that it is not a bad thing to reach out for help when you need it. It doesn't make you
look less intelligent, it means that you are helping yourself become more educated in a certain subject.
18 Subjects: including calculus, English, algebra 2, SAT math
...I've taken too many graduate biology courses to count. I could talk about biology for hours. However, not everything is relevant, and that's often the problem students have with AP or college
biology--what's important to know?
17 Subjects: including calculus, geometry, SAT writing, Regents
...I've read both Zill's introduction to differential equations, and Tennenbaum's more rigorous and proof-based Ordinary Differential Equations. I have also tutored one UCF student for the
introductory course, from the beginning of the course until its end, and the student ultimately earned an A. ...
32 Subjects: including calculus, physics, statistics, geometry
...I have been tutoring for 4 years now, working with middle school, high school, and undergraduate students. I enjoy sharing my knowledge and offering tips for finding easier ways to solve math problems. I
assure you that your children will show an improvement after the first class.
17 Subjects: including calculus, English, Spanish, geometry
Hello my name is Andres. I was a language teacher in my native country teaching English as a second language for native students and Spanish as a second language for foreign students. I am
currently finishing my second major in engineering science.
9 Subjects: including calculus, Spanish, algebra 2, geometry
Venn Diagrams Discussion
Mentor: If I have two sets of numbers, is it possible for the sets to have elements in common? Can an element be in both sets?
Student 1: Well... 5 is an odd number, and it's also a prime number....
Mentor: Great! So, elements can be part of two sets at once. I'm going to draw a picture to represent that, and you all can help me put some elements in the correct place.
Mentor: I put 5 in the place where these two circles overlap. Why do you think that I did that?
Student 2: Well, it's a prime number and an odd number, so the way you drew it, it's clear that it is a part of both circles!
Mentor: So what should we call those circles?
Student 2: They are sets, aren't they?
Mentor: Wonderful! Can anyone think of another number that I could put in this diagram? What about a number that is odd, but that isn't prime?
Student 3: You could put the number nine in the odd number circle, but not in the prime number circle, because it's divisible by 3.
Mentor: Perfect answer! What we are making here is called a Venn Diagram. Sometimes they have two circles, like the one we have drawn here, and sometimes they have more! Let's put a few more elements
in this one, then we can try to create a Venn diagram with three circles...
Continue to allow students to suggest elements until you feel they understand Venn diagrams.
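The dialogue above maps directly onto set operations in code; a small sketch (the sets are chosen to match the numbers in the discussion):

```python
odd = {1, 3, 5, 7, 9}    # odd numbers up to 10
prime = {2, 3, 5, 7}     # prime numbers up to 10

print(odd & prime)   # {3, 5, 7} -> the overlap region of the Venn diagram
print(odd - prime)   # {1, 9}    -> odd but not prime (e.g. 9 = 3 * 3)
print(prime - odd)   # {2}       -> prime but not odd
```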
[FOM] Paradox on Ordinals and Human Mind
A.P. Hazen a.hazen at philosophy.unimelb.edu.au
Sat Dec 18 00:20:57 EST 2004
Dmytro Taranovsky has asked about "König's Paradox" (=, for those who
like the prose of Russell's "Mathematical Logic as based on...," "the
contradiction concerning the least indefinable ordinal"). Suppose we
understand "define" or "specify" in a way that makes contact with
some, not TOO idealized, conception of the cognitive powers of human
beings (or beings of the same "epistemological type," if you prefer a
vaguer notion, as humans). On most reasonable precisifications of
that, it is at least plausible that there is some limit to the number
of definitions possible. [[[For example: Suppose you are a
philosophical physicalist and think that the humanly-understandable
concepts inject into physically distinguishable states of the brain.
Then it seems plausible to me that only finitely many concepts can
REALLY be understood, that with a moderate degree of idealization we
might go for a DENUMERABLE infinity, and that even immoderate
idealization is unlikely to take us beyond the first few
Beth-numbers.]]] So not all the ordinals in
the set-theoretic universe [[[I'm a -- moderate -- Platonist, so I
believe in such things; if you aren't at least willing to play along
with the Platonist assumption that there ARE ordinals, König's P.
isn't likely to interest you!]]] have "definitions" or "unique
specifications". But every non-empty class of ordinals has a least
member, so there must be a least indefinable ordinal ... and, oops,
haven't I just defined it?
Taranovsky suggests three resolutions:
>1. Infinite sets do not exist, but humans can define arbitrarily large
>2. Word "identify" and certain other words are meaningless (at least in
>the sense they are used in the paradox).
>3. The potential of the human mind extends beyond the finite, and
>every ordinal can be identified by a human mind.
To which I would like to comment:
A) Resolution (1) doesn't look good. If you don't believe
in infinite SETS, you probably shouldn't be happy with idealizations
according to which humans can formulate infinitely many
"definitions," and the same basic logic comes back to you with
Berry's Paradox (the one about "the least integer not nameable in
fewer than nineteen syllables").
B) Gödel was attracted to resolution (3), which led him to
the notion of (what are now called) Ordinal Definable sets: cf. his
remarks to the Princeton Bicentennial. This is a beautiful and
well-motivated theory, but I can't help feeling that we shouldn't
give up the hope of finding SOME interesting theory of definability
based on a less wildly immoderate idealization of the human mind.
C) My own sympathies are with something like (2), but
"meaningless" is too strong. One can think of "define" or "possible
language" or "idealized human-like mind" as MEANINGFUL notions, but
ones with an ineliminable vagueness which makes it inappropriate to
reason CLASSICALLY about them. One can have a consistent theory
quantifying over, say, possible definitions (see SKETCH below) if
you use a formally intuitionistical logic: the inference from "Not
all ordinals are defined" to "THERE IS a least indefinable one" is not intuitionistically valid.
D) There is, however, at least a 4th proposal out there:
DIALETHEISM, the view that some contradictions are true (and that we
must therefore use a logic not validating P, Not-P, therefore Q): cf.
Graham Priest, "The Logical Paradoxes and the Law of Excluded
Middle," in "Philosophical Quarterly," vol. 33 (1983), pp. 160-165.
(A critical reply by Ross Brady is in the same journal about two
years later.)
SKETCH: First-order theory with two sorts of variables,
ranging over SETS (inc. ordinals) and over CONCEPTS. Intuitionistic
logic, but law of excluded middle allowed for sentences in the purely
set-theoretic part of the language. ZFC or any other reasonable set
theory. (I don't know to what degree it is safe to allow
concept-theoretic vocabulary in instances of the set-theoretic axiom
schemes.) Two-place <concept,set> predicate HOLDINGOF. Nice
comprehension axioms for concepts. Two-place <concept,set> predicate
ENCODEDBY, axiom (embodying the idea that there are restrictions on
concept formation) that there is some set-- specify its cardinality
if you want-- such that every concept is encoded by a member of that
set and no member encodes more than one concept. Maybe axiom saying
ENCODEDBY is decidable. Define a DEFINITION as a concept HOLDINGOF
one set and NOT HOLDINGOF any other set. I ***believe*** that a
theory of this sort could be intellectually satisfying (though maybe
too weak to be of much interest), would allow a proof that not all
ordinals are defined, but would NOT allow the paradoxical
consequence that there is a least indefinable, for much the same reason
that the "Least Number Principle" is independent of "Induction" in
intuitionistic arithmetic.
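For concreteness, the encoding axiom and the notion of a definition in the SKETCH might be rendered as follows (my rendering, not Hazen's own formalization; Enc and Holds abbreviate ENCODEDBY and HOLDINGOF, capital letters range over concepts, lowercase over sets):

```latex
% Encoding axiom: some set s indexes all concepts, injectively.
\exists s\, \bigl[\, \forall C\, \exists x\, \bigl(x \in s \wedge \mathrm{Enc}(C,x)\bigr)
  \;\wedge\; \forall x\, \forall C\, \forall C'\,
  \bigl(\mathrm{Enc}(C,x) \wedge \mathrm{Enc}(C',x) \rightarrow C = C'\bigr) \,\bigr]

% A DEFINITION is a concept holding of exactly one set.
\mathrm{Def}(C) \;:\Longleftrightarrow\; \exists y\, \bigl[\, \mathrm{Holds}(C,y)
  \wedge \forall z\, \bigl(\mathrm{Holds}(C,z) \rightarrow z = y\bigr) \,\bigr]
```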
Allen Hazen
Philosophy Department
University of Melbourne
(About to leave town and not read e-mail for a while.)
Topic: 3D ViewPoint Selector (V5.2) in higher Mathematica versions ?
Replies: 2 Last Post: Apr 4, 2013 10:30 PM
Re: 3D ViewPoint Selector (V5.2) in higher Mathematica versions ?
Posted: Apr 4, 2013 10:30 PM
in Mathematica V 5.2 there was a useful graphics tool invoked via the menu Input > *3D ViewPoint Selector*. With this interactive palette/notebook one
could easily determine the Cartesian or spherical coordinates (i.e. {theta, phi, r} resp. {x, y, z}) of a wireframe cube and then paste these values for the
optimized view of the rotated cube into another Mathematica graphic.
Unfortunately this tool became obsolete from version 6.0 on (which is mainly
due to the fact that 3D graphics can be manipulated interactively in higher
Mathematica versions). Nevertheless, in order to determine the optimum viewpoint
coordinates I sometimes miss this tool. I just searched the installation
tree structure of Mathematica V5.2 (and higher versions too) for a notebook
resp. palette with the name 3DViewPointSelector (or similar) but could not find
this tool even in V5.2. Of course, one could write a little application with
Manipulate for rotating a wireframe cube and read out the viewpoint coordinates.
But if this tool already exists, why make additional efforts...
Does anyone know where to look for this nice graphics tool which (if available)
would certainly be applicable in other Mathematica versions too? Help would be
appreciated.
Thanks, R. Kragler
Hi, Robert,
I do not know the answer to your precise question. However, few years ago somebody (whose name I cannot recall now) published on this site a nice handy code helping in such situations. Stressing
that it is not mine (but belongs to the public domain, since it has been disclosed) I give it below along with a short explanation.
(* This is the code *)
extractViews[ll_] :=
Flatten[Union[Extract[ll, Position[ll, #]] & /@
{ViewPoint -> _, ViewCenter -> _, ViewVertical -> _,
ViewAngle -> _, ViewVector -> _, ViewRange -> _}]];
(* End of the code *)
How to operate:
1. Enter extractViews[] in a cell below the graphic.
2. Move the graphic to your liking.
3. Set the cursor between the brackets of extractViews.
4. Do a "Copy output from above" (Ctrl+Shift+L) and evaluate.
You'll get the values.
Have fun, Alexei
Alexei BOULBITCH, Dr., habil.
Chapter XXIV.—Chronology from Adam.
Adam lived till he begat a son, 230 years. And his son Seth, 205. And his son Enos, 190. And his son Cainan, 170. And his son Mahaleel, 165. And his son Jared, 162. And his son Enoch, 165. And
his son Methuselah, 167. And his son Lamech, 188. And Lamech's son was Noah, of whom we have spoken above, who begat Shem when 500 years old. During Noah's life, in his 600th year, the flood came.
The total number of years, therefore, till the flood, was 2242. And immediately after the flood, Shem, who was 100 years old, begat Arphaxad. And Arphaxad, when 135 years old, begat Salah. And Salah
begat a son when 130. And his son Eber, when 134. And from him the Hebrews name their race. And his son Phaleg begat a son when 130. And his son Reu, when 132 And his son Serug, when 130. And his son
Nahor, when 75. And his son Terah, when 70. And his son Abraham, our patriarch, begat Isaac when he was 100 years old. Until Abraham, therefore, there are 3278 years. The fore-mentioned Isaac lived
until he begat a son, 60 years, and begat Jacob. Jacob, till the migration into Egypt, of which we have spoken above, lived 130 years. And the sojourning of the Hebrews in Egypt lasted 430 years; and
after their departure from the land of Egypt they spent 40 years in the wilderness, as it is called. All these years, therefore, amount to 3,938. And at that time, Moses having died, Jesus the son of
Nun succeeded to his rule, and governed them 27 years. And after Jesus, when the people had transgressed the commandments of God, they served the king of Mesopotamia, by name Chusarathon, 8 years.
Then, on the repentance of the people, they had judges: Gothonoel, 40 years; Eglon, 18 years; Aoth, 8 years. Then having sinned, they were subdued by strangers for 20 years. Then Deborah
judged them 40 years. Then they served the Midianites 7 years. Then Gideon judged them 40 years; Abimelech, 3 years; Thola, 22 years; Jair, 22 years. Then the Philistines and Ammonites ruled them 18
years. After that Jephthah judged them 6 years; Esbon, 7 years; Ailon, 10 years; Abdon, 8 years. Then strangers ruled them 40 years. Then Samson judged them 20 years. Then there was peace among them
for 40 years. Then Samera judged them one year; Eli, 20 years; Samuel, 12 years.
i.e., till he begat Seth. [A fragment of the Chronicon of Julius Africanus, a.d. 232, is given in Routh's Reliquiæ, tom. ii. p. 238, with very rich annotations, pp. 357–509.]
A Mathematical Model of Zinc Absorption in Humans As a Function of Dietary Zinc and Phytate
The quantities of zinc and phytate in the diet are the primary factors determining zinc absorption. A mathematical model of zinc absorption as a function of dietary zinc and phytate can be used to
predict dietary zinc requirements and, potentially, enhance our understanding of zinc absorption. Our goal was to develop a model of practical and informative value based on fundamental knowledge of
the zinc absorption process and then fit the model to selected published data to assess its validity and estimate parameter values. A model of moderate mathematical complexity relating total zinc
absorption to total dietary zinc and total dietary phytate was derived and fit to 21 mean data points from whole-day absorption studies using nonlinear regression analysis. Model validity, goodness of fit,
satisfaction of regression assumptions, and quality of the parameter estimates were evaluated using standard statistical criteria. The fit had an R^2 of 0.82. The residuals were found to exhibit a
normal distribution, constant variance, and independence. The parameters of the model, A[MAX], K[R], and K[P], were estimated to have values of 0.13, 0.10, and 1.2 mmol/d, respectively. Several of
these estimates had wide CIs attributable in part to the small number and the scatter of the data. The model was judged to be valid and of immediate value for studying and predicting absorption. A
version of the model incorporating a passive absorption mechanism was not supported by the available data.
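The abstract quotes parameter estimates but not the model equation itself. The sketch below assumes the saturable-binding form commonly used for this kind of model, in which phytate raises the apparent binding constant; treat the functional form, and the helper name taz, as reconstructions rather than quotations from the paper.

```cpp
#include <cassert>
#include <cmath>

// Parameter estimates quoted in the abstract (mmol/d where applicable).
const double AMAX = 0.13;  // A[MAX]: maximum absorbable zinc
const double KR   = 0.10;  // K[R]:   zinc binding constant
const double KP   = 1.2;   // K[P]:   phytate binding constant

// Assumed saturable-binding form (a reconstruction, not quoted from the
// paper): phytate scales up the apparent binding constant, and total
// absorbed zinc (TAZ) saturates at AMAX as total dietary zinc (TDZ) grows.
double taz(double tdz, double tdp) {
    double k = KR * (1.0 + tdp / KP);  // apparent binding constant
    double s = AMAX + tdz + k;
    return 0.5 * (s - std::sqrt(s * s - 4.0 * AMAX * tdz));
}
```

Consistent with the abstract's description, absorption stays below A[MAX], rises with dietary zinc, and falls with dietary phytate.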
|
{"url":"http://pubmedcentralcanada.ca/pmcc/articles/PMC1995555/?lang=en-ca","timestamp":"2014-04-16T22:43:15Z","content_type":null,"content_length":"110454","record_id":"<urn:uuid:d37b69ca-69d7-4172-981f-7f4bded8a71b>","cc-path":"CC-MAIN-2014-15/segments/1397609525991.2/warc/CC-MAIN-20140416005205-00076-ip-10-147-4-33.ec2.internal.warc.gz"}
|
How do i find the excluded value of this denominator? r^3−r^2−49r+49
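The question goes unanswered on the page. Factoring by grouping gives r^3 - r^2 - 49r + 49 = r^2(r - 1) - 49(r - 1) = (r - 1)(r - 7)(r + 7), so the excluded values are r = 1, 7, and -7. A quick numeric check (the helper name is illustrative):

```cpp
#include <cassert>

// Denominator from the question: r^3 - r^2 - 49r + 49.
// Factoring by grouping: r^2(r - 1) - 49(r - 1) = (r - 1)(r^2 - 49)
//                      = (r - 1)(r - 7)(r + 7).
double denom(double r) {
    return r * r * r - r * r - 49.0 * r + 49.0;
}
```

The denominator vanishes exactly at the three excluded values and nowhere else, since a cubic has at most three real roots.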
|
{"url":"http://openstudy.com/updates/4ecd7ce4e4b04e045aef03fd","timestamp":"2014-04-21T07:55:00Z","content_type":null,"content_length":"60012","record_id":"<urn:uuid:937ce5f5-eb62-4eb9-8d74-5b3d29bea9a0>","cc-path":"CC-MAIN-2014-15/segments/1397609539665.16/warc/CC-MAIN-20140416005219-00488-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Click on a 'View Solution' below for other questions:
Find the common factors of 8 and 12. View Solution
Find the common factors of 8 and 20. View Solution
Find the GCF of 44 and 48. View Solution
Find the GCF of 48 and 72. View Solution
Find the GCF of 15 and 20. View Solution
Lauren has 56 pink flowers and 63 yellow flowers. Find the greatest number of identical bouquets she can make, using both the flowers such that no flower is left. View Solution
What are the factors of 79? View Solution
Find the GCF of 60 and 75. View Solution
Find the GCF of 60 and 57. View Solution
Find the GCF of 32 and 40. View Solution
Find the GCF of 48 and 54. View Solution
Identify the GCF of 60 and 70. View Solution
Identify the GCF of 57 and 95. View Solution
Annie has 32 pencils and 64 balloons. Find the greatest number of identical pairs she can make using both the pencils and the balloons to distribute among the children. View Solution
Find the GCF of 90 and 108. View Solution
Sunny has 48 crayons and 72 plain cartoon pictures. Find the GCF of number of crayons and number of cartoon pictures. View Solution
Is 15 the GCF of 60 and 75? View Solution
Is 3 the GCF of 18 and 21? View Solution
What are the factors of 52? View Solution
Find the common factors of 15 and 18. View Solution
Find the GCF of 33 and 30. View Solution
Find the common factors of 6 and 12. View Solution
Find the GCF of 40 and 48. View Solution
Find the GCF of 45 and 60. View Solution
Find the GCF of 22 and 24. View Solution
Find the GCF of 18 and 24. View Solution
Find the GCF of 24 and 32. View Solution
Identify the GCF of 50 and 60. View Solution
Identify the GCF of 16 and 64. View Solution
The prime factors of 48 and 96 are in the figure. Which of the following choices is the product of the factors in the intersection of the circles? View Solution
Laura has 24 pencils and 40 cards. Find the greatest number of identical pairs she can make using both the pencils and the cards to distribute among the children. View Solution
Find the GCF of 25 and 30. View Solution
Find the GCF of 54 and 108. View Solution
Dennis has 96 crayons and 120 plain cartoon pictures. Find the GCF of number of crayons and number of cartoon pictures. View Solution
Sheela has 56 pink flowers and 63 yellow flowers. Find the greatest number of identical bouquets she can make, using both the flowers such that no flower is left. View Solution
The GCF of two prime numbers is ________. View Solution
Is 15 the GCF of 75 and 90? View Solution
Is 6 the GCF of 12 and 18? View Solution
What is the GCF of 68 and 56? View Solution
What is the GCF of 20 and 10? View Solution
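Every GCF exercise above reduces to one computation, which Euclid's algorithm performs directly. A minimal sketch (the helper name gcf is illustrative):

```cpp
#include <cassert>

// Greatest common factor via Euclid's algorithm: repeatedly replace
// the pair (a, b) by (b, a mod b) until the remainder is zero.
unsigned gcf(unsigned a, unsigned b) {
    while (b != 0) {
        unsigned r = a % b;
        a = b;
        b = r;
    }
    return a;
}
```

For the bouquet-style word problems the answer is just the GCF of the two counts; for example, gcf(56, 63) = 7 identical bouquets.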
|
{"url":"http://www.icoachmath.com/solvedexample/sampleworksheet.aspx?process=/__cstlqvxbefxaxbgdfbxkhmjf&.html","timestamp":"2014-04-16T19:41:21Z","content_type":null,"content_length":"79040","record_id":"<urn:uuid:9df226a3-36d2-465d-ac4a-e9bfbf9495d3>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00622-ip-10-147-4-33.ec2.internal.warc.gz"}
|
-Coupling Model to Moderately Light Nuclei
An investigation based on the strong spin-orbit coupling model is made for nuclei in the $1{d}_{\frac{3}{2}}$ and $1{f}_{\frac{7}{2}}$ shells. Hartree-Fock wave functions and central forces are used
in calculating ground state spins and binding energy differences. Coupling rules are found for configurations of identical nucleons and for the simplest odd-odd nuclei. Approximate wave functions are
constructed for odd-$A$ and even-even nuclei having both neutrons and protons in the $1{f}_{\frac{7}{2}}$ shell. Binding energy differences, magnetic moments, and beta decay matrix elements are
calculated with these functions. The result of comparing binding energy differences with experiment is interpreted as favoring strong spin-orbit coupling over weak spin-orbit coupling.
DOI: http://dx.doi.org/10.1103/PhysRev.91.1430
• Received 8 June 1953
• Published in the issue dated September 1953
© 1953 The American Physical Society
|
{"url":"http://journals.aps.org/pr/abstract/10.1103/PhysRev.91.1430","timestamp":"2014-04-16T08:12:27Z","content_type":null,"content_length":"25004","record_id":"<urn:uuid:ea7c762b-2594-44b9-a19a-9b2582b7261b>","cc-path":"CC-MAIN-2014-15/segments/1397609521558.37/warc/CC-MAIN-20140416005201-00342-ip-10-147-4-33.ec2.internal.warc.gz"}
|
SailNet Community - View Single Post - Hull Speed
Hull Speed
You are right. For non planing hulls, the hull speed is limited by its waterline length. It has absolutely nothing to do with the wind or what point you are sailing on. Boats are slower to weather
because they beat into the seas which slow them down and because they are going more directly into the wind.
Hull speed is a different thing. The speed of a wave is 1.34 times the square root of its length (distance between crests). The boat creates waves as it moves through the water. At 1/3 hull speed
there are three waves formed along the windward side. At 1/2 hull speed, the waves decrease to two. At hull speed, the boat creates a wave a little longer than her waterline length and gets trapped
between these two crests. Unless she has flat sections aft like planing boats and enough power to climb up on the wave and start planing on her flat sections, she cannot exceed this speed. As a close approximation, you can calculate the hull speed of a displacement (non-planing) hull as 1.34 times the square root of the boat's waterline length.
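The rule of thumb in the post is one line of code once units are fixed; the usual convention takes waterline length in feet and returns knots (the helper name is illustrative):

```cpp
#include <cassert>
#include <cmath>

// Displacement hull speed rule of thumb:
//   V (knots) ~= 1.34 * sqrt(LWL in feet)
// The 1.34 constant folds the deep-water wave-speed relation
// and the feet/knots unit conversion into one factor.
double hull_speed_knots(double waterline_ft) {
    return 1.34 * std::sqrt(waterline_ft);
}
```

A 25 ft waterline gives about 6.7 knots; a 36 ft waterline gives about 8 knots.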
|
{"url":"http://www.sailnet.com/forums/17995-post2.html","timestamp":"2014-04-18T03:10:49Z","content_type":null,"content_length":"33442","record_id":"<urn:uuid:b31a531f-c526-403c-a2c6-9e15bcecb805>","cc-path":"CC-MAIN-2014-15/segments/1397609532480.36/warc/CC-MAIN-20140416005212-00426-ip-10-147-4-33.ec2.internal.warc.gz"}
|
The purpose of publishing this calendar is threefold: to popularise Mathematics and the Pakistan Mathematical Society, and, above all, to generate income, not only to recover its printing costs but also to fund the society's activities, which are of academic and social benefit to members in particular and Pakistani mathematicians in general. The price per calendar is Rs 500, inclusive of postage.
Please send your cheque, in favour of 'Pakistan Mathematical Society', to the following address:
Dr Mohammad Aslam
Assistant Professor
Quaid-i-Azam University, Islamabad
|
{"url":"http://www.algebraforum.org.pk/2012/03/calendar/","timestamp":"2014-04-21T02:09:03Z","content_type":null,"content_length":"21258","record_id":"<urn:uuid:3e921feb-1121-4ee3-be3d-50913580a1da>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00520-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Calumet City Trigonometry Tutor
...I have a strong educational background in each aspect of the TEAS examination including grades of A at the college level in Chemistry, Biology, Physics, Anatomy & Physiology and a wide variety
of Mathematics. I also have considerable experience tutoring other similar nursing examinations including NCLEX and HESI. I'm confident that I can help anyone seeking to excel on the TEAS
49 Subjects: including trigonometry, English, reading, writing
...Algebra 1 (or Elementary Algebra) introduces the use of variables in equations and usually concludes with the basic concept of functions. The subject generally covers these topics:
*Substitution (replacing expressions with values) *Simplifying Algebraic Expressions (cancelling common factors, etc...
7 Subjects: including trigonometry, calculus, physics, geometry
...Fun with Triangles! You thought the Pythagorean Theorem was wild? Oh, just wait.
14 Subjects: including trigonometry, geometry, ASVAB, GRE
...This method allows me to determine student mastery. If the students do not meet mastery, then I reteach using different methods until every student achieves mastery of the learning objective.
I'm passionate about teaching children of all ages.
70 Subjects: including trigonometry, chemistry, reading, physics
...It is exactly here that the student is to be given lot of practice. FOR AVERAGE STUDENTS: The teacher is required to develop logical thinking and confidence. Build up speed by giving more
14 Subjects: including trigonometry, calculus, geometry, statistics
|
{"url":"http://www.purplemath.com/calumet_city_trigonometry_tutors.php","timestamp":"2014-04-17T11:02:42Z","content_type":null,"content_length":"24061","record_id":"<urn:uuid:4a59ecbb-163a-4399-b008-1473eb553c4b>","cc-path":"CC-MAIN-2014-15/segments/1397609527423.39/warc/CC-MAIN-20140416005207-00013-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Recurrence Equations for Matrix Determinant
Context: I'm reading this paper http://portal.acm.org/citation.cfm?id=1382468
$B_1 = I$
$B_{k+1} = AB_k - \frac{1}{k} tr (AB_k)I$
$det(A) = \frac{(-1)^n}{n} tr(AB_n)$
Question: How does this formula for the determinant work?
I understand: (1) the definition of determinant via permutations
(2) the definition of determinant via minors $det(A) = \sum_i (-1)^{i+j} a_{i,j}det(A_{i,j})$
(3) the Cayley-Hamilton Theorem: $p(s) = det(sI-A)$, $p(A)=0$.
What I fail to understand: how the above recurrence works. If you could tell me the key idea (or give me a term to google for), I can finish the derivation myself.
3 I have a feeling that this might belong better on math.stackexchange.com – Yemon Choi Mar 12 '11 at 8:23
Is that "det" sitting by itself on the left side of your 3rd equation supposed to be $\det A$? – Gerry Myerson Mar 12 '11 at 11:32
@Gerry: good call. Change accepted. – LowerBounds Mar 12 '11 at 14:41
Shouldn't there be an $I$ in your recurrence relation? – Thierry Zell Mar 12 '11 at 17:28
@ Thierry: I don't see where I'm missing an $I$, can you make the edit? – LowerBounds Mar 12 '11 at 18:29
1 Answer
To sum it all up in one sentence, this is the Horner method applied to the computation of the characteristic polynomial of $A$. Your algorithm computes the whole characteristic polynomial
of $A$, in fact, and the determinant is only the last offshoot.
Formally, let $$X^n+\sum_{k=0}^{n-1}a_kX^k$$ be the characteristic polynomial of $A$. Thus, for example, $-a_{n-1}$ is the trace of $A$, and $(-1)^{n}a_0$ is the determinant of $A$. An
easy induction on $k$ shows that
$$B_k = A^{k-1} + \sum_{j=0}^{k-2} a_{n+1-k+j} A^j$$
for all $k$. Finally, the Cayley-Hamilton theorem shows that $AB_n$ is exactly $-a_0I$.
@ Ewan: actually, there's something I still don't understand. How does the above equations explain for the $tr(AB_{k-1})$? Is this via Newton's Identity, or the expansion of $e^A$ ?
– LowerBounds Mar 12 '11 at 23:39
This method for computing the characteristic polynomial is sometimes called "Le Verrier method", or "Le Verrier-Faddeev" method. There are some references and explanations in
techmath.uibk.ac.at/wagner/psfiles/Faddeev.ps – Emmanuel Briand Mar 13 '11 at 7:52
@ LowerBounds : Yes, one may use Newton's identity (working in the algebraic closure of the ground field if necessary). Emmanuel Briand's reference above provides a different proof,
using adjoints. – Ewan Delanoy Mar 13 '11 at 8:04
@Emmanuel. This is the Faddeev variant of the Le Verrier's method. – Denis Serre Mar 13 '11 at 20:33
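The recurrence in the question is easy to run directly. One sign detail worth noting: with the characteristic polynomial written as det(sI - A), as in the Cayley-Hamilton statement above, the determinant comes out as det(A) = (-1)^{n+1} tr(A B_n)/n; the (-1)^n in the question matches the det(A - sI) convention. A dependency-free sketch (helper names are illustrative):

```cpp
#include <cassert>
#include <cmath>
#include <vector>

using Matrix = std::vector<std::vector<double>>;

Matrix identity(int n) {
    Matrix I(n, std::vector<double>(n, 0.0));
    for (int i = 0; i < n; ++i) I[i][i] = 1.0;
    return I;
}

Matrix matmul(const Matrix& A, const Matrix& B) {
    int n = A.size();
    Matrix C(n, std::vector<double>(n, 0.0));
    for (int i = 0; i < n; ++i)
        for (int k = 0; k < n; ++k)
            for (int j = 0; j < n; ++j)
                C[i][j] += A[i][k] * B[k][j];
    return C;
}

double trace(const Matrix& A) {
    double t = 0.0;
    for (size_t i = 0; i < A.size(); ++i) t += A[i][i];
    return t;
}

// Faddeev-LeVerrier: B_1 = I, B_{k+1} = A B_k - (1/k) tr(A B_k) I.
// The scalars tr(A B_k) are (up to sign) the characteristic
// polynomial's coefficients; the last one yields the determinant.
double det_flv(const Matrix& A) {
    int n = A.size();
    Matrix B = identity(n);
    double t = 0.0;
    for (int k = 1; k <= n; ++k) {
        Matrix AB = matmul(A, B);
        t = trace(AB);
        if (k == n) break;
        for (int i = 0; i < n; ++i)
            for (int j = 0; j < n; ++j)
                B[i][j] = AB[i][j] - (i == j ? t / k : 0.0);
    }
    return ((n % 2 == 1) ? 1.0 : -1.0) * t / n;  // (-1)^{n+1} tr(A B_n) / n
}
```

For example, det_flv({{1,2},{3,4}}) returns -2, and det_flv of diag(2,3,4) returns 24. For large n a factorization-based determinant is usually preferred numerically; the recurrence accumulates rounding error through the repeated multiplications.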
|
{"url":"http://mathoverflow.net/questions/58238/recurrence-equations-for-matrix-determinant","timestamp":"2014-04-16T14:17:06Z","content_type":null,"content_length":"60124","record_id":"<urn:uuid:bcf6b4c7-9aa8-44c6-9d3d-a4f90e707722>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00126-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Euler's Formula
Euler's Formula, Proof 9: Spherical Angles
The proof by sums of angles works more cleanly in terms of spherical triangulations, largely because in this formulation there is no distinguished "outside face" to cause complications in the proof.
We need the following basic fact from spherical trigonometry: if we normalize the surface area of a sphere to 4 pi and look at any triangle defined by great-circle arcs on the sphere, the sum of the
three interior angles is pi+a, where a (the excess of the triangle) is equal to the surface area of the triangle. (E.g., see Wells, p. 238.)
To translate our question on polyhedra to one of spherical geometry, first triangulate the polyhedron; each new edge increases E and F by one each, so V-E+F is left unchanged. Now perform a
similar light-shining experiment to the one described on the index page: place a light source at an interior point of the polyhedron, and place a spherical screen outside the polyhedron having
the light source as its centerpoint. The shadows cast on the screen by the polyhedron edges will form a spherical triangulation. Since every edge is on two triangles and every triangle has three
edges, 2E=3F.
We now add up the angles of all the triangles; by the spherical trigonometry described above, the sum is (4+F) pi. Adding the same angles another way, in terms of the vertices, gives a total of 2V pi. Since these two sums measure the same set of angles, F = 2V - 4, and combining this with the other equation 2E = 3F yields the result.
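The bookkeeping in the last two paragraphs is easy to check numerically on already-triangulated polyhedra (the helper name is illustrative):

```cpp
#include <cassert>

// Relations derived above for a triangulated convex polyhedron:
//   2E = 3F     (each edge borders two triangles; each triangle has 3 edges)
//   F = 2V - 4  (from summing the spherical angles two ways)
// Together these imply Euler's formula V - E + F = 2.
bool check_triangulation(int V, int E, int F) {
    return 2 * E == 3 * F && F == 2 * V - 4 && V - E + F == 2;
}
```

The icosahedron (V = 12, E = 30, F = 20) and the tetrahedron (4, 6, 4) pass; the cube (8, 12, 6) fails the 2E = 3F test, as expected, since its faces are not triangles.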
Sommerville attributes this proof to Legendre. Because of its connections with geometric topology, this is the proof used by Weeks, who also gives an elegant proof of the spherical angle-area
relationship based on inclusion-exclusion of great-circular double wedges.
The relation A*k = 2 pi (V-E+F) on a surface of constant curvature k such as the sphere is a form of the Gauss-Bonnet formula from differential geometry.
Proofs of Euler's Formula.
From the Geometry Junkyard, computational and recreational geometry pointers.
David Eppstein, Theory Group, ICS, UC Irvine.
|
{"url":"http://www.ics.uci.edu/~eppstein/junkyard/euler/sphere.html","timestamp":"2014-04-16T10:10:27Z","content_type":null,"content_length":"3345","record_id":"<urn:uuid:b6699616-e68e-4c96-ac24-ae9dd65c5e99>","cc-path":"CC-MAIN-2014-15/segments/1397609523265.25/warc/CC-MAIN-20140416005203-00039-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Calverton, VA Math Tutor
...I also specialize in subject tutoring across a myriad of Math and Science subjects. From Algebra 1 through Trigonometry, I can help students gain a new and better understanding in math. My
real passion, however is the sciences.
12 Subjects: including SAT math, chemistry, SAT reading, algebra 1
...Therefore, I have experience working with a wide variety of ages. My strong suit has always been in math and science, demonstrated by completing two years of Advance Placement Calculus and one
year of Advance Placement Statistics before leaving high school as well as five credits of various scie...
17 Subjects: including trigonometry, algebra 1, algebra 2, SAT math
...The people I have worked with and for have always been happy with my services and are especially pleased with the level of patience I exhibit when tutoring. Parents and students have also
expressed appreciation for my ability to explain things in a simple, understandable manner. It is important...
21 Subjects: including algebra 1, prealgebra, English, reading
...For college students: As a recent graduate student, I know what it's like and I know where you're coming from. You want to understand and succeed in your assignments that can get you to your
dreams. Or maybe you just want to get by a certain class that seems to be dragging you behind.
56 Subjects: including prealgebra, piano, English, writing
...There is an hour minimum for teacher meetings! There may also be a travel fee involved depending on distance. This is a flat fee/per session.
32 Subjects: including prealgebra, English, writing, reading
|
{"url":"http://www.purplemath.com/Calverton_VA_Math_tutors.php","timestamp":"2014-04-21T07:11:37Z","content_type":null,"content_length":"23616","record_id":"<urn:uuid:f88181e7-d1f7-4f9c-8184-0b28f0411a3c>","cc-path":"CC-MAIN-2014-15/segments/1397609539665.16/warc/CC-MAIN-20140416005219-00275-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Molecule of the Month
N[2]S[2] - Disulfur Dinitride
J. Gerratt and S.J. McNicholas
(Page design and HTML by Paul May)
The nature of the bonding in N[2]S[2] (and other N-S compounds) is far from obvious. A favourite first year University examination question is 'N[2]S[2] is said to be aromatic. Discuss'. The answer
is meant to be that the molecule is planar and possesses six electrons in orbitals of p symmetry, which implies some similarity to the benzene molecule. In actual fact, nothing could be further from
the truth.
So what is the correct bonding and structure of N[2]S[2]? According to Greenwood and Earnshaw [1], the geometry of the N[2]S[2] molecule is, indeed, almost exactly a square, in spite of the disparity
in the sizes of the S and N atoms: The S-N bond lengths, determined from X-ray diffraction studies, are 165.1 and 165.7 pm, the S-N-S bond angle is 90.4°, while the N-S-N angle is 89.9°. At room
temperature, N[2]S[2] readily polymerizes to form (SN)[x], which is metallic. At very low temperatures (0.33K), the polymer becomes superconducting. There is consequently much about the electronic
structure of this system to warrant a close study of the N[2]S[2] monomer itself.
We can calculate the structure of molecules such as N[2]S[2] using the spin-coupled valence bond method. The spin-coupled wave function incorporates much chemically significant electron correlation
in a compact and highly visual form. This approach to the determination of molecular electronic structure is described in detail in several places (see e.g.[3]). The six p orbitals, which are each
singly-occupied are shown in Figure 1 below (see [2]):
Figure 1. The six p orbitals of N[2]S[2] (orbitals f[1]-f[6]).
The contours are plotted in a plane parallel to that of the molecule and 1 bohr above it. From this, we see immediately that on each of the two S atoms there is indeed only a single p orbital:
orbitals f[1] and f[2] respectively. It can be seen that they are well localized. Each N atom also has a single highly localized p orbital centred upon it (orbitals f[4] and f[5] respectively). In
addition to this, there are two three-centre p orbitals, symmetrically related to each other, centred about each N atom and stretching over the two S-N-S subsystems (orbitals f[3] and f[6]). The
sulfur atoms bear a significant positive charge, +0.52e in the DZP basis, and the nitrogen atoms a complementary negative charge.
A transposition of orbitals f[1] and f[2], the p orbitals on the S atoms, is equivalent to a symmetry operation of the molecule: i.e. the transposition is equivalent to a reflection in a plane
perpendicular to that of the molecule and passing through both N atoms. Since the total electronic wave function for the ground state, Y[00], belongs to the totally symmetric representation A[g] of D
[2h], it must remain invariant under this operation. Consequently Y[00] must be symmetric towards interchange of the spatial coordinates of the two electrons described by f[1] and f[2]. This means
that in a state of symmetry A[g] in D[2h], the spins of the electrons in f[1] and f[2] must be coupled exactly to a singlet. The 'perfectly paired' spin function, according to which the spins of
orbital pairs (f[1],f[2]), (f[3],f[4]) and (f[5],f[6]), are to an excellent approximation simply paired up in singlets, predominates heavily.
From an initial glance at these results, one might conclude that there is a direct S-S p bond. However further examination of orbitals f[1] and f[2], (i.e. in a plane perpendicular to the molecular
plane and passing through both S atoms), shows that this is not so, as can be seen in Figure 2.
Orbitals f[1] and f[2] each possess a nodal surface, roughly half-way between the two S atoms, the lobes of f[1] and f[2] are bent slightly away from the other S-atom partner and the total electron
density actually decreases within the N[2]S[2] ring. These nodal surfaces originate from the radial nodal surfaces present in the 3p orbitals of sulphur.
Normally when two orbitals on different atoms overlap and the spins of the electrons occupying each of them are paired up, we consider this to be a single bond. Otherwise if the two orbitals happen
to be orthogonal, either exactly by symmetry, or effectively so due to the distance between them, the lowest state is usually obtained when the electron spins are coupled to a triplet. The molecule
is then referred to as a diradical. In N[2]S[2], a different situation arises: Orbitals f[1] and f[2] overlap and the corresponding spins are coupled to a singlet. Yet no true bond is formed. We
therefore consider that N[2]S[2] in its ground electronic state is best described as a singlet diradical.
A moment's thought will convince one that this bonding pattern is in accord with all the valencies within this single 'spin-coupled structure', though in a highly unexpected manner: N^- ions have a
valency of two and accordingly form two single bonds of s symmetry with a bond angle between them of 90°, one with each of the neighbouring S atoms. The S^+ ions have a valency of three and
accordingly they each form two bonds of s symmetry with adjacent N^- ions. We thus have a s single bond framework for N[2]S[2]. The remaining two electrons each occupy a single p orbital on each S
atom and are singlet coupled to each other. Because the orbitals involved are purely p in character, the question of bond angles does not arise. N[2]S[2] can therefore be represented reasonably well
by the bonding scheme shown in Figure 3 in which the dotted line joining the two S atoms indicates the singlet diradical character of the link.
Figure 3. A more correct description of the bonding in S[2]N[2]. The red dots in the first figure indicate the unpaired electrons on the sulfurs.
(SN)[x] - a conducting polymer
N[2]S[2] can explode, and is also thermally pyrolised to give the polymeric (SN)[x], which is electrically conducting. Can we explain this metallic behaviour on the basis of its bonding? The
geometric structure of the (SN)[x] chain, which is not completely flat, is shown in Figure 4.
Figure 4. A section of the conducting polymer (SN)[x].
We have carried out calculations on N[2]S[2] in which two adjacent angles, SNS and NSN are opened out to 120° and 106°, thus effectively breaking an N-S bond. The resulting p orbitals are shown in
Figure 5. It can be seen that a singly-occupied p orbital on each S atom, f[1] and f[2], remains. In addition, the two orbitals forming the lone pair centred on the S-N-S group, f[3] and f[4], in
which f[4] is tightly bound around the N atom, the other, f[3], is three-centred, also survive. However, the lone pair associated with what is now the terminal N atom, is different: The orbital, f[6]
which in N[2]S[2] is three-centred, is now only two-centred.
Figure 5. The six p orbitals of SN (orbitals f[1]-f[6]).
We thus see that the (SN)[x] chain appears to consist, besides the electrons forming the s bonds which hold it together, of a singly-occupied p orbital on each S atom, interspersed with a p lone pair
centred around each N atom. The polymer may therefore to a good approximation be regarded as a one-dimensional chain of S atoms, with a single electron on each site. This is equivalent to a
half-filled band and consequently one would predict the polymer to be metallic, in agreement with observation.
1. J. Gerratt, S.J. McNicholas, M.Sironi, D.L. Cooper and P.B. Karadakov, 'The extraordinary electronic structure and bonding of N[2]S[2]', J. Am. Chem. Soc. 118 (1996) 6472-6476.
|
{"url":"http://www.chm.bris.ac.uk/motm/n2s2/s2n2.htm","timestamp":"2014-04-20T15:52:57Z","content_type":null,"content_length":"15775","record_id":"<urn:uuid:408840a1-d9bb-443c-9708-320f922a1902>","cc-path":"CC-MAIN-2014-15/segments/1397609538824.34/warc/CC-MAIN-20140416005218-00569-ip-10-147-4-33.ec2.internal.warc.gz"}
|
A fan club comprising boys and girls has a total of 160 members. If 32 more girls join the club, the percentage of girls will increase by 5%. Find the number of boys in the club.
so from the first sentence we have b+g=160 do you see that like b is the number of boys and g is the number of girls
and then: b+1.05g=160+32
I don't get it??? Why 1.05g?
because g gets increased by 5%, so 1+(5/100)
oh okay...then how do I work it out from there???
from the first equation you substract g from both sides, so: b= 160-g and then you substitute this value of b on the second equation: 160-g+1.05g=160+32 0.05g=32 g=32/0.05, are you sure its 5%, because the value of g, doesn't make sense....
yeahh...but the answer is actually 48 boys. But i dunno hot to get it...
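The step that goes wrong in the thread is b + 1.05g = 192: the 5% applies to the girls' share of the club, not to the number of girls. The correct condition is (g + 32)/192 = g/160 + 0.05, which a brute-force search confirms (the helper name is illustrative):

```cpp
#include <cassert>
#include <cmath>

// Club of 160; after 32 girls join (total 192), the girls' share of
// the club rises by 5 percentage points. Search all girl counts.
int boys_in_club() {
    for (int g = 0; g <= 160; ++g) {
        double share_before = g / 160.0;
        double share_after  = (g + 32) / 192.0;
        if (std::fabs(share_after - share_before - 0.05) < 1e-9)
            return 160 - g;  // number of boys
    }
    return -1;  // no solution found
}
```

This returns 48: with g = 112 girls, the share rises from 112/160 = 70% to 144/192 = 75%, exactly 5 percentage points, matching the stated answer of 48 boys.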
|
{"url":"http://openstudy.com/updates/4efd4f4be4b01ad20b52bb50","timestamp":"2014-04-17T18:59:38Z","content_type":null,"content_length":"42247","record_id":"<urn:uuid:29c6e912-e02c-4d52-9ac2-522c4d0ea75f>","cc-path":"CC-MAIN-2014-15/segments/1397609530895.48/warc/CC-MAIN-20140416005210-00367-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Three Phase Controlled Rectifier Study in Terms of firing angle variations
by IDES Editor, Working at The Institute of Doctors Engineers and Scientists - IDES on Sep 20, 2012
This paper introduces the topology of three-phase controlled rectifiers and proposes an accurate statistical method to calculate their input current harmonic components. It computes THD and harmonic currents by accurate simulation at various firing angles, then investigates the influence of load variations, in terms of firing angle variations, on harmonic currents. Finally, a harmonic current database of rectifiers is obtained in terms of firing angle and load.
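The THD figure mentioned above is defined from the harmonic current amplitudes as THD = sqrt(sum over h >= 2 of I_h^2) / I_1. The sketch below uses the textbook ideal six-pulse bridge spectrum (I_h = I_1/h at orders h = 6k +/- 1) as stand-in data; these are not values from the paper, and the helper names are illustrative.

```cpp
#include <cassert>
#include <cmath>
#include <utility>
#include <vector>

// Total harmonic distortion of a spectrum given as (order, amplitude) pairs.
double thd(const std::vector<std::pair<int, double>>& spectrum) {
    double fundamental = 0.0, harm_sq = 0.0;
    for (const auto& c : spectrum) {
        if (c.first == 1) fundamental = c.second;
        else              harm_sq += c.second * c.second;
    }
    return std::sqrt(harm_sq) / fundamental;
}

// Ideal six-pulse rectifier input current: harmonics at h = 6k +/- 1
// with amplitude 1/h relative to the fundamental.
std::vector<std::pair<int, double>> six_pulse_spectrum(int kmax) {
    std::vector<std::pair<int, double>> s{{1, 1.0}};
    for (int k = 1; k <= kmax; ++k) {
        s.push_back({6 * k - 1, 1.0 / (6 * k - 1)});
        s.push_back({6 * k + 1, 1.0 / (6 * k + 1)});
    }
    return s;
}
```

With harmonics up to the 49th this gives a THD of roughly 0.30, near the figure usually quoted for the ideal six-pulse waveform; at nonzero firing angles the simulated spectra, and hence the database described in the paper, will differ.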
|
{"url":"http://www.slideshare.net/ideseditor/three-phase-controlled-rectifier-study-in-terms-of-firing-angle-variations","timestamp":"2014-04-18T17:08:55Z","content_type":null,"content_length":"209509","record_id":"<urn:uuid:07cf42fd-39bd-45d3-8487-233e1d21801e>","cc-path":"CC-MAIN-2014-15/segments/1397609533957.14/warc/CC-MAIN-20140416005213-00649-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Noob question about templates & inheritance
Hi all,
I've got a noob question to post. I've got this class named "Vector":
#include <iostream>
using namespace std;

template<class T> class Vector;
template<class T>
ostream& operator<<(ostream&, const Vector<T>&);

template <class T>
class Vector
{
    friend ostream& operator<< <>(ostream&, const Vector<T>&);
    T* data;
    unsigned len;
public:
    Vector(unsigned = 10);
    Vector(const Vector<T>&);
    virtual ~Vector(void);
    Vector<T>& operator =(const Vector<T>&);
    bool operator==(const Vector<T>&);
    T& operator [](unsigned);
    unsigned getLength(void) {return len;}
};
I've also got a class named "AssociativeArrayInheritance" that descends form Vector, as you can see:
template<class KeyType, class ValueType>
class AssociativeArrayInheritance : public Vector<Pair<KeyType, ValueType> >
{
public:
    AssociativeArrayInheritance(unsigned size=0) : Vector<Pair<KeyType, ValueType> >(size) {}
    ValueType& operator [](const KeyType);
};

template<class KeyType, class ValueType>
ValueType& AssociativeArrayInheritance<KeyType, ValueType>::operator [](const KeyType key)
{
    unsigned i = Vector::getLength(); // HOW CAN I CALL THIS METHOD BY INHERITANCE? :| I CAN'T SEEM TO FIGURE OUT...
}
When compiling this file, the last line (line163) ("unsigned i = Vector::getLength ();") is getting me the following error:
(all code is in file "vector2.h")
vector2.h: In member function ‘ValueType& AssociativeArrayInheritance<KeyType, ValueType>::operator[](KeyType)’:
vector2.h:163: error: ‘template<class T> class Vector’ used without template parameters
Can anybody help me? I know it must be something stupid & basic but I've searched the web, talked with colleagues, and by now I should be delivering this to S teacher...
Well, the error seems pretty straightforward -- you have a Vector, but it's a Vector of ... what? KeyTypes? ValueTypes? Something Elses? How do I know? And how do you expect to call getLength
without an object? You have to have a specific Vector in mind to call a member function on it.
You have to have a specific Vector in mind to call a member function on it.
With inheritance do I have to create an instance of the class to call a method in the super class?
If it's a non-static method, yes. All methods need an object to operate on.
But do you mean, "do I need a separate instance of the base class to call base class methods from within the derived class?" The answer to that question would be no. If you have permission to
execute the method in the base class, you can call it from the derived class just by itself.
class base {
public:
    void base_function() {}
};

class derived : public base {
public:
    void derived_function() {
        base_function();   // calls the inherited base class method directly
    }
};
[edit] Can you tell I didn't read the rest of the thread? Sorry for this rather useless post. [/edit]
Okay, I missed that you wanted the inherited version. So then it's Vector<Pair<KeyType,ValueType> >::getLength.
Is there a reason you need the base version and not the inherited version? (Did you override it, and now need the original?)
First of all, thanks to all. I was on the verge of despair and now I'm getting better.
Second, tabstop, I only have getLength() defined in the superclass (Vector) and I want to call it in sub class (AssociativeArrayInheritance). As simple as that!
Then do so. Everything in Vector is in AssociativeArrayInheritance.
That was just what I was thinking. I just thought that I could call the method by just typing getLength() in the subclass :S
Anyway, thanks for all! :)
You must type "this->getLength()". This is a rather complicated aspect of the name lookup rules in the face of templates. The C++ FAQ Lite describes why.
Don't forget to make getLength a const method as well.
Development Focus
Development Focus: New Wave Generators
This article highlights developments to be released in FLOW-3D version 10.0.
Currently, FLOW-3D can only simulate linear and Stokes waves. The linear wave has a sinusoidal surface profile and small wave steepness, while the Stokes wave is nonlinear and has larger wave
steepness, sharper crests and flatter troughs. The waves are generated at the mesh boundary using Airy's linear wave theory (Airy, 1845) and Fenton's fifth-order Stokes wave theory (Fenton, 1985),
respectively. Although the two theories can deal with many wave problems in practice, limitations exist.
The linear wave theory works only for small amplitude waves, which has quite limited application because coastal and ocean engineers are mainly interested in large waves that cause the greatest
damage to structures. Although the Stokes wave theory works for larger amplitude waves in deep water, it fails for long waves in shallow water, i.e., cnoidal waves. A cnoidal wave is a nonlinear wave
and has even sharper crests and flatter troughs than a Stokes wave. The differences between linear, Stokes and cnoidal waves can be found in Figures 1 and 2.
Figure 1. Comparison of profiles of the different progressive waves.
Fourier Series Wave Generation Method
In FLOW-3D version 10.0, a new wave generator has been added using the Fenton's Fourier series method (Fenton, 1999). Different from the linear, Stokes and other wave theories that have certain
application ranges, this method works for all kinds of oscillatory waves in deep and shallow water, including linear, Stokes and Cnoidal waves (see Figure 2 for details). More than that, it possesses
higher order accuracy than other theories (USACE, 2008). Fenton's Fourier series method is thus the recommended wave generator for any linear or nonlinear oscillatory wave simulation. The existing
linear and Stokes wave generators are still retained in version 10.0 to serve special needs of users. The animations below show linear, Stokes and cnoidal waves generated by Fenton's Fourier series method.
Figure 2. Applicability ranges of various waves (after Le Méhauté, 1976, Sorensen, 2005 and USACE, 2008). d: mean water depth; H: wave height; T: wave period; g: gravitational acceleration
Linear Wave Simulation
Stokes Wave Simulation
Cnoidal Wave Simulation
Solitary Wave Generator
A solitary wave is a nonlinear, non-oscillatory wave. It has a single crest, no trough and is completely above the undisturbed water level. It is a good approximation of a shoaling cnoidal wave as
its crests become shorter and the troughs longer. It is often used to describe Tsunami waves caused by earthquakes and large-scale landslides. In FLOW-3D version 10.0, a solitary wave generator is
available using McCowan's theory (McCowan, 1891). This theory is more accurate than Boussinesq's theory (1871) and is highly recommended by Munk (1949). The simulation below is the result of a
solitary wave striking a structure.
Solitary Wave Simulation
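The single-crest, no-trough shape described above can be sketched numerically. The sketch below uses the simpler first-order (Boussinesq) sech² profile with the standard celerity c = √(g(d+H)), not McCowan's theory as implemented in FLOW-3D, so treat it as an illustration of the general behaviour only; the wave height and depth values are arbitrary.

```python
import math

def solitary_wave_eta(x, t, H=1.0, d=5.0, g=9.81):
    """First-order (Boussinesq) solitary-wave surface elevation above still water.

    H: wave height (crest above the undisturbed level), d: mean water depth.
    The profile is non-negative everywhere: a single crest and no trough,
    matching the description in the text.
    """
    c = math.sqrt(g * (d + H))                      # wave celerity
    arg = math.sqrt(3.0 * H / (4.0 * d**3)) * (x - c * t)
    return H / math.cosh(arg)**2                    # sech^2 profile
```

The crest (where the argument of sech² is zero) travels at x = ct, and the elevation decays smoothly to zero on both sides without ever dipping below the still-water level.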
Random Wave Generator
In coastal and ocean engineering, a regular wave like a Stokes or Cnoidal wave is often used to represent a design wave in analysis of wave interactions with offshore structures. However, when wind
acts on the sea surface, what we observe are many waves with different wavelengths, periods and amplitudes moving in different directions, which are referred to as irregular or random waves.
In FLOW-3D version 10.0, random waves can be generated at a mesh boundary as a superposition of many linear component waves of different amplitude and period. These waves propagate into the
computational domain to form a random sea. For each of these component waves, its amplitude and frequency are calculated using the wave energy spectrum. Initial wave phases, however, are random. The
Pierson-Moskowitz (P-M) spectrum for fully developed sea (Pierson-Moskowitz, 1964) and the JONSWAP spectrum for fetch-limited sea (Hasselmann, 1973) are implemented. Users can choose either of them
or use their own wave spectrum defined in a data file. For now, all the component waves are assumed to travel in the same direction at the wave boundary and directional wave spectrums are not
considered. The animation below is an example of random waves generated using the P-M spectrum.
Random Wave Simulation
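The superposition procedure described above (component amplitudes from a wave energy spectrum, random initial phases) can be sketched as follows. This is not FLOW-3D's implementation: the Pierson-Moskowitz spectral form, the amplitude relation a_i = sqrt(2 S(f_i) Δf), and the deep-water dispersion relation are textbook choices, and the wind speed and frequency band are made-up illustrative values.

```python
import math
import random

def pm_spectrum(f, U=15.0, g=9.81, alpha=8.1e-3, beta=0.74):
    """Textbook Pierson-Moskowitz spectrum S(f) for wind speed U (m/s)."""
    w = 2.0 * math.pi * f                           # angular frequency
    s_omega = (alpha * g**2 / w**5) * math.exp(-beta * (g / (w * U))**4)
    return 2.0 * math.pi * s_omega                  # convert S(omega) -> S(f)

def random_sea(x, t, n_components=50, f_min=0.05, f_max=0.5, seed=0):
    """Surface elevation eta(x, t) as a sum of linear component waves.

    Each component's amplitude comes from the spectrum, a_i = sqrt(2 S(f_i) df);
    its initial phase is random, mirroring the boundary condition in the text.
    Deep-water dispersion (w^2 = g k) is assumed for simplicity.
    """
    rng = random.Random(seed)
    g = 9.81
    df = (f_max - f_min) / n_components
    eta = 0.0
    for i in range(n_components):
        f = f_min + (i + 0.5) * df                  # band-centre frequency
        w = 2.0 * math.pi * f
        k = w**2 / g                                # deep-water wavenumber
        a = math.sqrt(2.0 * pm_spectrum(f) * df)    # component amplitude
        phase = rng.uniform(0.0, 2.0 * math.pi)     # random initial phase
        eta += a * math.cos(k * x - w * t + phase)
    return eta
```

Evaluating `random_sea` over a grid of x and t gives an irregular but statistically stationary surface; a different seed gives a different realisation of the same sea state.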
Initial Wave Condition
Previously in FLOW-3D, a wave could not be defined as an initial condition. The solver had to run for a sufficiently long time to allow the oscillation from the wave boundary to reach everywhere in the
computational domain and evolve into steady wave motion. To shorten computation time, a new development has been made to initialize a wave at the beginning of a simulation. As an initial condition,
wave elevation and water velocity can be defined throughout the computational domain using the same wave generator and the same wave parameters as at the wave boundary. The animation below shows the
initial condition and the simulation result of a Stokes wave.
Initial Wave Simulation
• Airy, G. B., 1845, Tides and Waves, Encyc. Metrop. Article 102.
• Boussinesq, J. 1871, Theorie de L’intumescence Liquide Appelee Onde Solitaire ou de Translation se Propageant dans un Canal Rectangulaire, Comptes Rendus Acad. Sci. Paris, Vol 72, 755-759.
• Fenton, J. D., 1985, A Fifth-Order Stokes Theory for Steady Waves, Journal of Waterway, Port, Coastal and Ocean Engineering, Vol. 111, No. 2, 216-234.
• Fenton, J.D., 1999, Numerical Methods for Nonlinear Waves, in Advances in Coastal and Ocean Engineering, Vol. 5, ed. P.L.-F. Liu, 241-324, World Scientific: Singapore.
• Hasselmann, K., Barnett, T.P., Bouws, E., Carlson, H., Cartwright, D.E., Enke, K., Ewing, J.A., Gienapp, H., Hasselmann, D.E., Kruseman, P., Meerburg, A., Muller, P., Olbers, D.J., Richter, K., Sell, W., and Walden, H., 1973, Measurement of Wind-Wave Growth and Swell Decay During the Joint North Sea Wave Project (JONSWAP), Report, German Hydrographic Institute, Hamburg.
• Le Méhauté, B.,1976, An Introduction to Hydrodynamics and Water Waves, Springer-Verlag.
• McCowan, J., 1891, On the solitary wave, Philosophical Magazine, Vol. 32, 45-58.
• Munk, W. H. 1949, The Solitary Wave Theory and Its Application to Surf Problems, Annals New York Acad. Sci., Vol 51, 376-423.
• Pierson, W. J., and Moskowitz, L., 1964, A proposed spectral form for fully developed wind seas based on the similarity theory of S. A. Kitaigorodskii, J. Geophys. Res., Vol. 69, 5181-5190.
• Sorensen, R. M., 2005, Basic Coastal Engineering, Springer, 3rd edition.
• USACE (U.S. Army Corps of Engineers), 2008, Coastal Engineering Manual EM 1110-2-1100, Part II, Washington, DC.
seconds to minutes:seconds
Archive of Mr Excel Message Board
Back to Dates in Excel archive index
Back to archive home
seconds to minutes:seconds
Posted by jason on October 12, 1999 9:20 AM
i have a spreadsheet where i want to enter 10 separate cells (E2:N2) of times measured in seconds. i would like an excel formula that would sum these times and convert the sum to minutes:seconds.
all the formulas and formatting i have tried don't seem to work. i would think this is a simple thing to do, but i haven't found it yet.
Re: seconds to minutes:seconds
Posted by Chris on October 12, 1999 10:20 AM
The following formula should give you what you need:
Re: seconds to minutes:seconds
Posted by judi on October 12, 1999 10:37 AM
If your seconds are entered in as numbers (like 56 for 56 seconds) then try this:
This will give you answer like 8.31 meaning 8 minutes and 31 seconds
Re: seconds to minutes:seconds
Posted by Ivan Moala on October 12, 1999 3:36 PM
Judi's & Chris's work well.
An alternative to maintain your time data integrity ie. if you change the format
would be to use the following formula:
=SUM(E2:N2)/(60*60*24)
The division by 60 * 60 * 24 is necessary
because Excel stores all dates as integers and all times as decimal fractions. Excel takes your value as being a number of whole days or 24 hour
periods, e.g. enter 2 = 2 days = 2 24hr periods = 2*60*60*24 secs
Times are stored as decimal numbers between .0 and .99999,
where .0 is 00:00:00 and .99999 is 23:59:59
and then format your cells like this;
In the "Format cells" Dialog
Select Custom
In the "Type:" box
Type in "mm:ss"
This should give you the results as Minutes Seconds
Use the format painter to copy the custom formats to
your data range.
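Ivan's day-fraction explanation can be checked outside Excel. The Python sketch below mirrors it: the serial value Excel stores, and what the custom "mm:ss" format would display. The sample values in `times` are made up (chosen so the total happens to match Judi's "8 minutes and 31 seconds" example).

```python
def seconds_to_excel_serial(total_seconds):
    """Excel stores times as fractions of a 24-hour day, so the serial
    value for a duration in seconds is seconds / (60 * 60 * 24)."""
    return total_seconds / (60 * 60 * 24)

def format_mm_ss(total_seconds):
    """What the custom 'mm:ss' number format would display for the duration."""
    minutes, seconds = divmod(int(total_seconds), 60)
    return f"{minutes:02d}:{seconds:02d}"

# Summing ten per-cell times (the E2:N2 range in the question); sample data only:
times = [56, 48, 62, 51, 47, 55, 60, 49, 53, 30]
total = sum(times)   # 511 seconds -> displayed as 08:31
```

So a full day (86400 seconds) maps to the serial value 1.0, and the summed range formats as 08:31 under "mm:ss".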
This archive is from the original message board at www.MrExcel.com.
All contents © 1998-2004 MrExcel.com.
laureano luna laureanoluna at yahoo.es
Wed Oct 19 14:42:02 EDT 2005
This post just means to ask for help.
I propose the following (usual) definitions:
1. A truth is contingent whenever its negation has a model.
2. For any sentence "p" and any model "M", M satisfies not-p whenever it does not satisfy p.
Add to these definitions the classical excluded middle for formal logic: every sentence (closed well-formed formula) is either true or false.
Henkin's semantics shows there are general non-standard models in which some sentences, that under classical excluded middle are second order logical truths, are not satisfied (since those models make SOL semantically complete, which it can't be under classical excluded middle).
According to 2. those models satisfy the negations of some second order logical truths.
According to 1. those logical truths (whose negations have a model) are contingent.
So, there must be some contingent second order logical truths.
What is wrong above?
Laureano Luna Cabañero.
Convert molecule to atom - Conversion of Measurement Units
›› Convert molecule to atom
›› More information from the unit converter
How many molecule in 1 atom? The answer is 1.
We assume you are converting between molecule and atom.
You can view more details on each measurement unit:
molecule or atom
The SI base unit for amount of substance is the mole.
1 mole is equal to 6.0221415E+23 molecule, or 6.0221415E+23 atom.
Note that rounding errors may occur, so always check the results.
Use this page to learn how to convert between molecules and atoms.
Type in your own numbers in the form to convert the units!
›› Definition: Molecule
This site uses an exact value of 6.0221415 x 10^23 for Avogadro's number. This is the number of molecules in 1 mole of a chemical compound.
›› Definition: Atom
This site uses an exact value of 6.0221415 x 10^23 for Avogadro's number. This is the number of atoms in 1 mole of a chemical element.
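The conversion the page performs is a simple ratio against Avogadro's number. A sketch of that arithmetic (the function names are mine; the constant is the exact value the site states it uses):

```python
AVOGADRO = 6.0221415e23   # value used by the site for Avogadro's number

def moles_to_particles(moles):
    """Number of molecules (or atoms) in the given amount of substance."""
    return moles * AVOGADRO

def particles_to_moles(count):
    """Amount of substance, in moles, for a given particle count."""
    return count / AVOGADRO

# One molecule and one atom are each a single particle, so the
# molecule-to-atom conversion factor is 1, as the page states.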
›› Metric conversions and more
ConvertUnits.com provides an online conversion calculator for all types of measurement units. You can find metric conversion tables for SI units, as well as English units, currency, and other data.
Type in unit symbols, abbreviations, or full names for units of length, area, mass, pressure, and other types. Examples include mm, inch, 100 kg, US fluid ounce, 6'3", 10 stone 4, cubic cm, metres
squared, grams, moles, feet per second, and many more!
3aSAa10. A general dynamic theory of thermopiezoceramic shells.
Session: Wednesday Morning, May 15
Time: 10:15
Author: G. Askar Altay
Location: Dept. Civil Eng., Bogazici Univ., Bebek, 80815 Istanbul, Turkey
Author: M. Cengiz Dokmeci
Location: Istanbul Tech. Univ.--Teknik Univ., Taksim, 80191 Istanbul, Turkey
This study presents a general theory for the motions of a ceramic shell in which there is coupling among mechanical, electrical, and thermal fields. The coated ceramic shell is treated as a
two-dimensional thermopiezoelectric medium and a separation of variables solution in terms of the thickness coordinates, the midsurface coordinates, and time is sought for its field variables. Then,
a variational averaging procedure [M. C. Dokmeci, IEEE Trans. Ultrason. Ferroelectr. Freq. Control 35, 775--787 (1988)] together with the solution is used so as to derive the system of approximate
equations of ceramic shell. The invariant system of governing equations which are expressed in both differential and variational forms accounts for all the types of motions of ceramic shells. Certain
cases involving special geometry, material properties, and motions are considered [e.g., M. C. Dokmeci, J. Math. Phys. 19, 109--126 (1978)]. Also, the sufficient boundary and initial conditions are
given for the uniqueness in solutions of the fully linearized system of shell equations. The results are shown to generate a series of known shell theories [e.g., M. C. Dokmeci, IEEE Trans. Ultrason.
Ferroelectr. Freq. Control 37, 369--385 (1990) and references therein]. [Work supported by TUBA-TUBITAK.]
from ASA 131st Meeting, Indianapolis, May 1996
Allston Algebra 1 Tutor
Find an Allston Algebra 1 Tutor
...I have studied sight singing for two years while a music degree student at BU. I used both fixed and moveable DO, and I am completely competent with singing multi-part harmonies. We often sang
Bach chorales, which are very enjoyable and teach a lot about harmonies and lines.
26 Subjects: including algebra 1, reading, English, writing
My name is Michael and I graduated in the spring of 2012 from Boston College with a degree in Biochemistry. Upon graduation I took a full-time volunteer job as a tutor for high school students at
Match Charter Public High School in Boston. I am currently working as a Research Assistant at Brigham and Women's Hospital while I apply to medical school.
19 Subjects: including algebra 1, chemistry, linear algebra, organic chemistry
...Proof reading is a skill that must not be overlooked. In the hundreds of essays and reports I have written over the years, I have spent much time re-writing papers because the grammar often
doesn't sound the same when you look at it a second or third time. Simple spelling and grammatical mistakes can be fixed, showing a meticulous attention to detail.
8 Subjects: including algebra 1, English, SAT math, grammar
...I patiently break chemistry and math down for students so that they understand more completely the big picture and then I help them tackle the details so that they feel confident to work
through their homework and exam problems on their own. My goal in tutoring students is to alleviate the anxie...
10 Subjects: including algebra 1, chemistry, calculus, prealgebra
...I took calculus in high school and several levels of calculus in college. I also took 3D calculus at MIT. While tutoring in my junior and senior year of college, I tutored freshman in
10 Subjects: including algebra 1, physics, calculus, algebra 2
Math Help
Converge? $\sum\limits_{n = 1}^\infty \frac{\sin (n)}{n}$ thanks
Yes. It can be proven by Dirichlet's test that $\sum_{n=1}^{\infty}\frac{\sin n}{n}$ and $\sum_{n=1}^{\infty} \frac{\cos n}{n}$ converge.
It is more interesting to sum this series. Remember that $2i\sin (x) = e^{ix} - e^{-ix}$. Therefore, $\sum_{n=1}^{\infty} \frac{\sin (n)}{n} = \frac{1}{2i} \left( \sum_{n=1}^{\infty} \frac{e^{in}}{n} - \sum_{n=1}^{\infty} \frac{e^{-in}}{n} \right)$. Now use the formula $\sum_{n=1}^{\infty} \frac{z^n}{n} = - \log (1-z)$, valid for $|z| < 1$ and, by Abel's theorem, also on the boundary $|z| = 1$ except at $z = 1$. Therefore, we get $\frac{1}{2i}\left( \log (1 - e^{-i}) - \log (1 - e^{i}) \right) = \frac{\pi - 1}{2}$. I hope I did not make any mistakes.
Thanks! but I don't know about complex numbers yet, thank you ThePerfectHacker
Maybe we can solve it by comparison? Whenever I find a smaller series it converges, or a larger one, it diverges... I also tried the ratio test and the integral test, but the hypotheses are never satisfied. Moreover, I am in my first year of university, so I don't know any powerful tools... anyway, thank you ThePerfectHacker
But you asked if the series does converge or not and that was answered. Now, wanna know how to find the value of that series? Its value is $\frac{\pi-1}{2}.$
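That closed form is easy to check numerically: the partial sums of sin(n)/n oscillate, but they settle near (π − 1)/2 ≈ 1.0708, which also confirms convergence without any complex analysis.

```python
import math

def partial_sum(N):
    """N-th partial sum of the series sum_{n >= 1} sin(n)/n."""
    return sum(math.sin(n) / n for n in range(1, N + 1))

# The claimed value of the series:
target = (math.pi - 1) / 2   # ~1.0708
```

By Dirichlet's test the tail after N terms is O(1/N), so even a few thousand terms land within a few thousandths of the target.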
Discrete Mathematics and Its Applications, 7th Edition
Feb 28, 2012
Book Description
Discrete Mathematics and its Applications, Seventh Edition, is intended for one- or two-term introductory discrete mathematics courses taken by students from a wide variety of majors, including
computer science, mathematics, and engineering. This renowned best-selling text, which has been used at over 500 institutions around the world, gives a focused introduction to the primary themes in a
discrete mathematics course and demonstrates the relevance and practicality of discrete mathematics to a wide variety of real-world applications…from computer science to data networking, to
psychology, to chemistry, to engineering, to linguistics, to biology, to business, and to many other important fields.
Table of Contents
Chapter 1. The Foundations: Logic and Proofs
Chapter 2. Basic Structures: Sets, Functions, Sequences, Sums, and Matrices
Chapter 3. Algorithms
Chapter 4. Number Theory and Cryptography
Chapter 5. Induction and Recursion
Chapter 6. Counting
Chapter 7. Discrete Probability
Chapter 8. Advanced Counting Techniques
Chapter 9. Relations
Chapter 10 Graphs
Chapter 11. Trees
Chapter 12. Boolean Algebra
Chapter 13. Modeling Computation
Book Details
• Hardcover: 1072 pages
• Publisher: McGraw-Hill Science/Engineering/Math; 7th Edition (June 2011)
• Language: English
• ISBN-10: 0073383090
• ISBN-13: 978-0073383095
4aPAa1. Approximation of theoretical wind turbulence spectra from wind-speed measurements.
Session: Thursday Morning, December 5
Time: 8:00
Author: Xiao Di
Location: Appl. Res. Lab. and the Graduate Program in Acoust., Penn State, P.O. Box 30, State College, PA 16804
Author: Kenneth E. Gilbert
Location: Appl. Res. Lab. and the Graduate Program in Acoust., Penn State, P.O. Box 30, State College, PA 16804
Theoretical calculations of ground-to-ground sound propagation in a turbulent atmosphere require the power spectrum for horizontal wind-speed fluctuations, Φ_xx. Experimentally, however, the longitudinal spectrum, Φ_rr, is usually measured. In propagation predictions, a frequently used approximation is Φ_xx ≈ Φ_rr. Under the assumption of homogeneous, isotropic turbulence, an exact relation between Φ_xx and Φ_rr is derived and several limiting cases are discussed. For a Kolmogorov spectrum and small-angle scattering, for example, Φ_xx ≈ (11/6)Φ_rr is found. The practical significance of the relation between Φ_xx and Φ_rr for analyzing experimental data is discussed. It is shown that, for many situations, the approximation Φ_xx = constant × Φ_rr is reasonable for comparing predicted and measured sound levels. [Work supported by the Army Research Laboratory and the Applied Research Laboratory, Penn State.]
ASA 132nd meeting - Hawaii, December 1996
Geometry, please help!
The coordinates of the vertices of triangle ABC are A( -4, 4), B(-4, 2), C(-2,2) and triangle PQR are P(2, 4), Q(2, 2), R(0, 2). Which statement is correct?
a. Triangle ABC and triangle PQR are regular triangles. b. The two triangles are congruent by the AAA property. c. Triangle ABC is congruent to triangle PQR by the SSS property. d. The two
triangles are similar because the ratio of their corresponding sides is two.
it helps if you have graph paper
What do you see similar about these two triangles?
The two triangles are congruent. But which property?
Cool, thanks! (sorry I was going to respond to the question, I was just looking up the difference between the 2)
I messed that up, sorry. Side Side Side or Angle Angle Angle?
The Side Side Side postulate states that if three sides of one triangle are congruent to three sides of another triangle, then these two triangles are congruent.
So it is SSS?
AAA (Angle Angle Angle) does not prove that two triangles are congruent. However, it does make two triangles similar.
Yes, SSS.
Ok, thought so, had to check though haha, thanks!
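The SSS conclusion can be verified directly from the coordinates given in the question, a quick check in Python:

```python
import math

def dist(p, q):
    """Euclidean distance between two points (x, y)."""
    return math.hypot(p[0] - q[0], p[1] - q[1])

def side_lengths(a, b, c):
    """Side lengths of triangle abc, sorted so congruent triangles compare equal."""
    return sorted([dist(a, b), dist(b, c), dist(a, c)])

# Coordinates from the question
A, B, C = (-4, 4), (-4, 2), (-2, 2)
P, Q, R = (2, 4), (2, 2), (0, 2)
```

Both triangles have sides 2, 2 and 2√2, so all three corresponding sides are equal. That is exactly the SSS property, and it also shows why the "ratio of corresponding sides is two" option is wrong: the ratio is 1.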
Westport, CT Science Tutor
Find a Westport, CT Science Tutor
...I am also a very good public speaking and debate coach with expertise in both oral and written debate and the use of Microsoft PowerPoint. Finally, I also teach English to students from Japan
and Korea and can also teach basic Japanese.I am a trial attorney and have practiced for almost 10 years...
9 Subjects: including philosophy, Japanese, ESL/ESOL, public speaking
I have multi-disciplinary skills in chemistry, chemical engineering, materials science and polymer science. I have multiple years of experience in chemistry and engineering from industry. My work
has resulted in multiple patents, technical presentations and publications in leading science and engineering journals.
10 Subjects: including organic chemistry, physical science, calculus, chemistry
...Technical Writing heavily employs the proper usage of English grammar which is necessary for communicating through writing. General computer knowledge is basically how a computer works
internally. It is comprised of Hardware, Software, and a User Interface.
23 Subjects: including physics, writing, electrical engineering, mechanical engineering
...I have Undergraduate, Master's and PhD degrees in Computer Science and I was an Assistant Professor of Computer Science. I have Undergraduate, Master's and PhD degrees in Computer Science and
I was an Assistant Professor of Computer Science. I can help you get a general understanding of Compute...
36 Subjects: including ACT Science, reading, ESL/ESOL, algebra 1
...I have used it in writing my own songs, as well as improvising on other songs. I excelled through all of high school Math, including Trigonometry, getting "A"s and scoring 100 on what used to
be the old Sequential Math Course III regents. I got As in College Astronomy courses, and Astronomy is one of my hobbies.
29 Subjects: including biology, chemistry, physical science, astronomy
|
STM publishing: tools, technologies and change
Just a short post to share another example from my on-going work on HarfBuzz/LuaTeX. A rather pointless example – without using any code to correctly place mark glyphs (e.g., vowels) – showing
randomly coloured Arabic glyphs. Thanks to the power of HarfBuzz and the superb Lua C API (especially C closures and "for loop" iterators) the code to process the Arabic text is about 25 lines of Lua
Source of text for typesetting example: BBC Arabic. I don't know what the text says but Google Translate indicated it was neither controversial nor offensive – I hope that is the case!
Just to add an example with mark glyph positioning and random colours. Vowel positioning added about 10 lines of Lua script
Building on the work of porting LuaTeX to build on Windows I decided to explore adding HarfBuzz to provide Arabic shaping. The excellent HarfBuzz API lends itself to some interesting solutions so
here's a quick post to show some early results.
Source of text for typesetting fully vowelled Arabic examples: http://en.wikipedia.org/wiki/Arabic_language#Studying_Arabic
If you are interested to explore the inner structures of TeX boxes created in LuaTeX you can do this very conveniently using the following free resources:
• viznodelist.lua by Patrick Gundlach. This is an excellent Lua script that generates a text file containing a graph representation of the structures and nodes inside a \vbox{...} or \hbox{...}.
The file output by viznodelist.lua can be opened and displayed using GVEdit (see below).
• GVEdit is part of the Graphviz distribution and you can download a Windows installer from the Graphviz website.
Installing Graphviz should be straightforward using the MSI installer provided. To use viznodelist.lua you'll need to put the file in the appropriate place within your texmf tree. To find the right
location you may need to look into your texmf.cnf file to examine the LUAINPUTS variable – which typically looks something like this:
LUAINPUTS = .;$TEXMF/scripts/{$progname,$engine,}/{lua,}//;$TEXMF/tex/{luatex,plain,generic,}//
For example, suppose your texmf folder is located at h:\texmf then you could put viznodelist.lua in the folder h:\texmf\scripts\lua.
Here's an ultra-minimal plain LuaTeX example:
\setbox1001=\vbox{\hsize=50 mm Hello \hbox{Hello}}
\directlua{require("viznodelist").nodelist_visualize(1001, "mybox.gv")}
The above code will parse the contents of box 1001 and output a file called mybox.gv which you can open in GVEdit to view a graph of the node structures in box 1001. The following screenshot displays this:
GVEdit can export the graph in numerous formats including PDF, PNG etc.
This is a current work-in-progress so I'll keep it brief and outline the ideas.
There are, of course, a number of tools available to generate SVG from TeX or, more correctly, SVG from DVI. Indeed, I wrote one such tool myself some 9 years ago: as an event-driven COM object which
fired events to a Perl backend. For sure, DVI to SVG works but with LuaTeX you can do it differently and, in my opinion, in a much more natural and integrated way. The key is the node structures
which result from typeset maths. By parsing the node tree you can create the data to construct the layout and generate SVG (or whatever you like).
Math node structures
Let's take some plain TeX math and store it in a box:
\setbox101=\hbox{$\displaystyle\eqalign{\left| {1 \over \zeta - z - h} -
{1 \over \zeta - z} \right| & = \left|{(\zeta - z) - (\zeta - z - h) \over (\zeta - z - h)(\zeta - z)}
\right| \cr & =\left| {h \over (\zeta - z - h)(\zeta - z)} \right| \cr
& \leq {2 |h| \over |\zeta - z|^2}.\cr}$}
What does the internal node structure, resulting from this math, actually look like? Well, it looks pretty complicated but in reality it's quite easy to parse with recursive functions, visiting each
node in turn and exporting the data contained in each node. Note that you must take care to preserve context by "opening" and "closing" the node data for each hlist or vlist as you return from each
level of recursion.
The idea is that you pass box101 to a Lua function which starts at the root node of the box and works its way through and down the node tree. One such function might look like this:
<<snip lots of code>>
function listnodes(head)
  while head do
    local id = head.id
    if id==0 then
      mnodes.nodedispatch[id](head, hdepth+1)
    elseif id==1 then
      mnodes.nodedispatch[id](head, vdepth+1)
    elseif mnodes.nodedispatch[id] then
      -- other node types (glyphs, glue, kerns, ...) dispatch similarly
      mnodes.nodedispatch[id](head, 0)
    end
    if id == node.id('hlist') or id == node.id('vlist') then
      --print("enter recursing", depth)
      if id==0 then
        hdepth = hdepth + 1
      elseif id==1 then
        vdepth = vdepth + 1
      end
      --mnodes.open(id, depth, head)
      listnodes(head.head) -- recurse into the contents of the hlist/vlist
      if id==0 then
        mnodes.close(id, hdepth)
        hdepth = hdepth - 1
      elseif id==1 then
        mnodes.close(id, vdepth)
        vdepth = vdepth - 1
      end
      --print("return recursing", depth)
    end
    head = head.next
  end
end
What you do with the data in each node depends on your objectives. My preference (current thinking) is to generate a "Lua program" which is a set of Lua functions that you can run to do the
conversion. The function definitions are dictated by the conversion you want to perform. For example, the result of parsing the node tree could be something like this (lots of lines omitted):
<<snip loads of lines>>
At each node you can emit a "function" such as GLUE(skip,0,65536,65536,2,2) or GLYPH(49,1,327681,422343,0) which contain the node data as arguments of the function call. Each of these "functions" can
then be "run" by providing a suitable function body: perhaps one for SVG, HTML5 canvas and JavaScript, or EPS file or whatever. The point is you can create whatever you like simply by emitting the
appropriate data from each node.
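To make the idea concrete, here is a minimal standalone sketch of "running" such an emitted program. The function bodies below are my own illustrative stand-ins (simple collectors), not an actual SVG backend:

```lua
-- Give bodies to the "functions" emitted while walking the node tree.
-- These just collect a flat text description; an SVG backend would emit
-- <text>/<path> elements instead, an EPS backend PostScript, and so on.
local out = {}
function GLYPH(char, font, width, height, depth)
  out[#out + 1] = string.format("glyph char=%d width=%dsp", char, width)
end
function GLUE(subtype, ...)
  out[#out + 1] = "glue subtype=" .. tostring(subtype)
end

-- A tiny "program" of the kind the node-tree walker might emit:
local program = [[
GLYPH(49, 1, 327681, 422343, 0)
GLUE("skip", 0, 65536, 65536, 2, 2)
]]

-- Compile and run it (use loadstring on the Lua 5.1 of older LuaTeX builds):
assert(load(program))()
print(table.concat(out, "\n"))
--> glyph char=49 width=327681sp
--> glue subtype=skip
```

Swapping in different function bodies converts the same emitted program to different output formats.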
What about glyph outlines?
Fortunately, there is a truly wonderful C library called FreeType which has everything you need to generate the spline data, even with TeX's wonderfully arcane font encodings for the 8-bit world of
Type 1 fonts. Of course, to plug FreeType into the overall solution you will need to write a linkable library: I use DLLs on Windows. FreeType is a really nice library, easy to use and made even more
enjoyable by Lua's C API.
Even though I have omitted a vast amount of detail, and the work is not yet finished, I hope you can see that LuaTeX offers great potential for new and powerful solutions.
Note: if you want to zoom in on the matrices, right-click over an equation and set the MathJax parameters to your preferred values:
In this post I'll introduce a nice matrix-manipulation library called lua-matrix. It is written in pure Lua and so should be usable with any LuaTeX installation. You can download the code from GitHub. You can use lua-matrix as a convenient method to create PDF transformation matrices.
Note: where to install Lua code modules
The texmf.cnf variable you need to check is LUAINPUTS. See this newsgroup post for more details.
Tip: a Git tool for Windows users
Like many open source projects lua-matrix is hosted on GitHub. If you are a Windows user you may need to install some utilities that let you "check out" a copy of the code on repositories such as
GitHub or others based on SVN. For SVN repositories there is the excellent TortoiseSVN but for Git repos I use a free tool called Git for Windows.
Very brief summary of matrices
Quoting from the PDF standard:
"PDF represents coordinates in a two-dimensional space. The point (x, y) in such a space can be expressed in vector form as [x y 1]. The constant third element of this vector (1) is needed so that
the vector can be used with 3-by-3 matrices. The transformation between two coordinate systems is represented by a 3-by-3 transformation matrix written as
$$\displaystyle \left( \matrix{ a & b & 0 \cr c & d & 0 \cr e & f & 1 \cr} \right)$$
Because a transformation matrix has only six elements that can be changed, it is usually specified in PDF as the six-element array [a b c d e f]."
Note: This method of representing coordinates is referred to as homogeneous coordinates.
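Spelling out the multiplication shows what each of the six numbers does: a point \((x, y)\) transforms as

$$[x'\ y'\ 1] = [x\ y\ 1] \left( \matrix{ a & b & 0 \cr c & d & 0 \cr e & f & 1 \cr} \right) = [ax + cy + e \quad bx + dy + f \quad 1]$$

so \(a, b, c, d\) carry rotation and scaling while \(e, f\) carry the translation.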
The matrix for rotation by an angle \(\theta\) counter clockwise about the origin:
$$\displaystyle \left( \matrix{ \cos (\theta) & \sin (\theta) & 0 \cr -\sin (\theta) & \cos (\theta) & 0 \cr 0 & 0 & 1 \cr} \right)$$
This is expressed in PDF code as \(\cos(\theta)\ \sin(\theta)\ -\hskip-2pt\sin(\theta)\ \cos(\theta)\ 0\ 0\) cm
The matrix for translation by \((t_x, t_y)\) relative to the origin:
$$\displaystyle \left( \matrix{ 1 & 0 & 0 \cr 0 & 1 & 0 \cr t_x & t_y & 1 \cr} \right)$$
This is expressed in PDF code as \(1\ 0\ 0\ 1\ t_x\ t_y\) cm
The matrix for scaling by \(s_x\) in the horizontal direction and \(s_y\) in the vertical direction is:
$$\displaystyle \left( \matrix{ s_x & 0 & 0 \cr 0 & s_y & 0 \cr 0 & 0 & 1 \cr} \right)$$
This is expressed in PDF code as \(s_x\ 0\ 0\ s_y\ 0\ 0\) cm
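As a sanity check on the row-vector convention, here is a short plain-Lua sketch (independent of lua-matrix) that applies the rotation matrix above to a point; rotating \((1, 0)\) by 90 degrees counter clockwise should give \((0, 1)\):

```lua
-- Apply a PDF-style transformation: row vector {x, y, 1} times a 3x3 matrix.
local function apply(p, m)
  local r = {0, 0, 0}
  for j = 1, 3 do
    for k = 1, 3 do
      r[j] = r[j] + p[k] * m[k][j]
    end
  end
  return r
end

-- Rotation matrix in the form given above (angle in degrees, counter clockwise).
local function rotate(angle)
  local c, s = math.cos(math.rad(angle)), math.sin(math.rad(angle))
  return {{c, s, 0}, {-s, c, 0}, {0, 0, 1}}
end

local p = apply({1, 0, 1}, rotate(90))
print(string.format("%.3f %.3f", p[1], p[2]))  --> 0.000 1.000
```

Because the point is a row vector on the left, in a product like p * R * T the rotation R is applied before the translation T.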
Demonstration graphic
The following simple graphic (shown as inline SVG) will be used to explore transformations. The equivalent PDF graphic (in hand-coded PDF data) is shown below.
Equivalent PDF data
The following PDF code will draw a very similar graphic:
q % save graphics state
1 j % set the line join style
1 J % set the line cap style
10 M % set the miter limit
%Set the stroking color space to DeviceRGB
0 0 0 RG % black
% draw the axes
0 0 m
0.5 w
0 30 l
0 0 m
30 0 l
S % stroke
% draw the red arrow head on x axis
q % save graphics state
%Set the non-stroking color space to DeviceRGB
1 0 0 rg % red
% translate to end of line on x-axis
1 0 0 1 30 0 cm
% draw an arrowhead
0 0 m % move to the origin
0 1.5 l
2.5 0 l
0 -1.5 l
h % close the current subpath
B % fill and stroke
Q % restore graphics state
% draw the blue arrow head on y axis
q % save graphics state
%Set the non-stroking color space to DeviceRGB
0 0 1 rg % blue
% translate to end of line on y-axis
1 0 0 1 0 30 cm
% draw an arrowhead
0 0 m % move to the origin
-1.5 0 l
0 2.5 l
1.5 0 l
h % close the current subpath
B % fill and stroke
Q % restore graphics state
Q % restore the outer graphics state
Creating a graphic with LuaTeX nodes
As usual, a simple plain TeX setup.
grafik = node.new("whatsit","pdf_literal")
grafik.data=" q % save graphics state
1 j % set the line join style
1 J % set the line cap style
10 M % set the miter limit
%Set the stroking color space to DeviceRGB
0 0 0 RG % black
% draw the axes
0 0 m
0.5 w
0 30 l
0 0 m
30 0 l
S % stroke
% draw the red arrow head on x axis
q % save graphics state
%Set the non-stroking color space to DeviceRGB
1 0 0 rg % red
% translate to end of line on x-axis
1 0 0 1 30 0 cm
% draw an arrowhead
0 0 m % move to the origin
0 1.5 l
2.5 0 l
0 -1.5 l
h % close the current subpath
B % fill and stroke
Q % restore graphics state
% draw the blue arrow head on y axis
q % save graphics state
%Set the non-stroking color space to DeviceRGB
0 0 1 rg % blue
% translate to end of line on y-axis
1 0 0 1 0 30 cm
% draw an arrowhead
0 0 m % move to the origin
-1.5 0 l
0 2.5 l
1.5 0 l
h % close the current subpath
B % fill and stroke
Q % restore graphics state
Q "
Here is our new graphic \vskip 35mm
\noindent\hskip 15mm \copy1000
Notes about the PDF graphic
In the code above we have not assigned any size to the box containing the graphic, hence I needed to add \vskip 35mm \noindent\hskip 15mm to push the graphic into a space where it will be seen. To
give the graphic some dimensions, we'll need to add code such as
tex.box[1000].width = width value in sp
tex.box[1000].height = height value in sp
tex.box[1000].depth = depth value in sp
where the values assigned are in sp (scaled points). You may recall that 65536 sp = 1 TeX point, where 72.27 TeX points = 1 inch = 72 PostScript points (the same as the default unit in PDF).
As far as the LuaTeX engine is concerned, the box containing the graphic has zero size, so we have to tell LuaTeX how big we want it to be. In addition, the line widths, based on the above code, will be affected by any scaling, but it is not too difficult to fix that.
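For example, the demonstration graphic is 30 PDF units (big points, bp) on each side, so a small helper (my own sketch, using the conversion factors just quoted) gives the sp value to assign:

```lua
-- 1 inch = 72 bp (PDF/PostScript points) = 72.27 TeX points,
-- and 1 TeX point = 65536 sp.
local function bp_to_sp(bp)
  return math.floor(bp * (72.27 / 72) * 65536 + 0.5)
end

print(bp_to_sp(30))  --> 1973453
-- so, inside \directlua:  tex.box[1000].width = bp_to_sp(30)
```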
The Lua code
The idea is that we create a number of functions based on the lua-matrix library and save those functions into a Lua module called "mymatrix.lua". Within "mymatrix.lua" we import the lua-matrix code
via its module called "matrix" which we load with:
local matrix=require("matrix")
Our simple API
Here are the functions defined within our "mymatrix.lua" module:
• rotate(angle): returns a 3 x 3 rotation matrix (angle positive for counter clockwise)
• translate(tx,ty): returns a 3 x 3 translation matrix (tx, ty are translations in x, y directions)
• scale(sx,sy): returns a 3 x 3 scaling matrix (sx, sy are scaling in x,y directions)
• dump(mtx): simple debugging function (mtx = 3 x 3 matrix)
• toinlinetex(mtx): returns inline TeX code so we can typeset the matrix (mtx = 3 x 3 matrix)
• todisplaytex(mtx): returns display TeX code so we can typeset the matrix (mtx = 3 x 3 matrix)
• topdf(mtx): returns matrix in PDF code format (mtx = 3 x 3 matrix)
Here's the source code for mymatrix.lua. One huge advantage of putting Lua code into Lua modules is that it greatly simplifies dealing with \catcode issues. Note that within code saved in Lua files you use the regular Lua comment ("--") mechanism and not the TeX comment ("%") mechanism. You can use "%" when the code is embedded in a \directlua{...} call.
local matrix=require("matrix")
local rad=math.rad
local sin=math.sin
local cos=math.cos

-- Export the functions below as the "mymatrix" module (Lua 5.1/5.2 style).
module("mymatrix", package.seeall)

-- Function to generate PDF transformation (rotation) matrices.
function rotate(angle)
  local rot = matrix {{cos(rad(angle)),sin(rad(angle)),0},{-sin(rad(angle)),cos(rad(angle)),0},{0,0,1}}
  return rot
end

-- Function to generate PDF transformation (translation) matrices.
function translate(tx,ty)
  local tran = matrix {{1,0,0},{0,1,0},{tx,ty,1}}
  return tran
end

-- Function to generate PDF transformation (scale) matrices.
function scale(sx,sy)
  local scale = matrix {{sx,0,0},{0,sy,0},{0,0,1}}
  return scale
end

-- Simple debugging function: print each entry of a 3 x 3 matrix.
function dump(mtx)
  for i=1,3 do
    for j=1,3 do
      print(i, j, mtx[i][j])
    end
  end
end

-- Return display-math TeX code so we can typeset the matrix.
function todisplaytex(mtx)
  texcode=string.format([[$$\left(\matrix {%3.3f & %3.3f & %3.3f \cr %3.3f & %3.3f & %3.3f \cr %3.3f & %3.3f & %3.3f \cr } \right)$$]],
  mtx[1][1], mtx[1][2], mtx[1][3], mtx[2][1], mtx[2][2], mtx[2][3],
  mtx[3][1], mtx[3][2], mtx[3][3])
  return texcode
end

-- Return inline TeX code so we can typeset the matrix.
function toinlinetex(mtx)
  texcode=string.format([[$\left(\matrix {%3.3f & %3.3f & %3.3f \cr %3.3f & %3.3f & %3.3f \cr %3.3f & %3.3f & %3.3f \cr } \right)$]],
  mtx[1][1], mtx[1][2], mtx[1][3], mtx[2][1], mtx[2][2], mtx[2][3],
  mtx[3][1], mtx[3][2], mtx[3][3])
  return texcode
end

-- Return the matrix in PDF "cm" operator format.
function topdf(mtx)
  pdftext = string.format("%3.3f %3.3f %3.3f %3.3f %3.3f %3.3f cm",
  mtx[1][1], mtx[1][2], mtx[2][1], mtx[2][2], mtx[3][1], mtx[3][2])
  return pdftext
end
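As a quick check of the output format, here is topdf applied to the translation matrix for \((t_x, t_y) = (15, 15)\), restated standalone with a plain nested table standing in for the lua-matrix object (both index as mtx[row][col]):

```lua
-- Same formatting as mymatrix.topdf, without the lua-matrix dependency.
local function topdf(mtx)
  return string.format("%3.3f %3.3f %3.3f %3.3f %3.3f %3.3f cm",
    mtx[1][1], mtx[1][2], mtx[2][1], mtx[2][2], mtx[3][1], mtx[3][2])
end

local t = {{1, 0, 0}, {0, 1, 0}, {15, 15, 1}}
print(topdf(t))  --> 1.000 0.000 0.000 1.000 15.000 15.000 cm
```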
Full example
% We'll create a non-transformed graphic and store it in box 1000
grafik = node.new("whatsit","pdf_literal")
grafik.data=" q % save graphics state
1 j % set the line join style
1 J % set the line cap style
10 M % set the miter limit
%Set the stroking color space to DeviceRGB
0 0 0 RG % black
% draw the axes
0 0 m
0.5 w
0 30 l
0 0 m
30 0 l
S % stroke
% draw the red arrow head on x axis
q % save graphics state
%Set the non-stroking color space to DeviceRGB
1 0 0 rg % red
% translate to end of line on x-axis
1 0 0 1 30 0 cm
% draw an arrowhead
0 0 m % move to the origin
0 1.5 l
2.5 0 l
0 -1.5 l
h % close the current subpath
B % fill and stroke
Q % restore graphics state
% draw the blue arrow head on y axis
q % save graphics state
%Set the non-stroking color space to DeviceRGB
0 0 1 rg % blue
% translate to end of line on y-axis
1 0 0 1 0 30 cm
% draw an arrowhead
0 0 m % move to the origin
-1.5 0 l
0 2.5 l
1.5 0 l
h % close the current subpath
B % fill and stroke
Q % restore graphics state
Q "
% store graphic in box 1000
tex.box[1000] = node.hpack(grafik)
% create some transformation matrices
mtx1 = mymatrix.rotate(45)
mtx2 = mymatrix.scale(1.5,2)
mtx3 = mymatrix.translate(15,15)
% Now we copy our untransformed node
% and add the PDF transformation matrices to
% rotate, scale etc.
% We'll do a rotation and store in box 1001
grafik2 = node.copy(grafik)
cm = mymatrix.topdf(mtx1)
% copy the PDF data from the untransformed
% graphic and add the PDF rotation matrix
grafik2.data="q ".. cm ..grafik.data.." Q "
% store graphic in box 1001
tex.box[1001] = node.hpack(grafik2)
% We'll do a scale and store in box 1002
grafik3 = node.copy(grafik)
cm = mymatrix.topdf(mtx2)
% copy the PDF data from the untransformed
% graphic and add the PDF rotation matrix
grafik3.data="q ".. cm ..grafik.data.." Q "
% store graphic in box 1002
tex.box[1002] = node.hpack(grafik3)
% Now multiply the scale and rotation matrices
% --- experiment with different combinations
combo = mtx1 * mtx2
grafik4 = node.copy(grafik)
cm = mymatrix.topdf(combo)
% copy the PDF data from the untransformed
% graphic and add the result of multiplication
grafik4.data="q ".. cm ..grafik.data.." Q "
% store graphic in box 1003
tex.box[1003] = node.hpack(grafik4)
% Now multiply the scale, rotation and translation
% matrices --- experiment with different combinations
combo2 = mtx1 * mtx2 * mtx3
grafik5 = node.copy(grafik)
cm = mymatrix.topdf(combo2)
% copy the PDF data from the untransformed
% graphic and add the result of multiplication
grafik5.data="q ".. cm ..grafik.data.." Q "
% store graphic in box 1004
tex.box[1004] = node.hpack(grafik5)
Here are our graphics \vskip 35mm
\noindent\hskip 15mm \copy1000 Default graphic
\vskip 35mm
\noindent\hskip 15mm \copy1001 Rotated graphic = \directlua{tex.sprint(mymatrix.toinlinetex(mtx1))}
\vskip 35mm
\noindent\hskip 15mm \copy1002 Non-uniformly scaled graphic = \directlua{tex.sprint(mymatrix.toinlinetex(mtx2))}
\vskip 35mm
\noindent\hskip 15mm \copy1003 Rotation $\times$ scale = \directlua{tex.sprint(mymatrix.toinlinetex(combo))}
\vskip 35mm
\noindent\hskip 15mm \copy1004 Rotation $\times$ scale $\times$ translate =\directlua{tex.sprint(mymatrix.toinlinetex(combo2))}
Resulting PDF
As usual, through the Google Docs viewer or download PDF here.
Just a 10-minute hack to explore putting spot colours into a PDF via pdf_colorstack nodes. I don't have access to Acrobat Professional at the moment to check the separations properly, so treat this as an "alpha" method (i.e., not fully tested...). The colour defined below is lifted straight from an early PDF specification and implemented via LuaTeX nodes. As it says "on the tin": a quick and dirty method.
\directlua {
n = pdf.immediateobj("stream", "{ dup 0.84 mul
exch 0.00 exch dup 0.44 mul
exch 0.21 mul
}", "/FunctionType 4
/Domain [0.0 1.0]
/Range [0.0 1.0 0.0 1.0 0.0 1.0 0.0 1.0] ")
o = pdf.immediateobj("[ /Separation /LogoGreen
/DeviceCMYK".." "..n.." 0 R]")
pdf.pageresources = " /ColorSpace << /LogoGreen "..o.." 0 R >> "
pdf_colstart = node.new("whatsit","pdf_colorstack")
pdf_end = node.new("whatsit","pdf_colorstack")
pdf_colstart.data="/LogoGreen CS /LogoGreen cs 1 SCN 1 scn "
pdf_end.data= " "
pdf_end.cmd = 2
tex.box[1999]= node.hpack(pdf_colstart)
tex.box[2000]= node.hpack(pdf_end)
There are many variations of passages of Lorem Ipsum available, but the majority have suffered alteration in some form, by injected humour, or randomised words which don't look even slightly believable. If you are going to use a passage of Lorem Ipsum, you need to be sure there isn't anything \makeitgreen{embarrassing hidden in the middle of text. All the Lorem Ipsum generators on the Internet tend to} repeat predefined chunks as necessary, making this the first true generator on the Internet. It uses a dictionary of over 200 Latin words, combined with a handful of model sentence structures, to generate Lorem Ipsum which looks reasonable. The generated Lorem Ipsum is therefore always free from repetition, injected humour, or non-characteristic words etc.
Resulting PDF
As usual, through the Google Docs viewer or download here.
Block A is on top of block B and block B is being pulled with a tension T. There is no friction between A and B nor is there friction between B and the floor. What is the ensuing motion of the blocks when T is applied?
Hi, what's your idea about it?
When the rope pulls block B, what happens to block A?
\[T=(m _{A}+m _{B})a\]
we take the two bodies as one mass
Okay. When we pull B for a while, what happens to A?
since there is no friction between them, A goes back
So block A doesn't move?
no, I mean block A goes back; notice that we assume no friction between the 2 masses
Because there's no friction between blocks A and B, Block A goes back and falls down as we pull block B?
I think yes, since there is no force on it; check T
Cool :) Thanks
you're welcome, friend. For more confidence, ask others; we solved it with the assumption that there is really no friction on any surface
heena, what's your idea?
asking me ok wait lemme redo this just a sec
Well, if there is no friction between block A and B, then the upper block falls back as the lower block moves. Now a question arises when tension is applied: since there is no friction between the block and the floor, whether you apply a little force or not, it will move.
Posts by
Total # Posts: 2,569
How can I group white substances into 2 groups according to their properties&list the properties of each group? Calcium carbonate, citric acid, sucrose, Phenyl salicylate, Potassium iodide,and Sodium
chloride Also, which of them dissolves in water and which has solubility in e...
use the discriminant to solve for the smallest value of b that would make the roots of the equation x^2+bx+5=o imaginary
a grocery store display consists of cases of soda. there are 15 cases at the base, and 1 case at the top. if there are 15 rows, how many cases are needed for the entire display? a) 180 b)30 c)130 d)
how much money would you earn in the fifth year? would you just plug in 5 into n to get your answer?
Physical Science
The string of a certain yo-yo is 80 cm long and will break when the force on it is 10 N. What is the highest speed the 200 g yo-yo can have when it is being whirled in a circle? Ignore the
gravitational pull of the earth on the yo-yo.
Physical Science
So far for: A) w=mg =(70 kg)(25m/s^2) = 1750 N B) m=wg =(1750 N)(9.8 m/s^2) =1666 (I don't know what the units would be for this or if it even right) If you can help, it would be greatly appreciated!
Physical Science
Thank you!
Physical Science
A bicycle and its rider together have a mass of 80 kg. If the bicycle's speed is 6m/s, how much force is needed to bring it to a stop in 4 seconds
What volume of 6.43% (wt.vol.) NaOCl solution is required to oxidize 221 mg of 9-fluorenol to 9-fluorenone? I'm not to sure of where to start with this problem. Please help!
a grocery store display consists of cases of soda. there are 15 cases at the base, & 1case at the top. if there are 15 rows, how many cases are needed for the entire display? a) 180 b)30 c) 130 d) 15
The product of two consecutive odd integers is 143. Find the integers.
a colony of 300 bacteria doubles each day. write an explicit formula to determine how many bacteria there would be on the nth day. how would you do this?
I am working on an assignment in which I need to put together a pro forma balance sheet. At the end of the assignment, I need to find the predicted values of accounts for the upcoming year. In 2011,
Notes Payable was 11.28 million; Long Term Bonds were 7 million; Common Stock ...
Physical Science
Thank you so much!
Physical Science
A person dives off the edge of a cliff 33m above the surface of the sea below. Assuming that air resistance is negligible, (a) How long does it dive last? (b) and with what speed does the person
enter the water?
The perimeter of a geometric figure is the sum of the lengths of its sides. If the perimeter of a pentagon is 48 meters, find the length of each side.
A quantity of ice at 0.0°C was added to 25.0 g of water at 21.0°C to give water at 0.0°C. How much ice was added? The heat of fusion of water is 6.01 kJ/mol and the specific heat is 4.18 J/(g · °C).
in 1991 the life expectancy of males was 65.4 in 1998 it was 68.8 let e represent life expectancy in year t and let t represent number of years since 1991
Please tell me where I can find Articles about Energy Resource and issues. Thanks!
calculate the energy required to heat 1.00L of ice at -50.0C to steam at 180.0C
What is the y-intercept of 4x+8y=12?
A farmer keeps hens and rabbits on his farm. One day, he counted a total of 70 heads and 196 legs. How many more hens than rabbits does he have? Explain your working please. Your help is greatly
Sandy decided to give one-fourth of her CDs to Jenn. Then she gives one-half of the remaining CDs to Bob. If she is left with 6 CDs,how many CDs did she have to begin with?
A wall hanging is x inches wide and has a height of 4x inches. If the perimeter is 40inches, what is the height in inches? Please help!
Find the area of each circular segment to the nearest tenth, given its central angle, x, and the radius of the circle. x= pi/8, r=7
the shorter leg of a 30-60-90 triangle is 9.4 inches long. find the perimeter.
Haha, Steve is correct, I apologize for the guesswork. :/ It's g(x)= 1/3 -4/3x. Going off the work Reiny did, I am still rather clueless, so considering it's only two problems, I guess I'll just have
to do some guesswork myself. Thank you for the help, though!
I am not exactly sure of what I'm supposed to do in the following question: Describe and correct the error, given the functions f(x)=3x - 5 and g(x)=1/3 - 4/3. The first equation is as follows: g(-3)
= 1/3 - 4/3 (3) =1/3 - 4 = -3 2/3 The second equation is: f(1)= 1/3 - 4/3(...
A sample of cesium metal reacted completely with water, evolving 47.0 mL of dry H2 at 31°C and 769 mmHg. What is the equation for the reaction?
A student investigates skateboarding down a slope. She times how long it takes her to skateboard between two lines on a sloping pavement. She varies the distance between the two lines and records her
results in a table. Distance s/m time t/s Distance 2 Time 2.32 Distance 4 Tim...
A student investigates skateboarding down a slope. She times how long it takes her to skateboard between two lines on a sloping pavement. She varies the distance between the two lines and records her
results in a table. Distance s/m time t/s 2 2.32 4 4.23 9 8.22 11 9.53 The st...
can anyone tell me is there any strong response aganist norcoss argument that if it wrong to kill puppies, so it is also wrong to support farming factory
WHat is the strongest response against norcoss argument on puppies, pig and people
Physics 201
Suppose the mass of the wing is 0.258g and the effective spring constant of the tissue is 4.00E-4 N/m. Distance d1 = 3.03 mm and distance d2 = 1.76 cm. If the mass m moves up and down a distance of
2.08 mm from its position of equilibrium, what is the maximum speed of the oute...
Am doing Autism and I have to give farther information. Thanks The cognitive part: I have...(1) Selecting attention, I have they absorve things..(2) Concrete processing, i dont have anything for this
one. (3) Rote memory: I putted recalling.
How can you have Motivation, Forecast and Orientation in speech intorduction? like show the audience pictures etc?..thanks!
a rod supports a 2.35 kg lamp. a)what is the magnitude of the tension in the rod? b)calculate the components of the force that the bracket exerts on the rod.
Am doing a project about Autism, what are the Cogngntive and pscohosocial of Autism and where can I find them? Thanks!
what is the adverb and verb for the sentence The players spent all day there.
The gcf(a,b)=495 and cm(a,b)=31,185 Find possible values of a and b if a is divisible by 35 and b is divisible by 81
V/CV OR VC/V PATTERN
If AC is 50 feet, and AB is 40 feet, which is the length of BC?
Oops, sorry I meant .004 under initial moles of RNH2 not .001. Regardless I get an answer of 9.84 instead of 10.81
For part b of problem 2, I used ..RNH2 + HCl--> RNH3 + Cl I.+.001..+.002...0......0 C.-x......-x.....+x....+x E+.001-x..+.002-x..+x...+x Is this correct? My issue with these problems is that I don't
know when to use the variables (like +x and -x for the C in ICE) or whe...
Regarding your first post, I believe I used aspirin as the weak acid. In a previous problem which was related to these problems, it stated "aspirin is a weak acid with a Ka of 3.0x10^-5"
Sorry, one more question. For .........RNH2 + HCl ==> RNH3^+ + Cl^- initial...0....................... added...........0 added..........2.0 added..........4.0 etc change..................... the
initial RNH2 is 0? Shouldn't it be the initial moles in the problem (.004mo...
For problem 1, if it were changed to 20 mL of the .07222M aspirin solution and 5mL of .2M NaOH, how would solving it differ? I tried ...........HA + NaOH ==> NaA + H2O
initial..0014444.................. add..............001............ change...-.001.-.001.....+.001...+.001...
1.Calculate the pH of a solution prepared by mixing 20mL of the .07222M aspirin solution with 10mL of .2M NaOH. 2. Calculate the pH for the titration of 40mL of .1M solution of C2H5NH2 with .1M HCl
for a)0 mL added and b)20 mL added c)40 mL added d)50mL added. Kb= 6.4x10^-4
For the ICE box, you are supposed to use the molarity of the aspirin. I did not realize that. Thank you so much for your help!
The answer that was in my answer key was 2.83 but I got 3.48. Is there something wrong in the calculations?
Aspirin is a weak acid with a Ka of 3.0x10^-5. Find the pH of a solution by dissolving .65g of aspirin in water and diluting it to 50mL. You can use RCOOH to represent aspirin. Molecular weight of
aspirin is 180g/mol
In about 100 words, explain what kinds of features of characteristics a midsize business would seek in its electronic commerce software that would not matter to a smaller business.
How many ml of a 34% sugar solution must be mixed with 2 ml of a 54% sugar solution to make a 38% solution?
Given that a√b^2-c = 5k, find the value of: 1. k when a = 3, b = 6 and c = 20 2. c when a = 4, b = 7, and k = 11 Explain your working please
A shopper in a supermarket pushes a loaded 31 kg cart with a horizontal force of 12 N. The acceleration of gravity is 9.81 m/s2 . a) Disregarding friction, how far will the cart move in 3.6 s,
starting from rest? Answer in units of m
Chemistry (Follow up post - Dr.BOB22)
The first link seems to be broken.
Chemistry (Follow up post - Dr.BOB22)
Oh, I'm sorry. I wasn't really trying to put them together. I just wanted you to see what I understood about ionization. And then ask my question. Do you think you could help me understand lasers and
plasma arcs connection to ionization levels? or point me to a website...
Chemistry (Follow up post - Dr.BOB22)
Ionization is when an electron is removed from an atom and ionization energy is the energy required to do this. "Electrons stream from the negative electrode to the positive electrode. In the process
of moving from one electrode to the other they knock electrons in the en...
Thank you. My apologies for the repeated postings.
I have an assignment to 'Research ionization energy levels and come up with 3 real world applications' and was told to look up lasers, plasma arcs and neon lights. Hours later, (lol) I'm still having
a little trouble. Is there any way someone can explain the connec...
I'm still having a little trouble. Is there any way you can explain the connection between lasers, plasma arcs and neon lights?
Do these three things have ionization energy levels? To be honest, I'm not quite sure how ionization works.
I have an assignment to 'Research ionization energy levels and come up with 3 real world applications'. I'm not so sure how to do this. Any help would be appreciated.
college physics
What distance will a 10 hp motor lift a 2000 lb elevator in 30 s? What was the average velocity of this elevator during the lift?
An 83.2-kg propeller blade measures 2.24m end to end. Model the blade as a thin rod rotating about its center of mass. It's initially turning at 175rpm. Find the blade's angular momentum, the
tangential speed at the blade tip, and the angular acceleration and torque re...
If a piece of wood is 5 cm longer than a second piece, and 3/4 of the second piece is equal to 3/5 of the first, what is the length of the second piece? Explain your workings
Classify the possible combinations of signs for a reaction's ∆H and ∆S values by the resulting spontaneity. A) Spontaneous as written at all temperatures. B) Spontaneous in reverse at all
temperatures. C) Spontaneous as written above a certain temperature. D) S...
Calculate the pH of 100.0 mL of .0200 M pyruvic acid and .0600 M sodium pyruvate Calculate the pH of 100.0 mL of .0200 M pyruvic acid and 0.0600 M sodium pyruvate with 0.250 millimoles of H3O+ added.
math PLEASE HURRY
there are 24 students in a science class. Mr. Sato will give each pair of students 3 magnets. So far, Mr. Sato has given 9 pairs of students their 3 magnets. How many more magnets does Mr. Sato need
so that each pair of students has exactly 3 magnets?
a developer wants to enclose a rectangular grassy lot that borders a city street for parking. If the developer has 336 feet of fencing and does not fence the side along the street, what is the
largest area that can be enclosed and what are the dimensions that give this maximum...
I have no clue where to begin on this problem. Can some one help me please. One demographer believes that the population growth of a certain country is best modeled by the function P (t) =15 e^.08t,
while a second demographer believes that the population growth of that same co...
Calculate the pH of the following 0.0500 M H3AsO4 and 0.0500 M NaH2AsO4?
Calculate the pH of a solution made with 0.0600 M carbonic acid, H2C03 and 0.0300 M NaHCO3.
Calculate the pH of a solution made with 0.240 M H3PO3 and 0.480 M NaH2PO3.
6th grade math
A: 5/100 is exactly 0.05, not less. 0.05 is 5%, not 50% B. 0.7 is exactly 1/7, but 1/7 is not 10%. 10% is 1/10 C. 2/10 is 20%, not 25% and 0.9 is 90%, not 25% D. 0.3 is not 36%, it's 30%. 36/10 is
3.6, not 36%, which is 0.36. None of them seem correct. D':
On Apollo missions to the Moon, the command module orbited at an altitude of 150 above the lunar surface. How long did it take for the command module to complete one orbit?
The drawing shows a hydraulic system used with disc brakes. The force vector F is applied perpendicularly to the brake pedal. The pedal rotates about the axis shown in the drawing and causes a force
to be applied perpendicularly to the input piston (radius = 9.39 10-3 m) in th...
u.s. court system part 1
answers to penn foster exam-40600900
how long will it take to double my investment with 2500 at 4% annually
A rocket, 8975 kg, traveling at 1255 m/s East explodes into two pieces. if one piece, 2450 kg, moves at 750.0 m/s due south, what is: the mass of the second piece? the momentum of the second piece?
the velocity of the second piece?
In 2006, the General Social Survey asked, "Do you see yourself as someone that tends to be lazy?" For this question, 20 people said that they definitely did out of 1513 randomly selected people. What
is the 95% confidence interval for the proportion of all Americans ...
A proton, moving in negative y-direction in a magnetic field , experiences a force of magnitude F, acting in the negative x direction. a) What is the direction of the magnetic field producing force?
b) Does your answer change if the word proton is replaced by electron? I get a...
What carboxylic acid and alcohol is needed to produce isopentyl propionate?
Word Problem
A developer wants to enclose a rectangular grassy lot that borders a city street for parking. If the developer has 212 feet of fencing and does not fence the side along the street, what is the
largest area that can be enclosed?
BEDMAS with decimals
An isolated segment of wire of length L=4.50 m carries a current of magnitude i=35.0 A at an angle theta=50.3 degrees with respect to a constant magnetic field with magnitude B= 6.70E-2 T. What is
the magnitude of the magnetic force on the wire? a) 2.66 N b) 3.86 c) 5.60 N d) ...
What carboxylic acid and alcohol is needed to produce isobutyl propionate?
What carboxylic acid and alcohol is needed to produce propyl acetate?
Suppose that the magnetic field of the Earth were due to a single current moving in a circle of radius 2988 km through the Earth's molten core. The strength of the Earth's magnetic field on the
surface near a magnetic pole is about 6.00E-5 T. About how large a curren...
1)A wire is carrying a current, i, in the positive y-direction. The wire is located in a uniform magnetic field, B, oriented in such a way that the magnetic force on the wire is maximized. The
magnetic force acting on the wire, FB, is in the negative x direction. What is the d...
Please help me with these choose the best answers since I'm not sure of my answers. A current element produces a magnetic field in the region surrounding it. at any point in space, the magnetic field
produced by this current element points in a direction that is a) radial ...
1)A wire is carrying a current, i, in the positive y-direction. The wire is located in a uniform magnetic field, B, oriented in such a way that the magnetic force on the wire is maximized. The
magnetic force acting on the wire, FB, is in the negative x direction. What is the d...
Haskell influences in CoffeeScript
While learning a bit of Haskell I was struck by the syntactical similarities to CoffeeScript. Since Haskell predates CoffeeScript by twenty years (1990/2010), it seems that it is Haskell that has had
an influence on CoffeeScript. What follows is a list of the similarities that I have observed.
Binding Variables
In Haskell we can bind a local variable to a scope using the let... in syntax:
circumference r = let pi = 3.14159
in pi * 2 * r
CoffeeScript supports the same thing via the do keyword. One interesting thing about do is that it allows a way to define a variable that is (sort of) not scoped to a function.
circumference = (r) ->
do (pi = 3.14159) ->
pi * 2 * r
You can read about some interesting uses of do in Reginald Braithwaite's CoffeeScript Ristretto.
Significant Whitespace
As you can see in the examples above Haskell and CoffeeScript both use significant whitespace in a similar way.
Expressions > Statements
Both Haskell and CoffeeScript encourage us to favour expressions over statements. CoffeeScript supports statements that aren't expressions but does everything possible to let you avoid them.
evenness = (i) ->
  if i % 2 is 0
    'even'    # branch values restored for a runnable snippet; the source was truncated here
  else
    'odd'
Combining this with the do notation we can do
evenness = (i) ->
  do (is_even = (n) -> n % 2 is 0) ->
    if is_even i
      'even'
    else
      'odd'
List Comprehensions
Haskell and CoffeeScript have similar syntax for list comprehensions. The following Haskell function selects the even elements of a list (x `mod` 2 == 0) and maps them through a function that multiplies
them by two (x*2).
double_odds xs = [x*2 | x <- xs, x `mod` 2 == 0]
The equivalent CoffeeScript is:
double_odds = (xs) ->
x*2 for x in xs when x % 2 is 0
Function Call Syntax
Both languages use a parenthesis-free syntax for applying a function.
add 2 3
add 2, 3
For basic incrementing or decrementing integers Haskell and CoffeeScript have the same syntax.
Literate Mode
Haskell and CoffeeScript both support a ‘literate’ mode that emphasizes comments over code.
In CoffeeScript's literate mode markdown text is interpreted as a comment. Indented text is executed as CoffeeScript code.
What I need is a function that doubles the even numbers in a list. Here's one!
double_odds = (xs) ->
x*2 for x in xs when x % 2 is 0
Haskell's literate mode uses a > to indicate lines of code.
What I need is a function that doubles the even numbers in a list. Here's one!
> double_odds xs = [x*2 | x <- xs, x `mod` 2 == 0]
That's all that I can think of, but I'm sure there is more. CoffeeScript is often described as a blend of Ruby and Python that compiles to JavaScript. I think this underplays the influence of Haskell
in CoffeeScript's design.
Steilacoom Math Tutor
Find a Steilacoom Math Tutor
...I always said that I wanted to be a "professional student" when I grew up so I could stay in school forever, which in becoming a teacher, I get to do just that! I am pursuing my degree in Early
Childhood Education, and in the mean time, I am working in a child care program through the YMCA. I w...
16 Subjects: including prealgebra, writing, Spanish, algebra 1
...I am also being considered as a Finalist for the National Merit Scholarship Competition. In order to effectively tutor students, I believe in assessing the student's needs and ability and then
walking them through the process with them doing most of the work, leading them toward independent succ...
15 Subjects: including algebra 1, algebra 2, calculus, chemistry
...My specialties include Adult Basic Education (ABE), GED preparation, basic math and prealgebra, English grammar and composition, and research methods (college-level). I also can tutor for
social studies, religious/biblical studies, and classical languages. I have a passion for teaching and work...
20 Subjects: including prealgebra, English, grammar, reading
...I have them go through the problem as I go through the problem and where they get stuck at is when I get to come in and explain and or help. I do the work with them but I do not just give the
answers out. I can admit that i get stuck at times too but I think that is when we both can help each o...
5 Subjects: including differential equations, probability, ACT Math, geometry
...I can cover the introductory to intermediate accounting classes. I make sure that you understand the concepts involved by asking what you understand and filling in the gaps. I also make up
examples and questions based on the scenario.
12 Subjects: including algebra 1, algebra 2, ASVAB, prealgebra
What is the distance between Baguio city and Banaue Rice Terraces in kilometers?
Assuming you meant
• Baguio, the place in Baguio, the Philippines
Two classes of asymptotically different positive solutions of the equation
(English) Zbl 1169.34050
This paper is devoted to the problem of the existence of two classes of asymptotically different positive solutions of the delay equation
as $t\to \infty ,$ where $f:{\Omega }\to ℝ$ is a continuous quasi-bounded functional that satisfies a local Lipschitz condition with respect to the second argument and ${\Omega }$ is an open subset
in $ℝ×C\left(\left[-r,0\right],ℝ\right)$. Two approaches are used. One is the method of monotone sequences and the other is the retract method combined with Razumikhin’s technique. By means of linear
estimates of the right-hand side of the equation considered, inequalities for both types of positive solutions are given as well. Finally, the authors give an illustrative example and formulate some
open problems.
34K25 Asymptotic theory of functional-differential equations
Organizing and Displaying Distributions of Data
Chapter 7: Organizing and Displaying Distributions of Data
Created by: CK-12
The local arena is trying to attract as many participants as possible to attend the community’s “Skate for Scoliosis” event. Participants pay a fee of $10.00 for registering, and, in addition, the
arena will donate $3.00 for each hour a participant skates, up to a maximum of 6 hours. Create a table of values and draw a graph to represent a participant who skates for the entire 6 hours. How
much money can a participant raise for the community if he/she skates for the maximum length of time?
This problem will be revisited later in the chapter.
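As a quick sketch of the table of values the problem asks for (variable names are mine; $10 registration fee plus a $3-per-hour donation, capped at 6 hours):

```python
# Money raised as a function of hours skated: $10 registration fee
# plus a $3 donation per hour skated, to a maximum of 6 hours.
def money_raised(hours):
    return 10 + 3 * min(hours, 6)

table = {t: money_raised(t) for t in range(7)}
print(table)            # {0: 10, 1: 13, 2: 16, 3: 19, 4: 22, 5: 25, 6: 28}
print(money_raised(6))  # skating the full 6 hours raises $28
```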
When data is collected from surveys or experiments, it is often displayed in charts, tables, or graphs in order to produce a visual image that is helpful in interpreting the results. From a graph or
table, an observer is able to detect any patterns or trends that may exist. The most common graphs that are used in statistics are line graphs, scatter plots, bar graphs, histograms, frequency
polygons, circle graphs, and box-and-whisker plots.
Chapter Outline
Chapter Summary
New Directions in Distribution Management
Advanced systems turn ‘event-driven’ binary schemes into hybrid hierarchical controls.
and voltage vary in proportion to one another across the network, solutions to accurately compute voltage and current flow at any node are iterative. Some software uses a closed-form voltage drop
computation, but those are generally estimated and based on assumptions that can limit accuracy. For DMS to work well in a smart grid context, the loadflow calculations must be accurate.
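As a rough illustration of why those loadflow solutions are iterative, here is a toy backward/forward sweep on a hypothetical three-node radial feeder (all names and per-unit values are invented for this sketch, not drawn from any particular DMS product):

```python
# Toy backward/forward-sweep loadflow for a radial feeder.
# Constant-power loads make current depend on voltage and vice versa,
# so node voltages must be solved iteratively.
Z = [0.01 + 0.02j] * 3          # series impedance of each line section (pu)
S = [0.10 + 0.05j] * 3          # complex power drawn at each node (pu)
V_source = 1.0 + 0.0j           # substation voltage (pu)

V = [V_source] * 3              # initial flat-voltage guess
for _ in range(50):
    I_load = [(S[k] / V[k]).conjugate() for k in range(3)]
    # Backward sweep: each branch carries the sum of downstream load currents.
    I_branch = [sum(I_load[k:]) for k in range(3)]
    # Forward sweep: subtract the series voltage drop section by section.
    V_new, v_up = [], V_source
    for k in range(3):
        v_up = v_up - Z[k] * I_branch[k]
        V_new.append(v_up)
    if max(abs(V_new[k] - V[k]) for k in range(3)) < 1e-10:
        V = V_new
        break
    V = V_new

print([round(abs(v), 4) for v in V])  # voltage magnitudes drop along the feeder
```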
Power system engineers use the term “state estimation” to mean the ability to monitor certain points in the network for things like voltage and current, and solve those parameters for other,
non-telemetered points of interest. This technique is non-trivial, and has been applied to the sparse networks of transmission systems for many years. For distribution networks, the problem becomes
much more intensive and state estimation has to work hand-in-glove with the loadflow method chosen for analysis.
For DMS to do its job, it must be able to to accept real-time data from SCADA systems and other sources, and incorporate that information into its network solver. This capability is closely related
to the state estimation function described above and adds another level of complexity. That’s because as real-time parameters change, the state estimator must be capable of solving and resolving the
network, deciding which, if any, of the monitored parameters justify a re-running of the loadflow calculations.
Among the most important functions for a modern DMS are fault location and service restoration. This capability is an extension of basic short-circuit current and voltage computations. The DMS uses
fault calculations to help system operators plan protective schemes, analyze system failures and plan service restoration.
In addition to these core capabilities, there are two more important characteristics for DMS. North American style networks require that network solvers be capable of solving three-phase, unbalanced
systems. In European networks, a three-phase balanced solution is adequate, since nearly all distribution systems operate with all three phases in balance. But North American networks often consist
of segments of single-phase circuits, so that the simplifying assumptions of the balanced solution do not apply. Most importantly, modern DMSs must have the ability to quickly solve large, complex
networks, often with hundreds of thousands of nodes or more. In the complex equation of loadflow + state estimation + real-time integration, solving a large network in near-real-time is a challenge.
But high performance is the price of effective control of the distribution grid.
It might come as a surprise to some, but tightening the control of the operation of medium- and low-voltage systems for greater efficiency can lead to significant savings for the utility and its
ratepayers. Operating efficiencies can come through a wide array of techniques, including balancing phasing, managing system voltage and power factor, and optimizing network configuration. To take
one example, most utilities today operate the distribution grid in a fairly static configuration. At most, they may be able to reconfigure feeders through switching once a year or so to optimize
performance as seasonal loads change. As the grid gets smarter, with more points of switching control and more places where operating parameters are measured, more
Angular quantities for general motion
Here, we shall define and interpret angular quantities very generally with respect to a "point" in the reference system - rather than an axis. This change in reference of measurement allows us to
extend application of angular quantities beyond the context of rotational motion. We can actually associate all angular quantities even with a straight line motion i.e. pure translational motion. For
example, we can calculate torque on a particle, which is moving along a straight line. Similarly, we can determine angular displacement and velocity for a projectile motion, which we have studied
strictly from the point of view of translation. We shall work out appropriate examples to illustrate the extension of angular concepts to these motions.
We must understand here that broadening the concept of angular quantities is not without purpose. We shall find out in the subsequent modules that de-linking angular concepts like torque
and angular momentum from an axis lets us derive a very powerful law known as conservation of angular momentum, which is universally valid, unlike Newton's law (for translational or rotational motion).
The example given below calculates the average angular velocity of a projectile to highlight the generality of angular quantities.
Problem 1 : A particle is projected with velocity "v" at an angle of "θ" with the horizontal. Find the average angular speed of the particle between point of projection and point of impact.
Solution : The average angular speed is given by :
Figure 1: Average angular speed during the flight of a projectile.

$\omega_{avg} = \frac{\Delta \theta}{\Delta t}$
From the figure, magnitude of the total angular displacement is :
$\Delta \theta = 2\theta$
On the other hand, time of flight is given by :
$\Delta t = \frac{2v\sin\theta}{g}$
Putting these values in the expression of angular velocity, we have :
$\omega_{avg} = \frac{\Delta \theta}{\Delta t} = \frac{2\theta g}{2v\sin\theta} = \frac{\theta g}{v\sin\theta}\ \text{rad/s}$
From this example, we see that we can indeed associate angular quantity like angular speed with motion like that of projectile, which is not strictly rotational.
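A quick numerical check of this result (the sample launch speed and angle below are chosen purely for illustration):

```python
import math

# Average angular speed of a projectile about its launch point:
# omega_avg = (theta * g) / (v * sin(theta)), obtained from Delta(theta)/Delta(t).
g = 9.8                      # m/s^2
v = 20.0                     # launch speed, m/s (illustrative value)
theta = math.radians(45)     # launch angle

omega_formula = theta * g / (v * math.sin(theta))

# Cross-check against the two ingredients used in the derivation:
delta_theta = 2 * theta                   # total angular displacement
delta_t = 2 * v * math.sin(theta) / g     # time of flight
omega_direct = delta_theta / delta_t

print(round(omega_formula, 4))  # 0.5443 rad/s; equals delta_theta / delta_t
```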
Self-similar processes in communications networks.
(English) Zbl 0988.90003
From the introduction: The main objective of the present paper is to review and briefly discuss the known definitions and properties of second-order self-similar discrete-time processes, to
supplement them with some more general conditions of self-similarity, to present a model for ATM cell traffic, and, finally, to find the conditions of model self-similarity.
Section II contains definitions of exactly and asymptotically second-order self-similar processes, which we adopt. The most essential second-order properties of these processes are presented. A
novelty here is the presentation of some unknown proofs and properties, as well as the presentation of all these properties in one paper. A comparison of different definitions is done, with
discussion and comments.
Section III gives a model of ATM cell traffic, the necessary and sufficient conditions for its exact self-similarity and a sufficient condition for its asymptotic self-similarity. The conditions are
more general than others obtained earlier; they contain the known conditions as special cases. We reference earlier papers which are particularly relevant to the model and also discuss some other
known models, which are linked, to our model.
The proots of our results are placed in Appendices A–D. In this presentation, we need to use the concepts of the Karamata slow- and regular-variation theory. The definitions of slowly and regularly
varying functions and sequences are given in Appendix E. For other known results in the theory, we refer to N. H. Bingham, C. M. Goldie and J. L. Teugels [Regular Variation. Cambridge, New York:
Cambridge Univ. Press (1987; Zbl 0617.26001)]. A brief presentation of our results was given in [N. Likhanov, B . Tsybakov and N. D. Georganas, “A model of self-similar communications-network
traffic”, Proc. Int. Conf. “Distributed Computer Communication Networks” (DCCN’97) (Tel-Aviv, Israel, 1997), 212-217 (1997)].
90B18 Communication networks (optimization)
94A05 Communication theory
90B20 Traffic problems
Input Iterators
Apache C++ Standard Library Reference Guide
Input Iterators
Library: Iterators
A read-only, forward moving iterator
NOTE -- For a complete discussion of iterators, see the Iterators section of this reference.
Iterators are a generalization of pointers that allow a C++ program to uniformly interact with different data structures. Input iterators are read-only, forward moving iterators that satisfy the
requirements listed below.
Key to Iterator Requirements
The following key pertains to the iterator requirement descriptions listed below:
a and b: values of type X
n: value representing a distance between two iterators
u, Distance, tmp and m: identifiers
r: value of type X&
t: value of type T
Requirements for Input Iterators
The following expressions must be valid for input iterators:
X u(a): copy constructor, u == a
X u = a: assignment, u == a
a == b, a != b: return value convertible to bool
*a: a == b implies *a == *b
++r: returns X&
r++: return value convertible to const X&
*r++: returns type T
a -> m: returns (*a).m
For input iterators, a == b does not imply that ++a == ++b.
Algorithms using input iterators should be single pass algorithms. That is, they should not pass through the same iterator twice.
The value of type T does not have to be an lvalue.
See Also
Standards Conformance
ISO/IEC 14882:1998 -- International Standard for Information Systems -- Programming Language C++, Section 24.1.1
three phase transformer - All About Circuits Forum
The term - "transformer loading" is normally taken to be referring to the power or VA loading, rather than the load current (say).
On the matter of efficiency I'm not sure where the author's use of the n^2 comes from - although I might be better informed if I had the complete text in front of me.
I would approach the problem from a more "fundamental" perspective.
Consider the primary input voltage to be Vp with current Ip and power factor cos(θ).
The fractional efficiency would be given by the general relationship

$\eta=\frac{P_{out}}{P_{in}}=\frac{P_{in}-P_{losses}}{P_{in}}=1-\frac{P_{losses}}{V_pI_pcos(\theta)}$
The losses would comprise the [assumed constant] no-load magnetization losses Po plus the [variable] winding losses.
In other words

$P_{losses}=P_o+I_p^2R_t$
Where if N (not the 'n' mentioned above) is the primary-to-secondary turns ratio

$R_t=R_p+N^2R_s$
In your example Rp=0.42Ω, Rs=0.0019Ω and N=11000/400=27.5
$R_t=0.42+(27.5)^2*0.0019=0.42+1.4369=1.8569 \ \Omega$
We can then write the fractional efficiency as

$\eta=1-\frac{P_o}{V_pI_pcos(\theta)}-\frac{I_pR_t}{V_pcos(\theta)}$

To find the maximum efficiency we differentiate η with respect to the variable input current Ip at some arbitrary power factor.
$\frac{\partial \eta}{\partial I_p}=\frac{P_o}{V_pI_p^2cos(\theta)}-\frac{R_t}{V_pcos(\theta)}$
We find the maximum (or minimum) by equating the derivative to zero.
Which reduces to the condition for maximum efficiency
$\frac{P_o}{V_pI_p^2cos(\theta)}=\frac{R_t}{V_pcos(\theta)}$
or after simplifying

$P_o=I_p^2R_t$
Which re-iterates the statement in the text that maximum efficiency occurs when the winding I^2R losses equal the no-load losses.
From this one then can deduce the actual primary current to meet this condition.
We can also note (along with the text) that the maximum efficiency condition is independent of power factor. However the actual efficiency value at that condition will depend on the power factor. As
an exercise you might try to determine what that maximum efficiency value might be.
So in the case of your example problem 34.7 with Po=2.9kW and Rt=1.8569Ω we have the value of
$I_p=\sqrt{\frac{2900}{1.8569}}=39.519 \ A$
At Vp=11kV this gives the primary VA input as 11000*39.519=434.71kVA.
Assuming a constant power factor of 0.8 through the transformer [*] then the input power would be 347.77kW. With the losses of 5.8kW this gives the load power as 341.97kW which differs slightly from
the text value. In my case the maximum efficiency [at 0.8 pf] would then be 98.33%.
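The arithmetic in this post can be replayed directly (a sketch using the thread's numbers; the variable names are mine):

```python
import math

# Transformer maximum-efficiency condition: copper loss = iron loss,
# i.e. Po = Ip^2 * Rt, using the values from the worked example above.
Vp = 11_000                  # primary voltage, V
Po = 2_900                   # no-load (iron) loss, W
Rp, Rs, N = 0.42, 0.0019, 11_000 / 400

Rt = Rp + N**2 * Rs          # total winding resistance referred to the primary
Ip = math.sqrt(Po / Rt)      # primary current at maximum efficiency

pf = 0.8
P_in = Vp * Ip * pf          # input power at 0.8 power factor
P_out = P_in - 2 * Po        # subtract equal iron and copper losses
eta = P_out / P_in

print(round(Ip, 2))          # 39.52 A
print(round(eta * 100, 2))   # 98.33 % maximum efficiency at 0.8 pf
```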
My final comment is that this is all a bit arbitrary. The text results imply a change in overall efficiency from full-load efficiency to maximum efficiency as a difference of 0.3%, which barely
merits a mention at all. Efficiency is certainly a matter of great importance at rated operating conditions, but the small difference from maximum η to the actual η value at rated conditions is minor.
Also be advised this analysis is underpinned by certain assumptions [*] and approximations and does not reflect the exact "truth" over the possible load operating range, having regard to such
matters as load power factor and fault conditions.
Last edited by t_n_k; 05-16-2012 at 06:46 AM.
V1/Vc = 1 and 3; y1/a = 0.33, 1, and 3; D50 = 0.2 and 3 mm; and a = 2 in., 3 ft, and 33 ft. The results from this evaluation are presented in the form of bar charts in Figures 13 through 20. Negative scour depth predictions were set equal to zero in the charts.

Figures 13 and 15 show scour depth predictions for scenarios comprising clear-water to live-bed transition (threshold) flows (V1/Vc = 1), fine sand (D50 = 0.2 mm), and two different flow depth to pier width ratios (y1/a = 3, 0.33). Figures 14 and 16 are a parallel set of results for live-bed conditions (V1/Vc = 3). Similarly, Figures 17 through 20 apply to situations with coarser sediment (D50 = 3 mm).

The equations used in producing the results shown in Figures 13 through 20 span the period from 1949 to 2006 in their development and publication. Improvements in the understanding of local scour processes and scour hole development during this time period resulted in improvements to the equilibrium scour predictive equations/methods. For example, several of the earlier equations predicted negative scour depths for some of the input conditions. Also, the differences between the predictions become less with time.

Variations in the predictions of local scour for different pier sizes ("laboratory" to "typical field" to "very large field") are reported. Some methods predict scour depth ratios decreasing with increasing pier size; others show constant values of scour depth ratio from laboratory to field, with one equation by Coleman (1971) showing larger normalized scour depths in the field than in the laboratory.

These plots help identify those equations that produce unrealistic results for prototype-scale piers and thus aid in eliminating such equations from further consideration. The regime equations of Inglis (1949), Ahmad (1953), and Chitale (1962) yield negative scour depths in some cases. The Coleman (1971) equation yields an unrealistic trend with increasing pier size and therefore was eliminated. Several other equations predict unreasonably high normalized scour depths (Inglis 1949, Ahmad 1953, Chitale 1962, Hancu 1971, and Shen et al. 1969) and were eliminated. This process left 17 methods/equations for the final analysis.

Modifications to Equilibrium Scour Predictive Methods

One of the objectives of this study was to determine if any of the predictive equations could be modified to improve their accuracy. The overall accuracy of most of the equations could be improved by adjusting one or more of their coefficients. However, in almost every case this adjustment increased underprediction, which for design equations is not acceptable. The Sheppard and Miller (2006) equation was modified to create the new S/M equation.

Sheppard/Melville (S/M) Equation

The Sheppard and Miller (2006) and Melville (1997) equations were melded and slightly modified to form a new equation referred to here as the S/M equation. The modifications consisted of:

1. Changing the 1.75 coefficient to 1.2 in the term for f2 in Equation 23,
2. Changing the value of V1/Vc where local scour is initiated from 0.47 to 0.4, and
3. Modifying/simplifying the manner in which the live-bed peak velocity is computed.

The resulting equation is presented in Table 5. These changes improved the accuracy of the predictions for both laboratory and field data. However, this equation underpredicts some of the measured field data at very low velocities (i.e., low values of V1/Vc), most likely due to relatively large sediment size distributions (large σg). The underpredictions are illustrated in Figures 21 and 22, which show before- and after-modification upper bound curves for laboratory and field data, respectively. The scour depths in this range of flow velocities are, however, very small and therefore are not likely to affect prediction of design scour depths. Also, the reported scour depths in this range of V1/Vc seem large for the magnitude of the flow velocities (i.e., the accuracy of these data is questionable).

Equation in HEC-18

No attempt was made to modify the scour equation in the current version of HEC-18 (Richardson and Davis 2001) because it does not properly account for the physics of the local scour processes. That is, all of the known local scour mechanisms are not accounted for with the dimensionless groups in this equation. The equation does, however, contain a wide-pier correction factor developed by Johnson and Torrico (1994). Predicted versus measured scour depth plots using this equation are shown in Figure 23 (laboratory data) and Figure 24 (field data). In general, the wide-pier correction decreases the magnitude of the predicted scour depths. The wide-pier correction does, however, increase the number of underpredictions in both the laboratory and field data. The overall error for the dimensional scour is reduced with the wide-pier correction factor, but the error for the normalized scour is increased. The wide-pier correction factor also creates
Table 5. The S/M local scour equations:

ys/a* = 2.5 f1 f2 f3   for 0.4 <= V1/Vc <= 1.0

ys/a* = f1 [ 2.2 (V1/Vc - 1)/(Vlp/Vc - 1) + 2.5 f3 (Vlp/Vc - V1/Vc)/(Vlp/Vc - 1) ]   for 1.0 < V1/Vc <= Vlp/Vc

ys/a* = 2.2 f1   for V1/Vc > Vlp/Vc

where:
f1 = tanh[(y1/a*)^0.4]
f2 = 1 - 1.2 [ln(V1/Vc)]^2
f3 = (a*/D50) / [0.4 (a*/D50)^1.2 + 10.6 (a*/D50)^-0.13]
a* = effective diameter = projected width x shape factor (shape factor = 1 for circular piers; 0.86 + 0.97 |α - π/4|^4 for square piers, where α = flow skew angle in radians)
Vlp1 = 5 Vc; Vlp2 = 0.6 (g y1)^0.5; Vlp = Vlp1 for Vlp1 > Vlp2, Vlp = Vlp2 for Vlp2 > Vlp1

Figure 21. Measured laboratory data at low velocities compared to the upper limit of Sheppard and Miller (2006) and S/M equations.
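As a rough illustration of how the S/M correction factors combine, here is a Python sketch of the clear-water branch, ys/a* = 2.5 f1 f2 f3. The functional forms are my reading of the OCR-damaged Table 5 and all names are mine, so verify against the report before relying on anything here for design.

```python
import math

# Sketch of the S/M clear-water branch (0.4 <= V1/Vc <= 1.0) with the
# modified 1.2 coefficient in f2. Forms reconstructed from Table 5; not
# an authoritative implementation.

def f1(y1, a_star):
    """Flow-depth factor: tanh((y1/a*)^0.4)."""
    return math.tanh((y1 / a_star) ** 0.4)

def f2(v1_over_vc):
    """Flow-intensity factor; zero near V1/Vc = 0.4, where scour initiates."""
    return 1.0 - 1.2 * math.log(v1_over_vc) ** 2

def f3(a_star, d50):
    """Sediment-size factor based on the a*/D50 ratio."""
    r = a_star / d50
    return r / (0.4 * r ** 1.2 + 10.6 * r ** -0.13)

def scour_ratio(v1_over_vc, y1, a_star, d50):
    """Normalized equilibrium scour depth ys/a*, clear-water branch only.

    Negative predictions are set to zero, as done for the report's charts.
    """
    if not 0.4 <= v1_over_vc <= 1.0:
        raise ValueError("sketch covers only 0.4 <= V1/Vc <= 1.0")
    return max(0.0, 2.5 * f1(y1, a_star) * f2(v1_over_vc) * f3(a_star, d50))

# Example: threshold flow (V1/Vc = 1) past a 0.3 m pier in 0.9 m of water,
# 0.2 mm sand; yields a scour depth on the order of the pier width.
print(scour_ratio(1.0, y1=0.9, a_star=0.3, d50=0.0002))
```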
|
|
Analog Science Fiction & Fact Magazine
"The Alternate View" columns of John G. Cramer
The Quantum Handshake
by John G. Cramer
Alternate View Column AV-16
Keywords: quantum, paradoxes, transactional, Copenhagen, interpretation
Published in the November-1986 issue of Analog Science Fiction & Fact Magazine;
This column was written and submitted 4/4/86 and is copyrighted © 1986, John G. Cramer. All rights reserved.
No part may be reproduced in any form without the explicit permission of the author.
Quantum mechanics is weird. It has led respectable physicists to spin theories about cats that are half alive and half dead, about worlds which split into alternate universes with each quantum event,
about a reality altered because an intelligent observer watches it, about mathematical equations describing "knowledge" rather than physical reality. This month's AV is about my own work, a new
interpretation of quantum mechanics which seeks to dispel this weirdness by depicting each quantum event as a "transaction", a sort of handshake across space-time. A long description of this
"Transactional Interpretation" has just been published in the July Reviews of Modern Physics (available at most university and major public libraries). It challenges the standard Copenhagen
Interpretation of Bohr and Heisenberg which has maintained a shaky dominance as the orthodox interpretation of quantum mechanics for over fifty years.
Quantum mechanics (QM) was invented in the late 1920's when an embarrassing body of new experimental facts from the microscopic world couldn't be explained by the accepted physics of the period.
Heisenberg, Schroedinger, Dirac, and others used a remarkable combination of intuition and brilliance to devise clever ways of "getting the right answer" from a set of arcane mathematical procedures.
They somehow accomplished this without understanding in any basic way what their mathematics really meant. The mathematical formalism of quantum mechanics is now trusted by all physicists, its use
clear and unambiguous. But even now, five decades later, its meaning remains controversial. One hears the platitude that "mathematics is the language of science". Quantum mechanics reminds us that
this "language" may lack a proper translation, that formulating a theory is not the same as understanding its meaning.
For orientation, let's start our discussion with some fairly simple questions and answers:
Q: What is quantum mechanics?
A: It's the theory which deals with the smallest scale of physical objects in the universe, objects (atoms, nuclei, photons, quarks) so small that the lumpiness or quantization of physical variables
becomes important.
Q: What is quantization?
A: It's the idea that there are minimum size chunks for certain quantities like energy and angular momentum. The minimum energy chunk for light of frequency f is E=hf where h is Planck's constant. We
call the particle of light carrying this minimum-size energy chunk hf a photon.
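The E=hf rule translates directly into a one-line calculation. Here is a small Python sketch of it; the frequency chosen for green light is approximate, and the function name is my own:

```python
# Photon energy E = h*f, the "minimum energy chunk" for light of frequency f.
PLANCK_H = 6.62607015e-34  # Planck's constant in J*s (exact by SI definition)

def photon_energy(frequency_hz):
    """Return the energy in joules of one photon of the given frequency."""
    return PLANCK_H * frequency_hz

# Green light has a frequency of roughly 5.5e14 Hz.
e_green = photon_energy(5.5e14)
print(f"Energy of one green photon: {e_green:.3e} J")  # about 3.6e-19 J
```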
Q: What is meant by "the formalism of quantum mechanics"?
A: Basically, the formalism is mathematics consisting of (1) a differential equation like Schroedinger's wave equation which relates mass, energy, and momentum; (2) the mathematical solutions of that
wave equation, called wave functions, which contain information about location, energy, momentum, etc. of some system; and (3) procedures for using wave functions to make predictions about physical
measurements on the system.
Q: What's a "system"?
A: It is any collection of physical objects which is to be described by quantum mechanics. It could be a single electron, a group of quarks, an atom, a cat in a box, or the whole universe and all its contents.
Q: Why all the recent fuss about quantum mechanics?
A: Albert Einstein distrusted quantum mechanics because he perceived embedded in its formalism what he called "spooky actions at a distance". The characteristic which worried Einstein is called
"nonlocality". The term locality means that separated system parts which are out of speed-of-light contact can only retain some definite relationship through memory of previous contact. Nonlocality
means that some relationship is being enforced faster-than-light across space and time. The recent fuss has arisen because the nonlocality of quantum mechanics has been spotlighted by the EPR
(Einstein-Podolsky-Rosen) experiments performed in the last decade. These measurements of the correlated optical polarizations for oppositely directed photons show that something very like
faster-than-light hand-shaking must be going on within the formalism of quantum mechanics and in nature itself.
Q: Finally, just what is the Copenhagen interpretation?
A: The Copenhagen interpretation of quantum mechanics is a set of ideas and principles devised by Bohr, Heisenberg, and Born in the 1930's to give meaning to the formalism of quantum mechanics and to
avoid certain "paradoxes" which seemed implicit in the formalism.
My RMP article lists five independent interpretational ideas which comprise the Copenhagen interpretation:
(1) Heisenberg's Uncertainty Principle, the idea that pairs of "conjugate" variables (like position and momentum or energy and time) cannot simultaneously be measured to "perfect" accuracy, nor can
they have well-defined values at the same time;
(2) Born's Probability Law, the rule that the absolute square of the wave function gives the probability (P=|psi|^2=psi×psi*) of finding the system in the state described by the wave function;
(3) Bohr's Complementarity Principle, the idea that the uncertainty principle is an intrinsic property of nature (not just a measurement problem) and that the observer, his measuring apparatus, and
the measured system form a "whole" which cannot be divided;
(4) Heisenberg's Knowledge Interpretation, the notion that the wave function is neither a physical wave travelling through space nor a direct description of a physical system, but rather is a
mathematically encoded description of the knowledge of an observer who is making a measurement on the system; and
(5) Heisenberg's Positivism, the principle that it isn't proper to discuss any aspect of the reality which lies behind the formalism unless the quantities or entities discussed can be measured.
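Born's Probability Law (item 2 above) is easy to illustrate for a discrete toy system. The sketch below is my own, not from the column: it normalizes a vector of complex amplitudes and converts each into a probability via P = |psi|^2.

```python
# Born's Probability Law for a toy discrete system:
# the probability of each state is the absolute square of its amplitude.
def born_probabilities(amplitudes):
    """Normalize complex amplitudes and return P_i = |psi_i|^2 for each state."""
    norm = sum(abs(a) ** 2 for a in amplitudes) ** 0.5
    return [abs(a / norm) ** 2 for a in amplitudes]

probs = born_probabilities([1 + 0j, 1j, 1 - 1j])
print(probs)       # probabilities of the three states: [0.25, 0.25, 0.5]
print(sum(probs))  # always 1.0 up to rounding, whatever the amplitudes
```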
The first three elements of the Copenhagen interpretation are needed to connect the formalism with the results of physical measurements. The last two were devised by Heisenberg to deal with
Einstein's "spooky actions at a distance" criticism and similar problems which lie in the general area of nonlocality. Let's consider an example of how the knowledge interpretation handles such problems.
An excited atom gives up energy by spitting out a photon. The QM formalism represents this event as a wave function which spreads out from the atom in an ever-widening spherical wave front resembling
the ring of ripples from a stone thrown into a pond. The absolute square of this spreading wave function at a particular point in space-time gives the probability of finding the photon there. Finally
the photon hits a silver atom in a photographic plate, giving up its energy and leaving a black spot on the plate. Instantaneously the photon's wave function undergoes a process called "collapse"
which resembles the pricking of a soap bubble. The wave function completely disappears from all of space except in the immediate vicinity of the struck atom. The photon has now delivered its energy
to the silver atom and has no probability of existing elsewhere. The wave function which had just been expanding through time and space has abruptly vanished.
This vanishment is part of Einstein's "spookiness" criticism. In 1929 at a physics conference he questioned how the remote parts of the wave function could possibly know that it was time to vanish
when the photon was detected. Heisenberg's explanation was that the spreading wave function was not a real wave moving through space at the speed of light but rather a representation of the knowledge
of an observer. When the observer had not yet detected the photon, it has an equal probability of being anywhere on the spreading spherical wave front. But as soon as the photon is detected it is
known to have travelled to the silver atom and its probability of being elsewhere must become zero.
The problem with the knowledge interpretation comes when we try to stretch it to the EPR experiments, a system of two polarization-correlated photons travelling in opposite directions. Now there are
two observers making measurements and gaining information about two photons which are out of speed-of-light contact, and yet the two measurements remain correlated in a "spooky" way. The nonlocality
which enforces this correlation cannot be dismissed by attributing it to changes in knowledge. Something else must be going on, and the Copenhageners can only retreat behind the shield of Heisenberg's
positivism in dealing with the problem.
The transactional interpretation meets the nonlocality problem head on, using a "transaction" model for quantum events which is itself nonlocal because it uses advanced waves which have negative
energy and travel backwards in time. Advanced waves were the subject of a previous AV column ["Light in Reverse Gear II", August-1985 Analog]. This transaction model is based on the "absorber theory"
originated by Richard Feynman and John Wheeler.
In the absorber theory description any emission process makes advanced waves on an equal basis with ordinary "retarded" waves. But when the retarded wave is absorbed (sometime in the future) a
cancellation process takes place which erases all traces of advanced waves and their "advanced" effects. The absorber manages to absorb the retarded wave by making a second retarded wave identical to
but exactly out of phase with the retarded wave from the emitter. Thus the two cancel and we say that the retarded wave from the emitter is absorbed. However, the absorber also must make an advanced
wave. This advanced wave backtracks the retarded wave, travelling backwards in time along the path taken by the retarded wave and reaching the emitter at the instant of emission. It continues
backward in time, but now it is accompanied by the advanced wave from the emitter. The two waves are exactly out of phase, so they also cancel, removing all "advanced" effects in the process.
An observer not privy to these inner mechanisms of nature would perceive only that a retarded wave had gone from the emitter to the absorber. The absorber theory description, unconventional though it
is, leads to exactly the same observations as the conventional one. But it differs in that there has been a two-way exchange, a "handshake" across space-time which led to the transfer of energy from
emitter to absorber.
This advanced-retarded handshake is the basis for the transactional interpretation of quantum mechanics. It is a two-way contract between the future and the past for the purpose of transferring
energy, momentum, etc. It is nonlocal because the future is, in a limited way, affecting the past on the same basis that the past affects the future. When you stand in the dark and look at a star a
hundred light years away, not only have the retarded light waves from the star been travelling for a hundred years toward your eyes, but also advanced waves from your eyes have reached a hundred
years into the past to encourage the star to shine in your direction. In my RMP paper this model is used to explain the accumulation of curiosities and paradoxes (the EPR paradox, Schroedinger's cat,
Wigner's friend, Wheeler's delayed choice, etc.) which have lain in the quantum mechanics Museum of Mysteries for decades. The need for half-and-half cats, schizophrenic universes, observer-dependent
reality, or "knowledge" waves has been eliminated.
In this column we usually spotlight recent physics developments and then consider their science fiction implications. The transactional interpretation unfortunately pulls the rug from under a number
of excellent SF works based on the weirder aspects of quantum mechanics. Examples are Pohl's "The Coming of the Quantum Cats" and Hogan's The Proteus Operation, both of which use the many-worlds or
Everett-Wheeler interpretation of quantum mechanics [See "The Alternate View: Other Universes II", November-1984 Analog]. The transactional interpretation addresses the same problems which prompted development of the many-worlds interpretation and solves them in a more satisfactory way.
There are SF possibilities in the transactional interpretation. Advanced waves could perhaps, under the right circumstances, lead to "ansible-type" FTL communication favored by LeGuin and Card and to
backwards in time signalling of the sort used in Benford's Timescape and Hogan's Thrice Upon a Time. There is also the implication, implicit in the transactional interpretation, that Possibility does not
become Reality along that sharp knife-edge that we call "the present". Rather, Reality crystallizes along a much fuzzier boundary which stitches into both future and past, advancing somehow in a way
which defies sharp temporal definition. There must be a story in that.
Transactional Interpretation:
John G. Cramer, Reviews of Modern Physics 58, #3 (1986).
John G. Cramer, International Journal of Theoretical Physics 27, 227 (1988).
EPR Experiments:
S. J. Freedman and J. F. Clauser, Phys. Rev. Letters 28, 938 (1972);
A. Aspect, J. Dalibard, and G. Roger, Physical Review Letters 49, 91 (1982);
A. Aspect, J. Dalibard, and G. Roger, Physical Review Letters 49, 1804 (1982).
Absorber Theory:
J. A. Wheeler and R. P. Feynman, Reviews of Modern Physics 17, 157 (1945);
J. A. Wheeler and R. P. Feynman, Reviews of Modern Physics 21, 425 (1949).
This page was created by John G. Cramer on 7/12/96.
|
|
Watching Science Happen
Y. Paul Lee, PMP, is currently a PhD student in the Project Management Program at the James Clark School of Engineering, University of Maryland. During the day, he works as a Branch Manager/Project
Engineer at the Space Telescope Science Institute (Baltimore) on the James Webb Space Telescope (JWST) program. Paul has also studied Physics, Astronomy, Computer Engineering/IT, Management and
Leadership at the University of California at Davis, the Johns Hopkins University, and MIT Sloan School of Management. His articles are here.
Editor’s Note: Paul Lee was one of my students in a graduate-level project management course at the University of Maryland on managing project teams. We discussed the three ratios presented in
Fredrickson and Losada’s 2005 paper, inquiry versus advocacy, positivity versus negativity, and self versus other.
Paul wrote to me recently, “I came across the following article today, which I think is interesting and certainly reminded me of what I was pondering about the positivity ratio last fall: Ratio
for a good life exposed as ‘nonsense’ by Bruce Bower in Science News.”
That article summarizes a paper by Brown, Sokal, and Friedman. Here’s a statement from that paper: “We shall demonstrate that each one of the three articles [the 2005 paper and two that preceded
it] is completely vitiated by fundamental conceptual and mathematical errors, and above all by the total absence of any justification for the applicability of the Lorenz equations to modeling the
time evolution of human emotions.”
I responded, “Yes, interesting to see the work of science happening, live. Exploratory research, theory, challenge, and so on. You might want to read Barbara Fredrickson’s response. Let us know
what you think.”
This article is his reply. Our intention by publishing Paul’s letter is to acknowledge both the challenge posed by Brown, Sokal, and Friedman and the response published by Fredrickson. We also
want to celebrate science as an evolving discipline where the actions of generating and challenging ideas are both important.
~ Kathryn Britton
It took me a little while to review the response and come to the few insights below.
I think Fredrickson’s idea of finding cross-support between theory, mathematics, and data is a noble and right course. However, as she pointed out, successes in such endeavors often come only when
the theory and information about the subject are at certain maturity. I use the term “information” instead of “data” for very specific reasons. If you follow classic knowledge management theory, the
evolution of knowledge follows these stages:
Data (unorganized facts) →
Information (organized facts) →
Knowledge (patterns from facts) →
Wisdom (abstraction of knowledge to universal truth)
Wisdom, that is, abstraction to universal truth, requires a lot more than empirical evidence. While Fredrickson’s response points to further accumulation of empirical evidence to support the idea of
a positivity ratio, the key take-away is that there is only enough data and information to suggest knowledge, that is, the existence of patterns. There needs to be a lot more collected, from
different perspectives, to reach wisdom, that is, mathematical models and their key parameters, such as the non-linear mathematical models described in the 2005 paper.
It might be premature to look for non-linear mathematical models at this point if the parameter space is not well defined. This is an intrinsic problem with social sciences, where it is often
difficult to quantitatively define a parameter space to use, which is exactly the problem that led to Losada’s questioned mathematical work.
Deriving Mathematical Models from Empirical Evidence
Using empirical evidence to derive mathematical models depends greatly on the tools and how they were used to derive the models. Using the wrong tool will get you the wrong results. Statistical tools
are powerful when used properly, but it is also well known that they can also lead to totally wrong conclusions when misused. There is a classic saying in the modeling community (be it scientific,
financial, and so on) that you can fit any data with any curve. Other objective pieces of information are needed to help separate the right from the wrong.
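That saying can be made concrete: a degree-(n-1) polynomial passes exactly through any n data points, even pure noise, so a "perfect" fit by itself demonstrates nothing about validity. A minimal, self-contained sketch (the data here are deliberately meaningless random numbers):

```python
import random

# With enough free parameters, any data can be fit exactly: the Lagrange
# interpolating polynomial of degree n-1 passes through all n points.
random.seed(0)
xs = list(range(6))
ys = [random.uniform(-1, 1) for _ in xs]  # pure noise standing in for "data"

def lagrange_fit(xs, ys):
    """Return the interpolating polynomial through (xs, ys) as a callable."""
    def p(x):
        total = 0.0
        for i, (xi, yi) in enumerate(zip(xs, ys)):
            term = yi
            for j, xj in enumerate(xs):
                if j != i:
                    term *= (x - xj) / (xi - xj)
            total += term
        return total
    return p

model = lagrange_fit(xs, ys)
residuals = [abs(model(x) - y) for x, y in zip(xs, ys)]
print(max(residuals))  # effectively zero: a flawless fit to random noise
```

The fit is exact by construction, which is exactly why goodness of fit must be checked against other objective information before a model is taken seriously.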
Added by Editor:
“I’ve come to see sufficient reason to question the particular mathematical framework Losada and I adopted
to represent and test the concept of a critical tipping point positivity ratio that bifurcates mental health into human flourishing and human languishing (Fredrickson & Losada, 2005).” ~ Barbara
Fredrickson, 2013
Critical Sanity Tests
Once you find a model that appears to match the data, there are two critical sanity tests that have to be passed in order for the model to be worth further consideration.
First, the parameters used in the model have to be tied to real quantitative observables that can be measured. Combinations of pseudo parameters, such as X (Advocacy versus Inquiry), Y (Positivity
versus Negativity), and Z (Self versus Other) in the Losada mathematics, would require a lot more supporting materials to tie them to reality. In other words, if you set out to look for a non-linear
model to fit the data, a non-linear model is what you will get. That leads to the second test.
A successful mathematical model not only fits the observed data, but it should be possible to use it to predict results that can be independently tested in new experiments. This is about predictions
by the model, and not the kind of predictions that Fredrickson and Losada listed at the end of the 2005 paper. Personally, I would not call them predictions. For example, take their first prediction:
“1. Human flourishing and languishing can be represented by a set of mathematical equations drawn from the Lorenz system.”
This is not really a prediction. It is equally probable that one can find a different set of non-linear equations to fit the data. Unless the authors can tie real physical measurements from the
empirical data to the Lorenz system, this is irrelevant.
I would actually call the seven predictions in the Fredrickson and Losada 2005 paper their interpretation of their application of their chosen mathematical model. I do not doubt
the empirical data they presented. It’s just that drawing the connection between the data and the non-linear Lorenz system is hard to substantiate.
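For readers unfamiliar with the Lorenz system Losada borrowed, it is a set of three coupled nonlinear differential equations. The sketch below is my own illustration (not from the papers under discussion) and integrates it with simple forward Euler steps using the classic parameter values:

```python
# The Lorenz system:
#   dx/dt = s*(y - x),  dy/dt = x*(r - z) - y,  dz/dt = x*y - b*z
def lorenz_trajectory(steps=10000, dt=0.001, s=10.0, r=28.0, b=8.0 / 3.0):
    """Integrate the Lorenz equations with forward Euler; return final state."""
    x, y, z = 1.0, 1.0, 1.0
    for _ in range(steps):
        dx = s * (y - x)
        dy = x * (r - z) - y
        dz = x * y - b * z
        x, y, z = x + dx * dt, y + dy * dt, z + dz * dt
    return x, y, z

print(lorenz_trajectory())  # a point near the system's chaotic attractor
```

Note that nothing in this integration connects x, y, and z to any psychological observable; establishing that mapping is precisely the step the critique says was never justified.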
I think in the end, I would agree with Fredrickson’s notion that positivity exhibits non-linear behavior, and a non-linear system representation of it would be a worthwhile academic goal to pursue.
Human behaviors certainly seem like chaotic dynamic systems.
The empirical evidence is growing and I think that Fredrickson is right on with that. It will be of great interest for other researchers to continue to follow up with the research. But I would
caution people from just taking some mathematical constructs and mapping them to empirical data. Many instances of bad science have resulted from this approach.
“… within the trinity of theory, mathematics, and data, data are what merit our closest attention and respect. I am grateful to Brown and colleagues (2013) for spurring me to update my own
thinking on positivity ratios. In doing so, I’ve learned that the most recent empirical evidence on the value of positivity ratios tells us quite a bit. The data say that when considering
positive emotions, more is better, up to a point, although this latter caution may be limited to self-focused positive emotions. The data also say that when considering negative emotions, less is
better, down to a point. Negativity can either promote healthy functioning or kill it, depending on its contextual appropriateness and dosage relative to positive emotions. Empirical evidence is
thus growing to support the value of calculating positivity ratios. Even so, considerable empirical work remains to be done to better understand the dynamic and nonlinear properties of positivity
ratios as well as the most appropriate algorithms for computing them.” ~ Barbara Fredrickson, 2013
Bower, B. (2013). Ratio for a good life exposed as ‘nonsense.’ Science News.
Brown, N. J. L., Sokal, A. D., & Friedman, H. L. (2013). The complex dynamics of wishful thinking: The critical positivity ratio. American Psychologist. Published online July 15, 2013. doi:10.1037/
a0032850. Abstract.
Fredrickson, B. L. (2013). Updated thinking on positivity ratios. American Psychologist. Published online July 15, 2013. doi:10.1037/a0033584.
Fredrickson, B. L. & Losada, M. (2005). Positive affect and the complex dynamics of human flourishing. American Psychologist. Vol. 60, October 2005, p. 678. doi:10.1037/0003-066X.60.7.678.
Losada, M. (2012). What is the Losada Line? What is Meta Learning? Interview with Dr. Marcial Losada by the Positive Business Forum.
• Hi all
‘Broaden and build’ is a wonderful unifying principle, but what concerns me is the whole concept of classifying emotions as either ‘positive’ or ‘negative’.
This seems to me the conceptual equivalent of a diet reduced to doses of sugar, vinegar, and perhaps ‘neutral’ white bread – which corresponds to a depressed emotional landscape?
My emotional landscape – and the one presented through the arts – is one where positive-negative (such as PANAS ‘excited’, ‘upset’) both co-exist and enrich each other. So a sweet-and-sour is
followed by lemon sorbet or, to mix metaphors, medieval polyphony morphs into Mahler.
This has perhaps important cultural connotations, in terms of disseminating the concepts of positive psychology?
• Interesting comment, Kate. If I feel negative emotion, I notice it, acknowledge it, am curious about it, and openly ask myself how I can look at it as an opportunity to use my strengths, ACT, and
then mindful forward ho to positive emotions. I love positivity resonance and know the 10 basic positive emotions are in my corner whenever I want to pull them out. Make that every day!
• Kate,
I love your point about real-life emotions coming in complex mixes that may simultaneously include multiple elements that could be independently classified as positive and negative. Think about
being at the funeral reception for someone you admired and loved. You may feel grief at the loss and angry at the way things turned out. You may have a sense of hollowness that you’ll never speak
to that person again. You may also re-experience memories of good times with the person. You may laugh at stories people tell. You may be curious about an aspect of the person that someone else
saw that you never did. All of this at the same time. In the days following Nelson Mandela’s death, I found myself listening to the Asimbonanga flash mob over and over again. Why? It made me cry,
but it also made me feel an awe that felt good.
Your comment made me think of this paragraph I wrote in 2010 report about the Biennial Meaning Conference in Toronto. I was summarizing Todd Kashdan’s talk:
The influence of emotion includes how we make sense of emotions and describe them, extending well beyond both intensity and valence (positive or negative). [Todd] cited two studies in which
college students were asked many times a day over a three week period about the emotions they experienced. Researchers also collected information about alcohol consumption in one study, and
aggression in another. He found that intensity of negative emotions did not predict greater alcohol consumption, but the degree to which people could clearly describe their emotions
correlated with lower consumption.
So the ability to see the nuances may be a very valuable one to build over time.
|
How does gravity work?
Sorry, I figured everyone kind of gives their own theory when asking how things happen just for conversation.
There's a difference between a quantitative theory and a qualitative one. Einstein's general relativity specifically says
[tex]G_{ab} = 8\pi T_{ab}.[/tex]
No matter what you write about gravity being "two like objects on the same frequency that happen to run into each other," in the equation above the symbols have meanings, and they lead to 6 independent coupled partial differential equations. If one solves them (assuming one has the correct mathematical methods), one gets numerical predictions about things like orbital periods, equations of motion, etc., and we can directly compare these numbers to the observed orbital periods. If these numbers agree better than those of any previous theory, we have a contender for a new theory of the phenomenon. This is exactly what happened.
As for your question, "Why does one need to know math to understand a theory?": in your posts, you clearly demonstrate that without the mathematics, you will not be able to convey clearly what you mean. You also demonstrate your lack of knowledge of current physical theory, which already explains many of the phenomena you wish to describe to unprecedented levels of accuracy. In other words, unless your theory makes predictions which match experiment better than the ones we have, your theories of the universe must be incorrect. It is that simple.
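To give a flavor of the kind of numerical prediction being referred to, the textbook example is general relativity's leading-order perihelion advance, Δφ = 6πGM/(c²a(1−e²)) per orbit, which a few lines of arithmetic turn into the famous ~43 arcseconds per century for Mercury. This is my own aside, not part of the original exchange, and the orbital constants below are approximate values I've filled in:

```python
import math

# Approximate physical and orbital constants (my inputs, not from the thread)
GM_SUN = 1.32712440018e20   # standard gravitational parameter of the Sun, m^3/s^2
C = 299_792_458.0           # speed of light, m/s
A = 5.7909e10               # Mercury's semi-major axis, m
E = 0.20563                 # Mercury's orbital eccentricity
PERIOD_DAYS = 87.969        # Mercury's orbital period

# Leading-order GR perihelion advance per orbit, in radians
dphi = 6 * math.pi * GM_SUN / (C**2 * A * (1 - E**2))

orbits_per_century = 100 * 365.25 / PERIOD_DAYS
arcsec = dphi * orbits_per_century * math.degrees(1) * 3600
print(f"{arcsec:.1f} arcsec/century")   # ~43, in agreement with observation
```

Newtonian gravity predicts zero anomalous advance, so agreement here is exactly the "better than any previous theory" test described above.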
|
Independent component analysis of electroencephalographic data
Results 1 - 10 of 160
- Neural Computing Surveys , 2001
Cited by 1492 (93 self)
A common problem encountered in such disciplines as statistics, data analysis, signal processing, and neural network research is finding a suitable representation of multivariate data. For computational and conceptual simplicity, such a representation is often sought as a linear transformation of the original data. Well-known linear transformation methods include, for example, principal component analysis, factor analysis, and projection pursuit. A recently developed linear transformation method is independent component analysis (ICA), in which the desired representation is the one that minimizes the statistical dependence of the components of the representation. Such a representation seems to capture the essential structure of the data in many applications. In this paper, we survey the existing theory and methods for ICA.
- NEURAL NETWORKS , 2000
, 2003
Cited by 390 (4 self)
Blind signal separation (BSS) and independent component analysis (ICA) are emerging techniques of array processing and data analysis, aiming at recovering unobserved signals or 'sources' from observed mixtures (typically, the output of an array of sensors), exploiting only the assumption of mutual independence between the signals. The weakness of the assumptions makes it a powerful approach, but requires venturing beyond familiar second-order statistics. The objective of this paper is to review some of the approaches that have been recently developed to address this exciting problem, to show how they stem from basic principles and how they relate to each other.
, 1999
Cited by 202 (21 self)
An extension of the infomax algorithm of Bell and Sejnowski (1995) is presented that is able to blindly separate mixed signals with sub- and super-Gaussian source distributions. This was achieved by
using a simple type of learning rule first derived by Girolami (1997) by choosing negentropy as a projection pursuit index. Parameterized probability distributions that have sub- and super-Gaussian
regimes were used to derive a general learning rule that preserves the simple architecture proposed by Bell and Sejnowski (1995), is optimized using the natural gradient by Amari (1998), and uses the
stability analysis of Cardoso and Laheld (1996) to switch between sub- and super-Gaussian regimes. We demonstrate that the extended infomax algorithm is able to easily separate 20 sources with a
variety of source distributions. Applied to high-dimensional data from electroencephalographic (EEG) recordings, it is effective at separating artifacts such as eye blinks and line noise from weaker
electrical ...
- IEEE Transactions on Neural Networks , 2002
Cited by 189 (4 self)
Abstract—A number of current face recognition algorithms use face representations found by unsupervised statistical methods. Typically these methods find a set of basis images and represent faces as
a linear combination of those images. Principal component analysis (PCA) is a popular example of such methods. The basis images found by PCA depend only on pairwise relationships between pixels in
the image database. In a task such as face recognition, in which important information may be contained in the high-order relationships among pixels, it seems reasonable to expect that better basis
images may be found by methods sensitive to these high-order statistics. Independent component analysis (ICA), a generalization of PCA, is one such method. We used a version of ICA derived from the
principle of optimal information transfer through sigmoidal neurons. ICA was performed on face images in the FERET database under two different architectures, one which treated the images as random
variables and the pixels as outcomes, and a second which treated the pixels as random variables and the images as outcomes. The first architecture found spatially local basis images for the faces.
The second architecture produced a factorial face code. Both ICA representations were superior to representations based on PCA for recognizing faces across days and changes in expression. A
classifier that combined the two ICA representations gave the best performance. Index Terms—Eigenfaces, face recognition, independent component analysis (ICA), principal component analysis (PCA),
unsupervised learning. I.
, 1998
Cited by 126 (20 self)
Pervasive electroencephalographic (EEG) artifacts associated with blinks, eye movements, muscle noise, cardiac signals, and line noise pose a major challenge for EEG interpretation and analysis.
Here, we propose a generally applicable method for removing a wide variety of artifacts from EEG records based on an extended version of an Independent Component Analysis (ICA) algorithm [2, 12] for
performing blind source separation on linear mixtures of independent source signals. Our results show that ICA can effectively separate and remove contamination from a wide variety of artifactual
sources in EEG records with results comparing favorably to those obtained using Principal Component Analysis. 1 INTRODUCTION Since the landmark development of electroencephalography (EEG) in 1928 by
Berger, scalp EEG has been used as a clinical tool for the diagnosis and treatment of brain diseases, and used as a non-invasive approach for research in the quantitative study of human
neurophysiology. Ironic...
Cited by 101 (8 self)
In a task such as face recognition, much of the important information may be contained in the high-order relationships among the image pixels. A number of face recognition algorithms employ principal
component analysis (PCA), which is based on the second-order statistics of the image set, and does not address high-order statistical dependencies such as the relationships among three or more
pixels. Independent component analysis (ICA) is a generalization of PCA which separates the high-order moments of the input in addition to the second-order moments. ICA was performed on a set of face
images by an unsupervised learning algorithm derived from the principle of optimal information transfer through sigmoidal neurons. The algorithm maximizes the mutual information between the input
and the output, which produces statistically independent outputs under certain conditions. ICA was performed on the face images under two different architectures. The first architecture provided a
, 1996
Cited by 95 (7 self)
Source separation arises in a surprising number of signal processing applications, from speech recognition to EEG analysis. In the square linear blind source separation problem without time delays,
one must find an unmixing matrix which can detangle the result of mixing n unknown independent sources through an unknown n × n mixing matrix. The recently introduced ICA blind source separation
algorithm (Baram and Roth 1994; Bell and Sejnowski 1995) is a powerful and surprisingly simple technique for solving this problem. ICA is all the more remarkable for performing so well despite making
absolutely no use of the temporal structure of its input! This paper presents a new algorithm, contextual ICA, which derives from a maximum likelihood density estimation formulation of the problem.
cICA can incorporate arbitrarily complex adaptive history-sensitive source models, and thereby make use of the temporal structure of its input. This allows it to separate in a number of situations
where s...
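The "square linear" mixing model described in the abstract above is easy to state concretely: each sensor observes a linear blend x = A·s of the sources, and recovery amounts to applying W = A⁻¹. In practice A is unknown and ICA must estimate W from statistics alone; the sketch below (hypothetical toy signals and a made-up 2×2 mixing matrix) only illustrates the algebra of mixing and unmixing when A is known:

```python
import math

# Two toy "source" signals: a sine and a square wave
n = 200
s1 = [math.sin(2 * math.pi * 5 * t / n) for t in range(n)]
s2 = [1.0 if (t // 20) % 2 == 0 else -1.0 for t in range(n)]

# Mixing matrix A (unknown in a real BSS problem): each sensor sees a blend
A = [[0.8, 0.3],
     [0.2, 0.7]]
x1 = [A[0][0] * a + A[0][1] * b for a, b in zip(s1, s2)]
x2 = [A[1][0] * a + A[1][1] * b for a, b in zip(s1, s2)]

# With A in hand, unmixing is just the 2x2 inverse; ICA's job is to find W
# (up to scale and permutation) without ever seeing A.
det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
W = [[A[1][1] / det, -A[0][1] / det],
     [-A[1][0] / det, A[0][0] / det]]
r1 = [W[0][0] * a + W[0][1] * b for a, b in zip(x1, x2)]  # recovers s1
r2 = [W[1][0] * a + W[1][1] * b for a, b in zip(x1, x2)]  # recovers s2
```

The contextual-ICA point in the abstract is that nothing in this algebra uses the temporal order of the samples; shuffling every list identically would leave the problem unchanged.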
, 1999
Cited by 82 (8 self)
We show that different theories recently proposed for Independent Component Analysis (ICA) lead to the same iterative learning algorithm for blind separation of mixed independent sources. We review
those theories and suggest that information theory can be used to unify several lines of research. Pearlmutter and Parra (1996) and Cardoso (1997) showed that the infomax approach of Bell and
Sejnowski (1995) and the maximum likelihood estimation approach are equivalent. We show that negentropy maximization also has equivalent properties and therefore all three approaches yield the same
learning rule for a fixed nonlinearity. Girolami and Fyfe (1997a) have shown that the nonlinear Principal Component Analysis (PCA) algorithm of Karhunen and Joutsensalo (1994) and Oja (1997) can also
be viewed from information-theoretic principles since it minimizes the sum of squares of the fourth-order marginal cumulants and therefore approximately minimizes the mutual information (Comon,
1994). Lambert (19...
|
Concord, CA Algebra 2 Tutor
Find a Concord, CA Algebra 2 Tutor
...The Literacy Lab allowed me to use my skills and content knowledge to improve their programs when working with small groups of students. After a year serving as an Instructor, I was soon in
charge of training other instructors and managing school-wide programs in high need urban school and shel...
16 Subjects: including algebra 2, English, algebra 1, special needs
...I used Fortran for many years in high performance computing for climate research models and non linear finite element systems. Linear Algebra investigates the linear relationship between
multiple variables. On one side it is for many students a first encounter with mathematical abstraction and ...
41 Subjects: including algebra 2, calculus, geometry, statistics
...I have a broad background in Math/Science and Economics. I have taught a number of courses and have previously worked as a GRE/GMAT/SAT/ACT instructor. I am well regarded as an excellent
instructor and am able to deal with students with a wide range of abilities in math, finance and economics.
49 Subjects: including algebra 2, calculus, geometry, physics
...It really helps the student answer questions they could not in the class room. Math especially is a subject that requires individual repetition and practice to really get it and having a tutor
really helps them focus and develop good habits. I look forward to meeting my future students, artists, athletes!
26 Subjects: including algebra 2, physics, trigonometry, guitar
...Paul Church. Hence, I'm confident that I'm completely able to help students understand the lectures, do their homework and assignments correctly, and improve their grades significantly. In addition, I can also help students understand the basic concepts of Physics, like motion, pressure, force, waves, energy, and light.
18 Subjects: including algebra 2, calculus, trigonometry, statistics
|
Grete Hermann
As Adolf Hitler came to power in Germany, Hermann participated in the underground movement against the Nazis, but by 1936 she left Germany for Denmark and later England. She returned when World War
II was over. In her later years she was more interested in politics and philosophy than in physics and mathematics.
|
Question on solving system of equations with constants...
It's fairly straightforward to find information on how to solve a system of equations like this:
2x + 3y + 4z = 1
3x + 4y + 3z = 2
4x + 5y + 3z = 3
It has numerical constants in front of each term. You could use Gaussian elimination and solve for one, infinite, or no solutions. (The above example is completely random). It works because you can
cancel terms out.
It's a bit less straightforward to find information on how to solve it when you have constants represented by symbols:
ua*x + ub*y + uc*z = 1
va*x + vb*y + vc*z = 2
x + y + z = 3
Here we want to solve for x, y, and z, each expressed in terms of ua/ub/uc/va/vb/vc only. You can't use elimination here because terms such as ua and va will not cancel out.
The only method I'm aware of is substitution, though it seems to spill onto multiple pages (the above system is simpler than what I need to solve). Is there any easier way to attack something like
this? Any links to online references would be much appreciated.
If there's a "simpler" (neater, less painful) way to solve this sort of system, I'm not familiar with it. Sorry!
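One mechanical way to keep the symbol-pushing contained for a 3×3 system is Cramer's rule: each unknown is a ratio of two determinants, e.g. x = det(A_x)/det(A), where A_x is the coefficient matrix A with its first column replaced by the right-hand side. Whether that is actually less painful than substitution is debatable, but it is at least systematic. A small sketch (using the numeric system from the original post as a stand-in; expanding the same determinants with ua...vc left as symbols gives the closed-form answers directly, with no back-substitution):

```python
from fractions import Fraction

def det3(m):
    """Determinant of a 3x3 matrix given as a list of rows."""
    a, b, c = m[0]
    d, e, f = m[1]
    g, h, i = m[2]
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

def solve3(A, rhs):
    """Solve A @ [x, y, z] = rhs by Cramer's rule; A must be nonsingular."""
    D = det3(A)
    solution = []
    for k in range(3):
        Ak = [row[:] for row in A]      # copy A ...
        for r in range(3):
            Ak[r][k] = rhs[r]           # ... with column k replaced by rhs
        solution.append(Fraction(det3(Ak), D))
    return solution

# The numeric example from the post: 2x+3y+4z=1, 3x+4y+3z=2, 4x+5y+3z=3
# is singular-free only for the first two rows plus x+y+z=3, so use those:
A = [[2, 3, 4],
     [3, 4, 3],
     [1, 1, 1]]
x, y, z = solve3(A, [1, 2, 3])          # -> x=9, y=-7, z=1
```

A computer algebra system will happily do the same expansion symbolically, which is usually the practical answer when the system is bigger than what fits on a page.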
|
A. Approximate diabatic representation
B. Multi-surface reaction model
C. Hyperspherical coordinates
A. Basis set tests
B. Stationary points
C. Grid development and frequency projection
D. Potential energy surfaces
1. Diabatic surfaces
2. Spin-orbit coupling potentials
3. Surface characterisation
A. Scattering equations and R-matrix propagation
B. Dynamics
C. Kinetics
D. Numerical details
A. Hyperspherical adiabats
B. Geometry of nonadiabatic transitions
C. CH[3] + HCl → CH[4] + Cl(^2P)
1. Reaction probabilities
2. Integral cross sections
3. Nonadiabatic branching ratio.
D. Cl (^2P[J]) + CH[4] → HCl + CH[3]
E. Nonreactive scattering
|
Unit Cube to Frustum - OpenGL
I've found countless threads, articles, etc. about this topic, but I'm doing something wrong. I can create a view frustum, but it isn't the shape of the 'view' (it's much, much wider).
I'll explain my process here as best I can without getting too deep into the math. if everything here checks out, I'll look closer at those operations.
steps I take:
1.) create a unit cube (8 vertices) with these bounds:
min/max x: -0.5f to 0.5f
min/max y: -0.5f to 0.5f
min/max z: -0.1f to -1.1f
(if z ranges from positive to negative, the frustum becomes more like an hourglass)
2.) invert my projection matrix
3.) multiply each of my cube's vertices by this inverted projection matrix
4.) ...
5.) profit
I don't know how to apply the view resolution like 800x600 because the thing is already pretty big at this point, with the smallest face of the frustum (the screen) being slightly larger than the
actual screen space.
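Not sure without seeing the code, but two things commonly go wrong with this recipe: the NDC cube runs from −1 to +1 on all three axes (not −0.5 to 0.5, and not −0.1 to −1.1 in z), and after multiplying each corner by the inverted projection matrix you must divide by the resulting w component. The screen resolution enters only through the aspect ratio. A plain-Python sketch (assuming the standard OpenGL column-vector perspective matrix; the helper names are mine):

```python
import math

def perspective(fov_y_deg, aspect, z_near, z_far):
    """Standard OpenGL-style perspective matrix (column-vector convention)."""
    f = 1.0 / math.tan(math.radians(fov_y_deg) / 2.0)
    return [
        [f / aspect, 0.0, 0.0, 0.0],
        [0.0, f, 0.0, 0.0],
        [0.0, 0.0, (z_far + z_near) / (z_near - z_far),
         2.0 * z_far * z_near / (z_near - z_far)],
        [0.0, 0.0, -1.0, 0.0],
    ]

def mat_vec(m, v):
    return [sum(m[r][c] * v[c] for c in range(4)) for r in range(4)]

def invert4(m):
    """Invert a 4x4 matrix by Gauss-Jordan elimination with partial pivoting."""
    n = 4
    aug = [row[:] + [1.0 if r == c else 0.0 for c in range(n)]
           for r, row in enumerate(m)]
    for col in range(n):
        pivot = max(range(col, n), key=lambda r: abs(aug[r][col]))
        aug[col], aug[pivot] = aug[pivot], aug[col]
        p = aug[col][col]
        aug[col] = [v / p for v in aug[col]]
        for r in range(n):
            if r != col:
                factor = aug[r][col]
                aug[r] = [a - factor * b for a, b in zip(aug[r], aug[col])]
    return [row[n:] for row in aug]

proj = perspective(60.0, 800.0 / 600.0, 0.1, 100.0)   # resolution -> aspect only
inv = invert4(proj)

corners = []
for x in (-1.0, 1.0):            # NDC cube is -1..1 on every axis
    for y in (-1.0, 1.0):
        for z in (-1.0, 1.0):
            cx, cy, cz, cw = mat_vec(inv, [x, y, z, 1.0])
            corners.append((cx / cw, cy / cw, cz / cw))   # perspective divide!
```

With those two fixes the eight corners land on the near and far planes of the eye-space frustum; if the shape still comes out hourglass-like or far too wide, the missing w-divide is the usual suspect.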
Edited by caibbor, 17 December 2012 - 06:42 PM.
|
Edexcel IGCSE Mathematics B Student Book
Search results: 70 articles (showing 1 - 10):
Edexcel AS and A Level Modular Mathematics - Core Mathematics 2 9 December 2010
Edexcel AS and A Level Modular Mathematics - Core Mathematics 2
205 pages | Dec 12, 2007 |ISBN:0435519115 | PDF | 64.8 Mb
Motivating readers by making maths easier to learn, this work includes complete past exam papers and student-friendly worked solutions which build up to practice questions, for all round exam
preparation. It also includes a Live Text CDROM which features fully worked solutions examined step-by-step, and animations for key learning points.
A-Level Maths: Further Pure 3 Edexcel CD + Book 7 September 2011
A-Level Maths: Further Pure 3 Edexcel CD + Book
English | 2010 | PDF + MP3 | 200MB
Edexcel’s own course for the GCE specification
Edexcel Biology for A2 28 April 2013
Edexcel Biology for A2
C.J. Clegg,
English | 2009 | ISBN: 0340967803 | 216 pages | PDF | 16,2 MB
Mathematics for the International Student: Mathematics HL: International Baccalaureate Diploma Programme/ Worked Solutions 27 September 2013
Mathematics for the International Student: Mathematics HL: International Baccalaureate Diploma Programme/ Worked Solutions
2005 | pages: 792 | ISBN: 1876543450 | PDF | 6,6 mb
This book gives you fully worked solutions for every question in each chapter of the Haese & Harris Publications textbook Mathematics HL (Core), which is one of three textbooks in our series "Mathematics for the International Student".
A Student's Guide to the Study, Practice, and Tools of Modern Mathematics 30 September 2011
A Student's Guide to the Study, Practice, and Tools of Modern Mathematics
2011 | 250 | ISBN: 1439846065 | PDF | 4 Mb
A Student’s Guide to the Study, Practice, and Tools of Modern Mathematics provides an accessible introduction to the world of mathematics. It offers tips on how to study and write mathematics as well
as how to use various mathematical tools, from LaTeX and Beamer to Mathematica® and Maple™ to MATLAB® and R. Along with a color insert, the text includes exercises and challenges to stimulate
creativity and improve problem-solving abilities. The first section of the book covers issues pertaining to studying mathematics. The authors explain how to write mathematical proofs and papers, how
to perform mathematical research, and how to give mathematical presentations....
Knowing and Teaching Elementary Mathematics: Teachers' Understanding of Fundamental Mathematics 24 September 2010
Knowing and Teaching Elementary Mathematics: Teachers' Understanding of Fundamental Mathematics in China and the United States
Routledge | 2010 | ISBN: 0415873843, 0415873843, 0203856341 | 232 pages | PDF | 14 MB
Studies of teachers in the U.S. often document insufficient subject matter knowledge in mathematics. Yet, these studies give few examples of the knowledge teachers need to support teaching,
particularly the kind of teaching demanded by recent reforms in mathematics education. Knowing and Teaching Elementary Mathematics describes the nature and development of the knowledge that
elementary teachers need to become accomplished mathematics teachers, and suggests why such knowledge seems more common in China than in the United States, despite the fact that Chinese teachers have
less formal education than their U.S. counterparts.
The World of Mathematics, Vol. 3 (World of Mathematics) 29 May 2011
The World of Mathematics, Vol. 3 (World of Mathematics)
624 pages | Aug 31 2010 |ISBN: 0486411516| PDF | 5.5 Mb
Vol. 3 of a monumental 4-volume set covers such topics as statistics and the design of experiments, group theory, the mathematics of infinity, the unreasonableness of mathematics, the vocabulary of
mathematics, and mathematics as an art. Includes contributions by Jacob Bernoulli, George Bernard Shaw, Bertrand Russell, Hans Hahn, Ernst Mach, Hermann Weyl, and many others.
How Chinese Learn Mathematics: Perspectives From Insiders 23 March 2011
How Chinese Learn Mathematics: Perspectives From Insiders
2005 | 592 | ISBN: 9812560149 | DJVU | 2 Mb
The book has been written by an international group of very active researchers and scholars who have a passion for the study of Chinese mathematics education. It aims to provide readers with a
comprehensive and updated picture of the teaching and learning of mathematics involving Chinese students from various perspectives, including the ways in which Chinese students learn mathematics in
classrooms, schools and homes, the influence of the cultural and social environment on Chinese students’ mathematics learning, and the strengths and weaknesses of the ways in which Chinese learn
mathematics. Furthermore, based on the relevant research findings, the book explores the implications for mathematics education and offers sound suggestions for reform and improvement. This book is a
must for anyone who is interested in the teaching and learning of mathematics concerning Chinese learners. ...
|
Meshing your Geometry: When to Use the Various Element Types
In a previous blog entry, we introduced meshing considerations for linear static problems. One of the key concepts there was the idea of mesh convergence — as you refine the mesh, the solution will
become more accurate. In this post, we will delve deeper into how to choose an appropriate mesh to start your mesh convergence studies for linear static finite element problems.
What Are the Different Element Types?
As we saw earlier, there are four different 3D element types — tets, bricks, prisms, and pyramids:
These four elements can be used, in various combinations, to mesh any 3D model. (For 2D models, you have triangular and quadrilateral elements available. We won’t discuss 2D very much here, since it
is a logical subset of 3D that doesn’t require much extra explanation.) What we haven’t spoken in-depth about yet is why you would want to use these various elements.
Why and When to Use the Elements
Tetrahedral elements are the default element type for most physics within COMSOL. The tetrahedron is also known as a simplex, which simply means that any 3D volume, regardless of shape or topology, can be meshed with tets. They are also the only kind of element that can be used with adaptive mesh refinement. For these reasons, tets can usually be your first choice.
The other three element types (bricks, prisms, and pyramids) should be used only when it is motivated to do so. It is first worth noting that these elements will not always be able to mesh a
particular geometry. The meshing algorithm usually requires some more user input to create such a mesh, so before going through this effort, you need to ask yourself if it is motivated. Here we will
talk about the motivations behind using brick and prism elements. The pyramids are only used when creating a transition in the mesh between bricks and tets.
It is worth giving a bit of historical context. The mathematics behind the finite element method was developed well before the first electronic computers. The first computers to run finite element
programs were full of vacuum tubes and hand-wired circuitry, and although the invention of transistors led to huge improvements, even the supercomputers from 25 years ago had about the same clock
speed as today’s fashion accessories. Some of the first finite element problems solved were in the area of structural mechanics, and the early programs were written for computers with very little
memory. Thus, first-order elements (often with special integration schemes) were used to save memory and clock cycles. However, first-order tetrahedral elements have significant issues for structural
mechanics problems, whereas first-order bricks can give accurate results. As a legacy of these older codes, many structural engineers will still prefer bricks over tets. In fact, the second order
tetrahedral element used for structural mechanics problems in COMSOL will give accurate results, albeit with different memory requirements and solution times from brick elements.
The primary motivation in COMSOL for using brick and prism elements is that they can significantly reduce the number of elements in the mesh. These elements can have very high aspect ratios (the
ratio of longest to shortest edge) whereas the algorithm used to create a tet mesh will try to keep the aspect ratio close to unity. It is reasonable to use high aspect ratio brick and prism elements
when you know that the solution varies gradually in certain directions, or if you are not very interested in accurate results in those regions because you already know the interesting results are
elsewhere in the model.
Meshing Example 1: Wheel Rim
Consider the example of a wheel rim, shown below.
The mesh on the left is composed only of tets, while the mesh on the right has tets (green), bricks (blue), and prisms (pink) as well as pyramids to transition between these. The mixed mesh uses
smaller tets around the holes and corners, where we expect higher stresses. Bricks and prisms are used in the spokes and around the rim. Neither the rim nor the spokes will carry peak stresses (at
least under a static load) and we can safely assume a relatively slow variation of the stresses in these regions. The tet mesh has about 145,000 elements and around 730,000 degrees of freedom. The
mixed mesh has close to 78,000 elements and roughly 414,000 degrees of freedom, and takes about half as much time and memory to solve. The mixed mesh does take significant user interaction to set up,
while the tet mesh requires essentially no user effort.
Note that there is not a direct relationship between degrees of freedom and memory used to solve the problem. This is because the different element types have different computational requirements. A
second-order tet has 10 nodes per element, while a second-order brick has 27. This means that the individual element matrices are larger, and the corresponding system matrices will be denser, when
using a brick mesh. The memory (and time) needed to compute a solution depends upon the number of degrees of freedom solved for, as well as the average connectivity of the nodes, and other factors.
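The 10-versus-27 node counts quoted above can be sanity-checked by just adding up nodes per geometric entity (a sketch; the 27-node figure assumes the full Lagrange brick with face and interior nodes, not the 20-node serendipity variant):

```python
# Second-order tetrahedron: one node per vertex plus one per edge midpoint
TET2_NODES = 4 + 6              # = 10

# Second-order (full Lagrange) brick: vertex + edge + face + interior nodes
BRICK2_NODES = 8 + 12 + 6 + 1   # = 27

# In 3D structural mechanics each node carries 3 displacement DOFs, so the
# dense element stiffness matrices are (3*nodes) x (3*nodes):
tet_entries = (3 * TET2_NODES) ** 2      # 30 x 30 = 900 entries
brick_entries = (3 * BRICK2_NODES) ** 2  # 81 x 81 = 6561 entries
```

The roughly 7× difference in per-element matrix size is one concrete reason equal degree-of-freedom counts do not translate into equal memory or solution time.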
Meshing Example 2: Loaded Spring
Another example is shown below, this time it’s a structural analysis of a loaded spring. Since the deformation is quite uniform along the length of the helix of the spring, it makes sense to have a
mesh that describes the overall shape and cross section, but relatively stretched elements along the length of the wire. The prism mesh has 504 elements with 9,526 degrees of freedom, and the tet
mesh has 3,652 elements with 23,434 degrees of freedom. So although the number of elements is quite different, the number of degrees of freedom is less so.
Meshing Example 3: Material on a Wafer
The other significant motivation for using brick and prism elements is when the geometry contains very thin structures in one direction, such as an epitaxial layer of material on a wafer, a stamped
sheet metal part, or a sandwiched composite.
For example, let’s look at the figure below, of a thin trace of material patterned onto a substrate. The tet mesh has very small elements in the trace, whereas the prism mesh is composed of thin
elements in this region. Whenever your geometry has layers whose thickness is on the order of 10^-3 times the largest dimension of the part, the usage of bricks and prisms becomes very highly recommended.
Additional Examples
It is also worth pointing out that COMSOL offers many boundary conditions that can be used in lieu of explicitly modeling thin layers of materials. For example, in electromagnetics, the following
four examples consider thin layers of material with relatively high and low conductivity, and relatively high and low permeability:
Similar types of boundary conditions exist in most of the physics interfaces. Usage of these types of boundary conditions will avoid the need to mesh such thin layers entirely.
Lastly, the above comments apply only to linear static finite element problems. Different meshing techniques are needed for nonlinear static problems, or if we are modeling time-domain or
frequency-domain phenomena.
Concluding Thoughts
To summarize, here is what you should keep in mind when starting your meshing of linear static problems:
• Use tets if you can; they require the least user interaction and support adaptive mesh refinement
• If you know the solution varies slowly in one or more directions, use bricks or prisms with high aspect ratios in those regions
• If the geometry contains thin layers of material, use bricks or prisms or consider using a boundary condition instead
• Always perform a mesh refinement study and monitor the memory requirements and convergence of the solution as you refine the mesh
1. Ivar Kjelberg November 5, 2013 at 2:15 am
Hi Walter
Thanks for nice clear examples
2. Robert Koslover November 15, 2013 at 8:19 pm
Thanks for the guidance. I wonder if you could expand your essay to comment on the utility, in various cases, of employing large numbers of small, but low-order elements vs. using small numbers
of larger, but higher-order elements?
3. Walter Frei November 18, 2013 at 10:47 am
Dear Robert,
It depends very much on how you define utility. Second-order elements represent the best compromise between growth in memory requirements and accuracy, and are the default in most physics
interfaces. The most common exceptions are problems involving chemical species transport and when solving for a fluid flow field, which use 1st order elements by default because of the
convection-dominated nature of the problem.
F07GGF (DPPCON)
NAG Library Routine Document
F07GGF (DPPCON)
1 Purpose
F07GGF (DPPCON) estimates the condition number of a real symmetric positive definite matrix $A$, where $A$ has been factorized by F07GDF (DPPTRF), using packed storage.
2 Specification
SUBROUTINE F07GGF ( UPLO, N, AP, ANORM, RCOND, WORK, IWORK, INFO)
INTEGER N, IWORK(N), INFO
REAL (KIND=nag_wp) AP(*), ANORM, RCOND, WORK(3*N)
CHARACTER(1) UPLO
The routine may be called by its LAPACK name dppcon.
3 Description
F07GGF (DPPCON) estimates the condition number (in the $1$-norm) of a real symmetric positive definite matrix $A$. Since $A$ is symmetric,
${\kappa }_{1}\left(A\right)={\kappa }_{\infty }\left(A\right)={‖A‖}_{\infty }{‖{A}^{-1}‖}_{\infty }$
Because ${\kappa }_{1}\left(A\right)$ is infinite if $A$ is singular, the routine actually returns an estimate of the reciprocal of ${\kappa }_{1}\left(A\right)$.
The routine should be preceded by a call to a routine that computes ${‖A‖}_{1}$, and a call to F07GDF (DPPTRF) to compute the Cholesky factorization of $A$. The routine then uses Higham's implementation of Hager's method (see Higham (1988)) to estimate ${‖{A}^{-1}‖}_{1}$.
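For readers more comfortable outside Fortran, here is a small Python sketch of the upper ("U") packed-storage layout referred to above: the upper triangle of a symmetric matrix stored column by column in a one-dimensional array of length $n(n+1)/2$. The function name `pack_upper` is ours for illustration, not part of the NAG or LAPACK interface.

```python
import numpy as np

# LAPACK 'U' packed storage: the upper triangle of a symmetric n x n matrix
# stored column by column in a 1-D array of length n(n+1)/2, i.e.
# AP[i + j*(j+1)/2] = A[i, j] for 0 <= i <= j (0-based indices).

def pack_upper(A):
    n = A.shape[0]
    ap = np.empty(n * (n + 1) // 2)
    for j in range(n):
        for i in range(j + 1):
            ap[i + j * (j + 1) // 2] = A[i, j]
    return ap

A = np.array([[1.0, 2.0, 4.0],
              [2.0, 3.0, 5.0],
              [4.0, 5.0, 6.0]])
ap = pack_upper(A)
print(ap)  # [1. 2. 3. 4. 5. 6.]
```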
4 References
Higham N J (1988) FORTRAN codes for estimating the one-norm of a real or complex matrix, with applications to condition estimation ACM Trans. Math. Software 14 381–396
5 Parameters
1: UPLO – CHARACTER(1) Input
On entry: specifies how $A$ has been factorized.
${\mathbf{UPLO}}=\text{'U'}$: $A={U}^{\mathrm{T}}U$, where $U$ is upper triangular.
${\mathbf{UPLO}}=\text{'L'}$: $A=L{L}^{\mathrm{T}}$, where $L$ is lower triangular.
Constraint: ${\mathbf{UPLO}}=\text{'U'}$ or $\text{'L'}$.
2: N – INTEGER Input
On entry: $n$, the order of the matrix $A$.
Constraint: ${\mathbf{N}}\ge 0$.
3: AP($*$) – REAL (KIND=nag_wp) array Input
Note: the dimension of the array AP must be at least $\mathrm{max}\left(1,{\mathbf{N}}\left({\mathbf{N}}+1\right)/2\right)$.
On entry: the Cholesky factor of $A$ stored in packed form, as returned by F07GDF (DPPTRF).
4: ANORM – REAL (KIND=nag_wp) Input
On entry: the $1$-norm of the original matrix $A$. ANORM must be computed either before calling F07GDF (DPPTRF), or else from a copy of the original matrix $A$, since the factorization overwrites $A$.
Constraint: ${\mathbf{ANORM}}\ge 0.0$.
5: RCOND – REAL (KIND=nag_wp) Output
On exit: an estimate of the reciprocal of the condition number of $A$. RCOND is set to zero if exact singularity is detected or the estimate underflows. If RCOND is less than machine precision, $A$ is singular to working precision.
6: WORK($3×{\mathbf{N}}$) – REAL (KIND=nag_wp) array Workspace
7: IWORK(N) – INTEGER array Workspace
8: INFO – INTEGER Output
On exit: ${\mathbf{INFO}}=0$ unless the routine detects an error (see
Section 6
).
6 Error Indicators and Warnings
Errors or warnings detected by the routine:
If ${\mathbf{INFO}}=-i$, the $i$th parameter had an illegal value. An explanatory message is output, and execution of the program is terminated.
7 Accuracy
The computed estimate RCOND is never less than the true value $\rho$, and in practice is nearly always less than $10\rho$, although examples can be constructed where the computed estimate is much larger.
8 Further Comments
A call to F07GGF (DPPCON) involves solving a number of systems of linear equations of the form $Ax=b$; the number is usually $4$ or $5$ and never more than $11$. Each solution involves approximately $2{n}^{2}$ floating-point operations but takes considerably longer than a call to
F07GEF (DPPTRS)
with one right-hand side, because extra care is taken to avoid overflow when $A$ is approximately singular.
The complex analogue of this routine is
F07GUF (ZPPCON)
9 Example
This example estimates the condition number in the $1$-norm (or $\infty$-norm) of the matrix $A$, where
$A=\left(\begin{array}{rrrr}4.16 & -3.12 & 0.56 & -0.10 \\ -3.12 & 5.03 & -0.83 & 1.18 \\ 0.56 & -0.83 & 0.76 & 0.34 \\ -0.10 & 1.18 & 0.34 & 1.18\end{array}\right).$
Here $A$ is symmetric positive definite, stored in packed form, and must first be factorized by
F07GDF (DPPTRF)
.
9.1 Program Text
9.2 Program Data
9.3 Program Results
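As an informal cross-check, the quantities involved can be reproduced for the matrix above with NumPy rather than the NAG library. Note that `np.linalg.cond` computes the condition number exactly, whereas DPPCON returns an estimate of its reciprocal.

```python
import numpy as np

# The 4x4 symmetric positive definite matrix from the example above.
A = np.array([
    [ 4.16, -3.12,  0.56, -0.10],
    [-3.12,  5.03, -0.83,  1.18],
    [ 0.56, -0.83,  0.76,  0.34],
    [-0.10,  1.18,  0.34,  1.18],
])

# Cholesky succeeds only for positive definite matrices.
L = np.linalg.cholesky(A)

# Exact 1-norm condition number kappa_1(A) = ||A||_1 * ||A^-1||_1;
# DPPCON would return an *estimate* of its reciprocal, RCOND.
kappa1 = np.linalg.cond(A, 1)
rcond = 1.0 / kappa1
print(kappa1, rcond)
```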
North Springs, GA Statistics Tutor
Find a North Springs, GA Statistics Tutor
...At the end of the first year my peers gave me the Mason Gold Standard Award which is a recognition of one who unselfishly contributes to the academic achievement of others through mentoring.
Graduated with a focus in Finance and passed the CFA Level 1 exam. Also passed the GACE Mathematics 022, 023 exams.
28 Subjects: including statistics, calculus, GRE, physics
...I had an overall GPA of 3.75 throughout 6 years of college, and my math GPA was 4.0. I also worked as a math tutor to other college students. More importantly, I know how to make learning fun
and easy.
29 Subjects: including statistics, reading, English, GED
I have a wide array of experience working with and teaching kids grades K-10. I have tutored students in Spanish, Biology, and Mathematics in varying households. I have instructed religious
school for 5 years with different age groups, so I am accustomed to working in multiple settings with a lot of material and different student skill.
16 Subjects: including statistics, Spanish, chemistry, calculus
...I know that takes time and consistency, both of which I am more than willing to provide. I am a former math teacher, and a former teacher educator. I have a bachelor's degree in Applied Math
from Brown University, and a Masters and PhD in Cognitive Psychology, also from Brown University.
8 Subjects: including statistics, algebra 1, trigonometry, algebra 2
I currently teach Statistics and Physics at a private school in Atlanta and I am very skilled at presenting complex concepts to my students in a very clear and understandable manner. I
successfully tutored well over one hundred students and because of this experience my sessions are very effective....
20 Subjects: including statistics, physics, calculus, geometry
Related North Springs, GA Tutors
North Springs, GA Accounting Tutors
North Springs, GA ACT Tutors
North Springs, GA Algebra Tutors
North Springs, GA Algebra 2 Tutors
North Springs, GA Calculus Tutors
North Springs, GA Geometry Tutors
North Springs, GA Math Tutors
North Springs, GA Prealgebra Tutors
North Springs, GA Precalculus Tutors
North Springs, GA SAT Tutors
North Springs, GA SAT Math Tutors
North Springs, GA Science Tutors
North Springs, GA Statistics Tutors
North Springs, GA Trigonometry Tutors
Nearby Cities With statistics Tutor
Briarcliff, GA statistics Tutors
Dunaire, GA statistics Tutors
Dunwoody, GA statistics Tutors
Fort Gillem, GA statistics Tutors
Green Way, GA statistics Tutors
North Atlanta, GA statistics Tutors
North Metro statistics Tutors
Overlook Sru, GA statistics Tutors
Peachtree Corners, GA statistics Tutors
Rockbridge, GA statistics Tutors
Snapfinger, GA statistics Tutors
Tuxedo, GA statistics Tutors
Vinnings, GA statistics Tutors
Vista Grove, GA statistics Tutors
Winters Chapel, GA statistics Tutors
An Introduction to Fuzzy Sets
Hardcover | $75.00 Short | £51.95 | ISBN: 9780262161718 | 7.1 x 10.1 in | April 1998
Essential Info
Table of Contents
Significant Features
Chapter Descriptions
Intended Readership and Use of the Book
I FUNDAMENTALS OF FUZZY SETS
1 Basic Notions and Concepts of Fuzzy Sets
1.1 Set Membership and Fuzzy Sets
1.2 Basic Definition of a Fuzzy Set
1.3 Types of Membership Functions
1.4 Characteristics of a Fuzzy Set
1.5 Basic Relationships between Fuzzy Sets: Equality and Inclusion
1.6 Fuzzy Sets and Sets: The Representation Theorem
1.7 The Extension Principle
1.8 Membership Function Determination
1.8.1 Horizontal Method of Membership Estimation
1.8.2 Vertical Method of Membership Estimation
1.8.3 Pairwise-Comparison Method of Membership Function Estimation
1.8.4 Problem Specification-Based Membership Determination
1.8.5 Membership Estimation as a Problem of Parametric Optimization
1.8.6 Membership Estimation via Fuzzy Clustering
1.9 Generalizations of Fuzzy Sets
1.9.1 Interval-Valued Fuzzy Sets and Second-Order Fuzzy Sets
1.9.2 Type-Two Fuzzy Sets
1.10 Chapter Summary
1.11 Problems
2 Fuzzy Set Operations
2.1 Set Theory Operations and Their Properties
2.2 Triangular Norms
2.2.1 Several Classes of Triangular Norms
2.2.2 Triangular Norms as Models of Operations on Fuzzy Sets
2.3 Aggregation Operations on Fuzzy Sets
2.3.1 Compensatory Operators
2.3.2 Symmetric Sums
2.3.3 Averaging Operation
2.3.4 Ordered Weighted Averaging Operations
2.4 Sensitivity of Fuzzy Set Operators
2.5 Negations
2.6 Comparison Operations on Fuzzy Sets
2.6.1 Distance Measures
2.6.2 Equality Indexes
2.6.3 Possibility and Necessity Measures
2.6.4 Compatibility Measures
2.7 Chapter Summary
2.8 Problems
3 Information-Based Characterization of Fuzzy Sets
3.1 Entropy Measures of Fuzziness
3.2 Energy Measures of Fuzziness
3.3 Specificity of a Fuzzy Set
3.4 Frames of Cognition
3.4.1 Basic Definition
3.4.2 Main Properties
3.4.2.1 Specificity
3.4.2.2 Focus of Attention
3.4.2.3 Information Hiding
3.5 Information Encoding and Decoding Using Linguistic Landmarks
3.5.1 Encoding Schemes in the Fuzzy Communication Channel
3.5.2 Decoding Mechanisms
3.6 Decoding Mechanisms for Pointwise Data
3.6.1 Decoding Based on Modal Values of the Codebook
3.6.1.1 Center-of-Gravity Decoding
3.6.1.2 Polynomial Expansion
3.6.1.3 Linguistic Expansion
3.7 Decoding Using Membership Functions of the Linguistic Terms of the Codebook
3.8 General Possibility-Necessity Decoding
3.9 Distance between Fuzzy Sets Based on Their Internal, Linguistic Representation
3.10 Chapter Summary
3.11 Problems
4 Fuzzy Relations and Their Calculus
4.1 Relations and Fuzzy Relations
4.2 Operations on Fuzzy Relations
4.3 Compositions of Fuzzy Relations
4.4 Projections and Cylindric Extensions of Fuzzy Relations
4.5 Binary Fuzzy Relations
4.6 Some Classes of Fuzzy Relations
4.6.1 Equivalence and Similarity Relations
4.6.2 Compatibility and Proximity Relations
4.7 Fuzzy-Relational Equations
4.7.1 Introductory Comments
4.7.2 Interpretations of Composition Operators
4.7.2.1 sup-t (max-t) Composition
4.7.2.2 inf-s Composition
4.7.2.3 Composition Operators Involving Implication (j-operator)
4.8 Estimation and Inverse Problem in Fuzzy-Relational Equations
4.9 Solving Fuzzy-Relational Equations with the sup-t Composition
4.9.1 Properties of the Implication Operator
4.9.2 Extended Topologies of Fuzzy-Relational Equations
4.9.3 Solvability Conditions
4.9.4 Relation-Relation Fuzzy Equations
4.10 Solutions to Dual Fuzzy-Relational Equations
4.11 Adjoint Fuzzy-Relational Equations
4.12 Generalizations of Fuzzy-Relational Equations
4.12.1 Fuzzy-Relational Equations with an Equality Composition Operator
4.12.2 Multilevel Fuzzy-Relational Equations
4.12.3 Fuzzy-Relational Equations with s-t and t-s Composition Operations
4.13 Approximate Solutions to Fuzzy-Relational Equations
4.13.1 Modifications of Relational Constraints via Thresholding
4.13.2 Preprocessing Fuzzy Data via Clustering
4.13.3 Use of Auxiliary Variables
4.13.4 Solving Fuzzy-Relational Equations via Logic Filtering
4.13.5 Solving Fuzzy-Relational Equations as a Problem of Learning of Fuzzy Neurons
4.14 Chapter Summary
4.15 Problems
5 Fuzzy Numbers
5.1 Defining Fuzzy Numbers
5.2 Interval Analysis and Fuzzy Numbers
5.3 Computing with Fuzzy Numbers
5.4 Triangular Fuzzy Numbers and Basic Operations
5.4.1 Addition
5.4.2 Multiplication
5.4.3 Division
5.4.4 Inverse
5.4.5 Fuzzy Minimum
5.5 General Formulas for LR Fuzzy Numbers
5.6 Accumulation of Fuzziness in Computing with Fuzzy Numbers
5.7 Inverse Problem in Computation with Fuzzy Numbers
5.8 Fuzzy Numbers and Approximate Operations
5.9 Chapter Summary
5.10 Problems
6 Fuzzy Sets and Probability
6.1 Introduction
6.2 Probability and Fuzzy Sets
6.3 Hybrid Fuzzy-Probabilistic Models of Uncertainty
6.3.1 Probability of Fuzzy Events
6.3.2 Linguistic Probabilities
6.4 Probability-Possibility Transformations
6.5 Probabilistic Sets and Fuzzy Random Variables
6.5.1 Probabilistic Sets
6.5.2 Fuzzy Random Variables
6.6 Chapter Summary
6.7 Problems
7 Linguistic Variables
7.1 Introduction
7.2 Linguistic Variables: Formalization
7.3 Computing with Linguistic Variables: Hedges, Connectives, and Negation
7.4 Linguistic Approximation
7.5 Linguistic Quantifiers
7.6 Applications of Linguistic Variables
7.7 Chapter Summary
7.8 Problems
8 Fuzzy Logic
8.1 Introduction
8.2 Propositional Calculus
8.3 Predicate Logic
8.4 Many-Valued Logic
8.5 Fuzzy Logic
8.6 Computing with Fuzzy Logic
8.6.1 Truth Space Methods
8.6.1.1 Fuzzy Truth Values and Fuzzy Truth Qualification
8.6.1.2 Inverse Truth Qualification
8.6.1.3 Operations in Fuzzy Logic
8.6.1.4 Reasoning in the Framework of Truth Space
8.6.2 Compositional Rule of Inference
8.7 Some Remarks about Inference Methods
8.8 Chapter Summary
8.9 Problems
9 Fuzzy Measures and Fuzzy Integrals
9.1 Fuzzy Measures
9.2 Fuzzy Integrals
9.2.1 Basic Properties of Fuzzy Integration
9.2.2 Optimization Aspects of the Fuzzy Integral
9.3 Chapter Summary
9.4 Problems
II COMPUTATIONAL MODELS
10 Rule-Based Computations
10.1 Rules in Knowledge Representation
10.1.1 Qualified Propositions
10.1.2 Quantified Propositions
10.1.3 Unless Rules
10.1.4 Gradual Rules
10.1.5 Potential Inconsistency and Conflicting Rules
10.1.6 Categorical and Dispositional Propositions
10.2 Syntax of Fuzzy Rules
10.3 Semantics of Fuzzy Rules and Inference
10.3.1 Semantics of Unless Rules
10.3.2 Semantics of Gradual Rules
10.4 Computing with Fuzzy Rules
10.5 Some Properties of Fuzzy Rule-Based Systems
10.6 Rule Consistency and Completeness
10.7 Chapter Summary
10.8 Problems
11 Fuzzy Neurocomputation
11.1 Neural Networks: Basic Notions, Architectures, and Learning
11.2 Logic-Based Neurons
11.2.1 Aggregative OR and AND Logic Neurons
11.2.2 OR/AND Neurons
11.2.3 Conceptual and Computational Augmentations of Fuzzy Neurons
11.2.3.1 Representing Inhibitory Information
11.2.3.2 Computational Enhancements of Fuzzy Neurons
11.3 Logic Neurons and Fuzzy Neural Networks with Feedback
11.4 Referential Logic-Based Neurons
11.5 Fuzzy Threshold Neurons
11.6 Classes of Fuzzy Neural Networks
11.6.1 Approximation of Logical Relationships-Development of the Logic Processor
11.7 Referential Processor
11.8 Fuzzy Cellular Automata
11.9 Learning
11.9.1 Learning in a Single Neuron
11.9.2 Self-Organization Mechanisms in Fuzzy Neural Networks
11.10 Selected Aspects of Knowledge Representation in Fuzzy Neural Networks
11.10.1 Representing and Processing Uncertainty
11.10.2 Induced Boolean and Core Neural Networks
11.11 Chapter Summary
11.12 Problems
12 Fuzzy Evolutionary Computation
12.1 Evolution and Computing
12.2 Genetic Algorithms
12.2.1 Reproduction
12.2.2 Crossover
12.2.3 Mutation
12.3 Design of Fuzzy Rule-Based Systems with Genetic Algorithms
12.4 Learning in Fuzzy Neural Networks with Genetic Algorithms
12.5 Evolution Strategies
12.6 Hybrid and Cooperating Approaches
12.7 Chapter Summary
12.8 Problems
13 Fuzzy Modeling
13.1 Fuzzy Models: Beyond Numerical Computations
13.2 Main Phases of System Modeling
13.3 Fundamental Design Objectives in System Modeling
13.4 General Topology of Fuzzy Models
13.5 Compatibility of Encoding and Decoding Modules
13.6 Classes of Fuzzy Models
13.6.1 Tabular Format of the Fuzzy Model
13.6.2 Fuzzy-Relation Equations
13.6.3 Fuzzy Grammars
13.6.4 Local Fuzzy Models
13.7 Verification and Validation of Fuzzy Models
13.7.1 Verification Algorithms for Fuzzy Models
13.7.2 Validation of Fuzzy Models
13.8 Chapter Summary
13.9 Problems
III PROBLEM SOLVING WITH FUZZY SETS
14 Methodology
14.1 Analysis and Design
14.2 Fuzzy Controllers and Fuzzy Control
14.2.1 Knowledge Acquisition in a Simple Control Problem
14.2.2 Construction of the Fuzzy Controller: Algorithmic Aspects
14.2.2.1 Rules and Rule Base
14.2.2.2 Inference
14.2.2.3 Encoding
14.2.2.4 Decoding
14.2.2.5 Fuzzy Controllers and PI and PD Controllers
14.2.3 Possibility-Necessity Computations in the Fuzzy Controller
14.2.4 Fuzzy Hebbian Learning
14.2.5. Design Considerations and Controller Adjustments
14.2.5.1 Relational Partition of the Input Space
14.2.5.2 Controller Adjustments
Scaling Factors
Context-Dependent Adjustment of the Universe of Discourse
Windowing Effect
14.2.6 Fuzzy Logic Controller
14.3 Mathematical Programming and Fuzzy Optimization
14.3.1 General Optimization Problems
14.3.2 Fuzzy Optimization Problems
14.3.3 Fuzzy Linear Programming
14.3.3.1 Fuzzy Objective Functions and Fuzzy Constraints
14.3.3.2 Fuzzy Constraints with Fuzzy Coefficients
14.3.3.3 Fuzzy Coefficients in the Objective Function
14.4 Chapter Summary
14.5 Problems
15 Case Studies
15.1 Traffic Intersection Control
15.1.1 Fuzzy Traffic Controller
15.1.2 Fuzzy Controller
15.1.3 State Machine
15.1.4 Adaptation Methods
15.2 Distributed Traffic Control
15.2.1 Distributed Control System Architecture
15.2.2 Distributed Traffic Control System
15.2.3 Local Problem Solver
15.2.4 Evolutive Case-Based Mechanism
15.3 Elevator Group Control
15.3.1 Elevator Group Control System
15.3.2 Fuzzy Group Controller
15.3.3 Simulation Experiments
15.4 Induction Motor Control
15.4.1 Speed Control of AC Machines
15.4.2 Fuzzy Control Strategy
15.4.3 Controller Design
15.5 Communication Network Planning
15.5.1 Communication Network Model
15.5.2 Clustering Procedure
15.6 Neurocomputation in Fault Diagnosis of Dynamic Systems
15.6.1 Neurofuzzy Network Structure and Learning
15.6.2 Learning Algorithm
15.6.3 Fault Detection and Diagnosis
15.6.4 Simulation Results
15.7 Multicommodity Transportation Planning in Railways
Instructor Resources
An Introduction to Fuzzy Sets
The concept of fuzzy sets is one of the most fundamental and influential tools in computational intelligence. Fuzzy sets can provide solutions to a broad range of problems of control, pattern
classification, reasoning, planning, and computer vision. This book bridges the gap that has developed between theory and practice. The authors explain what fuzzy sets are, why they work, when they
should be used (and when they shouldn't), and how to design systems using them.
The authors take an unusual top-down approach to the design of detailed algorithms. They begin with illustrative examples, explain the fundamental theory and design methodologies, and then present
more advanced case studies dealing with practical tasks. While they use mathematics to introduce concepts, they ground them in examples of real-world problems that can be solved through fuzzy set
technology. The only mathematics prerequisites are a basic knowledge of introductory calculus and linear algebra.
Downloadable instructor resources available for this title: instructor’s manual
About the Author
Witold Pedrycz is Professor and Chair of Electrical and Computer Engineering at the University of Alberta.
"The Pedrycz and Gomide text is superb in all respects. Its exposition of fuzzy-neural networks and fuzzy-genetic systems adds much to its value as a textbook"
—Lotfi A. Zadeh, University of California, Berkeley.
FOM: category theory, cohomology, group theory, and f.o.m.
Todd Wilson twilson at csufresno.edu
Mon Feb 21 19:19:05 EST 2000
A couple of minor clarifications in my previous post.
On Mon, 21 Feb 2000, Todd Wilson wrote:
> It is perhaps also worth mentioning another, more modern example of
> intuition lagging strikingly behind results: the almost universally
> strongly held intuition that P is not equal to NP.
Of course, I meant "results lagging strikingly behind intuition".
Several researchers in complexity theory have expressed the opinion to
me that their confidence in P not equal to NP is only exceeded by
their lack of any idea of how to explain it formally.
> So, without taking up any of the particular matters that Simpson is
> criticizing in Bauer's post (I see that Andrej has just written a
> response), I would propose that we acknowledge that the intuitions of
> category theorists concerning the fundamental nature of their subject,
> even in the absence of tangible results vindicating these intuitions,
> need not be a "mass hallucination", and instead make an honest attempt
> to discover whether there really is anything to them.
I should have said "discover what there is to them". In other words,
I'm sufficiently confident that there is something to these
intuitions; the only questions are what it is and how to explain it.
Anyone interested in better understanding these intuitions should at
least look at the two book chapters by J. L. Bell mentioned in my
previous post and can continue by looking at the references to the
writings of Lawvere and the Synthese essays of Bell mentioned therein.
Todd Wilson
Computer Science Department
California State University, Fresno
[SOLVED] Linear transformations
Show that every linear transformation $T:\mathbb{R}^n \rightarrow \mathbb{R}^n$ is the sum of two invertible linear transformations $T_1, T_2$.
that is actually true for any field $F$ with $\text{card}(F) \geq n+2.$ let $A$ be the matrix of $T$ in the standard basis. let $I$ be the $n \times n$ identity matrix and $f(x)=\det(A-xI) \in \mathbb{R}[x],$ which is a polynomial of degree $n$ and so it has (at most) $n$ roots in $\mathbb{R}.$ choose $0 \neq \lambda \in \mathbb{R}$ such that $f(\lambda) \neq 0.$ then $B=A-\lambda I$ is invertible and $A=\lambda I + B.$
Last edited by NonCommAlg; May 12th 2010 at 04:50 PM.
Good! Minced meat for NCA! Here's a topological proof I found. It's much less direct but maybe the idea can be used elsewhere! Let's equip $M=\mathbb{R}_{n\times n}$ with the Euclidean metric. Note that $G=\mbox{GL}_n(\mathbb{R})$ is dense in $M$. Let $F=M-G$. Note that $F$ is not dense in $M$. If $T$ is invertible, then the theorem is trivial with $T_1=T_2=T/2$. Therefore suppose $T \in F$. Let $U=G+T=\{f+T : f\in G\}$. It's clear that $U$ is dense in $M$ since $G$ is dense in $M$. If $U\subset F$, then $M = \overline U \subset \overline F$ and therefore $F$ is dense in $M$ which is false. Therefore $U \cap G \neq \emptyset$. $\ \ \ \ \ \square$
Last edited by Bruno J.; May 12th 2010 at 11:37 PM.
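For what it's worth, NonCommAlg's construction is easy to check numerically. The sketch below (plain NumPy, purely illustrative) takes a singular matrix, finds a nonzero lambda that is not an eigenvalue, and verifies that both summands are invertible:

```python
import numpy as np

# Take a singular matrix A, pick a nonzero lambda that is not an
# eigenvalue of A, and write A = lambda*I + (A - lambda*I),
# a sum of two invertible matrices.

A = np.array([[1.0, 2.0],
              [2.0, 4.0]])          # rank 1, hence singular
eigvals = np.linalg.eigvals(A)

lam = 1.0
while any(abs(lam - e) < 1e-9 for e in eigvals) or lam == 0:
    lam += 1.0                      # at most n+1 candidates can fail

T1 = lam * np.eye(2)
T2 = A - lam * np.eye(2)

assert abs(np.linalg.det(T1)) > 1e-9   # invertible
assert abs(np.linalg.det(T2)) > 1e-9   # invertible: lam is not an eigenvalue
assert np.allclose(T1 + T2, A)
```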
Bibliographic References
Sachin Patil, Jur van den Berg, Sean Curtis, Ming C. Lin, Dinesh Manocha, "Directing Crowd Simulations Using Navigation Fields," IEEE Transactions on Visualization and Computer Graphics, vol. 17,
no. 2, pp. 244-254, February, 2011.
Index Terms: Multiagent systems, animation, virtual reality.
We present a novel approach to direct and control virtual crowds using navigation fields. Our method guides one or more agents toward desired goals based on guidance fields. The system allows the
user to specify these fields by either sketching paths directly in the scene via an intuitive authoring interface or by importing motion flow fields extracted from crowd video footage. We propose a
novel formulation to blend input guidance fields to create singularity-free, goal-directed navigation fields. Our method can be easily combined with the most current local collision avoidance methods
and we use two such methods as examples to highlight the potential of our approach. We illustrate its performance on several simulation scenarios.
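As a very loose illustration of the idea of a goal-directed navigation field (a toy grid/BFS sketch of our own, not the paper's guidance-field blending or its collision-avoidance methods), one can compute shortest-path distances to a goal and steer an agent downhill on that field:

```python
from collections import deque

# Toy goal-directed navigation field on a grid: BFS distances to the goal,
# then each agent steps to the neighboring cell with the smallest distance.

def navigation_field(w, h, goal, obstacles):
    dist = {goal: 0}
    q = deque([goal])
    while q:
        x, y = q.popleft()
        for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if 0 <= nx < w and 0 <= ny < h \
                    and (nx, ny) not in obstacles and (nx, ny) not in dist:
                dist[(nx, ny)] = dist[(x, y)] + 1
                q.append((nx, ny))
    return dist

def step(pos, dist):
    x, y = pos
    options = [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1), (x, y)]
    return min((p for p in options if p in dist), key=dist.get)

field = navigation_field(5, 5, goal=(4, 4), obstacles={(2, 2), (2, 3)})
agent = (0, 0)
for _ in range(20):
    agent = step(agent, field)
print(agent)  # (4, 4): the agent reaches the goal around the obstacles
```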
[1] D. Thalmann, C. O'Sullivan, P. Ciechomski, and S. Dobbyn, Populating Virtual Environments with Crowds, Eurographics Tutorial Notes, 2006.
[2] N. Pelechano, J.M. Allbeck, and N.I. Badler, Virtual Crowds: Methods, Simulation and Control. Morgan and Claypool Publishers, 2008.
[3] S.M. LaValle, Planning Algorithms. Cambridge Univ. Press, http://msl.cs.uiuc.eduplanning/, 2006.
[4] C. Reynolds, "Flocks, Herds and Schools: A Distributed Behavioral Model," ACM SIGGRAPH Computer Graphics, vol. 21, pp. 25-34, 1987.
[5] C. Reynolds, "Steering Behaviors for Autonomous Characters," Proc. Game Developers Conf., 1999.
[6] D. Helbing, I. Farkas, and T. Vicsek, "Simulating Dynamical Features of Escape Panic," Nature, vol. 407, pp. 487-490, cond-mat/0009448, http://arxiv.org/abs/cond-mat0009448, Sept. 2000.
[7] A. Kirchner and A. Schadschneider, "Simulation of Evacuation Processes Using a Bionics Inspired Cellular Automaton Model for Pedestrian Dynamics," Physica A, vol. 312, nos. 1/2, pp. 260-276,
Sept. 2002.
Index Terms:
Multiagent systems, animation, virtual reality.
Sachin Patil, Jur van den Berg, Sean Curtis, Ming C. Lin, Dinesh Manocha, "Directing Crowd Simulations Using Navigation Fields," IEEE Transactions on Visualization and Computer Graphics, vol. 17, no.
2, pp. 244-254, Feb. 2011, doi:10.1109/TVCG.2010.33
Limits of intrinsically ergodic systems
Let $(X_i)$ be a sequence of compact metric spaces and $(f_i)$ a sequence of transitive transformations $f_i:X_i \to X_i$ with $0 < h_{top}(f_i) < \infty$.
The sequence of dynamical systems satisfies:
• $X_i \subset X_{i+1}$, $h_{top}(f_i) < h_{top}(f_{i+1}) $;
• $X_i$ converges to a compact metric space $X$;
• $f_{i+1}\mid_{X_i} = f_i$ for every $i$;
• In addition, there is a transformation $f:X \to X$ such that $f$ is transitive, $0 < h_{top}(f) < \infty$ and $f\mid_{X_i} = f_i$.
• $h_{top}(f_i)$ converges to $h_{top}(f)$
Assume now that the system $(X_i, f_i)$ is intrinsically ergodic for all $i\ge0$, i.e., it has a unique measure of maximal entropy.
QUESTION. Is $(X,f)$ intrinsically ergodic?
(If it helps, each $(X_i,f_i)$ in my set-up is a transitive subshift of finite type (SFT), but $(X,f)$ is not an SFT.)
If the answer is yes, does there exist a natural way to project the (unique) measure of maximal entropy $\mu$ on $X$ onto $X_i$ so that the projection of $\mu$ is the measure of maximal entropy $\mu_i$ on $X_i$?
ds.dynamical-systems ergodic-theory symbolic-dynamics
1 Answer
The answer is no. It's based on an (un?)published example of Crannell, Rudolph and Weiss.
The example is the following shift: $X$ is the subset of $\lbrace 0,\pm 1\rbrace ^{\mathbb Z}$ with the property that $x_k\cdot x_{k+2^n}$ is not allowed to be $-1$ for any values
of $k$ and $n$.
What they prove is that there are 2 measures of maximal entropy for $X$: one the Bernoulli (1/2,1/2) measure living on sequences of 0's and 1's; the other the Bernoulli (1/2,1/2) measure living on sequences of 0's and $-1$'s. In fact I showed with Ayse Şahin that these are the unique measures of maximal entropy.
Now if you let $X_i$ be the subset of $X$ where you can't have $i$ consecutive $-1$'s, then $X_i$ is intrinsically ergodic, but $X$ is not.
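For readers who want to experiment with this example, the defining condition is easy to check on finite words. This is an illustrative sketch of my own (the function name `admissible` is not from the original answer):

```python
def admissible(word):
    """True iff no two symbols at a power-of-two distance multiply to -1,
    i.e. +1 and -1 never co-occur at distance 2**n (the rule defining X)."""
    for k in range(len(word)):
        d = 1
        while k + d < len(word):
            if word[k] * word[k + d] == -1:
                return False
            d *= 2
    return True
```

For instance, `[1, 0, 0, -1]` is admissible (distance 3 is not a power of two) while `[1, 0, -1]` is not; words avoiding $-1$ (or avoiding $+1$) are always admissible, matching the two Bernoulli measures above.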
Thank you very much for your answer Anthony. I'd like to know why the sequence of subshifts that you are constructing converges to the non-intrinsically ergodic system that you
are mentioning. Best, Rafa – Rafael Alcaraz Feb 9 '12 at 12:11
Anthony, I just realized about why does the sequence converges. Sorry! – Rafael Alcaraz Feb 9 '12 at 15:35
Barriers to Diffusion in Dendrites and Estimation of Calcium Spread Following Synaptic Inputs
The motion of ions, molecules or proteins in dendrites is restricted by cytoplasmic obstacles such as organelles, microtubules and actin network. To account for molecular crowding, we study the
effect of diffusion barriers on local calcium spread in a dendrite. We first present a model based on a dimension reduction approach to approximate a three dimensional diffusion in a cylindrical
dendrite by a one-dimensional effective diffusion process. By comparing uncaging experiments of an inert dye in a spiny dendrite and in a thin glass tube, we quantify the change in diffusion
constants due to molecular crowding as D[cyto]/D[water] = 1/20. We validate our approach by reconstructing the uncaging experiments using Brownian simulations in a realistic 3D model dendrite.
Finally, we construct a reduced reaction-diffusion equation to model calcium spread in a dendrite under the presence of additional buffers, pumps and synaptic input. We find that for moderate
crowding, calcium dynamics is mainly regulated by the buffer concentration, but not by the cytoplasmic crowding, dendritic spines or synaptic inputs. Following high frequency stimulations, we predict
that calcium spread in dendrites is limited to small microdomains of the order of a few microns (<5 μm).
Author Summary
Diffusion is one of the main transport phenomena involved in signaling mechanisms of ions and molecules in living cells, such as neurons. As the cell cytoplasmic medium is highly heterogeneous and
filled with many organelles, the motion of a diffusing particle is affected by many interactions with its environment. Interestingly, the functional consequences of these interactions cannot be
directly quantified. Thus, in parallel with experimental methods, we have developed a computational approach to dissociate the role of crowding from that of binding. We first study the diffusion of a fluorescent marker in dendrites using a one-dimensional effective diffusion equation and obtain an effective diffusion constant that accounts for the heterogeneity of the medium. Furthermore,
comparing our experimental data with simulations of diffusion in a crowded environment, we estimate the intracellular calcium spread in dendrites after injection of calcium transients. We confirm
that calcium spread is mainly regulated by fixed buffer molecules, which bind temporarily to calcium, and less by the heterogeneous structure of the surrounding medium. Finally, we find that after synaptic inputs, calcium remains restricted to a domain of 2.5 µm on each side of the input location, independently of the input frequency.
Citation: Biess A, Korkotian E, Holcman D (2011) Barriers to Diffusion in Dendrites and Estimation of Calcium Spread Following Synaptic Inputs. PLoS Comput Biol 7(10): e1002182. doi:10.1371/
Editor: Edmund J. Crampin, University of Auckland, New Zealand
Received: March 4, 2011; Accepted: July 17, 2011; Published: October 13, 2011
Copyright: © 2011 Biess et al. This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction
in any medium, provided the original author and source are credited.
Funding: AB was supported by the German Federal Ministry of Education and Research (BMBF) via the Bernstein Center for Computational Neuroscience (BCCN) Göttingen under Grant No. 01GQ0430. This work
was partially supported by the program Neuro-informatics and DH's research is supported by an ERC-starting grant. The funders had no role in study design, data collection and analysis, decision to
publish, or preparation of the manuscript.
Competing interests: The authors have declared that no competing interests exist.
Dendrites of neurons contain a complex intracellular organization made of organelles, such as mitochondria, endoplasmic reticulum and ribosomes, and of a cytoskeletal network generated by actin and microtubules [1]–[3]. The cell cytoplasm is thus a crowded rather than dilute medium, in which the diffusional mobility of small molecules is restricted [3]–[7]. Molecular crowding can affect many biochemical processes, such as protein folding [8]–[10], enzymatic reactions [11]–[13] and signal transduction [14]. Although electron microscopy images [15] reveal the complexity of dendritic organization, there are no direct methods to estimate the functional consequences for diffusion. Modeling in combination with Monte-Carlo methods [16]–[19] has allowed the study of diffusion in crowded media. Depending on the size of the diffusing molecule and its interactions with the heterogeneous medium, crowding can lead to anomalous or normal diffusion [4], [20]–[24].
Neuronal calcium is a fundamental and ubiquitous messenger [25], [26]. It is regulated by cytoplasmic crowding, mobile and immobile calcium buffers [27]–[30], pumps and dendritic spines, whose contributions cannot be easily dissociated experimentally. It was noticed and quantified early on [31] that cellular calcium buffers can determine the amplitude and diffusional spread of neuronal calcium signaling. Specifically, fixed calcium buffers tend to retard the signal and to lower the apparent diffusion coefficient, whereas mobile buffers can contribute to calcium redistribution. To study calcium dynamics, we develop in the first part a model of diffusion in a crowded three-dimensional dendrite, which we reduce to a one-dimensional effective diffusion process. The model is general and can be applied to
protein diffusion in membranes or in endoplasmic reticulum-like networks [16], [32]. In a second part, we use uncaging experimental data of an inert dye (fluorescein) in a spiny dendrite and in a
glass tube of similar size filled with aqueous solution to estimate the reduction of the diffusion constant in a dendrite. These experiments are repeated by Brownian simulations in a 3D model
dendrite in order to validate our one-dimensional model.
In the last part, we use the previously derived effective diffusion constant and simulate a system of reaction-diffusion equations in one dimension to study calcium dynamics in a dendrite. We
account for calcium buffers, pumps, dendritic spines and synaptic inputs. We show that for moderate organelle crowding, calcium spread is mainly restricted by the buffer and pump concentrations and not by obstacles or dendritic spines. Although crowding restricts dendritic diffusion by a factor of 20, it is not responsible for the high calcium compartmentalization observed in dendrites [33], [34]. We further show that following high-frequency stimulations, calcium spread does not exceed a few microns. In summary, calcium microdomains are tightly regulated by various active processes such as calcium buffers, pumps and stores.
Our results are divided into three sections. In the first section, we present the diffusion model for an inert dye in a crowded dendritic medium. The model is derived from a periodic
compartmentalization of the dendritic domain. It is followed by an extension of the model to almost periodic compartments and the analysis of the mean time a particle takes to travel across the
dendrite. In the second part, we present the outcome of the uncaging experiments of fluorescein to probe the dendritic medium and to estimate the model parameters. It is followed by a comparison to
Brownian simulations, which repeat these experiments on a computer. Finally, we provide mean-field simulation results for calcium spread in a dendrite under the additional presence of stationary
buffers, pumps and synaptic input.
Crowding model
Modeling diffusion in a heterogeneous dendritic cytoplasm.
To characterize diffusion in a heterogeneous dendrite, containing various organelles such as mitochondria, spine apparatus, endoplasmic reticulum and other structures, we propose to derive a one-dimensional effective diffusion equation from a three-dimensional analysis. In the limit where the space in between organelles is small, particles can still move inside the dendritic domain; the nature of the motion is not impaired and is well approximated by the Smoluchowski limit of the Langevin equation [35]: a particle at position at time is described by (1)
where is a potential per unit of mass, is the friction coefficient, is the aqueous diffusion constant and is Gaussian white noise. The potential represents the effective force on the particle. When a moving molecule hits an impenetrable organelle, it is reflected. The distribution of independent molecules is characterized by the probability density function (pdf), which satisfies the Fokker-Planck equation (2) in the domain, and a zero-flux condition on the organelles and the dendritic membrane: (3)
where is the flux and the outer normal of the domain. To study the overall effect of crowding on diffusion, we approximate equations (2) and (3) by deriving a one-dimensional effective diffusion equation along the dendrite. We adopt an approach based on a compartmentalization of the dendritic domain and on the small hole theory [36], which provides the mean time for a Brownian particle to exit a domain through a small absorbing opening. This method allows us to obtain an explicit expression for the apparent diffusion constant. We divide the dendrite into periodic compartments of length and volume (Figure 1A), separated from their neighbors by a reflecting cross section, except for a small opening of radius . Each compartment should be large enough that the organelle density is the same in all of them. The small openings allow diffusing molecules to move across compartments. In contrast to previous models, where crowding has been described by spherical obstacles [37] that pose barriers to diffusing molecules, we model crowding as a sequence of periodic compartments with small openings at the boundaries between neighbors. A compartment starts at position and ends at position (Figure 1B). The number of particles in compartment changes according to the net flux across the small windows. The flux can be estimated from the small hole approximation for the Mean First Passage Time (MFPT) a Brownian particle takes to escape through a small opening [36], [38]–[41]. At first order in , the MFPT is approximated by (4)
where is the aqueous diffusion constant, the cylindrical compartment volume and the dendrite radius. Note that the MFPT depends solely on the ratio for fixed radius. From numerical studies (data not shown) we find that formula (4) holds to reasonable accuracy (relative error 0.05). In the long-time asymptotic regime, the unidirectional flux of particles through a small hole is the number of particles in the compartment divided by the MFPT. The net flux is the difference between the unidirectional fluxes in opposite directions, and is thus given by
where we have assumed that the size of the opening and the compartment volume may be spatially dependent. Conservation of mass imposes that the change in the number of particles inside the compartment is the sum of the net fluxes at positions and (Figure 1B), and thus (5)
Using a Taylor expansion for a fixed value of the compartment length , equation (5) becomes (6)
Figure 1. Compartmentalized model dendrite with attached spine including buffers and pumps.
(A) The model dendrite is organized as a sequence of periodic compartments of length . The compartments are connected through small openings of radius , through which molecules can pass to neighboring compartments. (B) Inward and outward fluxes through the small openings of compartment , used in the derivation of the effective diffusion equation.
Introducing the concentration, we obtain (7)
Similar equations have been derived in other contexts [42]–[44]. If the parameters are spatially independent, equation (7) simplifies to (8)
where the effective diffusion constant is given by (9)
The compartment parameter is (10)
where is the cross-sectional area. The effective diffusion constant depends on two parameters: the compartment length and the size of the opening . We determine the model parameters by (i) measuring the ratio of diffusion constants and (ii) imposing a calibration condition. The latter condition is chosen such that the small hole approximation (4) is valid to reasonable accuracy (relative error ), which we have tested in numerical simulations (not shown here). The effective diffusion constant for spatially homogeneous compartments is given by (9). Thus, the calibration condition sets one parameter arbitrarily within the limits of the small hole approximation, and measurements of the diffusion constant fix the other parameter. Equation (6) can be associated with a stochastic equation (11)
where the drift and diffusion terms are (12)
(a prime denotes spatial differentiation). Thus, the drift vanishes for spatially homogeneous opening sizes between compartments.
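The dependence of the effective diffusion constant on the compartment length and opening size can be illustrated numerically. The following sketch is not the authors' code: it uses the bare scaling D_eff ~ Δx²/τ with the narrow-escape time of formula (4), so geometric prefactors may differ from those in equation (9), and all parameter values are placeholders:

```python
import math

def narrow_escape_time(V, a, D):
    """Leading-order mean escape time through a small disk of radius a
    on the boundary of a compartment of volume V (formula (4))."""
    return V / (4.0 * a * D)

def effective_diffusion(D_water, R, dx, a):
    """Effective 1D diffusion constant of the compartment model:
    hopping a distance dx at rate 1/tau gives D_eff ~ dx**2 / tau."""
    V = math.pi * R**2 * dx          # cylindrical compartment volume
    tau = narrow_escape_time(V, a, D_water)
    return dx**2 / tau

# Placeholder values: dendrite radius 0.5 um, compartment length 0.1 um,
# opening radius 0.05 um; D_water normalized to 1.
ratio = effective_diffusion(1.0, 0.5, 0.1, 0.05)
```

With these placeholder values the apparent diffusion constant drops to a few percent of its aqueous value, comparable in magnitude to the factor-of-20 reduction measured below.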
The previous analysis can be applied to the motion of receptors on the surface of neurons, which contains impenetrable microdomains [45]. When the surface can be decomposed into a set of compartments connected by small openings, we can apply the results of the small hole computation derived in dimension two [36], [38]: the mean time for a Brownian molecule to escape a domain of area through a small hole is approximated by (14)
where is the ratio of the absorbing length to the total boundary length of the two-dimensional compartment. Following the same reasoning as in the previous paragraph, the receptor density satisfies the one-dimensional reduced equation (15)
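The two-dimensional escape time has the classical narrow-escape form τ ≈ (A/πD) log(1/ε) to leading order; assuming formula (14) takes this standard form (the extracted text does not preserve it), a minimal sketch with hypothetical numbers is:

```python
import math

def narrow_escape_time_2d(A, D, eps):
    """Leading-order 2D narrow-escape time: a Brownian molecule in a
    surface compartment of area A escapes through an absorbing arc
    occupying a fraction eps of the boundary (log-singular in eps)."""
    return (A / (math.pi * D)) * math.log(1.0 / eps)
```

Note the weak logarithmic dependence: halving the opening fraction only adds a fixed increment to the escape time, so even tiny openings permit appreciable receptor exchange between surface compartments.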
Crowding model for almost periodic diffusion barriers.
To further analyze the effect of diffusion barriers, we investigate how our previous analysis is affected by an almost periodic distribution of barriers, where a random jitter is modelled as white noise. We will see that diffusion in such a medium is characterized by a fourth-order diffusion equation. This analysis shows that approximating diffusion by dimensional reduction can lead to a non-classical diffusion description. We start with a compartment position given by (16)
where is a centered Brownian variable of variance 1 and the drift is a fixed number. When the other parameters are spatially independent, conservation of mass leads, for compartment , to (17)
It can be shown, by expanding the functions in terms of the random position, that the mean number of diffusing molecules is governed by a fourth-order diffusion-type equation (18)
The effective diffusion equation (8) is recovered in the limit of vanishing jitter. In the small-jitter limit, the effective diffusion constant reduces to (19)
where is defined in (9). We conclude that jittering leads to an increase in the diffusion constant compared to a periodic arrangement of barriers. Interestingly, the distribution of compartments affects the nature of the apparent diffusion process: in the periodic case, the apparent diffusion is described by a standard second-order diffusion equation, while fluctuations in the compartment distribution lead to an apparent diffusion described by a fourth-order equation.
Mean time for a diffusing particle to travel across a dendrite.
A possible application of the previous theory and of equation (6) is to estimate the mean time for a diffusing particle, such as a transcription factor, to travel across a nonbranching dendrite. The probability density function to find a molecule at position at time is proportional to the number of molecules per unit length. We can apply the standard theory of first passage times [35] to equation (6) and obtain an equation for the mean first passage time: (20)
To obtain the MFPT to reach the cell body (soma) from any starting point, we solve equation (20) in a dendrite sealed (reflecting boundary condition) at a distance from the nucleus. The solution is (21)
For example, when the compartment volume is constant, the mean time a diffusing molecule takes to travel from its starting location to the nucleus is given by (22)
where the effective diffusion constant is defined in (9). Similarly, the MFPT in the opposite direction, i.e., from the (reflecting) soma to an absorbing site in the dendrite, is given by the corresponding expression with the boundary conditions exchanged. We conclude that, in a dendrite with an effective diffusion constant taken from [46], the mean time for an mRNA to reach the soma, starting from the tip, is of the order of minutes.
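The transit-time estimate can be reproduced with the standard MFPT solution for an interval with an absorbing end at the soma (x = 0) and a reflecting tip (x = L), namely tau(y) = y(2L - y)/(2 D_eff). The numbers below are illustrative placeholders, not the values measured in this study:

```python
def transit_time(y, L, D_eff):
    """Mean first passage time from position y to an absorbing soma at
    x = 0 with a reflecting tip at x = L (pure diffusion, no drift):
    solves D_eff * tau'' = -1 with tau(0) = 0 and tau'(L) = 0."""
    return y * (2.0 * L - y) / (2.0 * D_eff)

# Hypothetical numbers: dendrite length 20 um, D_eff = 0.5 um^2/s.
tau_tip_minutes = transit_time(20.0, 20.0, 0.5) / 60.0
```

With these assumed values, a particle starting at the tip needs L²/(2 D_eff) = 400 s, i.e. roughly 6.7 minutes, illustrating how crowding-reduced diffusion limits passive transport toward the soma.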
The effect of crowding in dendrites
Uncaging experiments in a dendrite.
To study crowding inside a dendrite, we use a set of experiments in which we measure the diffusion time course of the caged inert dye fluorescein (Materials and Methods). To estimate the effect of crowding in the dendritic medium, we compare the diffusion time course near, and far away from, any dendritic spines (to avoid perturbation by the spine domain) to diffusion in a glass pipette of similar radius. Figure 2A shows confocal microscopy images of a dendritic segment with several attached spines and the glass pipette. We first compare the fluorescent transients in the dendrite and in the glass tube at different locations from the uncaging spot. We find a much faster decay in the aqueous solution of the pipette than in the dendrite (Figure 2B). The fluorescent signals were averaged over several uncaging experiments. The diffusion constants were extracted by a least-squares fit of the data to the numerical solutions of equation (8), which considers a spine as homogeneous at the length scale of the compartment length.
Figure 2.
(A) Images of the dendritic segments and the glass pipette used in the experiments. The sites of the uncaging spots are indicated. (B) Fluorescein transients in the pipette (black) and in the
dendritic medium far away from any attached spine (green) at different distances from the uncaging spot. (C) Fluorescein transients in the dendrite near, and far away from, any attached dendritic spine are shown in blue and green, respectively. Fluorescein was uncaged at the base of the spine. The data are averaged values over several uncaging experiments. The numerical solutions of
the 1D effective diffusion equation are shown as solid lines.
For the pipette data, where fluorescein diffuses freely, this method led to a diffusion constant of , whereas in the dendritic medium far away from any attached spines, we estimated a diffusion constant of . This value is not far from the upper estimate obtained for the axoplasm of metacerebral cells of Aplysia californica [31]. We conclude that cytoplasmic crowding in the dendrite results in a drastic reduction of the apparent diffusion constant, by a factor of 20. Using the crowding model presented in the previous section, we can estimate the compartment length and the opening size that lead to this reduction in the diffusion constant from formula (9) and the calibration condition.
We further investigated the influence of a spine on dendritic diffusion: we initiated a dye transient in the dendritic shaft at the base of a spine by uncaging fluorescein. Figure 2C shows the fluorescent signals in the presence and absence of the spine at different locations from the uncaging spot. We obtained a slightly larger diffusion constant near a dendritic spine than away from spines.
Brownian simulations of the uncaging experiments.
To support our modeling approach, we use Brownian simulations (Figure 3) to reproduce the uncaging experiments in a glass pipette (Figure 3A) and in a 3D cylindrical model dendrite far from (Figure 3B) and near (Figure 3C) a dendritic spine. The methods used for the implementation of the Brownian simulations are described in Materials and Methods.
Figure 3. Brownian simulations of uncaging experiments.
(A) Model glass pipette. Shown are the initial particle distribution, taken from the experimental data, and the sampling volumes (white cylindrical disks) at different locations from the uncaging spot. (B) Compartmentalized model dendrite. The compartment length and the opening size are derived from the theoretical model. (C) Compartmentalized model dendrite with attached spine (dendrite geometry as in B, with spine neck radius 0.3, spine neck length 0.2 and spine head radius 0.4). (D) Comparison of 3D Brownian simulations with the uncaging experiments and the results derived from the solutions of the 1D effective diffusion equation. The normalized concentration profiles are shown for the glass tube (A), the dendrite (B) and the dendrite with attached spine (C) at three locations from the uncaging spot.
We first calibrate the parameters of the model: according to equation (9), a reduction of the diffusion constant by a factor of 20 results in a compartment length of and an opening size of . The spine characteristic lengths are taken from the confocal microscopy image in Figure 2A. We simulated particles (of fluorescein) and sampled the concentrations in cylindrical disks at the locations of the experimental recording sites for a duration of 0.7 ms, which corresponds to the temporal resolution of the experimental data. For the simulations in the glass pipette and in the dendrite, near and far from any attached spine, the initial distribution in the axial direction was taken from the experimental data, whereas in the radial direction it was taken to be homogeneous. The diffusion constant of fluorescein in aqueous solution was set to the same value in all simulations. Figure 3D shows the comparison of the 3D Brownian simulations with the experimental data and the results derived from the 1D effective equation. Note that in all simulations the concentration at the uncaging site is normalized to 1. There is a sharp drop of particle concentration in the case of a dendrite with an attached spine, due to the flux of particles out of the sampling box into the spine. Note further that the 1D effective diffusion equation is only valid in the long-time asymptotic regime. We conclude that the results of the 1D effective diffusion equation and the 3D Brownian simulations recover the time course of the experimental data, confirming our overall approach. A movie (Video S1) of our Brownian simulations in the dendrite with an attached spine is given in the Text S1.
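The Brownian dynamics underlying these simulations can be caricatured by a continuous-time random walk over compartments: each of the two openings of a compartment is crossed at rate 1/τ, so the mean squared displacement should grow as 2(Δx²/τ)t. The sketch below is illustrative only (standard-library RNG, placeholder parameters); it is not the 3D simulation code used for Figure 3:

```python
import random

def msd_rate(n_particles, n_steps, dx, tau, seed=0):
    """Estimate MSD/t for a compartment-hopping random walk: jumps of
    +/- dx occur at total rate 2/tau (one opening on each side).
    For long times the estimate approaches 2 * dx**2 / tau."""
    rng = random.Random(seed)
    acc = 0.0
    for _ in range(n_particles):
        x, t = 0.0, 0.0
        for _ in range(n_steps):
            t += rng.expovariate(2.0 / tau)   # waiting time to next hop
            x += dx if rng.random() < 0.5 else -dx
        acc += x * x / t
    return acc / n_particles

estimate = msd_rate(n_particles=400, n_steps=400, dx=1.0, tau=1.0, seed=1)
# Should be close to 2 * dx**2 / tau = 2 for these parameters.
```

Restricting the opening (longer τ) lowers the slope of the MSD, which is precisely the effective-diffusion reduction quantified above.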
Calcium dynamics in crowded dendrites
In addition to cytoplasmic crowding, calcium dynamics is regulated by many factors such as binding to buffer molecules (e.g., calmodulin and calcineurin), dendritic spines and various types of pumps
located on the dendritic surface (PMCA, NCX) and on the surface of internal organelles such as the endoplasmic reticulum (SERCA). It is usually not possible to dissect experimentally the contribution
of each process, and we shall apply our previous results to study calcium spread in dendrites.
We present a reaction-diffusion model (Materials and Methods) to simulate calcium dynamics in both spiny and aspiny dendrites. At this stage, we do not take into account intracellular calcium stores, and thus exclude the generation of calcium waves through CICR; nor do we model spontaneous dendritic calcium spikes or calcium transients associated with back-propagating action potentials. We focus here on the local spread of calcium transients and ignore global calcium events. We include in our simulations the effects of buffers, pumps, spines and synaptic input. The contribution of each active component to calcium dynamics is described in Materials and Methods.
Calcium is highly restricted by the buffer activity and not by molecular crowding
We first simulated calcium diffusion in an aqueous solution (contained in a glass pipette) by initiating a calcium transient and solving the one dimensional diffusion equation (41)–(45) with a
diffusion constant of (Figure 4A). The effect of crowding alone on calcium diffusion in a dendrite was simulated by reducing the free diffusion constant to (Figure 4B). We assume here that the
effects of crowding on motion are the same for fluorescein molecules and for calcium ions attached to a dye molecule. As expected, crowding leads to a more localized and persistent calcium transient
compared to free diffusion in an aqueous solution.
Figure 4.
(A) Calcium diffusion in an aqueous solution contained in a pipette of length . (B) Calcium diffusion in a crowded dendrite with an effective diffusion constant of . A calcium transient of was
initiated at . Note that the initial concentration is equal to about 600 particles per and evaluates to about 470 particles per micron for a dendrite with diameter . (C) Same settings as in (A) but
with additional buffers (medium buffer concentration) and pumps. (D) Same settings as in (B) but with additional buffers (medium buffer concentration) and pumps. (E) -influx was injected at for 1 s
at the location of the NMDAR in the middle of the dendritic segment as shown in the upper and middle panel. The resulting spatiotemporal -profile in the dendrite is shown in the lower panel. (F)
Spatiotemporal profiles of in the dendrite for different influx frequencies at the location of the NMDAR. (G) Corresponding calcium spread in the dendrite as measured by the full width at half
maximum (FWHM) of the calcium signal.
We next added two types of immobile buffers, calmodulin (CaM) and calcineurin (CN), as well as pumps (NCX and PMCA), to the simulation. The buffer concentration was varied between low ([CaM]: , [CN]:
), medium ([CaM]: , [CN]: ) and high ([CaM]: , [CN]: ) levels. Figure 4C and D show the effect of fast buffering on calcium dynamics in aqueous solution and in a crowded dendrite, respectively, for
medium buffer concentration. The differences are small. The calcium signal in the crowded medium is more localized in space and slightly longer lasting than in aqueous solution. From these simulation
results, we conclude that the spatiotemporal extent of the calcium signal is highly restricted by the stationary buffer activity. These results agree qualitatively with other uncaging experiments of
calcium in glass tubes and dendrites [47].
Calcium spread following stimulation over a large range of frequencies is less than around the source
We next analyze calcium spread originating from localized inputs such as synapses. At dendritic synapses calcium can enter through NMDA-receptors. To estimate calcium spread as a function of the
synaptic input frequency, we simulated -influx in the middle of a dendritic segment (Figure 4E). Buffers and pumps were set to their default values (Table 1). We initiated calcium transients in the
crowded model dendrite for different input frequencies (). The spatiotemporal extent of the calcium signal for different input frequencies is given in the intensity plots Figure 4F. Calcium spread is
measured by the full width at half maximum (FWHM) of the calcium signal. Interestingly, for input frequencies larger than 20 Hz, the calcium signal in the dendrite reaches a stationary value. For
high input frequencies (20 Hz) calcium spread does not exceed ( = 0.5FWHM) as measured from the input source. This is in agreement with the experimental data where calcium spread was contained within
a domain of about . We conclude that buffers and pumps limit calcium spread to a few micrometers.
Table 1. Model parameters.
Calcium spread in crowded dendrites
We have shown here that dendritic crowding reduces the diffusion constant of inert Brownian molecules by a factor of 20 when compared to diffusion in an aqueous solution. We have used this result to
estimate calcium spread in dendrites. We found that in the absence of regenerative mechanisms (VSCC, calcium stores), the spread of calcium largely depends on the buffer concentration and moderate
molecular crowding does not play a significant role in shaping calcium dynamics. Thus, crowding has only a minor effect compared to the cumulative effect of pumps and buffers. In addition, the
presence of a single (passive) spine at the location of calcium release did not influence calcium diffusion in the dendrite.
In this study, we have analyzed the effect of molecular crowding on calcium spread in the presence of stationary buffers. Assuming that the diffusion constants of calcium and fluorescein are
reduced by the same factor due to the effect of molecular crowding, our results confirm previous studies that calcium spread is largely restricted by the effect of stationary buffers [31], [48]–[50].
Our analysis showed only a small effect of molecular crowding on calcium spread (Figure 4C and 4D): slightly more calcium molecules were bound to buffers in the crowded condition.
These results are qualitatively consistent with stochastic simulations in a cubic cell model under different crowding and buffer mobility conditions [19], where it has been shown that molecular
crowding affects the calcium signaling system mainly through crowding-induced binding of calcium to buffer molecules and less through the direct hindrance of calcium diffusion. This study showed
further that these effects are not additive. Interestingly, the reduction in diffusion constant due to molecular crowding was found to be for moderately crowded environments with excluded volume
fraction. In our study, the reduction of the calcium diffusion constant was extrapolated from fluorescein uncaging experiments in the dendritic medium, which resulted in a much higher value. This
difference might result from additional crowding effects such as cavities that were not modelled in the stochastic simulations.
Calcium is restricted in microdomains near each synaptic input
Calcium microdomains have been observed during spontaneous and electrically evoked activation of synapses on dendritic shafts in aspiny neurons [34]. Compartmentalization into domains of about
resulted from fast kinetics of calcium permeable AMPA receptors and fast local extrusion via the exchanger [34]. In general, as observed in Figure 4, calcium spread is robustly confined in a domain
of less than from the input source and this seems to be independent of the synaptic firing frequency. Thus, calcium dynamics seems to be well regulated by buffers, stores and extrusion mechanisms.
It is certainly a requirement for dendrites to prevent calcium spread over large distances because it is not only the primary messenger in the induction of synaptic plasticity, such as long term
potentiation (LTP) [51], but it is also involved in morphological changes and in the regulation of receptor trafficking such as AMPA [52]. While organelle localization might depend on the dendritic
local needs (protein synthesis, energy supply and local calcium stores), calcium pump densities and calcium buffer concentrations might be regulated independently to maintain calcium homeostasis. It
remains an unsolved question to determine how pumps and calcium buffer molecules are regulated along a dendrite.
Molecular trafficking near dendritic spines
Using our previous computations, we found that (passive) dendritic spines in this mean-field approach do not contribute much to dendritic calcium regulation (data not shown). In general, our result
suggests that spines should not significantly affect the movement of diffusing particles along the dendrite. However, in the case of calcium, we have not taken into account a possible calcium
propagation through the endoplasmic reticulum network, which may lead to a very different type of propagation.
Dendritic spines can be seen as the ultimate place of confinement in dendrites: indeed, calcium exchangers located on the endoplasmic reticulum surface or on the spine neck membrane can prevent
calcium from diffusing into the spine head [53], [54]. In addition, large crowding observed at the spine base due to various types of organelles such as the endoplasmic reticulum or the spine
apparatus [2], [15] can prevent diffusing molecules from entering the spine neck. However, it is not clear whether mRNA or transcription factors can enter dendritic spines by passive diffusion or
whether active processes are required.
Materials and Methods
Fluorescein experiments in dendrites
Cultures were prepared as detailed in [47]: we used Wistar rat pups at P1. Hippocampal tissue was mechanically dissociated and plated on 12 mm glass coverslips at 3–4×10⁵ cells per well in a 24-well plate. Cells were left to grow in the incubator at C, 5% CO₂ for 4 days, at which time the medium was changed to 10% HS in enriched MEM. The medium was changed four days later to 10% HS in enriched MEM. Cells were transfected at 1 wk in culture with a DsRed plasmid to visualize the dendrites and spines using a Lipofectamine 2000 (Invitrogen) method. On the day of imaging, the glass was transferred to the recording medium containing (in mM): NaCl 129, KCl 4, MgCl₂ 1, CaCl₂ 2, glucose 10, HEPES 10, and TTX . pH was adjusted to 7.4 with NaOH, and osmolarity to 315 mOsm with sucrose.
Ten- to fourteen-day-old cultured cells were patch clamped at the soma and recorded with a glass pipette containing (in mM): K-gluconate 140, NaCl 2, HEPES 10, EGTA 0.2, Na-GTP 0.3, Mg-ATP 2, phosphocreatine 10, and 100 of caged fluorescein (Molecular Probes) at pH 7.4, having a resistance of 6–12 MΩ. Signals were amplified with an Axopatch 200 (Axon Instruments Inc., Foster City, CA). Cells were imaged with a 63× water immersion objective (NA = 0.9). A UV laser was aimed at a spot of 1 μm in the center of the field of view. A line scan mode (0.7 msec/line) was used along an imaged dendrite
to measure fast changes in fluorescence following flash photolysis of caged fluorescein. In the second stage of the experiment, the content of patch pipettes, containing caged fluorescein, was sucked
out and introduced into additionally prepared pipettes with long and sharp tips, tens of microns in length and about 1–2 μm in diameter, making their geometry similar to a “typical” dendrite. The same line scan mode was used to compare changes in fluorescence in a dendrite and in a glass tube containing similar concentrations of caged fluorescein. Data were analyzed using custom-made MATLAB-based
programs. Steps of 0.6 from the center of the uncaging sphere were defined through the line scans and pixels inside every step were horizontally averaged. Every line scan trial was repeated 7–14
times. Statistical comparisons were made with t-tests.
Brownian simulations
We implemented the Brownian simulations in MATLAB using a ray-tracing algorithm. To overcome the huge computational burden that Brownian simulations in complex domains impose, we made heavy use of MATLAB's object-oriented programming and vectorization features, as well as of the external C/C++ interface capabilities (MEX-files). We first constructed a triangular mesh of the simulation
domain (e.g., cylinder, cylinder with spine, see Figures 3 A,B) using a simple mesh generator based on signed distance functions (DistMesh package, [55]). The (meshed) simulation domain was then equipped
with user-defined sampling boxes, an initial distribution of particles and diffusion barriers (e.g., disks with small holes, see Figures 3 B,C). We predefined a sampling interval ( ms) at which the
particle concentrations in the sampling boxes were measured.
Surface mesh elements were defined to be either reflective or absorbing. The top and the bottom of the cylindrical domain were set to be absorbing, while all other surface elements were defined to be
reflective. Particle rays crossing reflecting boundaries or obstacles were reflected according to the law of light reflection. To speed up the code we divided the simulation domain into partition
voxels. For each partition voxel a list of contained objects (mesh elements, obstacles) was pre-computed and provided to the algorithm during execution.
The Brownian simulation was implemented using an Euler scheme with adaptive step size. Steps were defined by the distance to mesh elements and obstacles: the closer a particle was to an object, the smaller the step size chosen. As a rule of thumb, the minimal step size was set to 0.3–0.5 of the smallest length scale that had to be resolved (e.g., the radius of the hole of the disks,
see Figure 3 B). The (vectorized) particle rays were traced in the voxels and tested for intersections with mesh elements or objects. If intersections occurred the particles were either reflected or
absorbed. It is important to note that an adaptive step size algorithm leads to a different progress in physical time for each particle. Hence, measuring particle concentrations at fixed sampling times required the implementation of a scheduler that temporarily removed particles from the simulation and stored their positions. Our simulations lasted between several hours and several days on a cluster, depending on the number of particles, the number of objects and the minimal step size. We have made extensive use of MATLAB's visualization tools to monitor the simulations and to
generate visual outputs of the simulation results (see snapshots in Figures 3 A–C and a movie (Video S1) in the Text S1). We have included in the Text S1 a validation study of diffusion in a
cylindrical domain with absorbing boundaries at the top and bottom. Different measures such as global and local particle concentrations as well as the mean first passage time to the absorbing
boundaries are extracted from the simulations and compared with existing analytical results. The test-simulation is shown in Video S2. A good agreement between these results was obtained, and thus,
evidence for the correctness of the implemented algorithm in the Monte-Carlo simulation tool is provided.
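As an illustration of this scheme, the following is a heavily simplified Python sketch of a Brownian walk in a cylindrical domain: it uses a fixed rather than adaptive step size, an analytic cylinder wall instead of a triangular mesh, and one specular reflection per step. All parameter values are placeholders, and the paper's actual implementation is in MATLAB.

```python
import numpy as np

def brownian_cylinder(n_particles, radius, length, D, dt, n_steps, seed=0):
    """Fixed-step Brownian walk in a cylinder: reflecting side wall,
    absorbing caps at z = 0 and z = length. Returns surviving positions."""
    rng = np.random.default_rng(seed)
    pos = np.zeros((n_particles, 3))
    pos[:, 2] = length / 2.0                 # start at the cylinder centre
    alive = np.ones(n_particles, dtype=bool)
    sigma = np.sqrt(2.0 * D * dt)            # per-coordinate step scale
    for _ in range(n_steps):
        pos[alive] += rng.normal(0.0, sigma, size=(alive.sum(), 3))
        # absorb particles that crossed the top or bottom cap
        alive &= (pos[:, 2] > 0.0) & (pos[:, 2] < length)
        # specular reflection at the side wall: mirror the radial overshoot
        r = np.hypot(pos[:, 0], pos[:, 1])
        out = alive & (r > radius)
        if out.any():
            scale = (2.0 * radius - r[out]) / r[out]
            pos[out, 0] *= scale
            pos[out, 1] *= scale
    return pos[alive]

survivors = brownian_cylinder(2000, radius=0.5, length=10.0,
                              D=0.1, dt=1e-3, n_steps=500)
```

The mirror-across-the-wall reflection is only valid when steps are small relative to the radius, which is the same consideration that motivates the adaptive step size in the full simulation.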
Calcium dynamics
The spatiotemporal calcium signal in the dendrite is regulated by several active and passive components that are described next.
Calcium buffers.
Dendrites contain a large number of different buffers. The reactions of a buffer that can bind calcium ions are modeled by the series of chemical reactions (24)
where and are the forward and backward rates for , respectively. We choose two representative members of the buffer molecules: calcineurin (CN) with one calcium binding site () and calmodulin (CaM) with four binding sites (). The kinetic equations are derived from the standard theory of chemical reactions, leading to a coupled set of ODEs for the unknown calcium concentrations, [Ca], and buffer concentrations, [BCa], with , (), calcium bonds: (25)
In the following we will not use concentrations as the dynamic variables, but the number of particles (in ) per unit length, . The conversion from calcium concentration to particles per unit length
where is the cross section of the dendrite.
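For a single-site buffer such as CN, the kinetic scheme above reduces to one ODE pair; a minimal forward-Euler sketch follows. The rate constants and concentrations are illustrative placeholders, not the paper's values.

```python
def buffer_kinetics(ca0, b_total, kf, kb, dt, n_steps):
    """Forward-Euler integration of the single-site buffer reaction
    Ca + B <-> CaB with forward rate kf and backward rate kb.
    Total calcium (free + bound) is conserved by construction."""
    ca, cab = ca0, 0.0
    for _ in range(n_steps):
        flux = kf * ca * (b_total - cab) - kb * cab
        ca -= flux * dt
        cab += flux * dt
    return ca, cab

# run to (numerical) equilibrium with placeholder rates and concentrations
ca, cab = buffer_kinetics(ca0=1.0, b_total=5.0, kf=1.0, kb=0.5,
                          dt=1e-3, n_steps=20000)
```

At equilibrium the forward and backward fluxes balance, kf·[Ca]·[B] = kb·[CaB], which is the fixed point the integration converges to.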
Calcium pumps.
Two basic mechanisms are responsible for the removal of calcium ions across the neuronal membrane: the ATP-driven plasma membrane pumps (PMCA) and the exchanger (NCX). The PMCA pumps extrude ions against the concentration gradient using the energy provided by ATP molecules. The sodium-calcium exchanger extrudes one calcium ion in exchange for the inward movement of three sodium ions. Both extrusion mechanisms are described by similar equations: the loss of calcium ions through the PMCA pumps () and NCX () is modeled according to (29)
with an activation characteristic (30)
where the half-saturation concentration is , the extrusion rate per pump (number of ions per unit time) is given by , the density of pumps per unit length is denoted by , and is the Hill coefficient.
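The activation characteristic in (30) is a Hill function; a one-line sketch (all parameter values illustrative):

```python
def pump_flux(ca, v_max, k_half, n_hill):
    """Hill-type pump activation: a saturating extrusion flux that is
    half-maximal at the half-saturation concentration k_half."""
    return v_max * ca ** n_hill / (ca ** n_hill + k_half ** n_hill)

# half-maximal exactly at ca = k_half, saturating towards v_max
half = pump_flux(0.5, v_max=1.0, k_half=0.5, n_hill=2)
```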
Passive effect of dendritic spines.
Dendritic spines are modeled as passive calcium absorbers. In our model, calcium ions entering a dendritic spine are totally absorbed. The flux of calcium ions into the dendritic spine depends on the
spine neck radius. It can be computed in the configuration where the dendrite is compartmentalized and the compartments are connected through small openings (Figure 1A). In that case, the openings
between compartment and the spine entrance are well separated, and thus the flux of calcium ions into a dendritic spine with spine neck radius located at a longitudinal position is(31)
where is a rectangle function of width :(32)
and is the Heaviside step-function ( for and zero otherwise). The rate, , is given by the inverse of the mean first passage time to reach a small opening of radius . Thus (33)
where is the compartment volume. The total flux of calcium ions into the neck of spines with a neck radius distributed at positions is given by(34)
where is the spine density per unit length.
Calcium dye.
The effect of the calcium dye is modeled as a buffer by the reaction (36)
where denotes the calcium-dye complex. The kinetic equations are given by(37)
where is the total dye concentration, i.e.,
Synaptic input.
Calcium influx on dendritic spines is mediated primarily by slow NMDA currents [56]. The voltage-dependent NMDA channel is, at resting potential, mostly blocked by the ions. As in [57], we ignore the details of the voltage dependence of the NMDA receptor channel and consider a simplified model corresponding to presynaptic stimulation in conjunction with postsynaptic voltage clamp. The time course
of the NMDA mediated synaptic current is modeled as the difference of two exponentials(39)
where denotes the step-function, is the time of stimulus initiation and the number of pulses. The electrical currents are transformed into a particle current per unit length according to(40)
where is the location of the receptor, is the radius of the channel opening, the fraction of current carried by calcium through the receptor, the Faraday constant, the valence of calcium and is a
rectangular function with center at and half-width .
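A sketch of the difference-of-two-exponentials time course in (39), for a single pulse. The amplitude and time constants below are placeholders, not the fitted values of the paper.

```python
import math

def nmda_current(t, t0, i_max, tau_rise, tau_decay):
    """Difference-of-two-exponentials time course, zero before onset t0.
    Requires tau_decay > tau_rise for a positive-going transient."""
    if t < t0:
        return 0.0
    s = t - t0
    return i_max * (math.exp(-s / tau_decay) - math.exp(-s / tau_rise))
```

A pulse train, as used in the frequency-stimulation protocols, would simply sum shifted copies of this waveform, one per pulse.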
Reaction-diffusion equations.
The total effect of buffers, pumps and spines on the cytosolic calcium concentration can be summarized in the form of a reaction-diffusion equation: (41)
where describes the calcium fluxes due to the buffers, pumps, spines and synaptic input. Equation (41) is coupled to the dynamic equations for the particle density of buffer molecules with maximally
calcium binding sites and to the equation describing the calcium-dye particle density, . We finally obtain the set of equations that describe the calcium dynamics in the dendrite:(42)
We included in the above equations the effect of mobile buffers. However, in the following, we assume that the buffers are fixed and set the buffer diffusion constants, , to zero.
Numerical simulations.
The reaction-diffusion equations (42)–(45) were solved numerically using MATLAB. The partial differential equations were solved using the numerical method of lines which is implemented in the MATLAB
solver. Space and time discretizations were set to and , respectively, depending on the total simulation time which varied between and . The total simulation time was determined by the biological
components included in the simulation protocols. For example, for a simulation of calcium diffusion with activated pumps and buffers, a simulation time of 10 ms was sufficient due to the fast uptake
of calcium by the buffers. Simulation protocols that included synaptic input required a much larger simulation time of about 1 s (Figure 4).
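A minimal method-of-lines sketch for a 1D diffusion-plus-uptake equation of this type. It uses explicit Euler rather than the stiff MATLAB solver used in the paper; the grid, rates and the simple linear uptake term are illustrative stand-ins for the full reaction terms.

```python
import numpy as np

def diffuse_1d(c0, D, k_uptake, dx, dt, n_steps):
    """Method-of-lines discretization of dc/dt = D*c_xx - k*c on a 1D
    grid with zero-flux (reflecting) ends, advanced by explicit Euler.
    The finite-volume end stencils conserve total mass when k = 0."""
    c = np.asarray(c0, dtype=float).copy()
    r = D * dt / dx ** 2        # must satisfy r <= 0.5 for stability
    for _ in range(n_steps):
        lap = np.empty_like(c)
        lap[1:-1] = c[:-2] - 2.0 * c[1:-1] + c[2:]
        lap[0] = c[1] - c[0]
        lap[-1] = c[-2] - c[-1]
        c = c + r * lap - dt * k_uptake * c
    return c

# illustrative run: a point-like transient spreading along the dendrite
c0 = np.zeros(101)
c0[50] = 1.0
c = diffuse_1d(c0, D=0.01, k_uptake=0.0, dx=0.1, dt=0.1, n_steps=200)
```

A stiff implicit solver (as in the paper) becomes necessary once the fast buffer reactions are included, since their rates far exceed the stable explicit time step.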
Supporting Information
Movie of a stochastic simulation of particles in a model dendrite with an attached spine. The geometric measurements for the dendritic segment and the dendritic spine were extracted from Figure 2A.
The top and the bottom of the dendritic cylinder are absorbing surfaces.
Movie of a diffusion experiment of particles in a cylindrical domain (radius , length ) with absorbing boundary conditions at the top and bottom of the cylinder. Diffusion constant: .
Validation study for testing the algorithm implemented in the stochastic simulation tool.
(A) Global particle concentration in a cylindrical domain (radius , length ) with absorbing top and bottom and normalized local particle concentration in a small sampling volume with center at and
height . Comparison of the exact global and local particle concentrations (7) and (8), respectively, to the Brownian simulation results using particles. (B) Comparison of the averaged mean first
passage time as a function of cylinder length . Diffusion constant: .
Author Contributions
Conceived and designed the experiments: EK. Performed the experiments: EK. Analyzed the data: AB. Contributed reagents/materials/analysis tools: AB EK DH. Wrote the paper: AB EK DH.
What is 2mb in bytes?
2 MB is 2,097,152 bytes (here 1 MB = 2²⁰ = 1,048,576 bytes).
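A quick check in Python (the figure above uses binary megabytes; with decimal megabytes, 1 MB = 10⁶ bytes, the answer would be 2,000,000):

```python
megabytes = 2
bytes_binary = megabytes * 1024 ** 2   # 1 binary megabyte = 2**20 bytes
print(bytes_binary)  # 2097152
```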
nonnegative Fourier Transform
Let $\widehat{f}(\xi)$ be the Fourier transform of $f$ given by $$\widehat{f}(\xi)=\int_{\mathbb{R}^n} e^{-ix\cdot\xi}f(x)\,dx.$$ Suppose that $\widehat{f}(\xi)$ is a nonnegative and locally integrable function. One easily sees (by the inverse Fourier transform) that $$\Vert f\Vert_{L^{\infty}} \leq \Vert \widehat{f}\Vert_{L^1}.$$ How does one show that there is a positive constant $c>0$ such that $$\Vert \widehat{f}\Vert_{L^{1}}\leq c \Vert f\Vert_{L^{\infty}}?$$
fourier-transform fourier-analysis
What do you call "the other inequality", and where does your question originate from? – Seva Feb 18 '13 at 18:02
How to show that there is a positive constant $c>0$ such that $\Vert \widehat{f}\Vert_{L^1}\leq c \Vert f\Vert_{L^{\infty}}$ – Marcelo Feb 18 '13 at 18:10
My question originates from Lemarie's book: "Recent Developments in the Navier-Stokes Problem", p. 168. – Marcelo Feb 18 '13 at 18:13
I meant $f(t)$ of course... – Yemon Choi Feb 18 '13 at 21:13
Yemon Choi, in Lemarie's book he says that $\Vert f\Vert_{L^{\infty}}=\Vert \widehat{f}\Vert_{L^1}$ – Marcelo Feb 19 '13 at 1:28
1 Answer
If $\hat f$ is nonnegative, then (up to a factor), $$f(0)=\int \hat f=\Vert \hat f \Vert_1 = \Vert f \Vert_\infty.$$
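Spelling out the factor, with the convention of the question and assuming the inversion formula applies (e.g. $f$ continuous and $\widehat f\in L^1$), the derivation reads:

$$f(x) = (2\pi)^{-n}\int_{\mathbb{R}^n} e^{ix\cdot\xi}\,\widehat{f}(\xi)\,d\xi
\quad\Longrightarrow\quad
\Vert \widehat{f}\Vert_{L^1} = \int_{\mathbb{R}^n} \widehat{f}(\xi)\,d\xi
 = (2\pi)^n f(0) \le (2\pi)^n \Vert f\Vert_{L^{\infty}},$$

so the claim holds with $c=(2\pi)^n$. Nonnegativity of $\widehat f$ is exactly what lets $\Vert\widehat f\Vert_{L^1}$ collapse to $\int\widehat f$.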
thanks so much Michael Renardy – Marcelo Feb 19 '13 at 4:37
Gaussian Covariance Faithful Markov Trees
Journal of Probability and Statistics
Volume 2011 (2011), Article ID 152942, 10 pages
Research Article
^1Unité de Recherche Signaux et Systémes (U2S), Ecole Supérieure de la Statistique et de l'Analyse de l'Information (ESSAI), Ecole Nationale d'Ingénieurs de Tunis (ENIT), 6 Rue des Métiers, Charguia
II 2035, Tunis Carthage, Ariana, Tunis 1002, Tunisia
^2Department of Statistics, Department of Environmental Earth System Science, Woods Institute for the Environment, Stanford University, Standford, CA 94305, USA
Received 30 May 2011; Accepted 9 August 2011
Academic Editor: Junbin B. Gao
Copyright © 2011 Dhafer Malouche and Bala Rajaratnam. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and
reproduction in any medium, provided the original work is properly cited.
Graphical models are useful for characterizing conditional and marginal independence structures in high-dimensional distributions. An important class of graphical models is covariance graph models,
where the nodes of a graph represent different components of a random vector, and the absence of an edge between any pair of variables implies marginal independence. Covariance graph models also
represent more complex conditional independence relationships between subsets of variables. When the covariance graph captures or reflects all the conditional independence statements present in the
probability distribution, the latter is said to be faithful to its covariance graph—though in general this is not guaranteed. Faithfulness however is crucial, for instance, in model selection
procedures that proceed by testing conditional independences. Hence, an analysis of the faithfulness assumption is important in understanding the ability of the graph, a discrete object, to fully
capture the salient features of the probability distribution it aims to describe. In this paper, we demonstrate that multivariate Gaussian distributions that have trees as covariance graphs are
necessarily faithful.
1. Introduction
Markov random fields and graphical models are widely used to represent conditional independences in a given multivariate probability distribution (see [1–5], to name just a few). Many different types
of graphical models have been studied in the literature. Concentration graphs encode conditional independence between pairs of variables given the remaining ones. Formally, let us consider a random
vector with a probability distribution where is a finite set representing the random variables in . An undirected graph is called the covariance graph (see [1, 6–11]) associated with the probability
distribution if the set of edges is constructed as follows: Note that means that the vertices and are not adjacent in .
The concentration graph associated with is an undirected graph , where is the set of vertices and each vertex represents one variable in . The set is the set of edges (between the vertices in )
constructed using the pairwise rule: for pair , where .
Note that the subscript zero is invoked for covariance graphs (i.e., versus ) as the definition of covariance graphs does not involve conditional independences.
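As a sketch of these two pairwise constructions for a Gaussian vector, where covariance-graph edges come from nonzero entries of the covariance matrix and concentration-graph edges from nonzero entries of its inverse (the tolerance `tol` is a numerical stand-in for exact zeros):

```python
import numpy as np

def graphs_from_covariance(sigma, tol=1e-10):
    """Edges of the covariance graph (nonzero entries of sigma) and of
    the concentration graph (nonzero entries of K = sigma^{-1}) for a
    Gaussian vector with covariance matrix sigma."""
    K = np.linalg.inv(sigma)
    p = sigma.shape[0]
    pairs = [(i, j) for i in range(p) for j in range(i + 1, p)]
    cov_edges = {e for e in pairs if abs(sigma[e]) > tol}
    con_edges = {e for e in pairs if abs(K[e]) > tol}
    return cov_edges, con_edges

# a 3-variable Markov chain 0 - 1 - 2: tridiagonal precision matrix
K = np.array([[2.0, -1.0, 0.0], [-1.0, 2.0, -1.0], [0.0, -1.0, 2.0]])
cov_edges, con_edges = graphs_from_covariance(np.linalg.inv(K))
```

For this example the concentration graph is the path 0 - 1 - 2, while the covariance graph is complete, illustrating that the two graphs generally differ.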
Both concentration and covariance graphs not only are used to encode pairwise relationships between pairs of variables in the random vector , but as we will see below, these graphs can also be used
to encode conditional independences that exist between subsets of variables of . First, we introduce some definitions.
The multivariate distribution is said to satisfy the “intersection property” if, for any subsets , , , and of which are pairwise disjoint,
We will call the intersection property (see [2]) in (1.3) above the concentration intersection property in this paper in order to differentiate it from another property that is satisfied by when
studying covariance graph models. Though this property can be further relaxed, we will retain the terminology used in [2].
We first define the concept of separation on graphs. Let , , and denote a pairwise disjoint set of vertices. We say that a set separates and if all paths connecting and in intersect , that is, .
(This is not to be confused with stochastic independence which is denoted by as compared to .) Now, let satisfy the concentration intersection property. Then, for any triplet of subsets of pairwise
disjoint, if separates and in the concentration graph associated with , then the random vector is independent of given . This latter property is called concentration global Markov property and is
formally defined as Kauermann [6] shows that if satisfies the following property: for any triplet of subsets of pairwise disjoint, then, for any triplet of subsets of pairwise disjoint, if separates
and in the covariance graph associated with , then . This latter property is called the covariance global Markov property and can be written formally as follows: In parallel to the concentration
graph case, property (1.5) will be called the covariance intersection property and is sometimes also referred to as the composition property. Even if satisfies both intersection properties, the
covariance and concentration graphs may not be able to capture or reflect all the conditional independences present in the distribution; that is, there may exist one or more conditional independences
present in the probability distribution that does not correspond to any separation statement in either or . Equivalently, a lack of a separation statement in either or does not necessarily imply a
conditional independence. On the contrary case when no other conditional independence exists in except the ones encoded by the graph, we classify as a faithful probability distribution to its
graphical model (see [12]). More precisely, we say that is concentration faithful to its concentration graph if, for any triplet of subsets of pairwise disjoint, the following statement holds:
Similarly, is said to be covariance faithful to its covariance graph if, for any triplet of subsets of pairwise disjoint, the following statement holds: A natural question of both theoretical and
applied interest in probability theory is to understand the implications of the faithfulness assumption. This assumption is fundamental since it yields a bijection between the probability
distribution and the graph in terms of the independences that are present in the distribution. In this paper, we show that when is a multivariate Gaussian distribution, whose covariance graph is a
tree, it is necessarily covariance faithful, that is, such probability distributions satisfy property (1.8). Equivalently, the associated covariance graph is fully able to capture all the conditional
independences present in the multivariate distribution . This result can be considered as a dual of a previous probabilistic result proved by Becker et al. [13] for concentration graphs that
demonstrates that Gaussian distributions having concentration trees (i.e., the concentration graph is a tree) are necessarily concentration faithful to its concentration graph (implying that property
(1.7) is satisfied). This result was proved by showing that Gaussian distributions satisfy two types of conditional independence properties: the intersection property and the decomposable
transitivity property. The approach in the proof of the main result of this paper is vastly different from the one used for concentration graphs (see [13]). Indeed, a naïve or unsuspecting reader
could mistakenly think that the result for covariance trees follows simply by replacing the covariance matrix with its inverse in the result in Becker et al. [13]. This is of course incorrect and, in
some sense, equivalent to saying that a matrix and its inverse are the same. The covariance matrix encodes marginal independences whereas the inverse covariance matrix encodes conditional
independences. These are very different models. Moreover, the former is a curved exponential family model whereas the latter is a natural exponential family model.
The outline of this paper is as follows. Section 2 presents graph theory preliminaries. Section 2.2 gives a brief overview of covariance and concentration graphs associated with multivariate Gaussian
distributions. The proof of the main result of this paper is given in Section 3. Section 4 concludes by summarizing the results in the paper and the implications thereof.
2. Preliminaries
2.1. Graph Theoretic Concepts
This section introduces notation and terminology that is required in subsequent sections. An undirected graph $G = (V, E)$ consists of two sets $V$ and $E$, with $V$ representing the set of vertices and $E$ the set of edges, satisfying $E \subseteq V \times V$. For $u, v \in V$, we write $u \sim v$ when $(u, v) \in E$, and we say that $u$ and $v$ are adjacent in $G$. A path connecting two distinct vertices $u$ and $v$ in $G$ is a sequence of distinct vertices $(v_0, v_1, \ldots, v_n)$, where $v_0 = u$, $v_n = v$, and $v_i \sim v_{i+1}$ for every $i = 0, \ldots, n-1$. Such a path will be denoted $p_{uv}$, and we say that $p_{uv}$ connects $u$ and $v$, or alternatively that $u$ and $v$ are connected by $p_{uv}$. We also denote by $\mathcal{P}_{uv}$ the set of paths between $u$ and $v$. We now proceed to define the subclass of graphs known as trees. Let $G = (V, E)$ be an undirected graph. The graph $G$ is called a tree if any pair of distinct vertices $u, v$ in $V$ is connected by exactly one path, that is, $|\mathcal{P}_{uv}| = 1$. A subgraph of $G$ induced by a subset $U \subseteq V$ is denoted by $G_U = (U, E_U)$, where $E_U = E \cap (U \times U)$. A connected component of a graph $G$ is a maximal subgraph of $G$ such that each pair of its vertices can be connected by at least one path within it. We now state a lemma, without proof, that is needed in the proof of the main result of this paper.
Lemma 2.1. Let $G = (V, E)$ be an undirected graph. If $G$ is a tree, then any subgraph of $G$ induced by a subset of $V$ is a union of connected components, each of which is a tree (or what we will refer to as a “union of tree connected components”).
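Lemma 2.1 is easy to verify computationally. The sketch below is a plain-Python illustration on a hypothetical 7-vertex tree (not from the paper): it removes two vertices and checks that every connected component of the induced subgraph satisfies the tree condition $|E| = |V| - 1$.

```python
def components(vertices, edges):
    # Connected components via depth-first search over an adjacency map.
    adj = {v: set() for v in vertices}
    for a, b in edges:
        adj[a].add(b)
        adj[b].add(a)
    seen, comps = set(), []
    for v in vertices:
        if v in seen:
            continue
        comp, stack = set(), [v]
        while stack:
            u = stack.pop()
            if u in seen:
                continue
            seen.add(u)
            comp.add(u)
            stack.extend(adj[u])
        comps.append(comp)
    return comps

# A tree on 7 vertices: the path 1-2-3-4-5-6 with an extra branch 4-7.
V = set(range(1, 8))
E = [(1, 2), (2, 3), (3, 4), (4, 5), (5, 6), (4, 7)]

# Induce the subgraph on V \ {3, 5}: a forest with components {1,2}, {4,7}, {6}.
U = V - {3, 5}
EU = [(a, b) for (a, b) in E if a in U and b in U]

for comp in components(U, EU):
    comp_edges = [(a, b) for (a, b) in EU if a in comp]
    # A connected graph is a tree exactly when |E| = |V| - 1.
    assert len(comp_edges) == len(comp) - 1
```

Each component passes the tree check, as the lemma asserts for any induced subgraph of a tree.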
For a connected graph $G$, a separator is a subset $S$ of $V$ such that there exists a pair of nonadjacent vertices $u$ and $v$ with $u, v \notin S$ for which every path connecting $u$ and $v$ intersects $S$; in this case we say that $S$ separates $u$ and $v$. If $S$ is a separator, then it is easily verified that every $S'$ such that $S \subseteq S' \subseteq V \setminus \{u, v\}$ is also a separator.
2.2. Gaussian Concentration and Covariance Graphs
In this section, we present a brief overview of concentration and covariance graphs in the case when the probability distribution is multivariate Gaussian. Consider a random variable $X \sim N_p(\mu, \Sigma)$, where $\mu \in \mathbb{R}^p$ and $\Sigma \in PD_p$, where $PD_p$ denotes the cone of positive definite matrices. Without loss of generality, we will assume that $\mu = 0$. Gaussian distributions can also be parameterized by the inverse of the covariance matrix, denoted by $K = \Sigma^{-1} = (k_{ij})$. The matrix $K$ is called the precision or concentration matrix. It is well known (see [2]) that for any pair of variables $(X_i, X_j)$, where $i \neq j$, $X_i \perp\!\!\!\perp X_j \mid X_{V \setminus \{i, j\}} \iff k_{ij} = 0$. Hence, the concentration graph $G$ can be constructed simply using the precision matrix and the following rule: $i \sim j \iff k_{ij} \neq 0$. Furthermore, it can be easily deduced from a classical result (see [2]) that for any Gaussian concentration graph model the pairwise Markov property in (1.2) is equivalent to the concentration global Markov property in (1.4).
As seen earlier in (1.1), covariance graphs, on the other hand, are constructed using pairwise marginal independence relationships. It is also well known that, for multivariate Gaussian distributions, $X_i \perp\!\!\!\perp X_j \iff \sigma_{ij} = 0$. Hence, in the Gaussian case, the covariance graph $G_0$ can be constructed using the following rule: $i \sim j \iff \sigma_{ij} \neq 0$. It is also easily seen that Gaussian distributions satisfy the covariance intersection property defined in (1.5). Hence, Gaussian covariance graphs can also encode conditional independences according to the following rule: for any triplet $(A, B, C)$ of subsets of $V$ pairwise disjoint, if $C$ separates $A$ and $B$ in the covariance graph $G_0$, then $X_A \perp\!\!\!\perp X_B \mid X_{V \setminus (A \cup B \cup C)}$.
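The two zero-pattern construction rules can be illustrated numerically. The sketch below is a plain-Python illustration (the 3×3 covariance matrix is a hypothetical example, not taken from the paper): it reads the covariance graph off the zero pattern of $\Sigma$ and the concentration graph off the zero pattern of $\Sigma^{-1}$, computed here from signed minors.

```python
def det(m):
    # Laplace expansion; adequate for the small matrices used here.
    n = len(m)
    if n == 0:
        return 1.0
    if n == 1:
        return m[0][0]
    return sum((-1) ** j * m[0][j]
               * det([row[:j] + row[j + 1:] for row in m[1:]])
               for j in range(n))

def inverse(m):
    # Cofactor (adjugate) formula for the inverse.
    n, d = len(m), det(m)
    def minor(i, j):
        return [row[:j] + row[j + 1:] for k, row in enumerate(m) if k != i]
    return [[(-1) ** (i + j) * det(minor(j, i)) / d for j in range(n)]
            for i in range(n)]

def edges(m, tol=1e-12):
    # Edge set read off the zero pattern of a symmetric matrix.
    n = len(m)
    return {(i, j) for i in range(n) for j in range(i + 1, n)
            if abs(m[i][j]) > tol}

# Hypothetical covariance matrix whose zero pattern is the path 0 - 1 - 2:
S = [[1.0, 0.5, 0.0],
     [0.5, 1.0, 0.5],
     [0.0, 0.5, 1.0]]
K = inverse(S)

assert edges(S) == {(0, 1), (1, 2)}          # covariance graph: a tree
assert edges(K) == {(0, 1), (0, 2), (1, 2)}  # concentration graph: complete
```

Note that the covariance graph of this example is a tree while its concentration graph comes out complete, the phenomenon formalized later in Lemma 3.3.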
3. Gaussian Covariance Faithful Trees
We now proceed to study the faithfulness assumption in the context of multivariate Gaussian distributions when the associated covariance graphs are trees. The main result of this paper, presented in Theorem 3.1, proves that multivariate Gaussian probability distributions having tree covariance graphs are necessarily faithful to their covariance graphs; that is, all of the independences and dependences in the distribution can be read off by using graph separation. We now formally state Theorem 3.1. The proof follows shortly after a series of lemmas/theorem(s) and an illustrative example.
Theorem 3.1. Let $X$ be a random vector with Gaussian distribution $N_p(0, \Sigma)$. Let $G_0$ be the covariance graph associated with $\Sigma$. If $G_0$ is a disjoint union of trees, then $N_p(0, \Sigma)$ is covariance faithful to $G_0$.
The proof of Theorem 3.1 requires, among others, a result that gives a method to compute the covariance matrix from the precision matrix using the paths in the concentration graph $G$. The result can also be easily extended to show that the precision matrix can be computed from the covariance matrix using the paths in the covariance graph $G_0$. We now formally state this result.

Lemma 3.2. Let $X$ be a random vector with Gaussian distribution $N_p(0, \Sigma)$, where $\Sigma$ and $K = \Sigma^{-1}$ are positive definite matrices. Let $G$ and $G_0$ denote, respectively, the concentration and covariance graph associated with the probability distribution of $X$. For all $i \neq j$ in $V$,
$$\sigma_{ij} = \sum_{p \in \mathcal{P}_{ij}(G)} (-1)^{n_p} \Big( \prod_{l=1}^{n_p} k_{v_{l-1} v_l} \Big) \frac{\det(K_{\setminus p})}{\det(K)},$$
where, if $p = (v_0 = i, v_1, \ldots, v_{n_p} = j)$, $K_{\setminus p}$ and $\Sigma_{\setminus p}$ denote, respectively, $K$ and $\Sigma$ with rows and columns corresponding to variables in path $p$ omitted. The determinant of a zero-dimensional matrix is defined to be 1.
The lemma above follows immediately from a basic result in linear algebra which gives the cofactor expression for the inverse of a square matrix. In particular, for an invertible matrix $A$, its inverse can be expressed as follows:
$$(A^{-1})_{ij} = (-1)^{i+j} \, \frac{\det(A_{-j,-i})}{\det(A)},$$
where $A_{-j,-i}$ denotes $A$ with row $j$ and column $i$ deleted. A simple proof can be found in Brualdi and Cvetkovic [14]. The result has been rediscovered in other contexts (see [15]) but, as noted above, it follows immediately from the expression for the inverse of a matrix.
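The cofactor expression $(A^{-1})_{ij} = (-1)^{i+j}\det(A_{-j,-i})/\det(A)$ can be checked directly. The sketch below is a plain-Python illustration on an arbitrary positive definite matrix chosen for the example: it builds the inverse entrywise from signed minors and verifies that the result really inverts the matrix.

```python
def det(m):
    # Laplace expansion along the first row (fine for small matrices).
    n = len(m)
    if n == 0:
        return 1.0
    if n == 1:
        return m[0][0]
    return sum((-1) ** j * m[0][j]
               * det([row[:j] + row[j + 1:] for row in m[1:]])
               for j in range(n))

def cofactor_inverse(m):
    # (A^{-1})_{ij} = (-1)^{i+j} det(A with row j and column i deleted) / det(A)
    n, d = len(m), det(m)
    def minor(i, j):
        return [row[:j] + row[j + 1:] for k, row in enumerate(m) if k != i]
    return [[(-1) ** (i + j) * det(minor(j, i)) / d for j in range(n)]
            for i in range(n)]

# Arbitrary symmetric, diagonally dominant (hence positive definite) example:
A = [[4.0, 1.0, 0.5],
     [1.0, 3.0, 0.2],
     [0.5, 0.2, 2.0]]
Ainv = cofactor_inverse(A)

# A @ Ainv must be the identity matrix (up to rounding).
for i in range(3):
    for j in range(3):
        entry = sum(A[i][k] * Ainv[k][j] for k in range(3))
        assert abs(entry - (1.0 if i == j else 0.0)) < 1e-9
```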
The proof of our main theorem (Theorem 3.1) also requires the results proved in the lemma below.
Lemma 3.3. Let $X$ be a random vector with Gaussian distribution $N_p(0, \Sigma)$. Let $G_0$ and $G$ denote, respectively, the covariance and concentration graphs associated with $N_p(0, \Sigma)$. Then (i) $G_0$ and $G$ have the same connected components; (ii) if a given connected component in $G_0$ is a tree, then the corresponding connected component in $G$ is complete, and vice versa.
Proof. Proof of (i): the fact that $G_0$ and $G$ have the same connected components can be deduced from the matrix structure of the covariance and the precision matrix. The connected components of $G_0$ correspond to diagonal blocks in $\Sigma$. Since $K = \Sigma^{-1}$, then, by the properties of inverses of partitioned matrices, $K$ has the same block diagonal structure as $\Sigma$ in terms of the variables that constitute these blocks. These blocks correspond to distinct components in $G_0$ and $G$. Hence, both graphs have the same connected components.
Proof of (ii): let us assume now that the covariance graph $G_0$ is a tree; hence it is a connected graph with only one connected component. We will prove that the concentration graph $G$ is complete by using Lemma 3.2 and computing an arbitrary coefficient $k_{ij}$ ($i \neq j$). Since $G_0$ is a tree, there exists exactly one path between any two vertices $i$ and $j$. We will denote this path as $p = (v_0 = i, v_1, \ldots, v_{n_p} = j)$. Then, by Lemma 3.2 (applied with the roles of $\Sigma$ and $K$ interchanged),
$$k_{ij} = (-1)^{n_p} \Big( \prod_{l=1}^{n_p} \sigma_{v_{l-1} v_l} \Big) \frac{\det(\Sigma_{\setminus p})}{\det(\Sigma)}. \qquad (3.3)$$
First, note that the determinants of the matrices in (3.3) are all positive, since principal minors of positive definite matrices are positive. Second, since we are considering a path in $G_0$, $\sigma_{v_{l-1} v_l} \neq 0$ for $l = 1, \ldots, n_p$. Using these two facts, we deduce from (3.3) that $k_{ij} \neq 0$ for all $i \neq j$. Hence, $i$ and $j$ are adjacent in $G$ for all $i \neq j$; the concentration graph is therefore complete. The proof that $G_0$ is complete when $G$ is assumed to be a tree follows by the symmetric argument.
We now give an example illustrating the main result in this paper (Theorem 3.1).
Example 3.4. Consider a Gaussian random vector $X \sim N_p(0, \Sigma)$ with covariance matrix $\Sigma$ and its associated covariance graph $G_0$ (which is a tree) as given in Figure 1(a).
Consider the disjoint sets $A$, $B$, and $C$ shown in Figure 1(a). Note that $C$ does not separate $A$ and $B$ in $G_0$, as a path from $A$ to $B$ does not intersect $C$. Hence, we cannot use the covariance global Markov property to claim that $X_A$ is independent of $X_B$ given $X_{V \setminus (A \cup B \cup C)}$. This is because the covariance global Markov property allows us to read conditional independences present in a distribution if a separation is present in the graph. It is not an “if and only if” property, in the sense that the lack of a separation in the graph does not necessarily imply the lack of the corresponding conditional independence. We will show, however, that in this example $X_A$ is indeed not independent of $X_B$ given $X_{V \setminus (A \cup B \cup C)}$. In other words, we will show that the graph has the ability to capture this conditional dependence present in the probability distribution $N_p(0, \Sigma)$.

Let us now examine the relationship between $X_A$ and $X_B$ given $X_{V \setminus (A \cup B \cup C)}$. Write $D = V \setminus C$. Note that the covariance graph associated with the probability distribution of the random vector $X_D$ is the subgraph represented in Figure 1(b) and can be obtained directly as the subgraph of $G_0$ induced by the subset $D$.

Since 2 and 5 are connected by exactly one path $p_{25} = (v_0 = 2, v_1, \ldots, v_{n_p} = 5)$ in this induced subgraph, the coefficient $k^{D}_{25}$, that is, the coefficient between 2 and 5 in the inverse of the covariance matrix of $X_D$, can be computed using Lemma 3.2 as follows:
$$k^{D}_{25} = (-1)^{n_p} \Big( \prod_{l=1}^{n_p} \sigma_{v_{l-1} v_l} \Big) \frac{\det(\Sigma_{D \setminus p_{25}})}{\det(\Sigma_D)}, \qquad (3.4)$$
where $\Sigma_D$ and $\Sigma_{D \setminus p_{25}}$ are, respectively, the covariance matrices of the Gaussian random vectors $X_D$ and $X_{D \setminus p_{25}}$. Hence, $k^{D}_{25} \neq 0$, since the right-hand side of the equation in (3.4) is different from zero. Now, recall that, for any Gaussian random vector $X$ and pairwise disjoint subsets $A$, $B$, and $C$ of $V$,
$$X_A \perp\!\!\!\perp X_B \mid X_C \implies k^{A \cup B \cup C}_{ij} = 0 \quad \text{for all } i \in A, \; j \in B, \qquad (3.5)$$
where $k^{A \cup B \cup C}$ denotes the inverse of the covariance matrix of $X_{A \cup B \cup C}$. The contrapositive of (3.5) yields: if $k^{A \cup B \cup C}_{ij} \neq 0$ for some $i \in A$ and $j \in B$, then $X_A$ is not independent of $X_B$ given $X_C$. Hence, applying this with the conditioning set $V \setminus (A \cup B \cup C)$, with $2 \in A$ and $5 \in B$, we conclude that, since $C$ does not separate $A$ and $B$, $X_A$ is not independent of $X_B$ given $X_{V \setminus (A \cup B \cup C)}$. Thus, we obtain the desired result.
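The computation pattern of this example can be reproduced numerically. Since the matrix of Figure 1 is not available here, the sketch below uses a hypothetical path-structured covariance matrix (an assumed illustration, not the paper's example): two non-adjacent tree vertices are marginally independent, yet the corresponding entry in the inverse of a marginal covariance matrix is nonzero, certifying a conditional dependence in the style of (3.4).

```python
def det(m):
    # Laplace expansion along the first row (fine for small matrices).
    n = len(m)
    if n == 0:
        return 1.0
    if n == 1:
        return m[0][0]
    return sum((-1) ** j * m[0][j]
               * det([row[:j] + row[j + 1:] for row in m[1:]])
               for j in range(n))

def inverse(m):
    # Cofactor (adjugate) formula for the inverse.
    n, d = len(m), det(m)
    def minor(i, j):
        return [row[:j] + row[j + 1:] for k, row in enumerate(m) if k != i]
    return [[(-1) ** (i + j) * det(minor(j, i)) / d for j in range(n)]
            for i in range(n)]

# Hypothetical covariance matrix whose zero pattern is the path 0-1-2-3-4
# (off-diagonal 0.4 keeps it positive definite):
r = 0.4
S = [[1.0 if i == j else (r if abs(i - j) == 1 else 0.0)
      for j in range(5)] for i in range(5)]

# Vertices 0 and 2 are non-adjacent in the covariance tree: marginally
# independent in the Gaussian case.
assert S[0][2] == 0.0

# Marginal covariance of (X0, X1, X2) and its inverse:
sub = [[S[i][j] for j in (0, 1, 2)] for i in (0, 1, 2)]
Ksub = inverse(sub)

# Nonzero (0, 2) precision entry: X0 and X2 are dependent given X1,
# as the unique path 0-1-2 in the induced covariance subgraph predicts.
assert abs(Ksub[0][2]) > 1e-9
```

Here the single path 0-1-2 forces the precision entry to be a product of nonzero covariances times a ratio of positive determinants, exactly the mechanism exploited in the proof of Theorem 3.1.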
We now proceed to the proof of Theorem 3.1.
Proof of Theorem 3.1. Without loss of generality, we assume that $G_0$ is a connected tree. Let us assume to the contrary that $N_p(0, \Sigma)$ is not covariance faithful to $G_0$. Then there exists a triplet $(A, B, C)$ of pairwise disjoint subsets of $V$ such that
$$X_A \perp\!\!\!\perp X_B \mid X_{V \setminus (A \cup B \cup C)}, \quad \text{but } C \text{ does not separate } A \text{ and } B \text{ in } G_0. \qquad (3.8)$$
As $C$ does not separate $A$ and $B$, and since $G_0$ is a connected tree, there exists a pair of vertices $(u, v) \in A \times B$ such that the single path $p_{uv}$ connecting $u$ and $v$ in $G_0$ does not intersect $C$; that is, $p_{uv} \cap C = \emptyset$. Hence, $p_{uv} \subseteq V \setminus C$. Thus, two cases are possible with regard to where the path $p_{uv}$ can lie: either $p_{uv} \subseteq A \cup B$ or $p_{uv} \cap (V \setminus (A \cup B \cup C)) \neq \emptyset$. Let us examine both cases separately.
Case 1 ($p_{uv} \subseteq A \cup B$). In this case, the entire path between $u$ and $v$ lies in $A \cup B$, and hence we can find a pair of consecutive vertices $(a, b)$ on the path, with $a \in A$ and $b \in B$, such that $a \sim b$. (As an illustration of this point, consider the graph presented in Figure 1(a): a path lying entirely in $A \cup B$ must contain two adjacent vertices, one in $A$ and one in $B$.) Recall that, since $G_0$ is a tree, any graph induced from $G_0$ by a subset of $V$ is a union of tree connected components (see Lemma 2.1). Hence, the subgraph of $G_0$ induced by $S = \{a, b\} \cup (V \setminus (A \cup B \cup C))$ is a union of tree connected components. As $a$ and $b$ are adjacent in $G_0$, they are also adjacent in this induced subgraph and belong to the same connected component of it. Hence, the only path between $a$ and $b$ in the induced subgraph is precisely the edge $(a, b)$. Using Lemma 3.2 to compute the coefficient $k^{S}_{ab}$, that is, the $(a, b)$ coefficient in the inverse of the covariance matrix of the random vector $X_S$, we obtain
$$k^{S}_{ab} = -\sigma_{ab} \, \frac{\det(\Sigma_{S \setminus \{a, b\}})}{\det(\Sigma_S)}, \qquad (3.9)$$
where $\Sigma_S$ denotes the covariance matrix of $X_S$ and $\Sigma_{S \setminus \{a, b\}}$ denotes the matrix $\Sigma_S$ with the rows and the columns corresponding to variables $a$ and $b$ omitted. We can therefore deduce from (3.9) that $k^{S}_{ab} \neq 0$. Now, since $X$ is Gaussian, $a \in A$, $b \in B$, and $S \setminus \{a, b\} = V \setminus (A \cup B \cup C)$, we can apply (3.5) to arrive at a contradiction to our initial assumption in (3.8).

Note that, in the case that $V \setminus (A \cup B \cup C)$ is empty, the path $p_{uv}$ has to lie entirely in $A \cup B$. This is because by assumption $p_{uv}$ does not intersect $C$. The case when $p_{uv}$ lies in $A \cup B$ is covered in Case 1, and hence in Case 2 it is assumed that $V \setminus (A \cup B \cup C) \neq \emptyset$.

Case 2 ($p_{uv} \cap (V \setminus (A \cup B \cup C)) \neq \emptyset$). In this case, there exists a pair of vertices $(a, b)$ with $a \in A$ and $b \in B$ such that $a$ and $b$ are connected by exactly one path in the graph induced from $G_0$ by $\{a, b\} \cup (V \setminus (A \cup B \cup C))$ (see Lemma 2.1). (In the example of Figure 1, such a pair corresponds to the vertices 2 and 7, joined by a path contained in this induced subgraph.)
Let us now use Lemma 3.2 to compute the coefficient $k^{S}_{ab}$, that is, the $(a, b)$-coefficient in the inverse of the covariance matrix of the random vector $X_S$, where $S = \{a, b\} \cup (V \setminus (A \cup B \cup C))$. Denoting by $p = (v_0 = a, v_1, \ldots, v_{n_p} = b)$ the unique path connecting $a$ and $b$ in the induced subgraph, we obtain
$$k^{S}_{ab} = (-1)^{n_p} \Big( \prod_{l=1}^{n_p} \sigma_{v_{l-1} v_l} \Big) \frac{\det(\Sigma_{S \setminus p})}{\det(\Sigma_S)}, \qquad (3.10)$$
where $\Sigma_S$ denotes the covariance matrix of $X_S$ and $\Sigma_{S \setminus p}$ denotes $\Sigma_S$ with the rows and the columns corresponding to variables in path $p$ omitted. One can therefore easily deduce from (3.10) that $k^{S}_{ab} \neq 0$. Thus, $X_a$ is not independent of $X_b$ given $X_{V \setminus (A \cup B \cup C)}$. Hence, once more we obtain a contradiction to our assumption in (3.8), by (3.5), since $a \in A$ and $b \in B$.
Remark 3.5. The dual result of the theorem above for the case of concentration trees was proved by Becker et al. [13]. We note, however, that the argument used in the proof of Theorem 3.1 cannot also be used to prove faithfulness of Gaussian distributions that have trees as concentration graphs. The reason for this is as follows. In our proof, we employed the fact that the subgraph of $G_0$ induced by a subset $S \subseteq V$ is also the covariance graph associated with the Gaussian subrandom vector $X_S$. Hence, it was possible to compute the coefficient $k^{S}_{ab}$, which quantifies the conditional (in)dependence between $X_a$ and $X_b$ given $X_{S \setminus \{a, b\}}$, in terms of the paths in this induced subgraph and the coefficients of the covariance matrix of $X_S$. On the contrary, in the case of concentration graphs, the subgraph of the concentration graph induced by a subset $S$ is not in general the concentration graph of the random vector $X_S$. Hence, our approach is not directly applicable in the concentration graph setting.
4. Conclusion
In this note we looked at the class of multivariate Gaussian distributions that are Markov with respect to covariance graphs and proved that Gaussian distributions which have trees as their covariance graphs are necessarily faithful. The method of proof used in the paper is also vastly different in nature from the proof of the analogous result for concentration graph models. Hence, the approach that is used could potentially have further implications. Future research in this area will explore whether the analysis presented in this paper can be extended to other classes of graphs.
Acknowledgments

D. Malouche was supported in part by a Fulbright Fellowship Grant 68434144. B. Rajaratnam was supported in part by NSF grants DMS0906392, DMS(CMG)1025465, AGS1003823, and NSA Grant H98230-11-1-0194.

References
1. D. R. Cox and N. Wermuth, Multivariate Dependencies, vol. 67 of Monographs on Statistics and Applied Probability, Chapman & Hall, London, UK, 1996.
2. S. L. Lauritzen, Graphical Models, vol. 17 of Oxford Statistical Science Series, The Clarendon Press Oxford University Press, New York, NY, USA, 1996.
3. J. Whittaker, Graphical Models in Applied Multivariate Statistics, Wiley Series in Probability and Mathematical Statistics: Probability and Mathematical Statistics, John Wiley & Sons Ltd.,
Chichester, UK, 1990.
4. D. Edwards, Introduction to Graphical Modelling, Springer Texts in Statistics, Springer, New York, NY, USA, 2nd edition, 2000.
5. B. Rajaratnam, H. Massam, and C. M. Carvalho, “Flexible covariance estimation in graphical Gaussian models,” The Annals of Statistics, vol. 36, no. 6, pp. 2818–2849, 2008.
6. G. Kauermann, “On a dualization of graphical Gaussian models,” Scandinavian Journal of Statistics, vol. 23, no. 1, pp. 105–116, 1996.
7. M. Banerjee and T. Richardson, “On a dualization of graphical Gaussian models: a correction note,” Scandinavian Journal of Statistics, vol. 30, no. 4, pp. 817–820, 2003.
8. N. Wermuth, D. R. Cox, and G. M. Marchetti, “Covariance chains,” Bernoulli, vol. 12, no. 5, pp. 841–862, 2006.
9. D. Malouche, “Determining full conditional independence by low-order conditioning,” Bernoulli, vol. 15, no. 4, pp. 1179–1189, 2009.
10. K. Khare and B. Rajaratnam, “Covariance trees and Wishart distributions on cones,” in Algebraic Methods in Statistics and Probability II, vol. 516 of Contemporary Mathematics, pp. 215–223,
American Mathematical Society, Providence, RI, USA, 2010.
11. K. Khare and B. Rajaratnam, “Wishart distributions for decomposable covariance graph models,” The Annals of Statistics, vol. 39, no. 1, pp. 514–555, 2011.
12. M. Studený, Probabilistic Conditional Independence Structures, Springer, New York, NY, USA, 2004.
13. A. Becker, D. Geiger, and C. Meek, “Perfect tree-like Markovian distributions,” Probability and Mathematical Statistics, vol. 25, no. 2, pp. 231–239, 2005.
14. R. A. Brualdi and D. Cvetkovic, A Combinatorial Approach to Matrix Theory and Its Applications, Chapman & Hall/CRC, New York, NY, USA, 2008.
15. B. Jones and M. West, “Covariance decomposition in undirected Gaussian graphical models,” Biometrika, vol. 92, no. 4, pp. 779–786, 2005.