content
stringlengths 86
994k
| meta
stringlengths 288
619
|
|---|---|
Figure 1: A schematic illustration of the relationship between density, density fluctuations, and temperature in a one-dimensional Fermi gas. The three grids represent a 1D phase space, with axes x
and p[x]. Each box represents a phase-space cell (with volume = h). At most, one particle is permitted per box. The density corresponds to the number of atoms per column. The temperature is related
to how the number of atoms per row decreases with p[x]. A higher temperature means more population in high momentum states. (a) A cold dense gas. (b) A cold but less dense gas. (c) A dense but hotter
gas. The density fluctuations (that is, the variance in the number of particles per column) are lowest in (a). If the absolute density is known, a measurement of the density fluctuations gives
information about the absolute temperature. This relationship is embodied in the fluctuation and dissipation theorem.
|
{"url":"http://physics.aps.org/articles/large_image/f1/10.1103/Physics.3.59","timestamp":"2014-04-17T10:24:43Z","content_type":null,"content_length":"1857","record_id":"<urn:uuid:f50b08a0-4f71-41c4-9e87-85d0bc5b2d47>","cc-path":"CC-MAIN-2014-15/segments/1397609527423.39/warc/CC-MAIN-20140416005207-00130-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Colmar Manor, MD Geometry Tutor
Find a Colmar Manor, MD Geometry Tutor
...F. Computations of Derivatives. III.
21 Subjects: including geometry, calculus, statistics, algebra 1
...In preparation for my own academic degrees, I have completed math classes up through calculus. I have taught algebra via chemistry and physics classes. Most recently, I have tutored algebra.
5 Subjects: including geometry, chemistry, algebra 1, algebra 2
I am a hydraulic/civil engineer by profession. For my undergraduate program, I was the recipient of the prize for an outstanding student. I have both classroom teaching and private tutoring
14 Subjects: including geometry, chemistry, calculus, physics
...I currently teach a web design class where I teach my students how to build websites, from scratch, without the use of a web editor. Although I am a secondary teacher, I do have experience
teaching elementary students as well, particularly with the older ones. For 3 years I taught summer enrichment classes to elementary students, ranging from grades 1 through 5.
29 Subjects: including geometry, reading, writing, algebra 1
...My 30 years of services in the system of education includes teaching high school and college math, developing elementary and middle school grade texts, and tutoring students of different
grades including my own daughter and son. My philosophy of teaching is that every student can be successful i...
7 Subjects: including geometry, algebra 1, algebra 2, SAT math
Related Colmar Manor, MD Tutors
Colmar Manor, MD Accounting Tutors
Colmar Manor, MD ACT Tutors
Colmar Manor, MD Algebra Tutors
Colmar Manor, MD Algebra 2 Tutors
Colmar Manor, MD Calculus Tutors
Colmar Manor, MD Geometry Tutors
Colmar Manor, MD Math Tutors
Colmar Manor, MD Prealgebra Tutors
Colmar Manor, MD Precalculus Tutors
Colmar Manor, MD SAT Tutors
Colmar Manor, MD SAT Math Tutors
Colmar Manor, MD Science Tutors
Colmar Manor, MD Statistics Tutors
Colmar Manor, MD Trigonometry Tutors
Nearby Cities With geometry Tutor
Bladensburg, MD geometry Tutors
Brentwood, MD geometry Tutors
Cottage City, MD geometry Tutors
Edmonston, MD geometry Tutors
Fairmount Heights, MD geometry Tutors
Garrett Park geometry Tutors
Hyattsville geometry Tutors
Mount Rainier geometry Tutors
North Brentwood, MD geometry Tutors
Riverdale Park, MD geometry Tutors
Riverdale Pk, MD geometry Tutors
Riverdale, MD geometry Tutors
Seat Pleasant, MD geometry Tutors
Somerset, MD geometry Tutors
University Park, MD geometry Tutors
|
{"url":"http://www.purplemath.com/colmar_manor_md_geometry_tutors.php","timestamp":"2014-04-18T18:40:55Z","content_type":null,"content_length":"24072","record_id":"<urn:uuid:353eff34-f79e-49d6-b24b-f532a7f9e174>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00143-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Variable Depth Shallow Water Wave Equation
From WikiWaves
We consider here the problem of waves reflected by a region of variable depth in a finite region or in an otherwise uniform depth region assuming the equations of Shallow Depth (assuming the problem
is linear). We consider slightly more general equations of motion so that the same method could be used for a variable density string.
We begin with the shallow depth equation
$\rho(x)\partial_t^2 \zeta = \partial_x \left(h(x) \partial_x \zeta \right).$
subject to the initial conditions
$\zeta_{t=0} = \zeta_0(x)\,\,\,{\rm and}\,\,\, \partial_t\zeta_{t=0} = \partial_t\zeta_0(x)$
where $\zeta$ is the displacement, $\rho$ is the string density and $h(x)$ is the variable depth (note that we are unifying the variable density string and the wave equation in variable depth because
the mathematical treatment is identical).
Waves in a finite basin
We consider the problem of waves in a finite basin $-L<x<L$. At the edge of the basin the boundary conditions are
$\left.\partial_x \zeta\right|_{x=-L} = \left.\partial_x \zeta\right|_{x=L} =0$
We solve the equations by expanding in the modes for the basin which satisfy
$\partial_x \left(h(x) \partial_x \zeta_n \right) = -\lambda_n \rho(x) \zeta_n ,$
normalised so that
$\int_{-L}^L \rho\zeta_n \zeta_m = \delta_{mn}.$
The solution is then given by
$\zeta(x,t) = \sum_{n=0}^{\infty} \left(\int_{-L}^L \rho(x)\zeta_n(x^\prime) \zeta_0 (x^\prime) dx^\prime \right) \zeta_n(x) \cos(\sqrt{\lambda_n} t )$
$+ \sum_{n=1} ^{\infty} \left(\int_{-L}^L \rho(x)\zeta_n(x^\prime) \partial_t\zeta_0 (x^\prime) dx^\prime \right) \zeta_n(x) \frac{\sin(\sqrt{\lambda_n} t )}{\sqrt{\lambda_n}}$
where we have assumed that $\lambda_0 = 0$.
Calculation of $\zeta_n$
We can calculate the eigenfunctions $\zeta_n$ by an expansion in the modes for the case of uniform depth. We use the Rayleigh-Ritz method. The eigenfunctions are local minimums of
$J[\zeta] = \int_{-L}^L \frac{1}{2}\left\{ h(x)\left(\partial_x \zeta\right)^2 - \lambda \rho(x) \zeta^2 \right\}$
subject to the boundary conditions that the normal derivative vanishes (where $\lambda$ is the eigenvalue).
We expand the displacement in the eigenfunctions for constant depth $h=1$
$\zeta = \sum_{n=0}^{N} a_n \psi_n(x)$
$\psi_n = \frac{1}{\sqrt{L}} \cos( n \pi (\frac{1}{2L}x + \frac{1}{2})),\,\,ne 1$
$\psi_0 = \frac{1}{\sqrt{2L}},\,$
and substitute this expansion into the variational equation we obtain
$J[\vec{a}] = \int_{-L}^L \frac{1}{2}\left\{ h(x)\left( \sum_{n=0}^{N} a_n \partial_x \psi_n(x)\right)^2 - \lambda \rho(x) \left(\sum_{n=0}^{N} a_n \psi_n(x)\right)^2 \right\}$
$\frac{dJ}{da_m} = \int_{-L}^L \frac{1}{2}\left\{ h(x)2\partial_x \psi_m(x)\left( \sum_{n=0}^{N} a_n \partial_x \psi_n(x)\right) - \lambda \rho(x)2\psi_m(x)\left(\sum_{n=0}^{N} a_n \psi_n(x)\right) \
right\} = 0$
$\int_{-L}^L \left\{ h(x)\partial_x \psi_m(x)\left( \sum_{n=0}^{N} a_n \partial_x \psi_n(x)\right) \right\} = \int_{-L}^L \left\{ \lambda \rho(x)\psi_m(x)\left(\sum_{n=0}^{N} a_n \psi_n(x)\right) \
$\sum_{n=0}^{N} a_n \int_{-L}^L \left\{ h(x)\partial_x \psi_n(x)\partial_x \psi_m(x)\right\} = \lambda \sum_{n=0}^{N} a_n \int_{-L}^L \left\{ \rho(x)\psi_n(x)\psi_m(x) \right\}$
this equation can be rewritten using matrices as
$\mathbf{K} \vec{a} = \lambda\mathbf{M} \vec{a}$
where the elements of the matrices K and M are
$\mathbf{K}_{mn} = \int_{-L}^L \left\{ \left(\partial_x \psi_m h(x) \partial_x \psi_n\right) \right\}$
$\mathbf{M}_{mn} = \int_{-L}^L \left\{\rho(x)\psi_n(x)\psi_m(x)\right\}$
Matlab Code
Code to calculate the solution in a finite basin can be found here finite_basin_variable_h_and_rho.m
Waves in an infinite basin
We assume that the density $\rho$ and the depth $h$ are constant and equal to one outside the region $-L<x<L$. We can therefore write the wave as
$u(x,t) = e^{-i k x} + R e^{i k x},\,\,x<-L$
$u(x,t) = Te^{-i k x},\,\,x<-L$
for waves incident from the right. To solve we use a solution to the problem on the interval $-L<x<L$ subject to arbitrary boundary conditions and match.
Solution in Finite Interval of Variable Properties
Taking a separable solution gives the eigenvalue problem
$\partial_x \left( h(x) \partial_x\zeta \right) = -\omega^{2}\rho(x)\zeta \quad (1)$
Given boundary conditions $\zeta \mid_{-L} = a$ and $\zeta \mid_L = b$ we can take $\zeta = \zeta_p + u$ With $\zeta_p = \frac{(b-a)}{2L}x + \frac{b+a}{2}$ satisfying the boundary conditions and $u$
satisfying $u |_{-L} = u |_L = 0$
Substituting this form into (1) gives
$\partial_x(h(x)\partial_x \zeta_p)+\partial_x(h(x)\partial_xu) = -\omega^{2}\left(\zeta_p+u\right)$
Or, on rearranging
$\partial_x(h(x)\partial_xu)+\omega^{2}u = -\partial_x(h(x)\partial_x \zeta_p)-\omega^{2}\zeta_p = f(x)\quad (2)$
$f(x) = -\partial_x(h(x)\partial_x \zeta_p)-\omega^{2}\zeta_p = -\frac{(b-a)}{2L}\partial_xh(x) - \omega^2 \left(\frac{(b-a)}{2L}x + \frac{b+a}{2}\right)$
Now consider the homogenous Sturm-Liouville problem for u
$\partial_x(h(x)\partial_xu)+\lambda u = 0\quad u|_0=u|_1=0 \quad (3)$
By Sturm-Liouville theory this has an infinite set of eigenvalues $\lambda_k$ with corresponding eigenfunctions $u_k$. Also since $u_k|_0=u_k|_1=0\quad \forall k$ Each $u_k$ can be expanded as a
fourier series in terms of sine functions.
$u_k = \sum_{n=1}^{\infty} a_{n,k} \psi_n$
$\psi_n = \sin\left(\frac{n\pi}{2}\left(\frac{x}{L} + 1\right)\right)$
Transforming (3) into the equivalent variational problem gives
$J[u] = \int_{-L}^{L}\,hu'^{2}-\lambda \rho u^{2} \, dx \quad (4)$
Substituting the fourier expansion for $u_k$ into (4):
$J = \int_{-L}^{L}\,h\left(\sum_{n=1}^{\infty} a_{n} \psi_n'\right)^{2}-\lambda \rho \left(\sum_{n=1}^{\infty} a_{n} \psi_n\right)^{2} \, dx \quad$
Since $u_k$ minimises J, we require $\frac{\partial J}{\partial a_{n}}=0 \quad \forall n$
$\frac{\partial J}{\partial a_{m}}=\int_{-L}^{L}\left\{2h\psi_m'\sum_{n=1}^{\infty} a_{n}\psi_n'-2\lambda\rho \psi_m \sum_{n=1}^{\infty} a_{n} \psi_n\right\}dx=0$
$\implies \int_{-L}^{L}2h\psi_m'\sum_{n=1}^{\infty} a_{n}\psi_n'dx= \int_{-L}^{L}2\lambda\rho \psi_m \sum_{n=1}^{\infty} a_{n} \psi_ndx$
$\implies \sum_{n=1}^{\infty}a_{n}\int_{-L}^{L}h\psi_m'\psi_n'dx= \lambda\sum_{n=1}^{\infty}a_{n}\int_{-L}^{L}\rho \psi_m \psi_ndx$
By defining a vector $\textbf{a} = \left(a_{n}\right)$ and matrices $K_{(n,m)} = \int_{-L}^{L}h(x)\psi_m'(x)\psi_n'(x)dx$ and $M_{(n,m)} = \int_{-L}^{L}\rho(x)\psi_m(x)\psi_n(x)dx$ we have the linear
system $K\textbf{a} = \lambda M\textbf{a}$ which returns the eigenvalues and eigenvectors of equation (3), with eigenvectors $\textbf{a}$ representing coefficient vectors of the fourier expansions of
If we now construct $u(x,\omega) = \sum_{k=1}^{\infty} b_k u_k$ and substitute this into equation (2) we get
$\sum_{k=1}^{\infty} (\partial_x(h(x)\partial_x u_k) + \omega^{2}\rho u_k)b_k = f(x)$
$\implies \sum_{k=1}^{\infty} (-\lambda_k\rho u_k + \omega^{2}\rho u_k)b_k = f(x)$
$\implies \sum_{k=1}^{\infty} (\omega^{2}-\lambda_k) b_k \rho u_k = f(x) \quad (5)$
And defining the RHS of equation (5) as $f(x)$, a known function, we can retrieve the coefficients $b_k$ by integrating against $u_k$
$b_k = \frac{\int_{-L}^{L}\,f u_k\,dx}{(\omega^{2}-\lambda_k) \int_{-L}^{L}\, \rho u_{k}^{2}\,dx}$
The coefficients $c_n$ of the fourier expansion of u are just $\sum_{k=1}^{\infty}a_{n,k}b_{k}$ with $a_{n.k}$ being the $n$th coefficient of the $k$th eigenfunction of the Sturm-Liouville problem.
$\zeta(x,\omega)=\frac{(b-a)}{2L}x + \frac{b+a}{2}+\sum_{n=1}^{\infty}c_{n} \sin\left(\frac{n\pi}{2}\left(\frac{x}{L} + 1\right)\right), \quad (6)$
with $\zeta |_{-L}=a \quad \zeta |_{L}=b$ and, given $a$ and $b$ explicitly differentiating $\zeta$ gives $\partial_x \zeta |_{-L}$ and $\partial_x \zeta |_L$.
Matching at $\pm L$
We choose a basis of the solution space for any particular $\omega$ to be $\{\zeta_1,\,\zeta_2\}$, where $\zeta_1$ is the solution to the BVP(a=1,b=0) and $\zeta_1$ is the solution to the BVP(a=0,b=
1). The functions $\zeta_1$ and $\zeta_2$ can be calculated for any $\omega$ from (6).
The aim here is to construct a matrix S such that, given $a$ and $b$
$S \begin{pmatrix} \zeta |_{-L} \\ \zeta |_L \end{pmatrix}=\begin{pmatrix} \partial_x \zeta |_{-L} \\ \partial_x \zeta |_L \end{pmatrix}$
Taking $a=1,\,b=0$ to give $\zeta_1$ shows that the first column of S must be $\begin{pmatrix} \partial_x \zeta_1 |_{-L} \\ \partial_x \zeta_1 |_L \end{pmatrix}$ and likewise taking $a=0,\,b=1$ to
give $\zeta_2$ shows the second column must be $\begin{pmatrix} \partial_x \zeta_2 |_{-L} \\ \partial_x \zeta_2 |_L \end{pmatrix}$. So S is given by
$\begin{pmatrix} \partial_x \zeta_1 |_{-L} \, \partial_x \zeta_2 |_{-L} \\ \partial_x \zeta_1 |_L \, \partial_x \zeta_2 |_L \end{pmatrix}$
Now for the area of constant depth on the left hand side there is a potential of the form $e^{i\omega x}$ which, creates reflected and transmitted potentials from the variable depth area of the form
$Re^{-i\omega x}$ and $Te^{i\omega x}$ respectively where the magnitudes of $R$ and $T$ are unknown. We can calculate that the boundary conditions for $\zeta$ must be
$\zeta |_{-L} = e^{-i\omega L}+Re^{i\omega L}, \quad \zeta |_L = Te^{i\omega L}, \quad \partial_x \zeta |_{-L} = i\omega e^{-i\omega L}-i\omega Re^{i\omega L}, \quad \partial_x \zeta |_L = i\omega Te
^{i\omega L}$
$\begin{pmatrix} i\omega e^{-i\omega L}-i\omega Re^{i\omega L} \\ i\omega Te^{i\omega L} \end{pmatrix} = S\begin{pmatrix} e^{-i\omega L}+Re^{i\omega L} \\ Te^{i\omega L} \end{pmatrix}$
Knowing S these boundary conditions can be solved for $R$ and $T$, which in turn gives actual numerical boundary conditions to the original problem($a_+=e^{-i\omega L}+Re^{i\omega L}$ and $b_+=Te^{i\
omega L}$). Taking a linear combination of the solutions already calculated ($a_+\zeta_1(x,\omega) + b_+\zeta_2(x,\omega)$) will provide the solution for these new boundary conditions. This solution,
along with the potentials outside this region gives a generalised eigenfunction potential for the whole real axis which we denote as $\zeta_+(x,\omega)$.
Independent generalised eigenfunctons (which we denote as $\zeta_-(x,\omega)$)can be found by considering the potential $e^{-i\omega x}$ on the right hand region of constant depth. The corresponding
reflected and transmitted potentials from the variable depth area are of the form $Re^{i\omega x}$ and $Te^{-i\omega x}$ respectively. Again knowing S we can solve for R and T and hence find the
numerical boundary conditions ($a_-=Te^{i\omega L}$ and $b_-=e^{-i\omega L}+Re^{i\omega L}$).
We come out with:
$\zeta_+(x,\omega) = \left\{ \begin{matrix} {e^{i\omega x}+Re^{-i\omega x}, \quad \mbox{ for } x<-L} \\ {a_+\zeta_1(x,\omega) + b_+\zeta_2(x,\omega),\quad \mbox{ for } -L\leq x \leq L} \\ {Te^{i\
omega x}, \quad \mbox{ for } x>L} \end{matrix} \right.$
where R and T are found by solving:
$\begin{pmatrix} (S_{11} + i\omega)e^{i\omega L} & S_{12}e^{i\omega L} \\ S_{21}e^{i\omega L} & (S_{22}-i\omega)e^{i\omega L} \end{pmatrix} \begin{pmatrix}R\\T\end{pmatrix} = \begin{pmatrix} (i\omega
- S_{11}) e^{-i\omega L} \\ -S_{21}e^{-i\omega L} \end{pmatrix}$
and $a_+=e^{-i\omega L}+Re^{i\omega L}$ and $b_+=Te^{i\omega L}$
Note that the values of R and T for $\zeta_-$ are different from those for $\zeta_+$ (although they are related through some identities). For $\zeta_-$ we have:
$\zeta_-(x,\omega) = \left\{ \begin{matrix} {Te^{-i\omega x}, \quad \mbox{ for } x<-L} \\ {a_-\zeta_1(x,\omega) + b_-\zeta_2(x,\omega),\quad \mbox{ for } -L\leq x \leq L} \\ {e^{-i\omega x}+Re^{i\
omega x}, \quad \mbox{ for } x>L} \end{matrix} \right.$
where R and T are found by solving:
$\begin{pmatrix} S_{12}e^{i\omega L} & (S_{11}+i\omega)e^{i\omega L} \\ (S_{22}-i\omega)e^{i\omega L} & S_{21}e^{i\omega L}\end{pmatrix} \begin{pmatrix}R\\T\end{pmatrix} = \begin{pmatrix} S_{12}e^{-i
\omega L} \\ -(i\omega + S_{21})e^{-i\omega L} \end{pmatrix}$
Note a_+, b_+, a_- and b_- are found frm their corresponding R and T values.
Generalised Eigenfunction Expansion
Now we have the generalised eigenfunctions $\zeta_+(x,\omega)$ and $\zeta_-(x,\omega)$, which have the orthogonality relationships:
$\int_{-\infty}^{\infty}\zeta_+(x,\omega)\zeta_-(x,\omega')\mathrm{d}x= 0 \qquad (7)$
$\int_{-\infty}^{\infty}\zeta_+(x,\omega)\zeta_+(x,\omega')\mathrm{d}x= 2 \pi \delta(\omega-\omega') \qquad (8)$
$\int_{-\infty}^{\infty}\zeta_-(x,\omega)\zeta_-(x,\omega')\mathrm{d}x= 2 \pi \delta(\omega-\omega') \qquad (9)$
For any particular $\omega$ the general solution to the differential equation can be written as:
$\zeta(x,t,\omega) = \cos(\omega t)\left(c_1(\omega)\zeta_+(x,\omega)+d_1(\omega)\zeta_-(x,\omega)\right) + \frac{\sin(\omega t)}{\omega}\left(c_2(\omega)\zeta_+(x,\omega)+d_2(\omega)\zeta_-(x,\
The general solution to the PDE is therefore:
$\zeta(x,t) = \int_{0}^{\infty} \zeta(x,t,\omega) \mathrm{d} \omega$
$\implies \zeta(x,t) = \int_{0}^{\infty} \left\{ \cos(\omega t)\left(c_1(\omega)\zeta_+(x,\omega)+d_1(\omega)\zeta_-(x,\omega)\right) + \frac{\sin(\omega t)}{\omega}\left(c_2(\omega)\zeta_+(x,\omega)
+d_2(\omega)\zeta_-(x,\omega)\right) \right\} \mathrm{d} \omega \qquad (10)$
Giving the initial conditions:
$F(x) = \zeta(x,0) = \int_{0}^{\infty} \left\{ c_1(\omega)\zeta_+(x,\omega)+d_1(\omega)\zeta_-(x,\omega) \right\} \mathrm{d} \omega$
$G(x) = \partial_t \zeta(x,0) = \int_{0}^{\infty} \left\{ c_2(\omega)\zeta_+(x,\omega)+d_2(\omega)\zeta_-(x,\omega) \right\} \mathrm{d} \omega$
Using identities (7),(8) and (9) we can show:
$c_1(\omega) = \frac{1}{2\pi}\langle F(x), \zeta_+(x,\omega) \rangle$
$d_1(\omega) = \frac{1}{2\pi}\langle F(x), \zeta_-(x,\omega) \rangle$
$c_2(\omega) = \frac{1}{2\pi}\langle G(x), \zeta_+(x,\omega) \rangle$
$d_2(\omega) = \frac{1}{2\pi}\langle G(x), \zeta_-(x,\omega) \rangle$
Which we can use in combination with (10) to solve the IVP.
Matlab Code
Code to calculate the solution in a infinite basin can be found here infinite_basin_variable_h_and_rho.m
|
{"url":"http://www.wikiwaves.org/Variable_Depth_Shallow_Water_Wave_Equation","timestamp":"2014-04-19T04:19:15Z","content_type":null,"content_length":"45170","record_id":"<urn:uuid:e80355a3-7b51-4b8c-ba6b-2f12fb58c097>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00223-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Got Homework?
Connect with other students for help. It's a free community.
• across
MIT Grad Student
Online now
• laura*
Helped 1,000 students
Online now
• Hero
College Math Guru
Online now
Here's the question you clicked on:
Would be the first derivative of this function with: F(X)=(1+2x+x^3)^1/4 ?
Your question is ready. Sign up for free to start getting answers.
is replying to Can someone tell me what button the professor is hitting...
• Teamwork 19 Teammate
• Problem Solving 19 Hero
• Engagement 19 Mad Hatter
• You have blocked this person.
• ✔ You're a fan Checking fan status...
Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy.
This is the testimonial you wrote.
You haven't written a testimonial for Owlfred.
|
{"url":"http://openstudy.com/updates/4ddd1669ee2c8b0b6c083fe8","timestamp":"2014-04-20T01:06:11Z","content_type":null,"content_length":"39612","record_id":"<urn:uuid:92a6e6ed-d96e-44b2-96e2-f3dca07ea960>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00163-ip-10-147-4-33.ec2.internal.warc.gz"}
|
An arctan integral
okay the problem at first glance it's hard, but i don't want to give any hint yet, so i'll wait a couple of days and i'll post a solution.
Posting in thread so I remember about it. I think it has to do with... $\frac{d}{dx}\arctan(x) = \frac{1}{1+x^2}$. Then find an expression for arctan(x)/x maybe and use integration by parts. I'll
look at it later.
Will just post any working as I go along... Spoiler:
I make use of (again) 'magic differentiation' to solve this problem . Consider $\int_0^1 \frac{dx}{(x^2+1)\sqrt{x^2+2} }$ Sub. $x = \frac{1}{t}$ , it becomes $\int_1^{\infty} \frac{t~dt}{(t^2+1)\sqrt
{1+2t^2} }$ sub $1+2t^2 = u^2$ .... After a few steps , we can obtain $\int_0^1 \frac{dx}{(x^2+1)\sqrt{x^2+2} } = \frac{\pi}{6}$ The integral $\int_{0}^{1}{\frac{\arctan \left( \sqrt{2+x^{2}} \
right)}{\left( 1+x^{2} \right)\sqrt{2+x^{2}}}\,dx}$ $= \int_{0}^{1}{\frac{\frac{\pi}{2} - \arctan \left( \frac{1}{\sqrt{2+x^{2}}} \right)}{\left( 1+x^{2} \right)\sqrt{2+x^{2}}}\,dx}$ $= \frac{\pi}{2}
\int_0^1 \frac{dx}{(x^2+1)\sqrt{x^2+2} } - \int_{0}^{1}{\frac{\arctan \left( \frac{1}{\sqrt{2+x^{2}}} \right)}{\left( 1+x^{2} \right)\sqrt{2+x^{2}}}\,dx}$ $= \frac{\pi^2}{12} - \int_{0}^{1}{\frac{\
arctan \left( \frac{1}{\sqrt{2+x^{2}}} \right)}{\left( 1+x^{2} \right)\sqrt{2+x^{2}}}\,dx}$ Consider $I(a) = \int_{0}^{1}{\frac{\arctan \left( \frac{a}{\sqrt{2+x^{2}}} \right)}{\left( 1+x^{2} \right)
\sqrt{2+x^{2}}}\,dx}$ Magic Differentiation ! $I'(a) = \int_0^1 \frac{dx}{(x^2+1)(x^2 + a^2 + 2 )}$ $= \frac{1}{1 + a^2 } \int_0^1 \left[ \frac{1}{1 + x^2 } - \frac{1}{x^2 + a^2+2} \right]~dx$ $= \
frac{\pi}{4}\cdot \frac{1}{1+a^2} - \int_{0}^{1}{\frac{\arctan \left( \frac{1}{\sqrt{2+a^{2}}} \right)}{\left( 1+a^{2} \right)\sqrt{2+a^{2}}}\,dx}$ So $\int_{0}^{1}{\frac{\arctan \left( \frac{1}{\
sqrt{2+x^{2}}} \right)}{\left( 1+x^{2} \right)\sqrt{2+x^{2}}}\,dx} = I(1) = I(1)-I(0)$ $= \frac{\pi^2}{16}- I(1)$ $= \frac{\pi^2}{32}$ The answer to the problem is $\frac{\pi^2}{12} - \frac{\pi^2}
{32}$ $= \frac{5 \pi^2}{96}$
okay, my solution uses the fact that $\frac{\arctan \left( \frac{1}{\sqrt{2+x^{2}}} \right)}{\sqrt{2+x^{2}}}=\int_{0}^{1}{\frac{dt}{2+ x^{2}+t^{2}}}.$
sorry for the late answer, actually, it holds for a better way. try to find it, my time is bounded by now.
|
{"url":"http://mathhelpforum.com/math-challenge-problems/133490-arctan-integral-print.html","timestamp":"2014-04-18T08:58:46Z","content_type":null,"content_length":"16665","record_id":"<urn:uuid:1d0d732e-2416-4dde-8211-feca68cfdcf3>","cc-path":"CC-MAIN-2014-15/segments/1397609533121.28/warc/CC-MAIN-20140416005213-00001-ip-10-147-4-33.ec2.internal.warc.gz"}
|
general solution ,trig?
$2\sin 2\theta = \sqrt{3}\tan 2\theta = \sqrt{3}\frac{\sin 2\theta}{\cos 2\theta}\Longleftrightarrow \sin 2\theta = 0$ or $\cos 2\theta = \frac{\sqrt{3}}{2}$
ok $2sin2\theta=\sqrt{3}tan2\theta$ $2sin2\theta=\sqrt{3}\left(\frac{sin2\theta}{cos2\t heta}\right)$ dived the to side sin2 theta $2=\sqrt{3}\left(\frac{1}{cos2\theta}\right)$ $cos2\theta=\frac{\
sqrt{3}}{2}$ $2\theta=\frac{\pi}{6}....\theta=\frac{\pi}{12}+2n\ pi$
would the end result be $\frac{\pi}{12}+n\pi$ or would it be $\frac{\pi}{12}+2n\pi$?
You can't just divide away $\sin 2\theta$ from both sides since it's not a constant. If you do that, you'll miss out on all the solutions where $\sin 2\theta=0$ It's similar to this equation: $x^3=x$
has obviously got one root $x=0$, but if you divide both sides with $x$ without taking that into consideration you'll end up with $x^2=1$, and thus miss a root. The general solution to $\sin x = C$
is $x=v+2n\pi$ or $x=\pi-v+2n\pi$ where $v$ is the angle and $n\in Z$ The general solution to $\cos x = C$ is $x=\pm v+2n\pi$ where $v$ is the angle and $n\in Z$
good night, I offer you a resolution 2shared - download R1.pdf
I am sorry I should be careful when I answer questions
|
{"url":"http://mathhelpforum.com/calculus/91417-general-solution-trig.html","timestamp":"2014-04-18T07:08:59Z","content_type":null,"content_length":"53371","record_id":"<urn:uuid:d58032d6-ade2-4cb9-a626-da84c355e15e>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.9/warc/CC-MAIN-20140416005215-00569-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Dunaire, GA Math Tutor
Find a Dunaire, GA Math Tutor
...Since then, I've worked with half a dozen or so students taking calculus either in college or high school. I'm very comfortable with the material covered in first-year calculus courses such as
limits, derivatives, integrals, related rates, rotating shapes to create solids and so on. I passed th...
25 Subjects: including algebra 1, algebra 2, calculus, vocabulary
...Email or call me to discuss details and availability! (1.5 hour minimum per in-person session (if available); 1 hour minimum per online session)I have tutored Algebra ever since I was in high
school. Unlike many, fortunately math comes naturally for me. More importantly I am able to explain the concepts in a way that makes sense to struggling students.
19 Subjects: including calculus, algebra 1, algebra 2, geometry
...A lot of students are struggling in math and that is mainly because of concepts that they may not have grasped in earlier years. Math is building blocks and if blocks are missing at the
bottom, the structure collapses. One on one tutoring enables me to fill in those gaps and hence build a strong structure and that helps students succeed.
8 Subjects: including algebra 1, algebra 2, geometry, prealgebra
...I read it daily and do my best to apply its truths to my personal live. I have a degree in Art from U.A.S.D., the State University in Santo Domingo, Dominican Republic. I have taught Art for
over 7 years at the different schools I've worked for.
21 Subjects: including prealgebra, algebra 1, English, reading
...When teaching phonics to students in the lower grades, I like to use the Orton-Gillingham approach. Students have had much success with this program. It incorporates phonics, spelling,
reading, vocabulary, sight words, handwriting, prefixes and suffixes.
11 Subjects: including prealgebra, reading, writing, grammar
Related Dunaire, GA Tutors
Dunaire, GA Accounting Tutors
Dunaire, GA ACT Tutors
Dunaire, GA Algebra Tutors
Dunaire, GA Algebra 2 Tutors
Dunaire, GA Calculus Tutors
Dunaire, GA Geometry Tutors
Dunaire, GA Math Tutors
Dunaire, GA Prealgebra Tutors
Dunaire, GA Precalculus Tutors
Dunaire, GA SAT Tutors
Dunaire, GA SAT Math Tutors
Dunaire, GA Science Tutors
Dunaire, GA Statistics Tutors
Dunaire, GA Trigonometry Tutors
Nearby Cities With Math Tutor
Avondale Estates Math Tutors
Belvedere, GA Math Tutors
Briarcliff, GA Math Tutors
Clarkston, GA Math Tutors
Cumberland, GA Math Tutors
Embry Hls, GA Math Tutors
Fort Gillem, GA Math Tutors
North Decatur, GA Math Tutors
North Metro Math Tutors
North Springs, GA Math Tutors
Rockbridge, GA Math Tutors
Scottdale, GA Math Tutors
Snapfinger, GA Math Tutors
Tuxedo, GA Math Tutors
Vista Grove, GA Math Tutors
|
{"url":"http://www.purplemath.com/Dunaire_GA_Math_tutors.php","timestamp":"2014-04-19T19:51:21Z","content_type":null,"content_length":"23929","record_id":"<urn:uuid:4067ced1-647f-4a19-9fd1-5a2bbd919294>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00336-ip-10-147-4-33.ec2.internal.warc.gz"}
|
MathGroup Archive: June 2010 [00144]
[Date Index] [Thread Index] [Author Index]
Re: PDE, laplace, exact, should be simple...
• To: mathgroup at smc.vnet.net
• Subject: [mg110243] Re: PDE, laplace, exact, should be simple...
• From: peter <plindsay.0 at gmail.com>
• Date: Thu, 10 Jun 2010 08:07:32 -0400 (EDT)
• References: <hu4sbb$1gn$1@smc.vnet.net> <201006091119.HAA11928@smc.vnet.net>
Hello Steve
thanks for your detailed answer to my query. [ Incidentally I should
apologise for the inexplicable profusion of equal signs in my
postings, I'm almost afraid to type another one. ]
I noticed that another maths package was able to arrive, with a little
prompting, at the correct solution and wondered if I was missing
something obvious using Mathematica. I'll study your comments closely,
many thanks
On 9 June 2010 12:19, schochet123 <schochet123 at gmail.com> wrote:
> Depending on the generality you are trying to achieve, this problem is
> very far from simple.
> If all one wanted was to obtain the solution Sin[Pi x] Sinh[Pi y]/
> Sinh[Pi] for the specific problem
> {D[u[x, y], {x, 2}] + D[u[x, y], {y, 2}] == 0 , u[0, y] == 0, u[x=
, 0]
> == 0, u[1, y] == 0, u[x, 1] =Sin[x]}
> then one could define
> myDSolve[D[u[x, y], {x, 2}] + D[u[x, y], {y, 2}] == 0 , u[0, y] ==
= 0,
> u[x, 0] == 0, u[1, y] == 0, u[x, 1] ==Sin[Pi x]},u,{x,y}]={=
> -
>>Function[{x,y}, Sin[Pi x]Sinh[Pi y]/Sinh[Pi] ]}}
> Why can't the built-in DSolve find that answer? I am not from Wolfram,
> but it seems to me that DSolve doesn't attempt to find check whether
> specific functions are solutions, because there are infinitely many
> equations that have explicit solutions and looking for all of them
> would take too long. As a simple example, consider a homogeneous
> linear variable-coefficient single ODE in the variable x. Whenever the
> sum of all the coefficients equals zero then E^x is a solution. You
> can easily add an appropriate set of boundary conditions that make E^x
> be the unique solution. However, if the ODE is complicated enough then
> Mathematica will not find that solution. If it were to look for such
> solutions, then why not look for the solutions E^(2 x) or E^(k x) for
> arbitrary k or arbitrary polynomial solutions, or ...
> The upshot is that DSolve uses a set of algorithms that solve entire
> classes of problems.
> What class of problems does the above problem belong to?
> 1) If you want to solve the 2-D Laplace equation on any rectangle with
> Dirichlet boundary conditions (u= something) on all sides, with three
> conditions of the form u==0 and the fourth of the form u==f, wher=
e =
> f
> is c Sin[k( x-x0)]] or c Sin[k (y-y0)] and vanishes at the endpoints
> of the boundary interval, then you need to check that the boundary
> conditions are given for two values of each variable, that three of
> the four conditions say that u equals zero, and that the fourth is of
> the above form. You can then write a function myDSolve that will give
> the solution u[x_,y_]=f[x] Sinh[k (y-y0)]/Sinh[k (y1-y0)] where the
> boundary value f is taken on at y==y1, and the value zero at y==y=
> except that you may need to switch the roles of x and y.
> 2) If you want to solve the Laplace equation in arbitrary dimensions
> then there are analogous but more complicated formulas.
> 3) If you have nonzero boundary values on all sides then in dimension
> d the solution will be a sum of 2^d terms of the above form.
> So it should be possible to write a Mathematica program that will find
> solve problems of generality 1-3. However:
> 4) If you want to allow the boundary data f to be an arbitrary smooth
> function that vanishes at the endpoints of the boundary interval then
> you need to calculate its Fourier Sine coefficients and form an
> infinite series of solutions of the above form. In general Mathematica
> will not be able to calculate Integrate[f[x] Sin[k x],{x,0,Pi}] to
> obtain those coefficients.
> Moreover, even when Mathematica does calculate the above integral,
> substituting specific values for k may yield 0/0 and hence give the
> answer Indeterminate. For example try calculating the general formula
> for the Fourier Sine coefficients on the interval [0,Pi] of the
> function f[x_]= x Sin[3 x]. For this particular function it is easy to
> see that this problem occurs only for k==3, but in general it is
> probably not possible to determine what the bad values of k are.
> 5) If you want to allow more general boundary conditions and more
> general PDEs you will find that in general you cannot calculate
> explicitly the appropriate eigenfunctions to use in the series
> expansion, at which point you are stuck.
> So (Disclaimer once again: I am not from Wolfram so this is just a
> guess) the reason DSolve does not find your solution is apparently
> that generality levels 1-3 seem too specific to bother implementing,
> while levels 4-5 are too difficult.
> Steve
> On Jun 2, 9:05 am, peter lindsay <plinds... at googlemail.com> wrote:
>> forgive the simplicity of this:
>> D[u[x, y], {x, 2}] + D[u[x, y], {y, 2}] == 0
>> BCs={u[0, y] == 0, u[x, 0] == 0, u[1, y] == 0, u[x, 1] ==
> ==
>> Sin[=F0 x]}
>> DSolve etc, etc, etc...
>> A solution is Sin[Pi x] Sinh[Pi y]
>> How can I get mathematica to come up with this gem ?
>> thanks, and sorry again for any stupidity on my part
>> Peter Lindsay
• References:
|
{"url":"http://forums.wolfram.com/mathgroup/archive/2010/Jun/msg00144.html","timestamp":"2014-04-18T05:37:15Z","content_type":null,"content_length":"30815","record_id":"<urn:uuid:bc1dc853-3ec7-456e-a033-fd440a89b18c>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00371-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Probability 2
February 18th 2006, 03:55 AM #1
Junior Member
Oct 2005
Probability 2
A town has three bus routes A, B and C. Route A has twice as many buses as each of B and C. Over a perido of time it has been found that, along a certain stretch of road, where the three routes
converge, the buses on these routes run more than five minutes late 1/2, 1/5 and 1/10 of the time respectively.
1) A bus is going down this stretch of road. What is the probability it is more than 5 minutes late?
2) Comment on the size of the answer to 1) with the respect to the given probabilities 1/2, 1/5 and 1/10.
3) An inspector, standing on this particular stretch of road, sees a bus that is more than five minutes late. Find the probability that it is a route B bus.
Help please
A town has three bus routes A, B and C. Route A has twice as many buses as each of B and C. Over a perido of time it has been found that, along a certain stretch of road, where the three routes
converge, the buses on these routes run more than five minutes late 1/2, 1/5 and 1/10 of the time respectively.
1) A bus is going down this stretch of road. What is the probability it is more than 5 minutes late?
Part 1)
There are twice as many rout $A$ buses as either route $B$ or $C$, so half of the
buses are route $A$. So the proportion $P(A)$ of buses which are route $A$ is $0.5$, and
the proportion $P(B)$ that are route $B$ is 0.25, and the proportion $P(C)$ route C
is again 0.25. So the probabilty that a random bus is late is:
$<br /> P(late)=P(A)P(late|A) + P(B)P(late|B)+P(C)P(late|C)<br />$
But we are told that the proportion $P(late|A)$ of route $A$ buses which are late is $1/2=0.5$,
and that $P(late|B)=1/5=0.2$, and $P(late|C)=1/10=0.1$. So:
$<br /> P(late)=0.5 \times 0.5 + 0.25 \times 0.2+0.25 \times 0.1=0.325<br />$
Last edited by CaptainBlack; February 18th 2006 at 10:45 AM.
A town has three bus routes A, B and C. Route A has twice as many buses as each of B and C. Over a perido of time it has been found that, along a certain stretch of road, where the three routes
converge, the buses on these routes run more than five minutes late 1/2, 1/5 and 1/10 of the time respectively.
1) A bus is going down this stretch of road. What is the probability it is more than 5 minutes late?
2) Comment on the size of the answer to 1) with the respect to the given probabilities 1/2, 1/5 and 1/10.
Part 2)
Answer to 1) is 0.325, which is less than the proportion of route A buses
which are late because the lateness of route A is diluted by the better
time keepers of routs B and C, but only slightly because half of all buses
are the poor time keepers of route A.
Last edited by CaptainBlack; February 18th 2006 at 10:44 AM.
A town has three bus routes A, B and C. Route A has twice as many buses as each of B and C. Over a perido of time it has been found that, along a certain stretch of road, where the three routes
converge, the buses on these routes run more than five minutes late 1/2, 1/5 and 1/10 of the time respectively.
1) A bus is going down this stretch of road. What is the probability it is more than 5 minutes late?
2) Comment on the size of the answer to 1) with the respect to the given probabilities 1/2, 1/5 and 1/10.
3) An inspector, standing on this particular stretch of road, sees a bus that is more than five minutes late. Find the probability that it is a route B bus.
Part (3).
This is simply an aplication of Bayes theorem:
$<br /> P(B \wedge late)=P(B)P(late|B)=P(B|late)P(late)<br />$
So we may write:
$<br /> P(B)P(late|B)=0.25 \times 0.2$
$<br /> P(B|late)P(late)=P(B|late)\times 0.325<br />$
$<br /> P(B|late)=\frac{0.25 \times 0.2}{0.325}=0.154<br />$
Last edited by CaptainBlack; February 18th 2006 at 11:48 AM.
Thanks Ron!
February 18th 2006, 10:30 AM #2
Grand Panjandrum
Nov 2005
February 18th 2006, 10:33 AM #3
Grand Panjandrum
Nov 2005
February 18th 2006, 10:44 AM #4
Grand Panjandrum
Nov 2005
February 18th 2006, 10:46 AM #5
Junior Member
Oct 2005
|
{"url":"http://mathhelpforum.com/statistics/1916-probability-2-a.html","timestamp":"2014-04-17T22:00:04Z","content_type":null,"content_length":"49381","record_id":"<urn:uuid:cb354a8b-349a-4769-be29-3bf1c54565ca>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00628-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Settling Velocity
Determining the Settling Velocity
First, a Little More Theory
Whenever you have have a particle moving through a horizontally-flowing fluid, confined within a certain space, a number of forces come into play. Gravity, the type of fluid involved, how smoothly
the fluid is flowing, the size of the particles, the dimensions of the piping or chamber, the bouyancy force, and a few other negligible forces. All in all, there are quite a few things to look into.
First, there was a man named Stokes, who did a great deal of work in these areas. To find out more about him, go to your library or text and read about him. Basically, his one important contribution
was a formula for calculating Drag Force in completely laminar environments, with perfectly spherical bodies. And for situations with basically spherical particles and mostly laminar flow, those
equations work. The equation, when coupled with one other and trying to solve for Drag Coefficient, works out to this.
Drag Coefficient = 24 / Reynold's Number
However, we (usually) need to go a bit beyond that. I have tried to make this simple, I hope, and so I broke it down into three more subsections, each dealing with a certain type of settling.
Free Settling
The total amount of force exerted on a particle can be broken down into four categories.
Force due to Acceleration = Gravity Force - Buoyancy Force - Drag Force
I won't go into the derivation of these formula; I figure if you want to look for them, that's a project you can endeavor on yourself. The important information is this. For a particle, there are two
stages when it falls. The acceleration portion and then the portion of constant velocity, also known as the terminal velocity or free settling velocity
The following is a diagram of the correlation between Reynold's Number and Drag Coefficient for Rigid Spherical Bodies. Any other type of particle has it's own special charts, but once more, I must
forget those, due to space and time and scope considerations.
Free Settling, with Wall Effect
When the diameter of the particle becomes fairly noticeable with respect to the diameter of the container, the the particles tend to get forced away from the wall through something known,
appropriately, as wall effect.
To compensate for this, you need only know whether or not the flow is laminar, as well as the diameter of the particle and the container. Below are two fudge factors, or correction factors, that you
can multiply your previously calculated terminal velocities by in order to allow for wall effect. The most important ratio in this case is what I am terming DR, which is equal to the following.
DR = Particle Diameter / Container Diameter
For laminar flow, the correction factor is
k = 1 / (1 + 2.1 * DR )
And for turbulent flow, the correction factor is slightly altered.
k = ( 1 - DR² ) / ( square root of [ 1 + DR * DR * DR * DR ] )
Hindered Settling
Hindered settling is called hindered settling for a reason -- the added number of particles in an enclosed area creates a slower-moving mixture than would normally be expected.
In this case, everything revolves around epsilon (e), which is the volume fraction of the slurry mixture occupied by the liquid.
What comes from that is that yet another dimensionless variable, Psi, was created for the sole purpose of adding more confusion to this mess.
Psi = 1 / (10 raised to the 1.82*(1-epsilon) power)
Now, using this wonderful variable, we can calculate the effective viscosity of the mixture due to the hinderance of other particles.
effective viscosity = viscosity / Psi
And now the density of the fluid phase (rhom) is altered, which is now calculated as
rhom = epsilon * rho + (1 - epsilon)*rhop
Now we get to the important point. Calculating, as before, the terminal velocity, for laminar flow. If it does not fall into laminar flow, then other equations well beyond the scope of what I am
doing here are needed.
Vt = g*Dp*Dp*(rhop - rho)*epsilon*epsilon*Psi / (18*viscosity)
(Here is a sketched formula for Stokes' Law. The notation is Vs for settling velocity instead of terminal velocity if you want to compare with the equation as typed by Chris Patillo that has other
To figure out whether the flow is laminar, use the following equation. If the value is below one, you are okay; the flow is laminar. If it is above one, then you need to look elsewhere.
Reynolds Number = Diameter of particle * Vt * rhom / (effective viscosity * epsilon)
How to measure settling velocity experimentally
You can measure the height of the clear liquid interface as it changes over time. After that, you can plot that. You get a plot something like this.
The average settling velocity for a particular plot at any given time is then equivalent to
settling velocity = (height at time 1 - original height) / (time required to reach current height)
Quick Conclusions
The fastest settling particles are huge, heavy, spherical molecules. The slowest settling particles, which sometimes cannot be settled accurately or properly, are tiny, light, irregularly shaped
molecules. And for anything in between, here is a general guide as to what characteristics increase the rate of thickening.
• Spherical or Near-Spherical Particles
• Heavy Particles
• Dilute Slurries. See Also: Concentration
• Particles whose Diameter does not rival that of the Container
• Flocculation, or "clumping," of particles into spherical shapes
• Autocoagulation due to mineral or chemical traits inherent in the particle
|
{"url":"http://www.rpi.edu/dept/chem-eng/Biotech-Environ/SEDIMENT/sedsettle.html","timestamp":"2014-04-18T11:26:45Z","content_type":null,"content_length":"6537","record_id":"<urn:uuid:5837b4c9-61e9-49b1-bdf3-5ce6a95fe358>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00554-ip-10-147-4-33.ec2.internal.warc.gz"}
|
using value [Archive] - OpenGL Discussion and Help Forums
07-12-2010, 04:55 AM
I want to use lX,lY,lZ as below. Rotate and use the new value for openal. Why does this below not work?
Thanks Michael
glTranslatef(lX/10, lY/10, lZ/10);
glRotatef (-pPsi, 0.0,1.0,0.0);
glRotatef (-pTheta,-1.0,0.0,0.0);
glRotatef (-pPhi, 0.0,0.0,1.0);
alSource3f(Sources[1], AL_POSITION, lX/10, lY/10, lZ/
|
{"url":"http://www.opengl.org/discussion_boards/archive/index.php/t-171537.html","timestamp":"2014-04-16T22:24:07Z","content_type":null,"content_length":"9203","record_id":"<urn:uuid:0cc9356e-7f65-4c5c-90b4-75f4ec8a7cb8>","cc-path":"CC-MAIN-2014-15/segments/1398223207985.17/warc/CC-MAIN-20140423032007-00141-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Boolean Algebra Problem
October 12th 2010, 08:05 AM #1
Sep 2008
Boolean Algebra Problem
I've just started learning boolean algebra at university and am stuck by the following problem:
Prove a.b + a'.c = (a+c).(a'+b) // Corrected from previous typo
Could somebody please give me a push in the right direction.
Many Thanks
Last edited by StaryNight; October 12th 2010 at 09:25 AM.
As written, your equation is false. Set a = 0, b = 0, c = 1. Then ab + a'c = 0 + 1 = 1. However, ac(a'+b) = 0(a'+b) = 0.
Perhaps you have a typo in there somewhere?
[EDIT]: OP corrected ac(a'+b) to (a+c)(a'+b) for the RHS.
Last edited by Ackbeet; October 12th 2010 at 09:28 AM. Reason: OP corrected.
Sorry, I meant to type
Prove a.b + a'.c = (a+c).(a'+b)
That I'll buy. What techniques are available to you?
It basically says to do it using 'Boolean Algebra'. We've studied DeMorgan's theorem and K-maps.
Ok. I would probably go with the K-map method. In order to do that, you're going to need to expand out the RHS so that it's in disjunctive normal form. What do you get when you do that?
I've managed to prove it by firstly writing a truth table and finding the inverse fuction and then applying De Morgan and using absorbtion. However, from the question I was supposed to prove the
identity entirely algebraically.
Here's one approach. Start with
a'b'c+a'bc+abc'+abc, and simplify.
However, it is also true that
Simplify this RHS.
It's the step
that you must justify. It follows, ultimately, because of the law of the excluded middle. You've basically exhausted all 8 possible truth assignments of the variables.
I'm not sure how much sense that makes in my mind, but I know what I'm trying to say.
October 12th 2010, 08:14 AM #2
October 12th 2010, 09:25 AM #3
Sep 2008
October 12th 2010, 09:27 AM #4
October 12th 2010, 09:42 AM #5
Sep 2008
October 12th 2010, 10:07 AM #6
October 12th 2010, 11:42 AM #7
Sep 2008
October 12th 2010, 12:08 PM #8
|
{"url":"http://mathhelpforum.com/algebra/159326-boolean-algebra-problem.html","timestamp":"2014-04-16T14:30:51Z","content_type":null,"content_length":"52106","record_id":"<urn:uuid:5f9333c2-6147-484d-875a-0331f3613349>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00123-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Algebra 1 Tutors
Loveland, CO 80538
PhD professional for Math, Physics, and Computer Tutoring
...Wronskian, Lyapunov style, predator-prey relationships, Euler 1st and 2nd equations, homogeneous and heterogeneous style, transformation -- Like Laplace, and Fourier) are some of the ways to
assist in understanding how to solve these problems. In my current job,...
Offering 10+ subjects including algebra 1
|
{"url":"http://www.wyzant.com/geo_Fort_Collins_algebra_1_tutors.aspx?d=20&pagesize=5&pagenum=1","timestamp":"2014-04-20T14:12:27Z","content_type":null,"content_length":"60431","record_id":"<urn:uuid:d51824ee-1105-4511-bca6-786b98cda4d1>","cc-path":"CC-MAIN-2014-15/segments/1397609538787.31/warc/CC-MAIN-20140416005218-00485-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Physics and Astronomy 2013-2014
Course Descriptions
PHYS 3111 Quantum Physics I (4 qtr. hrs.)
First of a two-quarter sequence. The Schrödinger equation: interpretation of wave functions; the uncertainty principle; stationary states; the free particle and wave packets; the harmonic
oscillator; square well potentials. Hilbert space: observables, commutator algebra, eigenfunctions of a Hermitian operator; the hydrogen atom and hydrogenic atoms. Prerequisites: PHYS 2252, 2260,
2556, 3612; MATH 2070.
PHYS 3112 Quantum Physics II (4 qtr. hrs.)
Second of a two-quarter sequence. Angular momentum and spin; identical particles; the Pauli exclusion principle; atoms and solids: band theory; perturbation theory; the fine structure of hydrogen;
the Zeeman effect; hyperfine splitting; the variational principle; the WKB approximation; tunneling; time dependent perturbation theory; emission and absorption of radiation. Scattering: partial wave
analysis; the Born approximation. Prerequisite: PHYS 3111.
PHYS 3270 Workshop: Practical Astronomy (1 to 5 qtr. hrs.)
Capstone coursework featuring studies in experimental, computational, and/or theoretical work in astronomy and astrophysics.
PHYS 3311 Advanced Laboratory I (1 qtr. hrs.)
First of a three-quarter sequence. Advanced experimental techniques in physics. Meets with PHYS 2311. Prerequisite: instructor's permission.
PHYS 3312 Advanced Laboratory II (1 qtr. hrs.)
Second of a three-quarter sequence. Advanced experimental techniques in physics. Meets with PHYS 2312. Prerequisite: instructor's permission.
PHYS 3313 Advanced Laboratory III (1 qtr. hrs.)
Third of a three-quarter sequence. Advanced experimental techniques in physics. Meets with PHYS 2313. Prerequisite: instructor's permission.
PHYS 3510 Analytical Mechanics I (4 qtr. hrs.)
Lagrangian and Hamiltonian mechanics. Prerequisites: PHYS 1113, 1213, or 1214; MATH 2070; consent of instructor.
PHYS 3611 Electromagnetism I (4 qtr. hrs.)
First of a two-quarter sequence. Vector algebra; differential vector calculus (gradient, divergence and curl); integral vector calculus (gradient, divergence and Stokes' Theorems); line, surface and
volume integrals; Electrostatics: the electric field, electric potential, work and energy in electrostatics; method of images, boundary value problems and solutions to Laplace's equation in
Cartesian, spherical and cylindrical coordinates; multipole expansion of the electric potential; electric fields in matter: polarization; the electric displacement vector; boundary conditions, linear
dielectrics. Magnetostatics: magnetic fields and forces. Prerequisites: PHYS 1113, 1213, or 1214; MATH 2070.
PHYS 3612 Electromagnetism II (4 qtr. hrs.)
Second of a two-quarter sequence. Magnetic vector potential; magnetic fields in matter: magnetization; fields of magnetized objects; linear and nonlinear magnetic materials; electromotive force,
Ohm's law; electromagnetic induction; Faraday's law; Maxwell's equations; the displacement current; boundary conditions; the Poynting theorem; momentum and energy density of the fields; the Maxwell
stress tensor; the wave equation and electromagnetic waves in vacuum and matter; absorption and dispersion; wave guides; the potential formulation and gauge transformations; retarded potentials;
dipole radiation. Prerequisite: PHYS 3611.
PHYS 3700 Advanced Topics: General (3 qtr. hrs.)
Offered irregularly, depending on demand. May be taken more than once for credit. Prerequisite(s): instructor's permission.
PHYS 3711 Optics I (4 qtr. hrs.)
First of a two-quarter sequence. Gaussian optics and ray tracing; matrix methods and application to optical design; elementary theory of aberrations; light as electromagnetic wave, diffraction and
interference; interferometers and their applications. Elementary theory of coherence; selected topics. May include laboratory work as appropriate. Prerequisites: PHYS 1113, 1213 or 1214, MATH 2070.
PHYS 3841 Thermal Physics I (4 qtr. hrs.)
First of a two-quarter sequence. Laws of thermodynamics; thermal properties of gases and condensed matter; kinetic theory of gases, classical and quantum statistics. Prerequisites: PHYS 1113, 1213
or PHYS 1214; MATH 2070.
PHYS 3991 Independent Study (1 to 8 qtr. hrs.)
PHYS 3992 Directed Study (1 to 10 qtr. hrs.)
PHYS 3995 Independent Research (1 to 10 qtr. hrs.)
PHYS 4001 Introduction to Research I (1 or 2 qtr. hrs.)
This course is the first of the 3-course sequence designed to provide the opportunity of learning fundamental skills to conduct independent research in any physical science discipline. In this
course, students review essential material in mathematical physics, learn basic programming techniques and improve upon their skills in literature search and scientific writing, especially proposal
writing. Special in-class seminars in collaboration with the Penrose Library and Writing and Research Center are scheduled. Student are introduced to research conducted by Physics and Astronomy
faculty so that they can choose a faculty member with whom to take on a Winter Research Project during the winter interterm and winter quarter as part of Introduction to Research II. Students must
prepare and submit a research proposal before the end of the fall quarter.
PHYS 4002 Introduction to Research II (1 to 3 qtr. hrs.)
This is the second of the 3-course sequence to provide the opportunity of learning fundamental skills to conduct independent research in any physical science discipline. In this course, students
conduct an independent research or study project that they have outlined in the research proposal they submitted as part of Introduction to Research I under supervision of a faculty advisor of their
choosing. At the same time, students have time to review issues that we face as researchers. Prerequisite: PHYS 4001 and consent of a faculty research advisor.
PHYS 4003 Introduction to Research III (1 or 2 qtr. hrs.)
This is the third of the 3-course sequence to provide students with the opportunity of learning fundamental skills to conduct independent research in any physical science disciplines. In this
course, students complete their Winter research project conducted as part of Introduction to Research II and present the results in writing as a term paper and in oral presentation as part of the
Departmental Colloquia. Special in-class sessions in collaboration with the Writing and Research Center are included. Prerequisite: PHYS 4002.
PHYS 4100 Foundations of Biophysics (3 qtr. hrs.)
Focus of the course is on application of basic physics principles to the study of cells and macromolecules. Topics include diffusion, random processes, thermodynamics, reaction equilibriums and
kinetics, computer modeling. Must be admitted to the MCB PhD program or related graduate program with instructor approval.
PHYS 4111 Quantum Mechanics I (3 qtr. hrs.)
PHYS 4112 Quantum Mechanics II (3 qtr. hrs.)
PHYS 4251 Intro to Astrophysics I (3 qtr. hrs.)
PHYS 4252 Intro to Astrophysics II (3 qtr. hrs.)
PHYS 4253 Intro to Astrophysics III (3 qtr. hrs.)
PHYS 4411 Advanced Condensed Matter I (3 qtr. hrs.)
Materials structure; structure analysis; elastic properties; defects; plastic mechanical properties; thermal properties and phonons; free electron gas; energy bands and Fermi surfaces; crystalline
and amorphous semiconductors; quasiparticles and excitations; electrical properties and ferroelectrics; magnetic properties and ferromagnetics; classical and high-Tc superconductors; other advanced
materials. Co-requisite: PHYS 4111.
PHYS 4412 Advanced Condensed Matter II (3 qtr. hrs.)
Materials structure; structure analysis; elastic properties; defects; plastic mechanical properties; thermal properties and phonons; free electron gas; energy bands and Fermi surfaces; crystalline
and amorphous semiconductors; quasiparticles and excitations; electrical properties and ferroelectrics; magnetic properties and ferromagnetics; classical and high-Tc superconductors; other advanced
materials. Co-requisite: PHYS 4112.
PHYS 4413 Advanced Condensed Matter III (3 qtr. hrs.)
Materials structure; structure analysis; elastic properties; defects; plastic mechanical properties; thermal properties and phonons; free electron gas; energy bands and Fermi surfaces; crystalline
and amorphous semiconductors; quasiparticles and excitations; electrical properties and ferroelectrics; magnetic properties and ferromagnetics; classical and high-Tc superconductors; other advanced
materials. Co-requisite: PHYS 4113.
PHYS 4511 Advanced Dynamics I (4 qtr. hrs.)
PHYS 4611 Adv Electricity & Magnetism I (3 qtr. hrs.)
PHYS 4612 Adv Electricity & Magnetism II (3 qtr. hrs.)
PHYS 4750 Seminar in Physics (1 qtr. hrs.)
PHYS 4811 Statistical Mechanics I (4 qtr. hrs.)
Fundamentals of thermodynamics, microcanonical and canonical ensemble, quantum formulation noninteracting particle systems.
PHYS 4910 Special Topics Physics (1 to 5 qtr. hrs.)
PHYS 4991 Independent Study (M.S.) (1 to 10 qtr. hrs.)
PHYS 4992 Directed Study (M.S.) (1 to 10 qtr. hrs.)
PHYS 4995 Independent Research (M.S.) (1 to 10 qtr. hrs.)
PHYS 6991 Independent Study (PhD) (1 to 10 qtr. hrs.)
PHYS 6995 Independent Research (PhD) (1 to 10 qtr. hrs.)
For More Information
The department of physics and astronomy's website offers the most current information on courses, requirements, faculty and student news. Go to the Department od Physics website for more information
on the program.
The University of Denver is an Equal Opportunity institution. We admit students of any race, color, national and ethnic origin to all the rights, privileges, programs and activities generally
accorded or made available to students at the university. The University of Denver does not discriminate on the basis of race, color, national and ethnic origin in administration of our educational
policies, admission policies, scholarship and loan programs, and athletic and other university-administered programs. University policy likewise prohibits discrimination on the basis of age,
religion, disability, sex, sexual orientation, gender identity, gender expression, marital status or veteran status. Inquiries concerning allegations of discrimination based on any of the above
factors may be referred to the University of Denver, Office of Diversity and Equal Opportunity.
|
{"url":"http://www.du.edu/learn/graduates/degreeprograms/bulletins/phys/coursedescriptions.html","timestamp":"2014-04-17T09:50:52Z","content_type":null,"content_length":"32369","record_id":"<urn:uuid:86f2a204-8504-43b6-b144-b9382c264c82>","cc-path":"CC-MAIN-2014-15/segments/1398223210034.18/warc/CC-MAIN-20140423032010-00169-ip-10-147-4-33.ec2.internal.warc.gz"}
|
188 helpers are online right now
75% of questions are answered within 5 minutes.
is replying to Can someone tell me what button the professor is hitting...
• Teamwork 19 Teammate
• Problem Solving 19 Hero
• Engagement 19 Mad Hatter
• You have blocked this person.
• ✔ You're a fan Checking fan status...
Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy.
This is the testimonial you wrote.
You haven't written a testimonial for Owlfred.
|
{"url":"http://openstudy.com/users/troublemaker/answered","timestamp":"2014-04-20T08:17:34Z","content_type":null,"content_length":"113290","record_id":"<urn:uuid:632fd684-544f-4025-8857-926c05b9ff86>","cc-path":"CC-MAIN-2014-15/segments/1398223203235.2/warc/CC-MAIN-20140423032003-00122-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Got Homework?
Connect with other students for help. It's a free community.
• across
MIT Grad Student
Online now
• laura*
Helped 1,000 students
Online now
• Hero
College Math Guru
Online now
Here's the question you clicked on:
What is the greatest common factor of twenty-eight and forty-two?
• one year ago
• one year ago
Your question is ready. Sign up for free to start getting answers.
is replying to Can someone tell me what button the professor is hitting...
• Teamwork 19 Teammate
• Problem Solving 19 Hero
• Engagement 19 Mad Hatter
• You have blocked this person.
• ✔ You're a fan Checking fan status...
Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy.
This is the testimonial you wrote.
You haven't written a testimonial for Owlfred.
|
{"url":"http://openstudy.com/updates/504fa3b3e4b03b79a332e001","timestamp":"2014-04-19T19:36:31Z","content_type":null,"content_length":"53665","record_id":"<urn:uuid:1f7e88b0-eeb1-4c79-aac8-de613b02fd57>","cc-path":"CC-MAIN-2014-15/segments/1397609537376.43/warc/CC-MAIN-20140416005217-00614-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Figure This Math Challenges for Families - Answer
Quick Answer:
A manhole cover rests on a small lip inside the hole. A circular manhole cover typically will not fall into the hole because its width is the same all around.
A rectangular manhole cover, however, could fall through the hole when it is tipped upward. You can see this by drawing a diagonal in a rectangle to create two triangles:
Complete Solution:
Mathematically, the greatest angle of a triangle is opposite the longest side. The greatest angle in each triangle formed by drawing in the diagonal is the right angle at the corner.
This means that the diagonal of a rectangle is always longer than either of the sides. As a result, rectangular covers can always be dropped through their corresponding holes if the lips are small.
Because a square is a rectangle, the same reasoning applies to squares.
|
{"url":"http://www.figurethis.org/challenges/c04/answer.htm","timestamp":"2014-04-17T00:54:49Z","content_type":null,"content_length":"16513","record_id":"<urn:uuid:788877a5-3d9f-4c3c-a43b-3d86f381c2b7>","cc-path":"CC-MAIN-2014-15/segments/1398223205375.6/warc/CC-MAIN-20140423032005-00537-ip-10-147-4-33.ec2.internal.warc.gz"}
|
San Lorenzo, CA Algebra Tutor
Find a San Lorenzo, CA Algebra Tutor
...I have a large library of enrichment material for those students who need/want additional challenge, and I also have experience tailoring my explanations to meet the learning style of students
who are having difficulty with the content. Pre-calculus is often somewhat of a survey course without a...
10 Subjects: including algebra 1, algebra 2, calculus, geometry
Greetings! Thank you for looking at my page. I've recently graduated from Santa Clara University with a Biochemistry degree.
24 Subjects: including algebra 2, algebra 1, chemistry, piano
...Your challenge is our quest! I'm excited to jump in, and I look forward to hearing from you. InDesign was covered in my landscape architecture classes in college, and we used it to compose
large-format boards for presentations, including text, photos, and graphics.
34 Subjects: including algebra 1, algebra 2, reading, Spanish
...As my students have found out, I am friendly, have great patience, good at making difficult subjects understandable and hard-to-remember equations memorable. I can make seemingly dull math/
science subjects interesting, so my sessions are always interesting to my students. I highly customize to the specific needs, learning styles, and personalities of my students.
8 Subjects: including algebra 2, physics, geometry, calculus
I am an experienced and licensed teacher. I have taught 6th grade earth science and 8th grade physical science. I have tutored algebra 2, geometry, and Spanish as well as various sciences.
24 Subjects: including algebra 2, algebra 1, chemistry, Spanish
Related San Lorenzo, CA Tutors
San Lorenzo, CA Accounting Tutors
San Lorenzo, CA ACT Tutors
San Lorenzo, CA Algebra Tutors
San Lorenzo, CA Algebra 2 Tutors
San Lorenzo, CA Calculus Tutors
San Lorenzo, CA Geometry Tutors
San Lorenzo, CA Math Tutors
San Lorenzo, CA Prealgebra Tutors
San Lorenzo, CA Precalculus Tutors
San Lorenzo, CA SAT Tutors
San Lorenzo, CA SAT Math Tutors
San Lorenzo, CA Science Tutors
San Lorenzo, CA Statistics Tutors
San Lorenzo, CA Trigonometry Tutors
Nearby Cities With algebra Tutor
Alameda algebra Tutors
Albany, CA algebra Tutors
Belmont, CA algebra Tutors
Burlingame, CA algebra Tutors
Castro Valley algebra Tutors
Foster City, CA algebra Tutors
Hayward, CA algebra Tutors
Hillsborough, CA algebra Tutors
Lafayette, CA algebra Tutors
Los Altos Hills, CA algebra Tutors
Newark, CA algebra Tutors
San Carlos, CA algebra Tutors
San Leandro algebra Tutors
San Ramon algebra Tutors
Union City, CA algebra Tutors
|
{"url":"http://www.purplemath.com/San_Lorenzo_CA_Algebra_tutors.php","timestamp":"2014-04-19T12:25:02Z","content_type":null,"content_length":"23914","record_id":"<urn:uuid:79987a4e-9bc4-4630-9dc7-325a39d70c37>","cc-path":"CC-MAIN-2014-15/segments/1397609537186.46/warc/CC-MAIN-20140416005217-00398-ip-10-147-4-33.ec2.internal.warc.gz"}
|
How to Find the Perpendicular Bisector of Two Points
Edit Article
Gathering InformationCalculating the Equation of the Line
Edited by Musab J, Maluniu, Dipitashick, Trigtchr and 15 others
A perpendicular bisector is a line that cuts a line segment connected by two points exactly in half by a 90 degree angle. To find the perpendicular bisector of two points, all you need to do is find
their midpoint and opposite reciprocal, and plug these answers into the equation for a line in slope-intercept form. If you want to know how to find the perpendicular bisector of two points, just
follow these steps.
Method 1 of 2: Gathering Information
1. 1
Find the midpoint of the two points. To find the midpoint of two points, simply plug them into the midpoint formula: [(x[1] + x[2])/2,( y[1] + y[2])/2]. This means that you're just finding the
average of the x and y coordinates of the two sets of points, which leads you to the midpoint of the two coordinates. Let's say we're working with the (x[1], y[1]) coordinates of (2, 5) and the
(x[2], y[2]) coordinates of (8, 3). Here's how you find the midpoint for those two points:^[1]
□ [(2+8)/2, (5 +3)/2] =
□ (10/2, 8/2) =
□ (5, 4)
□ The coordinates of the midpoint of (2, 5) and (8, 3) are (5, 4).
2. 2
Find the slope of the two points. To find the slope of the two points, simply plug the points into the slope formula: (y[2] - y[1]) / (x[2] - x[1]). The slope of a line measures the distance of
its vertical change over the distance of its horizontal change. Here's how to find the slope of the line that goes through the points (2, 5) and (8, 3):^[2]
□ (3-5)/(8-2) =
□ -2/6 =
□ -1/3
☆ The slope of the line is -1/3. To find this slope, you have to reduce 2/6 to its lowest terms, 1/3, since both 2 and 6 are evenly divisible by 2.
3. 3
Find the negative reciprocal of the slope of the two points. To find the negative reciprocal of a slope, simply take the reciprocal of the slope and change the sign. You can take the reciprocal
of a number simply by flipping the x and y coordinates. The reciprocal of 1/2 is -2/1, or just -2; the reciprocal of -4 is 1/4.^[3]
□ The negative reciprocal of -1/3 is 3 because 3/1 is the reciprocal of 1/3 and the sign has been changed from negative to positive.
Method 2 of 2: Calculating the Equation of the Line
1. 1
Write the equation of a line in slope-intercept form. The equation of a line in slope-intercept form is y = mx + b where any x and y coordinates in the line are represented by the "x" and "y,"
the "m" represents the slope of the line, and the "b" represents the y-intercept of the line. The y-intercept is where the line intersects the y-axis. Once you write down this equation, you can
begin to find the equation of the perpendicular bisector of the two points.^[4]
2. 2
Plug the negative reciprocal of the original slope into the equation. The negative reciprocal of the slope of the points (2, 5) and (8, 3) was 3. The "m" in the equation represents the slope, so
plug the 3 into the "m" in the equation of y = mx + b.
□ 3 --> y = mx + b =
□ y = 3x + b
3. 3
Plug the points of the midpoint into the line. You already know that the midpoint of the points (2, 5) and (8, 3) is (5, 4). Since the perpendicular bisector runs through the midpoint of the two
lines, you can plug the coordinates of the midpoint into the equation of the line. Simply plug in (5, 4) into the x and y coordinates of the line.
□ (5, 4) ---> y = 3x + b =
□ 4 = 3(5) + b =
□ 4 = 15 + b
4. 4
Solve for the intercept. You have found three of the four variables in the equation of the line. Now you have enough information to solve for the remaining variable, "b," which is the y-intercept
of this line. Simply isolate the variable "b" to find its value. Just subtract 15 from both sides of the equation.
□ 4 = 15 + b =
□ -11 = b
□ b = -11
5. 5
Write the equation of the perpendicular bisector. To write the equation of the perpendicular bisector, you simply have to plug in the slope of the line (3) and the y-intercept (-11) into the
equation of a line in slope-intercept form. You should not plug in any terms into the x and y coordinates, because this equation will allow you to find any coordinate on the line by plugging in
either any x or any y coordinate.
□ y = mx + b
□ y = 3x - 11
□ The equation for the perpendicular bisector of the points (2, 5) and (8, 3) is y = 3x - 11.
Article Info
Thanks to all authors for creating a page that has been read 174,867 times.
Was this article accurate?
|
{"url":"http://www.wikihow.com/Find-the-Perpendicular-Bisector-of-Two-Points","timestamp":"2014-04-18T03:00:30Z","content_type":null,"content_length":"73606","record_id":"<urn:uuid:beecab19-ba55-4e96-9719-6e9fe1f802f7>","cc-path":"CC-MAIN-2014-15/segments/1397609532480.36/warc/CC-MAIN-20140416005212-00257-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Best Sheep Unit Besides The Breaks?
03-30-2012, 07:47 PM #51
Senior Member
Join Date
Apr 2011
Thanked 268 Times in 212 Posts
Only respect this way as well, UH!
I am not an engineer, and I haven't taken Statistics since 1999, so give me some leeway...
However, my understanding of the Addition Rule in Probability P(A) or P(B) = P(A) + P (B) is that it refers to one draw with outcomes (events) independent of each other. As in:
Bitterroot and UH have applied in a sheep unit with 1 tag where their individual probability of drawing is 2%. The probalility that either Bitterroot or UH draws is P(Bitterroot) + P(UH) which is
(.02) + (.02) = (.04). So there is a 4% chance that either will draw.
Maybe that also applies to separate draws, but that is not my understanding.
If that was the way it worked, you could take a odd/even 50/50 bet and after losing twice, bet the house.
BKC my Grandpa taught me that math error a long time ago. I havent heard it in years.
I think you are both incorrect on calculating independent probabilities. The corect way would be to calculate the odds of not drawing each one and multiplying them.
So if you apply for two units with both 2% odds it would look like: 1- [(1.00-.02) * (1.00-.02)] then you would get 0.0396 or just under 4%
So for 50 years at 2% you get: 1- [(1.00-.02)^50] = .6358 or 63.5% chance over 50 years
If you only had 1%: it would be about 39.5%
I think this is the proper way to calculate this type of odds.
There goes 15 minutes I'll never get back... Why don't you guy's move this over to the math forum topic? Oh wait... there is none...
Alright, I finally got it!
UH, it turns out we are both right after I did some research. My apologies. We were just using two different probability laws!
I was using the Law of independent Trials. You were using the Law of Large Numbers!
That is, on any given year, your odds of drawing a 2% tag is 2%. The draw has no memory of past failures.
However, over a large sample, you would expect to draw 2% of the time, so in 100 years, you would expect to draw twice. The problem with the law of large numbers is that it is more and more
reliable with larger and larger numbers. So if you applied for a 1000 years, you would expect to draw 20 times. But the odds of drawing those 20 times in the last 20 years of the 1000 is the same
as drawing it the first 20, or 1 of the 20 in the first 5, or 5 of the 20 in each of the last 4 decades, etc.
It is like flipping a coin. Even if you get 15 heads in a row, it doesn't make the 16th flip a tails ... BUT over 1000 flips, about 500 are going to be each!
This thread has been very good for me, because it does illustrate the advantages of even slightly higher odds.
Thanks, UH.
BB, I think you are right on, both principles are at work. If after 40 years of applying a person still hasn't drawn, his odds are still lousy in the next draw he's up for. That's why you can't
bet the house on any given play.
My wife told me not to post originally, as working small draw odds is one of the secrets to draw low odds tags, but there are still at least 3 more ways to work the odd that are as good or better
then that one. I turn 70 years old in 2034, so feel free to ask me then.
Maybe Eastman's will have to set-up a "math" or "drawing odds" forum...that was a good one
BB, you mentioned the new bonus points squared in Montana that was the first I had heard of that. Is that for residents only? I might just have to move to Montana when I have 30 points.
It is for everybody. Of course, residents have a lot more licenses available to them, though.
In case anyone is curious, I'm still on the 680 train.
Arise... Kill, Eat! - Acts 10:13
03-30-2012, 07:49 PM #52
Senior Member
Join Date
Dec 2011
Thanked 26 Times in 19 Posts
03-31-2012, 09:56 AM #53
Junior Member
Join Date
Jul 2011
Thanked 0 Times in 0 Posts
03-31-2012, 11:26 AM #54
Senior Member
Join Date
Mar 2011
Colorado Mountains
Thanked 49 Times in 43 Posts
03-31-2012, 11:38 AM #55
Senior Member
Join Date
Feb 2012
The high plains of Colorado
Thanked 43 Times in 40 Posts
03-31-2012, 08:20 PM #56
Senior Member
Join Date
Apr 2011
Thanked 268 Times in 212 Posts
04-01-2012, 10:51 AM #57
Senior Member
Join Date
May 2011
North Umpqua River, Oregon
Thanked 271 Times in 186 Posts
04-01-2012, 12:36 PM #58
Senior Member
Join Date
Apr 2011
Thanked 268 Times in 212 Posts
04-04-2012, 12:05 AM #59
05-02-2012, 08:28 PM #60
|
{"url":"http://www.eastmans.com/forum/showthread.php/1755-Best-Sheep-Unit-Besides-The-Breaks/page6","timestamp":"2014-04-21T10:44:36Z","content_type":null,"content_length":"89788","record_id":"<urn:uuid:388dcca1-4dba-4ba5-81b8-8e2ddba17a82>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00031-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Are roots of transcendental elements transcendental?
up vote 6 down vote favorite
This looks extremely easy, but then again it's late at night...
Let $k$ be a commutative ring with unity. An element $a$ of a $k$-algebra $A$ is said to be transcendental over $k$ if and only if every polynomial $P\in k\left[X\right]$ (with $X$ being an
indeterminate) such that $P\left(a\right)=0$ must satisfy $P=0$.
Let $n$ be a positive integer. Let $A$ be a $k$-algebra, and $t$ be an element of $A$ such that $t^n$ is transcendental over $k$. Does this yield that $t$ is transcendental over $k$ ?
There is a rather standard approach to a problem like this which works if $k$ is reduced (namely, assume that $t$ is not transcendental, take a nonzero polynomial $P$ annihilated by $t$, and consider
the product $\prod\limits_\omega P\left(\omega X^{1/n}\right)$, where $\omega$ runs over a full multiset of $n$-th roots of unity adjoined to $k$; this product can be seen to lie in $k\left[x\right]$
and annihilate $t^n$; this all requires a lot more work to put on a solid footing). There are even easier proofs around when $k$ is an integral domain or a field (indeed, in this case, if $t$ is not
transcendental over $k$, then $t$ is algebraic over $k$, so that, by a known theorem, $t^n$ is algebraic over $k$ as well, hence not transcendental). I am wondering if there is a counterexample in
the general case or I am just blind...
ac.commutative-algebra polynomials commutative-rings
2 Perhaps this is wrong as you say it is late: if t where not transcendental it would be algebraic so k[t] is finite dimensional over k, but k[t] contains k[t^n] so t^n is algebraic, contradiction.
– quid May 29 '13 at 2:05
2 @quid: $k$ is a commutative ring, not a field. You can't conclude that $k[t]$ is finitely generated as a $k$-module either (since we may have for example $k = \mathbb{Z}, t = \frac{1}{2}$). –
Qiaochu Yuan May 29 '13 at 2:37
5 @darij: Let $k$ contain two elements $a, b$ such that $a^2 = ab = b^2 = 0$ and let $A = k[t]/(at^3 - b)$. By construction, $t$ is algebraic. It looks like $t^2$ might be transcendental, although
I'm not sure how to prove it. – Qiaochu Yuan May 29 '13 at 3:03
@Qiaochu Yuan: but it is denoted $k$, it must be a field! :-) Thanks for the correction, somehow I thought I was missing something obvious, but that $k$ is not a field I just did not notice (while
it is clearly stated). – quid May 29 '13 at 10:25
add comment
1 Answer
active oldest votes
$\def\F{{\mathbb F}}$ This is just a proof of Qiaochu's example.
up vote 5 down Let $k=\F[Y,Z]/(Y^2,YZ,Z^2)$, $a=\overline{Y}$, $b=\overline{Z}$. In the ring $k[X]$ we have $I=(aX^3-b)=\{aP(X)X^3-bP(X): P(X)\in \F[X]\}$, since $a(aX^3-b)=b(aX^3-b)=0$. So each
vote accepted element of this ideal contains both even and odd powers of $X$. Thus in $k[X]/I$ the element $t=\overline{X}$ is algebraic, while $t^2$ is not.
I believe the term is "integral" for which one requires the equation of integrally to be monic. It might be easy, but I actually don't see why $t$ is "algebraic" over $k$. Can you
tell me a polynomial $P$ in $k[x]/I$ such that $P(t) = 0$? If $\mathbb{F}$ is a field, I thought that $k[X] \cong \mathbb{F}[X,Y,Z]/(Y^2,YZ,Z^2, YX^3-Z)$. Isn't this isomorphic to
$\mathbb{F}[Y,X]/ (Y^2)$? – Youngsu May 29 '13 at 8:53
Yes, I was wrong about the term, sorry. The polynomial is simply $at^3-b$... – Ilya Bogdanov May 29 '13 at 9:17
1 And yes, we may say that $k[t]={\mathbb F}[X,Y]/(Y^2)$, $t=\overline{X}$, and $k$ is a subring generated by $a=\overline{Y}$ and $b=\overline{X}^3\overline{Y}$. Then $at^3-b=0$,
but $t^2$ is transcendental. – Ilya Bogdanov May 29 '13 at 9:22
Qiaochu and Ilya: thank you! – darij grinberg May 29 '13 at 13:49
add comment
Not the answer you're looking for? Browse other questions tagged ac.commutative-algebra polynomials commutative-rings or ask your own question.
|
{"url":"http://mathoverflow.net/questions/132174/are-roots-of-transcendental-elements-transcendental/132192","timestamp":"2014-04-20T01:31:03Z","content_type":null,"content_length":"62788","record_id":"<urn:uuid:1d3404c9-b7dd-4779-889a-1155cf470890>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00317-ip-10-147-4-33.ec2.internal.warc.gz"}
|
For an advanced course in digital design for seniors and first-year graduate students in electrical engineering, computer engineering, and computer science.
This book builds on the student's background from a first course in logic design and focuses on developing, verifying, and synthesizing designs of digital circuits. The Verilog language is introduced
in an integrated, but selective manner, only as needed to support design examples (includes appendices for additional language details). It addresses the design of several important circuits used in
computer systems, digital signal processing, image processing, and other applications.
Table of Contents
1 Introduction to Digital Design Methodology 1
1.1 Design Methodology–An Introduction
1.1.1 Design Specification
1.1.2 Design Partition
1.1.3 Design Entry
1.1.4 Simulation and Functional Verification
1.1.5 Design Integration and Verification
1.1.6 Presynthesis Sign-Off
1.1.7 Gate-Level Synthesis and Technology Mapping
1.1.8 Postsynthesis Design Validation
1.1.9 Postsynthesis Timing Verification
1.1.10 Test Generation and Fault Simulation
1.1.11 Placement and Routing
1.1.12 Physical and Electrical Design Rule Checks
1.1.13 Parasitic Extraction
1.1.14 Design Sign-Off
1.2 IC Technology Options
1.3 Overview
2 Review of Combinational Logic Design 13
2.1 Combinational Logic and Boolean Algebra
2.1.1 ASIC Library Cells
2.1.2 Boolean Algebra
2.1.3 DeMorgan’s Laws
2.2 Theorems for Boolean Algebraic Minimization
2.3 Representation of Combinational Logic
2.3.1 Sum-of-Products Representation
2.3.2 Product-of-Sums Representation
2.4 Simplification of Boolean Expressions
2.4.1 Simplification with Exclusive-Or
2.4.2 Karnaugh Maps (SOP Form)
2.4.3 Karnaugh Maps (POS Form)
2.4.4 Karnaugh Maps and Don’t-Cares
2.4.5 Extended Karnaugh Maps
2.5 Glitches and Hazards
2.5.1 Elimination of Static Hazards (SOP Form)
2.5.2 Summary: Elimination of Static Hazards in Two-Level Circuits
2.5.3 Static Hazards in Multilevel Circuits
2.5.4 Summary: Elimination of Static Hazards in Multilevel Circuits
2.5.5 Dynamic Hazards
2.6 Building Blocks for Logic Design
2.6.1 NAND—NOR Structures
2.6.2 Multiplexers
2.6.3 Demultiplexers
2.6.4 Encoders
2.6.5 Priority Encoder
2.6.6 Decoder
2.6.7 Priority Decoder
3 Fundamentals of Sequential Logic Design 69
3.1 Storage Elements
3.1.1 Latches
3.1.2 Transparent Latches
3.2 Flip-Flops
3.2.1 D-Type Flip-Flop
3.2.2 Master—Slave Flip-Flop
3.2.3 J-K Flip-Flops
3.2.4 T Flip-Flop
3.3 Busses and Three-State Devices
3.4 Design of Sequential Machines
3.5 State-Transition Graphs
3.6 Design Example: BCD to Excess-3 Code Converter
3.7 Serial-Line Code Converter for Data Transmission
3.7.1 Design Example: A Mealy-Type FSM for Serial Line-Code Conversion
3.7.2 Design Example: A Moore-Type FSM for Serial Line-Code Conversion
3.8 State Reduction and Equivalent States
4 Introduction to Logic Design with Verilog 103
4.1 Structural Models of Combinational Logic
4.1.1 Verilog Primitives and Design Encapsulation
4.1.2 Verilog Structural Models
4.1.3 Module Ports
4.1.4 Some Language Rules
4.1.5 Top-Down Design and Nested Modules
4.1.6 Design Hierarchy and Source-Code Organization
4.1.7 Vectors in Verilog
4.1.8 Structural Connectivity
4.2 Logic System, Design Verification, and Test Methodology
4.2.1 Four-Value Logic and Signal Resolution in Verilog
4.2.2 Test Methodology
4.2.3 Signal Generators for Testbenches
4.2.4 Event-Driven Simulation
4.2.5 Testbench Template
4.2.6 Sized Numbers
4.3 Propagation Delay
4.3.1 Inertial Delay
4.3.2 Transport Delay
4.4 Truth Table Models of Combinational and Sequential Logic with Verilog
5 Logic Design with Behavioral Models of Combinational
and Sequential Logic 141
5.1 Behavioral Modeling
5.2 A Brief Look at Data Types for Behavioral Modeling
5.3 Boolean Equation-Based Behavioral Models of Combinational Logic
5.4 Propagation Delay and Continuous Assignments
5.5 Latches and Level-Sensitive Circuits in Verilog
5.6 Cyclic Behavioral Models of Flip-Flops and Latches
5.7 Cyclic Behavior and Edge Detection
5.8 A Comparison of Styles for Behavioral Modeling
5.8.1 Continuous Assignment Models
5.8.2 Dataflow/RTL Models
5.8.3 Algorithm-Based Models
5.8.4 Naming Conventions: A Matter of Style
5.8.5 Simulation with Behavioral Models
5.9 Behavioral Models of Multiplexers, Encoders, and Decoders
5.10 Dataflow Models of a Linear-Feedback Shift Register
5.11 Modeling Digital Machines with Repetitive Algorithms
5.11.1 Intellectual Property Reuse and Parameterized Models
5.11.2 Clock Generators
5.12 Machines with Multicycle Operations
5.13 Design Documentation with Functions and Tasks: Legacy or Lunacy?
5.13.1 Tasks
5.13.2 Functions
5.14 Algorithmic State Machine Charts for Behavioral Modeling
5.15 ASMD Charts
5.16 Behavioral Models of Counters, Shift Registers, and Register Files
5.16.1 Counters
5.16.2 Shift Registers
5.16.3 Register Files and Arrays of Registers (Memories)
5.17 Switch Debounce, Metastability, and Synchronizers for Asynchronous Signals
5.18 Design Example: Keypad Scanner and Encoder
6 Synthesis of Combinational and Sequential Logic 235
6.1 Introduction to Synthesis
6.1.1 Logic Synthesis
6.1.2 RTL Synthesis
6.1.3 High-Level Synthesis
6.2 Synthesis of Combinational Logic
6.2.1 Synthesis of Priority Structures
6.2.2 Exploiting Logical Don’t-Care Conditions
6.2.3 ASIC Cells and Resource Sharing
6.3 Synthesis of Sequential Logic with Latches
6.3.1 Accidental Synthesis of Latches
6.3.2 Intentional Synthesis of Latches
6.4 Synthesis of Three-State Devices and Bus Interfaces
6.5 Synthesis of Sequential Logic with Flip-Flops
6.6 Synthesis of Explicit State Machines
6.6.1 Synthesis of a BCD-to-Excess-3 Code Converter
6.6.2 Design Example: Synthesis of a Mealy-Type NRZ-to-Manchester
Line Code Converter
6.6.3 Design Example: Synthesis of a Moore-Type NRZ-to-Manchester
Line Code Converter
6.6.4 Design Example: Synthesis of a Sequence Recognizer 284
6.7 Registered Logic
6.8 State Encoding
6.9 Synthesis of Implicit State Machines, Registers, and Counters
6.9.1 Implicit State Machines
6.9.2 Synthesis of Counters
6.9.3 Synthesis of Registers
6.10 Resets
6.11 Synthesis of Gated Clocks and Clock Enables
6.12 Anticipating the Results of Synthesis
6.12.1 Synthesis of Data Types
6.12.2 Operator Grouping
6.12.3 Expression Substitution
6.13 Synthesis of Loops
6.13.1 Static Loops without Embedded Timing Controls
6.13.2 Static Loops with Embedded Timing Controls
6.13.3 Nonstatic Loops without Embedded Timing Controls
6.13.4 Nonstatic Loops with Embedded Timing Controls
6.13.5 State-Machine Replacements for Unsynthesizable Loops
6.14 Design Traps to Avoid
6.15 Divide and Conquer: Partitioning a Design
7 Design and Synthesis of Datapath Controllers 345
7.1 Partitioned Sequential Machines
7.2 Design Example: Binary Counter
7.3 Design and Synthesis of a RISC Stored-Program Machine
7.3.1 RISC SPM: Processor
7.3.2 RISC SPM:ALU
7.3.3 RISC SPM: Controller
7.3.4 RISC SPM: Instruction Set
7.3.5 RISC SPM: Controller Design
7.3.6 RISC SPM: Program Execution
7.4 Design Example: UART
7.4.1 UART Operation
7.4.2 UART Transmitter
7.4.3 UART Receiver
8 Programmable Logic and Storage Devices 415
8.1 Programmable Logic Devices
8.2 Storage Devices
8.2.1 Read-Only Memory (ROM)
8.2.2 Programmable ROM (PROM)
8.2.3 Erasable ROMs
8.2.4 ROM-Based Implementation of Combinational Logic
8.2.5 Verilog System Tasks for ROMs
8.2.6 Comparison of ROMs
8.2.7 ROM-Based State Machines
8.2.8 Flash Memory
8.2.9 Static Random Access Memory (SRAM)
8.2.10 Ferroelectric Nonvolatile Memory
8.3 Programmable Logic Array (PLA)
8.3.1 PLA Minimization
8.3.2 PLA Modeling
8.4 Programmable Array Logic (PAL)
8.5 Programmability of PLDs
8.6 Complex PLDs (CPLDs)
8.7 Field-Programmable Gate Arrays
8.7.1 The Role of FPGAs in the ASIC Market
8.7.2 FPGA Technologies
8.7.3 XILINX Virtex FPGAs
8.8 Embeddable and Programmable IP Cores for a System-on-a-Chip (SoC)
8.9 Verilog-Based Design Flows for FPGAs
8.10 Synthesis with FPGAs
Related Web Sites
Problems and FPGA-Based Design Exercises
9 Algorithms and Architectures for Digital Processors 515
9.1 Algorithms, Nested-Loop Programs, and Data Flow Graphs
9.2 Design Example: Halftone Pixel Image Converter
9.2.1 Baseline Design for a Halftone Pixel Image Converter
9.2.2 NLP-Based Architectures for the Halftone Pixel Image Converter
9.2.3 Minimum Concurrent Processor Architecture for a Halftone Pixel Image Converter
9.2.4 Halftone Pixel Image Converter: Design Tradeoffs
9.2.5 Architectures for Dataflow Graphs with Feedback
9.3 Digital Filters and Signal Processors
9.3.1 Finite-Duration Impulse Response Filter
9.3.2 Digital Filter Design Process
9.3.3 Infinite-Duration Impulse Response Filter
9.4 Building Blocks for Signal Processors
9.4.1 Integrators (Accumulators)
9.4.2 Differentiators
9.4.3 Decimation and Interpolation Filters
9.5 Pipelined Architectures
9.5.1 Design Example: Pipelined Adder
9.5.2 Design Example: Pipelined FIR Filter
9.6 Circular Buffers
9.7 Asynchronous FIFOs–Synchronization across Clock Domains
9.7.1 Simplified Asynchronous FIFO
9.7.2 Clock Domain Synchronization for an Asynchronous FIFO
10 Architectures for Arithmetic Processors 627
10.1 Number Representation
10.1.1 Signed Magnitude Representation of Negative Integers
10.1.2 Ones Complement Representation of Negative Integers
10.1.3 Twos Complement Representation of Positive and Negative Integers
10.1.4 Representation of Fractions
10.2 Functional Units for Addition and Subtraction
10.2.1 Ripple-Carry Adder
10.2.2 Carry Look-Ahead Adder
10.2.3 Overflow and Underflow
10.3 Functional Units for Multiplication
10.3.1 Combinational (Parallel) Binary Multiplier
10.3.2 Sequential Binary Multiplier
10.3.3 Sequential Multiplier Design: Hierarchical Decomposition
10.3.4 STG-Based Controller Design
10.3.5 Efficient STG-Based Sequential Binary Multiplier
10.3.6 ASMD-Based Sequential Binary Multiplier
10.3.7 Efficient ASMD-Based Sequential Binary Multiplier
10.3.8 Summary of ASMD-Based Datapath and Controller Design
10.3.9 Reduced-Register Sequential Multiplier
10.3.10 Implicit-State-Machine Binary Multiplier
10.3.11 Booth’s Algorithm Sequential Multiplier
10.3.12 Bit-Pair Encoding
10.4 Multiplication of Signed Binary Numbers
10.4.1 Product of Signed Numbers: Negative Multiplicand,
Positive Multiplier
10.4.2 Product of Signed Numbers: Positive Multiplicand,
Negative Multiplier
10.4.3 Product of Signed Numbers: Negative Multiplicand,
Negative Multiplier
10.5 Multiplication of Fractions
10.5.1 Signed Fractions: Positive Multiplicand, Positive Multiplier
10.5.2 Signed Fractions: Negative Multiplicand, Positive Multiplier
10.5.3 Signed Fractions: Positive Multiplicand, Negative Multiplier
10.5.4 Signed Fractions: Negative Multiplicand, Negative Multiplier
10.6 Functional Units for Division
10.6.1 Division of Unsigned Binary Numbers
10.6.2 Efficient Division of Unsigned Binary Numbers
10.6.3 Reduced-Register Sequential Divider
10.6.4 Division of Signed (2s Complement) Binary Numbers
10.6.5 Signed Arithmetic
11 Postsynthesis Design Tasks 749
11.1 Postsynthesis Design Validation
11.2 Postsynthesis Timing Verification
11.2.1 Static Timing Analysis
11.2.2 Timing Specifications
11.2.3 Factors That Affect Timing
11.3 Elimination of ASIC Timing Violations
11.4 False Paths
11.5 System Tasks for Timing Verification
11.5.1 Timing Check: Setup Condition
11.5.2 Timing Check: Hold Condition
11.5.3 Timing Check: Setup and Hold Conditions
11.5.4 Timing Check: Pulsewidth Constraint
11.5.5 Timing Check: Signal Skew Constraint
11.5.6 Timing Check: Clock Period
11.5.7 Timing Check: Recovery Time
11.6 Fault Simulation and Manufacturing Tests
11.6.1 Circuit Defects and Faults
11.6.2 Fault Detection and Testing
11.6.3 D-Notation
11.6.4 Automatic Test Pattern Generation for Combinational Circuits
11.6.5 Fault Coverage and Defect Levels
11.6.6 Test Generation for Sequential Circuits
11.7 Fault Simulation
11.7.1 Fault Collapsing
11.7.2 Serial Fault Simulation
11.7.3 Parallel Fault Simulation
11.7.4 Concurrent Fault Simulation
11.7.5 Probabilistic Fault Simulation
11.8 JTAG Ports and Design for Testability
11.8.1 Boundary Scan and JTAG Ports
11.8.2 JTAG Modes of Operation
11.8.3 JTAG Registers
11.8.4 JTAG Instructions
11.8.5 TAP Architecture
11.8.6 TAP Controller State Machine
11.8.7 Design Example:Testing with JTAG
11.8.8 Design Example: Built-In Self-Test
A Verilog Primitives 851
A.1 Multiinput Combinational Logic Gates
A.2 Multioutput Combinational Gates
A.3 Three-State Logic Gates
A.4 MOS Transistor Switches
A.5 MOS Pull-Up/Pull-Down Gates
A.6 MOS Bidirectional Switches
B Verilog Keywords 863
C Verilog Data Types 865
C.1 Nets
C.2 Register Variables
C.3 Constants
C.4 Referencing Arrays of Nets or Regs
D Verilog Operators 873
D.1 Arithmetic Operators
D.2 Bitwise Operators
D.3 Reduction Operators
D.4 Logical Operators
D.5 Relational Operators
D.6 Shift Operators
D.7 Conditional Operator
D.8 Concatenation Operator
D.9 Expressions and Operands
D.10 Operator Precedence
D.11 Arithmetic with Signed Data Types
D.12 Signed Literal Integers
D.13 System Functions for Sign Conversion
2.1.1 Assignment Width Extension
E Verilog Language Formal Syntax 885
F Verilog Language Formal Syntax 887
F.1 Source text
F.2 Declarations
F.3 Primitive instances
F.4 Module and generated instantiation
F.5 UDP declaration and instantiation
F.6 Behavioral statements
F.7 Specify section
F.8 Expressions
F.9 General
G Additional Features of Verilog 913
G.1 Arrays of Primitives
G.2 Arrays of Modules
G.3 Hierarchical Dereferencing
G.4 Parameter Substitution
G.5 Procedural Continuous Assignment
G.6 Intra-Assignment Delay
G.7 Indeterminate Assignment and Race Conditions
G.8 wait STATEMENT
G.9 fork join Statement
G.10 Named (Abstract) Events
G.11 Constructs Supported by Synthesis Tools
H Flip-Flop and Latch Types 925
I Verilog-2001, 2005 927
I.1 ANSI C Style Changes
I.2 Code Management
I.3 Support for Logic Modeling
I.4 Support for Arithmetic
I.5 Sensitivity List for Event Control
I.6 Sensitivity List for Combinational Logic
I.7 Parameters
I.8 Instance Generation
J Programming Language Interface 949
K Web sites 951
L Web-Based Resources 953
Index 965
Purchase Info ?
With CourseSmart eTextbooks and eResources, you save up to 60% off the price of new print textbooks, and can switch between studying online or offline to suit your needs.
Once you have purchased your eTextbooks and added them to your CourseSmart bookshelf, you can access them anytime, anywhere.
Buy Access
Advanced Digital Design with the Verilog HDL, CourseSmart eTextbook, 2nd Edition
Format: Safari Book
$86.99 | ISBN-13: 978-0-13-604229-7
|
{"url":"http://www.mypearsonstore.com/bookstore/advanced-digital-design-with-the-verilog-hdl-coursesmart-0136042295","timestamp":"2014-04-18T08:37:13Z","content_type":null,"content_length":"37790","record_id":"<urn:uuid:1019f9ad-5d14-48c2-9f68-d90d0b5a1c09>","cc-path":"CC-MAIN-2014-15/segments/1397609533121.28/warc/CC-MAIN-20140416005213-00561-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Archives of the Caml mailing list > Message from Andreas Rossberg
[Caml-list] Polymorphic variants
Date: -- (:)
From: Andreas Rossberg <rossberg@p...>
Subject: Re: [Caml-list] How to compare recursive types?
John Max Skaller wrote:
> > In that case
> >any type term can be interpreted as a rational tree.
> >
> .. what's that?
An infinite tree that has only a finite number of different subtrees.
Such trees can naturally be represented as cyclic graphs.
> >If you add lambdas (under recursion) things get MUCH harder. Last time I
> >looked the problem of equivalence of such types under the equi-recursive
> >interpretation you seem to imply (i.e. recursion is `transparent') was
> >believed to be undecidable.
> >
> In the first instance, the client will have to apply type functions
> to create types ..
I don't understand what you mean. If you have type functions you have
type lambdas, even if they are implicit in the source syntax. And
decidability of structural recursion between type functions is an open
problem, at least for arbitrary functions, so be careful. (Thanks to
Haruo for reminding me of Salomon's paper, I already forgot about that.)
OCaml avoids the problem by requiring uniform recursion for structural
types, so that all lambdas can be lifted out of the recursion.
> >[...]
> I don't understand: probably because my description of the algorithm
> was incomplete, you didn't follow my intent. Real code below.
OK, now it is getting clearer. Your idea is to unroll the types k times
for some k. Of course, this is trivially correct for infinite k. The
correctness of your algorithm depends on the existence of a finite k.
> I guess that, for example, 2(n +1) is enough for the counter,
> where n is the number of typedefs in the environment.
I don't think so. Consider:
t1 = a*(a*(a*(a*(a*(a*(a*(a*b)))))))
t2 = a*t2
This suggests that k must be at least 2m(n+1), where m is the size of
the largest type in the environment. Modulo this correction, you might
be correct.
Still, ordinary graph traversal seems the more appropriate approach to
me: represent types as cyclic graphs and check whether the reachable
subgraphs are equivalent.
There is also a recent paper about how to apply hash-consing techniques
to cyclic structures:
author = "Laurent Mauborgne",
title = "An Incremental Unique Representation for Regular Trees",
editor = "Gert Smolka",
journal = "Nordic Journal of Computing",
volume = 7,
pages = "290--311",
year = 2000,
month = nov,
Andreas Rossberg, rossberg@ps.uni-sb.de
"Computer games don't affect kids; I mean if Pac Man affected us
as kids, we would all be running around in darkened rooms, munching
magic pills, and listening to repetitive electronic music."
- Kristian Wilson, Nintendo Inc.
To unsubscribe, mail caml-list-request@inria.fr Archives: http://caml.inria.fr
Bug reports: http://caml.inria.fr/bin/caml-bugs FAQ: http://caml.inria.fr/FAQ/
Beginner's list: http://groups.yahoo.com/group/ocaml_beginners
|
{"url":"http://caml.inria.fr/pub/ml-archives/caml-list/2002/04/2141c1e241cbe401242fd04d832f5902.en.html","timestamp":"2014-04-16T09:26:14Z","content_type":null,"content_length":"12048","record_id":"<urn:uuid:faf34606-26c9-4a27-94f1-bc68a06f599b>","cc-path":"CC-MAIN-2014-15/segments/1398223206118.10/warc/CC-MAIN-20140423032006-00037-ip-10-147-4-33.ec2.internal.warc.gz"}
|
ALEX Lesson Plans
Subject: English Language Arts (9), or English Language Arts (9), or Mathematics (9 - 12)
Title: Fibonacci Poetry
Description: Students will review the Fibonacci sequence and compose poems with the number of syllables in each line corresponding to the the numbers of that sequence.This lesson plan was created as
a result of the Girls Engaged in Math and Science University, GEMS-U Project.
Subject: English Language Arts (9), or English Language Arts (9), or Mathematics (9 - 12), or Technology Education (9 - 12)
Title: Marathon Math
Description: This unit on sequences and series is intended to help students make the connection from math to real life situations. Developing a marathon training program for a beginner runner is one
simple way that students may use patterns in real life. The total mileage per week usually creates a pattern over time. Mathematical operations on patterns, sequences, and series enable students to
do the calculations necessary for exploring the pattern. Students also explore nutrition information needed for a training program as proper nutrition is an important part of sports training.
Thinkfinity Lesson Plans
Subject: Mathematics
Title: Golden Ratio
Description: In this Illuminations lesson, students explore the Fibonacci sequence. They examine how the ratio of two consecutive Fibonacci numbers creates the Golden Ratio and identify real-life
examples of the Golden Ratio.
Thinkfinity Partner: Illuminations
Grade Span: 6,7,8
Subject: Mathematics
Title: Counting Trains - Fibonacci
Description: In this lesson, students use Cuisenaire Rods to build trains of different lengths and investigate patterns. Students make algebraic connections by writing rules and representing data in
tables and graphs.
Thinkfinity Partner: Illuminations
Grade Span: 6,7,8
Subject: Mathematics
Title: Vigenere Grid
Description: This reproducible transparency, from an Illuminations lesson, depicts a Vigenere Grid, which is used for encoding a message using a polyalphabetic cipher.
Thinkfinity Partner: Illuminations
Grade Span: 9,10,11,12
Subject: Mathematics
Title: Vigenere Cipher
Description: In this lesson, one of a multi-part unit from Illuminations, students learn about the polyalphabetic Vigenere cipher. They encode and decode text using inverse operations.
Thinkfinity Partner: Illuminations
Grade Span: 9,10,11,12
Subject: Arts,Mathematics
Title: Hearing Music, Seeing Waves
Description: This reproducible pre-activity sheet, from an Illuminations lesson, presents summary questions about the mathematics of music, specifically focused on sine waves and the geometric
sequences of notes that are an octave apart.
Thinkfinity Partner: Illuminations
Grade Span: 9,10,11,12
Subject: Mathematics,Science
Title: Numerical Analysis
Description: In this lesson, one of a multi-part unit from Illuminations, students use iteration, recursion, and algebra to model and analyze a changing fish population. They use an interactive
spreadsheet application to investigate their models.
Thinkfinity Partner: Illuminations
Grade Span: 9,10,11,12
Subject: Mathematics
Title: Recursive and Exponential Rules
Description: In this lesson, one of a multi-part unit from Illuminations, students determine recursive and exponential rules for various sequences.
Thinkfinity Partner: Illuminations
Grade Span: 6,7,8,9,10,11,12
Subject: Mathematics,Science
Title: Symbolic Analysis
Description: In this lesson, one of a multi-part unit from Illuminations, students use iteration, recursion, and algebra to model and analyze a changing fish population. They work to find additional
equations and formulas to represent the data.
Thinkfinity Partner: Illuminations
Grade Span: 9,10,11,12
ALEX Podcasts
Magic Squares
This podcast shows what a magic square is and the varying differences between the different types. This podcast also shows how to make the simplest version of the magic square.There is an example of
a sixteen cell square as well.It does not, however, show how to make a sixteen cell though.Finally, it shows how a magic square works.
Thinkfinity Learning Activities
Subject: Mathematics
Title: Fractal Tool
Description: This student interactive, from Illuminations, illustrates iteration graphically. Students can view preset iterations of various shapes and/or choose to create their own iterations.
Thinkfinity Partner: Illuminations
Grade Span: 3,4,5,6,7,8,9,10,11,12
|
{"url":"http://alex.state.al.us/all.php?std_id=54125","timestamp":"2014-04-17T06:49:49Z","content_type":null,"content_length":"69431","record_id":"<urn:uuid:933edd4f-630e-4b65-b283-ac5b690d898e>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00493-ip-10-147-4-33.ec2.internal.warc.gz"}
|
proving a set is countable
November 5th 2013, 06:22 AM #1
Super Member
Sep 2008
proving a set is countable
Can anyone please help me with this question, I am finding it very hard. I am not sure where to begin
Re: proving a set is countable
Let's write $S(E)$ a slightly different way:
$S(E) = \left\{A \subseteq E \mid \text{card}(A) = n, n\in \mathbb{N}\right\}$
Note that this is similar to the power set, but the power set includes subsets with infinite cardinality and the empty set.
Let $P$ be the set of primes (as suggested in the hint). Define $f:E \to P$ to be any injection. Such an injection exists since both $E$ and $P$ are countable. Then, define $g:S(E) \to \mathbb{N}
$ by $g(A) = \prod_{e \in A}f(e)$. Prove that $g$ is an injection.
Another proof of this: If $E$ is finite, then $S(E) = \mathcal{P}(E) \setminus \{\emptyset\}$. Hence, $\text{card}(S(E)) = 2^{\text{card}(E)}-1$ where $\mathcal{P}(E)$ is the power set of $E$, so
$S(E)$ is finite and therefore countable. Suppose $E$ is infinite and let $E = \{e_0,e_1,\ldots\}$ be an enumeration of the elements of $E$. Then, define $h: S(E) \to \mathbb{N}$ by $h(A) = \sum_
{e_k \in A}2^k$. Show that $h$ is a bijection.
Last edited by SlipEternal; November 5th 2013 at 07:00 AM.
November 5th 2013, 06:52 AM #2
MHF Contributor
Nov 2010
|
{"url":"http://mathhelpforum.com/algebra/223889-proving-set-countable.html","timestamp":"2014-04-17T01:17:05Z","content_type":null,"content_length":"38327","record_id":"<urn:uuid:e4a4e6d4-467e-4f40-a991-8a840df67f63>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00378-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Estimating With Finite Sums
December 4th 2006, 07:35 PM #1
Junior Member
Oct 2006
Estimating With Finite Sums
can somebody explain to me how to do the following problem?
y = 2x-x^2 [0,2]
partition [0,2] into 4 subintervals and show the four rectangles that LRAM uses to approximate the area of the function. Compute the LRAM sum without a calculator.
thanks for any help
I assume you need to find,
$\int_0^2 2x-x^2dx$
The length of the interval is 2.
And there are 4 rectangles, so the width of each one is,
$\Delta x=\frac{2}{4}=.5$
Now form the sum,
$\sum_{k=1}^n f(a+k\Delta x)\Delta x$
$\Delta x=.5$
(Note, $k$ is a running value. It has no specific value. It simply are all the integers between 1 and 4).
Thus, $f(a+k\Delta x)=f(.5k)$
$\sum_{k=1}^4 f(.5k)\Delta x$
$\sum_{k=1}^4 [2(.5)k-(.5k)^2](.5)$
$\sum_{k=1}^4 (k-.25k^2)(.5)$
December 4th 2006, 07:45 PM #2
Global Moderator
Nov 2005
New York City
|
{"url":"http://mathhelpforum.com/calculus/8415-estimating-finite-sums.html","timestamp":"2014-04-18T15:56:17Z","content_type":null,"content_length":"36231","record_id":"<urn:uuid:3b32c965-3ca7-4019-b229-032e89a2b136>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00078-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Where would the Two Trains Meet?
Date: 5 Jun 1995 13:24:23 -0400
From: J. Poholsky's grade6A Charles River
Subject: Where would the Trains Meet?
Hello Mr.math,
Do you really do this for a living? I mean what kind of a living is that?
You sit around a computer typing all day?! Don't you get lonely? I'd simply
hate it. O.K. here's a math problem, you have 2 trains, 1 train left
Baltimore at 6:00 pm and is travelling 19 m.p.h. Yes we understand that's a
little slow, but this is a math problem. The other train left from
Philadelphia in the opposite direction and is going 85. Yes this is fast, but
it is a math problem. Where would they meet?
Date: 6 Jun 1995 11:09:22 -0400
From: Dr. Ken
Subject: Re: Where would the Trains Meet?
Hello there!
>Do you really do this for a living? I mean what kind of a living is that?
>You sit around a computer typing all day?! Don't you get lonely? I'd simply
>hate it.
Well, to each his or her own then. Actually, you may be surprised to learn
that none of the doctors of math is paid for this, we do it on a volunteer
basis. Since we divide the work among a few different people, nobody has to
sit around a computer all day, we each sit around a computer for just a
little while. And it's fun, darnit!
>O.K. here's a math problem, you have 2 trains, 1 train left
>Baltimor at 6:00 pm and is traveling 19 m.p.h. Yes we understand that's a
>little slow, but this is a math problem. The other train left from
>Philedalphia in the opposite direction and is going 85. Yes this is fast but
>it is a math problem. Where would they meet?
Picture what's happening in this problem. You've got two trains speeding
toward each other, one going 85 mph and the other going 19 mph. So their
total speed, relative to each other, is 85 mph + 19 mph = 104 mph. I don't
know exactly how far apart these two cities are from each other (although I
made the drive between Baltimore and Philly just yesterday!), so I'll denote
that distance by the letter D. You can find out how far it is and just fill
it in.
If the total speed is 104 mph, then let the time the two trains travel be t.
Then since Distance = Rate x Time, we get
D = 104t, so t = D/104. This tells you _when_ the trains meet, and to find
out _where_ they meet you would take t and use the formula Distance = Rate x
Time again. Hope this helps! Thanks for the question.
|
{"url":"http://mathforum.org/library/drmath/view/58613.html","timestamp":"2014-04-19T10:02:32Z","content_type":null,"content_length":"7308","record_id":"<urn:uuid:0fc6295b-7357-4a52-a5d0-6497947a97ee>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00338-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Generalizing the Ramsey Problem through Diameter
Given a graph $G$ and positive integers $d,k$, let $f_d^k(G)$ be the maximum $t$ such that every $k$-coloring of $E(G)$ yields a monochromatic subgraph with diameter at most $d$ on at least $t$
vertices. Determining $f_1^k(K_n)$ is equivalent to determining classical Ramsey numbers for multicolorings. Our results include
$\bullet$ determining $f_d^k(K_{a,b})$ within 1 for all $d,k,a,b$
$\bullet$ for $d \ge 4$, $f_d^3(K_n)=\lceil n/2 \rceil +1$ or $\lceil n/2 \rceil$ depending on whether $n \equiv 2 (mod 4)$ or not
$\bullet$ $f_3^k(K_n) > {{n}\over {k-1+1/k}}$
The third result is almost sharp, since a construction due to Calkin implies that $f_3^k(K_n) \le {{n}\over {k-1}} +k-1$ when $k-1$ is a prime power. The asymptotics for $f_d^k(K_n)$ remain open when
$d=k=3$ and when $d\ge 3, k \ge 4$ are fixed.
Full Text:
|
{"url":"http://www.combinatorics.org/ojs/index.php/eljc/article/view/v9i1r42/0","timestamp":"2014-04-20T21:48:44Z","content_type":null,"content_length":"15022","record_id":"<urn:uuid:396b2ebf-9ac9-483d-9125-b02bd44094ac>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00350-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Smartboard Smarty
Smartboard Smarty
a question. They will receive an automated email and will return to answer you as soon as possible. Please
to ask your question.
Buyer re: Measurement Estimation Liters, Milliliters Metric Smartboard Lesson
Buyer re: Measurement Estimation Liters, Milliliters Metric Smartboard Lesson
Is there a chance you can concert this into a powerpoint?
Buyer re: Daily Math Review Week 11
Can I get it for free
Amanda D.
Hi! I just downloaded the Radius and Diameter SmartBoard activity and I cannot open it in Word. The Adobe format opens fine but then it's not interactive. Do you have any suggestions on what to try
next? I planned on using this today.
Leslie Ames
(TpT Seller) re: Vertex Edge Graphs Grade 3 Smartboard Math Lesson - Lessons
Can this smartboard lesson be downloaded and copied so that the kids have something to work on while I am teaching the lesson?
This SMART Notebook lesson can be printed. When you are in Notebook software, click on file, then scroll down and click on print. You can print it out full page, handouts, and thumbnails.
Leslie Ames
(TpT Seller) re: Vertex Edge Graphs Grade 3 Smartboard Math Lesson - Lessons
Can this be down loaded so the kids have a copy while we are working it on the smartboard?
I just downloaded your estimation page with 20 questions and I cannot get it to print. When I save it, I get an excel page on screen and then when I print it comes up blank. What am I doing wrong?
I am not sure. Send me an email to scotoole@cox.net and I will send you a PDF version.
Hi! I tried mulitiple times to download your Christmas Cause and Effect smartboard activity. It would not let me.
Do you know what I can do? Or is there a refund policy?
Kelly Edmonds
Send me an email to scotoole@cox.net and I will send it to you directly. I am not sure why it won't let you download it.
Allison H. re: Math Smartboard Lesson Parallel, Perpendicular, and Intersecting lines
I paid for the product, yet I am unable to use it. When I click on the file to download it, I get an error message when activeinspire tries to open it.
Do you have Smartboard Notebook software on your computer? Send your reply to scotoole@cox.net
Melantha Yazzie
(TpT Seller) re: Math Smartboard Lessons Multiplication 3 digits x 1 digit
can you give this to me in PDF?
Please send me an email to scotoole@cox.net and I will be happy to send you a pdf copy of this lesson.
Teachers Resource Force
(TpT Seller)
Hiya, I noticed you have Olympic themed resources in your shop and I just thought I'd let you know I received this email today from the legal department about a request for permission for using the
Olympic name and logo:
"Please note that the Olympic properties belong to the IOC and the rights to the commercial reproduction of these are reserved for the members of the Olympic movement and its official partners.
The IOC considers your request to be commercial, and therefore cannot consent to your reproduce the wordmark "Olympics" and Olympic symbol in the title of your resource pack."
I'm afraid we can't sell products with the Olympic name or logo - I just wanted to share this with you too to avoid any messy legal action!
Thank you for the heads up.
Buyer re: Charlotte's Web Chapter Quizzes
You should really update the editing on these. The questions are good. It would not take you long to update that and the layout. I went for a two collumn layout and it still fit on one page with the
multiple choice answers vertically placed instead of horizontal.
Can you send me the file in PDF format? Thank you
Send me an email to scotoole@cox.net and I will send you a PDF form of this lesson.
Buyer re: Probability: Likely, Unlikely, Equal, Certain, Impossible - Smartboard
I just purchased this activity and cannot open it on my Mac. What program can I use to open this file.
It takes SMARTNote software to open this file.
Tessa Grotegut
(TpT Seller) re: Pirate Place Value Decimals (Tenths, hundredths)Math Smartboard Lesson
I was wondering same thing as Jodigary2....is this activispire compatible??? Promethean board?
I was told this would work on active inspire boards, but I do not have one to test it out. I would try to download one of my free lessons to test it out to see if it works.
Jody C. re: Math Smartboard Lesson Double Digit Multiplication Smartboard
Is there any way this can be sent to me in a PDF form? Thanks.
Send me your email address to scotoole@cox.net and I will send a PDF version to you.
Amy Clifford
(TpT Seller)
How do you save your Smart notebooks in order to sell them so that others can not take your fonts or graphics?
Buyer re: Pirate Place Value Decimals (Tenths, hundredths)Math Smartboard Lesson
Will this work with active inspire?
I was told this would work on active inspire boards, but I do not have one to test it out. I would try to download one of my free lessons to test it out to see if it works.
Buyer re: Thanksgiving Fact or Opinion Powerpoint PPT Language Arts Lesson
Do you have to have special software for these lessons? I only have an interactive projector, not a true smart board. I can use power point activities but was worried about these ones. Thanks!
You can download free software to view these lessons. Go to SMARTtech downloads. Download the SMART Notebook Interactive Viewer Software.
Buyer re: Daily Math Review Part 1 (Weeks 1-12)
Do they print clearly in black and white?
I believe they do. I have sold many of these with many good reviews. I would suggest downloading week 1 (which I list for free) and try printing it out on your printer to see what you think.
(TpT Seller) re: Daily Math Review Week 1
Hi, I purchased all your sets of Daily Math Review for my 5th grade math class. They are fantastic. However, I notice that you do not have your name or a copyright on any of the sheets. Considering
that you have a wonderful product, I am just writing to suggest that your protect yourself and your fantastic product. Thanks for such great material!
Buyer re: Daily Math Review Week 1
Do you have these daily math reviews divided by grade levels? They are fantastic for my 5th and 6th grade (year beginning) but look a bit tough for my 4th graders. Thanks, Dianne
I use them with my 4th graders. They are tough at the beginning of the year, but the students really pick on it quickly. Each time we go over them I teach them how to do one of the problems they
haven't learned yet. By the time it comes to formally teaching many if these skills, it takes a short review.
Buyer re: Daily Math Review Week 3
Week 1 actually is week 3. Can you send me week one please? I've bought it already.
Send me an email to scotoole@cox.net and I will send it to you
I ordered Charlotte's Web Chapter Quizzes on June 25th. I cleared my credit card, but I haven't received the order. I'm just checking on it.
Thank you,
Sheri cavender
Charlotte's Web Chapter Quizzes are a digital download. After you purchase the product hover over the MY TPT link at the top of the page. Under the Buy category click on My Purchases. On the next
page you should be able to download it. If you still cannot do it after following these instructions send me an email to scotoole@cox.net and I will send them to you.
Southeast Classroom Creations
(TpT Seller) re: Input - Output Function Machine Math Smartboard Lesson
How do you upload thumbnail pictures of your smartboard lessons to the store? I was trying to do this on mine. Your store looks great!
Thanks Southeast Classroom creations! Send me an email to scotoole@cox.net and I will give you instructions.
Melissa Hebert
(TpT Seller) re: Subtraction (Regrouping) Smartboard Math Lesson
I installed the unzipping software and still cannot open subtraction with regrouping smartboard file. Please advise.
Send me an email to scotoole@cox.net and I will send it to you.
I have purchased this item from you, but I am unable to use the file. I am reading the comments below, and will try the suggestions you gave others. Does something also need to be e-mailed to me?
To use a SMARTboard lesson you need to have Notebook software on your computer. I can email you the direct link to get this free software if you would like, Send me an email to scotoole@cox.net
Kay E. re: Venn Diagram Smartboard Math Lesson
Is this compatible with Activboard?
I am told it is. Send me an email to scotoole@cox.net and I will send you a SMART notebook lesson that you can try before you purchase anything.
Buyer re: Measurement Estimation Grams and Kilograms Metric Smartboard Lessons
I was not able to open this file on my computer. My computer said it didn't recognize the file that created it. Is there any suggestions to help me get it to open.
You need to have notebook software on your computer. You can download a free viewer version of the software at smarttech downloads. If you cannot find this send me an email to scotoole@cox.net and I
will send you the link directly.
Owlways Learning More
(TpT Seller) re: Vertex Edge Graphs Grade 4 Smartboard Math Lesson - Lessons
Hello! Do you have your vertex-edge smartboard lessons available in standard ppt? I need grade levels 2, 3 and 4. Thank you!
No I only have them as smartboard lessons. I could send them as a PDF but they would not be interactive.
Annette H. re: Measurement Estimation US Customary Standards Pack Smartboard Lessons
I had no problem down loading and opening the file, but then unzipped one file says DCStore or something like that and the other 3 wont open in smartboard-message said try saving it in a version 9 or
something...which I have no idea how to do and th DS store file wont open at all. does this mean I dont have the software I should have had before I bought this-this has happened 2 other times with
purchases (the DS store file) and I just gave up, but I really want this one to work. Any help would be great!
Send me an email to scotoole@cox.net and I will send you these lessons directly. Sometimes zipped files do not unzip properly.
Cheryl M. re: Valentines Day Wheel of Fortune Smartboard Lesson
I was unable to use the Valentine's Day game Wheel of Fortune. The error message I'm getting is : it is not a supported file type or the file has been damaged. (ex. email attachment). Can you try to
send it another way?
This lesson is a Smartboard lesson that needs Notebook software to open. Do you have Notebook software on your computer? Send me an email to scotoole@cox.net and I will be able to better help you
with this issue. When you respond to the above question, also let me know whether you have a PC or a Mac computer.
Buyer re: Easter Smartboard Fact or Opinion Language Arts Lesson
I purchased this and am not able to print it out. Any suggestions??
It is a smartboard lesson. Send me an email to scotoole@cox.net and I can send you a PDF version.
Buyer re: Daily Math Review Part 2 (Weeks 13-24)
Any chance you're doing a LA review like the math :)
Not at the present time.
Buyer re: Line Plots Range, Median, and Mode Math Smartboard Lesson
Hi there! I recently had a type of smartboard installed in my classroom. I purchased your line plot activity, but it won't open. Any suggestions?
You need to SMART notebook software on your computer. You can download a free notebook viewer but googling smart tech downloads. If you have problems send me an email to scotoole@ cox.net and I will
send you a direct link. In the email make sure to tell me whether you have a PC or Mac
Laura A. re: Daily Math Review Part 1 (Weeks 1-12)
I'm not understanding the answer to #9 week 1 Day2. Could you please explain how you got 51?
#9 is the Range of the data in the Stem/Leaf plot.
The stem is like the tens column and the leaf is the same as the ones column.
We find the range by taking the smallest number away from the largest number.
Hope this helps.
Kathleen D. re: Symmetry - Smartboard Math Lesson
I just purchased this activity but my computer is not letting me open the file and is saying it is corrupt. Do you have an suggestions on how to get this working?
Send me an email to scotoole@cox.net and I will send you a new copy of that file.
Amy McCall
(TpT Seller) re: Daily Math Review Part 2 (Weeks 13-24)
I bought some of your daily reviews. I am interested in making some of my own to go along with my state standards. Where did you get your math manipulative pictures?
I made the pictures in an image making software that I purchased.
Buyer re: Points, Lines, Rays, Segments Geometry Math Smartboard Lesson
I wrote an email last week relating to not being able to open the powerpoint I purchased covering points, lines, rays, segments smartboard geometry. I bought this because I have a mimeo
smartboard-however, I am unable to open this unit. Can you help me, or am I able to get a full refund back
You purchased a Smartboard Lesson. In order to open and use it you would have to have Notebook software on your computer. To get a refund you have to click on the Contact Us link at the bottom of any
page on TeachersPayTeachers. They will then send you a refund for this item. I am sorry you didn't have the software to open it. You can download a free Notebook viewer by going to the link below
for a PC
For a Mac
Buyer re: Comparing Fractions Unlike Denominators Smartboard Math Lesson
Should I be able to click on the pictures of the different jobs and have it go to another page??
I have to use the arrows to get to every new page, is that right?
The pictures of the jobs do not link to another page. It was just to show the students where they might need to use this skill in the future.
Buyer re: Points, Lines, Rays, Segments Geometry Math Smartboard Lesson
I purchased points, lines, rays, segments Geometry Math smartboard lesson for 3.00 dollars and it won't open for me to use. Please help!!!! Thanks, Tara Noonan
Please send me an email to scotoole@cox.net so I can help you. When you do could you answer these questions. Do you have Notebook software (used to open SmartBoard lessons on your computer? What
happens when you try to open this lesson? Do you get an error message? If so, what does the message say?
Buyer re: Valentine's Day Smartboard Party Games Lesson - Lessons
Good morninG!
I purchased this yesterday... and the letters aren't in the spaces correctly on my smart board. Is there something I need to do to be able to use this?
Please let me know
Please send me an email to scotoole@cox.net and tell me what version of notebook software you are using. Version 11 and newer is distorting the notebook lesson.
Lori L C. re: Daily Math Review Part 1 (Weeks 1-12)
I am having trouble getting this to download to print. Any suggestions?
Reply to me at scotoole@cox.net and I will try to help you resolve this issue.
Natalie R. re: Measurement Estimation Grams and Kilograms Metric Smartboard Lessons
Can you use this lesson on a Promethean board as well?
I have been told that you can open these lesson with the active inspire software. If it were me I would check with my district tech person before hand.
Beth K. re: Input - Output Function Machine Math Smartboard Lesson
This is my first time to order a Smartboard Lesson. I was able to download it but when I try to open it, I cannot. It states "no available application can open it." What application do I use? I tried
to open through Safari, Firefox, and Chome to no avail!
I hope I will be able to use this! It would make for a great lesson! bkoryzno aka EastaEagle
You need to have software on your computer to open the lesson. It is a software called Notebook software. If you email me at scotoole@cox.net I will give you the link to download this free software.
In the email please let me know whether you have a Mac or PC computer.
Buyer re: Area and Perimeter Smartboard Math Lesson
New to this TPT. If I download this to my home computer can I use it at school or should I purchase from my school e-mail and or computer? Thanks.
You can download it to any computer once you purchase it. You can download it on every computer you have. Go to teacherspayteachers and log in. Click on the My Purchases tab at the top. From that
page you should be able to download it.
Laura P. re: Figurative Language Smartboard Language Arts Lesson
When I open the file it crashes. I have tried to get it through email as well and it does the same thing. Do you have any other suggestions?
Send me an email to scotoole@cox.net and I will send you this file directly.
Krista D. re: Polar Express Smartboard "Race to the North Pole" Jeopardy lesson
Is there a way to open this file for use on my activeboard?
I tried using save as but it will not work.
I am told that you can import them into the active inspire software.
Deb on the Web
(TpT Seller) re: Wheel of Christmas Smartboard Wheel of Fortune Type Lesson
I am having trouble figuring out how to play the game with the students. I have never used a Wheel of Fortune format game before. I know that they click on the spinner, but what do they do next? When
I click on the space, the correct letter will appear, but I can't figure out how the students select the letter they want to guess with.
It is a cute game, and I am hoping I will be able to use it.
Thank you,
Debbie Shuler
The teacher will know the answer to the puzzle before play begins. The students spin the spinner, guess a letter that they believe is in the puzzle. If they guess correctly, the teacher reveals those
letters and the students earn the point value that they landed on for each correct letter. For example, if the kids landed on $200 and guessed the letter r and there were 2 letter r's in the puzzle
they would earn $400 for their team. The teams can give up their turn spinning to guess the puzzle at any time. Only the team that correctly guesses the answer banks their money for their team. The
other team loses their money and you begin a new puzzle.
Buyer re: Polar Express Smartboard "Race to the North Pole" Jeopardy lesson
Is it possible to get a list of your questions since I do not have smart board access? We have eno at our school.
Send me an email to scotoole@cox.net and I will send you a copy of the questions.
I have taught for 15 years. Six years ago I received an electronic smartboard. I have since been developing all of my lessons with the smartboard software. I have become very adept at designing
lessons and train new teachers how to use the smartboards.
The main focus of our district is student engagement. All of my lessons I design are focused on keeping the students engaged. I use animated clipart, colors, patterns, weblinks, pictures, and videos
to make the lessons both interesting and exciting for the kids. I want my students excited to come to school each and every day!!
I have won the CCS Curriculum Competition in 2008 and 2009 for grades 4-6 (Arizona competition for designing lesson using smartboard software).
I received a Bachelors Degree in Education from Arizona State University. I then proceeded to get a Masters Degree in Elementary Education from Northern Arizona University. I followed that up by
getting another Masters Degree in Educational Leadership from Northern Arizona University.
I have been happily married for 10 years now. We have 5 wonderful children. Our natural daughter is now 14 years old. We have 4 adopted Korean children ages 6, 4, 3, and 1. I enjoy spending time with
all of them and we hope to adopt more.
PreK, Kindergarten, 1^st, 2^nd, 3^rd, 4^th, 5^th, 6^th, 7^th, 8^th, 9^th, 10^th, 11^th, 12^th
English Language Arts, Balanced Literacy, Reading, Math, Applied Math, Arithmetic, Basic Operations, Fractions, Geometry, Graphing, Measurement, Numbers, Order of Operations, Other (Math), EFL - ESL
- ELD, Other (Specialty), ELA Test Prep, Math Test Prep, Other (ELA), For All Subject Areas, Literature, Basic Math, Short Stories
|
{"url":"http://www.teacherspayteachers.com/Store/Smartboard-Smarty?breadcrumb=2&subject=17","timestamp":"2014-04-16T13:19:27Z","content_type":null,"content_length":"376305","record_id":"<urn:uuid:5b7c8b9d-afed-4317-a3bb-ecd47a37706b>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00228-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Horizons Math Algebra (Grade 8) Complete Set
The student workbook includes a set of lesson review boxes accompanied by questions that provide practice for previously taught concepts and the concepts taught in the lesson. "Exploring Math
Through..." sections help students understand how ordinary people use algebraic math, providing concrete examples of how math is useful in life. Students will need to supply paper to work the
problems. 333 pages, softcover.
The teacher's guide includes the main concepts, lesson objectives, materials needed, teaching tips, the assignment for the day, and the reduced student pages with the correct answers supplied. Each
lesson will take approximately 45-60 minutes, and is designed to be teacher-directed. Softcover.
Arranged by assignment category, the test & resource book includes 16 tests, 4 quarter tests, lesson worksheets, formula strips, nets supplements, and algebra tiles. Tests are designed to be given
after all the lesson material is presented, generally after 10 days. Softcover.
|
{"url":"http://reviews.christianbook.com/2016/740325/alpha-omega-publications-horizons-math-algebra-grade-8-complete-set-reviews/reviews.htm","timestamp":"2014-04-17T19:49:51Z","content_type":null,"content_length":"56241","record_id":"<urn:uuid:eea4fcd9-0d30-494b-9f14-4835cc2223d4>","cc-path":"CC-MAIN-2014-15/segments/1397609530895.48/warc/CC-MAIN-20140416005210-00040-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Ohm's Law
If you want to know the current of a 9V battery, then you would say:
I = V / R bu if there is no resistance than wouldn't the current be 9?
But my question is, is it 9amps or 9milliamps!
Please tell me what I'm doing wrong!
Rock Robotics
|
{"url":"http://www.societyofrobots.com/robotforum/index.php?topic=10595.0","timestamp":"2014-04-17T10:36:58Z","content_type":null,"content_length":"149694","record_id":"<urn:uuid:a0337c4f-2e95-433a-aa03-db1d5b04624c>","cc-path":"CC-MAIN-2014-15/segments/1398223206147.1/warc/CC-MAIN-20140423032006-00311-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Quantum chaos on locally symmetric spaces
An Isaac Newton Institute Workshop
Random Matrix Theory and Arithmetic Aspects of Quantum Chaos
Quantum chaos on locally symmetric spaces
Authors: Akshay Venkatesh (MIT), Lior Silberman (Princeton )
I will discuss a proof of QUE, w.r.t. the basis of Hecke eigenforms, for certain locally symmetric spaces of rank > 1.
I will discuss a proof of QUE, w.r.t. the basis of Hecke eigenforms, for certain locally symmetric spaces of rank > 1.
|
{"url":"http://www.newton.ac.uk/programmes/RMA/Abstract4/venkatesh.html","timestamp":"2014-04-18T14:13:37Z","content_type":null,"content_length":"2222","record_id":"<urn:uuid:f551d97d-a595-499d-8fd9-5f56fa33bfd2>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00660-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Entanglement recycling makes teleportation more practical
(Phys.org)—Working in the exotic-sounding field of quantum teleportation, physicists are trying to make it easier to transmit quantum information in the form of qubits from one location to another
without having the qubits actually travel through the intervening space. One challenge in quantum teleportation protocols lies in producing the enormous amount of entanglement required to send each
qubit, a requirement that makes teleportation impractical for real applications. To overcome this problem, the scientists have proposed the idea of entanglement recycling, where the entanglement
needed to send a qubit can be reused to send other qubits, greatly reducing the amount of entanglement required.
The physicists, Sergii Strelchuk at the University of Cambridge in the UK, Michał Horodecki at the University of Gdansk in Poland, and Jonathan Oppenheim of the University of Cambridge and the
University College of London, have published their study on entanglement recycling in a recent issue of Physical Review Letters.
The physicists looked at a recently introduced teleportation protocol called port-based teleportation. Like all quantum teleportation protocols, port-based teleportation requires that the sender,
Alice, and the receiver, Bob, share an entangled state, called a resource state, before sending qubits. To teleport a qubit, Alice first performs a measurement on the qubit's state and her part of
the entangled resource state, destroying both qubits in the process. She then communicates her measurement outcome as classical information to Bob. By carrying out a certain operation, Bob can
transform his resource state into the teleported state from Alice.
In port-based teleportation, Bob's operation involves using Alice's measurement outcome to discard all of his ports except for the one indicated by the measurement outcome. That port contains the
teleported state. However, this process destroys the entanglement between Alice and Bob, so they have to generate a new entangled resource state every time they teleport a qubit. Such a large amount
of entanglement isn't suitable for practical purposes.
Before the physicists could show how entanglement recycling might help, they first had to develop a generalized teleportation protocol that bridges the two main types of protocols. As they explained,
all currently known teleportation protocols can be classified into two groups (the symmetric permutation group and the Pauli group). The two groups differ in how they incorporate entanglement: the
first group of protocols requires an infinitely large amount of shared entanglement to teleport a state, while the second group uses less entanglement but requires that Bob apply a correction after
receiving Alice's measurement, which isn't necessary in the first group.
The physicists developed a generalized protocol by modifying the port-based teleportation protocol so that it shared some properties of both groups. The scientists could then show how to recycle the
entanglement produced in the generalized protocol by keeping the resource state and reusing it for subsequent teleportations. They showed that the original resource state doesn't degrade very much
and retains a sufficiently high fidelity to be reused.
The scientists incorporated the concept of entanglement recycling into two new protocols based on the generalized port-based teleportation protocol. In one protocol, qubit states were teleported
sequentially, one at a time, using the same resource state. In the second protocol, multiple qubit states were teleported simultaneously, using the recycled resource state in a different way. In the
simultaneous case, Alice teleports each qubit to a unique port, and makes a measurement on all of these states and the resource state in one round, which she then sends to Bob. In both protocols, the
resource state degrades proportionally to the number of qubits teleported, whether sequentially or simultaneously, placing a limit on the total number of qubits that can be teleported by one resource
The scientists hope that, by reducing the amount of entanglement required for quantum teleportation protocols, entanglement recycling could open the doors to implementing teleportation in quantum
computing applications.
More information: Sergii Strelchuk, Michał Horodecki, and Jonathan Oppenheim. "Generalized Teleportation and Entanglement Recycling." PRL 110, 010505 (2013). DOI: 10.1103/PhysRevLett.110.010505
(Arxiv: http://arxiv.org/abs/1209.2683 )
1.5 / 5 (8) Jan 17, 2013
Not again, the entanglement mambo-jumbo right out of Star Trek.
At the same time, on Earth: "How Wineland & Haroche Stole My Discovery (and got 2012 PHYSICS NOBEL PRIZE for it...)"
3.7 / 5 (6) Jan 17, 2013
Almost as good as the exact same article that was posted yesterday: http://phys.org/n...ion.html
Ivona Poyntz
3 / 5 (4) Jan 17, 2013
Fascinating stuff
3 / 5 (3) Jan 17, 2013
Hey guys, can you help? I know there is a lot on the net about this topic and I know...correction...my books only give a little serious info about entanglement. Can anyone suggest a reliable and
readable account for the laymen which has maths as well. Thanks
2.3 / 5 (6) Jan 21, 2013
What a hilarious site, look at the "rating system" for comments. You can give little stars to posters. (S)he loves me, (s)he loves me not...
Then if someone rates my post high or low, should I return the favor? As in the "peer review" system. Seriously, what a sicko you have to be to make a rating system even for comments.
It's lab Nazi control-freaks thinking science is about democracy,
when it isn't.
In the meantime, on planet Earth: "How Wineland & Haroche Stole My Discovery (and got 2012 PHYSICS NOBEL PRIZE for it...)" http://sites.goog...ci#Nobel
3 / 5 (2) Jan 21, 2013
Hey guys, can you help? I know there is a lot on the net about this topic and I know...correction...my books only give a little serious info about entanglement
damnit...just realized the link I wanted to post doesn't work because it has plus signs in it.
I'll send you a PM
1 / 5 (2) Jan 21, 2013
Just about every site I read now has ratings for the comments. Usually it's just a simple up/down or even only an uprating available; but they almost all have it.
2 / 5 (4) Jan 21, 2013
Just about every site I read now has ratings for the comments. Usually it's just a simple up/down or even only an uprating available; but they almost all have it.
Concepts from democracy such as "most people" don't hold in science, be it in research or when discussing research.
|
{"url":"http://phys.org/news/2013-01-entanglement-recycling-teleportation.html","timestamp":"2014-04-21T12:53:55Z","content_type":null,"content_length":"81532","record_id":"<urn:uuid:e4ab0ae6-c5fa-4d58-83c2-343e8074b716>","cc-path":"CC-MAIN-2014-15/segments/1397609539776.45/warc/CC-MAIN-20140416005219-00294-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Back to the Future: A Time Machine?
"Einstein's General Theory of Relativity" is the theory of gravity. It connects gravity with space and time," said Mallett. "That's where it starts."
To break down the complicated theories of how time and space can be distorted by lasers, Mallet uses a breakfast beverage.
"Imagine a cup of coffee in front of you. Think of that coffee in your cup as being like empty space. Think of your spoon as being a circulating light beam. If you stir, it starts swirling. That's
what the light beam is doing to space - it's causing empty space when the coffee swirls around," said Mallett.
"If I drop in a sugar cube, the coffee will drag the sugar cube around. In the case of empty space, it's a neutron - part of every atom. If we put it in empty space, the space will drag the neutron
around the same way the coffee drags the sugar cube around.
No comments:
|
{"url":"http://einsteins-relativity-theory.blogspot.com/2009/02/back-to-future-time-machine.html","timestamp":"2014-04-17T16:12:36Z","content_type":null,"content_length":"31204","record_id":"<urn:uuid:521ff897-1d9b-4197-b779-e53d538c4eb2>","cc-path":"CC-MAIN-2014-15/segments/1398223203235.2/warc/CC-MAIN-20140423032003-00182-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Posts by
Total # Posts: 66
A store has found that 60% of their customers spend more than $75 each visit. What is the probability that the next two customers will spend more than $75.
A+2B--->2C Kc=2.27 2c--->D Kc=0.144 Calculate the equilibrium constant for the reaction D--->A+2B
Algebra 1
what is the standard form equation of the line through (7,-4)wiyh the slope of 3/4
A dolphin jumps with an initial velocity of 13.0at an angle of 44.0 above the horizontal. The dolphin passes through the center of a hoop before returning to the water. If the dolphin is moving
horizontally when it goes through the hoop, how high above the water is the center ...
A dolphin jumps with an initial velocity of 13.0at an angle of 44.0 above the horizontal. The dolphin passes through the center of a hoop before returning to the water. If the dolphin is moving
horizontally when it goes through the hoop, how high above the water is the center ...
5-6th grade Math
Follow instructions: 1) Write the word SOLAR. 2) Add the 5th letter of the alphabetto the right of the last letter from the left. 3) Remove the 19th letter of the alphabet and replace them with the
14th letter of the Alphabet. 4) Remove all vowels EXCEPT the 2nd vowel of the a...
Managerial Finance
The cost of capital for Goff Computers,Inc
idk. it's just a homework question to see if we understand the subject i guess.
The annual nominal rate of interest on a bank certificate of deposit is 12%. What would be the effect of an inflation rate of 13% and why?
How many edges are in K15, the complete graph with 15 vertices. Any help would be appreciated, thanks.
Consider the graph with V = [A, B, C, D, E] and E = [AX, AY, AZ, BB, CX, CY, CZ, YY]. Without drawing a picture of the graph a. List all the vertices adjacent to Y b. List all the edges adjacent to
c. Find the degree of Y d. Find the sum of the degrees of the vertices
Three partners are dividing a plot of land among themselves using the lone-divider method. After the divider D divides the land into three shares s1, s2 and s3, the choosers C1 and C2 submit their
bids for these shares. Suppose that the chooser s bids are C1 = {s3}: C2 = ...
Jim and Paula jointly purchase the half sausage-half veggie pizza for $21. Suppose that Paula values veggie pizza three times as much as she values sausage pizza. Find the dollar value to Paula of
each of the following pieces of pizza: a. The veggie half of the pizza b. The sa...
Consider the weighted voting system [75: 31, 29, 23, 16, 8, 7]. Find each: a. The total number of players b. The total number of votes c. The weight of P3 d. The minimum percentage of the votes
needed to pass a motion 9rounded to the next whole percent) This is for my general ...
Using the Borda Count Method, find the winner of the election A has 3106 votes B has 3134 votes C has 2816 votes D has 3024 votes
Using the Borda Count Method, find the winner of the election A has 3106 votes B has 3134 votes C has 2816 votes D has 3024 votes
If you buy a $1395 savings bond that pays 4.3% annual simple interest. Determine the value of the bond after 7 years.
Suppose $3500 is invested in an account with an APR of 11% compounded monthly. Find the future value of the account in 3 years.
Suppose you buy a $1395 savings bond that pays 4.3% annual simple interest. Determine the value of the bond after 7 years.
A tuition bill was $4360. If she paid $6052 what's the percentage increase on her tuition. (Could you please show me how you get this answer so I'll have an idea on how to do the others that are
A vending machine has marbles of 4 different colors: Red, blue, purple,and yellow. The marbles are well mixed. You drop a quarter into the machine and you get 2 random marbles. Write out the sample
space for the experiment. The sample space should be N = 10.
A band has 25 new songs, how many different ways are there for the group to record a CD consisting of 12 songs chosen from 25 new songs. ( The order in which the songs appear on the CD is relevant).
Find the odds of an event E with Pr(E)=5/8
There is a total of 25, how many different sets of 12 can you get from that. The order is not relevant.
Math(differential equations)
An 8 lb weight stretches a spring 8 feet. The weight is moving through a surrounding medium that has a damping effect equal to the instantaneous velocity of the weight. Initially the weight is
displaced 4 feet below the equilibrium position with a downward velocity of 2 ft/sec...
The area of a trapezoid is 80 square units. If its height is 8 units, find the length of its median.
Richard is flying a kite. The kite string makes an angle 57 with the ground. If Richard is standing 100 feet from the point on the ground directly below the kite, find the length of the kite string.
Pre Calc
ln (x^2 +x-12)=3ln2
7th grade math
this is an equation in slope intercept form: y=mx+b An equation has at least one variable and you want to find the product of it here is an example : 8=2x+4 4=2x 2=x
English Comp.
Is this a decent thesis statement? Developing cultural competence skills for students in Schools of Pharmacy requires hands on learning experience that Service learning offers.
English Comp.
I'm hoping this is a better thesis statement, please let me know. "Although most Schools of Pharmacy have added cultural competency training to their curriculum, not all have added the Service
learning experience, which should be a requirement for all schools which wo...
English Comp.
Actually I did I'm just having a hard time coming up with a thesis. Everything seems to be a fact.
English Comp.
Is this a decent thesis statement or is it too long? The imbalance in racial diversity in the field of pharmacy brings a need for more in-depth cultural competency training programs in colleges of
pharmacy so they can better understand, relate, and provide appropriate health c...
English Comp.
I need help writing a sentence outline for my paper. My thesis statement is: "There is a noticeable imbalance in racial diversity in the pharmacy field, prompting needed reforms in Schools of
Pharmacy training programs of cultural competence". I also would like to kn...
Organic Chem I Lab
I had to react 1-propanol with NaBr and H2SO4. I refluxed for one hour and when attempting to separate out the product (product was supposed to be 1-bromopropane) via sep funnel either could no
longer distinguish between the organis layer and the aqueous layer, or there was ne...
Organic Chem I Lab URGENT
PLEASE assist
Organic Chem I Lab
I would actually appreciate some speculation. I have to write an article over this reaction, and if it failed, then why. I have some theories such as I reacted everthing at too high of a temperature
and lost product, poor leaving groups involved... yadda yadda. I would like to...
Organic Chem I Lab
I added about 5g of NaBr, and used about 20ml of sulfuric acid. I did this reaction four times, making sure my 1-propanol was my limiting reagent. I'm not sure where I could have gone wrong. It is a
possibility that during the reflux stage the temp got too high and I may h...
Organic Chem I Lab
I had to react 1-propanol with NaBr and H2SO4. I refluxed for one hour and when attempting to separate out the product (product was supposed to be 1-bromopropane) via sep funnel either could no
longer distinguish between the organis layer and the aqueous layer, or there was ne...
Algebra I
Pump A, working alone, can fill a tank in 3 hours, and pump B can fill the same tank in 2 hrs. If the tank is empty to start and pump A is switched on for one hour, after which pump B is also
switched on and the two work together, how many minutes will pump B have been working...
why does the pH of an aqueous solution change after HCl solution has been added
College Algebra
Can we try this again? I already posted this question and thank you for responding but I just don't understand. I am posting it again because I need to clarify that I have formulas my professor has
us use but since I've never come across a problem like this one before ...
College Algebra
When a certain drug is taken orally, the concentration of the drug in the patients bloodstream after t minutes is given by C(t)=0.06t-0.0002t^2, where 0 is less than or equal to "t" which is less
than or equal to 240 and the concentration is measured by mg/L. When is...
College Algebra
In class we were given formulas such as: f(x)= ax^2 + bx+ c x= -b/2a and then an f(x) or whatever the letters are being used in the word problem where you plug in the answer for x back into the
original equation. I just don't understand what role the 0 ≤ t ≤ 24...
College Algebra
Quadratic Function problem: When a certain drug is taken orally, the concentration of the drug in the patients bloodstream after t minutes is given by C(t)=0.06t-0.0002t^2, where 0 ≤ t ≤ 240 and the
concentration is measured by mg/L. When is the maximum serum conce...
social studies,word unscrambler
ponctuation, mechanisme, and writing
I just have to say that I'm doing this same exact assignment right now haha! I was searching for an answer to a question but it looks like i found all of them! woohoo! and actually a few of those
answers are wrong haha!
I have a question: My textbook says "The surface of a right prism is 224 sq. feet, and the length of a side of the square base is one-third the height. What are the dimensions of the prism?" How do I
find out the dimensions when I don't know the height or the len...
there are 100 bushels of corn to be distributed out to 100 people. the men get 3 bushels of corn each, the woman get 2 bushels each, and the children get 1/2 bushels each. how can the bushels be
distributed evenly among the people? (so that the numbers add up to 100 people and...
I dont think that is possible? Is this your own question, or one from a textbook? If a textbook, read the pages in the chapter of the question. Otherwise, your guess is as good as mine, becasue I
dont think that that is possible.
Physics Lab
Can someone please help me? I need some ideas, I am totally stuck...I posted as the same name as I am now! Please! thanks
yes you use the equation that was posted in the answer before mine...there you go!
what about caloriometry? Use a caloriometer and find the temperatures with a digital thermometer. You need to take the water and find the initial temperature, then add the sodium and then mix the
sodium in there until it is fully dissolved and then measure the new temperature....
Physics Lab
those would work....friction between the dynamics cart and the wooden board its beind dropped at. How would this affect the lab though..would it make the dynamics cart not move as fast or drop as
quickly? And how could that be avoided (doesnt have to be real, just something th...
Physics Lab
Hey there, I did a lab in which a wooden board was taken and then a dynamics cart was dropped down it. Four markings of different incriments were marked on the board, so that when we dropped it, we
could record the time it took for the cart to reach a certain distance. We did ...
I am pretty sure that 1 is right, but 2 should be 3:6 whether or not you can reduce I don't remember. For #3 it should be so again I am not sure if you can reduce to 1:3 cause that is not really the
probability! If you can reduce, then nice work!
how do you translate the number 501,289 into the spanish written form? (five hundred one thousand two hundred eighty nine)in spanish?
A uniform plank of length 5.6 m and weight 203 N rests horizontally on two supports, with 1.1 m of the plank hanging over the right support (see the drawing). To what distance x can a person who
weighs 440 N walk on the overhanging part of the plank before it just begins to ti...
One end of a wire is attached to a ceiling, and a solid brass ball is tied to the lower end. The tension in the wire is 160 N. What is the radius of the brass ball? Well, the tension is from the
brass ball, and you are given the weight as 160N. Weight=densitybrass*g*(volume) F...
A solid cylindrical disk has a radius of 0.16 m. It is mounted to an axle that is perpendicular to the circular end of the disk at its center. When a 50 N force is applied tangentially to the disk,
perpendicular to the radius, the disk acquires an angular acceleration of 120 r...
Two thin rectangular sheets (0.06 m 0.52 m) are identical. In the first sheet the axis of rotation lies along the 0.06 m side, and in the second it lies along the 0.52 m side. The same torque is
applied to each sheet. The first sheet, starting from rest, reaches its final angu...
AP Calc BC
Thank you!!
The function f has derivatives of all orders for all real numbers x. Assume f(2)=-3, f'(2)=5, f''(2)=3, and f'''(2)=-8. The fourth derivative of f satisfies the inequality, the (absolute value of
f''''(x)) <=3 for all x in the closed ...
Let f be a function that has derivatives of all orders for all real numbers. Assume f(0)=5, f'(0)=-3, f''(0)=1, and f'''(0)=4. Write the third-degree Taylor polynomial for h, where h(x) = integral of
f(t)dt from 0 to x, about x=0 for this part, I got 5x...
|
{"url":"http://www.jiskha.com/members/profile/posts.cgi?name=Meredith","timestamp":"2014-04-16T10:38:51Z","content_type":null,"content_length":"23035","record_id":"<urn:uuid:1e6a8d0e-a7c2-4f14-89a2-459e94bf10d0>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00577-ip-10-147-4-33.ec2.internal.warc.gz"}
|
A class of mass transport models: Factorised steady states and condensation in real space: Lecture II
Seminar Room 1, Newton Institute
The traditional Bose-Einstein condensation in an ideal quantum Bose gas occurs in momentum space, when a macroscopically large number of bosons condense onto the ground state. It is becoming
increasingly clear over the last decade that condensation can also happen in real space (and even in one dimension) in the steady state of a broad class of physical systems. These are classical
systems, generally lack a Hamiltonian and are defined by their microscopic kinetic processes. Examples include traffic jams on a highway, island formation on growing crystals and many other systems.
In this lecture, I'll discuss in detail two simple models namely the Zero-range process and the Chipping model that exhbits condensation in real space. Lecture-II
I'll introduce a generalized mass transport model that includes in iteself, as specail cases, the Zero-range process, the Chipping model and the Random Average process. We will derive a necessary and
sufficient condition, in one dimension, for the model to have a factorised steady state. Generalization to arbitrary graphs will be mentioned also.
We will discuss, in the context of the mass transport model, the phenomenon of condensation. In particular we will address three basic isuues: (1) WHEN does such a condensation occur (the criterion)
(2) HOW does the condensation happen (the mechanism) and (3) WHAT does the condensate look like (the nature of fluctuations and lifetime of the condensate etc.)? We will see how these issues can be
resolved analytically in the mass transport model.
|
{"url":"http://www.newton.ac.uk/programmes/PDS/seminars/2006032911301.html","timestamp":"2014-04-20T10:48:04Z","content_type":null,"content_length":"5916","record_id":"<urn:uuid:0e464324-bb98-4e84-b18e-de409a380698>","cc-path":"CC-MAIN-2014-15/segments/1397609538423.10/warc/CC-MAIN-20140416005218-00200-ip-10-147-4-33.ec2.internal.warc.gz"}
|
A-level Mathematics/MEI/NM/Solving equations
The methods in this section are for solving numerically equations of the form f(x) = 0 that cannot be solved analytically, or are too difficult to solve analytically.
Change of sign methodsEdit
If we have a continuous function f(x), and two x values a and b, then provided f(a) and f(b) have opposite signs, we know that the interval [a, b] contains a root of the equation f(x) = 0 (somewhere
between a and b must be a value where f(x) = 0).
Graphically, if the curve y = f(x) is above the x-axis at one point and below it at another, it must have crossed the x-axis somewhere in between. Therefore, a root of f(x) = 0 lies somewhere in
between the two points.
Change of sign methods use this information to progressively shrink an interval containing a change of sign, in order to find a root.
Note that:
• The function must be continuous within the interval for the above to hold true
• There may be more than one root in the interval. For example, f(x)=x^3-x, solving f(x)=0 . The interval [-2, 2] has a change of sign and contains three roots: x=-1,x=0,x=1
• Repeated roots do not cause a change of sign. For example, solving f(x)=0 where f(x)=(x -1)^2 . f(x) will evaluate as positive at the endpoints of any interval, yet there is a repeated root at x=
Bisection is a change of sign method. It requires an initial interval containing a change of sign. On each step (called an iteration), the bisection method involves doing the following:
1. Bisect (divide into 2 equal parts) an interval in which a root is known to lie, hence giving two new intervals.
2. Evaluate the function at the endpoints of the two new intervals.
3. A change of sign indicates (provided the function is continuous) that there is a root in that interval. Hence deduce that a root lies in the interval which has a change of sign of the evaluation
of the function between its endpoints.
4. Take the interval containing a root as your new starting interval in the next iteration.
Given an interval [a, b], let x be the new estimate of the root.
$x = \frac{a+b}{2}$
The function is then evaluated at a, b and x, and the interval containing a sign change - either [a, x] or [x, b] - is selected as the new interval.
[graph, example]
• Bisection always converges - it will always find a root in the given starting interval.
• It carries a definite statement of the bounds in which the result must lie. Numerical work is of much more value if you know how accurate the answer you obtain is.
• It has a fixed rate of convergence - the interval halves in size on each iteration. Under certain conditions, other methods may converge more slowly.
• It requires a starting interval containing a change of sign. Therefore it cannot find repeated roots.
• It has a fixed rate of convergence, which can be much slower than other methods, requiring more iterations to find the root to a given degree of precision.
False positionEdit
False position expands on bisection by using information about the value of the function at the endpoints to make a more intelligent estimate of the root. If the value of f(x) at one endpoint is
closer to zero than at the other endpoint, then you would expect the root to lie closer to that endpoint. In many cases, this will lead to faster convergence on the root than bisection, which always
estimates that the root is directly in the middle of the interval.
The procedure is nearly exactly the same as bisection. However, given an interval [a,b], the new estimate of the root, x, is instead:
$x = \frac {af(b) - bf(a)} {f(b) - f(a)}$
This can be seen as just a weighted average of a and b, with the weightings being the value of f(x) at the other endpoint. This is because we want the endpoint with a smaller f(x) to have a larger
weight, so that the estimate is closer to it. We therefore use the larger value of f(x) as the weighting for the endpoint which produces a smaller f(x).
The false position method is equivalent to constructing a line through the points on the curve at x=a and x=b, and using the intersection of this line with the x-axis as the new estimate.
[graph, example, advantages, disadvantages]
Other methodsEdit
Iterative method notationEdit
x[r] means the value of x after r iterations. For example, the initial estimate of the root would be x[0], and the estimate obtained by performing one iteration of the method on this would be x[1].
We write down iterative methods in the form x[r+1] (the next estimate of the root) in terms of x[r] (the current estimate of the root). For example, the fixed point iteration method is:
$x_{r+1} = g(x_r) \,\!$
If the new estimate is calculated from the previous two estimates, as in the case of the secant method, the new estimate of the root will be x[r+2], written in terms of x[r+1] and x[r].
This is similar in some ways to the false position method. It uses the co-ordinates of the previous two points on the curve to approximate the curve by a straight line. Like the false position
method, it uses the place where this line crosses the axis as the new estimate of the root.
$x_{r+2} = \frac { x_rf(x_{r+1}) - x_{r+1}f(x_r) } { f(x_{r+1}) - f(x_r) }$
[graph, example, advantages, disadvantages]
The Newton-Raphson method is based on obtaining a new estimate of the root on each iteration by approximating f by its tangent and finding where this tangent crosses the x-axis.
$x_{r+1} = x_r - \frac{f(x_r)}{f'(x_r)}$
[graph, example, advantages, disadvantages]
Fixed point iterationEdit
In this method, the equation $f(x)=0$ is rearranged into the form x = g(x). We then take an initial estimate of x as the starting value, and calculate a new estimate using g(x).
$x_{r+1} = g(x_r) \,\!$
If this method converges, then provided g is continuous it will converge to a fixed point of g (where the input is the same as the output, giving x = g(x) ). This value of x will also satisfy f(x) =
0, hence giving a root of the equation.
Fixed point iteration will not always converge. There are infinitely many rearrangements of f(x) = 0 into x = g(x). Some rearrangements will only converge given a starting value very close to the
root, and some will not converge at all. In all cases, it is the gradient of g that influences whether convergence occurs. Convergence will occur (given a suitable starting value), if when near the
$-1 < g'(x) < 1 \,\!$
Where g' is the gradient of g. Convergence will be fastest when g'(x) is close to 0.
[staircase + cobweb diagrams, examples, advantages, disadvantages]
Last modified on 21 June 2010, at 15:50
|
{"url":"http://en.m.wikibooks.org/wiki/A-level_Mathematics/MEI/NM/Solving_equations","timestamp":"2014-04-17T18:51:08Z","content_type":null,"content_length":"23738","record_id":"<urn:uuid:2c1ba058-42d4-4995-b550-26f3d3c7d461>","cc-path":"CC-MAIN-2014-15/segments/1397609530895.48/warc/CC-MAIN-20140416005210-00637-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Probability and statistics blog
Alright, I better start with an apology for the title of this post. I know, it’s really bad. But let’s get on to the good stuff, or, perhaps more accurately, the really frightening stuff. The plot
shown at the top of this post is a simulation of the martingale betting strategy. You’ll find code for it here. What is the martingale betting strategy? Imagine you go into a a mythical casino that
gives you perfectly fair odds on the toss of a mythically perfect coin. You can bet one dollar or a million. Heads you lose the amount you bet, tails you win that same amount. For your first bet, you
wager $1. If you win, great! Bet again with a dollar. If you lose, double your wager to $2. Then if you win the next time, you’ve still won $1 overall (lost $1 then won $2). In general, continue to
double your bet size until you get a win, then drop your bet size back down to a dollar. Because the probably of an infinite loosing streak is infinitely small, sooner or later you’ll make $1 off of
the sequence of bets. Sound good?
The catch (you knew there had to be a catch, right?) is that the longer you use the martingale strategy, the more likely you are to go broke, unless you have an infinitely large bankroll. Sooner or
later, a run of heads will wipe out your entire fortune. That’s what the plot at the beginning of this post shows. Our simulated gambler starts out with $1000, grows her pot up to over $12,000 (with
a few bumps along the way), then goes bankrupt during a single sequence of bad luck. In short, the martingale stagy worked spectacularly well for her (12-fold pot increase!) right up until the point
where it went spectacularly wrong (bankruptcy!).
Pretty scary, no? But I haven’t even gotten to the really scary part. In an interview with financial analyst Karl Denninger, Max Keiser explains the martingale betting strategy then comments:
“This seems to be what every Wall Street firm is doing. They are continuously loosing, but they are doubling down on every subsequent throw, because they know that they’ve got unlimited cash at their
disposal from The Fed… Is this a correct way to describe what’s going on?
Karl Denninger replies. “I think it probably is. I’m familiar with that strategy. It bankrupts everyone who tries it, eventually…. and that’s the problem. Everyone says that this is an infinite sum
of funds from the Federal Reserve, but in fact there is no such thing as an infinite amount of anything.”
Look at the plot at the beginning of this post again. Imagine the top banking executives in your country were paid huge bonuses based on their firm’s profits, and in the case of poor performance they
got to walk away with a generous severance package. Now imagine that these companies could borrow unlimited funds at 0% interest, and if things really blew up they expected the taxpayers to cover the
tab through bailouts or inflation. Do you think this might be a recipe for disaster?
|
{"url":"http://www.statisticsblog.com/category/games/","timestamp":"2014-04-21T13:08:59Z","content_type":null,"content_length":"93579","record_id":"<urn:uuid:c33dda6a-622f-40d9-a3ac-482ddd17187f>","cc-path":"CC-MAIN-2014-15/segments/1397609539776.45/warc/CC-MAIN-20140416005219-00536-ip-10-147-4-33.ec2.internal.warc.gz"}
|
[Numpy-discussion] Counting array elements
Alan G Isaac aisaac at american.edu
Fri Oct 22 19:17:02 CDT 2004
More new user feedback ...
On Fri, 22 Oct 2004, Chris Barker apparently wrote:
> Well, I think that the idea of a bool being different than
> an int is often useful.
Yes. E.g., applications to directed graphs.
> we can use some version of sum() to add up all the
> true values.
Unclear, but given the existence of sometrue,
it seems natural enough to let sum treat a Bool as an
integer. Products work naturally, of course.
> I would probably maintain
> the easy conversion of a Bool array to an Int array, for when you really
> do need to do math with them.
I would rephrase this.
Boolean arrays have a naturally different math,
which it would be nice to have supported.
It would also be nice to easily convert to Int,
when that representation captures the math needed.
> We'd want a compete set, many of which already exist. A few off the top
> of my head:
> sometrue
> alltrue
> numtrue
I'd just let sum handle numtrue.
> Maybe mirrors for false:
> somefalse, allfalse, numfalse
I'd just rely on alltrue, sometrue, and (size less sum) for these.
More information about the Numpy-discussion mailing list
|
{"url":"http://mail.scipy.org/pipermail/numpy-discussion/2004-October/015972.html","timestamp":"2014-04-16T10:20:22Z","content_type":null,"content_length":"3704","record_id":"<urn:uuid:ecef9517-cd5e-437c-91f4-7314aa5cf2bf>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00519-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Lets talk about COFFEE
Lets talk about COFFEE - Page 27
• Posts: 16
• Joined: 1/2012
• Location: Boston
unfortunately all i could afford is the whirley pop for now lol
They told me they use UPS.
UPS also delivers beans to my parents in Jersey at a better roast date than the fucking cafe's get in NYC.
• Posts: 12,902
• Joined: 6/2007
• Location: omicron persei 8
you can order online, but shipping costs is like half of bag cost
I know, so I don't bother. I just complain.
It seems that they get their deliveries in the middle of the week or something. This I find annoying, because I will usually buy beans either Sunday, or Monday to have at my office for the workweek.
If I buy them on Wednesday they will sit in my office over the whole weekend without being used. No sense in bringing them home for the weekend because over the weekend I am out and about and I buy
coffee out, not make it myself.
Since I didn't get to buy beans ths morning I went to starbucks just now. I took this horrible picture, but I think it illustrates some of starbucks' issue. They have the different bean roast
profiles. What they call blonde, and their lightest roast, I would probably categorize as almost medium. Their dark I would say is burnt. What do you think?
It's supposed to work really well if you're careful. Do you like the beans you home roast better than what you get at the store?
Starbucks's blonde roast is a joke --- they go past 2nd crack, which by anyone's definition is dark. Peet's also has a lighter roast now, except they're blends.
Speaking of Peet's, I tried the Ethiopian Super Natural in the CCD with a much longer steep time (8 minutes vs. 4), and it turned out much, much better. The burnt, roast notes are now in balance and
actually pretty pleasant.
• Posts: 1,552
• Joined: 6/2006
Sorry but I don't think it's revelatory or really original to call Starbucks beans burnt or over-roasted.
Oh snap, you really told me.
Interesting. I would have been too scared to steep longer. I would have thought I would have had a cup of mud.
A friend and I were discussing this, and came up with this picture:
The top graph is the instantaneous extraction of various flavors from a bean while it's steeping, while the bottom is the preference curve (which says I am a snob for green roasts, while he will
drink any coffee-flavored swill). If you integrate the top graph, you will have the total extraction over the time period of your integration. This is the same thing as your steep time. One can also
compare the two graphs directly to predict what kind of brewing someone might prefer. He seems to think I like underextracted (sour) coffee while he likes burnt coffee.
Anyway, the curves are obviously abstract and will change with different beans and roasts. My hypothesis for the Ethiopian Super Natural is that the bitter curve is quite a bit lower (perhaps due to
the darker roast), and with a profile that is weighted lower, so the bitter hump occurs more to the left and it's less symmetric. The tasty curve could be biased more towards the right. Either or
both could be true. That is, the bean could have the bitter curve in the picture, but have a tasty curve that peaks more to the right.
So for relatively low steep times, you'd tend to get more bitter than tasty components. Whereas if you steep longer, the higher extraction of the tasty particles could help offset the bitterness that
you get with shorter extraction. The interesting question is (assuming this is true and we aren't just pulling things out of our butts) how one can shape or determine these curves for a bean so as to
better customize a brew for a bean.
Hope this wasn't overthinking it.
I am probably perfectly in between you and your friend, with a "kurtosis" closer to that of Andre.
Originally Posted by
Ok, another pissed off Gimme rant... I woke up a bit early today to walk the mile to Gimme to get beans for the work week. What was I thinking? The definition of insanity if doing the same thing over
and over again and expecting different results. Not one bag of beans was under 10 days old. What the fuck is wrong with these people? I think they know they are fucked up too because as I grab the
bags and turn them over to look at roast dates they get all nervous and come up to me, "Uhh, sir are you looking for a specific coffee??? I can help you." Every time they also are conveniently
getting more beans in "later today". Fuck.
Ok, went to another gimme location after work (a few blocks from the one I went to in the morning) same thing. 10-20 day old beans. The girl was trying to sell me the one roasted on the 23rd. I told
her that I would rather have no coffee than pay $16 for old beans. I went to the location where I was told in the morning they would be getting a delivery that day. Walked in looked at the beans,
alas same ones from the morning. The guy was nervous as usual and said that those are the freshest because they just got them delivered that day. Don't fucking bullshit me, I was here in the morning
and I see the UPS box behind the counter, where they haven't been put out. I walked right out. So pissed off.
Wow, that's awful. You may want to call Gimme just to let them know how their product is being sold to customers. I got better treatment at my local Peet's: when I went in to get the Arabian Mocha
Sanani, I asked how old the stuff in the bins were, and one of the baristas piped up that they'd just received some that day, and they fetched that batch without even being asked. The beans were
roasted the day before.
Originally Posted by
A Y
Wow, that's awful. You may want to call Gimme just to let them know how their product is being sold to customers. I got better treatment at my local Peet's: when I went in to get the Arabian Mocha
Sanani, I asked how old the stuff in the bins were, and one of the baristas piped up that they'd just received some that day, and they fetched that batch without even being asked. The beans were
roasted the day before.
I sent gimme an email via website a while ago abotu their stock of beans. Never heard anything, and they sure as hell didn't change anything.
|
{"url":"http://www.styleforum.net/t/153072/lets-talk-about-coffee/390","timestamp":"2014-04-20T13:58:46Z","content_type":null,"content_length":"111195","record_id":"<urn:uuid:f9df7889-ab7f-4e32-b7c6-439e6f7c8734>","cc-path":"CC-MAIN-2014-15/segments/1397609538787.31/warc/CC-MAIN-20140416005218-00475-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Formality of classifying spaces
up vote 15 down vote favorite
Let $G$ be a compact Lie group (or reductive algebraic group over $\mathbb{C}$), and let $BG$ be its classifying space. Fix a prime $p$. Let $\mathcal{A}$ denote the dg algebra of singular cochains
on $BG$ with coefficients in a field or characteristic $p$ (or if you prefer the dg algebra of endomorphisms of the constant sheaf). My question is:
Is it known for which primes $p$ the dg algebra $\mathcal{A}$ is formal, that is, quasi-isomorphic to a dg algebra with trivial differential?
I assume / hope that the answer is that this is true if $p$ is not a torsion prime for $G$ (i.e. $p$ arbitrary in types $A$ and $C$, $p \ne 2$ in types $B$, $D$ and $G_2$, $p \ne 2, 3$ in types
$F_4$, $E_6$ and $E_7$, and $p \ne 2,3,5$ in type $E_8$.)
Note that we know* that $\mathcal{A}$ is formal in characteristic 0.
Can one then conclude that it is formal in any characteristic in which the cohomology of $\mathcal{A}$ is torsion free?
If so I think this would give the above list of primes.
*) for example because $H(BG, \mathbb{Q})$ is a poynomial algebra, and $\mathcal{A}$ admits a graded commutative model using the de Rham complex -- see Bernstein-Lunts "Equivariant sheaves and
add comment
1 Answer
active oldest votes
I think that you outlined the proof. In more detail, let $W$ be the Weyl group of $G$, and $T$ its maximal torus. Pick $p$ coprime to $|W|$; this allows to ignore higher $W$ group cohomology
in the computation
$$H^*(BG, \mathbb{F}_p) \cong H^*(BT, \mathbb{F}_p)^W$$
Since $W$ is a reflection group, $H^*(BT, \mathbb{F}_p)^W$ is a polynomial algebra, say on $d$ generators. Pick cocycle representatives $x_1, \dots, x_d \in C^*(BG, \mathbb{F}_p)$. Now let
up vote $R = \mathbb{F}_p[y_1, \dots, y_d]$ be the free graded commutative algebra on generators $y_i$ in the same degree as $x_i$, and equip $R$ with the $0$ differential. By freeness (and the fact
2 down that $d(x_i) = 0$), you get a map $R \to C^*(BG, \mathbb{F}_p)$ of DGA's which sends $y_i$ to $x_i$. You know (because you constructed it that way) that it induces an isomorphism in
vote cohomology, and so $BG$ is formal at the prime $p$.
If you have some other mechanism for ensuring that $H^*(BG, \mathbb{F}_p)$ is a polynomial algebra (e.g., the statement is known integrally, as for $G = U(n)$, $Sp(n)$), the same argument
7 What worries me here is that cochains with coefficients in $\mathbb{F}_p$ is only an $E_\infty$-algebra, and so I'm not sure that you can make the generators honestly commute. I think
you'd need to try something like mapping the generators in separately by maps of associative algebras (where a polynomial algebra on one generator is free) and then use the fact that the
range is $E_\infty$ to construct a map from the tensor product. – Tyler Lawson Apr 5 '11 at 14:54
That's a good point. Can you say more about how to get the map from the tensor product into the range? – Craig Westerland Apr 7 '11 at 2:30
Yes, this was also my concern. For tori I think one can make this work, but I don't see a priori how it works in this case. (The reason that Bernstein and Lunts develop the whole
machinery of the de Rham complex on the classifying space is in order to make a choice of representatives so that they commute.) – Geordie Williamson Apr 7 '11 at 6:27
@Craig: If we were talking honest commutativity, then I'd construct maps of associative algebras $F_p[y_i] \to R$ and then use a composite $\otimes F_p[y_i] \to \otimes R \to R$; this
uses that associative algebras are closed under tensor, and for commutative objects the product map is a map of commutative algebras. The same is true in this case, you just have to $E_\
infty$-up everything or work with some kind of honest commutative or associative replacements and deal with the model category issues. – Tyler Lawson Apr 8 '11 at 5:54
add comment
Not the answer you're looking for? Browse other questions tagged reference-request equivariant dg-algebras classifying-spaces a-infinity-algebras or ask your own question.
|
{"url":"https://mathoverflow.net/questions/46521/formality-of-classifying-spaces","timestamp":"2014-04-18T18:46:32Z","content_type":null,"content_length":"58161","record_id":"<urn:uuid:e7ad22f3-848c-4dee-963a-c52e718faca2>","cc-path":"CC-MAIN-2014-15/segments/1398223206672.15/warc/CC-MAIN-20140423032006-00335-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Elizabethport, NJ Statistics Tutor
Find an Elizabethport, NJ Statistics Tutor
...As an English major student in college and English lover, I regard English as one of my favorite subjects and I do great in different kinds of English competitions. I've tutored students on
SAT Reading and received positive feedback. Besides I enjoy surfing on English websites during my free time.
9 Subjects: including statistics, English, reading, Chinese
...Teaching is my passion. I have worked with kids of all ages for the best six year, from one-on-one home tutoring to group tutoring in class rooms and after-school programs. Although I have a
bachelor's in biology, I am able to tutor different subject and help with homework for every grade.
26 Subjects: including statistics, chemistry, reading, geometry
...I am proficient in the material tested in the SAT Math subject tests, both 1 and 2. I've tutored the LSAT logical and analytical reasoning sections several times. I'm able to provide direct
and clear explanations for which choice is the correct one, and why each of the others are false.
32 Subjects: including statistics, physics, calculus, geometry
...Having earned three master's degrees and working on a doctoral degree, all in different fields, I have become very aware of the importance of approaching material in a way that minimizes the
anxiety of what may seem an overwhelming task. This involves learning how to strategize learning. Let me...
50 Subjects: including statistics, chemistry, calculus, physics
...Pricing depends on subject(s) taught, travel required, and minimum hours per week. I am available to teach on weekends and after 6 p.m. on most weekdays. If you have more than one child or
would like semi-private tutoring, rates may be adjusted further.
34 Subjects: including statistics, reading, writing, ESL/ESOL
Related Elizabethport, NJ Tutors
Elizabethport, NJ Accounting Tutors
Elizabethport, NJ ACT Tutors
Elizabethport, NJ Algebra Tutors
Elizabethport, NJ Algebra 2 Tutors
Elizabethport, NJ Calculus Tutors
Elizabethport, NJ Geometry Tutors
Elizabethport, NJ Math Tutors
Elizabethport, NJ Prealgebra Tutors
Elizabethport, NJ Precalculus Tutors
Elizabethport, NJ SAT Tutors
Elizabethport, NJ SAT Math Tutors
Elizabethport, NJ Science Tutors
Elizabethport, NJ Statistics Tutors
Elizabethport, NJ Trigonometry Tutors
Nearby Cities With statistics Tutor
Avenel statistics Tutors
East Newark, NJ statistics Tutors
Elizabeth, NJ statistics Tutors
Hillside, NJ statistics Tutors
Kenilworth, NJ statistics Tutors
Linden, NJ statistics Tutors
Midtown, NJ statistics Tutors
Millburn statistics Tutors
North Elizabeth, NJ statistics Tutors
Parkandbush, NJ statistics Tutors
Peterstown, NJ statistics Tutors
Rahway statistics Tutors
Roselle Park statistics Tutors
Roselle, NJ statistics Tutors
Union Square, NJ statistics Tutors
|
{"url":"http://www.purplemath.com/Elizabethport_NJ_statistics_tutors.php","timestamp":"2014-04-20T21:35:48Z","content_type":null,"content_length":"24402","record_id":"<urn:uuid:55af3698-e0b3-4f40-9647-033de523ab6a>","cc-path":"CC-MAIN-2014-15/segments/1397609539230.18/warc/CC-MAIN-20140416005219-00590-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Buying Cows, Pigs, and Chickens
Date: 07/09/97 at 17:34:56
From: Velda Thomas
Subject: Math question
I recently came across one of the most challenging math problems I've
ever seen. I know that it is a Diophantine problem, and I know the
answer, but I just cannot seem to figure out how to get it. Can you
Here it is:
A farmer buys 100 animals for $100.00. The animals include at least
one cow, one pig, and one chicken, but no other kind. If a cow costs
$10.00, a pig costs $3.00, and a chicken costs $0.50, how many of each
did he buy?
Answer: 5 cows; 1 pig; and 94 chickens.
Velda L. Thomas
Date: 07/15/97 at 20:24:07
From: Doctor Sydney
Subject: Re: Math question
It is always a good idea to first label your objects. So, let's make
the following definitions:
A = the number of cows
P = the number of pigs
C = the number of chickens
Now we can translate our information into equations.
First, we know that the total number of animals is 100:
A + P + C = 100 (1)
And the total cost is $100:
10A + 3P + 0.5C = 100 (2)
Now we have 2 equations and 3 unknowns. Usually a system of equations
like this would have many different solutions if A, P, and C were
allowed to be any number. However, because we know that A, P, and
C must be positive integers (we can't have parts of animals and we
can't have negative animals), the number of possible solutions is
significantly reduced. The fact that A, P, and C are positive integers
will be very important for finding the solution to this problem.
Whenever you have a system of equations that gives relationships
between lots of variables, you want to simplify it so that you can
find out what the individual variables are. One way to do this is to
get rid of as many variables as you can. Using substitution we can
simplify the above two equations into one equation that gives the
relation between just two of the variables. This new equation will be
much easier to work with.
So, first we need to rearrange equation (1) so that it shows what one
of the variables is equal to; we shall solve for C (but you could
solve for P or A, too, if you wanted):
C = 100 - P - A (3)
Substituting equation (3) into equation (2):
10A + 3P + 0.5 (100 - P - A) = 100
Doing some rearranging:
10A + 3P + 50 - 0.5P - 0.5A = 100
9.5A + 2.5P = 50
19A + 5P = 100
Now we have an equation that relates just two of our variables. What
do we do with it? Well, we know that both A and P are positive
integers, right? That is the only piece of information we have left
that can help us solve the problem. Often at this stage, people write
the above equation in a form such that it has "solved for" one of the
variables. For instance, we could solve for P and rewrite the above
equation as:
P = 20 - 19A
This is helpful because when it is in this form, it is easy to plug in
whole number values for A and see immediately whether or not P will
also be a whole number. For instance, we can see right off the bat
that if A = 1, P will not be a whole number; therefore A cannot be 1.
Now look carefully at the equation above. For what values of A will P
be a whole number?
The only values of A which make P a whole number are those such that
when divided by 5 yield a whole number, right? In other words, 5 must
divide evenly into A. Thus, the only possible values for A that make P
a whole number are 5, 10, 15, and so on. 0 is not a possibility
because we are told that A > 0.
We are making progress! Let's see what happens when A = 5. Plugging
into the equation above, we see that:
P = 20 - 19*5 = 1
So far, so good. Now, let's check to make sure when A is 5 and P is 1
that we have that C is a positive integer. You can do this on your
own; you'll find that C must be 94 when A is 5 and P is 1.
So, we are done, right? Not so fast! We have shown that A = 5, P = 1,
C = 94 is one solution, but these problems often have multiple
solutions. We must check to see if there are other solutions.
Let's go back to where we left A and P. We had tried A = 5, right?
Next we need to try A = 10. What value does P take on when A = 10?
Plug into the equation to get that:
P = 20 - 38 = -18
Oh dear! This won't work, since P > 0, right? What about if we try to
plug in bigger and bigger values for A? Well, if you look at the
equation relating P and A, you'll see that P will get more and more
negative. Hence, for all A > 5 such that A is divisible by 5, we have
that P < 0. Thus, the only value that A can be is 5.
Now that we have found a solution that works and we've shown that it
is the only solution, we have finished.
I hope this has helped you to understand how to solve Diophantine
Equations better. Please do write back if you have any more questions.
Good luck!
-Doctors Susan and Sydney, The Math Forum
Check out our web site! http://mathforum.org/dr.math/
|
{"url":"http://mathforum.org/library/drmath/view/53051.html","timestamp":"2014-04-19T10:43:44Z","content_type":null,"content_length":"10145","record_id":"<urn:uuid:4e5e94f0-aa17-42da-aeda-32d79c382363>","cc-path":"CC-MAIN-2014-15/segments/1398223206672.15/warc/CC-MAIN-20140423032006-00289-ip-10-147-4-33.ec2.internal.warc.gz"}
|
YOKOI, Keisuke; NGHIEM, Minh-Quoc; MATSUBAYASHI, Yuichiroh y AIZAWA, Akiko. Contextual Analysis of Mathematical Expressions for Advanced Mathematical Search. Polibits [online]. 2011, n.43, pp.
81-86. ISSN 1870-9044.
[1] "Information Processing Society of Japan,"http://www.jpsj.or.jp/. [ Links ]
[2] World Wide Web Consortium, "Mathematical markup language (mathml) version 2.0 (second edition)," http://www.w3.org/TR/MathML2. [ Links ]
[3] "World Wide Web consortium (W3C)," http://www.w3.org/. [ Links ]
[4] M. Suzuki, T. Kanahori, N. Ohtake, and K. Yamaguchi, "An integrated ocr software for mathematical documents and its output with accessibility," in Computers Helping People with Special Needs,
ser. Lecture Notes in Computer Science. Springer Berlin / Heidelberg, 2004, vol. 3118, pp. 648 655. [ Links ]
[5] R. Munavalli and R. Miner, "Mathfind: a math aware search engine," in Proceedings of the 29th annual international ACM SIGIR conference on Research and development in information retrieval,
ser. SIGIR '06. New York, NY, USA: ACM, 2006, pp. 735 735. [Online]. Available: http://doi.acm.org/10.1145/1148170.1148348. [ Links ]
[6] J. Misutka, "Indexing mathematical content using full text search engine," in WDS' 08 Proceedings of Contributed Papers: Part I Mathematics and Computer Sciences, 2008, pp. 240 244. [ Links
[7] M. Adeel, H. S. Cheung, and S. H. Khiyal, "Math GO! prototype of a content based mathematical formula search engine," Journal of Theoretical and Applied Information Technology, vol. 4, no.
10, pp. 1002 1012, 2006. [ Links ]
[8] K. Yokoi and A. Aizawa, "An approach to similarity search for mathematical expressions using MathML," in Towards digital mathematics library (DML), 2009, pp. 27 35. [ Links ]
[9] M. Kohlhase and A. Franke, "Mbase: Representing knowledge and context for the integration of mathematical software systems," Journal of Symbolic Computation, vol. 32, no. 4, pp. 365 402,
2001. [ Links ]
[10] S. Jeschke, M. Wilke, M. Blanke, N. Natho, and O. Pfeiffer, "Information extraction from mathematical texts by means of natural language processing techniques," in ACM Multimedia EMME
Workshop, 2007, pp. 109 114. [ Links ]
[11] T. Kudo, "Mecab: Yet another part of speech and morphological analyzer," http://mecab.sourceforge.net/. [ Links ]
[12] S. S. Shwartz, Y. Singer, and N. Srebro, "Pegasos: Primal Estimated sub GrAdient SOlver for SVM," in ICML '07: Proceedings of the 24th international conference on Machine learning. New York,
NY, USA: ACM, 2007, pp. 807 814. [ Links ]
[13] N. Okazaki, "Classias: a collection of machine learning algorithms for classification," 2009. [Online]. Available: http://www.chokkan.org/software/classias/. [ Links ]
|
{"url":"http://www.scielo.org.mx/scieloOrg/php/reference.php?pid=S1870-90442011000100011&caller=www.scielo.org.mx&lang=es","timestamp":"2014-04-21T12:13:26Z","content_type":null,"content_length":"8550","record_id":"<urn:uuid:9f8aa1ba-3416-4fc5-ba26-5c9c229f5793>","cc-path":"CC-MAIN-2014-15/segments/1397609539776.45/warc/CC-MAIN-20140416005219-00224-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Elementary Shape From Occluding Contours
The boundary of an object provides a cue to the basic shape of an object not only through the position of the boundary but by the fact that the normal to the surface at the boundary is typically at
90 degees to the viewing direction.
Here are some manually defined occluding boundaries on Mona.
Here is part of a needle diagram showing the surface normals set so that they point outwards from the occluding contours. At all other points it is assumed the surface normal points directly towards
the viewer.
This very crude approximation of what the surface normals are for the whole scene is enough to construct a simple 2½D representation of the shape of the scene. For this I use my Shape from Shapelets
reconstruction method. Note that, strictly speaking, the vector field is far from integrable (at line terminations and T junctions) but the Shapelet reconstruction approach has no difficulties with
this. Some results for Mona are shown below. (Note I chose not to define a contour along her chin. I found that when I did this the reconstruction produced a very pointy chin, giving her a mean
Here is another simple example
|
{"url":"http://www.csse.uwa.edu.au/~pk/Research/ShapeFromContour/index.html","timestamp":"2014-04-21T09:37:22Z","content_type":null,"content_length":"2378","record_id":"<urn:uuid:57f9ac06-c18e-4cea-bc63-d986bdd49f2b>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00257-ip-10-147-4-33.ec2.internal.warc.gz"}
|
South El Monte Algebra 2 Tutor
Find a South El Monte Algebra 2 Tutor
Hello! I am a graduating senior at Caltech in physics. I've been a tutor at Caltech in a few advanced physics and math courses and used to tutor a lot in high school.
26 Subjects: including algebra 2, calculus, geometry, physics
...Are you feeling lost in math class - trying to memorize endless rules without understanding why? I can help. I break it down so that you get it.
24 Subjects: including algebra 2, Spanish, physics, writing
...The section that covers standard grids and coordinate planes is not multiple choice. Math topics include square roots, circumference, ratios and proportions, multiplying and dividing fractions
and decimals, volume, angles, exponents, and the Pythagorean Theorem. Students will also need some basic knowledge of the English and metric measurement systems to answer many of the questions.
27 Subjects: including algebra 2, reading, English, GED
...My passion and enthusiasm for math is reflected in my teaching, and I try to make each session interesting and fun! I have experience working as a private math tutor and at an established math
tutoring company. I am extremely patient and understanding, with an adaptable teaching style based on the student's needs.
9 Subjects: including algebra 2, calculus, geometry, algebra 1
...I thank you for your consideration and look forward to helping you or your student succeed! Best wishes, Marisa S.I am qualified to tutor Genetics because I got an 'A' in Genetics when I took
it for my undergraduate Biology degree. Even after I took the class I used a lot of genetics material from subsequent classes.
11 Subjects: including algebra 2, chemistry, biology, algebra 1
|
{"url":"http://www.purplemath.com/south_el_monte_algebra_2_tutors.php","timestamp":"2014-04-20T23:36:51Z","content_type":null,"content_length":"24129","record_id":"<urn:uuid:cdfb5a9c-a5cc-486c-8a92-6bb3b087c8b4>","cc-path":"CC-MAIN-2014-15/segments/1397609539337.22/warc/CC-MAIN-20140416005219-00115-ip-10-147-4-33.ec2.internal.warc.gz"}
|
The Mathematica Journal: Volume 9, Issue 2: In and Out
In and Out
Q: I would like to understand the timing difference between the following two code samples. Slow version:
Fast version:
Is this due to autocompilation?
A: Robert Knapp (rknapp@wolfram.com) answers: The issue has to do with autocompilation, but is quite subtle. Table is computed, each time through the outer loop. Before evaluation, Mathematica be
consistent, so we cannot do this. Thus, since
|
{"url":"http://www.mathematica-journal.com/issue/v9i2/contents/Inout9-2/Inout9-2_4.html","timestamp":"2014-04-18T05:35:14Z","content_type":null,"content_length":"8391","record_id":"<urn:uuid:c12c3439-83c4-43e2-9f1c-96cc5ac75fe2>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00571-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Chapter 9 : Right Triangles and Trigonometry :
Chapter 9 : Right Triangles and Trigonometry
How do builders find the length of a skywalk support beam? How can you determine the height of a water slide given its length and angle of elevation? In Chapter 9, you'll use the Pythagorean Theorem
and trigonometric ratios to answer these and other questions.
|
{"url":"http://www.classzone.com/books/geometry/page_build.cfm?id=none&ch=9","timestamp":"2014-04-18T14:04:25Z","content_type":null,"content_length":"23055","record_id":"<urn:uuid:4d0cbc3a-8d71-4be7-869e-d6b27653d5ab>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00362-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Ultimate state of two-dimensional Rayleigh-Bénard convection
Seminar Room 1, Newton Institute
Determining the transport properties of high Rayleigh number convection turbulent convection remains a grand challenge for experiment, simulation, theory, and analysis. In this talk, after a brief
review of the theory and applications of Rayleigh-Bénard convection we describe recent results for mathematically rigorous upper limits on the vertical heat transport in two dimensional
Rayleigh-Bénard convection between stress-free isothermal boundaries derived from the Boussinesq approximation of the Navier-Stokes equations. These bounds challenge some popular theoretical
arguments regarding the nature of the asymptotic high Rayleigh number ‘ultimate regime’ of turbulent convection. This is joint work with Jared Whitehead.
|
{"url":"http://www.newton.ac.uk/programmes/TOD/seminars/2012072311001.html","timestamp":"2014-04-17T04:18:42Z","content_type":null,"content_length":"4379","record_id":"<urn:uuid:667f422a-4f6e-43c2-9b92-630eeebf3a87>","cc-path":"CC-MAIN-2014-15/segments/1398223210034.18/warc/CC-MAIN-20140423032010-00446-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Re: Bafflers?
Ah, okay!
The limit operator is just an excuse for doing something you know you can't.
“It's the subject that nobody knows anything about that we can all talk about!” ― Richard Feynman
“Taking a new step, uttering a new word, is what people fear most.” ― Fyodor Dostoyevsky, Crime and Punishment
Re: Bafflers?
Yes, I did it in a different way.
In mathematics, you don't understand things. You just get used to them.
I have the result, but I do not yet know how to get it.
All physicists, and a good many quite respectable mathematicians are contemptuous about proof.
Re: Bafflers?
Well, is it the same as mine, or not?
And, I think I have found a formula for the numbers in the other thread...
The limit operator is just an excuse for doing something you know you can't.
“It's the subject that nobody knows anything about that we can all talk about!” ― Richard Feynman
“Taking a new step, uttering a new word, is what people fear most.” ― Fyodor Dostoyevsky, Crime and Punishment
Re: Bafflers?
My solution consists of using geogebra with an increment of .0000000000000000000000000000000000000000001 at this small increment quantum math takes over and even though that is over 10^43 points it
is done instantaneously. Then I put the 10^43 values into the spreadsheet and curve fit. Because in quantum math 10^43 equals infinity the fit was exact. Got the same answer you did.
In mathematics, you don't understand things. You just get used to them.
I have the result, but I do not yet know how to get it.
All physicists, and a good many quite respectable mathematicians are contemptuous about proof.
Re: Bafflers?
I think my way is simpler...
The limit operator is just an excuse for doing something you know you can't.
“It's the subject that nobody knows anything about that we can all talk about!” ― Richard Feynman
“Taking a new step, uttering a new word, is what people fear most.” ― Fyodor Dostoyevsky, Crime and Punishment
Re: Bafflers?
You should have seen geogebra animating through the 10^43 points in no time at all. The GUI changed colors and its shape changed as it warped into the 5 and 6th dimensions. To see the spreadsheet
holding 10^43 numbers was truly awe inspiring. In quantum math geogebra is a brute.
In mathematics, you don't understand things. You just get used to them.
I have the result, but I do not yet know how to get it.
All physicists, and a good many quite respectable mathematicians are contemptuous about proof.
Re: Bafflers?
It must have been amazing. Shame I wasn't there to see...
The limit operator is just an excuse for doing something you know you can't.
“It's the subject that nobody knows anything about that we can all talk about!” ― Richard Feynman
“Taking a new step, uttering a new word, is what people fear most.” ― Fyodor Dostoyevsky, Crime and Punishment
Re: Bafflers?
Smoke came out of the tower and the monitor began to phase out of this universe. Quantum mathematics! It is faaaaaantaaaastic.
In mathematics, you don't understand things. You just get used to them.
I have the result, but I do not yet know how to get it.
All physicists, and a good many quite respectable mathematicians are contemptuous about proof.
Re: Bafflers?
Math+Physics=Wonder! (that's wonder factorial).
Last edited by anonimnystefy (2012-12-25 08:55:35)
The limit operator is just an excuse for doing something you know you can't.
“It's the subject that nobody knows anything about that we can all talk about!” ― Richard Feynman
“Taking a new step, uttering a new word, is what people fear most.” ― Fyodor Dostoyevsky, Crime and Punishment
Re: Bafflers?
Actually, none of that is true. I had the answer from the book. I just had to look at it.
In mathematics, you don't understand things. You just get used to them.
I have the result, but I do not yet know how to get it.
All physicists, and a good many quite respectable mathematicians are contemptuous about proof.
Re: Bafflers?
Really? None of that is true? Well, you sure had me fooled there!
The limit operator is just an excuse for doing something you know you can't.
“It's the subject that nobody knows anything about that we can all talk about!” ― Richard Feynman
“Taking a new step, uttering a new word, is what people fear most.” ― Fyodor Dostoyevsky, Crime and Punishment
Re: Bafflers?
I am sorry then. Now you know the truth.
In mathematics, you don't understand things. You just get used to them.
I have the result, but I do not yet know how to get it.
All physicists, and a good many quite respectable mathematicians are contemptuous about proof.
Re: Bafflers?
Yes, thank you for being honest.
The limit operator is just an excuse for doing something you know you can't.
“It's the subject that nobody knows anything about that we can all talk about!” ― Richard Feynman
“Taking a new step, uttering a new word, is what people fear most.” ― Fyodor Dostoyevsky, Crime and Punishment
Re: Bafflers?
Honest is the best policy. Like my dear ole pappym used to say.
In mathematics, you don't understand things. You just get used to them.
I have the result, but I do not yet know how to get it.
All physicists, and a good many quite respectable mathematicians are contemptuous about proof.
Re: Bafflers?
I need new problems! The ones already posed are too tough!
The limit operator is just an excuse for doing something you know you can't.
“It's the subject that nobody knows anything about that we can all talk about!” ― Richard Feynman
“Taking a new step, uttering a new word, is what people fear most.” ― Fyodor Dostoyevsky, Crime and Punishment
Re: Bafflers?
Okay, I will see if I can post one. The other's are not too difficult, you are capable of solving them.
New problem:
A plane moves on the xy plane according to the equation x + x*y + y = 5;
If -13.8 < 2 x + 2 y < 5.8 then the plane will be okay.
A says) That plane is a goner. It will never make the flight.
B says) It is close but the plane will make it.
C say) Computer calculations say A is right. I am not getting on that plane.
D says) What are calculations. I like trains better anyway. If man were meant to fly he would have been born like an ostrich.
E says) I have to agree with B on this one. The plane will remain within safe limits for the entire time. By the way D, ostriches can not fly.
Is the plane okay or is it not. Who is right?
In mathematics, you don't understand things. You just get used to them.
I have the result, but I do not yet know how to get it.
All physicists, and a good many quite respectable mathematicians are contemptuous about proof.
Re: Bafflers?
I have to agree with A on this one. Looking at the graph of points whose coordinates satisfy the given equation, I see that the sum of the two coordinates has a minimun of -infjnity and a maximum of
The limit operator is just an excuse for doing something you know you can't.
“It's the subject that nobody knows anything about that we can all talk about!” ― Richard Feynman
“Taking a new step, uttering a new word, is what people fear most.” ― Fyodor Dostoyevsky, Crime and Punishment
Re: Bafflers?
Supposing the path of the flight was 15x² + 16x y + 10y² = 10
and the safety range was
-2.045983018410 < 2x + 2y < 2.045983018410
what is your prediction then?
In mathematics, you don't understand things. You just get used to them.
I have the result, but I do not yet know how to get it.
All physicists, and a good many quite respectable mathematicians are contemptuous about proof.
Full Member
Re: Bafflers?
Last edited by scientia (2013-01-14 22:41:08)
Re: Bafflers?
Hi scientia;
In mathematics, you don't understand things. You just get used to them.
I have the result, but I do not yet know how to get it.
All physicists, and a good many quite respectable mathematicians are contemptuous about proof.
Full Member
Re: Bafflers?
Thanks bobbym. I've hidden my solution.
Last edited by scientia (2013-01-13 21:11:33)
Re: Bafflers?
New problem:
A right triangle is to be constructed on the top side of a square with sides a making a the hypotenuse. See the diagram. You must find point E. Now BE and EC are in ratio of 3 to 4 and BE and BC are
in ratio of 3 to 5. EC and BC are in ratio 4 to 5.
A says) Impossible to determine.
B says) Kaboobly doo, it is easy. You start by...
C says) A is right again. I could not do it so I guess it is impossible.
D says) Right triangle?
E says) I got it too.
Find E and remember it must be for any square and any orientation.
In mathematics, you don't understand things. You just get used to them.
I have the result, but I do not yet know how to get it.
All physicists, and a good many quite respectable mathematicians are contemptuous about proof.
Re: Bafflers?
Ummm, is the point E given or ...? The text of the problem is very unclear...
And what orientation have to do with anything?
The limit operator is just an excuse for doing something you know you can't.
“It's the subject that nobody knows anything about that we can all talk about!” ― Richard Feynman
“Taking a new step, uttering a new word, is what people fear most.” ― Fyodor Dostoyevsky, Crime and Punishment
Re: Bafflers?
Point E is to be found. When you find it a right triangle will be created with the sides in the given proportions.
In mathematics, you don't understand things. You just get used to them.
I have the result, but I do not yet know how to get it.
All physicists, and a good many quite respectable mathematicians are contemptuous about proof.
Re: Bafflers?
What does orientation have to do with anything and since when do you do constructive geometry?
The limit operator is just an excuse for doing something you know you can't.
“It's the subject that nobody knows anything about that we can all talk about!” ― Richard Feynman
“Taking a new step, uttering a new word, is what people fear most.” ― Fyodor Dostoyevsky, Crime and Punishment
|
{"url":"http://www.mathisfunforum.com/viewtopic.php?pid=248568","timestamp":"2014-04-21T09:39:56Z","content_type":null,"content_length":"43157","record_id":"<urn:uuid:20d97395-dba0-4716-9264-8f78a3972bd8>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00118-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Math Forum Discussions
Math Forum
Ask Dr. Math
Internet Newsletter
Teacher Exchange
Search All of the Math Forum:
Views expressed in these public forums are not endorsed by Drexel University or The Math Forum.
Topic: solving for x
Replies: 3 Last Post: Feb 12, 2005 1:50 AM
Messages: [ Previous | Next ]
Re: solving for x
Posted: Feb 11, 2005 4:01 PM
Isn't 2 an answer? I mean, I plug in 2 into both sides of the eqn and I
get 2=2, which I know is true. Since the value x=2 leads to a statement
that is true (i.e. 2=2), it must also be a solution. So x=2 works.
No wait, isn't 3 an answer? I put 3 into both sides of the equation and
it also seems to work?
Isn't 4 an answer?
Isn't 5?
Isn't Pi?
Dang, I can't sem to find a value for x that doesn't work. Can you?
submissions: post to k12.ed.math or e-mail to k12math@k12groups.org
private e-mail to the k12.ed.math moderator: kem-moderator@k12groups.org
newsgroup website: http://www.thinkspot.net/k12math/
newsgroup charter: http://www.thinkspot.net/k12math/charter.html
|
{"url":"http://mathforum.org/kb/message.jspa?messageID=3664852","timestamp":"2014-04-18T16:07:08Z","content_type":null,"content_length":"18887","record_id":"<urn:uuid:bd2dfa61-f4ff-46e8-8097-2d6e310389c2>","cc-path":"CC-MAIN-2014-15/segments/1398223206672.15/warc/CC-MAIN-20140423032006-00282-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Boltzman machines: Constraint satisfaction networks that learn
Results 1 - 10 of 49
, 1986
"... Belief networks are directed acyclic graphs in which the nodes represent propositions (or variables), the arcs signify direct dependencies between the linked propositions, and the strengths of
these dependencies are quantified by conditional probabilities. A network of this sort can be used to repre ..."
Cited by 381 (7 self)
Add to MetaCart
Belief networks are directed acyclic graphs in which the nodes represent propositions (or variables), the arcs signify direct dependencies between the linked propositions, and the strengths of these
dependencies are quantified by conditional probabilities. A network of this sort can be used to represent the generic knowledge of a domain expert, and it turns into a computational architecture if
the links are used not merely for storing factual knowledge but also for directing and activating the data flow in the computations which manipulate this knowledge. The first part of the paper deals
with the task of fusing and propagating the impacts of new information through the networks in such a way that, when equilibrium is reached, each proposition will be assigned a measure of belief
consistent with the axioms of probability theory. It is shown that if the network is singly connected (e.g. tree-structured), then probabilities can be updated by local propagation in an isomorphic
network of parallel and autonomous processors and that the impact of new information can be imparted to all propositions in time proportional to the longest path in the network. The second part of
the paper deals with the problem of finding a tree-structured representation for a collection of probabilistically coupled propositions using auxiliary (dummy) variables, colloquially called "
hidden causes. " It is shown that if such a tree-structured representation exists, then it is possible to uniquely uncover the topology of the tree by observing pairwise dependencies among the
available propositions (i.e., the leaves of the tree). The entire tree structure, including the strengths of all internal relationships, can be reconstructed in time proportional to n log n, where n
is the number of leaves.
, 1985
"... We describe a distributed model of information processing and memory and apply it to the representation of general and specific information. The model consists of a large number of simple
processing elements which send excitatory and inhibitory signals to each other via modifiable connections. Infor ..."
Cited by 102 (10 self)
Add to MetaCart
We describe a distributed model of information processing and memory and apply it to the representation of general and specific information. The model consists of a large number of simple processing
elements which send excitatory and inhibitory signals to each other via modifiable connections. Information processing is thought of as the process whereby patterns of activation are formed over the
units in the model through their excitatory and inhibitory interactions. The memory trace of a processing event is the change or increment to the strengths of the interconnections that results from
the processing event. The traces of separate events are superimposed on each other in the values of the connection strengths that result from the entire set of traces stored in the memory. The model
is applied to a number of findings related to the question of whether we store abstract representations or an enumeration of specific experiences in memory. The model simulates the results of a
number of important experiments which have been taken as evidence for the enumeration of specific experiences. At the same time, it shows how the functional equivalent of abstract
representations—prototypes, logogens
, 1986
"... When Newell introduced the concept of the knowledge level as a useful level of description for computer systems, he focused on the representation of knowledge. This paper applies the knowledge
level notion to the problem of knowledge acquisition. Two interesting issues arise. First, some existing ma ..."
Cited by 73 (3 self)
Add to MetaCart
When Newell introduced the concept of the knowledge level as a useful level of description for computer systems, he focused on the representation of knowledge. This paper applies the knowledge level
notion to the problem of knowledge acquisition. Two interesting issues arise. First, some existing machine learning programs appear to be completely static when viewed at the knowledge level. These
programs improve their performance without changing their "knowledge." Second, the behavior of some other machine learning programs cannot be predicted or described at the knowledge level. These
programs take unjustified inductive leaps. The first programs are called symbol level learning (SLL) programs; the second, non-deductive knowledge level learning (NKLL) programs. The paper analyzes
both of these classes of learning programs and speculates on the possibility of developing coherent theories of each. A theory of symbol level learning is sketched, and some reasons are presented for
, 1990
"... Fundamental developments in feedfonvard artificial neural net-works from the past thirty years are reviewed. The central theme of this paper is a description of the history, origination,
operating characteristics, and basic theory of several supervised neural net-work training algorithms including t ..."
Cited by 60 (2 self)
Add to MetaCart
Fundamental developments in feedfonvard artificial neural net-works from the past thirty years are reviewed. The central theme of this paper is a description of the history, origination, operating
characteristics, and basic theory of several supervised neural net-work training algorithms including the Perceptron rule, the LMS algorithm, three Madaline rules, and the backpropagation tech-nique.
These methods were developed independently, but with the perspective of history they can a/ / be related to each other. The concept underlying these algorithms is the “minimal disturbance principle,
” which suggests that during training it is advisable to inject new information into a network in a manner that disturbs stored information to the smallest extent possible. I.
"... A Threshold Circuit consists of an acyclic digraph of unbounded fanin, where each node computes a threshold function or its negation. This paper investigates the computational power of Threshold
Circuits. A surprising relationship is uncovered between Threshold Circuits and another class of unbound ..."
Cited by 52 (1 self)
Add to MetaCart
A Threshold Circuit consists of an acyclic digraph of unbounded fanin, where each node computes a threshold function or its negation. This paper investigates the computational power of Threshold
Circuits. A surprising relationship is uncovered between Threshold Circuits and another class of unbounded fanin circuits which are denoted Finite Field ZP (n) Circuits, where each node computes
either multiple sums or products of integers modulo a prime P (n). In particular, it is proved that all functions computed by Threshold Circuits of size S(n) n and depth D(n) can also be computed by
ZP (n) Circuits of size O(S(n) log S(n)+nP (n) log P (n)) and depth O(D(n)). Furthermore, it is shown that all functions computed by ZP (n) Circuits of size S(n) and depth D(n) can be computed by
Threshold Circuits of size O ( 1 2 (S(n) log P (n)) 1+) and depth O ( 1 5 D(n)). These are the main results of this paper. There are many useful and quite surprising consequences of this result. For
example, integer reciprocal can be computed in size n O(1) and depth O(1). More generally, any analytic function with a convergent rational polynomial power series (such as sine, cosine,
exponentiation, square root, and logarithm) can be computed within accuracy 2,nc, for any constant c, by Threshold Circuits of
- Behavioral and Brain Sciences , 1986
"... This excerpt is provided, in screen-viewable form, for personal use only by ..."
- MACHINE LEARNING: AN ARTIFICIAL INTELLIGENCE APPROACH , 1986
"... This chapter presents an overview of goals and directions in machine learning research, and is intended to serve as a conceptual road map to other chapters. It investigates intrinsic aspects of
the learning process, classifies current lines of research, and presents the author's view of the relatio ..."
Cited by 40 (5 self)
Add to MetaCart
This chapter presents an overview of goals and directions in machine learning research, and is intended to serve as a conceptual road map to other chapters. It investigates intrinsic aspects of the
learning process, classifies current lines of research, and presents the author's view of the relationship among learning paradigms, strategies and orientations.
- PROCEEDINGS OF THE ECAI94 WORKSHOP ON COMBINING SYMBOLIC AND CONNECTIONIST PROCESSING, ECCAI , 1994
"... ..."
, 1990
"... We survey learning algorithms for recurrent neural networks with hidden units and attempt to put the various techniques into a common framework. We discuss fixpoint learning algorithms, namely
recurrent backpropagation and deterministic Boltzmann Machines, and non-fixpoint algorithms, namely backpro ..."
Cited by 27 (3 self)
Add to MetaCart
We survey learning algorithms for recurrent neural networks with hidden units and attempt to put the various techniques into a common framework. We discuss fixpoint learning algorithms, namely
recurrent backpropagation and deterministic Boltzmann Machines, and non-fixpoint algorithms, namely backpropagation through time, Elman's history cutoff nets, and Jordan's output feedback
architecture. Forward propagation, an online technique that uses adjoint equations, is also discussed. In many cases, the unified presentation leads to generalizations of various sorts. Some
simulations are presented, and at the end, issues of computational complexity are addressed. This research was sponsored in part by The Defense Advanced Research Projects Agency, Information Science
and Technology Office, under the title "Research on Parallel Computing", ARPA Order No. 7330, issued by DARPA/CMO under Contract MDA972-90-C-0035 and in part by the National Science Foundation under
grant number EET-8716324 and i...
|
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=767159","timestamp":"2014-04-21T00:56:07Z","content_type":null,"content_length":"36318","record_id":"<urn:uuid:d7428e05-fc70-448f-a3da-03162e229696>","cc-path":"CC-MAIN-2014-15/segments/1397609539337.22/warc/CC-MAIN-20140416005219-00396-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Literature aided determination of data quality and statistical significance threshold for gene expression studies
Gene expression data are noisy due to technical and biological variability. Consequently, analysis of gene expression data is complex. Different statistical methods produce distinct sets of genes. In
addition, selection of expression p-value (EPv) threshold is somewhat arbitrary. In this study, we aimed to develop novel literature based approaches to integrate functional information in analysis
of gene expression data.
Functional relationships between genes were derived by Latent Semantic Indexing (LSI) of Medline abstracts and used to calculate the function cohesion of gene sets. In this study, literature cohesion
was applied in two ways. First, Literature-Based Functional Significance (LBFS) method was developed to calculate a p-value for the cohesion of differentially expressed genes (DEGs) in order to
objectively evaluate the overall biological significance of the gene expression experiments. Second, Literature Aided Statistical Significance Threshold (LASST) was developed to determine the
appropriate expression p-value threshold for a given experiment.
We tested our methods on three different publicly available datasets. LBFS analysis demonstrated that only two experiments were significantly cohesive. For each experiment, we also compared the LBFS
values of DEGs generated by four different statistical methods. We found that some statistical tests produced more functionally cohesive gene sets than others. However, no statistical test was
consistently better for all experiments. This reemphasizes that a statistical test must be carefully selected for each expression study. Moreover, LASST analysis demonstrated that the expression
p-value thresholds for some experiments were considerably lower (p < 0.02 and 0.01), suggesting that the arbitrary p-values and false discovery rate thresholds that are commonly used in expression
studies may not be biologically sound.
We have developed robust and objective literature-based methods to evaluate the biological support for gene expression experiments and to determine the appropriate statistical significance threshold.
These methods will assist investigators to more efficiently extract biologically meaningful insights from high throughput gene expression experiments.
Gene expression data are complex, noisy, and subject to inter- and intra-laboratory variability [1,2]. Moreover, because tens of thousands of measurements are made in a typical experiment, the
likelihood of false positives (type I error) is high. One way to address these issues is to increase replicates in the experiments. However this is generally cost prohibitive. Therefore, quality
control of gene expression experiments with limited sample size is important for identification of true DEGs. Although the completion of the Microarray Quality Control (MAQC) project provides a
framework to assess microarray technologies, others have pointed out that it does not sufficiently address inter- and intra-platform comparability and reproducibility [3-5].
Even with reliable gene expression data, statistical analysis of microarray experiments remains challenging to some degree. Jeffery and coworkers found a large discrepancy between gene lists
generated by 10 different feature selection methods, including significance analysis of microarrays (SAM), analysis of variance (ANOVA), Empirical Bayes, and t-statistics [6]. Several studies have
focused on finding robust methods for identification of DEGs [7-15]. However, as more methods become available, it is increasingly difficult to determine which method is most appropriate for a given
experiment. Hence, it is necessary to objectively compare and evaluate different gene selection methods [6,16-18], which result in different number of DEGs and different false discovery rate (FDR)
estimates [19].
FDR is determined by several factors such as proportion of DEGs, gene expression variability, and sample size [20]. Controlling for FDR can be too stringent, resulting in a large number of false
negatives [21-23]. Therefore, determination of an appropriate threshold is critical for effectively identifying truly differentially expressed genes, while minimizing both false positives and false
negatives. A recent study, using a cross validation approach showed that optimal selection of FDR threshold could provide good performance on model selection and prediction [24]. Although many
researchers have made considerable progress in improving FDR estimation and control [25-27], as well as other significance criteria [28-31], the instability resulted from high level of noise in
microarray gene expression experiments cannot be completely eliminated. There is therefore a great need to make meaningful statistical significance and FDR thresholds by incorporating biological
Recently, Chuchana et al. integrated gene pathway information into microarray data to determine the threshold for identification of DEGs [32]. By comparing a few biological parameters such as total
number of networks and common genes among pathways, they determined the statistical threshold by the amount of biological information obtained from the DEGs [32]. This study seems to be the first
attempt to objectively determine the threshold of DEGs based on biological function. However, there are several limitations of this study. First, the method relied on Ingenuity pathway analysis which
may be biased toward well studied genes and limited by human curation. Second, the threshold selection is iteratively defined. Finally, the approach is manual, which is not realistic for large scale
genome-wide applications.
A number of groups have developed computational methods to measure functional similarities among genes using annotation in Gene Ontology and other curated databases [33-38]. For example, Chabalier et
al., showed that each gene can be represented as a vector which contains a set of GO terms [34]. Each term was assigned a different weight according to the number of genes annotated by this term and
the total number of annotated genes in the collection. Thus, GO-based similarity of gene pairs was calculated using a vector space model. Other studies not only focused on using GO annotations to
calculate gene-gene functional similarities but also to determine the functional coherence of a gene set. Recently, Richards et al utilized the topological properties of a GO-based graph to estimate
the functional coherence of gene sets [38]. They developed a set of metrics by considering both the enrichment of GO terms and their semantic relationships. This method was shown to be robust in
identifying coherent gene sets compared with random sets obtained from microarray datasets.
Previously, we developed a method which utilizes Latent Semantic Indexing (LSI), a variant of the vector space model of information retrieval, to determine the functional relationships between genes
from Medline abstracts [39]. This method was shown to be robust and accurate in identifying both explicit and implicit gene relationships using a hand curated set of genes. More recently, we applied
this approach to determine the functional cohesion of gene sets using the biomedical literature [40]. We showed that the LSI derived gene set cohesion was consistent across >6000 GO categories. We
also showed that this literature based method could be used to compare the cohesion of gene sets obtained from microarray experiments [40]. Subsequently, we applied this method to evaluate various
microarray normalization procedures [41]. In the present study, we aimed to develop and test a robust literature-based method for evaluating the overall quality, as determined by functional cohesion,
of microarray experiments. In addition, we describe a novel method to use literature derived functional cohesion to determine the threshold for expression p-value and FDR cutoffs in microarray
Gene-document collection and similarity matrix generation
All titles and abstracts of the Medline citations cross-referenced in the mouse, rat and human Entrez Gene entries as of 2010 were concatenated to construct gene-documents and gene-gene similarity
scores were calculated by LSI, as previously described [39,40,42]. Briefly, a term-by-gene matrix was created for mouse and human genes where the entries of the matrix were the log-entropy of terms
in the document collection. Then, a truncated singular value decomposition (SVD) of that matrix was performed to create a lower dimension (reduced rank) matrix. Genes were then represented as vectors
in the reduced rank matrix and the similarity between genes was calculated by the cosine of the vector angles. Gene-to-gene similarity was calculated using the first 300 factors, which has good
performance for large document collections [43].
Calculation of literature-based functional significance (LBFS)
This study is an extension of our previous work on gene-set cohesion analysis [40]. Briefly, we showed that LSI derived gene-gene relationships can be used effectively to calculate a literature
cohesion p-value (LPv). LPv is derived by using Fisher's exact test to determine if the number of literature relationships above a pre-calculated threshold in a given gene set is significantly
different from that which is expected by chance. In many cases, the size of the differentially expressed gene set can be very large. Computationally it is not feasible to calculate one LPv for a very
large gene set. Also, it is difficult to compare LPvs if the gene sets are vastly different in size. Therefore, we defined a new metric called literature cohesion index (LCI) of randomly sampled
subsets of 50 genes from the pool of DEGs. LCI is the fraction of the sampled subsets that have an LPv < 0.05. Then, the overall literature-based functional significance (LBFS) of the entire DEG set
is determined by a Fisher's exact test comparing the LCI to that expected by chance (i.e., under the complete null hypothesis that no differential expression exists) via a permutation test procedure
(Figure 1). In forming the 2-by-2 table, average counts from the multiple permutations are rounded to the nearest integers.
Figure 1. Overview of the LBFS algorithm. A statistical test was applied to get differentially expressed genes (DEGs) from the original labeled (OL) and permutated labeled (PL) samples. Subsets of 50
genes were randomly selected 1000 times from each pool of DEGs. Then literature p-values (LPvs) were calculated for each 50 gene-set. A Fisher's Exact test was used to determine if the proportion
(called LCI) of subsets with LPv <0.5 in the OL group was significantly different from that obtained from PL group.
Literature aided statistical significance threshold (LASST)
Now suppose a differential expression p-value (EPv) is computed for each probe (probeset) by a proper statistical test. A statistical significance threshold (an EPv cutoff) can be determined by
considering the relationship between the EPv and the LCI for a given DEG set. First, a grid of EPv cutoffs is specified such as 0.001, 0003, 0.005, 0.01, ⋯, 1, to generate a DEG set at each cutoff
value. Next, the LCI is calculated for each DEG set using the sub-sampling procedure as described above. Apart from some random fluctuations, the LCI value is typically a decreasing function of the
EPv threshold and assumes an L shape (Figure 2), implying that the LCI partitions the EPv thresholds (and the corresponding DEG sets) into two subpopulations: one with good LCI (the vertical part of
the L shape) and one with poor LCI. The EPv threshold at the boundary of the two subpopulations (i.e., at the bend point), can be used as a statistical significance cutoff for selecting DEGs. The
bend point can be determined by moving a two-piece linear fit to the L-shaped curve from left to right. The LASST algorithm is as follows:
Figure 2. Relationship between EPV and LCI. The fraction of gene sets with LPv < 0.05 (y-axis) was plotted at various expression p-value (EPv) thresholds (x-axis) for 3 different datasets. Inset
shows magnified view for EPv < 0.10.
(1) Specify an increasing sequence of EPv statistical significance thresholds α[1], ⋯, α[m ]and generate DEG sets at these specified significance levels.
(2) For each DEG set generated in (1), estimate the LCI using the sub-sampling procedure described above, to obtain pairs (α[i], L[i]), i = 1, 2, ⋯, m.
(3) Choose an integer m[0 ](3 by default) and perform two-piece linear fits to the curve as follows: For k = m[0], m[0]+1, ⋯, m-m[0], fit a straight line by lease square to the points (α[j], L[j]), j
= 1, 2, ⋯, k (the left piece) to obtain intercept and slope , . Similarly fit a straight line to the points (α[j], L[j]), j = k+1, 2, ⋯, m (the right piece) to obtain intercept and slope , .
Compute .
(4) Let k* be the first local maxima of V[k ](k == m[0], m[0]+1, ⋯, m-m[0]), that is, .
(5) Take the k*[th ]entry on the α sequence specified in (1) as the EPv significance cutoff.
Microarray data analysis
To test the performance of our approach, we randomly chose three publicly available microarray datasets from Gene Expression Omnibus (GEO): 1) interleukin-2 responsive (IL2) genes [44]; 2) PGC-1beta
related (PGC-1beta) genes [45]; 3) Endothelin-1 responsive (ET1) genes [46]. To be able to compare across these datasets, we focused only on experiments using the Affymetrix Mouse 430-2 platform. All
datasets (.cel files) were imported into GeneSpring GX 11 and processed using MAS5 summarization and quantile normalization. Probes with all absent calls were removed from subsequent analysis. As
discussed earlier, the content and literature cohesion of a DEG set can largely depend on the statistical test. For this reason, four popular statistical tests including empirical Bayes approach [47
], student t-Test, Welch t-Test and Mann-Whitney test were performed to identify DEGs with a statistical significance level 0.05.
Comparison of various statistical tests using LBFS
The goal of our study was to develop a literature based method to objectively evaluate the biological significance of differentially expressed genes produced by various statistical methods applied to
gene expression experiments. Previously, we developed a method and web-tool called Gene-set Cohesion Analysis Tool (GCAT) which determines the functional cohesion of gene sets using latent semantic
analysis of Medline abstracts [40]. However, this method was applicable only to small gene sets and could not be used to compare gene-sets with varying sizes. Here, we have extended this functional
cohesion method to determine the biological significance of larger gene sets, which are typically found in microarray studies. To accomplish this, we first calculate the Literature Cohesion Index
(LCI, see methods for details) of DEGs produced (Figure 1). Literature based functional significance (LBFS) is then calculated by comparing the LCI of the original labeled experiment and a permuted
experiment (Figure 1). Importantly, we found that LBFS values varied greatly between different statistical tests for a given dataset (Table 1). For example, the Empirical Bayes method produced the
most functionally significant DEGs for PGC-1beta dataset, but not the other two datasets. In contrast, the Welch t-test generated the most functionally significant DEGs for the IL2 dataset. Both
PGC-1beta and IL2 experiments showed significant (p<0.05) LBFS values with multiple statistical tests, whereas none of the tests on ET1 dataset produced DEGs with significant LBFS (Table 1). These
results suggest that the PGC-1beta and IL2 experiments likely produced biologically relevant DEGs compared with the ET1 experiments. The lack of biological significance for ET1 DEGs may be due to
poor data quality or lack of knowledge in the literature that functionally connects these DEGs. However, the latter may not be the case as the percentage of genes with abstracts was 68-84% for all
datasets and statistical tests (Additional file 1).
Table 1. Literature based functional significance (LBFS) of gene sets generated by four statistical tests for three different microarray experiments.
Additional file 1. Number of DE genes (with 0.05 EPv) and percentage of having abstracts that generated from different tests for PGC-1beta, IL2 and ET1 datasets.
Format: PDF Size: 29KB Download file
This file can be viewed with: Adobe Acrobat Reader
Determination of EPv threshold using LASST
In the above analysis, DEGs were selected using an arbitrary statistical threshold of p<0.05, as is the case for many published expression studies. However, in reality, there is no biological reason
why this threshold is selected for experiments. Once the appropriate statistical test was chosen by application of LBFS above, we tested if literature cohesion could be applied to determine the EPv
cutoff. We developed another method called Literature Aided Statistical Significance Threshold (LASST) which determines the EPv by a two-piece linear fit of the LCI curves as a function of EPv as
described in Methods. LASST was applied to p-values produced by Empirical Bayes for PGC-1beta experiment and Welch t-test for the IL2 and ET1 experiments. DEGs were produced at each point on a grid
of unequally-spaced statistical significance levels (α = 0.001, 0.003, 0.005,⋯). In computing the LCI, the LPv level was set to 0.05, and the size of the gene subsets from the DEG pool was set to 50
in the sub-sampling procedure as described in Methods. The LCI of a DEG set was plotted against various α levels of the EPv (Figure 2). Interestingly, application of LASST determined an EPv
significance threshold of 0.01 (corresponding LCI 0.55) for PGC-1beta dataset and 0.02 (LCI 0.315) for IL2 dataset. None of the DEG sets from the ET1 experiment had appreciable LCI, which remained
consistently low across the α levels (Figure 2). Thus, an EPv threshold could not be determined using the LCI approach for ET1 dataset. These results are consistent with what we observed above (Table
While computing LCIs in the above analysis, the LPv threshold was set at 0.05. We wondered if different LPv thresholds would affect LASST results. Therefore, we calculated LCI at different LPv
thresholds such as 0.01, 0.03, 0.05, 0.06, 0.08 and 0.1. We found that the shape of the LCI curves were similar with respect to EPv values (Figure 3), indicating that LASST is not sensitive to
different reasonably conservative LPv thresholds.
Figure 3. Relationship between EPV and LCI at various thresholds. The LCI at various LPv thresholds ranging from 0.01 to 0.1 (y-axis) was plotted against various EPv thresholds (x-axis) for PGC-1beta
dataset. Inset shows magnified view for EPv < 0.10. The shapes of the curves are similar at various LPv thresholds.
We next compared the LASST results with several popular multiple hypothesis testing correction procedures along with the unadjusted p-value threshold of 0.05 in a student t-test (Table 2). For the
IL2 experiment, Storey's q-value method at 0.1 identified the highest number of DEGs. In stark contrast, only 1 gene was selected by any of the four FDR correction methods for the PGC-1beta
experiment and 0 genes were selected for the ET1 experiment. Importantly, application of LASST selected 3485 genes at a p-value threshold of 0.02 (corresponding to FDR 0.032) and 1175 genes at a
p-value threshold of 0.01 (corresponding to FDR 0.074) for IL2 and PGC-1beta experiments, respectively. These results suggest that perhaps more biologically relevant DEGs can be selected with lower
FDR values.
Table 2. Number of significant genes identified by student t-test after correction for multiple hypotheses testing
Although microarray technology has become common and affordable, analysis and interpretation of microarray data remains challenging. Experimental design and quality of the data can severely affect
the results and conclusions drawn from a microarray experiment. Using our approach, we found that some datasets (e.g., PGC-1beta) produced more functionally cohesive gene sets than others (e.g.,
ET1). There can be many biological or technological reasons for the lack of cohesion in any microarray dataset. For instance, it is possible that the experimental perturbation (or signaling pathway)
simply did not alter mRNA expression levels in that system as hypothesized. It is also possible that the data are noisy due to technical or biological variations, which result in false differential
expression. Although our method will not identify the causes of this variation, it can help in assessment of the overall quality of the experiment and provide feedback to the investigators in order
to adjust the experimental procedures. For example, after observing a low LBFS value, the investigator may choose to remove outlier samples or add more replicates into the study design.
It is important to note that a low cohesion value could be due to a lack of information in the biomedical literature. In other words, it is possible that the microarray experiment has uncovered new
gene associations which have not been previously reported in the literature. This issue would affect any method that relies on human curated databases or natural language processing of biomedical
literature. However, our LSI method presents a unique advantage over other approaches because it extracts both explicit and implicit gene associations, based on weighted term usage patterns in the
literature. Consequently, gene associations are ranked based on their conceptual relationships and not specific interactions documented in the literature. Thus, we posit that LSI is particularly
suited for analysis of discovery oriented genomic studies which are geared toward identifying new gene associations. Further work is necessary to be able to determine exactly how (whether explicitly
or implicitly) a subset of functionally cohesive genes are related to one another in the LSI model.
A major challenge in microarray analysis involves selection of the appropriate statistical tests, which have different assumptions about the data distribution and result in different DEG sets. For
instance, parametric methods are based on the assumption that the observations adhere to a normal distribution. The assumption of normality is rarely satisfied in microarray data even after
normalization. Nonparametric methods are distribution free and do not make any assumptions of the population from which the samples are drawn. However, nonparametric tests lack statistical power with
small samples, which is often the case in microarray studies. In this study, we found that although Mann-Whitney nonparametric test identified the largest number of DEGs for PGC-1beta experiment, the
DEGs were not functionally significant (Table 1). Also, we found that some tests were selectively better for some experiments. For example, the Empirical Bayes method produced the best results for
the PGC-1beta experiment, while the Welch t-test produced the best results for the IL2 experiment. Taken together, we demonstrate that our method allows an objective and literature based method to
evaluate the appropriateness of different statistical tests for a given experiment.
Several groups have developed methods to assess functional cohesion or refine feature selection by incorporating biological information from either the primary literature or curated databases [38,48-
50]. To our knowledge, a literature based approach to evaluate the overall quality of microarray experiments has not been reported. Although we did not extensively compare our approach with these
methods, we performed a preliminary comparison with a well known Gene Set Enrichment Analysis (GSEA) method [49]. GSEA calculates the enrichment p-value for biological pathways in curated databases
for a given set of DEGs. Presumably, if a microarray experiment is biologically significant, then higher number of relevant pathways should be enriched. Indeed, we found that GSEA identified 410, 309
and 283 enriched pathway gene sets with FDR <0.25 for PGC-1beta, IL2 and ET1experiments, respectively. These results correlated well with our LBFS findings which showed that DEGs obtained from
PGC-1beta and IL2 were more functionally significant than ET1. However, GSEA identified a substantial number of enriched pathways for ET1. One issue is that GSEA only focuses on gene subsets and not
the entire DEG list. Thus, it does not evaluate the overall cohesion or functional significance of the DEG list. In addition, since GSEA relies on human curated databases such as GO and KEGG, it is
susceptible to curation biases, which favor well-known genes and pathways and contain limited information on other genes.
Assuming that microarray experiment is of high quality and an appropriate statistical test has been selected for a microarray experiment, selection of the expression p-value cutoff still remains
arbitrary for nearly all published studies. In our work, we found a positive correlation between literature cohesion index and EPv (Figure 2). Based on the distribution of LCI with respect to EPv, we
devised a method (called LASST) which empirically determined the EPv cutoff value. Not surprisingly, we found that different EPv cutoffs should be used for the different microarray experiments that
we examined. Indeed, we found that application of LASST resulted in a smaller p-value threshold and substantially smaller number of DEGs for both IL1 and PGC-1beta experiments. Therefore, LASST
enables researchers to narrow their gene lists and focus on biologically important genes for further experimentation.
Finally, another major challenge for microarray analysis is the propensity for high false discovery rate (FDR) caused by multiple hypothesis testing. Correction of multiple hypothesis testing
including family wise error rate (FWER) are often too stringent which may lead to a large number of false negatives. As with EPv cutoff concerns above, setting the FDR threshold at levels 0.01, 0.05,
or 0.1 does not have any biological meaning [29]. For instance, no false positive error correction method produced adequate DEGs for PCG-1beta and ET1 experiments. However, our analysis showed that
PGC-1beta dataset was biologically very cohesive (Table 1). This suggests that applying FDR correction to this dataset would produce a very large number of false negatives. Another important finding
of our study is that the false positive error correction procedures appear to be sensitive to DEG size. For instance, using student t-test IL2 dataset consisted of 5001 DEGs with a p-value <0.05,
whereas the Storey FDR method produced 5955 at q<0.1. However, our literature based analysis revealed that the IL2 dataset produced less biologically cohesive DEGs than the PGC-1beta dataset, which
showed only 1 gene with q<0.1. In the future, it will be important to expand these preliminary observations to a larger of set of microarray experiments and to determine the precise relationships
between false positive correction methods and biological significance.
In this study, we developed a robust methodology to evaluate the overall quality of microarray experiments, to compare the appropriateness of different statistical methods, and to determine the
expression p-value thresholds using functional information in the biomedical literature. Using our approach, we showed that the quality, as measured by the biological cohesion of DEGs, can vary
greatly between microarray experiments. In addition, we demonstrate that the choice of statistical test should be carefully considered because different tests produce different DEGs with varying
degrees of biological significance. Importantly, we also demonstrated that procedures that control false positive rates are often too conservative and favor larger DEG sets without considering
biological significance. The methods developed herein can better facilitate analysis and interpretation of microarray experiments. Moreover, these methods provide a biological metric to filter the
vast amount of publicly available microarray experiments for subsequent meta-analysis and systems biology research.
ANOVA: analysis of variance; DEGs: differentially expressed genes; EPv: expression p-value; ET1: Endothelin-1 responsive; FDR: False Discovery Rate; GCA: gene-set cohesion analysis; GCAT: Gene-set
Cohesion Analysis Tool; GEO: Gene Expression Omnibus; IL2: interleukin-2 responsive; LASST: Literature aided statistical significance thresholds; LBFS: literature-based functional significance; LCI:
literature cohesion index; LPv: literature cohesion p-value; LSI: Latent Semantic Indexing; MAQC: Microarray Quality Control; PGC-1beta: PGC-1beta related; SAM: significance analysis of microarrays;
SVD: singular value decomposition;
Authors' contributions
L. Xu developed the algorithm, carried out the data analyses, performed all of the evaluation and wrote the manuscript. C. Cheng developed the literature aided statistical significance thresholds
method and wrote part of the manuscript. E.O. George provided statistical supervision of the study. R. Homayouni conceived, co-developed the methods, supervised the study and wrote the manuscript.
We thank Dr. Kevin Heinrich (Computable Genomix, Memphis, TN) for providing the gene-gene association data. This work was supported by The Assisi Foundation of Memphis and The University of Memphis
Bioinformatics Program.
This article has been published as part of BMC Genomics Volume 13 Supplement 8, 2012: Proceedings of The International Conference on Intelligent Biology and Medicine (ICIBM): Genomics. The full
contents of the supplement are available online at http://www.biomedcentral.com/bmcgenomics/supplements/13/S8.
1. Luo J, Schumacher M, Scherer A, Sanoudou D, Megherbi D, Davison T, Shi T, Tong W, Shi L, Hong H, Zhao C, Elloumi F, Shi W, Thomas R, Lin S, Tillinghast G, Liu G, Zhou Y, Herman D, Li Y, Deng Y,
Fang H, Bushel P, Woods M, Zhang J: A comparison of batch effect removal methods for enhancement of prediction performance using MAQC-II microarray gene expression data.
Pharmacogenomics J 2010, 10:278-291. PubMed Abstract | Publisher Full Text | PubMed Central Full Text
2. Scherer A: Batch Effects and Noise in Microarray Experiments: Sources and Solutions.
3. Chen JJ, Hsueh HM, Delongchamp RR, Lin CJ, Tsai CA: Reproducibility of microarray data: a further analysis of microarray quality control (MAQC) data.
BMC Bioinformatics 2007, 8:412. PubMed Abstract | BioMed Central Full Text | PubMed Central Full Text
4. Shi L, Reid LH, Jones WD, Shippy R, Warrington JA, Baker SC, Collins PJ, de Longueville F, Kawasaki ES, Lee KY, Luo Y, Sun YA, Willey JC, Setterquist RA, Fischer GM, Tong W, Dragan YP, Dix DJ,
Frueh FW, Goodsaid FM, Herman D, Jensen RV, Johnson CD, Lobenhofer EK, Puri RK, Schrf U, Thierry-Mieg J, Wang C, Wilson M, Wolber PK, et al.: The MicroArray Quality Control (MAQC) project shows
inter-and intraplatform reproducibility of gene expression measurements.
Nat Biotechnol 2006, 24:1151-1161. PubMed Abstract | Publisher Full Text | PubMed Central Full Text
5. Shi L, Campbell G, Jones WD, Campagne F, Wen Z, Walker SJ, Su Z, Chu TM, Goodsaid FM, Pusztai L, Shaughnessy JD, Oberthuer A, Thomas RS, Paules RS, Fielden M, Barlogie B, Chen W, Du P, Fischer M,
Furlanello C, Gallas BD, Ge X, Megherbi DB, Symmans WF, Wang MD, Zhang J, Bitter H, Brors B, Bushel PR, Bylesjo M, et al.: The MicroArray Quality Control (MAQC)-II study of common practices for
the development and validation of microarray-based predictive models.
Nat Biotechnol 2010, 28:827-838. PubMed Abstract | Publisher Full Text | PubMed Central Full Text
6. Jeffery IB, Higgins DG, Culhane AC: Comparison and evaluation of methods for generating differentially expressed gene lists from microarray data.
BMC Bioinformatics 2006, 7:359. PubMed Abstract | BioMed Central Full Text | PubMed Central Full Text
7. Kadota K, Konishi T, Shimizu K: Evaluation of two outlier-detection-based methods for detecting tissue-selective genes from microarray data.
Gene Regul Syst Bio 2007, 1:9-15. PubMed Abstract | PubMed Central Full Text
8. Kadota K, Nakai Y, Shimizu K: Ranking differentially expressed genes from Affymetrix gene expression data: methods with reproducibility, sensitivity, and specificity.
Algorithms Mol Biol 2009, 4:7. PubMed Abstract | BioMed Central Full Text | PubMed Central Full Text
9. Pearson RD: A comprehensive re-analysis of the Golden Spike data: towards a benchmark for differential expression methods.
BMC Bioinformatics 2008, 9:164. PubMed Abstract | BioMed Central Full Text | PubMed Central Full Text
10. Jung K, Friede T, Beiszbarth T: Reporting FDR analogous confidence intervals for the log fold change of differentially expressed genes.
BMC Bioinformatics 2011, 12:288. PubMed Abstract | BioMed Central Full Text | PubMed Central Full Text
11. Hu J, Xu J: Density based pruning for identification of differentially expressed genes from microarray data.
BMC Genomics 2010, 11(Suppl 2):S3. PubMed Abstract | BioMed Central Full Text | PubMed Central Full Text
12. Wille A, Gruissem W, Buhlmann P, Hennig L: EVE (external variance estimation) increases statistical power for detecting differentially expressed genes.
Plant J 2007, 52:561-569. PubMed Abstract | Publisher Full Text
13. Elo LL, Katajamaa M, Lund R, Oresic M, Lahesmaa R, Aittokallio T: Improving identification of differentially expressed genes by integrative analysis of Affymetrix and Illumina arrays.
Omics 2006, 10:369-380. PubMed Abstract | Publisher Full Text
14. Lai Y: On the identification of differentially expressed genes: improving the generalized F-statistics for Affymetrix microarray gene expression data.
Comput Biol Chem 2006, 30:321-326. PubMed Abstract | Publisher Full Text
15. Kim RD, Park PJ: Improving identification of differentially expressed genes in microarray studies using information from public databases.
Genome Biol 2004, 5:R70. PubMed Abstract | BioMed Central Full Text | PubMed Central Full Text
16. Murie C, Woody O, Lee AY, Nadon R: Comparison of small n statistical tests of differential expression applied to microarrays.
BMC Bioinformatics 2009, 10:45. PubMed Abstract | BioMed Central Full Text | PubMed Central Full Text
17. Bullard JH, Purdom E, Hansen KD, Dudoit S: Evaluation of statistical methods for normalization and differential expression in mRNA-Seq experiments.
BMC Bioinformatics 2010, 11:94. PubMed Abstract | BioMed Central Full Text | PubMed Central Full Text
18. Dozmorov MG, Guthridge JM, Hurst RE, Dozmorov IM: A comprehensive and universal method for assessing the performance of differential gene expression analyses.
PLoS One 2010., 5 PubMed Abstract | Publisher Full Text | PubMed Central Full Text
19. Slikker W Jr: Of genomics and bioinformatics.
Pharmacogenomics J 2010, 10:245-246. PubMed Abstract | Publisher Full Text
20. Pawitan Y, Michiels S, Koscielny S, Gusnanto A, Ploner A: False discovery rate, sensitivity and sample size for microarray studies.
Bioinformatics 2005, 21:3017-3024. PubMed Abstract | Publisher Full Text
21. Ishwaran H, Rao JS, Kogalur UB: BAMarraytrade mark: Java software for Bayesian analysis of variance for microarray data.
BMC Bioinformatics 2006, 7:59. PubMed Abstract | BioMed Central Full Text | PubMed Central Full Text
22. Ploner A, Calza S, Gusnanto A, Pawitan Y: Multidimensional local false discovery rate for microarray studies.
Bioinformatics 2006, 22:556-565. PubMed Abstract | Publisher Full Text
23. Jiao S, Zhang S: The t-mixture model approach for detecting differentially expressed genes in microarrays.
Funct Integr Genomics 2008, 8:181-186. PubMed Abstract | Publisher Full Text
24. Graf AC, Bauer P: Model selection based on FDR-thresholding optimizing the area under the ROC-curve.
Stat Appl Genet Mol Biol 2009., 8
PubMed Abstract
25. Lu X, Perkins DL: Re-sampling strategy to improve the estimation of number of null hypotheses in FDR control under strong correlation structures.
BMC Bioinformatics 2007, 8:157. PubMed Abstract | BioMed Central Full Text | PubMed Central Full Text
26. Pounds S, Cheng C: Improving false discovery rate estimation.
Bioinformatics 2004, 20:1737-1745. PubMed Abstract | Publisher Full Text
27. Xie Y, Pan W, Khodursky AB: A note on using permutation-based false discovery rate estimates to compare different analysis methods for microarray data.
Bioinformatics 2005, 21:4280-4288. PubMed Abstract | Publisher Full Text
28. Cheng C: An adaptive significance threshold criterion for massive multiple hypothesis testing.
Optimality: The Second Erich L. Lehmann Symposium, Institute of Mathematical Statistics, Beachwood, OH, USA 2006, 49:51-76.
29. Cheng C, Pounds SB, Boyett JM, Pei D, Kuo ML, Roussel MF: Statistical significance threshold criteria for analysis of microarray gene expression data.
Stat Appl Genet Mol Biol 2004., 3
PubMed Abstract
30. Dudoit S, van der Laan MJ, Pollard KS: Multiple testing. Part I. Single-step procedures for control of general type I error rates.
Stat Appl Genet Mol Biol 2004., 3
PubMed Abstract
31. Genovese CWL: Operating characteristics and extensions of the false discovery rate procedure.
Journal of the Royal Statistical Society, Series B 2002, 64:499-517. Publisher Full Text
32. Chuchana P, Holzmuller P, Vezilier F, Berthier D, Chantal I, Severac D, Lemesre JL, Cuny G, Nirde P, Bucheton B: Intertwining threshold settings, biological data and database knowledge to
optimize the selection of differentially expressed genes from microarray.
PLoS One 2010, 5:e13518. PubMed Abstract | Publisher Full Text | PubMed Central Full Text
33. Wang JZ, Du Z, Payattakool R, Yu PS, Chen CF: A new method to measure the semantic similarity of GO terms.
Bioinformatics 2007, 23:1274-1281. PubMed Abstract | Publisher Full Text
34. Chabalier J, Mosser J, Burgun A: A transversal approach to predict gene product networks from ontology-based similarity.
BMC Bioinformatics 2007, 8:235. PubMed Abstract | BioMed Central Full Text | PubMed Central Full Text
35. Huang da W, Sherman BT, Tan Q, Collins JR, Alvord WG, Roayaei J, Stephens R, Baseler MW, Lane HC, Lempicki RA: The DAVID Gene Functional Classification Tool: a novel biological module-centric
algorithm to functionally analyze large gene lists.
Genome Biol 2007, 8:R183. PubMed Abstract | BioMed Central Full Text | PubMed Central Full Text
36. Schlicker A, Domingues FS, Rahnenfuhrer J, Lengauer T: A new measure for functional similarity of gene products based on Gene Ontology.
BMC Bioinformatics 2006, 7:302. PubMed Abstract | BioMed Central Full Text | PubMed Central Full Text
37. Ruths T, Ruths D, Nakhleh L: GS2: an efficiently computable measure of GO-based similarity of gene sets.
Bioinformatics 2009, 25:1178-1184. PubMed Abstract | Publisher Full Text | PubMed Central Full Text
38. Richards AJ, Muller B, Shotwell M, Cowart LA, Rohrer B, Lu X: Assessing the functional coherence of gene sets with metrics based on the Gene Ontology graph.
Bioinformatics 2010, 26:i79-87. PubMed Abstract | Publisher Full Text | PubMed Central Full Text
39. Homayouni R, Heinrich K, Wei L, Berry MW: Gene clustering by latent semantic indexing of MEDLINE abstracts.
Bioinformatics 2005, 21:104-115. PubMed Abstract | Publisher Full Text
40. Xu L, Furlotte N, Lin Y, Heinrich K, Berry MW, George EO, Homayouni R: Functional Cohesion of Gene Sets Determined by Latent Semantic Indexing of PubMed Abstracts.
PLoS One 2011, 6:e18851. PubMed Abstract | Publisher Full Text | PubMed Central Full Text
41. Furlotte N, Xu L, Williams RW, Homayouni R: Literature-based Evaluation of Microarray Normalization Procedures.
42. Berry MW, Browne M: Understanding Search Engines: Mathematical Modeling and Text Retrieval.
43. Landauer TK, Laham D, Derr M: From paragraph to graph: latent semantic analysis for information visualization.
Proc Natl Acad Sci USA 2004, 101(Suppl 1):5214-5219. PubMed Abstract | Publisher Full Text | PubMed Central Full Text
44. Zhang Z, Martino A, Faulon JL: Identification of expression patterns of IL-2-responsive genes in the murine T cell line CTLL-2.
J Interferon Cytokine Res 2007, 27:991-995. PubMed Abstract | Publisher Full Text
45. Vianna CR, Huntgeburth M, Coppari R, Choi CS, Lin J, Krauss S, Barbatelli G, Tzameli I, Kim YB, Cinti S, Shulman GI, Spiegelman BM, Lowell BB: Hypomorphic mutation of PGC-1beta causes
mitochondrial dysfunction and liver insulin resistance.
Cell Metab 2006, 4:453-464. PubMed Abstract | Publisher Full Text | PubMed Central Full Text
46. Vallender TW, Lahn BT: Localized methylation in the key regulator gene endothelin-1 is associated with cell type-specific transcriptional silencing.
FEBS Lett 2006, 580:4560-4566. PubMed Abstract | Publisher Full Text
47. Smyth GK: Linear models and empirical bayes methods for assessing differential expression in microarray experiments.
Stat Appl Genet Mol Biol 2004., 3
PubMed Abstract
48. Raychaudhuri S, Altman RB: A literature-based method for assessing the functional coherence of a gene group.
Bioinformatics 2003, 19:396-401. PubMed Abstract | Publisher Full Text | PubMed Central Full Text
49. Subramanian A, Tamayo P, Mootha VK, Mukherjee S, Ebert BL, Gillette MA, Paulovich A, Pomeroy SL, Golub TR, Lander ES, Mesirov JP: Gene set enrichment analysis: a knowledge-based approach for
interpreting genome-wide expression profiles.
Proc Natl Acad Sci USA 2005, 102:15545-15550. PubMed Abstract | Publisher Full Text | PubMed Central Full Text
50. Tian L, Greenberg SA, Kong SW, Altschuler J, Kohane IS, Park PJ: Discovering statistically significant pathways in expression profiling studies.
Proc Natl Acad Sci USA 2005, 102:13544-13549. PubMed Abstract | Publisher Full Text | PubMed Central Full Text
Sign up to receive new article alerts from BMC Genomics
|
{"url":"http://www.biomedcentral.com/1471-2164/13/S8/S23?fmt_view=mobile","timestamp":"2014-04-23T17:57:31Z","content_type":null,"content_length":"147889","record_id":"<urn:uuid:35621eed-08ad-4528-87aa-b50ad12f890b>","cc-path":"CC-MAIN-2014-15/segments/1398223203235.2/warc/CC-MAIN-20140423032003-00515-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Matrix question
September 20th 2011, 06:04 PM #1
Sep 2011
Matrix question
Consider two vectors x = {{a},{b}} and y = {{c},{d}} which are both of the same length. Then there is a rotation matrix R(theta) which takes x to y, namely R(theta)x = y. We note that 0 =< theta
< 2pi. Use our knowledge of rotation matrices to establish a simple condition on a; b; c; d so that the angle 'theta' satisfies 0 < theta < pi
Follow Math Help Forum on Facebook and Google+
|
{"url":"http://mathhelpforum.com/pre-calculus/188441-matrix-question.html","timestamp":"2014-04-20T16:34:45Z","content_type":null,"content_length":"28441","record_id":"<urn:uuid:7da4fbe7-cfb0-42af-b742-134e355e3d4c>","cc-path":"CC-MAIN-2014-15/segments/1397609538824.34/warc/CC-MAIN-20140416005218-00590-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Revealing IDer Subterfuge through the Basics of Blackjack Counting
Over at Uncommon Descent, Sal Cordova wrote a masterpiece of fallacies and flaws concerning the relation of the gambler's ruin and supposed limits of evolution. I dissected that article
Displeased with my treatment, particularly of his error concerning what it means to have a 1% advantage, Sal responded. However, in doing so, he merely dug his hole of ignorance deeper, and more
importantly, gave us a nice example of how IDers attempt to hide their lack of knowledge behind cut-n-pasted jargon.
The mistake I jumped on appears early in Sal's essay, and reveals a fundamental lack of understanding of statistics and gambling:
"If he has a 1% statistical advantage, that means he has a 50.5% chance of winning and a 49.5% chance of losing."
My original response will still suffice:
"No, that isn't what it means. That would be the case only in a game that resembled coin flipping, with a win paying the amount of the wager. However, in most Vegas games, such as blackjack, there
are several plays, such as splitting hands, doubling down, or getting a blackjack, which pay far more than the wager. The same can be said for craps, the other game Cordova mentions. A player in such
games with a 1% edge can expect to win, on average, 1% of the amount of his wager, per play. He will most certainly NOT expect to win 50.5% of his plays as Cordova suggests."
Sal then went through
many wild girations
trying to defend this error common to basic statistics classes. At first he claimed that my argument was invalid because I was assuming a single play, but as I explained on PT, that makes no
difference. 1 play or 1,000, a 1% edge still does not mean a 50.5% chance of winning.
But his main tactic was to fling as much technical sounding, but irrelevant, verbage and terminology my way. Unfortunately, when you don't understand the subject, doing so will only lead to more
errors, and reveal one's ignorance more starkly. His final effort was quite illuminating in this regard. I informed him that I had been a card counter (using an old system known as the Uston Advanced
Point Count). To this he retorted:
"Oh, really, then answer my question. While you’re at it, provide count values yielding an advantage of 1% for the other systems such as :
Silver Fox
Uston APC
It should be pretty easy if you really know what you’re talking about."
Sorry about the irony meters folks, because all Sal has done by asking this question is reveal beyond a shadow of a doubt that he has no idea what he is talking about. To show why, it will be
necessary to give a quick Readers' Digest version of blackjack counting systems.
Blackjack is one of the few gambling games where the probability of victory changes one play to the next. This is because 1) used cards are set to the side, and new hands played from the remaining
cards in the "shoe", until an arbitrary point is reached and they are all reshuffled, and 2) some cards are more valuable to the player (10's, A's), and others to the dealer (4's, 5's, and 6's).
Counting systems place numerical values on the cards (say +3 for 5's, -3 for 10's), and the player keeps a running total of this "count" in his head, adjusting in some systems for decks remaining and
/or aces. This count is used to determine bet size, and also when to vary one's play from Basic Strategy (the optimal play given no knowledge of previous cards played). For example, Basic Strategy
says to stand with two 10's against a dealers 6. However, if the count rises high enough, the optimal play can be to split the 10's. Counting systems have a grid of all possible player and dealer
scenarios, and what counts warrant variation from Basic Strategy, which a successful player must memorize.
One thing that should be obvious at this point is that counting cards in blackjack is no picnic. It takes a great deal of training and dedication to be able to keep the count accurately, make
whatever adjustments your system demands, and bet and play accordingly, all without raising the suspicions of the pit bosses or even the other players. What also should be obvious is that counters do
not learn all systems. They tend to pick one and stick to it, for that is more than enough challenge. And since each system assigns different values to different cards, and has different adjustments
to be made, it is clear that a counter will have little knowledge of the details of a system that he doesn't play. And finally, in many systems it is not necessary to know what % advantage one has in
various situations. One simply makes the systems calculations and makes the appropriate bets and plays.
So, all that in tow, let's look at Sal's question again:
"Oh, really, then answer my question. While you’re at it, provide count values yielding an advantage of 1% for the other systems such as :
Silver Fox
Uston APC
It should be pretty easy if you really know what you’re talking about."
It should be clear now that Sal is talking out of his hat. Never mind the complete irrelevancy of his questions to the matter at hand: whether an edge of 1% implies a winning percentage of 50.5%.
Never mind that he doesn't even seem to know that Uston APC is the system I played (he lists it under "other systems"). No one knowledgeable about counting would ask such a question, nor would likely
have the answer, since no one would know all three systems he lists. It also is a completely irrelevant question to the Uston APC system I played, which did not require this knowledge. And as an
added bonus, Sal's question doesn't even make sense, because the % advantage for a player with a given count in the Uston APC system is not constant, but instead varies by remaining decks. From table
9-2 of Ken Uston's "Million Dollar Blackjack", page 128 of my 1981 copy:
UPC True count of +3:
1 deck remaining: +1.0%
2 decks remaining: +0.7%
3 decks remaining: +0.6%
the values lay out in similar declining pattern for other counts and decks remaining.
So we see here clearly that Sal has no idea what he is talking about, and is simply cutting and pasting impressive-looking technical information in an attempt to hide his ignorance. No one who
understands card counting would have asked this question.
This is worth being on the lookout for when listening to IDer/creationist arguments. If it seems impossible to grasp their line of argument, don't blame yourself. It is likely they are doing what Sal
did above.
No comments:
|
{"url":"http://scienceavenger.blogspot.com/2008/05/revealing-ider-subterfuge-through.html","timestamp":"2014-04-20T03:10:08Z","content_type":null,"content_length":"61819","record_id":"<urn:uuid:a3dd4827-2320-4e55-9172-6f6da0ef3aa9>","cc-path":"CC-MAIN-2014-15/segments/1397609537864.21/warc/CC-MAIN-20140416005217-00452-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Got Homework?
Connect with other students for help. It's a free community.
• across
MIT Grad Student
Online now
• laura*
Helped 1,000 students
Online now
• Hero
College Math Guru
Online now
Here's the question you clicked on:
Solve using variation of parameters \[y''-y'=e^x\] \[r^2-r=0\] \[r_1=r_2=1\] \[y_c=c_1e^x+c_2xe^x\] \[W(y_1,y_2)=0\] Is that possible?
• one year ago
• one year ago
Best Response
You've already chosen the best response.
No I did that wrong, one moment
Best Response
You've already chosen the best response.
no it equals \[e^{2x}\]
Best Response
You've already chosen the best response.
Best Response
You've already chosen the best response.
wrong complimentary
Best Response
You've already chosen the best response.
\[r^2-r=r(r-1)=0\implies r=\{?,?\}\]
Best Response
You've already chosen the best response.
Best Response
You've already chosen the best response.
r=0 and r=1
Best Response
You've already chosen the best response.
so the complimentary is...?
Best Response
You've already chosen the best response.
Best Response
You've already chosen the best response.
right, now I get a bit dicey.... since you have e^x on the RHS as well I feel you may need to multiply this by x to keep it linearly independent
Best Response
You've already chosen the best response.
sure \[y_c=c_1+c_2xe^x\]
Best Response
You've already chosen the best response.
Best Response
You've already chosen the best response.
why on both c_1 and c_2, how did come to that conclusion?
Best Response
You've already chosen the best response.
or do you do it to the particular instead? hm... well normally you do it to the whole RHS particular, but since you need the complimentary for the particular in VP I'm not sure...
Best Response
You've already chosen the best response.
I'm not sure... DunnBros is about to kick me out, they close at 11pm
Best Response
You've already chosen the best response.
I'll be back on in like 15 mins when I get home
Best Response
You've already chosen the best response.
fair enough, I'm about to call it a night as well see ya, I will investigate this :)
Best Response
You've already chosen the best response.
Make that 7 mins, ha!
Best Response
You've already chosen the best response.
Best Response
You've already chosen the best response.
\[y_1=x\] \[y_2=xe^x\]
Best Response
You've already chosen the best response.
\[y_p(x)=-x\int \frac{e^x}xdx+xe^x\int x^{-1}dx\]
Best Response
You've already chosen the best response.
well ik it
Best Response
You've already chosen the best response.
my twin :D
Best Response
You've already chosen the best response.
hi twin
Best Response
You've already chosen the best response.
that first integral is not going to be so nice I don't think.... you know who knows exactly what we're missing @lalaly DE help please!!!!!
Best Response
You've already chosen the best response.
well i can do it
Best Response
You've already chosen the best response.
tell me equation for which i have to solve\y complimentary
Best Response
You've already chosen the best response.
Best Response
You've already chosen the best response.
we need variation of parameters @wasiqss
Best Response
You've already chosen the best response.
yehh just tell me equation i ll take out solution. as y complimentary is solved i ll solve for y particular
Best Response
You've already chosen the best response.
Our problem is that the solutions to the complimentary are not independent from the particular, which you can tell since we got W=0 on our first try
Best Response
You've already chosen the best response.
ok @wasiqss let's see what you got :)
Best Response
You've already chosen the best response.
tell me equation man completely cause this seems messy
Best Response
You've already chosen the best response.
Best Response
You've already chosen the best response.
easy one lol
Best Response
You've already chosen the best response.
then do it, by all mean!
Best Response
You've already chosen the best response.
y complimentary is y= c1 +c2e^-x
Best Response
You've already chosen the best response.
^yes, answer not important, we need the process...
Best Response
You've already chosen the best response.
complimentary is y=c1+c2e^x
Best Response
You've already chosen the best response.
i disagree with the above answer, shouldn't it be e^x, yeah what turing has
Best Response
You've already chosen the best response.
yes cause roots are zero and -1
Best Response
You've already chosen the best response.
no they are not
Best Response
You've already chosen the best response.
r^2-r=0 r(r-1)=0 r={0,1}
Best Response
You've already chosen the best response.
do you think my W is right? x^2 e^x
Best Response
You've already chosen the best response.
that I doube @MathSofiya because we have e^x on both left and right our solutions are linearly dependet (that means we can't solve it yet) so maybe if we multiply the g(x) by x ? not sure, but
worth a try
Best Response
You've already chosen the best response.
with W=e^2e^x I don't think that first integral is closed, check the wolf, I have not yet
Best Response
You've already chosen the best response.
w=x^2e^x *
Best Response
You've already chosen the best response.
do we agree that \[y_c=c_1x+c_2xe^x\] or should it be \[y_c=c_1+c_2e^x\]
Best Response
You've already chosen the best response.
yeah that's what I had
Best Response
You've already chosen the best response.
that I am not sure, but based on our attempt to use the second one failing I would now try the first (we are discovering this together right now :)
Best Response
You've already chosen the best response.
however if we are to try the first we need to multiply g(x) by x (I'm thinking....)
Best Response
You've already chosen the best response.
so let's try that
Best Response
You've already chosen the best response.
otherwise we get W=0 which is a failure
Best Response
You've already chosen the best response.
@lalaly I know you can help us, please where are you?
Best Response
You've already chosen the best response.
\[y_c=c_1x+c_2xe^x\] \[\left|\begin{matrix}x&xe^x\\1&e^x(x-1)\end{matrix}\right|=x^2e^x\]
Best Response
You've already chosen the best response.
can you explain the g(x) concept again and why it being e^x would be a problem because that cancels something out?...
Best Response
You've already chosen the best response.
no I think that is wrong because that leads to the particular having\[\int\frac{e^x}dxx\]
Best Response
You've already chosen the best response.
Best Response
You've already chosen the best response.
here is the problem... have you taken linear algebra?
Best Response
You've already chosen the best response.
Best Response
You've already chosen the best response.
@lalaly is here
Best Response
You've already chosen the best response.
okay. then we have a set of vectors (in this case equations) if their determinant is zero we know they are linearly dependent
Best Response
You've already chosen the best response.
now listen
Best Response
You've already chosen the best response.
for y particular
Best Response
You've already chosen the best response.
that is a bad thing in DE's we need all of our solutions to be linearly independent, so the determinant of the solutions, (the wronskian) must not be zero, otherwise our solutions are linearly
Best Response
You've already chosen the best response.
is it xe^x
Best Response
You've already chosen the best response.
so as the denominator becomes zero
Best Response
You've already chosen the best response.
and would you care to explain how/why ?
Best Response
You've already chosen the best response.
we multiply by x
Best Response
You've already chosen the best response.
cofficeint of e^1x
Best Response
You've already chosen the best response.
Best Response
You've already chosen the best response.
Best Response
You've already chosen the best response.
HENCE we get zero in denominator
Best Response
You've already chosen the best response.
anyway... to keep the particular linearly independent from the complimentary we often need to multiply by x, so that is what I am thinking must happen somehow @lalaly will confirm if I am making
any sense or not now
Best Response
You've already chosen the best response.
now we need to apply binomeal expansion
Best Response
You've already chosen the best response.
@wasiqss sorry to be so direct, but no we do not need any binomial expansion here please let's just have a listen to @lalaly
Best Response
You've already chosen the best response.
arghh i wish i was good at making you understand it
Best Response
You've already chosen the best response.
we need it!
Best Response
You've already chosen the best response.
you cannot have 0 in the denom, that shows you that you have already made a mistake !!!!
Best Response
You've already chosen the best response.
well to counter this zero!! we need to multiply x
Best Response
You've already chosen the best response.
...as I was saying...
Best Response
You've already chosen the best response.
plz no offense but plz go revise DE s
Best Response
You've already chosen the best response.
Best Response
You've already chosen the best response.
you know what .. we can approach a DE in differnt ways and this one i will certainly approach by binomeal.. cause we studied this way for this case
Best Response
You've already chosen the best response.
yes, but I think it is way more work than is necessary now I just wanna hear @lalaly
Best Response
You've already chosen the best response.
Best Response
You've already chosen the best response.
oh dear, we seem to have lost her I am going to try this op paper I guess
Best Response
You've already chosen the best response.
hahaha lana dear
Best Response
You've already chosen the best response.
Best Response
You've already chosen the best response.
we have to expand binomeally [1+(D(D+1)−1)−1
Best Response
You've already chosen the best response.
well i have a solution on page i wonder how to give you that
Best Response
You've already chosen the best response.
take a picture with your cellphone and attach it
Best Response
You've already chosen the best response.
or use \[\LaTeX\]
Best Response
You've already chosen the best response.
good idea twin can i do it tomorow because i would need to upload it too?
Best Response
You've already chosen the best response.
okay got it :)
Best Response
You've already chosen the best response.
latex is something not my type
Best Response
You've already chosen the best response.
turing my approach?
Best Response
You've already chosen the best response.
do everything normally, then multiply the particular by x at the end that is all
Best Response
You've already chosen the best response.
get used to it, it's a very helpful Latex is a very useful tool
Best Response
You've already chosen the best response.
turing binomeal is essential in it!
Best Response
You've already chosen the best response.
oh yeah?, just watch...
Best Response
You've already chosen the best response.
actually no need to multiply by x at all\[y_c=c_1+c_2e^x\]\[W=e^x\]\[y_p=-\int e^{-x}+e^x\int dx=e^x+xe^x\]\[y_p=e^x+xe^x\]\[y=y_c+y_p=c_1+c_2e^x+e^x+xe^x=c_1+e^x(1+x+c_2)\]and yes, but is your
solution in 5 lines?
Best Response
You've already chosen the best response.
oh crap I see a mistake
Best Response
You've already chosen the best response.
wait you told me w=xe^x!
Best Response
You've already chosen the best response.
dude this whole discussion was based on finding what W equaled
Best Response
You've already chosen the best response.
ok just ask lana what is the right answer
Best Response
You've already chosen the best response.
@MathSofiya yes, because the entire problem we were having was keeping our solutions linearly independent, which we check by finding W
Best Response
You've already chosen the best response.
waittttttttttttt what the....... i was solving by another method not by variation of parameter lol
Best Response
You've already chosen the best response.
\[y_c=c_1+c_2e^x\]\[W=e^x\]\[y_p=-\int e ^x dx+e^x\int dx=-e^x+xe^x\]\[y=y_c+y_p=c_1+c_2e^x-e^x+xe^x=c_1+e^x(x-1+c_2)\]
Best Response
You've already chosen the best response.
there we go... stupid algebra tripped me up :/
Best Response
You've already chosen the best response.
yeah that's what I get \[-e^x+e^xx\]
Best Response
You've already chosen the best response.
so yc+yp gives what I have, and what wolf has so we're good :)
Best Response
You've already chosen the best response.
good. Hi 5
Best Response
You've already chosen the best response.
back atchya!
Best Response
You've already chosen the best response.
As always thank you!
Best Response
You've already chosen the best response.
As always you're welcome :D
Your question is ready. Sign up for free to start getting answers.
is replying to Can someone tell me what button the professor is hitting...
• Teamwork 19 Teammate
• Problem Solving 19 Hero
• Engagement 19 Mad Hatter
• You have blocked this person.
• ✔ You're a fan Checking fan status...
Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy.
This is the testimonial you wrote.
You haven't written a testimonial for Owlfred.
|
{"url":"http://openstudy.com/updates/5057e9cfe4b0a91cdf45497a","timestamp":"2014-04-21T08:04:42Z","content_type":null,"content_length":"306076","record_id":"<urn:uuid:3a7d1550-7b6b-44ab-91a3-188296525132>","cc-path":"CC-MAIN-2014-15/segments/1397609539665.16/warc/CC-MAIN-20140416005219-00640-ip-10-147-4-33.ec2.internal.warc.gz"}
|
The signature of a path - inversion
Seminar Room 1, Newton Institute
Since Hambly and I introduced the notion of the signature of a path and proved that it uniquely characterised paths of finite length up to treelike components and its expected value determines the
law of a compactly supported measure on these same paths an obvious but apparently difficult question has been to effectively determine the inverse and reconstruct the path from it's signature. At
last, progress has been made.
The video for this talk should appear here if JavaScript is enabled.
If it doesn't, something may have gone wrong with our embedded player.
We'll get it fixed as soon as possible.
|
{"url":"http://www.newton.ac.uk/programmes/SPD/seminars/2010010609001.html","timestamp":"2014-04-19T13:09:09Z","content_type":null,"content_length":"5915","record_id":"<urn:uuid:b4adbf2b-1830-420e-ae42-96f97bf273a2>","cc-path":"CC-MAIN-2014-15/segments/1397609537186.46/warc/CC-MAIN-20140416005217-00090-ip-10-147-4-33.ec2.internal.warc.gz"}
|
The Joy of Calculation
May 12th, 2009
I recently had the pleasure of finding a copy of the manual for my favorite calculator. I know it is incredibly nerdy to have a favorite calculator (and even more nerdy to read the manual), but it
really got me thinking.
The manual subtly sold an incredible point of view: the engineer’s view. The manual appears trivial at the surface but is in fact a very good rhetoric pushing a fascinating point of view: you can
infer things quickly. This led me to think about a number of technical viewpoints (engineers point of view, scientists point of view and lastly mathematicians point of view). They are all lumped
together as “quantitative” but they are radically different.
Listen to this (from the beginning of the HP15C calculator manual):
Notice the emphasis on the physical activity of calculation. The emphasis is not on equations, mathematics or physics. The calculation is deliberately described as key strokes. No attempt is made to
justify any of the steps or numbers used. The point being made is: if you are agile and ready (have the correct fore-knowledge) you can calculate. If you can calculate you can know things. Robert
Heinlein made this point about slide-rules in his science fiction story: “Have Spacesuit- Will Travel.” And likely a similar joy can be felt while accounting on an abacus.
This is the engineer’s view: the world continuously gives up many small and simple clues as to what is going on around you. These are like “tells” in poker. You can reason from them and build
incredible things using them. The smallness and simplicity of the techniques are pure comfort.
In Michael Lewis’s “Liars Poker” the author mentions a moment when he knows that everything he is being told about the market is a lie. He knows this because he attempts to convert one statement
about the market into another using his calculator. When he attempts the conversion (figuring out something he was not supposed to know from clues coming from something he was told) it does not add
up. Importantly he describes working this out on his calculator- not using a sophisticated computer model or a spreadsheet. He is comfortable in his heterodox position because he calculated it by
hand in small and simple steps.
This joy in comparing one conclusion to another (using a calculator) differs from the idealized scientist’s view in that there is no derivation or application of deeper laws. The engineer’s view is:
if you can remember it or guess at it then you don’t need to derive it.
Some of the great scientists (Enrico Fermi) and mathematicians (Stanislaw Ulam) became masters of the engineering view and could dazzle with it.
One of Fermi’s famous stunts was measuring the yield of a nuclear bomb test by observing how far scraps of paper were moved. Fermi may have worked from first principles, but he could also have used a
simple pre-prepared trick. If he had observed how far scraps of paper had moved in an earlier conventional bomb test (which he now knew the yield of) and then applied a simple engineering trick
called “dimensional analysis” that let him reason the amount of work observed (how far the slips of paper were moved) depended linearly on the bomb yield and decreased as the cube of how far away he
was from the explosion. So all he did was compute the ratio of of how far the slips moved in each test and then divide this three times in succession by the ratio of how far way he was from the
center of each test. Merely being able to divide told Fermi something (the new bomb yield) before he was officially allowed to know it. Notice how he did not need to use any facts about the bombs
being tested, the speed of sound, atmospheric pressure, density or temperature.
Such reasoning may seem crude- but it is far more informative and far more exciting than the published work of many lesser scientists. The bulk of most merely poor scientific work (as opposed to
outright wrong work) is of the form: “here are some pointless measurements I got by applying an expensive new instrument in exactly the situations the manufacturer designed it for.” Or “here are some
manipulations that seem original since I don’t feel I have to cite any non-physicists.”
I side with the mathematicians (not the engineers or even scientists) and I think it is safe to say that mathematicians (who have their own particular view) are more sympathetic to the engineer’s
view than to the scientist’s view.
One joke that has been told about me is that I am not happy at a presentation unless there is an equation on the board. This is typical of mathematicians. The excitement comes from the opportunity to
“kick the times.” Once you remove enough details an equation is a simple statement of the form “A=B” (to borrow the title of a wonderful book by Marko Petkovsek, Herbert Wilf and Doron Zeilberger).
An equation is a welcome moment of concreteness in contrast to the many painful abstractions that are necessary for much of mathematics. The dirty secret is that mathematicians perk up when an
equation is on the board not because they like equations- but they are hoping to plug in values for “A” and “B” such that the equation is shown to be false. My branch of mathematics (theoretical
computer science) is more a competitive than a cooperative field. One measure of audience interest in my field was if somebody to grab the magic marker out of your hand to try and write down a
counter-example to what you were trying to demonstrate. Gian-Carlo Rota tells a similar tale where someone in a mathematical audience grabs the chalk and tries to complete the presentation.
One reason I side with the mathematicians and not the engineers is: if pressed too far the engineer’s view goes wrong. The way it goes wrong is found in the thick classic comprehensive engineering
handbooks. These books attempt to store and systematize all of a given field’s engineering knowledge. Once you attempt to become comprehensive and are devoting all of your intellect to memorizing and
applying the standard approximations and estimates you are lost.
I also do not side with the scientists because mathematicians have no sympathy for trying to “buy your way out of solving a hard problem” by running an expensive experiment. Mathematicians do work
with data (even messy data) but we call this “application” not “proof.”
To me the best view is: if you can derive anything then you do not need to remember anything.
1. May 12th, 2009 at 13:14 | #1
Reply | Quote
Correction from a friend- the scientist with the slips of paper was Enrico Fermi ( http://www.cfo.doe.gov/me70/manhattan/trinity.htm ). Not, as I had mis-remembered, Richard Feynman.
|
{"url":"http://www.win-vector.com/blog/2009/05/the-joy-of-calculation/","timestamp":"2014-04-17T06:58:16Z","content_type":null,"content_length":"57096","record_id":"<urn:uuid:e0bd769a-421c-49f5-a019-9689397083ff>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00608-ip-10-147-4-33.ec2.internal.warc.gz"}
|
question #5250
engineering question #5250
James, a 38 year old male from Victoria asks on April 2, 2011,
I've watched a few videos about sound frequencies altering the geometric pattern of molecules. Some interesting work has been done on the effects of sound on matter. The discipline is called
Cymatics. My questions is, how can I find out which frequencies produce which results? In other words, if a certain frequency 'reconstructs" a molecule to a specific geometric pattern, what is the
number of that frequency? I want to reproduce those frequencies on my computer with an audio editor I have.
viewed 6618 times
the answer
Quentin Wright
answered on April 4, 2011,
As a scientific discipline, cymatics is not really about the reconstruction of molecules by sound. Rather, it is concerned with the study of visible sound and vibration. Sound waves can be made
visible in many ways. In a typical experimental context, something is coated or covered with fine particles or liquid. When sound frequencies are applied to the item, the coating is displaced by the
waves that have been created, making the wave patterns visible.
The 17^th Century German physicist Ernst Chladni is regarded as the first person to theorize about sound patterns and experimentally investigate the nature of sound. His work introduced acoustics as
a new branch of physics and had a profound influence on the development of wave theory.
In one of his most insightful experiments, Chladni sprinkled fine sand onto plates and rods of varying size, shape and material. He then systematically studied the patterns created in the sand when
the plates were vibrated at a range of sound frequencies.
These experiments led to the development of Chladni’s Law. This is an equation which expresses the relationship between the frequency (f) of modes of vibration of a flat, circular surface and the
diametric (m) and radial (n) nodes created by that frequency. The equation is as follows:
f = C(m + 2n)^p
C and p are coefficients which depend on the properties of the plate. For thin, flat circular plates, p is roughly 2 and C is roughly 1, which simplifies the equation somewhat. Chaldi's law is
actually a lot more complicated that this answer suggests. This formula is basically correct but there are other variables based on the properties of the plates, node location and so on. Still as an
approximation, if you want to, say, create a cymatic vibration pattern that has 3 diametric and 2 radial nodes on a thin flat circular plate, you could work out the frequency as follows:
f = 1 (3+(2*2))^2 or 7^2 or 49 cycles per second
So to return to the question, the visible wave forms created by various frequencies can certainly be predicted mathematically but it depends upon the size, shape and physical properties of what you
are vibrating.
You can conduct your own visible sound wave experiments fairly simply using a Chladni plate. Instructions for building a Chladni plate are available online here: http://www.make-digital.com/make/
If you found this answer useful, please consider
making a small donation to science.ca.
|
{"url":"http://www.science.ca/askascientist/viewquestion.php?qID=5250","timestamp":"2014-04-20T05:42:43Z","content_type":null,"content_length":"18817","record_id":"<urn:uuid:fcac8550-9ab7-4b2c-951c-b4b268446600>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00333-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Coloring of subgraphs of G^n
up vote 2 down vote favorite
Let $G=(L,R,E)$ be a finite bipartite graph, such that for each $v\in L\cup R: deg(v)>0$. Define $E^{(n)}=\{(\overline{l},\overline{r}) | \overline{l}=(l_1,...,l_n)\in L^n , \overline{r}=
(r_1,...,r_n) \in R^n$ and for each $ 1 \le i \le n : (l_i,r_i)\in E\},$ and $G^{(n)}=(L^n,R^n,E^{(n)}).$
I want to show that for any number of colors $c>0$ exists an $n\in\mathbb{N}$ such that $G^{(n)} \mapsto (G)^2 _c $.
I thought about counting all the full sub-graphs of $G^{(n)}$ which are isomorphic to $G$ and then showing that there must be at least one full sub-graph whose edges are single colored, but I got a
bit tangled doing so, which made me think there should be an easier way to do so. Am I on the right path or is there actually a more convenient way to prove this?
co.combinatorics graph-theory graph-colorings ramsey-theory
2 What is a bigraph? What does $G^{(n)}\mapsto(G)^2_c$ mean? – bof Dec 6 '13 at 11:06
I meant $G$ is a bipartite graph, i.e. a graph such that it's vertices can be divided into two groups: left and right, so that every edge in the graph is between a vertex from the right and one
1 from the left. by $G^{(n)} \mapsto (G)^2 _c $ I meant that for every coloring $\varphi$ of edges from $G^{(n)}$ in $c$ colors, there is a full sub-graph of $G^{(n)}$ which is isomorphic to the
original $G$. – Roman Vale Dec 6 '13 at 14:58
add comment
1 Answer
active oldest votes
This follows directly from the Hales-Jewett theorem.
Observe that $E^{(n)}$ is isomorphic to $E^n$, the cartesian product of the edge set of $G$. A $c$-colouring of the edges of $G^{(n)}$ is then naturally a $c$-colouring of $E^n$, so, if
$n$ is sufficiently large, then by the Hales-Jewett theorem there is a monochromatic combinatorial line in $E^n$. But a line is precisely an isomorphic copy of $G$.
up vote 2
down vote This is best illustrated by an example. Take $L=\{l,m\}$, $R = \{r,s\}$ and let $G$ be the path $lrms$. A line in $E^5$ might look like $$(\star, \star, (l,r), (m,r), (m,s)) \equiv
accepted ((*,*,l,m,m),(\dagger, \dagger,r,r,s)),$$ where the $\star\equiv(*,\dagger)$'s mark the active coordinates and range over $E = \{(l,r), (m,r), (m,s)\}$. We obtain a corresponding
isomorphic copy of $G$ by separating the factors on the right-hand side and allowing $*$ to range over $L$ and $\dagger$ to range over $R$ independently. So for our example the tuples $
(l,l,l,m,m)$ and $(m,m,l,m,m)$ in $L^5$, and $(r,r,r,r,s)$ and $(s,s,r,r,s)$ in $R^5$ induce an isomorphic copy of $G$.
add comment
Not the answer you're looking for? Browse other questions tagged co.combinatorics graph-theory graph-colorings ramsey-theory or ask your own question.
|
{"url":"http://mathoverflow.net/questions/151014/coloring-of-subgraphs-of-gn","timestamp":"2014-04-17T15:36:59Z","content_type":null,"content_length":"54958","record_id":"<urn:uuid:b11488d0-5021-4684-bcc8-f7f53a7e5740>","cc-path":"CC-MAIN-2014-15/segments/1397609530136.5/warc/CC-MAIN-20140416005210-00259-ip-10-147-4-33.ec2.internal.warc.gz"}
|
MIMO Spatial Multiplexing
- overview of MIMO - Multiple Input Multiple Output, spatial multiplexing used to provide additional data bandwidth in multipath radio scenarios.
One of the key advantages of MIMO spatial multiplexing is the fact that it is able to provide additional data capacity. MIMO spatial multiplexing achieves this by utilising the multiple paths and
effectively using them as additional "channels" to carry data.
The maximum amount of data that can be carried by a radio channel is limited by the physical boundaries defined under Shannon's Law.
Shannon's Law and MIMO spatial multiplexing
As with many areas of science, there a theoretical boundaries, beyond which it is not possible to proceed. This is true for the amount of data that can be passed along a specific channel in the
presence of noise. The law that governs this is called Shannon's Law, named after the man who formulated it. This is particularly important because MIMO wireless technology provides a method not of
breaking the law, but increasing data rates beyond those possible on a single channel without its use.
Shannon's law defines the maximum rate at which error free data can be transmitted over a given bandwidth in the presence of noise. It is usually expressed in the form:
C = W log[2](1 + S/N )
Where C is the channel capacity in bits per second, W is the bandwidth in Hertz, and S/N is the SNR (Signal to Noise Ratio).
From this it can be seen that there is an ultimate limit on the capacity of a channel with a given bandwidth. However before this point is reached, the capacity is also limited by the signal to noise
ratio of the received signal.
In view of these limits many decisions need to be made about the way in which a transmission is made. The modulation scheme can play a major part in this. The channel capacity can be increased by
using higher order modulation schemes, but these require a better signal to noise ratio than the lower order modulation schemes. Thus a balance exists between the data rate and the allowable error
rate, signal to noise ratio and power that can be transmitted.
While some improvements can be made in terms of optimising the modulation scheme and improving the signal to noise ratio, these improvements are not always easy or cheap and they are invariably a
compromise, balancing the various factors involved. It is therefore necessary to look at other ways of improving the data throughput for individual channels. MIMO is one way in which wireless
communications can be improved and as a result it is receiving a considerable degree of interest.
MIMO spatial multiplexing
To take advantage of the additional throughput capability, MIMO utilises several sets of antennas. In many MIMO systems, just two are used, but there is no reason why further antennas cannot be
employed and this increases the throughput. In any case for MIMO spatial multiplexing the number of receive antennas must be equal to or greater than the number of transmit antennas.
To take advantage of the additional throughput offered, MIMO wireless systems utilise a matrix mathematical approach. Data streams t1, t2, … tn can be transmitted from antennas 1, 2, …n. Then there
are a variety of paths that can be used with each path having different channel properties. To enable the receiver to be able to differentiate between the different data streams it is necessary to
use. These can be represented by the properties h12, travelling from transmit antenna one to receive antenna 2 and so forth. In this way for a three transmit, three receive antenna system a matrix
can be set up:
r1 = h11 t1 + h21 t2 + h31 t3
r2 = h12 t1 + h22 t2 + h32 t3
r3 = h13 t1 + h23 t2 + h33 t3
Where r1 = signal received at antenna 1, r2 is the signal received at antenna 2 and so forth.
In matrix format this can be represented as:
[R] = [H] x [T]
To recover the transmitted data-stream at the receiver it is necessary to perform a considerable amount of signal processing. First the MIMO system decoder must estimate the individual channel
transfer characteristic hij to determine the channel transfer matrix. Once all of this has been estimated, then the matrix [H] has been produced and the transmitted data streams can be reconstructed
by multiplying the received vector with the inverse of the transfer matrix.
[T] = [H]^-1 x [R]
This process can be likened to the solving of a set of N linear simultaneous equations to reveal the values of N variables.
In reality the situation is a little more difficult than this as propagation is never quite this straightforward, and in addition to this each variable consists of an ongoing data stream, this
nevertheless demonstrates the basic principle behind MIMO wireless systems.
By Ian Poole
|
{"url":"http://www.radio-electronics.com/info/antennas/mimo/spatial-multiplexing.php","timestamp":"2014-04-18T20:44:20Z","content_type":null,"content_length":"27249","record_id":"<urn:uuid:eceda6eb-24d5-4ac0-a810-bafb5ca108cd>","cc-path":"CC-MAIN-2014-15/segments/1398223206147.1/warc/CC-MAIN-20140423032006-00633-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Elizabethport, NJ Statistics Tutor
Find an Elizabethport, NJ Statistics Tutor
...As an English major student in college and English lover, I regard English as one of my favorite subjects and I do great in different kinds of English competitions. I've tutored students on
SAT Reading and received positive feedback. Besides I enjoy surfing on English websites during my free time.
9 Subjects: including statistics, English, reading, Chinese
...Teaching is my passion. I have worked with kids of all ages for the best six year, from one-on-one home tutoring to group tutoring in class rooms and after-school programs. Although I have a
bachelor's in biology, I am able to tutor different subject and help with homework for every grade.
26 Subjects: including statistics, chemistry, reading, geometry
...I am proficient in the material tested in the SAT Math subject tests, both 1 and 2. I've tutored the LSAT logical and analytical reasoning sections several times. I'm able to provide direct
and clear explanations for which choice is the correct one, and why each of the others are false.
32 Subjects: including statistics, physics, calculus, geometry
...Having earned three master's degrees and working on a doctoral degree, all in different fields, I have become very aware of the importance of approaching material in a way that minimizes the
anxiety of what may seem an overwhelming task. This involves learning how to strategize learning. Let me...
50 Subjects: including statistics, chemistry, calculus, physics
...Pricing depends on subject(s) taught, travel required, and minimum hours per week. I am available to teach on weekends and after 6 p.m. on most weekdays. If you have more than one child or
would like semi-private tutoring, rates may be adjusted further.
34 Subjects: including statistics, reading, writing, ESL/ESOL
Related Elizabethport, NJ Tutors
Elizabethport, NJ Accounting Tutors
Elizabethport, NJ ACT Tutors
Elizabethport, NJ Algebra Tutors
Elizabethport, NJ Algebra 2 Tutors
Elizabethport, NJ Calculus Tutors
Elizabethport, NJ Geometry Tutors
Elizabethport, NJ Math Tutors
Elizabethport, NJ Prealgebra Tutors
Elizabethport, NJ Precalculus Tutors
Elizabethport, NJ SAT Tutors
Elizabethport, NJ SAT Math Tutors
Elizabethport, NJ Science Tutors
Elizabethport, NJ Statistics Tutors
Elizabethport, NJ Trigonometry Tutors
Nearby Cities With statistics Tutor
Avenel statistics Tutors
East Newark, NJ statistics Tutors
Elizabeth, NJ statistics Tutors
Hillside, NJ statistics Tutors
Kenilworth, NJ statistics Tutors
Linden, NJ statistics Tutors
Midtown, NJ statistics Tutors
Millburn statistics Tutors
North Elizabeth, NJ statistics Tutors
Parkandbush, NJ statistics Tutors
Peterstown, NJ statistics Tutors
Rahway statistics Tutors
Roselle Park statistics Tutors
Roselle, NJ statistics Tutors
Union Square, NJ statistics Tutors
|
{"url":"http://www.purplemath.com/Elizabethport_NJ_statistics_tutors.php","timestamp":"2014-04-20T21:35:48Z","content_type":null,"content_length":"24402","record_id":"<urn:uuid:55af3698-e0b3-4f40-9647-033de523ab6a>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00590-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Find the area
January 26th 2009, 06:24 PM #1
Jan 2009
Find the area
Find The Total area enclosed by the graphs of these 2 functions.
Can anyone show me how to solve this without graphing it, we need to make it into an integral and I can't seem to figure out the answer, any help would be much appreciated.
I might be wrong, but I think you can do it like this:
$x=2 \ or \ 5$.
So we want $\int_{2}^{5} x^2-7x+10dx$.
Don't ever do that again.
Your work was for nice way, so we have to split the given area into two integrals:
$\int_{0}^{2}{\alpha (x)\,dx}+\int_{2}^{5}{\beta (x)\,dx}.$
Where $\alpha,\beta$ are some functions to being integrated.
Now, put $f(x)=8x^2-x^3+x$ and $g(x)=x^2+11x.$ Note that $\alpha(x)=g(x)-f(x)\ge0$ on $[0,2]$ and $\beta(x)=f(x)-g(x)\ge0$ on $[2,5].$ Finally, the required area is $\mathcal A=\int_{0}^{2}{\big
January 26th 2009, 11:28 PM #2
January 27th 2009, 05:38 AM #3
|
{"url":"http://mathhelpforum.com/calculus/70099-find-area.html","timestamp":"2014-04-19T21:56:53Z","content_type":null,"content_length":"39058","record_id":"<urn:uuid:a040722b-df19-4a45-a7b1-975f2e231b94>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00662-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Technical Reports
86-01 Periodic DOL Languages. Tom Head, Clifton Lando.
87-01 A Problem in Counting Digits. Jack Distad, Ron Gatterdam.
87-02 An Algorithm for Fast Polynomial Approximation. Ron
88-01 The Linear Algebra of Interpolation with Finite
Applications Giving Computational Methods for Multivariate
Polynomials. Coert Olmsted.
88-02 Forced Oscillations of Two-Dimensional Nonlinear Systems.
Clifton Lando.
89-01 A Fully Two-Dimensional Flux-Corrected Transport Algorithm
for Hyperbolic Partial Differential Equations. Sen-Wei
89-02 Continued Powers and Roots. Dixon Jones.
89-03 Periodicity and Ultimate Periodicity of DOL Systems.
Barbara Lando.
89-04 On the Undecidability of Splicing Systems. Karl
Denninghoff, Ron Gatterdam.
89-05 Splicing Systems and Regularity. Ron Gatterdam.
89-06 Load Shedder: A Bidding Algorithm for Load Balancing in a
Wide Area Network. Gary Schmunk.
90-01 Horrock's Question for Monomially Graded Modules. Larry
90-02 Homotopy Theory of Diagrams and CW-Complexes Over a
Category. Robert Piacenza.
90-03 Some Extremal Results in Cochromatic and Dichromatic
Theory. Paul Erdos, John Gimbel, Dieter Kratsch.
90-04 Independent Edges in Bipartite Graphs Obtained from
Orientations of Graphs. John Gimbel, K. B. Reid.
91-01 Some Problems and Results in Cochromatic Theory. Paul
Erdos, John Gimbel.
92-01 Inequalities for Total Matchings of Graphs. John Gimbel,
Preben Vestergaard.
92-02 Sources in Posets and Comparability Graphs. John Gimbel.
92-03 Near-minimal Resolution IV Designs with Three Factors. Ron
93-01 A Sheaf Theoretic Approach to the Bott-Kubleka Commutator
Formula. Robert Piacenza, Peter Litvanyi.
94-01 Sources and Sinks in Comparability Graphs. John Gimbel.
94-02 Using Moving Averages to Generate Families of Valid
Variograms and Crossvariograms II. Crossvariograms and
Cokriging. Ron Barry.
95-01 Terrain Correction of Synthetic Aperture Radar Imagery
Using a CRAY T3D Scalable Parallel Processor. Tom Logan.
95-02 AVS Optimization for Vector Processing. Karen Woys.
95-03 Line Transect Sampling Under Varying Conditions with
Application to Aerial Surveys. Pham Quang.
95-04 Kernel Methods in Line and Point Transect Samplings. Pham
95-05 A Centroid-Based Nonparametric Regression Estimator. Ron
95-06 The Probabilistic Tool: Real Graph Theory, for Real Graph
Theorists. John Gimbel.
95-07 Approximating Functions with Linear Subspaces for
Non-Metizable Domains. Michael Flanagan.
95-08 Coloring Graphs with Fixed Genus and Girth. John Gimbel.
95-09 A Case Study of a Software Maintenance Organization with
Respect to the Capability Maturity Model. Randy Hayman.
95-10 Image Compression for Animation Sequences. Deepak Sinha.
95-11 High Resolution Interactive Medical Imaging on a CRAY T3D.
Greg Johnson.
95-12 Switching Distance Graphs. John Gimbel.
95-13 On the Random Superposition of Trees. John Gimbel.
96-01 Implementation of Deterministic Finite Automata on Parallel
Architecture. Pavel Sekanina
96-02 Development of an HTML Form Preprocessing Tool. Dale Clark.
96-03 A Preprocessor for Introductory Programming Education.
Michelle Neisser
96-04 Placement Testing for Math 107 - Functions for Calculus.
Diane Cook
96-05 Development of Embedded Software for Robot Navigation.
Kannan Narayanamurthy
97-01 On the Weakness of Ordered Set. Gimbel/Trenk
97-02 Coloring Triangle-Free Graphs with Fixed Size. John Gimbel.
97-03 Inverting the Involute. Kathleen Gustafson.
97-04 A Dynamic Production Model for Alaska SAR Facility Data System. Yi Zhang.
98-01 The Defect of the Completion of a Green Functor. Florian Luca.
98-02 Degree-Continuous Graphs. J. Gimbel/P. Zhang.
00-01 A Parallel Implementation of the Terrestrial Ecosystem Model (TEM). James Long.
00-02 Partitions of Graphs into Cographs. John Gimbel and Jaroslav Nesetril.
00-03 The Lattice Polynomial of a Graph. Jonathan Wiens and Kara Nance.
01-01 On a Complete Analysis of High Energy Scattering Matrix
Asymptotics for One Dimensional Schroedinger Operators with
Integrable Potentials. Alexei Rybkin.
01-02 On a Trace Approach to the Inverse Scattering Problem
In Dimension One. Alexei Rybkin.
01-03 KDV Invariants and Herglotz Functions. Alexei Rybkin
01-04 The Module of Derivations for an Arrangement of Subspaces. Jonathan Wiens.
01-05 Ingham-type Inequalities and Riesz Bases of Divided Differences. Jonathan Wiens.
01-06 Simultaneous Control Problems for Systems of Elastic
Strings and Beams. Sergei Avdonin and William Moran.
01-07 Location with Dominating Sets. John Gimbel,
Brian D. Van Gorden, Mihaela Nocolescu, Cheri Umstead, Nicole Vaiana.
02-01 Some New and Old Asymptotic Representations of
The Jost Solution and the Weyl m-Function for Schroedinger
Operators on the Line. Alexei Rybkin.
02-02 Numerical Approximation of a Two-Dimensional Thermomechanical
Model for Ice Flow. Ed Bueler.
02-03 Stability of Periodic Linear Delay-Differential Equations and the
Chebyshev Approximation of Fundamental Solutions. Edward Bueler, Eric Butcher.
02-04 Symbolic Stability of Delay Differential Equations. Victoria Averina.
02-05 Numerical Analysis of Ice Flow Numerical Solutions Using
Finite Difference Approximations. Latrice Bowman.
02-06 Computer Interconnection Based on Group Graphs. Rong Wang.
02-07 Semiclassical Approach for Calculating Regge-pole Trajectories
for Singular Potentials. N.B. Avdonina, S. Belov, Z. Felfi, A.Z. Msezane, N. Naboko.
|
{"url":"http://www.uaf.edu/dms/techreports/","timestamp":"2014-04-16T14:49:01Z","content_type":null,"content_length":"16995","record_id":"<urn:uuid:cd66fa85-6548-4537-b34f-3ed7aa738a5d>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00240-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Homework Help
Posted by shylo on Wednesday, November 25, 2009 at 9:24am.
Hi there i am having some problems trying to do my calculus homework and i really need help on how to show the step to proof the volume of a sphere which is V= 4/3pirsquare. But I have to use triple
integral to proof the volume of a sphere. Please help me and give me some good website that i can find the proper formula to use.
Here is the question:
Use a triple integral and trigonometric substitution to find the volume of a sphere with radius r.
• Calculus - Count Iblis, Wednesday, November 25, 2009 at 10:27am
What you have to do here is show that the volume element:
dxdydz can be written as
r^2 sin(theta)dphi dtheta dr
where theta is the angle w.r.t. the z-axis and phi is the angle that corresponds to rotating around the z-axis.
It is easy to see that this is the volume element because you can see the three orthogonal length elements hee:
r dtheta
r sin(theta) dphi
Note that if you rotate around the z-axis, your radius will be
r sin(theta)
If you want to prove this formally by direct substituton of
x = r sin(theta)cos(phi)
y = r sin(theta)sin(phi)
z = r cos(theta)
You have to write down the Jacobian, i.e. the 3x3 matrix of partial deivatives of the the three cartesian coordinates w.r.t. r, theta and phi.
Once you've got that the volume element is r^2 sin(theta)dphi dtheta dr you can integrate this straightforwadly. r ranges from zero to R, phi goes from zero to 2 pi and theta goes from zero to
Related Questions
Calculus - Hi there I am having some troubles trying to do my calculus homework ...
Trig - Find the value of the six trigonometric functions. t = 7(pi)/4 Can you ...
algebra 2 - A. I need to write an equation to the linear equation..(-2,-2), (3,3...
Precalculus - Hello, Thanks for all of the help guys. I really appreciate it. I ...
Calculus - Solve the IVP {y'=-(sinx)y + xexp(cosx) y(0) = 1 Please show me step ...
stoichiometry problems - Hey I hve problems with calculate stoichiometry ...
Algebra II - In an induction proof of the statement 4+7+10+...+(3n-1)=n(3n+5)/2 ...
math 012 - Hi, I am trying to do homework for my math 012 class- solving linear ...
Math - The following are the last two problems on my test review. I am trying to...
Calculus - What is the simplest solution to the Brachistochrone problem and the ...
|
{"url":"http://www.jiskha.com/display.cgi?id=1259159072","timestamp":"2014-04-21T14:16:00Z","content_type":null,"content_length":"9643","record_id":"<urn:uuid:f0a4107d-e035-4849-9667-b8f75cea75cd>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00203-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Two Carts With Negligible Rolling Friction Are ... | Chegg.com
Image text transcribed for accessibility: Two carts with negligible rolling friction are connected as shown in Figure (1). An input force u(t) is applied. The masses of the two carts are M1 and M2
and their displacements are denoted as x(t) and q(t), respectively. The two carts are connected by spring k and damper b Figure 1 By using Newton's Second Law, derive the two equations that describe
the motion of the two carts. Hence derive the transfer function that relates the displacement x(t) and the input u(t), i. e. G(s) = X(s) / U(s).
Electrical Engineering
|
{"url":"http://www.chegg.com/homework-help/questions-and-answers/two-carts-negligible-rolling-friction-connected-shown-figure-1--input-force-u-t-applied-ma-q2765536","timestamp":"2014-04-20T07:02:03Z","content_type":null,"content_length":"20065","record_id":"<urn:uuid:97fbb19c-2cef-4dc0-8dd8-b23f4245ab11>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00236-ip-10-147-4-33.ec2.internal.warc.gz"}
|
"Challenging Research Issues in Statistics and Survey Methodology at the BLS"
Topic Statement: Stability of Linearization-Based Variance Estimators Computed from Potentially Unstable Estimators of First Derivatives
Key words: Asymptotics; Balanced repeated replication; Degrees of freedom; Inference; Nonlinear function of means; Replication-based variance estimation; t distribution approximation; Wishart
distribution approximation.
Contact for further discussion:
John L. Eltinge
Office of Survey Methods Research, PSB 1950 Bureau of Labor Statistics
2 Massachusetts Avenue NE
Washington, DC 20212
Telephone: (202) 691-7404
Fax: (202) 691-7426
E-mail: Eltinge.John@bls.gov
Background, Definitions and Notation:
In the analysis of complex survey data, we often need to estimate the variance of the approximate distribution of a random vector k -dimensional population mean n sample elements, and m -dimensional
real function of the k -dimensional real argument y.
In the complex-survey literature, regularity conditions on the sample design and population lead to results on the consistency of
Furthermore, under additional regularity conditions (e.g., Korn and Graubard, 1990), d is a known "degrees of freedom" term computed from the number of primary sample units and the number of strata.
Korn and Graubard (1990) also consider extensions of this Wishart approximation for cases in which
Now consider variance estimator for
Under regularity conditions, one can show that d , that was previously attributed to
Issue: In samples of moderate size, the estimated matrix of first derivatives,
Questions on Properties of Standard Variance Estimators for Nonlinear Functions of Estimated Means, and Modifications of Said Variance Estimators:
1. Assume that d is increasing at the same rate as n. Thus, we are excluding from consideration the case in which d is fixed, as would occur with a fixed number of strata and primary sample units,
and increasing numbers of sample elements within each primary sample unit. In addition, although we are using an asymptotic framework, we are implicitly excluding from consideration the cases in
which d and n are so large that the errors differences
Under what additional conditions can one establish that for some positive real number
2. Under the conditions of question (1), what is an appropriate estimator of
3. Under the conditions of question (1), one might wish to produce an estimator of that is more stable. This occurs, for example, in quantile estimation when one uses smoothed density estimators in
the computation of related variance estimators in some cases.
For general classes of smooth functions
4. To what extent do the issues (and prospective solutions) in (1)-(3) extend to replication-based variance estimators (in which the replication procedure produces, in an informal sense, a
nonparametric difference-based estimator of the derivative matrix F)?
5. Other authors have identified cases in which customary standard normal or t distribution approximations are problematic for quantities like t statistics, largely related to the correlation of the
"numerator" term
To what extent are these distributional problems exacerbated for
6. Finally, some authors consider inference for
The author thanks Moon Jung Cho, Alan Dorfman, Partha Lahiri and Michael Sverchkov for helpful comments on earlier versions of this topic statement.
Bickel, P. J., C.A.J. Klaassen, Y. Ritov, Y. and J.A. Wellner (1993). Efficient and Adaptive Estimation for Semiparametric Models. Baltimore: John Hopkins University Press.
Binder, D.A. (1983). On the variances of asymptotically normal estimators from complex surveys. International Statistical Review 51, 279-292.
Binder, D.A. and Z. Patak (1994). Use of estimating functions for estimation from complex surveys, Journal of the American Statistical Association 89 1035-1043.
Casady, R.J., A.H. Dorfman and S. Wang (1998). Confidence intervals for domain parameters when the domain sample size is random, Survey Methodology 24, 57-67.
Francisco, C.A. and W.A. Fuller (1991). Quantile estimation with a complex survey design, The Annals of Statistics 19, 454-469.
Korn, E.L. and Graubard, B.I. (1990). Simultaneous testing of regression coefficients with complex survey data: Use of Bonferroni statistics, The American Statistician 44, 270-276
Krewski, D. and J.N.K. Rao (1981). Inference from stratified samples: Properties of the linearization, jackknife and balanced repeated replication methods, The Annals of Statistics 19 , 1010-1019.
Ritov, Y. (1991). Estimating functions in semi-parametric models. Pp. 319-336 in Estimating Functions (V.P. Godambe, ed.), Oxford: Clarendon Press.
Shao, J. (1996). Resampling methods in sample surveys (with discussion), Statistics 27, 203-254.
Last Modified Date: January 06, 2006
Last Modified Date: July 19, 2008
|
{"url":"http://www.bls.gov/osmr/challenging_issues/ci09292005.htm","timestamp":"2014-04-20T10:53:54Z","content_type":null,"content_length":"45524","record_id":"<urn:uuid:a22e18ac-c1ff-4815-afc1-3d104445ed95>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00274-ip-10-147-4-33.ec2.internal.warc.gz"}
|
KEYNOTE LECTURE: DESIGN MODELING SYMPOSIUM BERLIN 2013
In architectural design, geometry is often described using geometric computer-aided design algorithms and sometimes analytical expressions. In contrast, the shape of the Mannheim Multihalle
(Mannheim, Germany 1974) and the ICD/ITKE Research Pavilion 2010 (Stuttgart, Germany 2010) derives from the large elastic deformation of flexible elements and is dictated by gravity and material
behavior. Recently the word bending active structures was coined to describe these forms.
The trouble with analytical expressions for two-dimensional curves of least bending strain energy. Back in the 19th century the Swiss mathematician Leonard Euler already defined the equilibrium shape
of a flexible rod (spline) when bent in two dimensions. Euler studied the buckled pinned strut over a large deflection range, seemingly the first non-linear treatment of elastic instability
phenomena. A slender spline is capable of bending far beyond the critical Euler buckling load while remaining in stable equilibrium. When only a pair of balancing forces act at the ends of the
initially straight member, the shape of this curve is an elastica. The elastica minimizes bending strain energy. The analytical solution to the shape of the 2D elastica problem, involves the solution
of fundamental elliptic integrals and thus in its very nature does not provide a very useful design tool to generate shapes.
The limitations of three-dimensional physical models. The design of the gridshells for the Mannheim Multihalle inverts the geometry of a hanging chain model(in tension) and results in a pure
compression shell. For active bending systems, this technique does not necessarily produce the shape as the bending effect of the splines is neglected. But it is a ‘good’ approximated shape.
Computationally finding the shape of bending active splines. To relate form to material behavior, non-linear Finite Elements Methods (FEM) have been used that simulate the structural behavior of
flexible splines. The simple spline algorithms we have developed, do not employ implicit solution methods like FEM but build on an explicit Dynamic Relaxation technique, initially developed for
form-finding of pre-stressed systems. Using compelling case studies of single spline splines and configurations, we explain how our bending formulations generate equilibrium shapes that obey material
and statics laws.
|
{"url":"http://formfindinglab.princeton.edu/news/keynote-lecture-design-modeling-symposium-berlin-2013/","timestamp":"2014-04-21T07:13:50Z","content_type":null,"content_length":"9956","record_id":"<urn:uuid:a593b343-955a-4b37-9c17-371d5257315d>","cc-path":"CC-MAIN-2014-15/segments/1397609539665.16/warc/CC-MAIN-20140416005219-00415-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Burn Meter Wrong? Help!
Weight Loss
[Select a Forum... ]
Burn Meter Wrong? Help!
So ive been watching my burn meter before and after I add an activity.
This morning I did 34 min on the treadmill and burned 200 cals. I normally burnt 1750 + 200 = 1950 right...
So how come my burn meter says 1910? What happened to those 40 cals I burned?
#1 Quote
You burn those 40 calories just by being alive, and they are already included in your burn meter.
#2 Quote
no no
I burn 1750 just by being alive
I did another 200 on the treadmill today
I added that up and by burn meter is only at 1910....it SHOULD be at 1950 though, right?
Simple math...
#3 Quote
The 1750 calories you burn is in a 24 hour day. When you add your exercise to your activity log, it replaces those minutes (out of the 24 hour day) with the minutes you spent on the treadmill.
If it was to add up to 1950 calories even, then you would be adding minutes to the day which is not possible.
#4 Quote
What caitling meant was that you normally would have burnt that other 40 cals during the 34 minutes you were on the treadmill.
40 calories in 34 minutes laying on the couch watching TV or sleeping -- part of the 1750 you burn living
200 calories in 34 minutes you spent on the treadmill, but 40 of that you would have burned anyway, so you only get to ADD the 160 extra that you burned.
#5 Quote
i spent 34 min on the treadmill
doesnt make sense to me... \
if i burn 1750 in a 24 hr period, and i burned another 200 in exercise, why doesnt that mean i burned 1950 in a 24 hr period?
#6 Quote
lets see if I can explain this.
You did 34 minutes on the treadmill...you burn 1750 just for being alive. if you divide 1750 by 1440 (the number of minutes in a day) you burn 1.21527 calories a minute. so 34*1.21527=41.31918
calories, so if you had done no exercise you would have burned those 40 calories, so the burn meter is subtracting the calories you would have burned by just living, from the calories you burned
while exercising. Otherwise you would be counting those 40 calories twice.
so 1750/1440= 1.21 per minute...1.21*34=41 calories...200-40=160...1750+160=1910
I hope I explained it. it took me a while to figure out too.
#7 Quote
Because you didn't burn *another* 200, you burned 200 in the 34 minutes -- when you normally would have burned 40 by just living and breathing.
You only burned *another* 160 on top of that 40, so that is all that's added to your burn meter.
#8 Quote
There are 1440 minutes in a day.
So you burn roughly 1.2 calories a minute.
If you did 34 minutes on the treadmill, you would have to subtract 34 minutes from your normal day and the 40 calories that you would normally burn in that 34 minute time period which would put you
at 1710 calories.
You then replace that 34 minutes with the time you spent on the treadmill and add the 200 calories you burned which is where the 1910 calories comes from.
#9 Quote
Original Post by karozel:
What caitling meant was that you normally would have burnt that other 40 cals during the 34 minutes you were on the treadmill.
40 calories in 34 minutes laying on the couch watching TV or sleeping -- part of the 1750 you burn living
200 calories in 34 minutes you spent on the treadmill, but 40 of that you would have burned anyway, so you only get to ADD the 160 extra that you burned.
this explanation makes sense to me!
thank you.
#10 Quote
Glad we could help. It's confusing at first, and it's awfully frustrating when you work out so hard and then add it in and you think you got cheated out of some of that hard work.
#11 Quote
I'm so glad this question was asked and that I read this thread! I'm new to CC and I didn't get the burn and what all the numbers meant. You all explained it very well and I understand it more.
#12 Quote
I'm glad it was asked too. This is really helpful!
#13 Quote
Original Post by lapslazuli:
lets see if I can explain this.
You did 34 minutes on the treadmill...you burn 1750 just for being alive. if you divide 1750 by 1440 (the number of minutes in a day) you burn 1.21527 calories a minute. so 34*1.21527=41.31918
calories, so if you had done no exercise you would have burned those 40 calories, so the burn meter is subtracting the calories you would have burned by just living, from the calories you burned
while exercising. Otherwise you would be counting those 40 calories twice.
so 1750/1440= 1.21 per minute...1.21*34=41 calories...200-40=160...1750+160=1910
I hope I explained it. it took me a while to figure out too.
Wow! Awesome calculations! Impressive! You hurt my head but thanks!
|
{"url":"http://caloriecount.about.com/forums/weight-loss/burn-meter-wrong-help","timestamp":"2014-04-20T03:14:43Z","content_type":null,"content_length":"56445","record_id":"<urn:uuid:363093f8-08b4-49ac-89f6-500397211142>","cc-path":"CC-MAIN-2014-15/segments/1397609537864.21/warc/CC-MAIN-20140416005217-00473-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Bremerton Trigonometry Tutor
...As mentioned, I have lived in Washington state for most of my life. I graduated high school with a high cumulative GPA and various awards and honors. I have over five years work experience in
various fields from construction to retail (grocery), and my tutoring experience spans a little over one year, approaching two.
38 Subjects: including trigonometry, reading, English, physics
...I started my computer experience with a Macintosh. While I was working in the schools, we used Macintosh computers. While working in an office, I was responsible for getting iPhones up and
running for people when they ordered new ones.
39 Subjects: including trigonometry, English, reading, writing
...My primary programming language is currently Java. Regardless of the subject, I would say I am effective at recognizing patterns. I love sharing any shortcuts or tips that I discover.I have
taken 2 quarters of Discrete Structures (Mathematics) at University of Washington, Tacoma.
16 Subjects: including trigonometry, chemistry, French, calculus
...For six of those seven years, I taught at least one Algebra 1 class. I taught four Algebra 1 classes during my last year of full-time teaching. I have a Bachelor's degree in Mathematics and a
Master's Degree in Secondary Math Education.
16 Subjects: including trigonometry, geometry, GRE, algebra 1
...I hope I can be of assistance for you.I played soccer for 15 years of my life, from the age of 4 until senior year of high school. Aside from playing at the state level outside of school and
being varsity all four years of high school I have extensive experience considering how soccer is played....
25 Subjects: including trigonometry, chemistry, physics, calculus
|
{"url":"http://www.purplemath.com/bremerton_wa_trigonometry_tutors.php","timestamp":"2014-04-17T07:56:23Z","content_type":null,"content_length":"24050","record_id":"<urn:uuid:93f70df3-fc4b-4491-a5d2-9adc0440096d>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00267-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Thompson's group F and monoidal categories
up vote 4 down vote favorite
(This is a cross-post from MathSE, as someone remarked that the question would be more appropriate on MO)
Fiore and Leinster have proved that if $\mathcal{A}$ is a monoidal category freely generated by one object $A$ and an isomorphism $\alpha: A \otimes A \to A$, then for every object $X \in \mathcal
{A}, Aut(X)$ is isomorphic to the Thompson group $F$.
My question is the following: if we assume instead that $\alpha: A \otimes A \to A$ is not necessarily an isomorphism, and that there exist a morphism $\beta: A \to A \otimes A$ such that $\alpha \
circ \beta = id$, is the result of Fiore and Leinster still true ?
I have a feeling we at least have $F \subset Aut(X)$. Loosely speaking, my approach is that since every element of $F$ can be represented as a pair $(R,S)$ of forests, we can always represent $R$ by
a suitable composition of $\beta$ maps, then $S$ by a composition of $\alpha$ maps, the identity $\alpha \circ \beta = id$ ensuring that every facing caret gets cancelled to form a reduced forest
diagram, i.e a unique element of $F$.
gr.group-theory monoidal-categories
In order to be precise, Fiore and Leinster do not work with existential quantifiers but with structures, i.e. they consider a free monoidal category on one object $A$ and an isomorphism $A\otimes A
\rightarrow A$. The introduction, even the position of existential quantifiers may produce different answers to similar questions. – Fernando Muro Jun 27 '12 at 23:40
add comment
1 Answer
active oldest votes
Edit I noticed that in Fiore-Leinster preliminate the condition (free monoidal category of an isomorphism $ \alpha: A \otimes A \to A $) is different from what is written in the
preliminary question, so I reworked my answer substantially.
In a Monoidal category $\mathcal{C}$ consider (a non empty) class of sections of the type $\beta: A\to A\otimes A$ and let $\Sigma$ its tensor product closure (finite tensor
products of some morphisms of type $ \beta $ of the choose class and some identities).
From the article "Note on monoidal localizations " by B. Day (link text) the category of fraction $\mathcal{C}_\Sigma$ is (naturally) a monoidal category.
let $P: \mathcal{C}\to \mathcal{C}_\Sigma$ the natural functor.
The elements of $\Sigma$ are all monomorphisms (are sections) , and if $\Sigma$ admits a calculus of left fractions the canonical functors $P$ is faithful (see "Categories" H
Shubert, 12.9.6(a), p.261). THen $\mathcal{C}$-$Aut(X)$ is a subgroup of $\mathcal{C}_\Sigma$-$Aut(X)$ (because $P$ is faithful).
Now consider the Monoidal category $[A, \alpha, \beta]$ free on (the condition):
"one object $A$ and on two morphisms $\alpha: A\otimes A\to A$, $\beta: A\to A\otimes A$, with $\alpha\circ \beta=1_A$".
up vote 3 down
vote accepted This category has the following universal property: for any monoidal categories $\mathcal{C}$ with choose morphisms $a: X\otimes X\to X,\ b: X\to X\otimes X$ with $a\circ b=1_X$
there exists a unique strict monoidal functor $F_{a,b}: [A, \alpha, \beta]\to \mathcal{C}$ with $F(\alpha)=a,\ F(\beta)=b$.
Now in $[A, \alpha, \beta]$ consider the tensor closure $\Sigma$ of the section $\beta$,
and let $P:[A, \alpha, \beta]\to [A, \alpha, \beta]_\Sigma$ the category of fractions.
the category $[A, \alpha, \beta]_\Sigma $ has the universal property of the monoidal category on one isomorphisms
$\beta: A\to A\otimes A$ as in the FIore-Leinster article, then $F\cong [A, \alpha, \beta]_\Sigma$-$Aut(A)$.
Now, IF $\Sigma$ admit a calculus of left fraction then $P$ is faithful and $[A, \alpha, \beta]$-$Aut(A)$ is isomorphic to a subgroup of $F$.
P.S. I seems that $\Sigma$ admit a calculus of left fraction, but I have not checked it in detail
Thank you for your answer, which is however very difficult for me to understand. When I posted the question, I had in mind a map $\alpha$ which would be surjective-like, thus I
introduced a section $\beta$. But from your answer, why not use the localization of $\mathcal{A}$ at $\alpha$ instead of $\beta$ ? – AlexPof Jun 28 '12 at 7:49
Because in the theorme 12.9.6(a), p.261 of the H.Shubert Book, the mophisms of Sigma need to be monomorphisms. I improved my answere. – Buschi Sergio Jun 28 '12 at 15:34
You're right my question was badly formulated. I'll edit my question with the proper statement of Fiore and Leinster, and I'll add my comment in there... – AlexPof Jun 29 '12 at
add comment
Not the answer you're looking for? Browse other questions tagged gr.group-theory monoidal-categories or ask your own question.
|
{"url":"http://mathoverflow.net/questions/100802/thompsons-group-f-and-monoidal-categories?sort=oldest","timestamp":"2014-04-17T15:57:40Z","content_type":null,"content_length":"58199","record_id":"<urn:uuid:b37f18a9-7f0b-4f47-ace7-337306c78769>","cc-path":"CC-MAIN-2014-15/segments/1398223206672.15/warc/CC-MAIN-20140423032006-00295-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Linear First Order PDE- Looking for an example
February 2nd 2011, 11:08 AM #1
Linear First Order PDE- Looking for an example
Hi All,
I have an examinable question that I am stuck on and obviously I cannot submit it here.
Instead I am asking if some one could provide a simple problem that demonstrates the principle. Here goes:
We have linear first order PDE u(x,y). Asked to find u given intial conditions which I can.
Next it asks to show that the solution is not defined when y > f(x). (I know what the f(x) of x is but cannot show it)
Could anyone provide a simple example to demonstrate this or at least what do i do?
How about something like
$2u u_y = -1, \;\;u(x,0) = x^2?$
It's a PDE so integrating gives (taking the positive solution)
$u(x,y) = \sqrt{f(x)-y}$.
Ok, the constant must be a function of x ie
$2\int{u} du=-\int{y} dy$ therefore
$u^2=-y+c$ but c=f(x)
$u(x,y)=+\sqrt{-y+f(x)}$ using boundary conditions give particular as
How do we use this to show the solution is not defined when y> some function of x?
What if $y > x^4$?
Ok, after some reading and with your help my interpretation is the following:
there are 2 restrictions: We never divide by 0 and we assume real values functions. Therefore based on this and looking at the above example, the quantity $-y+x^4 ot < 0$. Therefore $x^4 \geq y$.
Otherwise we get a complex number.
What is the domain of the function?
The domain of this function is the set of all real values of x and y whose quantity $-y+x^4 ot < 0$.
Is there a better way of stating this mathematically?
February 2nd 2011, 03:09 PM #2
February 2nd 2011, 10:23 PM #3
February 3rd 2011, 04:14 AM #4
February 3rd 2011, 05:19 AM #5
February 3rd 2011, 08:51 AM #6
February 3rd 2011, 09:55 AM #7
|
{"url":"http://mathhelpforum.com/differential-equations/170020-linear-first-order-pde-looking-example.html","timestamp":"2014-04-18T03:07:54Z","content_type":null,"content_length":"53097","record_id":"<urn:uuid:0a6e7536-6671-4403-94a6-083b41dea068>","cc-path":"CC-MAIN-2014-15/segments/1397609532480.36/warc/CC-MAIN-20140416005212-00205-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Dear Reader,
Below are three tricky problems on algebra, you have to find the value or number of terms in the given expression.
Question 1
Dear Reader,
Below are three problems on simplifications using equations which you can expect in Infosys and other companies placement paper.
Question 1
Dear Reader,
Below are three probability problems using time calculations.
Question 1
Dear Reader,
Below are three train problems which deals with time,speed and distance calculations.
Question 1
Dear Reader,
Below are three simple problems based on combinations.
Note :
Number of Combinations:
The number of all combinations of n things, taken r at a time is:
nCr = n! / (r!)(n - r!)
Question 1
Dear Reader,
Below are three numeric puzzles dealing with some basic arithmetic calculations.
Question 1
Find X's age which equals the number of grand children of a man who has 4 sons and 4 daughters. Each daughter of the man's wife have 3 sons and 4 daughters and each son of the man's wife have 4 sons
and 3 daughters.
Dear Reader,
Below are three problems dealing with reversible numbers which is multiplied by 4 and 9.
Question 1
If UVWX / 9 = XWVU and U = 10 - X, then find the 4-digit numbers UVWX.
a) 9801 b) 9081 c) 9810 d) 9108
Answer : a) 9801
Solution :
Given that, UVWX / 9 = XWVU
That is, XWVU x 9 = UVWX
Dear Reader,
Below are three problems based on the concept of number of brothers and sisters, you have to find the number of siblings according to given statements.
Question 1
Dear Reader,
Below are 3 problems based on the concept of time and distance.
Question 1
Dear Reader,
Below are two problems on time and distance with some tricky calculations on time.
Question 1
|
{"url":"http://www.careersvalley.com/solved-placement-papers/company/infosys","timestamp":"2014-04-21T02:12:37Z","content_type":null,"content_length":"38490","record_id":"<urn:uuid:bc27bda2-3ed4-4c90-8bcc-f419582f368a>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00053-ip-10-147-4-33.ec2.internal.warc.gz"}
|
> restart;
Differential Equations
In this worksheet we demonstrate basic techniques of working with ordinary differential equations , (ODE's), using Maple. An ODE is a differential equation in which the unknown function is a function
of one variable. We shall stick with first order ODE's which contain only the first derivative of the unknown function. Moreover, we shall consider only first order ODE's in the normal form , that is
in the form:
where the left hand side is simply the derivative of the unknown function, in this case, and the right hand side is a given function of x and y. You know that for such ODE's, under reasonable
assumptions about , there exists a unique solution to any initial value problem, ( IVP ), of the form
, ,
where , are given.
Solving and Plotting ODE's
Basic tools for solving and plotting ODE's are contained in the packages " plots " and " DEtools ". We begin with loading these packages. Please, don't forget to click on the two commands below.
> with(plots):
> with(DEtools):
We are familiar with the package " plots ". If you are curious about the content of " DEtools ", replace the colon at the end of the command with a semicolon and click on it again.
Example 1. (a) Find the general solution to the ODE: .
(b) Solve the following two initial value problems:
, ,
, .
(c) Plot the solutions to the IVP's together with the slope field corresponding to the ODE.
In order to simplify many commands below let's first label our ODE:
> ODE1:=diff(y(x),x)=-2*x*y(x);
Observe that correct syntax. The derivative is entered using the " diff " command. The command " D(y)(x) " could be used as well. Note that " y " has to be entered as " y(x) ". The main command for
solving ODE's is " dsolve ".
> dsolve(ODE1,y(x));
Maple returned the general solution. " _C1 " denotes, of course, an arbitrary constant. Instead of using the name "ODE1" you could have entered the differential equation "diff(y(x),x)=-2*x*y(x)"
directly into the " dsolve " command. Maple can handle initial value problems, as well. The proper syntax looks as follows.
> dsolve({ODE1,y(0)=2},y(x)); dsolve({ODE1,y(0)=1/2},y(x));
Maple can plot slope fields, as well as slope fields together with particular solutions. Proper commands are " dfieldplot " and "DEplot ", both contained in the " DEtools " package. Let's see how
they work. Pay attention to the syntax.
> dfieldplot(ODE1,[y(x)],x=-2..2,y=-2..2,color=blue,scaling=constrained,arrows=LINE,dirgrid=[30,30]);
Maple plotted the slope field for our equation. All the options under the " dfieldplot " command regarding color, appearance of the arrows, scaling and dirgrid are, of course, optional. You can play
with them and see what will happen. " dirgrid " tells Maple how dense you want the field of slopes to be. The default setting is dirgrid[20,20] and it tends to be a little rough. On the other hand, a
finer grid may take more time to compute.
Remark. Whenever plotting field of slopes, you should use the "scaling = constrained" option. Otherwise, the pictures may appear misleading, as slopes become distorted.
The command " DEplot " plots the slope field together with particular solutions.
> DEplot(ODE1,y(x),-2..2,[[y(0)=2],[y(0)=1/2]], linecolor=magenta,color=blue,scaling=constrained,arrows=LINE);
As we expected, the two particular solutions are bell-shaped curves. If you do not want the slope field plotted with particular solutions, you add an option " arrows=NONE " under the " DEplot "
Example 2. (a) Try to find the general solution to the ODE: .
(b) Solve the IVP:
, .
Let's see how Maple handles this ODE.
> ODE2:=diff(y(x),x)=cos(x*y(x));
> dsolve(ODE2,y(x));
Maple returned no output! That means Maple is unable to solve the equation. If you are curious what steps Maple went through to find a solution before failing to do so, you can ask to see the steps
using the command " infolevel ". The levels of information that you can request range from 0 to 5. The higher number, the more information Maple will reveal.
> infolevel[dsolve]:=3:
> dsolve(ODE2,y(x));
Methods for first order ODEs:
trying to isolate the derivative dy/dx
successful isolation of dy/dx
-> trying classification methods
trying a quadrature
trying 1st order linear
trying Bernoulli
trying separable
trying inverse linear
trying simple symmetries for implicit equations
trying homogeneous types:
trying Chini
trying exact
-> trying 2nd set of classification methods
trying Riccati
trying Abel
-> trying Lie symmetry methods
As you see, Maple tried to match the equation with the types of first order ODE's that it knows how to solve in a closed form, and failed. Out of curiosity, let's see how Maple solved the previous
equation, ODE1 , with which it was successful.
> dsolve(ODE1,y(x));
Methods for first order ODEs:
trying to isolate the derivative dy/dx
successful isolation of dy/dx
-> trying classification methods
trying a quadrature
trying 1st order linear
1st order linear successful
As you see, Maple solved the equation by the first successful method, that is, as a linear equation in y(x). You don't know that method yet. You could, however, solve the equation by hand as a
separable equation. Besides the common type equations listed above, Maple is familiar with more sophisticated techniques of solving ODE's involving power series expansion, Laplace transform and
others. You have to tell Maple if you want it to apply those techniques. We are not going to do so at this point. Let's get back to the normal infolevel for " dsolve " command.
> infolevel[dsolve]:=0:
Maple couldn't find the general solution of the equation and neither can we. However, the solution to the IVP of (b), can be found using numerical methods . We describe them in the next section.
Solving Initial Value Problems Numerically Using Maple
Maple is programmed for the whole array of sophisticated numerical methods for solving ODE's. Let's find the numeric solution to the IVP of Example 2. (b). The proper syntax for evoking Maple's
numerical solver is " dsolve(....,numeric) ". You should always label the resulting solution. For example as " soln ".
> soln:=dsolve({ODE2,y(0)=3},y(x),numeric);
The output looks like a failure. It is not. Maple reports its numerical solution as an algorithm that allows us to calculate values of the solution at any point we want, as well as plot it. The
expression " rkf45 " stands for the Fehlberg fourth-fifth order Runge-Kutta method, which is a well-known numerical scheme for solving ODE's. Maple uses it as a default option. You can guess that the
rkf45 algorithm is much more advanced that the Euler method that we have learned.
We can find values of our numerical solution " soln " at any single point we want, or at a list of points using the following syntax:
> [soln(1),soln(1.5),soln(2),soln(2.5),soln(3),soln(3.5),soln(4),soln(4.5),soln(5)];
Maple displays the consecutive values of x together with the corresponding values of the solution y(x). We can also plot a numerical solution to an IVP using the " odeplot " command. This command is
contained in the " plots " package, which we have already loaded. The command can be used only with numerical solutions.
> odeplot(soln,[x,y(x)],0..5,color=magenta,thickness=2,scaling=constrained);
It may be interesting to see how this numerical solution relates to the slope field of the equation . To see both together you could use the " DEplot " command. You can also use the " display "
command from the " plots " package.
> p1:=odeplot(soln,[x,y(x)],0..5,color=magenta,thickness=2,scaling=constrained):
> p2:=dfieldplot(ODE2,[y(x)],x=0..5,y=0..5,color=blue,arrows=LINE,scaling=constrained,dirgrid=[30,30]):
> display([p1,p2]);
An Applied Example: Mr. Jones' Date
Example 3. Mr. Jones has invited his date for dinner for 6 p.m. He plans to serve baked chicken. At 5:30 p.m. he realizes that he hasn't started baking the chicken. In a panic, Mr. Jones forgets to
preheat the oven. He puts cold chicken from the fridge, at F, into a cold oven and turns the oven on. The temperature of the oven is rising according to the function
where t is the time, in minutes, after the oven is turned on. The internal temperature of the chicken, , in degrees Fahrenheit, t minutes after the oven is turned on, changes according to Newton's
Law of Cooling:
The chicken is done when its internal temperature reaches F. At what time will Mr. Jones be able to serve the dinner?
To find the function , we have to solve the following IVP:
, .
Then we can determine when . Observe that this equation is different from the other ones we have seen in connection with Newton's Law of Cooling because in our present example we are not assuming
that the temperature of the oven is constant. As a result, the differential equation is no longer separable, we will have trouble solving it by hand. Let's hope Maple can help us.
> ODE3:=diff(H(t),t)=-(7/1000)*(H(t)-350+280*exp(-(7/10)*t));
Maple somewhat simplified the equation. Let's solve the IVP.
> dsolve({ODE3,H(0)=40},H(t));
We have replaced all decimals by fractions in our ODE, as in earlier releases of Maple the " dsolve " command does not work if an equation contains decimals.
IMPORTANT NOTE: It is very important to realize that the output of the "dsolve" command may seem like a function, but, as far as Maple is concerned, H(t) is neither a function nor an expression. H(t)
has not been properly defined as either one. In order to further process a solution to a given ODE, you have to make it into an expression or a function.
To obtain your solution as an expression, say exH , you can use the following syntax:
> dsolve({ODE3,H(0)=40},H(t)); exH:=rhs(%);
" rhs " stands for the " right hand side ". Recall, that " % " stands for the last output. Now, exH is an expression in terms of t and you can use commands, like " plot " and others with it. If you
prefer to have your solution as a function, and you don't want to type long formulas, you have to use the command with a strange name " unapply " . The command turns an expression into a function as
follows. We shall turn the expression exH in terms of t into a function fH of t.
> fH:=unapply(exH,t);
The logic behind the name "unapply" is that an expression in terms of t may be thought of as a result of applying a certain function to a given t. Hence, to turn an expression back into a function we
have to "unapply".
Let's plot the function fH to see what is happening to Mr. Jones' chicken.
> plot(fH(t),t=0..90);
We see the the temperature of the chicken will reach 175 degrees somewhere between t = 60 and t = 90.
> fsolve(fH(t)=175,t,60..90);
Well, the chicken will be ready about 83 minutes after 5:30 p.m., that is, about 6:50 p.m. Mr. Jones has to hope that his date doesn't arrive too hungry!
Euler's Method
Maple can be used to illustrate Euler's method that we have learned. One can say that Euler's method is to numerical methods for ODE's what Left Sums are to numerical integration. It is a rudimetary
and not very efficient numerical method, which, however, is easy to understand and very illuminating. Maple has Euler's method built in as one of the options, but in order to see all the steps, we
shall program it from scratch.
Example 4. Construct approximate solutions for x from 0 to 5 to the initial value problem
using Euler's method with the three different step sizes = 0.5, 0.25, 0.125. Compare your solutions with Maple's solution to the IVP.
To execute Euler's method, we could go three times step-by-step through ten steps. We could tell Maple in one command to do the ten steps for our particular IVP using the so-called loop structure.
Instead, we shall program a simple procedure that calculates Euler approximations for any given ODE with a right hand side f(x,y), any initial conditions , and any number od steps n. The loop
structure " do...od " is a part of the procedure. This is a simple example of Maple programming. We will program a procedure called "Eulermethod" which, given f(x,y), initial conditions (x0,y0), step
size h, and number of steps n, will calculate the steps of Euler's method and display the resulting list of pairs [ [x[0],y[0]], [x[1],y[1]], ...,[x[n],y[n] ]. To clarify the structure of our little
program we will comment on each step.
> Eulermethod:=proc(f,x0,y0,h,n) local i,x,y;
x[0]:=x0; y[0]:=y0; first pair of the list is given by the initial condition
for i from 1 to n we tell Maple to perform n steps
do mark beginning of the loop commands
y[i]:=y[i-1]+evalf(f(x[i-1],y[i-1]))*h; get the next y value
x[i]:=x[i-1]+h; get the next x value
od; mark the end of the loop commands
[seq([x[i-1],y[i-1]],i=1..n+1)]; form the final list of pairs
Let's apply the procedure to our function .
> f:=(x,y)->cos(x*y);
Lat's use Eulermethod with step sizes h = 0.5 =1/2 , h = 0.25=1/4, h=0.125=1/8 and the corresponding values of n=10,20,40 to cover the interval [0,5]. We label the resulting list of values by
> A1:=Eulermethod(f,0,3,0.5,10);
> A2:=Eulermethod(f,0,3,0.25,20);
> A3:=Eulermethod(f,0,3,0.125,40);
Remember, the three lists of pairs correspond to approximate solutions y(x) on [0,5] provided by the Euler method with smaller and smaller step size. Recall that we considered the same IVP : , in
Sections 1 and 2. The equation was labeled ODE2 and the numeric solution for the IVP provided by Maple was labeled " soln ". We obtain the plot of a numeric solution with " odeplot " command.
> plo1:=odeplot(soln,[x,y(x)],0..5,color=magenta,scaling=constrained):
> plo2:=pointplot(A1,color=blue,scaling=constrained):
> plo3:=pointplot(A2,color=green,scaling=constrained):
> plo4:=pointplot(A3,color=black,scaling=constrained):
> display([plo1,plo2,plo3,plo4]);
As you see, our Euler solutions are getting closer and closer to Maple's solution.
Homework Problems
Do your homework in the same Maple session during which you reexecuted the commands in the above worksheet. Do not put the "restart" command in your homework worksheet. If you do, you will have to
reload " with(plots) " and " with(DEtools) " as well as copy, paste and reexecute the definition of Eulermethod procedure.
Problem 1. Consider the following ODE
(a) Find the general solution to the equation.
(b) Plot the slope field corresponding to the equation for x and y between -2 and 2.
(c) Solve the initial value problems
, ,
, .
(d) Plot the two solutions and the slope field in one coordinate system for x between -2 and 2 using the DEplot command.
Problem 2. The growth of a certain animal population is governed by the equation
where denotes the number of animals at time t, t is measured in months. There are 200 animals initially.
(a) Find .
(b) Graph in a large enough interval to see the longterm behavior of the population.
(c) When will the population reach 400 animals?
Hint : Remember to define the solution to your ODE as an expression or a function, as in Example 3, before attempting to process it.
Problem 3. Consider the IVP
, .
(a) Solve the IVP numerically using Maple's numerical solver.
(b) Plot the solution in the interval [0,4] using the "odeplot" command.
(c) For three different values of step size h calculate the corresponding Euler approximation.
(d) Display your Euler approximations and the Maple's solution in one coordinate system.
MTH 142 Maple Worksheets written by B.Kaskosz and L.Pakula, Copyright 1999.
Last modified August 1999.
|
{"url":"http://www.math.uri.edu/Center/workht/calc2/odes1.html","timestamp":"2014-04-18T18:10:25Z","content_type":null,"content_length":"38847","record_id":"<urn:uuid:158cd0ac-7456-4d80-bd8d-eda82ce7f1b3>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00390-ip-10-147-4-33.ec2.internal.warc.gz"}
|
[Numpy-discussion] Created NumPy 1.7.x branch
josef.pktd@gmai... josef.pktd@gmai...
Mon Jun 25 21:37:23 CDT 2012
On Mon, Jun 25, 2012 at 9:50 PM, Travis Oliphant <travis@continuum.io> wrote:
> On Jun 25, 2012, at 7:53 PM, josef.pktd@gmail.com wrote:
>> On Mon, Jun 25, 2012 at 8:25 PM, <josef.pktd@gmail.com> wrote:
>>> On Mon, Jun 25, 2012 at 8:10 PM, Travis Oliphant <travis@continuum.io> wrote:
>>>> You are still missing the point that there was already a choice that was
>>>> made in the previous class --- made in Numeric actually.
>>>> You made a change to that. It is the change that is 'gratuitous'. The pain
>>>> and unnecessary overhead of having two competing standards is the problem
>>>> --- not whether one is 'right' or not. That is a different discussion
>>>> entirely.
>>> I remember there was a discussion about the order of the coefficients
>>> on the mailing list and all in favor of the new order, IIRC. I cannot
>>> find the thread. I know I was.
>>> At least I'm switching pretty much to the new polynomial classes, and
>>> don't really care about the inherited choice before that any more.
>>> So, I'm pretty much in favor of updating, if new choices are more
>>> convenient and more familiar to new users.
>> just to add a bit more information, given the existence of both poly's
>> nobody had to rewrite flipping order in scipy.signal.residuez
>> b, a = map(asarray, (b, a))
>> gain = a[0]
>> brev, arev = b[::-1], a[::-1]
>> krev, brev = polydiv(brev, arev)
>> if krev == []:
>> k = []
>> else:
>> k = krev[::-1]
>> b = brev[::-1]
>> while my arma_process class can start at the same time with
>> def __init__(self, ar, ma, nobs=None):
>> self.ar = np.asarray(ar)
>> self.ma = np.asarray(ma)
>> self.arpoly = np.polynomial.Polynomial(self.ar)
>> self.mapoly = np.polynomial.Polynomial(self.ma)
> That's a nice argument for a different convention, really it is. It's not enough for changing a convention that already exists. Now, the polynomial object could store coefficients in this order, but allow construction with the coefficients in the standard convention order. That would have been a fine compromise from my perspective.
I'm much happier with the current solution. As long as I stick with
the np.polynomial classes, I don't have to *think* about coefficient
order. With a hybrid I would always have to worry about whether this
animal is facing front or back.
I wouldn't mind if the old order is eventually deprecated and dropped.
(Another example: NIST polynomial follow the new order, 2nd section
no [::-1] in the second version.)
>> As a downstream user of numpy and observer of the mailing list for a
>> few years, I think the gradual improvements have gone down pretty
>> well. At least I haven't seen any mayor complaints on the mailing
>> list.
> You are an *active* user of NumPy. Your perspective is valuable, but it is one of many perspectives in the user community. What is missing in this discussion is the 100's of thousands of users of NumPy who never comment on this mailing list and won't. There are many that have not moved from 1.5.1 yet. I hope your optimism is correct about how difficult it will be to upgrade for them. As long as I hold any influence at all on the NumPy project, I will argue and fight on behalf of those users to the best that I can understand their perspective.
oops, my working version
>>> np.__version__
I'm testing and maintaining statsmodels compatibility from numpy 1.4.1
and scipy 0.7.2 to the current released versions (with a compat
statsmodels dropped numpy 1.3 support, because I didn't want to give
up using numpy.polynomial.
Most of the 100,000s of numpy users that never show up on the mailing
list won't worry much about most changes, because package managers and
binary builders and developers of application packages take care of
most of it.
When I use matplotlib, I don't care whether it uses masked arrays, or
other array types internally (and rely on Benjamin and others to
represent matplotlib usage/users). Wes is recommending users to use
the pandas API to insulate them from changes in numpy's datetimes.
>> For me, the big problem was numpy 1.4.0 where several packages where
>> not available because of binary compatibility, NaN's didn't concern me
>> much, current incomplete transition to new MinGW and gcc is currently
>> a bit of a problem.
> It is *much*, *much* easier to create binaries of downstream packages than to re-write APIs. I still think we would be better off to remove the promise of ABI compatibility in every .X release (perhaps we hold ABI compatibility for 2 releases). However, we should preserve API compatibility for every release.
freeze the API wherever it got by "historical accident"?
>> Purely as an observer, my impression was also that the internal numpy
>> c source cleanup, started by David C., I guess, didn't cause any big
>> problems that would have created lots of complaints on the numpy
>> mailing list.
> David C spent a lot of time ensuring his changes did not alter the compiling experience or the run-time experience of users of NumPy. This was greatly appreciated. Lack of complaints on the mailing list is not the metric we should be using. Most users will never comment on this list --- especially given how hard we've made it for people to feel like they will be listened to.
I think for some things, questions and complaints on the mailing list
or stackoverflow is a very good metric. My reason to appreciate
David's work, is reflected in that the number of installation issues
on Windows has disappeared from the mailing list.
I just easy_installed numpy into a virtualenv without any problems at
all (it just worked), which was the last issue on Windows that I know
of (last seen on stackoverflow). easy_installing scipy into a
virtualenv almost worked (needed some help).
> We have to think about the implications of our changes on existing users.
> -Travis
>> Josef
>>> Josef
>>>> --
>>>> Travis Oliphant
>>>> (on a mobile)
>>>> 512-826-7480
>>>> On Jun 25, 2012, at 7:01 PM, Charles R Harris <charlesr.harris@gmail.com>
>>>> wrote:
>>>> On Mon, Jun 25, 2012 at 4:21 PM, Perry Greenfield <perry@stsci.edu> wrote:
>>>>> On Jun 25, 2012, at 3:25 PM, Charles R Harris wrote:
>>>>>> On Mon, Jun 25, 2012 at 11:56 AM, Perry Greenfield <perry@stsci.edu>
>>>>>> wrote:
>>>>>> It's hard to generalize that much here. There are some areas in what
>>>>>> you say is true, particularly if whole industries rely on libraries
>>>>>> that have much time involved in developing them, and for which it is
>>>>>> particularly difficult to break away. But there are plenty of other
>>>>>> areas where it isn't that hard.
>>>>>> I'd characterize the process a bit differently. I would agree that it
>>>>>> is pretty hard to get someone who has been using matlab or IDL for
>>>>>> many years to transition. That doesn't happen very often (if it does,
>>>>>> it's because all the other people they work with are using a different
>>>>>> tool and they are forced to). I think we are targeting the younger
>>>>>> people; those that do not have a lot of experience tied up in matlab
>>>>>> or IDL. For example, IDL is very well established in astronomy, and
>>>>>> we've seen few make that switch if they already have been using IDL
>>>>>> for a while. But we are seeing many more younger astronomers choose
>>>>>> Python over IDL these days.
>>>>>> I didn't bring up the Astronomy experience, but I think that is a
>>>>>> special case because it is a fairly small area and to some extent
>>>>>> you had the advantage of a supported center, STSci, maintaining some
>>>>>> software. There are also a lot of amateurs who can appreciate the
>>>>>> low costs and simplicity of Python.
>>>>>> The software engineers use tends to be set early, in college or in
>>>>>> their first jobs. I suspect that these days professional astronomers
>>>>>> spend a number of years in graduate school where they have time to
>>>>>> experiment a bit. That is a nice luxury to have.
>>>>> Sure. But it's not unusual for an invasive technology (that's us) to
>>>>> take root in certain niches before spreading more widely.
>>>>> Another way of looking at such things is: is what we are seeking to
>>>>> replace that much worse? If the gains are marginal, then it is very
>>>>> hard to displace. But if there are significant advantages, eventually
>>>>> they will win through. I tend to think Python and the scientific stack
>>>>> does offer the potential for great advantages over IDL or matlab. But
>>>>> that doesn't make it easy.
>>>> I didn't say we couldn't make inroads. The original proposition was that we
>>>> needed a polynomial class compatible with Matlab. I didn't think
>>>> compatibility with Matlab mattered so much in that case because not many
>>>> people switch, as you have agreed is the case, and those who start fresh, or
>>>> are the adventurous sort, can adapt without a problem. In other words, IMHO,
>>>> it wasn't a pressing issue and could be decided on the merits of the
>>>> interface, which I thought of in terms of series approximation. In
>>>> particular, it wasn't a 'gratuitous' choice as I had good reasons to do
>>>> things the way I did.
>>>> Chuck
>>>> _______________________________________________
>>>> NumPy-Discussion mailing list
>>>> NumPy-Discussion@scipy.org
>>>> http://mail.scipy.org/mailman/listinfo/numpy-discussion
>>>> _______________________________________________
>>>> NumPy-Discussion mailing list
>>>> NumPy-Discussion@scipy.org
>>>> http://mail.scipy.org/mailman/listinfo/numpy-discussion
>> _______________________________________________
>> NumPy-Discussion mailing list
>> NumPy-Discussion@scipy.org
>> http://mail.scipy.org/mailman/listinfo/numpy-discussion
> _______________________________________________
> NumPy-Discussion mailing list
> NumPy-Discussion@scipy.org
> http://mail.scipy.org/mailman/listinfo/numpy-discussion
More information about the NumPy-Discussion mailing list
|
{"url":"http://mail.scipy.org/pipermail/numpy-discussion/2012-June/062899.html","timestamp":"2014-04-17T10:41:50Z","content_type":null,"content_length":"17387","record_id":"<urn:uuid:8f2963c7-ca92-4193-8fde-94cb7ab83554>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00078-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Transform of Convolution
Next | Prev | Up | Top | Index | JOS Index | JOS Pubs | JOS Home | Search
From Eq.5.5), we have that the output linear time-invariant filter with input impulse response convolution of i.e.,
where ``z transform of both sides of Eq.6.3) and applying the convolution theorem from the preceding section gives
where H(z) is the z transform of the filter impulse response. We may divide Eq.6.4) by
This shows that, as a direct result of the convolution theorem, the z transform of an impulse response transfer function linear and time invariant.
Next | Prev | Up | Top | Index | JOS Index | JOS Pubs | JOS Home | Search [How to cite this work] [Order a printed hardcopy] [Comment on this page via email]
|
{"url":"https://ccrma.stanford.edu/~jos/filters/Z_Transform_Convolution.html","timestamp":"2014-04-17T01:21:14Z","content_type":null,"content_length":"12917","record_id":"<urn:uuid:86321bc7-2895-4053-9c94-a006978c6a78>","cc-path":"CC-MAIN-2014-15/segments/1397609526102.3/warc/CC-MAIN-20140416005206-00158-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Course Information
Textbook: Math 0300 Course Materials, Prepared by Cindi Bluhm (sold only in NVC bookstore)
Required Materials: Loose leaf notebook paper, a 1 inch 3 ring binder, 7 dividers with tabs, pencils, and medium index cards
Catalog Description: Topics include operations on fractions, decimals, and integers; order of operations; and appropriate applications. Please be advised that in addition to this course, you must
complete a mandatory laboratory component of 600 minutes.
Prerequisites: Placement by Accuplacer Exam or THEA Exam.
Semester Credit Hours: (3-3-0)
Course Outcomes: After the successful completion of this course, you will be able to solve application problems using the following skills:
1. Perform basic arithmetic operations with integers, fractions, and decimals.
2. Find the perimeter and area of polygons and circles.
3. Evaluate and simplify expressions using the order of operations.
4. Simplify arithmetic expressions involving positive exponents.
5. Use proportions to solve percent problems.
6. Evaluate and Simplify Algebraic Expressions
7. Solve applied problems using skills developed in this course.
Textbook: Beginning and Intermediate Algebra: 1^st Ed, McKenna & Kirk; Pearson.
ISBN10: 0201787377 or ISBN13: 9780201787375
Catalog Description: Topics include natural number exponents; algebraic expressions; linear equations and inequalities; concepts of lines; and appropriate applications. Please be advised that in
addition to this course, you must complete a mandatory laboratory component of 600 minutes.
Prerequisites: A grade of “C” or better in MATH 0300 or placement by Accuplacer Exam or THEA Exam.
Semester Credit Hours: (3-3-0)
Course Outcomes: After the successful completion of this course, you will be able to solve application problems using the following skills:
1. Perform operations on fractions and signed numbers.
2. Simplify algebraic expressions.
3. Solve linear equations and inequalities.
4. Solve literal equations for one variable.
5. Graph linear equations using (a) slope-intercept form, y = mx + b and (b) by finding x- and y-intercepts. Solve applied problems using skills developed in this course.
Textbook: Beginning and Intermediate Algebra: 1^st Ed , McKenna & Kirk; Pearson.
ISBN10: 0201787377 or ISBN13: 9780201787375
Catalog Description: Topics include integer exponents; polynomials; factoring; rational expressions; rational equations; and appropriate applications. Please be advised that in addition to this
course, you must complete a mandatory laboratory component of 600 minutes.
Prerequisites: A grade of “C” or better in MATH 0301 or placement by Accuplacer Exam or THEA Exam.
Semester Credit Hours: (3-3-0)
Course Outcomes: After the successful completion of this course, you will be able to solve application problems using the following skills:
1. Factor polynomials.
2. Perform arithmetic operations with polynomials.
3. Perform operations with rules of exponents.
4. Simplify rational expressions.
5. Solve factorable quadratic equations.
6. Perform arithmetic operations with rational expressions.
7. Solve applied problems using skills developed in this course.
Textbook: Beginning and Intermediate Algebra: 1^st Ed , McKenna & Kirk; Pearson.
ISBN10: 0201787377 or ISBN13: 9780201787375
Catalog Description: Topics include rational exponents; radicals; linear and quadratic equations; linear systems; concepts of relations and functions; and appropriate applications. Please be
advised that in addition to this course, you must complete a mandatory laboratory component of 600 minutes.
Prerequisites: A grade of “C” or better in MATH 0302 or placement by Accuplacer Exam or THEA Exam.
Semester Credit Hours: (3-3-0)
Course Outcomes: After the successful completion of this course, you will be able to solve application problems using the following skills:
1. Simplify and operate on complex numbers.
2. Solve quadratic equations by factoring, taking square roots, completing the square, and using the quadratic formula.
3. Simplify and operate on radical and rational expressions.
4. Solve equations involving radicals and rational expressions.
5. Graph and interpret parabolas.
6. Review topics covered throughout the developmental math curriculum.
7. Solve applied problems using skills developed in this course.
Information will be posted as soon as the syllabus for the Spring 2013 semester is revised...
Textbook: College Algebra: 2^nd Ed, Coburn; McGraw Hill.
ISBN: 978-0-07351941-8 or 978-0-07-746671-8
Catalog Description: Topics include functions, including the algebra of functions, composites, graphs of polynomial and rational functions, inverse functions, logarithmic and exponential functions,
systems of equations using Cramer's Rule, matrices and determinants, sequences and series.
Prerequisites: A grade of “C” or better in MATH 0303 or placement by Accuplacer Exam or THEA Exam. Other prerequisites include ENGL 0300 and READ 0302.
Semester Credit Hours: (3-3-0)
Course Outcomes: After successful completion of this course, you should be able to:
1. Find combinations, compositions, and inverses of functions.
2. Find the domain of a function given its algebraic or graphical form.
3. Solve and graph linear and second-degree equations.
4. Use linear transformations to graph functions or determine the function from its graph.
5. Graph polynomial and rational functions.
6. Solve inverse functions.
7. Graph exponential and logarithmic functions.
8. Solve systems of linear equations by substitution and elimination.
9. Solve applied problems using skills developed in this course, including applications involving percent, simple and compound interest, taxes, and mathematical modeling with emphasis on exponential
growth and decay, logistics growth, and logarithmic models.
Textbook: College Mathematics for Business, Economics, Life Sciences, and Social Sciences, twelfth edition by Barnett, Ziegler, and Byleen; Pearson Prentice Hall, 2011.
Course Description: Topics will include linear equations, quadratic equations, functions and graphs, inequalities, simple and compound interest, annuities, linear programming, matrices, systems of
linear equations, applications to management, economics, and business.
Prerequisites: A grade of "C" or better in MATH 0303 or placement by Accuplacer Exam or THEA Exam. Other prerequisites include ENGL 0300 and READ 0302.
Credits: (3-3-0)
Course Outcomes: After the successful completion of this course, you will be able to solve application problems using the following skills:
1. Solve and graph linear equations and inequalities.
2. Evaluate and graph functions, including quadratic, exponential, and logarithmic functions.
3. Solve problems involving simple interest and compound interest, including, including interest compounded continuously.
4. Find future value and present value of an annuity.
5. Solve systems of linear equations.
6. Solve systems of linear inequalities in two variables.
7. Maximize and minimize expressions using the Simplex Method.
8. Solve applied problems using skills developed in this course.
Textbook: College Mathematics for Business, Economics, Life Sciences, and Social Sciences,
12^th Edition by Barnett, Ziegler, and Byleen; 2011
Pearson Prentice Hall
ISBN: 0-321-6454-6
Catalog Description: Topics include limits, continuity, derivatives of algebraic functions, implicit differentiation, higher-order derivatives, extrema, logarithmic and exponential functions, and
integrals. Emphasis is on applications to business.
Prerequisites: A grade of "C " or better in MATH 1324 (Finite Mathematics) or MATH 1314/MATH 1414 (College Algebra).
Credits: (3-3-0)
Course Outcomes: After successful completion of this course, you will have acquired the skills to:
1. Find limits of functions.
2. Determine the limit at infinity.
3. Define the equations of tangent lines.
4. Find derivatives of polynomial, rational, and other functions.
5. Apply the chain rule.
6. Determine relative and absolute extrema.
7. Differentiate exponential and logarithmic functions.
8. Find simple indefinite integrals.
9. Use definite integrals to find areas.
10. Find marginal cost, marginal revenue, and marginal profit.
Textbook: Math for Liberal Arts, Math 1332, Fall 2012 Edition. Available only at NVC Bookstore . OR Thinking Mathematically: 5th Ed, Blitzer; Pearson.
ISBN 0-321-64585-5
Catalog Description: This course is designated for nonmathematics majors and nonscience majors who need three hours of mathematics for degree requirements. The course includes topics from logic,
algebra, trigonometry, and probability.
Prerequisites: MATH 0302 with a grade of "C" or better, or equivalent. Other prerequisites include ENGL 0300 and READ 0302.
Semester Credit Hours: (3-3-0)
Course Objectives: With emphasis on the bolded items, the student will be able to:
1. Exhibit an understanding of elementary, intuitive set theory, including the concepts of subset, union, intersection, and cardinal number
2. Develop symbolic truth tables
3. Change numbers from base ten to other bases and vice versa
4. Exhibit facility with solving problems with the calculator, including integer problems, decimal problems, and percent problems
5. Convert within the metric system and between the metric system and the U.S. customary system
6. Solve elementary linear programming problems
7. Exhibit an understanding of elementary probability
8. Compute the mean, median, mode and standard deviation of a distribution
9. Use the normal curve to solve elementary problems in statistics
10. Calculate simple and compound interest
11. Find the sum of arithmetic and geometric series
Textbook: Mathematical Reasoning for Elementary Teachers (Authors: Long, DeTemple, Millman) 6^th ed. ISBN-13: 978-0-321-71718-4 ISBN-10: 0-321-71718-X
Required Materials: course packet, a 2 inch 3-ring D binder, reinforcements, loose-leaf paper, pencils, 10 plastic pocket dividers
Catalog Description: Topics include sets, functions, numeration systems, number theory, and properties of the natural numbers, integers, rational, and real number systems. The emphasis is conceptual
understanding, problem solving, and critical thinking. This course is designed specifically for students seeking teacher certification through grade 8.
Prerequisites: MATH 1314 (College Algebra) with a grade of "C" or better, or equivalent.
Semester Credit Hours: (3-3-0)
Course Outcomes: After successful completion of this course, you should be able to:
1. Use various techniques in problem solving.
2. Use inductive and deductive reasoning in problem solving.
3. Recognize different types of sets and use Venn diagrams in problem solving.
4. Use algebra and function notation in problem solving.
5. Perform operations using different numeration systems.
6. Perform operations using alternative methods.
7. Use divisibility rules in factoring.
8. Use prime and composite numbers in finding GCF and LCM.
9. Perform operations using integers, decimals, and fractions.
10. Order integers, fractions, and decimals.
11. Solve applied problems using skills developed in this course.
Textbook: Mathematical Reasoning for Elementary Teachers (Authors: Long, DeTemple, Millman) 6^th ed. ISBN-13: 978-0-321-71718-4 ISBN-10: 0-321-71718-X
Required Materials: course packet, a 2 inch 3-ring binder, reinforcements, loose-leaf paper, pencils, 10 plastic pocket dividers
Catalog Description: Topics include geometry, measurement, algebraic properties, data representation, probability, and statistics. The emphasis is conceptual understanding, problem solving, and
critical thinking. This course is designed specifically for students seeking teacher certification through grade 8.
Prerequisites: MATH 1350 with a grade of "C" or better, or equivalent.
Semester Credit Hours: (3-3-0)
Course Outcomes: After successful completion of this course, you should be able to:
1. Solve problems using ratio, proportions, and percents.
2. Represent and interpret data.
3. Use and interpret measures of central tendency and dispersion.
4. Use counting techniques and interpret measures of chance.
5. Use the basic ideas of geometry to recognize and to examine two and three-dimensional figures.
6. Work problems using transformations, tessellations, similarity, and topology.
7. Use different systems of measurement and convert from one to another.
8. Work problems concerning perimeter and area.
9. Work problems concerning surface area and volume.
10. Solve problems using the Pythagorean Theorem.
11. Solve applied problems using the skills developed in this course.
Textbook: College Algebra and Trigonometry: 2^nd Ed, Ratti & McWaters; Pearson Addison Wesley.
ISBN: 0-321-64471-9
Catalog Description: This course includes the study of quadratics; polynomial, rational, logarithmic, and exponential functions; systems of equations; progressions; sequences and series; and
matrices and determinants.
Prerequisites: A grade of “C” or better in MATH 0303 or placement by Accuplacer Exam. Other prerequisites include ENGL 0300 and READ 0302.
Semester Credit Hours: (4-4-0)
Course Outcomes: After successful completion of this course, you should be able to:
1. Review exponents, dimensional analysis, factoring, and simplifying rational expressions.
2. Find combinations, compositions, and inverses of functions.
3. Find the domain of a function given its algebraic or graphical form.
4. Solve and graph linear and second-degree equations.
5. Use linear transformations to graph functions or determine the function from its graph.
6. Solve and graph polynomial and rational functions.
7. Solve and graph exponential and logarithmic functions.
8. Solve application problems covering simple interest, taxes, exponential growth and decay models, logistics growth, logarithmic models, etc.
9. Solve systems of linear equations by substitution, elimination, and matrices.
10. Perform algebraic operations on matrices and find the inverse and determinant of a matrix.
Textbook: Essentials of Statistics, 4th ed. by Mario F. Triola, Pearson Education, Inc.
ISBN: 0-321-64149-3
Catalog Description: This course is a non-calculus introduction to statistics. Topics include the presentation and interpretation of data (using histograms and other charts, measures of location
and dispersion, and exploratory data analysis), elementary probability and probability distribution functions (binomial, normal, t, chi-square), confidence intervals, hypothesis testing, correlation
and linear regression, analysis of variance, and the use of statistical software.
Prerequisites: A grade of “C” or better in MATH 0303 or MATH 0442 or placement by Accuplacer Exam. Other prerequisites include ENGL 0301 and READ 0303.
Semester Credit Hours: (4-4-0)
Course Outcomes: This course will prepare the student to better understand a world ruled by statistics including their utility possible bias. For students whose curriculum includes data analysis,
this course will provide the groundwork for experimental design and analysis. This course is to provide the student with an understanding of statistical concepts and problem-solving methods for use
in his/her chosen field or in further mathematics studies. The goals of the course will be achieved by the following course outcomes. After the successful completion of this course, you will be able
to do the following skills:
1. To read and be able to critically interpret scientific statistical information.
2. Be able to organize and present data in a variety of formats.
3. Understand the fundamentals of probability.
4. Understand measures of central tendencies and dispersion.
5. Interpret the standard normal, binomial, and student t distribution tables.
6. Sampling and making inferences about populations using distributions is the fundamental goal of this course. Distributions used will include the binomial, uniform, student t, and the normal.
7. Construct confidence intervals and interpret the probability of a true population mean falling within an interval.
8. Design and test hypotheses based on the normal sampling and proportional sampling.
9. An introduction to linear correlations and regressions.
Textbook: Linear Algebra and Its Applications, 4th Edition by David C. Lay Pearson/ Addison Wesley, 2005
ISBN: 0321385179
Catalog Description: Topics include systems of linear equations, matrices and matrix operations, determinants, vectors and
vector spaces, inner products, change of bases, linear transformations, eigenvalues and eigenvectors
Prerequisites: MATH 2413 with a grade of “C” or better, or equivalent. Other prerequisites include ENGL 0301 and READ 0303.
Semester Credit Hours: (3-3-0)
Course Outcomes: After successful completion of this course, the student should be able to:
1. Solve a linear system using row operations and augmented matrices.
2. Find linear combinations of vectors.
3. Solve matrix equations.
4. Determine linear dependence and independence.
5. Find images of linear transformations.
6. Perform matrix operations.
7. Find the inverse of a matrix.
8. Determine which matrices are invertible.
9. Compute determinants.
10. Determine vector spaces and vector subspaces.
11. Find a basis for a space.
12. Find a coordinate vector related to a given base.
13. Find a basis and the dimension of a vector space.
14. Find the rank of a matrix.
15. Find the eigenvalues and eigenvectors of a matrix.
16. Find the characteristic equation of a square matrix.
17. Diagonalize a given matrix.
18. Find inner products, lengths, and orthogonality of vectors.
19. Find an orthogonal projection of one vector onto a span.
20. Find a Least-Squares solution of a matrix equation.
(This course is only offered on the basis of need and at the discretion of the department. Refer to your transfer institution to see if this course will count.)
Textbook: (At discretion of instructor)
Catalog Description: Ordinary differential equations, including linear equations, systems of equations, equations with variable coefficients, existence and uniqueness of solutions, series solutions,
singular points, transform methods, and boundary value problems; application of differential equations to real-world problems.
Prerequisites: MATH 2414 with a grade of "C" or better, or equivalent. Other prerequisites include ENGL 0301 and READ 0303.
Semester Credit Hours: (3-3-0)
Course Objectives:
Textbook : College Algebra and Trigonometry 2nd ed. By Ratti/McWaters ISBN: 978-0-321-64471-8
Catalog Description: This course applies algebra and trigonometry to the study of polynomial, rational, exponential, logarithmic, and trigonometric functions and their graphs. Also included are conic
sections; circular and trigonometric functions, inverse circular functions, identities, conditional equations, graphs, solution of triangles, polar coordinates, complex numbers, and vectors; and
mathematical induction.
Prerequisites: MATH 1314 or MATH 1414 with a grade of “C” or better, or permission by department. Other prerequisites include ENGL 0301 and READ 0303.
Semester Credit Hours: (4-4-0)
Course Outcomes: After successful completion of this course, you should be able to solve problems using the following skills:
1. Exhibit an understanding of second degree relations, functions, and graphs.
2. Decompose expressions using partial fractions.
3. Exhibit an understanding of functions and their graphs, including conic sections and polynomial, exponential, and logarithmic functions.
4. Determine composites and inverses of functions.
5. Solve exponential and logarithmic equations.
6. Determine the values of trigonometric functions, including those involving special angles and right triangles.
7. Graph trigonometric functions; illustrate wavelength and phase shift.
8. Solve problems involving trigonometric identities and equations.
9. Solve problems involving the Law of Sines and the Law of Cosines.
10. Solve introductory problems involving vectors in the plane and polar coordinates.
11. Solve applied problems using skills developed in this course.
Textbook: Essential Calculus: Early Transcendentals, 2nd Edition James Stewart McMaster University
ISBN-10: 1133112285 ISBN-13: 9781133112280
Catalog Description: This course includes limits, continuity, derivatives and integrals of algebraic, transcendental, and inverse trigonometric functions, implicit differentiation and higher order
derivatives, related rates, Rolle’s theorem, mean value theorem, velocity, acceleration, curve sketching and other applications of the derivative, indeterminate forms and L’Hopital’s rule, area,
Riemann sums, and the fundamental theorems of calculus.
Prerequisites: MATH 1316 or MATH 2412 with a grade of “C” or better, or equivalent or permission by department. Other prerequisites include ENGL 0301 and READ 0303.
Semester Credit Hours: (4-4-0)
Course Objectives: After successful completion of this course, you will be able to:
1. Exhibit a thorough understanding of functions.
2. Express the continuity of a function in terms of limits.
3. Determine equations of tangent lines.
4. Understand the concept of the derivative of a function.
5. Find derivatives using the power, product and quotient rules.
6. Find derivatives of trigonometric functions.
7. Find derivatives using the Chain Rule and generalized power rule.
8. Find higher-order derivatives.
9. Find derivatives of implicit functions.
10. Solve applied problems, including those involving related rates.
11. Find derivatives of exponential and logarithmic functions.
12. Evaluate indeterminate forms using L'Hospital's Rule.
13. Determine critical numbers, Maximum & minimum values, and inflection points.
14. Understand the implications of the Mean Value Theorem.
15. Exhibit facility sketching curves.
16. Solve optimization problems.
17. Solve other applied problems using skills developed in this course.
Textbook: Essential Calculus: Early Transcendentals, 2nd Edition James Stewart McMaster University
ISBN-10: 1133112285 ISBN-13: 9781133112280
Catalog Description: This course includes areas between curves, volumes, arc length, surface area of a solid of revolution and other applications of integration, techniques of integration, numerical
integration, improper integrals, parametric equations, derivatives, areas, and lengths in polar coordinates, sequences, and series.
Prerequisites: MATH 2413 with a grade of "C" or better, or equivalent. Other prerequisites include ENGL 0301 and READ 0303.
Semester Credit Hours: 4 - 4 - 0
Course Objectives: The student should be able to solve problems involving:
1. Sum, product, quotient, chain, and power rules; implicit differentiation,
2. Maxima, minima, extrema, points of inflection, curve sketching,
3. Limit points, limits of functions, one-sided and infinite limits, and continuity,
4. Derivatives and integrals involving trigonometric functions,
5. Derivatives, integrals, and applications involving logarithmic and exponential functions,
6. Derivatives and integrals involving inverse trigonometric functions,
7. Indeterminate forms and L'Hospital's Rule,
8. Methods of integration (formulas, substitutions, trigonometric substitutions,
9. Arc length, volumes, and surface areas of solids of revolution,
10. Conic sections,
11. Parametric equations and derivatives,
12. Polar coordinates and graphs, polar-rectangular relationships, derivatives and areas in polar coordinates, and
13. Sequences and series, comparison, limit comparison ratio, and integral tests.
Textbook: Essential Calculus: Early Transcendentals, 2nd Edition James Stewart McMaster University
ISBN-10: 1133112285 ISBN-13: 9781133112280
Catalog Description: This course includes vector calculus, vector-valued functions, tangents to curves, velocity vector, curl, partial derivatives, chain rule, gradients, implicit functions, extrema
of functions of several variables, multiple integrals including change of order and applications, surface integrals, and path independent line integrals.
Prerequisites: MATH 2414 with a grade of “C” or better, or equivalent. Other prerequisites include ENGL 0301 and READ 0303.
Semester Credit Hours: (4-4-0)
Course Objectives: The student should be able to develop a conceptual understanding of the fundamental ideas underlying geometry in the space and vectors, derivatives of functions of two or more
variables, multiple integrals and vector calculus.
Introduction/3D Coordinate Systems
Vectors & Vector Algebra
Lines & Planes
Cylinders & Quadratic Surfaces
Vector Functions & Space Curves
Functions of Several Variables/Partial Derivatives
The Chain Rule
Directional Derivatives & The Gradient Vector
Maximum and Minimum Values & Lagrange Multipliers
Double Integrals/ Applications of Double Integrals
Triple Integrals/Applications of Triple Integrals
Change of Variables in Multiple Integrals
Vector Calculus/Line Integrals
Green’s Theorem, Curl & Gauss’s Divergence Theorem
Green’s Theorem, Curl & Gauss’s Divergence Theorem
Parametric Surfaces, Surface Integrals & Stoke’s Theorem
Math Area Contacts
Tori Martinez
Administrative Service Specialist
Phone: (210) 486-4298
Email: vmartinez420@alamo.edu
Office: JH-213
Yvette Uresti
Adjunct Coordinator
Phone: (210) 486-4320
Email: yuresti@alamo.edu
Office: JH-213P
Thomas Pressly, PhD
Department Chair
Phone: (210) 486-4841
Email: tpressly@alamo.edu
Office: JH-213F
|
{"url":"http://www.alamo.edu/nvc/academics/departments/math/resources/courses/","timestamp":"2014-04-21T07:10:56Z","content_type":null,"content_length":"203393","record_id":"<urn:uuid:6a344cdb-5696-419f-b00c-4c693ef06998>","cc-path":"CC-MAIN-2014-15/segments/1397609539665.16/warc/CC-MAIN-20140416005219-00428-ip-10-147-4-33.ec2.internal.warc.gz"}
|
[SOLVED] Regression of many variables
July 31st 2009, 10:22 AM #1
Jul 2009
[SOLVED] Regression of many variables
I have a pretty good understanding of regressions from high school statistics, with one independent variable and one dependent variable.
I would like to create a regression with a vector of N independent variables and a vector of M dependent variables. At this point I don't know what shape my data will be, so I figure a high-order
polynomial regression will work. Maybe something of this form:
$y_i = \sum_{j=1}^N\sum_{k=0}^{order}{a_{ijk}*{x_j}^k}$
Is this something that is done very often or am I on the wrong track? I would be happy to just get some links.
By letting $w_{jk}=x_j^k$ this is linear in the coefficient, a's.
So you can apply multivariate regression here.
The question is, whether or not your errors are normal or not.
See 'General linear data model' at http://en.wikipedia.org/wiki/Regression_analysis
Last edited by matheagle; July 31st 2009 at 10:33 PM.
Cool, thanks for that. I did a lot of reading and it turns out I was looking for supervised learning.
July 31st 2009, 09:39 PM #2
August 5th 2009, 09:03 AM #3
Jul 2009
|
{"url":"http://mathhelpforum.com/advanced-statistics/96623-solved-regression-many-variables.html","timestamp":"2014-04-16T20:39:48Z","content_type":null,"content_length":"36299","record_id":"<urn:uuid:a73ad381-5636-4a9c-b568-21c20023cc2b>","cc-path":"CC-MAIN-2014-15/segments/1397609524644.38/warc/CC-MAIN-20140416005204-00043-ip-10-147-4-33.ec2.internal.warc.gz"}
|
The Mathematician's Brain
The Mathematician's Brain poses a provocative question about the world's most brilliant yet eccentric mathematical minds: were they brilliant because of their eccentricities or in spite of them? In
this thought-provoking and entertaining book, David Ruelle, the well-known mathematical physicist who helped create chaos theory, gives us a rare insider's account of the celebrated mathematicians he
has known-their quirks, oddities, personal tragedies, bad behavior, descents into madness, tragic ends, and the sublime, inexpressible beauty of their most breathtaking mathematical discoveries.
Consider the case of British mathematician Alan Turing. Credited with cracking the German Enigma code during World War II and conceiving of the modern computer, he was convicted of "gross indecency"
for a homosexual affair and died in 1954 after eating a cyanide-laced apple--his death was ruled a suicide, though rumors of assassination still linger. Ruelle holds nothing back in his revealing and
deeply personal reflections on Turing and other fellow mathematicians, including Alexander Grothendieck, René Thom, Bernhard Riemann, and Felix Klein. But this book is more than a mathematical
tell-all. Each chapter examines an important mathematical idea and the visionary minds behind it. Ruelle meaningfully explores the philosophical issues raised by each, offering insights into the
truly unique and creative ways mathematicians think and showing how the mathematical setting is most favorable for asking philosophical questions about meaning, beauty, and the nature of reality.
The Mathematician's Brain takes you inside the world--and heads--of mathematicians. It's a journey you won't soon forget.
Review: The Mathematician's Brain: A Personal Tour Through the Essentials of Mathematics and Some of the Great Minds Behind Them
User Review - Tom - Goodreads
Only problem with this book is that it's too short. Read full review
Review: The Mathematician's Brain: A Personal Tour Through the Essentials of Mathematics and Some of the Great Minds Behind Them
User Review - David - Goodreads
This book is not at all what I expected from the blurb. While Grothendieck and Turing are certainly mentioned, this book is not primarily about mathematical characters and their quirks. Rather, it is
... Read full review
Scientific Thinking 1
What Is Mathematics? 5
The Erlangen Program 11
Mathematics and Ideologies 17
The Unity of Mathematics 23
A Glimpse into Algebraic Geometry and Arithmetic 29
A Trip to Nancy with Alexander Grothendieck 34
Structures 41
Structures and Concept Creation 73
Turings Apple 78
Mathematical Invention Psychology and Aesthetics 85
The Circle Theorem and an Infinite Dimensional Labyrinth 91
Mistake 97
The Smile of Mona Lisa 103
Tinkering and the Construction of Mathematical Theories 108
The Strategy of Mathematical Invention 113
The Computer and the Brain 46
Mathematical Texts 52
Honors 57
Infinity The Smoke Screen of the Gods 63
Foundations 68
Mathematical Physics and Emergent Behavior 119
The Beauty of Mathematics 127
Notes 131
Index 157
References from web pages
David Ruelle: The Mathematician’s Brain
J Stat Phys (2008) 130: 823–825. DOI 10.1007/s10955-007-9469-8. David Ruelle: The Mathematician’s Brain. Princeton University Press, Princeton, 2007 ...
www.springerlink.com/ index/ 1M15725WU0615554.pdf
Sample Chapter for Ruelle, D.: The Mathematician's Brain: A ...
Sample Chapter for The Mathematician's Brain: A Personal Tour Through the Essentials of Mathematics and Some of the Great Minds Behind Them by Ruelle, D., ...
press.princeton.edu/ chapters/ s8477.html
Convergence | The Mathematician's Brain
A book delving into the working of the mathematical mind
mathdl.maa.org/ convergence/ 1/ ?pa=content& sa=viewDocument& nodeId=1660
Inside the Mathematical Mind - August 29, 2007 - The New York Sun
Books | Review of: The Mathematician's Brain ... If "The Mathematician's Brain" does not answer the questions it poses, this is because no other book has ...
www.nysun.com/ article/ 61508
What do mathematicians do? : Article : Nature
BOOK REVIEWED-The Mathematician's Brain: A Personal Tour Through the Essentials of ... The Mathematician's Brain by David Ruelle tackles some of the same ...
www.nature.com/ nature/ journal/ v449/ n7165/ full/ 449982b.html
Not Even Wrong » Blog Archive » Book Reviews
Book Reviews. Felix Berezin. Misha Shifman has edited a wonderful book about the mathematician Felix Berezin, which recently appeared with the title Felix ...
www.math.columbia.edu/ ~woit/ wordpress/ ?p=596
Mindful Hack: Mathematics is more than just climbing "the greasy ...
Mathematician David Berlinski muses gracefully on the nature of mathematical genius, while reviewing David Ruelle's new book, The Mathematician's Brain. ...
mindfulhack.blogspot.com/ 2007/ 09/ mathematics-is-more-than-just-climbing.html
A Certain Ambiguity Topics in Commutative Ring Theory The ...
“Mixing fiction with nonfiction, A Certain Ambiguity is a veritable. history of mathematics disguised as a novel. Starting with the ...
www.ams.org/ notices/ 200709/ tx070901135p.pdf
Powell's Books - The Mathematician's Brain: A Personal Tour ...
The Mathematician's Brain poses a provocative question about the world's most brilliant yet eccentric mathematical minds: were they brilliant because of ...
www.powells.com/ biblio/ 9780691129822
The Mathematician's Brain - Boek - BESLIST.nl
Bekijk en vergelijk informatie, beoordelingen, vragen & antwoorden en de beste winkels voor 'The Mathematicians Brain' op BESLIST.nl ▪ Boeken Engels ...
boeken_engels.beslist.nl/ boeken_engels/ d0001250387/ The_Mathematicians_Brain.html
Bibliographic information
|
{"url":"http://books.google.ca/books?id=B3A1bjOkOaEC&redir_esc=y","timestamp":"2014-04-23T18:56:21Z","content_type":null,"content_length":"123968","record_id":"<urn:uuid:26b33d39-5d04-4c0d-bb8b-5ae158e91bd6>","cc-path":"CC-MAIN-2014-15/segments/1398223203235.2/warc/CC-MAIN-20140423032003-00039-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Title Construction of the integral closure of an affine domain in a finite field extension of its quotient field
Author(s) Sihem Mesnager
Type Article in Journal
The construction of the normalization of an affine domain over a field is a classical problem solved since sixteen's by Stolzenberg (1968) and Seidenberg (1970–1975) thanks to
classical algebraic methods and more recently by Vasconcelos (1991–1998) and de Jong (1998) thanks to homological methods. The aim of this paper is to explain how to use such a
Abstract construction to obtain effectively the integral closure of such a domain in any finite extension of its quotient field, thanks to Dieudonné characterization of such an integral closure.
As application of our construction, we explain how to obtain an effective decomposition of a quasi-finite and dominant morphism from a normal affine irreducible variety to an affine
irreducible variety as a product of an open immersion and a finite morphism, conformly to the classical Grothendieck's version of Zariski's main theorem.
Length 17
Copyright Elsevier B.V.
URL doi:10.1016/j.jpaa.2004.04.011
Language English
Journal Journal of Pure and Applied Algebra
Volume 194
Number 3
Pages 311 - 327
Year 2004
Month December
Translation No
Refereed No
|
{"url":"http://www.ricam.oeaw.ac.at/Groebner-Bases-Bibliography/details.php?details_id=663","timestamp":"2014-04-21T03:56:34Z","content_type":null,"content_length":"4479","record_id":"<urn:uuid:2ee570ad-d1e8-4c98-a43c-2cc29c95c47e>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00172-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Thermal-Hydraulic Phenomena - February 20, 2001
Official Transcript of Proceedings
NUCLEAR REGULATORY COMMISSION
Title: Advisory Committee on Reactor Safeguards
Thermal-Hydraulic Phenomena Subcommittee
Docket Number: (not applicable)
Location: Rockville, Maryland
Date: Tuesday, February 20, 2001
Work Order No.: NRC-076 Pages 1-292
NEAL R. GROSS AND CO., INC.
Court Reporters and Transcribers
1323 Rhode Island Avenue, N.W.
Washington, D.C. 20005
(202) 234-4433 UNITED STATES OF AMERICA
NUCLEAR REGULATORY COMMISSION
+ + + + +
ADVISORY COMMITTEE ON REACTOR SAFEGUARDS
THERMAL-HYDRAULIC PHENOMENA SUBCOMMITTEE
+ + + + +
FEBRUARY 20, 2001
+ + + + +
ROCKVILLE, MARYLAND
+ + + + +
The Subcommittee met at the Nuclear
Regulatory Commission, Two White Flint North, Room
T2B3, 11545 Rockville Pike, at 8:30 a.m., Dr. Graham
B. Wallis, Chairman, presiding.
COMMITTEE MEMBERS:
GRAHAM B. WALLIS Chairman
THOMAS S. KRESS Member
DANA A. POWERS Member
WILLIAM J. SHACK Member
Virgil Schrock
Novak Zuber
ACRS STAFF PRESENT:
Paul Boehnert
Ralph Caruso
Ralph Landry
Joe Staudemeyer
ALSO PRESENT:
Jack Haugh
Mark Paulsen
G. Swindelhurst
AGENDA ITEM PAGE
Introduction by Chairman Wallis. . . . . . . . . . 4
RETRAN-3D T/H, R. Landry, NRR. . . . . . . . . . . 8
EPRI Presentation
G. Swindelhurst, Duke Energy . . . . . . . . . .54
M. Paulsen, CSA. . . . . . . . . . . . . . . . .92
NRR Staff Presentation: Status of T/H Code . . . 282
Review Submittals, R. Landry
Subcommittee Caucus
Follow-on Items from this Meeting
Future Actions
Committee Action
Adjournment. . . . . . . . . . . . . . . . . . . 292
(8:30 a.m.)
CHAIRMAN WALLIS: The meeting will now
come to order. This is a meeting of the ACRS
Subcommittee on Thermal-Hydraulic Phenomena. I am
Graham Wallis, the Chairman of the Subcommittee.
ACRS members in attendance are Doctors
Thomas Kress, Dana Powers and William Shack. ACRS
consultants in attendance are Messers Virgil Schrock
and Novak Zuber, who also have PhDs.
The purpose of this meeting is for the
Subcommittee to continue its review of the Electric
Power Research Institute RETRAN-3D thermal-hydraulic
transient analysis code and discuss the status of the
NRC staff's pending reviews of industry thermal-
hydraulic codes.
The Subcommittee will gather information,
analyze relevant issues and facts, and formulate
proposed positions and actions, as appropriate, for
deliberation by the full Committee.
Mr. Paul Boehnert is the cognizant ACRS
Staff Engineer for this meeting.
The rules for participation in today's
meeting have been announced as part of the notice of
this meeting previously published in the Federal
Register on January 30, 2001.
Portions of the meeting may be closed to
the public, as necessary, to discuss information
considered proprietary to Electric Power Research
Institute. I would ask EPRI to point out if that is
the case at anytime.
A transcript of this meeting is being
kept, and the open portions of this transcript will be
made available as stated in the Federal Register
notice. It is requested that speakers first identify
themselves and speak with sufficient clarity and
volume so that they can be readily heard.
We have received no written comments or
requests for time to make oral statements from members
of the public.
Now I am going to do what I almost never
do at these meetings, and that's make some preliminary
There is a history to this story. About
two years ago we received some documents from EPRI
describing their code RETRAN-3D and, having read these
documents, I made a presentation to the ACRS which was
concerned with some problems with the momentum
equations and their formulation and use.
EPRI met with us after that, still almost
two years ago, but we never had any technical
discussion or resolution of the issues at that time.
Since then, there's been an exchange of RAIs and
responses between the staff and EPRI, and the staff
has prepared an SER. I'm not quite sure if it is
drafted at this time or final, but the ACRS itself
hasn't been directly involved in this issue since
1999. So this is our chance to really get to grips
with it.
I suggest there are three questions or
maybe six. There are three questions, and each one of
them raises another one that goes with it.
The first one is: What are the
formulations of these equations? Let's clarify.
Let's get the information straight so we know exactly
what is going on. It's a fact finding question.
Related to that is the question that goes
along with it, which is: Are they valid, and under
what circumstances and with what kind of
approximations or whatever?
The next question is: How are these
formulations used? How do they actually apply to the
real life nodes, control volumes and whatever in
reactor systems? The question that goes along with
that is: Are these methods of use valid? What's the
basis for validity, and what's perhaps the limitations
and so on?
The third question is: How does the
overall code work using these particular methods?
Going along with that is the question: What's the
basis for validation of this code?
I separate these questions, because it's
conceivable that the formulations contain
approximations, even errors, maybe used in a way which
is difficult or can be qualified in some way, but
there is a claim still made that, nonetheless, the
code works, because there's some measure of working
which is applied to the code.
The other thing I wish to say is I can't
imagine how we would spend all day on these issues,
and I actually have a plan to leave at three o'clock.
Originally, we were going to have about an hour
presentation, and I didn't expect that we would spend
all day on these matters. Let's see how it goes.
I'm not sure just how EPRI is going to
prepare, but if you prepared -- if you can address the
three questions that I posed in the order that they
were posed, that would help me anyway.
So I'm sorry to take the time of the
Subcommittee, and now I will call upon Dr. Ralph
Landry who is bursting with enthusiasm to give his
view on this matter.
DR. LANDRY: Thank you, Mr. Chairman. I
don't know if I should say bursting with enthusiasm to
get the view known, but I'll try to get it known. I
think in our presentation and in our SER we do, in a
sense, address some of the questions and our views of
the answers to some of your questions.
First, what I'd like to do is very quickly
go over the topics that we are going to cover and a
quick rundown of some of the milestones, just to
refresh the new members of the subcommittee on what we
have done with this code, because it has been for
quite a period of time.
So I'd like to get a highlight of the
milestones, talk a little bit about the staff approach
to the review, the evaluation of some of the aspects
of RETRAN-3D which we did in the evaluation, which
includes momentum equation, 5-equation model, critical
flow model, and down the list talk a little bit about
using RETRAN-3D in a RETRAN-02 mode.
One of the concerns that was raised by the
applicant was that they would like to have permission
to use RETRAN-3D in substitute for RETRAN-02, and I'll
have some remarks on that, because you can't exactly
substitute the code. There are changes that cannot go
back in time to the old code.
I would like to touch briefly on the
conditions on use. The former versions of the code
had an extensive number of revision conditions on use.
We have reviewed some of those, and we have added
more. Then the conclusions of the staff.
(Slide change)
DR. LANDRY: Very quickly, as the Chairman
said, about two years ago, approaching three years
ago, we received the request to review RETRAN-3D. We
received the code itself and documentation in
September of 1998. In December of that year we issued
our acceptance for review of the material.
We met with the Subcommittee in December
of '98, March, May, July of '99, March 2000 and again
now in 2001, a lot of meetings that we've held with
the Subcommittee. There was a meeting with the full
Committee, as the Chairman pointed out, at which time
he expressed his concerns with some of the material in
the documentation.
The staff has met with EPRI on a lot of
times, and we prepared our SER in December of 2000.
(Slide change)
DR. LANDRY: The approach that we took to
this review is, as we have said several times, in the
past we used a lot of contractor support in reviewing
codes. This was one of the first codes in a long time
in which we assembled a staff group to do the review
without relying on contractors.
We assembled a group of four, which a
former member of the Subcommittee referred to as "the
Gang of Four," to perform the review, people with
expertise in thermal-hydraulics, kinetics, numerics,
that could look at the code and do a review.
Originally, we had planned on
concentrating on only the differences between RETRAN-
3D and RETRAN-02. However, fundamental problems that
were pointed out caused us to go back and start
looking at the basis in the code itself, some of the
fundamentals which we had not planned on reviewing.
We exercised the code extensively. We
made many, many, many computer runs using models which
we obtained from the applicant, models which we put
together, attempts to break the code, to find where
the code could fail and where it had shortcomings.
We looked at the conditions and
limitations on the previously approved versions of the
code. We identified additional conditions and
limitations, and we put together a long list of
conditions on use of RETRAN-3D as RETRAN-02
substitute. I'll go into some of those a little later
in this presentation.
(Slide change)
DR. LANDRY: Okay. One of the first
problems we ran into in looking at extensive concerns
with the code was with the momentum equation. Some of
these problems, as Dr. Zuber has pointed out, go all
the way back to 1974, and with the RELAP3 and RELAP4
codes from which RETRAN derived. In fact, I believe
some of them even go back all the way to the FLASH
Some of the points of concern that the
staff raised, and these are all delineated and
discussed further in the SER: We were concerned with
the attempt at rigor in the derivation of the momentum
equation. A lot of effort is spent on a derivation,
forms, terms that are not really in RETRAN-3D itself.
We have problems with the notation that
was used in the derivation in the text. The
documentation goes through an indicial notation, then
goes into a non-standard notation. So that when we
thought we understood an equation on one page, we
encounter the equation on another page and, because of
the change in notation, it's a totally different
equation. We have to sit down and try to figure out
what in the world we are looking at on the next page.
There are typographical errors. Sometimes
we weren't sure if we were seeing typographical errors
or changes in notation.
Distributed descriptions occurred in the
text. Descriptions of the equations spanned sections
and chapters in the documentation. There wasn't one
concise description and derivation.
Nomenclature is missing. Sometimes terms
are defined within the text. When we have found a
term in an equation we didn't understand, we didn't
know if it was a change in notation, a typographical
error, or if we had to go back and start reading text
to find what the term meant, because it wasn't in the
nomenclature list.
DR. SCHROCK: Excuse me, Ralph. On slide
3 you had an item "acceptance of RETRAN-3D for
review." It seems to me that so much of the effort --
it's just wheel spinning here -- could have been
avoided if you had stepped up to the plate and said at
that time, this list of things renders this submittal
inadequate for NRC review at this time.
I think that's what you were driving for
in your revision of the standard review plan and in
the reg guide supporting it. So I think you've got to
address that issue at some time. When are you going
to do that?
DR. LANDRY: Well, some of these problems
-- Let me back up. We have to understand what the
acceptance for review process is, in the first place.
If it's a mini-review to see that there is enough
material there to begin a review, then you won't find
these kind of problems until you go into the text in
depth and start finding the problems.
If you want to say an acceptance review
means that everything is absolutely correct in the
text, then you have done the review at the same time.
The acceptance review process which we envisioned was
one in which we would look at the documentation and
say, yes, this documentation has enough material,
covers all the topics that it should cover, and we can
begin the in depth review of the material.
That was our initial goal in doing an
acceptance review.
DR. SCHROCK: Well, when I reviewed it, I
could have said after the first two hours that this is
not a document that describes a code that can be
DR. ZUBER: Zuber. Let me add, I
completely agree with Virgil. If a code has errors,
which should be really at the junior level, that code
should not ever be reviewed for the reason that this
is not acceptable, period, and not really go for two
years, which we have been now and God knows how long.
I think that doesn't do credit to NRC. It
doesn't do credit to the technology. Basic errors in
the code which on the junior level can be detected
should not be even accepted.
DR. LANDRY: In our writing of the SRP,
which has a lot of information and based on our
experience in this review, I think that in the future
we are going to do a greater review of material before
we accept it; whereas, at this point we just looked
through, said okay, there's enough material here we
can start a review.
We were coming off of an experience with
a previous design submittal in which we were reviewing
an SB LOCA code, which is -- the documentation was
less than a quarter of an inch thick. We said,
obviously, this isn't adequate to do a review of the
So we needed to do a review of the
material first to see if there's enough material to
review before we could accept it, and that was the
mindset that got us into this position; and as we then
got into the review in depth, we've started learning
more and more of errors in it and that perhaps this
should have been done in a different way.
CHAIRMAN WALLIS: Maybe it would help if
you had someone like Professor Schrock review the code
and say just what he said, that after sort of an hour
reading he could tell you that, you know, this ship is
headed for an iceberg, let's not let it happen.
DR. LANDRY: I think we have learned a lot
from this experience and learned how we have to do
DR. ZUBER: Just one more comment. You
see, you get into a position. If you accept it for
review and after sometime you cannot really approve
it, you are being accused or put in a position it
costs so much money to go to NRC, and then you get the
lawyers on your back.
What you should really do, stop in the
beginning and say this is not acceptable, go back.
You save them money, but if they want to waste their
money, that's their own prerogative. But they should
not waste your time.
DR. LANDRY: We agree with you, Novak. We
have learned a lot from this.
Okay, I think I was down to the last step.
In looking through the derivations of the momentum
equation, we also found that there were missing steps.
Where there had been a great deal of detail lavished
on the initial phases of the derivation, the
derivation became very sparse, and very great leaps
were taken at the end.
(Slide change)
DR. LANDRY: In our review we determined
that the so called "vector momentum equation" really
isn't. The equation that is in the material is a
scalar equation of motion. It is projected on a
vector momentum along a control volume dependent
direction. It's really not a vector momentum
We found a number of errors, some of which
you corrected.
CHAIRMAN WALLIS: You concluded that it
was a projection of a vector momentum equation?
DR. LANDRY: We viewed it as that it's a
projection of vector momentum along a --
CHAIRMAN WALLIS: Because it appears to be
a strange hybrid, which isn't really projection of a
vector momentum equation. It isn't energy
conservation, and it isn't recognizable based.
DR. LANDRY: That's what we are trying to
say. It's not --
CHAIRMAN WALLIS: But you said something
there which I don't think is --
DR. LANDRY: Well, I was trying to shorten
up a statement.
CHAIRMAN WALLIS: -- is true.
DR. STAUDEMEYER: Joe Staudemeyer, NRC
staff. If you look at the derivation, it really is a
projection of a vector momentum equation along a
direction that depends on what the volume is on the
direction of volume that it's in. So --
CHAIRMAN WALLIS: Well, that's what it
claims to be.
DR. STAUDEMEYER: And you can work out all
the terms, and it does work out that it's that. But
then you end up with 20 terms left over that don't --
CHAIRMAN WALLIS: Okay. Well, we are
going to get into that with EPRI, I guess. I think
that all of us have great difficulty projecting
several of these terms in a direction which makes any
direct link between a vector momentum equation and the
equation that actually appears.
DR. STAUDEMEYER: Yes. Well, there are
some other assumptions I have to go into to get it,
DR. LANDRY: Okay. In the review we
pointed out a number of errors. I know the Chairman
had pointed out a number of errors and a lot of
information on the momentum equation also. Some of
these overlap. We haven't gone back to see if we have
a one to one correspondence, but I'm sure some of
this overlaps with what the Chairman has pointed out
We found that there is a cosine term
missing from a vector dot product in going from
equation 236 to equation 237. We pointed out where
this would be easily seen if one tries to solve this
equation for a bend in a pipe.
We said that it was mathematically
possible to eliminate the cosine term from the
pressure difference term if a constant pressure is
assumed in the cell, but then the cosine term has to
appear somewhere else. It has to be moved to the Floc
term, projection of nonuniform normal wall forces.
The EPRI staff told us that this was going
to be evaluated based on empirical information and
empirical data. The staff is anxiously awaiting to
see the source of that information. We are not aware
of any such information. We would very much like to
see it.
We have said that the equation for
mechanical energy conservation cannot be derived from
the equation of motion. Therefore, you cannot show
that your mechanical energy is being conserved.
We said that pipe configuration with a TEE
split or two parts coming together, such as a jet
pump, results in a non-zero pressure difference that
is dependent on the area of the exit path or paths.
The EPRI staff agreed with this, and has gone back and
fixed the information.
We have looked at an attempt at a
derivation called the "Porsching Paper." According to
the staff's view is that the paper is irrelevant. We
said it's irrelevant, because the paper does not
appear to have any mathematical errors, but the
definitions and restrictions on control volumes that
are required to be consistent with the mean value
theorem makes the paper irrelevant.
Pressures and flows in RETRAN are defined
in a control volume with specified functional
dependencies. The integrals in the paper should be
evaluated with the RETRAN-3D assumed function
dependence for pressure and flow. In our view, the
paper doesn't pertain to the derivation.
I'm sure EPRI is going to want to respond
to that a little later also.
(Slide change)
DR. LANDRY: We looked at the 5-equation
non-equilibrium model. This is a topic that caused us
quite a bit of concern during the review.
Part-way -- A year into the review, part-
way into the review, we found out that there was a
fundamental change in the code which we weren't aware
of. That was being in the works at the time the code
was submitted.
This caused us a great deal of
consternation. Finally, the material was submitted.
We came back in our review and said that this model
has not been assessed properly and, therefore, is not
acceptable for use.
Licensees who wish to use the 5-equation
non-equilibrium model have to provide separate
effects, integral systems effects assessment over the
full range of conditions that are to be encountered
for which the model is applied.
Assessment of uncertainties --
DR. ZUBER: Every time you get an
applicant, you will have to do another review.
DR. LANDRY: That is correct.
DR. ZUBER: Well, that's a waste of money
and waste of time. In a sense, you have to review,
but not at this level. This should be the level of
code acceptance, and then you apply to a given plant.
That's another story, but this is really the basic
equation, the basic model. If you have to do this for
every applicant, it takes your time. It costs money,
and it costs them money, and I'm really surprised that
they didn't address this problem.
DR. LANDRY: This was -- In the original
phase of the review, our understanding was that the
code was being submitted so that we could review the
code, as we have with a number of other industry
codes, and say that the code is approved for use and,
as long as it's used within the constraints, we don't
have to review the submittal. But -- Let me finish,
Back in the RETRAN-02 days, RETRAN-02 was
approved, but there were 39 conditions and limitations
on use, which meant virtually everybody who used the
code had to come in with a justification assessment
and support why that code is applicable for their
We thought we were getting out of that
mode. However, we can't. We are still in that mode
with RETRAN-3D, because of our view of the assessment.
When the code is used, it still must be heavily
supported for every application, and yes, we agree
with you.
DR. ZUBER: That really surprises me, that
the industry complains for money, and yet really that
puts NRC in the position that they have to do it.
DR. LANDRY: The code has to be assessed
properly for the application. If it's not done
generically, then it has to be done for each specific
DR. ZUBER: Well, this approach really
makes -- four separate effects. You can really put
the burden not on the co-developer but on the
applicant. I mean, that's the philosophy, I mean, if
you follow logically this approach.
DR. LANDRY: You are going to hear me say
that throughout this presentation.
DR. ZUBER: Good.
CHAIRMAN WALLIS: Well, what happens when
the applicant, say Maine Yankee which doesn't exist
anymore -- we'll pick someone -- comes up, wants to
use RETRAN, and their engineers look at it and say,
gee whiz, we can't figure out this momentum equation?
Is it their responsibility to defend it?
DR. LANDRY: The defense of a methodology
used in a licensing application is put on the
applicant, the licensee. The licensee --
CHAIRMAN WALLIS: But do they have to
defend something in the code which they didn't
DR. LANDRY: The licensee is responsible
for everything that is submitted on their application.
CHAIRMAN WALLIS: So they have to go over
the same terrain again maybe.
DR. LANDRY: If material is not correct.
CHAIRMAN WALLIS: It's an awfully wasteful
and inefficient process.
DR. LANDRY: If the material is not
correct or not done adequately, then the burden is
placed on the licensee.
CHAIRMAN WALLIS: I guess that's the theme
that the ACRS keeps trying to sing that no one has
listened to. If you do the job right the first time,
it saves one hell of a lot of wasted energy and money.
DR. LANDRY: We're not going to disagree
with you.
CHAIRMAN WALLIS: What's wrong with that
statement? We've said that before, and we get all
this gripe about, well, it's too much effort to do it
right and, no, no.
DR. ZUBER: Limited resources, too much
money, and it ends up they to minimize any effort.
I'm sorry.
DR. LANDRY: We don't disagree with you.
CHAIRMAN WALLIS: Let's move on.
DR. LANDRY: This is why we like to do --
let's call them topical type reviews, because we can
review a material one time and then, when it's
applied, all we have to do is see that it's applied
properly. It saves everybody.
CHAIRMAN WALLIS: Well, it's like the
homework. If everything is right, you just check it
out and give them an A, and that's the end of it, and
it's five minutes work.
DR. LANDRY: I wish I had people like you
in school. Do you give multiple choice quizzes?
CHAIRMAN WALLIS: The interesting part is,
if it's a better derivation than the professor's, then
you have to think about it.
DR. SCHROCK: Doesn't it seem reasonable
that, if you are unable to approve something as a
generic tool for general applications, maybe a minor
exception here and there, but for general
applications, if you are not able to do that, then why
don't you ask what is the function of such an
approval? What does it accomplish for anybody?
DR. LANDRY: When we look at a new
methodology, new model, that's one of the things we
look at. Back on another code that we reviewed
recently when we talked about replacing one of the
transfer correlations, we were looking at what is the
benefit. It's a newer equation --
DR. SCHROCK: I'm not talking about
details at that level, but the conclusion is that the
code is basically unacceptable for -- I guess you've
identified something like 40 different situations, and
if it's going to be used for those situations, then
additional -- significant additional work will have to
be done.
So you really haven't produced anything
that's useful either to the industry or to the
regulators, it seems to me. You've --
DR. LANDRY; Well, one of the bottom lines
we are going to get to in this is that, while there
are a great many conditions and limitations on use and
a great many things that the applicant must do when
using this code, the code is an improvement over
It is numerically improved. It is more
robust. But -- and then we get into the "buts," all
the things that must be done. So, yes, it is an
improvement, but it's not perfect especially in that
it's not totally supported in its assessment
This has been an ongoing discussion.
DR. ZUBER: But that is where I see the
kind of trouble recently is you try always to put
everything in that one basket. One is the
formulation, the basic equations, and this is, I
think, question one that Graham had.
The second one is what kind of
validations. I think you should separate those, and
on the first level the equations, the formulations,
then the constitutive equations. Then you will go
back into validations. Don't try to kind of jump from
one to the other, I think. Focus on one. If it's
acceptable, then look at the validation, but putting
them together -- and this is what industry does -- is
at least confusing.
CHAIRMAN WALLIS: I think you also have to
decide when you review is this sort of a series of
filters, and if you filter the fundamental formulation
and, if it doesn't pass that filter, do you go any
further or do you sort of go on to start looking at
assessment and stuff with loft, no matter what
happened in the previous filters.
I think that's something you guys have to
think about in the process. That is a sort of series
of steps with yes/no and, if there's a no, you don't
go any further or do you have some yes/maybes. How
are you going to do that?
DR. LANDRY; That's a difficulty. As you
come down through a particular model, is this -- does
it look valid, what they have done? Is it assessed?
If you come out no, do we stop altogether or do we go
to the next stage and say, okay, the next model. Okay,
we've a yes here. This one we have a yes. This one,
you have to go back and support or you put a
limitation. We keep doing down the list --
CHAIRMAN WALLIS: How many of those are
necessary, and to what degree is the thing. You have
to ask yourselves pretty carefully.
DR. LANDRY: That is a very difficult
question, because that really doesn't show up until
you start assessing against things like an integral
system or full size data where you can say the
overall package does a bad job or the overall package
does a good job, but it could do a better job if this
was fixed.
DR. ZUBER: Let me say, I think, if I may
be direct, there you are really going on a kind of a
tangent. You used the word model. What does it mean?
Does it mean the formulation, which is also model, or
does it mean the constitutive relations, and that's
all similar. And don't put those things together.
The first thing is, is the formulation
correct? Are the equations correct? If they are,
then you proceed to the next one. Then you look at
the model. If you question the model, you go to the
validation. There is a kind of a hierarchal approach,
how you look at these problems. But don't take
immediately model, because you don't know -- at least,
I don't know what you are really addressing.
So look at the formulation equations. If
they pass, fine. If not, send it back to the student.
Go back then to the constitutive equations. If they
are acceptable, fine. If not, what is the validation.
CHAIRMAN WALLIS: Well, there is also the
question of what you mean by correct. I mean, there
are errors that reveal a fundamental misunderstanding,
and there are errors which are more in the form of you
can't solve this thing exactly, so you make some
assumptions, but you've got to be clear what they are,
and those aren't exactly errors. Correctness, I
think, has to be qualified.
You are not looking for something which is
exact. We are looking for something which is
plausible and doesn't contain real errors, which sort
of exaggerate some fundamental misunderstanding and
produce a ludicrous answer under some circumstances,
that sort of thing.
DR. LANDRY: I think to follow that up and
back up just a second to the momentum equation as an
example, in the derivation -- Now we've argued about
it. We've heard the Chairman's views on it, the views
of the Committee, the views of the consultants, our
views. We've been in a long debate with the
I think the bottom line to this as an
example is that this was an attempt at a rigorous
derivation of a momentum equation for use in a
computer code. Fundamentally, you can't get to that
point the way it's been done.
A far more productive method, and one
which we pointed out to the applicant at one point,
would be to tell us what is in the code. What are the
terms? What do the terms mean? Why is it acceptable?
Why is it valid?
Rather than trying to do a rigorous
derivation from basic principles, which you can't get
to because of all the assumptions you have to make,
tell us what's in the code, and tell us why it's
This would have been a far more productive
CHAIRMAN WALLIS: Well, I assume what's in
the code is what's written down in the equation. Is
there something different between the equations and
the code?
DR. LANDRY: What's in the documentation
can't be in the code, and that's not the way it has
been derived.
CHAIRMAN WALLIS: Well, see, that's yet
another mystery that I didn't raise in my questions,
is what's actually in the code.
DR. LANDRY: And that's what I'm getting
to, that tell us exactly what's in the code. Tell us
what the terms are. Tell us what the terms mean, and
tell us why it is valid.
CHAIRMAN WALLIS: Doesn't the code come
with some kind of code documentation which says that
these lines in the code formulate the momentum
equation, and these lines have the momentum fluxes,
these are how the terms are evaluated? I would think
that has to be, as just quality assurance in code
DR. LANDRY: Well, some codes are better
than others at that. Some codes have a great deal of
comment in them. Some do not.
CHAIRMAN WALLIS: Well, I think you should
require enough comment so that you can read the code.
DR. KRESS: Do you do that with codes, go
through line by line?
DR. LANDRY: No.
DR. KRESS: You normally don't do that, do
DR. LANDRY: No.
CHAIRMAN WALLIS: I do, when a student
writes me something that purports to be the right way
to do something. I mean, that's how I learned how to
program a computer, was by figuring out what the
students were doing.
DR. KRESS: What you generally have is
this is the finite difference form of the equation
that we coded in the codes. You usually have that
DR. LANDRY: Right. That is in the
manuals. We can say, okay, that's done right. We
assume that they've gone from this to the code itself.
CHAIRMAN WALLIS: Because, well, if there
are typos of the type we've seen in some of the
documentation, there should be -- you would expect
typos in the code, too.
DR. LANDRY: We've had this discussion
before for years, and no, we don't go line by line in
the code.
CHAIRMAN WALLIS: I think you should. At
least, if you don't, you should threaten to, and you
should perhaps do it from time to time in a small bit,
bite sizes.
DR. LANDRY: I think our management would
like to discuss resources.
CHAIRMAN WALLIS: Oh, don't give me that
nonsense. If it's the right thing to do, it has to be
DR. LANDRY: Anyway, this is an example of
how we've tried to interact and say what you should be
doing to make this job right, and we just disagree
with the approach that's been taken.
DR. SCHROCK; Code people have developed
standard procedures for validating codes. They use
different words to describe the different parts of the
process. There are, in fact, codes available that
will check that programming.
I never hear about those things having
been applied in this arena, and I wonder why not. And
it's not that it isn't known to industry.
I served on a review committee concerning
the NPRs that went into this in great detail at
General Atomic, and it was very clear that industrial
representatives were on top of this. But it doesn't
come here. Why is that?
DR. LANDRY: Well, there is almost a loss
of corporate memory going on. This began -- and you
were involved in it, if I remember right, Virgil --
back '78-'79, even before Paul was with the
Subcommittee. I think Andy Bates was with the
Subcommittee, and Milt Plessett was the Chairman then.
We've got into a long debate, and this
began out in Idaho Falls at a meeting, a long debate
over what do the terms validation, verification
assessment mean, and after about six months finally
arrived at definitions, which we started using as we
went through the code development work for a number of
Now we are going back, and we have a new
crop coming in, and we seem to have lost our
understanding of what those terms meant.
CHAIRMAN WALLIS: One of those words means
that the code as written reflects equations as
formulated. That's one of those words, verification.
DR. SCHROCK: That says that the equations
you meant to program are, in fact, in the program.
DR. LANDRY: Validation says that, yes,
it's performing the function it was intended to
perform. An assessment is that the code is performing
at this level overall.
DR. SCHROCK; But there are available
recognized methods, computerized, to check that
verification step. Have those ever been applied to a
code like RETRAN?
DR. LANDRY: Not that I'm aware of.
MR. CARUSO: Dr. Schrock, this is Ralph
Caruso from the staff. I think that's actually quite
a good idea. I'm just going to give an observation.
It's my observation that -- I'm thinking
back to some people that I know in Europe who used to
work with RELAP, and I do believe that they tried to
use one of these tools about ten years ago, and they
were not successful.
I believe it had to do with the same
reason -- same reasons that we have problems with
compilers trying to optimize codes; and when you try
to optimize some of these codes with those optimizers,
they don't work very well because of the way the codes
are structured, because they were designed to run
originally on very small memory machines, and people
were very creative in how they did the coding. So
that the logic checkers get confused. They don't
understand what's going on.
DR. SCHROCK; I think what you are saying
is it's a matter of getting caught up in obsolescence.
The actual programming is so old that the modern
techniques can't recognize what it's all about.
MR. CARUSO: I do believe I heard an
argument about this similar to this about ten years
ago, but one of the reasons I believe -- A lot of the
people who are doing code development now are updating
the codes. They are restructuring them. Research
here is doing that with TRAC-G, so that it will be
able to be maintained better and also to be optimized
better and maybe even make it amenable to these logic
checker programs.
I don't know if RETRAN-3D was restructured
with that in mind, but that's certainly something that
we would like to keep in mind in the future.
DR. ZUBER: Well, I think this will be a
good field for Research to contribute. If the method
is not available, a contribution to NRR would be to
develop such a method instead of doing some other
things which are really irrelevant.
CHAIRMAN WALLIS: Let's move on.
(Slide change)
DR. LANDRY: Okay. Another area of our
review, another topic we would like to bring up, was
the critical flow model. Three critical flow models
are included in RETRAN-3D: Extended Henry/Fauske;
Moody; and Isoenthalpic Expansion/Homogeneous
DR. SCHROCK: That one I pointed out in my
report, that isoenthalpic expansion is a misnomer for
what it actually does.
CHAIRMAN WALLIS: Let's just point out,
Ralph, you had this set of slides before this
Subcommittee before.
DR. LANDRY: Some of this, I may have.
CHAIRMAN WALLIS: So I don't want to go
through it all again.
DR. LANDRY: Okay. These are points we
brought out --
CHAIRMAN WALLIS: We would like to focus
on --
DR. LANDRY: -- in the SER.
CHAIRMAN WALLIS: We would really like to
focus on what EPRI has as a response to our concerns,
and we have had a discussion with you about this
DR. LANDRY: Did you want to cover at all
the drift flux, Chexal-Lellouche?
(Slide change)
DR. LANDRY: Chexal-Lellouche is --
CHAIRMAN WALLIS: We didn't really get so
far. You see, we got hung up by asking questions of
EPRI to which they did not respond in our first
encounter with them, and then I thought what we were
trying to do today was to reach, if possible, some
consensus on those matters.
DR. LANDRY: Okay. I was trying to just--
CHAIRMAN WALLIS: And you are helpful, but
we have been through all this before with the
Subcommittee, not quite the same membership, but you
had a meeting with us a month ago or something where
you went through this.
DR. LANDRY: Okay. I'll let the members
just read through then. Basically, our conclusion is
that overall Chexal-Lellouche is accurate, but the
user must be careful and use it within the range of
validity and for the proper fluid. You cannot use
Chexal-Lellouche for air-water parameters for steam
water calculations.
DR. SCHROCK: What are they left to do if,
in fact, they find that they are operating outside the
DR. LANDRY: Then they have to come up
with a database or a different methodology.
CHAIRMAN WALLIS: That's one of your
restrictions that you have.
DR. LANDRY: Right.
DR. ZUBER: I think, if this is correct,
I think those equations are not applicable to this,
and you should stick to it.
DR. LANDRY: Yes, that's what we've said,
unless they can prove it.
(Slide change)
DR. LANDRY: Boron transport, I think we
have already discussed, the technology --
CHAIRMAN WALLIS: The interesting thing
with boron transport -- excuse me -- is that if you
have a code that's validated in terms of peak clad
temperature for LOCAs, then --
DR. LANDRY: This code can't be used for
CHAIRMAN WALLIS: -- it can't be used just
without any testing or validation or assessment for
boron transport. It's a different problem.
DR. LANDRY: And this code is not used for
CHAIRMAN WALLIS: Right. Thank you.
(Slide change)
DR. LANDRY: Let's see. Neutron kinetics,
we've gone through in great detail with you. We
showed you our calculations. The only problem there
is we felt a little rub would --
CHAIRMAN WALLIS: Did you want the ACRS to
give as much attention to the neutron kinetics as it
did to the momentum equation?
DR. LANDRY: No.
(Slide change)
DR. LANDRY: Code assessment: We had a
lot of problems. We've pointed this out throughout
the review, that the bulk of the assessment -- This
gets back to what we were just talking about a few
minutes ago. The bulk of the assessment is based on
plant calculations performed by utilities. A lot of
the figures don't include who did them, what code
version they even used.
CHAIRMAN WALLIS: There are options in the
code, aren't there?
DR. LANDRY: What options were used. So
that the assessment models do not explicitly --
approved in the SER will be either the responsibility
of the licensee or the applicant.
The bottom line is each applicant of
RETRAN-3D will have to submit a valid approach to
assessment which we think should include a PIRT.
(Slide change)
DR. LANDRY: Code use: Code, as we've
discussed a number of times, is highly dependent upon
the user. We've pointed out throughout the discussion
problems in use of the code.
DR. ZUBER: That really concerns me,
because as time goes by people who have some
experience and knowledge who are away, the memory is
gone, and you have people who are not experienced
working under pressure of being efficient and pushing
the limits.
I think this is really a topic which NRC
should really consider.
DR. LANDRY: That's why in our SER we've
said that there has to be a statement, a certification
of the ability, the training, the background, the
experience of the analyst who has used the code, one
that a submittal is sent in.
DR. ZUBER: That applies to NRC also.
DR. LANDRY: It's harder to regulate
(Slide change)
DR. LANDRY: RETRAN-3D in a RETRAN-02
mode: I'd like to spend just a minute on this one.
This was a topic that came up. I don't know if we've
discussed this at length with the Subcommittee. But
the request was made to approve use of RETRAN-3D as a
RETRAN-02 substitute by utilities that have RETRAN-02.
We looked at this and said there are a
number of areas where RETRAN-3D is an improvement over
02, improvements that cannot be backed out. Implicit
numerical solution, time step lock improvements,
improved water property tables are good, and these are
improvements in the code to make the code more robust.
We would not want to back off from those.
There are a number of items that we point
out in the SER that can be used in using RETRAN-3D in
an 02 mode, and there are a number of models which we
point out, a number of options which the analyst
cannot use, that they are not permitted to be used for
RETRAN-3D as a RETRAN-02 substitute.
One of the big ones, again, is Chexal-
Lellouche that we were just talking about a minute
DR. SCHROCK: What is the 3-D neutronics?
What's the reason for that one being excluded?
DR. LANDRY: Because 02 does not have 3-D
neutronics. RETRAN-02 is point kinetics, and -- or 1-
D, 1-D kinetics. There are significant differences
between 3-D and 1-D kinetics, and we've said that you
cannot use the 3-D kinetics.
DR. SCHROCK: Yes, I get it.
DR. LANDRY: The bottom line is that
organizations that have been approved for using
RETRAN-02 can use 3-D in an 02 mode without additional
NRC approval, as long as they stay within the
constraints of the SER. However, if they go outside
of those constraints, they then have to have
individual approval for use of 3-D.
This is quite a restriction, because this
says that a utility, an entity who has not been
approved for use of RETRAN-02 cannot come and say,
okay, now we're using RETRAN-3D, but we're using it as
RETRAN-02, is that okay. We are saying, no, it's not
okay. You're not approved for use of RETRAN-02. How
can you use 3-D in a substitute mode?
This case has come up, by the way, and we
asked that utility an identical RAI: Enclosed are the
45 conditions on use on RETRAN-3d; show you compliance
with each and every one of them. When they get to
this one, they can't.
(Slide change)
DR. LANDRY: Conditions on use: RETRAN-02
had 39 conditions on use. Ten of those still apply to
RETRAN-3D. In addition, we have added six more which
are rather restrictive.
DR. ZUBER: I thought you just mentioned
45 conditions.
DR. LANDRY: There are 45 total for
conditions on use which we address in the SER.
DR. ZUBER: And on RETRAN-02 there were
DR. LANDRY: Right.
DR. ZUBER: So this is not a progression.
This is retrograding.
DR. LANDRY: Well, some of those 39 no
longer apply.
CHAIRMAN WALLIS: Now when you've got
these conditions on use, it seems to me that they have
something to do with the importance of doing it right,
to getting a valid answer for nuclear safety purposes;
and if the problem with the momentum equation has an
effect on nuclear safety, then one has a real
justification for saying you've got to do something
about that.
I don't see how these conditions are tired
in with some sort of leverage on the important
question of nuclear safety, and you can put on
conditions, but really you have to focus on those
parts of the things you are nervous about or uncertain
about or are not quite right somewhere and what effect
they have on regulation and so on.
I get the impression that people have sort
neglected the momentum equations in the past, because
there's been some kind of corporate belief that it
didn't matter anyway. That, seems to me, a very
dangerous line to take.
So then it becomes -- You don't question
it; you don't make a condition on it. You don't think
about it. You've got to tie in -- If you're nervous
about some term in the momentum equation, it would
seem that when you are thinking ahead to realistic
codes that someone then has to say, okay, suppose it's
twice as big or something and suppose you're uncertain
about how big this term is, what effect does it have?
What leverage does it have on the kinds of answers
we're likely to get in our code prediction?
That, I think, is going to happen in the
future, isn't it? So the conditions -- I'm making a
speech, I suppose, but these conditions have to be
related to the actual use to answer safety questions.
DR. LANDRY: Right. When individual
applications come in, that needs to be addressed.
CHAIRMAN WALLIS: And those will be
different, depending on the question. And if you come
up with a new reactor design or some new concern like
boron dilution or something --
DR. LANDRY: That's correct. It will be
different for each application, each use of the code.
CHAIRMAN WALLIS: And you may actually
find when you look at some of these applications that
you need other kinds of conditions. That's the
sensitivity of the answer to something you hadn't
realized before.
DR. LANDRY: That's correct. Just because
something has not been pointed out in this review does
not mean in an individual application review an
additional condition cannot surface.
CHAIRMAN WALLIS: Well, BWR, of course, is
an interesting one, because as you upgrade the power,
you may be pushing some of those envelopes.
DR. LANDRY: Right.
CHAIRMAN WALLIS: We haven't really
studied that yet enough to know how important the
resurgence might be.
DR. LANDRY: That's right.
DR. ZUBER: Let me just make -- follow on
what Graham said. What is missing from this approach,
the NRC and the industry after 20 years or 25 years,
really, they didn't establish really the importance of
some factors or elements, when you can really neglect
something and when you must take it into account and,
if you don't have to take into account, you are
justified to not use it, then you had a good -- to
defend it. But then you have to address what is
important, and this was really never done.
I think this could really improve the
efficiency of a regulatory agency, and I think this is
what research should do. This will also cut the cost
of approval by the industry. I think this is a field
which really -- and therefore, it should be done in
this -- you know, that regulation.
DR. LANDRY: I think the attempt at a PIRT
is an initial step at that, and I realize -- another
concern that you've expressed before -- that we have
to understand for individual events what are the
overriding effects, what are the critical effects for
a particular event, and which are unimportant, and are
there certain effects taking place that mask
everything else happening. That hasn't been done.
CHAIRMAN WALLIS: That's what we call sort
of concluding the loop. You put experts in the room.
They give you the PIRT. That's just the first step,
and you have to go through the whole questions of
making sure that, if something is of high importance,
that it's actually evaluated and someone checks. But,
yes, indeed, you have a good enough evaluation to meet
some criteria.
DR. LANDRY: Okay. We have pointed in the
conditions on use also that anytime that an auxiliary
calculation is performed, an auxiliary code, that
there has to be an assessment showing that there is a
consistency in going from RETRAN-3D to that auxiliary
calculation, such as DNB.
As I said earlier, we have to have a
statement on the user's experience and qualification
with the code, and assessment of the code for models
and correlations not specifically approved must be
submitted by the licensee or the applicant.
(Slide change)
DR. LANDRY: Our conclusions in the SER
are that RETRAN-3D is a significant advancement in the
analysis tools base for licensees.
CHAIRMAN WALLIS: Did it significantly
advance the momentum equation? Well, we are here
today because of the momentum equation.
DR. LANDRY: I know. No, I would go back
to what I said earlier. The formulation that is
given, the derivation that is given, in our view,
should not be in there. It's much more productive to
say --
DR. ZUBER: Let me say -- I mean, this is
passing the word. What does it mean, it should not be
there? If the derivation is incorrect, call it
incorrect and call a spade a spade. It's irrelevant
-- Well, you may take the -- but these equations are
in the code. No, you cannot have it both ways.
DR. LANDRY: This is what I meant earlier.
The code equations, the formulation that is in the
code should be explained and why that formulation is
correct, not the derivation that is given.
The code lacks sufficient assessment in
places and places a burden on the applicants to
justify the code use. Code used in the RETRAN-02 mode
can be used in the RETRAN-02 mode, provided it's
justified, and we have outlined what that takes.
One thing that the RETRAN Maintenance
Group has done that we are very encouraged by and that
we agree with very strongly is that a peer review
process has been put in place. We were told back in
November that the RETRAN Maintenance Group has taken
the step of -- It's not legislated to anybody using
the code, but the members are encouraged to submit
their material to their peer review process before
it's submitted to the NRC.
This would alleviate a number of the staff
concerns over user experience, over nodalization
selection, over option selection, because the RETRAN
Maintenance Group would look at this, the experienced
people, and say, yes, this has been a valid approach
that has been taken to the analysis. We feel that
that is a very encouraging step.
We have said that Chexal-Lellouche drift
flux model is an improvement, but it has to be used
cautiously. You can't use it outside --
DR. SCHROCK: Why do you think it's an
DR. LANDRY: It seems to give good results
for certain ranges of a void fraction. There are some
ranges of void fraction where it does not.
DR. SCHROCK: And do you think it's not
possible to do that with phenomenologically based
DR. LANDRY: Yes, it should be. But this
is a heavily supported correlation. It has a great
deal of --
DR. SCHROCK: Well, it has a lot of
politics behind it, but for you to make the statement
that it's an improvement, an improvement compared to
what and on what --
CHAIRMAN WALLIS: This is the model that
uses a bubbly flow model to model annular flow?
DR. LANDRY: That's correct. But that --
And we point that that's not good.
DR. ZUBER: A droplet, a mist flow.
DR. LANDRY: Yes. We've pointed out in
annular, an annular mist flow -- we pointed out in the
SER that the correlation underpredicts.
DR. ZUBER: Okay. Now let me ask you.
What about the type where you have the perforations?
Is it there you have a void fraction maybe of .3 or
.4, but do you consider this applicable or not? Let
me help you. It would not be applicable.
The reason is -- I wrote a memo to -- my
last memo to you to tell you why the thing is not
applicable, not only for this equation but the other
There are data in the literature which you
could use and test your models, and the applicants,
especially Lellouche, never used it to my knowledge,
and that equation has absolutely no physical meaning.
It's a hodge podge of everything, and it cannot be
applied -- -- cannot be applied to mist flow, to
droplet flow.
CHAIRMAN WALLIS: It's a big recipe with
quite a big database.
MR. STAUDEMEYER: This is Joe Staudemeyer,
Reactor Systems Branch. The statement that it's an
improvement is based on void fraction predictions in
BWR channels, which is its biggest place of
If you look at the Chexal-Lellouche
results compared to previous RETRAN correlations, it's
much better at predicting void fraction in BWR
DR. SCHROCK: So it is compared against
the previous correlation in RETRAN. But other things
are available that were not compared. So I think your
statement is misleading.
DR. LANDRY: I think you have to read the
whole text in the SER. I tried to condense the SER in
these slides.
DR. SCHROCK: Well, I've read the SER, and
I find it, frankly, to be very much more flattering to
the code than it deserves, despite the criticisms.
DR. ZUBER: And I agree with Virgil.
CHAIRMAN WALLIS: There seems to be two
discussions going on today. One is with you, and one
which we are going to get to with EPRI, which I think
is going to be on a different plane altogether.
DR. SCHROCK; But this may be our only
DR. LANDRY: We have also said that final
acceptance of RETRAN-3D for licensing basis
calculations depends on successful adherence to
conditions and limitations on use discussed in the
CHAIRMAN WALLIS: Now is this SER a final
DR. LANDRY: No SER can be final-final.
At this point --
CHAIRMAN WALLIS: It's not labeled Draft.
DR. LANDRY: It's not labeled Draft. At
this point we've given it --
CHAIRMAN WALLIS: So EPRI has it?
DR. LANDRY: EPRI has it.
CHAIRMAN WALLIS: And if the ACRS had some
concerns about, let us say, the momentum equation,
that might irrelevant of the regulatory process?
DR. LANDRY: No, it might be relevant, and
if material comes out that necessitates a supplement
or addendum to our SER, then we can write one. We're
not restricted to this is the last word.
CHAIRMAN WALLIS: Well, I guess what
concerns me is that I think we are going to find that
our discussion with EPRI is somehow of a different
nature than yours. We are going to go after where
this equation comes from, what does this term mean,
not common sense because all you have to do is bend
the pipe this way and you get an absurd answer or
something like that. That's the kind of thing we are
going to do.
The process you've been through, it's not
clear to me brings that sort of thing out. We seem to
be doing something different here, and I'm not sure
how the sort of thing we are going to be doing relates
to the formal regulatory process of coming up with an
Maybe we'll come back to you with that at
the end of the day. Unless we've gained some time, I
think you're through, Ralph. Thank you very much.
Should we move on? Can we move on with
EPRI? I would like to say that we are here today
because of concerns about formulation of momentum
equations, and really the sooner we can get to that,
the better.
I'm not sure what EPRI has in mind, but
last time we never got to it, and our advice, as much
as we could get through in a short time with e-mails
and so on, to EPRI was explain the responses to RAIs
which resolve the questions which ACRS had and not go
through a lot of stuff which we've already been
through before about the code and industry and uses
and things, which are not part of our discussion.
I'm not quite sure what you have in mind
for this presentation.
MR. SWINDELHURST: My name is Greg
Swindelhurst. I'm the Chairman of the RETRAN
Maintenance Group, which is the group which are the
main users of RETRAN, both domestically and
I'm going to give a very short
presentation which does respond to some of the
questions which came up during Ralph Landry's
discussion, but we realize what you really want to get
to, and we will get to that quickly.
(Slide change)
MR. SWINDELHURST: I will not repeat the
items which the NRC has adequately covered, but there
are a few things which need some emphasis, and that is
that we have worked for over two years with the NRC
staff to go through their issues, their concerns, and
those have been resolved.
I'm not saying that they have been
resolved in a positive, successful way that everybody
is happy with, but they have been resolved to the
extent that perhaps things have happened like certain
models have been withdrawn from review, because we
realized they did not have an adequate validation
We've resolved some things in the form of
errors which have been identified, which have been
CHAIRMAN WALLIS: Now there are two code
errors. That says code. That's not documentation.
It's actually something in the code itself?
MR. SWINDELHURST: Right. I'm referring
to two code errors.
Now there's also been numerous
documentation problems which we are cleaning up and
correcting. We've issued change pages to the NRC
staff along the way, and that will all take place, and
when we --
DR. SCHROCK: Can you tell me where I
could read about these two code errors?
MR. SWINDELHURST: I think Ralph already
had them on his previous slide.
CHAIRMAN WALLIS: They are not responsive
to the ACRS concerns.
MR. SWINDELHURST: Yes, they are.
CHAIRMAN WALLIS: Well, I don't think so,
because as far as I can see, the new documentation is
the same as the old. There is one thing which has to
do with resolving something through an angle. Is that
one of the ones you meant?
MR. SWINDELHURST: We will cover these in
detail in a minute, but --
CHAIRMAN WALLIS: Okay, we'll get to
MR. SWINDELHURST: -- I think the staff
and ourselves agree there have been two code errors
that -- They are Fortran errors which have been
corrected. There's been a lot of equation and --
CHAIRMAN WALLIS: If they were Fortran
errors, that's fine.
(Slide change)
MR. SWINDELHURST: Okay. I would also
like to point out, as Ralph mentioned, a lot of the
issues remain to be addressed by the applicant
submitting in the future an application of this code.
DR. ZUBER: You are really confusing me.
You said everything was resolved between EPRI and NRR.
The problem is that we just heard that NRR said that
RETRAN -- I mean the formulation is irrelevant,
because it was incorrect. How did you want to answer
MR. SWINDELHURST: I think --
DR. ZUBER: That's not an error. This is
the basic questions. Do you agree with that statement
they make? If yes, why? If no, again why?
MR. SWINDELHURST: Okay. The review
process results in the NRC asking us questions which
we respond to, and then the SER is written with
certain conditions and limitations on the use of the
code. When I say that we've resolved it, what I mean
is we've gone through that process, and we reached
this point in the review where an SER has been
issued, and we understand how we are permitted to use
this code in the future.
Now a lot of the issues are carrying over
to the applicant in the area of validation, licensing
of new RETRAN-3D models, things of that sort.
DR. ZUBER: But wait. You are really
dancing around the point, using this expression. if
the code -- If I understood Ralph, they detected some
incorrect -- or errors in the formulation, and for
this reason they said it's not applicable or
irrelevant. Do you agree with that statement, yes or
MR. SWINDELHURST: We do not.
DR. ZUBER: Are you going to address it?
MR. SWINDELHURST: Yes.
DR. ZUBER: Today?
MR. SWINDELHURST: Yes.
DR. ZUBER: Okay.
MR. SWINDELHURST: We may not address it
to your satisfaction, but we will address it. We
think that these code equations are suitable for the
intended use of this code.
DR. ZUBER: Again, suitable -- It's a
very elastic word. It may be suitable for something
and not suitable for others.
CHAIRMAN WALLIS: Can we get to it when we
actually look at an equation and find out if it's
suitable? I might point out that the ACRS has
deliberately been a spectator. We are not involved in
producing SERs. The staff does that.
We don't do the RAI process. So we've
been spectators up to now, and now we're coming in
again and saying do we like what we see.
MR. STAUDEMEYER: We understand that, but
we are also of the opinion that your comments have
been heard by the NRC staff, and they have been
forwarded to us through the RAI process, and that's
the way that we respond to things. That's just --
That's nor NRR works.
CHAIRMAN WALLIS: Yes, that is the
process. Right. I agree.
(Slide change)
MR. SWINDELHURST: I'm skipping one slide,
because we've covered it adequately. On this slide I
would like to just emphasize a couple of things,
although Ralph has gone through this adequately.
We do have this RETRAN-02 mode. We do
have any of the new models, not the RETRAN-02 mode.
Maybe the validation hasn't been adequate. The future
applicants are going to have to come in and justify
that to the staff's satisfaction. We fully agree with
It's a fully acceptable way to move
forward with the use of this code for licensing
applications. This is nothing new, this third bullet
here. Any organization using a code like a thermal-
hydraulic is obligated to come explain to the NRC and
document and show that they are skilled and capable of
using this code, and we fully agree that that process
ought to continue in the future.
CHAIRMAN WALLIS: But it says here,
"Organizations without NRC-approved models." So the
implication you have is that the models in RETRAN have
been approved and do not need to be reviewed again?
MR. SWINDELHURST: Some of the models
have, and some of them have not. The ones which have
not are called out in the conditions of the SER.
CHAIRMAN WALLIS: So if we look at -- So
something like equation 2.3-4 -- this is a momentum
equation or subsequent things -- Your impression is
that NRC has given these derivations its blessing?
MR. SWINDELHURST: I would say NRC has
given the use of this code, including those equations
as they end up in the coding -- Yes.
CHAIRMAN WALLIS: And so, if an
undergraduate student read this equation and submitted
it to me in homework and I gave him a D, and he said,
no, it's got NRC blessing, would that be a true
statement by the student?
MR. SWINDELHURST: I guess that would
depend on what his intended application of that
equation was.
DR. ZUBER: Now let me say, you just
remind me of something, the difference between science
and technology and politics and law. In science and
technology, the word is mean is technology is either
correct or is incorrect, and what you are saying
there, it depends how you end up with the thing.
MR. SWINDELHURST: Certainly.
DR. ZUBER: The question is not if it's
wrong. Even for a junior, it cannot be fashionable.
MR. SWINDELHURST: Just as a simple
example, you know, we are clearly stating that this
code should not be used for doing large brick LOCA
calculations. We agree to that, because these
equations are not suitable for that application.
They may very well, and we maintain they
are suitable for a lot of other applications where the
phenomena are less complex and the event that you are
simulating is less dynamic.
DR. ZUBER: The simplest example is the
flow to a straight pipe, and what I keep seeing here
in the memo which was sent by Lance Agee, I think, the
error there is -- I mean, on the junior level.
MR. SWINDELHURST: Well, we'll cover that
in the next --
CHAIRMAN WALLIS: I think your claim is
that, even if there should be errors, it doesn't
matter for the applications you have in mind.
MR. SWINDELHURST: I don't think we would
call them errors. I think they are approximations
that are used to put the equations in a form they can
be solved in a computer for this type of an
CHAIRMAN WALLIS: Well, that's
interesting, because if you look at what happens in
politics, our late President was accused of lying
about something which many people may have considered
to be minor, and then he did a lot of things which
were valid policies. But half the political body in
Washington seems to condemn him for this incorrect,
invalid statement he made right up front. That
somehow for them cast a shadow over everything else.
I think what you are saying is it doesn't
matter, because for the purposes we have in mind,
everything is okay. Is that your viewpoint?
MR. SWINDELHURST: We would claim it's not
an error. It's the way the equation is being
formulated for this application.
CHAIRMAN WALLIS: But if it were that
these equations had errors in them which were of a
really fundamental nature --
DR. SCHROCK: The problem I have is the
presentation has this pretense of rigor. The errors
are there, and then you emerge from that with a claim
that these are approximations. They are never
introduced as approximations. They are simply errors,
sometimes even defended as not being errors.
Then the bottom line is that you say,
well, the equations are okay, because they are, in
fact, engineering approximations. But this has never
been shown that they are satisfactory approximations,
what is being approximated, that indeed the
approximation is satisfactory for all applications
that have been approved.
MR. SWINDELHURST: I think we understand
the comment that the documentation hasn't met your
needs, and we understand --
DR. ZUBER: The basic need of -- you
expect from a junior.
CHAIRMAN WALLIS: Now you cannot write
statements which just are not correct and get
validity, really, it seems to me. It's very dangerous
to make statements about momentum balances which just
are not true and then to expect credibility in the
rest of the document. That's where we are.
Now the NRC, it may be, operates in a
different way, but that's the puzzle we have anyway.
So we're going to get to that.
MR. SWINDELHURST: We're going to get to
DR. ZUBER: Just one -- Think about
intervenor going in front of television and showing
your equations, and he is a professor somewhere. He
says, look, I would have flunked a junior if he gave
me this solution. Then you say NRC and industry
license safety calculations based on these errors.
What would this do to the industry?
MR. SWINDELHURST: We don't think we are
in that situation.
DR. ZUBER: You may well be.
MR. SWINDELHURST: I understand, but --
DR. ZUBER: You may well be, and let me
say you will be there.
CHAIRMAN WALLIS: But if you were there,
it would be a serious matter, would it not be? That's
what's baffled me about this whole thing, is that, you
know, you've had two years to respond to what seem to
me just very trivial things, and you come back with
not really seeming to understand the issue.
We'll get to that, I'm sure, but I think
you as a sort of manager, a responsible person, ought
to wonder about whether this matters and whether you
can really go forward with the statements you are
making when somebody, as Dr. Zuber said, could make
those kind of claims against you. It is a sort of
Achilles heel which I wouldn't want to have.
DR. ZUBER: This is going to kill this
CHAIRMAN WALLIS: No, it isn't going to
kill the industry. I mean, the last thing we want to
do is kill the industry because of something so
DR. ZUBER: Foolish things kill big
CHAIRMAN WALLIS: Okay. Well, I guess we
have to move on. I think we have to say something to
you, because you are a responsible person, really. I
don't know if the buck stops with you, but it stops
with somebody.
MR. SWINDELHURST: I think you're right.
CHAIRMAN WALLIS: Does it stop with you?
MR. SWINDELHURST: It certainly does in
terms of what we do --
CHAIRMAN WALLIS: The buck stops with you?
MR. SWINDELHURST: -- and my company doing
this type of work. We have to be sure it's correct
and accurate for the intended purposes. There's no
doubt. We're the licensee. The licensee is
DR. ZUBER: I also think it's the
responsibility of NRR to accept or discard such an
DR. KRESS: On your fourth point, before
we take that, if there are things in RETRAN-3D that
are -- we say are fundamentally wrong with the
momentum equation, it's very likely that those are in
RETRAN-02 also. Does that put into question the use
of RETRAN-02?
If it puts in question the use of RETRAN-
3D, would it also put into question use of RETRAN-02?
MR. SWINDELHURST: Most of what we are
talking about is also applicable to RETRAN-02 and,
when we find an error in the RETRAN-3D, we go
backwards and see if the same error s in RETRAN-02.
If it is, we get that fixed also.
DR. KRESS: And do you have to reapply for
approval of that part -- those changes? Is there a
MR. SWINDELHURST: No. No, if there's an
error correction, the NRC staff allows us to correct
errors without re-review.
DR. KRESS: Okay. That probably answers
my question.
(Slide change)
MR. SWINDELHURST: I would like to just
bring in a topic which may not seem like it's directly
applicable, but we believe it is.
As you are aware, the staff has issued
this draft guide 1096 for comment within the industry.
We are expecting that this will run through -- The
comment process will be issued, and it will require
more technical justification for future submittals of,
you know, realistic or best estimate, whatever
terminology you prefer, codes and applications in this
thermal-hydraulics area. That's a fact, and that's
perfectly okay.
As Ralph also mentioned, we think these
requirements ought to be commensurate with the
significance of the application. If the application
is a relatively simple transient where the phenomena
are mild and all of us would agree to that, then they
should not be a lot of required validation testing,
because there shouldn't be any concern of the need for
CHAIRMAN WALLIS: That is an awkward -- I
essentially agree with that, but this is all done, I
think, in the public view, and you have to be
sensitive to the view of the community, and that
includes people like undergraduate students. If they
read something in the document which their professor
has just corrected as wrong in their homework, then
that's going to demolish a lot of their faith in
what's going on in this industry, isn't it?
I mean, it's not just a question of the
requirements being commensurate with this, a game
played between you and the NRC. At some level I think
you have to be concerned with a wider audience.
That's where, I think, you fall down here.
I agree that it may well be that, as I
found with TMI, analyzing it myself, mass and energy
balances is most of the story for most transients, and
who cares about momentum equations. Well, if you can
show that, that's great. But if you claim that you've
got a derivation where the term so and so means
something and the term something means something, and
it doesn't mean something, then that cannot be
excused, I think, just by making the statement in line
3 here.
I agree with line 3, but I think there is
a wider audience out there. It includes your own
MR. SWINDELHURST: I realize that. I
think we realize that there is a wider audience and
that --
CHAIRMAN WALLIS: Industrial engineers and
NRC staff and everybody.
MR. SWINDELHURST: I understand.
DR. ZUBER: See, underlining what Graham
said is you are going for exactness. I mean
responsibility. You have a good derivation, basic
principles, etcetera, etcetera, and you come with
something which is not so.
You could really simplify the problem
which you can defend and be more efficient, and then
if something cannot be applied, then you have to
develop a rigor. What you have here is something, a
mish-mash. I mean exactness or basic principles,
which they are not and something, then which is very
difficult to apply long running.
It's not really an efficient way to do
this analysis.
CHAIRMAN WALLIS: But you could say that
no one knows how to solve this problem. So we make
the only thing we know how to do, which is to analyze
it as a series of straight pipes, whatever it is, and
then say, look, we've got some data that show that for
our purposes that's okay.
MR. SWINDELHURST: I think that's exactly
what we're doing. Okay?
CHAIRMAN WALLIS: But it's this -- Well,
we'll see when we get to it. Okay.
MR. SWINDELHURST: And we've never gotten
into that in this discussion, is to what extent does
it make a difference? To what extent do you get
acceptable answers at the end of this? That's where
we are, and we've been there for 20 years using this
code in that way.
Okay. The last item here I just want to
mention is, you know, we did talk a lot about best
estimate/realistic, and that's kind of the looking
forward way of doing licensing type analyses perhaps,
but we've still got this traditional conservative way
which is the way things are done now, and we need to
-- the industry needs to make certain that that option
is still recognized as being a valid and useful way to
continue to do this work in the future.
DR. SCHROCK: That is a problem, I think.
Seems to me that industry should want to get away from
that crutch in the long term. I don't understand why
you have this strong desire to preserve it.
In the shorter term you don't want to go
through required relicensing processes to continue to
qualify operating plants. That's understandable. But
you should expect a normal transition, and that normal
transition might even be as long as 20 years. I don't
know. But you should at some point in the future
stand up and say, yeah, we're proud of the fact that
we began an industry which was based on pretty shaky
engineering calculations. We did it very
conservatively. We went through a transition in which
we believe we have better calculations, and we can,
therefore, up-rate the power on our plants. But
indefinitely into the future, we demand the right to
license power plants according to 1971 technology.
That's stupid.
MR. SWINDELHURST: I think that the
evolution you are talking about probably is going to
occur. You know, it may take 20 years. Who knows?
But --
DR. SCHROCK: It won't occur in 20 years
the way we are moving.
MR. SWINDELHURST: Well, this is rather
new, though, this transition to realistic or best
DR. SCHROCK: No, it goes back 13 years.
What do you mean, it's new?
MR. SWINDELHURST: For the non-LOCA stuff,
it's relatively new, and the reason, in my opinion,
that it hasn't been moving that way faster is because
there hasn't been a need to do it.
As you mentioned, with up-ratings or other
things that come along, there may, in fact, be a need
to do it, and then it will be forced through another
DR. ZUBER: And if you wait for 20 years,
you won't have this industry.
CHAIRMAN WALLIS: Well, it's coming back.
DR. ZUBER: Not following this work. This
will kill it.
(Slide change)
MR. SWINDELHURST: Okay. Just a few more
comments here.
The NRC has mentioned, as Ralph has, that
there's a concern of an absence of user guidelines.
We don't share that perspective. We think there's
adequate documentation and understanding within
organizations as to what it takes to use a code like
this to build models, to submit topic reports to the
NRC to get approval to do this type of licensing work,
and we do not share their opinion that there's an
absence of information available to do this.
CHAIRMAN WALLIS: Well, I'm not a user,
but I must say, when I looked at your RAI reply, which
we are going to get to about how you model, say, the
lower plenum, I hadn't a clue what was going on and
how anything you told me there related to what I would
stick into your momentum equation in order to have a
So I struggled. I had sleepless nights,
and I still couldn't figure out what was going on. So
at least this user didn't understand how to use the
code for a geometry other than the very simple ones
showed in your examples.
MR. SWINDELHURST: We will have to work on
that then, and we are prepared to talk about that as
There has also been expressed concern
about inexperienced users or maybe even experienced
users misusing this code. That's true of any code.
You've got to have code experience. You've got to
know what you're doing.
This is a highly technical code. Ralph
has mentioned that there's a lot of options and a lot
of different ways users can model a plant, model a
particular analysis. That's one of the reasons why,
just because we are not starting this, we're not
embarking on a new program here -- this is 20 years of
organizations using codes like this -- it's very
difficult to retool and standardize and do everything
the same way.
There's lots of different plant designs
out there, and different organizations do things
different ways. And because of that and because the
organizations are not choosing to retool and start all
over with standard methods, it is necessary for us to
get to this step, which may not be desirable and may
not be efficient, of individual organizations needing
to validate, assess their models independently.
There's really no other choice on this
point. That's where we are, and that's what we are
going to have to do, and we accept that.
Ralph mentioned peer review. This is a
brand new thing. We've been waiting for the SER to
come out so we could start communicating this. We
think it's a good thing. We have unanimous support
within the RETRAN user group that this is something
people ought to consider doing.
Then again, it's still optional, and we
will definitely encourage people, especially new
users, to make use of this. It makes sense, and
especially with the incentive from Reactor Systems
Branch that this is something they would think is
worthwhile. It would be good for an applicant to do
it in terms of their future deliberations with the NRC
CHAIRMAN WALLIS: I would hope that some
of those peers are like some of the people you see on
this side of the table today who have come with a sort
of basis of knowledge but not -- they're not so tied
up with the code, they have anything at stake in it or
anything, and they don't know the history. So they
can ask the questions which maybe haven't been asked
before and things like that.
MR. SWINDELHURST: Okay. We really have
attempted to answer your questions, and the questions
we've attempted to answer is what we see in the RAIs.
We are certainly going to try to answer your questions
We also realize that it's very likely we
will not be able to reach an agreement that we are all
happy with in terms of your questions being --
CHAIRMAN WALLIS: I was going to ask you
what you hoped would come out of this meeting today.
What's your expected result?
MR. SWINDELHURST: I was hoping that we
would be able to answer your questions maybe better
than we have in the past, and maybe to characterize
our perspective that the equations are suitable for
the intended purposes.
Yes, there's approximations that need to
be made along the way. There's engineering judgments
that need to be made along the way. But the end
result of that in terms of using this code for the
types of analyses we do with this code, non-LOCA
analyses, that it's a suitable framework for doing
these analyses.
CHAIRMAN WALLIS: If we were -- You know,
we are all competent professional technical people,
and these are relatively straightforward matters. It
would seem that you ought to be open for a consensus.
MR. SWINDELHURST: I think that's a nice
thing to hope for, but I would say, based on where we
are, we're not really expecting that.
DR. ZUBER: You are here where you were
two years ago, the same position. Reading this memo
from Agee, I didn't see much difference of what we saw
two years ago.
CHAIRMAN WALLIS: So what's happened? You
have helped us -- You helped me. You've been more
explicit about some of the things in your
documentation. That helps me to know what it is you
are saying, but not perhaps to understand why you are
saying it; because the basic problems seem to be still
the same.
You've clarified. So I think there's
better information. But that may just reinforce our
areas of disagreement.
MR. SWINDELHURST: That may be true.
CHAIRMAN WALLIS: We'll see about that
MR. SWINDELHURST: But we'll give it a
try, and we'll see how it works.
CHAIRMAN WALLIS: For all of our sakes,
the best thing that could come out of here today would
be all agreed that, yes, this written down is a good
momentum equation. The way it's resolved is fine, and
so on and so on and so on, check off these things, and
say let's go home and open a bottle of champagne or
MR. SWINDELHURST: Just for example, let's
say somebody is not happy unless it's a three-
dimensional code, and I don't mean 3-D neutronics. I
mean the whole code, the whole thing is three-
If that's somebody's expectations of
what's necessary here, then certainly we are not going
to get there.
CHAIRMAN WALLIS: No. My level of review
is the same -- I got to put it bluntly -- is the same
as the level of review I would give to an
undergraduate homework in flow mechanics. And if we
can't agree on that, I'm astonished and flabbergasted
and bamboozled and vexed and -- you know, I could go
on for a whole torrent. It's very strange.
DR. ZUBER: Okay. Graham, you gave us the
most optimistic, desirable solution. The other one is
for the industry to admit, yes, there is an error. We
are aware. We didn't correct it for two years, but we
shall now evaluate case by case the effect and do
sensitivity analysis, and then give us how you are
going to do it.
Then let me guess it's wrong. What you
really want is to smother something which doesn't
smell too good and use all these elastic words. I
think this is not good for the industry at all, and
for a regulatory agency.
CHAIRMAN WALLIS: Well, it may be or may
not be. Got to be careful what words we use.
DR. ZUBER: For me or they?
CHAIRMAN WALLIS: Well, they have to be
careful, too. Maybe you don't have to be careful.
DR. SCHROCK; In my mind, the operative
word here is formally. You've formally addressed ACRS
concerns, but you've not addressed ACRS concerns in
spirit. You've dealt with the regulatory process in
a way that you perceive as meeting the requirements of
the regulatory process, but you've not had deep
concern about the technical issues which have been
raised here.
MR. SWINDELHURST: I think we've had deep
reflection on all the technical issues raised here,
and we've gone back and considered each one.
DR. SCHROCK: For one, I don't see that
you have.
CHAIRMAN WALLIS: Well, this is going to
be embarrassing, because I mean, if we get someone up
there and we look at this equation and the claim that
say the pressure drop is balanced by the frictional
forces when all the other terms are out of this
momentum equation, well, that isn't true.
We can pull the whole thing apart, and we
can look at all these statements. That's going to be
very embarrassing to go through. Are we going to go
through that sort of thing?
MR. SWINDELHURST: To the extent you want
to go through it, I believe we will go through it.
CHAIRMAN WALLIS: Well, if you don't
resolve it, I guess we are going to be under some
obligation to write our opinion.
MR. SWINDELHURST: That's right, and --
CHAIRMAN WALLIS: And if there's no
response from you that helps to clarify things, it's
going to be the opinion we get from reading the
documentation, which is the same -- at least from my
point of view, the same opinion I had before. And in
a way, it's reinforced, because the strange features
are now clearer.
MR. SWINDELHURST: Well, let's give it a
chance, and maybe there will be some --
CHAIRMAN WALLIS: Well, I'm giving you a
chance. You know, I'd love to feel that I was wrong
and discover that I was wrong.
DR. ZUBER: I'd like to have a bottle of
MR. SWINDELHURST: We believe that your
concerns are generally generic to other codes like
this code, and I believe you've shared that opinion.
CHAIRMAN WALLIS: That is -- Yes, that is
a niggling thing, isn't it? That's true.
DR. SCHROCK: It's true, but it doesn't
help RETRAN-3D.
CHAIRMAN WALLIS: No, it doesn't help.
MR. SWINDELHURST: I'm not saying -- I'm
just saying that this is the way the industry uses
codes like this, and not all codes do things this way,
but a lot do.
DR. ZUBER: You see, but the difference is
you are addressing problems which we had 30 years ago,
25 years ago. Now you get into the edge of the
regulations, and again has changed. The environment
has changed, and you cannot use the same argument --
alibi, one used 20 years ago when you hear much of
conservatism, which are now going to decrease and,
therefore, all these codes which were applicable for
a previous era are not good for a time and era which
is coming now.
MR. SWINDELHURST: I agree with you. When
you start decreasing the conservatisms, the importance
of accurate modeling becomes even more--
DR. ZUBER: Even more so. Even more so.
CHAIRMAN WALLIS: Go ahead.
DR. ZUBER: One was able to live under
those -- with these errors, because we had a large
conservatism, which we didn't have to have, had we
done it correctly. Now when we want to decrease it,
we have to do it correctly. I think this is the
problem which neither the industry nor NRR, NRC, has
addressed, as far as I have seen today.
MR. SWINDELHURST: Well, I think we're
seeing that in the draft guide that came out. That's
exactly what it's speaking to, and we recognize that.
DR. ZUBER: But I don't see this reflected
in the code developments and code analysis. That's my
CHAIRMAN WALLIS: I'd like to say
something in praise of EPRI. You do realize that
there are generically applicable difficulties,
particularly with the momentum equations in codes. I
think EPRI realizes that or your contractors do.
So an effort was made to provide different
justifications, and I think that's praiseworthy. It's
good. It's just that we have some difficulties with
what you have now done. I think it's good that you've
faced up to the fact there was a problem there.
MR. SWINDELHURST: Well, we certainly got
concerns we have to respond to, and we're trying to do
that, and I think a lot of it --
CHAIRMAN WALLIS: No. I mean you faced up
to a problem that probably is generic in a whole batch
of codes and tried to better than they have. I think
that's a good thing to try to do.
(Slide change)
MR. SWINDELHURST: Okay. We have -- EPRI
has had an independent derivation of the RETRAN
momentum equation, as it's been labeled, by Dr. Thomas
Porsching. He is with us here today.
CHAIRMAN WALLIS: If someone will explain
that to me, because the equation I saw Dr. Porsching
derive is not the same as the RETRAN momentum
MR. SWINDELHURST: We'll be prepared to
speak about that.
CHAIRMAN WALLIS: Okay.
DR. SCHROCK: But you also have the fact
that NRR has said that it's irrelevant to the issues
that it's examined. So what's the purpose of the
bullet on this slide?
MR. SWINDELHURST: Well, obviously, we
don't agree with that. So I guess we would like to
take an opportunity today to have our side of that
We would like to clearly point out that we
are calling it a momentum equation. It's sort of a
perhaps mislabeling of this equation, and we just want
to admit up front that we recognize that. It's more
directly a flow equation.
CHAIRMAN WALLIS: But you call it a
momentum equation, and all your derivations say it's
based on some general microscopic momentum balance.
And I agree. It does look more like a flow equation,
but that's not the claim that's made in any of your
DR. ZUBER: Let me say, I have a problem.
I mean, I know momentum mass energy. I never knew a
flow equation. What is that?
MR. SWINDELHURST: We will cover that in
the next --
DR. ZUBER: Well, no. What you are really
doing -- Are you developing new physics or what?
MR. SWINDELHURST: I think the reason we
are making this acknowledgment, which we've made in
the past, is that momentum equation means a very
specific thing.
DR. ZUBER: Momentum.
MR. SWINDELHURST: And the way it's used
to term the equation in this code and the derivation
of it is somewhat loose with that terminology.
CHAIRMAN WALLIS: So we are going to get
into that after the break, I guess.
MR. SWINDELHURST: Correct.
(Slide change)
MR. SWINDELHURST: Just repeating one
thing. You know, we believe that this equation is
suitable for this application. The whole code, the
use of the code, all the features is up to the user to
defend it. He has to deal with the SER we have, the
conditions and limitations we have.
If more assessment work is done as
requested by the staff, that is the logical, normal
next step in the process when trying to use this code
for an application. We accept all that.
CHAIRMAN WALLIS: And he has to be able to
figure out how to use it, too.
MR. SWINDELHURST: Certainly.
CHAIRMAN WALLIS: So that's sort of my
second question. First question is: Are the methods
valid? What are they, and are they valid? The second
one is: How do you use it? That's another question.
MR. SWINDELHURST: And this is the way
it's always been. It's always --
CHAIRMAN WALLIS: If it's not clear how to
use it, then you can't very well make it the
responsibility of the user.
MR. SWINDELHURST: We don't believe we
have that problem with documentation.
CHAIRMAN WALLIS: It's like driving a car
where steering wheel isn't connected to the front end.
It's rather awkward to ask the user to be responsible
or that.
DR. POWERS: In fairness, Graham, I mean,
don't they have several hundred users of this code,
and it's been used by a lot of people?
CHAIRMAN WALLIS: Apparently.
MR. SWINDELHURST: Tens of users.
DR. POWERS: Tens of users, okay.
CHAIRMAN WALLIS: Okay. So you're going
to explain to us how the equation applies to, say,
that plenum model and --
MR. SWINDELHURST: Certainly.
CHAIRMAN WALLIS: Okay. Thank you.
MR. SWINDELHURST: And just the last
point: When we shift in the future to the best
estimate/realistic, we realize that's a different
world, and there will be different rules we'll be
playing by when we are doing best estimate plus
uncertainty type analyses.
CHAIRMAN WALLIS: I go back to Dr. Powers'
point, though, and it may be that a lot of people are
using this thing. But how are they using it, if it's
not clear how to use it? Maybe you can explain that
to us. But if it isn't clear, if our view is that it
isn't clear, and you can't tell me how to explain it,
then it's baffling.
MR. SWINDELHURST: I think the explanation
is that there has been hundreds and hundreds of people
who have learned how to use this code in various
organizations, and they learn it from the
documentation. They learn it from training sessions.
They learn it from mentoring, from people who have
gone before them.
When they need help, they go to the
vendor, and it's a community of code users who -- just
like any other code.
CHAIRMAN WALLIS: So you are going to
essentially tell us. We're going to come in as naive
code users and say we can't figure out how to evaluate
the momentum flux at this end of this box, and --
DR. POWERS: I am not sure that that's the
right standard to apply. I guess that's what I'm
driving at, is that at least in a lot of the codes
that I get associated with, they aren't this mature,
and the documentation is spotty at best. But they
become -- The codes get internationally used because
of this -- what you call it -- mentoring or training
sessions or group exercises calculating individual
problems, that there is a tremendous -- Like many
engineering disciplines, there's a great deal of oral
lore associated with how to use and how not to use the
So I think I don't see a need to come in
and say is this documentation such that, if I am an
obstinate and recalcitrant user, I can find flaws in
its explanations and dream up examples that are where
it's just not going to work following this.
I mean, I think that's an unfair standard
to apply to this. I think you have to have a much
more liberal standard applying to the logic, because
there is so much of this, and that's not unusual. I
mean, all the codes I'm associated with are that.
DR. ZUBER: This is my problem I always
had with code users and the documentation. Really,
they don't really look what's in the document. They
don't understand, and very often they just put it on
the computer and then run it and fiddle around.
I think then they show good agreement with
some experiments by adjusting some coefficients
without really acquiring -- I mean inquiring is it
applicable, can I use it, when can I not use it. I
think this is my problem, one of the problems with
this code.
MR. SWINDELHURST: Let me just give an
example of that as how a user would use this code.
First you have to go to your plant model. Okay?
You've looked at other people's plant models. You
know what they did. You look at your plant design.
You adopt the good ideas, and you have to do some
initial -- some new type of work.
You go to your plant model. You go
through the code manuals. You select every option in
the code based on what other options are recommended,
what other people are using, what works best to invoke
the right equations, right options, right
You build all your model. Then you have
to validate it against something, and we do usually
use plant data. A lot of work is done by the code
developer and by some contractors looking at, you
know, scale data.
We use plant data, and then you have to go
play this all in front of the NRC in the form of a
submittal saying this is how we use this code, all
these options turned on. And if we do some knob
turning or some dialing in on something, that is laid
out as part of the submittal: We adjusted this
parameter, because it didn't match plant response.
And that's part of your modeling. Okay?
CHAIRMAN WALLIS: But in setting up this
model, someone has to look at this noding and say
here's some W's defined here and here, and they have
to be somehow put into a structure which then the
equation uses.
Maybe you can help us later on to explain
how various W's are related to what's in the equation
when someone is actually doing the noding and so on.
That would help us a lot.
MR. SWINDELHURST: But these activities
have been routinely done by many, many people, and it
isn't as big a mystery as --
CHAIRMAN WALLIS: Well, many people
followed Hitler. I mean, there's no excuse because
many people did something that it's all right.
DR. ZUBER: That's a problem, really.
CHAIRMAN WALLIS: That's the sort of naive
attitude we have, being outsiders to this business.
Are we going to get to see Dr. Paulsen
after the break?
MR. SWINDELHURST: Yes. We've got Dr.
Paulsen coming up next. He's got a presentation, plus
he's prepared to answer anything you want to ask.
We've got Dr. Thomas Porsching here to discuss his
development, if you have any interest in that. Jack
Haugh is here as EPRI management.
CHAIRMAN WALLIS: Good. That will be very
I think what we would like to do is we
would like to look at only two or three equations and
their derivation and understand it, and also
understand how it's related to some of those weird
shaped nodes. Maybe that's all we need to know.
So it shouldn't take very long.
MR. SWINDELHURST: I think we're perfectly
willing to follow your lead as to what you want to
CHAIRMAN WALLIS: That would be great, and
it would sort of follow what you send as a response to
the RAIs. So we've had a chance to look at all this
before. We're pretty well prepared. It's not as if
you had to explain everything.
So perhaps we can get most of that or all
of it done this morning. Good, thank you. So we'll
take a break until quarter of eleven -- Sorry, before
I use this gavel, it's going to a break, 15 minutes
until 10:30.
(Whereupon, the foregoing matter went off
the record at 10:15 a.m. and went back on the record
at 10:30 a.m.)
CHAIRMAN WALLIS: We are now going to hear
a presentation from Mark Paulsen.
MR. BOEHNERT: Mark, I'm assuming this is
going to be open. There is no proprietary information
DR. PAULSEN: This is open, yes.
What we hope to cover today is to address
some of the concerns that have been raised about the
momentum equation, the formulation of the momentum
equation, address also some of the issues relative to
how we apply the RETRAN equations to complex geometry.
What do the users have to do when they want to model
a three-dimensional plant using these simplified
DR. KRESS: Can you orient me as to what
CSA is and how you fit into the --
DR. PAULSEN: CSA -- We are a consulting
firm that is the developer -- We have been involved in
the development of RETRAN. We also do the maintenance
portion of the work for RETRAN, and we provide user
support and training.
DR. ZUBER: Where are you located?
DR. PAULSEN: We are located in Idaho
CHAIRMAN WALLIS: Now RETRAN actually
appeared about 20 years ago.
DR. PAULSEN: RETRAN actually began
probably in about the late Seventies.
CHAIRMAN WALLIS: There was a report from
whatever the embodiment then was of Idaho Falls.
DR. PAULSEN: Yes. It began at Energy
Incorporated. It was a spin-off from the RELAP-4
code, and it was designed specifically to provide
utilities a tool to analyze Chapter 15 transients,
because at that point in time utilities were relying
solely upon industry.
CHAIRMAN WALLIS: I think what we found in
the original documentation that you submitted with
RETRAN in 1998 was almost exactly the same as EG&G or
whoever they were had submitted in their report in
1980. Very, very similar.
DR. PAULSEN: For which one now?
CHAIRMAN WALLIS: The documentation we
first read two years ago, two and a half years ago.
DR. PAULSEN: Oh, okay, the original
CHAIRMAN WALLIS: Was exactly the same for
RETRAN as in the report that is now 20 years old from
Idaho Falls.
DR. PAULSEN: Yes. It was an EI report.
CHAIRMAN WALLIS: Right.
DR. PAULSEN: That's right. Okay.
DR. ZUBER: This shows the genetic
CHAIRMAN WALLIS: Yes, of the genes.
DR. SCHROCK: Let's see. You are going to
clarify Mr. Swindelhurst's comment about the vagary of
the terminology momentum equation and flow equation?
DR. PAULSEN: I hope to.
DR. SCHROCK: Good. Okay.
DR. PAULSEN: And the approach I have
taken was I went back and looked at some of the
concerns that have been raised in the previous ACRS
meetings and looked at the RAIs that we had been
issued by the staff and tried to put together a
cohesive story that starts at the top and goes to the
So I'm not following the order of the RAI
questions. I'm trying to start at the top and make a
cohesive story. Now if you have questions as we go,
I'm sure you're not bashful, and you will ask
So we may not even get to the RAI
question, if we can get things resolved up at the top.
DR. SCHROCK: Does the 2 mean a second
round of questions?
DR. PAULSEN: That's correct. This round
of questions dealt primarily with the staff's trying
to direct -- or to relay the ACRS concerns about the
momentum equation. So there's a lot of overlap in
these RAI2 questions with what the ACRS concerns were
on the momentum equation.
Most of the concern has arisen on how we
use the one-dimensional momentum equation. We start
with a 1-D equation, and then we develop what we use
as our flow equation, and we are going to try and talk
about that, point out the definitions and some of the
assumptions we make.
So while we are doing this, we hope we can
address your concerns. I hope you don't get the
feeling that we've been trying to avoid your concerns
for two years. We have actively been trying to
resolve them.
CHAIRMAN WALLIS: We have no feelings at
DR. PAULSEN: Okay. What this has led to
is the fact that we have -- In responding to the
request for additional information, we have attempted
to make the documentation more usable, more accurate,
and we have also identified several code errors which
we'll talk about, and we have corrected them.
(Slide change)
DR. PAULSEN: So as we go through the
development of the RETRAN-3D flow equation, first of
all, I'm going to start with some general comments to
try and point out where we are going with all of this,
so that we don't put equations down on the board
before we actually know where we are trying to go, and
maybe that will help clarify things.
Then we also want to list, as many as
possible anyway, our definitions in the assumptions
that we make. We will then go through the case where
we actually start with a constant area channel, start
with the momentum equation, and derive our flow
equation, and then go through later how we apply that
to variable area channels and then for situations
where we may have more connections than just a simple
straight piece of pipe --
CHAIRMAN WALLIS: Now your constant area
channel you are going to show us is that bend to --
DR. PAULSEN: That's right.
CHAIRMAN WALLIS: That originally had a
variable area, because in the first document we saw it
had an AEK and an AK plus one, which was different.
DR. PAULSEN: That's right.
CHAIRMAN WALLIS: And now it seems to have
fallen back to being constant area.
DR. PAULSEN: For this initial development
CHAIRMAN WALLIS: Then it fell back to a
more special case?
DR. PAULSEN: This is a constant area.
CHAIRMAN WALLIS: Later it gets
DR. PAULSEN: That's correct.
CHAIRMAN WALLIS: To be a variable area?
DR. PAULSEN: That's right, and it's
really with the abrupt area change --
CHAIRMAN WALLIS: Without an area change
at all, just a tube with different areas on the end.
Are you going to list that one?
DR. PAULSEN: Basically, we'll go through
three developments, one where we start with a constant
area. Then we'll go to one where there is an abrupt
area change --
CHAIRMAN WALLIS: Why abrupt? In a
variable area it doesn't have to be abrupt. I'm just
pointing out that in the original documentation what
you now have as a constant area channel was a variable
area channel.
DR. PAULSEN: That's correct.
CHAIRMAN WALLIS: And for some reason it's
fallen back, maybe --
DR. PAULSEN: Because in -- There was a
lot of confusion about those figures that led to --
and really, that figure was used to develop a constant
area equation.
CHAIRMAN WALLIS: But if your equation is
right, it should apply to a variable area channel
without a sudden change of area.
DR. PAULSEN: Well, we'll talk about that
as we go. Okay?
Then we are going to look at complex
geometries on how we actually apply these One-D
equations, what kind of assumptions do users have to
make, what are some of the sensitivities, and how do
they apply them? Where do you break a model up to
start applying these equations?
We will also identify where some of this
guidance is available for users. There is actually
documentation available that directs users on how to
do some of this nodalization.
(Slide change)
CHAIRMAN WALLIS: What does this first
statement mean?
DR. PAULSEN: That it's fundamentally one-
CHAIRMAN WALLIS: What does it mean? What
do you mean by that? I want to see what he says it
DR. PAULSEN: We are starting with a one-D
momentum equation.
CHAIRMAN WALLIS: What does that mean?
DR. PAULSEN: We are not going to account
for any momentum in the transverse direction.
CHAIRMAN WALLIS: So you mean it's a
momentum equation resolved in one direction?
DR. PAULSEN: In one direction. That's
CHAIRMAN WALLIS: So when I -- I want to
be clear about this. I don't want to criticize
something which is different.
You are saying this is the resolution of
momentum fluxes, forces of momentum changes in one
DR. PAULSEN: Yes, and we'll see --
CHAIRMAN WALLIS: And when you get to a
bend, you are going to explain how a bend can be one-
dimensional and things like that?
DR. PAULSEN: We'll talk about that.
CHAIRMAN WALLIS: Okay. I just want to be
DR. SHACK: You're saying more, though,
right? You're saying the flows are all one-
dimensional, too. There are no transverse flows --
DR. PAULSEN: That's true. That's true.
CHAIRMAN WALLIS: Your averaging works in
a one-dimensional sense?
DR. PAULSEN: That's correct.
CHAIRMAN WALLIS: Right.
DR. ZUBER: What do you mean by flow
DR. PAULSEN: An equation of motion.
DR. ZUBER: You conserve three things in
thermal-hydraulics. It's momentum, energy and mass.
You don't conserve the flow. If this is the
conservation equation, then it's the momentum
You see, this kind of elastic --
CHAIRMAN WALLIS: Well, I think -- Let's
clarify. What you are going to do is you are going to
manipulate this momentum equation resolved in one
direction in some way until it looks like something
else, which isn't quite recognizable as a momentum
equation --
DR. PAULSEN: That's correct.
CHAIRMAN WALLIS: -- and you call that a
flow equation. Is that what you're doing?
DR. PAULSEN: That's correct.
DR. ZUBER: Am I correct to understand
that, even up in your new conservation equation --
CHAIRMAN WALLIS: No. They are going to
do some manipulation to get something which isn't
immediately recognizable as a momentum equation but
came from the momentum equation. That's what I
understand you are going to show us.
DR. PAULSEN: That is correct.
DR. ZUBER: They may write new textbooks.
DR. PAULSEN: Okay. I think one of the
areas where we've probably introduced some confusion
in the past was, as pointed out, maybe trying to be
too rigorous with the implication that there was more
fundamental physics behind the code than really there
We are not really trying to do anything in
three dimensions. There's a lot of development where
we've emphasized the vector momentum equation.
Really, it's a scalar equation, but we carry some
vector information along. We'll show the purpose of
that in a few minutes.
CHAIRMAN WALLIS: But you have an example
which is a 90-degree bend.
DR. PAULSEN: That's correct.
CHAIRMAN WALLIS: And that isn't in your
list here, is it, but it's an example in your --
DR. PAULSEN: Places where we really
recommend -- where we would recommend angle
information be used.
CHAIRMAN WALLIS: Yes, but you are showing
how to use it for a 90-degree bend in your
DR. PAULSEN: That's right.
CHAIRMAN WALLIS: So I think we ought to
look at that example.
DR. PAULSEN: And I have an example -- I
have some discussion of 90-degree bends and what level
of detail you go into when you are modeling.
Basically, by carrying some of this angle
information along, you can get a more correct
representation of the momentum flux in some of these
areas where you've got multi-dimensional pieces coming
Where we really don't recommend using --
We're not trying to use angles to represent three-
dimensional flow patterns in downcomers or lower
plenums. We admit that right up front. And in most
models -- One of the examples we gave was the elbow
which Dr. Wallis pointed out. That was simply
supposed to be there to represent the effects of the
We don't recommend users model individual
elbows. In practice, users are going to lump straight
sections of pipe and elbows into one section.
CHAIRMAN WALLIS: But you have an example
in your first response to the RAIs where you have the
cold leg and the downcomer. There's a node that spans
both of them. That looks awfully like an elbow to me.
DR. PAULSEN: And we'll talk about that.
There's an example that --
CHAIRMAN WALLIS: I think we need to talk
about that.
DR. PAULSEN: Yes. Then we don't use the
angle information to simulate every turn in the piping
CHAIRMAN WALLIS: You do not?
DR. PAULSEN: We do not.
DR. SCHROCK: What do you do to simulate?
DR. PAULSEN: Pardon me?
DR. SCHROCK: What do you do to simulate
the turns in the piping?
DR. PAULSEN: Those we generally account
for with loss coefficients.
CHAIRMAN WALLIS: Your claim in one of
your documents is that the friction on the wall is
balanced by the pressure drop in that situation.
DR. PAULSEN: That's right.
CHAIRMAN WALLIS: And that that comes from
a momentum equation?
DR. PAULSEN: Basically, I think when we
get through our momentum -- I keep wanting to call it
momentum equation. Pardon me. It's years of
incorrect training.
CHAIRMAN WALLIS: But you told us where it
comes from. It's your principle you're using.
DR. PAULSEN: That's right, and I'll
probably be calling it the momentum equation, but what
we are referring to is the One-D or the scalar
equation is what we actually get to.
CHAIRMAN WALLIS: Well, I guess we'll get
to that, but your claim is that the momentum -- the
overall momentum balance simply have a bend in the
pipe with no momentum change and stuff that goes
around. Frictional forces are balanced by the
pressure drop in the momentum balance.
DR. PAULSEN: What we end up --
CHAIRMAN WALLIS: That's not -- I need to
question you about that, because I don't think that's
DR. PAULSEN: Because basically, what we
end up with when we get our equation, if we drop the
time derivative term and we look at just the terms on
the righthand side of the equation, it looks like the
mechanical energy equation.
CHAIRMAN WALLIS: No, it doesn't.
DR. PAULSEN: It's very similar. We have
the Bernoulli terms --
DR. ZUBER: Wait, wait, wait, wait, wait.
Do you know how the Bernoulli equation is derived?
DR. PAULSEN: Yes. Bernoulli -- It's a
mechanical energy equation.
DR. ZUBER: Okay. How do you derive the
Bernoulli equation?
DR. PAULSEN: Well, I don't think that
really is relevant here, because --
DR. ZUBER: No, it is.
DR. PAULSEN: -- we're talking about the
momentum equation.
DR. ZUBER: No, exactly, because you said
that, when you drop the storage terms, the equations
are like the energy equations. Then you brought in
the Bernoulli equations. Graham said no. I said no.
You tell me -- I'm questioning you how are you
deriving the Bernoulli equation?
DR. PAULSEN: Well, let's wait until we
see what the equations are.
CHAIRMAN WALLIS: I'd like to see it,
Novak. I'd like to see the equations.
DR. PAULSEN: Let's look at the equations
DR. ZUBER: He does not know --
CHAIRMAN WALLIS: I think that may become
clear later on. We'll find out. I don't think we --
(Slide change)
DR. PAULSEN; Okay. We are going to start
with the illustration of the momentum cell shows an
elbow, and the primary reason for showing this elbow
is just so that we keep track of some of the effects
of the vector information on the flow into the
momentum cell, because we used that in some of our
components -- for instance, T's in plenums, as we'll
see toward the end of this discussion.
As I mentioned previously, we don't
recommend using angles in every elbow or change in
direction in the piping network. What we'll see is,
if you include an angle for an elbow, you're going to
see a pressure change there as a result of that angle,
but as soon as you get around a bend where you've put
in an angle, the pressure goes back the same. It's a
recoverable loss.
So the only place it really affects the
pressure is locally where you have included that angle
(Slide change)
DR. PAULSEN: So at this point, here we
have our momentum cell. Basically, our momentum cell
overlaps two mass and energy cells. So here we have
an upstream mass and energy cell and a downstream mass
and energy cell, which we refer to as control volumes.
Now this momentum cell -- you might refer
to it as a control volume also. But in RETRAN
terminology, control volumes are mass and energy
cells, and we'll call this a momentum cell or
CHAIRMAN WALLIS: Then let's look at this:
Ak user supplied down there, and you have Ak+1 user
supplied. So that would make me think they could be
different, and they were in your original derivation.
DR. PAULSEN: That's correct.
CHAIRMAN WALLIS: Yet in your equation you
make them the same. Why is that?
DR. PAULSEN: Because we are going to use
this to develop a uniform area flow equation.
CHAIRMAN WALLIS: Yes, but your RETRAN
equation has different areas in it.
DR. PAULSEN: And we'll get to that as we
develop --
CHAIRMAN WALLIS: No, but please, if you
are going to say you've got a general equation with
two areas in it, it should apply to this shape, too,
shouldn't it? Yes?
DR. PAULSEN: If we have two areas?
CHAIRMAN WALLIS: If Ak and Ak+1 are not
equal, your equation has Ak and Ak+1 different, your
general RETRAN equation. Right? So it should apply
to this.
DR. PAULSEN: That's the one where we
assume -- after we've gone through the development of
having an area change.
CHAIRMAN WALLIS: What you call the RETRAN
equation -- right? -- has Ak+1 and Ak in it. Right?
What you call the RETRAN equation?
DR. PAULSEN: Is that on the next slide?
CHAIRMAN WALLIS: Wherever it appears, it
has an Ak and an Ak+1, which are different. Right?
DR. PAULSEN: They may or may not be
CHAIRMAN WALLIS: They could be different
in this figure, right? And your equation -- The
RETRAN equation, which you want us to believe, has an
Ak and an Ak+1 which are different in it, in general.
DR. PAULSEN: Yes.
CHAIRMAN WALLIS: It doesn't have a step
change or anything, and you originally had a
derivation for this shape in which the Ak and the Ak+1
were different, and for some reason you've fallen back
to Ak, and I think the reason is you couldn't get rid
of the Ak's and the Ak+1s multiplying the pressures.
So you just said we won't do it, because we don't know
how to do it. We'll just forget it.
DR. PAULSEN: That goes with part of this
development --
CHAIRMAN WALLIS: In your original
derivation, when we get to the part, the pressures on
the ends multiplied different areas. Right?
DR. PAULSEN: They have an area for the
node upstream and the node --
CHAIRMAN WALLIS: Well, we'll back to that
when you do your derivation. But I'm just pointing
Another question I have to ask you: This
bend could be a 90 degree bend or 180 degree end or
any kind of bend? Still works?
DR. PAULSEN: In actual practice in RETRAN
the bends are usually limited to 90 degrees.
CHAIRMAN WALLIS: This equation
development, this theory, would apply to any kind of
a bend in a pipe of constant area. Right? Okay. So
if I give you a picture of an 180 degree bend, you can
tell me how it applies to that?
DR. PAULSEN: A what now?
CHAIRMAN WALLIS: 180 degrees, and 360
degree bend, your pipe comes along, goes to loop and
goes off, you will show me how this equation applies
to that? It's a general bend? Can we get into that
sort of discussion?
DR. PAULSEN: Well, let's get the slides
up, and then we'll get the equations up --
CHAIRMAN WALLIS: Can we do that? Is that
DR. PAULSEN: Okay.
DR. KRESS: Before you take that one off,
how is it you know exactly where to place the momentum
cell with respect to the two mass and energy cells?
DR. PAULSEN: to the mass and energy
cells? In practice, where users generally put
junctions is what we would call this, the momentum
cell, is where there are changes in geometry.
CHAIRMAN WALLIS: It's sort of in the
middle. It's a convenient place.
DR. PAULSEN: Right. And in some cases,
depending on the type of transient, if you want to get
spatial resolution, then you may add more nodes where
you don't have geometry changes. But most places
you'll see junctions will be, say, where the cold leg
connects to the downcomer, a surge line comes off of
the cold leg.
DR. KRESS: So the junction would be where
1 is on that.
DR. PAULSEN: That's right.
CHAIRMAN WALLIS: They tend to be mass and
energy cell junctions, but the momentum cell is
something else. Was that your question, how do you
locate the momentum cell?
DR. KRESS: Yes. You know, you could just
place it -- You could leave the junction where you had
it, but you seem to -- like you have some freedom to
locate the momentum cell.
DR. PAULSEN: That's right.
DR. KRESS: I just wondered what rationale
was used to place it anywhere when you go to divide up
your circuit into those cells. Like, for example, one
might look at an actual bend and say let's make the
angle phi to 1 the same for the inlet and exit between
those two. That would be one choice, for example.
DR. PAULSEN: Right.
DR. KRESS: That might help you in how you
derive the equation. But I don't know what the
rationale was.
DR. PAULSEN: Okay. It's basically where
you have area changes, and then in some cases where
you have long sections of piping you may put in
additional nodes just to get additional spatial
resolution so that you come closer to approximating
the difference equations.
CHAIRMAN WALLIS: And this phi i -- you
are going to resolve in the direction of phi i?
DR. PAULSEN: Yes, of this --
CHAIRMAN WALLIS: Now I notice when you
have, say, the downcomer picture, your face at the end
of the cold leg is parallel to the upstream face phi
k or something. Well, I know that phi is upsized
DR. PAULSEN: That's right.
CHAIRMAN WALLIS: Which kind are they?
DR. PAULSEN: These are phis. All of
these are phis, I think.
CHAIRMAN WALLIS: So that phi i could be
parallel to phi k in some cases or parallel to phi
DR. PAULSEN: That's correct.
CHAIRMAN WALLIS: It's not necessarily
halfway between it -- somewhere, anywhere.
DR. PAULSEN: That's correct. Somewhere.
DR. KRESS: That was my question, yes.
CHAIRMAN WALLIS: It's an arbitrary angle.
DR. PAULSEN: But in actual practice,
these either will be the same angle or in general 90
CHAIRMAN WALLIS: I noticed that with the
bend. You had it the same at one end and different at
DR. PAULSEN: That's right.
CHAIRMAN WALLIS: Okay. So it's not
defined to be halfway between or anything special.
It's anywhere.
DR. PAULSEN: That's right.
CHAIRMAN WALLIS: Okay.
DR. PAULSEN: And one of the reasons that
I think historically that this staggered mesh was used
was because flows were needed to obtain the mass and
energy balance on these control volumes and, by
overlapping this flow equation, the flow was
calculated at that location. That was the rationale.
CHAIRMAN WALLIS: Your CFD was the same
thing in many cases.
DR. PAULSEN: Yes.
CHAIRMAN WALLIS: Then you have to do some
interpolation or upwinding or various different rules
which you go into in your effects.
DR. PAULSEN: Right. But in reality, when
you start looking at a model, we would never -- Well,
I can't say that. In plant models where people are
modeling reactor systems for Chapter 15 analyses, you
wouldn't see someone modeling an elbow this way. An
elbow would be lumped into a long section of piping.
DR. KRESS: Your L where you have one-half
L, where is it on this?
DR. PAULSEN: These are geometric
properties. This would be the flow length of this
control volume. So, basically, our momentum cell
covers half the length of the upstream volume and half
the length of the downstream volume.
DR. KRESS: So that does fix where you
place this momentum cell?
DR. PAULSEN: That's right.
CHAIRMAN WALLIS: So it's a little
problematic if the volumes are of changeable area.
DR. PAULSEN: That's right. In fact, if
you have something like a nozzle -- and this really
gets into comparing with experimental data. If you
are going to look at comparing a nozzle, then you are
going to have to put in some kind of representative
geometry that is representative of the section
spatially that you are --
CHAIRMAN WALLIS: Right. I think in the
reactor you try to choose these things so that they
are essentially constant area on both sides of the
DR. PAULSEN: That's correct.
DR. SCHROCK: You don't show the forces on
the diagram, the gravitational force, for example.
DR. PAULSEN: That's right. This is just
basically geometric information.
CHAIRMAN WALLIS: -- forces downwind?
DR. SCHROCK: That is what I'm commenting
on. How do these -- What's the justification of the
balance without specifying more clearly what these f's
DR. PAULSEN: Okay.
DR. SCHROCK: I mean, the f gravitational
passes through the center of mass.
DR. PAULSEN: That's correct.
DR. SCHROCK: How does it align with other
DR. PAULSEN: Okay.
DR. SCHROCK: And how does that become a
one-dimensional equation? They are in different
DR. PAULSEN: Okay. That we'll show on
the next slide. Maybe you've already looked at the
CHAIRMAN WALLIS: You need to get there,
but we need to understand this.
DR. SCHROCK: Oh, yes. That's what I'm
looking at, in fact, is the next slide.
DR. PAULSEN: Okay. And one thing worth
noting before we move to that equation is that, with
this momentum cell, we are going to have terms where
we have momentum that moves across this boundary and
the downstream boundary. So there will be velocities
at these two surfaces, and these velocities in a
straight piece of pipe will align with the normal
vector for the REA, but in general they can be at some
other velocity -- or other direction.
DR. KRESS: Is this intended for a single
fluid or two-phase fluid?
DR. PAULSEN: The momentum equation or
this flow equation looks at the mixture of fluid.
DR. KRESS: As if it were one fluid?
DR. PAULSEN: As if it were one fluid.
Then there's a separate equation that actually
calculates the velocity difference, if there happens
to be two-phase.
DR. ZUBER: It is a two-phase mixture, not
a single phase. It's a two-phase mixture.
DR. PAULSEN: It can be, yes. If you have
two-phase conditions, it will be a two-phase mixture.
DR. ZUBER: I think this was a question.
You have a two-phase mixture going out the densities,
and then you have another equation where you have the
difference in velocities.
DR. PAULSEN: That's correct. This is
basically the mixture equation.
DR. ZUBER: Mixture equation.
CHAIRMAN WALLIS: Now this pressure you
talk about -- what is that, this p-i -- or pk? What
is pk?
DR. PAULSEN: Well, maybe that will come
out on the next slide, but we do have pressures that
are defined for the mass and energy cells. So we will
have a pressure, a representative pressure, for our
upstream mass and energy cell and a different pressure
for our downstream mass and energy cell.
DR. ZUBER: And they act where?
DR. PAULSEN: They act where?
DR. ZUBER: Where does the pressure act?
DR. PAULSEN: Well, let me put the next
slide up, and I'll leave this one out for just a
(Slide change)
DR. PAULSEN: Because we have -- First of
all, this is just kind of introductory material to
show how the equations are closed. We have the mass
and energy cells where we actually do a mass and
energy balance. So we will have total mass in those
cells, and we will have total energy, and then based
on water properties, we have a pressure equation
estate where for our fixed control volume, given the
mass and energy in a node, we can calculate the
DR. KRESS: Now the energy that's in that
thing includes the energy that's due to friction --
You account for that energy in another equation that
adds in the friction.
DR. PAULSEN: That's right. In fact, we
have -- In RETRAN-3D we use an internal energy
equation, and in general the viscose terms, the
dissipation terms, are small compared to the others.
So we've currently neglected that viscose dissipation
in the energy equation, but it includes the convective
terms in and out of the volume, heat addition from
various either heat conductors or decay heat.
So we, in effect, do our internal energy
balance to come up with our internal energy and mass,
and then we have a pressure for that control volume.
That pressure, we assume -- Well, let's go on here for
just a minute.
DR. KRESS: Well, it's an equilibrium
DR. PAULSEN: For the three-equation model
it is equilibrium, and we have the pressure as a
function of total mass and total energy. When we go
to our five-equation model which has -- constrains
nonequilibrium, it's developed primarily for
applications in BWRs where you have subcooled boiling.
One phase is constrained at saturation, if
we have two-phase conditions, that being the vapor
phase. The liquid phase can then be subcooled or
superheated, and this pressure equation estate then
changes so that our pressure is a function of our
total mass, total energy, and then our vapor mass
that's in the volume.
So depending on the governing equations,
this pressure equation estate can change. If we have
noncondensables in the system, it can also change.
But for the simple case, our pressure is determined by
the mass and energy for the simple three-equation
DR. SCHROCK: So one-phase is constrained
to be equilibrium, and the other is not?
DR. PAULSEN: That's right.
DR. KRESS: So for noncompressible fluids
that are flowing adiabatically, your pressure becomes
a constant, a constant area?
DR. PAULSEN: It should effectively do
that. Right now we would actually do a separate mass
and energy balance for each node and, if the specific
volume and specific internal energy don't change, then
we should end up with the same pressure.
DR. KRESS: That's why I was asking what
you did with the friction term?
DR. PAULSEN: Okay.
DR. KRESS: Never mind. Go on.
DR. ZUBER: Can you explain those terms on
the righthand side?
DR. PAULSEN: I think you ought to explain
every one.
DR. PAULSEN: Yes. Okay. So this --
We've talked a little bit about the momentum cell
geometry where we are using a staggered mesh. What we
are hoping to get from this place we're starting is an
equation that will allow us to calculate flow at the
boundary between those mass and energy cells.
So we have our time rated change of
momentum for the momentum cell volume averaged over
the momentum cell volume, and then at this point we
have the, in effect, momentum that's being transferred
through the flow surfaces, the ends of --
CHAIRMAN WALLIS: So assuming they are
parallel to the -- the surface is perpendicular to the
velocity there?
DR. PAULSEN: The assumption that we have
here is that this area is the normal area. It's
CHAIRMAN WALLIS: Forces normal to the
DR. PAULSEN: Right.
CHAIRMAN WALLIS: Because in some earlier
derivation of this, you had some a-primes and all
DR. PAULSEN: Well, you pointed out there
were some errors in there, and we agreed that there
were some problems there.
So at this point in this --
CHAIRMAN WALLIS: -- flow rate out of j?
DR. PAULSEN: That's right. This ends up
being the velocity that's the normal component of the
velocity. This would then be the true velocity
crossing that surface, which may or may not be normal.
For most applications in RETRAN, it will be.
Then we have our forces. This is our wall
force that's parallel to the wall, our viscose
friction term. This is a term which --
DR. SCHROCK: I asked you about the forces
in the diagram. You're writing single forces here now
in this balance relationship. Where are these forces
in this diagram?
DR. PAULSEN: Okay. This force will be
parallel to the wall.
DR. SCHROCK: Well, the wall isn't
everywhere parallel.
DR. PAULSEN: At any point along the wall,
it will --
DR. SCHROCK: But this is an equation for
the control volume.
DR. PAULSEN: Yes. Basically --
DR. SCHROCK: So you don't get it by
taking the point differential equations and
integrating over the volumes. You have an ad hoc
equation, and you're trying to explain your way out of
the terms in the ad hoc equation.
Now what I'm asking you to do is show a
force diagram.
DR. PAULSEN: And basically --
DR. SCHROCK: You've shown a control
diagram. Now show a force diagram.
DR. PAULSEN: And basically, these wall
forces are, like you said, ad hoc models.
CHAIRMAN WALLIS: They are frictional
sheer stresses on the wall, but then it's the integral
of all that over the whole volume.
DR. PAULSEN: That's right. And basically
we'll apply something like a Moody model where we know
the length of the flow path. We will use that --
CHAIRMAN WALLIS: Moody doesn't tell you
that. If you have, say, a -- I'm going to give you
this 180 day bend in a little while. But if you look
at the sheer stresses on a 180 degree bend, you find
their resultant is in the direction which is right
angles to the end faces of the bend. It's completely
orthogonal to the pressure forces on the ends.
I mean, the pressure drop in the pipe is
not the same as a momentum balance for a pipe. The
Moody -- except for a straight pipe.
DR. PAULSEN: That's right.
CHAIRMAN WALLIS: Not the same thing. So
it is obscure, what your Fs are here.
DR. PAULSEN: And basically, what we are
trying to show here -- and I appreciate your point
about where those forces are applied. When we end up
doing our next operation, we are going to have some
kind of a scaler term for our friction.
CHAIRMAN WALLIS: Well, this isn't scaler
DR. PAULSEN: It's not scaler yet.
CHAIRMAN WALLIS: One-dimensional. It's
a misnomer? Okay. I'm sorry, because I thought
that's what you were talking about.
So this F tilde is the integral of all the
sheer stresses on the wall over the area?
DR. PAULSEN: Yes.
CHAIRMAN WALLIS: Whatever direction it
happens to be.
DR. PAULSEN: That's right. It's
uncalculable but general.
CHAIRMAN WALLIS: It's a big arrow, the
resultant force from all due to friction. Okay.
DR. PAULSEN: And these forces are normal
sheer forces that you will see when you have changes
in geometry were obstacles in your flow pattern
somewhere in here.
CHAIRMAN WALLIS: That's the same thing as
integral PBS, isn't it? What's different about it?
DR. PAULSEN: These may be from internal.
CHAIRMAN WALLIS: Same thing. Surfaces,
whatever the surface is, wiggles, squiggles.
DR. KRESS: Yes, that's what bothered me.
CHAIRMAN WALLIS: There's nothing
different about Floc. Right?
DR. PAULSEN: Floc is --
CHAIRMAN WALLIS: The sheer stress of
pressure. right?
DR. PAULSEN: It's another viscose loss
CHAIRMAN WALLIS: Well, I think that's
where there's a misleading thing. You see, now you're
going to an energy balance when this is a momentum
balance. Floc, it seems to me, is either incorporated
in FF or in integral feed --
DR. PAULSEN: Let me tell you where we are
trying to get to.
CHAIRMAN WALLIS: I know where you're
trying to get to.
DR. PAULSEN: All right. Now we're trying
to get to somewhere that looks like --
DR. ZUBER: The Bernoulli equation.
DR. PAULSEN: -- the Bernoulli equation.
CHAIRMAN WALLIS: You are trying to fudge
your way to Bernoulli's equation. Right? And we're
just trying to keep you honest.
DR. PAULSEN: That's fine.
CHAIRMAN WALLIS: But if you go back to
fundamentals, which you do -- I mean, you try to
establish the fundamentals, because you do a lot of
hairy math later on -- there's only the integral of
the sheer stress tensor with the surface and the
integral of the normal stress, if you want to break it
out from the sheer stress. That's all.
The only thing the surface does to the
flow is via sheer stress and normal stress integrated
over it. There are two forces, and really one, if you
put them together.
DR. PAULSEN: That's right.
CHAIRMAN WALLIS: So this -- I think what
you are doing -- What I find throughout all your
derivations, you sort of mix up these ideas of energy
losses with momentum, and Floc really doesn't have any
business in the momentum equation.
DR. PAULSEN: Okay.
CHAIRMAN WALLIS: That's what confused me.
DR. PAULSEN: So maybe we would be better
off taking this out and then letting it appear when we
actually apply mechanical energy --
CHAIRMAN WALLIS: Maybe if we work
together, we can come up with something.
DR. PAULSEN: Yes, I can see where you're
coming from. And some of this is historical.
CHAIRMAN WALLIS: Yes, I know, but some of
it is because people didn't understand properly in the
first place.
DR. PAULSEN: And some of what was
understood by the people that have gone by the wayside
and retired wasn't documented, and so we're trying to
reconstruct history and maybe leaving out some steps.
DR. ZUBER: Well, why did you have to
reconstruct? You can start from correct formulation
and forget about history. It's almost like going to
the Neanderthals to derive something.
DR. PAULSEN: Okay. The next term that we
have here is just from additional things that are very
complicated that we can't really model at a
fundamental level, things like pumps and turbines.
CHAIRMAN WALLIS: Electromagnetic forces?
DR. PAULSEN: That's right. We know that
there's going to be some additional forces, and then
we have the body force term, the gravity.
DR. SCHROCK: You put secondary flows in
that category? I mean, in this geometry you induce a
secondary flow.
DR. PAULSEN: That's right.
DR. SCHROCK; It's not specifically
thought about.
DR. PAULSEN: No. If the secondary flows
are an important part, then that's a limitation.
CHAIRMAN WALLIS: It wold appear in Ffw.
DR. SCHROCK: That is where that would
show up.
CHAIRMAN WALLIS: It captures it all.
DR. SCHROCK: Yes.
CHAIRMAN WALLIS: So this is momentum
equation. It has to be resolved in some direction.
DR. ZUBER: Wait, wait, wait. What is
this Stot for the pressure?
DR. PAULSEN: The what now?
CHAIRMAN WALLIS: Stot?
DR. ZUBER: That last term.
DR. PAULSEN: This one?
CHAIRMAN WALLIS: Stot.
DR. PAULSEN: This is for the total
surface area.
CHAIRMAN WALLIS: That's a new
development. You used to have it over the ends, and
now you are going to -- This is a completely new
development in your theory?
DR. PAULSEN: That's right.
DR. ZUBER: Well, how do you differentiate
the second term -- I mean the Ffw from this integral
DR. PAULSEN: These are viscose forces.
They've been separated out from the pressure terms.
DR. SHACK: The sheer and the normal you
can resolve. It's Floc and the integral of p that
become confusing.
CHAIRMAN WALLIS: The sheer you can't
resolve or amend.
DR. SHACK: Well, but you can get an
integral result. You can calculate it.
DR. KRESS: You can apply an integral
equation that's derived or based on the data or
derived some other way.
DR. ZUBER: What you are really deriving
are new dynamics.
CHAIRMAN WALLIS: Well, that's
interesting. Let's go ahead.
DR. KRESS: I'm still confused about that
last term.
DR. ZUBER: That's the point.
DR. KRESS: Because what I view that as is
the effect on the momentum in changing direction.
DR. PAULSEN: That's what that is.
DR. KRESS: And it seems to me like in a
one-dimensional equation, you don't have that, because
your direction is along the stream line, and that's
what confused me.
DR. PAULSEN: Do you have some insight,
DR. PORSCHING: Well, first of all, that
equation is -- It's a misnomer. At this point it's a
three-dimensional lumped equation that you've gotten
by taking a --
CHAIRMAN WALLIS: Sir, could you get to
the microphone and identify yourself for the record?
DR. PORSCHING: Sure. I am sorry. I am
Tom Porsching. I'm an Americus Professor of
Mathematics from the University of Pittsburgh.
Just by way of insertion here,
introduction, I was asked a year and a half or so ago
by EPRI to examine the equations of motion and fluid
dynamics and see if there was a rational way to derive
a scalar balance or a scalar relationship of the type
that is used, as it turns out, in the RETRAN equation.
So that's my role. That's a role I've played in this,
and just recently received from Mark four or five days
ago copies of these slides.
So I haven't had a real chance to digest
them, but I notice that the equation that he is
discussing right now is an evolved version of what you
could get by taking the Navier-Stokes equations or, if
you want to lump the viscose terms in a term such as
that Ffw term, the Euler equations, and integrating
them over a control volume.
The term that you see at the very end
there, that pndS term over Stot, can be derived, can
result from the first relationship that I mentioned by
viewing the pressure gradient term that shows up in
the Euler equations as really a tensor, a divergence
of a tensor where the tensor is, in fact, the identity
That allows you, after you've done the
integration over the volume, to use the divergence
theorem to convert that to a pressure -- to an
integral over a surface.
CHAIRMAN WALLIS: There's no need to do
that. This is simply an overall force balance, and
it's straightforward.
DR. PORSCHING: Well, maybe. That's my
view. That's the way I view it.
CHAIRMAN WALLIS: You would need no
Navier-Stokes equations to do this. The thing which
is confusing to us, I think, is when we first saw
this, Bert, Stuart and Lightfoot was involved, and
Bert, Stuart and Lightfoot make it quite clear that
they've got pressures over the end areas, and they've
got a pressure and an S. That's what you wrote in
your first documentation that we reviewed.
Now we've got something different.
DR. ZUBER: Well, but they have the same
result. They have the same result.
CHAIRMAN WALLIS: Well, this, I think, is
a different story than we saw, because you're invoking
Bert Stuart and Lightfoot. You're not invoking
something that everybody believes. You're invoking
something new.
DR. ZUBER: But more than that. They are
developing something completely new, because last time
they obtained relations which are completely different
from the Bert, Stuart and Lightfoot.
CHAIRMAN WALLIS: That's what is so
DR. ZUBER: It's interesting, wrong,
amusing or sad.
CHAIRMAN WALLIS: Maybe it's all of the
DR. KRESS: Well, the only place I would
need that last term, it seems to me like, is if I'm
trying to determine the response of the pipe to the
flow and, you know, trying to get the support forces.
When I'm looking at the flow itself, I don't need that
DR. PAULSEN: That's right.
CHAIRMAN WALLIS: You don't need that
term? The pressure drop between the ends that
accelerates the flow.
DR. KRESS: Oh, I thought that was in one
of the other terms.
DR. PAULSEN: It's included in this term.
It's part of this term.
DR. KRESS: I do need that term then, if
that's what it is.
CHAIRMAN WALLIS: But we'll buy this as
long as we understand what we're looking at. But this
is so obvious, as long as we are clear about what we
mean, I think we can go on.
DR. PAULSEN: I think the part of the term
that you were talking about is going to be the
integral over this surface area.
CHAIRMAN WALLIS: That's Bert, Stuart and
Lightfoot have.
DR. SCHROCK: I'm afraid anybody reading
the record of this meeting would be very confused by
the composite of the statements that have just been
You admitted when I suggested that it's an
ad hoc equation that, yes, indeed it is an ad hoc
equation. Dr. Porsching stood up and told us it's not
a one-dimensional equation; it's a three-dimensional
integral representation of a three-dimensional
situation; and in fact, it is derivable from first
principles. But if that is the case, then it's
incumbent on you to show us how that happens. How is
it derived from first principles?
So I think the sequence of things that I
heard in the last five minutes are absolutely self-
DR. PAULSEN: You want this equation
CHAIRMAN WALLIS: This is just momentum
and force balance. I think we can move on, as long as
we are clear what you mean. Stot is the integral over
the whole surface, which is the ends and the walls of
the pipe.
DR. PAULSEN: That's correct, and --
CHAIRMAN WALLIS: And the sheer stress --
resultant of the sheer stresses is f-squiggle, and we
can forget about Floc and Fp. Right? So we've got
sheer stresses, pressure forces, gravity, momentum
fluxes, and they are balanced or not balanced. If
they are not balanced, there's got to be an
acceleration by Newton. We're not going to question
DR. PAULSEN: Right.
CHAIRMAN WALLIS: So what's the problem?
Can we go on?
DR. PAULSEN: Sure.
CHAIRMAN WALLIS: But now this is going to
be resolved to make it one-dimensional?
DR. PAULSEN: I hope so.
CHAIRMAN WALLIS: Okay, let's resolve it.
You're going to resolve every term in one direction?
Are you going to resolve the momentum fluxes in that
DR. PAULSEN: Yes. And I'm going to have
to apologize here. I think your hard copies are
correct, but when I printed these slides, the Greek
characters disappeared. So your slides are going to be
correct --
CHAIRMAN WALLIS: So this says the change
of the momentum in the I directional, the p to
whatever you call it -- end direction, whatever. It's
the psi direction, right?
DR. PAULSEN: That's right.
CHAIRMAN WALLIS: Is equal to the change
in momentum flux in that psi direction. I notice here
Ak+1 instead of Ak. So Ak, I think, is different from
Ak. You're going to make it the same for some reason?
DR. PAULSEN: At this point --
CHAIRMAN WALLIS: No reason it has to be
the same.
DR. PAULSEN: At this point, we are doing
it for a uniform area.
CHAIRMAN WALLIS: No, you're not. You've
got Ak+1 that's different from --
DR. PAULSEN: That's right. That's simply
to show that where it came from is from the downstream
CHAIRMAN WALLIS: Well, I think you're
hiding from the fact that if you put in an Ak+1, you
can't make it go away, you know. That's what you said
before. Dr. Porsching's paper has an A1, n A2 and A0,
three different areas. You only have one. And if you
use his equation, you get a different answer than you
get by generalizing your equation.
So we have a problem with that. But
anyway, this is resolved in the direction m, right?
DR. PAULSEN: That's right.
CHAIRMAN WALLIS: And that Fw is some
resolution of the forces from the wall sheer stresses
in that direction.
DR. PAULSEN: That's right, or along the
DR. PAULSEN: And the Tg is resolved as
DR. PAULSEN: The what?
CHAIRMAN WALLIS: The g is resolved as
well? Should be, right?
DR. PAULSEN: That's right.
DR. ZUBER: What is that delta-p, sub-p?
DR. PAULSEN: Was it this term, Dr. Zuber?
It's the pump. Yes, it's just a source term that gets
added for volumes that have pumps.
So here we have the momentum coming in,
and this will be the momentum going out the other end.
What we have effectively done at this point is dot
this equation then with the junction normal vector to
make this a scalar equation.
DR. KRESS: But you don't know that angle
in general.
DR. PAULSEN: That angle is input.
CHAIRMAN WALLIS: You're free to chose it.
DR. PAULSEN: The user would input that
angle in his input description.
DR. KRESS: Well, if you are going to then
take the -- invoke the divergence theory, then doesn't
that fix that angle for you?
CHAIRMAN WALLIS: You're getting too
complicated for me, Tom.
DR. KRESS: Well, the divergence theory
fixes the point at which the mean value -- I mean the
mean value theory. It fixes -- When you invoke the
mean value theory, that fixes that point and that
CHAIRMAN WALLIS: But you can resolve in
any direction. Now this next statement is really
weird: "Pressure assumed uniform." How can you have
a pressure difference if it's assumed uniform?
DR. PAULSEN: Within each of the control
volumes --
CHAIRMAN WALLIS: Now you get into a sort
of a logical --
DR. PAULSEN: This upstream side and the
downstream side, we're assuming that we have one
pressure and that it's uniform in --
DR. ZUBER: Well, what difference is that
CHAIRMAN WALLIS: So you're assuming
something incredibly unphysical. Right? In order to
get on with the problem?
DR. PAULSEN: Well, we really don't know
the pressure distribution --
DR. ZUBER: Hold on. Hold on. What is --
You mean where you have the arrow in the middle?
DR. PAULSEN: This arrow?
DR. ZUBER: Yes. You have a pressure
DR. PAULSEN: That's right.
CHAIRMAN WALLIS: You got into something
absurd here. Your flow goes around the bend. The
bend is like a turbine bucket, and the pressure on the
outside of the bend is different from the pressure on
the inside, and that's why it turns. If you are going
to assume it's uniform pressure, it's got to go
So the whole idea is contrary to physics.
DR. ZUBER: Look, Graham, you see that
middle point, middle dotted line. It has a pressure
CHAIRMAN WALLIS: That's right. I guess
he has that.
DR. ZUBER: He has. I mean, this is like
you have a supersonic flow.
CHAIRMAN WALLIS: But also he assumes the
pressure on the inside of the bend and the outside are
the same. So there's no force to turn the flow to the
right. There's nothing that stops the flow from going
straight up in the air there.
DR. PAULSEN: Except we know that the flow
has to go through our junction, and we've defined
those angles.
DR. ZUBER: That is unbelievable.
DR. PAULSEN: It's the pressure balance
that -- pressure drop balance that really drives the
CHAIRMAN WALLIS: But you see, you -- But
you're using a momentum balance. So you've got to
keep track of forces and directions.
DR. ZUBER: You know, if you really follow
fluid dynamics, if you have a pressure discontinuity
across an interface normal to the flow -- I usually
call it shock or something -- then you have a velocity
DR. PAULSEN: yes.
DR. ZUBER: And this is what I get.
CHAIRMAN WALLIS: -- because in the
original documentation you had a pressure on the
bottom area and the top area which were the Pk and the
Pk+1. All the books do it.
DR. PAULSEN: And if you do the mean value
theorem or apply the mean value theorem, you can get
the pressure at that --
CHAIRMAN WALLIS: You take everything as
mean. You lose some of the physics, because the only
reason it goes around the bend is because the pressure
is bigger on one side than the other, and taking the
mean pressure doesn't capture that at all.
DR. PAULSEN: We know that we can make
fluid flow by using the Bernoulli equation where we're
just looking at --
CHAIRMAN WALLIS: That's not what we are
talking about here.
DR. PAULSEN: That's where we're trying to
get to.
CHAIRMAN WALLIS: Why don't you just use
it then? I mean, giving a bogus derivation of
Bernoulli equation is worst than just invoking it, if
it's bogus. Now maybe it's good. I don't know yet.
We are obviously having some difficulty with it. So
DR. PAULSEN: One of the differences is
what we have for our time derivative, but the steady
state form of the equation looks a lot like the --
CHAIRMAN WALLIS: So your equation -- I'm
going to give you this right angle bend there.
There's a 180 degree bend, and you can tell me how
your forces work for that, if you like. Would you
want to do that, because I claim that the momentum
fluxes are in one direction, the net wall sheer
stresses in the other. What's the momentum? What's
the direction of momentum? It's in this direction.
DR. PAULSEN: In RETRAN the momentum will
be in the direction of whatever you define the
junction angle to be.
CHAIRMAN WALLIS: But you're not looking
at pressure forces on the ends anymore. It doesn't
matter what the orientation at the ends is? It's
irrelevant? Seems to me, the orientation of the ends
in terms of pressure is irrelevant in your model.
DR. PAULSEN: The orientation of the ends
has to be normal -- or perpendicular to the walls of
the pipe.
CHAIRMAN WALLIS: Okay. How does that --
Just put another coil in the pipe. Doesn't make any
difference to your equation. A little bit more curl
or something doesn't make any difference. Yet the
pressure is acting on a different surface.
DR. PAULSEN: We will definitely get
different flows in a situation like that if you model
the actual flow lengths and then the losses that you
would normally get through a form loss type term.
CHAIRMAN WALLIS: Well, would it be
appropriate for you to take my 180 degree pipe and
show us the Fs and the forces and the momentum fluxes
and so on? Would it be appropriate? He's got
something you can draw with here.
DR. PAULSEN: Well, I think what we ought
to do is maybe look at the equation we end up with.
CHAIRMAN WALLIS: But I'm just saying that
your model should apply to any bend, and you're going
to resolve it in some direction. I think, if you look
at 180 degree pipe, you'll find that the momentum
fluxes and the pressures that are orthoganol to the
friction forces in the momentum change.
Since yours is general, it ought to apply
to that, oughtn't it? I'm just trying to clarify it.
if it's general, you've got that thing at the top
there. Just put the arrows for the momentum fluxes in
DR. PAULSEN: Are these control volumes or
is this a momentum cell?
CHAIRMAN WALLIS: It's a momentum cell.
Put in the momentum fluxes as you would have them
going in the ends there. Maybe you'll be right.
Maybe we'll be convinced here. I don't know.
We need one that works. Get the one that
works. This is government.
DR. PAULSEN: Let's see if this works.
CHAIRMAN WALLIS: You've got momentum
fluxes coming in there. Right? And then going out
there. Where is the -- what's the average momentum?
It's an incompressible flow, and let's say A1=A2.
What's the average momentum in the pipe? What's its
DR. PAULSEN: Is this -- Okay, so this is
our momentum cell?
CHAIRMAN WALLIS: Yes, the whole thing is
a momentum cell.
DR. PAULSEN: At some point we have to
assign an angle for this thing.
CHAIRMAN WALLIS: Well, let's do that
later, because we would solve for the overall thing.
What is the direction of the overall momentum in the
pipe there? It's horizontal, isn't it? Okay, so it's
DR. PAULSEN: Yes.
CHAIRMAN WALLIS: So it's horizontal. Now
what's the direction of the net sheer stresses on the
wall? By symmetry, it's also horizontal.
DR. PAULSEN: Right.
CHAIRMAN WALLIS: So your FFW is
horizontal. How can that, in the momentum balance,
balance the pressures at the end which are vertical,
which you claimed it does. You said that Moody --
pressure drop in the pipe is balanced by the sheer
stress, you said, in the trivial case. Yet they are
in opposite direction. How does it happen?
DR. PAULSEN: Basically, what we have done
is made this one dimensional so that, in effect, we
have a straight pipe.
CHAIRMAN WALLIS: You straightened it out.
DR. PAULSEN: That's right.
DR. KRESS: Or another way to say it is
you've resolved all these things along the stream
DR. PAULSEN: That's right.
CHAIRMAN WALLIS: You had to resolve each
little bit around the whole thing, but when you
resolve the whole thing, you've got absurdities. if
you take that loop at the bottom there, you've got
even more absurdities. You've got that there's no
change in momentum flux, and the pressures are all --
There's nothing to accelerate the flow, but we know it
isn't true.
DR. PAULSEN: That's right.
CHAIRMAN WALLIS: So you haven't done the
momentum balance, it seems to me. You've done an
integration of little pieces of momentum or something
or you've done a Bernoulli type flow, which is also
historic with RELAP and all that.
DR. PAULSEN: That's right.
CHAIRMAN WALLIS: But you've really
confused us by this kind of hybrid, which is neither
fish nor fowl. It seems to be a mixture of the two.
DR. ZUBER: Graham, it's violating
everything we have learned in fluid dynamics.
CHAIRMAN WALLIS: Well, not necessarily.
DR. ZUBER: Graham, look, they develop a
discontinuity of pressure -- how can this be? If you
have a discontinuity in pressure, you must have then
a discontinuity in velocities.
DR. PAULSEN: If you look at any code, and
they can assume node-wise pressure --
DR. ZUBER: Forget code. Don't try
everybody is cheating, therefore I can cheat also.
That's another argument here.
DR. PAULSEN: No, that's something that
comes with difference equations.
DR. ZUBER: No, no, no.
DR. KRESS: It's a finite difference
DR. ZUBER: No. They assume the same
pressure, you see, at the entrance, and then a
different at the exit. The discontinuity occurs in
the middle, and you have a pressure jump in the
middle. You have to have that independence in
velocity. Those are called the jump conditions.
CHAIRMAN WALLIS: You have a real problem
mathematically to relate the integral of pressure
around a surface to the integral of pressures
throughout a volume. There's a real -- It's not as if
you've got a gradient of pressure or anything.
The volume integral of pressure is a
different animal from the surface integral of
pressure, and yet you are saying your volume integral
of pressure you used for your code in the thermal-
dynamics can somehow be borrowed and immediately
transported into some surface integral of pressure,
which is the kind of thing that the Porsching
influence has led you to, because the other one didn't
work out very well. But you just have another
mathematical problem then, I think, when you do that.
It may be that, if you really acknowledge
these and really say, well, we made that assumption
because that's the only thing we knew how to do, then
Novak can get as blue in the face as he likes, but at
least you've said that's what we've done.
DR. ZUBER: Oh, no. I agree. If they
would say, Graham, this was wrong; the effect of this
error is such and such, and it took us many
calculations to show that this is not the important
The problem I have here, they don't want
to acknowledge candidly the wrong formulation, the
wrong results, and I don't agree that they have done
sufficient sensitivity calculations.
CHAIRMAN WALLIS: The difficulty, Novak,
is that the code has some formulation in it, and all
this story has developed and ways to try to justify
what someone has put in the code for reasons which the
present users may not even believe.
DR. ZUBER: Well, the trouble is then you
have to say to the public we have really codes we have
to believe in that you flunk a junior student on that.
CHAIRMAN WALLIS: Well, let's see now. I
don't know. Do you see the problem I have with the
180 degree bend?
DR. PAULSEN: Yes.
CHAIRMAN WALLIS: It's that the forces are
in different directions, and I don't know how you
resolve them in any direction to get your equation.
That's all. Maybe you can think about that after
lunch or something.
DR. SCHROCK: It seems to me that that
problem is present for any degree of bend, isn't it?
CHAIRMAN WALLIS: Anything except a spring
DR. SCHROCK: The fact that the forces
that are described in these what I'm calling ad hoc
equations are simply not in the same direction.
Therefore, it's a little difficult to understand how
they can represent a force balance.
DR. PAULSEN: And in fact, we may be
better off just saying that we are doing it for a
straight piece of pipe and elbows are handled --
CHAIRMAN WALLIS: You could use what I
call the two-pipe plus junction model, which is what
you almost do.
DR. PAULSEN: That's what we've attempted,
CHAIRMAN WALLIS: But you haven't, because
you've tried to then resolve it. You've got the
vector thing mixed up. Two-part plus junction model
works if the pipes are in any -- Here's a pipe.
Here's a junction. Here's not a pipe. Doesn't matter
where it is, as long as you've got that, but you've
confused everything by calling it a vector equation
and resolving it.
DR. PAULSEN: I think I see where you are
coming from now.
CHAIRMAN WALLIS: It's taken a long time.
DR. PAULSEN: Well, some of it is, I think
-- Well, yes.
CHAIRMAN WALLIS: California is a long way
from New England. I know that.
DR. PAULSEN: Well, some of it may have
helped if we could have worked with --
CHAIRMAN WALLIS: Well, you've got a
Californian here, too.
DR. PAULSEN: Things have been kind of
indirect, I guess.
DR. ZUBER: Two years ago, I mean, we
discussed some of these things.
MR. BOEHNERT: What's this got to do with
CHAIRMAN WALLIS: Well, that's why it
takes a long time. I mean time and distance. No, I
don't think we want anymore comparisons like that.
DR. PAULSEN: The next step in the
development is based on the assumption that we have
spatially uniform pressures.
CHAIRMAN WALLIS: Yes, but then you
shouldn't be doing this hairy -- You've cast this
hairy surface integral when you've already assumed the
problem away by having it uniform is really strange.
DR. ZUBER: Well, it's wrong. It's not
strange. It's wrong, because if you do that and you
have -- discontinuity, and that's not physics.
DR. PAULSEN: Well, with the finite
difference codes there is always pressure differences
in each node.
CHAIRMAN WALLIS: What you are doing --
Okay, you're going -- you're doing this whole
integral, and essentially you are getting some sort of
a surface average pressure over the entire surface.
DR. PAULSEN: Yes.
CHAIRMAN WALLIS: That's essentially what
you are doing, and I'm saying mathematically that is
not the same thing as some volume integral of
pressure, which is what you use in your thermal-
dynamics. So there's a sleight of hand at a different
level going on here.
DR. PAULSEN: In the thermal-dynamics we
only know mass and energy on a global basis.
CHAIRMAN WALLIS: But you know a thermal-
dynamic pressure.
DR. PAULSEN: Pardon me?
CHAIRMAN WALLIS: You know a thermal-
dynamic pressure.
DR. PAULSEN: That's right.
CHAIRMAN WALLIS: So when a flow goes
around a bend, it goes around because the pressure on
one side is greater than the pressure on the other,
and an average pressure of thermal-dynamics -- we will
never reflect that. Never.
DR. PAULSEN: Okay.
DR. ZUBER: Okay. Do you agree with the
statement that Ralph made that this derivation is
DR. PAULSEN: Well, I think we can show by
comparison with simple experiments, simple thought
problems that the resulting equations reproduce
reality. We can actually do comparisons of
expansions, contractions, T's where we actually
reproduce reality.
DR. SHACK : Well, and you can't apply
your equations to a straight pipe.
DR. PAULSEN: That's right.
CHAIRMAN WALLIS: And I think what happens
in RELAP, though it's very difficult to get them to
say that, but every time that people come up with a --
They've come up with the RELAP documentation. All
they do is analyze a straight pipe.
Then they say, well, here's a straight
pipe, a straight pipe, a straight pipe. A bend turns
out to be a sequence of straight pipes, but they never
tell you that up front, we're going to model
everything as a straight pipe.
DR. ZUBER: Graham, it is not even that.
They cannot even apply to a straight pipe, and we
shall come to it, because what you have in this
handout, it doesn't apply to a straight pipe. It goes
contrary to whatever we have in Bert, Stuart &
Lightfoot. Your results here --
CHAIRMAN WALLIS: Well, let's do this
integral over the areas. I guess we're going to have
to do it. What you essentially come down to is this
Pk, Pk+1 times Ak.
DR. PAULSEN: That's right.
CHAIRMAN WALLIS: And the Pk is really a
definition of a pressure over an entire area composed
of the pipe and the end, which does not include the
junction. It's that whole surface of whatever it is
that wiggles and squiggles and everything which does
not include the junction.
DR. PAULSEN: Yes.
CHAIRMAN WALLIS: Because it's a very
funny pressure. It's some sort of average pressure
over all the surface there.
DR. ZUBER: But Ak is a surface normal to
the --
CHAIRMAN WALLIS: Well, in Porsching A is
A0. It's different, and that's what I was pointing
out to you earlier, that if you take the Porsching
equation with an A0 there, then your equation -- it
looks different. Your pressure difference multiplies
an A0. When you divide it through by it, it's not the
same as Ak.
DR. PAULSEN: Right.
CHAIRMAN WALLIS: So his equation is not
the same as yours, even if you believe this. But this
Pk and Pk+1 are not the same as the Pk that are in Bert,
Stuart and Lightfoot, which are on the ends of the
pipe. They are an average over the whole wall and the
end, coming all the way back to this.
DR. ZUBER: See, but Graham, it is
integrated over one normal area, k, which assumes that
Ak and Ak+1 are equal.
CHAIRMAN WALLIS: Well, it's the junction
area, really. In Porsching's paper it's an A0, which
is like a middle of the pipe, not the ends at all.
DR. ZUBER: But you don't know what that
A0 is.
DR. PAULSEN: And then this uniform pipe--
CHAIRMAN WALLIS: Doesn't have to be
uniform. All that needs to be is the area of the
junction that cuts the middle of your picture.
Doesn't have to be uniform pipe for the Porsching
approach. But then you can't divide through by Ak and
get your answer, because A0 isn't the same as Ak.
So even if you believe Porsching and even
if you are willing to say Pk, Pk+1 equals the same
pressures as the sum dynamic pressures, you still have
a problem with the areas being --
DR. SHACK: Well, no. Porsching is
rigorous. It's just that you don't know where the P
is evaluated. I mean, it's a mean value over some
portion of the surface. There exists a point at which
that statement is true.
DR. ZUBER: No, it is not, because here's
the equation, and here are uses, and we can't bring it
CHAIRMAN WALLIS: There's another
Porsching paper, though, a more recent one, which
seems to realize that there's a problem here, and it
sort of works for a straight pipe and it works for a
pipe with a slight bend in it, but you have a problem
when you have big bends because of the surface
So there's a learning process going on
here which is fascinating to watch. It's a difficult
problem. I think what you have to do is face up to
I wrote a tutorial on the momentum
equation. I guess you haven't seen it. Here's the
momentum equation. Here's why it's very difficult to
use and, therefore, you have to make assumptions and
so on, and these are the kind of things people have
I think that would be a much better
presentation than this sort of attempt to do something
rigorous that gets people a little hot under the
collar, because they say, how can you do that?
DR. PAULSEN: But in general, where we are
trying to go is to something that looks like the
Bernoulli equation.
DR. ZUBER: Well, then you could have
started with Bernoulli. Let me ask you something.
Are you familiar with a book by Ginsberg, this book?
DR. PAULSEN: No.
DR. ZUBER: It was translated by NASA 30
years ago, and he deals with this problem. It's the
best book I saw on this approach, and I would strongly
advise you, go and read it, and also, too, NRC.
CHAIRMAN WALLIS: So if this were
Porsching, you would have Ak and Ak+1 instead of Ak in
there, and you would have -- They are still resolved
in some direction psi?
DR. PAULSEN: Yes.
CHAIRMAN WALLIS: And then you would have
A0 in there, and A0 depends on what psi is. You
change psi, you change A0. Is that right?
DR. PAULSEN: Yes. They go together. So
this ends up being our scalar equation where this is
our time rated change of momentum in our momentum
Then we use some definitions, a geometry
term. This ends up being the volume of the momentum
cell where it's just based on half the length of the
upstream and the downstream volumes.
CHAIRMAN WALLIS: But those were in
different directions.
DR. PAULSEN: What's that?
CHAIRMAN WALLIS: Those are in different
directions. When the cell -- Now it's a bend. Those
L's are in different directions. So don't the momenta
have to be resolved in some way? And you seem to be
resolving the momentum flux and not resolving anything
else. The momentum has to be resolved in the two
pipes. You got two pipes here, right?
DR. PAULSEN: We have two pipes.
CHAIRMAN WALLIS: And you got to resolve
that momentum. Have you?
DR. PAULSEN: Well, we think we have.
CHAIRMAN WALLIS: See, you can't do it
that way in general.
DR. PAULSEN: Well, let me take a look at
this next step then.
(Slide change)
DR. PAULSEN: Basically, what we've done
then is defined that time rated change of momentum to
be basically a flow term. There was a geometry term
factored out.
CHAIRMAN WALLIS: It's like a pipe.
DR. PAULSEN: That's right.
CHAIRMAN WALLIS: You've got two pipes is
what you've really got.
DR. PAULSEN: It is two pipes. That's
DR. ZUBER: But the same area.
DR. PAULSEN: Two pipes here with the same
CHAIRMAN WALLIS: They don't have to have
the same.
DR. PAULSEN: That's right.
CHAIRMAN WALLIS: You get an L over A or
something, whatever it is.
DR. PAULSEN: That's right.
DR. ZUBER: But then they would not get
the pressure.
CHAIRMAN WALLIS: See, and when you -- The
thing I find difficult is what am I looking at? The
RETRAN flow equation, when it appears later, has an
Ak2 and it has an Ak12 in there.
DR. PAULSEN: That's right.
CHAIRMAN WALLIS: Which is simply written
down. Now if you are going to derive it, you better
have a pipe which has a different area in and out
rather than just generalizing something without any
DR. PAULSEN: Okay. That's the next step.
CHAIRMAN WALLIS: If you look at
Porsching, it shouldn't be Ak+12 anyway. It should be
Ak+1A0, even if you believe Porsching. So you can't
just say it's a pipe of constant area and then write
down an equation with no explanation for a pipe with
bearing area.
DR. PAULSEN: Well, the next step is to
try and show you how we have come up with the equation
for --
CHAIRMAN WALLIS: The way you've done that
is with two pipes which are straight.
DR. PAULSEN: Two straight pipes and
connect them with the mechanical energy equation.
CHAIRMAN WALLIS: So you are essentially
saying we're going to take any old bend of any shape--
DR. PAULSEN: That's right.
CHAIRMAN WALLIS: -- and model it as two
straight pipes.
DR. PAULSEN: That's right.
CHAIRMAN WALLIS: Or any shape of any kind
DR. PAULSEN: That's right.
CHAIRMAN WALLIS: Like a cobra that
swallowed a pig, and it's got a big bulge in the
middle --
DR. PAULSEN: That's right.
CHAIRMAN WALLIS: But he still treats it
as a straight pipe.
DR. PAULSEN: That's correct.
CHAIRMAN WALLIS: So I think that's what
you've done.
DR. PAULSEN: Yes, it is.
CHAIRMAN WALLIS: All this other stuff is
very misleading.
DR. SCHROCK: Was this last equation one
that you can put a number on in the EPRI report,
RETRAN report?
DR. PAULSEN: I am not sure if I've got
the latest copy that was mailed.
DR. ZUBER: I think it's 236 or something
like that.
DR. PAULSEN: Must be about 2.3.
CHAIRMAN WALLIS: 2.3.10 is in revision 5.
DR. PAULSEN: It's more like 10. Twenty-
six is the one after we get some of the area change.
DR. ZUBER: You are right.
CHAIRMAN WALLIS: It's 2.3.10 in 5.
There's 5(b) where it's somewhere else. In version
5(b) it's 2.3.10.
You see, you write down the average
momentum of cell is All k over 2 plus All k plus 1
over 2 times W. Well, that's not resolved in any
direction. Okay.
(Slide change)
DR. PAULSEN: So what we have come up with
then is a scalar equation of motion.
CHAIRMAN WALLIS: Wall forces disappear.
DR. PAULSEN: We have the pressure force
in effect and the --
CHAIRMAN WALLIS: You see, the problem
with wall forces disappears -- we know that the sort
of token bucket turns the flow because of a wall
force. You can't make it disappear. Even someone who
knows no math at all will tell you the force on the
wall becomes as a momentum balance. Don't need to
know any math at all.
DR. KRESS: But momentum is made up of
direction and mass times velocity. So wall forces
generally only affect the direction. If you're
talking about integrating along a stream line, I think
those wall forces just sort of change the direction,
and you don't really need them.
CHAIRMAN WALLIS: As long as you don't
have things like changes of area.
DR. KRESS: That's right.
DR. PAULSEN: And that's basically what's
being done, is integrating along the stream line.
CHAIRMAN WALLIS: See, you don't use that
rationale at all.
DR. KRESS: If you had started with that
rationale, I think we would have a lot less trouble.
DR. PAULSEN: Okay.
CHAIRMAN WALLIS: See, I don't know what
we are doing here. Are we helping you to devise an
acceptable rationale?
DR. PAULSEN: Well, I think we're coming
to understand maybe where your problems are.
CHAIRMAN WALLIS: I thought they were
obvious two years ago, but nobody listened.
DR. PAULSEN: I think we've covered them
in a little more depth now, and I think --
CHAIRMAN WALLIS: You went back to the
same sort of thing. Except for the Porsching
rationale, you really have the same.
DR. ZUBER: Since you see where we are
coming from, you know where you are going to? I'm
quite serious. I mean, you see our problems,
basically dynamics. Hopefully, you said you have that
equation you agreed to. Now where are you going?
DR. PAULSEN: Well, that's not my
decision. That's up to EPRI, but I think we can relay
word that we now understand your concerns, and maybe
be able to come up with a resolution.
DR. ZUBER: See, what they said about this
-- This was obvious two years ago. For whatever
reason, arrogance or ignorance, you never addressed
it, and now it's facing us straight, and you are
putting kind of a burden on NRR. We are becoming
critical, and you are just writing --
DR. PAULSEN: Well --
CHAIRMAN WALLIS: That's very unfortunate.
DR. ZUBER: It is really sad and very
inefficient way of using money and time.
DR. PAULSEN: Well, we tried to work with
the staff, and I think their intention was to try and
relay your concerns.
DR. ZUBER: Well, you were here at the
DR. PAULSEN: And our intentions were to
try and resolve those concerns, and somehow between
the two of us we didn't.
CHAIRMAN WALLIS: I was wondering if we
could go to the next one before lunch, just to get it
out of the way.
DR. SHACK: I still have one problem, even
with this equation. That is your momentum term is
really a V.NC, and you've lost the V.NC and replaced
it with the V-Normal.
DR. PAULSEN: This equation?
DR. SHACK: Yes.
DR. PAULSEN: Okay.
DR. SHACK: If you just go back a step, go
back to 11 or go back to 9 --
DR. PAULSEN: Is it on 11 or is it 9?
DR. SHACK: Well, try 9, because that
shows the dot product. Okay, now how does V-dot-n-phi
end up as V-normal? So it's V-dot-n-phi on this graph
DR. PAULSEN: On this graph?
DR. SHACK: No, no, on the lefthand side
of the equation, V-dot-n-phi. Now go to 11, and it's
just V. You've lost the dot product.
CHAIRMAN WALLIS: Right. He hasn't
resolved the momentum.
DR. SHACK: You haven't resolved the
CHAIRMAN WALLIS: No, he hasn't.
DR. SHACK: You better stick to a straight
CHAIRMAN WALLIS: That's what I was
saying. With two straight pipes, he's got his L1 and
L2. He doesn't resolve them in any way.
Now there is a problem. I guess we can't
leave it alone. This W that you have here resolved --
W is a scalar.
DR. SCHROCK: That's right.
CHAIRMAN WALLIS: So when you start
resolving W, as we'll see if we get to it at the flow
around the bend, you get into real problems. I think
we totally disagree with your momentum flux terms and
even the simple thing of your example of flow around
the bend.
DR. SHACK: But his W-phi is just V
multiplied by Row A. Then he divides by Row A.
CHAIRMAN WALLIS: Yes, but you see, when
you look at his flow around the bend, the momentum
term in there doesn't fit any of the patterns. It's
something else. So maybe you have to have lunch
first, but we are going to get to that, I think, too.
So there's a danger in saying W resolved
in the direction, because W is a scalar quantity. You
have to be very careful about it. I think it's
possible to do it, but you have to be damn careful
that you know what you are doing -- what you mean by
it, because it's not a physical quantity. It's
something you've artificially contrived.
DR. PAULSEN: One of the things that we
have corrected was that there was an error in the
momentum flux term that Dr. Wallis pointed out --
CHAIRMAN WALLIS: This was the cosine of
DR. PAULSEN: There was a missing cosine
or an extra cosine. There was an extra cosine, and
that has been corrected.
DR. SHACK; So you got rid of one, and you
lost another one.
DR. PAULSEN: And as we have mentioned,
we've had Dr. Porsching review this, and --
CHAIRMAN WALLIS: That's very interesting.
I found that interesting to read. But of course,
there is a long history of fluid mechanics and
attempts to deal with this sort of problem. So either
there's a revolutionary insight or -- May be, but
strange it comes out of the blue.
I think what the difficulty with the first
Porsching paper was that the kinds of averaging to get
the pressures somehow got confused. Pk, Pk+1, in yours
weren't the same as in his, and all that. I think you
have tried to resolve that now.
DR. PAULSEN: I think so.
DR. ZUBER: Did you say tried?
CHAIRMAN WALLIS: Tried to, yes. I said
tried to.
DR. PAULSEN: And this was basically the
incorrect term where this -- we basically had that Pk
resolved in that psi direction squared. As a result
of that, we have looked at what effect that might have
on users in the field using the code.
We filed a trouble report probably about
two years ago that identified that to users, and we
went back and looked at how that error might effect
situations. As it ended up, it was probably
fortuitous, but most user models have angles of zero
or 90 degrees where that error doesn't actually show
CHAIRMAN WALLIS: I think, when we get to
-- I think this afternoon we should look at sort of
your bend model and your downcomer and so on. I think
I still have real troubles with your angles, because
you have sort of momentum, if it's going out in this
direction, it's resolved -- disappears, because it's
in the Y direction and not the X direction; whereas,
in reality the momentum in the whole thing has to be
accelerated somehow.
So we have some problems with angles of 90
Can we very quickly look at the abrupt
area change, because I want -- Your original figure
was better than the new one, because it actually
showed the sort of discontinuity, implying that these
were long pipes.
DR. PAULSEN: That these are?
CHAIRMAN WALLIS: This sort of model of
one-dimensionalizing this problem only works if the
pipes are long.
DR. PAULSEN: That's right.
CHAIRMAN WALLIS: And then there's a
junction in between. So it's what I call the two-
pipe-plus-junction model.
DR. PAULSEN: That's right.
CHAIRMAN WALLIS: And you use a straight
pipe theory, which everyone can agree on, for each
pipe. So we don't need to go over the equations.
Then you do some -- You say the pressure drop is given
by some sort of empirical thing. Then you eliminate
the pressures, and this is two pipe plus junction
DR. PAULSEN: That's correct.
CHAIRMAN WALLIS: It's simply saying that
everything is straight pipe and junction; putting two
pipes and a junction together is just a generalization
of something more fundamental.
DR. PAULSEN: That's correct.
CHAIRMAN WALLIS: But then you say you're
going to resolve these Ws in the psi direction. You
don't do that for the straight pipe, and you can't do
it now.
DR. PAULSEN: Well, that's the junction.
CHAIRMAN WALLIS: You can't do it now.
DR. PAULSEN: Do we say that?
CHAIRMAN WALLIS: Yes. Your slide number
20 has the Ws in the psi direction, and that's
inappropriate for the two pipe plus junction model.
DR. PAULSEN: Basically, for straight
pieces of pipe.
CHAIRMAN WALLIS: No, that's the only
thing you are analyzing, is two straight pieces of
DR. PAULSEN: Yes, that's right.
CHAIRMAN WALLIS: And if you start --
DR. PAULSEN: These will be the same.
CHAIRMAN WALLIS: If you start resolving
in the psi direction, you get the wrong answer. You
don't get Bernoulli's equation. You've got to get
these squared over two. You don't get it, if you
start resolving in a psi direction.
In fact, if you start saying that -- Say,
if it's a momentum balance in the X direction,
anything in the Y direction gets thrown away, you get
the wrong answer.
DR. PAULSEN: For the straight piece of
pipe, this would end up being -- All angles would be
the same.
CHAIRMAN WALLIS: So you've got two pipes
DR. PAULSEN: That's right.
CHAIRMAN WALLIS: And you are now going to
resolve -- You are mixing up two ideas of the bend and
the two pipes. Two pipes can be joined with a
junction, but these squareds are the P squareds in
those pipes and not resolved in any way whatsoever.
DR. PAULSEN: The what now?
CHAIRMAN WALLIS: Two pipes like this.
DR. PAULSEN: Okay.
CHAIRMAN WALLIS: You analyze this one.
You analyze that one. You analyze the junction. You
eliminate the pressure drops of the junction. You get
the pressures at the end. You end up with P squared.
You don't end up with Wk, Wk over Ak2, WK-phi. You
end up with P2 here and P2 there, and not resolved
That's why you need Bernoulli's, because
you are going to take that final thing there with the
W2 over 2, combine it with the first two terms, and
show that it looks like Bernoulli's equation for a
last list system.
DR. PAULSEN: That's right.
CHAIRMAN WALLIS: That won't happen if you
have a psi in there.
DR. PAULSEN: That's right.
CHAIRMAN WALLIS: You've got to have a W2
DR. PAULSEN: These cases, that psi would
all be the same angle.
CHAIRMAN WALLIS: Shouldn't be there.
DR. PAULSEN: Yes.
CHAIRMAN WALLIS: No, it shouldn't be
there at all. If you have flow coming in and going
out at different angles, you don't resolve those terms
for the two pipe model.
DR. PAULSEN: Okay.
CHAIRMAN WALLIS: Think about it. Just do
it. So you've somehow mixed up your idea that you are
resolving momentum with something like this, which
really is a flow equation --
DR. PAULSEN: Yes.
CHAIRMAN WALLIS: -- which is blessed by
the NRC since time immemorial, because they didn't
know what else to do, but it didn't have the psi in
DR. PAULSEN: That's right.
CHAIRMAN WALLIS: And in trying to do
something better, I think you've produced something
which logically doesn't make sense anymore.
DR. PAULSEN: And as we'll see maybe after
lunch, there are several places where we use angles,
and --
CHAIRMAN WALLIS: Well, maybe after lunch
we should look at that bend model where you actually
take -- you lead us through, and you actually develop
the momentum in and out, and you have this strange
thing, the Wx is W over 2 and Wy is W over 2, all that
stuff. Can you lead us through that?
DR. PAULSEN: Which case is that now?
CHAIRMAN WALLIS: That's the simple bend
which we had as an example, sort of the first thing I
tried to understand, this one here in the documents to
RAIs, momentum cells for an example elbow. And you
have statements such as W-4Y is a half-something or
other and all these things. You have things about W2
being a half-W-3. W2y being half-w-3, all those
things. Can you lead us through that?
DR. PAULSEN: Okay. Where are we heading
with that, I guess?
CHAIRMAN WALLIS: I think with that, it
shows a fundamental misunderstanding of how to
evaluate these momentum flux terms. But maybe you can
convince us.
You see, the difficulty I have is you may
be doing something using a different logic from what
we are used to, and we are trying to figure out what
that logic is. It may first appear to be wrong. It
may be that, when we follow your logic, we say, well,
maybe if you think in this way, which may be unusual,
one could justify it or something.
DR. PAULSEN: Where there's an assumption
made that's not apparent.
CHAIRMAN WALLIS: Right.
DR. PAULSEN: Okay. I don't have a slide
on that.
CHAIRMAN WALLIS: I think you ought to
think about this over lunch, this psi thing with the
two, because I think we are -- ACRS might accept the
two pipe plus junction model if that's the only thing
anyone knows how to do, and you got to get on with the
problem, realizing that it contains assumptions. But
this sort of mixture of things where it doesn't really
make sense, and there are statements that, you know,
say that the pressure drop is balanced by the friction
and all the other terms disappear is not true, if
you're just making a momentum balance. But it is true
if you're making a stream line.
DR. PAULSEN: That's right.
CHAIRMAN WALLIS: So it's those kind of
untrue statements that bother us. The answer may be
something which is usable.
DR. PAULSEN: Okay.
CHAIRMAN WALLIS: So is it time to break
for lunch?
DR. PAULSEN: And the stream line argument
kind of carries over into the complex geometry
CHAIRMAN WALLIS: But you've got to be
careful. You know, when stream lines get mixed up,
they are no longer stream lines.
DR. PAULSEN: That's right.
CHAIRMAN WALLIS: They don't follow a
stream line. So you can get bogus answers by trying
to follow a stream line. But I think we'll all agree
that there isn't a simple answer to this momentum
balance problem when you try to write a code, and
you've done -- You've made a valiant attempt.
DR. PAULSEN: Okay. More after lunch.
CHAIRMAN WALLIS: So we will adjourn until
one o'clock. We'll have a break, a recess until one
o'clock. Thank you very much.
(Whereupon, the foregoing matter went off
the record at 11:57 a.m.)
A-F-T-E-R-N-O-O-N S-E-S-S-I-O-N
(1:00 p.m.)
CHAIRMAN WALLIS: We will come back into
session and continue our discussion of the RETRAN-3D
code. We have a request from Jack Haugh of EPRI to
make a statement at this time.
MR. HAUGH: Thank you, Dr. Wallis. I
appreciate that. Again, for everyone, my name is Jack
Haugh. I'm the Area Manager, which is EPRI-speak for
program manager for a variety of areas, including most
of the safety work, and the RETRAN work rolls up to me
in a managerial sense.
I think my remarks were intended to say,
well, I always like after a couple of hours of
discussion going on to kind of say where are we in all
of this? I think the message I would like to convey
is severalfold.
The first is that, regarding RETRAN
itself, as has been pointed out, this code was
developed as an offshoot or a derivative from older
RELAP versions and so on, and there is historical
material in the code development and documentation,
and there are equations written and so on.
Clearly, the results of the in depth
considered review by the ACRS, for example, has
demonstrated that there are places where the approach
taken to try and derive a set of equations that can be
used has its shortcomings.
There have been points raised, exceptions
noted, etcetera, to point out that it doesn't quite do
the job that it needs to do, and that there is a
seeming rigor or academic rigor to it that is, in
reality, not really there.
Had we to do this all over again, 20 years
ago, knowing what we know now and thanks in great part
to the critiques and the study given by the ACRS, it
would have been done differently.
I think -- You know, I heard some comments
before dealing with the momentum equation. According
to Graham, it's a very difficult thing to do, that we
have made, I think he said, a noble attempt. Novak
said an heroic attempt, which I must say heroism is
wonderful, but I don't know.
If it gets you into trouble in the end,
maybe it's not so smart, but the bottom line is, you
know, I think your observations had, had you started
with something more simple -- you know, go to friend
Bernoulli, make a few statements that you're
connecting a bunch of linear segments -- Assuming you
have a straight pipe, you accommodate some of these
things like the bends and the separation around the
bends and the awkwardness with pressures and so on.
You take a loss term in there, and you try and just
fit it in there, and you come up with this quasi-
empirical sort of thing which you tune to the plant
and which you utilize or demonstrate its applicability
to your own minds by how you match the plant
conditions. What else needs to be said?
All right. Now --
DR. ZUBER: Sensitivity analysis.
MR. HAUGH: I beg your pardon?
DR. ZUBER: Sensitivity analysis.
MR. HAUGH; Yes. I mean, that's always an
important thing, because you need to know the range of
applicability of things.
CHAIRMAN WALLIS: There is something else
that needs to be said. I've said it this morning.
There's a public out there watching. It's not just
you and the plant and the NRC that are in this. There
is a theater as well of public opinion.
So it has -- you have to say things in a
way which is not going to give people qualms.
MR. HAUGH: Yes. Well, we certainly
appreciate that, and I can assure both the committee
and the public that that is certainly always our
intent as EPRI.
Now at this point now it becomes where do
we go from here, having said what I just did. I had
thought that, rather than belaboring the point by my
assuring you that we understand the message that has
been given to us, that continued working our way the
equations and finding the exceptions or the confusions
and so on is perhaps not the best utilization of your
time this afternoon, nor is it mine.
CHAIRMAN WALLIS: Well, there is one thing
I would like to do, though. I would like to look at
this bend example, because it seems to show -- You
know, it's actually how you use something. It's not
just a derivation.
MR. HAUGH: If you wish, we would be very
CHAIRMAN WALLIS: I have some problem with
even if you believe the equation, how do you use it
the way you use it. So I think we need to do --
That's the second part of my thing.
First, you have to establish the
equations. Then you have to sort of show that they
can be used in a sensible way, and then you have to
show that they give good results for a plant.
MR. HAUGH: Yes. And that is where, from
my perspective, I would like to see the discussion
ultimately move today. That is to say, we finally
come up with some formulation that we believe works
and can be utilized in a computer code and can be
utilized to replicate the plant transients within
ranges of applicability, and that if those ranges are
understood by the users -- and we take pains to be
sure that they do understand those ranges of
applicability -- that the demonstrations that we can
match the plant data are very important, a very
important consideration to see that the tool is useful
for its purported purposes.
That's all I would like to leave you with
at the moment, and hope that we can get onto that
first presentation that the Chairman has asked for,
and then to what we have done by way of validation.
CHAIRMAN WALLIS: So we've already --
Perhaps you are suggesting -- We forget the first
question I asked, which is what equations you are
using and are the derivations valid. We've already
been over that terrain. You don't want to go over it
MR. HAUGH: Yes. I think, you know, it's
been made quite clear today that there are
CHAIRMAN WALLIS: Right. And now we've
got to look at how they are used. I think the bend is
an example. I would personally like to see how you
propose to use them for something like this, you know,
the downcomer and the lower plenum, because that was
all that we got response to the RAI is this is how we
set up the cells.
I couldn't figure out in any way how you
write a momentum equation for those cells as set up.
If we could get some guidance and if you are ready to
do that --
MR. HAUGH; Well, I'll ask Mark to come up
here. I'm not sure to what degree of completeness he
has that laid out, but we'll ask him to do so.
CHAIRMAN WALLIS: If it's not completed
here and then you want to come before the full
committee next week, we are going to have to say that
we still have a lot of unresolved issues, and it might
make more sense for you and us to agree that these are
the unresolved issues, and then for us to meet as a
subcommittee so before we go to the full committee
with all that's implied there and letters to the
Commission and all that stuff, we actually have some
better understanding of what you think in the final
version of things is sort of an acceptable
presentation before that committee. I think we need
to do that.
MR. HAUGH: Well, perhaps that is the
better way to proceed at this point.
CHAIRMAN WALLIS: It would really be
premature to go next week with something which is
still -- it still has all these unresolved issues in
it, which I don't think we are going to resolve fully
DR. ZUBER: I am gratified that you
recognize our concerns. The only questions I have is
what are you going to do about it?
MR. HAUGH: The first thing is, if we can
agree that the code does work and does do its job
properly -- Well, perhaps before shaking your head no,
you'll let me finish. Okay? Body language speaks
reams, Novak.
DR. ZUBER: Look, I want for you to be
MR. HAUGH: I know you do very much, and
we certainly appreciate that.
If it is simply a matter that the
derivations, again, purport a degree of rigor and
correctness that is not there, there are easy ways to
alert all of our users to this fact. The RETRAN
newsletter can carry that in depth.
If it is necessary to revise the code
manuals, that can be done. But I wouldn't make an ad
hoc commitment to do so at the moment. It depends on
the nature of the need.
CHAIRMAN WALLIS: Well, maybe it will show
up. It will be clearer to you when we look at
something like this bend. Here you are saying, okay,
we can accept this equation as being usable, let's use
MR. HAUGH: Yes.
CHAIRMAN WALLIS: Then when we use it for
the bend, you seem to get results which are very
peculiar; and if it doesn't work for this simple bend
-- results look really peculiar for that bend -- how
can we sort of say that this is now going to be good
for other geometries. So maybe we need to --
MR. HAUGH: Well, I appreciate the nature
of the comment and, hopefully, we are going to be able
to address that to your satisfaction this afternoon.
CHAIRMAN WALLIS: So even if we accept the
equation, then the way it's used seems to raise some
other questions.
MR. HAUGH: Yes. I appreciate that.
CHAIRMAN WALLIS: I don't think we are
ready to move on to the question of does the code as
a whole fit some plant data or something, because that
could be for lots of other reasons, that someone has
tweaked this or chosen this. You know, there are
options in the code to make things work.
That's a big whole other --
MR. HAUGH: Well, let's take this in the
next step, as you have proposed, and let's go from
there. Upon completion of that, perhaps we'll know
whether it's advisable to proceed to the full
CHAIRMAN WALLIS: I don't think we're
going to be ready for the full committee. I don't
think this subcommittee will know what to write. I'm
not sure you will know what to say.
DR. SCHROCK: I'd like to just address a
point that came up earlier today that I think is one
that you need to pay attention to. That is this idea
that these codes have to be in the hands of experts,
people who know what they are and what they do and how
to make them function correctly in their application.
The difficulty that you have with the
group of people that are out there that know how to
run these codes is that they have been oversold.
That's my experience in talking with many of them.
They have been oversold on the rigor
that's in the code, and so many of them really believe
-- I mean sincerely believe that they learn physics by
operating these codes.
That's a dangerous situation. That's a
dangerous situation.
MR. HAUGH: Well, if there are
misperceptions of that sort, we'll do our best to
disabuse them of that.
DR. SCHROCK: I don't know if you
recognized that.
MR. HAUGH: I appreciate the nature of
your comment, certainly.
DR. SCHROCK: All right.
MR. HAUGH: With that, I'll ask Mark to
come back and resume his presentation, but to focus it
on the matter raised by Dr. Wallis.
CHAIRMAN WALLIS: Thank you. That was
very helpful. Thank you.
DR. PAULSEN: The point I was thinking
about resume this discussion was starting at the point
where we have what we call our RETRAN flow equation
and then discuss how it's applied to more complex
CHAIRMAN WALLIS: I'd like to see it
applied to simple geometry first.
DR. PAULSEN: A simple geometry?
CHAIRMAN WALLIS: Like this bend here or
the T, because this business of I's and J's, you can
just get lost in generalities. But if you would show
us how it works for this sort of thing -- I have real
problems with that and, unless I get an answer, I'm
going to have to write it up in some form to form some
other record, which we don't want it to be.
The same thing with the T, the treatment
of the T is very strange from a momentum balance point
of view, too, and it's a simple thing. I think it's
much better to do these examples than it is to go into
something where you have some generalized math, which
-- it's hard to get hold of.
DR. PAULSEN: Okay. so the T -- You're
looking at the newer write-up, I believe.
CHAIRMAN WALLIS: Whatever your latest
version of the bend is.
DR. PAULSEN: Okay. Which revision, I
CHAIRMAN WALLIS: This is revision 5.
DR. PAULSEN: Revision 5? Okay.
CHAIRMAN WALLIS: I think the answer is
the same as in revision 1. No, I think you've got a
factor of root 2 in there.
DR. PAULSEN: There was an error in the
first one where we were missing a cosine.
CHAIRMAN WALLIS: You changed the other
root 2 in there. Right. So either version you could
look at and explain to us how you get the terms and
what's going on.
Are you prepared to do that? Do you have
transparencies of --
DR. PAULSEN: Okay. I don't have
transparencies of that example. I do have a sample
problem where we actually ran an angle. That might
address your question. Shall we take that approach
and then --
CHAIRMAN WALLIS: No. I mean, I have
questions about how W-2X is a half-W-2 and things like
that. I mean very simple questions. If you can
remember the problem, maybe you can answer that.
DR. PAULSEN: Basically -- Let me just put
this elbow up.
CHAIRMAN WALLIS: I thought you would have
this ready, because I -- Maybe I responded to Lance
Agee and said you guys should come with transparencies
of all the RAI answers. I know that message got
through. I don't know quite who reads the messages.
DR. PAULSEN: I didn't bring
transparencies for that T example, but basically --
Let's start here. I think I'm losing my battery.
CHAIRMAN WALLIS: You see, the problem is,
when I made the presentation two years ago, I had a
detailed critique of the bend, the T and the Y. I
have problems with terms in all of those and, unless
there's some sort of answer, those difficulties will
remain, and they shouldn't remain.
DR. PAULSEN: My impression of your
critique of the Y initially was the fact that we were
missing a -- that, basically, there was an error in
what we had.
CHAIRMAN WALLIS: I think I had about six
critiques of the Y.
DR. PAULSEN: I mean of the elbow.
CHAIRMAN WALLIS: Oh, the elbow, yes.
DR. PAULSEN: For the elbow.
CHAIRMAN WALLIS: Well, let's look at the
example you actually work out, this one here.
DR. PAULSEN: Okay. So we'll just go back
CHAIRMAN WALLIS: You can get started on
DR. PAULSEN: Is he going to make a Vu-
CHAIRMAN WALLIS: Well, yes, he is, but
you might get started on it. So we have W1 and W2 and
W3 defined at the edges of mass balance.
DR. PAULSEN: Can you hold that up? I'm
just trying to remember --
CHAIRMAN WALLIS: You don't have a
nodalization. You could take the one that Ralph has
given you there. So the 1, 2, 3s are the boundaries
of mass and energy cells, and then the 1-circle, 2-
circle are the boundary of the momentum cell.
DR. PAULSEN: That's correct.
CHAIRMAN WALLIS: Right?
DR. PAULSEN: That's right.
CHAIRMAN WALLIS: And you have to decide
what your W1 and W2 bar are, because they are in your
momentum equation?
DR. PAULSEN: That's correct, and they
happen to be the -- Then if we were looking at this
momentum equation at this point here, we have a
boundary in the way this is drawn at that these two
So at those points we need to know those
CHAIRMAN WALLIS: Right. So I think what
you do is you say W1 bar is a half-W1 plus W2. It's
sort of an interpolation of --
DR. PAULSEN: That's correct.
CHAIRMAN WALLIS: Then you suddenly say
it's equal to W2. So you're assuming some sort of--
DR. PAULSEN: That's at steady state, I
CHAIRMAN WALLIS: But it isn't steady
state. The whole thing is a transient analysis.
DR. PAULSEN: This was just a steady state
CHAIRMAN WALLIS: No. This is the example
of a transient -- Okay. Well, that's what really
confused me, because you seemed to invoke the steady
state all the time. But, really, you are showing us
how to do a transient.
DR. PAULSEN: That's correct, and --
CHAIRMAN WALLIS: So what you put in your
transient is a half-W1 plus W2. It's not W2.
DR. PAULSEN: That's correct. We put in
the one-half, and the specific case we were looking at
was a steady state.
CHAIRMAN WALLIS: That's very misleading.
I think you don't -- Well, but the whole purpose is to
develop a dynamic transient equation, and it's very
misleading if you suddenly invoke steady state, which
is not valid in a transient.
So we should take this to be half-W1 plus
W2? All right.
DR. PAULSEN: Yes. And in fact, the way
-- We need a model, and you can call it interpolation
or whatever. You need something to get the boundary
velocities or flows at these --
CHAIRMAN WALLIS: Okay. So let's say
you've got W1, W2 and half-W1 plus W2 going in. Right?
DR. PAULSEN: Right. So for this one we
just do -- There's actually a model where we can
either use a donor cell approach or --
CHAIRMAN WALLIS: But you used the half-
DR. PAULSEN: And that example uses the
half. So it would use the average --
CHAIRMAN WALLIS: So what goes in as a
halfW1 plus W2? Could you write that on there or
something so we can see what we are doing?
DR. PAULSEN: Okay.
CHAIRMAN WALLIS: So that's called W2,
that one there.
DR. PAULSEN: This one here?
CHAIRMAN WALLIS: All right. And W1 is
what goes in, and at your point at the momentum cell
it's a half-W1 plus W2, that lefthand thing. Okay.
That's going in.
Now we need to know -- Now you say W1 psi
is W1x. What does that mean? It would be 1-bar-x.
You're saying that psi is in the x direction.
DR. PAULSEN: That's in the direction of
this angle.
CHAIRMAN WALLIS: So you are making a
momentum balance in x direction?
DR. PAULSEN: That's right.
CHAIRMAN WALLIS: Now for coming out you
say W2x-bar, that's coming out of that 45 degree thing
DR. PAULSEN: That's right.
CHAIRMAN WALLIS: W2x-bar is a half-W2.
Where did that come from?
DR. PAULSEN: It would also be half of
this other flow.
CHAIRMAN WALLIS: Well, is the idea that
it's a half of W2 in x direction plus W3 in x
direction, and there is no W3?
DR. PAULSEN: That's correct.
CHAIRMAN WALLIS: But you can't resolve
flow rates that way. The flow rate across that is a
1/2W2 plus W3, same way as for the other, because
flows are continuous. They don't -- When it goes
around the bend, flows are conserved.
DR. PAULSEN: The flows are conserved.
That's right.
CHAIRMAN WALLIS: You don't conserve just
the x direction of the flow. You can't say that W2-
bar is a 1/2W2x. It doesn't mean anything. You can't
average the x direction velocities flow rates in a
pipe. The flow is continuous. It goes around the
bend. All of W2 goes around the bend, not half of it.
DR. PAULSEN: That's right.
CHAIRMAN WALLIS: So how does W2x get to
be 1/2W2?
DR. PAULSEN: In fact, I think what we end
up here is that this flow will be oriented in this
direction, and it will end up being equal to the
steady state -- for the steady state --
CHAIRMAN WALLIS: But you say W2x is a
1/2W2, and W2y is 1/2W2. So that, to me, says half the
flow is going in the x direction and half of it is
going in the y direction. You've got a statement
here: W2x-bar is a 1/2W2 at that boundary. I'm trying
to understand what it means.
DR. PAULSEN: Which equation is that that
you are looking at?
CHAIRMAN WALLIS: It's in the middle of
page II-93. You've got the same edition that I have,
Revision 5, an non-numbered equation, the fourth one
down: W2x-bar is a 1/2W2.
So you are explaining how to use the code.
That's why we are going into this, and I don't
understand that statement at all. Then W2y-bar is a
1/2W2 is the next line.
What it seems to say is that the flow in
the x direction is half the total flow. A flow in the
y direction is a half the total flow. Is that what it
DR. PAULSEN: Yes.
CHAIRMAN WALLIS: But that doesn't make
any sense. If you draw a boundary in the y direction,
you've got the whole flow going across it, and equally
true for the x direction. You can't resolve flow
rates in x and y directions. You just cannot do it.
It's non-physical. Flow rates across any section in
that pipe are the same.
DR. SHACK: But these are his closure
relations, not his conservation equation.
CHAIRMAN WALLIS: They are what he is
going to put into his equation to use.
DR. SHACK: He's going to eventually end
up conserving mass, but at the moment he's not doing
CHAIRMAN WALLIS: No. He's using -- These
are the terms that go into the momentum equation, this
DR. SHACK: Right. But he's calculating
them from his closure relations, not from a
conservation relation.
CHAIRMAN WALLIS: But what do they mean?
Where are they coming from?
DR. PAULSEN: It is simply an average.
It's an interpolation.
CHAIRMAN WALLIS: But you can't average --
W doesn't have components. So you can't average x
direction component of a scalar.
DR. KRESS: I thought they come about
because it's a 45 degree angle and --
CHAIRMAN WALLIS: That comes later.
DR. KRESS: -- and that gives it one-half.
CHAIRMAN WALLIS: No, there's a 1 over
root-2 that comes later for that.
DR. KRESS: Oh, there's another one?
CHAIRMAN WALLIS: Yes.
DR. SHACK: But if you go back to this 3-
28, those are his closure relations. Those come from
CHAIRMAN WALLIS: I'm saying it doesn't
make any sense.
DR. SHACK: Don't ask if it makes sense.
Just follow the rules and see where you end up. Give
him a chance.
CHAIRMAN WALLIS: No, but what does the
rule mean?
DR. SHACK: He defines the closure rules.
Let him do that.
DR. ZUBER: What does it physically mean?
CHAIRMAN WALLIS: It doesn't mean
DR. SHACK: It means he's saying the
velocity is the average of the -- you know, the in and
out velocities.
CHAIRMAN WALLIS: It's not. It's not a
velocity. It's a flow rate.
DR. SHACK: Well, the quantity.
CHAIRMAN WALLIS: But he's saying it's the
component of a flow rate in an x direction, which I
say doesn't exist. Flow rates don't have components.
DR. SHACK: Just think of it as a
variable, and he's averaging the variable.
CHAIRMAN WALLIS: You can define any
variable. It means nothing.
DR. SHACK: But you know, we're doing
mathematics here now. You know, we've got a quantity
that's varying. So we know what it knows, and we have
to find -- interpolate a value somewhere else.
CHAIRMAN WALLIS: No, because we are going
to use it in a momentum equation. It's got to mean
DR. SHACK: Ah. When he uses it in a
momentum equation, it means something. But the
equation he is writing down now is simply how he is
going to interpolate these discrete values.
CHAIRMAN WALLIS: What you are telling me
is you understand the logic that he's using, albeit it
may be unphysical. Right.
DR. SHACK: Yes. You know, it's the sort
of thing you would do in a mathematical thing when
I've got discrete quantities and I need to get a value
CHAIRMAN WALLIS: But I'm saying that I
don't know what then W2x is. If you are going to
average something, you better tell me what it is.
DR. SHACK: Well, in this case it's just
a variable. You know, when he goes to his momentum
equation or he goes to his conservation equation, he
had better end up conserving mass.
CHAIRMAN WALLIS: This has nothing to do
with conserving mass.
DR. SHACK: No, this doesn't. This is an
interpolation scheme.
CHAIRMAN WALLIS: So let's go back to
where we were. We've got W2x is 1/2W2, and let's then
explain that in terms of interpolation scheme.
DR. PAULSEN: Okay. I'm trying to get my
diagram here to match.
CHAIRMAN WALLIS: W2-bar is across the 45
degree. Right?
DR. PAULSEN: That's right.
CHAIRMAN WALLIS: And W2 is coming in
there, and W3 is going out the bottom, and I'm asking
what W2x-bar is.
DR. PAULSEN: Okay. The model that we
have used, there's either the donor or the average,
and you've picked the average.
CHAIRMAN WALLIS: Right. You picked the
DR. PAULSEN: Yes, that's right. I'm
sorry. So for this particular case, what we would
call the flow that's normal to that surface --
CHAIRMAN WALLIS: In the x direction.
DR. PAULSEN: -- in the x direction is
going to be basically -- well, it will be 1/2W2. Is
that what we've got?
CHAIRMAN WALLIS: Yes, 1/2W2 you say it
is. Right. Why is not 1/2W2 plus --
DR. PAULSEN: It's 1/2W3x, but W3x is equal
to zero. The W2x back at the ranch is W2, because it's
in that direction. W3x is zero, because it's straight
down. So when you do the average, you get half.
CHAIRMAN WALLIS: But in the momentum
equation we need to know the mass flux across the
area. We don't need to know some strange Wx.
DR. SHACK: At the moment he's just
interpolating. He's not doing momentum yet.
CHAIRMAN WALLIS: No, but he is going to.
DR. SHACK: Yes, when he does momentum,
then nail him momentum, but at the moment let him
CHAIRMAN WALLIS: Well, let's say now --
My critique would be you can't resolve flow rates in
x and y direction. So what you are doing is something
fantastic rather than representing physics.
DR. PAULSEN: Okay.
CHAIRMAN WALLIS: I mean, you could do it
if that's the rules you are going to play by,
according to Dr. Shack, but it's a very funny game.
DR. PAULSEN: And what we are doing is
trying to resolve things in the x and y directions.
CHAIRMAN WALLIS: Yes, I understand that's
what you must have been thinking you were doing.
DR. PAULSEN: So the flow in the x
direction for this particular surface would just be
1/2 of W2.
CHAIRMAN WALLIS: And in the y direction
it's 1/2W2.
DR. PAULSEN: In the y direction it's
CHAIRMAN WALLIS: What does flow in the x
direction mean, though? How do you define a flow in
the x direction?
DR. PAULSEN: That's going to be what we
take to be the velocity divided by the density.
CHAIRMAN WALLIS: Times some area?
DR. PAULSEN: Times an area.
CHAIRMAN WALLIS: But then it would be a
root-2, wouldn't it, if it's a velocity?
DR. PAULSEN: It's a what now?
CHAIRMAN WALLIS: The square root of 2, if
it's a velocity, rather than a half.
DR. PAULSEN: The half is simply the
averaging scheme that was developed. If we use a
donor approach, then it's just the upstream turn.
CHAIRMAN WALLIS: Let me say this. In
steady flow W2 = W3 --
DR. PAULSEN: That's correct.
CHAIRMAN WALLIS: -- equals W1-bar equals
W2-bar, all the same. Right?
DR. PAULSEN: That's right.
CHAIRMAN WALLIS: So W2-bar better be W2,
and then its x component is 1/2W2?
DR. PAULSEN: This term? The x component
at this location will be half.
CHAIRMAN WALLIS: All the Ws are equal.
DR. SHACK: Part of the problem is he
thinks of W sometimes as a mass flow rate and
sometimes it's a velocity.
CHAIRMAN WALLIS: But it's neither in this
sense. It can't be either. The flow rate is W across
that surface. The x direction velocity is x over root
two. It's not a half.
If you are using a mass balance, the flow
rate over there is the total flow rate, not half of
it. So when we get to the momentum equation, I guess
we'll see that.
So when you get down to the bottom of the
page, you are going to take W2 over 2, which is your
Wx-bar, divided by A2, and you are going to multiply
it by the velocity it takes with it, which is W2 over
I guess we would agree that the velocity
component resolved is 1 over root two, but I would
maintain, if you are going to make a momentum balance,
you've got to multiply it by the whole flow rate, not
the flow rate in some direction. I mean, your whole
momentum equation was the flow rate times component of
velocity, not flow rate component times component of
velocity. So that half shouldn't be there.
The problem I have with this is that there
seems to be a fundamental conceptual mistake in a very
simple example, and this presumably is in all the more
complicated geometries, too, to some degree, but even
more difficult to figure out because they are more
If you are giving the user advice to do
this for this simple bend, then I don't understand how
we can believe the advice for a more complicated
geometry. This doesn't make sense.
DR. PAULSEN: The point is that we don't
really use this in modeling RETRAN.
CHAIRMAN WALLIS: Well, why do you present
it then?
DR. PAULSEN: Well, that's a good
DR. ZUBER: Well, how do you use it?
DR. PAULSEN: It was going to be an
illustrative example to show simply that, once you go
around the bend, you get the pressure back. You will
see an increase in pressure as you go into the bend
and, once you are around the bend --
CHAIRMAN WALLIS: That won't wash. I
mean, the user has to write a momentum equation for
this cell, 1-2. Right? It has to be there somehow.
So what does RETRAN use for the momentum equation, the
actual equation used for that cell?
DR. PAULSEN: For this cell? In most
cases, if the user does not input angles, he is simply
going to use that momentum equation -- that flow
equation that we looked at earlier.
CHAIRMAN WALLIS: So there won't be any 2
or root two in there?
DR. PAULSEN: Unless he puts in an angle.
CHAIRMAN WALLIS: So you are making it
arbitrary whether or not there is a factor of 1/22?
Could be there or not there, depending on what the
user chooses to do?
DR. SHACK: Well, I think what he's saying
is that, by the time he gets to the end of the elbow,
it won't make any difference whether he modeled it as
an elbow or as a straight pipe --
CHAIRMAN WALLIS: If you get to the end of
the elbow. But you might not. You might discharge
into a container.
DR. SHACK: If he had that geometry, you
would do something different. But if he's just doing
an elbow versus a straight pipe --
CHAIRMAN WALLIS: You see the problem I
have. You have a fundamental equation, one is to
believe can be used. You use it for something like
this half an elbow, and it doesn't make sense.
DR. PAULSEN: Okay. I guess the point we
were trying to show here was that once you get around
the elbow, everything comes back, that since it's a
recoverable loss, and you really don't need to include
the detail of elbows in loops.
CHAIRMAN WALLIS: I don't think that's
necessarily true, because then you would have to use
your y component of momentum or something on the other
side of it.
DR. PAULSEN: That's right, and it ends up
canceling out.
DR. SHACK: He's got his momentum equation
3-37-C to show his pressure drop in the first -- you
know, as he coming through there in the first part.
Then he is going to get a pressure recovery when he
computes pre-3 minus T-1.
CHAIRMAN WALLIS: That's not really
kosher. I mean, you can say we calculated this whole
thing wrong up to 2, and we make the same error in
reverse from 2 to 3. So the error is irrelevant.
That's -- I don't think that is really respectable.
Now maybe if you went around in a complete
circle, you might find the errors build up instead of
DR. PAULSEN: Well, I guess it looks like
maybe that we haven't addressed the issue here on the
elbow example. We'll have to go back and look at that
CHAIRMAN WALLIS: I think it's really
fundamental. This is supposed to illustrate the use
of an equation, and doesn't reinforce the equation at
DR. ZUBER: And then if you cannot explain
the simplest case, how can one believe -- at least,
how could I believe or Graham or anybody else -- then
you got a more complicated case.
CHAIRMAN WALLIS: What would be on the
lefthand side if it were a transient? You have a d by
T of something with a L1 and L2 in there?
DR. PAULSEN: That's right.
CHAIRMAN WALLIS: Then you would have Dr.
Shack's problem, that the L1 and L2 are not in the
same direction; are they resolved in some way? That's
not explained either.
DR. PAULSEN: Yes.
CHAIRMAN WALLIS: So it seems to me that
you can't explain this simple thing. How should the
user use it for something more complicated?
DR. PAULSEN: Well, let's go on and look
at some of the more detailed cases.
DR. ZUBER: This is the simplest detailed
case, and you cannot explain why --
CHAIRMAN WALLIS: We can look at the T,
too. I mean, the T has this peculiar one-fourth of W1
minus W22 in it. If you make W2 zero, you find
Bernoulli's equation has a quarter in it. Now
Bernoulli's equation doesn't have a quarter in it.
So again -- I mean, I don't want to go
into all these details, but I've found that in writing
my review of this stuff, I was writing page after page
of stuff saying that this doesn't make sense.
DR. PAULSEN: Okay. It sounds like we
still have something to do with the elbows.
CHAIRMAN WALLIS: You must have a
reasonable excuse for the equation you are using, and
you must have a reasonable exposition of how it
applies to some simple geometries that doesn't appear
to have some logical disconnects in it. Then I think
it's acceptable.
DR. PAULSEN: Okay. The point that I
would like to make at this point is the fact that
initially we started out trying to show rigor and
including the angles, and that was probably a mistake;
because we don't really use angles in a code.
CHAIRMAN WALLIS: But even so, if you are
going to use the two pipe plus junction model, how
does it apply to a bend? Still the same issue. How
does it apply to the downcomer?
DR. PAULSEN: That's right.
CHAIRMAN WALLIS: You're just going to say
there's two straight pipes and a junction up there?
Maybe there is there, but I don't see any two straight
pipes here. I see a bend. So I don't know how to
define my two straight pipes.
DR. PAULSEN: Basically, it does use a
straight pipe model, a two straight pipe model.
CHAIRMAN WALLIS: But it isn't, because
it's got this root 2 --
DR. PAULSEN: And that comes from the
angle piece that we normally would not include.
CHAIRMAN WALLIS: See, the genuine -- If
you model this as two straight pipes, you wouldn't
have the factor 2 or the factor root 2 in there at
all. That's my contention. If you simply took two
straight -- Excuse me -- with a 45 degree bend like
that and said the bend is a junction, you wouldn't
have any of those root 2s and 2s in there.
DR. PAULSEN: That's right. Because
normally we would model an elbow as a node -- straight
node that way and a node in that direction.
CHAIRMAN WALLIS: Right. And there
wouldn't be any of these 2, root 2s and stuff.
DR. PAULSEN: No. There's none of the
root 2s.
CHAIRMAN WALLIS: So you have an equation
which differs from the other one by a factor of --
what, 2.8 or something? Well, in that case we should
do sensitivity studies to see when the factors vary
between half and 4 or something. Does it make a
difference or something?
DR. PAULSEN: Right.
CHAIRMAN WALLIS: And it may well be that
what you've just said is that when you are really
worried about a circuit, everything sort of washes out
in the end anyway, and random fluxes don't matter
because what you lose here, you gain there may well be
DR. PAULSEN: In this particular case,
most applications where we have elbows would not use
that 45 degree angle. We would do something either
like that nodalization or something like that
CHAIRMAN WALLIS: So I guess we get back
to Ralph Landry's point, that what's in the code and
how it's actually used is different from the
exposition in the documentation, and the code seems to
work, and the documentation in that context is
DR. PAULSEN: And I guess maybe what we
need to do is focus on that. There are --
CHAIRMAN WALLIS: But you see the problem
I have. I'm coming from the outside. I'm like the
naive sophomore student trying to understand this,
because my professor says go and figure out what they
are doing with this bend. I come back, and I say,
prof, I just can't figure out what they are doing.
And that's not good.
DR. SCHROCK: Do I understand you put a
loss coefficient in when you do what you've shown
DR. PAULSEN: Yes. In something like this
there would be a loss coefficient.
CHAIRMAN WALLIS: And there is no pressure
from the wall. There's no force from the wall.
DR. PAULSEN: No.
CHAIRMAN WALLIS: So there's nothing to
turn the flow to the other direction. There is no
force in the x direction to turn it around the bend?
DR. PAULSEN: In this case?
CHAIRMAN WALLIS: There's no force from
the wall.
DR. PAULSEN: Just the pressure difference
that we would see.
DR. SCHROCK: So that is strictly modeling
straight pipe -- stringing together straight pipes to
represent the actual geometry.
DR. PAULSEN: That's correct.
CHAIRMAN WALLIS: Maybe you better go back
and say that's just what you are doing.
DR. PAULSEN: That's probably the best
CHAIRMAN WALLIS: And make all the excuses
-- Well, don't make excuses. This is engineering. We
understand engineering approximations. We understand
you do the best you can do, and that you test to see
if it works, and we would buy that.
We cannot buy what appears to be logical,
sort of non sequiturs. So you see, I have a problem
not just at the formulation of the equations but in
the examples showing how they are used. If that's not
the way you really use them, then you need to show us
examples of how you do really use them.
DR. PAULSEN: And that is kind of where I
was headed.
CHAIRMAN WALLIS: And that's where I had
a problem with the T, because the T seemed to me to
give some funny results, but maybe it's okay for
nuclear safety.
DR. PAULSEN: Well, I've got some examples
of a T where we might need to include some of the
effects of angles.
DR. SCHROCK: There is also the downcomer.
DR. PAULSEN: Yes. Shall we just skip
over the 1-D stuff. I think you probably --
CHAIRMAN WALLIS: Well, yeah, I guess we
can. We're going to spend a lot of time -- The T is
not 1-D, because it comes in one way and goes out the
DR. PAULSEN: Right.
CHAIRMAN WALLIS: And you have this
mysterious magnitude of the volume sent at the flow,
and you have again this mysterious W1x, W1y. Stuff is
coming in in this direction, but it seems to have a
component in that direction even though it's all going
in this direction.
DR. SCHROCK: Then there's issues of flow
-- or phase separation in Ts.
DR. PAULSEN: That's right, and none of
that is really handled. That all has to be done with
sensitivity studies or constitutive models.
CHAIRMAN WALLIS: Frankly, everybody knows
you cannot model a T with a simple momentum balance.
You cannot do it.
DR. PAULSEN: And basically, what we have
-- the form that we have after responding to one of
the NRC questions is a form that pretty much maintains
the Bernoulli had the p plus rho-v, one-half rho-v.
CHAIRMAN WALLIS: What you need to do is
you need to do experiments. You need to define some
empirical coefficients reflecting how much it's like
Bernoulli and how much it's like momentum and
capturing that, and then you have to have coefficients
that come from experiments. You energy loss depends
not just on one flow rate but the ratio of the flow
rates and things like that.
When you have flow going all the around
the bend instead of carrying on, the pressure recovery
is quite different from when it was going straight on.
It's not a simple problem.
DR. PAULSEN: And the real problem during
an application is that those flow patterns can change,
and the relative magnitudes can change. So you have
to try and capture something that bounds the --
CHAIRMAN WALLIS: But there's nothing of
bounding in your -- You see, your example is presented
as if this is right, and if you had qualified is and
said that in reality it's doing something like this
and in order to get on with the problem we make this
assumption which we think is bounding or something,
that would, I think, help a great deal. When you just
put it down as if it's right --
DR. PAULSEN: I understand your concern
CHAIRMAN WALLIS: -- then this psi thing
is sometimes x and sometimes y. I thought it was some
intermediate angle in the bend somewhere.
I would think it needs to be fixed up.
Otherwise, we may have to write a critique based on
what we see. It's all we've got to go on.
DR. SHACK: Who are you going to believe,
your eyes or what you hear?
DR. ZUBER: Well, my advice would be
really to go and go through the entire document and
really address point by point. State your
assumptions, the equations, and proceed from there.
This is really arm waving -- really arm waving.
CHAIRMAN WALLIS: You are still
constrained to what's really in the code, and that's
where we still have a bit of a mystery as to what it
really does with these things.
DR. PAULSEN: And I guess that was kind of
the purpose of going over these next few slides, is
that --
CHAIRMAN WALLIS: But these are much more
complicated things. So I have difficulty.
DR. PAULSEN: These will be some arm
CHAIRMAN WALLIS: You lose me in this arm
waving completely, because it gets even -- it
obfuscates the issues even more. Which one did you
want to go into?
DR. PAULSEN: Well, let's just kind of go
through this quickly and see if --
CHAIRMAN WALLIS: Did you want to go
through 29 and 30? Okay, that's fine.
(Slide change)
DR. PAULSEN: One of the things that we
have to do in RETRAN is we've got our flow equation
which basically looks like the Bernoulli equation, and
it came from 1-D information. We don't have anymore
information than that, and now we have to try and
model a complex system where we've actually got some
3-D geometry and some different flow paths.
So what we have to do is use a number of
approximations on how we apply that equation then to
these three-dimensional geometries. There's a whole
volume of RETRAN documentation that's devoted to
setting up a model for a plant that provides specific
guidance for how do I model a plenum, what do I have
to consider, how do I calculate a length, how do I
calculate diameters when I've got these weird geometry
That's all discussed in the modeling
guidelines for RETRAN-2. Now that document hasn't
been rewritten for RETRAN-3D, because what is given
there is equally applicable to RETRAN-3D in terms of
how you set up nodalization and how you set up your
input parameters for that flow equation.
So, basically, that modeling guideline
provides us with some general rule as to how we would
define the input. In many instances, it will provide
alternate methods for calculating some of that input
One of the things that is required,
though, is that we typically require some sensitivity
studies, because we are doing approximations.
CHAIRMAN WALLIS: Well, you responded to
that by the next slide, 30. That's a question, was
one of the RAIs: How do you model these kinds of
DR. PAULSEN: That's right.
CHAIRMAN WALLIS: And frankly, I looked at
the tables, and I couldn't understand what any of the
terms in those tables meant. So I was left none the
wiser than I was before as a result of the response to
this RAI.
DR. PAULSEN: There was also some -- a
reference to this modeling guidelines document --
CHAIRMAN WALLIS: And if you had said,
look, here is the node, let's say the downcomer to low
plenum. Let's say we've got this complicated thing we
have to model. We are going to use the RETRAN
momentum equation in some form. This is how we
evaluate Pk, Pk+1, Wk, Wk psi, and this is the final
equation we come up with; this is how we get the Ls,
you know -- None of that is in this reply. So I have
no idea.
DR. PAULSEN: Okay. for that information
we actually referred to this modeling guideline
document. It's NP-18.50, volume 5.
CHAIRMAN WALLIS: See, you're replying to
an RAI or I guess it's also one we stirred the staff
up to ask this question. The Table 1, Table 2 didn't
help at all. I don't know what you are talking about.
There are junctions which are labeled 1 and 2, and
there are junctions which are labeled 2-circle and so
DR. PAULSEN: Okay. The circled
quantities are the volumes.
CHAIRMAN WALLIS: But they seem to be the
same. There's no distinction between the two kinds of
junction. Then I couldn't understand these 1/2W2s and
1/2W3s. They seem to be something like the halves
that you have in the bend.
DR. PAULSEN: They are.
CHAIRMAN WALLIS: Then you get this
quarter-W32. Well, so these have the same strange
features that we didn't like about the bend.
DR. PAULSEN: Those are those boundary
flows that you need at the momentum cell boundary.
CHAIRMAN WALLIS: So I guess, to be happy,
it will be nice to see how you did it. When you've
got, say, the lower plenum -- Look at the lower plenum
downcomer. We've got four boundaries to the outside
world. We've got 2 and 3 and 4 and 5. How do you get
away with two pressures, P1 and P2 when you've got
four boundaries to the outside world?
DR. PAULSEN: Okay. Let's put that slide
up here for just a minute.
(Slide change)
DR. PAULSEN: That may be confusing where
we actually have these two flows shown. But basically
in this case, when we write the momentum equation or
our flow equation, it would actually be written for
just this one junction, and then we actually have to
have a boundary rho vA at this surface and one at this
surface of the momentum equation.
CHAIRMAN WALLIS: So is it they are in the
same direction at 2 and 3? So what happens to 4 and
5 then?
DR. PAULSEN: Four and 5 are factored into
this boundary condition here. They are factored into
this flow with this boundary.
CHAIRMAN WALLIS: There isn't any flow at
that boundary, is there?
DR. PAULSEN: That's the rho vA on this
CHAIRMAN WALLIS: I understand there's no
flow going into the bottom of the lower plenum.
DR. PAULSEN: What's that now?
CHAIRMAN WALLIS: No flow coming out that
bottom line across there.
DR. PAULSEN: At this one?
CHAIRMAN WALLIS: Yes. Is that flow
coming out of there?
DR. PAULSEN: What this boundary is is the
net. It's sort of an average based on the conditions
in these junctions, and I think --
DR. ZUBER: How do you determine that?
DR. PAULSEN: I have an example that
CHAIRMAN WALLIS: Your momentum equation
is assuming it is coming in at 2-circle and going out
to 3-circle?
DR. PAULSEN: The 2-circle.
CHAIRMAN WALLIS: That's the inlet Wk, and
the Wk+1 is --
DR. PAULSEN: And then there will be a
boundary on this surface, yes. There will be a
surface flow on this surface.
CHAIRMAN WALLIS: And then what do the
other flows do, the W4, W5?
DR. PAULSEN: These W4s and W5s are
actually used to -- It's actually W3, 4 and 5 are used
to calculate this value.
CHAIRMAN WALLIS: See, if I were to use
Bernoulli, I would use it from 2-circle up into the
core. It's going from 2-circle up to W4, W5. It's
turning the corner.
DR. PAULSEN: That's right.
CHAIRMAN WALLIS: It's not going from 2
into the lower plenum, is it? You seem to say that
the inlet is at 2 and the outlet is at 3, and the rest
of it is --
DR. PAULSEN: What we should do is, when
we start looking at a momentum cell, it's really this
part that we have written for.
CHAIRMAN WALLIS: You shaded it, right?
DR. PAULSEN: The shaded part.
CHAIRMAN WALLIS: And the in is the top,
and the out is the bottom?
DR. PAULSEN: That's correct.
CHAIRMAN WALLIS: That's a very funny
DR. PAULSEN: And we don't have those
explicitly included. They are included in that
surface flow.
CHAIRMAN WALLIS: I guess, if we looked at
the details of this equation, if you developed it for
us, we would have a whole lot of questions about it
probably. It would be nice to see it, though.
DR. PAULSEN: What we still apply here is
that two pipe equation.
CHAIRMAN WALLIS: I can't see two pipes.
The two pipes are from 2 down to W3 and from W3 down
to the lower plenum? That flat thing is a pipe going
DR. PAULSEN: This is the lower plenum.
CHAIRMAN WALLIS: That flat thing is a
pipe oriented downwards?
DR. PAULSEN: That's half -- Yes, half the
lower plenum.
CHAIRMAN WALLIS: And that's a good model
of that part of the system? See, the key thing is
it's coming in W3 and going out at W4, W5. The pipe
should be horizontal or something to connect between
W3 and W4, W5, shouldn't it?
DR. PAULSEN: Some of this has to do with
the level of nodalization, but in this simple
nodalization this is the way that pipe would be
DR. SCHROCK: Does that vertical leg on
that thing represent the entire downcomer or some
section of it?
DR. PAULSEN: In a very simple model, this
could be the whole downcomer. In some cases, the
downcomer may be nodalized vertically.
DR. SCHROCK: No, no. I'm not concerned
with vertical nodalization but as it represents the
entire downcomer?
DR. PAULSEN: In a lot of cases it is
modeled with one downcomer. In some cases, models
will have multiple downcomer volumes, depending on the
type of transient that is being modeled.
CHAIRMAN WALLIS: So what I'm supposed to
do is take some of these terms in Table 1 and Table 2
and just substitute them into your equation 5, which
is your RETRAN momentum equation, and that's going to
be a momentum equation for that shaded volume?
DR. PAULSEN: That's correct.
CHAIRMAN WALLIS: Well, there aren't
enough terms. I think sort of a commencing argument,
you have to complete the loop. You have to say,
right, we are going to show you how to evaluate P1,
P2, Ak, Ak+1, all the things that appear in that
equation, because it's not transparent in any way at
I wouldn't have a clue how to evaluate Ak,
DR. PAULSEN: And I think some of the
problem is because we haven't given the preliminaries
on how that's done, and we have referred just to that
modeling guidelines where all that information exists.
CHAIRMAN WALLIS: Seems to me, this is
very important.
DR. ZUBER: But that was for another code.
DR. PAULSEN: What's that?
DR. ZUBER: That was for another code, not
for this one.
DR. PAULSEN: Those terms are unchanged.
The mixture momentum equation is unchanged. You model
the nodalization the same way.
CHAIRMAN WALLIS: This is for what code?
DR. PAULSEN: RETRAN-2. It was the
predecessor to RETRAN-3D. So that we basically have
the same momentum equation formulation.
DR. ZUBER: I have a problem. Really,
throughout your presentation you use basically --
basically. I prefer something more definite.
DR. PAULSEN: The momentum equation is the
same. The mixture momentum equation is the same.
CHAIRMAN WALLIS: If I had to write a
review of this today, I would write that the
description in this reply to this RAI is completely
DR. PAULSEN: Well, yes. And I think part
of the problem is that we relied on what was in the
modeling guidelines without specifically including
some of that.
CHAIRMAN WALLIS: Well, maybe there is a
good option, but it just isn't here.
DR. PAULSEN: Yes. I think --
DR. SCHROCK: I have a sort of simple
question. In talking about pipes, elbows, etcetera,
we finally ended with a conclusion that what the code
actually does is calculate flows in straight pipe
segments and then puts loss coefficients for the
junctions between those.
DR. PAULSEN: That's right.
DR. SCHROCK: That's what is programmed
into the code. Now you are talking about this more
complex system. You've got this set of variables
defined on the board. It seems incomplete to include
flow into and out of the lower part of the lower
plenum, but what isn't clear to me is are you showing
us something that is actually programmed into the code
or is it again a situation where you are trying to
illustrate in principle what you think the code does,
but the code has actually got equations that are not
from this? Which is it?
DR. PAULSEN: Trying to illustrate what
the code does. This is not hard wired into the
DR. SCHROCK: And what does user
guidelines in choosing nodalization mean for these
complex geometries? What is the user actually doing
that influences what the code calculates? That's one
The other question is what are the
equations that are programmed into the code?
DR. PAULSEN: Okay. Basically, the
equation that's programmed is that equation with the
area change included in it. So it has momentum flux
terms. It has the form loss terms, the pressure
gradient on the righthand side. The lefthand side has
a thing that's factored out that is called geometric
inertia. It's basically the L over A, and that's
multiplied times --
DR. ZUBER: What is the L for this? You
have a volume.
DR. PAULSEN: Okay. That's the next step
in this discussion, is what the L is. For the 1-D
case that we've talked about, the L and the A are just
geometric terms. They are the geometric length and
the geometric flow area.
CHAIRMAN WALLIS: What are the Ps? The Ps
and 2 and 3 or 2 and 4? What are the Ps?
DR. PAULSEN: The Ps in this case are at
2 and 3.
CHAIRMAN WALLIS: That's absolutely naive.
The pressure that pushes this up around it is between
2 and 4. Three is irrelevant. Three is just a token
bucket. The pressure that accelerates this flow is
between 2 and 4.
DR. PAULSEN: And if sum those equations,
you can show that, too.
CHAIRMAN WALLIS: Oh, you sum them?
DR. PAULSEN: No, if you were to.
CHAIRMAN WALLIS: This is one equation.
You have one equation for that entire shaded area.
DR. PAULSEN: We have one equation for
this path.
CHAIRMAN WALLIS: I would put the
DR. PAULSEN: And we have another equation
for this path.
CHAIRMAN WALLIS: It's a shaded -- I
understood that 2-circle is a volume for mass
conservation, and 3-circle is the lower plenum. 2-
circle is the downcomer. Take half the downcomer and
half the lower plenum, make a momentum cell.
DR. PAULSEN: For this path.
CHAIRMAN WALLIS: It is not divided at
all. It's one equation for that whole shaded thing.
Right? One equation for that whole shaded thing in
the middle.
DR. PAULSEN: For this?
CHAIRMAN WALLIS: Yes. One equation, one
RETRAN equation for that whole shaded thing.
DR. PAULSEN: That's right, and that's for
CHAIRMAN WALLIS: And now you are telling
me it is subdivided in some way.
DR. PAULSEN: No. This one equation that
we've just talked about is for this flow from the
downcomer to the lower plenum.
CHAIRMAN WALLIS: And that's between 2 and
3. In terms of pressures it's the top surface and the
bottom surface.
DR. PAULSEN: That's right.
CHAIRMAN WALLIS: It's driving the flow.
DR. PAULSEN: Two and 3, and then we write
another equation for junction 4 and another one for
junction 5, and basically this equation looks at the
pressure between 3 and 4, and this other one would
look between 3 and 5.
CHAIRMAN WALLIS: So the fact that the
flow is coming out at 4 and 5 doesn't figure out
somehow in your momentum, though in the bend we had
going in and coming out. That coming out is somehow
different from the coming out at 3.
DR. PAULSEN: That's right.
CHAIRMAN WALLIS: So I'm trying to figure
out what the two pipe model is saying. It's saying
that there is actually a pipe between -- this flat
thing, this disk-like thing is a pipe between the top
and the bottom, and the flow is coming in and going
DR. PAULSEN: Right.
DR. SCHROCK: I can't read the subscripts
on it, but on the last picture right there you've got
a W, looks like 2 going down.
DR. PAULSEN: This one here?
DR. SCHROCK: That one. The flow going
into the horizontal surface on the top of that flat
segment, going down there; on the other side, it's
going up. One of them is out of the downcomer. The
other one is into the downcomer. How do you square
that with the other picture?
DR. PAULSEN: That's basically -- This may
be misleading by including these, and it appears that
maybe it is, because this is the situation we have
where we have the downcomer flow and then the core and
bypass flow or two core flows.
For this particular momentum cell, we only
worry about that case, and these flows --
DR. SCHROCK: 5 is bypass flow?
DR. PAULSEN: It could be core bypass.
It's just one of the parallel paths through the core
at this point or it may be another core channel. But
we would write one equation for this path, and then
another equation for this middle path, and it would
probably be less confusing if we had left those flows
off, and then a similar equation --
DR. SCHROCK: Or put them in the middle.
I mean, the downcomer is modeled as one pipe.
CHAIRMAN WALLIS: No, there are two
downcomers. One is going up; one is coming down.
There's two different cells for the downcomer.
DR. SCHROCK: What?
CHAIRMAN WALLIS: Two and 5 are different.
DR. PAULSEN: Yes. This would be a core
channel, and it may be a bypass or a second core
DR. SCHROCK: Well, it's not the
DR. ZUBER: No, the downcomer would just
be one on the left.
DR. PAULSEN: Just the 2 is the downcomer.
DR. SCHROCK: 2 is the whole downcomer,
and you do that as one pipe. Then the upflows are in
the core and in a bypass. You make it look in this
picture as though 5 is into the downcomer.
DR. PAULSEN: At this flow?
DR. SCHROCK: Well, I mean your picture --
it just geometrically looks like a part of the
downcomer, and that's not what you mean.
DR. PAULSEN: No. No, this isn't part of
the downcomer. This flows both --
CHAIRMAN WALLIS: Well, I think all this
illustrates that we need more explanation. It may
well be that the whole thing you've put together has
a kind of modular structure, which at some level makes
sense, but it's difficult to figure out what it is.
DR. PAULSEN: And I think we have
sometimes difficulty seeing the forest for the trees,
because maybe we are too close. I don't know, but the
DR. ZUBER: The trees -- The forest
doesn't make any difference. No, really. I can see
that you have two pipes and you connect them. If it's
a pipe flow here, you really approximate the whole
downcomer by horizontal pipe. Right?
DR. PAULSEN: By a vertical pipe, in this
case. Yes.
DR. ZUBER: Downcomer. Then in the lower
plenum it's another pipe.
CHAIRMAN WALLIS: It's a vertical pipe.
DR. ZUBER: No, the horizontal --
CHAIRMAN WALLIS: It's a vertical pipe.
I think the lower plenum is a vertical pipe. Its
horizontal momentum isn't --
DR. ZUBER: Well, what determines this
horizontal line? Where is it?
DR. PAULSEN: This one? That's just half
of this normal volume.
DR. ZUBER: Well, can it be three-
quarters, five-fifths?
DR. PAULSEN: No. It's half.
DR. ZUBER: Why?
DR. PAULSEN: That's just the way the code
is formulated, is that you get half of --
DR. ZUBER: No. Look, the code doesn't
formulate anything. It's you who formulate the code,
and you tell the code what to do.
DR. PAULSEN: Let's back up to the
inertia, because that's really where -- There's some
of these terms that you input for these things that
really aren't 1-D, and it's really not the length.
Maybe that's what you are getting at.
DR. ZUBER: I would like to see what are
you doing.
DR. PAULSEN: I think that's maybe what
you are getting at.
DR. SCHROCK; If you were just dealing
with steady state, the net flow across that horizontal
surface would be zero, and if would have no impact on
the rest of your equations. But you are dealing with
a transient. So you have to account for accumulation
and depletion in that volume.
The only way to do that is to account for
the inflows and the outflows through that horizontal
surface. You're not doing that.
DR. PAULSEN: I think that's what we are,
and I'll show you in a slide.
CHAIRMAN WALLIS: But the pressures --
You're saying the pressures on the end, the top and
bottom, determine the flow, but there is a pressure on
that other boundary there going to W4, W5, which is
not the same as either of the other two pressures. It
doesn't seem to appear in there at all. There's a
pressure across that boundary where W4, W5 come out
that affects the balance on that box. No, the bottom
thing. Look at that shaded thing there.
DR. PAULSEN: This one here?
CHAIRMAN WALLIS: Your two pipe equation
says everything is going from 2 to 3. That's where we
evaluate P1 and P2.
DR. PAULSEN: That's right.
CHAIRMAN WALLIS: There's a pressure
across that top there that doesn't appear in the
equation at all.
DR. PAULSEN: Across this one?
CHAIRMAN WALLIS: Right.
DR. PAULSEN: Right. Now where we only
use that simplified momentum equation, we only have
coupling with one upstream volume --
CHAIRMAN WALLIS: So physically it doesn't
make any sense.
DR. PAULSEN: -- and one downstream
CHAIRMAN WALLIS: -- post-balanced the
pressure across there has got to come into the
balance. Right?
DR. PAULSEN: So the flow equation we
write is simply for this pressure, this pressure.
CHAIRMAN WALLIS: Well, I think it would
be good if you could go through and actually derive
the answer for this problem, all the way through to
the final equation, showing how you evaluate the Ls
and the As.
DR. PAULSEN: That's sort of what I've got
outlined here.
DR. SCHROCK: Is there a version of this
slide somewhere where you can read the subscripts? I
can't read them -- the one that's up there.
DR. PAULSEN: The one that's up there?
DR. SCHROCK: Where can I find that where
I can read the subscripts?
CHAIRMAN WALLIS: They are lost in the
DR. PAULSEN: It's in the RAI and it's an
DR. SCHROCK: Is it legible there?
DR. PAULSEN: It should be.
CHAIRMAN WALLIS: When it's FAX'ed, it's
not legible. Well, it's fascinating, because if
you've done this, you've done something which is very
DR. PAULSEN: Well, the users do this all
the time. So they've done the challenging work.
DR. ZUBER: Are you implying they are
doing it correctly?
DR. PAULSEN: Pardon me?
DR. ZUBER: Are you implying they are
doing it correctly?
DR. PAULSEN: I think they have
demonstrated in most cases that they are.
CHAIRMAN WALLIS: This is a discussion of
length here?
DR. PAULSEN: Yes. That's what this deals
DR. SCHROCK: See, all of this that you
are telling us seems to me to govern the choice of
equations that are going to describe the system, but
those equations have already been programmed into
RETRAN. What I'm having difficulty understanding is
how your user guidelines on nodalization can influence
what has been programmed. I don't see that there is
any way that it can do that unless you are going to
tell us that there are a number of different things
that are programmed and that the user chooses among
various options.
DR. PAULSEN: For the most case, the user
will use the compressible form of the momentum
equation that has the momentum flux terms, and then
there will be area changes on the upstream and
downstream volume for these kinds of geometries. So
there is basically an equation that's programmed that
they use the same equation, and their input then will
affect various terms in that equation.
As I mentioned, one of the primary terms
in this equation is the inertia. For one-D
components, it's a geometric inertia. For these 3-D
components, it's something else.
In effect, if you imagine where you have
flow coming into a downcomer, the flow first has to
kind of wrap around the downcomer and then turn and go
down. So what the user has to do in determining the
inertia then is to estimate via some engineering
judgment or hand waving -- it's not an exact
calculation -- be able to calculate what that flow
path might be in determining what the geometric
inertia is.
CHAIRMAN WALLIS: Some kind of average
length of a stream line or something?
DR. PAULSEN: That's one way of doing it.
One of the complications you run into there is that
stream lines may change during the transient.
CHAIRMAN WALLIS: See, in the Porsching
paper it says it's volume divided by area, which I
don't think is really the right answer. I convinced
myself, if I had a pipe that went around a complete
circle, that the momentum in that pipe is zero,
because everything balances, and it can be a long
circle. If I take a pipe and bend it in a circle,
that circle has no momentum in it, if it's a steady
flow or incompressible flow.
DR. PAULSEN: Okay.
CHAIRMAN WALLIS: If you are following the
length all the way around, that makes sense.
DR. PAULSEN: So you follow, in effect,
the flow path of the stream lines.
CHAIRMAN WALLIS: That's different from
looking at the entire momentum, because the actual
momentum added up is zero.
DR. PAULSEN: Right.
DR. ZUBER: Do you contend that this code
is a best estimate code or what?
DR. PAULSEN: A best estimate? In some
senses, yes. There aren't an awful lot of
conservatisms built into the code itself.
DR. ZUBER: There are?
DR. PAULSEN: There aren't. One area --
You know, there may be a model here or there like a
critical flow model that's a conservative form of the
model, but because of the way that model is used--
CHAIRMAN WALLIS: So you showed a picture
here. The answer is it's the length, the average
length of a stream line or something, and there is
some rationale for that?
DR. PAULSEN: That's right. So you have
to kind of visualize what the flow path might be.
CHAIRMAN WALLIS: Does it matter if it's
fatter at one end than the other?
DR. PAULSEN: Yes, it does, if you want to
account for all of those effects. Basically, in the
user guidelines it spells out whether you've got
parallel paths that you've got lumped into your
junction or whether you have serial paths.
CHAIRMAN WALLIS: Okay. So you could walk
us through, if you had the time, how you actually
calculate L and A for those three geometries you just
showed us?
DR. PAULSEN: That's right. Basically,
all I wanted to point out was that there are not
rigorously developed equations for how you do it but
some rationale for how you actually calculate those.
CHAIRMAN WALLIS: See, the concern I have
is I couldn't understand what you say, and I still
don't. But maybe a user gets enough training that he
or she can do it. If it's so much up to the user, the
users might choose all kinds of different Ls and As,
depending on their own preference. So you get all
different answers to the same problem, depending on
who happened to use the code.
DR. PAULSEN: Well, and in fact, that's
why you have to do some sensitivity studies to find
out where the sensitivities are in the model. You
know, some -- Inertia on some junctions may not affect
this solution --
CHAIRMAN WALLIS: Does this mean that the
NRC has to review how a particular user has chosen to
work out these things and provide some kind of
validation of it every time?
DR. PAULSEN: There are a lot of
guidelines that people routinely follow and standard
sensitivity analyses that people do. A lot of things
that are common in models, although they are not
exactly the same, but people learn from previous use.
For instance, a report for a particular
transient may identify that there's a particular
inertia that is sensitive. So you may have to use
some kind of a more representative inertia in a
particular area for a given kind of transient.
CHAIRMAN WALLIS: So there's a whole lot
of evolution of how to use the code which we don't
know about, which is why it works today, because
people have learned. You don't just blindly follow
some guideline, that you have to do something special
with this particular node and with that node.
So all of that is missing when we simply
look at some documentation.
DR. PAULSEN: Yes. A lot of that
information is like in Volume 5.
CHAIRMAN WALLIS: But we need to know
that. I think, if we are going to make a judgment, we
have to know that, and we can't just base it on our
assurance that this 30 years of experience, therefore,
has to be good.
DR. PAULSEN: I can appreciate that.
CHAIRMAN WALLIS: Sorry, Novak, you had a
DR. ZUBER: I agree completely with you.
But it is distressing that the code which we developed
years ago, 20 years, 30 years ago, I realize to be
poor, and we wanted to do something better. Now you
are developing a best estimate code which essentially
has the same shortcomings as those codes of 20 years
ago and 30 years ago.
And we are really not -- to say, look,
this is how it is done and this is what it's for. You
have kind of an arm waving argument, and you passed it
back to NRC and to the customer, and this is not a
good way to evaluate safety.
CHAIRMAN WALLIS: But it may not -- You
know, it's evolved. So there may not be equations
that describe the workings of your knee, but your knee
has evolved until it works. So something like that is
happening here with this code.
DR. ZUBER: Well, the thing is -- the
simplest thing, look at the downcomer -- Even for the
pipes, on the straight pipes we have problems. There
were problems you were not able to explain. With the
downcomers, it's purely arm waving.
DR. PAULSEN: There is no doubt about
these three-dimensional geometries. They --
DR. ZUBER: -- if it was just the elbow,
I mean, that Graham brought up. That's a simple one.
I can always make it more complicated and say I cannot
solve it, and we have to agree on that.
DR. PAULSEN: One of the points I would
like to make, though, is the fact that elbows normally
aren't modeled. In a model you may have something
that looks like an elbow where the hotleg or the
coldleg connects to the downcomer, where the hotleg
leaves the upper plenum.
You may have a T where the surge line
connects. I've got some examples that kind of show
the types of pressure differences we see there by
including these momentum effects, and then an example
of what it does to a typical Chapter 15 transient that
might kind of give you a flavor for what's going on.
CHAIRMAN WALLIS: How much stuff do we
have to look at to go through all that?
DR. PAULSEN: There's just a little bit.
I don't want to spend a lot of time, because I think
we understand where your concerns are.
Basically, I just wanted to point out that
when you go to this three-dimensional, modeling the
three-dimensional components,t here's some guidance in
terms of rules, but there is nothing absolute. There
is nothing as definitive as the equation for the 1-D
or for the straight pipe.
CHAIRMAN WALLIS: So the staff in
evaluating a code like this would be quite within
their purview and all that to say we think that L
should be twice as long for this node; let's try it
and see what happens.
DR. PAULSEN: And I think past reviews
have done things like that. Past reviews have done
things like that or asked what's the sensitivity in
this particular loss coefficient or inertia.
DR. ZUBER: Had you done the thing
correctly two years ago, we would not have this
discussion. If you had addressed all the concerns and
then even using the same equations addressed the
sensitivity of each term, and probably you could get
some of these problems to rest. Now it is just plan
arm waving.
DR. PAULSEN: When it comes to coming from
the downcomer into the lower plenum, then the inertia
terms and the flow length terms, you have to kind of
visualize what the flow --
CHAIRMAN WALLIS: You see, that's the
problem I had, because you told me it was coming in
and going out into the lower plenum. So you in was at
the top, and your out was at the bottom.
Now you are saying your in is at the top
and your out is at the top again. That's a different
model from what you just described.
DR. PAULSEN: This is how we would
calculate the inertia. We actually -- For the inertia
we actually look at that path.
CHAIRMAN WALLIS: But your two pipe thing
-- you just explained to me it comes in at the top and
it goes into the lower plenum. That's the in and the
out. Now your in and out is a different in and out.
It can't be both.
DR. PAULSEN: Yes, I see what your concern
CHAIRMAN WALLIS: Yes. It's a very simple
concern. I mean, my seven-year-old grandson would
probably have the same concern.
DR. ZUBER: Let me ask you, how do you
determine that it is one-half of your downcomer?
Again, why not one-third or one-fifth?
DR. PAULSEN: Oh, it isn't. For a
situation like this, it isn't one-half. For a
situation like this, the user has to look at how he's
got his model nodalized. If he's got one node, then
he has to kind of look at what the flow path might be
through the hardware.
In fact, normally, this flow length is
going to be much longer than the one-half.
DR. ZUBER: Okay. Then let me ask you:
Did you do a sensitivity analysis on that to see what
is the --
DR. PAULSEN: Users do these kinds of
sensitivity --
DR. ZUBER: No. Look, users -- I am not
talking to the users. I am merely talking, did you?
I have concerns about your approach and assumptions,
and then you want to defend it. And my question is
you tried to explain how you determine -- My question
is did you perform some analyses, calculations, or
take twice that length, half that length and see what
is the effect?
DR. PAULSEN: Those are pretty common
things that we do when we do analysis.
DR. ZUBER: Well, the question is -- My
question is did you, and what is the effect; and if
you did it, then where I can read it?
DR. PAULSEN: Okay. We haven't run any
specific analyses right now that we could point a
finger to, but --
DR. ZUBER: You answered the question.
You see, the problem we have is you have these
assumptions you cannot really defend. You always say
this was done 30 years ago, this was approved, and
then you explain something and you don't run the
sensitivity analysis.
DR. PAULSEN: For a specific application,
these sensitivity studies are run quite often.
DR. ZUBER: See, but you want to have a
code approved, and this is something which is
questioned in the analysis. You have two pipes. You
want to model a very complicated -- my own guess is an
engineer would be, okay, I should then run L to see
what is the effect, and if the effect is important, I
would address it. If it's not important, I would say,
Zuber, shut up, I have done it and here is the result.
And you didn't do it.
CHAIRMAN WALLIS: Well, I just wonder --
DR. PAULSEN: This term will vary,
depending on the kind of transient you are having.
CHAIRMAN WALLIS: I'm wondering where we
could be. I mean, we could simply say that as ACRS we
appreciate there's a 30 year law of how to interpret
all these things so that they work, and there is no
way that we can possibly penetrate this tribal
knowledge by simply saying we don't really have much
to say.
DR. SHACK; Well, we've passed a lot of
other codes that had the same problems.
CHAIRMAN WALLIS: And that's part of the
DR. KRESS: That is exactly the
DR. ZUBER: Wait, wait just a moment.
Wait, wait, wait. No, you have to answer your
question. These codes were designed to address one
problem, where we have quite a different era. We had
quite a bit of conservatism. Now we are getting into
the regulations. That conservatism is going to
decrease. We have to have better codes.
What I hear from this presentation and
previous, we won't have them. We don't have them, and
the worst irritation to me is you don't even
appreciate what this will mean to this technology.
This intervenor could really run rings
around the NRC in the analysis like that, and I hope
it doesn't come. But --
CHAIRMAN WALLIS: I think we have to say
that there is no way we can penetrate the lore, l-o-r-
e, of 30 years. But we can look at something like an
example of following a flow around a bend and say does
this establish credibility. That's about the only
level the ACRS can penetrate to, because there is so
much other stuff that you have to sort of been in the
business for years to --
DR. ZUBER: But, Graham, but the point is
that first simple thing you cannot really even get a
positive answer to it, and the question is then -- my
judgment would be have a letter, list the concerns,
and it's up to the NRR to make a decision.
DR. SCHROCK: I am still struggling with
the problem of the simplest technical communication
here. We have something on the projector there at the
moment which is unclear, unexplained, and it's being
talked about as though we all understand what it
I don't think we have the same
understanding of it. I certainly don't understand
what you mean by that picture. Have you divided the
downcomer into four segments, and you are showing what
projects onto the -- Here you got a --
DR. PAULSEN: That's right.
DR. SCHROCK: -- cut through the thing,
and now it's projecting down into a vertical view of
the lower plenum, and --
DR. PAULSEN: That's right.
DR. SCHROCK: -- you are showing the flow
which is in that one-fourth of the whole downcomer?
DR. PAULSEN: That's correct. It would be
flow coming down --
DR. SCHROCK: Did others understand that?
DR. PAULSEN: -- down this portion of the
downcomer. So it would be --
DR. SCHROCK: But the whole downcomer is
represented as one pipe.
DR. PAULSEN: That's right. And so if
you've represented this as one pipe and you've got
four parallel paths that you have -- in this case,
they might be symmetric parallel paths. So there are
guidance on how you would combine inertias for these
four parallel paths for one effective path.
DR. ZUBER: Which you evaluate the
sensitivity of, how you calculated.
DR. PAULSEN: That's the recommendation,
is that we do sensitivity studies on these.
DR. ZUBER: And you didn't do them yet,
did you?
DR. PAULSEN: I'm trying to point out that
those sensitivity studies are model-specific. We have
done sensitivity studies on a number of models, but
the sensitivities may be different when you move to a
different model.
DR. ZUBER: What you mean, model? It's
the same equations or what?
DR. PAULSEN: The level of nodalization.
CHAIRMAN WALLIS: I am going to backtrack
to this morning when I showed you a 180 degree bend,
and I wondered if it was fair to do that. But it
seems to me, you are showing it to me now in something
you prepared before I showed it to you. I had a lot
of trouble figuring out how the size and he momentum
fluxes and things applied to a 180 degree bend, and I
think that's still a problem.
You know, that equation doesn't clearly
apply to something like this picture, and this isn't
two pipes. So I'm not quite sure what you are showing
me. Do you claim that your RETRAN equation works for
this sort of a --
DR. SCHROCK: He is going to show you how
to calculate the equivalent pipe through the lower
DR. PAULSEN: That's right.
DR. SCHROCK: By looking at this flow
CHAIRMAN WALLIS: It's the length of that
DR. ZUBER: Yes. That pipe can be
DR. PAULSEN: It's the length of this flow
CHAIRMAN WALLIS: And the pressures are at
the top of each side and all that, and clearly the
momentum balance doesn't work, but you have sort of
imagined that if it were straight, it would work out
this way.
DR. ZUBER: Let me ask you something. I'm
sorry. What kind of guidance -- What kind of peer
review groups you had in conducting this?
DR. PAULSEN: In doing this?
DR. ZUBER: I mean developing this RELAP
-- or RETRAN-3D.
DR. PAULSEN: This portion of the code
really is not new. This modeling technique has been
used and is a carryover from RELAP-4.
DR. ZUBER: And RELAP3.
DR. PAULSEN: And in some cases -- The
momentum flux terms, no, but these inertia terms are
carryovers from RELAP3. The other systems' codes do
this type --
DR. ZUBER: The problem is this was a
different requirement, different environment, and you
are developing code that should be used for the next
ten years when you have increase of power, decrease of
CHAIRMAN WALLIS: That's why we need lots
of assessment if it is going to be used.
DR. ZUBER: Well, I don't see it here, but
in doing even this most trivial one, just to see what
is the effect of that length, and you leave it to the
user to the NRR to calculate it.
DR. PAULSEN: All I'm trying to say is
there is no way we can do one sensitivity study on an
inertia in a given model and give blanket coverage of
what the --
DR. ZUBER: No, just for your intellectual
-- Granted, I mean, it will take twice as long, three
times as short, and see what the result is.
DR. PAULSEN: We have done those in years
past on numbers of cases when we were doing loft and
semi-scale experiments, when we were doing plant
transients to look at responses. Those are the kinds
of things that we do and that we tell users to do, is
to look at the sensitivities of those terms.
DR. SCHROCK; But what do you do about
phase separation in this imagined u-bend representing
the plenum?
DR. PAULSEN: The case that we've shown
here would be for a case with no phase separation.
You know, if you start getting phase separation or
your flow pattern changes significantly, then that
inertia can change during the transient, and then
that's another sensitivity that you are going to have
to consider.
DR. SCHROCK; Well, of course, it's the
inertia that causes the phase separation.
CHAIRMAN WALLIS: So check several of
those works for a 180 degree bend like that?
DR. PAULSEN: In most cases, we wouldn't
apply that down here.
CHAIRMAN WALLIS: Well, you've got to use
DR. SCHROCK: But in some cases, you
DR. PAULSEN: For the transient, Chapter
15 transient analyses that we typically use RETRAN
for, we wouldn't encounter that kind of a situation.
That would be more of a small break.
CHAIRMAN WALLIS: I think we know where we
are now with all this, and we're not quite sure where
we are going. I think, since some of us are leaving
at three, we ought to have a discussion between us and
you folks and NRR about where we want to go from here.
DR. PAULSEN: I guess I have one question.
That's if this last information kind of completes the
picture or if there's still something missing?
CHAIRMAN WALLIS: Well, what you've sort
of done is I think that you've assured me, and I
believe it, that people have worked with these things,
whatever their weaknesses, for 30 years, and they have
evolved a lore and a way of learning how you have to
fix things up so that all these things work out for
the kind of problems they have been solving.
I think you do have a real problem with
documentation as it is, establishing credibility of
the methods for anyone except someone who is one of
those people familiar with this 30 year lore. I think
that is a real problem when you face the intellectual
community, professors at universities, the students in
universities, the engineers out there who become
engineers who see this stuff and say, gee whiz, I
don't want to be part of that because it doesn't make
sense to me, I'll go and get a different job, all that
kind of stuff.
That's where the problem is. That's where
we have the problem, but we are not sure that we are
in the position -- NRR can evaluate the lore and say
we believe that's fine, we understand there's been a
big learning experience.
We have much more difficulty with that.
But we can evaluate much more readily the worked
examples, the justification for the equations, and I
think that's where you fall down. It's not a
convincing story.
I wonder what you wanted to do about that
in terms of fixing it up, and do you really want to go
before the full committee next week where some of
these questions may come up again in exactly the same
form. I 'm not quite sure -- I don't think they are
resolved, really, are they?
DR. PAULSEN: I don't think all of them
are, no. Before we move from here, I guess, if we
were to revise the documentation to kind of point
users in the direction of how you model more complex
geometry with some of the illustrations that I've just
presented, and then refer them to modeling guidelines,
do you think that would be an improvement and address
some --
CHAIRMAN WALLIS: That would certainly
convey much more information which would be helpful.
It might give us the same qualms about the momentum
balance, because we would look at these complex
geometries and say, gee whiz, the same way we do for
the simple ones.
DR. PAULSEN: But at least put together
the picture of how the whole plant would be modeled
and how the pieces go together.
CHAIRMAN WALLIS: Well, of course, that is
the engineering problem you address. My critique of
all these codes is they launch into Navier-Stokes
equations, blah, blah, blah. They should define the
problem first and say this is what we need to do, and
these are the kind of assumptions we may have to make,
because these are the variables we are dealing with
and these are the things that matter; this is why we
are going to do it.
Developing sort of equations in a vacuum
and then saying, we think they apply, is just not
quite perhaps the way to do it. I don't know how much
you want to rewrite.
I think we have a real concern with SERs
being issued before the documentation has been fixed
DR. PAULSEN: Okay. And I guess we have
taken the position that we have told the Commission
that we are going to make particular revisions.
CHAIRMAN WALLIS: See, we have told the
Commission that there appear to be basic errors in
these momentum balances, and you have told the
Commission they are out to lunch, they are fine,
everything is great about these momentum balances, and
we are going to show them it's all right.
That was about the dialogue that was
presented at the beginning of today, and I don't think
we are much further ahead there. We still have the
same reservation. The only thing that we are doubtful
about is, well, in spite of all that, is this still a
good code.
DR. PAULSEN: Okay. But I think we have
come to -- at least that the equations that we are
using in the code seem to predict the physics
reasonably well in terms of expansions --
DR. ZUBER: I didn't see that. I didn't
see it.
DR. PAULSEN: Okay.
DR. ZUBER: It seems convincing. You were
not able to explain the simple examples. When I asked
you how you did out of the Bernoulli equations, you
were not able to see the difference between the
Bernoulli equations and the momentum equations. It's
an ad hoc approach, and if you use this approach, and
it's up to you to say up front this is what we did
and, therefore, we have done this and this and this
sensitivity analysis to address this and this and
these questions, and I didn't see that either.
This is my summary.
CHAIRMAN WALLIS: So we may end up, if we
have this meeting next week, about where we are now.
I'm not quite sure how we would come out.
DR. KRESS: I can't see much of a
possibility of us coming out in the full committee any
different than where we are right now.
DR. SHACK: What are our questions? I
mean, is it how do you nodalize a three-dimensional
flow for a one-dimensional code? When we approved S-
RELAP last time, it didn't bother us then.
DR. ZUBER: It is a little different
environment. We did this approach 30 years ago,
because we addressed a different problem. Now you are
decreasing the -- you want to obtain better, more
efficient blend, and they should do it. But they
should really correct the approach.
I won't do the same approach with the same
shortcoming by decreasing the margin of safety, and
that's a problem I see. Otherwise, I wouldn't be
complaining, because we have enough margin. But this
code is going to be used for the next 15 years, and in
15 years you can imagine how much the margin will be
reduced, and this is the problem which ACRS has to
CHAIRMAN WALLIS: Well, we were careful in
RELAP. We said that this is -- we don't disagree with
the staff, because the staff is in a box. It's
approved RELAP for other purposes; they can't very
well turn it down for Siemens. That was the kind of
thing. We were in the box. But then we had a lot of
qualifications in saying the documentation had all of
these errors and things we point out before and, when
this is done for a best estimate code, that's got to
be fixed.
DR. SHACK: Let's separate best estimate
codes -- You know, there we had the explicit
requirement to evaluate uncertainties, and that
includes all uncertainties, whether it's, you know,
how do you nodalize a three-dimensional problem into
a one-dimensional problem. But as I say, to suddenly
at this point bring up how do you nodalize a three-
dimensional solution into a one-dimensional problem,
as though, you know, we've been doing it for --
CHAIRMAN WALLIS: I don't think it's
unfair at all, and we are members of the public
looking in at how things are done, and if it's been
done this way for 20 years and it still doesn't make
sense to us, we have every right to say it doesn't
make sense to us. There must be some mysterious lore
practiced by this industry which makes it work. We
have every right to say that.
DR. ZUBER: And as technical men, it was
all right to go to the public at technical meetings.
CHAIRMAN WALLIS: But we don't want to
give the public the wrong impression.
DR. ZUBER: Okay, fine. But don't --
CHAIRMAN WALLIS: We don't want to give
them the impression that, because there are all these
things that you wouldn't accept in undergraduate
homework, the whole structure that's evolved over 30
years is hopeless. We don't want to give that
DR. ZUBER: No, you can leave it, because
you have quite a bit of conservatism. I mean, you can
always argue that point. But now we are going to
decrease it, and we should. But then you should do it
correctly. I think this is the change of environment.
This is what the ACRS has to consider.
On the other hand, I'm not concerned with
the 3D. I'm really concerned with the other momentum
equations and --
CHAIRMAN WALLIS: There are bigger
questions, though. It's this working entirely in this
regulatory environment that bothers me. I go back to
the question of the student. If I have students
working on fluid mechanics and they start to say I
want to be a nuclear engineer, and they start to look
at this stuff, if all they see is this sort of
documentation, I think you turn them off, because they
wouldn't see all the other stuff which is the 30 years
of experience that it works. I don't want that to
DR. ZUBER: There is something worse.
Sometimes when I read reports like this, I feel sorry
that I have put my technical life in this technology.
CHAIRMAN WALLIS: Well, I have trouble
sleeping sometimes, and that shouldn't happen.
DR. SCHROCK: I'd like to point out that
in this last little bit of discussion here, you,
Graham, were saying, well, we can maybe accept a lot
of this fuzziness, but when we go on to best estimate
codes, Bill made a comment about best estimate codes
which implied to me that you didn't hear what Mr.
Paulsen had to say about what he views this code as
He says it's best estimate.
CHAIRMAN WALLIS: It's the best they could
DR. SHACK: You know, people have
different meanings to the meaning best estimate.
DR. KRESS: He meant there were no
purposeful conservatisms in that.
DR. SHACK: But from the NRC's point of
view, a best estimate code means one where you
explicitly address all the uncertainties.
DR. SCHROCK; It means one that is going
to be submitted under the new rules and required
uncertainty evaluation, precisely.
CHAIRMAN WALLIS: Well, I admire your
struggling with the problem. It's a difficult one,
and you may have got something which works for the
kind of things we've done up to now in nuclear safety.
But I can't get over the business of just looking at
this thing that, if it were on undergraduate homework,
I wouldn't like to see that kind of an answer. That
shouldn't happen, and I can't reconcile these things.
DR. PAULSEN: We'll go back and reexamine
CHAIRMAN WALLIS: Do you guys want to come
in next week? What are you going to say?
DR. PAULSEN: Jack?
CHAIRMAN WALLIS: Usually, we give someone
advice about how you should present yourselves to the
full committee. Is there a hurry? You've taken two
years. Do we have to rush to judgment next week?
MR. HAUGH: Certainly, the schedule is
yours to set. I mean, if this is locked in concrete
beyond --
CHAIRMAN WALLIS: No, there was another
presentation on water hammer where the EPRI presenters
decided they didn't like the critique they had had,
and they wanted to go back and work on it and come
back with something better. That happened a month or
two ago. You don't have to meet this schedule.
MR. HAUGH: Well, perhaps we could
reconsider that date, as you are suggesting, but
there's a question of just, you know, how much work
can be done in any one given time, and that's going to
be a question of trying to define exactly what we need
to bring back to you, and that would determine the
length of time needed.
We can't, in the space of a week or so,
completely re-derive everything, reformulate all the
documentation, etcetera, etcetera. So I mean, there
has to be some specificity as to what would be needed.
CHAIRMAN WALLIS: That's not realistic for
you to reformulate everything in a week.
MR. HAUGH: Yes. That's what I'm saying.
CHAIRMAN WALLIS: So in a week, we would
have to write a judgment based on what we see, and if
we felt so moved to, we might write a detailed
critique of all these twos and root twos and lack of
forces on bends and stuff, and put it all in writing
as the report next week. I just wonder if you want to
see that happen, if we have to critique what we've
got. I'm not sure that gets anybody any benefit.
MR. HAUGH: I think I would agree with you
on that statement, certainly.
CHAIRMAN WALLIS: But if it gets no one
any benefit, why are we doing it?
MR. HAUGH: Right.
CHAIRMAN WALLIS: Well, I think you
perhaps need to think about it in the next day or two
or something.
DR. KRESS: Keep in mind, you don't lose
anything at all by delaying it and not showing up.
There's nothing you lose except a little time.
CHAIRMAN WALLIS: Well, he's sensitive
that Dana is no longer the Chair. We don't have this
person keeping us on track all the time, we have to do
things on time.
DR. POWERS: If they choose not to come
before the full committee, all that would happen would
be that the subcommittee Chairman would give a summary
of what their status was on things, which to my mind
is -- I apologize that I've been over meeting with the
Commission, so I haven't heard everything, but that
there is -- you are on a path, closer pathway to
resolution than I've ever seen before.
CHAIRMAN WALLIS: I think we could say
that we had a very good meeting, that now we
understand each other. Now we think EPRI understands
the concerns, and we believe that they understand them
well enough that there's hope that they address them
That would be what we would say, something
like that. We would hope we would be able to say
that, because you've been far more responsive in this
meeting than the last time we had a meeting with EPRI,
and we are not on some treadmill that says next week
we have to do what was on the schedule.
MR. HAUGH: Okay. To have complementarity
in terms of the good feelings on leaving the meeting,
we would like to request an opportunity to present at
least one more piece of information here.
It shows when you make these different
cases and ways of doing it, does it really make a
difference or not? And perhaps that's something that
needs to be considered as well in terms of crafting
what it is that we would be expected to do by whatever
time. We would appreciate your indulgence on that.
CHAIRMAN WALLIS: That is useful
information, too.
MR. HAUGH: Okay. Thank you.
CHAIRMAN WALLIS: Do you want to say that
DR. PAULSEN: There are just a couple of
quick slides that I'd like to show, just for
illustrative purposes on what some of the effects
might be.
I think this was the reason we got caught
in the trap of trying to carry the vector information
along, which did nothing but add confusion. For a
situation like this where we have a coldleg and a
pressurizer with a surge line, if we have a situation
where we don't account for the effects of angles, in
this particular case where we have no angles, if we
were to input a pressure of 2200 in the pressurizer,
our pressurizer in the hotleg would end up being 2205,
which is less than the hydrostatic head for that
particular path.
When we actually put in the angle effect
in this junction, it in effect knocks out this
velocity head upstream so that it doesn't affect this
piece that goes off at 90 degrees.
CHAIRMAN WALLIS: But if it were a true
Bernoulli, it might affect it.
DR. PAULSEN: If there were some flow,
CHAIRMAN WALLIS: And of course, in this
case you might have flow coming from two and one, and
I had a problem with that T when you had your W1, W2.
Two actually could be negative, and I didn't quite
understand how you handled that.
DR. PAULSEN: So this is just an example
showing at steady state where these angular effects in
some cases need to be included to get the right
pressure distribution.
CHAIRMAN WALLIS: It indicates to me,
though, that you got to be careful, because the delta
p, the difference in the pressure here, is 5 and 26.
DR. PAULSEN: That's right.
CHAIRMAN WALLIS: Depending on what
assumptions you make. That's a big change.
DR. PAULSEN: That's right. But it's only
at local pressure. But in the case where we specify
the pressure in the pressurizer, this could give us 20
psi wrong in the rest of the system.
DR. PAULSEN: It's the delta P that drives
the flow and, if it's five times as big, you might get
a flow in the surge line which is wrong by a factor of
two and half or something. That might make a
difference to the transient.
DR. PAULSEN: That's right.
CHAIRMAN WALLIS: So there are cases. We
were a bit concerned about these dP 600, 1000 type
things where the pressure, hydrostatic balances
throughout these different bathtubs that Novak talks
about makes a difference to whether the water goes
this way or that way. You have to get your delta Ps
right more accurately than perhaps you do in some of
these things where there's a big hole and everything--
DR. PAULSEN: Where you don't have as
large forcing function, those hydrostatic heads there
are what drive the system. That's right.
DR. SHACK; Now let me just understand
this a little bit better. In the case one you sort of
arbitrarily set the angles to zero.
DR. PAULSEN: This is if we neglect all of
the angle information and just treat everything as
straight pipes. Then in effect, this pipe is going to
be, you know, downstream of this pipe. So it's going
to see the velocity term upstream from this piece,
which will then --
CHAIRMAN WALLIS: Turn around the corner?
DR. PAULSEN: Yes. It makes it think it's
going around the corner, where it is really not. So
CHAIRMAN WALLIS: But the two pipe model
would do that to you, because a two pipe model is
energy conservation, and Bernoulli would do that,
wouldn't it?
DR. PAULSEN: This is a case -- I probably
didn't give enough conditions. This is the steady
state case where we have no flow in the surge line.
So that's something I missed.
So I just want to point this out, that
there are some cases where you really need to account
for some of these --
CHAIRMAN WALLIS: What I did with your T
was I said what happens if W2 is zero. That's where
I got my V2 over 4.
DR. PAULSEN: Oh, that one?
CHAIRMAN WALLIS: And I couldn't see how
I took this V2 over 4 and the other and compared it
with Bernoulli.
DR. PAULSEN: Okay.
CHAIRMAN WALLIS: So I think you are
illustrating that there might be -- it might be
important if you do this right.
DR. PAULSEN: Right. This just happens to
be something that looks like an elbow where we have
actually included -- This is horizontal. So it's
laying at a plane. So we don't have any gravity.
We've turned off all the friction, and we have a
uniform pipe, no heating.
So you can see that we have a uniform
pressure until we hit this bend, and then it looks
sort of like a stagnation point.
CHAIRMAN WALLIS: Right.
DR. PAULSEN: So the pressure elevates.
But as soon as you get around the bend, that
recoverable pieces come back.
CHAIRMAN WALLIS: What this would do,
though, is it would squirt flow out and prevent flow
coming in, so that it would rob the corner of mass,
wouldn't it? It would unofficially rob the corner of
mass, because these pressure differences would then
drive a change in flow if you put this into the code.
So it would rob that corner of mass.
DR. PAULSEN: I'm not sure it would really
have any effect unless you had some other connections.
CHAIRMAN WALLIS: It would calculate a
DR. PAULSEN: Yes.
CHAIRMAN WALLIS: Probably.
DR. SCHROCK: In that picture, you have
the junction 21, junction 21 and -2 in your little
tables, but in the picture you have a 19 and no 20.
DR. PAULSEN: These angles would be for
20. This is 20. This would be 20, 21. So these would
be these two, this path, and this one and two then
would be the hotleg path. So that's a 20.
CHAIRMAN WALLIS: Well, this is why for
some of these T junctions RES is actually doing
research and measuring some of these things.
DR. PAULSEN: We are actually --
CHAIRMAN WALLIS: That would be the way to
resolve some of these questions.
DR. PAULSEN: And we actually have some
people in Switzerland that are using the code to
actually try and compare with some data for T's. So,
hopefully, we'll have some data in the -- or some
actual comparisons with data to help justify use of
the code for T's.
This is an example where we took just a
simple PWR transient model. It was a single loop
model, and generally users don't model a lot of this
angular information. They would even model this as a
straight pipe, this as a straight pipe, and then they
will angle differences down here.
So what I've got is a case where we have
zero angles and then the case where we have actually
put in the 90 degree turn here, and then have these
two junctions 180 degrees away from this junction.
Basically, the results of this case are
that, when you do that -- provide that information, it
doesn't change anything in the system except where the
angle changes. So we would see a change in the
downcomer, a change in the upper plenum, and a change
in the lower plenum pressure.
So case one listed here are the base cases
with no angles, and then case two was where we have
included the effects of angles.
CHAIRMAN WALLIS: Probably means the
momentum flux terms are small.
DR. PAULSEN: And that's what the next
slides show, is that, you know, at most this affects
pressures by a psi or two. It's probably not going to
-- and based on that, I would say it's not going to
change the transient much, but I've cheated. We
already ran that.
CHAIRMAN WALLIS: I think you should say
that right up front. You should say there are
difficulties with modeling momentum equation. You
have to make some assumptions to get on with it.
We've tried various assumptions. They make this kind
of difference, and this is what's in the code, and
that's why.
DR. PAULSEN: I think we're seeing --
CHAIRMAN WALLIS: This sort of pseudo-
academic stuff is not doing much help.
DR. PAULSEN: That's right. And
basically, you can see that this is the two cases for
a typical transient, and this happens to be the surge
line flow for a typical Chapter 15 analysis.
Basically, there was no difference in the
surge line flow. There was really no difference
anywhere except in the upper plenum pressures where we
had about a psi or two that were different, and
there's just an offset in the pressure that tracks
through the transient.
I think, if you look at the slides, one of
them, there's a slight difference, but in effect these
momentum flux terms which for at least this case don't
have much significant --
CHAIRMAN WALLIS: The academic reviewer
looking at these things tends to regard it as being
basically immoral to do a full momentum balance.
DR. PAULSEN: Well, and there are
situations where momentum flux may be more important.
So you want to make sure it's right.
CHAIRMAN WALLIS: But in reality, it
doesn't matter. It's only a small sin.
DR. PAULSEN: I guess I'm going to skip
over the RAI questions. I think I wanted to go to
this last slide, because this kind of summarizes what
we've tried to do.
We've actually, from my personal point of
view, tried to make a conscientious effort to address
your concerns, but I think we were missing the target,
and I think now --
CHAIRMAN WALLIS: I think our conclusion,
looking at your replies, was you don't understand what
we're talking about.
DR. PAULSEN: So we've made -- We have
made some code revisions and error corrections where
we identified those, and we've tried to evaluate the
error corrections on what impact they might have, and
we've attempted to revise the documentation so it's
more complete.
I think maybe we have identified some
areas where we could use some further change. But the
plan was that we would issue a new code at the end of
this review process that would have the updates that
had come about as a result of the review, correcting
errors and those kinds of revisions, as well as
distributing new documentation with it that would
resolve the problems.
I think that's still the plan.
CHAIRMAN WALLIS: It would really be
appropriate for us to see some new documentation and
comment on that, because that's the end of the story,
isn't it? It's difficult to comment now when you are
still in the process of changing it.
DR. ZUBER: Especially after this meeting,
you may consider to revise the documentation. I would
really advise you to do this.
CHAIRMAN WALLIS: It is a moving target.
I mean, if I look back at the responses to RAIs and I
look at your new derivation with the Porschingesque
integral with the divergence and all that, that's
completely different rationale than we had before.
DR. PAULSEN: Some of this has been as a
result of our dialogue with the staff, trying to, I
guess, address their concerns, too. So it's been kind
of an evolving thing.
CHAIRMAN WALLIS: I think that the effort
by Dr. Porsching to introduce some rigor was a good
thing, but it seems to sometimes -- You know, you got
to be careful then that the definitions of
mathematical terms he has are not quite the same as
yours. You may give the appearance of being on the
same track, but when you look in details, it turns out
his equation isn't the same as yours.
So again, you got to be careful about
jumping to conclusions here.
DR. ZUBER: Graham, I would like to go on
the record that his equation on page 8 where he has
two parts, horizontal connected, is incorrect, and it
does not agree with the standard equation which is in
Bert, Stuart and Lightfoot.
Although I appreciated reading it,
somebody on the divergence and the main integral
theorem, I was really surprised that she didn't put
the section from Bert and Stuart and Lightfoot to
compare is results with the standard. I think that
analysis was wrong.
CHAIRMAN WALLIS: My conclusion is that
probably you don't want to come next week unless you
are determined to do so.
DR. PAULSEN: Well, I don't think that we
would have anything to gain.
CHAIRMAN WALLIS: If you do, I don't quite
know what you would come with. You've condensed this
story. Which part of it would you tell us, and --
DR. PAULSEN: Well, I think what we would
probably want to be able to do is to revise the
development of the equations. Did you want to
comment, Jack?
MR. HAUGH: This is Jack Haugh speaking
again for EPRI.
I think, given all that has transpired, it
would be inappropriate to push this next week, but
again I believe we've been given very broad guidance
and suggestions as to the nature of the problem, and
this could become a very protracted business in terms
of, you know, how long this all takes to have a
pleasant meeting of the minds when this is all over
with, with the ACRS.
So it would be helpful to us to have as
complete a set of things to come back with. I think
that's not unfair to ask of the committee to assist us
in that fashion.
I really don't want to get into a never-
ending process of, okay, now go after this, and now go
after that, etcetera, etcetera.
DR. ZUBER: It's too bad that you were not
here two years ago to make this statement. We
probably would not even have this discussion today.
MR. HAUGH: Well, like the Bible, they
save the good wine until last. Okay?
DR. ZUBER: Oh. My advise: You should
really look at this book by Ginsberg.
MR. HAUGH: Yes. Well, we'll endeavor --
DR. ZUBER: I think you will get quite a
bit of guidance on what it is and how to deal with
these things.
MR. HAUGH: And perhaps offline afterwards
you might help me on the title as best you recall it,
DR. PAULSEN: That was by Ginsberg?
DR. ZUBER: Ginsberg. It was translated
in the early Seventies. I had a copy, but somebody
borrowed it, and I don't have it. But to my judgment,
this is probably the best document which summarized
this kind of approach, and you can take some ideas
from that book.
DR. PAULSEN: Thank you.
CHAIRMAN WALLIS: I think we've got to be
careful about us participating too closely in
development of your documentation. We could simply
stand back and say you do whatever you believe is
right, and we'll critique it and, if we don't like it,
we'll say so.
I hope it isn't that you are doing this in
response to what we said. I mean, if there is
something that we've unearthed which you believe to be
not the best you could do, then you should change it,
not just because we said so.
DR. ZUBER: Graham, just something. You
have a good write-up in your --
CHAIRMAN WALLIS: I've got a tremendous
DR. ZUBER: Wait, wait, wait. That's one.
You have your --
CHAIRMAN WALLIS: Oh, the --
DR. ZUBER: The tutorial.
CHAIRMAN WALLIS: -- tutorial on the
momentum equation.
DR. ZUBER: And then you had something
right in your concerns. I think this should really
be -- or could be of great help to them. I don't know
whether this is appropriate or not, but --
CHAIRMAN WALLIS: Well, we can talk about
that. But again, I'm not in the business of
developing your documentation.
DR. PAULSEN: Well, I can appreciate that,
MR. BOEHNERT: I did have a question, I
guess, for the staff. That is, they have issued an
SER. So where does this all sit, given the SER being
MR. LANDRY: We will wait and see what is
done with the documentation by EPRI, and we will
review that material. We are not adverse to issuing
a supplement or addendum to our SER.
CHAIRMAN WALLIS: So I think the progress
we've made over two years is that it's taken us
actually meeting face to face, which hasn't happened
for two years, to realize that probably neither of us
is completely off the wall, and there's something that
has to be worked out.
DR. PAULSEN: I agree.
CHAIRMAN WALLIS: But this should have
happened the first day perhaps, if we had any sense.
What else do we need to say? Ralph, do
you have something to say at this point to help us
finish up?
MR. LANDRY: No. I think this has been a
good process. We've been trying to get through a
process like this for over two years now, and I think
that in a lot of ways we've been talking past each
other, meaning us and the applicant.
Finally, I think we've come to an
understanding of one another and are moving toward
resolution, at least being able to issue an addendum
to an SER that says all of these criticisms or some of
these criticisms can go away as long as we have the
proper understanding of the code and its use.
CHAIRMAN WALLIS: Well, that may not be
our point of view.
MR. LANDRY: We do have a feeling that
they have made improvements in the RETRAN family of
codes by going to RETRAN-3D. We have not been happy
with the course that they have taken in this
particular matter.
I think, if this gets cleared up, that we
will have a much better position to take on the code.
CHAIRMAN WALLIS: I think we have to have
a discussion among the ACRS about what are the
criteria for acceptability, and your criteria would
seem to be that the code as written, programmed and
tried out, evaluated, assessed, works for reactor
transients, and that's the thing that really matters.
And ACRS would have to say, well, is it all that
matters? How important is it that it have some good
justification in terms of the kind of theory that most
professional people understand.
So I think we are going to have to discuss
among ourselves what weight we give to these various
things in terms of the way we would evaluate the code.
MR. LANDRY: I'm not saying that we don't
feel that there has to be some justification either.
One of the things that I suggested this morning in
talking, and have continued throughout this review, to
say is the applicant should explain what is in the
code and why it is acceptable or why it is -- they
should justify it.
What's in the code? Why does it work, and
why should we accept it? That's almost a minimum
level of justification that needs to prepared for any
DR. ZUBER: Ralph, I think the minimum
should be that we cannot license actually the code
which has errors which are junior. I think this is a
bad policy for the NRC.
With the first level, I would say does it
violate a knowledge of a junior; and if it violates --
CHAIRMAN WALLIS: Ralph, you had something
you wanted to present about code review in general?
But this is really a RETRAN meeting. Do we need to go
into that or should we just take it home and read it?
It lets us know where we are with the review of these.
Do we need to go into that?
MR. LANDRY: No. This was simply --
CHAIRMAN WALLIS: It's simply just a
MR. LANDRY: This was placed on the
schedule, and --
CHAIRMAN WALLIS: Well, it simply a list
of where we are.
MR. LANDRY: -- you can take it home and
read it. All it basically says is where we are with
the codes that we have in-house today under review,
and what do we anticipate coming in.
We anticipate RELAP5 Realistic LOCA, and
we anticipate W-COBRA TRAC Realistic small break LOCA
this springtime. And we anticipate sometime in the
future TRAC-G for BWR Realistic LOCA.
So that's really more to apprise the
committee on what we have, what we expect to have, so
that for both of us we can plan what our interactions
and workloads are going to be in the future.
We do understand the comments and concerns
that you expressed on S-RELAP5, Appendix K. We have
discussed those with that applicant, and are prepared
to push ahead in the review on S-RELAP5 Large Break
LOCA, and what is expected of that material.
We hope that the Westinghouse people, who
were sitting in that subcommittee meeting, also
understand the concerns and, when they come in with
their W-COBRA TRAC Realistic Small Break, they will
take to heart those same concerns.
CHAIRMAN WALLIS: We would hope that when
all this is through that a method is established for
making this whole process much more efficient. We
don't have to take so long to review things which
eventually get fixed up.
Things would come in without elements of
the documentation that we even have to question. That
would be a wonderful world.
MR. LANDRY: It would for us also, and
this process has been a learning process from the
RETRAN to S-RELAP5, and now into the Realistic LOCA
space. It's been a learning process, and we
understand your concerns. We share many of those
concerns, and I think we are making progress with
CHAIRMAN WALLIS: Do my colleagues have
any wisdom? I'd like comments from the consultants.
DR. SCHROCK: At this moment?
CHAIRMAN WALLIS: Well, you are going to
write something on your way home or something, so we
have something to go on fairly quickly? I think we
have all said a lot today, and I'm not sure -- unless
there is something you want to add which you didn't
say earlier or I didn't hear earlier.
DR. SCHROCK: I don't think that I would
-- I mean, in my mind it's fairly complex, and it is
going to take a little time to write it down. But
I'll get it to you promptly.
DR. POWERS: I would appreciate it, Virgil
and Novak both. In the morning you both brought up
topics where you thought Research ought to be
providing support to Ralph and his people.
You, Virgil, mentioned codes for doing
logic checking as a tool. Novak, I can't quite
remember what it was.
DR. ZUBER: Oh, I have quite a few. I can
send it again. I wrote it in my last memo to Graham.
CHAIRMAN WALLIS: I asked him to address
that question at lunchtime.
DR. ZUBER: There are many things they
could do that should help NRR and the industry.
DR. POWERS: Anything that would provide
tools to make the processes either higher quality or
higher efficiency --
DR. ZUBER: Efficiency, efficiency.
DR. POWERS: -- and I think we should --
Well, I think quality, too.
DR. ZUBER: Well, together, together.
DR. POWERS: I think we need to factor
that into our long range thinking about where the
research program is going.
DR. ZUBER: Had they used quality, we
would not hear this discussion for two years. They
could have saved money, and they would have saved
DR. POWERS: One of the questions -- It
may not be arising here, but one of the questions that
continues to perturb me in this general area -- It's
a philosophical question. It's one I asked you
sometime ago.
Within the realm of physical chemistry,
there is something known by various names, but it's
basically the Poisson ultimate equation for finding
the activity of an ion in solution.
It is manifestly absolutely impossible
wrong in its technical formulation. It's an incorrect
use of supra position. It is hailed as one of the
triumphs of physical chemistry. Everybody knows it
can't possibly be correct. It just works very, very
I keep coming back and wondering,
especially as we move into this best estimate case,
what do we do about that case?
CHAIRMAN WALLIS: That's quite different,
I think, from the momentum equation.
DR. POWERS: We are talking about supra
position in electrostatics. It is as fundamental a
thing as I could think of.
DR. KRESS: You are saying this is an
analog, that we have these momentum equations that
appear manifestly wrong in many respects, and yet when
we compare with the data, it doesn't seem to make any
difference. You get good results.
DR. POWERS: Here I think you can compare
to the data, and the momentum terms are small, and you
get good comparisons.
DR. KRESS: I don't think that's ever been
shown, though.
DR. POWERS: In the Poisson case, that's
not the case. The terms are huge.
DR. KRESS: The terms are huge, and you
still get the right answer.
DR. POWERS: And in fact, it's the other
way around. They are so huge that supra position
itself gets wiped out.
CHAIRMAN WALLIS: Here we have -- You
know, thousands of homework problems have been solved
using these momentum equations, and we know which of
these are acceptable answers. There's also kinds of
engineering experience with them.
So I think it's at a different level
altogether from what you are referring to.
DR. POWERS: Well, I would be willing to
bet that there have been more homework problems solved
in supra position of electrostatics than thermal-
hydraulics by several orders of magnitude.
CHAIRMAN WALLIS: Well, we could debate
that sometime, not here.
DR. POWERS: I mean, it seems to me -- It
seems to me that, as we move to best estimate codes,
you pretty soon have to confront this, that if you've
got a complex set of equations by Messers Navier and
Stokes, I suppose, that cannot be solved, and so
people throw terms away and do high handed things
because it fits the data.
CHAIRMAN WALLIS: Yes. Yes.
DR. POWERS: And you can't -- I mean, I
don't know what the answer to this question is. I
mean, it has perturbed the physical chemists for a
long time, but it was fully 80 years after the
Poisson-Bothman equation was first used before
somebody could come up with something that was
rigorous that reproduced things to equivalent
exactness and, having done that, everybody proceeded
to ignore it and went right back to using the Poisson-
Bothman equation.
CHAIRMAN WALLIS: I think the basic
question is how good is good enough is the question we
have in reactor safety. How safe is safe enough? How
good is good enough in terms of momentum equation?
DR. POWERS: Well, or how do you know when
it's good enough. I mean, you know -- I mean, I would
classify much of what we look at here as irrational
proximations. That is, it's not like a finite
difference equation. You know, you can't -- you are
not starting from fundamental partial differential
You know, when somebody does an
idealization of a three-D geometry into a one-D
geometry, you know, how do you quantify the error in
that. You know, beyond engineering judgment, you
know, seems to me you are kind of hard pressed for a
better solution.
Now if somebody goes out and CFDs it to
death --
DR. ZUBER: I think a problematic match,
kind of a complex level. I would start from the
simplest. If I mean something which I know has been
working since, let's say, my junior year, the people
before and people after, and it's a standard approach,
I would expect this approach will be applicable -- And
I think if it's not applicable of what the reactor is
using, it's not applicable to the simple approach, I
cannot defend it.
I could defend things, for example, if
something is very complex. I think this is addressing
-- you will get it next week. If this code is so
complex and we know something is wrong in there, and
it still works, then the question should be asked.
I'm very sad that NRC and the industry did
not themselves, why is he talking; and there must be
a reason why, and try to find it. They could
simplify it, they could defend these things, and it
would be efficient. This is one thing which was not
This approach, again, is not going to do
it either. But this is something which needs to be
done in the next two years.
DR. POWERS: But certainly something that
Professor Wallis has mentioned as a direction that the
industry and the NRC together might want to pursue is
why do these things work, even when they have high
handed and --
CHAIRMAN WALLIS: Actually, it's in an
ACRS letter signed by the previous Chair, I think. We
know where the buck stops.
DR. POWERS: He very often signs like
CHAIRMAN WALLIS: So I am going to close
this meeting. I think we have achieved some things,
and I do look forward to a resolution of all of these
to the point where everyone thinks that we've said
enough and the product is good enough.
So I am going to close the meeting now.
Thank you.
(Whereupon, the foregoing matter went off
the record at 3:10 p.m.)
Page Last Reviewed/Updated Wednesday, February 12, 2014
|
{"url":"http://www.nrc.gov/reading-rm/doc-collections/acrs/tr/subcommittee/2001/th010220.html","timestamp":"2014-04-18T00:25:46Z","content_type":null,"content_length":"444582","record_id":"<urn:uuid:0fdd5d19-a3ba-4ef4-9225-231610a1eee0>","cc-path":"CC-MAIN-2014-15/segments/1398223204388.12/warc/CC-MAIN-20140423032004-00055-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Patent US6230153 - Association rule ranker for web site emulation
Publication number US6230153 B1
Publication type Grant
Application number US 09/099,538
Publication date 8 May 2001
Filing date 18 Jun 1998
Priority date 18 Jun 1998
Fee status Paid
Publication number 09099538, 099538, US 6230153 B1, US 6230153B1, US-B1-6230153, US6230153 B1, US6230153B1
Inventors Steven Kenneth Howard, David Charles Martin, Mark Earl Paul Plutowski
Original Assignee International Business Machines Corporation
Export Citation BiBTeX, EndNote, RefMan
Patent Citations (5), Non-Patent Citations (12), Referenced by (45), Classifications (13), Legal Events (7)
External Links: USPTO, USPTO Assignment, Espacenet
Association rule ranker for web site emulation
US 6230153 B1
A method and apparatus that allows association rules defining URL-URL relationships, and URL-URL relationships that are strongly influenced by a web site's topology, to be identified and respectively
qualified. Superfluous association rules may be separated from non-topology affected association rules and discounted as desired. The invention may be implemented in conjunction with a probalistic
generative method used to model a web site and simulate the behavior of a visitor traversing the site. The invention further allows randomized web site visitor behavior to be separated into
“interesting” and “uninteresting” behavior.
What is claimed is:
1. A method for sorting data mining association rules, the method comprising:
identifying statistically significant relationships within a cumulated distribution of data, the significant relationships represented by association rules; and
separating meaningful association rules from unmeaningful association rules using an emulated distribution of the data as a reference, wherein said emulated distribution is based upon emulated events
that are different than actual events.
2. The method recited in claim 1, separating meaningful association rules from unmeaningful association rules by sorting the rules by their support within the distribution of data.
3. The method recited in claim 1, separating meaningful association rules from unmeaningful association rules by sorting the rules by their confidence within the distribution of data.
4. The method recited in claim 2, wherein the association rules are sorted in sets of rules for different systems of events, where C represents an emulated system of events for a web site and B
represents an actual system of events for the same web site data, and where if K(P[AC]:R[AC])>K(P[AB]:R[AB]), the association rules applied to the emulated system of events C have greater relevance
then the association rules applied to the actual system of events B, and where P is a probability distribution for an association rule for the actual system of events, and R is an emulated support
for an association rule for the emulated system of events, and wherein K is a constant.
5. The method recited in claim 4, wherein P is measured by an association rule's support in a respective system of events.
6. The method recited in claim 5, wherein P[AB]=P(A, B[1]), P(A, B[2]), . . . P(A, B[n]) and R[AB]=R(A, B[1]), R(A, B[2]) . . . R(A, B[n]), and P[AC]=P(A, C[1]), P(A, C[2]), . . . P(A, B[n]).
7. The method recited in claim 2, wherein the association rules are sorted in descending order of support.
8. The method recited in claim 7, where P(A,B[1])/R(A,B[1]), P(A,B[2])/R(A,B[2]), . . . P(A,B[n])/R(A,B[n]), where B represents an event in an actual system of events, and where the association rules
pertaining to the actual system of events have equal support as association rules pertaining to an emulated system of events if P[AB]/R[AB]=1, where P is a probability distribution for an association
rule for the actual system of events, and R is an emulated support for an association rule for the emulated system of events, and where P[AB ]has lesser support than R[AB ]as P[AB]→0 and R[AB]→∞, and
where P[AB ]has greater support than R[AB ]as R[AB]→0 and P[AB]→∞.
9. The method recited in claim 2, wherein the association rules are sorted in ascending order of support.
10. The method recited in claim 9, where R(A,B[1])/P(A,B[1]), R(A,B[2])/P(A,B[2]), . . . R(A,B[n])/P(A,B[n]), where B represents an actual system of events, and where the association rules pertaining
to the actual system of events have equal support as the association rules pertaining to an emulated system of events if R[AB]/P[AB]=1 where R is an emulated support for an association rule for an
emulated system of events, and P is a probability distribution for an association rule for the actual system of events, and where R[AB ]has lesser support than P[AB ]as R[AB]→0 and P[AB]→∞, and where
R[AB ]has greater support than P[AB ]as P[AB]→0 and R[AB]→∞.
11. The method recited in claim 2, wherein the association rules are further sorted where P(A,B[1])log(P(A,B[1])/R(A,B[1])), P(A,B[2])log(P(A,B[2])/R(A,B[2])), . . . , P(A,B[n])log(P(A,B[n])/R(A,B
12. The method recited in claim 8, wherein the association rules are farther sorted where P(A,B[1])log(P(A,B[1])/R(A,B[1]), P(A,B[2])log(P(A,B[2])/R(A,B[2])), . . . , P(A,B[n])log(P(A,B[n])/R(A,B
13. The method recited in claim 10, wherein the association rules are further sorted where P(A,B[1])log(P(A,B[1])/R(A,B[1]), P(A,B[2])log(P(A,B[2])/R(A,B[2])), . . . , P(A,B[n])log(P(A,B[n])/R(A,B
14. The method recited in claim 2, wherein event D[1 ]corresponds to a set of uniform resource locator data, and event D[1 ]corresponds to all other sets of uniform resource locators not in set D[1],
and where D={D[1], D[2], . . . , D[m]}, where the uniform resource locator data sets do not comprise a system of events and {(P(A,D[i])/R(A,D[i])÷P(A,D[i])/R(A,D[i]))/2}, i=1, 2, . . . , m, where i=
1, 2, . . . , m.
15. The method recited in claim 14, sorting further comprising {P(A,D[i])log(P(A,D[i])/R(A,D[i]))÷P(A,D[i])log(P(A,D[i])/R(A,D[i]))}, i=1,2, . . . , m, {P(A,D[i])/R(A,D[i])}, i=1, 2, . . . , m.
16. The method recited in claim 2, wherein association rules having high levels of support compared to the emulated distribution are ranked highest, regardless of whether the association rules are
highly supported in the uniform resource locator data as determined by P, and where P is a probability of an occurrence of the association rule.
17. The method recited in claim 16, wherein two association rules with identical P values are further sorted so that the rule with greater support in the emulated data is sorted higher than the rule
with the lesser support.
18. A method for sorting data mining association rules, the method comprising:
identifying statistically significant relationships within a cumulated distribution of data, the significant relationships represented by association rules; and
separating meaningful association rules from unmeaningful association rules using an emulated distribution of the data as a reference by sorting the rules by their support within the distribution of
wherein the uniform resource locator data does not comprise a system of events and is sorted by m sets of uniform resource locator data, where {P(A,D[i])/R(A,D[i])}, i=], 2, . . . , m, and D[i ]
corresponds to sets of uniform resource locator data 1 to m.
19. The method recited in claim 18, wherein the association rules are further sorted where {P(A,D[i])log(P(A,D[i])/R(A,D[i]))}, i=1, 2, . . . , m.
20. A method for sorting data mining association rules, the method comprising:
identifying statistically significant relationships within a cumulated distribution of data, the significant relationships represented by association rules; and
separating meaningful association rules from unmeaningful association rules using an emulated distribution of the data as a reference by sorting the rules by their confidence within the distribution
of data,
sorting of the association rules comprising ranking the rules by their confidence, where {P(A|D[i])/R(A|D[i])}, i=1, 2, . . . , m, where P is a probability of an occurrence of an association rule,
and where two association rules with identical P values are further sorted so that the rule with greater support in the emulated data is sorted higher than the rule with the lesser support.
21. The method recited in claim 20, wherein the association rules are further sorted where {P(A,D[i])log(P(A|D[i])/R(A|D[i]))}, i=1, 2, . . . , m.
22. An article of manufacture comprising a data storage medium tangibly embodying a program of machine-readable instructions executable by a digital processing apparatus to perform a method for
sorting data mining association rules, the method comprising:
identifying statistically significant relationships within a cumulated distribution of uniform resource locator data, the significant relationships represented by association rules; and
separating meaningful association rules from unmeaningful association rules using an emulated distribution of the uniform resource locator data as a reference, wherein said emulated distribution is
based upon emulated events that are different than actual events.
23. The article recited in claim 22, separating meaningful association rules from unmeaningful association rules by sorting the rules by their support within the distribution of uniform resource
locator data.
24. The article recited in claim 22, separating meaningful association rules from unmeaningful association rules by sorting the rules by their confidence for support within the distribution of
uniform resource locator data.
25. The article recited in claim 22, wherein the association rules are sorted in sets of rules for different systems of events, where C represents an emulated system of events for a web site and B
represents an actual system of events for the same web site data, and where if K(P[AC]:R[AC])>K(P[AB]:R[AB]), the association rules applied to the emulated system of events C have greater relevance
then the association rules applied to the actual system of events B, and where P is a probability distribution for an association rule for the actual system of events, and R is an emulated support
for an association rule for the emulated system of events, and wherein K is a constant.
26. The article recited in claim 25, wherein P is measured by an association rule's support in a respective system of events.
27. The article recited in claim 22, wherein P[AB]=P(A, B[1]), P(A, B[2]), . . . P(A, B[n]) and R[AB]=R(A, B[1]), R(A, B[2]) . . . R(A, B[n]), and P[AC]=P(A, C[1]), P(A, C[2]), . . . P(A, B[n]).
28. The article recited in claim 22, wherein the association rules are sorted in descending order of support.
29. The article recited in claim 22, where P(A,B[1])/R(A,B[1]), P(A,B[2])/R(A,B[2]), . . . P(A,B[n])/R(A,B[n]), where B represents an event in an actual system of events, and where the association
rules pertaining to the actual system of events have equal support as association rules pertaining to an emulated system of events if P[AB]/R[AB]=1, where P is a probability distribution for an
association rule for the actual system of events, and R is an emulated support for an association rule for the emulated system of events, and where P[AB ]has lesser support than R[AB ]as P[AB]→0 and
R[AB]→∞, and where P[AB ]has greater support than R[AB ]as R[AB]→0 and P[AB]→∞.
30. The article recited in claim 22, wherein the association rules are sorted in ascending order of support.
31. The article recited in claim 22, where R(A,B[1])/P(A,B[1]), R(A,B[2])/P(A,B[2]), . . . R(A,B[n])/P(A,B[n]), where B represents an actual system of events, and where the association rules
pertaining to the actual system of events have equal support as the association rules pertaining to an emulated system of events if R[AB]/P[AB]=1 where R is an emulated support for an association
rule for an emulated system of events, and P is a probability distribution for an association rule for the actual system of events, and where R[AB ]has lesser support than P[AB ]as R[AB]→0 and P[AB]
→∞, and where R[AB ]has greater support than P[AB ]as P[AB]→0 and R[AB]→∞.
32. The article recited in claim 22, wherein the association rules are further sorted where P(A,B[1])log(P(A,B[1])/R(A,B[1])), P(A,B[2])log(P(A,B[2])/R(A,B[2])), . . . , P(A,B[n])log(P(A,B[n])/R(A,B
33. An article recited in claim 29, wherein the association rules are further sorted where P(A,B[1])log(P(A,B[1])/R(A,B[1])), P(A,B[2])log(P(A,B[2])/R(A,B[2])), . . . , P(A,B[n])log(P(A,B[n])/R(A,B
34. An article recited in claim 31, wherein the association rules are further sorted where P(A,B[1])log(P(A,B[1])/R(A,B[1])), P(A,B[2])log(P(A,B[2])/R(A,B[2])), . . . , P(A,B[n])log(P(A,B[n])/R(A,B
35. The article recited in claim 22, wherein event D[1 ]corresponds to a set of uniform resource locator data, and event D[1 ]corresponds to all other sets of uniform resource locators not in set D
[1], and where D={D[1], D[2], . . . , D[m]}, where the uniform resource locator data sets do not comprise a system of events and {(P(A,D[i])/R(A,D[i])+P(A,D[i])/R(A,D[i]))/2}, i=1, 2, . . . , m,
where i=1, 2, . . . , m.
36. The article recited in claim 22, the method steps further comprising: {(P(A,D[i])log(P(A,D[i])/R(A,D[i]))÷P(A,D[i])log(P(A,D[i])/R(A,D[i]))}, i=1,2, . . . , m, {P(A,D[i])/R(A,D[i])}, i=1, 2, . .
. , m.
37. The article recited in claim 22, wherein association rules having high levels of support compared to the emulated distribution are ranked highest, regardless of whether the association rules are
highly supported in the uniform resource locator data as determined by P, and where P is a probability of an occurrence of the association rule.
38. The article recited in claim 22, wherein two association rules with identical P values are further sorted so that the rule with greater support in the emulated data is sorted higher than the rule
with the lesser support.
39. An article of manufacture comprising a data storage medium tangibly embodying a program of machine-readable instructions executable by a digital processing apparatus to perform a method for
sorting data mining association rules, the method comprising:
identifying statistically significant relationships within a cumulated distribution of uniform resource locator data, the significant relationships represented by association rules; and
separating meaningful association rules from unmeaningful association rules using an emulated distribution of the uniform resource locator data as a reference, wherein said emulated distribution is
based upon emulated events that are different than actual events;
wherein the uniform resource located data does not comprise a system of events and is sorted by m sets of uniform resource locator data, where {P(A,D[i])/R(A,D[i])}, i=1, 2, . . . , m, and D[i ]
corresponds to sets of uniform resource locator data l to m.
40. The article recited in claim 22, wherein the association rules are further sorted where {P(A,D[i])log(P(A,D[i])/R(A,D[i]))}, i=1, 2, . . . , m.
41. An article of manufacture comprising a data storage medium tangibly embodying a program of machine-readable instructions executable by a digital processing apparatus to perform a method for
sorting data mining association rules, the method comprising:
identifying statistically significant relationships within a cumulated distribution of uniform resource locator data, the significant relationships represented by association rules; and
separating meaningful association rules from unmeaningful association rules using an emulated distribution of the uniform resource locator data as a reference, wherein said emulated distribution is
based upon emulated events that are different than actual events;
sorting of the association rules comprising ranking the rules by their confidence, where {P(A\D[i])/R(A\D[i])}, i=1, 2, . . . , m, where P is a probability of an occurrence of an association rule,
and where two association rules with identical P values are further sorted so that the rule with greater support in the emulated data is sorted higher than the rule with the lesser support.
42. The article recited in claim 22, wherein the association rules are further sorted where {P(A,D[i])log(P(A|D[i])/R(A|D[i]))}, i=1, 2, . . . , m.
43. An apparatus to sort data mining association rules, comprising:
a processor;
a database including URL data;
circuitry to communicatively couple the processor to the database;
storage communicatively accessible by the processor; the processor sorting mining association rules by:
identifying statistically significant relationships within a cumulated distribution of uniform resource locator data, the significant relationships represented by association rules; and
separating meaningful association rules from unmeaningful association rules using an emulated distribution of the uniform resource locator data as a reference, wherein said emulated distribution is
based upon emulated events that are different than actual events.
44. The apparatus recited in claim 43, the processor further sorting by: separating meaningful association rules from unmeaningful association rules by sorting the rules by their support within the
distribution of uniform resource locator data.
45. The apparatus recited in claim 43, separating meaningful association rules from unmeaningful association rules by sorting the rules by their confidence for support within the distribution of
uniform resource locator data.
46. The apparatus recited in claim 43, wherein the association rules are sorted in sets of rules for different systems of events, where C represents an emulated system of events for a web site and B
represents an actual system of events for the same web site data, and where if K(P[AC]:R[AC])>K(P[AB]:R[AB]), the association rules applied to the emulated system of events C have greater relevance
then the association rules applied to the actual system of events B, and where P is a probability distribution for an association rule for the actual system of events, and R is an emulated support
for an association rule for the emulated system of events, and wherein K is a constant.
47. The apparatus recited in claim 43, wherein P is measured by an association rule's support in a respective system of events.
48. The apparatus recited in claim 43, wherein P[AB]=P(A, B[1]), P(A, B[2]), . . . P(A, B[n]) and R[AB]=R(A, B[1]), R(A, B[2]) . . . R(A, B[n], and P[AC]=P(A, C[1]), P(A, C[2]), . . . P(A, B[n]).
49. The apparatus recited in claim 43, wherein the association rules are sorted in descending order of support.
50. The apparatus recited in claim 43, where P(A,B[1])/R(A,B[1]), P(A,B[2])/R(A,B[2]), . . . P(A,B[n])/R(A,B[n]), where B represents an event in an actual system of events, and where the association
rules pertaining to the actual system of events have equal support as association rules pertaining to an emulated system of events if P[AB]/R[AB]=1, where P is a probability, distribution for an
association rule for the actual system of events, and R is an emulated support for an association rule for the emulated system of events, and where P[AB ]has lesser support than R[AB ]as P[AB]→0 and
R[AB]→∞, and where P[AB ]has greater support than R[AB ]as R[AB]→0 and P[AB]→∞.
51. The apparatus recited in claim 43, wherein the association rules are sorted in ascending order of support.
52. The apparatus recited in claim 43, where R(A,B[1])/P(A,B[1]), R(A,B[2])/P(A,B[2]), . . . R(A,B[n])/P(A,B[n]), where B represents an actual system of events, and where the association rules
pertaining to the actual system of events have equal support as the association rules pertaining to an emulated system of events if R[AB]/P[AB]=1 where R is an emulated support for an association
rule for an emulated system of events, and P is a probability distribution for an association rule for the actual system of events, and where R[AB ]has lesser support than P[AB ]as R[AB]→0 and P[AB]
→∞, and where R[AB ]has greater support than P[AB ]as P[AB]→0 and R[AB]→∞.
53. The apparatus recited in claim 43, wherein the association rules are further sorted where P(A,B[1])log(P(A,B[1])/R(A,B[1])), P(A,B[2])log(P(A,B[2])/R(A,B[2])), . . . , P(A,B[n])log(P(A,B[n])/R
54. The apparatus recited in claim 50, wherein the association rules are further sorted where P(A, B[1])log(P(A, B[1])/R(A,B[1])), P(A, B[2])log(P(A, B[2])/R(A, B[2])), . . . , P(A, B[n])log(P(A, B
[n])/R(A, B[n])).
55. The apparatus recited in claim 52, wherein the association rules are further sorted where P(A, B[1])log(P(A, B[1])/R(A,B[1])), P(A, B[2])log(P(A, B[2])/R(A, B[2])), . . . , P(A, B[n])log(P(A, B
[n])/R(A, B[n])).
56. The apparatus recited in claim 43, wherein event D[1 ]corresponds to a set of uniform resource locator data, and event D[1 ]corresponds to all other sets of uniform resource locators not in set D
[1], and where D={D[1], D[2], . . . , D[m]}, where the uniform resource locator data sets do not comprise a system of events and {(P(A,D[i])/R(A,D[i])÷P(A,D[i])/R(A,D[i]))/2}, i=1, 2, . . . , m,
where i=1, 2, . . . , m.
57. The apparatus recited in claim 43, sorting further comprising {P(A,D[i])log(P(A,D[i])/R(A,D[i]))÷P(A,D[i])log(P(A,D[i])/R(A,D[i]))}, i =1, 2, . . . , m, {P(A,D[i])/R(A,D[i])}, i=1, 2, . . . , m.
58. The apparatus recited in claim 43, wherein association rules having high levels of support compared to the emulated baseline reference distribution are ranked highest, regardless of whether the
association rules are highly supported in the uniform resource locator data as determined by P, and where P is a probability of an occurrence of the association rule.
59. The apparatus recited in claim 43, wherein two association rules with identical P values are further sorted so that the rule with greater support in the emulated data is sorted higher than the
rule with the lesser support.
60. An apparatus to sort data mining association rules, comprising:
a processor;
a database including URL data;
circuitry to communicatively couple the processor to the database;
storage communicatively accessible by the processor; the processor sorting mining association rules by:
identifying statistically significant relationships within a cumulated distribution of uniform resource locator data, the significant relationships represented by association rules; and
separating meaningful association rules from unmeaningful association rules using an emulated distribution of the uniform resource locator data as a reference, wherein said emulated distribution is
based upon emulated events that are different than actual events;
where the uniform resource locator data does not comprise a system of events and is sorted by m sets of uniform resource locator data, where {P(A,D[i])/R(A,D[i])}, i=1, 2, . . . , m, and D[i ]
corresponds to sets of uniform resource data l to m.
61. The apparatus recited in claim 43, wherein the association rules are further sorted where {P(A,D[i])log(P(A,D[i])/R(A,D[i]))}, i=1, 2, . . . , m.
62. An apparatus to sort data mining association rules, comprising:
a processor;
a database including URL data;
circuitry to communicatively couple the processor to the database;
storage communicatively accessible by the processor; the processor sorting mining association rules by:
identifying statistically significant relationships within a cumulated distribution of uniform resource locator data, the significant relationships represented by association rules; and
separating meaningful association rules from unmeaningful association rules using an emulated distribution of the uniform resource locator data as a reference, wherein said emulated distribution is
based upon emulated events that are different than actual events;
sorting of the association rules comprising ranking the rules by their confidence, where {P(A\D[i])/R(A\D[i])}, i=1,2, . . . , m, where P is a probability of an occurrence of an association rule, and
where two association rules with identical P values are further sorted higher than the rule with the lesser support.
63. The apparatus recited in claim 43, wherein the association rules are further sorted where {P(A,D[i])log(P(A|D[i])/R(A|D[i]))}, i =1, 2, . . . , m.
64. An apparatus for sorting data mining association rules, comprising:
means for storing URL data;
means for processing the URL data by:
identifying statistically significant relationships within a cumulated distribution of uniform resource locator data, the significant relationships represented by association rules; and
separating meaningful association rules from unmeaningful association rules using an emulated distribution of the uniform resource locator data as a reference, wherein said emulated distribution is
based upon emulated events that are different than actual events.
65. The apparatus recited in claim 64, the processing means further sorting data by separating meaningful association rules from unmeaningful association rules by sorting the rules by their support
within the distribution of uniform resource locator data.
66. The apparatus recited in claim 64, the processing means further sorting data by separating meaningful association rules from unmeaningful association rules by sorting the rules by their
confidence for support within the distribution of uniform resource locator data.
67. The apparatus recited in claim 64, wherein the association rules are sorted in sets of rules for different systems of events, where C represents an emulated system of events for a web site and B
represents an actual system of events for the same web site data, and where if K(P[AC]:R[AC])>K(P[AB]:R[AB]), the association rules applied to the emulated system of events C have greater relevance
then the association rules applied to the actual system of events B, and where P is a probability distribution for an association rule for the actual system of events, and R is an emulated support
for an association rule for the emulated system of events, and wherein K is a constant.
68. The apparatus recited in claim 64, wherein P is measured by an association rule's support in a respective system of events.
69. The apparatus recited in claim 64, wherein P[AB]=P(A, B[1]), P(A, B[2]), . . . P(A, B[n]) and R[AB]=R(A, B[1]), R(A, B[2]) . . . R(A, B[n]), and P[AC]=P(A, C[1]), P(A, C[2]), . . . P(A, B[n]).
1. Field of the Invention
The present invention relates to applying data mining association rules to sessionized web server log data. More particularly, the invention enhances data mining rule discovery as applied to log data
by reducing large numbers of candidate rules to smaller rule sets.
2. Description of the Related Art
Traditionally, discovery of association rules for data mining applications has focused extensively on large databases comprising customer data. For example, association rules have been applied to
databases consisting of “basket data”—items purchased by consumers and recorded using a bar-code reader—so that the purchasing habits of consumers can be discovered. This type of database analysis
allows a retailer to know with some certainty whether a consumer who purchases a first set of items, or “itemset,” can be expected to purchase a second itemset at the same time. This information can
then be used to create more effective store displays, inventory controls, or marketing advertisements. However, these data mining techniques rely on randomness, that is, that a consumer is not
restricted or directed in making a purchasing decision.
When applied to traditional data such as conventional consumer tendencies, the association rules used can be order-ranked by their strength and significance to identify interesting rules (i.e.
relationships.) But this type of sorting metrics is less applicable to sessionized web site data because site imposed associations exist within the data. Imposed associations may be constraints
uniformly imposed on visitors to the web site. For example, to determine a relationship between site pages that web site visitors (visitors) find “interesting” using traditional data mining
association rules, a researcher might look at pages that have strong link associations. However, for typical web site data, this type of association rule would probably be meaningless because of the
site's inherent topology as discussed below.
Associations amongst web site pages—web site pages being commonly identified by their respective uniform resource locator (URL)—exhibit behavior biased by at least two major effects: 1) the
preferences and intentionality of the visitor; and, 2) traffic flow constraints imposed on the visitor by the topology of the web site. Association rules used to uncover the preferences and
intentionalities of visitors can be overwhelmed by the effects of the imposed constraints. The result is that a large number of “superfluous” rules—rules having high strength and significance yet
essentially uninformative with respect to true visitor preferences—may be discovered. Commonly, these superfluous rules tend to be the least interesting to the researcher.
For example, association rules can be used to identify unsafe patterns of sessionized visits to a web site. Such rules deliver statements of the form “75% of visits from referrer A belong to segment
B.” Traffic flow patterns can also be uncovered in the form of statements such as “45% of visits to page A also visit page B.” However, such rules that characterize behavior due to intentionality of
the visitor will tend to be overwhelmed by rules that are due to the traffic flow patterns imposed upon the visitor by the site topology. Therefore, sorting these rules in the conventional manner
will place high importance on rules of the form “100% of visitors that invoked URL A also visited URL B.” When a visitor's conduct is dominated by the web site topology, rules emanating from such
conduct need to be discounted.
Thresholding out the strongest associations between web site pages is neither practical nor desirable, and manually wading through mined association rules for such associations would be
excruciatingly tedious and defeat the basic premise upon which data mining was developed.
What is desperately needed is a way to identify association rules that are strongly influenced by web site topology and therefore considered uninteresting as an association rule. Further, there is a
need for the ability to eliminate superfluous association rules from sessionalized web site log data and yet retain the superfluous rules for future use.
Broadly, the present invention allows association rules that are strongly influenced by a web site's topology to be identified. These superfluous association rules may be separated from non-topology
affected association rules and discounted as desired.
In one embodiment, the present invention is implemented in conjunction with a method to model a web site and simulate the behavior of a visitor traversing the site. The methods of the present
invention are practiced upon the data generated by the generative model, also referred to as the Web Walk Emulator, and disclosed in U.S. Patent Application entitled “WEB WALKER EMULATOR,” by Steven
Howard et al., assigned to the assignee of the current invention, incorporated by reference herein and being filed concurrently herewith. The present invention allows randomized behavior within an
emulated session to be reduced into “interesting” and “uninteresting” behavior. In another embodiment, the present invention may be practiced upon data accumulated from actual web site visits.
In another embodiment, the invention may be implemented to provide a method to sort association rules by their relative empirical frequency (relevance), or support, within a database comprising URL
data. This relevance ranking is dependant upon the URLs constituting a complete set of events, and ranks rules where the relevance of each data set is measured by comparing its associational support
against the reference given by an emulated distribution. In another embodiment, rules within a set of rules may be compared. The degree deviation of the relevance, or likelihood. of a rule is
compared to a reference, such as the number 1, to determine peaks and lows. These peaks and lows are used to determine whether the behavior of actual users compares favorably with the behavior of
emulated users. In another embodiment, these rules may be further sorted to determine point-by-point relevance information to distinguish rules that share a common likelihood ratio yet have different
In another embodiment. associations may be ranked even if the URLs comprise an incomplete system of events that may render an emulated choice non-mutually exclusive. In this case, the events are
converted into a probability distribution and sorted. In still another embodiment, the converted events may be sorted using more sensitive associations to seek out rules that have unusual levels of
support compared to a baseline reference distribution. In another embodiment, association rules may be ranked by their confidence to estimate these conditional probabilities.
In still another embodiment, the invention may be implemented to provide an apparatus to sort association rules as described in regards to the various methods of the invention. The apparatus may
include a client computer interfaced with a server computer used to sort the associations.
In still another embodiment, the invention may be implemented to provide an article of manufacture comprising a data storage device tangibly embodying a program of machine-readable instructions
executable by a digital data processing apparatus to perform method steps for sorting association rules as described with regards to the various methods of the invention.
The invention affords its users with a number of distinct advantages. One advantage is that the invention provides a way to avoid the necessity of storing massive amounts of historical URL data used
to make future comparisons regarding the actions of a user traversing a web site. Another advantage is that the invention reduces the computational time required to process URL data and associations.
Further, the invention allows the evaluation of “emulated” events that did not actually occur, allowing future behavior of a web site user to be studied using these events.
The nature, objects, and advantages of the invention will become more apparent to those skilled in the art after considering the following detailed description in connection with the accompanying
drawings, in which like reference numerals designate like parts throughout, wherein:
FIG. 1 is a block diagram of the hardware components and interconnections for discovering association rules in accordance with one embodiment of the invention;
FIG. 2 is a flowchart of an operational sequence to sort association rules in accordance with one embodiment of the invention;
FIG. 3 is a flowchart of an operational sequence to sort association rules in accordance with one embodiment of the invention;
FIG. 4 is a flowchart of an operational sequence to sort association rules in accordance with one embodiment of the invention;
FIG. 5 is a flowchart of an operational sequence to sort association rules in accordance with one embodiment of the invention;
FIG. 6 is a flowchart of an operational sequence to sort association rules in accordance with one embodiment of the invention;
FIG. 7 is a flowchart of an operational sequence for sorting association rules in accordance with the invention; and
FIG. 8 is a perspective view of an exemplary signal-bearing medium in accordance with the invention.
The present invention concerns discovering association rules in sessionized web server log data in the presence of constraints that may be expressed as Boolean expressions over the presence or
absence of items in the database. Such constraints allow users to specify a subset of rules in which the users are interested. The constraints are integrated into an association rule discovery method
instead of being performed in a post-processing step, thereby substantially reducing the time required to discover association rules.
The present invention includes various preferred methods for generating “candidate” itemsets and may be implemented in a broader sense as discussed in U.S. Pat. No. 5,615,341, Agrawal et al., for
“SYSTEM AND METHOD FOR MINING GENERALIZED ASSOCIATION RULES IN DATABASES.” assigned to the assignee of the current invention and incorporated herein by reference above. Furthermore, the present
invention may be used in conjunction with other methods using candidate generation, such as disclosed in Toivonen, “Sampling Large Databases for Association Rules,” Proc. of the 22nd Int'l. Conf. on
Very Large Databases (VLDB), Mumbai (Bombay), India, September 1996, and may be applied directly to the methods disclosed in Srikant et al., “Mining Generalized Association Rules,” Proc. of the 21st
Int'l. Conf. on Very Large Databases (VLDB). Zurich, Switzerland, September 1995. Each of the above references are also incorporated by reference herein.
To better understand the methods of the invention, a general statement of the relationships, nomenclature, and environment used to implement the various embodiments of the invention follows in
sections A-E. Thereafter, the apparatuses. methods and signal bearing mediums of the present invention are described.
A. Introduction
A “session” is an ordered set of URLs associated with a particular visitor to a web site. A session tracks the “click-stream lifespan” of a visitor to a web site. “Sessionizing” web server log data
involves splitting the data into mutually disjoint sessions. The click-stream lifespan of a session therefore consists of the sequence of URLs visited along the way during the session.
Sessionized visits allow the invention to discover where visits come from, and where a user traversing the site tends to exit the site. For instance, given a set of “referrers” (sites from which a
visit originates), and a set of candidate “exit pages” (URLs which may serve as the final URL visited during a session) the invention may evaluate the probability that a session originated from a
particular referrer, as well as the probability that a session ends via a particular exit page. Possible associations between the two may be discovered by examining the probability that a session
will end via a particular exit page, given that the session originated from a particular URL. In another example, the intuition may discover whether visitors to page A also tend to visit page B,
where page A and B can be chosen from choices that are not mutually exclusive over the life of a session, and whereas each session has only a single referrer, entry page, and exit page.
B. Definitions
Let U={u[1], u[2], . . . , u[R]} be a table of URLs, and let uεU. A session s=(s[1], s[2], . . . , s[L]), s[i]εU, i=1, 2, . . . , L, for some finite integer L=L(s)=#s. Therefore, a session is a
sequence of URLs.
Further, Ω=∪L/l=1×L/i=1U where L=∞. For a finite session, L is finite. Observed sessions are a realization on the probability space (Ω, 2^Ω, μ) where 2^Ω is the sigma algebra in Ω given by the set of
all subsets of Ω. An element ω of Ω—where ω is a “sample point”—gives a realization of a session, and an element 2^Ω(an “event”) is a set of sessions. Ω can be thought of as an index table containing
pointers into the set of all “permissible” sessions, where for convenience all sessions of up to length L are considered. If S is a random session from this set, and given a particular ω in Ω, s=S(Ω)
and denotes a realization of S. Therefore, if A={all sessions containing u} then the probability that a random session contains (i.e., “visits”) u at least once is given by μ(A).
The probability that a random session S contains u may be denoted by P(uεS), or in a simple embodiment simply P(u). Likewise, the probability that a session visits, for example, both u[1 ]and u[2 ]
may be denoted by P(u[1], u[2]).
C. Association Rules and Web Sites
Association rules find regularities between sets of items, for example, when an association rule A→B indicates that transactions of a database which contain A also contain B. Either the left hand
side (“antecedent” or “head”) or the right hand side (“consequent” or “body”) can comprise multiple events. Rules of the form u[1 ]u[2]→u[3 ]u[4 ]u[5 ]may be encountered. A rule A→B is defined as
having a confidence c% over a set of sessions if c% of the sessions that contain A also contain B, and support s if s% of all sessions contain both A and B.
Efficient algorithms for finding association rules have been provided for mining large databases such as discussed in Agrawal et al., “Fast Discovery of Association Rules,” Advances in Knowledge
Discovery and Data Mining, Fayyad, U. M. et al. eds., AAAI Press/The MIT Press, Menlo Park, Calif., 1996. However, when applied to web server data, the problem arises that an abundant set of rules
must be “distilled” to a manageable size. One way is to rank order rules according to measures of “relevance,” “strength,” or “importance.” One measure of relevance is the support s. A useful measure
of strength is given by the confidence c. Other candidate measures are the product of the two, such as cs, as well as c log c and s log s. In conventional transactional databases, these measures can
be meaningful, as s measures the portion of transactions in which a rule is relevant, and c gives a direct measure of the associational strength.
However, when used to rank order rules over URLs gleaned from sessionized web server log data, ranking by confidence and support can yield poor results. This is the case when association rules are
used to analyze traffic flow patterns of visits to a site, and then those traffic flow patterns are used to infer regularities about the preferences and intentionality of the visitors. Association
rules based on confidence and support detect regularities in traffic flow regardless of whether they are due to intentionality on the part of the visitor, or due to forced paths imposed upon the
visitor by the web site structure. A rule with substantial support s and strong confidence c can be uninteresting. This follows from what we know about the web site construction, because essentially
all visits are subject to certain traffic flow constraints, that may provide little option for choice.
A particular example is given by an entry form. To view the entry form, one must visit URL “E.” This is a matter of choice as not all visitors must view the form. To submit the form, one must visit
URL F. This is also a matter of choice, because not all visitors that view the form must submit it. However, it is unsurprising that the rule F→E will have confidence of 100% if all visitors that
submit the form must also view the form because this association is not a matter of choice: the two-necessarily occur together if F occurs at all.
Another scenario may arise where page A on a given web site has links to pages B, C, and D, and where these three pages are not accessible from links off any other page other than A on this site.
Then rules B→A, C→A, and D→A may have confidences of 100% for the same reason: namely, traffic flow constraints impose this regularity. On the other hand, consider the rules A→B, A→C, and A→D.
Furthermore, assume that page A has no other links. If these rules have confidences of 33%, 33%, and 34% respectively, it indicates a very balanced distribution of traffic across these three links.
This fact might be interesting to the administrator of the site, or even to the web architect whose job it is to arrange the content on the site to suit the visitors' preferences. On the other hand,
it may be less interesting to those most interested in traffic flowing to page D. Although it receives slightly more traffic than the other two paces, the traffic flow it receives is not much more
than that which can be explained by random choice, for example, for visitors making completely random choices at each decision point.
Alternatively, it might be of substantial interest if these confidences were instead 5%, 5%, and 90%. It might be even more interesting where E→D, where E is a page that does not have direct links to
either A or D, yet rule E→D has confidence of 10%. Although 10% may seem like a low number relative to the examples just considered, this level of confidence may actually be striking if it is due to
apparent strong mutual interest in both E and D even though the two pages are not directly accessible from each other.
Currently, eliminating these types of problems by direct analysis of a web site's topology is either impractical or entirely unachievable. For example, graph connectivity analysis alone does not
suffice, because solving this problem requires knowledge of the routing between traversal links. In actual web server logs, the situation is complicated by the fact that pages tend to be accessible
in multiple ways, and that links can appear on multiple pages. Furthermore, pages can be created dynamically depending upon the attributes of the visitor. Because page content can determine the link
traversal topology, web site topology itself can therefore be dynamic.
D. The “Web Walker Emulator”
The Web Walker Emulator incorporated by reference above may be used to implement the methods of the present invention. In one embodiment, the Web Walker Emulator is a method for creating a
probabilistic generative model of a web site that simulates the behavior of visitors traversing through the site. This simulation “emulates” the behavior of actual visitors to a web site. The
parameterization of the simulation can be adjusted in one embodiment such that these “emulated” visitors display behavior that is substantially indistinguishable from those of actual users (or a
subset thereof) with respect to population statistics observed over their respective traffic patterns. Or, in another embodiment, it can be tuned to display hypothetical behavior such as visitors
acting without evidence of intentional choice. Tracking the site usage traffic of emulated visitors may yield a set of reference distributions (“emulated distributions”) against which may be compared
the site usage distributions obtained for actual users. The emulated distributions are used to implement estimation methods which measure relative information content. The Kullback-Liebler
Information Criterion and the Bayesian criteria, widely known to those schooled in the art, are two such estimation methods. The result is a set of reference distributions against which the
distributions obtained for actual users may be compared.
E. Applying Emulated Distributions
A set of session logs derived from actual visits to a site generally provide the basis for a set of distributions that describe the behavior of those visits. In particular, these distributions
describe behavior that is visible from the web server. If a distribution based upon behavior that is unobservable to the web server is obtained, such a distribution may embody behaviors that are
known to exist but are unobservable or, purely hypothetical. However, the availability of such a distribution allows differences between arbitrary distributions to be discovered. This is useful in
cases where conventional statistics are unsatisfactory. For instance, conventional statistical analysis of “significance,” or of associational “strength” and “relevance” implicitly assume that the
reference distribution is a uniform distribution, that is, where sample points are equally likely under the same hypothesis. In certain applications such statistics are at best unsatisfactory or at
worst misleading, because the preferable null hypothesis is one where the sample points are drawn from a distribution with different yet known qualities. In one embodiment, the present invention
allows randomized behavior within an emulated session to result in highly structured behavior that is “significant” in the usual statistical sense.
In the present invention, a reference distribution allows powerful and general-purpose information theoretic statistics to be applied as discussed below for extracting information from a distribution
of interest. The Kullback-Liebler Information Criterion (KLIC) mentioned above is one such method that can be used by the present invention for discriminating between distributions. In particular, it
measures the directional divergence between two distributions, meaning that the measure is not symmetric. Although it is not a distance measure, it is sometimes referred to as the “KL-distance.” It
is also easy to construct a variation of the KLIC that yields a non-directional pseudo-distance measure (cf.[Ullah, A., “Entropy, Divergence and Distance Measures with Econometric Applications,”
Working Paper in Economics, Department of Economics, University of California, Riverside, Riverside, Calif., Journal of Statistical Planning and Inference, 49:137-162, 1996]). For background on the
KLIC see White, H., “Parametric Statistical Estimation with Artificial Neural Networks: A Condensed Discussion,” From Statistics to Neural Networks: Theory and Pattern Recognition Application, V.
Cherkassky, J. H. Friedman and H. Wechsler eds., 1994 and White, H., “Parametric Statistical Estimation with Artificial Neural Networks,” P. Smolensky, M. C. Mozer and D. E. Rumelhart eds.,
Mathematical Perspectives on Neural Networks, L. Erlbaum Associates (to appear), Hilldale, N.J., 1995. For an elegant and concise overview of distributional information measures in general, see
Ullah, A., 1996, supra. A brief introduction of KLIC is provided below.
1. Relative Entropy
Let P and R be two candidate session generating processes (i.e., probability measures) over the set of permissible sessions, an index into which we have denoted Ω. More precisely, P and R are
probability measures on (Ω, 2^Ω). We wish to determine which of P and R is responsible for generating a given (realization of a) session s=S(ω) where ωεΩ.
Let Q be a probability measure that dominates both P and R. This means that for each permissible set of sessions A, Q(A)=0 implies P(A)=0 and R(A)=0. In some practical circumstances Q may equal R.
Let ρ[P]=dP/dQ, and ρ[R]=dR/dQ, representing associated (Radon-Nikodym) type density functions. Applying Kullback and Liebler, the log density ratio is log[ρ[P](ω)/ρ[Q](ω)] as the information in ω
for discriminating between P and R. This quantity is known as the log likelihood radio and may be optimal in a variety of senses for discriminating between P and R.
The expected value of the log likelihood ratio yields the KLIC:
I(ρ[P]:ρ[R])=∫Ω log(ρ[P](s)/ρ[R](s))ρ[P](s)Q(ds).
If Web sessions are generated by P, then the KLIC quantifies the information theoretic measure of “surprise” experienced on average when sessions are described by P and are described by R. When the
intersection of the supports of R and Q is nonempty and the integral is taken over a finite space, this simplifies to:
I(ρ[P]:ρ[R])=Σ/sεsupp(R) (ρ[P](s)log(ρ[P](s)/ρ[R](s)).
and is commonly referred to as “cross-entropy” or “relative entropy.” Accordingly, let K(P:R)=I(ρ[P]:ρ[R]).
Reference may also be made to the following two quantities:
The former is the likelihood ratio. The latter is the information in s for discriminating between P and R.
2. Information and Entropy
Several forms of information and entropy exist. Information and entropy are closely related, as entropy is “minus” the information [Ullah, A., 1996, “Entropy, Divergence and Distance Measures with
Econometric Applications,” Working paper in Economics, Department of Economics, University of California, Riverside, USA 92521, J. of Statistical Planning and Inference, 49:137-162.]. KLIC
generalizes the notion of entropy. Shannon-Wiener entropy is a special case of the KLIC that arises when R dominates P [White, H., 1994, supra]. Entropy measures the “uncertainty” of a single
distribution (cf. [Khinchin, A., Mathematical Foundations of Information Theory, Dover Publications, Inc., NY, 1957], [Ullah, A., 1996, supra]).
To illustrate the difference between Shannon-Wiener entropy and the KLIC, consider for example a finite probability scheme. When applied to a “complete system” of n events (a mutually exclusive set
of which one and only one must occur at each trial) uncertainty is maximized when all events are equally likely (giving each a pointwise probability or “density” of n^−I). Furthermore, given that all
events are equally likely, uncertainty increases with the number of events n. For a finite system, uncertainty is minimized when all likelihood concentrates on a single point, in which case the
entropy is zero.
By comparison, the KLIC is a relative measure of information available for distinguishing a target distribution from a reference distribution. It's absolute value is minimized when the target is
indistinguishable from the reference—in this case knowing the reference implies knowing the target. For a finite system in which each event is equally likely under the reference distribution, the
KLIC is equal to minus the Shannon-Wiener entropy (discussed above) of the target distribution plus a constant. In the preferred embodiment, the present invention requires KLIC (relative entropy)
because traditional entropy measure methods rank superfluous association rules highly, exactly the problem that the present invention addresses. One reason superfluous association rules may be highly
ranked is because even “randomized” visitor behavior can be highly structured—therefore, have low entropy—due to traffic flow constraints imposed by the web sit topology.
For additional background on the KLIC, White, H., 1994, supra and White, H., 1995, supra, may be consulted and for a concise yet comprehensive survey of KLIC compared and contrasted with other
information measures (including Shannon-Wiener information and mutual information) see Ullah, A., 1996, supra.
The above discussion relating to definitions used to explain the methods of the invention and the environment in which the methods may be practiced should be particularly helpful in understanding how
the methods are implemented, and the hardware associated therewith.
Hardware Components & Interconnections
One aspect of the invention concerns an apparatus for extracting desired data relationships from a web site database, which may be embodied by various hardware components and interconnections as
described in FIG. 1.
Referring to FIG. 1, a data processing apparatus 100 for analyzing databases for generalized association rules is illustrated. In the architecture shown, the apparatus 100 includes one or more
digital processing devices, such as a client computer 102 having a processor 103 and a server computer 104. In one embodiment, the server computer 104 may be a mainframe computer manufactured by the
International Business Machines Corporation of Armonk, N.Y., and may use an operating system sold under trademarks such as MVS. Or, the server computer 104 may be a Unix computer, or OS/2 server, or
Windows NT server, or IBM RS/6000 530 workstation with 128 MB of main memory running AIX 3.2.5. The server computer 104 may incorporate a database system, such as DB2 or ORACLE, or it may access data
on files stored on a data storage medium such as disk, e.g., a 2 GB SCSI 3.5″ drive, or tape. Other computers, servers, computer architectures, or database systems than those discussed may be
employed. For example, the functions of the client computer 102 may be incorporated into the server computer 104, and vice versa.
FIG. 1 shows that, through appropriate data access programs and utilities 108, a mining kernel 106 accesses one or more databases 110 and/or flat files (i.e., text files) 112 which contain data
chronicling transactions. After executing the steps described below, the mining kernel 106 outputs association rules it discovers to a mining results repository 114, which can be accessed by the
client computer 102.
Additionally, FIG. 1 shows that the client computer 102 can include a mining kernel interface 116 which, like the mining kernel 106, may be implemented in suitable computer code. Among other things,
the interface 116 functions as an input mechanism for establishing certain variables, including the minimum support value or minimum confidence value. Further, the client computer 102 preferably
includes an output module 118 for outputting/displaying the mining results on a graphic display 120, print mechanism 122, or data storage medium 124.
In addition to the various hardware embodiments described above, a different aspect of the invention concerns a method for applying association rules to sessionalized web server log data. Throughout
the following description, a given set of sessions from which web server log data is gathered may be treated as a realization of a random variable drawn from a stationary data generating process.
One use of an emulated distribution is to simulate the behavior of actual visitors. The procedure may comprise in one embodiment parameterizing the Web Walk Emulator to closely match the behaviors of
actual visitors and fine-tuning these parameters to minimize the relative entropy divergence between the emulated and actual distributions by, for example, way of gradient local optimization or by
global optimization over a computational grid laid down over parameter space, or by a combination of global and local search. The resulting optimized parameterization can be used to generate the
population statistics exhibited by the original visitors.
Consider the task of comparing user behavior from historical data with current day behavior. As a simple means of accomplishing this comparison, historical data can be saved and used for future
comparisons. However, this approach has several drawbacks:
1. The amount of data can be massive, requiring excessive storage space,
2. Even if storage space is not an expensive resource, processing such a large amount of data may be more expensive than is necessary, especially if the data is highly redundant. One solution is to
compress the data into a set of sufficient statistics and then save only that compressed data set.
3. While compressing a store of data into a set of sufficient statistics can be desirable, anticipating what statistics to calculate for future reference can be difficult, especially where subjective
preferences change over time: what is considered important today may not be interesting tomorrow. Conversely, information considered discardable today may become useful tomorrow.
Having “emulated users” with the same behavioral characteristics as historical users allows us to evaluate an arbitrary set of statistics at a later time, including statistics that were invented
after the historical data was observed. It is possible to create hypothetical situations that were not presented to the historical users, and computationally “imagine” what behaviors historical users
night have exhibited if subjected to the hypothetical set of choices. In the present invention. “emulation” combined with “simulation” allows hypothetical situations to be considered, such as, “how
would last year's users react to this year's web site structure?” This behavior can then be used as a reference distribution for comparison against this year's behavior. The following discussion
discloses the methods of the present invention that may be used by the Web Walk Emulator for detecting meaningful URL-URL associations.
Overall Sequence of Operation
FIGS. 2-7 show several methods to illustrate examples of the various method embodiments of the present invention. For ease of explanation, but without any limitation intended thereby, the examples of
FIGS. 2-7 are described in the context of the apparatus 100 described above.
1. Ranking Association Rules by Support
a. Sets of Rules over a Complete System of Events
In the present invention, a complete system of events may be a mutually exclusive set of which one and only one must occur at each trial. Association rules can be applied to measure the strength of
association between an event A and a set of options that comprise a complete system, say one having n events such as B=(Bl, B[2], . . . , B[n]). This generates n association rules: A→B[1], A→B[2 ]. .
. A→B[n]. One distribution of interest is given by examining the “support” of each rule. Let P give the target probability distribution for actual visits contained in a database of step 204 shown in
FIG. 2. The “support” of rule A→B[1 ]over the set of sessions observed from actual visits (actual support) measures a specific quantity—defined above with respect to Association Rules—that is
directly observable in empirical samples. Although strictly speaking “support” measures an empirical relative frequency, given a sufficient amount of data and a sufficiently “stationary” data
generating process, it may be used to estimate the generally unobservable P(A,B[1]). Therefore, the support s may be used as an estimate of the probability P(A,B[1]) that defines the data generating
process underlying the original data set. For clarity, probabilistic quantities (such as P(A, B[1])) rather than their estimates s will be discussed. However, the following holds true if s is used
A rule such A→B[1 ]can be evaluated over different realizations of the same type of data, such as that produced by different realizations (e.g., as provided by a generative model such as the Web Walk
Emulator, or, as observed from the same web site over a different time span) and stored in the database of step 204. If R(A,B[1]) gives the support of this rule as measured over emulated visits
(henceforth we refer to it as “emulated support”) in step 206, two probability distributions over n events are considered that can be compared via relative entropy, namely, P[AB]=P(A,B[1]),P(A,B[2]),
. . . P(A,B[n]) and R[AB]=R(A,B[1]),R(A,B[2]), . . . , R(A,B[n]). Further, K(P[AB]:R[AB])=0, where it is some constant value, if and only if these distributions are identical. One way to apply this
is to compare P[AB ]with a different set of association rules, say P[AC]=P(A,C[1]),P(A,C[n]), . . . ,P(A,C[n],), for some integer m and complete system of events C=(C[1], C[2], . . . , C[n],), by
computing K(P[AC]:R[AC]) and comparing with K(P[AB]:R[AB]) in step 208. If K(P[AC]:R[AC])>K(P[AB]:R[AB]), then association rules applied to the system of events C have higher relevance on average (as
compared against the backdrop of the reference R[AC]) than that observed for rules over B (as compared against the backdrop of the reference R[AB]). The method ends in step 210.
b. Individual Rules over a Complete System of Events
The ranking method discussed above with respect to FIG. 2 compares the relevance of two sets of rules in which the consequents of the rules comprise a complete set of events, where the relevance of
each set is measured by comparing its associational support against the reference given by an emulated distribution. However, rules within the same set may also be compared as shown in FIG. 3.
Relative entropy is a measure of “expected” information content for discriminating between two distributions—i.e., it is an average value of a pointwise measure. This pointwise measure can be used to
compare individual rules within a set of rules. More, precisely: it can be used to compare measures over a set of rules, given that these measures comprise a probability distribution.
For example, if the rules A→B[1], A→B[2], . . . , A→B[n ]contained in the database of step 304 are to be ranked according to their “surprise” in the sense that the rule support measured over the
actual users is large relative to the rule support measured over emulated visits, one way—based upon the relationship of ρ[P](s)/ρ[R](s) discussed above with regards to relative entropy—is to sort
the quantities in descending order in step 306:
P(A,B [1])/R(A,B [1]), P(A,B [2])/R(A,B [2]), . . . , P(A,B [n])/R(A,B [n]).
Sorting these likelihood ratios is equivalent to traversing P[AB]/R[AB ]and looking for places where it deviates significantly from 1, here P[AB]=P(A,B[1]),P(A,B2), . . . , P(A,B[n]) and R[AB]=R(A,B
[1]),R(A,B[2]), . . . , R(A,B[n]). Peaks (ratios much greater than 1) show where the support of rules as described by P[AB ]is significantly greater than that described by R[AB]. Dips (ratios much
less than 1) show where the support under P[AB ]is unusually lower than what is suggested by R[AB]. Ratios close to 1 show where the behavior of actual users is consistent with that emulated users.
The method ends in step 308.
C. Individual Rules over a Complete System of Events, Weighted by Support
Sorting rules by their support likelihood as discussed above with respect to FIG. 3 results in rules with very high or very low likelihood ratios being distinguished, even if those rules have
negligible support. An appropriate solution is to sort in step 406 of FIG. 4 the quantities based upon the relationship of ρ[P](s)log(ρ[P(s)/ρ] [R](s)), after accessing a database in step 404
discussed above with regard to relative entropy:
P(A,B [1])log(P(A,B [1])/R(A,B [1])), P(A,B [2])log(P(A,B [2])/R(A,B [2])), . . . , P(A,B [n])log(P(A,B [n])/R(A,B [n])).
The “pointwise” information derived in this sorting method, given two rules for which the likelihood ratios are equal, breaks the “tie” by considering the actual support. Therefore, if using the
ratio to detect unusually high support (sorting in descending order in step 408 and picking rules that rise to the top in step 410), then the rule with the higher actual support will prevail in step
412. If using the ratio to detect unusually low support (sorting in ascending order in step 414 and picking rules that rise to the top in step 416), then the rule with the lower actual support will
prevail in step 418. The method ends in step 420.
d. Ranking Rule Support over Non-Mutually Exclusive Choices
If associations are applied to URLs in the database step 502 of FIG. 5 that do not comprise a complete system of events in step 504, a users choice or event may not be mutually exclusive. If the
complete system of events has occurred, a different method for sorting is applied in step 506, for example, a method as shown in FIGS. 2-4. However, “incomplete” events may be converted into a
probability distribution by letting event D[1 ]correspond to a set of URLs, where a head and a body of an association can refer to a single URL or to a set of URLs in step 508, and event D[1 ]
corresponds to “not D[1]”—i.e., all other URLs besides those in D[1]. Therefore, {D[1, D] [2], . . . , D[m]} can be an arbitrary set of objects that are not mutually exclusive and can be examined in
step 510 by sorting “m” quantities as follows:
{P(A,D [i])/R(A,D [i])+P(A,D [i])/R(A,D [i])/2}, i=1,2, . . . ,m.A.
This indicates for each i where the observed support for the two rules {A→D[p]A→D[i]} is much different than that exhibited in the reference distribution, for example, as provided by a generative
model such as the Web Walk Emulator. In order to give preference to rules having higher support, relative entropy is applied in step 512, sorting the following m quantities:
{P(A,D [i])log(P(A,D [i])/R(A,D [i]))÷P(A,D [i])log(P(A,D [i])/R(A,D [i]))}, i=1,2, . . . , m.B.
This compares the pairs of rules {(A→D[1],A→D[1]),(A→D[2],A→D[2]) . . . {(A→D[m],A→D[m]),} with each other on the basis of whether their support over one data set is unusually high (respectively,
low) as compared with the support as evaluated over a data set representing a baseline reference distribution. The method ends in step 514.
Both of the sorting method notations expressed in this section and FIG. 5A are applications of general methods of converting each rule into a corresponding distribution, and then using a
distributional measure (average likelihood ratio in (A), and relative entropy in (B)) to compare the resulting distribution with a baseline reference. However, the following quantities may be sorted
instead as shown in steps 610 and 612 of FIG. 6:
{P(A,D [i])/R(A,D [i])}, i=1,2, . . . ,m.C.
{P(A,D [i])log(P(A,D [i])/R(A,D [i]))}, i=1,2, . . . ,m.D.
Statement (C) may be interpreted as seeking out rules that have unusual levels of support compared to a baseline reference distribution, regardless of whether or not the rules are highly supported in
the available data as determined by P. Statement (D) also seeks out rules having unusually high or low support, but weights them according to their support over the observed data as determined by P
such that given two rules with identical likelihood ratios, the one with greater support will be sorted closer to the head (or tail) of the rank ordering. The method ends in step 614.
2. Ranking Rules by Their Confidence
In another embodiment. and under the appropriate conditions (e.g., sufficient data, stationary data generating process), association rules' measures of “confidence” can be used in one method to
estimate conditional probabilities. In particular, the confidence of rule A→B gives a useable estimate of the conditional probability P(A|B). The same techniques as described immediately above may be
applied for rule support to compare the confidence of rules against a baseline reference distribution. Relationships such as defined in statements (C) and (D) above are easily applied to evaluating
rule confidence. With substitution of the appropriate conditional probabilities, the relationships and the rules are sorted in steps 706 and 708 where:
{(P(A|D [i])/R(A|D [i])}, i=1,2, . . . , m.E.
{(P(A,D [i])log(P(A|D [i])/R(A|D [i]))}, i=1,2, . . . , m.F.
Sorting likelihood ratios as described above are equivalent to traversing the distribution P[AB]/R[AB ]and looking for places where it deviates significantly from 1. Peaks (ratios much greater than
1) show where the confidence of rules under P[AB ]is significantly greater than what is suggested by R[AB], and dips (ratios close to 0) show where the confidence under P[AB ]is unusually lower than
what is suggested by R[AB]. Comparatively speaking, the interpretation of the relationship statement (E) is not as tidy because the conditional probabilities do not in general lend themselves to
forming a probability distribution; for each i in statement (E) simply delivers a pointwise measure of the information content for discriminating between the two distributions {P(A|D[i]),P(A|D[i])}
and {R(A|D[i]),R(A|D[i])}. The relationships in statement (F) add the benefit of giving emphasis to rules with greater support, which is ideally suited to the determining applications for which these
techniques are intended. The method ends in step 710.
Data Storage Device
Such methods as discussed above may be implemented, for example, by operating the processor 103 of the client computer 102 shown in FIG. 1 to execute a sequence of machine-readable instructions.
These instructions may reside in various types of data storage medium. In this respect, one aspect of the present invention concerns an article of manufacture, comprising a data storage medium
tangibly embodying a program of machine-readable instructions executable by a digital data processor to perform method steps to extract desired data relationships from web site data.
This data storage medium may comprise, for example, RAM contained within the client computer 102. Alternatively, the instructions may be contained in another data storage medium, such as a magnetic
data storage diskette 800 (FIG. 8). Whether contained in the client computer 102 or elsewhere, the instructions may instead be stored on another type of data storage medium such as DASD storage
(e.g., a conventional “hard drive” or a RAID array), magnetic tape, electronic read-only memory (e.g., CD-ROM or WORM), optical storage device (e.g., WORM), paper “punch” cards, or other data storage
media. In an illustrative embodiment of the invention, the machine-readable instructions may comprise lines of compiled C-type language code.
Other Embodiments
While there have been shown what are presently considered to be preferred embodiments of the invention, it will be apparent to those skilled in the art that various changes and modifications can be
made herein without departing from the scope of the invention as defined by the appended claims.
Cited Filing Publication Applicant Title
Patent date date
US5615341 8 May 1995 25 Mar 1997 International Business Machines System and method for mining generalized association rules in databases
US5668988 8 Sep 1995 16 Sep 1997 International Business Machines Method for mining path traversal patterns in a web environment by converting an original log sequence into a set of
* Corporation traversal sub-sequences
US5724573 22 Dec 3 Mar 1998 International Business Machines Method and system for mining quantitative association rules in large relational tables
* 1995 Corporation
US5832482 20 Feb 3 Nov 1998 International Business Machines Method for mining causality rules with applications to electronic commerce
* 1997 Corporation
US6061682 12 Aug 9 May 2000 International Business Machine Method and apparatus for mining association rules having item constraints
* 1997 Corporation
1 Agrawal et al, "Mining Association Rules Between Sets of Items In Large Databases", Proc. 1993 ACM SIGMOD Conf. pp. 207-216, 1993.
2 Agrawal et al, "Parallel Mining Of Association Rules: Design, Implementation, And Experience", IEEE Transaction On Knowledge Data Engineering, vol. 8, No. 6, pp. 962-969, Dec. 1996.
3 Argrawal et al, "Fast Algorithms For Mining Association Rules", Proceedings of the 1994 VLDB Conferences, pp. 487-499, 1994.
4 H. Mannila et al, "Improved Methods For Finding Association Rules", Pub. No. C-1993-65, 20 pages, Univ. Helsinki, 1993.
5 Han et al, Discovery Of Multiple Level Association Rules Form Large Databases, Proceedings of the 21st International Conference on Very Large Data Bases, Zurich, Switzerland, Sep. 11-15, 1995,
pp. 420-431.
6 J.S. Park et al, "An Effective Hash Based Algorithm For Mining Association Rules", Proc. ACM-SIGMOD Conf. On Management of Data, San Jose, May 1994.
7 J.S. Park, et al, "Efficient Parallel Data Mining For Association Rules", IBM Research Report, RJ 20156, Aug. 1995.
8 Piatetsky-Shapiro, Chapter 13 "Discovery, Analysis, And Presentation Of Strong Rules", from Knowledge Discovery in Databases, pp. 229-248, AAAI/MIT,Press, Menlo Park, Ca 1991.
9 Savascre et al, "An Efficient Algorithm For Mining Association Rules In Large Databases", Proceedings of the 21st VLDB Conference, Zurich, Switzerland, 1995, pp. 432-444.
10 Srikant et al, "Mining Generalized Association Rules", Proceedings of the 21st VLDB Conference, Zurich, Switzerland, 1995, pp. 407-419.
11 Swami, "Research Report: Set-Oriented Mining For Association Rules", IBM Research Division, RJ 9567 (83573 Oct. 1993.
12 Ullah, "Entropy, divergence and distance measures with economic applications", Journal Of Statistical Planning And Interence, Elsevier 1993, pp. 137-163.
Citing Patent Filing date Publication date Applicant Title
US6397205 * 22 Nov 1999 28 May 2002 Duquesne University Of The Holy Ghost Document categorization and evaluation via cross-entrophy
US6415287 * 20 Jan 2000 2 Jul 2002 International Business Machines Method and system for mining weighted association rule
US6529865 18 Oct 1999 4 Mar 2003 Sony Corporation System and method to compile instructions to manipulate linguistic structures into separate functions
US6535886 18 Oct 1999 18 Mar 2003 Sony Corporation Method to compress linguistic structures
US6633882 * 29 Jun 2000 14 Oct 2003 Microsoft Corporation Multi-dimensional database record compression utilizing optimized cluster models
US6651048 * 22 Oct 1999 18 Nov 2003 International Business Machines Interactive mining of most interesting rules with population constraints
US6651049 * 22 Oct 1999 18 Nov 2003 International Business Machines Interactive mining of most interesting rules
US6691163 23 Dec 1999 10 Feb 2004 Alexa Internet Use of web usage trail data to identify related links
US6721697 18 Oct 1999 13 Apr 2004 Sony Corporation Method and system for reducing lexical ambiguity
US6785671 * 17 Mar 2000 31 Aug 2004 Amazon.Com, Inc. System and method for locating web-based product offerings
US6801945 * 31 Jan 2001 5 Oct 2004 Yahoo ! Inc. Systems and methods for predicting traffic on internet sites
US6829608 * 30 Jul 2001 7 Dec 2004 International Business Machines Systems and methods for discovering mutual dependence patterns
US6850988 * 15 Sep 2000 1 Feb 2005 Oracle International Corporation System and method for dynamically evaluating an electronic commerce business model through click stream
US6853982 29 Mar 2001 8 Feb 2005 Amazon.Com, Inc. Content personalization based on actions performed during a current browsing session
US6912505 29 Mar 2001 28 Jun 2005 Amazon.Com, Inc. Use of product viewing histories of users to identify related products
US6924828 * 26 Apr 2000 2 Aug 2005 Surfnotes Method and apparatus for improved information representation
US6928448 * 18 Oct 1999 9 Aug 2005 Sony Corporation System and method to match linguistic structures using thesaurus information
US6973492 7 Sep 2001 6 Dec 2005 International Business Machines Method and apparatus for collecting page load abandons in click stream data
US7117208 * 17 Dec 2004 3 Oct 2006 Oracle Corporation Enterprise web mining system and method
US7152106 5 Jul 2002 19 Dec 2006 Clickfox, Llc Use of various methods to reconstruct experiences of web site visitors
US7159023 16 Dec 2003 2 Jan 2007 Alexa Internet Use of web usage trail data to identify relationships between browsable items
US7299270 * 10 Jul 2001 20 Nov 2007 Lycos, Inc. Inferring relations between internet objects that are not connected directly
US7395259 30 Jul 2004 1 Jul 2008 A9.Com, Inc. Search engine system and associated content analysis methods for locating web pages with product offerings
US7430561 30 Mar 2006 30 Sep 2008 A9.Com, Inc. Search engine system for locating web pages with product offerings
US7558865 * 1 Oct 2004 7 Jul 2009 Yahoo! Inc. Systems and methods for predicting traffic on internet sites
US7567979 * 30 Dec 2003 28 Jul 2009 Microsoft Corporation Expression-based web logger for usage and navigational behavior tracking
US7660778 * 8 Feb 1999 9 Feb 2010 Accenture Global Services Gmbh Runtime program analysis tool for a simulation engine
US7676464 17 Mar 2006 9 Mar 2010 International Business Machines Page-ranking via user expertise and content relevance
US7685074 19 Nov 2004 23 Mar 2010 Amazon.Com, Inc. Data mining of user activity data to identify related items in an electronic catalog
US7797197 12 Nov 2004 14 Sep 2010 Amazon Technologies, Inc. Method and system for analyzing the performance of affiliate sites
US7840555 * 12 Sep 2006 23 Nov 2010 Teradata Us, Inc. System and a method for identifying a selection of index candidates for a database
US7877385 21 Sep 2007 25 Jan 2011 Microsoft Corporation Information retrieval using query-document pair information
US7882115 25 Mar 2005 1 Feb 2011 Scott Hirsch Method and apparatus for improved information representation
US7886051 * 26 Aug 2009 8 Feb 2011 Webtrends, Inc. On-line web traffic sampling
US7970664 10 Dec 2004 28 Jun 2011 Amazon.Com, Inc. Content personalization based on actions performed during browsing sessions
US8051066 7 Jul 2009 1 Nov 2011 Microsoft Corporation Expression-based web logger for usage and navigational behavior tracking
US8224715 9 Aug 2010 17 Jul 2012 Amazon Technologies, Inc. Computer-based analysis of affiliate site performance
US8301623 22 May 2007 30 Oct 2012 Amazon Technologies, Inc. Probabilistic recommendation system
US8407105 6 Jun 2011 26 Mar 2013 Amazon.Com, Inc. Discovery of behavior-based item relationships based on browsing session records
US8433621 18 Jun 2012 30 Apr 2013 Amazon.Com, Inc. Discovery of behavior-based item relationships
US8510178 7 Jun 2012 13 Aug 2013 Amazon Technologies, Inc. Computer-based analysis of seller performance
US8620767 14 Mar 2013 31 Dec 2013 Amazon.Com, Inc. Recommendations based on items viewed during a current browsing session
WO2003005168A2 * 5 Jul 2002 16 Jan 2003 Clickfox Llc Use of various methods to reconstruct experiences of web site visitors
WO2003052621A1 * 13 Dec 2002 26 Jun 2003 Kate Armstrong System for identifying data relationships
WO2006055269A2 * 3 Nov 2005 26 May 2006 Michelle K Anderson Computer-based analysis of affiliate web site performance
Date Code Event Description
6 Mar SULP Surcharge for late Year of fee payment: 11
2013 payment
6 Mar FPAY Fee payment Year of fee payment: 12
17 Dec REMI Maintenance fee reminder
2012 mailed
23 Sep FPAY Fee payment Year of fee payment: 8
22 Sep FPAY Fee payment Year of fee payment: 4
Owner name: JOHNSON & JOHNSON INDUSTRIA E. COMERCIO LTDA, BRAZ
26 Mar AS Assignment Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:FALLEIROS, ALEXANADARE PETROCINI;FILHO, JULIO MALVA;REEL/FRAME:010160/0138
Effective date: 19990423
Owner name: INTERNATIONAL BUSINESS MACHINES CORPORATION, NEW Y
14 Sep AS Assignment
1998 Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:HOWARD, STEVEN KENNETH;MARTIN, DAVID CHARLES;PLUTOWSKI, MARK EARL PAUL;REEL/FRAME:009455/0350;
SIGNING DATES FROM 19980805 TO 19980904
Original Image
|
{"url":"http://www.google.com.au/patents/US6230153","timestamp":"2014-04-19T17:25:03Z","content_type":null,"content_length":"162538","record_id":"<urn:uuid:9628d2ad-914d-4902-89d9-8c5db2e64878>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00387-ip-10-147-4-33.ec2.internal.warc.gz"}
|
A Bayesian Perceptual Model Replicates the Cutaneous Rabbit and Other Tactile Spatiotemporal Illusions
When brief stimuli contact the skin in rapid succession at two or more locations, perception strikingly shrinks the intervening distance, and expands the elapsed time, between consecutive events. The
origins of these perceptual space-time distortions are unknown.
Methodology/Principal Findings
Here I show that these illusory effects, which I term perceptual length contraction and time dilation, are emergent properties of a Bayesian observer model that incorporates prior expectation for
speed. Rapidly moving stimuli violate expectation, provoking perceptual length contraction and time dilation. The Bayesian observer replicates the cutaneous rabbit illusion, the tau effect, the kappa
effect, and other spatiotemporal illusions. Additionally, it shows realistic tactile temporal order judgment and spatial attention effects.
The remarkable explanatory power of this simple model supports the hypothesis, first proposed by Helmholtz, that the brain biases perception in favor of expectation. Specifically, the results suggest
that the brain automatically incorporates prior expectation for speed in order to overcome spatial and temporal imprecision inherent in the sensorineural signal.
Citation: Goldreich D (2007) A Bayesian Perceptual Model Replicates the Cutaneous Rabbit and Other Tactile Spatiotemporal Illusions. PLoS ONE 2(3): e333. doi:10.1371/journal.pone.0000333
Academic Editor: Nava Rubin, New York University, United States of America
Received: November 19, 2006; Accepted: March 9, 2007; Published: March 28, 2007
Copyright: © 2007 Daniel Goldreich. This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and
reproduction in any medium, provided the original author and source are credited.
Funding: Supported by an individual Discovery Grant from the Natural Sciences and Engineering Research Council of Canada (NSERC).
Competing interests: The author has declared that no competing interests exist.
How does the brain interpret information from the senses? This unresolved question carries fundamental importance for neuroscience.
The brain faces a challenge as it attempts to translate sensory information into perception: Sensorineural activity imprecisely represents the physical world, In the case of tactile perception,
spatial imprecision due to low receptor density poses a particular challenge, especially when brief stimuli preclude exploration. The most discriminating tactile sensors of primates, the fingertips,
house a few hundred sensory nerve fibers per square cm [1], [2], a density four orders of magnitude lower than the peak ganglion cell density in the retina [3]. Without the benefit of exploratory
movements, the fingertips' resolving power is on the order of one mm [4], [5], whereas the forearm has much worse acuity, resolving detail on the order of one cm [5]. Sensory systems face not only
spatial, but also temporal imprecision, an expected consequence of stochastic variation in action potential timing, such as the several ms jitter in stimulus-evoked first-spike latencies of
somatosensory cortical neurons [6].
A growing body of research suggests that the brain takes advantage of prior knowledge to enhance perceptual resolution beyond the limits imposed by sensorineural imprecision [7]. For example, the
assumption that light originates from above disambiguates the retinal image, allowing the brain to more accurately perceive object shape from shading [8], [9]. Reliance on prior knowledge comes at a
cost, however, as the rare physical event that violates expectation (e.g., a visual scene lit from below) is then misperceived. A percept that misrepresents physical reality–an illusion–is thus both
a consequence of, and a clue to the brain's expectations regarding the world.
Tactile perception is subject to characteristic spatiotemporal illusions. The best-known of these is the cutaneous rabbit, in which a sequence of three or more taps to two skin sites evokes the
perception of an object hopping along the skin from the first site to the second, landing in the process on intervening skin that was never touched [10]–[14] (Fig. 1A). A vivid illusory tap occurs
even when the intervening skin is anesthetized [11], revealing that the rabbit has its origins in the central nervous system, not in skin mechanics. Apparently related to the rabbit is the classic
tau effect, in which the more rapidly traversed of two equal distances (defined by three stimuli) is perceived as shorter [15], [16] (Fig. 1B). Similarly, two different distances can be made
perceptually equal simply by adjusting stimulus timing [17] (Fig. 1C). Even more remarkably, the perceived locations of two stimuli delivered in very rapid succession merge to a single point on the
skin [18] (Fig. 1D). When stimulus timing is held constant, the perceived distance between stimuli both underestimates, and grows in proportional with, the actual distance [19], [20] (Fig. 1E). In
the kappa effect, by contrast, the perceived time between stimuli dilates as the distance between stimuli is increased [21] (Fig 1F).
Figure 1.
Tactile length contraction (A–E) and time dilation (F) illusions. Actual stimulus sequences (plotted points) evoke illusory perceived sequences (positions on forearms in A–E; clock times in F).
Colored arrows in panels A, B, E, and F indicate direction of perceptual effect (arrow at right) caused by adjustment to corresponding stimulus location or time (arrow at left). (A) Rabbit illusion
[12]. The two intermediate taps, separated by short temporal interval (rapid movement), are perceptually displaced towards one another. (B) Classic tau effect [15], [16]. The more rapidly traversed
of two equal distances is perceived as shorter. (C) Tau effect with two-arm comparison [17]. Stimulus parameters were adjusted to reach the point of subjective equality, at which the greater distance
(faster movement) is perceived equal to the shorter distance (slower movement). (D) Perceptual merging [18]. At very rapid velocities, the perceived locations of the two taps merge to a single point.
The velocity required to accomplish perceptual merging increases with tap separation. (E) Two-stimulus distance estimation [19]. When inter-stimulus distance is increased at fixed inter-stimulus
time, perceived distance both underestimates, and grows with, actual distance. (F) Kappa effect [21]. When inter-stimulus distance is increased at fixed inter-stimulus time, perceived inter-stimulus
time overestimates actual time. Stimulus parameters were adjusted to reach the point of subjective equality, at which perception dilates the temporal interval defined by the greater distance (faster
movement) to equal the slightly longer temporal interval defined by the smaller distance (slower movement).
The above illusions apparently reflect just two fundamental perceptual distortions: underestimation of inter-stimulus distance (ISD), and overestimation of inter-stimulus time (IST). I term these
distortions perceptual length contraction and time dilation, in analogy with the relativistic phenomena of those names [22]. Perceptual length contraction underlies many illusions [10]–[20], [23] (
Fig. 1A–E). Perceptual time dilation, for reasons discussed below, has been less frequently reported [14], [21] (Fig. 1F). The present work proposes to explain the inferential process that generates
these perceptual distortions. Related phenomena reported in vision [24] and audition [25] may share a similar explanation.
The Bayesian observer model described here replicates the spatiotemporal illusions illustrated in Figure 1. The model forms perceptual judgments by interpreting a spatially and temporally imprecise
sensorineural signal in light of two plausible prior assumptions: 1) Stimuli separated by small spatial and temporal intervals originate from uniform object motion, and 2) objects that contact the
skin tend to move slowly. As shown below, perceptual length contraction and time dilation are emergent properties of the Bayesian observer. When confronted with a fast stimulus sequence, the observer
perceptually reduces ISD, and increases IST, reconciling velocity perception with expectation.
To infer which of many possible trajectories was taken by a sensed object, the Bayesian observer multiplies each candidate trajectory's prior (its probability, given only the expectation of slow
movement) by its likelihood (probability of the sensorineural activity, given the trajectory) to obtain its posterior (probability of the trajectory, given sensorineural activity and expectation).
The mode of the resulting posterior distribution, the most probable trajectory, is the percept: a compromise between imprecise sensorineural information and the observer's expectation of slow
movement (see Materials and Methods for mathematical details).
Basic Bayesian Observer
I first describe a basic version of the observer, which admits spatial but not temporal imprecision (Fig. 2). This model experiences length contraction but not time dilation. The observer's perceived
ISD, l', is related to actual ISD, l, and IST, t, by the length contraction equation (for derivation, see Materials and Methods):1
where λ = σ[v]/σ[s] is the single free parameter of the model (see Fig. 2A).
Figure 2.
Basic Bayesian observer. (A) Two stimuli touch skin in rapid succession (filled circles). Reflecting sensorineural imprecision, each stimulus evokes a Gaussian likelihood function, centered on its
actual position, with spatial standard deviation σ[s] (vertical arrows: ±1 σ[s]). The observer considers slow movement most probable a priori, adopting a Gaussian prior probability distribution for
velocity, centered on zero, with standard deviation σ[v] (slopes: ±1 σ[v]). (B) Candidate trajectories, represented by first stimulus position and velocity (left column) or, equivalently, first and
second stimulus positions (right). Intensity represents probability. Prior (top) x likelihood (middle) ∝ posterior probability (bottom). The actual trajectory (red crosshairs in all panels) occupies
the position of maximal likelihood, but its velocity exceeds prior expectation. Perception (mode of posterior; red dot) is a compromise between reality and expectation. (C) Actual (filled circles,
solid line) and perceived (open circles, dashed line) trajectories. Perceived ISD (l' = 0.67 cm; dotted bar) underestimates actual ISD (l = 2 cm; solid bar), and perceived velocity (v' = 6.7 cm/s)
underestimates actual velocity (v = 20 cm/s).
Equation 1 predicts that perceived ISD will: 1) underestimate actual ISD; 2) asymptotically approach actual ISD as IST increases; and 3) increase linearly with actual ISD, at constant IST. Each of
these predictions is borne out by the human perceptual data. Indeed, the Basic observer model explains between 80 and 95% of the variance in the data from five studies of tactile length contraction
illusions (Fig. 3A–E).
Figure 3.
Human data from five studies (symbols) and basic Bayesian observer's performance on the same tasks (solid curves in A–E). For each study, the value of λ was chosen to minimize the mean-squared error
between model and human performance. (A) Rabbit on forearm (Fig. 1A) [12]. R^2: 0.80. λ: 12.7/s. (B) Two-arm tau effect (Fig. 1C) [17]. x-axis: IST ratio ( pair 1/pair 2 ). Pair 1 ISTs (from left to
right) were 0.2, 0.35, 0.5, 0.65, and 0.8 s; pair 2 IST = 1.0 s-pair 1 IST. y-axis: ISD ratio ( pair 2/pair 1 ) that resulted in equality of perceived ISDs ( pair 1 l' = pair 2 l' ). Pair 1 ISD was
fixed at 10 cm. R^2: 0.95. λ: 9.4/s. (C) Perceptual merging experiment (Fig. 1D) [18]. R^2: 0.92. λ: 4.2/s. (D) Two-stimulus distance estimation for longitudinally separated stimuli on forearm
(circles) and horizontally separated stimuli on forehead (crosses) at 0.24 s IST (Fig. 1E) [19]. Forearm R^2: 0.94. Forehead R^2: 0.90. Forearm λ: 4.9/s. Forehead λ: 10.5/s. (E) Two-stimulus distance
estimation for longitudinally separated taps to the index finger [20]. Circles: 1.1 s IST; crosses: 26 ms IST. R^2 (1.1 s): 0.94, R^2 (26 ms): 0.90. λ: 85.1/s. (F) Point localization accuracies for
finger, forehead, and forearm [5] plotted against 1/λ (dashed line). R^2: 0.99.
Figure 3A shows rabbit illusion data [12] (Fig. 1A). As predicted by Equation 1, perceived ISD between the second and third taps to the forearm asymptotically approached actual ISD (10 cm) as IST was
Figure 3B shows two-arm tau effect data [17] (Fig. 1C). For each pair-1 to pair-2 IST ratio, the pair-2 ISD was found that was perceptually equal to the fixed, 10-cm pair-1 ISD. In agreement with
Equation 1, relatively shorter pair-2 ISTs (t[1]/t[2]>1) required relatively larger pair-2 ISDs (l[2]/l[1]>1) as the condition for perceptual equality.
Figure 3C shows perceptual merging data [18] (Fig. 1D). At each ISD, the IST was determined for which two electrocutaneous pulses to the forearm became spatially indistinguishable. For modeling
purposes, the assumption was made that this occurs when perceived ISD drops below a threshold value. The data were best fit with a perceived ISD threshold of 0.8 cm, a sensible value given that the
point localization accuracy of the human forearm is approximately 1 cm [5]. As predicted by Equation 1, larger ISDs required shorter ISTs.
Figure 3D shows perceived distance between two electrocutaneous pulses at fixed IST [19] (Fig. 1E). As predicted by Equation 1, perceived and actual ISD correlated linearly. Note also that the
forehead showed less perceptual length contraction than did the forearm (see Lambda Variation below).
Figure 3E shows perceived distance between two taps to the index finger, determined at two ISTs [20]. As predicted by Equation 1, less length contraction occurred at the longer IST, and perceived and
real ISD correlated approximately linearly. The data at the shorter IST suggest a slight nonlinearity, a result predicted by the full Bayesian observer model (below).
Lambda Variation
A small λ results from strong expectation for slow movement (small σ[v]) and/or poor spatial acuity (large σ[s]), either of which facilitates perceptual length contraction (Equation 1). Conversely,
when λ is large, less length contraction occurs. The model's replication of human data shows that the value of λ varies from one body region to another. Length contraction is most pronounced on the
forearm (Figs. 3A–D, average λ: 7.8/s), somewhat less pronounced on the forehead (Fig. 3D, λ: 10.5/s), and least pronounced on the finger (Fig. 3E λ: 85.1/s). Is this variation in λ due to variation
in σ[s], in σ[v], or both?
The value of σ[s] is reflected in the accuracy with which humans localize a single point stimulus, an indicator of tactile acuity that has been mapped throughout the body surface [5]. Therefore, a
linear relation between point localization accuracy and 1/λ would suggest that σ[v] remains constant throughout the body surface, and that λ variation is caused by variation in σ[s]; conversely, a
nonlinear relationship would indicate variation in σ[v].
Figure 3F applies this reasoning to the two studies that reported perceived vs. real distance (Figs. 3D, E). Since these used similar perceptual tasks, differences in λ are attributable primarily to
body region. The model's best-fit λ values for these studies are 4.9, 10.5, and 85.1/s, for forearm, forehead, and finger, respectively. The corresponding point localization accuracies, approximately
1, 0.4, and 0.1 cm [5], indeed correlate linearly with 1/λ (Fig. 3F), strongly suggesting that the low-velocity prior, σ[v], is conserved from one body region to another, and that variation in λ with
body region results from variation in tactile acuity (σ[s]) alone.
Temporal Order Judgment
The mode of the posterior probability distribution is the trajectory that the model “perceives;” however, the mode represents only a single point from the full posterior distribution (Fig. 2B). If
the brain, like the model, could access the full distribution, what sort of additional information would be in its possession?
Access to the full posterior distribution would allow the formulation of probabilistic perceptual inferences, such as the perceived probability that movement occurred in one or the other direction.
This probability is not available from the mode of the posterior distribution, but is readily obtained from the full posterior distribution by integration.
This integration can be viewed as a two-step process. First, the posterior probability distribution (Fig. 2B, lower left) is integrated at each value of velocity (y-axis) across all values of first
stimulus position (x-axis). This yields a posterior probability distribution for velocity (Fig. 4A). Next, the velocity distribution is integrated to the right of zero, yielding the perceived
probability that the velocity was positive, P(v>0).
Figure 4.
Temporal order judgments of the basic Bayesian observer. (A) Posterior probability distributions for velocity, for 4 cm ISD and 0.01 s-0.30 s ISTs, obtained by integrating across the corresponding
2-dimensional posterior probability distributions (e.g., Fig. 2B, lower left). A second integration finds the area under each curve to the right of zero, P(v>0). (B) TOJ curve, plotting P(v>0) from
(A), and additional values for the opposite movement direction (negative x-axis), against IST. (C) Upper panel: TOJ curves for 2 cm to 8 cm ISD, and −80 ms to 80 ms IST. Lower panel: The same curves
plotted with y-axis probit (cumulative normal probability) coordinate spacing. As with human TOJ curves plotted in this manner [26]–[28], these curves are linear. Model parameter values used for all
panels: σ[s], 1 cm; σ[v] , 10 cm/s.
Since positive velocity indicates movement in a particular direction, for instance distally along the forearm (see Fig. 1), P(v>0) represents a graded opinion regarding the direction of motion, or
equivalently, a graded answer to the question: “Which stimulus (distal or proximal) came first?” Interestingly, P(v>0), plotted against IST, (Fig. 4B) resembles a human temporal order judgment (TOJ)
curve, which plots against IST the percent of correct responses to this same question [26]–[28].
It may seem surprising that the basic observer model, which accurately registers the time of occurrence of each stimulus, nevertheless remains uncertain as to stimulus order (0<P(v>0)<1). This
situation arises because, although the model knows when each stimulus occurred, it is uncertain where the stimulus occurred (see Fig. 2B, lower right), and consequently it is uncertain about which
location (e.g., distal or proximal) was stimulated first.
Interestingly, for a given IST, the model grows more confident of stimulus order as ISD increases; equivalently, the model's TOJ threshold [27] or just-noticeable difference [28], the IST at which P
(v>0) = 0.75, decreases with increasing ISD (Fig. 4C). Intriguingly, this influence of ISD agrees qualitatively with results from several human perceptual studies [27]–[29]. For instance, TOJ
thresholds on the thigh decrease by several ms when ISD is doubled from 10 to 20 cm [27].
Also in agreement with human data [26]–[28], the model's TOJ curves (for–0.08 s to 0.08 s IST) are linear when transformed to probit (cumulative normal) coordinates (Fig. 4C, lower). This linearity
arises because the model's posterior probability distribution for velocity maintains a nearly fixed Gaussian shape as it shifts nearly linearly to the right with increasing IST (Fig. 4A, upper three
These points of concordance between human and model TOJ performance suggest that the brain indeed integrates across the full posterior probability distribution. However, more detailed human data are
needed to quantitatively compare to the model's TOJ performance.
Spatial Attention
Figure 2C shows that the basic Bayesian observer perceives the first and second stimulus positions as shifted by equal distances in opposite directions, such that the perceived and actual
trajectories share the same midpoint. In one circumstance, however, this prediction does not match human perception: When instructed to focus their attention on one of the two stimulus locations,
humans report a smaller perceptual shift for taps at that location than at the other. The midpoint of the perceived trajectory thus shifts towards the attended location [12].
This result is reproduced by the basic Bayesian observer if attention directed towards one location reduces spatial uncertainty there (Fig. 5). The modulation of somatosensory cortical neuronal
activity by spatial attention [30]–[32] provides a plausible mechanism for this local refinement of tactile acuity. The influence of spatial attention on the Bayesian observer is graded. The greater
the attentional imbalance between the two locations, the more closely the perceived trajectory midpoint approaches the preferentially attended location (see Materials and Methods).
Figure 5.
Basic Bayesian observer with directed spatial attention. (A) Plot of the same stimuli (filled circles) shown in Figure 2. Attention directed to the location of the second stimulus lowers σ[s2] and
increases σ[s1] (vertical arrows: ±1 σ[s]). The observer considers slow trajectories most probable a priori (red slopes: ±1 σ[v]). (B) Likelihood and posterior distributions in positional trajectory
space (The prior is identical to that shown in Fig. 2). The oval-shaped likelihood distribution results because σ[s1]≠σ[s2]. The mode of the posterior (red dot) shows that the perceived location of
the first stimulus has shifted more than that of the second stimulus, relative to their actual locations (red crosshairs). (C) Actual (filled circles, solid line) and perceived (open circles, dashed
line) trajectories. The midpoint of the perceived trajectory has shifted towards the location of stimulus 2 by 0.3 cm relative to the actual trajectory midpoint. Model parameter values used for all
panels: σ[s1], 1.23 cm; σ[s2], 0.70 cm; σ[v] , 10 cm/s.
Full Bayesian Observer
The basic observer accurately registers the time of occurrence of each stimulus, and therefore perceives IST veridically. However, some studies indicate that perceived IST increases subtly as ISD is
lengthened. For instance, in a point-of-subjective-equality experiment [21], two taps to the forearm at 12 cm ISD, 269 ms IST evoked the same perceived IST as taps at 6 cm ISD, 308 ms IST (Fig. 1F).
This time dilation illusion, the kappa effect [14], [21], has been studied much less extensively than the length contraction illusions considered above, and is reportedly less robust [11].
The kappa effect is reproduced by the full Bayesian observer model, in which tactile sensation suffers from temporal as well as spatial uncertainty (Fig. 6A). The full observer experiences perceptual
time dilation as well as length contraction (Fig. 6B). Furthermore, it experiences increasing time dilation as ISD increases at fixed IST (Fig. 6C), the hallmark of the kappa effect.
Figure 6.
Full Bayesian observer. (A) Two stimuli (filled circles) touch the fingertip in rapid succession. The observer is uncertain as to stimulus location (vertical arrows: ±2 σ[s] for clarity) and time of
occurrence (horizontal arrows: ±1 σ[t] ), and considers slow movement most probable a priori (inset slopes: ±1 σ[v]). (B) Actual (filled circles, solid line) and perceived (open circles, dashed line)
trajectories. Perception underestimates ISD (l' = 0.64 cm <l = 1 cm; vertical bars) and overestimates IST (t' = 40 ms>t = 26 ms; horizontal bars). (C) Perceived IST on finger dilates as ISD increases
from 0–20 mm (solid line; kappa effect). The basic observer, by contrast, perceives IST veridically (dotted line). (D) Time dilation of full observer on forearm for 0–20 cm ISD (solid line).
Perception on finger (C) is reproduced for comparison (dashed line). All panels: IST, 26 ms; σ[t], 5 ms; σ[s] (finger), 1 mm; σ[s] (forearm), 1 cm; σ[v], 4.7 cm/s.
What causes the kappa effect? As ISD is lengthened, the trajectory velocity (slope in Fig. 6A) increases. Like the basic observer, the full observer is inclined by its slow-movement expectation to
perceptually reduce trajectory slope. However, the full observer has not one but two ways to accomplish this. The steeper a line segment, the more efficiently its slope is reduced by horizontal
expansion (time dilation) compared to vertical compression (length contraction). An emergent property of the model, then, is that it relies more heavily on time dilation as ISD increases.
Why has the kappa effect, a time dilation illusion, been more elusive than the rabbit, the tau effect, and other length contraction illusions? The Bayesian observer provides a simple explanation:
Most studies of tactile spatiotemporal illusions, and all studies of the kappa effect, have utilized the forearm. Due to its poor spatial resolution, the forearm is an ideal choice for investigations
of length contraction illusions, but, for the same reason, the model experiences a very small kappa effect on the forearm (Fig. 6D). Where tactile spatial acuity is poor (e.g. forearm; large σ[s]),
length contraction readily reconciles perception with prior expectation. Only where spatial acuity is relatively good (e.g. fingertip; small σ[s]) does time dilation necessarily play a greater role.
The length contraction equation for the full observer is: 2
Equation 2 resembles Equation 1, but substitutes perceived IST, t', for actual IST, t. Because t' increases with l (the kappa effect), Equation 2, unlike Equation 1, predicts a nonlinear relationship
between perceived and real ISD. This nonlinearity will be most pronounced (but still subtle) when the kappa effect is at its strongest; that is, for fast trajectories on body areas with fine tactile
acuity. This prediction is consistent with the subtly nonlinear relationship observed between perceived and real ISD on the fingertip, at 26 ms IST (Fig. 3E, crosses). The full model fits these data
(Fig. 7E, crosses) better than does the basic model, while its perception of slower trajectories on the fingertip (Fig. 7E, circles) and its perception on body areas other than the fingertip (Fig.
7A–D, Fig. 8), is nearly indistinguishable from that of the basic model.
Figure 7.
Human data from five studies and full Bayesian observer's performance on the same tasks. The same five data plots shown in Fig. 3 (symbols) are reproduced here along with performance of the full
model (curves). σ[t] was fixed at 5 ms, σ[s] set to 1 cm (forearm) or 0.1 cm (finger), and the value of λ adjusted in each case to minimize the mean-squared error between model and human performance.
The performance of the full model is very similar to that of the basic model (compare to Fig. 3A–E). However, perception on the finger at 26 ms IST (crosses in E) is better-matched by the nonlinear
performance of the full model (arrow; R^2: 0.95) than by the linear performance of the basic model (R^2: 0.90; compare Fig. 3E).
Figure 8.
Temporal order judgment and spatial attention effects of the full Bayesian observer. (A) TOJ curves for 2 cm to 8 cm ISD, and −80 ms to 80 ms IST, plotted with y-axis probit coordinate spacing
(compare to Fig. 4C lower). Model parameter values used: σ[s], 1 cm; σ[v], 10 cm/s; σ[t], 5 ms. (B) Actual (filled circles, solid line) and perceived (open circles, dashed line) trajectories when the
full observer directs attention to the location of the second stimulus (compare to Fig. 5C). Model parameter values used: σ[s1], 1.23 cm; σ[s2], 0.70 cm; σ[v], 10 cm/s; σ[t], 5 ms.
Perceived Velocity
The perceived velocity evoked by two punctate tactile stimuli has yet to be measured experimentally. The basic Bayesian observer's perceived velocity, v' = l'/t, is given by (see Materials and
This equation shows that perceived velocity underestimates real velocity, v = l/t. Interestingly, the equation also predicts that real and perceived velocities will relate non-monotonically when IST
is reduced at fixed ISD (Fig. 9A). Thus, the Bayesian observer experiences a perceptual speed limit. Perceived velocity, l'/t, initially grows as IST, t, decreases. However, as IST is progressively
reduced, the retarding effect of the consequent length contraction (reduction in l'; Equation 1) counters and eventually overcomes the effect of IST reduction, so that perceived velocity diminishes.
Indeed, perceived velocity peaks at real velocity, v*, given by4
and the maximum perceived velocity, v'[max], equals half v*:5
The full Bayesian observer's perceived velocity, v' = l'/t', peaks similarly, but falls off more slowly than does the basic observer's perceived velocity (Fig. 9B). Once again, this difference
between the two models is most pronounced where tactile acuity is greatest (e.g., the fingertip).
Figure 9.
Velocity perception of the Bayesian observer models. Perceived velocity, v', is plotted against real velocity, v, for the basic (A) and full (B) Bayesian observer models, on both forearm (top panels)
and fingertip (bottom panels). In all cases, real velocity was increased by reducing IST at fixed ISD (4 cm for forearm; 4 mm for fingertip). (A) Basic observer: Perceived velocity, v' = l'/t, was
derived from Equation 3. Real velocity v* = 28.28 cm/s (Equation 4) results in peak perceived velocity v'[max] = 14.14 cm/s (Equation 5). (B). Full observer: Perceived velocity, v' = l'/t', was
determined from Equations 2 and 16, with σ[t] set to 5 ms. Dotted lines in all panels: x = y. (A) and (B): σ[s] was set to 1 cm (forearm) or 1 mm (finger), and σ[v] to 10 cm/s.
Tactile spatiotemporal illusions have long intrigued and puzzled researchers. Perhaps the earliest description was made by Weber, who in 1834 reported that the perceived separation between two fixed
caliper points expands as the points are dragged along the skin from the forearm towards the fingertips [33]. Weber concluded, in agreement with modern studies [19], [20], [34], that distance is
underestimated on skin regions with poor tactile acuity, a phenomenon termed spatial compression by Green [34]. Some 100 years after Weber's publication, Helson [15] described the tau effect, showing
that perceived tactile distance depends on inter-stimulus timing. The rabbit illusion later described by Geldard and Sherrick [10] confirmed the temporal dependence of spatial perception, while the
kappa effect, described concurrently by Cohen and colleagues [24] in vision and Suto [21] in touch, revealed the spatial dependence of temporal perception.
Several clever theoretical explanations have been advanced to account for these illusions. Collyer [35], [36] proposed that the brain expects movement to occur at the same velocity in all segments of
a multi-segment stimulus sequence, and that it adjusts space and time perception accordingly. For instance, the classic tau effect (Fig. 1B) was hypothesized to arise because the brain expects
movement to occur at the same velocity between the first and second, as between the second and third stimulus positions. A related line of reasoning was followed by Jones and Huang [37], who modeled
perceived inter-stimulus distance and time as weighted averages of actual and expected inter-stimulus distance and time, with the expected values derived from a constant velocity assumption. A
different and particularly creative approach was taken by Brigner, who hypothesized that spatiotemporal illusions result from rotation of a perceptual space-time coordinate frame [38], [39]. The
hypothesized transformation achieves spatial and temporal perceptual adjustments in a way that is, roughly, the converse of that shown in Figure 6B: The trajectory line (filled circles) remains
fixed, while the space and time axes rotate together counterclockwise.
None of these interesting explanations has been applied quantitatively to a wide variety of experimental data, and each has shortcomings. Collyer's hypothesis may prove relevant to the perception of
sequences with three or more stimulus locations, but its application to sequences with just two spatial positions, which also produce illusions (e.g., Fig. 1A), is less clear. The weighted average
model proposed by Jones and Wang leaves unanswered the question of how the relative weights are determined, and particularly what mechanism governs their evident dependence on the duration of the
stimulus sequence. Brigner's intriguing proposal is able to explain, at least qualitatively, perceptual illusions evoked by stimuli at just two positions, but how or why the brain would undertake the
proposed coordinate transformation is unclear.
The Bayesian observer model described here provides a coherent explanation for perceptual length contraction and time dilation, and replicates the rabbit illusion, the tau effect, the kappa effect,
and a variety of other spatiotemporal illusions. The results suggest that the brain takes advantage of the expectation for slow speed, presumably based in tactile experience, to improve perception
beyond the limits imposed by spatial and temporal uncertainty inherent in the sensorineural signal.
The Bayesian observer's slow-speed expectation recalls a visual model with that expectation that reproduces contrast effects on motion perception [40]. The remarkable explanatory power of these
models supports Helmholtz's view of perception as a process of unconscious inference, in which “previous experiences act in conjunction with present sensations to produce a perceptual image” [41].
The perceptual space-time distortions that emerge from the Bayesian observer, and characterize human tactile perception, are loosely analogous to the physical length contraction and time dilation
described in the Special Theory of Relativity [22]. I do not attach special significance to this analogy, but note simply that it arises because any postulated constraint on speed naturally yields
distortions of space and/or time.
The Bayesian observer makes several novel testable predictions and suggests many experiments. For example, the model predicts more pronounced time dilation (Fig. 6), as well as less pronounced length
contraction (Fig. 3), on body areas with finer tactile acuity, and it predicts a perceptual speed limit on the velocity evoked by dual punctate stimuli with fixed spacing (Fig. 9). Temporal
perception experiments will determine whether the kappa effect is indeed more pronounced on body areas with finer tactile acuity (Fig. 6D), while velocity perception experiments will provide data for
comparison to the curves shown in Figure 9B. In addition, the model suggests experiments with within-subjects designs to determine the contributions of σ[s] and σ[v] to variation in λ, not only
across body regions (Fig. 3F), but also across perceptual tasks and as a result of perceptual learning. Finally, although designed to model tactile perception, the Bayesian observer may prove
relevant to perception in other sensory modalities that show similar spatiotemporal illusions. For instance, Figure 6D, translated to visual perception, predicts a greater kappa effect for foveal
than peripheral stimulus sequences.
Important work related to the model remains to be done. Experiments are needed to determine the precise shapes of the prior and likelihood distributions assumed by human observers as they perceive
tactile stimulus sequences, as has been done for visual motion perception [42]. The Gaussian priors and likelihoods used in the model may need to be refined as a result of such experiments.
Furthermore, theoretical work is needed to extend the model to treat the perception of more complex punctate stimulus sequences (e.g., [43], [44]), and of smoothly moving objects [45], [46].
Interestingly, humans progressively underestimate the fixed distance traversed by a brush swept briskly across the skin as sweep duration decreases [45], a result in qualitative agreement with
Equation 1.
Finally, research is needed to determine where in the brain the Bayesian probability distributions hypothesized to serve tactile perception are represented, and by what neural mechanism they are
generated. Interestingly, topographically appropriate somatosensory cortical activity accompanies illusory rabbit percepts on the forearm [47]. Research is needed, then, to explore connections
between models of somatosensory cortical function recently proposed to account for the rabbit illusion [14], [48], and hypothesized neural representations of Bayesian probability distributions [49].
Materials and Methods
Basic Model (Fig. 2)
Each candidate trajectory was described by a velocity (slope) m, and first stimulus position (y-intercept), b.
Bayes' theorem relates the posterior probability of the candidate trajectory, given stimulus-evoked neural data, D, P(m,b|D), to the trajectory's prior probability, P(m,b), and likelihood, the
probability of the stimulus-evoked neural data given the trajectory, P(D|m,b):6
The prior, P(m,b), was represented by a Gaussian distribution for trajectory velocity, centered at zero, to reflect the observer's expectation of slow movement. P(m,b) was independent of b, because a
uniform prior (no constraint) was assumed for b:7
The likelihood, P(D|m,b), was represented by the product of two Gaussian likelihoods, representing the probability of the neural data evoked by the first stimulus, given the starting position of the
candidate trajectory, and the probability of the neural data evoked by the second stimulus, given the endpoint of the candidate trajectory. Each likelihood was centered at the actual location of the
corresponding stimulus:8
where x[1] and x[2] represent the actual first and second stimulus positions, respectively; t represents IST; and the standard deviation σ[s] is the same for each likelihood. Actual ISD, l, was x[2]
-x[1], and actual velocity, v, was l/t.
Substituting Equations 7 and 8 into Equation 6 provided an expression for the posterior probability of each candidate trajectory:9
The intensity plots of Fig. 2B, left column, were obtained by computing the values of P(m,b), P(D|m,b), and P(m,b|D) from Equations 7, 8, and 9, respectively, for a range of m and b values, using σ
[s] = 1 cm, and σ[v] = 10 cm/s. The plots in Fig. 2B, right column, were derived numerically from those shown in the left column.
The mode of the posterior was found analytically by setting to zero the partial derivatives of the exponent of Equation 9 with respect to m and b. This resulted in expressions for perceived velocity,
v' (the value of m at the mode of the posterior; Equation 3) and perceived ISD, l' (i.e., v't; Equation 1). The partial derivative of Equation 3 with respect to t was set to zero to derive Equations
4 and 5.
Basic Model with Spatial Attention (Fig. 5)
The basic model was extended to allow σ[s] to take on different values at the two stimulus positions. The prior (Equation 7) was the same as that for the basic model, but the likelihood included
independent spatial uncertainty terms, σ[s1] and σ[s2], representing the standard deviations of the Gaussian likelihoods evoked by the first and second stimuli, respectively. This modification
resulted in the posterior:10
The mode of the posterior was found by setting to zero the partial derivatives of the exponent of Equation 10 with respect to m and b. This resulted in expressions for perceived velocity, v' (the
value of m at the mode of the posterior) and perceived ISD, l' (i.e., v't):11
where the modified λ replaces σ[s] with the root-mean-square of σ[s1] and σ[s2]:
When the spatial uncertainties are equal, Equation 11 reduces to Equation 1.
The value of b at the mode of the posterior (the perceived position of the first stimulus), together with l', was used to calculate the midpoint of the perceived trajectory. The midpoint of the
perceived trajectory was found to be displaced from that of the real trajectory, (x[1]+x[2] )/2, by a distance Δl given by:12
Equation 12 shows that as the difference between σ[s1] and σ[s2] increases, the perceived midpoint more closely approaches the position of the preferentially attended (smaller σ[s] ) location. When σ
[s1] equals σ[s2], the extended basic model reduces to the original basic model, and Δl = 0, indicating that the perceived and real trajectories share the same midpoint.
Full Model (Fig. 6)
The full model admits temporal as well as spatial uncertainty. Each candidate trajectory was described by a velocity, m; a first stimulus position, b; a starting stimulus time, t[1]; and a duration,
τ. As in the basic model, each Gaussian spatial likelihood was centered at the actual location of the corresponding stimulus. In addition, analogous temporal likelihoods were centered at the actual
times of the corresponding stimuli (The actual time of the first stimulus was defined as zero, and that of the second stimulus, as t).
The trajectory likelihood was then:13
As in the basic model, the prior reflected an expectation for slow movement:14
Note that Equation 14 has the same form as Equation 7, reflecting the use of uniform priors for all parameters except velocity.
The posterior, proportional to the product of prior and likelihood, was:15
The mode of the posterior was found by setting to zero the partial derivatives of the exponent of Equation 15 with respect to m, b, t[1], and τ. This resulted in expressions for perceived IST, t'
(the value of τ at the mode of the posterior); perceived velocity, v' (the value of m at the mode of the posterior); and perceived ISD, l' (i.e., v't' ; Equation 2):
The equation relating t to t' was found to be:16
Equation 16 was solved numerically for t', given values for t, l, λ, σ[t] and σ[s]. The equation shows that real IST, t, is less than perceived IST, t'; that is, the model experiences perceptual time
dilation. Note that t' tends towards t in the limit of large σ[s]; that is, relatively little time dilation occurs on areas of skin with poor spatial acuity. Finally, Equation 16 yields t = t' when σ
[t] is set to zero, as the full model then reduces to the basic model, which perceives time veridically.
Data Extraction
The data plotted in Figures 3B and C were taken from Table 1 of reference [17] and Table 3 of reference [18], respectively. The data plotted in Figures 3A, D, and E were extracted from Figure 1 of
reference [12], Figure 1 of reference [19], and Figure 6 of reference [20], respectively, using GraphClick v. 12.9 (Arizona Software).
The author thanks D. Gillespie for many enjoyable conversations and insightful suggestions that helped guide the development of this work, and D. Gillespie, P. Bennett, and P. Goldreich for their
thoughtful comments on the manuscript.
Author Contributions
Conceived and designed the experiments: DG. Performed the experiments: DG. Analyzed the data: DG. Wrote the paper: DG.
Correction to Equation 12
Posted by DGoldreich
|
{"url":"http://www.plosone.org/article/info%3Adoi%2F10.1371%2Fjournal.pone.0000333","timestamp":"2014-04-16T13:12:59Z","content_type":null,"content_length":"192441","record_id":"<urn:uuid:706c4be7-dc6c-496e-a1f0-9b7e0b5e089b>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00212-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Theory of electric networks: the two-point resistance and impedance
Seminar Room 1, Newton Institute
We present a new formulation of the determination of the resistance and impedance between two arbitrary nodes in an electric network. The formulation applies to networks of finite and infinite sizes.
An electric network is described by its Laplacian matrix L, whose matrix elements are real for resistors and complex for impedances. For a resistor network the two-point resistance is obtained in
terms of the eigenvalues and eigenvectors of L, and for an impedance network the two-point impedance is given in terms of those of L*L. The formulation is applied to regular lattices and
non-orientable surfaces. For networks consisting of inductances and capacitances, the formulation predicts the occurrence of multiple resonances.
Related Links
The video for this talk should appear here if JavaScript is enabled.
If it doesn't, something may have gone wrong with our embedded player.
We'll get it fixed as soon as possible.
|
{"url":"http://www.newton.ac.uk/programmes/CSM/seminars/2008042509001.html","timestamp":"2014-04-19T10:03:59Z","content_type":null,"content_length":"7121","record_id":"<urn:uuid:d037c277-c373-41b8-a24e-3eb3c81c00dd>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00449-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Identifying Scalene, Isosceles, and Equilateral Triangles
Triangles are classified according to the length of their sides or the measure of their angles. These classifications come in threes, just like the sides and angles themselves.
The following are triangle classifications based on sides:
• Scalene triangle: A triangle with no congruent sides
• Isosceles triangle: A triangle with at least two congruent sides
• Equilateral triangle: A triangle with three congruent sides
(For the three types of triangles based on the measure of their angles, see the article, Identifying Triangles by Their Angles. )
Because an equilateral triangle is also isosceles, all triangles are either scalene or isosceles. But when people call a triangle isosceles, they’re usually referring to a triangle with only two
equal sides, because if the triangle had three equal sides, they’d call it equilateral. So does this classification scheme involve three types of triangles or only two? You be the judge.
Identifying scalene triangles
In addition to having three unequal sides, scalene triangles have three unequal angles. The shortest side is across from the smallest angle, the medium side is across from the medium angle, and —
surprise, surprise — the longest side is across from the largest angle. The above figure shows an example of a scalene triangle.
The ratio of sides doesn’t equal the ratio of angles. Don’t assume that if one side of a triangle is, say, twice as long as another side that the angles opposite those sides are also in a 2 : 1
ratio. The ratio of the sides may be close to the ratio of the angles, but these ratios are never exactly equal (except when the sides are equal).
If you’re trying to figure out something about triangles — such as whether an angle bisector also bisects (cuts in half) the opposite side — you can sketch a triangle and see whether it looks true.
But the triangle you sketch should be a non-right-angle, scalene triangle (as opposed to an isosceles, equilateral, or right triangle). This is because scalene triangles, by definition, lack special
properties such as congruent sides or right angles. If you sketch, say, an isosceles triangle instead, any conclusion you reach may be true only for triangles of this special type. In general, in any
area of mathematics, when you want to investigate some idea, you shouldn’t make things more special than they have to be.
Identifying isosceles triangles
An isosceles triangle has two equal sides (or three, technically) and two equal angles (or three, technically). The equal sides are called legs, and the third side is the base. The two angles
touching the base (which are congruent, or equal) are called base angles. The angle between the two legs is called the vertex angle. The above figure shows two isosceles triangles.
Identifying equilateral triangles
An equilateral triangle has three equal sides and three equal angles (which are each 60°). Its equal angles make it equiangular as well as equilateral. You don’t often hear the expression equiangular
triangle, however, because the only triangle that’s equiangular is the equilateral triangle, and everyone calls this triangle equilateral. (With quadrilaterals and other polygons, however, you need
both terms, because an equiangular figure, such as a rectangle, can have sides of different lengths, and an equilateral figure, such as a rhombus, can have angles of different sizes.)
|
{"url":"http://www.dummies.com/how-to/content/identifying-scalene-isosceles-and-equilateral-tria.html","timestamp":"2014-04-20T21:11:11Z","content_type":null,"content_length":"54599","record_id":"<urn:uuid:13393c04-77b6-46d7-be53-98977e31852d>","cc-path":"CC-MAIN-2014-15/segments/1397609539230.18/warc/CC-MAIN-20140416005219-00490-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Got Homework?
Connect with other students for help. It's a free community.
• across
MIT Grad Student
Online now
• laura*
Helped 1,000 students
Online now
• Hero
College Math Guru
Online now
Here's the question you clicked on:
what's (e^ix)^2 where x is a variable
• one year ago
• one year ago
Your question is ready. Sign up for free to start getting answers.
is replying to Can someone tell me what button the professor is hitting...
• Teamwork 19 Teammate
• Problem Solving 19 Hero
• Engagement 19 Mad Hatter
• You have blocked this person.
• ✔ You're a fan Checking fan status...
Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy.
This is the testimonial you wrote.
You haven't written a testimonial for Owlfred.
|
{"url":"http://openstudy.com/updates/4fa8fbc7e4b059b524f4d15a","timestamp":"2014-04-21T07:37:56Z","content_type":null,"content_length":"41611","record_id":"<urn:uuid:1acba2cf-c56c-4bf6-8dd9-b9e31d78a348>","cc-path":"CC-MAIN-2014-15/segments/1397609539665.16/warc/CC-MAIN-20140416005219-00089-ip-10-147-4-33.ec2.internal.warc.gz"}
|
st: RE: how to deal with categories?
[Date Prev][Date Next][Thread Prev][Thread Next][Date index][Thread index]
st: RE: how to deal with categories?
From philippe van kerm <philippe.vankerm@ceps.lu>
To "statalist@hsphsun2.harvard.edu" <statalist@hsphsun2.harvard.edu>
Subject st: RE: how to deal with categories?
Date Tue, 3 Jun 2008 15:22:48 +0200
The command -dummieslab- (available on the SSC archive) may be useful to deal with your second question:
'DUMMIESLAB': module to convert categorical variable to dummy variables using label names
dummieslab generates a set of dummy variables from a categorical
variable. One dummy variable is created for each level of the
original variable. Names for the dummy variables are derived from
the value labels of the categorical variable.
To install from within Stata:
ssc install dummieslab
-----Original Message-----
From: owner-statalist@hsphsun2.harvard.edu [mailto:owner-statalist@hsphsun2.harvard.edu] On Behalf Of Andrea Bennett
Sent: Tuesday, June 03, 2008 2:40 PM
To: statalist@hsphsun2.harvard.edu
Subject: st: how to deal with categories?
Dear all,
Right now I am wondering what is the better way to deal with
categorical information.
What is about the best way to implement income groups into a
regression? E.g. as income has (usually) no upper limits, I tend to
generate an interaction term (dummy==1) if the individual is in the
highest income category (0 if else). Further, am I right in the
assumption that building categories is usually not sensible when the
number of observations is high? One issue I face is that very young
adults and very old adults are under-represented in the dataset
(meaning, not that many unique observations for these groups, sample
itself is good). Is there a rule of thumb what would be better,
building categories for all age-classes (increasing observations in
young/old group) or do not build classes at all (having more detailed
info)? It's clearly a trade-off but maybe there's some advice. I tend
not to use categories here, also because age-squared might be
important to have at hand, later.
The "xi" command can help to make life less messy (in large data sets,
I think). But it seems to kill all my value labels in these
categorical groups! I could not find any option to tell "xi" to use
the already defined value labels. Is there a workaround at hand so
that the regression table will instead use the values defined (e.g.
for sex; 0==male, 1==female) as new variable names?
Many thanks for all your inputs,
* For searches and help try:
* http://www.stata.com/support/faqs/res/findit.html
* http://www.stata.com/support/statalist/faq
* http://www.ats.ucla.edu/stat/stata/
* For searches and help try:
* http://www.stata.com/support/faqs/res/findit.html
* http://www.stata.com/support/statalist/faq
* http://www.ats.ucla.edu/stat/stata/
|
{"url":"http://www.stata.com/statalist/archive/2008-06/msg00060.html","timestamp":"2014-04-16T07:22:13Z","content_type":null,"content_length":"8138","record_id":"<urn:uuid:f2f96727-399e-464f-b764-f42bf617df5f>","cc-path":"CC-MAIN-2014-15/segments/1397609521558.37/warc/CC-MAIN-20140416005201-00438-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Spectral Modeling Synthesis
Next | Prev | Up | Top | Index | JOS Index | JOS Pubs | JOS Home | Search
This section reviews elementary spectral models for sound synthesis. Spectral models are well matched to audio perception because the ear is a kind of spectrum analyzer [292].
For periodic sounds, the component sinusoids are all harmonics of a fundamental at frequency :
where denotes time in seconds, is the th harmonic radian frequency, is the period in seconds, is the amplitude of the th sinusoidal component, is its phase, and is the number of the highest audible
Aperiodic sounds can similarly be expressed as a continuous sum of sinusoids at potentially all frequencies in the range of human hearing:^11.6
where denotes the upper bound of human hearing (nominally kHz).
Sinusoidal models are most appropriate for ``tonal'' sounds such as spoken or sung vowels, or the sounds of musical instruments in the string, wind, brass, and ``tonal percussion'' families. Ideally,
one sinusoid suffices to represent each harmonic or overtone.^11.7 To represent the ``attack'' and ``decay'' of natural tones, sinusoidal components are multiplied by an amplitude envelope that
varies over time. That is, the amplitude in (10.15) is a slowly varying function of time; similarly, to allow pitch variations such as vibrato, the phase may be modulated in various ways.^11.8 Sums
of amplitude- and/or frequency-enveloped sinusoids are generally called additive synthesis (discussed further in §10.4.1 below).
Sinusoidal models are ``unreasonably effective'' for tonal audio. Perhaps the main reason is that the ear focuses most acutely on peaks in the spectrum of a sound [179,304]. For example, when there
is a strong spectral peak at a particular frequency, it tends to mask lower level sound energy at nearby frequencies. As a result, the ear-brain system is, to a first approximation, a ``spectral peak
analyzer''. In modern audio coders [16,200] exploiting masking results in an order-of-magnitude data compression, on average, with no loss of quality, according to listening tests [25]. Thus, we may
say more specifically that, to first order, the ear-brain system acts like a ``top ten percent spectral peak analyzer''.
For noise-like sounds, such as wind, scraping sounds, unvoiced speech, or breath-noise in a flute, sinusoidal models are relatively expensive, requiring many sinusoids across the audio band to model
noise. It is therefore helpful to combine a sinusoidal model with some kind of noise model, such as pseudo-random numbers passed through a filter [248]. The ``Sines + Noise'' (S+N) model was
developed to use filtered noise as a replacement for many sinusoids when modeling noise (to be discussed in §10.4.3 below).
Another situation in which sinusoidal models are inefficient is at sudden time-domain transients in a sound, such as percussive note onsets, ``glitchy'' sounds, or ``attacks'' of instrument tones
more generally. From Fourier theory, we know that transients, too, can be modeled exactly, but only with large numbers of sinusoids at exactly the right phases and amplitudes. To obtain a more
compact signal model, it is better to introduce an explicit transient model which works together with sinusoids and filtered noise to represent the sound more parsimoniously. Sines + Noise +
Transients (S+N+T) models were developed to separately handle transients (§10.4.4).
A advantage of the explicit transient model in S+N+T models is that transients can be preserved during time-compression or expansion. That is, when a sound is stretched (without altering its pitch),
it is usually desirable to preserve the transients (i.e., to keep their local time scales unchanged) and simply translate them to new times. This topic, known as Time-Scale Modification (TSM) will be
considered further in §10.5 below.
In addition to S+N+T components, it is useful to superimpose spectral weightings to implement linear filtering directly in the frequency domain; for example, the formants of the human voice are
conveniently impressed on the spectrum in this way (as illustrated §10.3 above) [174].^11.9 We refer to the general class of such frequency-domain signal models as spectral models, and sound
synthesis in terms of spectral models is often called spectral modeling synthesis (SMS).
The subsections below provide a summary review of selected aspects of spectral modeling, with emphasis on applications in musical sound synthesis and effects.
Subsections Next | Prev | Up | Top | Index | JOS Index | JOS Pubs | JOS Home | Search [How to cite this work] [Order a printed hardcopy] [Comment on this page via email]
[Lecture Video] [Exercises] [Examination]
|
{"url":"https://ccrma.stanford.edu/~jos/sasp/Spectral_Modeling_Synthesis.html","timestamp":"2014-04-17T21:25:58Z","content_type":null,"content_length":"26154","record_id":"<urn:uuid:43921bdf-2561-4ed8-bd90-a4e6bbe91049>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00379-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Deflection of beam. Efect of % errors in span and load data.
September 4th 2011, 03:26 AM
Deflection of beam. Efect of % errors in span and load data.
OK so I'm practicsing some of the questions out of my text book for when I start my course in 2 weeks and finding this one pretty difficult, but here it goes.
Bestend Properties is refurbishing an old hospital that was built in the 1950's. The building's main structure is a steel frame. The company's structural engineers are trying to determine the
strength of the existing floor beams by loading them up and measuring the deflection.
The Formula that relates to deflection to the strength and loadings on the beam is:
E - Is Young's Modulus (strength of material)
w - Is the measured uniformly distributed load on the beam
l - Is the moment of inertia of the beam (a constant due to the cross sectional shape)
L - Is the measured span of the beam.
y - Is the deflection of the loaded beam
After the on-site testing was done, it was discovered that some of the measuring equipment was wrongly calibrated. The errors that were discovered were as follows:
The span L was 5% too long.
The load w was 3% too low.
Using you knowledge of binomial theory, produce a formula to show the net percentage effect of these calibration errors on the value of E (Young's Modulus).
I'm not even sure where to start, if anyone could help me I would really appreciate it.
September 4th 2011, 10:05 AM
Re: Can someone help me with this, I have no clue?
Hi Chip6891,
I think you would do this beam to beam and calculate wrong values then right values. Find % difference. You need size and weight of beam, span , uniform load in kips and midpoint deflection in
inches.Also the moment of inertia of cross section
September 4th 2011, 02:44 PM
Re: Can someone help me with this, I have no clue?
There are several methods for dealing with this sort of problem, but since the binomial theorem is mentioned I suppose that it has to be this one ...
Replace w with w + $\delta$w and L with L + $\delta$L. Those are the 'original' values together with there respective (actual) errors.
As a result of these changes, E will change to E + $\delta$E.
Now multiply out the RHS (using the binomial theorem for the L to the power four term), retaining the term for E and first order terms only. (i.e ignore any terms involving the products of two or
more $\delta$'s.)
The E's on both sides will cancel.
Now divide both sides of the resulting equation by 100E (replacing E with its original expression on the RHS) and you should arrive at a result that says that the percentage error in E is
approximately equal to the sum of the percentage error in w plus 4 times the percentage error in L.
September 5th 2011, 05:26 AM
Re: Deflection of beam. Efect of % errors in span and load data.
Hi, thanks for getting back.
So when you say "Replace w with w + http://www.mathhelpforum.com/math-he...7d1eba1e18.pngw and L with L + http://www.mathhelpforum.com/math-he...e15357ad29.pngL."
Should step one in solving the equation now look like this?
E=5(w + http://www.mathhelpforum.com/math-he...7d1eba1e18.pngw)(L + http://www.mathhelpforum.com/math-he...e15357ad29.pngL^4)/384ly
or is this wrong? It's ok maths isn't my strongest point :(
September 5th 2011, 08:41 AM
Re: Deflection of beam. Efect of % errors in span and load data.
Hi Chips891,
I thought about my first answer and I see a simplification.
E= 5 wl* l^3/ 384 I y where wl is uniform load in kips ( lb per inch) y is deflection in inches
E1/E2 = wl1 (l1)^3/wl2 (l2)^3
wl1/wl2 =1.05 (l1)^3 /(l2)^3= (.95)^3
E2/E1= 1.05* (.95)^3 = .88 so the first E would be lowered by 12%
September 5th 2011, 08:43 AM
Re: Deflection of beam. Efect of % errors in span and load data.
The first line should look like this ...
E + $\delta$E = 5(w + $\delta$w)(L + $\delta$L $)^4$/384ly
The changes in w and L bring about a change $\delta$E in E.
September 5th 2011, 09:09 AM
Re: Deflection of beam. Efect of % errors in span and load data.
Hi BobP,
The formula for the deflection of a beam is as given by theOP and is used to calculate this amount using 29* 10^6 psi as E. It can be used to calculate E given a deflection by inserting the other
values.In addition to the simplification I show I applied this to a real beam with load and span.If you have an interest I would post it
September 5th 2011, 10:01 AM
Re: Deflection of beam. Efect of % errors in span and load data.
OK, so I have done that.
So for the next step do I have to use Binomial Theorem and expand (L+&L)^4 ?
September 5th 2011, 10:23 AM
Re: Deflection of beam. Efect of % errors in span and load data.
Hi Chip6891,
What does your book ask you to do? Are you asked for a statistical analysis after testing all beams in the structure? What course are you going to take?
September 5th 2011, 10:36 AM
Re: Deflection of beam. Efect of % errors in span and load data.
Hi bjh
I'm starting the BTEC National in Construction in about 2 weeks time. I'm doing a practice question out of the BTEC National in Construction and Civil Engineering text book, its a distinction
The question is asking me to produce a formula using Binomial Theorem to show the net percentage of the errors to the origional question asked above. I've never had to learn maths on this scale
before, but I've got to learn it for this course if I want to get the distinction questions.
September 5th 2011, 05:24 PM
Re: Deflection of beam. Efect of % errors in span and load data.
Hello again Chip6891,
I read what I could find on line for BTECand see that it is a2year course in Construction and Civil Engineering.I don't understand what the Binomial Theorem would have to do with this.If it said
use basic algebra I have done that.I did make a mistake in post 5.E1 in this question has a higher modulus than E2(post revised)where E2 is the one with corrections.
It is very important in similar problems that all units are consistant.
September 6th 2011, 05:59 AM
Re: Deflection of beam. Efect of % errors in span and load data.
Hi bjh
Yea to be honest I'm a little stumped to how to apply binomial theorem to this equation.
I'm only doing one year of the BTEC Natational as a bridge to HNC in construction. This is because I am already a qualified tradesmen (Carpenter) to level 3. Also because of this I don't think I
have to do the maths unit, but I thought it would help as this leads to more professional positions in construction.
It does show me a worked example in the textbook which I do not fully understand.
Find the approximate percentage error in the calculated volume of a right circular cone if the radius is taken as 2% too small and the hight as 3% too large.
Volume (V) of a right cone = 1/3(Pir^2h)
The error in height (http://latex.codecogs.com/png.latex?%5Cdeltah) = h3/100
The error in radius (http://latex.codecogs.com/png.latex?%5Cdeltar) = r2/100
V+http://latex.codecogs.com/png.latex?%5CdeltaV = 1/3(Pi(r-r*2/100)^2(h+h*3/100)
V+http://latex.codecogs.com/png.latex?%5CdeltaV = 1/3(Pir^2h(1-2/100)(1-3/100))
Since the errors in height and its radius are small when compared to the origional lengths, we can approximate using binomial theory that:
V+http://latex.codecogs.com/png.latex?%5CdeltaV = 1/3Pir^2h(1-2*2/100)(1+3/100)
Multiplying out the brackets:
V+http://latex.codecogs.com/png.latex?%5CdeltaV = 1/3Pir^2h(1-1/100) approximate, ignoring the last small term.
V+http://latex.codecogs.com/png.latex?%5CdeltaV = V(1-1/100)
http://latex.codecogs.com/png.latex?%5CdeltaV = -V(1/100)
Therefore. for a right circular cone with a radius taken as 2% too small and a height taken as 3% too large, the calculated volume will be 1% too small.
September 6th 2011, 07:01 AM
Re: Deflection of beam. Efect of % errors in span and load data.
We seem to have two threads running wrt this problem, I confess to knowing very little about the 'practical' side of the problem, I'm simply looking at it from a maths point of view.... given
percentage changes in the values of w and L, derive a formula for the resulting percentage change in E.
The binomial expansion isn't difficult when you see the pattern, look at the expansions for 2, 3 and 4.
$(a+b)^2 = a^2 + 2ab + b^2,$
$(a+b)^3 = a^3 + 3a^2b+3ab^2+b^3,$
$(a+b)^4 = a^4 + 4a^3b+6a^2b^2+4ab^3+b^4.$
You get the coefficients from Pascal's triangle.
If you see the pattern and you have Pascal's triangle it's easy to write out the expansions for 5, 6, ...
Only the first two terms in the expansion of (L + $\delta$L $)^4$ are required, because we are going to ignore terms involving the products of two $\delta$'s. (We assume that one $\delta$ is
small so that the product of any two of them will be small enough to be negligible.)
The next step is to multiply out the brackets on the RHS.
September 6th 2011, 07:02 AM
Re: Deflection of beam. Efect of % errors in span and load data.
Hi Chip,
I agree with your answer which I calculated using basic algebra.Good luck with your course and future in construction
September 11th 2011, 08:36 AM
Re: Deflection of beam. Efect of % errors in span and load data.
Hi guys
Well anyways I had a little time the weekend to play around with what you have told me, and this is what I have got so far to the original question, I'm just wondering if it is right?
E+&E = (w+&w)(L+&L)^4/384ly
E+&E = (w+&w)(L^4+4L^3&L)/384ly
E+&E = (w+&w*3/100)(L^4+4L^3*5/100)
E+&E = [5wL^4(&w*3/100)(4L^3*5/100)]
E+&E = [5wL^4(1*3/100)(12*5/100)]
I was just wondering if this was right so far, and if so what would I have to do next? I'm guessing I have to multiply out the brackets, but would I have to ignore any terms?
|
{"url":"http://mathhelpforum.com/math-topics/187256-deflection-beam-efect-errors-span-load-data-print.html","timestamp":"2014-04-16T11:51:24Z","content_type":null,"content_length":"24249","record_id":"<urn:uuid:32f9908b-06a3-4f97-bed5-80584ea8fa2d>","cc-path":"CC-MAIN-2014-15/segments/1397609523265.25/warc/CC-MAIN-20140416005203-00154-ip-10-147-4-33.ec2.internal.warc.gz"}
|
[R] converting coordinates from utm to longitude / latitude
Werner Wernersen pensterfuzzer at yahoo.de
Wed Aug 20 00:08:56 CEST 2008
It would be nicer to convert directly the entire shapefile object to long/lat coordinates but if that is not possible, I will convert the other points to UTM.
Hence, I am playing around with rgdal.
SP <- SpatialPoints(cbind(32.29252, -0.3228500),
spTransform(SP, CRS("+proj=utm +zone=36"))
> spTransform(SP, CRS("+proj=utm +zone=36"))
coords.x1 coords.x2
[1,] 421274.4 -35687.37
Coordinate Reference System (CRS) arguments: +proj=utm +zone=36 +ellps=WGS84
This result corresponds with what I get when using convUL() but my map of that area in UTM coordinates does not extend to the negative.
An external program converts the point to x=420994 y=9964407 which also seems correct with respect to the map. Fore sure, I am using the function wrongly somehow. Can anyone give me a hint?
That's very much appreciated!
----- Ursprüngliche Mail ----
Von: Werner Wernersen <pensterfuzzer at yahoo.de>
An: r-help at stat.math.ethz.ch
Gesendet: Dienstag, den 19. August 2008, 20:28:29 Uhr
Betreff: converting coordinates from utm to longitude / latitude
is there a function in R to convert data read with read.shape and which is originally in UTM coordinates into longitude / latitude coordinates?
I found the convUL() function from the PBSmapping package but I have no idea how I could apply that to the read.shape object.
Many thanks,
fügt über einen herausragenden Schutz gegen Massenmails.
ragenden Schutz gegen Massenmails.
More information about the R-help mailing list
|
{"url":"https://stat.ethz.ch/pipermail/r-help/2008-August/171415.html","timestamp":"2014-04-19T09:59:42Z","content_type":null,"content_length":"4609","record_id":"<urn:uuid:b0558aee-4000-4040-9e3f-562a70d681f3>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00375-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Wolfram Demonstrations Project
Comparing Basic Numerical Integration Methods
Numerical integration methods are used to approximate the area under the graph of a function over an interval . Select a function and a method to visualize how the area is being approximated. Then
increase the number of equal-width subintervals to see that more subintervals lead to a better approximation of the area. The effectiveness of various methods can be compared by looking at the
numerical approximations and their associated errors. In particular, you can investigate how doubling the number of subintervals impacts the error. To use the magnifying tool, click anywhere in the
graph for a closer look.
|
{"url":"http://demonstrations.wolfram.com/ComparingBasicNumericalIntegrationMethods/","timestamp":"2014-04-17T06:45:15Z","content_type":null,"content_length":"43752","record_id":"<urn:uuid:3faeb3a0-ce2a-4dfe-a2b5-93cb363e7f02>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00406-ip-10-147-4-33.ec2.internal.warc.gz"}
|
David M. Bressoud May, 2006
C.1: Develop mathematical thinking and communication skills
Courses designed for mathematical sciences majors should ensure that students
• ...
• Gain experience in careful analysis of data;
• ...
This month, I have chosen to address the second bullet of CUPM’s elaboration of what it means to develop mathematical thinking and communication skills. It is a point that will reappear in
recommendation C.3 where the assertion is made that students need to be aware of mathematics as stochastic as well as mathematics as deterministic. In this recommendation, the phrase “analysis of
data” was chosen intentionally over “statistics” to make it very clear that a course in probability would not be an acceptable substitute.
There are two strands to what it means to “gain experience in careful analysis of data.” The first is that majors in the mathematical sciences should be particularly adept at quantitative reasoning
(QR). It may appear that this comes automatically with a mathematics major, but that is not necessarily the case. Over five years of refining our program in QR, Macalester College has identified six
skill sets that mark what we mean by basic QR. Students should be able to
• describe the world quantitatively,
• evaluate sources and quality of data,
• distinguish association from causation,
• understand trade-offs,
• understand uncertainty and risk,
• use estimation and modeling to evaluate claims and test theories.
At many institutions, a major in mathematics by itself provides no guarantee that a student can do any of these. A good course in statistics should address these goals. A description of what
constitutes a good course in introductory statistics is available in the American Statistical Association’s GAISE Report [1].
The second strand is more directly related to what our majors need as future mathematicians. Information technology is shaping the direction of mathematics for the 21st century, the development of
Google’s search engine being one of many illustrations of this. Understanding stochastic processes and how to deal with large amounts of data are an essential part of mathematics, whether this is for
the student going directly to work as an “analyst” for a large corporation, for the student who will hone mathematical skills in order to bring them to bear on financial, health-related, or
environmental problems, or for the student pursuing a doctorate who might one day tackle the complex mathematical problems now being generated by Biology, Chemistry, Economics, and Physics.
For mathematicians to sideline statistics would be as serious a mistake as that made by philosophers when they decided to exclude psychology, a field that was growing within their own discipline.
Philosophy would not have been subsumed by its child. Rather, it could have been reinvigorated. Philosophy is poorer today for the chasm that often lies between these disciplines.
Statistical thinking is part of mathematical thinking. Its importance is real and growing. It needs to be addressed intentionally and specifically.
[1] Guidelines for Assessment and Instruction in Statistics Education (GAISE), American Statistical Association, 2004. www.amstat.org/education/gaise/
Do you know of programs, projects, or ideas that should be included in the CUPM Illustrative Resources?
Submit resources at www.maa.org/cupm/cupm_ir_submit.cfm.
We would appreciate more examples that document experiences with the use of technology as well as examples of interdisciplinary cooperation.
││David Bressoud is DeWitt Wallace Professor of Mathematics at Macalester College in St. Paul, Minnesota, he was one of the writers for the Curriculum Guide, and he currently serves as Chair of the │
││CUPM. He wrote this column with help from his colleagues in CUPM, but it does not reflect an official position of the committee. You can reach him at bressoud@macalester.edu. │
|
{"url":"http://www.macalester.edu/~bressoud/pub/launchings/lccgc1b.html","timestamp":"2014-04-21T07:08:31Z","content_type":null,"content_length":"7686","record_id":"<urn:uuid:9cad9bab-28d2-4e4b-b02d-cf4014c1777f>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00547-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Fermat's Principle
Fermat's Principle:Reflection
Fermat's Principle: Light follows the path of least time. The law of reflection can be derived from this principle as follows:
The pathlength L from A to B is
Since the speed is constant, the minimum time path is simply the minimum distance path. This may be found by setting the derivative of L with respect to x equal to zero.
This derivation makes use of the calculus of maximum-minimum determination, the derivative of a square root, and the definitions of the triangle trig functions.
Reflection concepts
Mirror concepts
|
{"url":"http://hyperphysics.phy-astr.gsu.edu/HBASE/phyopt/fermat.html","timestamp":"2014-04-19T09:25:44Z","content_type":null,"content_length":"5090","record_id":"<urn:uuid:30d56a56-f950-401a-b4e7-235f56b9ee8f>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00323-ip-10-147-4-33.ec2.internal.warc.gz"}
|
C# Sorting Question
Stack Overflow is a question and answer site for professional and enthusiast programmers. It's 100% free, no registration required.
• I wonder what is the best C# data structure I should use to sort efficiently?
• Is it List or Array or what?
• And why the standard array [] does not implement sort method in it?
up vote 1 down vote favorite Thanks
c# optimization sorting performance
add comment
Both Lists and Arrays can be sorted, as can other data structures.
The efficiency of the sort depends on what data you are sorting, and what sorting algorithm you are using.
up vote 2 down vote accepted
This site gives a good demonstration of different sorting algorithms (click on the green arrrows to start the demonstration).
add comment
Both Lists and Arrays can be sorted, as can other data structures.
The efficiency of the sort depends on what data you are sorting, and what sorting algorithm you are using.
This site gives a good demonstration of different sorting algorithms (click on the green arrrows to start the demonstration).
Sorting efficiency depends on the amount of data and the sort rules. There is no one definite answer to your question.
up vote 2 down vote The array[] class does implement sort. See here. It is a static method to be called as Array.Sort.
add comment
Sorting efficiency depends on the amount of data and the sort rules. There is no one definite answer to your question.
The array[] class does implement sort. See here. It is a static method to be called as Array.Sort.
Use a generic list, it has a built in Sort
up vote 1 down If you are looking for an answer on which is better in your situation then my suggestion would be to run some benchmark testing against sample data you would be expecting to
vote sort....then make a decision.
add comment
Use a generic list, it has a built in Sort
If you are looking for an answer on which is better in your situation then my suggestion would be to run some benchmark testing against sample data you would be expecting to sort....then make a
The default implementation is quick sort for containers like List<>. Quick sort is about O(n * Ln(n)). For most cases it's a good choice. It is a little bit slower on small amounts of
data then O(n * n) algorithms and sometimes not so good on special types of data where you'd better use special sort algorithms (suppose you can implement them by yourself or use 3-d
up vote 0 party framework)
down vote
add comment
The default implementation is quick sort for containers like List<>. Quick sort is about O(n * Ln(n)). For most cases it's a good choice. It is a little bit slower on small amounts of data then O(n *
n) algorithms and sometimes not so good on special types of data where you'd better use special sort algorithms (suppose you can implement them by yourself or use 3-d party framework)
Actually, If list changes a lot, perhaps you should keep it ordered with a SortedList.
up vote 0 down vote If sorting happens only once. And then, the reallity: if you are sorting in-memory, any O(n * Ln(n)) will suffice. You will notice no difference between List or Array or whatever.
add comment
Actually, If list changes a lot, perhaps you should keep it ordered with a SortedList.
If sorting happens only once. And then, the reallity: if you are sorting in-memory, any O(n * Ln(n)) will suffice. You will notice no difference between List or Array or whatever.
|
{"url":"http://stackoverflow.com/questions/2674573/c-sharp-sorting-question","timestamp":"2014-04-23T10:50:00Z","content_type":null,"content_length":"83529","record_id":"<urn:uuid:053f8c29-69f3-456c-bac6-aca9a66da4a7>","cc-path":"CC-MAIN-2014-15/segments/1398223202457.0/warc/CC-MAIN-20140423032002-00472-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Author Message
Joined: 02 Feb 2013
35% (medium)
Posts: 22
Question Stats:
Location: India
Operations, Strategy (01:36) correct
GMAT 1: 690 Q47 V38 10% (00:00)
GMAT 2: 720 Q48 V41
GPA: 3.2
based on 10 sessions
WE: Programming If x is not equal to 0 and xy=1, then which of the following must be true?
(Computer Software) I. x=1
II. x=1 and y=0
Followers: 0 III. x=1 or y=0
Kudos [?]: 5 [0], I only
given: 17 II only
III only
I and III only
NoneNotice that if x=−1 and y is any even number, then (−1)even=1, thus none of the options must be true.
The correct answer is Eoption C seems to make perfect sense here though. Could someone please elaborate on this please ?
Spoiler: OA
This post received
Expert's post
abilash10 wrote:
If x is not equal to 0 and xy=1, then which of the following must be true?
I. x=1
II. x=1 and y=0
III. x=1 or y=0
I only
II only
III only
I and III only
Notice that if x=−1 and y is any even number, then (−1)even=1, thus none of the options must be true.
The correct answer is E
option C seems to make perfect sense here though. Could someone please elaborate on this please ?
If x is not equal to 0 and x^y=1, then which of the following must be true?
I. x=1
II. x=1 and y=0
III. x=1 or y=0
A. I only
Bunuel B. II only
Math Expert C. III only
Joined: 02 Sep 2009 D. I and III only
Posts: 17283 E. None
Followers: 2867 Notice that if x=-1 and y is any even number, then
Kudos [?]: 18328 [1] (-1)^{even}=1
, given: 2345
, thus none of the options must be true.
Answer: E.
As for your doubt:
Option C is III only: x=1 or y=0. Now, if x=-1 and y=2 (
), then neither x is 1 not y is 0, thus this option is not necessarily true.
Hope it's clear.
NEW TO MATH FORUM? PLEASE READ THIS: ALL YOU NEED FOR QUANT!!!
PLEASE READ AND FOLLOW: 11 Rules for Posting!!!
RESOURCES: [GMAT MATH BOOK]; 1. Triangles; 2. Polygons; 3. Coordinate Geometry; 4. Factorials; 5. Circles; 6. Number Theory; 7. Remainders; 8. Overlapping Sets; 9. PDF of Math
Book; 10. Remainders; 11. GMAT Prep Software Analysis NEW!!!; 12. SEVEN SAMURAI OF 2012 (BEST DISCUSSIONS) NEW!!!; 12. Tricky questions from previous years. NEW!!!;
COLLECTION OF QUESTIONS:
PS: 1. Tough and Tricky questions; 2. Hard questions; 3. Hard questions part 2; 4. Standard deviation; 5. Tough Problem Solving Questions With Solutions; 6. Probability and
Combinations Questions With Solutions; 7 Tough and tricky exponents and roots questions; 8 12 Easy Pieces (or not?); 9 Bakers' Dozen; 10 Algebra set. ,11 Mixed Questions, 12
Fresh Meat
DS: 1. DS tough questions; 2. DS tough questions part 2; 3. DS tough questions part 3; 4. DS Standard deviation; 5. Inequalities; 6. 700+ GMAT Data Sufficiency Questions With
Explanations; 7 Tough and tricky exponents and roots questions; 8 The Discreet Charm of the DS ; 9 Devil's Dozen!!!; 10 Number Properties set., 11 New DS set.
What are GMAT Club Tests?
25 extra-hard Quant Tests
Joined: 02 Feb 2013
Posts: 22 makes sense. Thanks !
Location: India
Operations, Strategy
GMAT 1: 690 Q47 V38
GMAT 2: 720 Q48 V41
GPA: 3.2
WE: Programming
(Computer Software)
Followers: 0
Kudos [?]: 5 [0],
given: 17
|
{"url":"http://gmatclub.com/forum/m28-155728.html?sort_by_oldest=true","timestamp":"2014-04-16T16:18:31Z","content_type":null,"content_length":"160229","record_id":"<urn:uuid:d6a9dcfd-405d-4bc2-98b6-d619985e2e2e>","cc-path":"CC-MAIN-2014-15/segments/1397609524259.30/warc/CC-MAIN-20140416005204-00407-ip-10-147-4-33.ec2.internal.warc.gz"}
|
[plt-scheme] Help: Uber newb
From: Todd O'Bryan (toddobryan at gmail.com)
Date: Tue Jan 8 23:10:08 EST 2008
I think this assignment is so tough because it looks like a string
concatenation problem and most students don't make the leap to realize that
it's actually a place-value problem. I've had students write all the
examples and still not get the problem.
My first hint is, "Remember when you wrote expanded notation in elementary
school?" This works for about 15% of them.
I then mention place-value and, if that doesn't work, I ask them which place
each digit in the function call ends up in. Usually, at some point, they
have a light-bulb moment and are kind of embarrassed. This is one that some
students never "get" until you tell them, but all of them understand after
they see it.
On Jan 8, 2008 10:23 PM, John Clements <clements at brinckerhoff.org> wrote:
> On Jan 8, 2008, at 7:08 PM, William Stanley wrote:
> > Exercise 2.2.4. Define the program convert3. It consumes three
> > digits, starting with the least significant digit, followed by the
> > next most significant one, and so on. The program produces the
> > corresponding number. For example, the expected value of
> > (convert3 1 2 3)
> > is 321. Use an algebra book to find out how such a conversion works.
> >
> >
> > Okay I had gone past this exercise but as I couldn't figure it out
> > it was bugging me... and I must say its been a while since I
> > regularly used algebra but after racking my brain for a while and
> > googling for even longer I throw myself at your mercy... I am sure
> > it is rather simple but I am stumped... sometimes I wish I could
> > peek at those answers... even just to see if I got it right.
> >
> >
> > Travis Stanley
> I hate to be the design-recipe-bot, but... have you tried following
> the design recipe? What are the examples you came up with? Are
> there simple examples that you know how to do?
> John Clements
> _________________________________________________
> For list-related administrative tasks:
> http://list.cs.brown.edu/mailman/listinfo/plt-scheme
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.racket-lang.org/users/archive/attachments/20080108/24cd5eba/attachment.html>
Posted on the users mailing list.
|
{"url":"http://lists.racket-lang.org/users/archive/2008-January/022299.html","timestamp":"2014-04-18T18:15:37Z","content_type":null,"content_length":"7926","record_id":"<urn:uuid:05973c21-b5c8-458c-a087-695c86b349b7>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00477-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Finite extensions of field of rational functions in one variable
up vote 1 down vote favorite
Let K=F(x), where x is transcendental over F and F is an algebraically closed field. Does there exist a non-commutative division algebra L with the center K and [L : K] < infinity?
I think, but I'm not sure, that an old result due to Tsen implies that the answer is no? I'd like to know if there's another way, other than applying Tsen's theorem, to prove this? Thanks.
1 Serre's Galois cohomology book gives an elegant purely algebraic treatment of behavior of cohomological dimension with respect to the formation of rational function fields, and the relation
between vanishing of Brauer groups and low cohomological dimension. In particular, it gives a purely algebraic proof of Tsen's theorem, so not sure what more you could want in the direction of a
"more algebraic" proof. – Boyarsky Jun 13 '10 at 18:58
That's great, thanks. I'll take a look at the book. – carlos Jun 13 '10 at 19:09
Another reference is Pierce's Associative Algebras. – Robin Chapman Jun 13 '10 at 19:52
add comment
2 Answers
active oldest votes
Yes, you are right to assume that there are no such division algebras. And your plan of proving that is correct: the norm function (a polynomial of degree $n$ in $n^2>n$ variables) will
up vote 1 vanish at some point because of Tsen's theorem. And Tsen's theorem can be seen as quite algebraic, so maybe you can elaborate on what you would consider a less geometric proof?
down vote
Thank you very much. What I meant was that if there's another way other than applying Tsen's theorem to solve the problem. – carlos Jun 13 '10 at 19:12
Then I'd strongly recommend you to rewrite the title of the question. Maybe "How to prove that Br(F(t))=0 without Tsen's theorem?", or something like that. This way you stand a better
chance to attract the attention of someone who actually knows the answer... – Vladimir Dotsenko Jun 13 '10 at 20:24
add comment
The answer is yes, though. K is not algebraically closed. You may mean something else.
up vote 0 down
Sorry, I meant L is a skew field. I fixed the mistake. – carlos Jun 13 '10 at 18:52
en.wikipedia.org/wiki/Tsen%27s_theorem and references. The Brauer group is indeed trivial. I suppose this is proved in geometric analogues of class field theory. – Charles
Matthews Jun 13 '10 at 18:58
add comment
Not the answer you're looking for? Browse other questions tagged fields or ask your own question.
|
{"url":"http://mathoverflow.net/questions/28051/finite-extensions-of-field-of-rational-functions-in-one-variable","timestamp":"2014-04-18T08:04:22Z","content_type":null,"content_length":"61257","record_id":"<urn:uuid:94ec0518-1a91-470e-b75b-0ea20cce15fd>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00498-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Theory mainly concerned with $\lambda$-calculus?
up vote 3 down vote favorite
Automata theory is mainly concerned with Turing machines and all its relatives-in-spirit. $\lambda$-calculus is rather rarely mentioned in textbooks on automata theory.
What's the common name of the theory mainly concerned with $\lambda$-calculus and its relatives? (I think, "mathematical logic", "computability theory", "programming language theory" and "recursion
theory" are too general, compared to "automata theory". But there should be an "$\lambda$-theory", shouldn't it?)
soft-question lo.logic lambda-calculus automata-theory
1 What's wrong with just calling it "lambda calculus"? – Qiaochu Yuan Jan 15 '10 at 12:25
2 Because it's too specific: I think of "lambda calculus and its relatives". And "lambda" is just an artificial word, other than "automata", "computability", and so on. – Hans Stricker Jan 15 '10 at
Maybe one could call it "function theory" if this name wasn't already in use. – Hans Stricker Jan 15 '10 at 12:31
Anything wrong with "functional programming" besides the fact that it's a computer science term? – François G. Dorais♦ Jan 15 '10 at 13:59
That would never stick unless there's another good reason. Besides, the schism between cs and math is very recent, I would contend that "functional programming" is actually a math term,
2 historically speaking. More importantly, it would be wrong to use a term different than those who use it most, namely theoretical computer scientists, who are very competent mathematicians by the
way. – François G. Dorais♦ Jan 15 '10 at 14:22
show 5 more comments
5 Answers
active oldest votes
"combinatory logic"
up vote 4 down vote
1 I should have known this, since I'm a long-time fan of Raymond Smullyan. How could I have overseen this! Thanks. (To say the least: it's an established name.) – Hans
Stricker Jan 15 '10 at 16:10
No problem. I've been using lambda calculus in various forms for twelve years now and it wasn't until recently that I felt I really understood the connection to
combinators. – Adam Jan 15 '10 at 20:57
add comment
In recent years almost anything I have read about lambda calculus has been about typed lambda calculus. Broadly speaking, I think computer scientists would say that these papers were
part of the field known as Type Theory.
up vote 6 down If that doesn't quite fit what you want I'd suggest reading the PLT article on Wikipedia.
For example, there is a family of lambda calculi that can be arranged in what is known as the lambda cube. The wikipedia article starts "In mathematical logic and type theory..."
2 Should qualify: constructive type theory. – Charles Stewart Jan 15 '10 at 20:34
add comment
I don't know of one that seems sufficiently general. The theory's at an intersection:
1. It (in its untyped guise) is one of the four most important Turing-complete computation systems;
2. It is algebraically natural, connected fundamentally to Cartesian-closed categories (though with horrid baggage around $\alpha$-equivalence);
3. It is a foundational theory, possibly the theory that best captures the notion of schematic function; and
4. It plays an important role in philosophical logic, due to its link to natural deduction, which is among the few treatments of formal logic that is relevant to the actuality of
up vote 4 down how we reason.
Maybe a name formed out of keywords from several of these domains would give a suitable term? How about Cartesian function-calculus?
add comment
Untyped Lambda Calculus is part of the Recursion Theory, I would say. Typed Lambda Calculus is Type Theory, and is connected with constructive mathematics.
up vote 2 The Lambda-Calculus is a concept that can be applied to many parts of mathematics, so there are few books especially about Lambda-Calculus (Imho lambda calculus : logic = measure theory
down vote : analysis, sort of). And the ones I know (i.e. Barendregt, etc.) are just referring to it as "Lambda Calculus".
add comment
The lambda calculus is considered one of the fundamental topics in Theoretical Computer Science. You can study it from many points of view:
1. If you study its denotational semantics, then you use algebra and category theory.
2. If you study its operational semantics, then you mainly use rewriting theory.
3. If you study logic, then you will be interested in its type system. You may also be interested in the related system of combinatory logic.
up vote 0 down vote 4. If you are a functional programmer, then you are using lambda calculus as a Turing-complete formalism.
5. If you study the foundations of mathematics, then the lambda calculus is related to many foundational systems: Illative combinatory logic, Map Theory, Type Theory, ...
However the term "lambda-theory" is already reserved for a very specific concept: a lambda-theory is a congruence on the set of lambda-terms that contains alpha-beta conversion.
add comment
Not the answer you're looking for? Browse other questions tagged soft-question lo.logic lambda-calculus automata-theory or ask your own question.
|
{"url":"https://mathoverflow.net/questions/11845/theory-mainly-concerned-with-lambda-calculus/11861","timestamp":"2014-04-18T11:13:20Z","content_type":null,"content_length":"76796","record_id":"<urn:uuid:328d6557-62ff-4eec-9008-f06dffcf2324>","cc-path":"CC-MAIN-2014-15/segments/1398223206672.15/warc/CC-MAIN-20140423032006-00384-ip-10-147-4-33.ec2.internal.warc.gz"}
|
DSpace Community: 10. PatentsThe Community's search engineDriving method of driving a liquid crystal display elementA method of driving display element and its driving deviceSystem and method to drive display matrix
http://hdl.handle.net/2289/1520 Search the Channel searchhttp://dspace.rri.res.in:8080/jspui/simple-search http://hdl.handle.net/2289/3892 Title: Driving method of driving a liquid crystal display
element<br/><br/>Authors: Kuwata, T.; Ruckmongathan, T.N.; Nakagawa, Y.; Koh, H.; Hasebe, H.; Yamashita, Takashi; Nagano, Hideyuki; Ohnishi, T. http://hdl.handle.net/2289/3891 Title: A method of
driving display element and its driving device<br/><br/>Authors: Kuwata, Takeshi; Ruckmongathan, T.N.; Nakagawa, Y.; Koh, H.; Nakazawa, Akira<br/><br/>Abstract: A method of driving a display element
wherein a light transmittance of a pixel selected by a row electrode and a column electrode changes in accordance with a difference between voltages applied on the row electrode and the column
electrode, which satisfies the following conditions: (1) row electrodes are divided into a plurality of row electrode subgroups composed of L row electrodes which are selected simultaneously wherein
L is an integer greater than 1: (2) signals [ alpha mn] where alpha mn is an element of a m-th row component and a n-th column component of an orthogonal matrix, m is an integer of 1 through L and n
is a suffix showing that the n-th column component of the orthogonal matrix corresponds to a n-th selection signal in a single display cycle are applied on the selected row electrodes as row
electrode signals: (3) a signal into which an image signal corresponding to positions of the selected row electrodes on a display panel is converted by the orthogonal function is applied on a column
electrode as a column electrode signal: and (4) a first voltage which is in proportion to a second voltage Vd expressed by the following equation is substantially applied to a column voltage to
provide a predetermined gray shade level d(j.L+i),k which is a value between 1 showing an off state and -1 showing an on state in accordance with a degree of gray shade with respect to a pixel of a
k-th column where k is an integer and an i-th row where i is an integer of 1 through L of a j-th row electrode subgroup where j is an integer: <MATH> where <MATH> where <MATH> indicates a summing
operation of a content of [ ] with respect to i=0 through L and alpha in' indicates an element of an i-th row component and a n-th column component of an orthogonal matrix wherein a 0-th row
component is added to [ alpha mn]. http://hdl.handle.net/2289/3847 Title: System and method to drive display matrix<br/><br/>Authors: Ruckmongathan, T.N.<br/><br/>Abstract: A system and method to
drive display matrix, comprising: a voltage level generator to provide predetermined voltages, a row voltage selector to select a group of voltages from the voltage level generator depending on
select vector to drive row drivers, a column voltage selector to select a group of voltages from the voltage level generator depending on data vector to drive column drivers, and a controller to
generate control signals to scan the display as dictated by addressing technique.
|
{"url":"http://dspace.rri.res.in/feed/rss_1.0/2289/1520","timestamp":"2014-04-16T14:02:09Z","content_type":null,"content_length":"4692","record_id":"<urn:uuid:2a2a512d-0fa5-4de3-8ffe-d6d315e54b51>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00202-ip-10-147-4-33.ec2.internal.warc.gz"}
|
- Annals of Pure and Applied Logic , 1991
"... The mathematical framework of Stone duality is used to synthesize a number of hitherto separate developments in Theoretical Computer Science: • Domain Theory, the mathematical theory of
computation introduced by Scott as a foundation for denotational semantics. • The theory of concurrency and system ..."
Cited by 231 (10 self)
Add to MetaCart
The mathematical framework of Stone duality is used to synthesize a number of hitherto separate developments in Theoretical Computer Science: • Domain Theory, the mathematical theory of computation
introduced by Scott as a foundation for denotational semantics. • The theory of concurrency and systems behaviour developed by Milner, Hennessy et al. based on operational semantics. • Logics of
programs. Stone duality provides a junction between semantics (spaces of points = denotations of computational processes) and logics (lattices of properties of processes). Moreover, the underlying
logic is geometric, which can be computationally interpreted as the logic of observable properties—i.e. properties which can be determined to hold of a process on the basis of a finite amount of
information about its execution. These ideas lead to the following programme:
- In Proc. of LICS ’98 , 1997
"... Marcelo Fiore , Glynn Winskel (1) BRICS , University of Aarhus, Denmark (2) LFCS, University of Edinburgh, Scotland December 1997 Abstract We develop a 2-categorical theory for recursively
defined domains. ..."
Cited by 23 (14 self)
Add to MetaCart
Marcelo Fiore , Glynn Winskel (1) BRICS , University of Aarhus, Denmark (2) LFCS, University of Edinburgh, Scotland December 1997 Abstract We develop a 2-categorical theory for recursively defined
- Mathematics, Algorithms, Proofs, number 05021 in Dagstuhl Seminar Proceedings. Internationales Begegnungs- und Forschungszentrum (IBFI), Schloss Dagstuhl , 2005
"... Abstract. We present a constructive algebraic integration theory. The theory is constructive in the sense of Bishop, however we avoid the axiom of countable, or dependent, choice. Thus our
results can be interpreted in any topos. Since we avoid impredicative methods the results may also be interpret ..."
Cited by 9 (5 self)
Add to MetaCart
Abstract. We present a constructive algebraic integration theory. The theory is constructive in the sense of Bishop, however we avoid the axiom of countable, or dependent, choice. Thus our results
can be interpreted in any topos. Since we avoid impredicative methods the results may also be interpreted in Martin-L type theory or in a predicative topos in the sense of Moerdijk and Palmgren. We
outline how to develop most of Bishop’s theorems on integration theory that do not mention points explicitly. Coquand’s constructive version of the Stone representation theorem is an important tool
in this process. It is also used to give a new proof of Bishop’s spectral theorem.
- , 2004
"... Much Australian work on categories is part of, or relevant to, the development of higher categories and their theory. In this note, I hope to describe some of the origins and achievements of our
efforts that they might perchance serve as a guide to the development of aspects of higher-dimensional wo ..."
Cited by 6 (0 self)
Add to MetaCart
Much Australian work on categories is part of, or relevant to, the development of higher categories and their theory. In this note, I hope to describe some of the origins and achievements of our
efforts that they might perchance serve as a guide to the development of aspects of higher-dimensional work. I trust that the somewhat autobiographical style will add interest rather than be a
distraction. For so long I have felt rather apologetic when describing how categories might be helpful to other mathematicians; I have often felt even worse when mentioning enriched and higher
categories to category theorists. This is not to say that I have doubted the value of our work, rather that I have felt slowed down by the continual pressure to defend it. At last, at this meeting, I
feel justified in speaking freely amongst motivated researchers who know the need for the subject is well established. Australian Category Theory has its roots in homology theory: more precisely, in
the treatment of the cohomology ring and the Künneth formulas in the book by Hilton and Wylie [HW]. The first edition of the book had a mistake concerning the cohomology ring of a product. The
Künneth formulas arise from splittings of the natural short exact sequences
, 2005
"... Abstract. Following Lawvere, a generalized metric space (gms) is a set X equipped with a metric map from X 2 to the interval of upper reals (approximated from above but not from below) from 0 to
∞ inclusive, and satisfying the zero self-distance law and the triangle inequality. We describe a complet ..."
Cited by 4 (0 self)
Add to MetaCart
Abstract. Following Lawvere, a generalized metric space (gms) is a set X equipped with a metric map from X 2 to the interval of upper reals (approximated from above but not from below) from 0 to ∞
inclusive, and satisfying the zero self-distance law and the triangle inequality. We describe a completion of gms’s by Cauchy filters of formal balls. In terms of Lawvere’s approach using categories
enriched over [0, ∞], the Cauchy filters are equivalent to flat left modules. The completion generalizes the usual one for metric spaces. For quasimetrics it is equivalent to the Yoneda completion in
its netwise form due to Künzi and Schellekens and thereby gives a new and explicit characterization of the points of the Yoneda completion. Non-expansive functions between gms’s lift to continuous
maps between the completions. Various examples and constructions are given, including finite products. The completion is easily adapted to produce a locale, and that part of the work is
constructively valid. The exposition illustrates the use of geometric logic to enable point-based reasoning for locales. 1.
"... Abstract. Formal Concept Analysis (FCA) begins from a context, given as a binary relation between some objects and some attributes, and derives a lattice of concepts, where each concept is given
as a set of objects and a set of attributes, such that the first set consists of all objects that satisfy ..."
Cited by 1 (1 self)
Add to MetaCart
Abstract. Formal Concept Analysis (FCA) begins from a context, given as a binary relation between some objects and some attributes, and derives a lattice of concepts, where each concept is given as a
set of objects and a set of attributes, such that the first set consists of all objects that satisfy all attributes in the second, and vice versa. Many applications, though, provide contexts with
quantitative information, telling not just whether an object satisfies an attribute, but also quantifying this satisfaction. Contexts in this form arise as rating matrices in recommender systems, as
occurrence matrices in text analysis, as pixel intensity matrices in digital image processing, etc. Such applications have attracted a lot of attention, and several numeric extensions of FCA have
been proposed. We propose the framework of proximity sets (proxets), which subsume partially ordered sets (posets) as well as metric spaces. One feature of this approach is that it extracts from
quantified contexts quantified concepts, and thus allows full use of the available information. Another feature is that the categorical approach allows analyzing any universal properties that the
classical FCA and the new versions may have, and thus provides structural guidance for aligning and combining the approaches.
"... Abstract. In the practice of information extraction, the input data are usually arranged into pattern matrices, and analyzed by the methods of linear algebra and statistics, such as principal
component analysis. In some applications, the tacit assumptions of these methods lead to wrong results. The ..."
Add to MetaCart
Abstract. In the practice of information extraction, the input data are usually arranged into pattern matrices, and analyzed by the methods of linear algebra and statistics, such as principal
component analysis. In some applications, the tacit assumptions of these methods lead to wrong results. The usual reason is that the matrix composition of linear algebra presents information as
flowing in waves, whereas it sometimes flows in particles, which seek the shortest paths. This wave-particle duality in computation and information processing has been originally observed by
Abramsky. In this paper we pursue a particle view of information, formalized in distance spaces, which generalize metric spaces, but are slightly less general than Lawvere’s generalized metric
spaces. In this framework, the task of extracting the ’principal components ’ from a given matrix of data boils down to a bicompletion, in the sense of enriched category theory. We describe the
bicompletion construction for distance matrices. The practical goal that motivates this research is to develop a method to estimate the hardness of attack constructions in security. 1
"... Abstract. We introduce the concept of a prenormed model of a particular kind of singlesorted finitary first-order theories, interpreted over a category with finite products. These are referred
to as prealgebraic theories, for the fact that their signature comprises, together with arbitrary function ..."
Add to MetaCart
Abstract. We introduce the concept of a prenormed model of a particular kind of singlesorted finitary first-order theories, interpreted over a category with finite products. These are referred to as
prealgebraic theories, for the fact that their signature comprises, together with arbitrary function symbols (of finite ariety), only relation symbols whose interpretation, in any possible model, is
a reflexive and transitive binary relation, namely a preorder. The result is an abstract approach to the very notion of norm and, consequently, to the theory of normed structures. hal-00696491,
version 3- 22 May 2012 1.
, 2010
"... Process modeling languages such as “Dynamical Grammars ” are highly expressive in the processes they model using stochastic and deterministic dynamical systems, and can be given formal semantics
in terms of an operator algebra. However such process languages may be more limited in the types of objec ..."
Add to MetaCart
Process modeling languages such as “Dynamical Grammars ” are highly expressive in the processes they model using stochastic and deterministic dynamical systems, and can be given formal semantics in
terms of an operator algebra. However such process languages may be more limited in the types of objects whose dynamics is easily expressible. For many applications in biology, the dynamics of
spatial objects in particular (including combinations of discrete and continuous spatial structures) should be formalizable at a high level of abstraction. We suggest that this may be achieved by
formalizating such objects within a type system endowed with type constructors suitable for complex dynamical objects. To this end we review and illustrate the operator algebraic formulation of
heterogeneous process modeling and semantics, extending it to encompass partial differential equations and intrinsic graph grammar dynamics. We show that in the operator approach to heterogeneous
dynamics, types require integration measures. From this starting point, “measurable ” object types can be enriched with generalized metrics under which approximation can be defined. The resulting
measurable and “metricated ” types can be built up systematically by type constructors such as vectors, products, and labelled graphs. We find conditions under which functions and quotients can be
added as constructors of measurable and metricated types. 1 Measureable Types V25.nb 1
|
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=1616882","timestamp":"2014-04-18T06:27:11Z","content_type":null,"content_length":"35489","record_id":"<urn:uuid:5053c505-c014-4db2-979a-ca4b426da603>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00521-ip-10-147-4-33.ec2.internal.warc.gz"}
|