Monads Made Difficult
This is a short, fast and analogy-free introduction to Haskell monads derived from a categorical perspective. This assumes you are familiar with Haskell typeclasses and basic category theory.
If you aren’t already comfortable with monads in Haskell, please don’t read this. It will confuse your intuition even more.
Suppose we have an abstract category \(\mathcal{C}\) with objects and morphisms.
• Objects : \(●\)
• Morphisms : \(● \rightarrow ●\)
For each object there is an identity morphism id and a composition rule \((\circ)\) for combining morphisms associatively. We can model this with the following type class in Haskell:
class Category c where
  id  :: c x x
  (.) :: c y z -> c x y -> c x z
In Haskell there is a category we call Hask over the constructor (->) of function types.
type Hask = (->)
instance Category Hask where
  id x = x
  (g . f) x = g (f x)
The constructor (->) is sometimes confusing, for example the following are equivalent.
(->) ((->) a b) ((->) a c)
(a -> b) -> (a -> c)
Between two categories we can construct a functor denoted \(T\) which maps between objects and morphisms of categories.
• Objects : \(T(●)\)
• Morphisms : \(T (● \rightarrow ●)\)
With the condition that \(T (f \circ g) = T (f) \circ T (g)\). In Haskell we model this with a multiparameter typeclass:
class (Category c, Category d) => Functor c d t where
  fmap :: c a b -> d (t a) (t b)
The identity functor \(1_\mathcal{C}\) for a category \(\mathcal{C}\) is a functor mapping all objects to themselves and all morphisms to themselves.
newtype Id a = Id a
instance Functor Hask Hask Id where
  fmap f (Id a) = Id (f a)
An endofunctor is a functor from a category to itself.
type Endofunctor c t = Functor c c t
The repeated image of an endofunctor over a category is written with exponential notation:
\[ \begin{align*} T^2 &= T T : \mathcal{C} \rightarrow \mathcal{C} \\ T^3 &= T T T: \mathcal{C} \rightarrow \mathcal{C} \end{align*} \]
newtype FComp g f x = C { unC :: g (f x) }
instance (Functor b d f, Functor a b g) => Functor a d (FComp f g) where
  fmap f (C x) = C (fmap (fmap f) x)
Natural Transformations
For two functors \(F,G\) between two categories \(\mathcal{A,B}\):
\[ F : \mathcal{A} \rightarrow \mathcal{B} \\ G : \mathcal{A} \rightarrow \mathcal{B} \]
We can construct a natural transformation \(\eta\), a mapping between functors \(\eta : F \rightarrow G\) that associates to every object \(X\) in \(\mathcal{A}\) a morphism in \(\mathcal{B}\):
\[ \eta_X : F(X) \rightarrow G(X) \]
Such that the following naturality condition holds for any morphism \(f : X \rightarrow Y\):
\[ \eta_Y \circ F(f) = G(f) \circ \eta_X \]
This is expressible in our general category class as the following polymorphic type:
type Nat c f g = forall a. c (f a) (g a)
In the case of Hask this is a family of polymorphic functions with signature forall a. f a -> g a. The canonical example is the natural transformation between the List functor and the Maybe functor (where f = List, g = Maybe).
safeHead :: forall a. [a] -> Maybe a
safeHead [] = Nothing
safeHead (x:xs) = Just x
Whichever way we chase the diagram, we end up at the same place.
fmap f (safeHead xs) ≡ safeHead (fmap f xs)
Run through each of the cases if you need to convince yourself of this fact.
fmap f (safeHead [])
= fmap f Nothing
= Nothing
safeHead (fmap f [])
= safeHead []
= Nothing
fmap f (safeHead (x:xs))
= fmap f (Just x)
= Just (f x)
safeHead (fmap f (x:xs))
= safeHead (f x : fmap f xs)
= Just (f x)
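These cases are easy to check concretely. A minimal sketch, using the standard Prelude Functor instances for lists and Maybe (not the multiparameter classes above):

```haskell
-- Naturality of safeHead: fmap f . safeHead ≡ safeHead . fmap f
safeHead :: [a] -> Maybe a
safeHead []    = Nothing
safeHead (x:_) = Just x

main :: IO ()
main = do
  print (fmap (+1) (safeHead [1, 2, 3 :: Int]))  -- Just 2
  print (safeHead (fmap (+1) [1, 2, 3 :: Int]))  -- Just 2
  print (fmap (+1) (safeHead ([] :: [Int])))     -- Nothing
  print (safeHead (fmap (+1) ([] :: [Int])))     -- Nothing
```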
We can finally define a monad over a category \(\mathcal{C}\) to be a triple \((T, \eta, \mu)\) of:
1. An endofunctor \(T: \mathcal{C} \rightarrow \mathcal{C}\)
2. A natural transformation \(\eta : 1_\mathcal{C} \rightarrow T\)
3. A natural transformation \(\mu : T^2 \rightarrow T\)
class Endofunctor c t => Monad c t where
  eta :: c a (t a)
  mu  :: c (t (t a)) (t a)
With an associativity square:
\[ \mu \circ T \mu = \mu \circ \mu T \]
And a triangle equality:
\[ \mu \circ T \eta = \mu \circ \eta T = 1_T \]
Alternatively we can express our triple as a series of string diagrams, in which we invert the traditional commutative diagram (lines as morphisms, objects as points) and instead draw morphisms as points and objects as lines. In this form the monad laws have a nice geometric symmetry, with the coherence conditions given diagrammatically.
Bind/Return Formulation
There is an equivalent formulation of monads in terms of two functions ((>>=), return) which can be written in terms of mu and eta.
In Haskell we define a bind operator (>>=) in terms of the natural transformations and the fmap of the underlying functor. The join and return functions are simply mu and eta.
(>>=) :: (Monad c t) => c a (t b) -> c (t a) (t b)
(>>=) f = mu . fmap f
return = eta
In this form equivalent naturality conditions for the monad’s natural transformations give rise to the regular monad laws by substitution with our new definitions.
fmap f . return ≡ return . f
fmap f . join ≡ join . fmap (fmap f)
And the equivalent coherence conditions expressed in terms of bind and return are the well known Monad laws:
return a >>= f ≡ f a
m >>= return ≡ m
(m >>= f) >>= g ≡ m >>= (\x -> f x >>= g)
Kleisli Category
The final result: given a monad we can form a new category, called the Kleisli category, from the monad. The objects are the objects of our original category c, but our arrows are now Kleisli arrows a -> T b. Given this class of “actions” we’d like an operator which composes these morphisms, (b -> T c) -> (a -> T b) -> (a -> T c), just like we compose functions (b -> c) -> (a -> b) -> (a -> c) in our host category.
It turns out we can, via a specific operator (<=<) over Kleisli arrows which is precisely morphism composition for the Kleisli category. The Kleisli category embodies “composition of actions” and forms a very general model of computation.
The Kleisli category formed from a category \(\mathcal{C}\) is given by:
1. Objects in the Kleisli category are objects from the underlying category.
2. Morphisms are Kleisli arrows of the form : \(f : A \rightarrow T B\)
3. Identity morphisms in the Kleisli category are precisely \(\eta\) in the underlying category.
4. Composition of morphisms \(f \circ g\) in terms of the host category is defined by the mapping:
\[ f \circ g = \mu \circ T f \circ g \]
Simply put, the monad laws are the equivalent category laws for the Kleisli category.
(<=<) :: (Monad c t) => c y (t z) -> c x (t y) -> c x (t z)
f <=< g = mu . fmap f . g
newtype Kleisli c t a b = Kleisli (c a (t b))
instance Monad c t => Category (Kleisli c t) where
  -- id :: (Monad c t) => c a (t a)
  id = Kleisli eta
  -- (.) :: (Monad c t) => c y (t z) -> c x (t y) -> c x (t z)
  (Kleisli f) . (Kleisli g) = Kleisli (f <=< g)
In the case of Hask, where c = (->), we indeed see the instance give rise to Monad and Functor instances similar to the Prelude's (if the Prelude had the proper Functor/Monad hierarchy!).
class Functor t where
  fmap :: (a -> b) -> t a -> t b
class Functor t => Monad t where
  eta :: a -> t a
  mu  :: t (t a) -> t a
(>>=) :: Monad t => t a -> (a -> t b) -> t b
ma >>= f = mu (fmap f ma)
Haskell Monads
For instance the List monad would have:
1. \(\eta\) returns a singleton list from a single element.
2. \(\mu\) turns a nested list into a flat list.
3. \(\mathtt{fmap}\) applies a function over the elements of a list.
instance Functor [] where
  fmap f []     = []
  fmap f (x:xs) = f x : fmap f xs
instance Monad [] where
  -- eta :: a -> [a]
  eta x = [x]
  -- mu :: [[a]] -> [a]
  mu = concat
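With these list definitions, the derived bind (mu . fmap f) behaves as expected. A quick standalone check, with the functions renamed etaL/muL/bindL here to avoid clashing with the Prelude:

```haskell
etaL :: a -> [a]
etaL x = [x]

muL :: [[a]] -> [a]
muL = concat

-- bind derived as mu . fmap f (fmap = map for lists)
bindL :: [a] -> (a -> [b]) -> [b]
bindL ma f = muL (map f ma)

main :: IO ()
main = do
  print (bindL [1, 2, 3] (\x -> [x, x * 10]))  -- [1,10,2,20,3,30]
  print (muL [[1], [2, 3], []])                -- [1,2,3]
  print (etaL 'a')                             -- "a"
```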
The IO monad would intuitively have the implementation:
1. \(\eta\) lifts a pure value into the context of the computation.
2. \(\mu\) flattens a nested IO operation into a single IO operation.
3. \(\mathtt{fmap}\) applies a function over the result of the computation.
When first published in 2005, Matrix Mathematics quickly became the essential reference book for users of matrices in all branches of engineering, science, and applied mathematics. In this fully
updated and expanded edition, the author brings together the latest results on matrix theory to make this the most complete, current, and easy-to-use book on matrices.
Each chapter describes relevant background theory followed by specialized results. Hundreds of identities, inequalities, and matrix facts are stated clearly and rigorously with cross references,
citations to the literature, and illuminating remarks. Beginning with preliminaries on sets, functions, and relations, Matrix Mathematics covers all of the major topics in matrix theory, including
matrix transformations; polynomial matrices; matrix decompositions; generalized inverses; Kronecker and Schur algebra; positive-semidefinite matrices; vector and matrix norms; the matrix exponential
and stability theory; and linear systems and control theory. Also included are a detailed list of symbols, a summary of notation and conventions, an extensive bibliography and author index with page
references, and an exhaustive subject index. This significantly expanded edition of Matrix Mathematics features a wealth of new material on graphs, scalar identities and inequalities, alternative
partial orderings, matrix pencils, finite groups, zeros of multivariable transfer functions, roots of polynomials, convex functions, and matrix norms.
• Covers hundreds of important and useful results on matrix theory, many never before available in any book
• Provides a list of symbols and a summary of conventions for easy use
• Includes an extensive collection of scalar identities and inequalities
• Features a detailed bibliography and author index with page references
• Includes an exhaustive subject index with cross-referencing
"When a matrix question is thrown my way, I will now refer my correspondents . . . to Bernstein's handbook."--Philip J. Davis, SIAM News
"Matrix Mathematics contains an impressive collection of definitions, relations, properties, equations, inequalities, and facts centered around matrices and their use in systems and control. The
amount of material that is covered is quite impressive and well structured. . . . I highly recommend the book as a source for retrieving matrix results that one would otherwise have to search for in
the extensive literature on matrix theory."--Paul Van Dooren, IEEE Control Systems Magazine
"The author was very successful in collecting the enormous amount of results in matrix theory in a single source. . . . A beautiful work and an admirable performance!"--Monatshefte für Mathematik
"It is a remarkable source of matrix results. I will put it on the shelf near to my desk so that I have quick access to it. The book is an impressive accomplishment by the author. . . . I can
enthusiastically recommend it to anyone who uses matrices. The author has to be applauded for the accomplishment of putting together this impressive volume."--Helmut Lutkepohl, Image
"The book is a well-organized treasure trove of information for anyone interested in matrices and their applications. Look through the Table of Contents and see if there isn't some section that will
tempt you and/or illuminate your pathway through the extensive literature on matrix theory. Researchers should have access to this authoritative and comprehensive volume. Academic and industrial
libraries should have it in their reference collections. Their patrons will be grateful."--Henry Ricardo, MAA Reviews
Parallel Shortest Paths
To illustrate the use of the Parallel Boost Graph Library, we illustrate the use of both the sequential and parallel BGL to find the shortest paths from vertex A to every other vertex in the
following simple graph:
With the sequential BGL, the program to calculate shortest paths has three stages. Readers familiar with the BGL may wish to skip ahead to the section Distributing the graph.
Define the graph type
For this problem we use an adjacency list representation of the graph, using the BGL adjacency_list class template. It will be a directed graph (directedS parameter) whose vertices are stored in an std::vector (vecS parameter) where the outgoing edges of each vertex are stored in an std::list (listS parameter). To each of the edges we attach an integral weight.
typedef adjacency_list<listS, vecS, directedS,
                       no_property,                 // Vertex properties
                       property<edge_weight_t, int> // Edge properties
                       > graph_t;
typedef graph_traits < graph_t >::vertex_descriptor vertex_descriptor;
typedef graph_traits < graph_t >::edge_descriptor edge_descriptor;
Construct the graph
To build the graph, we declare an enumeration containing the node names (for our own use) and create two arrays: the first, edge_array, contains the source and target of each edge, whereas the
second, weights, contains the integral weight of each edge. We pass the contents of the arrays via pointers (used here as iterators) to the graph constructor to build our graph:
typedef std::pair<int, int> Edge;
const int num_nodes = 5;
enum nodes { A, B, C, D, E };
char name[] = "ABCDE";
Edge edge_array[] = { Edge(A, C), Edge(B, B), Edge(B, D), Edge(B, E),
                      Edge(C, B), Edge(C, D), Edge(D, E), Edge(E, A), Edge(E, B) };
int weights[] = { 1, 2, 1, 2, 7, 3, 1, 1, 1 };
int num_arcs = sizeof(edge_array) / sizeof(Edge);
graph_t g(edge_array, edge_array + num_arcs, weights, num_nodes);
Invoke Dijkstra's algorithm
To invoke Dijkstra's algorithm, we need to first decide how we want to receive the results of the algorithm, namely the distance to each vertex and the predecessor of each vertex (allowing
reconstruction of the shortest paths themselves). In our case, we will create two vectors, p for predecessors and d for distances.
Next, we determine our starting vertex s using the vertex operation on the adjacency_list and call dijkstra_shortest_paths with the graph g, starting vertex s, and two property maps that instruct the algorithm to store predecessors in the p vector and distances in the d vector. The algorithm automatically uses the edge weights stored within the graph, although this capability can be overridden.
// Keeps track of the predecessor of each vertex
std::vector<vertex_descriptor> p(num_vertices(g));
// Keeps track of the distance to each vertex
std::vector<int> d(num_vertices(g));
vertex_descriptor s = vertex(A, g);
dijkstra_shortest_paths
  (g, s,
   predecessor_map(make_iterator_property_map(p.begin(), get(vertex_index, g))).
   distance_map(make_iterator_property_map(d.begin(), get(vertex_index, g))));
Distributing the graph
The prior computation is entirely sequential, with the graph stored within a single address space. To distribute the graph across several processors without a shared address space, we need to
represent the processors and communication among them and alter the graph type.
Processors and their interactions are abstracted via a process group. In our case, we will use MPI for communication with inter-processor messages sent immediately:
typedef mpi::process_group<mpi::immediateS> process_group_type;
Next, we instruct the adjacency_list template to distribute the vertices of the graph across our process group, storing the local vertices in an std::vector:
typedef adjacency_list<listS,
                       distributedS<process_group_type, vecS>,
                       no_property,                 // Vertex properties
                       property<edge_weight_t, int> // Edge properties
                       > graph_t;
typedef graph_traits < graph_t >::vertex_descriptor vertex_descriptor;
typedef graph_traits < graph_t >::edge_descriptor edge_descriptor;
Note that the only difference from the sequential BGL is the use of the distributedS selector, which identifies a distributed graph. The vertices of the graph will be distributed among the various
processors, and the processor that owns a vertex also stores the edges outgoing from that vertex and any properties associated with that vertex or its edges. With three processors and the default
block distribution, the graph would be distributed in this manner:
Processor 0 (red) owns vertices A and B, including all edges outgoing from those vertices, processor 1 (green) owns vertices C and D (and their edges), and processor 2 (blue) owns vertex E.
Constructing this graph uses the same syntax as the sequential graph, as in the section Construct the graph.
The call to dijkstra_shortest_paths is syntactically equivalent to the sequential call, but the mechanisms used are very different. The property maps passed to dijkstra_shortest_paths are actually
distributed property maps, which store properties for local edges or vertices and perform implicit communication to access properties of remote edges or vertices when needed. The formulation of
Dijkstra's algorithm is also slightly different, because each processor can only attempt to relax edges outgoing from local vertices: as shorter paths to a vertex are discovered, messages to the
processor owning that vertex indicate that the vertex may require reprocessing.
Kindergarten Kindergarten
Kindergartners love patterns. Or should I say "pat-ter-ens." They're super fun! But what they don't know is that they are an essential building block in their understanding about numbers. What starts
out in kindergarten as making pretty designs with pattern blocks eventually leads to skip counting, repeated addition (and therefore multiplication and division), algebraic reasoning, and beyond! So
it's important that we provide them with a solid foundation.
Day 1
I want my students to recognize patterns in many different contexts. I want them to understand that red...red..green is the same as orange...orange...blue...is the same as big...big...little.
For this activity, I make several patterns on large pieces of chart paper and place them on tables around the room. I divide the students into pairs and give them different math tools. Their
challenge is to go around the room and create the different patterns with their own math tools.
I try to make sure the math tools I provide are different sizes and shapes so that they cannot just place their own tools on top of my pattern. I want them to find the pattern core and then recreate
it on their own.
You can extend this activity by having the students create their own patterns in their math journals and then seeing how many different ways they can make the same pattern...different colors, shapes, and so on.
Day 2
I debate every year over whether or not to teach my students how to name patterns using letters. My concern is that they will come to associate patterns with letters and not see their connection to
numbers. But it is an easy way to identify patterns, especially when I'm pushing them to create different patterns. Some kids will create LOTS of AAB patterns...red-red-orange...green-green-blue
...circle-circle-square. When I ask them to create a different pattern--they say, "But they ARE different!" If we can name them with letters, they eventually see that they are actually the same.
So my compromise is to teach them how to do it, but to make it very clear that this is just ONE way that they can name patterns. I start by having a student create a pattern in the pocket chart. Then
I have different students recreate that same pattern using different math tools. Eventually, I will give a student a set of letter cards and have them recreate the pattern with those.
Next I challenge the students to find the core of the patterns.
I explain that a simple way to give a pattern a name is to just refer to the core, because if we know that, then we can make that pattern as long as we want. We could name the pattern any of these
options: green-yellow; square-circle; hexagon-square. But a very simple, and popular way to name a pattern is with letters: A-B.
We make some more challenging patterns and name them in the same way. But I make sure to stress that we could just as easily name this pattern chicka-boom-boom-splat. ABBC is just a convenient way
that they may be asked to use throughout their school years.
Day 3
As much fun as we have making patterns out of pattern blocks and macaroni and stamps and shapes--we cannot forget numbers! I start this lesson by simply asking the kids to make patterns with numbers.
Some kids use actual numbers, some kids make repeating groups of objects (and, of course, some kids don't know where to begin). We share and discuss the different ideas the kids have and base our
discussion on them.
I eventually make a simple pattern like the AAAB pattern below. Then I ask the kids what would happen if I counted how many of each color were in each "group." We recreate the same pattern using
connecting cubes and place them vertically in the pocket chart.
Then we count the cubes and put the corresponding number under each tower.
And then we discuss it--Is it a pattern? Does it repeat? Is it easy to tell what comes next? What is the core?
I give the kids connecting cubes and challenge them to create their own number patterns. Then I have them record them on a grid paper for their math journals.
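(For teachers who like to tinker: the connection from a repeating color core to a number pattern can be automated. This tiny sketch repeats the AAAB example from above and counts each color run, producing the number pattern 3, 1, 3, 1, ...)

```python
# Count the run lengths in a repeating AAAB pattern to get the number pattern.
pattern = ["red", "red", "red", "blue"] * 3  # AAAB core repeated three times

runs = []
for color in pattern:
    if runs and runs[-1][0] == color:
        runs[-1][1] += 1  # same color as the previous cube: extend the run
    else:
        runs.append([color, 1])  # new color: start a new tower

counts = [n for _, n in runs]
print(counts)  # [3, 1, 3, 1, 3, 1]
```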
Download Can you make a number pattern Grid
Day 4
Here, I challenge the kids to apply what they have learned about patterns to solve a problem. Remember--resist the urge to show the kids how to solve the problem. Read the problem, discuss it,
provide them with a variety of tools and see what happens. Make sure to circulate among your students and guide them with questioning and gentle nudges. And then come together during mathematician's
chair to have the kids explain the different ways they solved it.
Day 5
Here is similar problem. You can, of course, adjust the numbers or difficulty of the pattern to fit the needs of your own class (the example below is actually different than the one on the label).
You can also solve these problems together as a group (although you need to sit back and let the kids take the lead!)
Bonus Activity
I always like to reward my kids at the end of a unit with something fun. In this case, I let them make bracelets out of pony beads and chenille stems. The beads are our school colors. Of course, the
bracelet MUST be a pattern.
You can also have them record it in their math journals by drawing the pattern and naming the core.
We are not done with patterns, yet. We will continue to revisit them throughout the year in many contexts. And we will explore number patterns (specifically growing and shrinking patterns) in a few
Have fun!
convert 90f to celcius
You asked:
convert 90f to celcius
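(The answer itself never appears on the page; the standard conversion formula, C = (F - 32) x 5/9, gives it directly:)

```python
f = 90.0
c = (f - 32) * 5 / 9  # Fahrenheit to Celsius
print(round(c, 1))  # 32.2
```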
Re: st: Transformed values in logistic regression
From Richard Williams <Richard.A.Williams.5@nd.edu>
To statalist@hsphsun2.harvard.edu
Subject Re: st: Transformed values in logistic regression
Date Thu, 26 Aug 2004 22:59:47 -0500
At 08:27 PM 8/26/2004 -0700, Ricardo Ovaldia wrote:
Specifically we were interested in modeling
case-control status as a function of several patient
covariates including serum creatinine which in our
data ranges from 0.11 to 1.98.
Because of skewness and to make the odds ratio
independent of the units measurement, we decided to
log-transform the creatinine values before entering
them into our logistic model. However the reviewer
wrote "Using a log-transform for creatine is absurd
because a 1-unit increase in ln(x) is equivalent to
increasing x by a factor of 2.718 which is in the
realm of impossibility"
Is he correct? Isn't the coefficient estimated such
that the predicted values are within the range of the
data and this only a problem if you attempt to
extrapolate beyond the data range? What I am missing?
While I am not an expert in creatinine (whatever that is) I am inclined to agree with you. You can always plug in implausible/impossible numbers and come up with a prediction, e.g. how much would
somebody make if they had -2,000,000 years of education? I've never heard of a rule which says that x = 1 has to be a plausible or even possible value. For presentation purposes, you might want to
scale your variables in ways which make them easier to understand and present (e.g. measure income in thousands of dollars rather than in dollars) but it is not essential. There may be other good
reasons for not doing what you are doing, but the reason given seems odd to me, unless maybe it violates some sort of convention in your field. If you want to make this reviewer happy, maybe you
could measure creatinine in milligrams instead of grams or whatever happens to be reasonable so that a 1 unit increase in x is possible.
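(A quick numeric aside on this, sketched in Python rather than Stata, with a made-up coefficient purely for illustration: when ln(x) is the regressor, exp(b) is the odds ratio for an e-fold (about 2.718x) increase in x, but nothing stops you reporting a more natural contrast, e.g. a doubling or a 10% increase, via exp(b * ln(k)).)

```python
import math

b = 0.9  # hypothetical logistic coefficient on ln(creatinine)

or_efold    = math.exp(b)                  # OR for multiplying x by e (~2.718)
or_doubling = math.exp(b * math.log(2))    # OR for doubling x; equals 2**b
or_10pct    = math.exp(b * math.log(1.1))  # OR for a 10% increase in x

print(round(or_efold, 3), round(or_doubling, 3), round(or_10pct, 3))
```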
Richard Williams, Notre Dame Dept of Sociology
OFFICE: (574)631-6668, (574)631-6463
FAX: (574)288-4373
HOME: (574)289-5227
EMAIL: Richard.A.Williams.5@ND.Edu
WWW (personal): http://www.nd.edu/~rwilliam
WWW (department): http://www.nd.edu/~soc
* For searches and help try:
* http://www.stata.com/support/faqs/res/findit.html
* http://www.stata.com/support/statalist/faq
* http://www.ats.ucla.edu/stat/stata/
ECCC - Reports tagged with logic
Attempts at classifying computational problems as polynomial time
solvable, NP-complete, or belonging to a higher level in the polynomial
hierarchy, face the difficulty of undecidability. These classes, including
NP, admit a logic formulation. By suitably restricting the formulation, one
finds the logic class MMSNP, or monotone monadic strict NP without
more >>>
[3ACTS] Hot Coffee
November 2nd, 2011 by Dan Meyer
If that first act interests you, download the full story.
[h/t @MrPicc112]
7 Responses to “[3ACTS] Hot Coffee”
on 03 Nov 2011 at 4:51 pm by Emily
This one is fabulous, as are all three of these, indeed. I was intrigued by them all.
Dan, I especially like your insistence on making guesses, thus defining and hopefully refining a reasonable range for the solution set. I think too often we see estimation as a lower level skill
and assume our upper level students have got it down. The Common Core gives it a place in the third grade!
Nevertheless, the time spent on refining a guess and identifying a reasonable and relatively narrow range gives students valuable insight on identifying the correct focus and therefore directs
the learning. How many students stare at us blankly as we try to explain how to choose an appropriate scale/ viewing window when graphing functions? Too many for me.
Kudos to you. Depending upon where the act 2 discussions lead, I think you can add N-Q.3 and F-IF.5 to your list of standards.
2. This one, with some modifications, will be a great introduction for my calculus class.
How many hoses would be required to fill up the cup within two hours ?
What would be the rate of change of the coffee level in function of the flow ?
And if we change the cup for a giant Martini glass ?
Or the spherical water tank of a city ?
Thanks for the inspiration !
Emily: I think too often we see estimation as a lower level skill and assume our upper level students have got it down.
I think this is right. In the upper grades we keep estimation in the corner, but then once students have an incorrect answer on an applied problem, we pretend like estimation is our best friend.
“Does that answer make sense?” we ask, making too little of an appeal to intuition too late.
Because what’s the student going to say? If she answers “no, it makes no sense,” she has to re-do her work. There’s serious cost to that answer. Which is why we have to make that appeal to
intuition as early as possible.
on 05 Nov 2011 at 10:23 am by Debbie
How do you find all of these videos Dan??
5. @Debbie, Timon Piccini tweeted out a picture of the coffee cup, which I linked above. That got me searching for the source of the picture which led me to a bunch of a clips, which I then
condensed and edited together.
I spend a few minutes a day trolling around a few sites for interesting opportunities for math modeling but it’s great when guys like Timon act like an extra set of eyes.
6. on 22 Nov 2011 at 8:54 pm by David
Interestingly, as we were doing this in class, a student on their laptop found articles (like http://today.msnbc.msn.com/id/40007591/ns/today-food/t/wake-smell-worlds-largest-coffee-gallons/
#.Tsx8CFYfh6o) claiming the coffee cup was 8 by 8 (rather than the 7 by 7 Dan provided). The 7 by 7 gives the right gallons – perhaps the 8 by 8 was outside dimensions? Or they didn’t fill the
cup all the way?
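(Checking David's hunch is a one-liner: treating the cup as a cylinder with diameter and height in feet, and using about 7.4805 US gallons per cubic foot, the 7-by-7 cup holds roughly 2,015 gallons while an 8-by-8 cup would hold over 3,000.)

```python
import math

GAL_PER_FT3 = 7.4805  # US gallons per cubic foot

def cup_gallons(diameter_ft, height_ft):
    """Volume of a cylindrical cup in US gallons."""
    radius = diameter_ft / 2
    return math.pi * radius**2 * height_ft * GAL_PER_FT3

print(round(cup_gallons(7, 7)))  # 2015 -- the "right gallons" David mentions
print(round(cup_gallons(8, 8)))  # 3008 -- far too big for the record
```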
On another note, the record has since been broken:
7. on 19 Mar 2013 at 9:43 pm by Frank
It seems as if this record keeps going up…
The largest cup of coffee contains 13,200 litres (2,903.6 UK gal; 3,487.1 US gal) and was created by De’Longhi (Italy), in London, UK, on 5 November 2012.
Black coffee was used and the cup measured 2.9 m (9 ft 6 in) tall and 2.65 m (8 ft 8 in) wide.
Unfortunately, it doesn’t seem as if the Guinness site has historical data to look at the progression of large coffee cups over time.
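(The same check works on Frank's 2012 figures in metric: a cylinder 2.65 m wide and 2.9 m tall holds about 16,000 L to the brim, so the 13,200 L claim suggests the cup wasn't filled all the way, or that the quoted dimensions are external.)

```python
import math

radius_m, height_m = 2.65 / 2, 2.9
litres = math.pi * radius_m**2 * height_m * 1000  # 1 cubic metre = 1000 L
print(round(litres))  # 15995, versus the 13,200 L of coffee claimed
```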
Math problem 55 for grades k-2
Balance It
Problem #82
Deadline to submit the solution: October 31st, 2002
The scale above is in balance. It means that total weight on the left side is equal to the total weight on the right side.
Q1. How heavy is the weight on the right side?
michelle from usa
Grade: 2 Age: 7
Q1: 5
well you have to add the left side which is 2+4=6 and then figure out what would go on the right side by subtracting like 6-1=5 and that is how i figured it out! so then when you add the two
sides you should come out with the same amount on both sides,like; 4+2=6 and 1+5=6 and they would balance right !
About me: well i like hockey and all sports . i like my dog poco and joey . and i love my family and like to go to school and learns new and exciting things and problems
doda from u.s.a.
Grade: 4 Age: 9
Q1: 5
It is the number FIVE.If the problem was 2 and 4, the ONLY ballance would be ONE and FIVE.
About me: I can explain einstien's theory of relitivity.
Sai from U.S.A.
Grade: 1 Age: 6
Q1: 5
As 4+2=6 in the left side,we have to equal the weights to 6 on the right side also to balance.There is already 1 inthe right side so,we have to add 5 to that 1.
About me:
Katherine from USA
Grade: 2 Age: 7
Q1: 5
because if ther is 2 and 4 then there has to be a 1 and 5. I took the one from the two and added it to the four.
About me: I like math and puppies. I like the smell of roses. I like doing school sometimes.
Sam from USA
Grade: 2 Age: 7
Q1: 5
2+4=1+5 both sides have to be the same. Since there was only 1 on the right side you needed 5 more on the right side to equal the number 6 on the left side.
About me:
Sushant from USA
Grade: 2 Age: 7
Q1: 5
I came to this answer because the weight on the left side is equal to the right side 2+4=6 6-1=5 and that is why it is 5.
About me: I like to play sports.
Esther Dodd from Canada
Grade: 2 Age: 7
Q1: 5
The missing weight on the right side of the scale is 5.
About me: I am in grade 2 and I am home schooled. I have 5 brothers and 2 sisters.
Crystal from USA
Grade: 2nd Age: 7
Q1: 5
On the left side there is a 2 and a 4. On the right hand side is only one. 2 + 4 = 6. Therefore you need a 5 to balance it because 5 + 1 = 6.
About me: I'm a good worker. I usually get all hundreds on my papers at school.
Quekenshia Aljernea Ford from America
Grade: 7th Age: 13
Q1: 5
because 2 and 4 = 6 and then you have to find another number on the othere side to balance the same size on both sides and on the other side it had 1 and I added 5 so that it could become 6 same
as the other side.
About me: My NAME IS QUEKENSHIA ALJERNEA FORD. AND I AM 13 YEARS OLD AND I AM A A AND B STUDENT AT EAST JUNIOR HIGH SCHOOL. AND I AM INTERESTED IN ALL MY WORK. I ENJOY LEARNING NEW THINGS HANGING
OUT WITH FRIENDS GOING SHOPPING TALKING ON THE PHONE VISTING MY RELATAVIES AND EVERYTHING ELSE A CHILD LOVES TO DO.
Melissa Hibbert from U.S.A
Grade: 2 Age: 7
samantha from US
Grade: 1 Age: 6
Sarah from America
Grade: 1 Age: 7
Shashank Chitta from U.S.A
Grade: First Age: 6
Mayan Ayman Nafie from Egypt
Grade: KG1 Age: 5
Definition of derived category of a stack
In their book, Bernstein and Lunts define the equivariant derived category in several ways. One can be expressed as follows:
Let $X$ be, say, a complex variety with an action of an algebraic group $G$.
Consider the fibration $\pi:D^b(- , \mathbb Z) \rightarrow Var$ which assigns to every variety $T$ the bounded derived category of sheaves of say abelian groups $D^b(T, \mathbb Z)$. Then the
equivariant derived category $D_G^b(X,\mathbb Z)$ is defined to be the category of cartesian sections of $\pi$ over the quotient stack $X/G \rightarrow Var$. Informally one can read this as:
"An equivariant sheaf on a space $X$ is just a map from $X/G$ to the classifying space of sheaves"
Now my questions are:
In what situation is this the correct definition? What properties of $X/G$ and $D^b(T,\mathbb Z)$ make this definition reasonable?
For example:
If we replace $X/G$ by an artin stack $\mathfrak{X}$ does this give a reasonable definition of $D^b(\mathfrak{X}, \mathbb Z)$?
If we replace $D^b(T, \mathbb Z)$ by some other "derived category of sheaves" do we still get a reasonable definition?
For example we could drop the boundedness assumption in one or both directions/allow non finitely generated cohomology sheaves, replace sheaves of abelian groups by quasi-coherent sheaves etc...
stacks derived-category sheaf-theory
High Energy Particle Physics
1204 Submissions
[5] viXra:1204.0101 [pdf] submitted on 2012-04-28 14:29:22
Modeling the Electron as a Stable Quantum Wave-Vortex: Interpretation α ≈1/137 as a Wave Constant
Authors: George Kirakosyan
Comments: 22 pages, 10 figures, Article published in HADRONIC JOURNAL
The connection of alpha (α ≈1/137) to redistribution of intensities in interference of circularly polarized waves it has shown. Obtained number coincides to known one in reached accuracy: 10^-10. The
photon represented as a quantum wave packet. The electron’s model proposed as Compton’s circularly polarized standing wave. The origins of the mass and static fields (charges) interpreted as a
relativistic mass and pseudo static electromagnetic fields (“halos”) arising in interference of quanta. Electron’s magnetic moment and g value obtained with 10^-10 accuracy. Physical interpretation
of de Broglie’s wave is proposed.
Category: High Energy Particle Physics
[4] viXra:1204.0087 [pdf] submitted on 2012-04-23 14:19:54
The Alpha-Quantized Systematics of Quark-Dominated Elementary Particle Lifetimes
Authors: Malcolm H. Mac Gregor
Comments: 33 pages, 15 figures
The experimental lifetime systematics of the 36 long-lived quark and particle metastable ground states is displayed graphically on a logarithmic global alpha-grid spaced in powers of the fine
structure constant alpha ~ 1/137 and centered on the pi± lifetime. The hadron lifetimes separate into four non-overlapping lifetime groups, each dominated by a single quark flavor. These in turn
divide into slow flavor-breaking electroweak decays and fast flavor-conserving paired-quark and radiative decays, separated by alpha^4 lifetime gaps. The long-lived electroweak subgroups feature
contral lifetime (CL) bands that map onto the alpha-grid lines, plus particles that are displaced by factors of 2, 3 or 4 from the CL. The quark lifetime dominance rule is c > b > s. The neutron and
muon lifetimes lie on the global alpha-grid. The tauon lifetime fits into the c-quark lifetime group.
Category: High Energy Particle Physics
[3] viXra:1204.0078 [pdf] submitted on 2012-04-18 23:12:33
E8QC - Quantum Contraction Building Block of Our Universe
Authors: Frank Dodd Tony Smith Jr
Comments: 6 Pages.
Our Universe can be described by an Algebraic Quantum Field Theory (AQFT) that is the Completion of the Union of all Tensor Products of a Building Block. This Fundamental Building Block can be
embedded in the Real Clifford Algebra Cl(8,8). By the 8-Periodicity Property of Real Clifford Algebras, all Tensor Products of it are themselves Real Clifford Algebras, and the Completion of the
Union of all Tensor Products is well-behaved and constitutes a generalized hyperfinite II1 von Neumann factor AQFT. The purpose of this paper is to describe the structure of this Fundamental Building
Block and its physical interpretation. The Fundamental Building Block is the Maximal Contraction of the E8 Lie Algebra that lives inside Cl(8,8). Since it leads to an AQFT, it is called the E8
Quantum Contraction (E8QC).
Category: High Energy Particle Physics
[2] viXra:1204.0061 [pdf] submitted on 2012-04-15 13:32:52
The Bipolar Structure of Particles and Interactions
Authors: Peter Kohut
Comments: 41 Pages.
The basic elementary constituent of the physical Universe is discovered. It is a dynamic bipolar relation of opposites (+/-) as a basic elementary building unit of matter, energy, space and time.
Energy being a motion has its basic manifestation in attraction and repulsion of opposites. Without acceptance of bipolarity principle of matter (space, energy) it is impossible to discover how
matter and space are structured and constructed. As matter is spatial and space is material, so the elementary structural unit of space is at the same as the elementary building block of matter
(energy). This simple fact has not been accepted by theoretical physic so far and the search for the essence of matter is far from the correct direction.
Category: High Energy Particle Physics
[1] viXra:1204.0008 [pdf] replaced on 2012-04-04 10:15:29
Higgs Alpha-Quantized Coupling Constants for Quarks and Metastable Particles
Authors: Malcolm H Mac Gregor
Comments: 11 pages, 3 figures
Higgs mass-generation particle coupling constants g are calculated as suggested by Lederman and Hill, who use the top quark t as the reference Higgs mass. Accurate alpha-quantized coupling constants
g are obtained for the metastable leptons, constituent quarks, proton, Bc meson, and W and Z gauge bosons, where alpha is the fine structure constant, and where the particle masses are the inertial
masses of the states involved. An analogous set of inverse Higgs-like coupling constants f is also presented in which the electron serves as the reference mass, as suggested by the alpha-boosted mass
generation structure of the experimental f and g coupling constant values.
Category: High Energy Particle Physics
Friday Sketchers Sketch number 30 sponsored by Sir Stampalot
Hi...Good Morning and welcome to sketch number 30! I'm
Carole from England
and I'm the host of the sketch this week. I hope you'll have time to join us, I know it's a very busy time of year for everyone and Thanksgiving in the US, so I've kept the sketch really easy for
this reason. Please feel free to add your sentiment and embellishments wherever suits your image. Have fun and enjoy! If you don't have a blog and would like to join in, please email me your picture,
thank you.
We are delighted to have
Sir Stampalot
from the UK sponsoring us this week.
Sir Stampalot
is well known for having a variety of stamps from a vast range of companies from Tilda to Art Impressions, Lily and Milo, to Whipper Snapper. As well as being a mail order company and web-based shop,
they exhibit at shows around the UK and also have a shop. The shop is based in Peterborough, Cambridgeshire and has a workshop area for holding classes and demo days - talking of which, if you are in
the area, don't miss out on a fantastic demo day Sir Stampalot have lined up for Saturday 6th December - there will be free demonstrations by at least six different talented crafting ladies, free
refreshments and plenty of bargains! Sir Stampalot will ship internationally.
Finally, the winner of last week's challenge, sponsored by Scrapshop.no, is
Ali's Stamping Heaven
- Congratulations, please contact me to arrange for your prize to be sent to you, thank you.
Random Integer Generator
Here are your random numbers:19
Timestamp: 2008-11-27 20:51:09 UTC
Here are the two Rachelle Anne Miller stamps kindly given to Friday Sketchers by Janice from
Sir Stampalot
...many thanks Janice :-)
Here's sketch 30 - Looking forward to seeing your creations!
Kristine Representing Norway
Bea Representing Germany
117 comments:
Hallo girls.... a beautiful Sketch and uhi.. I'm the first... The Cards from DT are so great..
here is my Card .. thanks for look... i wish a beautiful Weekend
Hi girls,
fantastic DT cards.
Here is my card.
Thanks for looking!
xx mandy
Here is my card for this week! A fast and simple one this time:.-) you find it here at my blog:-)
Thanks Carole for your lovely sketch!!
have a nice weekend everyone!
Another gorgeous sketch and amazing DT cards. Mine can be seen here.
Thanks for looking
Cute DT cards.
Here is my card
See you,
Great sketch thank you.
My card is here
Fab cards from the DT!!
My entry is on my blog
first time with you so hope you like the card I have made, DT cards are stunning, HAzel xox
Here's Mine!
Fabulous cards again by the team ,stunning
huge Dawnxxx
here’s mine.
What a great challenge :D
Lovely cards!
Here´s my card from Finland
Have a nice weekend!
Hey. Great cards. :)
my card
Thanx for looking.
I liked this sketch, simple yet effective Heres mine
Love DT cards. Lovely sketch.
Here is my card
Hi girls
great DT-cards and sketch!!
my card
Thanks for looking!
Have a nice weekend!
Hugs Gisela
Hi there
Here is my Card – Thanks for Looking!
Great sketch this week and fab DT cards as always. Here is my card.
Hi everyone, gorgeous dt cards I am in awe!!! mine is here:-
my card
thanks for looking.x
Lovely simple sketch and so I have made a simple card. My card is on my blog
Carole, this is a wonderful Sketch!And the DT Cards are beautiful!
Here is my Card.
TFL! Hugs Tanja
hi girls
great sketch Carole:) and fab DT cards:)
here is my card
Have a good week-end
Hi Carola, great sketch.
And DT members beautiful cards.
here is my card.
Thanks for looking.
Greetings, THEA
Wonderful sketch to work with and great inspiration from DT.
My Card
Gorgeous sketch and fabulous cards DT
Here's mine
fabulous sketch and dt cards.
here is my card.
Thanks for looking. hugs Rachxx
thanks for the sketch
here is my card
What a lovely sketch and great work by the dt. here is my card
Fabulous cards ladies...love them all. HERE is my card.
Hi Girls
Great sketch. And lovely cards.
Here is my card
beautiful sketch!DT's cards are fab!! Mine is on my blog.
Hi Ladies, you have made so gorgeous cards this week ... as always!
And this is a great sketch from Carole!
Here is my card.
Wow what a lovely cards from the design team!! I really liked this sketch I hope you will like my card. You can see it http://car-d-elicious.blogspot.com/2008/11/no-patterned-paper.html
Sorry that link isn't good. here is the good link!
Sorry about that!
I have kept it real simple! The result is here
Loved this sketch.
Here is my entry.
Hi,great sketch to work with this week,thank you Carole,and gorgeous DT cards,thanks for the challenge,
Take care
Sandra x
My card is her TFL
Gorgeous cards from the DT and fab sketch to follow this week.
Here's my card. Michele x
Wonderful sketch - it really helped me out today with a card I had to make - love the DT cards.
Here is my Entry
Thanks for looking
Lovely sketch and I like the DT cards a lot!
Here is my card!
~ Leonie ~
Another great sketch to work with and beautiful DT examples. My card is here:-
Lovely array of cards from the DT xx
Here's my card
Great sketch Carole and beautiful DT items. Here is my card
Lovely sketch and amazing cards from DT. Here is my entry.
x Natasha x
Hello Girls.
Here is my card for this week :)
Thanks for the sketch and wonderful DT-Cards.
click here
Greetings, dANA
Hi DT Friday Sketchers !!!
very wonderful card's !!!
MY CARD
Sorry this is MY CARD
VERY PRETTY CARD ( I LOVE )
MY CARD
Great cards from the DT!
Here is mine
Hi everyone, thanks for the great sketch, I love it. Great DT cards. My card is here. TFL. Chris x
What a great sketch Carole and wonderful DT cards!!
Here is my card
Thank you Carole for this beautiful sketch ! Here's the link of my card : http://lemondedevali.canalblog.com/archives/2008/11/30/11576696.html
Thanks for looking. See you soon !
Hugs. Val.
Great sketch and I loved all of the DT cards!
Here is my card and thanks for taking a peek:
Here is my card http://emzstewart.blogspot.com/2008/11/friday-sketchers-scetch-number-30.html
this is the first challenge i've done, hope you like my card
What a FAB sketch and great designs from the DT. Here is my card.
Thank you for looking.
having a {me} day
Hi, Lovely, simple sketch and fab cards from the DT again. My card is HERE
Thanks for the beautiful sketch!! Dt cards are gorgeous!! Here's mine if you care to take a look!! Thanks again:)Kathy
Such a fab sketch and the DT examples are all so different!
You can see my entry
Judy x
gorgeous cards from the DT and a great sketch for quick christmas cards! Mine is here
hugs, annie x
I like that sketch!
Here is my card
I try again, the link didnt work.
Here is my card
Great sketch! Here is mine!
great sketch ~wonderful cards
here is my card
thanks for looking
vanessa xx
Hi, fab easy sketch! Super cards by the DT! HERE is my card.
Thanks x
Great sketch this week thank you carole, and great examples from the DT
Here is my card
My brown Christmas card is here
An absolute fabulous sketch. Here's mine: http://justcrafting.blogspot.com/2008/12/friday-sketchers.html
Her is my contribution.
my card
Fab challenge, lovely entries.
Here is mine
Just added my card to my blog xxx
This is an amazing sketch. Thank you Carole! The DT's cards are all so beautiful! Here is my card. Hugs Jeanette
Fantastic sketch this week Carol, I really enjoyed this one and fab cards by the DT too :) Donna x
My card is here on my blog
Gorgeous cards DT girls and lovely sketch Carole thank you.
My card can be found here
kim x
Another incredible job by the DT!!!Here is my card. Thanks for the fun and inspiration!!! Have a fabulous week!!!
Fantastic sketch and great DT cards. My card is here.
Thanks for looking
Cathy xx
Beautifull sketch and great cards by the DT.
HERE'S MY CARD
xoxo karin
Thanks for a great sketch Carole and i love the fabby cards by the DT :)
Here is my card
Great cards by the DT and easy sketch to work with. My card is
Thanks for the great sketch !
I made this card: http://livethelife-marjolijne.blogspot.com/
Thanks for looking!
A great sketch to work with...and lovely cards by the DT
here's mine
Sue x
Great DT´s!
Here is my idea:
Beautiful dt cards, i like the sketch.
this is my card based on it.
What a nice versatile sketch - and lovely DT cards, as ever.
Here's my entry
Brilliant sketch to work with Carole, and wonderful DT cards! Here's my card
Great sketch and such wonderful examples-thank you.
love Dingle.xx
Here is my card.
Wonderful sketch and great cards! I actually made 3 cards straight away!! You can see them here
Joanne x
Silly me I put my entry on the wrong post some how. So here goes again...
Another great sketch and lovely DT cards
Here is my card
X Amy X
loved this sketch! great dt cards :)
my take on this weeks challenge can be seen HERE
thank you so much for taking the time to look :)
Great sketch ladies and wonderful DT cards - love them all. My card is here
Pauline x
Very nice DT cards!
Here is my card
Dt wonderfull cards here is mine CARD
Thanks for looking,
Big hugs Agnesxx
Wow, what a nice sketch.
Here is my card for it
wow what a sketch...its FAB....i hope that u like what i have done...u can see my entry at
thanx...luv n hugs Lesley xxx
Thanks for the challenge. Here is my card
Great DT cards, i love the sketch. I kept it simple this this week.
my card
Great sketch. Here's my card
Thanks for looking
Wonderful DT cards. There's one from me here
Love the sketch, well done to DT xx
Here's mine!
Hi everyone, I just love all the DT's cards... Well, here is mine
lovely cards. here is mine
fab cards from the DT!!
my card is here
thanks for looking
hugs mx
Another great sketch and lovely DT cards
my card
Thanks for looking
Hugs, Sonja xx
fab cards great challenge.here is mine
Thanks Carole for a great sketch and beautiful cards from DT
Here is my card
Karen x
Awesome DT-work!
Here is my shaker card.
Thanks for looking!
You think about everything!
here is my card
X karin
my entry is here not a card, but papercraft
thanks for looking
Sarah x
My entry card is here
fantastic sketch and wonderful DT cards you are all so talented hugs Jill x
here is mine.
Lovely sketch, and lots of beautiful cards that has sprung from it. Hope I have followed the sketch enough :)
My entry
Wonderful sketch, kept my card quite simple as lots of sparkle on the tree! Its here TFL
is my entry- sorry it is last minute, hope you will let me enmter. I made it on Wednesday but was too ill to upload it
try again
Monads Made Difficult
This is a short, fast and analogy-free introduction to Haskell monads derived from a categorical perspective. This assumes you are familiar with Haskell typeclasses and basic category theory.
If you aren’t already comfortable with monads in Haskell, please don’t read this. It will confuse your intuition even more.
Suppose we have an abstract category \(\mathcal{C}\) with objects and morphisms.
• Objects : \(●\)
• Morphisms : \(● \rightarrow ●\)
For each object there is an identity morphism id and a composition rule \((\circ)\) for combining morphisms associatively. We can model this with the following type class in Haskell
class Category c where
id :: c x x
(.) :: c y z -> c x y -> c x z
In Haskell there is a category we call Hask over the constructor (->) of function types.
type Hask = (->)
instance Category Hask where
id x = x
(g . f) x = g (f x)
The constructor (->) is sometimes confusing, for example the following are equivalent.
(->) ((->) a b) ((->) a c)
(a -> b) -> (a -> c)
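As a quick, hedged sanity check, the Hask instance above really does satisfy the category laws on sample values (this standalone sketch redefines the class so it can reuse the names id and (.) without clashing with the Prelude):

```haskell
import Prelude hiding (id, (.))

class Category c where
  id  :: c x x
  (.) :: c y z -> c x y -> c x z

-- Hask: types as objects, functions as morphisms
instance Category (->) where
  id x = x
  (g . f) x = g (f x)

main :: IO ()
main = do
  let f = (+ 1)
      g = (* 2)
      h = subtract 3
  -- left and right identity
  print ((id . f) (10 :: Int) == f 10)                  -- True
  print ((f . id) (10 :: Int) == f 10)                  -- True
  -- associativity of composition
  print (((h . g) . f) (10 :: Int) == (h . (g . f)) 10) -- True
```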
Between two categories we can construct a functor denoted \(T\) which maps between objects and morphisms of categories.
• Objects : \(T(●)\)
• Morphisms : \(T (● \rightarrow ●)\)
With the condition that \(T (f \circ g) = T (f) \circ T (g)\). In Haskell we model this with a multiparameter typeclass:
class (Category c, Category d) => Functor c d t where
fmap :: c a b -> d (t a) (t b)
The identity functor \(1_\mathcal{C}\) for a category \(\mathcal{C}\) is a functor mapping all objects to themselves and all morphisms to themselves.
newtype Id a = Id a
instance Functor Hask Hask Id where
fmap f (Id a) = Id (f a)
An endofunctor is a functor from a category to itself.
type Endofunctor c t = Functor c c t
The repeated image of a endofunctor over a category is written with exponential notation:
\[ \begin{align*} T^2 &= T T : \mathcal{C} \rightarrow \mathcal{C} \\ T^3 &= T T T: \mathcal{C} \rightarrow \mathcal{C} \end{align*} \]
newtype FComp g f x = C { unC :: g (f x) }
instance (Functor a b f, Functor b c g) => Functor a c (FComp g f) where
fmap f (C x) = C (fmap (fmap f) x)
Natural Transformations
For two functors \(F,G\) between two categories \(\mathcal{A,B}\):
\[ F : \mathcal{A} \rightarrow \mathcal{B} \\ G : \mathcal{A} \rightarrow \mathcal{B} \]
We can construct a natural transformation \(\eta\) which is a mapping between functors \(\eta : F \rightarrow G\) that associates every object \(X\) in \(\mathcal{A}\) to a morphism in \(\mathcal{B}\):
\[ \eta_X : F(X) \rightarrow G(X) \]
Shown diagrammatically as:
Such that the following naturality condition holds for any morphism \(f : X \rightarrow Y\):
\[ \eta_Y \circ F(f) = G(f) \circ \eta_X \]
This is expressible in our general category class as the following existential type:
type Nat c f g = forall a. c (f a) (g a)
In the case of Hask we get a family of polymorphic functions with signature: forall a. f a -> g a. The canonical example is the natural transformation between the List functor and the Maybe functor (where f = List, g = Maybe).
safeHead :: forall a. [a] -> Maybe a
safeHead [] = Nothing
safeHead (x:xs) = Just x
Either way we chase the diagram we end up at the same place.
fmap f (safeHead xs) ≡ safeHead (fmap f xs)
Run through each of the cases if you need to convince yourself of this fact.
fmap f (safeHead [])
= fmap f Nothing
= Nothing
safeHead (fmap f [])
= safeHead []
= Nothing
fmap f (safeHead (x:xs))
= fmap f (Just x)
= Just (f x)
safeHead (fmap f (x:xs))
= safeHead [f x]
= Just (f x)
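The same chase can be run mechanically on sample inputs; this is a spot check of the naturality square, not a proof:

```haskell
safeHead :: [a] -> Maybe a
safeHead []      = Nothing
safeHead (x:xs)  = Just x

-- naturality: fmap f . safeHead  ==  safeHead . fmap f
natural :: Eq b => (a -> b) -> [a] -> Bool
natural f xs = fmap f (safeHead xs) == safeHead (fmap f xs)

main :: IO ()
main = do
  print (natural (+ 1) ([] :: [Int]))     -- True, both sides are Nothing
  print (natural (* 2) [1, 2, 3 :: Int])  -- True, both sides are Just 2
```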
We can finally define a monad over a category \(\mathcal{C}\) to be a triple \((T, \eta, \mu)\) of:
1. An endofunctor \(T: \mathcal{C} \rightarrow \mathcal{C}\)
2. A natural transformation \(\eta : 1_\mathcal{C} \rightarrow T\)
3. A natural transformation \(\mu : T^2 \rightarrow T\)
class Endofunctor c t => Monad c t where
eta :: c a (t a)
mu :: c (t (t a)) (t a)
With an associativity square:
\[ \mu \circ T \mu = \mu \circ \mu T \\ \]
And a triangle equality:
\[ \mu \circ T \eta = \mu \circ \eta T = 1_T \\ \]
Alternatively we can express our triple as a series of string diagrams, in which we invert the traditional commutative diagram notation of morphisms as lines and objects as points, and instead draw morphisms as points and objects as lines. In this form the monad laws have a nice geometric symmetry.
With the coherence conditions given diagrammatically:
Bind/Return Formulation
There is an equivalent formulation of monads in terms of two functions ((>>=), return), which can be written in terms of (mu, eta).
In Haskell we define the bind operator (>>=) in terms of the natural transformations and the fmap of the underlying functor. The join and return functions can be defined in terms of mu and eta.
(>>=) :: (Monad c t) => c a (t b) -> c (t a) (t b)
(>>=) f = mu . fmap f
return = eta
In this form equivalent naturality conditions for the monad’s natural transformations give rise to the regular monad laws by substitution with our new definitions.
fmap f . return ≡ return . f
fmap f . join ≡ join . fmap (fmap f)
And the equivalent coherence conditions expressed in terms of bind and return are the well known Monad laws:
return a >>= f ≡ f a
m >>= return ≡ m
(m >>= f) >>= g ≡ m >>= (\x -> f x >>= g)
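Using the Prelude's list monad, the three laws can be spot-checked on sample values (a sanity check rather than a proof; f, g and m below are arbitrary illustrative choices):

```haskell
-- two arbitrary Kleisli arrows in the list monad
f :: Int -> [Int]
f x = [x, x + 1]

g :: Int -> [Int]
g x = [x * 2]

m :: [Int]
m = [1, 2, 3]

main :: IO ()
main = do
  print ((return 1 >>= f) == f 1)                         -- left identity:  True
  print ((m >>= return) == m)                             -- right identity: True
  print (((m >>= f) >>= g) == (m >>= (\x -> f x >>= g)))  -- associativity:  True
```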
Kleisli Category
The final result is that given a monad we can form a new category, called the Kleisli category, from the monad. The objects are embedded in our original c category, but our arrows are now Kleisli arrows a -> T b. Given this class of “actions” we’d like to write an operator which combines these morphisms: (b -> T c) -> (a -> T b) -> (a -> T c), just like we combine functions (b -> c) -> (a -> b) -> (a -> c) in our host category.
It turns out we can, with a specific operator (<=<) over Kleisli arrows, which is precisely morphism composition in the Kleisli category. The Kleisli category embodies “composition of actions” and forms a very general model of computation.
The mapping between a category \(\mathcal{C}\) and the Kleisli category formed from it is that:
1. Objects in the Kleisli category are objects from the underlying category.
2. Morphisms are Kleisli arrows of the form : \(f : A \rightarrow T B\)
3. Identity morphisms in the Kleisli category are precisely \(\eta\) in the underlying category.
4. Composition of morphisms \(f \circ g\) in terms of the host category is defined by the mapping:
\[ f \circ g = \mu \circ T (f) \circ g \]
Simply put, the monad laws are the equivalent category laws for the Kleisli category.
(<=<) :: (Monad c t) => c y (t z) -> c x (t y) -> c x (t z)
f <=< g = mu . fmap f . g
newtype Kleisli c t a b = Kleisli (c a (t b))
instance Monad c t => Category (Kleisli c t) where
-- id :: (Monad c t) => c a (t a)
id = Kleisli eta
-- (.) :: (Monad c t) => c y (t z) -> c x (t y) -> c x (t z)
(Kleisli f) . (Kleisli g) = Kleisli ( f <=< g )
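Specializing to Hask (c = (->)), Kleisli composition is the (<=<) exported by Control.Monad. The safeRecip and safeSqrt arrows below are hypothetical examples, not from the original text; the point is that composition of actions short-circuits on failure:

```haskell
import Control.Monad ((<=<))

safeRecip :: Double -> Maybe Double
safeRecip 0 = Nothing
safeRecip x = Just (1 / x)

safeSqrt :: Double -> Maybe Double
safeSqrt x
  | x < 0     = Nothing
  | otherwise = Just (sqrt x)

main :: IO ()
main = do
  print ((safeSqrt <=< safeRecip) 4)     -- Just 0.5
  print ((safeSqrt <=< safeRecip) 0)     -- Nothing, safeRecip fails
  print ((safeSqrt <=< safeRecip) (-4))  -- Nothing, safeSqrt fails
```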
In the case of Hask, where c = (->), we indeed see the instance give rise to Monad and Functor instances similar to the Prelude ( if the Prelude had the proper Functor/Monad hierarchy! ).
class Functor t where
fmap :: (a -> b) -> t a -> t b
class Functor t => Monad t where
eta :: a -> (t a)
mu :: t (t a) -> (t a)
(>>=) :: Monad t => t a -> (a -> t b) -> t b
ma >>= f = mu (fmap f ma)
Haskell Monads
For instance the List monad would have:
1. \(\eta\) returns a singleton list from a single element.
2. \(\mu\) turns a nested list into a flat list.
3. \(\mathtt{fmap}\) applies a function over the elements of a list.
instance Functor [] where
fmap f [] = []
fmap f (x:xs) = f x : fmap f xs
instance Monad [] where
-- eta :: a -> [a]
eta x = [x]
-- mu :: [[a]] -> [a]
mu = concat
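In GHCi these behave as described (etaL and muL below are standalone names used to avoid clashing with the Prelude's existing list instances):

```haskell
-- eta and mu for the list monad, under standalone names
etaL :: a -> [a]
etaL x = [x]

muL :: [[a]] -> [a]
muL = concat

main :: IO ()
main = do
  print (etaL 3 == [3 :: Int])                      -- True
  print (muL [[1, 2], [3], []] == [1, 2, 3 :: Int]) -- True
  print (fmap (+ 1) [1, 2, 3] == [2, 3, 4 :: Int])  -- True
```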
The IO monad would intuitively have the implementation:
1. \(\eta\) lifts a pure value into the context of the computation.
2. \(\mu\) turns a sequence of IO operations into a single IO operation.
3. \(\mathtt{fmap}\) applies a function over the result of the computation.
Man Page
Manual Section... (3) - page: XDrawArc
XDrawArc, XDrawArcs, XArc - draw arcs and arc structure
int XDrawArc(Display *display, Drawable d, GC gc, int x, int y, unsigned int width, unsigned int height, int angle1, int angle2);
int XDrawArcs(Display *display, Drawable d, GC gc, XArc *arcs, int narcs);
angle1
    Specifies the start of the arc relative to the three-o'clock position from the center, in units of degrees * 64.
angle2
    Specifies the path and extent of the arc relative to the start of the arc, in units of degrees * 64.
arcs
    Specifies an array of arcs.
d
    Specifies the drawable.
display
    Specifies the connection to the X server.
gc
    Specifies the GC.
narcs
    Specifies the number of arcs in the array.
width, height
    Specify the width and height, which are the major and minor axes of the arc.
x, y
    Specify the x and y coordinates, which are relative to the origin of the drawable and specify the upper-left corner of the bounding rectangle.
XDrawArc draws a single circular or elliptical arc, and XDrawArcs draws multiple circular or elliptical arcs. Each arc is specified by a rectangle and two angles. The center of the circle or ellipse is the center of the rectangle, and the major and minor axes are specified by the width and height. Positive angles indicate counterclockwise motion, and negative angles indicate clockwise motion. If the magnitude of angle2 is greater than 360 degrees, XDrawArc truncates it to 360 degrees.
For an arc specified as [x, y, width, height, angle1, angle2], the origin of the major and minor axes is at [x + width/2, y + height/2], and the infinitely thin path describing the entire circle or ellipse intersects the horizontal axis at [x, y + height/2] and [x + width, y + height/2] and intersects the vertical axis at [x + width/2, y] and [x + width/2, y + height]. These coordinates can be fractional and so are not truncated to discrete coordinates. The path should be defined by the ideal mathematical path. For a wide line with line-width lw, the bounding outlines for filling are given by the two infinitely thin paths consisting of all points whose perpendicular distance from the path of the circle/ellipse is equal to lw/2 (which may be a fractional value). The cap-style and join-style are applied the same as for a line corresponding to the tangent of the circle/ellipse at the endpoint.
For an arc specified as [x, y, width, height, angle1, angle2], the angles must be specified in the effectively skewed coordinate system of the ellipse (for a circle, the angles and coordinate systems are identical). The relationship between these angles and angles expressed in the normal coordinate system of the screen (as measured with a protractor) is as follows:
skewed-angle = atan(tan(normal-angle) * width/height) + adjust

The skewed-angle and normal-angle are expressed in radians (rather than in degrees scaled by 64) in the range [0, 2π], atan returns a value in the range [-π/2, π/2], and adjust is:

0 for normal-angle in the range [0, π/2]
π for normal-angle in the range [π/2, 3π/2]
2π for normal-angle in the range [3π/2, 2π]
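The conversion above can be sketched in Python (the helper name is mine, not part of Xlib):

```python
import math

def skewed_angle(normal_angle, width, height):
    """Convert a screen-space ("normal") angle in radians to the
    ellipse's skewed coordinate system, per the relation in the text."""
    # adjust keeps the result in [0, 2*pi], since atan only
    # returns values in [-pi/2, pi/2]
    if normal_angle <= math.pi / 2:
        adjust = 0.0
    elif normal_angle <= 3 * math.pi / 2:
        adjust = math.pi
    else:
        adjust = 2 * math.pi
    return math.atan(math.tan(normal_angle) * width / height) + adjust
```

For a circle (width == height) the skewed and normal angles coincide, as the text notes.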
For any given arc, XDrawArc and XDrawArcs do not draw a pixel more than once. If two arcs join correctly and if the line-width is greater than zero and the arcs intersect, XDrawArc and XDrawArcs do
not draw a pixel more than once. Otherwise, the intersecting pixels of intersecting arcs are drawn multiple times. Specifying an arc with one endpoint and a clockwise extent draws the same pixels as
specifying the other endpoint and an equivalent counterclockwise extent, except as it affects joins.
If the last point in one arc coincides with the first point in the following arc, the two arcs will join correctly. If the first point in the first arc coincides with the last point in the last arc,
the two arcs will join correctly. By specifying one axis to be zero, a horizontal or vertical line can be drawn. Angles are computed based solely on the coordinate system and ignore the aspect ratio.
Both functions use these GC components: function, plane-mask, line-width, line-style, cap-style, join-style, fill-style, subwindow-mode, clip-x-origin, clip-y-origin, and clip-mask. They also use
these GC mode-dependent components: foreground, background, tile, stipple, tile-stipple-x-origin, tile-stipple-y-origin, dash-offset, and dash-list.
XDrawArc and XDrawArcs can generate BadDrawable, BadGC, and BadMatch errors.
The XArc structure contains:
typedef struct {
short x, y;
unsigned short width, height;
short angle1, angle2; /* Degrees * 64 */
} XArc;
All x and y members are signed integers. The width and height members are 16-bit unsigned integers. You should be careful not to generate coordinates and sizes out of the 16-bit ranges, because the
protocol only has 16-bit fields for these values.
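Since the angle members are stored in units of degrees * 64, filling an XArc-like record might look like the following sketch (the dict stands in for the C struct; the helper name is mine):

```python
def make_xarc(x, y, width, height, start_deg, extent_deg):
    """Build an XArc-like record. Angles are stored in units of
    degrees * 64, matching the angle1/angle2 convention above."""
    return {
        "x": x, "y": y,
        "width": width, "height": height,
        "angle1": int(start_deg * 64),
        "angle2": int(extent_deg * 64),
    }

# A 90-degree arc: angle2 is stored as 90 * 64 = 5760.
arc = make_xarc(10, 10, 100, 50, 0, 90)
```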
BadDrawable A value for a Drawable argument does not name a defined Window or Pixmap.
BadGC A value for a GContext argument does not name a defined GContext.
BadMatch An InputOnly window is used as a Drawable.
BadMatch Some argument or pair of arguments has the correct type and range but fails to match in some other way required by the request.
Xlib - C Language X Interface
Time: 15:27:28 GMT, June 11, 2010
Calculus help
[tex]\int \sin^3{3x} \cos^3{3x}\,dx[/tex]
First things first, get rid of the 3x's with the substitution u = 3x. Since du = 3 dx, all this does is change the integral by a factor of 1/3. So we have,
[tex]\frac{1}{3}\int \sin^3{u} \cos^3{u}\,du[/tex]
If we had only 1 sine term or only 1 cosine term, we'd be gold. Problem solved. But we got 2 too many. So let's get rid of them!
[tex]\sin^2{x} + \cos^2{x} = 1[/tex]
Use this to turn the integral into
[tex]\frac{1}{3}\int \sin^3{u} (1 - \sin^2{u}) \cos{u}\,du[/tex]
which is easily separated and solved by substitution.
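To finish it off (these last steps aren't in the original post): let w = sin u, so dw = cos u du, and

[tex]\frac{1}{3}\int (w^3 - w^5)\,dw = \frac{w^4}{12} - \frac{w^6}{18} + C = \frac{\sin^4{3x}}{12} - \frac{\sin^6{3x}}{18} + C[/tex]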
Have another shot at the second one, keeping in mind that
[tex]\frac{d}{dx}\tan{x} = \sec^2{x}[/tex]
Should I use a confidence interval, standard error, or something else here? (self.statistics)
submitted by sylocybin
I have k collections of interarrival times, and I've calculated the average number of arrivals of each collection (the number of arrivals in the collection divided by the sum of the interarrival
times). I now have k average arrival rates, and I'd like to use these to generate more average arrival rates to use as parameters in a simulation. I apologize if my knowledge of how to proceed is
completely wrong.
I thought of two ways: find the 95% confidence interval (standard in my field) of the mean of the k values and draw uniformly from that interval, or take the mean of the k values and draw from a
normal distribution with that mean and standard deviation equivalent to the standard error of the k values. Are either of these approaches correct, and if so, which should I use?
I'm doing this in Java and Matlab, but I don't think that will be relevant here.
My background: I work in engineering, and I was basically a math minor in college, though I almost exclusively did pure math (number theory, mostly).
Neither approach is correct. Using these approaches, you will only generate new rates very close to the mean--i.e., the standard deviation of your new average rates will be much lower than the
standard deviation of the real average rates.
The correct approach is to calculate the mean and variance (or standard deviation--NOT standard error) of the k values and draw from a normal with that mean and variance. Or, better yet, (1+1/k)
times that variance (to adjust for the uncertainty in the mean).
This is known as a prediction interval, as opposed to a confidence interval.
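In code, the suggested approach looks something like this (the function name and the inflation step are mine, sketching the advice above):

```python
import random
import statistics

def sample_new_rates(rates, n, rng=random):
    """Draw n new average arrival rates consistent with the spread of
    the observed rates. The variance is inflated by (1 + 1/k) to
    account for uncertainty in the estimated mean, as suggested above."""
    k = len(rates)
    m = statistics.mean(rates)
    var = statistics.variance(rates) * (1 + 1 / k)
    return [rng.gauss(m, var ** 0.5) for _ in range(n)]
```

Unlike drawing uniformly from a confidence interval, this reproduces (slightly more than) the spread of the real average rates.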
Factoring Equations Videos
Finding factoring equations help is now easier. Below is the list of all the video tutorials we have on factoring equations. Each video helps you better understand the subject, and seeing a variety of problems makes you familiar with the topic. Our factoring equations video tutorials give brief but to-the-point reviews, so you no longer need to feel overwhelmed by factoring equations homework and tests.

Use TuLyn for factoring equations help, homework help, math problem help, and math tips.
Enjoy our video tutorials and improve your math grades.
Factoring Quadratic Equations Video Clip
Factoring Quadratic Equations 2 Video Clip
Factoring Quadratic Equations 3 Video Clip
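As a taste of what the clips cover, here is a small illustrative routine (not from TuLyn) that factors a quadratic with integer coefficients over the rationals:

```python
from fractions import Fraction
from math import isqrt

def factor_quadratic(a, b, c):
    """Factor a*x^2 + b*x + c with integer coefficients as
    k * (p*x + q) * (r*x + s), returned as (k, (p, q), (r, s)),
    or None when there is no rational-root factorization."""
    disc = b * b - 4 * a * c
    if disc < 0:
        return None
    root = isqrt(disc)
    if root * root != disc:
        return None  # irrational roots: no nice factorization
    r1 = Fraction(-b + root, 2 * a)
    r2 = Fraction(-b - root, 2 * a)
    # Each root p/q (in lowest terms) contributes a factor (q*x - p).
    f1 = (r1.denominator, -r1.numerator)
    f2 = (r2.denominator, -r2.numerator)
    k = a // (f1[0] * f2[0])  # leftover integer constant
    return k, f1, f2

# x^2 + 5x + 6 = (x + 2)(x + 3)
```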
Alternating-current motor control apparatus - Patent # 8143839 - PatentGenius
Alternating-current motor control apparatus
Inventor: Ide, et al.
Date Issued: March 27, 2012
Application: 12/555,808
Filed: September 9, 2009
Inventors: Ide; Kozo (Kitakyushu, JP)
Sato; Sadayuki (Kitakyushu, JP)
Iura; Hideaki (Kitakyushu, JP)
Morimoto; Shinya (Kitakyushu, JP)
Assignee: Kabushiki Kaisha Yaskawa Denki (Fukuoka, JP)
Primary Examiner: Benson; Walter
Assistant Examiner: Agared; Gabriel
Attorney Or Agent: Ditthavong Mori & Steiner, P.C.
U.S. Class: 318/811; 318/400.01; 318/400.02; 318/400.32; 318/700; 318/727; 318/799; 318/802; 318/807
Field Of Search: 318/400.01; 318/400.02; 318/400.32; 318/700; 318/727; 318/799; 318/802; 318/807; 318/811
International Class: H02P 27/04; H02P 27/00; H02P 6/00; H02P 21/00
Foreign Patent Documents: JP 2003-319697; WO 02/091558
Abstract: An alternating-current motor control apparatus includes a stator frequency computing unit configured to compute a stator frequency of a motor magnetic flux; a torque error computing unit configured to compute a torque error by using the motor magnetic flux, an estimated current, and a motor current; and a speed estimator configured to estimate a speed of the alternating-current motor by using the stator frequency and the torque error. The speed estimator includes a proportional controller configured to reduce the torque error to zero, and an adaptive filter configured to eliminate a high-frequency component of the torque error.
Claim: What is claimed is:
1. An alternating-current motor control apparatus including a pulse width modulation controller for driving an alternating-current motor by outputting a command voltage, comprising: a motor model computing unit configured to compute a motor magnetic flux and an estimated current of the alternating-current motor by using the command voltage; a current detector configured to detect a motor current flowing in the alternating-current motor; a stator frequency computing unit configured to compute a stator frequency of the motor magnetic flux; a torque error computing unit configured to compute a torque error by using the motor magnetic flux, the estimated current, and the motor current; and a speed estimator configured to estimate a speed of the alternating-current motor by using the stator frequency and the torque error, the speed estimator comprising: an adaptive filter configured to adapt a filter characteristic based on the torque error and configured to compute a first estimated speed value by eliminating a high-frequency component of the torque error; a proportional controller configured to compute a second estimated speed value based on the torque error to reduce a torque error computed by the torque error computing unit to zero; and an adder configured to compute a third estimated speed value by adding the first estimated speed value to the second estimated speed value.
2. The alternating-current motor control apparatus according to claim 1, wherein the adaptive filter has a coefficient determined in accordance with a cutoff frequency associated with the torque error, the torque error, and the stator frequency.
3. The alternating-current motor control apparatus according to claim 2, wherein the cutoff frequency is proportional to the torque error.
4. An alternating-current motor control apparatus including a pulse width modulation controller for driving an alternating-current motor by outputting a command voltage, comprising: a motor model computing unit configured to compute a motor magnetic flux and an estimated current of the alternating-current motor by using the command voltage; a current detector configured to detect a motor current flowing in the alternating-current motor; a stator frequency computing unit configured to compute a stator frequency of the motor magnetic flux; a torque error computing unit configured to compute a torque error by using the motor magnetic flux, the estimated current, and the motor current; and a speed estimator configured to estimate a speed of the alternating-current motor by using the stator frequency and the torque error, wherein the speed estimator estimates the speed of the alternating-current motor by using a value obtained by adding an output of a proportional controller configured to reduce the torque error to zero to an output of an adaptive filter configured to eliminate a high-frequency component of the torque error, and wherein a cutoff frequency is computed in accordance with a reactive power error computed by using the command voltage, the estimated current, and the motor current, and the adaptive filter has a coefficient determined in accordance with the cutoff frequency, the torque error, and the stator frequency.
5. The alternating-current motor control apparatus according to claim 4, wherein the cutoff frequency is proportional to the reactive power error.
6. An alternating-current motor control apparatus including a pulse width modulation controller for driving an alternating-current motor by outputting a command voltage, comprising: a motor model computing unit configured to compute a motor magnetic flux and an estimated current of the alternating-current motor by using the command voltage; a current detector configured to detect a motor current flowing in the alternating-current motor; a stator frequency computing unit configured to compute a stator frequency of the motor magnetic flux; a torque error computing unit configured to compute a torque error by using the motor magnetic flux, the estimated current, and the motor current; and a speed estimator configured to estimate a speed of the alternating-current motor by using the stator frequency and the torque error, wherein the speed estimator estimates the speed of the alternating-current motor by using a value obtained by adding an output of a proportional controller configured to reduce the torque error to zero to an output of an adaptive filter configured to eliminate a high-frequency component of the torque error, wherein the adaptive filter has a coefficient determined in accordance with a cutoff frequency associated with the torque error, the torque error, and the stator frequency, and wherein the adaptive filter operates as an integrator when the coefficient is 0, and operates as a primary delay filter when the coefficient is 1.
7. The alternating-current motor control apparatus according to claim 4, wherein the adaptive filter operates as an integrator when the coefficient is 0, and operates as a primary
delay filter when the coefficient is 1.
8. An alternating-current motor control apparatus including a pulse width modulation controller for driving an alternating-current motor by outputting a command voltage, comprising: means for computing a motor magnetic flux and an estimated current of the alternating-current motor by using the command voltage; means for detecting a motor current flowing in the alternating-current motor; means for computing a stator frequency of the motor magnetic flux; means for computing a torque error by using the motor magnetic flux, the estimated current, and the motor current; and means for estimating a speed of the alternating-current motor by using the stator frequency and the torque error, the means for estimating a speed of the alternating-current motor comprising: means for adapting a filter characteristic based on the torque error and computing a first estimated speed value by eliminating a high-frequency component of the torque error; means for computing a second estimated speed value based on the torque error to reduce a torque error computed by the torque error computing unit to zero; and means for computing a third estimated speed value by adding the first estimated speed value to the second estimated speed value.
9. The alternating-current motor control apparatus according to claim 8, wherein the means for adapting the filter characteristic has a coefficient determined in accordance with a cutoff frequency associated with the torque error, the torque error, and the stator frequency.
10. The alternating-current motor control apparatus according to claim 9, wherein the cutoff frequency is proportional to the torque error.
11. The alternating-current motor control apparatus according to claim 8, wherein a cutoff frequency is computed in accordance with a reactive power error computed by using the command voltage, the estimated current, and the motor current, and the means for adapting the filter characteristic has a coefficient determined in accordance with the cutoff frequency, the torque error, and the stator frequency.
12. The alternating-current motor control apparatus according to claim 11, wherein the cutoff frequency is proportional to the reactive power error.
13. The alternating-current motor control apparatus according to claim 9, wherein the means for adapting the filter characteristic operates as an integrator when the coefficient is 0, and operates as a primary delay filter when the coefficient is 1.
14. The alternating-current motor control apparatus according to claim 11, wherein the means for adapting the filter characteristic operates as an integrator when the coefficient is 0, and operates as a primary delay filter when the coefficient is 1.
Description: CROSS REFERENCES TO RELATED APPLICATIONS
The present application is related to Japanese Patent application no. 2008-247028 filed at Japan Patent Office titled "AC Motor Controller and Its Control Method", all of which are
incorporated herein by reference.
BACKGROUND OF THE INVENTION
1. Field of the Invention
The present invention relates to an alternating-current (AC) motor control apparatus and an AC motor control method for performing torque control or speed control of an AC motor
without using a position or speed sensor.
2. Description of Related Art
Methods for estimating the position and speed of an AC motor without using a position or speed sensor are roughly classified into methods in which the position and speed of an AC motor are estimated in accordance with a detected or estimated value of a motor induced voltage, and methods in which the position and speed of an AC motor are estimated, by applying a high-frequency signal to the AC motor, in accordance with a detected value of a voltage or a current that depends on the inductance characteristic of the AC motor. The former method is suitable for driving an AC motor for which the inductance characteristic of the AC motor is not available in advance. However, in the former method, in a case where the frequency at which the AC motor is driven is low, since the induced voltage is low, the signal-to-noise (S/N) ratio is reduced due to the influences of measured noise and the nonlinearity of characteristics of a driving circuit. Hence, a speed estimation error is increased.

For example, WO2002/091558 suggests a technique in which the speed of a motor is estimated, not directly in accordance with an induced voltage, but by estimating magnetic flux in accordance with a motor model, and at the same time, by estimating an error signal in accordance with an estimated value of magnetic flux and a deviation between a redundant estimated value of a current and a detected value of a current, using a proportional-plus-integral compensator that reduces the error signal to zero.
In addition, Japanese Unexamined Patent Application Publication No. 2003-319697 suggests a technique in which a gain computing unit is improved in such a manner that a gain of a deviation amplifier used for correcting the input of a motor model is properly output and the accuracy and responsiveness of speed estimation are thus improved, while the reliability and responsiveness of speed estimation are taken into consideration.
SUMMARY OF THE INVENTION
According to an aspect of the present invention, there is provided a control apparatus including a pulse width modulation controller for driving an alternating-current motor by outputting a command voltage. The control apparatus includes a motor model computing unit configured to compute a motor magnetic flux and an estimated current of the alternating-current motor by using the command voltage; a current detector configured to detect a motor current flowing in the alternating-current motor; a stator frequency computing unit configured to compute a stator frequency of the motor magnetic flux; a torque error computing unit configured to compute a torque error by using the motor magnetic flux, the estimated current, and the motor current; and a speed estimator configured to estimate a speed of the alternating-current motor by using the stator frequency and the torque error.
According to another aspect of the present invention, there is provided a control method performed in a control apparatus including a pulse width modulation controller for driving an alternating-current motor by outputting a command voltage. The control method includes a step of detecting a motor current flowing in the alternating-current motor; a step of computing a motor magnetic flux and an estimated current of the alternating-current motor by using the command voltage; a step of computing a stator frequency of the motor magnetic flux; a step of computing a torque error by using the motor magnetic flux, the estimated current, and the motor current and estimating a speed of the alternating-current motor in accordance with an output value that has been subjected to proportional control in such a manner that the torque error is reduced to zero; and a step of correcting the estimated speed in accordance with a value obtained by eliminating a high-frequency component of the torque error by using the stator frequency.
BRIEF DESCRIPTION OF THE DRAWINGS
Exemplary embodiments of the present invention will be described in detail based on the following figures, wherein:
FIG. 1 is a block diagram of an AC motor control apparatus according to a first embodiment;
FIG. 2 is a detailed block diagram of a speed estimator according to the first embodiment;
FIG. 3 includes illustrations for explaining filter characteristics of an adaptive filter according to the first embodiment;
FIG. 4A includes chart diagrams showing a case where the related art is applied;
FIG. 4B includes chart diagrams showing effects achieved in a case where the first embodiment of present invention is applied;
FIG. 5 is a detailed block diagram of a speed estimator according to a second embodiment; and
FIG. 6 is a flowchart showing a control method performed in an AC motor control apparatus according to a third embodiment.
DESCRIPTION OF THE PREFERRED EMBODIMENTS
Hereinafter, embodiments of the present invention will be described with reference to the drawings.
FIG. 1 is a block diagram of an AC motor control apparatus 1 according to a first embodiment of the present invention.

The control apparatus 1 includes a current detector 102 for detecting three-phase currents (iu, iv, and iw) of a motor 101, and a three-phase/two-phase converter 103 for converting the three-phase currents (iu, iv, and iw) into detected two-phase currents (i_sα and i_sβ) in the rest system of coordinates.

The control apparatus 1 further includes a pulse width modulation (PWM) controller 104 for converting two-phase voltage commands (V*_sd and V*_sq) output from a vector controller 107 into three-phase voltage commands (V*u, V*v, and V*w) in the fixed system of coordinates by using a magnetic flux azimuth θ̂, and applying the obtained three-phase voltage commands (V*u, V*v, and V*w) to the motor 101.

The control apparatus 1 further includes a phase computing unit 105 for computing the magnetic flux azimuth θ̂ in accordance with an arctangent operation using estimated magnetic flux values (λ̂_α and λ̂_β) output from a motor model computing unit 109, and outputting the magnetic flux azimuth θ̂ to the PWM controller 104 and a vector converter 106.

The control apparatus 1 further includes the vector converter 106 for performing coordinate conversion of the voltage commands (V*_sd and V*_sq) output from the vector controller 107 into two-phase voltage commands (V*_sα and V*_sβ) in the rest system of coordinates, and outputting the two-phase voltage commands (V*_sα and V*_sβ) to the motor model computing unit 109.

The control apparatus 1 further includes the vector controller 107 for performing vector control of the motor 101 in the method described later; and a subtracter 108 for computing a difference (speed deviation Δωr) between a given speed command value ωr* and an estimated speed value ω̂r output from a speed estimator 114, and outputting the speed deviation Δωr to the vector controller 107.

The control apparatus 1 further includes the motor model computing unit 109 for computing estimated magnetic flux values (λ̂_α and λ̂_β) and estimated two-phase currents (î_sα and î_sβ) in accordance with the computation described later; and subtracters 110 and 111 for computing deviations (Δi_sα and Δi_sβ) between the estimated two-phase currents (î_sα and î_sβ) and the detected two-phase currents (i_sα and i_sβ), and outputting the deviations (Δi_sα and Δi_sβ) to a torque error computing unit 113.

The control apparatus 1 further includes a stator frequency computing unit 112, the torque error computing unit 113, and the speed estimator 114 for computing an estimated speed value ω̂r in the method described later, and drives the motor 101.
The vector controller 107 receives the speed deviation Δωr, a given magnetic flux command λr, and a magnetic flux component id and a torque component iq (not illustrated) of a motor current. The vector controller 107 performs speed control and current control in such a manner that the speed deviation Δωr is reduced to zero, and outputs the two-phase voltage commands (V*_sd and V*_sq) to the PWM controller 104 and the vector converter 106. Since methods for computing and controlling the magnetic flux component id and the torque component iq of a motor current and these commands are known, the explanation and illustration of these methods will be omitted.
The motor model computing unit 109 receives the two-phase voltage commands (V*_sα and V*_sβ) in the rest system of coordinates, and estimates magnetic flux values and currents in accordance with a mathematical model based on equations (1) and (2) as a motor model. The motor model computing unit 109 outputs the estimated magnetic flux values (λ̂_α and λ̂_β) to the phase computing unit 105, the stator frequency computing unit 112, and the torque error computing unit 113, and outputs the estimated currents (î_sα and î_sβ) to the subtracters 110 and 111 so that the deviations (Δi_sα and Δi_sβ) can be calculated. In equations (1) and (2), vector notation is used, with the state variables written as a stator current vector î_s = î_sα + j·î_sβ, a stator voltage vector u_s = u_sα + j·u_sβ, and a magnetic flux vector λ̂ = λ̂_α + j·λ̂_β in the rest system of coordinates.
In addition, in the case of an induction motor, the parameters are defined as follows: Rs represents a primary resistance, Rr' represents a secondary resistance obtained by conversion on the primary side, M' represents a mutual inductance obtained by primary conversion, σLs represents a leakage inductance, Ls represents a primary self-inductance, Lr represents a secondary self-inductance, τr represents a secondary time constant, M represents a mutual inductance, and ω̂_r represents a rotor angular velocity.
Equations (1) and (2) are based on a continuous system. However, obviously, in the case of implementation, discretized equations may be used.
Next, the stator frequency computing unit 112, the torque error computing unit 113, and the speed estimator 114 will be sequentially described in detail.
The stator frequency computing unit 112 computes a stator frequency ω₀ in accordance with equation (3) by using the estimated magnetic flux values (λ̂_α and λ̂_β) estimated by the motor model computing unit 109:

ω₀ = (λ̂_α·(dλ̂_β/dt) − λ̂_β·(dλ̂_α/dt)) / (λ̂_α² + λ̂_β²)   (3)

The differential operation portion of equation (3) may be obtained by dividing a value obtained by subtracting the last magnetic flux value from the current magnetic flux value by a computation time, causing the computation result to pass through a low-pass filter, and eliminating a ripple portion generated in a sudden change.
The torque error computing unit 113 is provided to compute a difference between the estimated torque and the actual torque. However, since the actual torque cannot be directly measured, a torque error Δτ is computed by using the estimated magnetic flux values (λ̂_α and λ̂_β) estimated by the motor model computing unit 109 and the deviations (Δi_sα and Δi_sβ) computed by the subtracters 110 and 111, in accordance with equation (4):

Δτ = λ̂_α·Δi_sβ − λ̂_β·Δi_sα   (4)
Next, the speed estimator 114 will be explained. FIG. 2 is a detailed block diagram of the speed estimator 114. The speed estimator 114 includes a region discriminator 201, a cutoff frequency computing unit 202, a proportional controller 203, an adaptive filter 204, and an adder 205.

The region discriminator 201 performs conditional comparison of the stator frequency ω₀ and the torque error Δτ, and sets a coefficient g_i to 1 or 0. More specifically, in a case where the absolute value of the stator frequency ω₀ is smaller than or equal to a set value (about 1/200 of the rated driving frequency) and the absolute value of the torque error Δτ is equal to or greater than a set value (0.5% of the rated torque), the coefficient g_i is set to 1. In a case where the above conditions are not met, the coefficient g_i is set to 0. That is, in a case where the torque error Δτ increases in a region near the zero frequency, the coefficient g_i is set to 1.
The cutoff frequency computing unit 202 computes a cutoff frequency ω_i that is proportional to the torque error Δτ. With the conversion factor between torque and frequency set to μ, the cutoff frequency ω_i is computed by using equation (5):

ω_i = μ·|Δτ|   (5)

Note that the conversion factor μ should be set to within a range of about 1 to about 10 [rad/s] when the torque error Δτ is equal to the rated torque of the motor 101.
The adaptive filter 204 has the coefficient g.sub.i and the cutoff frequency .omega..sub.i. The adaptive filter 204 receives the torque error .DELTA..tau., and computes a first
estimated speed value .omega.^.sub.r1 in accordance with equation (6): {circumflex over (.omega.)}.sub.r1=[.omega..sub.i/(s+g.sub.i.omega..sub.i)].DELTA..tau. (6)
The filter characteristic of the adaptive filter 204 is set in such a manner that the adaptive filter 204 operates as a full integrator when the coefficient g.sub.i is 0 and the
filter bandwidth is increased to the cutoff frequency .omega..sub.i and the phase is changed from -90 degrees to 0 degrees when the coefficient g.sub.i is 1, as shown in FIG. 3.
The proportional controller 203 multiplies the received torque error .DELTA..tau. by a gain Kpw in accordance with equation (7) and outputs the obtained value as a second estimated
speed value .omega.^.sub.r2: {circumflex over (.omega.)}.sub.r2=Kpw.DELTA..tau. (7)
The adder 205 adds the first estimated speed value .omega.^.sub.r1 to the second estimated speed value .omega.^.sub.r2, and outputs the obtained value as the final estimated speed
value .omega.^.sub.r.
As described above, the speed estimator 114 estimates the speed of the motor 101 by adding the output of the proportional controller 203, which is configured to reduce the torque error .DELTA..tau. to zero, to the output of the adaptive filter 204, which is configured to eliminate a high-frequency component of the torque error .DELTA..tau..
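The signal flow just described (region discriminator 201 → cutoff frequency computing unit 202 → adaptive filter 204, in parallel with the proportional controller 203, summed by the adder 205) can be sketched as a discrete-time update. This is an illustrative reconstruction, not taken from the patent: the numeric values (`mu`, `Kpw`, `dt`, the rated quantities) and the exact filter realization are assumptions; only the g.sub.i switching conditions and the integrator/low-pass behavior follow the text above.

```python
import math

def speed_estimator_step(tau_err, omega0, state, dt,
                         mu=5.0,                    # assumed torque-to-frequency factor (202)
                         Kpw=1.0,                   # assumed proportional gain (203)
                         omega_rated=2 * math.pi * 50,
                         tau_rated=10.0):
    # Region discriminator (201): g = 1 only near zero frequency with a large torque error
    g = 1.0 if (abs(omega0) <= omega_rated / 200 and
                abs(tau_err) >= 0.005 * tau_rated) else 0.0
    # Cutoff frequency computing unit (202), equation (5): omega_i = mu * |delta tau|
    omega_i = mu * abs(tau_err)
    # Adaptive filter (204): pure integrator when g = 0, first-order lag with
    # cutoff g * omega_i when g = 1 (one plausible realization of equation (6))
    state = state + dt * (omega_i * tau_err - g * omega_i * state)
    w_r1 = state
    # Proportional controller (203), equation (7)
    w_r2 = Kpw * tau_err
    # Adder (205): final estimated speed and updated filter state
    return w_r1 + w_r2, state
```

Iterating this step with g = 1 drives the filter state toward the low-frequency content of the torque error, while g = 0 recovers pure integral action away from the zero-frequency region.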
FIG. 4A includes chart diagrams showing a case where the related art is applied. FIG. 4B includes chart diagrams showing effects achieved in a case where the first embodiment of the
present invention is applied. FIG. 4A shows an estimated speed error and a phase error obtained, by using a known speed estimator employing proportional-plus-integral compensation,
when the motor 101 is switched from normal rotation to reverse rotation in a rated load state of the motor 101. FIG. 4B shows an estimated speed error and a phase error obtained in a
case where the first embodiment of the present invention is applied under the same conditions.
In the related art, near a region in which the speed is zero, an estimated speed error increases and the phase error accordingly increases. Meanwhile, according to an aspect of the
present invention, both the estimated speed error and the phase error are reduced, and in particular, the speed error and the phase error are significantly reduced near the zero
frequency, thus maintaining a reliable operation.
Since an AC motor control apparatus according to the first embodiment of the present invention is configured as described above, the operations and effects described below can be achieved.
Since the position and speed of a motor can be reliably estimated even in a region in which the driving frequency of the motor is low (including zero), torque control and speed
control of the motor can be performed without using a position or speed sensor. Furthermore, the cutoff frequency of a filter used when a torque error is computed can be varied, and
vibrations caused by the characteristics of the motor and a machine to which the motor is connected can be handled. Therefore, control instability can be reduced.
FIG. 5 is a detailed block diagram of a speed estimator 114' according to a second embodiment of the present invention. In the first embodiment, the speed estimator 114 in which the
cutoff frequency computing unit 202 computes a cutoff frequency that is proportional to a torque error is used. Meanwhile, in the second embodiment, the speed estimator 114' is used
instead of the speed estimator 114. That is, additional input signals are input to the speed estimator 114', and a cutoff frequency computing unit 502 of the speed estimator 114'
computes a cutoff frequency that is proportional to a reactive power error .DELTA.q. As shown in FIG. 5, the speed estimator 114' further includes a region discriminator 501, a
proportional controller 503, an adaptive filter 504, and an adder 505. Operations of the region discriminator 501, the proportional controller 503, the adaptive filter 504, and the
adder 505 are the same as those of the region discriminator 201, the proportional controller 203, the adaptive filter 204, and the adder 205. Hence, the explanation of the region
discriminator 501, the proportional controller 503, the adaptive filter 504, and the adder 505 will be omitted.
The cutoff frequency computing unit 502 will be explained. The cutoff frequency computing unit 502 computes a reactive power error .DELTA.q in accordance with equation (8) by using
two-phase voltage commands (V*.sub.s.alpha. and V*.sub.s.beta.) in the rest system of coordinates and deviations (.DELTA.i.sub.s.alpha. and .DELTA.i.sub.s.beta.) computed by the
subtracters 110 and 111, and computes a cutoff frequency .omega..sub.i in accordance with equation (9) by using a conversion factor .mu.q between power and frequency: .DELTA.q=V*.sub.s.alpha..DELTA.i.sub.s.beta.-V*.sub.s.beta..DELTA.i.sub.s.alpha. (8) .omega..sub.i=.mu..sub.q|.DELTA.q| (9)
Note that the conversion factor .mu.q should be set within a range of about 1 to about 10 [rad/s] when the reactive power error .DELTA.q is equal to the rated output of the motor 101.
Since, as with a torque error .DELTA..tau., the reactive power error .DELTA.q is caused by a speed estimation error, the adaptive filter 504 is capable of obtaining a first estimated
speed value .omega.^.sub.r1' in accordance with an operation similar to that of the adaptive filter 204 shown in FIG. 2.
As described above, the speed estimator 114' estimates the speed of the motor 101 by using the value obtained by adding an output of the proportional controller 503 that is configured
to reduce the torque error .DELTA..tau. to zero to an output of the adaptive filter 504 that eliminates a high-frequency component of the reactive power error .DELTA.q.
Since the speed estimator 114' is configured as described above in the second embodiment of the present invention, the operations and effects described below can be achieved.
Since the position and speed of a motor can be reliably estimated even in a region in which the driving frequency of the motor is low (including zero), torque control and speed
control of the motor can be performed without using a position or speed sensor. Furthermore, the cutoff frequency of a filter used when a reactive power error is computed can be
varied, and vibrations caused by the characteristics of the motor and a machine to which the motor is connected can be handled. Therefore, control instability can be reduced.
FIG. 6 is a flowchart showing a control method performed in an AC motor control apparatus according to a third embodiment of the present invention. A speed estimation method according
to the third embodiment will be explained with reference to the flowchart of FIG. 6.
In step 1, motor magnetic flux values (.lamda.^.sub..alpha. and .lamda.^.sub..beta.) and estimated currents (i^.sub.s.alpha. and i^.sub.s.beta.) are computed by using voltage commands
(V*.sub.sd and V*.sub.sq) output from the vector controller 107 to the motor 101 and a magnetic flux azimuth .theta.^. This processing has been described above in the explanation of
the motor model computing unit 109 in the first embodiment.
In step 2, a stator frequency .omega..sub.0 of the motor magnetic flux values (.lamda.^.sub..alpha. and .lamda.^.sub..beta.) computed in step 1 is computed. This processing has been
described above in the explanation of the stator frequency computing unit 112 in the first embodiment.
In step 3, a torque error .DELTA..tau. is computed by using the motor magnetic flux values (.lamda.^.sub..alpha. and .lamda.^.sub..beta.) and the estimated currents (i^.sub.s.alpha.
and i^.sub.s.beta.) computed in step 1 and motor currents (i.sub.s.alpha. and i.sub.s.beta.) detected by using the current detector 102 and obtained by performing coordinate
conversion. This processing has been described above in the explanation of the subtracters 110 and 111 and the torque error computing unit 113 in the first embodiment.
In step 4, a first estimated speed value .omega.^.sub.r1 is computed by multiplying the torque error .DELTA..tau. computed in step 3 by a proportional gain Kpw. This processing has
been described above in the explanation of the proportional controller 203 in the first embodiment.
In step 5, a second estimated speed value .omega.^.sub.r2 is computed by eliminating a high-frequency component of the torque error .DELTA..tau. by using the stator frequency
.omega..sub.0 computed in step 2 and a cutoff frequency .omega..sub.i determined in accordance with the torque error .DELTA..tau. computed in step 3. This processing has been described
above in the explanation of the adaptive filter 204 in the first embodiment.
In step 6, an estimated speed value .omega.^.sub.r is computed by adding the first estimated speed value .omega.^.sub.r1 computed in step 4 to the second estimated speed value .omega.^.sub.r2 computed in step 5. The estimated speed value .omega.^.sub.r is used for vector control and speed control performed in the vector controller 107 and the like in the first embodiment.
In the processing of step 5, as described in the second embodiment, a high-frequency component of the torque error .DELTA..tau. may be eliminated by computing a reactive power error
.DELTA.q by using voltage commands (V*.sub.sd and V*.sub.sq), estimated currents (i^.sub.s.alpha. and i^.sub.s.beta.), and motor currents (i.sub.s.alpha. and i.sub.s.beta.), and
determining a cutoff frequency .omega..sub.i in accordance with the reactive power error .DELTA.q.
Since the control method performed in an AC motor control apparatus according to the third embodiment of the present invention is implemented, operations and effects similar to those
of the first and second embodiments can be achieved.
According to the foregoing embodiments, the position and speed of a motor can be reliably estimated and torque control and speed control can be performed even in a region in which the
driving frequency of the motor is low (including zero) by improving a speed estimator itself, without performing input correction of a motor model, unlike in the related art.
Therefore, the present invention can be applied to general industrial machinery, in particular, to uses under circumstances in which a speed sensor cannot be used due to high
temperature or high vibration.
* * * * *
| {"url":"http://www.patentgenius.com/patent/8143839.html","timestamp":"2014-04-19T02:20:58Z","content_type":null,"content_length":"53239","record_id":"<urn:uuid:3c5a77aa-0993-46f5-8413-8bc377b6ef4e>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00357-ip-10-147-4-33.ec2.internal.warc.gz"}
Keyword : Beta
g01eec Upper and lower tail probabilities and probability density function for the beta distribution
g01fec Deviates for the beta distribution
g01gec Computes probabilities for the non-central beta distribution
g01sec Computes a vector of probabilities for the beta distribution
g01tec Computes a vector of deviates for the beta distribution
g05sbc Generates a vector of pseudorandom numbers from a beta distribution
g05sec Generates a vector of pseudorandom numbers from a Dirichlet distribution
s14cbc Logarithm of the beta function ln B(a,b)
s14ccc Incomplete beta function I[x](a,b) and its complement 1 − I[x](a,b)
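For orientation, the routines above correspond to standard operations on the beta family. A rough standard-library sketch of two of them (s14cbc via log-gamma, g05sbc via random.betavariate) follows; this is an analogue for illustration, not the NAG API:

```python
import math
import random

def ln_beta(a, b):
    # s14cbc analogue: ln B(a,b) = ln Gamma(a) + ln Gamma(b) - ln Gamma(a+b)
    return math.lgamma(a) + math.lgamma(b) - math.lgamma(a + b)

# g05sbc analogue: a vector of pseudorandom beta(2, 5) deviates
random.seed(0)
draws = [random.betavariate(2.0, 5.0) for _ in range(10000)]
sample_mean = sum(draws) / len(draws)   # true mean of beta(2,5) is 2/7
```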
© The Numerical Algorithms Group Ltd, Oxford UK. 2012 | {"url":"http://www.nag.co.uk/numeric/CL/nagdoc_cl23/pdf/INDEXES/KWIC/beta.html","timestamp":"2014-04-19T09:58:55Z","content_type":null,"content_length":"3626","record_id":"<urn:uuid:ecd7386f-271a-4db1-a520-aefb2fa5681a>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00249-ip-10-147-4-33.ec2.internal.warc.gz"} |
Recovering Marginal Effects and Standard Errors from Interaction Terms in R
March 5, 2012
By Frank Davenport
When I fit models with interactions, I often want to recover not only the interaction effect but also the marginal effect (the main effect + the interaction) and of course the standard errors. There are a couple of ways to do this in R but I ended up writing my own function (essentially a wrapper around the deltaMethod() function) to fit my needs.
In this case, I have a model where a continuous variable has been interacted with a discrete variable. I want to create a data.frame that stores the main effect, the marginal effects of the
interactions, and their standard errors.
I also want the standard errors for the marginal effects on the interaction to match the standard errors on the main effect, which in this case were HAC (heteroskedasticity and autocorrelation consistent, i.e., Newey-West).
The function below is a little sloppy as I built it iteratively while I was working through a specific problem, but it is generalized to any lm() type object. The basic use case is that you have 1 or more models, each with 1 or more interactions, and the discrete terms in the interactions have more than two levels (it will work with two levels, but in that case the function might be overkill). In my case I fit a series of region-specific panel models where two of the main effects were interacted with a year term. I wanted to have a generalizable function where I could just feed it the model object, variable name, and covariance estimator, and get back a data.frame with estimates and standard errors. In the next post I'll demonstrate the function with simulated data and show how to visualize the marginal effects in ggplot.
Note: The code below assumes you have already loaded car (for deltaMethod()), sandwich (for vcovHAC), and lmtest (for coeftest()).
##Calls this function, which is a wrapper for coeftest() from the lmtest package using a sandwich covariance estimator
sehac<-function(fit,vcov=vcovHAC){ #Convenience function for HAC standard errors
  coeftest(fit,vcov=vcov)
}
funinteff<-function(mod,var,vcov=vcovHAC){ #mod is an lm() object, var is the name of the main effect that was interacted, vcov is the type of variance covariance method you want to use
  #Extract coefficient names and create 'beta names' to feed to deltaMethod()
  cnames<-names(coef(mod))
  pnams<-data.frame('b'=paste('b',0:(length(cnames)-1),sep=""),'est'=cnames) #assign parameter names so that deltaMethod does not throw an error
  #Extract the specific parameters of interest: the main effect and its interaction terms
  intvars<-setdiff(cnames[grep(var,cnames,fixed=TRUE)],var)
  #--Create data frame to store the main effect
  int<-sehac(mod,vcov=vcov)[var,c('Estimate','Std. Error'),drop=FALSE]
  #Loop through and store the results in a data.frame
  for(i in 1:length(intvars)){
    b<-as.character(pnams$b[pnams$est %in% c(var,intvars[i])])
    est<-deltaMethod(mod,paste(b,collapse="+"),parameterNames=as.character(pnams$b),vcov.=vcov)
    int<-rbind(int,unlist(est[1:2]))
  }
  rownames(int)<-c(var,intvars)
  data.frame(int)
}
| {"url":"http://www.r-bloggers.com/recovering-marginal-effects-and-standard-errors-from-interaction-terms-in-r-2/","timestamp":"2014-04-18T08:07:03Z","content_type":null,"content_length":"47844","record_id":"<urn:uuid:f68ad15b-c027-47bd-a731-6d091a57e816>","cc-path":"CC-MAIN-2014-15/segments/1397609533121.28/warc/CC-MAIN-20140416005213-00068-ip-10-147-4-33.ec2.internal.warc.gz"}
Circular Motion, Gravity, Muddy Wheel
You're actually looking for a ≥ (or ≤ depending on how you set things up), so finding the = is good enough. That is, you're looking for the first point at which the clump falls off.
That assumes the force is initially less. I have a couple of concerns with this problem.
In general, the adhesive force cannot be purely radial. The resultant has to be radial, and the force of gravity is not, so the adhesion must supply a tangential force.
The adhesive force will be tested most when gravity opposes it. For the question to make sense, the mud should first appear high up on the wheel. The only reason a lot of mud can be thrown up in
practice is that wet mud adheres very well to begin with but loses its grip as the mud deforms. | {"url":"http://www.physicsforums.com/showthread.php?p=4269219","timestamp":"2014-04-17T00:59:31Z","content_type":null,"content_length":"65740","record_id":"<urn:uuid:123343ed-018c-4d2d-a183-944bc6f80634>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00105-ip-10-147-4-33.ec2.internal.warc.gz"} |
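The force balance being discussed can be written out explicitly. As a sketch (treating the clump as a point mass \(m\) on a rim of radius \(r\), with \(\theta\) its angular position measured from the lowest point of the wheel and \(\omega\) the wheel's angular speed), the adhesion must satisfy, radially and tangentially,

\[
F_{\text{adh,radial}} \ge m\omega^2 r + mg\cos\theta, \qquad F_{\text{adh,tangential}} = mg\sin\theta,
\]

so the radial requirement is tightest at \(\theta = 0\), where gravity points fully outward along the rim, and the nonzero tangential component confirms the point above that the adhesive force cannot be purely radial.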
One-day questions
You only have one day to answer 10 questions. Tomorrow, I will post another 10, even though there are no replies to the questions.
Let's start!
Last edited by julianthemath (2012-12-18 20:25:16)
Re: One-day questions
In mathematics, you don't understand things. You just get used to them.
I have the result, but I do not yet know how to get it.
All physicists, and a good many quite respectable mathematicians are contemptuous about proof.
Re: One-day questions
Please specify.
1) Who is Godzilla?
2) There are choices already in number 7 (maybe).
Re: One-day questions
I am not seeing any choices for 7.
In mathematics, you don't understand things. You just get used to them.
I have the result, but I do not yet know how to get it.
All physicists, and a good many quite respectable mathematicians are contemptuous about proof.
Re: One-day questions
I can not believe that there is a whole generation that does not know who Godzilla is!
In mathematics, you don't understand things. You just get used to them.
I have the result, but I do not yet know how to get it.
All physicists, and a good many quite respectable mathematicians are contemptuous about proof.
Re: One-day questions
11. Where is julianthemath (where am I (country))?
12. What is next?
1 4 9 16 __
13. What is next?
961 900 841 784 729 676 ____
14. What is my favorite subject?
15. What is the population of the Philippines (in estimated hundred millions)?
16. Is 1234567890 a perfect square?
17. Is 999,999,999,999,999,999,999,999,999,999,999,999,999,999,999,999,999,999 prime?
18. How much is a million pennies?
19. Who is julianthemath?
20. What is my favorite color?
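The numeric questions in this batch (12, 13, and 16) can be checked directly; both sequences are just consecutive squares, ascending and descending:

```python
import math

ascending = [n * n for n in range(1, 6)]          # 1 4 9 16 25 -> Q12 answer: 25
descending = [n * n for n in range(31, 24, -1)]   # 961 900 841 784 729 676 625 -> Q13 answer: 625
root = math.isqrt(1234567890)                     # Q16: is 1234567890 a perfect square?
is_square = (root * root == 1234567890)           # False
```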
Re: One-day questions
In mathematics, you don't understand things. You just get used to them.
I have the result, but I do not yet know how to get it.
All physicists, and a good many quite respectable mathematicians are contemptuous about proof.
Re: One-day questions
Hi julianthemath;
How did I do?
In mathematics, you don't understand things. You just get used to them.
I have the result, but I do not yet know how to get it.
All physicists, and a good many quite respectable mathematicians are contemptuous about proof.
Re: One-day questions
Maybe around 9 or 8...
Re: One-day questions
The last two really made me think.
In mathematics, you don't understand things. You just get used to them.
I have the result, but I do not yet know how to get it.
All physicists, and a good many quite respectable mathematicians are contemptuous about proof.
Re: One-day questions
21. How much is 100 nickels?
22. What is 1.23 x 4.56?
23. Is this correct? 1*2=3
24. Where is Philippines?
25. Where is Davao?
26. Do I have a YouTube account?
27. What topic has the largest audience (most number of replies) among all my topics?
28. Who is ganesh?
29. Where does julianthemath live?
30. Is this correct? 1 4 5 6 9 16 25 100 455 636
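Questions 21 and 22 are quick arithmetic; working in cents avoids floating-point surprises for the money question:

```python
cents = 100 * 5                     # Q21: 100 nickels = 500 cents = $5.00
product = round(1.23 * 4.56, 4)     # Q22: 1.23 x 4.56 = 5.6088
```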
Re: One-day questions
In mathematics, you don't understand things. You just get used to them.
I have the result, but I do not yet know how to get it.
All physicists, and a good many quite respectable mathematicians are contemptuous about proof.
Re: One-day questions
21. Yup
22. Yup
23. Nope
24. Yup
25. In the Philippines.
26. Yes
27. Add 99 more and post it forever. Bobbym, among all my topics only. Not among all groups.
28. A moderator.
29. Philippines
30. Nope.
Re: One-day questions
I don't exactly have all that stuff memorized.
In mathematics, you don't understand things. You just get used to them.
I have the result, but I do not yet know how to get it.
All physicists, and a good many quite respectable mathematicians are contemptuous about proof.
Re: One-day questions
Anybody ask 10 questions? Anybody can ask besides me,
Re: One-day questions
You are doing fine. We will keep you as the question asker.
In mathematics, you don't understand things. You just get used to them.
I have the result, but I do not yet know how to get it.
All physicists, and a good many quite respectable mathematicians are contemptuous about proof.
Re: One-day questions
I will just decrease the questionnaire to 2 questions so it is easy for me to ask.
31) In math, is 3! said as "loud 3" or "3 factorial"?
32) How many posts does this topic have? (Include your post, too)
Re: One-day questions
In mathematics, you don't understand things. You just get used to them.
I have the result, but I do not yet know how to get it.
All physicists, and a good many quite respectable mathematicians are contemptuous about proof.
Re: One-day questions
Sorry for the long wait.
Official answers:
31. 3 factorial
32. 22 (as of now)
33) How many hide tags does this have?
34) Solve this: !@#$%^&*(), and no letters.
Hint: It is a 10-digit number.
Last edited by julianthemath (2012-12-30 21:24:16)
Re: One-day questions
33) 16
34) What do you want to do with that one?
In mathematics, you don't understand things. You just get used to them.
I have the result, but I do not yet know how to get it.
All physicists, and a good many quite respectable mathematicians are contemptuous about proof.
Re: One-day questions
33. 16 hide tags
34. I'll clarify first:
I want you to crack the code, but there is no key. It is like I am swearing, but no letters are allowed.
And it is a 10-digit number.
So, it is 1,234,567,890.
Re: One-day questions
Couldn't it have been 8721435960?
In mathematics, you don't understand things. You just get used to them.
I have the result, but I do not yet know how to get it.
All physicists, and a good many quite respectable mathematicians are contemptuous about proof.
Re: One-day questions
Explanation: Those symbols !@#$%^&*() are activated when you press the Shift key. But, when the Shift key is not pressed, it is like this: 1234567890 | {"url":"http://www.mathisfunforum.com/viewtopic.php?pid=244581","timestamp":"2014-04-18T00:14:37Z","content_type":null,"content_length":"41650","record_id":"<urn:uuid:c217f5e6-c955-40d6-8dc4-f65659637f40>","cc-path":"CC-MAIN-2014-15/segments/1397609532374.24/warc/CC-MAIN-20140416005212-00529-ip-10-147-4-33.ec2.internal.warc.gz"} |
San Carlos, CA Trigonometry Tutor
Find a San Carlos, CA Trigonometry Tutor
...I get great happiness out of seeing a student progress. For me, the greatest pleasure comes when I have explained a concept three different ways and finally the light goes on. The student says
"I get it!" or "That's easy, why didn't you say that the first time."Algebra 1 is the first big hurdle to a technical career.
32 Subjects: including trigonometry, reading, calculus, English
...I administer the ISEE for students wishing to enter private school in the Bay Area, so I am very familiar with the test. I also tutor and teach how to take the test, giving students tips on
how to approach the test and familiarize them with test questions. In addition, I am credentialed to teach the subjects covered in the test because my credential is in math, English and science.
35 Subjects: including trigonometry, chemistry, reading, English
...I earned my PhD in Chemistry from New York University. As the recipient of first prize at the National Chemistry Olympiad and second prize at the National Mathematics Olympiad, I consider
myself an expert problem solver, and I can help my students to improve the problem solving skills. During m...
15 Subjects: including trigonometry, chemistry, calculus, Chinese
...The subjects have ranged from pre-algebra to Calculus II. Along with taking my classes, I am teaching Algebra 1 this Fall at CSUEB. A lot of people know Math, a lot of people can tutor Math,
but for me it's about the individual needing help.
8 Subjects: including trigonometry, reading, algebra 1, algebra 2
...It helps for many upcoming years. "Great Tutor" - Margaret P. Oakland, CA Andreas is a fabulous tutor. He has assisted our 8th grade child in knowledge, understanding the concepts, and our
child's grade has risen to an "A" from "C".
41 Subjects: including trigonometry, calculus, geometry, statistics
Related San Carlos, CA Tutors
San Carlos, CA Accounting Tutors
San Carlos, CA ACT Tutors
San Carlos, CA Algebra Tutors
San Carlos, CA Algebra 2 Tutors
San Carlos, CA Calculus Tutors
San Carlos, CA Geometry Tutors
San Carlos, CA Math Tutors
San Carlos, CA Prealgebra Tutors
San Carlos, CA Precalculus Tutors
San Carlos, CA SAT Tutors
San Carlos, CA SAT Math Tutors
San Carlos, CA Science Tutors
San Carlos, CA Statistics Tutors
San Carlos, CA Trigonometry Tutors | {"url":"http://www.purplemath.com/san_carlos_ca_trigonometry_tutors.php","timestamp":"2014-04-17T11:11:56Z","content_type":null,"content_length":"24425","record_id":"<urn:uuid:5a763390-cf64-4c6e-aa1c-139a726f7258>","cc-path":"CC-MAIN-2014-15/segments/1397609527423.39/warc/CC-MAIN-20140416005207-00146-ip-10-147-4-33.ec2.internal.warc.gz"} |
The semantics of Clear, a specification language
, 1993
"... The Larch family of languages is used to specify program interfaces in a two-tiered definitional style. Each Larch specification has components written in two languages: one that is designed for
a specific programming language and another that is independent of any programming language. The former a ..."
Cited by 27 (1 self)
Add to MetaCart
The Larch family of languages is used to specify program interfaces in a two-tiered definitional style. Each Larch specification has components written in two languages: one that is designed for a
specific programming language and another that is independent of any programming language. The former are the Larch interface languages, and the latter is the Larch Shared Language (LSL). Version 2.3
of LSL is similar to previous versions, but contains a number of refinements based on experience writing specifications and developing tools to support the specification process. This report contains
an informal introduction and a self-contained language definition. This report supersedes Pieces II and III of Larch in Five Easy Pieces [Guttag, Horning, and Wing 1985b] and "Report on the Larch
Shared Language" [Guttag and Horning 1986]. iii Report on the Larch Shared Language, Version 2.3 Chapter 1: Overview 1.1. Introduction 1.2. Simple Algebraic Specifications 1.3. Getting Richer
Theories 1.4...
- Helsinki University of Technology, 1995
"... : The structure of High-level nets is studied from an algebraic and a logical point of view using Algebraic nets as an example. First the category of Algebraic nets is defined and the semantics
given through an unfolding construction. Other kinds of Highlevel net formalisms are then presented. It is ..."
Cited by 10 (0 self)
Add to MetaCart
: The structure of High-level nets is studied from an algebraic and a logical point of view using Algebraic nets as an example. First the category of Algebraic nets is defined and the semantics given
through an unfolding construction. Other kinds of Highlevel net formalisms are then presented. It is shown that nets given in these formalisms can be transformed into equivalent Algebraic nets. Then
the semantics of nets in terms of universal constructions is discussed. A definition of Algebraic nets in terms of structured transition systems is proposed. The semantics of the Algebraic net is
then given as a free completion of this structured transition system to a category. As an alternative also a sheaf semantics of nets is examined. Here the semantics of the net arises as a limit of a
diagram of sheaves. Next Algebraic nets are characterized as encodings of special morphisms called foldings. Each algebraic net gives rise to a surjective morphism between Petri nets and conversely
each sur...
- In Proc. MFCS'93, volume 711 of LNCS , 1993
"... This paper presents a comparison between algebraic specifications-in-the-large and a type theoretical formulation of modular specifications, called deliverables. It is shown that the laws of
module algebra can be translated to laws about deliverables which can be proved correct in type theory. The a ..."
Cited by 6 (1 self)
Add to MetaCart
This paper presents a comparison between algebraic specifications-in-the-large and a type theoretical formulation of modular specifications, called deliverables. It is shown that the laws of module
algebra can be translated to laws about deliverables which can be proved correct in type theory. The adequacy of the Extended Calculus of Constructions as a possible implementation of type theory is
discussed and it is explained how the reformulation of the laws is influenced by this choice. | {"url":"http://citeseerx.ist.psu.edu/showciting?cid=1917516","timestamp":"2014-04-17T15:54:59Z","content_type":null,"content_length":"18246","record_id":"<urn:uuid:6ea00e3b-619c-45b9-aac0-fb2586b45659>","cc-path":"CC-MAIN-2014-15/segments/1398223202457.0/warc/CC-MAIN-20140423032002-00373-ip-10-147-4-33.ec2.internal.warc.gz"} |
SYLLABUS & Structure of GATE
SYLLABUS & Structure of GATE Examination
The GATE is held every year on the second Sunday of February, across the country in over 100 cities. At present nearly 60,000 students write GATE every year. Candidates can choose a single paper of 3
hours duration to appear in GATE from the discipline papers shown in the following Table.
The GATE 2006 examination consists of a single paper of 3 hours duration and of 150 marks. GATE 2006 will have the following twenty one papers:
│ PAPER │ CODE │
│ AG │ Agricultural Engineering │
│ AR │ Architecture and Planning │
│ CE │ Civil Engineering │
│ CH │ Chemical Engineering │
│ CS │ Computer Science & Engineering │
│ CY │ Chemistry │
│ EC │ Electronics & Comm Engineering │
│ EE │ Electrical Engineering │
│ GG │ Geology & Geophysics │
│ IN │ Instrumentation Engineering │
│ IT │ Information Technology │
│ MA │ Mathematics │
│ ME │ Mechanical Engineering │
│ MN │ Mining Engineering │
│ MT │ Metallurgical Engineering │
│ PH │ Physics │
│ PI │ Production & Industrial Engg. │
│ PY │ Pharmaceutical Sciences │
│ TF │ Textile Engg. & Fiber Science │
│ XE │ Engineering Sciences │
│ XL │ Life Sciences │
Papers XE and XL are of general nature and will comprise the following Sections:
Engineering Sciences (XE):
│ Section │ Code │
│ Engg. Mathematics (Compulsory) │ A │
│ Computational Science │ B │
│ Electrical Sciences │ C │
│ Fluid Mechanics │ D │
│ Materials Science │ E │
│ Solid Mechanics │ F │
│ Thermodynamics │ G │
Life Sciences (XL):
│ Section │ Code │
│ Chemistry (Compulsory) │ H │
│ Biochemistry │ I │
│ Biotechnology │ J │
│ Botany │ K │
│ Microbiology │ L │
│ Zoology │ M │
Candidates appearing in XE or XL papers are required to answer three sections. Sections (A) and (H) are compulsory in XE and XL respectively. Candidates can choose any two out of the remaining
sections mentioned against the respective papers.
• The choice of the appropriate paper is the responsibility of the candidate. However, some guidelines are suggested below:
1. Candidates are expected to appear in a paper appropriate to the discipline of their qualifying degree.
2. However, the candidates are free to choose any paper according to their admission plan, keeping in mind the eligibility criteria of the admitting institute.
• The question paper of GATE 2006 will be of fully objective type.
1. Candidates have to mark the correct choice by darkening the appropriate bubble against each question on an Objective Response Sheet (ORS).
2. There will be 'negative' marking for wrong answers. The deduction will be 25% of the marks allotted.
3. Here is the Structure of Question Papers
SYLLABUS of GATE
Linear Algebra: Matrices and Determinants, Systems of linear equations, Eigen values and eigen vectors.
Calculus: Limit, continuity and differentiability; Partial Derivatives; Maxima and minima; Sequences and series; Test for convergence; Fourier series.
Vector Calculus: Gradient; Divergence and Curl; Line; surface and volume integrals; Stokes, Gauss and Green's theorems.
Differential Equations: Linear and non-linear first order ODEs; Higher order linear ODEs with constant coefficients; Cauchy's and Euler's equations; Laplace transforms; PDEs - Laplace, heat and
wave equations.
Probability and Statistics: Mean, median, mode and standard deviation; Random variables; Poisson, normal and binomial distributions; Correlation and regression analysis.
Numerical Methods: Solutions of linear and non-linear algebraic equations; integration by trapezoidal and Simpson's rules; single and multi-step methods for differential equations.
Sources of power on the farm-human, animal, mechanical, electrical, wind, solar and biomass; bio-fuels; design and selection of machine elements - gears, pulleys, chains and sprockets and
belts; overload safety devices used in farm machinery; measurement of force, torque, speed, displacement and acceleration on machine elements.
Soil tillage; forces acting on a tillage tool; hitch systems and hitching of tillage implements; mechanics of animal traction; functional requirements, principles of working, construction and
operation of manual, animal and power operated equipment for tillage, sowing, planting, fertilizer application, inter-cultivation, spraying, mowing, chaff cutting, harvesting, threshing and
transport; testing of agricultural machinery and equipment; calculation of performance parameters -field capacity, efficiency, application rate and losses; cost analysis of implements and
Thermodynamic principles of I.C. engines; I.C. engine cycles; engine components; fuels and combustion; lubricants and their properties; I.C. engine systems - fuel, cooling, lubrication,
ignition, electrical, intake and exhaust; selection, operation, maintenance and repair of I.C. engines; power efficiencies and measurement; calculation of power, torque, fuel consumption, heat
load and power losses.
Tractors and power tillers - type, selection, maintenance and repair; tractor clutches and brakes; power transmission systems - gear trains, differential, final drives and power take-off;
mechanics of tractor chassis; traction theory; three point hitches- free link and restrained link operations; mechanical steering and hydraulic control systems used in tractors; human
engineering and safety in tractor design; tractor tests and performance.
Ideal and real fluids, properties of fluids; hydrostatic pressure and its measurement; hydrostatic forces on plane and curved surface; continuity equation; Bernoulli's theorem; laminar and
turbulent flow in pipes, Darcy- Weisbach and Hazen-Williams equations, Moody's diagram; flow through orifices and notches; flow in open channels.
Engineering properties of soils; fundamental definitions and relationships; index properties of soils; permeability and seepage analysis; shear strength, Mohr's circle of stress, active and
passive earth pressures; stability of slopes.
Hydrological cycle; meteorological parameters and their measurement, analysis of precipitation data; abstraction from precipitation; runoff; hydrograph analysis, unit hydrograph theory and
application; stream flow measurement; flood routing, hydrological reservoir and channel routing.
Measurement of distance and area; chain surveying, methods of traversing; measurement of angles and bearings, plane table surveying; types of levelling; contouring; instruments for surveying
and levelling; computation of earth work.
Mechanics of soil erosion, soil erosion types; wind and water erosion; factors affecting erosion; soil loss estimation; biological and engineering measures to control erosion; terraces and
bunds; vegetative waterways; gully control structures, drop, drop inlet and chute spillways; earthen dams; water harvesting structures, farm ponds, watershed management.
Soil-water-plant relationship, water requirement of crops; consumptive use and evapotranspiration; irrigation scheduling; irrigation efficiencies; design of irrigation channels; measurement of
soil moisture, irrigation water and infiltration; surface, sprinkler and drip methods of irrigation; design and evaluation of irrigation methods.
Drainage coefficient; planning, design and layout of surface and sub-surface drainage systems; leaching requirement and salinity control; irrigation and drainage water quality.
Groundwater occurrence confined and unconfined aquifers, evaluation of aquifer properties; well hydraulics; groundwater recharge.
Classification of pumps; pump characteristics; pump selection and installation.
Steady state heat transfer in conduction, convection and radiation; transient heat transfer in simple geometry; condensation and boiling heat transfer; working principles of heat exchangers;
diffusive and convective mass transfer; simultaneous heat and mass transfer in agricultural processing operations.
Material and energy balances in food processing systems; water activity, sorption and desorption isotherms; centrifugal separation of solids, liquids and gases; kinetics of microbial death -
pasteurization and sterilization of liquid foods; preservation of food by cooling and freezing; refrigeration and cold storage basics and applications; psychrometry - properties of air-vapour
mixture; concentration and drying of liquid foods - evaporators, tray, drum and spray dryers.
Mechanics and energy requirement in size reduction of granular solids; particle size analysis for comminuted solids; size separation by screening; fluidization of granular solids; pneumatic, bucket, screw and belt conveying; cleaning and grading; effectiveness of grain cleaners.
Hydrothermal treatment, drying and milling of cereals, pulses and oilseeds; Processing of seeds, spices, fruits and vegetables; By-product utilization from processing industries.
Controlled and modified atmosphere storage; Perishable food storage, godowns, bins and grain silos.
AR - ARCHITECTURE AND PLANNING
City planning: Evolution of cities; principles of city planning; types of cities & new towns; planning regulations and building byelaws; eco-city concept; sustainable development.
Housing: Concept of housing; neighbourhood concept; site planning principles; housing typology; housing standards; housing infrastructure; housing policies, finance and management; housing
programs in India; self help housing.
Landscape Design: Principles of landscape design and site planning; history of landscape styles; landscape elements and materials; plant characteristics & planting design; environmental
considerations in landscape planning.
Computer Aided Design: Application of computers in architecture and planning; understanding elements of hardware and software; computer graphics; programming languages – C and Visual Basic and
usage of packages such as AutoCAD, 3D-Studio, 3D Max.
Environmental Studies in Building Science: Components of Ecosystem; ecological principles concerning environment; climate responsive design; energy efficient building design; thermal comfort;
solar architecture; principles of lighting and styles for illumination; basic principles of architectural acoustics; environment pollution, their control & abatement.
Visual and Urban Design: Principles of visual composition; proportion, scale, rhythm, symmetry, harmony, datum, balance, form, colour, texture; sense of place and space, division of space;
barrier free design; focal point, vista, imageability, visual survey, figure-ground relationship.
History of Architecture: Indian – Indus valley, Vedic, Buddhist, Indo-Aryan, Dravidian and Mughal periods; European – Egyptian, Greek, Roman, medieval and renaissance periods- construction and
architectural styles; vernacular and traditional architecture.
Development of Contemporary Architecture: Architectural developments and impacts on society since industrial revolution; influence of modern art on architecture; works of national and
international architects; art nouveau, eclecticism, international styles, post-modernism, deconstruction in architecture.
Building Services: Water supply, sewerage and drainage systems; sanitary fittings and fixtures; plumbing systems, principles of internal & external drainage systems, principles of
electrification of buildings, intelligent buildings; elevators & escalators, their standards and uses; air-conditioning systems; fire fighting systems, building safety and security systems.
Building Construction and Management: Building construction techniques, methods and details; building systems and prefabrication of building elements; principles of modular coordination;
estimation, specification, valuation, professional practice; project management techniques e.g., PERT, CPM etc;
Materials and Structural Systems: Behavioural characteristics of all types of building materials e.g. mud, timber, bamboo, brick, concrete, steel, glass, FRP, different polymers, composites;
principles of strength of materials; design of structural elements in wood, steel and RCC; elastic and limit state design; complex structural systems; principles of pre-stressing; tall
buildings; principles of disaster resistant structures.
Planning Theory: Regional planning; settlement system planning; history of human settlements; growth of cities & metropolises; principles of Ekistics; rural-urban migration; urban conservation;
urban renewal; Five-year plan; structural and sectoral plan.
Techniques of Planning: Planning survey techniques; preparation of urban and regional structure plans, development plans, action plans; site planning principles and design; statistical methods
of data analysis; application of G.I.S and remote sensing techniques in urban and regional planning; decision making models.
Traffic and Transportation Planning: Principles of traffic engineering and transportation planning; traffic survey methods; design of roads, intersections, grade separators and parking areas;
hierarchy of roads and levels of services; traffic and transport management in urban areas, intelligent transportation system; mass transportation planning; para-transits and other modes of
transportation, pedestrian & slow moving traffic planning.
Infrastructure, Services and Amenities: Principles of water supply and sanitation systems; water treatment; solid waste disposal systems; waste treatment, recycle & reuse; urban rainwater
harvesting; power supply and communication systems --- network, design & guidelines; demography related standards at various levels of the settlements for health, education, recreation,
religious & public-semi public facilities.
Development Administration and Management: Planning laws; development control and zoning regulations; laws relating to land acquisition; development enforcements, urban land ceiling; land
management techniques; planning and municipal administration; disaster mitigation management; 73rd & 74th Constitutional amendments; valuation & taxation; revenue resources and fiscal
management; public participation and role of NGO & CBO; Institutional networking & capacity building.
CE - CIVIL ENGINEERING
Linear Algebra: Matrix algebra, Systems of linear equations, Eigenvalues and eigenvectors.
Calculus: Functions of single variable, Limit, continuity and differentiability, Mean value theorems, Evaluation of definite and improper integrals, Partial derivatives, Total derivative,
Maxima and minima, Gradient, Divergence and Curl, Vector identities, Directional derivatives, Line, Surface and Volume integrals, Stokes, Gauss and Green’s theorems.
Differential equations: First order equations (linear and nonlinear), Higher order linear differential equations with constant coefficients, Cauchy’s and Euler’s equations, Initial and boundary
value problems, Laplace transforms, Solutions of one dimensional heat and wave equations and Laplace equation.
Complex variables: Analytic functions, Cauchy’s integral theorem, Taylor and Laurent series.
Probability and Statistics: Definitions of probability and sampling theorems, Conditional probability, Mean, median, mode and standard deviation, Random variables, Poisson, Normal and Binomial distributions.
Numerical Methods: Numerical solutions of linear and non-linear algebraic equations; integration by trapezoidal and Simpson's rule; single and multi-step methods for differential equations.
Mechanics: Bending moment and shear force in statically determinate beams. Simple stress and strain relationship: Stress and strain in two dimensions, principal stresses, stress transformation,
Mohr’s circle. Simple bending theory, flexural and shear stresses, unsymmetrical bending, shear centre. Thin walled pressure vessels, uniform torsion, buckling of column, combined and direct
bending stresses.
Structural Analysis: Analysis of statically determinate trusses, arches, beams, cables and frames, displacements in statically determinate structures and analysis of statically indeterminate
structures by force/ energy methods, analysis by displacement methods (slope deflection and moment distribution methods), influence lines for determinate and indeterminate structures. Basic
concepts of matrix methods of structural analysis.
Concrete Structures: Concrete Technology- properties of concrete, basics of mix design. Concrete design- basic working stress and limit state design concepts, analysis of ultimate load capacity
and design of members subjected to flexure, shear, compression and torsion by limit state methods. Basic elements of prestressed concrete, analysis of beam sections at transfer and service loads.
Steel Structures: Analysis and design of tension and compression members, beams and beam- columns, column bases. Connections- simple and eccentric, beam–column connections, plate girders and
trusses. Plastic analysis of beams and frames.
Soil Mechanics: Origin of soils, soil classification, three - phase system, fundamental definitions, relationship and interrelationships, permeability and seepage, effective stress principle,
consolidation, compaction, shear strength.
Foundation Engineering: Sub-surface investigations- scope, drilling bore holes, sampling, penetration tests, plate load test. Earth pressure theories, effect of water table, layered soils.
Stability of slopes- infinite slopes, finite slopes. Foundation types- foundation design requirements. Shallow foundations- bearing capacity, effect of shape, water table and other factors,
stress distribution, settlement analysis in sands and clays. Deep foundations – pile types, dynamic and static formulae, load capacity of piles in sands and clays, negative skin friction.
WATER RESOURCES ENGINEERING
Fluid Mechanics and Hydraulics: Properties of fluids, principle of conservation of mass, momentum, energy and corresponding equations, potential flow, applications of momentum and Bernoulli’s
equation, laminar and turbulent flow, flow in pipes, pipe networks. Concept of boundary layer and its growth. Uniform flow, critical flow and gradually varied flow in channels, specific energy
concept, hydraulic jump. Forces on immersed bodies, flow measurements in channels, tanks and pipes. Dimensional analysis and hydraulic modeling. Kinematics of flow, velocity triangles and
specific speed of pumps and turbines.
Hydrology: Hydrologic cycle, rainfall, evaporation, infiltration, stage discharge relationships, unit hydrographs, flood estimation, reservoir capacity, reservoir and channel routing. Well hydraulics.
Irrigation: Duty, delta, estimation of evapo-transpiration. Crop water requirements. Design of: lined and unlined canals, waterways, head works, gravity dams and spillways. Design of weirs on
permeable foundation. Types of irrigation system, irrigation methods. Water logging and drainage, sodic soils.
Water requirements: Quality standards, basic unit processes and operations for water treatment. Drinking water standards, water requirements, basic unit operations and unit processes for
surface water treatment, distribution of water. Sewage and sewerage treatment, quantity and characteristics of wastewater. Primary, secondary and tertiary treatment of wastewater, sludge
disposal, effluent discharge standards. Domestic wastewater treatment, quantity and characteristics of domestic wastewater, primary and secondary treatment. Unit operations and unit processes of
domestic wastewater, sludge disposal.
Air Pollution: Types of pollutants, their sources and impacts, air pollution meteorology, air pollution control, air quality standards and limits.
Municipal Solid Wastes: Characteristics, generation, collection and transportation of solid wastes, engineered systems for solid waste management (reuse/ recycle, energy recovery, treatment and disposal).
Noise Pollution: Impacts of noise, permissible limits of noise pollution, measurement of noise and control of noise pollution.
Highway Planning: Geometric design of highways, testing and specifications of paving materials, design of flexible and rigid pavements.
Traffic Engineering: Traffic characteristics, theory of traffic flow, intersection design, traffic signs and signal design, highway capacity.
Surveying: Importance of surveying, principles and classifications, mapping concepts, coordinate system, map projections, measurements of distance and directions, leveling, theodolite traversing, plane
table surveying, errors and adjustments, curves.
CH - CHEMICAL ENGINEERING
Linear Algebra: Matrix algebra, Systems of linear equations, Eigenvalues and eigenvectors.
Calculus: Functions of single variable, Limit, continuity and differentiability, Mean value theorems, Evaluation of definite and improper integrals, Partial derivatives, Total derivative,
Maxima and minima, Gradient, Divergence and Curl, Vector identities, Directional derivatives, Line, Surface and Volume integrals, Stokes, Gauss and Green’s theorems.
Differential equations: First order equations (linear and nonlinear), Higher order linear differential equations with constant coefficients, Cauchy’s and Euler’s equations, Initial and boundary
value problems, Laplace transforms, Solutions of one dimensional heat and wave equations and Laplace equation.
Complex variables: Analytic functions, Cauchy’s integral theorem, Taylor and Laurent series, Residue theorem.
Probability and Statistics: Definitions of probability and sampling theorems, Conditional probability, Mean, median, mode and standard deviation, Random variables, Poisson, Normal and Binomial distributions.
Numerical Methods: Numerical solutions of linear and non-linear algebraic equations; integration by trapezoidal and Simpson’s rule; single and multi-step methods for differential equations.
Process Calculations and Thermodynamics: Laws of conservation of mass and energy; use of tie components; recycle, bypass and purge calculations; degree of freedom analysis. First and Second
laws of thermodynamics. First law application to close and open systems. Second law and Entropy Thermodynamic properties of pure substances: equation of state and departure function, properties
of mixtures: partial molar properties, fugacity, excess properties and activity coefficients; phase equilibria: predicting VLE of systems; chemical reaction equilibria.
Fluid Mechanics and Mechanical Operations: Fluid statics, Newtonian and non-Newtonian fluids, Bernoulli equation, Macroscopic friction factors, energy balance, dimensional analysis, shell
balances, flow through pipeline systems, flow meters, pumps and compressors, packed and fluidized beds, elementary boundary layer theory, size reduction and size separation; free and hindered
settling; centrifuge and cyclones; thickening and classification, filtration, mixing and agitation; conveying of solids.
Heat Transfer: Conduction, convection and radiation, heat transfer coefficients, steady and unsteady heat conduction, boiling, condensation and evaporation; types of heat exchangers and
evaporators and their design.
Mass Transfer: Fick’s laws, molecular diffusion in fluids, mass transfer coefficients, film, penetration and surface renewal theories; momentum, heat and mass transfer analogies; stagewise and
continuous contacting and stage efficiencies; HTU & NTU concepts design and operation of equipment for distillation, absorption, leaching, liquid-liquid extraction, drying, humidification,
dehumidification and adsorption.
Chemical Reaction Engineering: Theories of reaction rates; kinetics of homogeneous reactions, interpretation of kinetic data, single and multiple reactions in ideal reactors, non-ideal
reactors; residence time distribution, single parameter model; non-isothermal reactors; kinetics of heterogeneous catalytic reactions; diffusion effects in catalysis.
Instrumentation and Process Control: Measurement of process variables; sensors, transducers and their dynamics, transfer functions and dynamic responses of simple systems, process reaction
curve, controller modes (P, PI, and PID); control valves; analysis of closed loop systems including stability, frequency response and controller tuning, cascade, feed forward control.
Plant Design and Economics: Process design and sizing of chemical engineering equipment such as compressors, heat exchangers, multistage contactors; principles of process economics and cost
estimation including total annualized cost, cost indexes, rate of return, payback period, discounted cash flow, optimization in design.
Chemical Technology: Inorganic chemical industries; sulfuric acid, NaOH, fertilizers (Ammonia, Urea, SSP and TSP); natural products industries (Pulp and Paper, Sugar, Oil, and Fats); petroleum
refining and petrochemicals; polymerization industries; polyethylene, polypropylene, PVC and polyester synthetic fibers.
CS - COMPUTER SCIENCE AND ENGINEERING
Mathematical Logic: Propositional Logic; First Order Logic.
Probability: Conditional Probability; Mean, Median, Mode and Standard Deviation; Random Variables; Distributions; uniform, normal, exponential, Poisson, Binomial.
Set Theory & Algebra: Sets; Relations; Functions; Groups; Partial Orders; Lattice; Boolean Algebra.
Combinatorics: Permutations; Combinations; Counting; Summation; generating functions; recurrence relations; asymptotics.
Graph Theory: Connectivity; spanning trees; Cut vertices & edges; covering; matching; independent sets; Colouring; Planarity; Isomorphism.
Linear Algebra: Algebra of matrices, determinants, systems of linear equations, eigenvalues and eigenvectors.
Numerical Methods: LU decomposition for systems of linear equations; numerical solutions of non-linear algebraic equations by Secant, Bisection and Newton-Raphson Methods; Numerical integration
by trapezoidal and Simpson’s rules.
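As a minimal sketch of the integration rules named in the Numerical Methods item above (the composite forms and the function names are illustrative assumptions):

```python
def trapezoid(f, a, b, n):
    """Composite trapezoidal rule over [a, b] with n subintervals."""
    h = (b - a) / n
    interior = sum(f(a + i * h) for i in range(1, n))
    return h * (0.5 * (f(a) + f(b)) + interior)

def simpson(f, a, b, n):
    """Composite Simpson's rule over [a, b]; n must be even."""
    if n % 2:
        raise ValueError("n must be even")
    h = (b - a) / n
    s = f(a) + f(b)
    # Interior points alternate weights 4, 2, 4, 2, ...
    s += sum((4 if i % 2 else 2) * f(a + i * h) for i in range(1, n))
    return s * h / 3

# Integrating x^2 on [0, 1]: the exact value is 1/3.
# Simpson's rule is exact for polynomials up to degree three,
# so it recovers 1/3 to machine precision even for small n.
```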
Calculus: Limit, Continuity & differentiability, Mean value Theorems, Theorems of integral calculus, evaluation of definite & improper integrals, Partial derivatives, Total derivatives, maxima
& minima.
Theory of Computation: Regular languages and finite automata, Context free languages and Push-down automata, Recursively enumerable sets and Turing machines, Undecidability; NP-completeness.
Digital Logic: Logic functions, Minimization, Design and synthesis of combinational and sequential circuits; Number representation and computer arithmetic (fixed and floating point).
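The fixed-point number-representation topic listed above can be illustrated with two's-complement encoding; a small sketch with an illustrative helper name:

```python
def to_twos_complement(value, bits):
    """Bit string of a signed integer in two's-complement form.
    Negative values are offset by 2**bits before formatting."""
    if not -(1 << (bits - 1)) <= value < (1 << (bits - 1)):
        raise ValueError("value out of range for the given width")
    if value < 0:
        value += 1 << bits
    return format(value, f"0{bits}b")

# -5 in 8 bits: 256 - 5 = 251 -> '11111011'
```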
Computer Organization and Architecture: Machine instructions and addressing modes, ALU and data-path, CPU control design, Memory interface, I/O interface (Interrupt and DMA mode), Instruction
pipelining, Cache and main memory, Secondary storage.
Programming and Data Structures: Programming in C; Functions, Recursion, Parameter passing, Scope, Binding; Abstract data types, Arrays, Stacks, Queues, Linked Lists, Trees, Binary search
trees, Binary heaps.
Algorithms: Analysis, Asymptotic notation, Notions of space and time complexity, Worst and average case analysis; Design: Greedy approach, Dynamic programming, Divide-and-conquer; Tree and
graph traversals, Connected components, Spanning trees, Shortest paths; Hashing, Sorting, Searching.
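The Searching topic in the Algorithms item above is classically illustrated by binary search, itself a divide-and-conquer scheme; this sketch assumes a sorted list:

```python
def binary_search(items, target):
    """Return the index of target in the sorted list items, or -1."""
    lo, hi = 0, len(items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2          # halve the remaining range
        if items[mid] == target:
            return mid
        if items[mid] < target:
            lo = mid + 1              # discard the left half
        else:
            hi = mid - 1              # discard the right half
    return -1
```

Each iteration halves the search range, giving the O(log n) worst case expected under the asymptotic-analysis topics above.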
Compiler Design: Lexical analysis, Parsing, Syntax directed translation, Runtime environments, Intermediate and target code generation, Basics of code optimization.
Operating System: Processes, Threads, Inter-process communication, Concurrency, Synchronization, Deadlock, CPU scheduling, Memory management and virtual memory, File systems, I/O systems,
Protection and security.
Databases: ER-model, Relational model (relational algebra, tuple calculus), Database design (integrity constraints, normal forms), Query languages (SQL), File structures (sequential files,
indexing, B and B+ trees), Transactions and concurrency control.
Computer Networks: ISO/OSI stack, LAN technologies (Ethernet, Token ring), Flow and error control techniques, Routing algorithms, Congestion control, TCP/UDP and sockets, IP(v4), Application
layer protocols (icmp, dns, smtp, pop, ftp, http); Basic concepts of hubs, switches, gateways, and routers.
CY - CHEMISTRY
Structure: Quantum theory: principles and techniques; applications to a particle in a box, harmonic oscillator, rigid rotor and hydrogen atom; valence bond and molecular orbital theories,
Hückel approximation; approximate techniques: variation and perturbation; symmetry, point groups; rotational, vibrational, electronic, NMR, and ESR spectroscopy.
Equilibrium: Kinetic theory of gases; First law of thermodynamics, heat, energy, and work; second law of thermodynamics and entropy; third law and absolute entropy; free energy; partial molar
quantities; ideal and non-ideal solutions; phase transformation: phase rule and phase diagrams - one, two, and three component systems; activity, activity coefficient, fugacity, and fugacity
coefficient; chemical equilibrium, response of chemical equilibrium to temperature and pressure; colligative properties; Debye-Hückel theory; thermodynamics of electrochemical cells; standard
electrode potentials: applications - corrosion and energy conversion; molecular partition function (translational, rotational, vibrational, and electronic).
Kinetics: Rates of chemical reactions, temperature dependence of chemical reactions; elementary, consecutive, and parallel reactions; steady state approximation; theories of reaction rates -
collision and transition state theory, relaxation kinetics, kinetics of photochemical reactions and free radical polymerization, homogeneous catalysis, adsorption isotherms and heterogeneous catalysis.
Main group elements: General characteristics, allotropes, structure and reactions of simple and industrially important compounds: boranes, carboranes, silicones, silicates, boron nitride,
borazines and phosphazenes. Hydrides, oxides and oxoacids of pnictogens (N, P), chalcogens (S, Se & Te) and halogens, xenon compounds, pseudo halogens and interhalogen compounds. Shapes of
molecules and hard- soft acid base concept. Structure and Bonding (VBT) of B, Al, Si, N, P, S, Cl compounds. Allotropes of carbon: graphite, diamond, C60. Synthesis and reactivity of inorganic
polymers of Si and P.
Transition Elements: General characteristics of d and f block elements; coordination chemistry: structure and isomerism, stability, theories of metal- ligand bonding (CFT and LFT), mechanisms
of substitution and electron transfer reactions of coordination complexes. Electronic spectra and magnetic properties of transition metal complexes, lanthanides and actinides. Metal carbonyls,
metal- metal bonds and metal atom clusters, metallocenes; transition metal complexes with bonds to hydrogen, alkyls, alkenes and arenes; metal carbenes; use of organometallic compounds as
catalysts in organic synthesis. Bioinorganic chemistry of Na, K, Mg, Ca, Fe, Co, Zn, Cu and Mo.
Solids: Crystal systems and lattices, miller planes, crystal packing, crystal defects; Bragg’s Law, ionic crystals, band theory, metals and semiconductors, Different structures of AX, AX2, ABX3
compounds, spinels.
Instrumental methods of analysis: Atomic absorption and emission spectroscopy including ICP-AES, UV- visible spectrophotometry, NMR, mass, Mossbauer spectroscopy (Fe and Sn), ESR spectroscopy,
chromatography including GC and HPLC and electro-analytical methods (Coulometry, cyclic voltammetry, polarography – amperometry, and ion selective electrodes).
Stereochemistry: Chirality of organic molecules with or without chiral centres. Specification of configuration in compounds having one or more stereogenic centres. Enantiotopic and
diastereotopic atoms, groups and faces. Stereoselective and stereospecific synthesis. Conformational analysis of acyclic and cyclic compounds. Geometrical isomerism. Configurational and
conformational effects on reactivity and selectivity/specificity.
Reaction mechanism: Methods of determining reaction mechanisms. Nucleophilic and electrophilic substitutions and additions to multiple bonds. Elimination reactions. Reactive intermediates-
carbocations, carbanions, carbenes, nitrenes, arynes, free radicals. Molecular rearrangements involving electron deficient atoms.
Organic synthesis: Synthesis, reactions, mechanisms and selectivity involving the following- alkenes, alkynes, arenes, alcohols, phenols, aldehydes, ketones, carboxylic acids and their
derivatives, halides, nitro compounds and amines. Use of compounds of Mg, Li, Cu, B and Si in organic synthesis. Concepts in multistep synthesis- retrosynthetic analysis, disconnections,
synthons, synthetic equivalents, reactivity umpolung, selectivity, protection and deprotection of functional groups.
Pericyclic reactions: Electrocyclic, cycloaddition and sigmatropic reactions. Orbital correlation, FMO and PMO treatments.
Photochemistry: Basic principles. Photochemistry of alkenes, carbonyl compounds, and arenes. Photooxidation and photoreduction. Di-π-methane rearrangement, Barton reaction.
Heterocyclic compounds: Structure, preparation, properties and reactions of furan, pyrrole, thiophene, pyridine, indole and their derivatives.
Biomolecules: Structure, properties and reactions of mono- and di-saccharides, physicochemical properties of amino acids, chemical synthesis of peptides, structural features of proteins,
nucleic acids, steroids, terpenoids, carotenoids, and alkaloids.
Spectroscopy: Principles and applications of UV-visible, IR, NMR and Mass spectrometry in the determination of structures of organic molecules.
EC - ELECTRONICS AND COMMUNICATION ENGINEERING
Linear Algebra: Matrix Algebra, Systems of linear equations, Eigenvalues and eigenvectors.
Calculus: Mean value theorems, Theorems of integral calculus, Evaluation of definite and improper integrals, Partial Derivatives, Maxima and minima, Multiple integrals, Fourier series. Vector
identities, Directional derivatives, Line, Surface and Volume integrals, Stokes, Gauss and Green’s theorems.
Differential equations: First order equation (linear and nonlinear), Higher order linear differential equations with constant coefficients, Method of variation of parameters, Cauchy’s and
Euler’s equations, Initial and boundary value problems, Partial Differential Equations and variable separable method.
Complex variables: Analytic functions, Cauchy’s integral theorem and integral formula, Taylor’s and Laurent’ series, Residue theorem, solution integrals.
Probability and Statistics: Sampling theorems, Conditional probability, Mean, median, mode and standard deviation, Random variables, Discrete and continuous distributions, Poisson, Normal and
Binomial distribution, Correlation and regression analysis.
Numerical Methods: Solutions of non-linear algebraic equations, single and multi-step methods for differential equations.
Transform Theory: Fourier transform, Laplace transform, Z-transform.
Networks: Network graphs: matrices associated with graphs; incidence, fundamental cut set and fundamental circuit matrices. Solution methods: nodal and mesh analysis. Network theorems:
superposition, Thevenin and Norton’s, maximum power transfer, Wye-Delta transformation. Steady state sinusoidal analysis using phasors. Linear constant coefficient differential equations; time
domain analysis of simple RLC circuits, Solution of network equations using Laplace transform: frequency domain analysis of RLC circuits. 2-port network parameters: driving point and transfer
functions. State equations for networks.
Electronic Devices: Energy bands in silicon, intrinsic and extrinsic silicon. Carrier transport in silicon: diffusion current, drift current, mobility, and resistivity. Generation and
recombination of carriers. p-n junction diode, Zener diode, tunnel diode, BJT, JFET, MOS capacitor, MOSFET, LED, p-i-n and avalanche photo diode, Basics of LASERs. Device technology: integrated
circuits fabrication process, oxidation, diffusion, ion implantation, photolithography, n-tub, p-tub and twin-tub CMOS process.
Analog Circuits: Small Signal Equivalent circuits of diodes, BJTs, MOSFETs and analog CMOS. Simple diode circuits, clipping, clamping, rectifier. Biasing and bias stability of transistor and
FET amplifiers. Amplifiers: single- and multi-stage, differential and operational, feedback, and power. Frequency response of amplifiers. Simple op-amp circuits. Filters. Sinusoidal oscillators;
criterion for oscillation; single-transistor and op-amp configurations. Function generators and wave-shaping circuits, 555 Timers. Power supplies.
Digital circuits: Boolean algebra, minimization of Boolean functions; logic gates; digital IC families (DTL, TTL, ECL, MOS, CMOS). Combinatorial circuits: arithmetic circuits, code converters,
multiplexers, decoders, PROMs and PLAs. Sequential circuits: latches and flip-flops, counters and shift-registers. Sample and hold circuits, ADCs, DACs. Semiconductor memories. Microprocessor
(8085): architecture, programming, memory and I/O interfacing.
Signals and Systems: Definitions and properties of Laplace transform, continuous-time and discrete-time Fourier series, continuous-time and discrete-time Fourier Transform, DFT and FFT,
z-transform. Sampling theorem. Linear Time-Invariant (LTI) Systems: definitions and properties; causality, stability, impulse response, convolution, poles and zeros, parallel and cascade
structure, frequency response, group delay, phase delay. Signal transmission through LTI systems.
Control Systems: Basic control system components; block diagrammatic description, reduction of block diagrams. Open loop and closed loop (feedback) systems and stability analysis of these
systems. Signal flow graphs and their use in determining transfer functions of systems; transient and steady state analysis of LTI control systems and frequency response. Tools and techniques
for LTI control system analysis: root loci, Routh-Hurwitz criterion, Bode and Nyquist plots. Control system compensators: elements of lead and lag compensation, elements of
Proportional-Integral-Derivative (PID) control. State variable representation and solution of state equation of LTI control systems.
Communications: Random signals and noise: probability, random variables, probability density function, autocorrelation, power spectral density. Analog communication systems: amplitude and angle
modulation and demodulation systems, spectral analysis of these operations, superheterodyne receivers; elements of hardware, realizations of analog communication systems; signal-to-noise ratio
(SNR) calculations for amplitude modulation (AM) and frequency modulation (FM) for low noise conditions. Fundamentals of information theory and channel capacity theorem. Digital communication
systems: pulse code modulation (PCM), differential pulse code modulation (DPCM), digital modulation schemes: amplitude, phase and frequency shift keying schemes (ASK, PSK, FSK), matched filter
receivers, bandwidth consideration and probability of error calculations for these schemes. Basics of TDMA, FDMA and CDMA and GSM.
Electromagnetics: Elements of vector calculus: divergence and curl; Gauss’ and Stokes’ theorems, Maxwell’s equations: differential and integral forms. Wave equation, Poynting vector. Plane
waves: propagation through various media; reflection and refraction; phase and group velocity; skin depth. Transmission lines: characteristic impedance; impedance transformation; Smith chart;
impedance matching; S parameters, pulse excitation. Waveguides: modes in rectangular waveguides; boundary conditions; cut-off frequencies; dispersion relations. Basics of propagation in
dielectric waveguide and optical fibers. Basics of Antennas: Dipole antennas; radiation pattern; antenna gain.
EE - ELECTRICAL ENGINEERING
Linear Algebra: Matrix Algebra, Systems of linear equations, Eigenvalues and eigenvectors.
Calculus: Mean value theorems, Theorems of integral calculus, Evaluation of definite and improper integrals, Partial Derivatives, Maxima and minima, Multiple integrals, Fourier series. Vector
identities, Directional derivatives, Line, Surface and Volume integrals, Stokes, Gauss and Green’s theorems.
Differential equations: First order equation (linear and nonlinear), Higher order linear differential equations with constant coefficients, Method of variation of parameters, Cauchy’s and
Euler’s equations, Initial and boundary value problems, Partial Differential Equations and variable separable method.
Complex variables: Analytic functions, Cauchy’s integral theorem and integral formula, Taylor’s and Laurent’s series, Residue theorem, solution of integrals.
Probability and Statistics: Sampling theorems, Conditional probability, Mean, median, mode and standard deviation, Random variables, Discrete and continuous distributions, Poisson, Normal and
Binomial distribution, Correlation and regression analysis.
Numerical Methods: Solutions of non-linear algebraic equations, single and multi-step methods for differential equations.
Transform Theory: Fourier transform, Laplace transform, Z-transform.
Electric Circuits and Fields: Network graph, KCL, KVL, node and mesh analysis, transient response of dc and ac networks; sinusoidal steady-state analysis, resonance, basic filter concepts;
ideal current and voltage sources, Thevenin’s, Norton’s and Superposition and Maximum Power Transfer theorems, two-port networks, three phase circuits; Gauss Theorem, electric field and
potential due to point, line, plane and spherical charge distributions; Ampere’s and Biot-Savart’s laws; inductance; dielectrics; capacitance.
Signals and Systems: Representation of continuous and discrete-time signals; shifting and scaling operations; linear, time-invariant and causal systems; Fourier series representation of
continuous periodic signals; sampling theorem; Fourier, Laplace and Z transforms.
Electrical Machines: Single phase transformer - equivalent circuit, phasor diagram, tests, regulation and efficiency; three phase transformers - connections, parallel operation;
auto-transformer; energy conversion principles; DC machines - types, windings, generator characteristics, armature reaction and commutation, starting and speed control of motors; three phase
induction motors - principles, types, performance characteristics, starting and speed control; single phase induction motors; synchronous machines - performance, regulation and parallel
operation of generators, motor starting, characteristics and applications; servo and stepper motors.
Power Systems: Basic power generation concepts; transmission line models and performance; cable performance, insulation; corona and radio interference; distribution systems; per-unit
quantities; bus impedance and admittance matrices; load flow; voltage control; power factor correction; economic operation; symmetrical components; fault analysis; principles of over-current,
differential and distance protection; solid state relays and digital protection; circuit breakers; system stability concepts, swing curves and equal area criterion; HVDC transmission and FACTS.
Control Systems: Principles of feedback; transfer function; block diagrams; steady-state errors; Routh and Nyquist techniques; Bode plots; root loci; lag, lead and lead-lag compensation; state
space model; state transition matrix, controllability and observability.
Electrical and Electronic Measurements: Bridges and potentiometers; PMMC, moving iron, dynamometer and induction type instruments; measurement of voltage, current, power, energy and power
factor; instrument transformers; digital voltmeters and multimeters; phase, time and frequency measurement; Q-meters; oscilloscopes; potentiometric recorders; error analysis.
Analog and Digital Electronics: Characteristics of diodes, BJT, FET; amplifiers - biasing, equivalent circuit and frequency response; oscillators and feedback amplifiers; operational amplifiers
- characteristics and applications; simple active filters; VCOs and timers; combinational and sequential logic circuits; multiplexer; Schmitt trigger; multi-vibrators; sample and hold circuits;
A/D and D/A converters; 8-bit microprocessor basics, architecture, programming and interfacing.
Power Electronics and Drives: Semiconductor power diodes, transistors, thyristors, triacs, GTOs, MOSFETs and IGBTs - static characteristics and principles of operation; triggering circuits;
phase control rectifiers; bridge converters - fully controlled and half controlled; principles of choppers and inverters; basic concepts of adjustable speed dc and ac drives.
GG - GEOLOGY AND GEOPHYSICS
PART - I
Earth and Planetary system; size, shape, internal structure and composition of the earth; atmosphere and greenhouse effect; isostasy; elements of seismology; pressure in deep interior of
planets; continents and continental processes; physical oceanography; paleomagnetism, continental drift, plate tectonics.
Weathering; soil formation; action of river, wind and glacier; oceans and oceanic features; earthquakes, volcanoes, orogeny and mountain building; elements of structural geology;
crystallography; classification, composition and properties of minerals; elements of petrology; engineering properties of rocks and soils; role of geology in the construction of engineering structures.
Introductory processes of ore formation, broad occurrence and distribution of ore deposits; coal and petroleum resources in India; ground water geology; geological time scale and geochronology;
stratigraphic principles and stratigraphy of India; basic concepts of gravity, magnetic and electrical prospecting for ores and ground water.
PART – IIA: GEOLOGY
Crystal symmetry, forms, twinning; crystal chemistry; optical mineralogy, classification of minerals, diagnostic physical and optical properties of rock forming minerals.
Igneous rocks – classification, forms and textures, magmatic differentiation; phase diagrams and trace elements as monitors of magma evolutionary processes; mantle melting models and derivation
of primary magmas. Metamorphism: controlling factors, metamorphic facies, grade and baric types; metamorphism of pelitic, mafic and impure carbonate rocks; role of fluids in metamorphism;
metamorphic P-T-t paths and their tectonic significance; Igneous and metamorphic provinces of India; structure and petrology of sedimentary rocks; sedimentary processes and environments,
sedimentary facies, basin studies; association of igneous, sedimentary and metamorphic rocks with tectonic setting.
Stress, strain and material response; brittle and ductile deformation; primary and secondary structures; geometry and genesis of folds, faults, joints, unconformities; cleavage, schistosity and
lineation; methods of projection, tectonites and their significance; shear zone; superposed folding; basement cover relationship.
Morphology, classification and geological significance of important invertebrates, vertebrates, microfossils and palaeoflora; stratigraphic principles and Indian stratigraphy; geomorphic
processes and agents; development and evolution of landforms; slope and drainage; processes on deep oceanic and near-shore regions; quantitative and applied geomorphology; air photo
interpretation and remote sensing; ore mineralogy and optical properties of ore minerals; ore forming processes vis-à-vis ore-rock association (magmatic, hydrothermal, sedimentary and
metamorphogenic ores); ores and metamorphism; fluid inclusions as an ore genetic tool; prospecting and exploration of economic minerals; sampling, ore reserve estimation, mining methods; coal
and petroleum geology; origin and distribution of mineral and fuel deposits in India; marine geology and ocean resources; ore dressing and mineral economics.
Cosmic abundance; meteorites; geochemical evolution of the earth; geochemical cycles; distribution of major, minor and trace elements; elements of geochemical thermodynamics, isotope
geochemistry; geochemistry of waters including solution equilibria and water rock interaction.
Engineering properties of rocks and soils; rocks as construction material; geology of dams; tunnels and excavation sites; natural hazards; ground water geology and exploration and well
hydraulics; water quality; basic principles of remote sensing – energy sources and radiation principles, atmospheric absorption, interaction of energy with various features of the earth's
surface. GIS – basic concepts, raster and vector mode operation, digital processing of satellite images, visual and microwave remote sensing; elements of Geostatistics.
PART – II B: GEOPHYSICS
The earth as a planet; different motions of the earth; gravity field of the earth and its shape; geochronology; seismology and interior of the earth; variation of density, velocity, pressure,
temperature, electrical and magnetic properties of the earth; earthquakes-causes and measurements; magnitude and intensity, focal mechanisms, earthquake quantification, source characteristics,
seismotectonics and seismic hazards; digital seismographs, paleoseismology, geomagnetic field, paleomagnetism; oceanic and continental lithosphere; plate tectonics; heat flow; upper and lower
atmospheric phenomena.
Theories of scalar and vector potential fields; Laplace, Maxwell and Helmholtz equations for solution of different types of boundary value problems in Cartesian, cylindrical and spherical polar
coordinates; Green's theorem; Image theory; integral equations and conformal transformations in potential theory; Eikonal equation and Ray theory.
'G' and 'g' units of measurement, density of rocks, gravimeters, Bouguer gravity formula, various corrections to gravity data, free air, Bouguer and isostatic anomalies, regional and residual
gravity separation, upward and downward continuation, preparation and analysis of gravity maps; gravity anomalies and their interpretation; calculation of mass, airborne, shipborne and
bore-hole gravity surveys.
Earth's magnetic field, units of measurement, magnetic susceptibility of rocks, magnetometers, corrections, preparation of magnetic maps, upward and downward continuation, magnetic anomalies
and their interpretation.
Conduction of electricity through rocks; electrical conductivities of metals and of metallic, non-metallic and rock-forming minerals; D.C. resistivity: units and methods of measurement, electrode
configurations for sounding and profiling, application of filter theory, interpretation and application of resistivity field data; self-potential: origin, classification, field measurement and
interpretation; induced polarization: time, frequency and phase domain measurements; IP units and methods of measurement, interpretation and application; ground-water exploration.
Origin of electromagnetic field, elliptic polarization, methods of measurement for different source-receiver configuration components in EM measurements, skin-depth, interpretation and
applications; earth's natural electromagnetic field, tellurics, magneto-tellurics; geomagnetic depth sounding principles, electromagnetic profiling, methods of measurement, processing of data
and interpretation.
Seismic methods of prospecting: Reflection, refraction and CDP surveys; land and marine seismic sources, generation and propagation of elastic waves, velocity increasing with depth, geophones,
hydrophones, recording instruments (DFS), digital formats, field layouts, seismic noises and noise profile analysis, optimum geophone grouping, noise cancellation by shot and geophone arrays,
2D and 3D seismic data acquisition, processing and interpretation; CDP stacking charts, binning, filtering, dip-moveout, static and dynamic corrections, migration, signal processing, attribute
analysis, bright and dim spots, seismic stratigraphy, high resolution seismics, VSP, AVO.
Principles and techniques of geophysical well-logging, SP, resistivity, induction, gamma ray, neutron, density, sonic, temperature, dip meter, caliper, nuclear magnetic, cement bond logging,
micro-logs. Quantitative evaluation of formations from well logs; well hydraulics and application of geophysical methods for groundwater study; application of bore hole geophysics in ground
water, mineral and oil exploration.
Radioactive methods of prospecting and assaying of mineral deposits (radioactive and non-radioactive); half-life, decay constant, radioactive equilibrium; G-M counter, scintillation detector,
semiconductor devices; application of radiometric methods in exploration and radioactive waste disposal.
Geophysical signal processing, sampling theorem, aliasing, Nyquist frequency, Fourier series, periodic waveform, Fourier and Hilbert transform, Z-transform, power spectrum, delta function, auto
correlation, cross correlation, convolution, deconvolution, principles of digital filters, windows, poles and zeros.
Geophysical inverse problems: non-uniqueness and stability of solutions; quasi-linear and non-linear methods including genetic algorithms and artificial neural network.
IN - INSTRUMENTATION ENGINEERING
Linear Algebra: Matrix Algebra, Systems of linear equations, Eigenvalues and eigenvectors.
Calculus: Mean value theorems, Theorems of integral calculus, Evaluation of definite and improper integrals, Partial Derivatives, Maxima and minima, Multiple integrals, Fourier series. Vector
identities, Directional derivatives, Line, Surface and Volume integrals, Stokes, Gauss and Green’s theorems.
Differential equations: First order equation (linear and nonlinear), Higher order linear differential equations with constant coefficients, Method of variation of parameters, Cauchy’s and
Euler’s equations, Initial and boundary value problems, Partial Differential Equations and variable separable method.
Complex variables: Analytic functions, Cauchy’s integral theorem and integral formula, Taylor’s and Laurent’s series, Residue theorem, solution of integrals.
Probability and Statistics: Sampling theorems, Conditional probability, Mean, median, mode and standard deviation, Random variables, Discrete and continuous distributions, Poisson, Normal and
Binomial distribution, Correlation and regression analysis.
Numerical Methods: Solutions of non-linear algebraic equations, single and multi-step methods for differential equations.
Transform Theory: Fourier transform, Laplace transform, Z-transform.
Basics of Circuits and Measurement Systems: Kirchhoff’s laws, mesh and nodal analysis. Circuit theorems. One-port and two-port network functions. Static and dynamic characteristics of
Measurement Systems. Error and uncertainty analysis. Statistical analysis of data and curve fitting.
Transducers, Mechanical Measurement and Industrial Instrumentation: Resistive, Capacitive, Inductive and piezoelectric transducers and their signal conditioning. Measurement of displacement,
velocity and acceleration (translational and rotational), force, torque, vibration and shock. Measurement of pressure, flow, temperature and liquid level. Measurement of pH, conductivity,
viscosity and humidity.
Analog Electronics: Characteristics of diode, BJT, JFET and MOSFET. Diode circuits. Transistors at low and high frequencies, Amplifiers, single and multi-stage. Feedback amplifiers. Operational
amplifiers, characteristics and circuit configurations. Instrumentation amplifier. Precision rectifier. V-to-I and I-to-V converter. Op-Amp based active filters. Oscillators and signal generators.
Digital Electronics: Combinational logic circuits, minimization of Boolean functions. IC families, TTL, MOS and CMOS. Arithmetic circuits. Comparators, Schmitt trigger, timers and mono-stable
multi-vibrator. Sequential circuits, flip-flops, counters, shift registers. Multiplexer, S/H circuit. Analog-to-Digital and Digital-to-Analog converters. Basics of number system. Microprocessor
applications, memory and input-output interfacing. Microcontrollers.
Signals, Systems and Communications: Periodic and aperiodic signals. Impulse response, transfer function and frequency response of first- and second order systems. Convolution, correlation and
characteristics of linear time invariant systems. Discrete time system, impulse and frequency response. Pulse transfer function. IIR and FIR filters. Amplitude and frequency modulation and
demodulation. Sampling theorem, pulse code modulation. Frequency and time division multiplexing. Amplitude shift keying, frequency shift keying and phase shift keying for digital modulation.
Electrical and Electronic Measurements: Bridges and potentiometers, measurement of R,L and C. Measurements of voltage, current, power, power factor and energy. A.C & D.C current probes.
Extension of instrument ranges. Q-meter and waveform analyzer. Digital voltmeter and multi-meter. Time, phase and frequency measurements. Cathode ray oscilloscope. Serial and parallel
communication. Shielding and grounding.
Control Systems and Process Control: Feedback principles. Signal flow graphs. Transient response, steady-state errors. Routh and Nyquist criteria. Bode plot, root loci. Time delay systems.
Phase and gain margin. State space representation of systems. Mechanical, hydraulic and pneumatic system components. Synchro pair, servo and step motors. On-off, cascade, P, P-I, P-I-D, feed
forward and derivative controller, Fuzzy controllers.
Analytical, Optical and Biomedical Instrumentation: Mass spectrometry. UV, visible and IR spectrometry. X-ray and nuclear radiation measurements. Optical sources and detectors, LED, laser,
Photo-diode, photo-resistor and their characteristics. Interferometers, applications in metrology. Basics of fiber optics. Biomedical instruments, EEG, ECG and EMG. Clinical measurements.
Ultrasonic transducers and Ultrasonography. Principles of Computer Assisted Tomography.
MA - MATHEMATICS
Linear Algebra: Finite dimensional vector spaces; Linear transformations and their matrix representations, rank; systems of linear equations, eigenvalues and eigenvectors, minimal polynomial,
Cayley-Hamilton Theorem, diagonalisation, Hermitian, Skew-Hermitian and unitary matrices; Finite dimensional inner product spaces, Gram-Schmidt orthonormalization process, self-adjoint operators.
Complex Analysis: Analytic functions, conformal mappings, bilinear transformations; complex integration: Cauchy’s integral theorem and formula; Liouville’s theorem, maximum modulus principle;
Taylor and Laurent’s series; residue theorem and applications for evaluating real integrals.
Real Analysis: Sequences and series of functions, uniform convergence, power series, Fourier series, functions of several variables, maxima, minima; Riemann integration, multiple integrals,
line, surface and volume integrals, theorems of Green, Stokes and Gauss; metric spaces, completeness, Weierstrass approximation theorem, compactness; Lebesgue measure, measurable functions;
Lebesgue integral, Fatou’s lemma, dominated convergence theorem.
Ordinary Differential Equations: First order ordinary differential equations, existence and uniqueness theorems, systems of linear first order ordinary differential equations, linear ordinary
differential equations of higher order with constant coefficients; linear second order ordinary differential equations with variable coefficients; method of Laplace transforms for solving
ordinary differential equations, series solutions; Legendre and Bessel functions and their orthogonality.
Algebra: Normal subgroups and homomorphism theorems, automorphisms; Group actions, Sylow’s theorems and their applications; Euclidean domains, Principal ideal domains and unique factorization
domains. Prime ideals and maximal ideals in commutative rings; Fields, finite fields.
Functional Analysis: Banach spaces, Hahn-Banach extension theorem, open mapping and closed graph theorems, principle of uniform boundedness; Hilbert spaces, orthonormal bases, Riesz
representation theorem, bounded linear operators.
Numerical Analysis: Numerical solution of algebraic and transcendental equations: bisection, secant method, Newton-Raphson method, fixed point iteration; interpolation: error of polynomial
interpolation, Lagrange, Newton interpolations; numerical differentiation; numerical integration: Trapezoidal and Simpson rules, Gauss Legendre quadrature, method of undetermined parameters;
least square polynomial approximation; numerical solution of systems of linear equations: direct methods (Gauss elimination, LU decomposition); iterative methods (Jacobi and Gauss-Seidel);
matrix eigenvalue problems: power method; numerical solution of ordinary differential equations: initial value problems, Taylor series methods, Euler’s method, Runge-Kutta methods.
Partial Differential Equations: Linear and quasilinear first order partial differential equations, method of characteristics; second order linear equations in two variables and their
classification; Cauchy, Dirichlet and Neumann problems; solutions of Laplace, wave and diffusion equations in two variables; Fourier series and Fourier transform and Laplace transform methods
of solutions for the above equations.
Mechanics: Virtual work, Lagrange’s equations for holonomic systems, Hamiltonian equations.
Topology: Basic concepts of topology, product topology, connectedness, compactness, countability and separation axioms, Urysohn’s Lemma.
Probability and Statistics: Probability space, conditional probability, Bayes theorem, independence, Random variables, joint and conditional distributions, standard probability distributions
and their properties, expectation, conditional expectation, moments; Weak and strong law of large numbers, central limit theorem; Sampling distributions, UMVU estimators, maximum likelihood
estimators, Testing of hypotheses, standard parametric tests based on normal, χ², t, F distributions; Linear regression; Interval estimation.
Linear programming: Linear programming problem and its formulation, convex sets and their properties, graphical method, basic feasible solution, simplex method, big-M and two phase methods;
infeasible and unbounded LPP’s, alternate optima; Dual problem and duality theorems, dual simplex method and its application in post optimality analysis; Balanced and unbalanced transportation
problems, u-v method for solving transportation problems; Hungarian method for solving assignment problems.
Calculus of Variation and Integral Equations: Variational problems with fixed boundaries; sufficient conditions for extremum; linear integral equations of Fredholm and Volterra type, their
iterative solutions.
ME - MECHANICAL ENGINEERING
Linear Algebra: Matrix algebra, Systems of linear equations, Eigenvalues and eigenvectors.
Calculus: Functions of single variable, Limit, continuity and differentiability, Mean value theorems, Evaluation of definite and improper integrals, Partial derivatives, Total derivative,
Maxima and minima, Gradient, Divergence and Curl, Vector identities, Directional derivatives, Line, Surface and Volume integrals, Stokes, Gauss and Green’s theorems.
Differential equations: First order equations (linear and nonlinear), Higher order linear differential equations with constant coefficients, Cauchy’s and Euler’s equations, Initial and boundary
value problems, Laplace transforms, Solutions of one dimensional heat and wave equations and Laplace equation.
Complex variables: Analytic functions, Cauchy’s integral theorem, Taylor and Laurent series.
Probability and Statistics: Definitions of probability and sampling theorems, Conditional probability, Mean, median, mode and standard deviation, Random variables, Poisson, Normal and Binomial distributions.
Numerical Methods: Numerical solutions of linear and non-linear algebraic equations; integration by trapezoidal and Simpson’s rule; single and multi-step methods for differential equations.
Engineering Mechanics: Free body diagrams and equilibrium; trusses and frames; virtual work; kinematics and dynamics of particles and of rigid bodies in plane motion, including impulse and
momentum (linear and angular) and energy formulations; impact.
Strength of Materials: Stress and strain, stress-strain relationship and elastic constants, Mohr’s circle for plane stress and plane strain, thin cylinders; shear force and bending moment
diagrams; bending and shear stresses; deflection of beams; torsion of circular shafts; Euler’s theory of columns; strain energy methods; thermal stresses.
Theory of Machines: Displacement, velocity and acceleration analysis of plane mechanisms; dynamic analysis of slider-crank mechanism; gear trains; flywheels.
Vibrations: Free and forced vibration of single degree of freedom systems; effect of damping; vibration isolation; resonance, critical speeds of shafts.
Design: Design for static and dynamic loading; failure theories; fatigue strength and the S-N diagram; principles of the design of machine elements such as bolted, riveted and welded joints,
shafts, spur gears, rolling and sliding contact bearings, brakes and clutches.
Fluid Mechanics: Fluid properties; fluid statics, manometry, buoyancy; control-volume analysis of mass, momentum and energy; fluid acceleration; differential equations of continuity and
momentum; Bernoulli’s equation; viscous flow of incompressible fluids; boundary layer; elementary turbulent flow; flow through pipes, head losses in pipes, bends etc.
Heat-Transfer: Modes of heat transfer; one dimensional heat conduction, resistance concept, electrical analogy, unsteady heat conduction, fins; dimensionless parameters in free and forced
convective heat transfer, various correlations for heat transfer in flow over flat plates and through pipes; thermal boundary layer; effect of turbulence; radiative heat transfer, black and
grey surfaces, shape factors, network analysis; heat exchanger performance, LMTD and NTU methods.
Thermodynamics: Zeroth, First and Second laws of thermodynamics; thermodynamic system and processes; Carnot cycle; irreversibility and availability; behaviour of ideal and real gases,
properties of pure substances, calculation of work and heat in ideal processes; analysis of thermodynamic cycles related to energy conversion.
Applications: Power Engineering: Steam Tables, Rankine, Brayton cycles with regeneration and reheat. I.C. Engines: air-standard Otto, Diesel cycles. Refrigeration and air-conditioning: Vapour
refrigeration cycle, heat pumps, gas refrigeration, Reverse Brayton cycle; moist air: psychrometric chart, basic psychrometric processes. Turbomachinery: Pelton-wheel, Francis and Kaplan
turbines — impulse and reaction principles, velocity diagrams.
Engineering Materials: Structure and properties of engineering materials, heat treatment, stress-strain diagrams for engineering materials.
Metal Casting: Design of patterns, moulds and cores; solidification and cooling; riser and gating design, design considerations.
Forming: Plastic deformation and yield criteria; fundamentals of hot and cold working processes; load estimation for bulk (forging, rolling, extrusion, drawing) and sheet (shearing, deep
drawing, bending) metal forming processes; principles of powder metallurgy.
Joining: Physics of welding, brazing and soldering; adhesive bonding; design considerations in welding.
Machining and Machine Tool Operations: Mechanics of machining, single and multi-point cutting tools, tool geometry and materials, tool life and wear; economics of machining; principles of
non-traditional machining processes; principles of work holding, principles of design of jigs and fixtures.
Metrology and Inspection: Limits, fits and tolerances; linear and angular measurements; comparators; gauge design; interferometry; form and finish measurement; alignment and testing methods;
tolerance analysis in manufacturing and assembly.
Computer Integrated Manufacturing: Basic concepts of CAD/CAM and their integration tools.
Production Planning and Control: Forecasting models, aggregate production planning, scheduling, materials requirement planning.
Inventory Control: Deterministic and probabilistic models; safety stock inventory control systems.
Operations Research: Linear programming, simplex method and duality, transportation, assignment, network flow models, simple queuing models, PERT and CPM.
MN - MINING ENGINEERING
Linear Algebra: Matrices and Determinants, Systems of linear equations, eigenvalues and eigenvectors.
Calculus: Limit, continuity and differentiability; Partial Derivatives; Maxima and minima; Sequences and series; Test for convergence; Fourier series.
Vector Calculus: Gradient; Divergence and Curl; Line, surface and volume integrals; Stokes, Gauss and Green’s theorems.
Differential Equations: Linear and non-linear first order ODEs; Higher order linear ODEs with constant coefficients; Cauchy’s and Euler’s equations; Laplace transforms; PDEs – Laplace, heat and
wave equations.
Probability and Statistics: Mean, median, mode and standard deviation; Random variables; Poisson, normal and binomial distributions; Correlation and regression analysis.
Numerical Methods: Solutions of linear and non-linear algebraic equations; integration by trapezoidal and Simpson’s rule; single and multi-step methods for differential equations.
Mechanics: Equivalent force systems; Equations of equilibrium; Two dimensional frames and trusses; Free body diagrams; Friction forces; Particle kinematics and dynamics.
Mine Development, Geomechanics and Ground Control: Methods of access to deposits; Underground drivages; Drilling methods and machines; Explosives, blasting devices and practices.
Geo-technical properties of rocks; Rock mass classification; Ground control, instrumentation and stress measurement techniques; Theories of rock failure; Ground vibrations; Stress distribution
around mine openings; Subsidence; Design of supports in roadways and workings; Rock bursts and coal bumps; Slope stability.
Mining Methods and Machinery: Surface mining: layout, development, loading, transportation and mechanization, continuous surface mining systems; Underground coal mining: bord and pillar
systems, room and pillar mining, longwall mining, thick seam mining methods; Underground metal mining: open, supported and caved stoping methods, stope mechanization, ore handling systems,
mine filling.
Generation and transmission of mechanical, hydraulic and pneumatic power; Materials handling: haulages, conveyors, face and development machinery, hoisting systems, pumps.
Ventilation, Underground Hazards and Surface Environment: Underground atmosphere; Heat load sources and thermal environment, air cooling; Mechanics of air flow, distribution, natural and
mechanical ventilation; Mine fans and their usage; Auxiliary ventilation; Ventilation planning.
Subsurface hazards from fires, explosions, gases, dust and inundation; Rescue apparatus and practices; Safety in mines, accident analysis, noise, mine lighting, occupational health and risk.
Air, water and soil pollution: causes, dispersion, quality standards, reclamation and control.
Surveying, Mine Planning and Systems Engineering: Fundamentals of engineering surveying; Levels and leveling, theodolite, tacheometry, triangulation, contouring, errors and adjustments,
correlation; Underground surveying; Curves; Photogrammetry; Field astronomy; EDM, total station and GPS fundamentals.
Principles of planning: Sampling methods and practices, reserve estimation techniques, basics of geostatistics and quality control, optimization of facility location, cash flow concepts and
mine valuation, open pit design; GIS fundamentals.
Work-study; Concepts of reliability, reliability of series and parallel systems. Linear programming, transportation and assignment problems, queueing, network analysis, basics of simulation.
MT - METALLURGICAL ENGINEERING
Linear Algebra: Matrices and Determinants, Systems of linear equations, eigenvalues and eigenvectors.
Calculus: Limit, continuity and differentiability; Partial Derivatives; Maxima and minima; Sequences and series; Test for convergence; Fourier series.
Vector Calculus: Gradient; Divergence and Curl; Line, surface and volume integrals; Stokes, Gauss and Green’s theorems.
Differential Equations: Linear and non-linear first order ODEs; Higher order linear ODEs with constant coefficients; Cauchy’s and Euler’s equations; Laplace transforms; PDEs – Laplace, heat and
wave equations.
Probability and Statistics: Mean, median, mode and standard deviation; Random variables; Poisson, normal and binomial distributions; Correlation and regression analysis.
Numerical Methods: Solutions of linear and non-linear algebraic equations; integration by trapezoidal and Simpson’s rule; single and multi-step methods for differential equations.
Thermodynamics and Rate Processes: Laws of thermodynamics, activity, equilibrium constant, applications to metallurgical systems, solutions, phase equilibria, Ellingham and phase stability
diagrams, thermodynamics of surfaces, interfaces and defects, adsorption and segregation; basic kinetic laws, order of reactions, rate constants and rate limiting steps; principles of electrochemistry - single electrode potential, electro-chemical cells and polarizations, aqueous corrosion and protection of metals, oxidation and high temperature corrosion – characterization and
control; heat transfer – conduction, convection and heat transfer coefficient relations, radiation, mass transfer – diffusion and Fick’s laws, mass transfer coefficients; momentum transfer –
concepts of viscosity, shell balances, Bernoulli’s equation, friction factors.
Extractive Metallurgy: Minerals of economic importance, comminution techniques, size classification, Flotation, gravity and other methods of mineral processing; agglomeration, pyro-, hydro- and
electro-metallurgical processes; material and energy balances; principles and processes for the extraction of non-ferrous metals – aluminium, copper, zinc, lead, magnesium, nickel, titanium and
other rare metals; iron and steel making – principles, role, structure and properties of slags, metallurgical coke, blast furnace, direct reduction processes, primary and secondary steel making,
ladle metallurgy operations including deoxidation, desulphurization, sulphide shape control, inert gas rinsing and vacuum reactors; secondary refining processes including AOD, VAD, VOD, VAR and
ESR; ingot and continuous casting; stainless steel making, furnaces and refractories.
Physical Metallurgy: Crystal structure and bonding characteristics of metals, alloys, ceramics and polymers, structure of surfaces and interfaces, nano-crystalline and amorphous structures;
solid solutions; solidification; phase transformation and binary phase diagrams; principles of heat treatment of steels, cast iron and aluminum alloys; surface treatments; recovery,
recrystallization and grain growth; industrially important ferrous and non-ferrous alloys; elements of X-ray and electron diffraction; principles of scanning and transmission electron
microscopy; industrial ceramics, polymers and composites; electronic basis of thermal, optical, electrical and magnetic properties of materials; electronic and opto-electronic materials.
Mechanical Metallurgy: Elasticity, yield criteria and plasticity; defects in crystals; elements of dislocation theory – types of dislocations, slip and twinning, source and multiplication of
dislocations, stress fields around dislocations, partial dislocations, dislocation interactions and reactions; strengthening mechanisms; tensile, fatigue and creep behaviour; super-plasticity;
fracture – Griffith theory, basic concepts of linear elastic and elasto-plastic fracture mechanics, ductile to brittle transition, fracture toughness; failure analysis; mechanical testing –
tension, compression, torsion, hardness, impact, creep, fatigue, fracture toughness and formability.
Manufacturing Processes: Metal casting – patterns and moulds including mould design involving feeding, gating and risering, melting, casting practices in sand casting, permanent mould casting,
investment casting and shell moulding, casting defects and repair; hot, warm and cold working of metals; Metal forming - fundamentals of metal forming processes of rolling, forging, extrusion,
wire drawing and sheet metal forming, defects in forming; Metal joining - soldering, brazing and welding, common welding processes of shielded metal arc welding, gas metal arc welding, gas
tungsten arc welding and submerged arc welding; welding metallurgy, problems associated with welding of steels and aluminium alloys, defects in welded joints; powder metallurgy; NDT using
dye-penetrant, ultrasonic, radiography, eddy current, acoustic emission and magnetic particle methods.
PH - PHYSICS
Mathematical Physics: Linear vector space; matrices; vector calculus; linear differential equations; elements of complex analysis; Laplace transforms, Fourier analysis, elementary ideas about tensors.
Classical Mechanics: Conservation laws; central forces, Kepler problem and planetary motion; collisions and scattering in laboratory and centre of mass frames; mechanics of system of particles;
rigid body dynamics; moment of inertia tensor; noninertial frames and pseudo forces; variational principle; Lagrange’s and Hamilton’s formalisms; equation of motion, cyclic coordinates, Poisson
bracket; periodic motion, small oscillations, normal modes; special theory of relativity – Lorentz transformations, relativistic kinematics, mass-energy equivalence.
Electromagnetic Theory: Solution of electrostatic and magnetostatic problems including boundary value problems; dielectrics and conductors; Biot-Savart’s and Ampere’s laws; Faraday’s law;
Maxwell’s equations; scalar and vector potentials; Coulomb and Lorentz gauges; Electromagnetic waves and their reflection, refraction, interference, diffraction and polarization. Poynting
vector, Poynting theorem, energy and momentum of electromagnetic waves; radiation from a moving charge.
Quantum Mechanics: Physical basis of quantum mechanics; uncertainty principle; Schrodinger equation; one, two and three dimensional potential problems; particle in a box, harmonic oscillator,
hydrogen atom; linear vectors and operators in Hilbert space; angular momentum and spin; addition of angular momenta; time independent perturbation theory; elementary scattering theory.
Thermodynamics and Statistical Physics: Laws of thermodynamics; macrostates and microstates; phase space; probability ensembles; partition function, free energy, calculation of thermodynamic
quantities; classical and quantum statistics; degenerate Fermi gas; black body radiation and Planck’s distribution law; Bose-Einstein condensation; first and second order phase transitions,
critical point.
Atomic and Molecular Physics: Spectra of one- and many-electron atoms; LS and jj coupling; hyperfine structure; Zeeman and Stark effects; electric dipole transitions and selection rules; X-ray
spectra; rotational and vibrational spectra of diatomic molecules; electronic transition in diatomic molecules, Franck-Condon principle; Raman effect; NMR and ESR; lasers.
Solid State Physics: Elements of crystallography; diffraction methods for structure determination; bonding in solids; elastic properties of solids; defects in crystals; lattice vibrations and
thermal properties of solids; free electron theory; band theory of solids; metals, semiconductors and insulators; transport properties; optical, dielectric and magnetic properties of solids;
elements of superconductivity.
Nuclear and Particle Physics: Nuclear radii and charge distributions, nuclear binding energy, Electric and magnetic moments; nuclear models, liquid drop model - semi-empirical mass formula,
Fermi gas model of nucleus, nuclear shell model; nuclear force and two nucleon problem; Alpha decay, Beta-decay, electromagnetic transitions in nuclei; Rutherford scattering, nuclear reactions,
conservation laws; fission and fusion; particle accelerators and detectors; elementary particles, photons, baryons, mesons and leptons; quark model.
Electronics: Network analysis; semiconductor devices; Bipolar Junction Transistors, Field Effect Transistors, amplifier and oscillator circuits; operational amplifier, negative feedback
circuits, active filters and oscillators; rectifier circuits, regulated power supplies; basic digital logic circuits, sequential circuits, flip-flops, counters, registers, A/D and D/A converters.
PI - PRODUCTION AND INDUSTRIAL ENGINEERING
Linear Algebra: Matrix algebra, Systems of linear equations, eigenvalues and eigenvectors.
Calculus: Functions of single variable, Limit, continuity and differentiability, Mean value theorems, Evaluation of definite and improper integrals, Partial derivatives, Total derivative,
Maxima and minima, Gradient, Divergence and Curl, Vector identities, Directional derivatives, Line, Surface and Volume integrals, Stokes, Gauss and Green’s theorems.
Differential equations: First order equations (linear and nonlinear), Higher order linear differential equations with constant coefficients, Cauchy’s and Euler’s equations, Initial and boundary
value problems, Laplace transforms, Solutions of one dimensional heat and wave equations and Laplace equation.
Complex variables: Analytic functions, Cauchy’s integral theorem, Taylor and Laurent series.
Probability and Statistics: Definitions of probability and sampling theorems, Conditional probability, Mean, median, mode and standard deviation, Random variables, Poisson, Normal and Binomial distributions.
Numerical Methods: Numerical solutions of linear and non-linear algebraic equations; integration by trapezoidal and Simpson’s rule; single and multi-step methods for differential equations.
Engineering Materials: Structure and properties of engineering materials and their applications; effect of strain, strain rate and temperature on mechanical properties of metals and alloys;
heat treatment of metals and alloys, its influence on mechanical properties.
Applied Mechanics: Engineering mechanics - equivalent force systems, free body concepts, equations of equilibrium; strength of materials - stress, strain and their relationship, Mohr’s circle,
deflection of beams, bending and shear stress, Euler’s theory of columns.
Theory of Machines and Design: Analysis of planar mechanisms, cams and followers; governors and flywheels; design of elements - failure theories; design of bolted, riveted and welded joints;
design of shafts, keys, spur gears, belt drives, brakes and clutches.
Thermal Engineering: Fluid mechanics - fluid statics, Bernoulli’s equation, flow through pipes, equations of continuity and momentum; thermodynamics - zeroth, first and second law of
thermodynamics, thermodynamic system and processes, calculation of work and heat for systems and control volumes; air standard cycles; basics of internal combustion engines and steam turbines;
heat transfer - fundamentals of conduction, convection and radiation, heat exchangers.
Metal Casting: Casting processes - types and applications; patterns - types and materials; allowances; moulds and cores - materials, making, and testing; casting techniques of cast iron, steels
and nonferrous metals and alloys; solidification; design of casting, gating and risering; casting inspection, defects and remedies.
Metal Forming: Stress-strain relations in elastic and plastic deformation; concept of flow stress, deformation mechanisms; hot and cold working - forging, rolling, extrusion, wire and tube
drawing; sheet metal working processes such as blanking, piercing, bending, deep drawing, coining and embossing; analysis of rolling, forging, extrusion and wire/rod drawing; metal working defects.
Metal Joining Processes: Welding processes - manual metal arc, MIG, TIG, plasma arc, submerged arc, electroslag, thermit, resistance, forge, friction, and explosive welding; other joining
processes - soldering, brazing, braze welding; inspection of welded joints, defects and remedies; introduction to advanced welding processes - ultrasonic, electron beam, laser beam; thermal cutting.
Machining and Machine Tool Operations: Basic machine tools; machining processes-turning, drilling, boring, milling, shaping, planing, gear cutting, thread production, broaching, grinding,
lapping, honing, super finishing; mechanics of machining - geometry of cutting tools, chip formation, cutting forces and power requirements, Merchant’s analysis; selection of machining
parameters; tool materials, tool wear and tool life, economics of machining, thermal aspects of machining, cutting fluids, machinability; principles and applications of nontraditional machining
processes - USM, AJM, WJM, EDM and Wire cut EDM, LBM, EBM, PAM, CHM, ECM.
Tool Engineering: Jigs and fixtures - principles, applications, and design; press tools - configuration, design of die and punch; principles of forging die design.
Metrology and Inspection: Limits, fits, and tolerances, interchangeability, selective assembly; linear and angular measurements by mechanical and optical methods, comparators; design of limit
gauges; interferometry; measurement of straightness, flatness, roundness, squareness and symmetry; surface finish measurement; inspection of screw threads and gears; alignment testing of
machine tools.
Powder Metallurgy: Production of metal powders, compaction and sintering.
Polymers and Composites: Introduction to polymers and composites; plastic processing - injection, compression and blow molding, extrusion, calendaring and thermoforming; molding of composites.
Manufacturing Analysis: Sources of errors in manufacturing; process capability; tolerance analysis in manufacturing and assembly; process planning; parameter selection and comparison of
production alternatives; time and cost analysis; manufacturing technologies - strategies and selection.
Computer Integrated Manufacturing: Basic concepts of CAD, CAM, CAPP, cellular manufacturing, NC, CNC, DNC, Robotics, FMS, and CIM.
Product Design and Development: Principles of good product design, tolerance design; quality and cost considerations; product life cycle; standardization, simplification, diversification, value
engineering and analysis, concurrent engineering.
Engineering Economy and Costing: Elementary cost accounting and methods of depreciation; break-even analysis, techniques for evaluation of capital investments, financial statements.
Work System Design: Taylor’s scientific management, Gilbreths’ contributions; productivity - concepts and measurements; method study, micro-motion study, principles of motion economy; work
measurement - stop watch time study, work sampling, standard data, PMTS; ergonomics; job evaluation, merit rating, incentive schemes, and wage administration; business process reengineering.
Facility Design: Facility location factors and evaluation of alternate locations; types of plant layout and their evaluation; computer aided layout design techniques; assembly line balancing;
materials handling systems.
Production Planning and Inventory Control: Forecasting techniques - causal and time series models, moving average, exponential smoothing, trend and seasonality; aggregate production planning;
master production scheduling; MRP and MRP-II; order control and flow control; routing, scheduling and priority dispatching; push and pull production systems, concept of JIT manufacturing
system; logistics, distribution, and supply chain management; Inventory - functions, costs, classifications, deterministic and probabilistic inventory models, quantity discount; perpetual and
periodic inventory control systems.
Operations Research: Linear programming - problem formulation, simplex method, duality and sensitivity analysis; transportation and assignment models; network flow models, constrained
optimization and Lagrange multipliers; simple queuing models; dynamic programming; simulation - manufacturing applications; PERT and CPM, time-cost trade-off, resource leveling.
Quality Management: Quality - concept and costs, quality circles, quality assurance; statistical quality control, acceptance sampling, zero defects, six sigma; total quality management; ISO
9000; design of experiments - Taguchi method.
Reliability and Maintenance: Reliability, availability and maintainability; distribution of failure and repair times; determination of MTBF and MTTR, reliability models; system reliability
determination; preventive maintenance and replacement, total productive maintenance - concept and applications.
Management Information System: Value of information; information storage and retrieval system - database and data structures; knowledge based systems.
Intellectual Property System: Definition of intellectual property, importance of IPR; TRIPS and its implications, patent, copyright, industrial design and trademark.
PY - PHARMACEUTICAL SCIENCES
Natural Products: Pharmacognosy & Phytochemistry – Chemistry, tests, isolation, characterization and estimation of phytopharmaceuticals belonging to the group of Alkaloids, Glycosides,
Terpenoids, Steroids, Bioflavonoids, Purines, Guggul lipids. Pharmacognosy of crude drugs that contain the above constituents. Standardization of raw materials and herbal products. WHO
guidelines. Quantitative microscopy including modern techniques used for evaluation. Biotechnological principles and techniques for plant development, Tissue culture.
Pharmacology: General pharmacological principles including Toxicology. Drug interaction. Pharmacology of drugs acting on Central nervous system, Cardiovascular system, Autonomic nervous system,
Gastro intestinal system and Respiratory system. Pharmacology of Autocoids, Hormones, Hormone antagonists, chemotherapeutic agents including anticancer drugs. Bioassays, Immuno Pharmacology.
Drugs acting on the blood & blood forming organs. Drugs acting on the renal system.
Medicinal Chemistry: Structure, nomenclature, classification, synthesis, SAR and metabolism of the following category of drugs, which are official in Indian Pharmacopoeia and British
Pharmacopoeia. Introduction to drug design. Stereochemistry of drug molecules. Hypnotics and Sedatives, Analgesics, NSAIDS, Neuroleptics, Antidepressants, Anxiolytics, Anticonvulsants,
Antihistaminics, Local Anaesthetics, Cardio Vascular drugs – Antianginal agents, Vasodilators, Adrenergic & Cholinergic drugs, Cardiotonic agents, Diuretics, Antihypertensive drugs, Hypoglycemic
agents, Antilipidemic agents, Coagulants, Anticoagulants, Antiplatelet agents. Chemotherapeutic agents – Antibiotics, Antibacterials, Sulpha drugs. Antiprotozoal drugs, Antiviral,
Antitubercular, Antimalarial, Anticancer, Antiamoebic drugs. Diagnostic agents. Preparation, storage and uses of official Radiopharmaceuticals, Vitamins and Hormones. Eicosanoids and their application.
Pharmaceutics: Development, manufacturing standards, Q.C. limits, labeling, as per the pharmacopoeial requirements. Storage of different dosage forms and new drug delivery systems.
Biopharmaceutics and Pharmacokinetics and their importance in formulation. Formulation and preparation of cosmetics – lipstick, shampoo, creams, nail preparations and dentifrices.
Pharmaceutical calculations.
Pharmaceutical Jurisprudence: Drugs and cosmetics Act and rules with respect to manufacture, sales and storage. Pharmacy Act. Pharmaceutical ethics.
Pharmaceutical Analysis: Principles, instrumentation and applications of the following: Absorption spectroscopy (UV, visible & IR). Fluorimetry, Flame photometry, Potentiometry. Conductometry
and Polarography. Pharmacopoeial assays. Principles of NMR, ESR, Mass spectroscopy. X-ray diffraction analysis and different chromatographic methods.
Biochemistry: Biochemical role of hormones, Vitamins, Enzymes, Nucleic acids, Bioenergetics. General principles of immunology; immunological tests. Metabolism of carbohydrates, lipids and proteins.
Methods to determine kidney & liver function. Lipid profiles.
Microbiology: Principles and methods of microbiological assays of the Pharmacopoeia. Methods of preparation of official sera and vaccines. Serological and diagnostic tests. Applications of
microorganisms in Bio Conversions and in Pharmaceutical industry.
Clinical Pharmacy: Therapeutic Drug Monitoring; Dosage regimen in Pregnancy and Lactation, Pediatrics and Geriatrics. Renal and hepatic impairment. Drug – Drug interactions and Drug – food
interactions, Adverse Drug reactions. Medication History, interview and Patient counseling.
TF - TEXTILE ENGINEERING AND FIBRE SCIENCE
Linear Algebra: Matrices and Determinants, Systems of linear equations, eigenvalues and eigenvectors.
Calculus: Limit, continuity and differentiability; Partial Derivatives; Maxima and minima; Sequences and series; Test for convergence; Fourier series.
Vector Calculus: Gradient; Divergence and Curl; Line, surface and volume integrals; Stokes, Gauss and Green’s theorems.
Differential Equations: Linear and non-linear first order ODEs; Higher order linear ODEs with constant coefficients; Cauchy’s and Euler’s equations; Laplace transforms; PDEs – Laplace, heat and
wave equations.
Probability and Statistics: Mean, median, mode and standard deviation; Random variables; Poisson, normal and binomial distributions; Correlation and regression analysis.
Numerical Methods: Solutions of linear and non-linear algebraic equations; integration by trapezoidal and Simpson’s rule; single and multi-step methods for differential equations.
Textile Fibres: Classification of textile fibres; Essential requirements of fibre forming polymers; Gross and fine structure of natural fibres like cotton, wool and silk. Introduction to
important bast fibres; properties and uses of natural and man-made fibres; physical and chemical methods of fibre and blend identification and blend analysis.
Molecular architecture, amorphous and crystalline phases, glass transition, plasticization, crystallization, melting, factors affecting Tg and Tm; Process of viscose and acetate preparation.
Polymerization of nylon-6, nylon-66, poly (ethylene terephthalate), polyacrylonitrile and polypropylene; Melt Spinning processes, characteristic features of PET, polyamide and polypropylene
spinning; wet and dry spinning of viscose and acrylic fibres; post spinning operations such as drawing, heat setting, tow-to-top conversion and different texturing methods.
Methods of investigating fibre structure e.g., Density, X-ray diffraction, birefringence, optical and electron microscopy, I.R. absorption, thermal methods (DSC, DMA/TMA, TGA); structure and
morphology of man-made fibres, mechanical properties of fibres, moisture sorption in fibres; fibre structure and property correlation.
Yarn manufacture and yarn structure & properties: Principles of opening, cleaning and mixing/blending of fibrous materials, working principle of modern opening and cleaning equipment; the
technology of carding, carding of cotton and synthetic fibres; Drafting operation, roller and apron drafting principle, causes of mass irregularity introduced by drafting; roller arrangements
in drafting systems; principles of cotton combing, combing cycle, mechanism and function, combing efficiency, lap preparation; recent developments in comber; Roving production, mechanism of
bobbin building, roving twist; Principle of ring spinning, forces acting on yarn and traveler; ring & traveler designs; mechanism of cop formation, causes of end breakages; working principle of
ring doubler and two-for-one twister, single and folded yarn twist, properties of double yarns, production of core spun yarn, compact spinning, principle of non-conventional methods of yarn
production such as rotor spinning, air jet spinning, wrap spinning, twist less spinning and friction spinning.
Yarn contraction, yarn diameter, specific volume & packing coefficient; twist strength relationship in spun yarns; fibre configuration and orientation in yarn; cause of fibre migration and its
estimation, irregularity index, properties of ring, rotor and air-jet yarns.
Fabric manufacture and Fabric Structure: Principles of cheese and cone winding processes and machines; random and precision winding; package faults and their remedies; yarn clearers and
tensioners; different systems of yarn splicing; features of modern cone winding machines; different types of warping creels; features of modern beam and sectional warping machines; different
sizing systems, sizing of spun and filament yarns, modern sizing machines; principles of pirn winding processes and machines; primary and secondary motions of loom, effect of their settings and
timings on fabric formation, fabric appearance and weaving performance; dobby and jacquard shedding; mechanics of weft insertion with shuttle; warp and weft stop motions, warp protection, weft
replenishment; functional principles of weft insertion systems of shuttle-less weaving machines, principles of multiphase and circular looms.
Principles of weft and warp knitting; basic weft and warp knitted structures. Classification, production and areas of application of nonwoven fabrics. Basic woven fabric constructions and their
derivatives; crepe, cord, terry, gauze, leno and double cloth constructions. Peirce’s equations for fabric geometry; elastica model of plain woven fabrics; thickness, cover and maximum sett of
woven fabrics.
Textile Testing: Sampling techniques, sample size and sampling errors. Measurement of fibre length, fineness, crimp, strength and reflectance; measurement of cotton fibre maturity and trash
content; HVI and AFIS for fibre testing. Measurement of yarn count, twist and hairiness; tensile testing of fibres, yarns and fabrics; evenness testing of slivers, rovings and yarns; testing
equipment and test methods for measurement of fabric properties like thickness, compressibility, air permeability, drape, crease recovery, tear strength, bursting strength and abrasion resistance.
FAST and Kawabata instruments and systems for objective fabric evaluation. Statistical data analysis of experimental results. Correlation analysis, significance tests and analysis of variance;
frequency distributions and control charts.
Preparatory Processes: Chemistry and practice of preparatory processes for cotton, wool and silk. Mercerization of cotton. Preparatory processes for nylon, polyester and acrylic and polyester/
cotton blends.
Dyeing: Classification of dyes. Dyeing of cotton, wool, silk, polyester, nylon and acrylic with appropriate dye classes. Dyeing polyester/cotton and polyester/wool blends. Batchwise and
continuous dyeing machines. Dyeing of cotton knitted fabrics and machines used. Dye fibre interaction. Introduction to thermodynamics and kinetics of dyeing. Methods for determination of wash,
light and rubbing fastness. Evaluation of fastness properties with the help of grey scale.
Printing: Styles of printing. Printing thickeners including synthetic thickeners. Printing auxiliaries. Printing of cotton with reactive dyes. Printing of wool, silk, nylon with acid and metal
complex dyes. Printing of polyester with disperse dyes. Methods of dye fixation after printing. Resist and discharge printing of cotton, silk and polyester. Printing of polyester/cotton blends
with disperse/reactive combination. Transfer printing of polyester. Developments in inkjet printing.
Finishing: Mechanical finishing of cotton. Stiff, soft, wrinkle-resistant, water-repellent, flame-retardant and enzyme (bio-polishing) finishing of cotton. Milling, decatizing and shrink
resistant finishing of wool. Antistat finishing of synthetic fibre fabrics. Heat setting of polyester.
Energy Conservation: Minimum application techniques.
Pollution: Environment pollution during chemical processing of textiles. Treatment of textile effluents.
XE - ENGINEERING SCIENCES
SECTION A. ENGINEERING MATHEMATICS (Compulsory)
Linear Algebra: Algebra of matrices, inverse, rank, system of linear equations, symmetric, skew-symmetric and orthogonal matrices. Hermitian, skew-Hermitian and unitary matrices. Eigenvalues
and eigenvectors, diagonalisation of matrices, Cayley-Hamilton Theorem.
Calculus: Functions of single variable, limit, continuity and differentiability, Mean value theorems, Indeterminate forms and L'Hospital rule, Maxima and minima, Taylor's series, Fundamental
and mean value-theorems of integral calculus. Evaluation of definite and improper integrals, Beta and Gamma functions, Functions of two variables, limit, continuity, partial derivatives,
Euler's theorem for homogeneous functions, total derivatives, maxima and minima, Lagrange method of multipliers, double and triple integrals and their applications, sequence and series, tests
for convergence, power series, Fourier Series, Half range sine and cosine series.
Complex variable: Analytic functions, Cauchy-Riemann equations, Application in solving potential problems, Line integral, Cauchy's integral theorem and integral formula (without proof),
Taylor's and Laurent's series, Residue theorem (without proof) and its applications.
Vector Calculus: Gradient, divergence and curl, vector identities, directional derivatives, line, surface and volume integrals, Stokes', Gauss' and Green's theorems (without proofs) and their applications.
Ordinary Differential Equations: First order equation (linear and nonlinear), Second order linear differential equations with variable coefficients, Variation of parameters method, higher order
linear differential equations with constant coefficients, Cauchy- Euler's equations, power series solutions, Legendre polynomials and Bessel's functions of the first kind and their properties.
Partial Differential Equations: Separation of variables method, Laplace equation, solutions of one dimensional heat and wave equations.
Probability and Statistics: Definitions of probability and simple theorems, conditional probability, Bayes Theorem, random variables, discrete and continuous distributions, Binomial, Poisson,
and normal distributions, correlation and linear regression.
Numerical Methods: Solution of a system of linear equations by L-U decomposition, Gauss-Jordan and Gauss-Seidel Methods, Newton’s interpolation formulae, Solution of a polynomial and a
transcendental equation by Newton-Raphson method, numerical integration by trapezoidal rule, Simpson’s rule and Gaussian quadrature, numerical solutions of first order differential equation by
Euler’s method and 4th order Runge-Kutta method.
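One topic listed above, Euler's method for a first-order initial value problem, can be sketched in a few lines. The Python snippet below is illustrative only; the test equation y' = y and the step size are arbitrary choices, not part of the syllabus:

```python
def euler(f, x0, y0, h, n):
    """Advance y' = f(x, y) from (x0, y0) through n steps of size h."""
    x, y = x0, y0
    for _ in range(n):
        y = y + h * f(x, y)  # forward Euler update: y_{k+1} = y_k + h f(x_k, y_k)
        x = x + h
    return y

# y' = y with y(0) = 1 has exact solution e^x, so the true value at x = 1 is e.
approx = euler(lambda x, y: y, 0.0, 1.0, 0.001, 1000)
```

With 1000 steps the result is close to e ≈ 2.71828; halving h roughly halves the error, reflecting the method's first-order accuracy.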
Numerical Methods: Truncation errors, round-off errors and their propagation; Interpolation: Lagrange, Newton's forward, backward and divided difference formulas, least-squares curve fitting; Solutions of non-linear equations of one variable using bisection, false position, secant and Newton-Raphson methods, rate of convergence of these methods, general iterative methods, simple and multiple roots of polynomials; Solutions of systems of linear algebraic equations using Gauss elimination methods, Jacobi and Gauss-Seidel iterative methods and their rate of convergence; Ill-conditioned and well-conditioned systems, eigenvalues and eigenvectors using power methods; Numerical integration using trapezoidal, Simpson's rule and other quadrature formulas; Numerical differentiation; Solution of boundary value problems; Solution of initial value problems of ordinary differential equations using Euler's method, predictor-corrector and Runge-Kutta methods.
Computer System Concepts: Representation of fixed- and floating-point numbers; Elementary concepts and terminology of basic building blocks of a computer system and system software.
Fortran: Fortran-90 for Numerical Computation: Basic data types including complex numbers; Arrays; Assignment statements; Structured Programming Constructs: Loops, Conditional execution,
iteration and recursion; Functions and subroutines; Structured programming practices.
C language: Basic data types including pointers; Assignment statements; Control statements; Dynamic memory allocation; Functions and procedures; Parameter passing mechanisms; Structured programming practices.
Electric Circuits: Ideal voltage and current sources; RLC circuits, steady state and transient analysis of DC circuits, network theorems; single phase AC circuits, resonance and three phase circuits.
Magnetic Circuits: MMF and flux, and their relationship with voltage and current; principle of operation of transformer, equivalent circuit of a practical transformer, efficiency and regulation of transformer.
Electric Machines: Principle of operation, characteristics and performance equations of DC machines; principle of operation, equivalent circuit of three-phase induction machine.
Electronic Circuits: Characteristics of p-n junction diode, Zener diode, bi-polar junction transistor (BJT) and junction field effect transistor (JFET); structure of MOSFET, its characteristics and operation; rectifiers, filters, and regulated power supply, transistor biasing circuits, operational amplifiers, linear applications of operational amplifier, oscillators (tuned and phase shift type).
Digital circuits: Number systems, Boolean algebra, logic gates, combinational and sequential circuits, Flip-Flops (RS, JK, D and T), Counters.
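As an illustration of the sequential-circuit topics above, the next-state behaviour of a T flip-flop reduces to a single Boolean operation. This Python model is a simplified behavioural sketch (not a hardware description); the clocking loop is a hypothetical example:

```python
def t_flipflop(q, t):
    """Next state of a T flip-flop: toggle the output when T = 1, hold when T = 0."""
    return q ^ t  # XOR of present state with the T input

# Holding T at 1 makes the output toggle on every clock,
# which divides the clock frequency by two.
q, trace = 0, []
for _ in range(4):
    q = t_flipflop(q, 1)
    trace.append(q)
```

Chaining such divide-by-two stages is the classic construction of a ripple counter, another topic in the list above.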
Measuring Instruments: Cathode Ray oscilloscope, D/A and A/D converters.
SECTION D. FLUID MECHANICS
Fluid Properties: relation between stress and strain rate for Newtonian fluids.
Hydrostatics: buoyancy, manometry, forces on submerged bodies.
Eulerian and Lagrangian description of fluid motion, concept of local and convective accelerations, steady and unsteady flows, control volume analysis for mass, momentum and energy.
Differential equations of mass and momentum (Euler equation), Bernoulli’s equation and its applications.
Concept of fluid rotation, vorticity, stream function and potential function. Potential flow: elementary flow fields and principle of superposition, potential flow past a circular cylinder.
Dimensional analysis: concept of geometric, kinematic and dynamic similarity, importance of non-dimensional numbers.
Fully-developed pipe flow, laminar and turbulent flows, friction factor, Darcy-Weisbach relation.
Qualitative ideas of boundary layer and separation, streamlined and bluff bodies, drag and lift forces.
Basic ideas of flow measurement using venturimeter, pitot-static tube and orifice plate.
Structure: Atomic structure and bonding in materials. Crystal structure of materials, crystal systems, unit cells and space lattices, determination of structures of simple crystals by X-ray diffraction, Miller indices of planes and directions, packing geometry in metallic, ionic and covalent solids. Concept of amorphous, single and polycrystalline structures and their effect on properties of materials. Crystal growth techniques. Imperfections in crystalline solids and their role in influencing various properties.
Diffusion: Fick’s laws and application of diffusion in sintering, doping of semiconductors and surface hardening of metals.
Metals and Alloys: Solid solutions, solubility limit, phase rule, binary phase diagrams, intermediate phases, intermetallic compounds, iron-iron carbide phase diagram, heat treatment of steels,
cold, hot working of metals, recovery, recrystallization and grain growth. Microstructure, properties and applications of ferrous and non-ferrous alloys.
Ceramics: Structure, properties, processing and applications of traditional and advanced ceramics.
Polymers: Classification, polymerization, structure and properties, additives for polymer products, processing and applications.
Composites: Properties and applications of various composites.
Advanced Materials and Tools: Smart materials exhibiting ferroelectric, piezoelectric, optoelectric and semiconducting behavior, lasers and optical fibers, photoconductivity and superconductivity, nanomaterials – synthesis, properties and applications, biomaterials, superalloys, shape memory alloys. Materials characterization techniques such as scanning electron microscopy, transmission electron microscopy, atomic force microscopy, scanning tunneling microscopy, atomic absorption spectroscopy, differential scanning calorimetry.
Mechanical Properties: stress-strain diagrams of metallic, ceramic and polymeric materials, modulus of elasticity, yield strength, tensile strength, toughness, elongation, plastic deformation,
viscoelasticity, hardness, impact strength, creep, fatigue, ductile and brittle fracture.
Thermal Properties: Heat capacity, thermal conductivity, thermal expansion of materials.
Electronic Properties: Concept of energy band diagram for materials – conductors, semiconductors and insulators, electrical conductivity – effect of temperature on conductivity, intrinsic and extrinsic semiconductors, dielectric properties.
Optical Properties: Reflection, refraction, absorption and transmission of electromagnetic radiation in solids.
Magnetic Properties: Origin of magnetism in metallic and ceramic materials, paramagnetism, diamagnetism, antiferromagnetism, ferromagnetism, ferrimagnetism, magnetic hysteresis.
Environmental Degradation: Corrosion and oxidation of materials, prevention.
SECTION F. SOLID MECHANICS
Equivalent force systems; free-body diagrams; equilibrium equations; analysis of determinate trusses and frames; friction; simple relative motion of particles; force as function of position,
time and speed; force acting on a body in motion; laws of motion; law of conservation of energy; law of conservation of momentum.
Stresses and strains; principal stresses and strains; Mohr's circle; generalized Hooke's Law; thermal strain; theories of failure.
Axial, shear and bending moment diagrams; axial, shear and bending stresses; deflection (for symmetric bending); torsion in circular shafts; thin cylinders; energy methods (Castigliano's
Theorems); Euler buckling.
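One item in the list above, principal stresses via Mohr's circle, reduces to a short formula: the circle's center is (σx + σy)/2 and its radius is √(((σx − σy)/2)² + τxy²). The sketch below uses hypothetical stress values purely for illustration:

```python
import math

def principal_stresses(sx, sy, txy):
    """In-plane principal stresses of a 2-D stress state via Mohr's circle."""
    center = (sx + sy) / 2.0
    radius = math.hypot((sx - sy) / 2.0, txy)  # circle radius = max in-plane shear
    return center + radius, center - radius

# Hypothetical state (MPa): sigma_x = 80, sigma_y = 20, tau_xy = 40.
s1, s2 = principal_stresses(80.0, 20.0, 40.0)
```

The returned pair (σ1, σ2) feeds directly into the failure theories also listed above, e.g. maximum normal stress or maximum shear stress criteria.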
Free vibration of single degree of freedom systems.
Basic Concepts: Continuum, macroscopic approach, thermodynamic system (closed and open or control volume); thermodynamic properties and equilibrium; state of a system, state diagram, path and
process; different modes of work; Zeroth law of thermodynamics; concept of temperature; heat.
First Law of Thermodynamics: Energy, enthalpy, specific heats, first law applied to systems and control volumes, steady and unsteady flow analysis.
Second Law of Thermodynamics: Kelvin-Planck and Clausius statements, reversible and irreversible processes, Carnot theorems, thermodynamic temperature scale, Clausius inequality and concept of
entropy, principle of increase of entropy; availability and irreversibility.
Properties of Pure Substances: Thermodynamic properties of pure substances in solid, liquid and vapor phases, P-V-T behaviour of simple compressible substances, phase rule, thermodynamic
property tables and charts, ideal and real gases, equations of state, compressibility chart.
Thermodynamic Relations: T-ds relations, Maxwell equations, Joule-Thomson coefficient, coefficient of volume expansion, adiabatic and isothermal compressibilities, Clapeyron equation.
Thermodynamic cycles: Carnot vapor power cycle, Ideal Rankine cycle, Rankine Reheat cycle, Air standard Otto cycle, Air standard Diesel cycle, Air-standard Brayton cycle, Vapor-compression
refrigeration cycle.
Ideal Gas Mixtures: Dalton’s and Amagat’s laws, calculations of properties, air-water vapor mixtures and simple thermodynamic processes involving them.
XL - LIFE SCIENCES
SECTION H. CHEMISTRY (Compulsory)
Atomic structure and periodicity: Planck’s quantum theory, wave-particle duality, uncertainty principle, quantum mechanical model of hydrogen atom; electronic configuration of atoms; periodic table and periodic properties; ionization energy, electron affinity, electronegativity, atomic size.
Structure and bonding: Ionic and covalent bonding, M.O. and V.B. approaches for diatomic molecules, VSEPR theory and shape of molecules, hybridisation, resonance, dipole moment, structure
parameters such as bond length, bond angle and bond energy, hydrogen bonding, van der Waals interactions. Ionic solids, ionic radii, lattice energy (Born-Haber Cycle).
s, p and d Block Elements: Oxides, halides and hydrides of alkali and alkaline earth metals, B, Al, Si, N, P, and S, general characteristics of 3d elements, coordination complexes: valence bond
and crystal field theory, color, geometry and magnetic properties.
Chemical Equilibria: Colligative properties of solutions, ionic equilibria in solution, solubility product, common ion effect, hydrolysis of salts, pH, buffer and their applications in chemical analysis, equilibrium constants (Kc, Kp and Kx) for homogeneous reactions.
Electrochemistry: Conductance, Kohlrausch law, Half Cell potentials, emf, Nernst equation, galvanic cells, thermodynamic aspects and their applications.
Reaction Kinetics: Rate constant, order of reaction, molecularity, activation energy, zero, first and second order kinetics, catalysis and elementary enzyme reactions.
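The first-order kinetics listed above follow [A] = [A]₀ e^(−kt), with a half-life t½ = ln 2 / k that is independent of concentration. A minimal Python sketch; the rate constant k = 0.1 s⁻¹ is a hypothetical example value:

```python
import math

def concentration(a0, k, t):
    """[A] remaining after time t for a first-order reaction A -> products."""
    return a0 * math.exp(-k * t)

def half_life(k):
    """First-order half-life t_1/2 = ln 2 / k (concentration-independent)."""
    return math.log(2.0) / k

# With k = 0.1 s^-1, half the reactant remains after about 6.93 s.
t_half = half_life(0.1)
remaining = concentration(1.0, 0.1, t_half)
```

The concentration-independent half-life is the usual diagnostic separating first-order kinetics from zero- and second-order behaviour, also listed above.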
Thermodynamics: First law, reversible and irreversible processes, internal energy, enthalpy, Kirchhoff’s equation, heat of reaction, Hess's law, heat of formation, Second law, entropy, free energy, and work function. Gibbs-Helmholtz equation, Clausius-Clapeyron equation, free energy change and equilibrium constant, Trouton's rule, Third law of thermodynamics.
Basis of Organic Reaction Mechanisms: Elementary treatment of SN1, SN2, E1 and E2 reactions, Hofmann and Saytzeff rules, addition reactions, Markovnikov rule and Kharasch effect, Diels-Alder reaction, aromatic electrophilic substitution, orientation effect as exemplified by various functional groups. Identification of functional groups by chemical tests.
Structure-Reactivity Correlations: Acids and bases, electronic and steric effects, optical and geometrical isomerism, tautomerism, conformers, concept of aromaticity.
Organization of life. Importance of water. Cell structure and organelles. Structure and function of biomolecules: Amino acids, Carbohydrates, Lipids, Proteins and Nucleic acids. Biochemical
separation techniques and characterization: ion exchange, size exclusion and affinity chromatography, electrophoresis, UV-visible, fluorescence and Mass spectrometry. Protein structure, folding
and function: Myoglobin, Hemoglobin, Lysozyme, Ribonuclease A, Carboxypeptidase and Chymotrypsin. Enzyme kinetics including its regulation and inhibition, Vitamins and Coenzymes.
Metabolism and bioenergetics. Generation and utilization of ATP. Metabolic pathways and their regulation: glycolysis, TCA cycle, pentose phosphate pathway, oxidative phosphorylation,
gluconeogenesis, glycogen and fatty acid metabolism. Metabolism of Nitrogen containing compounds: nitrogen fixation, amino acids and nucleotides. Photosynthesis: the Calvin cycle.
Biological membranes. Transport across membranes. Signal transduction; hormones and neurotransmitters.
DNA replication, transcription and translation. Biochemical regulation of gene expression. Recombinant DNA technology and applications: PCR, site directed mutagenesis and DNA-microarray.
Immune system. Active and passive immunity. Complement system. Antibody structure, function and diversity. Cells of the immune system: T, B and macrophages. T and B cell activation. Major histocompatibility complex. T cell receptor. Immunological techniques: Immunodiffusion, immunoelectrophoresis, RIA and ELISA.
Advanced techniques in gene expression and analysis: PCR and RT-PCR, microarray technology, DNA fingerprinting and recombinant DNA technology; prokaryotic and eukaryotic expression systems;
Vectors: plasmids, phages, cosmids and BAC.
Architecture of plant genome; plant tissue culture techniques; methods of gene transfer into plant cells and development of transgenic plants; manipulation of phenotypic traits in plants; plant
cell fermentations and production of secondary metabolites using suspension/immobilized cell culture; expression of animal protein in plants; genetically modified crops.
Animal cell metabolism and regulation; cell cycle; primary cell culture; nutritional requirements for animal cell culture; techniques for mass culture of animal cell lines; application of
animal cell culture for production of vaccines, growth hormones; interferons, cytokines and therapeutic proteins; hybridoma technology and gene knockout; stem cells and its application in organ
synthesis; gene therapy; transgenic animals and molecular pharming.
Industrial bioprocesses: microbial production of organic acids, amino acids, proteins, polysaccharides, lipids, polyhydroxyalkanoates, antibiotics and pharmaceuticals; methods and applications
of immobilization of cells and enzymes; kinetics of soluble and immobilized enzymes; biosensors; biofuels; biopesticides; environmental bioremediation.
Microbial growth kinetics; batch, fed-batch and continuous culture of microbial cells; media for industrial fermentations; sterilization of air and media, design and operation of stirred tank,
airlift, plug flow, packed bed, fluidized bed, membrane and hollow fibre reactors; aeration and agitation in aerobic fermentations; bioprocess calculations based on material and energy balance;
Downstream processing in industrial biotechnology: filtration, precipitation, centrifugation, cell disintegration, solvent extraction, and chromatographic separations, membrane filtration, aqueous two-phase separation.
Bioinformatics; genomics; proteomics and computational biology.
SECTION K. BOTANY
Plant Systematics: Systems of classification (non-phylogenetic vs. phylogenetic - outline), plant groups, molecular systematics.
Plant Anatomy: Plant cell structure, organization, organelles, cytoskeleton, cell wall and membranes; anatomy of root, stem and leaves, meristems, vascular system, their ontogeny, structure and
functions, secondary growth in plants and stellar organization.
Morphogenesis & Development: Cell cycle, cell division, life cycle of an angiosperm, pollination, fertilization, embryogenesis, seed formation, seed storage proteins, seed dormancy and germination.
Concept of cellular totipotency, clonal propagation; organogenesis and somatic embryogenesis, artificial seed, somaclonal variation, secondary metabolism in plant cell culture, embryo culture, in vitro fertilization.
Physiology and Biochemistry: Plant water relations, transport of minerals and solutes, stress physiology, stomatal physiology, signal transduction, N2 metabolism, photosynthesis,
photorespiration; respiration; Flowering: photoperiodism and vernalization, biochemical mechanisms involved in flowering; molecular mechanism of senescence and aging, biosynthesis, mechanism
of action and physiological effects of plant growth regulators, structure and function of biomolecules, (proteins, carbohydrates, lipids, nucleic acid), enzyme kinetics.
Genetics: Principles of Mendelian inheritance, linkage, recombination, genetic mapping; extrachromosomal inheritance; prokaryotic and eukaryotic genome organization, regulation of gene
expression, gene mutation and repair, chromosomal aberrations (numerical and structural), transposons.
Plant Breeding and Genetic Modification: Principles, methods – selection, hybridization, heterosis; male sterility, genetic maps and molecular markers, sporophytic and gametophytic self-incompatibility, haploidy, triploidy, somatic cell hybridization, marker-assisted selection, gene transfer methods viz. direct and vector-mediated, plastid transformation, transgenic plants and their application in agriculture, molecular pharming, plantibodies.
Economic Botany: A general account of economically and medicinally important plants- cereals, pulses, plants yielding fibers, timber, sugar, beverages, oils, rubber, pigments, dyes, gums, drugs
and narcotics. Economic importance of algae, fungi, lichen and bacteria.
Plant Pathology: Nature and classification of plant diseases, diseases of important crops caused by fungi, bacteria and viruses, and their control measures, mechanism(s) of pathogenesis and
resistance, molecular detection of pathogens; plant-microbe beneficial interactions.
Ecology and Environment: Ecosystems – types, dynamics, degradation, ecological succession; food chains and energy flow; vegetation types of the world, pollution and global warming, speciation
and extinction, conservation strategies, cryopreservation, phytoremediation.
Historical Perspective: Discovery of microbial world; Landmark discoveries relevant to the field of microbiology; Controversy over spontaneous generation; Role of microorganisms in
transformation of organic matter and in the causation of diseases.
Methods in Microbiology: Pure culture techniques; Theory and practice of sterilization; Principles of microbial nutrition; Enrichment culture techniques for isolation of microorganisms; Light-,
phase contrast- and electron-microscopy.
Microbial Taxonomy and Diversity: Bacteria, Archaea and their broad classification; Eukaryotic microbes: Yeasts, molds and protozoa; Viruses and their classification; Molecular approaches to microbial taxonomy.
Prokaryotic and Eukaryotic Cells: Structure and Function: Prokaryotic Cells: cell walls, cell membranes, mechanisms of solute transport across membranes, Flagella and Pili, Capsules, Cell
inclusions like endospores and gas vesicles; Eukaryotic cell organelles: Endoplasmic reticulum, Golgi apparatus, mitochondria and chloroplasts.
Microbial Growth: Definition of growth; Growth curve; Mathematical expression of exponential growth phase; Measurement of growth and growth yields; Synchronous growth; Continuous culture;
Effect of environmental factors on growth.
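The "mathematical expression of exponential growth phase" mentioned above is N(t) = N₀ · 2^(t/td), where td is the doubling (generation) time. A minimal Python sketch; the culture values are hypothetical:

```python
def cells(n0, doubling_time, t):
    """Cell count during the exponential growth phase: N(t) = N0 * 2^(t / td)."""
    return n0 * 2.0 ** (t / doubling_time)

# Hypothetical culture: 1000 cells with a 20-minute doubling time, grown 60 minutes,
# i.e. exactly three doublings.
n = cells(1000.0, 20.0, 60.0)
```

Equivalently, growth can be written as N(t) = N₀ e^(μt) with specific growth rate μ = ln 2 / td, the form usually used for the continuous-culture material listed above.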
Control of Micro-organisms: Effect of physical and chemical agents; Evaluation of effectiveness of antimicrobial agents.
Microbial Metabolism: Energetics: redox reactions and electron carriers; An overview of metabolism; Glycolysis; Pentose-phosphate pathway; Entner-Doudoroff pathway; Glyoxylate pathway; The citric acid cycle; Fermentation; Aerobic and anaerobic respiration; Chemolithotrophy; Photosynthesis; Calvin cycle; Biosynthetic pathway for fatty acid synthesis; Common regulatory mechanisms in synthesis of amino acids; Regulation of major metabolic pathways.
Microbial Diseases and Host Pathogen Interaction: Normal microbiota; Classification of infectious diseases; Reservoirs of infection; Nosocomial infection; Emerging infectious diseases;
Mechanism of microbial pathogenicity; Nonspecific defense of host; Antigens and antibodies; Humoral and cell mediated immunity; Vaccines; Immune deficiency; Human diseases caused by viruses,
bacteria, and pathogenic fungi.
Chemotherapy/Antibiotics: General characteristics of antimicrobial drugs; Antibiotics: Classification, mode of action and resistance; Antifungal and antiviral drugs.
Microbial Genetics: Types of mutation; UV and chemical mutagens; Selection of mutants; Ames test for mutagenesis; Bacterial genetic system: transformation, conjugation, transduction,
recombination, plasmids, transposons; DNA repair; Regulation of gene expression: repression and induction; Operon model; Bacterial genome with special reference to E. coli; Phage λ and its life cycle; RNA phages; RNA viruses; Retroviruses; Basic concept of microbial genomics.
Microbial Ecology: Microbial interactions; Carbon, sulphur and nitrogen cycles; Soil microorganisms associated with vascular plants.
SECTION M. ZOOLOGY
Animal world: Animal diversity, distribution, systematics and classification of animals, phylogenetic relationships.
Evolution: Origin and history of life on earth, theories of evolution, natural selection, adaptation, speciation.
Genetics: Principles of inheritance, molecular basis of heredity, mutations, cytoplasmic inheritance, linkage and mapping of genes.
Biochemistry and Molecular Biology: Nucleic acids, proteins, lipids and carbohydrates; replication, transcription and translation; regulation of gene expression, organization of genome, Krebs cycle, glycolysis, enzyme catalysis, hormones and their actions, vitamins.
Cell Biology: Structure of cell, cellular organelles and their structure and function, cell cycle, cell division, chromosomes and chromatin structure. Eukaryotic gene organization and
expression (Basic principles of signal transduction).
Animal Anatomy and Physiology: Comparative physiology, the respiratory system, circulatory system, digestive system, the nervous system, the excretory system, the endocrine system, the
reproductive system, the skeletal system, osmoregulation.
Parasitology and Immunology: Nature of parasite, host-parasite relation, protozoan and helminthic parasites, the immune response, cellular and humoral immune response, evolution of the immune system.
Developmental Biology: Embryonic development, cellular differentiation, organogenesis, metamorphosis, genetic basis of development, stem cells.
Ecology: The ecosystem, habitats, the food chain, population dynamics, species diversity, zoogeography, biogeochemical cycles, conservation biology.
Animal Behaviour: Types of behaviours, courtship, mating and territoriality, instinct, learning and memory, social behaviour across the animal taxa, communication, pheromones, evolution of
animal behaviour.
IT - INFORMATION TECHNOLOGY
Mathematical Logic: Propositional Logic; First Order Logic.
Probability: Conditional Probability; Mean, Median, Mode and Standard Deviation; Random Variables; Distributions; uniform, normal, exponential, Poisson, Binomial.
Set Theory & Algebra: Sets; Relations; Functions; Groups; Partial Orders; Lattice; Boolean Algebra.
Combinatorics: Permutations; Combinations; Counting; Summation; generating functions; recurrence relations; asymptotics.
Graph Theory: Connectivity; spanning trees; Cut vertices & edges; covering; matching; independent sets; Colouring; Planarity; Isomorphism.
Linear Algebra: Algebra of matrices, determinants, systems of linear equations, eigenvalues and eigenvectors.
Numerical Methods: LU decomposition for systems of linear equations; numerical solutions of non-linear algebraic equations by Secant, Bisection and Newton-Raphson Methods; Numerical integration
by trapezoidal and Simpson’s rules.
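The composite trapezoidal rule listed above can be sketched in a few lines. The integrand x² below is an arbitrary test case chosen because its exact integral over [0, 1] is 1/3:

```python
def trapezoid(f, a, b, n):
    """Composite trapezoidal rule on [a, b] with n equal subintervals."""
    h = (b - a) / n
    total = 0.5 * (f(a) + f(b))   # endpoints carry weight 1/2
    for i in range(1, n):
        total += f(a + i * h)     # interior points carry weight 1
    return total * h

# Integral of x^2 over [0, 1] is exactly 1/3; the error shrinks as O(h^2).
approx = trapezoid(lambda x: x * x, 0.0, 1.0, 1000)
```

Simpson's rule replaces the straight-line panels with parabolic ones and improves the error to O(h⁴) for smooth integrands.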
Calculus: Limit, Continuity & differentiability, Mean value Theorems, Theorems of integral calculus, evaluation of definite & improper integrals, Partial derivatives, Total derivatives, maxima
& minima.
Regular Languages: finite automata, regular expressions, regular grammar.
Context-free languages: pushdown automata, context-free grammars.
Digital Logic: Logic functions, minimization, design and synthesis of combinatorial and sequential circuits, number representation and computer arithmetic (fixed and floating point).
Computer organization: Machine instructions and addressing modes, ALU and data path, hardwired and microprogrammed control, memory interface, I/O interface (interrupt and DMA mode), serial communication interface, instruction pipelining, cache, main and secondary storage.
Data structures and Algorithms: the notion of abstract data types, stack, queue, list, set, string, tree, binary search tree, heap, graph, tree and graph traversals, connected components,
spanning trees, shortest paths, hashing, sorting, searching, design techniques (greedy, dynamic, divide and conquer, Algorithm design by induction), asymptotic analysis (best, worst, average
cases) of time and space, upper and lower bounds, Basic concepts of complexity classes – P, NP, NP-hard, NP-complete.
Programming Methodology: Scope, binding, parameter passing, recursion, C programming – data types and declarations, assignment and control flow statements, 1-d and 2-d arrays, functions,
pointers, concepts of object-oriented programming - classes, objects, inheritance, polymorphism, operator overloading.
Operating Systems (in the context of Unix): classical concepts (concurrency, synchronization, deadlock), processes, threads and interprocess communication, CPU scheduling, memory management,
file systems, I/O systems, protection and security, shell programming.
Information Systems and Software Engineering: information gathering, requirement and feasibility analysis, data flow diagrams, process specifications, input/output design, process life cycle,
planning and managing the project, design, coding, testing, implementation, maintenance.
Databases: E-R diagrams, relational model, database design, integrity constraints, normal forms, query languages (SQL), file structures (sequential, indexed), b-trees, transaction and
concurrency control.
Data Communication and Networks: ISO/OSI stack, transmission media, data encoding, multiplexing, flow and error control, LAN technologies (Ethernet, token ring), network devices – switches, gateways, routers, ICMP, application layer protocols – SMTP, POP3, HTTP, DNS, FTP, Telnet, network security – basic concepts of public key and private key cryptography, digital signature.
Web technologies: Proxy, HTML, XML, basic concepts of cgi-bin programming.
Existence of equilibrium with unbounded short sales: A new approach
Danilov, Vladimir and Koshovoy, Gleb and Page, Frank and Wooders, Myrna (2011): Existence of equilibrium with unbounded short sales: A new approach.
We introduce a new approach to showing existence of equilibrium in models of economies with unbounded short sales. Inspired by the pioneering works of Hart (1974) on asset market models, Grandmont (1977) on temporary economic equilibrium, and of Werner (1987) on general equilibrium exchange economies, all papers known to us stating conditions for existence of equilibrium with unbounded short sales place conditions on recession cones of agents' preferred sets or, more recently, require compactness of the utility possibilities set. In contrast, in this paper, we place conditions on the preferred sets themselves. Roughly, our condition is that the sum of the weakly preferred sets is a closed set. We demonstrate that our condition implies existence of equilibrium. In addition to our main theorem, we present two theorems showing cases to which our main theorem can be applied. We also relate our condition to the classic condition of Hart (1974).
Item Type: MPRA Paper
Original Title: Existence of equilibrium with unbounded short sales: A new approach
English Title: Existence of equilibrium with unbounded short sales: A new approach
Language: English
Keywords: arbitrage; unbounded short sales; asset market models; sum of weakly preferred sets; existence of equilibrium
Subjects: D - Microeconomics > D5 - General Equilibrium and Disequilibrium > D53 - Financial Markets
D - Microeconomics > D5 - General Equilibrium and Disequilibrium > D50 - General
D - Microeconomics > D4 - Market Structure and Pricing > D40 - General
Item ID: 37778
Depositing User: Myrna Wooders
Date Deposited: 31. Mar 2012 22:35
Last Modified: 18. Feb 2013 12:09
Allouch, N. (2002) An equilibrium existence result with short selling. Journal of Mathematical Economics 37, 81-94.
Allouch, N., C. Le Van, and F. Page (2002) The geometry of arbitrage and the existence of competitive equilibrium. Journal of Mathematical Economics 38, 373-391.
Dana, R.-A., C. Le Van, and F. Magnien (1997) General equilibrium in asset markets with or without short-selling. Journal of Mathematical Analysis and Applications 206, 567-588.
Dana, R.-A., C. Le Van, and F. Magnien (1999) On different notions of arbitrage and existence of equilibrium. Journal of Economic Theory 87, 169-193.
Danilov, V.I. and G.A. Koshevoy (1999) Separation of closed sets. Mimeo. In Russian.
Florenzano, M. and C. Le Van (2001) Finite Dimensional Convexity and Optimization. Springer-Verlag, Berlin, Heidelberg, New York.
Grandmont, J.M. (1982) Temporary general equilibrium theory. Handbook of Mathematical Economics, Volume II, North-Holland.
Grandmont, J.M. (1977) Temporary general equilibrium theory. Econometrica 45, 535-572.
Green, J.R. (1973) Temporary general equilibrium in a sequential trading model with spot and futures transactions. Econometrica 41, 1103-1123.
Hammond, P.J. (1983) Overlapping expectations and Hart's condition for equilibrium in a securities model. Journal of Economic Theory 31, 170-175.
Hart, O. (1974) On the existence of an equilibrium in a securities model. Journal of Economic Theory 9, 293-311.
Ha-Huy, T. (2011) Equilibre sur les marchés financiers - Croissance optimale et bien-être social. Ph.D. dissertation, under the supervision of C. Le Van, Paris 1.
Milne, F. (1981) Short-selling, default risk and the existence of equilibrium in a securities model. International Economic Review 21, 255-267.
Mirkil, H. (1957) New characterization of polyhedral cones. Canadian Journal of Mathematics 9, 1-4.
Nielsen, L.T. (1989) Asset market equilibrium with short-selling. Review of Economic Studies 56, 467-474.
Page, F.H. Jr. (1987) On equilibrium in Hart's securities exchange model. Journal of Economic Theory 41, 392-404.
Page, F.H. Jr. and M. Wooders (1996) A necessary and sufficient condition for compactness of individually rational and feasible outcomes and existence of an equilibrium. Economics Letters 52, 153-162.
Page, F.H. Jr., M. Wooders, and P.K. Monteiro (2000) Inconsequential arbitrage. Journal of Mathematical Economics 34, 439-469.
Rockafellar, R.T. (1970) Convex Analysis. Princeton University Press.
Werner, J. (1987) Arbitrage and the existence of competitive equilibrium. Econometrica 55, 1403-1418.
URI: http://mpra.ub.uni-muenchen.de/id/eprint/37778
Ray Tracing Scenery
CS11 Advanced C++ Lab 3
This lab continues the construction of our ray tracer.
The Ray Tracing Process
Ray tracing is conceptually very simple. Every object in a scene has a mathematical description of its surface. For example, a sphere with a center C and radius r includes all points X at a
distance of r from its center: |X - C| = r. (Or, you can square both sides, which makes little difference in the solution, but definitely speeds up the math.)
Rays have two components, an origin P and a direction D. The direction is usually a normalized vector. The ray itself is modeled as another equation: P + D * t. The variable t only ranges over
nonnegative values, since the ray starts from the origin point. When t = 0, the ray is at point P, and as t increases, the ray moves in the direction of D. (If D is normalized, notice that the ray
has traversed a distance t at time t, which is sometimes useful to know...)
For simple objects like spheres, we can just solve for the intersection points of the ray's equation and the sphere's equation. (Actually, we solve for the values of t when intersections occur, then
use these values to compute the intersection points.) In the case of a sphere, this may result in no intersections, one intersection (a grazing hit), or two intersections. Normally the closest
intersection is taken, although for implementing features like Constructive Solid Geometry, you may want all of the intersection points.
Since this is a programming course and not a math course, we will give you the math equations for finding intersections with various simple objects. Your main job will be to implement the necessary
classes in the ray tracer, and hopefully to implement the equations correctly! Each kind of scene object will provide its own intersection-test functionality, so that the ray tracer can render scenes
with a variety of object types.
Once an intersection is found with an object, the color at that location must be properly computed. This involves, among other things, the surface normal at the point of intersection. Thus, scene
objects must also provide a mechanism for determining the surface normal at a location.
If you want to learn much more about ray tracing, you can read the Wikipedia page on the subject. However, we'll tell you everything that's necessary to know for each lab.
It is very convenient to have a class to represent rays themselves, because they have multiple components, and a lot of operations will involve rays. So, you might as well make your life a little
easier. Create a class to represent a ray being traced. You can call it Ray, or if you come up with a better name, use that.
As mentioned above, rays require two data members:
• the origin that the ray is emitted from
• the direction that the ray is headed
You can represent both of these values with your vector class from last week. (Yes, technically the origin is a point and not a vector, but representing these as separate types leads to a lot of unnecessary complication.)
Your ray's constructor should take the origin and direction values for the ray as its arguments. The one nuance of the Ray constructor is that sometimes you will want the constructor to normalize the
direction vector automatically, and sometimes you will want to leave the direction value alone. Do this by having a third argument, a flag that controls whether the direction value is normalized.
Make this flag default to normalizing the direction vector, since this will be the typical usage. For example, you might do this:
Ray(const Vector3F &orig, const Vector3F &dir,
bool normalize = true);
You should provide accessors to the origin and direction values as well. However, you shouldn't need mutators on the ray object at all, since it doesn't need to be manipulated for any computations.
When a new ray is fired from a particular location, you can just create a new Ray object with new values.
Your ray also needs to provide one member function that returns the actual 3D point for a particular t value. During ray intersection tests, we don't want to store actual points of intersection
because we really don't need to; we just use these values of t where intersections occur. But, ultimately, we will need to convert that into a 3D point. So, provide a function to do this. You might
call it getPointAtT(float t), for example. (Hint: Assert that t >= 0 in your code!)
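Putting these requirements together, a minimal sketch of such a Ray class might look like the following. The Vector3F struct here is my own reduced stand-in for last week's vector class, included only so the fragment compiles on its own:

```cpp
#include <cassert>
#include <cmath>

// Stand-in for last week's vector class; only what Ray needs is shown.
struct Vector3F {
    float x, y, z;
    Vector3F operator+(const Vector3F &o) const { return {x + o.x, y + o.y, z + o.z}; }
    Vector3F operator*(float s) const { return {x * s, y * s, z * s}; }
    float magnitude() const { return std::sqrt(x * x + y * y + z * z); }
    Vector3F normalized() const { float m = magnitude(); return {x / m, y / m, z / m}; }
};

class Ray {
    Vector3F origin;     // the point P the ray is emitted from
    Vector3F direction;  // the direction D the ray is headed

public:
    // normalize defaults to true, since that is the typical usage.
    Ray(const Vector3F &orig, const Vector3F &dir, bool normalize = true)
        : origin(orig), direction(normalize ? dir.normalized() : dir) { }

    const Vector3F & getOrigin() const { return origin; }
    const Vector3F & getDirection() const { return direction; }

    // Convert a t value into the actual 3D point P + D * t.
    Vector3F getPointAtT(float t) const {
        assert(t >= 0);
        return origin + direction * t;
    }
};
```

Note that there are no mutators: once constructed, a ray is immutable, and a new Ray is created whenever a new ray is fired.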
Scene Objects
You will need to implement a class to represent each object in a raytraced scene. I recommend that you call them "scene objects" to clearly indicate their purpose. This class should be an abstract
base class, since there is no well-defined way that generic scene objects should behave. Eventually your scene object will include detailed information about its surface characteristics, but for now
your objects will have one characteristic: their surface color. So, give your scene object class a single data-member, a "surface color" field, specified using the color class you created last week.
Once you have the basic framework for your class, you need to add the following functions:
• An accessor and a mutator for the scene-object's surface color. These should be very simple. Just return the current surface color, or take a color as an argument and store it. (While you are at
it, you should probably create a default scene-object constructor that initializes your surface color to something other than black, such as gray (0.5, 0.5, 0.5).)
• A pure-virtual function that computes whether or not an intersection occurred. If an intersection has occurred, the function should return the lowest t value for the intersection (this would be
the intersection closest to the ray's origin). This is all the code needs to return; remember that we can get to the actual 3D point by using the combination of the ray and this particular value
of t.
So, your function might look something like this:
float SceneObject::intersection(const Ray &r) const;
The argument is the ray to test against, and the return value is the t value for the intersection.
If no intersection occurred, you can return some known "invalid" value, such as t = -1. You should define a constant for this, and use the constant; don't just use -1 everywhere.
• A pure-virtual function that returns the surface normal of a point on an object. This function will be used when an intersection occurs, to determine what color should be assigned to the
location, and what direction rays will bounce off of that point on the object. In this case, the argument should be a 3D point which is assumed to be on the surface of the object, and the
return-value is a surface normal for that point. This function should be pure-virtual.
• A function that returns the color of a point on an object. This function also takes as an argument a 3D point on the surface of the object. The function can simply return the color passed to the
scene-object constructor for now, but you can imagine how this would change if you were to implement texture-mapped objects.
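A sketch of the abstract base class described above. The names Color3F, Vector3F, and NO_INTERSECTION are placeholder stand-ins of mine, not names the lab requires; pick whatever fits your existing classes:

```cpp
#include <cassert>

// Stand-ins for the vector and color classes from last week.
struct Vector3F { float x, y, z; };
struct Color3F  { float r, g, b; };

class Ray;  // only referenced by signature, so a forward declaration suffices

// Define the "invalid t" sentinel once; don't scatter -1 literals around.
const float NO_INTERSECTION = -1.0f;

class SceneObject {
    Color3F surfaceColor;

public:
    // Default to gray rather than black so untextured objects show up.
    SceneObject() : surfaceColor{0.5f, 0.5f, 0.5f} { }

    const Color3F & getSurfaceColor() const { return surfaceColor; }
    void setSurfaceColor(const Color3F &c) { surfaceColor = c; }

    // t of the closest intersection, or NO_INTERSECTION on a miss.
    virtual float intersection(const Ray &r) const = 0;

    // Surface normal at a point assumed to lie on the surface.
    virtual Vector3F getNormal(const Vector3F &point) const = 0;

    // Color at a surface point; constant for now, but a texture-mapped
    // subclass would override this to vary the color per point.
    virtual Color3F getColor(const Vector3F &point) const {
        return surfaceColor;
    }

    virtual ~SceneObject() { }
};
```

The destructor is virtual because scenes will eventually delete objects through base-class pointers.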
Scene Object Subclasses
All objects in the scene being ray-traced will be subclasses of the scene-object base class. For now, you can create the following subclasses:
• A plane object of infinite size. Planes are specified by two values, a distance d from the origin, and a surface-normal N for the plane. Given these two values, the points in the plane satisfy
this equation:
f(X) = X · N + d = 0
Thus, the plane class should have two data members:
□ A scalar specifying the distance of the plane from the origin. (Use the same type as your vector element type.)
□ A vector specifying the surface normal for the plane.
The class' constructor should require these values, and you should provide accessors (but not mutators!) for these values.
Given the above values, the relevant equations for plane computations are:
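The equations themselves appear to be missing from this copy of the handout, but the standard derivation follows directly from the definitions above: substitute the ray X = P + t * D into f(X) = X · N + d = 0 and solve, giving t = -(d + P · N) / (D · N). There is no intersection when D · N = 0 (the ray runs parallel to the plane), and the surface normal at every point is simply N. A sketch, using a hypothetical planeIntersection helper of my own naming:

```cpp
#include <cmath>

struct Vector3F {
    float x, y, z;
    float dot(const Vector3F &o) const { return x * o.x + y * o.y + z * o.z; }
};

const float NO_INTERSECTION = -1.0f;  // sentinel from the scene-object spec

// Closest t >= 0 where the ray P + t * D meets the plane X . N + d = 0.
float planeIntersection(const Vector3F &P, const Vector3F &D,
                        const Vector3F &N, float d) {
    float denom = D.dot(N);
    if (std::fabs(denom) < 1e-9f)       // ray runs parallel to the plane
        return NO_INTERSECTION;
    float t = -(d + P.dot(N)) / denom;
    return (t >= 0) ? t : NO_INTERSECTION;  // plane is behind the ray otherwise
}
```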
• A sphere object with a particular location and radius.
The sphere class should have two data members:
□ A vector specifying the sphere's center.
□ A scalar specifying the sphere's radius.
The class' constructor should require these values, and you should provide accessors (but not mutators!) for these values.
Given the above values, the relevant equations for sphere computations are:
□ There can be 0, 1, or 2 intersections between a ray and a sphere. For a ray with the same formulation as before, the intersections are simply the solution to the quadratic equation:
a * t² + b * t + c = 0
a = D · D
b = 2 * (P · D - D · C)
c = P · P + C · C - 2 * (P · C) - r²
You can use the discriminant to guide your computation, but remember the additional constraint that we only want solutions where t >= 0. Also, notice that a will never be zero since the
magnitude of D will never be zero, so you don't have to check for that in your code.
To make your life easier down the line, you should implement this test in a public helper function that returns all of the sphere's intersection points, not just the closest one. You can
write a helper function like this:
int getIntersections(const Ray &r, float &t1, float &t2) const;
This helper function can return the total number of valid intersections (0, 1, or 2), and can set t1 and t2 to the t-values of the intersection points, or to your "no intersection" value. (t1
and t2 are "out-parameters" since they are non-const references, and are used to return additional values back to the caller.)
Also, to make your life easier down the line, you should follow some conventions in how you return the values via t1 and t2:
☆ When there are two valid intersection points, store the smaller one in t1, and the larger one in t2.
☆ When there is only one valid intersection point, always store it in t1, and set t2 to "no intersection".
☆ When there are no valid intersection points, set both t1 and t2 to "no intersection" before returning 0.
This way, t1 will always be the closest intersection point. In fact, when you use this helper-function to implement the SceneObject intersection-test function, it will simply be a matter of
calling this helper function, then returning the value of t1:
// local variables t1, t2 to receive the results
// of the computation
float t1, t2;
getIntersections(r, t1, t2);
// t1 is either the closest intersection point, or
// it is our "no intersection" value.
return t1;
□ The surface normal of any point on the sphere is:
n(X) = (X - C) / |X - C|
In other words, subtract the center from the point on the surface, and normalize the resulting vector. Easy peasy.
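Following the quadratic and the t1/t2 conventions above, a sketch of that helper as a free function. Vector3F and NO_INTERSECTION are restated so the fragment stands alone; in the lab it would be a member of your sphere class with C and r as data members:

```cpp
#include <cmath>

struct Vector3F {
    float x, y, z;
    float dot(const Vector3F &o) const { return x * o.x + y * o.y + z * o.z; }
};

const float NO_INTERSECTION = -1.0f;

// Valid (t >= 0) intersections of the ray P + t * D with the sphere of
// center C and radius r. Conventions: t1 <= t2, t1 holds the closest hit,
// and unused slots hold NO_INTERSECTION.
int getIntersections(const Vector3F &P, const Vector3F &D,
                     const Vector3F &C, float r,
                     float &t1, float &t2) {
    t1 = t2 = NO_INTERSECTION;

    float a = D.dot(D);  // never zero, since |D| is never zero
    float b = 2 * (P.dot(D) - D.dot(C));
    float c = P.dot(P) + C.dot(C) - 2 * P.dot(C) - r * r;

    float disc = b * b - 4 * a * c;
    if (disc < 0)
        return 0;                    // the ray misses the sphere entirely

    float sq = std::sqrt(disc);
    float lo = (-b - sq) / (2 * a);  // lo <= hi because a > 0
    float hi = (-b + sq) / (2 * a);

    if (disc == 0) {                 // grazing hit: at most one intersection
        if (lo >= 0) { t1 = lo; return 1; }
        return 0;
    }

    int count = 0;
    if (lo >= 0) { t1 = lo; count++; }
    if (hi >= 0) {                   // origin may be inside the sphere,
        if (count == 0) t1 = hi;     // in which case only hi is valid
        else t2 = hi;
        count++;
    }
    return count;
}
```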
In a few weeks you will add cylinders to your raytracer. This will involve a few new math operations, but the big win is that you can reuse your above sphere-intersection code, if you provide a
mechanism to get all intersection points with the sphere. So, it's a bit of extra work right now, but it will save a lot of time down the line.
In order to actually see anything in a ray-traced scene, there must be some light source. We can also use lights to render shadows that objects cast on other objects. For now, we will have a very
simple lighting model, where all lights are point-lights of a specific color.
Create a class to represent point lights, with the following state:
• the position of the light
• the color of the light
The constructor should take arguments to store for these values. The class should also provide accessors (but not mutators) for these values.
We won't do anything else with lights this week, but we will incorporate them into the raytracing process next week.
The Scene
A scene is simply a collection of scene-objects being raytraced, and another collection of lights illuminating the scene. Although we could certainly be much more sophisticated than this, we will
stick with the "simple" theme and represent scenes in this simple way.
Create a class to represent a scene. The objects in the scene will be dynamically allocated, as will the lights. (Since you only do this once to set up the scene, this will not hinder performance.)
Use STL vectors to store the object-pointers and light-pointers. You should provide the following functionality:
• The default constructor should just create an empty scene.
• A member function to add a new scene-object, that takes a pointer to the scene-object to add. Assert that the pointer isn't 0. The scene-object will be heap-allocated outside the scene code, but
the scene will still assume responsibility for cleaning up all scene-objects.
• A member function to add a new light, that takes a pointer to the light to add. The same comments apply here as for the previous member function.
• Write a destructor that goes through these lists of pointers and deletes all of the scene-objects and lights. (You might try using the for_each algorithm with a delete-functor, as discussed in class.)
Next week we will take this scene object and implement the ray tracing process in it. For now, it will just function as a collection of objects and lights.
When you have finished writing all of these classes, you should write some test code to exercise your intersection code, to make sure that it works properly. Construct very simple tests, such as
shooting a ray from (0,0,0) at a sphere of radius 1 at (2,0,0), and making sure you get back a result of (1,0,0). You can also try your sphere-intersection helper function, and check that you get
both (1, 0, 0) and (3, 0, 0) as the results.
Once you are reasonably convinced that your code works properly, submit a tarball of your work on csman.
Copyright (C) 2007-2008, California Institute of Technology.
Last updated January 30, 2008.
Anharmonic Phonons and the Isotope Effect in Superconductivity
Author(s): V Crespi; Marvin Cohen; David R. Penn;
Title: Anharmonic Phonons and the Isotope Effect in Superconductivity
Published: January 01, 1991
Abstract: Anharmonic interionic potentials are examined in an Einstein model to study the unusual isotope-effect exponents for the high-Tc oxides. The mass dependences of the electron-phonon
coupling constant λ and the average phonon frequency √⟨ω²⟩ are computed from weighted sums over the oscillator levels. The isotope-effect exponent is depressed below 1⁄2 by
either a double-well potential or a potential with positive quadratic and quartic parts. Numerical solutions of Schrödinger's equation for double-well potentials produce λ's in
the range 1.5-4 for a material with a vanishing isotope-effect parameter α. However, low phonon frequencies limit Tc to roughly 15 K. A negative quartic perturbation to a
harmonic well can increase α above 1⁄2. In the extreme-strong-coupling limit, α is 1⁄2, regardless of anharmonicity.
Citation: Physical Review B (Condensed Matter and Materials Physics)
Volume: 43
Issue: 16
Pages: pp. 12921 - 12924
Research Areas: Condensed Matter Physics
2 vector integration problems
February 28th 2013, 06:28 AM #1
Aug 2012
2 vector integration problems
1. Vector field is given with:
$\vec{F}=(3x^2yz+y^3z+xe^{-x})\hat{i}+(3xy^2+x^3z+ye^x)\hat{j}+(x^3y+y^3x+xy^2z^2)\hat{k}$
find $\oint\vec{F}\cdot\mathrm{d}\vec{r}$ on the closed contour OABCDEO given by (0,0,0), (1,0,0), (1,0,1), (1,1,1), (1,1,0), (0,1,0), (0,0,0),
joined by straight line segments.
2. Vector field is given with:
$\vec{F}=F_0\bigg[\bigg(\frac{y^3}{3a^3}+\frac{y}{a}e^{\frac{xy}{a^2}}+1\bigg)\hat{i}+\bigg(\frac{xy^2}{a^3}+\frac{x+y}{a}e^{\frac{xy}{a^2}}\bigg)\hat{j}+\frac{z}{a}e^{\frac{xy}{a^2}}\hat{k}\bigg]$
Using Stokes theorem find:
$\oint \vec{F} \cdot \mathrm{d}\vec{r}$
around the curve which is the perimeter of the square ABCD given by A=(0,a,0), B=(a,a,0), C=(a,3a,0), D=(0,3a,0).
help pls
Re: 2 vector integration problems
Hey DonnieDarko.
Can you show us what you have tried? (Hint: For the first one think about the parameterization in terms of line segments).
In other words, what is the parameterization of each segment (and then take the inner product for that segment)?
Re: 2 vector integration problems
Actually I solved the first one, so I'll try some more on the second and share my thoughts.
Summary: Some surface subgroups survive surgery.
D. Cooper and D.D. Long
March 23, 1998
1 Introduction.
A central unresolved question in the theory of closed hyperbolic 3-manifolds is whether they are
covered by manifolds which contain closed embedded incompressible surfaces. An affirmative
resolution of this conjecture would imply in particular that all closed hyperbolic 3-manifolds contain
the fundamental group of a closed surface of genus at least two. Even the simplest case of this
conjecture, namely that of the manifolds obtained by surgery on a hyperbolic manifold with a single
cusp, has remained open for many years. In this article we prove the following theorem:
Theorem 1.1 Suppose that M is a hyperbolic 3-manifold with a single torus boundary component.
Then all but finitely many surgeries on M contain the fundamental group of a closed orientable
surface of genus at least two.
Our proof rests upon:
Theorem 1.2 Suppose that S is an incompressible, ∂-incompressible quasi-Fuchsian surface with
boundary slope α.
Then there is a K so that if γ is any simple curve on ∂M with Δ(α, γ) > K, the Dehn-filled
manifold M(γ) contains the fundamental group of a closed surface of genus at least two.
This result is similar in spirit to the main theorem of [5]; that result applied only to surfaces of slope
zero; in that context, however, we were able to give an explicit (and fairly small) value for K.
Number of results: 9,147
All the data are raw data. Information is the result of analyzing data; it shows the relationships within the data. A computer takes a series of data as input, then it can process the data to find out statistics about the data; the statistics can be drawn as graphs to indicate how ...
Wednesday, August 27, 2008 at 9:27am by homemaker
Research South Carolina for data that is commonly used in health services systems. This includes census data, vital statistics data (birth, deaths, marriages, and divorces), surveillance data,
administrative data, and survey research data. Keep in mind that most of this data ...
Friday, January 11, 2013 at 4:24pm by Norma
Data Gathering and Inference
When the production manager finds the average life of her bulb-lifetime data, it is an example of what phase of inferential statistics? A. Data organization. B. Data gathering. C. Data analysis. D.
Probability-based inference.
Wednesday, May 8, 2013 at 10:20am by Tifini
A researcher is gathering data from four geographical areas designated: South=1;North=2; East=3; West=4. The geographical regions represents a. categorical data b. quantitative data c. directional
data d. either quantitative or categorical data
Friday, May 27, 2011 at 11:45am by John
range: subtract the lowest temperature from the highest temperature
median: the middle number when ordering a set of data chronologically
mode: the piece of data that appears most often
mean: add all pieces of data and divide by the number of pieces you added together
interpret graph to get ...
Monday, January 4, 2010 at 4:47pm by Mr. L
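For concreteness, here is an illustrative sketch of those four definitions in code. This is my own example, not part of the thread:

```cpp
#include <algorithm>
#include <map>
#include <vector>

// range: the highest value minus the lowest value
double range(std::vector<double> v) {
    std::sort(v.begin(), v.end());
    return v.back() - v.front();
}

// median: the middle value of the ordered data (with an even count,
// the average of the two middle values)
double median(std::vector<double> v) {
    std::sort(v.begin(), v.end());
    size_t n = v.size();
    return (n % 2 == 1) ? v[n / 2] : (v[n / 2 - 1] + v[n / 2]) / 2.0;
}

// mode: the value that appears most often
double mode(const std::vector<double> &v) {
    std::map<double, int> counts;
    for (size_t i = 0; i < v.size(); i++) counts[v[i]]++;
    double best = v[0];
    for (std::map<double, int>::iterator it = counts.begin();
         it != counts.end(); ++it)
        if (it->second > counts[best]) best = it->first;
    return best;
}

// mean: add all the values and divide by how many there are
double mean(const std::vector<double> &v) {
    double sum = 0;
    for (size_t i = 0; i < v.size(); i++) sum += v[i];
    return sum / v.size();
}
```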
It depends upon the data and your purpose in using the data. Earlier you asked about salary data. Mode is not representative of this data. That leaves median and mean to consider. I suggest you find
the median and the mean for your data, and then decide which is more ...
Tuesday, November 17, 2009 at 8:46pm by Ms. Sue
After the experiment, scientists organize and ___ the data? Record the data. I started to be a smart alec and put "massage" the data. :) "Report the data" would fit here also. Technically, we should
record the data in the process of doing the experiment. If we put the data on
Tuesday, October 10, 2006 at 5:10pm by chris
Question Details: Finding Regression for each set of data & finding best fit equation for each set of data: Data 1: X: 1 2 3 4 5 Y: 3.1 12.1 20.7 33.9 50.8 Data 2: X: 1 2 3 4 5 Y: 1.16 3.46 5.11 5.98
Sunday, March 11, 2007 at 1:46pm by harry
MTH233/statistics UOP
Find one quantitative and one qualitative data set from a reliable source and determine the best way to present that data graphically. Show your graph and explain your reasoning for choosing this
presentation method. When might the reported median of data be more appropriate ...
Thursday, July 22, 2010 at 1:51pm by sue
My question is which of the following are NOT examples of data? Numbers Graphs Measurements Conclusions I know conclusions are not data. What about graphs? Is that data or a way to record data?
Monday, September 3, 2012 at 1:16pm by Kelley
No data is given. Please repost with the data. If it does not copy and paste, you will have to type the data yourself. Thanks for asking.
Monday, August 4, 2008 at 10:06pm by PsyDAG
BIS 155 Data Analysis w/Spreadsheet
How would I begin to compile data into useable data in using Excel?
Tuesday, March 16, 2010 at 11:12pm by Yolanda
Which of the following is true in regard to limited data sets? A. Patients must authorize the use of limited data sets. B. Those who receive limited data sets can pass them on without restrictions. C.
Limited data sets can't include a patient's city, state, age, and birth date. D...
Monday, August 30, 2010 at 10:48pm by janice
6th Grade Statistics
The student is correct. Can you determine why? (Hint: Categorical data is also called nominal data or qualitative data.)
Tuesday, October 2, 2007 at 8:55pm by MathGuru
allied health
which of the following is true in regard to limited data sets> a. Patients must authorize the use of limited data sets. b. those who receive limited data sets can pass them on without restrictions c.
limited data sets can't include a patient's city, state, age, and birth ...
Tuesday, October 12, 2010 at 8:48am by Anonymous
allied health
I want to say D. Not sure here Which of the following is true in regard to limited data sets? A. Patients must authorize the use of limited data sets. B. Those who receive limited data sets can pass
them on without restrictions. C. Limited data sets can’t include a patient’s ...
Tuesday, July 6, 2010 at 6:21pm by james
i went there and there are like no data any where. i need data on them. do u know where i can find autism data for the year 2009 or 2008. can u list the sites here. im not so good at searching
Sunday, February 28, 2010 at 5:54pm by Anonymous
Confidentiality Health
Which of the following statements is true in regard to limited data sets? A. Limited data sets contain some individual identifiers. B. Patients must authorize the use of limited data sets. C. Those
who receive limited data sets can pass them on without restrictions. D...
4. Which of the following is true about a trend line for data? (1 point) The minimum data point always lies on the trend line. Every data point must lie on the trend line. The trend line describes
the pattern in the data if one exists. The trend line includes the effect of all...
Thursday, March 21, 2013 at 8:09am by Philly
physical science
political? optical? Optical fibers have greatly lowered the cost of worldwide data transmission and availability. Even though the data link to the user is often wireless or (sometimes) a land-line
conducting cable, fiber optic data links are an important part of the Internet ...
Tuesday, August 16, 2011 at 4:39am by drwls
Data Mining
What kind of tasks do you see becoming popular areas to be data mining? What has to be there to make an area worth while for being Data Mined? What will it take to make Data Mining more accessible
for more people to be able to do it?
Wednesday, April 16, 2008 at 10:39pm by Anonymous
math check answer
4. Which of the following is true about a trend line for data? The minimum data point always lies on the trend line. Every data point must lie on the trend line. The trend line describes the pattern
in the data if one exists.*** The trend line includes the effect of all ...
Friday, March 22, 2013 at 4:52pm by batman
which of the following is true bout a trend line for data? a- the minimum data point always lie on the trend line b- every data point must lie on the trend line c- the trend line describes the
pattern in the data if one exists d- the trend line includes the effect of all ...
Friday, March 7, 2014 at 7:07pm by Steve
Research and Evaluation
I am not certain we agree on primary and secondary. If raw data is collected by the researcher, it is primary. If the raw data was collected by someone other than the author, it is secondary. For
instance, using census data to evaluate teen pregnancy is secondary data. ...
Tuesday, October 19, 2010 at 9:03am by bobpursley
If you are told that a data set has a mean of 25 and a variance of 0 you can conclude that
observation in the data set are 25 D)Someone has made a mistake E) None of the ...
Tuesday, September 13, 2011 at 12:24am by Dom
Data is only a collection of numbers or words. Information is an organization of data such that the latter could be extracted selectively according to some useful criteria or that serves a particular
purpose. The two are related by data structures and databases which enable ...
Friday, June 10, 2011 at 2:19pm by MathMate
infectious diseases epidemiology
iEpidemiology is a simple to use calculator for the analysis of epidemiologic study data. Enter your study data (exposed/unexposed, cases/non-cases or person-time) and calculate crude relative risk,
risk difference, and confidence intervals. Currently Features: - Rate Data - ...
Tuesday, October 5, 2010 at 8:05am by EpiProf
We do not have your data. Copy and paste will not work. You have to type all the data. Please repost with the specific data and question, so we can attempt to help. However, we do not do your
homework for you.
Monday, July 20, 2009 at 3:29pm by PsyDAG
When would you NOT want to have the data in a linked document updated when the corresponding worksheet data is updated? In what types of situations would it be critical to make sure that the linked
data IS updated? Are there any disadvantages to having data linked? Does this ...
Sunday, December 14, 2008 at 9:07pm by Bryan
The following data on the number of iron workers in the United States for the years 1978 through 2008 are provided by the U.S. Bureau of Labor Statistics. Using regression techniques discussed in
this section, analyze the data for trend. Develop a scatter plot of the data and ...
Sunday, October 2, 2011 at 8:58pm by Anonymous
4. For Questions 4-7, use the following data: The number of file conversions performed by a processor per day for 10 days was: 15, 27, 25, 28, 30, 31, 22, 25, 27, 29 What is the arithmetic mean of
the data? (Points: 5) 20.7 25.9 27 29 5. What is the trimmed mean of the data...
Monday, November 7, 2011 at 12:54pm by Anonymous
Math Ms. Sue please
4. Which of the following is true about a trend line for data? (1 point) The minimum data point always lies on the trend line. Every data point must lie on the trend line. The trend line describes
the pattern in the data if one exists. The trend line includes the effect of all...
Sunday, March 17, 2013 at 6:38pm by Delilah
MIS Nooona MBA
1- I need some ways a company can erase data on computers before discarding them?
2- I want three programs that can destroy data on a hard disk?
3- Which method or program is the best for destroying data so that nobody can recover it?
Friday, November 30, 2012 at 2:22pm by hanaa
office finances
Your computer's hard drive has crashed and all data has been lost. To safeguard against data loss, you should have a. kept a tape backup b. printed your data frequently while working
My answer is a.
Is there any data about the use of nichrome heating, for instance: a nichrome element in the cables of electric kettles, geysers and spiral stove? What do you mean by "data"? What physical data are
you looking for: resistivity?
Wednesday, February 7, 2007 at 3:03am by tigger needs help
CS programing
Storage: .data 2 0 ; storage for n22 .data 2 0 ; temp storage 23 .data 2 2 ; constant 2 24 .data 2 2 ; const 2 25 0 10 ;const 10 26 please can u show me the correct way to declare a constant using
wombat.(CPU sim).
Thursday, March 20, 2014 at 3:33pm by lex
science project
I'm filling out forms for my freshmen science fair project and I'm not sure what they mean by this question: Data Analysis: Describe the procedures you will use to analyze the data that answer
research question or hypothesis HELP! Depends on the data. Most likely you would ...
Sunday, January 28, 2007 at 9:21pm by Carol Ann
What tools can be used to collect data? What are some of the issues that occur with data collection? Are some tools more appropriate for collecting certain types of data and why?
Sunday, October 11, 2009 at 7:43am by MArie
5. (TCO 9) You have been tasked with analyzing an extremely large amount of data and to ultimately produce a report to share with the board of directors. The data is currently in a text file and has
over two thousand records of data. Explain how you would use Excel to analyze ...
Monday, June 24, 2013 at 10:00pm by laura
How do you find the polynomial function that best models data given in a table for x and f(x)? The only example in my textbook shows how to do it on a calculator but I cannot find any instructions with my calculator to tell me how to do it. The data for x=-5, -4,-3, -2, -1, 0, 1...
Saturday, August 30, 2008 at 12:20pm by Lucy
Write a paragraph about data using the appropriate data vocabulary words. Think about why the line went so far at the end of the graph.In other words, analyze the data.
Wednesday, November 19, 2008 at 8:50pm by Andrea
Choose a commonly used microprocessor, Intel Core or IBM Power6. What data types are supported? How many bits are used to store each data type? How is each data type represented internally?
Wednesday, October 26, 2011 at 1:07am by Mike
A set of 50 data values has a mean of 27 and a variance of 16. I. Find the standard score (z) for a data value = 19. II. Find the probability of a data value < 19. III. Find the probability of a data
value > 19. Show all work.
Saturday, February 25, 2012 at 9:10pm by Blackqueen
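This question recurs several times in the list above. A short Python sketch of one way to work it, using only the standard library and assuming the data values are normally distributed (which the probability parts require):

```python
from statistics import NormalDist

mean, variance, x = 27, 16, 19
sd = variance ** 0.5                   # standard deviation: sqrt(16) = 4
z = (x - mean) / sd                    # I. standard score
p_below = NormalDist(mean, sd).cdf(x)  # II. P(data value < 19)
p_above = 1 - p_below                  # III. P(data value > 19)
print(z, p_below, p_above)
```

Here z comes out to -2, so under the normal assumption roughly 2.3% of values fall below 19.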
environmental science
Please check this: Scientists use statistics to a. graph data b. analyze data c. communicate ideas to each other d. all of the above B ( in my book it says: scientists rely on and use statistics to
summarize, characterize, analyze, and compare data. statistics is usually a ...
Thursday, August 27, 2009 at 1:26pm by y912f
range: difference between the highest and lowest value of a data set mean: average of the #'s in a data set mode: # that occurs most often in a data set (the one that repeats the most) median: the
middle value of a set of data that is ordered from lowest to highest. (numbers- ...
Monday, August 27, 2007 at 6:08pm by ~christina~
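The definitions above map directly onto Python's standard statistics module; a small sketch with a made-up data set:

```python
import statistics

data = [3, 7, 7, 2, 9, 4]           # made-up data set
data_range = max(data) - min(data)  # range: highest minus lowest value
mean = statistics.mean(data)        # average of the numbers
mode = statistics.mode(data)        # value that occurs most often
median = statistics.median(data)    # middle value of the sorted data
print(data_range, mean, mode, median)
```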
this is easier than you are trying to envision it. You have a bunch of data, and you have found that the straight line of best fit is y = 2x+5 You have no real formula that produces every data point
in the interval [1,10], but this line does as well as you can. So, now you ...
Thursday, June 20, 2013 at 11:51am by Steve
Research design is an area where researchers collect various data to address important questions. For example, if researchers wish to find characteristics of people most likely to develop lung
cancer, they will have to collect appropriate data first. Investigate 3 different ...
Monday, February 11, 2013 at 3:50pm by Betty Anne
Spearman's Rho Ordinal data comes in two general types. The first type of ordinal data is often called continuous ordinal data. This type of data is found in situations where the researcher has a
relatively detailed set of rankings where the cases have been ranked in a ...
Thursday, August 30, 2012 at 5:00pm by Jo-Anne
AP Chemistry
Two-point equation: ln(P2/P1) = -(ΔHvap/R)(1/T2 - 1/T1) The graphical analysis gives a more exact value because it takes into account possibly erroneous data points, making an “average”, “trend” or
“best fit” line that is as close as possible to all data points. The equation...
Sunday, January 28, 2007 at 4:45pm by Anonymous
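The two-point Clausius-Clapeyron form quoted here, ln(P2/P1) = -(ΔHvap/R)(1/T2 - 1/T1), can be rearranged to solve for ΔHvap. A Python sketch with made-up vapor-pressure data (the numbers are illustrative only):

```python
import math

R = 8.314  # gas constant, J/(mol·K)

# made-up two-point data: temperature in K, pressure in any consistent unit
T1, P1 = 353.0, 400.0
T2, P2 = 373.0, 760.0

# rearranged two-point form: ΔHvap = -R·ln(P2/P1) / (1/T2 - 1/T1)
dHvap = -R * math.log(P2 / P1) / (1 / T2 - 1 / T1)
print(dHvap)  # J/mol; divide by 1000 for kJ/mol
```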
You are developing a data base for Army tracked combat vehicles and you are normalizing the data base to base year 2007 dollars. What number would you enter into the data base for a vehicle budgeted
at $1,725,000 in the FY08 appropriation?
Tuesday, November 5, 2013 at 8:16am by Gail
Need help on the question below. Jeff studied data from the following categories: age, countries, salary, temperature, types of trees, and years of education. List the quantitative data and list the
categorical data. Would the quantitative data be: age, salary, temperature, and ...
Wednesday, February 6, 2013 at 7:02pm by Brandi
Biology..please help
This is data collected (solute concentrations at four sites):
Solute       Bowman's Capsule   Glomerulus   Loop of Henle   Collecting Duct
protein      0                  0.8          0               0
urea         0.05               0.05         1.50            2.00
glucose      0.10               no data      0               0
chloride     0.37               no data      no data         0.6
ammonia      0.0001             0.0001       0.0001          0.04
substance x  0                  9.15         0               0 ...
Monday, October 15, 2007 at 12:10am by Anonymous
The marketing manager at Massimino & McCarthy, a chain of retail stores that sells men's clothing, is reviewing marketing research data to try to determine if changes in marketing strategy are
needed. Which of the following sources of data would be a secondary data source?
Saturday, August 14, 2010 at 11:23am by barbara
Group the largest data set and find mean, median, mode, variance, standard deviation, 15-th, 45-th and 80-th percentiles of the grouped data. Then find the same sample statistics using the ungrouped
data. Is there any difference? Comment.
Sunday, November 20, 2011 at 8:56am by metu
Job Sat.  INTR.  EXTR.  Benefits
5.2  5.5  6.8  1.4
5.1  5.5  5.5  5.4
5.8  5.2  4.6  6.2
5.5  5.3  5.7  2.3
3.2  4.7  5.6  4.5
5.2  5.5  5.5  5.4
5.1  5.2  4.6  6.2
5.8  5.3  5.7  2.3
5.3  4.7  5.6  4.5
5.9  5.4  5.6  5.4
3.7  6.2  5.5  6.2
5.5  5.2  4.6  6.2
5.8  5.3  5.7  2.3
5.3  4.7  5.6  4.5
5.9  5.4  5.6  5.4
3.7  6.2 ...
Tuesday, September 21, 2010 at 1:31pm by chris
Computers (Anonymous)
It is a collection of steps required to convert data into information. There are phases like input, process, output and feedback. In the step of collection, the raw facts are collected from different stages. The collected data is classified into different categories in the ...
Friday, September 4, 2009 at 8:46am by jennifer
Data Analysis
You are a District sales manager and you are overloaded with data that you need to compile into useable data and you are looking to use Excel to help you do this. Using the concept from this week,
how will you determine where to start and what tools can you use within Excel to...
Wednesday, July 22, 2009 at 6:49pm by Bryan
Scientific investigations involve many steps and processes. Which characteristics define a laboratory experiment? A. hypothesis, models, and calculations B. test variables, data, and uncontrolled
conditions C. data, conclusions, and unregulated environment D. independent and ...
Sunday, October 20, 2013 at 3:08pm by Anne
First calculate the mean Then for each data value, take the difference between that data value and the mean and square it. Add up all those squared differences and divide by the number of data values
e.g. for 1,2,3,4 the mean is (1+2+3+4)/4 = 2.5 variance = ( (1-2.5)^2 + (2-2....
Friday, April 23, 2010 at 12:26am by Reiny
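The procedure described above, completed in Python for the 1, 2, 3, 4 example (this is the population variance, dividing by n):

```python
data = [1, 2, 3, 4]
mean = sum(data) / len(data)                     # (1+2+3+4)/4 = 2.5
squared_diffs = [(x - mean) ** 2 for x in data]  # difference from mean, squared
variance = sum(squared_diffs) / len(data)        # average squared difference
print(mean, variance)                            # 2.5 and 1.25
```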
Office Finances
Your computer's hard drive has crashed and all data has been lost. To safeguard against data loss, you should have a.kept a tape backup, b. used a password on all confidential files c. stored your
data frequently while working, d. an uninterrupted power supply
Thursday, November 5, 2009 at 3:03am by anna
6th Grade Statistics
The student is correct because the median is used to find the middle numeric value of a data set, while the mean balances out the data as if all the numbers in the set were the same value. This means that the mean and median can only be used for numeric data, so this student is correct.
Tuesday, October 2, 2007 at 8:55pm by Jacob
intro to software development
I have an assignment that asks to identify three processes or capabilities needed to keep track of a home DVD and CD collection, identify the input data required for each process, and identify a logical name for each data output item and the type of data output. What is this?
Friday, October 16, 2009 at 6:23pm by mary
A set of 50 data values has a mean of 35 and a variance of 25. I. Find the standard score (z) for a data value = 26. II. Find the probability of a data value > 26.
Monday, October 17, 2011 at 7:29am by don
The data is breeds of dogs and the weight. The plot needs to contain the data about the dogs, with one stem labeled 0. I'm not sure how to put the data together in the stem-and-leaf plot.
Monday, December 10, 2007 at 5:50pm by Johnna
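The dog-weight data isn't reproduced here, so the sketch below uses made-up weights; the idea is that the stem is the tens digit and the leaf is the ones digit, and a value under 10 produces the stem labeled 0 that the question mentions:

```python
# made-up dog weights; the 8 gives a stem of 0
weights = [8, 12, 15, 23, 27, 27, 34, 41]

stems = {}
for w in sorted(weights):
    stems.setdefault(w // 10, []).append(w % 10)  # stem = tens, leaf = ones

for stem in sorted(stems):
    print(stem, "|", " ".join(str(leaf) for leaf in stems[stem]))
```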
A set of 50 data values has a mean of 18 and a variance of 4. I. Find the standard score (z) for a data value = 20. II. Find the probability of a data value < 20.
Sunday, October 30, 2011 at 9:01pm by Jen
Homework Help
A set of 50 data values has a mean of 48 and a variance of 9. I. Find the standard score (z) for a data value = 43. II. Find the probability of a data value < 43.
Sunday, December 2, 2012 at 7:16pm by Huffette
1- three ways that companies use to destroy their data before destroying their computers . 2-three programs that permanently destroy data on a hard disk. 3- Best methods or program for destroying
data so nobody can recover it . Thanks.
Thursday, November 29, 2012 at 12:07pm by hanaa
Which of the following is an accurate description of Simpson's paradox? When groups of data are aggregated, an association can get stronger because of a confounding variable. That confounding
variable is usually the number of observations in different groups of data. When ...
Thursday, February 21, 2013 at 12:38am by Jon
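What the question describes can be shown numerically: in the made-up data below, the association between x and y is positive inside each group, but pooling the groups reverses its sign, with group membership acting as the confounding variable:

```python
def slope(points):
    """Least-squares slope of y on x."""
    n = len(points)
    mx = sum(x for x, _ in points) / n
    my = sum(y for _, y in points) / n
    sxy = sum((x - mx) * (y - my) for x, y in points)
    sxx = sum((x - mx) ** 2 for x, _ in points)
    return sxy / sxx

group_a = [(1, 10), (2, 11), (3, 12)]  # within-group trend: +1
group_b = [(6, 1), (7, 2), (8, 3)]     # within-group trend: +1
pooled = group_a + group_b             # aggregated data

print(slope(group_a), slope(group_b), slope(pooled))  # +1, +1, negative
```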
One advantage of a _____________ is that the actual data values are retained in the graphical display of the data. A) Pie chart B) Dot plot (This one?) C) Histogram D) boxplot If we want to discuss
any gaps and clusters in a data set, which of the following should not be ...
Thursday, September 5, 2013 at 3:16pm by Andy
The mean of a set of 51 data points is 3.04. If a data value of 16 were added to the data set, what would be the approximate value of the new mean?
Monday, April 9, 2012 at 7:39pm by Nicole
Me and my grade 8 class are doing project on "data management". in one part it says: Statistical Measure of your choice: Research one other representation of statistics and apply the calculations to
your data. Once again, explain what this means within the context of your data...
Friday, February 17, 2012 at 10:11pm by Alex Johnson
Computer Science
a)385 data loss incidents took place b)110 million people were affected by personal data loss incidents c)3% of all data loss incidents were caused by hard copy theft/loss d)20% of all data loss
incidents were caused by portable media theft/loss e)15% of data loss incidents ...
Sunday, January 16, 2011 at 7:16am by Sue
You are given the following data. # of Absences Final Grade 0 96 1 92 2 71 3 66 4 60 5 51 A. Find the correlation coefficient for the data. B. Find the equation for the regression line for the data,
and predict the final grade of a student who misses 3.5 days.
Friday, October 21, 2011 at 3:03pm by Jen
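Worked in plain Python for the data in this version of the question (absences 0-5, grades 96 down to 51), using the same formulas a calculator's LinReg function applies:

```python
xs = [0, 1, 2, 3, 4, 5]
ys = [96, 92, 71, 66, 60, 51]
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
sxx = sum((x - mx) ** 2 for x in xs)
syy = sum((y - my) ** 2 for y in ys)
sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))

r = sxy / (sxx * syy) ** 0.5         # A. correlation coefficient
slope = sxy / sxx                    # B. regression slope
intercept = my - slope * mx          # B. regression intercept
predicted = slope * 3.5 + intercept  # grade for 3.5 absences
print(r, slope, intercept, predicted)
```

Here r comes out near -0.98, so the downward linear trend in grades is strong.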
10. For questions 10-13, use the following data: 13 29 41 60 89 14 26 53 7 14 What is the arithmetic mean of the data? (Points: 5) 20 14 34.6 82 11. What is the range of the data? (Points: 5) 14 34.6
82 50 12. What is the variance of the data? (Points: 5) 231.04 616.64 685.16...
Monday, November 7, 2011 at 1:32pm by Anonymous
In Example 2, find the percent of the total and the number of degrees in each sector for mountains, theme parks, relatives, and other. Find the median of the data set in Example 3. In which category
does it fall? Find the average unemployment rate for Example 4. Determine ...
Tuesday, May 21, 2013 at 1:28pm by Kelly
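Example 2's table isn't shown here, but the percent-and-degrees computation it asks for is mechanical; a sketch with made-up destination counts:

```python
# made-up counts standing in for Example 2's circle-graph data
counts = {"mountains": 6, "theme parks": 9, "relatives": 12, "other": 3}
total = sum(counts.values())

for category, n in counts.items():
    percent = 100 * n / total  # share of the whole
    degrees = 360 * n / total  # sector angle in the circle graph
    print(category, percent, degrees)
```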
You are given the following data. Number of Absences Final Grade 0 95 1 91 2 82 3 69 4 67 5 58 - Find the correlation coefficient for the data. - Find the equation for the regression line for the
data, and predict the final grade of a student who misses 3.5 days.
Thursday, October 4, 2012 at 11:05am by mom
IT would be helpful to know what is in the data tables. But of more concern, you ought to know what you are doing in a lab before you start, so that you have an idea of what is the purpose, what data
you will collect, and how to analyze it. I am quite certain you did not do ...
Tuesday, May 21, 2013 at 7:16pm by bobpursley
home economics
Need help looking up data on unemployment from 1998-2008 yearly data and 2008 monthly data. Also finding the GDP deflator from 1998-2008 yearly and quantity
Wednesday, January 28, 2009 at 8:17am by Rosslyn
Excel Help
3) __________ are used to compare sets of data in one chart. Time series Multiple data series Relative series Comparison series Is it multiple data series?
Sunday, July 12, 2009 at 10:29pm by Bryan
My question is about connectionism. I have a program that constructs connectionist models. Presumably I have a number of units; for example, one of them is unit1, which represents a room such as an office. How do I find out the data input for that unit? I saw somewhere ready data input for ...
Thursday, June 3, 2010 at 9:34am by katarzyna rochowicz
2. Any given data set consists of a set of numerical values. Please indicate by stating yes or no for each of the following statements whether or not it could be correct for any data set. (This
question is not referring to the data that is given above.) a. There is no mode. b...
Wednesday, July 7, 2010 at 10:55pm by Anonymous
Identify at least two data structures that are used to organize a typical file cabinet. Post actual pseudocode or code examples of these data structures. Why do you feel it is necessary to emulate
these types of data structures in a computer program? For what kind of work ...
Thursday, October 14, 2010 at 9:16pm by Anonymous
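Since the question asks for actual code examples, here is one possible pair of structures in Python: a dictionary of drawers, each mapping to an alphabetically sorted list of folder labels, mirroring how a physical cabinet supports direct lookup by tab (the drawer names and folders are made up):

```python
import bisect

# drawer label -> alphabetically ordered folder labels
cabinet = {
    "A-F": ["Accounts", "Contracts"],
    "G-M": ["Invoices", "Leases"],
    "N-Z": ["Receipts", "Taxes"],
}

bisect.insort(cabinet["A-F"], "Budgets")  # file a new folder in sorted position
print(cabinet["A-F"])                     # ['Accounts', 'Budgets', 'Contracts']
print("Taxes" in cabinet["N-Z"])          # membership check, like flipping tabs
```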
algebra 1
Enter these data points into some graphing software, and ask the computer to perform a linear regression on the data (find the best-fit line that goes through the data points) The computer should
also be able to calculate the correlation coefficient. If the correlation ...
Wednesday, October 24, 2012 at 11:28am by Jennifer
When estimating distances from a table of velocity data, it is not necessary that the time intervals are equally spaced. After a space ship is launched, the following velocity data is obtained. Use
these data to estimate the height above the Earth's surface at 120 seconds. (0,...
Sunday, November 10, 2013 at 10:40pm by steve
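The question's velocity table is cut off above, so the sketch below uses made-up (time, velocity) samples; the trapezoid rule handles the unequally spaced times the question mentions:

```python
# made-up (time in s, velocity in m/s) samples with unequal spacing
samples = [(0, 0), (10, 180), (15, 320), (32, 600), (59, 1200), (120, 3200)]

height = 0.0
for (t0, v0), (t1, v1) in zip(samples, samples[1:]):
    height += (t1 - t0) * (v0 + v1) / 2  # trapezoid: width times average velocity
print(height)  # estimated height in metres at t = 120 s
```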
2. You are given the following data. Number of Absences Final Grade 0 93 1 90 2 79 3 66 4 60 5 56 - Find the correlation coefficient for the data. - Find the equation for the regression line for the
data, and predict the final grade of a student who misses 3.5 days. Show all ...
Friday, December 30, 2011 at 12:31am by Lauren
The one piece of data you don't have in the problem (no one ever puts this in and that's the secret) is you are supposed to recall that the boiling point occurs when the vapor pressure of a liquid
equals atmospheric pressure. The piece of data you need is pH2O @ 100 C = 760 ...
Saturday, February 18, 2012 at 10:44pm by DrBob222
Programming C++
If we are using two integer variables and their answer is bigger than a simple integer can hold, and we are giving the computer a long integer for the answer, the answer is still false. So I want to know why it is false?
Friday, April 16, 2010 at 11:53pm by Maryam
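The behaviour being asked about is integer overflow: in C++ the product of two ints is computed at int width before it is assigned, so storing the result in a long does not rescue bits that were already lost. Python integers don't overflow, but the effect can be simulated by masking to 16 bits:

```python
def to_int16(n):
    """Interpret the low 16 bits of n as a signed 16-bit integer."""
    n &= 0xFFFF
    return n - 0x10000 if n >= 0x8000 else n

a, b = 300, 300
true_product = a * b             # 90000 needs more than 16 bits
stored = to_int16(true_product)  # what a 16-bit variable would actually hold
print(true_product, stored)      # 90000 24464
```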
use interpolation when the data falls within the range given by the model (when it says "based on data from such and such to such and such" or "this data applies to years x to y" when the year or
time you're studying falls outside this range then you are using extrapolation
Monday, January 14, 2013 at 6:04pm by Kinematics
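The rule described above reduces to a range check; a minimal sketch:

```python
def mode_of_estimate(x, data_min, data_max):
    """Label whether an estimate at x falls inside or outside the model's data."""
    return "interpolation" if data_min <= x <= data_max else "extrapolation"

# a model "based on data from 1998 to 2008"
print(mode_of_estimate(2003, 1998, 2008))  # interpolation
print(mode_of_estimate(2015, 1998, 2008))  # extrapolation
```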
Max and Min, Help Cant find the critical points?
June 7th 2009, 10:46 AM #1
Nov 2008
Max and Min, Help Cant find the critical points?
I would really appreciate it if anyone could help me with the following exercises k, l, m, n. I can't seem to solve the derivatives to find the critical points. Thank you very much in advance.
You need to use partial differentiation for these problems, then solve for
$\left( \frac{\partial f}{\partial x}, \frac{\partial f}{\partial y}\right) = \mathbf{0}$
June 7th 2009, 04:53 PM #2
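The exercises k-n themselves aren't reproduced in the thread, so the sketch below uses a made-up f(x, y); it checks numerically that both partial derivatives vanish at a critical point found by hand:

```python
def f(x, y):
    return x ** 2 + y ** 2 - 2 * x  # made-up example, not one of exercises k-n

def partials(g, x, y, h=1e-6):
    """Central-difference estimates of the partial derivatives of g."""
    gx = (g(x + h, y) - g(x - h, y)) / (2 * h)
    gy = (g(x, y + h) - g(x, y - h)) / (2 * h)
    return gx, gy

# solving 2x - 2 = 0 and 2y = 0 by hand gives the critical point (1, 0)
gx, gy = partials(f, 1.0, 0.0)
print(gx, gy)  # both approximately 0 at the critical point
```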
Display Class Syllabus
Foundations of Mathematics
CLASS CODE: MATH 301 CREDITS: 3
DIVISION: PHYSICAL SCIENCE & ENGINEERING
DEPARTMENT: MATHEMATICS
GENERAL EDUCATION: This course does not fulfill a General Education requirement.
DESCRIPTION: Achieving maturity in mathematical communication. Topics include introduction to mathematical proof, analysis of proof, set theory, mathematical induction, logical reasoning,
elementary number theory, and properties of relations and functions.
TAUGHT: Winter, Fall
CONTENT AND TOPICS: Introduction to mathematical proof, analysis of proof, set theory, mathematical induction and logical reasoning.
GOALS AND OBJECTIVES:
1. Become fluent in reading and writing theorems and proofs and recognizing errors in non-proofs.
2. Gain proficiency in reading and understanding mathematics texts.
3. Sharpen oral and written communication skills.
4. Demonstrate the appropriate use of the language of mathematics.
5. Show interconnectedness between strands of mathematics.
6. Apply concepts of number theory.
7. Understand basic set theory.
8. Describe relations and functions and their properties.
REQUIREMENTS: The current textbook must be purchased by the student. Other requirements may include: examinations, oral presentations, in-class board work, homework exercises, and any other
assignments the instructor may require.
PREREQUISITES: Math 113
EFFECTIVE DATE: August 2001
Proceedings of the American Mathematical Society
ISSN 1088-6826(online) ISSN 0002-9939(print)
On the solvability of systems of bilinear equations in finite fields
Author: Le Anh Vinh
Journal: Proc. Amer. Math. Soc. 137 (2009), 2889-2898
MSC (2000): Primary 11L40, 11T30; Secondary 11E39
Published electronically: May 4, 2009
MathSciNet review: 2506446
Abstract: Given
We show that the system is solvable for any
Additional Information
Le Anh Vinh
Affiliation: Department of Mathematics, Harvard University, Cambridge, Massachusetts 02138
Email: vinh@math.harvard.edu
DOI: http://dx.doi.org/10.1090/S0002-9939-09-09947-X
PII: S 0002-9939(09)09947-X
Keywords: Bilinear equations, finite fields
Received by editor(s): December 1, 2008
Published electronically: May 4, 2009
Communicated by: Ken Ono
Article copyright: © Copyright 2009 American Mathematical Society
The copyright for this article reverts to public domain 28 years after publication.
The myth of hypercomputation, in Turing Festschrift, 2004
"... Consider the idea of computing functions using experiments with kinematic systems. We prove that for any set A of natural numbers there exists a 2-dimensional kinematic system BA with a single
particle P whose observable behaviour decides n ∈ A for all n ∈ N. The system is a bagatelle and can be des ..."
Cited by 14 (5 self)
Consider the idea of computing functions using experiments with kinematic systems. We prove that for any set A of natural numbers there exists a 2-dimensional kinematic system BA with a single
particle P whose observable behaviour decides n ∈ A for all n ∈ N. The system is a bagatelle and can be designed to operate under (a) Newtonian mechanics or (b) Relativistic mechanics. The theorem
proves that valid models of mechanical systems can compute all possible functions on discrete data. The proofs show how any information (coded by some A) can be embedded in the structure of a simple
kinematic system and retrieved by simple observations of its behaviour. We reflect on this undesirable situation and argue that mechanics must be extended to include a formal theory for performing
experiments, which includes the construction of systems. We conjecture that in such an extended mechanics the functions computed by experiments are precisely those computed by algorithms. We set
these theorems and ideas in the context of the literature on the general problem “Is physical behaviour computable? ” and state some open problems.
- Theoretical Computer Science, 1980
"... In the theoretical analysis of the physical basis of computation there is a great deal of confusion and controversy (e.g., on the existence of hyper-computers). First, we present a methodology
for making a theoretical analysis of computation by physical systems. We focus on the construction and anal ..."
Cited by 12 (4 self)
In the theoretical analysis of the physical basis of computation there is a great deal of confusion and controversy (e.g., on the existence of hyper-computers). First, we present a methodology for
making a theoretical analysis of computation by physical systems. We focus on the construction and analysis of simple examples that are models of simple sub-theories of physical theories. Then we
illustrate the methodology, by presenting a simple example for Newtonian Kinematics, and a critique that leads to a substantial extension of the methodology. The example proves that for any set A of
natural numbers there exists a 3-dimensional Newtonian kinematic system MA, with an infinite family of particles Pn whose total mass is bounded, and whose observable behaviour can decide whether or
not n ∈ A for all n ∈ N in constant time. In particular, the example implies that simple Newtonian kinematic systems that are bounded in space, time, mass and energy can compute all possible sets and
functions on discrete data. The system is a form of marble run and is a model of a small fragment of Newtonian Kinematics. Next, we use the example to extend the methodology. The marble run shows
that a formal theory for computation by physical systems needs strong conditions on the notion of experimental procedure and, specifically, on methods for the construction of equipment. We propose to
extend the methodology by defining languages to express experimental procedures and the construction of equipment. We conjecture that the functions computed by experimental computation in Newtonian
Kinematics are “equivalent ” to those computed by algorithms, i.e. the partial computable functions. 1
"... Newtonian systems, bounded in space, time, mass and energy can compute all functions by ..."
Generating surfaces from a set of coordinates [Archive] - OpenGL Discussion and Help Forums
04-01-2009, 03:20 PM
Hello all, I am currently working on a college project which involves building a device capable of generating a 3d surface model of static indoor environment. The device will use a rangefinder to
collect a series of distances and convert them into a set of coordinates relative to its stationary position. These coordinates will be sent to a computer to generate the model. We were planning on
using OpenGL to generate this model.
I have never used OpenGL so I'm currently running through whatever tutorials that I can find. I was wondering if somebody could point in the direction of an ideal method for this procedure. This
isn't something that will be used for professional purposes so there's a large amount of leeway regarding quality. It simply has to be an aesthetically decent representation of an indoor environment.
I was thinking of using GLUT to construct a gigantic triangle strip from every point, but this is something I gleaned from early reading. I'll probably be back with more specific questions later
once I'm more familiar with the API but for now I was just looking for a general direction/method. Any help would be greatly appreciated.
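One common approach, assuming the rangefinder sweep yields a roughly regular rows × cols grid of points: build one triangle strip per adjacent pair of rows, alternating between the two rows. Generating the indices is independent of OpenGL itself (they would then be drawn with GL_TRIANGLE_STRIP). A sketch:

```python
def grid_strip_indices(rows, cols):
    """Triangle-strip vertex indices for a rows x cols grid of scanned points,
    assuming row-major vertex numbering: one strip per adjacent row pair."""
    strips = []
    for r in range(rows - 1):
        strip = []
        for c in range(cols):
            strip.append(r * cols + c)        # vertex on the upper row
            strip.append((r + 1) * cols + c)  # vertex on the lower row
        strips.append(strip)
    return strips

print(grid_strip_indices(3, 4))
```

Each strip of 2·cols indices yields 2·cols - 2 triangles, which keeps vertex reuse high even for dense scans.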
Summary: Comment. Math. Helv. 72 (1997) 618–635
0010-2571/97/040618-18 $ 1.50+0.20/0
© 1997 Birkhäuser Verlag, Basel
Commentarii Mathematici Helvetici
A class of flows on 2-manifolds with simple recurrence
Konstantin Athanassopoulos, Theodoros Petrescou and Polychronis Strantzalos
Abstract. We study D-stable flows on orientable 2-manifolds of finite genus in connection with
the topology of the underlying phase spaces. The description of the phase portrait is used to
prove that a connected orientable 2-manifold of finite genus supporting a non-minimal D-stable
flow must be homeomorphic to an open subset of the 2-sphere or the 2-torus. In the case of the
presence of singularities we necessarily have an open subset of the 2-sphere.
Mathematics Subject Classification (1991). 58F25, 54H20.
Keywords. D-stable flow, 2-manifold of finite genus, recurrence, periodic orbit, local center.
1. Introduction
The main object of study in this article is the class of D-stable flows on 2-manifolds
of finite genus (for definition see section 2). We are concerned with their qualitative
behavior in connection with the topological structure of the underlying manifold.
This point of view is in the center of the theory of transformation groups and
dynamical systems. The class of D-stable flows is proved to be suitable for the
main purposes of the two theories. | {"url":"http://www.osti.gov/eprints/topicpages/documents/record/940/2230264.html","timestamp":"2014-04-20T01:14:20Z","content_type":null,"content_length":"8432","record_id":"<urn:uuid:d373ec8d-d6b4-491e-8b57-38685d510db5>","cc-path":"CC-MAIN-2014-15/segments/1398223205375.6/warc/CC-MAIN-20140423032005-00511-ip-10-147-4-33.ec2.internal.warc.gz"} |
convert 69 F to Celsius
You asked:
convert 69 F to Celsius
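The archived page never shows the answer. For reference, the standard conversion formula C = (F - 32) * 5/9 gives:

```python
def f_to_c(f):
    # standard Fahrenheit-to-Celsius conversion
    return (f - 32) * 5 / 9

print(round(f_to_c(69), 2))  # 20.56
```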
| {"url":"http://www.evi.com/q/convert_69_f_to_celcius","timestamp":"2014-04-17T10:46:04Z","content_type":null,"content_length":"54350","record_id":"<urn:uuid:4a69b43b-85ed-4ab4-9587-4a30f1dd553f>","cc-path":"CC-MAIN-2014-15/segments/1397609527423.39/warc/CC-MAIN-20140416005207-00105-ip-10-147-4-33.ec2.internal.warc.gz"}
The Origin Forum - Autofill Column with Last Known Value
Author Topic
Drbobshepherd Posted - 04/02/2012 : 6:03:00 PM
Is there a Labtalk routine or an X-function to fill cells with the last known value? For example,
col(B)={0 -- -- -- -- -- -- -- 1 -- -- -- -- 12 -- -- 0 --}.
I would like to process it so
col(B)={0 0 0 0 0 0 0 0 1 1 1 1 1 12 12 12 0 0}.
I can have hundreds of thousands of missing values to fill so I'd prefer not to resort to nested loops, which can be quite slow.
Origin Ver. and Service Release (Select Help-->About Origin):OriginPro 8.6.0
Operating System:Window XP
Hideo Fujii
Posted - 04/04/2012 : 4:18:34 PM
Hi Drbobshepherd,
How about the following formula in the Set Column Values tool?
--Hideo Fujii
Posted - 05/02/2012 : 10:34:53 AM
Hideo,
Thanks, your command works perfectly. I see that it is basically an if statement:
if(col(B)[i]==NaN) col(B)[i]=col(B)[i-1];
but I am not familiar with the language. Do you have a reference I could use to learn it?
Posted - 05/02/2012 : 11:04:04 AM
It is more an if-else statement, used in variable assignment. General syntax is:
variable = condition ? value1 : value2;
--> variable will be assigned to value1 if condition is true, or value2 if condition is false.
In your case, the full corresponding command is:
which means:
if col(b)[i] is NaN, put col(b)[i-1] into col(b)[i]
else put col(b)[i] into col(b)[i]
Search for "conditional operator" in the LabTalk help
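For readers outside Origin: the same last-known-value fill can be sketched in plain Python (an illustrative translation of the idea, not LabTalk):

```python
def forward_fill(values):
    """Replace each missing entry (None) with the last known value."""
    out, last = [], None
    for v in values:
        if v is not None:
            last = v
        out.append(last)
    return out

col_b = [0, None, None, None, None, None, None, None,
         1, None, None, None, None, 12, None, None, 0, None]
print(forward_fill(col_b))  # [0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 12, 12, 12, 0, 0]
```

A single linear pass like this avoids the slow nested loops the original poster was worried about.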
Posted - 05/02/2012 : 11:11:05 AM
Note that it can also be used for vector assignment.
For example:
range r1=1, r2=2;
Posted - 05/02/2012 : 2:29:57 PM
Couturier,
I found the reference in Help/Programming/Main/Operators/Conditional. Hard to believe I have been programming for decades without ever being aware of this useful operator. It just fell
through the cracks. This proves I don't know everything, but with your help, I am closer.
Dr. Bob
Hideo Fujii
Posted - 05/03/2012 : 11:03:33 AM
Hi Couturier,
I also want to say thank you. That vector expression is a surprising rediscovery for me, too.
Out of curiosity, I have compared the speed of Couturier's vector expression and the If statement.
range r1=1, r2=2;
sec -e time;
type -a Vector: $(time) sec;
window -t wks;
range r1=1, r2=2;
for(ii=1; ii<=100000; ii++) {
if(r1[ii]>0.5) r2[ii]=1; else r2[ii]=0;
sec -e time;
type -a IF: $(time) sec;
Vector: 0.188 sec
IF: 13.338 sec
So, the conditional operator for vectors is 70(=13.34/0.19) times faster!
Edited by - Hideo Fujii on 05/03/2012 11:38:44 AM | {"url":"http://www.originlab.com/forum/topic.asp?TOPIC_ID=10390","timestamp":"2014-04-21T07:03:44Z","content_type":null,"content_length":"44915","record_id":"<urn:uuid:03639c94-783d-436e-a4b2-48c271482f96>","cc-path":"CC-MAIN-2014-15/segments/1398223204388.12/warc/CC-MAIN-20140423032004-00100-ip-10-147-4-33.ec2.internal.warc.gz"} |
how to do probability with a dart board thats R=9in, r=5in
How would I do probability with a dart board that's R=9in and r=5in? Please help, I can't figure this out.
FWT wrote:How would I do probability with a dart board that's R=9in and r=5in? Please help, I can't figure this out.
At a guess, you're supposed to find the probability that a dart will hit some specified portion of the board...?
Find the areas of each of the regions: the whole board (using R), the inner portion (using r), and the outer ring (subtracting the second value from the first). (Use
$A_R\, =\, \pi R^2$
$A_r\, =\, \pi r^2$
to find the first two areas.)
The probability of hitting the outer or inner area will be (outer or inner area)/(total area).
If I have misunderstood your meaning, please reply with corrections. If you get stuck, please reply showing your work. Thank you!
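Plugging in R = 9 in and r = 5 in as a quick numerical check (added for reference, not part of the original reply):

```python
import math

R, r = 9.0, 5.0                    # outer board radius and inner-region radius, inches
total = math.pi * R**2
inner = math.pi * r**2
outer = total - inner
print(round(inner / total, 3), round(outer / total, 3))  # 0.309 0.691
```

Note the pi factors cancel, so the probabilities reduce to r^2/R^2 = 25/81 and 1 - 25/81 = 56/81.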
Re: how to do probability with a dart board thats R=9in, r=5in
I figured it out.
How to write an interactive GUI in MATLAB
To give you a mini lesson on the way MATLAB handles figures (GUI objects):
MATLAB deals with all graphics objects as Handles. If you don't know what that is, a Handle is a reference to a set of data. In the case of graphics objects in MATLAB, these handles are all simply
double precision numbers.
To access or set any properties of any graphics, you have to use the get()/set() command, along with the handle to the graphics object you are referencing.
There are 3 "levels" of graphics objects in MATLAB:
Figures - the highest level. It is the window that any graphics objects you create go in. eg:
f = figure;
creates an empty figure. f is now set to a handle to the figure you created. To see what kinds of properties figures have, use:
get(f)
It's a good idea to look at what this returns, so you can see what sorts of properties are stored in the figure handle. It stores data like the size of the figure, and other general properties.
Axes - The second level is the "child" of the figure, the axes. Each axes is ALWAYS created in a figure (MATLAB will create one if you create an axes without specifying a figure). The axes is the "place"
for plots, text, labels, and any other graphics object in MATLAB.
f = figure;
a = axes;
This will create a figure then an axes in that figure. To see what sorts of things are stored in axes, use
get(a)
This will list all properties in the axes; it's a good idea again to see what's there to understand what sort of things go into axes.
One thing to notice is the "parent" and "child" fields in axes/figures. In the code above, you created an axes in a figure. Now, if you run
>> child = get(f,'children');
% child==a will be true
it will return a handle to the axes you created - the same handle as a.
Graphics objects
The last level of graphics objects are the actual graphics objects - plots, text, etc. Again, if you create a plot:
f = figure;
a = axes;
p = plot(1,1);
now a has a child (the plot p) so if you run:
child = get(a,'children')
child will point to the same plot as p. Again, it is a good idea to use get(p) to see what sort of things go in a graphics object.
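The handle bookkeeping described above can be mimicked in a few lines of plain Python (a conceptual sketch of numeric handles with parent/children links, not MATLAB's actual internals):

```python
_objects = {}  # handle -> property table

def new_obj(kind, parent=None):
    h = float(len(_objects) + 1)   # handles are just double-precision numbers
    _objects[h] = {"type": kind, "parent": parent, "children": []}
    if parent is not None:
        _objects[parent]["children"].append(h)
    return h

def get(h, prop):
    return _objects[h][prop]

f = new_obj("figure")
a = new_obj("axes", parent=f)
p = new_obj("plot", parent=a)
print(get(f, "children") == [a], get(a, "parent") == f)  # True True
```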
I've bored you enough with this stuff - it really is the most annoying part of MATLAB imo, but just remember you use get() and set() for everything.
Just realized I haven't used set to set anything; here's a simple way to resize the figure f:
>> set(f,'position',[1,1,400,500]);
This sets the figure's position vector, which MATLAB interprets as [left, bottom, width, height]: the lower-left corner goes at pixel [1,1] (measured from the bottom left of your screen) and the window becomes 400 pixels wide by 500 pixels tall.
If making an interactive GUI you will also need to look into callback functions, as they are important to any GUI.
Hope this essay didn't confuse you more
Formula for the angle between two Vectors
What angle is between a = (4,4,4)^T and b = (4,0,4)^T?
Visually, it looks somewhat less than midway between perpendicular and horizontal. 35° would be a good guess.
One could do better than guess by noticing that in going from the tail to the head of a the vertical distance increases by 4 while the horizontal distance increases by 4√2.
Hence the tangent of the angle is 4 / (4√2) = 0.7071,
so the angle with the horizontal is arctan( 0.7071 ) = 35.26°. Since b is in the horizontal plane, the angle between the two vectors must be that value.
The formula for the angle θ between two unit vectors is:
a[u] · b[u] = cos θ
To use this formula with non-unit vectors: (1) normalize each vector, (2) compute the dot product, (3) take the arc cos to get the angle.
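The three steps can be checked numerically (a quick sketch added for reference):

```python
import math

def angle_deg(u, v):
    # (1) norms for normalizing, (2) dot product, (3) arc cos
    dot = sum(x * y for x, y in zip(u, v))
    nu = math.sqrt(sum(x * x for x in u))
    nv = math.sqrt(sum(x * x for x in v))
    return math.degrees(math.acos(dot / (nu * nv)))

print(round(angle_deg((4, 4, 4), (4, 0, 4)), 2))  # 35.26
```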
QUESTION 5:
(Calculator Problem: ) Apply this formula to a and b of the figure. | {"url":"http://programmedlessons.org/VectorLessons/vch10/vch10_5.html","timestamp":"2014-04-17T00:48:35Z","content_type":null,"content_length":"3203","record_id":"<urn:uuid:7ecfb34a-5629-4a4a-be0d-f25f0fdb739d>","cc-path":"CC-MAIN-2014-15/segments/1397609526102.3/warc/CC-MAIN-20140416005206-00020-ip-10-147-4-33.ec2.internal.warc.gz"} |
Write an inequality for the statement, "When a number n is divided by the opposite of 3, the result is at least 39". Simplify, if necessary.
"The opposite of f is at least 38." Write an inequality for the statement.
Which of the following is the simplified form of the inequality, -48x ≥ -192?
Which of the following is the simplified form of the inequality, 6x ≥ 36?
Which of the following is the simplified form of the inequality, 7x ≥ -63?
Which of the following is the simplified form of the inequality, -7x ≥ 70?
Which of the following is the simplified form of the inequality, 14x ≤ -196?
Which of the following is the simplified form of the inequality, -2x ≤ 76?
Which of the following is the simplified form of the inequality, x/7 ≤ 47?
Which of the following is the simplified form of the inequality, x/(-4) ≤ 26?
Which of the following is the simplified form of the inequality, x/30 ≥ 6?
Which of the following is the simplified form of the inequality, "When a number is multiplied by 10 the result is less than 120", if N represents the number?
Write and simplify the inequality obtained from the statement, "When a number N is divided by the opposite of 2, the result is less than 38".
Which of the following choices best represents the simplified form of the inequality obtained from the statement "The product of a number F and the number 13 is less than or equal to the number 169"?
Write and simplify the inequality obtained from the statement, "When a number N is multiplied by the opposite of 3, the result is at least 30".
Which of the following represents the simplest form of the inequality, x/(-6) ≥ 7?
Which of the following choices represents the simplest form of the inequality, x/(-4) ≥ -12?
Which of the following choices represents the simplest form of the inequality, x-108 ≤ - 1412?
"Jimmy must work for at least 80 hours in 8 days, spending equal time every day to complete a project." Write and simplify the inequality obtained for the situation, to find the number of working hours H per day.
What is the simplified form of the inequality obtained from the statement, "One-third of Q is at least 8"?
Is the statement, "-9 is a solution of the inequality 9x ≥ 54", true?
Is the statement, "0 is not a solution of the inequality x/5 > -13", false?
Write an inequality for the statement, "When a number n is divided by the opposite of 4, the result is at least 30". Simplify, if necessary.
"The opposite of f is at least 35." Write an inequality for the statement.
Which of the following is the simplified form of the inequality, 4x ≥ 40?
Which of the following is the simplified form of the inequality, 6x ≥ -54?
Which of the following is the simplified form of the inequality, -7x ≥ 63?
Which of the following is the simplified form of the inequality, 16x ≤ -256?
Which of the following is the simplified form of the inequality, -4x ≤ 152?
Which of the following is the simplified form of the inequality, -11x ≥ -99?
Which of the following is the simplified form of the inequality, x/4 ≤ 10?
Which of the following is the simplified form of the inequality, x/(-4) ≤ 19?
Write and simplify the inequality obtained from the statement, "When a number N is multiplied by the opposite of 3, the result is at least 18".
Which of the following is the simplified form of the inequality, x/21 ≥ 3?
Which of the following is the simplified form of the inequality, "When a number is multiplied by 9 the result is less than 99", if N represents the number?
Which of the following choices best represents the simplified form of the inequality obtained from the statement "The product of a number F and the number 12 is less than or equal to the number 144"?
Write and simplify the inequality obtained from the statement, "When a number N is divided by the opposite of 2, the result is less than 48".
Which of the following represents the simplest form of the inequality, x/(-8) ≥ 7?
Is the statement, "-7 is a solution of the inequality 7x ≥ 63", true?
Which of the following choices represents the simplest form of the inequality, x/(-5) ≥ -7?
Is the statement, "0 is not a solution of the inequality x/4 > -19", false?
Which of the following choices represents the simplest form of the inequality, x-32 ≤ - 98?
"Henry must work for at least 63 hours in 7 days, spending equal time every day to complete a project." Write and simplify the inequality obtained for the situation, to find the number of working hours H per day.
What is the simplified form of the inequality obtained from the statement, "One-third of Q is at least 6"?
More than 20% of the 30 million population of a country are rich. Which of the following inequalities expresses p for the number of rich people in the country? | {"url":"http://www.icoachmath.com/solvedexample/sampleworksheet.aspx?process=/__cstlqvxbefxaxbgdbkxkjgge&.html","timestamp":"2014-04-18T05:31:51Z","content_type":null,"content_length":"90728","record_id":"<urn:uuid:ff1e4596-85f5-423f-8ecb-4fa4dbcc6073>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00269-ip-10-147-4-33.ec2.internal.warc.gz"}
Milestones:Maxwell's Equations, 1860-1871
From GHN
Maxwell’s Equations, 1860-1871
Between 1860 and 1871, at his family home Glenlair and at King’s College London, where he was Professor of Natural Philosophy, James Clerk Maxwell conceived and developed his unified theory of
electricity, magnetism and light. A cornerstone of classical physics, the Theory of Electromagnetism is summarized in four key equations that now bear his name. Maxwell’s equations today underpin all
modern information and communication technologies.
James Clerk Maxwell began his first serious work on electromagnetism when he was a Fellow at Cambridge University, 1854 – 1856. From 1860 – 1865 he was a Professor at King’s College London, during
which time he did some key experiments at the College and at his residence in Kensington. He began to spend his summers at Glenlair, where he also conducted experiments. During his tenure at King’s
College, he published his two most important papers on electromagnetic theory: “On Physical Lines of Force” (1861), which added a critical correction to Ampère's circuital law; and “A Dynamical
Theory of the Electromagnetic Field” (1865), which proposed light as an electromagnetic wave. In both cases he pioneered the use of mathematics in describing the behavior of light. From 1865 – 1871,
Maxwell lived full-time at Glenlair as an independent scholar, during which time he wrote his magnum opus, Treatise on Electricity and Magnetism (published in 1873), which summarized all of the known
theory of electromagnetism, including his own contributions. We specify the dates 1860 – 1871 for the Milestone- this covers Maxwell’s time at King’s College London and subsequently Glenlair, during
which time Maxwell published the two key papers on the theory of electromagnetism, and wrote the Treatise.
Maxwell deduced that light was an electromagnetic wave, thus revolutionizing the fields of electrical science and electrical engineering. He pioneered the use of calculus in electromagnetic science
and independently derived three of the four modern equations that now bear his name. These are Gauss' Law, Gauss' Law for Magnetism, and Ampere’s Law with Maxwell’s correction, and appear in their
original scalar form at equations (115), (56) and (112) in his 1861 paper On Physical Lines of Force. In this paper he also derived a full-time-derivative version of Faraday’s Law (at equations (54)
and (77)), which is a more general version of the fourth modern Maxwell equation. Equation (77) additionally includes a term for the Lorentz force, predating the work of Lorentz. The correction to
Ampere’s Circuital Law introduced the electric displacement current, which ultimately enabled his derivation of the electromagnetic wave equation in his 1865 paper A Dynamical Theory of the
Electromagnetic Field. In the 1873 Treatise on Electricity and Magnetism, Maxwell introduced a condensed, vector form of the equations in quaternion notation (Volume 2, chapter IX General Equations
of the Electromagnetic Field). These included the vector fields E, B, D and H and vector potential A as are used today (albeit originally written in German Script).
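For reference, the four equations in their modern Heaviside vector form (SI units, in vacuum with sources; this notation postdates Maxwell's own):

```latex
\begin{align*}
\nabla \cdot \mathbf{E} &= \rho / \varepsilon_0
  && \text{(Gauss's law)} \\
\nabla \cdot \mathbf{B} &= 0
  && \text{(Gauss's law for magnetism)} \\
\nabla \times \mathbf{E} &= -\frac{\partial \mathbf{B}}{\partial t}
  && \text{(Faraday's law)} \\
\nabla \times \mathbf{B} &= \mu_0 \mathbf{J}
  + \mu_0 \varepsilon_0 \frac{\partial \mathbf{E}}{\partial t}
  && \text{(Amp\`{e}re's law with Maxwell's correction)}
\end{align*}
```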
It was Oliver Heaviside who subsequently introduced the fourth modern Maxwell equation as a partial-time-derivative version of Faraday’s Law, and recast the equations derived by Maxwell in their
well-known vector calculus form, but he acknowledged that it was Maxwell who did the original work. Albert Einstein specifically acknowledged the importance of Maxwell in his development of special
relativity. It was apparently Einstein who originally referred to them as “Maxwell’s Equations,” and this is the way they are known to the broader public.
Maxwell built on the earlier work of many giants, including Ampere, Gauss and Faraday, but he himself was a giant who revolutionized the field of electrical and optical physics, and laid the
groundwork for electrical, radio and photonic engineering, with his experiments, theories and publications. The unification of the theories of electricity, magnetism and light, which comes directly
from Maxwell’s equations, clearly sets Maxwell’s work apart from similar achievements of the time.
List of supporting documents and publications
James Clerk Maxwell. “On Physical Lines of Force,” Phil Mag J Science 4(1861), 161ff.
James Clerk Maxwell. “A Dynamical Theory of the Electromagnetic Field,” Phil. Trans. Royal Soc. 155 (1865), p. 459ff.
James Clerk Maxwell. A Treatise on Electricity and Magnetism, Volume 2 (Oxford, U.K.: Clarendon Press, 1873): Chapter IX, "General Equations of the Electromagnetic Field."
The plaques may be visited at
Glenlair, King’s Building,
Knockvennie, Strand Campus,
Castle Douglas, King's College London,
Kirkcudbrightshire, London WC2R 2LS
DG7 3DF UK | {"url":"http://ieeeghn.org/wiki/index.php/Milestones:Maxwell's_Equations,_1860-1871","timestamp":"2014-04-20T08:25:32Z","content_type":null,"content_length":"41923","record_id":"<urn:uuid:78c9c3ae-048a-47ed-b89a-088a2069082a>","cc-path":"CC-MAIN-2014-15/segments/1397609538110.1/warc/CC-MAIN-20140416005218-00522-ip-10-147-4-33.ec2.internal.warc.gz"} |
Magic Squares - Table of Contents
Structure of Magic and Semi-Magic Squares, by Francis Gaspalou
Methods and Tools for Enumeration
Main Results
The Transformation Method
The Permutation Method
The Intermediate Square Method
Enumeration Programs
Some References
Warning! The different pages of the site are linked to each other. They are not independent. I recommend reading these pages in order.
About the Author What's new On line since September 15, 2005 | {"url":"http://www.gaspalou.fr/magic-squares/index.htm","timestamp":"2014-04-19T17:42:49Z","content_type":null,"content_length":"8929","record_id":"<urn:uuid:22f8ff53-a0f6-4993-9bc9-84bd7c9cca40>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00143-ip-10-147-4-33.ec2.internal.warc.gz"} |
Dual Degree in Economics
Degree Requirements for Dual Degree in Economics (BA) and Mathematics (BS)
A solid training in the mathematical and statistical sciences is fundamental to optimally prepare economics students for graduate school. A dual degree in economics and mathematics will
substantially increase program quality and career prospects for our students as well as enhancing the reputation of the economics program at UCD. Similarly, a solid training in quantitative and
qualitative economic principles offers significant benefits to mathematics majors who seek industrial and/or consulting positions.
Program Requirements
Students majoring in economics and mathematics for the BA/BS dual-degree must declare such by the time they have completed 60 semester hours. No pass/fail grades may count toward the dual degree.
The minimum grade for all economics classes taken at CU Denver counted towards the major is C- (one D- is allowed in one counted economics elective) and the minimum GPA requirement for all CU Denver
economics classes counted towards the major is 2.5. The minimum grade for all mathematics classes taken at CU Denver counted towards the major is C- and the minimum GPA requirement for all CU Denver
mathematics classes counted towards the major is 2.25.
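The grade rules in the paragraph above can be expressed as a short check. This is a hypothetical sketch: the letter-grade point values assume a standard 4.0 scale (not spelled out in the catalog text), `econ_grades_ok` is an invented helper name, and GPA is computed as an unweighted average of equal-credit courses.

```python
GRADE_POINTS = {"A": 4.0, "A-": 3.7, "B+": 3.3, "B": 3.0, "B-": 2.7,
                "C+": 2.3, "C": 2.0, "C-": 1.7, "D+": 1.3, "D": 1.0,
                "D-": 0.7, "F": 0.0}

def econ_grades_ok(grades, is_elective):
    """Check the quoted economics rules: GPA >= 2.5, minimum grade C-,
    with at most one D- allowed and only in a counted elective."""
    gpa = sum(GRADE_POINTS[g] for g in grades) / len(grades)
    if gpa < 2.5:
        return False
    low = [(g, e) for g, e in zip(grades, is_elective)
           if GRADE_POINTS[g] < GRADE_POINTS["C-"]]
    return len(low) <= 1 and all(g == "D-" and e for g, e in low)

print(econ_grades_ok(["B+", "B", "A", "D-"], [False, False, False, True]))  # True
```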
Required Economics Courses
• ECON 2012 - Principles of Economics: Macroeconomics gtPATHWAYS: GT-SS1
• ECON 2022 - Principles of Economics: Microeconomics gtPATHWAYS: GT-SS1
• ECON 4071 - Intermediate Microeconomic Theory gtPATHWAYS:
• ECON 4081 - Intermediate Macroeconomic Theory gtPATHWAYS:
• ECON 4091 - History of Economic Thought gtPATHWAYS:
• ECON 4811 - Introduction to Econometrics gtPATHWAYS:
Total: 18 Hours
Economics Electives
Any five 3-semester-hour courses taken in economics may satisfy this requirement, other than internships and independent studies which require the approval of the department chair. Note: ECON 3801
and ECON 3811 cannot be counted as electives.
One of the following Mathematics courses can be counted as one Economics elective (it may also be counted as one Mathematics required course or one Mathematics elective):
• MATH 3301 Operations Research I
• MATH 3302 Operations Research II
• MATH 4101 Applied Statistics Using SAS and SPSS I
• MATH 4387 Regression Analysis, Modeling and Time Series
• MATH 4390 Game Theory
• MATH 4450 Complex Variables
• MATH 4733 Partial Differential Equations
• MATH 4830 Applied Statistics
• MATH 5350 Mathematical Theory of Interest
Total: 15 Hours (four Economics courses + one Mathematics course, or five Economics courses)
Senior Exercise
Graduating seniors must submit the three best papers that the student wrote in any three separate courses taken in the Department of Economics for the outcomes assessment of the economics program.
The three papers should be handed in at one time in a folder to the economics office, before the first day of the month in which the student plans to graduate.
Required Core Courses for All Mathematics Majors
Lower-Division Courses
• MATH 1401 Calculus I gtPATHWAYS: GT-MA1
• MATH 2411 Calculus II gtPATHWAYS: GT-MA1
• MATH 2421 Calculus III gtPATHWAYS: GT-MA1
Upper-Division Courses
• MATH 3000 Introduction to Abstract Mathematics
• MATH 3191 Applied Linear Algebra
• MATH 4310 Introduction to Real Analysis I
Total: 21 Hours.
Required Courses for the Dual-Degree
• MATH 3200 Elementary Differential Equations
• MATH 4650 Numerical Analysis I
• MATH 4779 Math Clinic
• MATH 4810 Probability
• MATH 4820 Statistics
Applied/Modeling Elective: one course chosen from
• MATH 3301 Operations Research I
• MATH 3302 Operations Research II
• MATH 4387 Regression Analysis, Modeling and Time Series
• MATH 4409 Applied Combinatorics
• MATH 4733 Partial Differential Equations
• MATH 4791 Continuous Modeling
• MATH 4792 Probabilistic Modeling
• MATH 4793 Discrete Math Modeling
• MATH 4794 Optimization Modeling
Depth in Proof-Writing Elective: one course chosen from
• MATH 4110 Theory of Numbers
• MATH 4140 Introduction to Modern Algebra
• MATH 4201 Topology
• MATH 4220 Higher Geometry II
• MATH 4320 Introduction to Real Analysis II (highly recommended)
• MATH 4408 Applied Graph Theory
Students must choose two approved mathematics electives (at least 3 semester hours) above 3000, excluding MATH 4012, 4013, 4014 and 4015.
One of the following Economics courses can be counted as one Mathematics elective (it can also be counted as one Economics elective):
• ECON 4030 Data Analysis with SAS
• ECON 4110 Money and Banking
• ECON 4150 Economic Forecasting
• ECON 4320 Financial Economics
• ECON 4430 Economic Growth
• ECON 4550 Game Theory and Economic Applications
• ECON 4610 Labor Economics
• ECON 4740 Industrial Organization
Total: 27 Hours (eight Mathematics courses + one Economics course, or nine Mathematics courses).
Portfolio, Interview, Survey
In the semester of graduation, students must
• submit a portfolio consisting of two papers, typically written for previous courses, that demonstrate mathematical and writing proficiency;
• participate in an exit interview, which may be scheduled by the department administrative assistant;
• complete a senior survey, available from the department administrative assistant.
Residence Requirements
In addition to the CLAS residence requirements, the Economics Department requires that
• At least six of the major courses (18 semester hours), including at least three courses out of 4071, 4081, 4091 and 4811, must be taken from economics faculty at CU Denver.
• Once a student has enrolled at CU Denver, no courses in the major may be taken outside the economics department without permission from the undergraduate advisor.
And the Mathematics Department requires that
• At least 15 upper-division mathematics credits must be taken at CU Denver. | {"url":"http://www.ucdenver.edu/academics/colleges/CLAS/Departments/economics/Programs/BachelorofArts/Pages/DualDegreeinEconomicsandMathematics.aspx","timestamp":"2014-04-19T02:19:22Z","content_type":null,"content_length":"106848","record_id":"<urn:uuid:f9abc4df-9bc0-4b11-b7f1-2d65fe6639bb>","cc-path":"CC-MAIN-2014-15/segments/1398223207985.17/warc/CC-MAIN-20140423032007-00051-ip-10-147-4-33.ec2.internal.warc.gz"} |
American Mathematical Society
Bulletin Notices
AMS Sectional Meeting Full Program
Current as of Sunday, May 3, 2009 00:21:11
Program | Deadlines | Abstract submission | Registration/Housing/Etc. | Inquiries: meet@ams.org
2009 Spring Eastern Section Meeting
Worcester, MA, April 25-26, 2009 (Saturday - Sunday)
Meeting #1050
Associate secretaries:
Steven H Weintraub, AMS
Saturday April 25, 2009
• Saturday April 25, 2009, 7:30 a.m.-4:00 p.m.
Foyer of Room 102, Higgins Laboratories
• Saturday April 25, 2009, 7:30 a.m.-4:00 p.m.
Exhibit and Book Sale
Room 102, Higgins Laboratories
• Saturday April 25, 2009, 8:00 a.m.-10:50 a.m.
Special Session on Symplectic and Contact Topology, I
Room 104, Salisbury Laboratories
Peter Albers, Purdue University/ETH Zurich palbers@math.purdue.edu
Basak Gurel, Vanderbilt University basak.gurel@vanderbilt.edu
• Saturday April 25, 2009, 8:00 a.m.-10:50 a.m.
Special Session on The Mathematics of Climate Change, I
Room 114, Higgins Laboratories
Catherine A. Roberts, College of the Holy Cross croberts@holycross.edu
Gareth E. Roberts, College of the Holy Cross
Mary Lou Zeeman, Bowdoin College
□ 8:00 a.m.
Introduction to the Mathematics of Climate Change.
Mary Lou Zeeman*, Bowdoin College and Cornell University
□ 9:00 a.m.
Periodic Versus Constant Proportion Fish Exploitation Policies.
Abdul-Aziz Yakubu*, Howard University
Jon Conrad, Cornell University
Mary Lou Zeeman, Bowdoin College
□ 9:30 a.m.
Climate-induced changes in spatial distribution of Northwest Atlantic fish stocks: implications for management.
Janet A. Nye*, NOAA National Marine Fisheries Service, Northweast Fisheries Science Center
Jason S. Link, NOAA National Marine Fisheries Service Northeast Fisheries Science Center
Jonathan A. Hare, NOAA National Marine Fisheries Service Northeast Fisheries Science Center
□ 10:00 a.m.
North Atlantic Climate Variability: Preliminary Analysis of Historical Weather Data and Stable Isotope Time Series from Cave Dripwater and a Holocene-Age Stalagmite from Bermuda.
Steven E Gaurin*, University of Massachusetts
□ 10:30 a.m.
Audience discussion on the role of mathematics in climate change research, moderated by Mary Lou Zeeman.
• Saturday April 25, 2009, 8:00 a.m.-10:50 a.m.
Special Session on Discrete Geometry and Combinatorics, I
Room 116, Higgins Laboratories
Egon Schulte, Northeastern University schulte@neu.edu
Brigitte Servatius, Worcester Polytechnic Institute
• Saturday April 25, 2009, 8:00 a.m.-10:50 a.m.
Special Session on Algebraic Graph Theory, Association Schemes, and Related Topics, I
Room 218, Higgins Laboratories
William J. Martin, Worcester Polytechnic Institute martin@wpi.edu
Sylvia A. Hobart, University of Wyoming
• Saturday April 25, 2009, 8:20 a.m.-10:50 a.m.
Special Session on Analysis of Weakly Differentiable Maps with Constraints and Applications, I
Room 305, Salisbury Laboratories
Fengbo Hang, Courant Institute, New York University
Mohammad Reza Pakzad, University of Pittsburgh pakzad@pitt.edu
• Saturday April 25, 2009, 9:00 a.m.-10:50 a.m.
Special Session on Number Theory, I
Room 154, Higgins Laboratories
John T. Cullinan, Bard College cullinan@bard.edu
Siman Wong, University of Massachusetts, Amherst
• Saturday April 25, 2009, 9:00 a.m.-10:50 a.m.
Special Session on Quasi-Static and Dynamic Evolution in Fracture Mechanics
Room 407, Salisbury Laboratories
Christopher J. Larsen, Worcester Polytechnic Institute cjlarsen@wpi.edu
• Saturday April 25, 2009, 9:00 a.m.-10:50 a.m.
Special Session on Scaling, Irregularities, and Partial Differential Equations, I
Room 406, Salisbury Laboratories
Umberto Mosco, Worcester Polytechnic Institute
Bogdan M. Vernescu, Worcester Polytechnic Institute vernescu@wpi.edu
• Saturday April 25, 2009, 9:00 a.m.-10:50 a.m.
Special Session on Real and Complex Dynamics of Rational Difference Equations with Applications, I
Room 202, Higgins Laboratories
M. R. S. Kulenovic, University of Rhode Island mkulenovic@mail.uri.edu
Orlando Merino, University of Rhode Island
• Saturday April 25, 2009, 9:30 a.m.-10:50 a.m.
Special Session on Effective Dynamics and Interactions of Localized Structures in Schrödinger Type Equations, I
Room 105, Salisbury Laboratories
Fridolin Ting, Lakehead University fting@lakeheadu.ca
• Saturday April 25, 2009, 11:00 a.m.-11:50 a.m.
Invited Address
Fractal spectra between Scylla and Charybdis.
Room 115, Salisbury Laboratories
Umberto Mosco*, Worcester Polytechnic Institute
• Saturday April 25, 2009, 2:00 p.m.-2:50 p.m.
Invited Address
Topology of weakly differentiable maps.
Room 115, Salisbury Laboratories
Fengbo Hang*, Courant Institute of New York University
• Saturday April 25, 2009, 3:00 p.m.-5:50 p.m.
Special Session on Effective Dynamics and Interactions of Localized Structures in Schrödinger Type Equations, II
Room 105, Salisbury Laboratories
Fridolin Ting, Lakehead University fting@lakeheadu.ca
• Saturday April 25, 2009, 3:00 p.m.-6:30 p.m.
Special Session on Analysis of Weakly Differentiable Maps with Constraints and Applications, II
Room 305, Salisbury Laboratories
Fengbo Hang, Courant Institute, New York University
Mohammad Reza Pakzad, University of Pittsburgh pakzad@pitt.edu
• Saturday April 25, 2009, 3:00 p.m.-6:20 p.m.
Special Session on Symplectic and Contact Topology, II
Room 104, Salisbury Laboratories
Peter Albers, Purdue University/ETH Zurich palbers@math.purdue.edu
Basak Gurel, Vanderbilt University basak.gurel@vanderbilt.edu
• Saturday April 25, 2009, 3:00 p.m.-5:50 p.m.
Special Session on Topological Robotics
Room 407, Salisbury Laboratories
Li Han, Clark University
Lee N. Rudolph, Clark University lrudolph@black.clarku.edu
• Saturday April 25, 2009, 3:00 p.m.-5:45 p.m.
Special Session on The Mathematics of Climate Change, II
Room 114, Higgins Laboratories
Catherine A. Roberts, College of the Holy Cross croberts@holycross.edu
Gareth E. Roberts, College of the Holy Cross
Mary Lou Zeeman, Bowdoin College
□ 3:00 p.m.
Weather prediction and Chaos.
James A Yorke*, Univ of Maryland, Math & Physics Depts and IPST
□ 3:30 p.m.
Poleward Expansion of Hadley Cells.
William F. Langford*, University of Guelph, Guelph ON Canada
Greg Lewis, University of Ontario Institute of Technology, Oshawa ON Canada L1H7K4
□ 4:00 p.m.
Understanding the relative humidity distribution of the atmosphere using a simple model.
Paul A O'Gorman*, Department of Earth, Atmospheric, and Planetary Sciences
□ 4:30 p.m.
A component-based framework for semi-empirical, process-based modeling of carbon flux from terrestrial ecosystems.
Sudeep Samanta*, Woods Hole Research Center
Richard A. Houghton, Woods Hole Research Center
□ 5:00 p.m.
Audience discussion on the role of mathematics in climate change research, moderated by Gareth Roberts.
• Saturday April 25, 2009, 3:00 p.m.-5:50 p.m.
Special Session on Number Theory, II
Room 154, Higgins Laboratories
John T. Cullinan, Bard College cullinan@bard.edu
Siman Wong, University of Massachusetts, Amherst
• Saturday April 25, 2009, 3:00 p.m.-5:50 p.m.
Special Session on Discrete Geometry and Combinatorics, II
Room 116, Higgins Laboratories
Egon Schulte, Northeastern University schulte@neu.edu
Brigitte Servatius, Worcester Polytechnic Institute
• Saturday April 25, 2009, 3:00 p.m.-5:20 p.m.
Special Session on Scaling, Irregularities, and Partial Differential Equations, II
Room 406, Salisbury Laboratories
Umberto Mosco, Worcester Polytechnic Institute
Bogdan M. Vernescu, Worcester Polytechnic Institute vernescu@wpi.edu
• Saturday April 25, 2009, 3:00 p.m.-5:50 p.m.
Special Session on Algebraic Graph Theory, Association Schemes, and Related Topics, II
Room 218, Higgins Laboratories
William J. Martin, Worcester Polytechnic Institute martin@wpi.edu
Sylvia A. Hobart, University of Wyoming
• Saturday April 25, 2009, 3:00 p.m.-5:20 p.m.
Special Session on Real and Complex Dynamics of Rational Difference Equations with Applications, II
Room 202, Higgins Laboratories
M. R. S. Kulenovic, University of Rhode Island mkulenovic@mail.uri.edu
Orlando Merino, University of Rhode Island
Sunday April 26, 2009
• Sunday April 26, 2009, 8:00 a.m.-12:00 p.m.
Foyer of Room 102, Higgins Laboratories
• Sunday April 26, 2009, 8:00 a.m.-10:50 a.m.
Special Session on Symplectic and Contact Topology, III
Room 104, Salisbury Laboratories
Peter Albers, Purdue University/ETH Zurich palbers@math.purdue.edu
Basak Gurel, Vanderbilt University basak.gurel@vanderbilt.edu
• Sunday April 26, 2009, 8:00 a.m.-10:45 a.m.
Special Session on The Mathematics of Climate Change, III
Room 114, Higgins Laboratories
Catherine A. Roberts, College of the Holy Cross croberts@holycross.edu
Gareth E. Roberts, College of the Holy Cross
Mary Lou Zeeman, Bowdoin College
□ 8:00 a.m.
Challenges to physical-biological coupling in climate models.
Amala Mahadevan*, Boston University
□ 8:30 a.m.
Multi-decadal Variability of Atlantic Meridional Overturning Circulation in the Community Climate System Model Version 3.
Young-Oh Kwon*, Woods Hole Oceanographic Institution
Claude Frankignoul, LOCEAN, Université Pierre et Marie Curie, Paris, France
□ 9:00 a.m.
How Ice Line Moves: Revisiting Budyko's Energy Balance Model.
Esther R. Widiasih*, University of Minnesota, Twin Cities
□ 9:30 a.m.
Nonlinear threshold behavior during the loss of Arctic sea ice.
J. S. Wettlaufer*, Yale University
□ 10:00 a.m.
Audience discussion on the role of mathematics in climate change research, moderated by Catherine A. Roberts.
• Sunday April 26, 2009, 8:00 a.m.-10:50 a.m.
Special Session on Discrete Geometry and Combinatorics, III
Room 116, Higgins Laboratories
Egon Schulte, Northeastern University schulte@neu.edu
Brigitte Servatius, Worcester Polytechnic Institute
• Sunday April 26, 2009, 8:00 a.m.-10:50 a.m.
Special Session on Algebraic Graph Theory, Association Schemes, and Related Topics, III
Room 218, Higgins Laboratories
William J. Martin, Worcester Polytechnic Institute martin@wpi.edu
Sylvia A. Hobart, University of Wyoming
• Sunday April 26, 2009, 8:00 a.m.-12:00 p.m.
Exhibit and Book Sale
Room 102, Higgins Laboratories
• Sunday April 26, 2009, 8:20 a.m.-10:50 a.m.
Special Session on Analysis of Weakly Differentiable Maps with Constraints and Applications, III
Room 305, Salisbury Laboratories
Fengbo Hang, Courant Institute, New York University
Mohammad Reza Pakzad, University of Pittsburgh pakzad@pitt.edu
• Sunday April 26, 2009, 8:30 a.m.-10:50 a.m.
Special Session on Effective Dynamics and Interactions of Localized Structures in Schrödinger Type Equations, III
Room 105, Salisbury Laboratories
Fridolin Ting, Lakehead University fting@lakeheadu.ca
• Sunday April 26, 2009, 8:30 a.m.-10:50 a.m.
Special Session on Number Theory, III
Room 154, Higgins Laboratories
John T. Cullinan, Bard College cullinan@bard.edu
Siman Wong, University of Massachusetts, Amherst
• Sunday April 26, 2009, 9:00 a.m.-10:50 a.m.
Special Session on Scaling, Irregularities, and Partial Differential Equations, III
Room 406, Salisbury Laboratories
Umberto Mosco, Worcester Polytechnic Institute
Bogdan M. Vernescu, Worcester Polytechnic Institute vernescu@wpi.edu
• Sunday April 26, 2009, 9:00 a.m.-10:40 a.m.
Session for Contributed Papers
Room 407, Salisbury Laboratories
• Sunday April 26, 2009, 11:00 a.m.-11:50 a.m.
Invited Address
Lagrangian submanifolds: From physics to number theory.
Room 115, Salisbury Laboratories
Octav Cornea*, Université de Montréal
• Sunday April 26, 2009, 2:00 p.m.-2:50 p.m.
Invited Address
A rapid survey of coarse geometry.
Room 115, Salisbury Laboratories
Kevin Whyte*, University of Illinois at Chicago
• Sunday April 26, 2009, 3:00 p.m.-5:50 p.m.
Special Session on Symplectic and Contact Topology, IV
Room 104, Salisbury Laboratories
Peter Albers, Purdue University/ETH Zurich palbers@math.purdue.edu
Basak Gurel, Vanderbilt University basak.gurel@vanderbilt.edu
• Sunday April 26, 2009, 3:00 p.m.-5:50 p.m.
Special Session on Discrete Geometry and Combinatorics, IV
Room 116, Higgins Laboratories
Egon Schulte, Northeastern University schulte@neu.edu
Brigitte Servatius, Worcester Polytechnic Institute
• Sunday April 26, 2009, 3:00 p.m.-3:50 p.m.
Special Session on Scaling, Irregularities, and Partial Differential Equations, IV
Room 406, Salisbury Laboratories
Umberto Mosco, Worcester Polytechnic Institute
Bogdan M. Vernescu, Worcester Polytechnic Institute vernescu@wpi.edu
• Sunday April 26, 2009, 3:00 p.m.-5:50 p.m.
Special Session on Algebraic Graph Theory, Association Schemes, and Related Topics, IV
Room 218, Higgins Laboratories
William J. Martin, Worcester Polytechnic Institute martin@wpi.edu
Sylvia A. Hobart, University of Wyoming
Inquiries: meet@ams.org | {"url":"http://ams.org/meetings/sectional/2165_progfull.html","timestamp":"2014-04-19T06:03:19Z","content_type":null,"content_length":"102972","record_id":"<urn:uuid:1c8a67f7-7903-4d0a-b64c-768ee3011eef>","cc-path":"CC-MAIN-2014-15/segments/1398223206770.7/warc/CC-MAIN-20140423032006-00026-ip-10-147-4-33.ec2.internal.warc.gz"} |
Results 11 - 20 of 2,289
- Risk Magazine , 1994
Cited by 251 (0 self)
prices as a function of volatility. If an option price is given by the market we can invert this relationship to get the implied volatility. If the model were perfect, this implied value would be the
same for all option market prices, but reality shows this is not the case. Implied Black–Scholes volatilities strongly depend on the maturity and the strike of the European option under scrutiny. If
the implied volatilities of at-the-money (ATM) options on the Nikkei 225 index are 20 % for a maturity of six months and 18 % for a maturity of one year, we are in the uncomfortable position of
assuming that the Nikkei oscillates with a constant volatility of 20 % for six months but also oscillates with a constant volatility of 18 % for one year. It is easy to solve this paradox by allowing
volatility to be time-dependent, as Merton did (see Merton, 1973). The Nikkei would first exhibit an instantaneous volatility of 20 % and subsequently a lower one, computed by a forward relationship
to accommodate the one-year volatility. We now have a single process, compatible with the two option prices. From the term structure of implied volatilities we can infer a time-dependent
instantaneous volatility, because the former is the quadratic mean of the latter. The spot process S is then governed by the following stochastic differential equation: dS/S = r(t) dt + σ(t) dW
- Journal of Financial Economics , 1980
Cited by 245 (1 self)
The expected market return is a number frequently required for the solution of many investment and corporate finance problems, but by comparison with other financial variables, there has been little
research on estimating this expected return. Current practice for estimating the expected market return adds the historical average realized excess market returns to the current observed interest
rate. While this model explicitly reflects the dependence of the market return on the interest rate, it fails to account for the effect of changes in the level of market risk. Three models of
equilibrium expected market returns which reflect this dependence are analyzed in this paper. Estimation procedures which incorporate the prior restriction that equilibrium expected excess returns on
the market must be positive are derived and applied to return data for the period 1926–1978. The principal conclusions from this exploratory investigation are: (1) in estimating models of the expected
market return, the non-negativity restriction of the expected excess return should be explicitly included as part of the specification; (2) estimators which use realized returns should be adjusted
for heteroscedasticity. 1.
- Review of Financial Studies , 1997
Cited by 237 (12 self)
This article provides a Markov model for the term structure of credit risk spreads. The model is based on Jarrow and Turnbull (1995), with the bankruptcy process following a discrete state space
Markov chain in credit ratings. The parameters of this process are easily estimated using observable data. This model is useful for pricing and hedging corporate debt with imbedded options, for
pricing and hedging OTC derivatives with counterparty risk, for pricing and hedging (foreign) government bonds subject to default risk (e.g., municipal bonds), for pricing and hedging credit
derivatives, and for risk management. This article presents a simple model for valuing risky debt that explicitly incorporates a firm's credit rating as an indicator of the likelihood of default. As
such, this article presents an arbitrage-free model for the term structure of credit risk spreads and their evolution through time. This model will prove useful for the pricing and hedging of
corporate debt with imbedded options, for the pricing and hedging of OTC derivatives with counterparty risk, for
the pricing and hedging of (foreign) government bonds subject to default risk (e.g., municipal bonds), and for the pricing and hedging of credit derivatives (e.g. credit sensitive notes and spread
adjusted notes). This model can also...
, 2001
Cited by 224 (2 self)
Using dealer’s quotes and transactions prices on straight industrial bonds, we investigate the determinants of credit spread changes. Variables that should in theory determine credit spread changes
have rather limited explanatory power. Further, the residuals from this regression are highly cross-correlated, and principal components analysis implies they are mostly driven by a single common
factor. Although we consider several macroeconomic and financial variables as candidate proxies, we cannot explain this common systematic component. Our results suggest that monthly credit spread
changes are principally driven by local supply0 demand shocks that are independent of both credit-risk factors and standard proxies for liquidity.
- Journal of Banking and Finance , 1977
Cited by 217 (2 self)
It is not uncommon in the arrangement of a loan to include as part of the financial package a guarantee of the loan by a third party. Examples are guarantees by a parent company of loans made to its
subsidiaries or government guarantees of loans made to private corporations. Also included would be guarantees of bank deposits by the Federal Deposit Insurance Corporation. As with other forms of
insurance, the issuing of a guarantee imposes a liability or cost on the guarantor. In this paper, a formula is derived to evaluate this cost. The method used is to demonstrate an isomorphic
correspondence between loan guarantees and common stock put options, and then to use the well developed theory of option pricing to derive the formula. 1.
- Journal of Financial Economics
Cited by 210 (1 self)
Abstract: This paper examines the joint time series of the S&P 500 index and near-the-money short-dated option prices with an arbitrage-free model, capturing both stochastic volatility and jumps.
Jump-risk premia uncovered from the joint data respond quickly to market volatility, becoming more prominent during volatile markets. This form of jump-risk premia is important not only in
reconciling the dynamics implied by the joint data, but also in explaining the volatility “smirks” of cross-sectional options data.
- Journal of Finance , 2001
Cited by 207 (3 self)
The purpose of this article is to explain the spread between spot rates on corporate and government bonds. We find that the spread can be explained in terms of three elements: (1) compensation for
expected default of corporate bonds (2) compensation for state taxes since holders of corporate bonds pay state taxes while holders of government bonds do not, and (3) compensation for the additional
systematic risk in corporate bond returns relative to government bond returns. The systematic nature of corporate bond return is shown by relating that part of the spread which is not due to expected
default or taxes to a set of variables which have been shown to effect risk premiums in stock markets Empirical estimates of the size of each of these three components are provided in the paper. We
stress the tax effects because it has been ignored in all previous studies of corporate bonds. 1
- European Finance Review , 1998
Cited by 197 (26 self)
: A three parameter stochastic process, termed the variance gamma process, that generalizes Brownian motion is developed as a model for the dynamics of log stock prices. The process is obtained by
evaluating Brownian motion with drift at a random time given by a gamma process. The two additional parameters are the drift of the Brownian motion and the volatility of the time change. These
additional parameters provide control over the skewness and kurtosis of the return distribution. Closed forms are obtained for the return density and the prices of European options. The statistical
and risk neutral densities are estimated for data on the S&P500 Index and the prices of options on this Index. It is observed that the statistical density is symmetric with some kurtosis, while the
risk neutral density is negatively skewed with a larger kurtosis. The additional parameters also correct for pricing biases of the Black Scholes model that is a parametric special case of the option
pricing model d... | {"url":"http://citeseerx.ist.psu.edu/showciting?cid=457&sort=cite&start=10","timestamp":"2014-04-20T20:27:05Z","content_type":null,"content_length":"38478","record_id":"<urn:uuid:d1d14cd8-f2c5-4c11-8b6a-b81a28a9a2da>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00521-ip-10-147-4-33.ec2.internal.warc.gz"} |
Intusoft Newsletter, Nov 2005
│PV Lithium-Ion Battery chargers│
Two battery charging techniques will be compared:
1. Direct connection
2. Maximum power point tracking (MPPT)
In the first case, the number of series cells is varied so the “best” combination is chosen to work over the expected operating temperature range. Figure 4 shows the schematic, and resulting data is
shown in Figure 5. The charger works best when the peak power voltage exceeds the battery voltage. If the maximum cell temperature is 60 Deg C, then 10 cells are required. The parameter calculation
uses the cell-temperature estimate developed earlier, and adds the concept of a normalized incident radiation, sol, where sol=1 for AM1.5 conditions. Notice the voltage clamp using D3 and V5.
Lithium-Ion batteries must not be overcharged, for safety reasons. The voltage clamp, along with short circuit protection, is internal to consumer cells. OEM cells require the manufacturer to provide
safety features that include:
Over voltage limit, 4.2 volt
Under voltage limit, 3 volt
Short circuit fuse
Figure 4: Charging with 10 IXYS XOD17-48B cells in series.
Figure 5: Battery fully charged on a bright summer day, and less than 50% in winter.
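The cell-temperature and illumination dependence used in the parameter calculation can be sketched with a single-diode cell model and a bisection for the series-string operating point. This is an illustrative sketch only: the parameters below are hypothetical, not the IXYS XOD17-48B datasheet values, and the temperature dependence of the saturation current is ignored.

```python
import math

def cell_current(v, t_cell=25.0, sol=1.0, isc=0.85, voc=0.62, n=1.5):
    """Single-diode PV cell: I = Iph - Io*(exp(V/(n*Vt)) - 1).
    Photocurrent scales with normalized illumination sol (sol=1 at AM1.5)."""
    vt = 8.617e-5 * (t_cell + 273.15)            # thermal voltage kT/q, volts
    iph = isc * sol                               # photocurrent ~ illumination
    io = isc / (math.exp(voc / (n * vt)) - 1.0)   # saturation current from Voc
    return iph - io * (math.exp(v / (n * vt)) - 1.0)

def string_voltage_at(i_target, n_cells=10, t_cell=25.0, sol=1.0):
    """Bisect for the voltage at which an n_cells series string sources i_target."""
    lo, hi = 0.0, n_cells * 0.7
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if cell_current(mid / n_cells, t_cell, sol) > i_target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

With ten cells in series this puts the string open-circuit voltage near 6.2 V at 25 Deg C; sweeping t_cell and sol reproduces the kind of seasonal spread shown in Figure 5.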
This technique wastes incident solar energy and requires more solar cells than a peak power tracking solution would.
The peak power vs. solar illumination is plotted in Figure 6. Peak power tracking can be simulated using a behavioral element to supply constant power to the Lithium-Ion battery as shown in Figure 7.
Figure 6: Solar array characteristic vs. incident solar radiation.
Figure 7: A simulated source includes efficiency and housekeeping power estimates.
Plugging the power values into the array simulation results in the data shown in Figure 8. Unfortunately, the power loss from housekeeping power and converter efficiency wipes out the MPPT gain.
So, it’s unnecessary to proceed further with the design because the simpler direct charge method produces superior results.
Figure 8: Charge times with MPPT are dashed curves.
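The conclusion — that converter losses can wipe out the MPPT gain — can be checked with a back-of-the-envelope calculation. The sketch below uses a made-up I-V curve, a hypothetical 4 V battery pin point, 90% converter efficiency, and 0.15 W of housekeeping power; none of these numbers come from the article's simulation.

```python
from math import exp

def iv_current(v, iph=0.85, io=1e-9, n_vt=0.25):
    """Illustrative array I-V curve (single-diode form)."""
    return max(iph - io * (exp(v / n_vt) - 1.0), 0.0)

def max_power(v_step=0.001, v_max=6.0):
    """Scan the curve for the maximum power point."""
    best_v, best_p, v = 0.0, 0.0, 0.0
    while v <= v_max:
        p = v * iv_current(v)
        if p > best_p:
            best_v, best_p = v, p
        v += v_step
    return best_v, best_p

v_mp, p_mp = max_power()
p_direct = 4.0 * iv_current(4.0)     # array pinned at the battery voltage
p_mppt = 0.90 * p_mp - 0.15          # converter efficiency and housekeeping
```

Even though p_mp exceeds the directly pinned power, the efficiency and housekeeping deductions leave the MPPT harvest below the direct-connection harvest for these numbers — the same qualitative result as Figure 8.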
│Poly Crystalline Solar Panel│
Poly crystalline panels use a less efficient and lower cost technology. This technology is widely used in lower power modules. The cost for a 10-watt module runs about $100. These small panels are
used widely in undeveloped regions to operate electronic devices, such as TV’s or PC’s, for a few hours a day. The arrays are built to charge a Lead-Acid gel type battery. This battery can be
“over-charged” to equalize the SOC in the series cells. It is necessary to keep the over-charge voltage low enough to prevent formation of Hydrogen gas. The poly crystalline PV array isn’t accurately
modeled using the mono-crystalline model, as shown in Figure 9.
Figure 9: The mono-crystalline model has errors at the endpoints for a Shell ST10 module.
Several additions to the model are needed for the poly crystalline array. First, in 1996, Zekry et al. [4] pointed out that lateral resistance in the array could be modeled by using a distributed
model, that is, parallel diodes and photocurrent generators connected by resistors. If 4 such sections are used, the problem can be reduced to 2 sections by recognizing symmetry between the 2 anode
contacts. Adding this effect gives a fit at higher array voltages (high diode current). Next, the generation-recombination current needs to be included. This was done by adding a parallel diode with a 2*N emission coefficient. The extra diode's series resistance and saturation current were used as free parameters to make the best fit to the data. Figure 10 shows how the model compares to the
published data at 20 and 60 Deg C for the Shell ST10 module. Figure 11 shows the model schematic.
The solar array peak power data are shown in Figure 10.
Figure 10: The final model compared with published data for the Shell ST10 module.
Figure 11: A poly crystalline PV Solar Array Model.
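The parallel diode with the 2*N emission coefficient can be illustrated with a lumped two-diode model; the distributed lateral-resistance sections are omitted here, and the parameters are invented for illustration rather than fitted to the Shell ST10 data.

```python
from math import exp

def two_diode_current(v, iph=0.7, io1=1e-9, io2=1e-6, n=1.5, t=25.0):
    """Lumped two-diode PV cell: the second diode, with a 2*N emission
    coefficient, carries the generation-recombination current that a
    single-diode fit misses at low diode current."""
    vt = 8.617e-5 * (t + 273.15)                  # thermal voltage, volts
    i_d1 = io1 * (exp(v / (n * vt)) - 1.0)        # diffusion diode, N
    i_d2 = io2 * (exp(v / (2.0 * n * vt)) - 1.0)  # recombination diode, 2*N
    return iph - i_d1 - i_d2
```

At low cell voltage the 2*N diode dominates the dark current; at high voltage the N diode takes over, which is what lets the combined model track both endpoints of the measured curve.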
Next, the steps from the previous Lithium-Ion battery charger design can be repeated using the poly crystalline model and a Lead-Acid battery. First, the solar array power vs. illumination curves are used to
determine the peak power tracking points vs. illumination. These data are shown in Figure 12.
Figure 12: Peak power varies with solar input and temperature.
Fortunately, we already have a Lead-Acid battery model, so it’s only necessary to connect the array directly to the battery to get the charge times. Then, using the previously computed peak power
operating points, the charge times can be calculated using a constant power source to charge the battery. The results are shown in Figure 13.
Figure 13: Charge time with MPPT is comparable to direct charging except for low solar illumination.
The only substantial benefit of MPPT is to increase the SOC from 54% to 60% in the winter. That would not seem to justify the increased complexity in this application. So, when would you
use peak power point tracking? Evidently the overhead in terms of power and circuit complexity must justify the inclusion of MPPT. For battery chargers, once you get to a 100% state of charge, then
nothing you do will improve the situation. That means MPPT is only useful when the solar radiation is low, or varies a lot in cloudy climate. Even then, it’s only a 10% or so improvement, which is
easily gained by increasing the size of the solar panel. On the other hand, if the battery is oversized, it will never fully charge and the maximum energy can be recovered. That's equivalent to
returning energy back to the power grid. Moreover, when energy is returned to the power grid, there must be a switching power supply so that its inefficiency doesn’t penalize the MPPT. These larger
systems won’t notice the housekeeping power loss. Saving 10% on a $50,000 installation is certainly worth adding MPPT, even at the cost of a microprocessor.
The peak power operating point can be tracked using a controller, or by using an estimate based on array temperature. In either case, the controlled state variable must be chosen. Normally the output
current would be selected. But the output current is proportional to power and there will be two possible operating points.
Therefore, controlling either power or output current directly results in a small signal gain reversal at the maximum power point, so that the control loop would be statically unstable. That’s a
common problem encountered in control systems. For example, an airplane (or car) that has its center of gravity too far to the rear will try to fly (or drive) backwards if there is an angular
disturbance. The problem is usually solved by controlling a different parameter in the “inner” control loop. Here, the array voltage can be controlled without any static instability, so the array
voltage becomes the output of the inner or high-speed control loop. For an open-loop controller it’s necessary to estimate the array voltage vs. temperature. There will be some uncertainty in the
estimate because of ageing and a slight dependence on solar illumination.
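The gain reversal can be seen numerically: below the maximum power point dP/dV is positive, above it negative, so a loop that servos power (or output current) directly sees its small-signal error gain change sign. The array curve below is purely illustrative, not the simulated panel from the article.

```python
from math import exp

def p_of_v(v, iph=0.85, io=1e-9, n_vt=0.25):
    """Array power for an illustrative single-diode I-V curve."""
    return v * (iph - io * (exp(v / n_vt) - 1.0))

def dp_dv(v, h=1e-4):
    """Small-signal gain of power with respect to array voltage."""
    return (p_of_v(v + h) - p_of_v(v - h)) / (2.0 * h)
```

With these numbers the maximum sits near 4.4 V, so dp_dv(3.0) is positive while dp_dv(5.0) is negative — the sign flip that makes power a statically unstable direct control variable.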
To eliminate this error, the peak power point can be measured and the control signal modified to settle at the peak. Measuring the peak power point requires the introduction of some kind of
disturbance. The approach taken by most commercial operations is to use a microprocessor unit (MPU) that perturbs the operating point and corrects the output based on which side of the maximum the current set point falls. Another analog approach (the MPU could do this also) is to introduce a dithering frequency and design a "linear" control law that seeks the maximum power point. The
microprocessor-based approach can use the “linear” control laws, or it can be based on nonlinear control techniques, such as fuzzy logic or even neural networks. The advantage of using the “linear”
approach lies in applying the well-established control system theory to describe the loop dynamics.
To find a maximum of a function, its derivative can be taken; the maximum occurs where the derivative is zero. Differentiating P = IV gives the following:

dP = I*dv + V*di

Setting dP to zero locates the maximum power point.
Notice that dv and di can be considered “small” signal parameters. So, if a relatively low frequency AC signal is introduced into the system, then di and dv can be extracted using a band-pass filter.
Then the value at the dither frequency can be evaluated by demodulating the result. Now, here’s where simulation can be used to design the control system. Rather than writing the equations, the
simulator can be used to evaluate the control law and set scale factors. Figure 14 shows how that’s done for the default array. First, the previously discussed solar panel model is used in X3, then
X1 models a buck regulator operating in continuous conduction mode. Nodes dv and di are the dither signals extracted as though they are the only AC signals present. Later-on the PWM switching signal
must be filtered. B2 is the control law for making the array voltage equal to the control signal. Finally, Vc is metered to see how it varies as the array voltage is swept using the control signal,
Vs. Vs includes the dither signal from V2. Figure 15 shows the transient simulation results. Next, the signal from Vc is used for control and a step change in load is introduced to check on loop
dynamics. This was accomplished by changing the B2 expression to
Figure 14: System level simulation.
Figure 15: The simulated control signal goes through zero at maximum power.
Figure 16: Power tracking works when V6 steps down 20ms into the simulation.
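The dither-and-demodulate idea can be sketched without any circuit detail: superimpose a small sinusoid on the array voltage, take the resulting dv and di, form I*dv + V*di, and synchronously demodulate it with the dither. The sign of the result says which side of the maximum the operating point sits on. The array model, dither amplitude, and step count below are all invented for illustration.

```python
from math import exp, sin, pi

def array_i(v, iph=0.85, io=1e-9, n_vt=0.25):
    """Illustrative array I-V curve (single-diode form)."""
    return iph - io * (exp(v / n_vt) - 1.0)

def demodulated_dp(v_op, amp=0.01, steps=200):
    """Average of (I*dv + V*di)*sin over one dither cycle at operating
    point v_op: positive below the maximum power point, negative above."""
    acc = 0.0
    for k in range(steps):
        ph = 2.0 * pi * k / steps
        dv = amp * sin(ph)          # dither on the array voltage
        v = v_op + dv
        i = array_i(v)
        di = i - array_i(v_op)      # resulting current perturbation
        acc += (i * dv + v * di) * sin(ph)  # synchronous demodulation
    return acc / steps
```

No band-pass filters are modeled here; in the circuit they would isolate dv and di at the dither frequency before the multiply-and-average step.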
Having made an acceptable control law, the circuit implementation is needed. Three multiplications are required. That’s fairly expensive if general purpose analog multipliers are used; it’s not so
bad for a microprocessor, but the sample rate needs to be high compared with the dither frequency. Continuing from an analog design perspective, the I*dv and V*di products are no more than variable
gain circuits.
Sometime in the 60s an unknown author described how to do this economically using a field effect transistor. Both JFETs and MOSFETs operated in the “linear” region (that’s physics talk for the
engineer's saturated region) follow the equation shown below for the grounded source configuration:

Id = BETA*(Vgs - Vth - Vds/2)*Vds

For MOSFETs, you set

BETA = Kp*W/L

where W is the channel width, L is the length, and BETA and Kp are gain parameters. Both MOSFETs and JFETs work with slightly negative Drain-Source voltages (less than a diode drop). Now if you connect a large valued resistor between the drain and gate and an equal valued resistor to a control signal, Vc, then

Vgs = (Vds + Vc)/2

Substituting back into the first equation,

Id = BETA*(Vc/2 - Vth)*Vds

And the conductance is then proportional to the control voltage. Placing the FET at the input to an op-amp has the effect of making a 2-quadrant multiplier shown in Figure 17.
Figure 17: A simple inexpensive multiplier.
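Assuming the triode-region law Id = BETA*(Vgs - Vth - Vds/2)*Vds, the equal-resistor connection gives Vgs = (Vds + Vc)/2, the Vds^2 term cancels, and the conductance g = Id/Vds = BETA*(Vc/2 - Vth) is independent of Vds. A quick numeric check of that cancellation, with invented device values:

```python
def fet_id(vgs, vds, beta=2e-3, vth=-2.0):
    """Square-law triode-region drain current (illustrative JFET values)."""
    return beta * (vgs - vth - vds / 2.0) * vds

def controlled_g(vc, vds, beta=2e-3, vth=-2.0):
    """Conductance with equal resistors from drain to gate and Vc to gate."""
    vgs = (vds + vc) / 2.0      # gate held at the resistor-divider midpoint
    return fet_id(vgs, vds, beta, vth) / vds

g_pos = controlled_g(vc=0.0, vds=0.05)
g_neg = controlled_g(vc=0.0, vds=-0.05)   # same conductance for +/- Vds
```

Both calls return beta*(0/2 - (-2.0)) = 4e-3 siemens regardless of the sign of Vds, which is the distortion cancellation that makes the circuit a usable variable resistor.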
As the FET gate voltage exceeds threshold, the FET becomes a resistor. The threshold voltage varies considerably with time and temperature, so you should keep away from the threshold by a volt or so.
As the voltage is increased further, the resistance decreases. In the limit, the resistance can’t decrease below the bulk resistance. These upper and lower boundaries limit the useful range of this
gain control technique to something on the order of 10:1. For a larger dynamic range, diodes can be used. The circuit is more complex because the diode’s voltage offset must be cancelled by using a
pair of diodes. Figure 18 shows the basic idea.
Figure 18: The conductance between Rhi and Rlo is proportional to Ibias.
In forward conduction, the diode equation is
q = charge of an electron = 1.60218E-19 coulombs
N = emission coefficient
k = Boltzmann's constant = 1.38066E-23 (coulomb volt)/kelvin
T = absolute temperature in kelvin
Io = a device parameter (the saturation current)
Solving for conductance
Then substituting I back into the conductance equation
This result holds over a remarkable range of current and is the basis for nearly all IC multiplier circuits.
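That proportionality (g = I/(N·Vt) with Vt = kT/q) is easy to verify numerically from the Shockley equation; the Io, N and T values below are illustrative, not taken from the article:

```python
import math

Q_E, K_B = 1.60218e-19, 1.38066e-23  # electron charge, Boltzmann's constant

def diode_current(v, Io=1e-14, N=1.0, T=300.0):
    # Shockley diode equation: I = Io*(exp(qV/NkT) - 1)
    return Io * (math.exp(Q_E * v / (N * K_B * T)) - 1.0)

def numeric_conductance(v, dv=1e-6):
    # Small-signal conductance g = dI/dV by central finite difference
    return (diode_current(v + dv) - diode_current(v - dv)) / (2.0 * dv)

# Compare against the analytic claim g ~= I/(N*Vt), with Vt = kT/q:
Vt = K_B * 300.0 / Q_E
v_bias = 0.55
ratio = numeric_conductance(v_bias) / (diode_current(v_bias) / Vt)
```

The ratio stays within a fraction of a percent of 1 over many decades of bias current, which is exactly why the diode (and the diode-connected transistor) makes such a good current-controlled conductance.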
Figure 19 shows an IC implementation that can also be used with transistor arrays. It is limited at high I by the bulk resistance and at the low end by leakage current. You can easily coax an
accurate result over a 3-decade range with this circuit. The diode-connected transistors (Q5 and Q6) reduce the effect of bulk collector resistance because of the transistor action. That extends the
small-signal performance range by the transistor current gain. The gain control current is supplied to the Igain node of Q3. Current mirrors supply Igain to each of the diode connected transistors,
and a balancing current to their emitters. Rhi and Rlo are the variable-resistor terminals, which must be biased between the Vcc and Vee power rails.
Figure 19: Current mirrors bias the diode connected transistors, Q5 and Q6.
The FET resistance modulation scheme will be used for the low cost analog peak power tracker.
The final product, in which dp is multiplied by v(10), can actually be viewed as a phase detector: the polarity of v(10) selects the positive or negative dp result. Moreover, dp can also be limited, so the operation can be carried out with an exclusive-OR gate. Figure 20 shows the result, comparing the XOR with a multiplier for sol=.5. The schematic for this test case is in the drawing file named MPPT_MUL1.DWG. The PWM control was set to voltage mode and the array voltage was swept from 12 volts to 21 volts from about 20 ms to 50 ms. A 1 kHz dither signal was inserted in the control loop such that the AC array voltage was the same for all DC sweep values. Notice that the ripple in power minimizes at the peak power point.
Figure 20: XOR circuit replaces multiplier.
The raw products were filtered in IntuScope using 5th-order Bessel filters with Delay=3m. You must therefore look at the data 3 ms earlier to get the correct steady-state values. The cursors are set at the zero crossing and 3 ms prior to the zero crossing. Both the XOR and the multiplier produce about the same zero crossing, and the peak power point is correctly detected at the zero crossing in both cases. Power stays within 2% of the peak over a 2.5-volt range of array voltage. The array voltage should be easily estimated within that band by accounting for solar cell temperature. That suggests the simpler open-loop MPPT would be acceptable for anything but the highest cost systems. Temperature can be measured using a forward-biased silicon diode attached to the array. The array voltage for MPPT is then proportional to the diode voltage.
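The dither-and-correlate scheme above boils down to a sign-only (XOR-like) perturb-and-observe rule: step the operating voltage, and keep the direction when sign(dP)·sign(dV) is positive, reverse it otherwise. A toy sketch (the array curve and step size are invented for illustration):

```python
def array_power(v):
    # Toy P-V curve with a single maximum at v = 17 (stand-in for an array)
    return max(0.0, 100.0 - (v - 17.0) ** 2)

def mppt_step(v, v_prev, p_prev, dv=0.1):
    # XOR-style update: the new step direction is sign(dP) * sign(dV),
    # exactly what the multiplier (or XOR gate) in the text computes.
    p = array_power(v)
    direction = 1.0 if (p - p_prev) * (v - v_prev) > 0 else -1.0
    return v + direction * dv, v, p

v, v_prev, p_prev = 12.0, 11.9, array_power(11.9)
for _ in range(200):
    v, v_prev, p_prev = mppt_step(v, v_prev, p_prev)
# v now hunts in a small limit cycle around the peak at 17 V
```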
To determine the “wire length” of each winding in Magnetics Designer you can do the following:
1. From the dropdown menu select Edit > Add button
2. Check “for each winding,” “For Inductors,” and “For Transformers”
3. In the description field type “Winding Length”
4. In the button label field type “WireLength”
5. Press the “>>next>>” button
6. In the equation section add the following:
WireLength = 0
sum_build = 0
shape = shape == wShape ? 3.14159 : 4
for( winding = 0 ; winding < Nmax ; winding = winding+1 )
WireLength(winding) = N(winding)*shape*(ID + 2*sum_build + build(winding) - tw(winding))
sum_build = sum_build + build(winding)
7. Press the “>>next>>” button
Note: If you want to make this value available for future designs you need to add what is shown to the bottom of the user.equ file.
8. Press the “>>finish>>” button
At the bottom of each winding you should see the Winding Length given in cm. Also the values will be shown in the summary report.
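For readers who want to experiment outside Magnetics Designer, here is a hedged Python transcription of the recipe above (variable names mirror the equation block; ID is taken to be the inside dimension of the core/bobbin, build(w) the radial build and tw(w) the wire thickness of winding w, all in cm — check these assumptions against your own design):

```python
import math

def winding_lengths(N, build, tw, ID, round_shape=True):
    # Each winding's mean turn sits on top of the builds of the windings
    # beneath it, mirroring the loop in the Magnetics Designer equation.
    shape = math.pi if round_shape else 4.0  # perimeter factor: round vs square
    lengths, sum_build = [], 0.0
    for w in range(len(N)):
        lengths.append(N[w] * shape * (ID + 2.0 * sum_build + build[w] - tw[w]))
        sum_build += build[w]
    return lengths

# Two-winding example with made-up numbers (cm):
L = winding_lengths(N=[100, 20], build=[0.2, 0.1], tw=[0.05, 0.05], ID=2.0)
```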
Intusoft has implemented two powerful new features for IsSpice4 transient analysis. Build 2641 of version 8.x.11 adds new convergence options that improve the accuracy and speed of transient analysis.
The first option is called VSECVMAX. If this option is set, the value is used as an upper limit for applying the VSECTOL convergence algorithm during a transient analysis. Setting this option
prevents the VSECTOL algorithm from getting trapped into divide-by-zero situations. A typical value is 1E4. Note that this option is only valid when VSECTOL is set. As an example:
.options vsectol = 1u vsecvmax=1e4
The second new feature is a novel way of using ITL4 in transient analysis. Recall that ITL4 is an option commonly used in SPICE-based simulators to control the number of iterations at each transient time point. Until now, users have set ITL4 to a large value (typically 100-500) in order to avoid the dreaded “time step too small” error in transient analysis. The problem with this approach is that it can make the transient analysis unnecessarily long: since the number is fixed, the simulator will only scale back the time step after it has gone through that many iterations at a time point, which results in a long simulation run time.
In Build 2641, we are introducing a new dynamic control for ITL4. Simply set ITL4=0 in an .options statement and the simulator will automatically determine the maximum number of iterations to try for each transient time point before scaling back the time step. At the end of the simulation you can check the “maximum transient iterations” statistic to see the largest number of iterations the simulator needed to complete the run. This figure is usually well below the value that you might have set for ITL4.
As an example, consider the IR1150 TestBridge drawing. Running a 20 ms transient simulation with ITL4=200 versus ITL4=0 results in the following statistics at the end of the run:
As you can see, there is a big improvement in both the total number of iterations and the total run time when using the new algorithm. We used an AMD Athlon64 3200 with 1 GB of RAM to run these tests.
We encourage you to try this new approach when simulating hard-to-converge and long transient analysis designs. Please report your findings to us (rmktg@intusoft.com). This will in turn help us fine-tune this algorithm and make it perform even better in future releases.
vector transformation, cylindrical to Cartesian
1. The problem statement, all variables and given/known data
I have a result which is in the form (cylindrical coordinates):
$$ A\boldsymbol{e_{\theta }}=kr\boldsymbol{e_{\theta }} $$
And I have to provide the answer in cartesian coordinates.
2. Relevant equations
I know that the unit vectors:
$$ \boldsymbol{\hat{\theta}} = \begin{bmatrix} -\sin\theta \\ \cos\theta \end{bmatrix} $$
and that
$$ r=\sqrt{x^{2}+y^{2}} $$
3. The attempt at a solution
$$ kr\boldsymbol{e_{\theta }} = k\sqrt{x^{2}+y^{2}}\left( -\sin\left(\tan^{-1}\frac{y}{x}\right)\boldsymbol{e_{x}} + \cos\left(\tan^{-1}\frac{y}{x}\right)\boldsymbol{e_{y}} \right) $$
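For reference, the identities sin(tan⁻¹(y/x)) = y/√(x² + y²) and cos(tan⁻¹(y/x)) = x/√(x² + y²) (valid for x > 0) collapse the expression above to k(−y e_x + x e_y). A quick numeric check of that simplification:

```python
import math

# Sample point with x > 0, chosen arbitrarily.
k, x, y = 2.0, 3.0, 4.0
r = math.hypot(x, y)          # sqrt(x**2 + y**2)
theta = math.atan(y / x)

# k*r*(-sin(theta), cos(theta)) should equal k*(-y, x).
vec = (k * r * -math.sin(theta), k * r * math.cos(theta))
expected = (-k * y, k * x)
```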
I can't seem to get further than this. I don't know if I've made a mistake, or whether there is some trig identity that can help me simplify further, but I know the final answer and it is much simpler.
Any help much appreciated.
Thanks in advance
P.S. Why does the LaTeX break down when the equation is too long?
The Vinogradov-Bombieri theorem
, 2003
The problem of quickly determining whether a given large integer is prime or composite has been of interest for centuries, if not longer. The past 30 years has seen a great deal of progress, leading
up to the recent deterministic, polynomial-time algorithm of Agrawal, Kayal, and Saxena [2]. This new “AKS test ” for the primality of n involves verifying the
Abstract. Let p denote a prime. In this article we provide the first published lower bounds for the greatest prime factor of p − 1 exceeding (p − 1) 1 2 in which the constants are effectively
computable. As a result we prove that it is possible to calculate a value x0 such that for every x>x0 there is a p<xwith the greatest prime factor of p − 1 exceeding x 3 5. The novelty of our
approach is the avoidance of any appeal to Siegel’s Theorem on primes in arithmetic progression. 1.
ABSTRACT. We give an effective version with explicit constants of a mean value theorem of Vaughan related to the values of ψ(y, χ), the twisted summatory function associated to the von Mangoldt
function Λ and a Dirichlet character χ. As a consequence of this result we prove an effective variant of the Bombieri-Vinogradov theorem with explicit constants. This effective variant has the
potential to provide explicit results in many problems. We give examples of such results in several number theoretical problems related to shifted primes. For integers a and q ≥ 1, let 1.
CBSE Sample Paper Physics Class XI 2009
Sample Paper – 2009
Class – XI
Subject - Physics
TIME: 3hr. MM: 70
General Instructions.
1. Questions 1-8 carry 1 mark each, questions 9-18 carry 2 marks each, questions 19-27 carry 3 marks each and questions 28-30 carry 5 marks each.
2. There is no overall choice, but an internal choice is given in one 2-mark question, in two 3-mark questions and in all three 5-mark questions.
3. You may use the following physical constants wherever necessary:
Speed of light c = 3 x 10^8 m s^-1
Gravitational constant G = 6.6 x 10^-11 N m^2 kg^-2
Gas constant R = 8.314 J mol^-1 K^-1
Mass of electron = 9.110 x 10^-31 kg
Mechanical equivalent of heat = 4.185 J cal^-1
Standard atmospheric pressure = 1.013 x 10^5 Pa
Absolute zero, 0 K = -273.15 °C
Acceleration due to gravity g = 9.8 m s^-2
1. A composite physical quantity Q is defined in terms of moment of inertia I, force F, velocity V,
work W and length L as
Q = I F V^2 / (W L^3)
Find the dimensions of Q and identify it.
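One way to check a dimensional-analysis answer like this is to track (M, L, T) exponents as integer triples that add like logarithms; a quick sketch using the standard dimensions [I] = M L², [F] = M L T⁻², [V] = L T⁻¹, [W] = M L² T⁻²:

```python
# Dimensions encoded as (mass, length, time) exponent triples.
I = (1, 2, 0)    # moment of inertia: M L^2
F = (1, 1, -2)   # force:             M L T^-2
V = (0, 1, -1)   # velocity:          L T^-1
W = (1, 2, -2)   # work:              M L^2 T^-2
L = (0, 1, 0)    # length:            L

def combine(*terms):
    # Each term is a (dimension, power) pair; exponents simply add.
    return tuple(sum(d[i] * p for d, p in terms) for i in range(3))

Q = combine((I, 1), (F, 1), (V, 2), (W, -1), (L, -3))
# Q == (1, 0, -2), i.e. M T^-2: force per unit length, the dimensions
# of surface tension (or of a spring constant).
```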
2. Explain why a man who falls from a height onto a cemented floor receives more injury
than when he falls from the same height onto a heap of sand.
3. Is it possible to have a collision in which all the kinetic energy is lost? If so, cite an example.
4. You are given two spheres of the same mass, size and appearance, but one of them is solid while the other is hollow. If they are allowed to roll down an incline, which
one will reach the bottom first?
5. Can every oscillatory motion be treated as simple harmonic motion in the limit of small amplitude?
6. If the earth were at one half its present distance from the sun, how many days would there be in a year?
7. A 25 cm long tube closed at one end is lowered, open end down, into a fresh water lake. If one-third of the tube is filled with water, what is the distance between the surface of the lake
and the level of water in the tube?
8. Why is the zeroth law of thermodynamics not called the third law, even though it was discovered after the first and second laws?
9. Define uniform velocity of an object moving along a straight line. What will be the shapes of the velocity-time and position-time graphs for such a motion?
10. What do you mean by a projectile? A projectile is fired with velocity v making an angle θ with the horizontal. Show that its path is parabolic.
11. Define the terms momentum and impulse, and derive the relation between them.
12. Define and explain the moment of inertia of a rigid body about an axis of rotation. Hence define radius of gyration.
13. What do you understand by gravitational field and gravitational field intensity? Show that the acceleration due to gravity is a measure of the gravitational field of the earth at a point.
Define escape velocity. Derive an expression for the escape velocity of an object from
the surface of a planet.
14. Explain terminal velocity and mention its importance. Derive Stokes' law.
14. State Newton's law of cooling. Deduce the relation
log (θ - θ0) = -kt + c
15. In a refrigerator, heat from inside at 277 K is transferred to a room at 300 K. How many joules of heat will be delivered to the room for each joule of electrical energy consumed ideally?
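For question 15, the ideal limit follows from the Carnot relation Q_cold/W = T_cold/(T_hot − T_cold) together with the energy balance Q_hot = W + Q_cold; a quick numeric check:

```python
# Ideal refrigerator pumping heat from 277 K (inside) to 300 K (room).
T_cold, T_hot = 277.0, 300.0
q_cold_per_joule = T_cold / (T_hot - T_cold)  # heat removed per joule of work
q_hot_per_joule = 1.0 + q_cold_per_joule      # delivered = work + heat removed
# q_hot_per_joule = T_hot/(T_hot - T_cold) = 300/23, about 13 J per joule.
```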
16. Give the essential features of the kinetic theory of gases. Show that the pressure exerted by a gas is equal to two-thirds of the average kinetic energy per unit volume of the gas molecules.
17. State Newton's formula for the velocity of sound in gases. Discuss the Laplace correction. What is the effect of density, pressure and temperature on the velocity of sound in air? Explain.
18. What are centripetal acceleration and centripetal force? Derive expressions for them.
19. What is an elastic collision? Calculate the velocities of two bodies undergoing an elastic collision in one dimension.
20. Show that the acceleration of free fall g at the surface of the earth and the gravitational constant G are related by the expression
g = (4/3) π ρ G R,
where ρ is the mean density of the earth and R is its radius.
22. State the conditions which are necessary for a satellite to appear stationary. Show
how a satellite orbiting round the earth obeys Kepler's third law.
23. Find the maximum length of a steel wire that can hang without breaking.
Given: breaking stress for steel = 7.9 x 10^11 N m^-2 and density of steel = 7.9 x 10^3 kg m^-3.
Define young’s modulus of elasticity, normal stress and longitudinal strain. Give
units of each of them. Derive elastic potential energy of a wire when stretched.
24. The weight of a body in water is one third of its weight in air. What is the density
of the material of the body?
What is Stokes' law? Explain terminal velocity and mention its importance.
25. Show that for a Carnot engine the efficiency of the engine is
η = 1 - T2/T1,
where T1 is the temperature of the source and T2 is the temperature of the sink (T1 > T2).
26. If the orbital velocity of the moon is increased by 42%, what will happen to its
position? Explain.
27. Define angular momentum for a system and show that
Angular momentum = Linear momentum x Perpendicular distance from the axis of rotation.
28. A body of mass 1 kg, initially at rest, is moved by a horizontal force of 0.5 N on a
smooth frictionless table. Calculate the work done by the force in 10 s and show
that it is equal to the change in kinetic energy of the body.
Show that torque is the product of moment of inertia and angular acceleration. Also
prove that torque is the rate of change of angular momentum. Hence state the law of
conservation of angular momentum.
29. Establish Bernoulli's equation for liquid flow, stating separately the assumptions
made. What does the equation essentially represent?
What should be the average velocity of water in a tube of diameter 0.4 cm so that
the flow is (a) laminar, (b) turbulent? The viscosity of water is 10^-3 N s m^-2.
30. What is the Doppler effect? Derive an expression for the apparent frequency of sound
due to the Doppler effect in the following cases:
(a) When the listener moves towards the stationary source.
(b) When the source moves towards the stationary listener.
Two notes of wavelengths 2.04 metres and 2.08 metres produce 200 beats per minute in
a gas. Find the velocity of sound in the gas.
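The last numerical part can be checked from the beat relation f₁ − f₂ = v(1/λ₁ − 1/λ₂):

```python
# 200 beats per minute between notes of wavelength 2.04 m and 2.08 m.
lam1, lam2 = 2.04, 2.08
beat_hz = 200.0 / 60.0                     # beats per second
v = beat_hz / (1.0 / lam1 - 1.0 / lam2)    # solve f1 - f2 = v*(1/lam1 - 1/lam2)
# Algebraically v = (10/3) * lam1*lam2/(lam2 - lam1) = 353.6 m/s
```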
Byte of hashcode
Hi, let's say I have an md5sum like "d41d8cd98f00b204e9800998ecf8427e" and I want to generate 256 numbers in a random sequence from the range 0 to 255 using this md5sum.
Any idea how to deal with it?
The simplest method would be to, say, break the hash into four parts of 8 digits, convert each part into a number (hex->dec), XOR them together, and use the result as an RNG seed.
But do you need something cryptographically secure? Reproducible on different machines? Can you use the original input to MD5 instead? Any constraints?
Site rules and FAQ | Project Suggestions | mod_rewrite guide | PHP New User Guide and FAQ - PHP Stickies | Regex Resources
Originally Posted by requinix
The simplest method would be to, say, break the hash into four parts of 8 digits, convert each part into a number (hex->dec), XOR them together, and use the result as an RNG seed.
But do you need something cryptographically secure? Reproducible on different machines? Can you use the original input to MD5 instead? Any constraints?
Yes, I need to make it reproducible.
For example, it works like this:
1. USER INPUT: "MIKE" (it can be different - 1 char or 120 chars - I need to md5 it)
2. "MIKE" to MD5
3. Generate a reproducible random sequence of numbers from 0-255 using this md5sum
What do you think about my idea? I am not sure it is good at all:
Let's take the string "MIKE" as user input (key).
"MIKE" in md5 is "94f4a1c41b8358205fdc712dd5f12dc8".
It's 32 hex digits and I need to get 256 numbers (0-255).
So I separate my md5sum into 4 groups of 8 chars (32/8 = 4 groups) and I get: "94f4a1c4 1b835820 5fdc712d d5f12dc8c"
Now I md5 each of the groups I got before (g? is just a helper variable):
g1 = md5("94f4a1c4") = "23567aa89eae688c3a6f02d07b2beeeb"
g2 = md5("1b835820") = "499a930d757a1ad7e6b983d532e2fb1a"
g3 = md5("5fdc712d") = "761fb78a69470de96155e89df6990a8d"
g4 = md5("d5f12dc8c") = "24d1f27c5fecff6f731794bfd32e72d9"
And again I separate each group I got into 4 groups of 8 chars.
And I keep doing this until I have 256 groups of 8 chars.
But does it make any sense? I think it's not reproducible, and anyway I would somehow have to get only 1 number from each 8-char group, and I don't have an idea how to do that.
How should I deal with it?
If you only need something that seems random then you probably could get away with just rehashing the data. Although simply rehashing the previous hash (not the broken up pieces) is simpler.
By the way, what is this for?
Originally Posted by requinix
If you only need something that seems random then you probably could get away with just rehashing the data. Although simply rehashing the previous hash (not the broken up pieces) is simpler.
By the way, what is this for?
What do you mean to rehash the data ?
Do you mean do it like that (?):
md5("MIKE") = "94f4a1c41b8358205fdc712dd5f12dc8"
md5("94f4a1c41b8358205fdc712dd5f12dc8") = "e3e1ff3d938a5cece2cfaecb231f4ab5"
md5("e3e1ff3d938a5cece2cfaecb231f4ab5") = "f7ec1e24382d9dfe9b1a6028d0fba3f6"
md5("f7ec1e24382d9dfe9b1a6028d0fba3f6") = "aaebc56f98dc31501c92c23f1b27db36"
md5("aaebc56f98dc31501c92c23f1b27db36") = "a7871acf8a406e8abe706a8363742b35"
What about that? I think it's a good idea, but I don't understand it well enough yet.
It's part of a project for my IT studies. I am required to do it by hashing a key.
You still break a hash into four to get your numbers. I'm just saying you don't then have to use the hash of each of those parts to get the next hash - just hash the string you got.
MIKE => 94f4a1c4 1b835820 5fdc712d d5f12dc8
94f4a1c41b8358205fdc712dd5f12dc8 => e3e1ff3d 938a5cec e2cfaecb 231f4ab5
e3e1ff3d938a5cece2cfaecb231f4ab5 => f7ec1e24 382d9dfe 9b1a6028 d0fba3f6
f7ec1e24382d9dfe9b1a6028d0fba3f6 => aaebc56f 98dc3150 1c92c23f 1b27db36
Originally Posted by requinix
You still break a hash into four to get your numbers. I'm just saying you don't then have to use the hash of each of those parts to get the next hash - just hash the string you got.
MIKE => 94f4a1c4 1b835820 5fdc712d d5f12dc8
94f4a1c41b8358205fdc712dd5f12dc8 => e3e1ff3d 938a5cec e2cfaecb 231f4ab5
e3e1ff3d938a5cece2cfaecb231f4ab5 => f7ec1e24 382d9dfe 9b1a6028 d0fba3f6
f7ec1e24382d9dfe9b1a6028d0fba3f6 => aaebc56f 98dc3150 1c92c23f 1b27db36
Ok, I see it now, but do you have any good idea for making a digit (0-255) from "94f4a1c4"?
What do you think about converting each char of "94f4a1c4" to ASCII, multiplying them together and taking the result modulo 256, like:
ascii('9') * ascii('4') * ascii('f') * ascii('4') * ascii('a') * ascii('1') * ascii('c') * ascii('4') % 256
What do you think about that? Is there a better idea?
Well, I am not sure, but the problem here may be that I won't get all the numbers from 0-255 - some numbers may be missing and the program will hang in a loop.
The value being returned by your md5 function is represented in hex (ie base 16). To get a number from 0 - 255, all you have to do is break the hash into 1-byte chunks and convert each chunk from
base 16 to base 10.
94 f4 a1 etc.
You would chunk it into 4-byte chunks if you were trying to get full 32-bit integers out of it.
Alternatively, if you can change your md5 hash function to return the raw bits instead of the hex string representation that might save you the trouble of having to convert from base 16.
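If it helps, the break-into-byte-chunks idea plus requinix's rehash-the-digest loop sketches out like this (MD5 is used purely as a deterministic byte source here, not for security; note the 256 values are reproducible but are not a permutation of 0-255 — values can repeat):

```python
import hashlib

def hash_bytes(key, count):
    # Stretch MD5 into `count` values in 0..255: take the digest's bytes,
    # then rehash the hex digest whenever more values are needed.
    out, digest = [], hashlib.md5(key.encode()).hexdigest()
    while len(out) < count:
        out.extend(int(digest[i:i + 2], 16) for i in range(0, 32, 2))
        digest = hashlib.md5(digest.encode()).hexdigest()
    return out[:count]

vals = hash_bytes("MIKE", 256)
```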
PHP FAQ
Originally Posted by Spad
Ah USB, the only rectangular connector where you have to make 3 attempts before you get it the right way around
Originally Posted by E-Oreo
The value being returned by your md5 function is represented in hex (ie base 16). To get a number from 0 - 255, all you have to do is break the hash into 1-byte chunks and convert each chunk from
base 16 to base 10.
You would chunk it into 4-byte chunks if you were trying to get full 32-bit integers out of it.
Alternatively, if you can change your md5 hash function to return the raw bits instead of the hex string representation that might save you the trouble of having to convert from base 16.
Well, but if I convert "94f4a1c4" to:
94 = 148
f4 = 244
a1 = 161
c4 = 196
And what should I do with those decimal numbers? Multiply them and take modulo 256? Or use each of them as an individual value in my set of 0-255 decimal numbers?
There is another problem: what if some numbers from 0-255 never appear, so I can't obtain them even with repeated md5s of the previous md5?
I have a feeling that you didn't communicate well what you are trying to do.
I'm going to make a guess:
A. To generate a sequence including every integer from 0 to 255 inclusive, with each integer appearing exactly once. [This can be described as permutation of the elements of Z_256.]
B. For the ordering of the sequence to be determined from an MD5 hash in such a way that every distinct MD5 hash will result in a distinct ordering.
Are we getting warmer?
Last edited by mah$us; April 1st, 2013 at 05:21 PM.
Originally Posted by mah$us
I have a feeling, that you didn't communicate well, what you are trying to do.
I'm going to make a guess:
A. To generate a sequence including every integer from 0 to 255 inclusive, with each integer appearing exactly once. [This can be described as permutation of the elements of Z_256.]
B. For the ordering of the sequence to be determined from and MD5 hash in such a way that every distinct MD5 hash will result in a distinct ordering.
Are we getting warmer?
A. (with a reproducible order of the permutation when using the same key)
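Given that goal — a reproducible permutation of 0..255 keyed by a string — one standard route is to seed a PRNG from the hash and Fisher-Yates shuffle the identity list. A Python sketch (the `random` module is not cryptographically secure, and the exact ordering depends on Python's shuffle implementation, but for a fixed key and Python version it is reproducible):

```python
import hashlib
import random

def keyed_permutation(key):
    # Derive an integer seed from the MD5 of the key, then shuffle
    # [0, 1, ..., 255] deterministically with a PRNG seeded by it.
    seed = int(hashlib.md5(key.encode()).hexdigest(), 16)
    rng = random.Random(seed)
    perm = list(range(256))
    rng.shuffle(perm)           # Fisher-Yates under the hood
    return perm

p = keyed_permutation("MIKE")
```

For portability across languages you would replace `random.Random` with a hand-rolled Fisher-Yates driven by a simple hash-based stream, so the same key yields the same permutation everywhere.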
Name That Graph
June 28, 2011
A problem about “unraveling” graphs into strings
László Babai is a great combinatorial mathematician as well as a great complexity theorist. Among his many illustrious results are surprising upper bounds and nontrivial lower bounds, both kinds
obtained by connecting group theory with complexity theory.
Today we wish to talk about Babai’s work on the Graph Isomorphism problem, and present an open problem about the complexity of finding a “canonical form” for a graph.
It is emblematic that his 1985 STOC paper which introduced Arthur-Merlin games, and whose main result was that some matrix-group problems belong to ${\mathsf{NP}^R \cap \mathsf{co}\mathrm{-}\mathsf
{NP}^R}$ for a random oracle ${R}$, was titled “Trading Group Theory for Randomness.” This led into his famous paper with Shlomo Moran which developed the Arthur-Merlin theory in full and shared the
inaugural Gödel Prize with Shafi Goldwasser, Silvio Micali, and Charles Rackoff’s equally famous paper on Interactive Proof Systems.
Laci, as he is often called, is also an ardent popularizer of mathematics for undergraduates. He co-founded the Budapest Semesters in Mathematics program to bring North Americans to his native
Hungary, won a major teaching award at the University of Chicago in 2005, and has led an 8-week summer REU course at Chicago every year since 2001.
Graph Isomorphism
A graph ${G}$ is usually encoded as an adjacency matrix or as a list of its edges. Either way involves numbering the vertices ${1}$ to ${n}$. Two different numberings will often produce two different
encodings ${g_1}$ and ${g_2}$, even though they represent the same graph. The Graph Isomorphism (GI) problem is, given any two encodings ${g_1}$ and ${g_2}$, do they represent the same graph? Or if
we think of the encodings themselves as the “graphs,” are those graphs isomorphic?
GI belongs to ${\mathsf{NP}}$: when the graphs are isomorphic, one can guess a permutation ${\sigma}$ of ${\{1,\dots,n\}}$, and verify for each pair ${i,j \in \{1,\dots,n\}}$ that ${(i,j)}$ is an
edge in ${g_1}$ if and only if ${(\sigma(i),\sigma(j))}$ is an edge in ${g_2}$. This is one of the amazingly few natural decision problems in ${\mathsf{NP}}$ whose status for many years has been
“Intermediate.” It is not known to be ${\mathsf{NP}}$-complete, and is not known to be in ${\mathsf{P}}$. If GI is ${\mathsf{NP}}$-complete then the polynomial hierarchy collapses at the second
level, as follows from its complement belonging to Babai’s Arthur-Merlin class ${\mathsf{AM}}$, which is a randomized form of ${\mathsf{NP}}$.
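The verification step is straightforward to program; a small illustrative sketch over edge lists (our notation, not from any particular GI solver):

```python
def certifies(edges1, edges2, n, sigma):
    # Check that the permutation sigma (a dict on 1..n) is an isomorphism:
    # (i, j) is an edge of g1 iff (sigma(i), sigma(j)) is an edge of g2.
    e1 = {frozenset(e) for e in edges1}
    e2 = {frozenset(e) for e in edges2}
    return all(
        (frozenset((i, j)) in e1) == (frozenset((sigma[i], sigma[j])) in e2)
        for i in range(1, n + 1) for j in range(i + 1, n + 1)
    )

# The path 1-2-3 is isomorphic to the path 2-1-3 via 1->2, 2->1, 3->3.
ok = certifies([(1, 2), (2, 3)], [(2, 1), (1, 3)], 3, {1: 2, 2: 1, 3: 3})
```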
The famous text by Michael Garey and David Johnson listed twelve problems in ${\mathsf{NP}}$ that in 1979 were not known to be complete or in ${\mathsf{P}}$. Of these only two remain with
“Intermediate” status: GI and Factoring. As stated in the previous post, each of us believes one of them is in ${\mathsf{P}}$. What can be said now is that many special cases of GI belong to ${\
mathsf{P}}$, as enumerated here. This includes a deep paper by Laci with Dima Grigoriev and David Mount.
Why GI May Be Hard
Every now and then we get claims of a newly discovered polynomial-time algorithm for GI. So far it still seems to be open whether or not GI is in ${\mathsf{P}}$. Those working on the problem should, we think, be aware of several basic facts about GI.
${\bullet }$Pure Matrix Methods Are Weak: Several obvious ideas are to look at the adjacency matrix of a graph and try to use this to give the graph a canonical name. If the method depends only on
the eigenvalues of the matrix, then the method must fail. This follows since there are many graphs that are co-spectral: they are graphs that are not isomorphic, but have exactly the same spectrum:
${\bullet }$Strongly Regular Graphs Are Tough: A great test case for any new idea on GI is the class of graphs that are called strongly regular. A regular graph, of course, is just a graph with the
constraint that all its degrees are the same. Another way to think about this is: every vertex in the graph has the same number of neighbors. A strongly regular graph carries this property to the next level. A regular graph is ${(\lambda,\mu)}$ strongly regular if,
1. Every two adjacent vertices have exactly ${\lambda}$ common neighbors.
2. Every two non-adjacent vertices have ${\mu}$ common neighbors.
The GI problem restricted to these graphs is also open. The reason it is so hard is that all naive methods that depend on the local neighborhood structure fail—because the graphs are so “regular.”
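The definition translates directly into a brute-force test, which is handy for experimenting with small examples; the 5-cycle below is the smallest nontrivial strongly regular graph, with (λ, μ) = (0, 1):

```python
def strongly_regular(adj):
    # adj: dict vertex -> set of neighbours.  Returns (k, lam, mu) if the
    # graph is strongly regular, otherwise None.  Brute force: fine for
    # small examples, not meant as a serious tool.
    degrees = {len(nbrs) for nbrs in adj.values()}
    if len(degrees) != 1:
        return None
    lams, mus = set(), set()
    verts = list(adj)
    for i, u in enumerate(verts):
        for v in verts[i + 1:]:
            common = len(adj[u] & adj[v])
            (lams if v in adj[u] else mus).add(common)
    if len(lams) > 1 or len(mus) > 1:
        return None
    return (degrees.pop(), lams.pop(), mus.pop())

# 5-cycle: degree 2, adjacent pairs share 0 neighbours, non-adjacent share 1.
c5 = {i: {(i - 1) % 5, (i + 1) % 5} for i in range(5)}
params = strongly_regular(c5)
```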
${\bullet }$The GI “Cluster”: Another piece of evidence is that a few dozen different-seeming problems, from group theory, matrix theory, and other fields, are known to be polynomial-time equivalent
to GI. One example is a term-equality problem proved “GI-complete” by David Basin. This is a miniature version of the argument that since no one has found a poly-time algorithm for any of the
thousands of different ${\mathsf{NP}}$-complete problems, they are all hard. This argument has its doubters, Dick included, but at least for GI more than Factoring, there is some “salience” of the
level of complexity it represents.
See this 2005 survey for other information on the hardness of GI.
Canonizing Graphs
An isomorphism invariant is a function ${I}$ from graph representations to binary strings such that ${I(g) = I(g')}$ if and only if ${g}$ is isomorphic to ${g'}$.
Yuri Gurevich showed that any such ${I}$ can be modified with polynomial-time overhead into an invariant ${I'}$ such that ${I'(g)}$ outputs an adjacency matrix ${g_0}$ for the graph. Thus ${I'(g') =
g_0}$ if and only if ${g'}$ represents the same graph ${G}$ as ${g}$. Then ${g_0}$ is called a canonical form for ${G}$, and ${I'}$ is called a canonizing function.
Babai and Ludek Kučera found an ${O(n^2)}$-time computable function that acts as a canonizing function on all but a ${1/2^{\delta n}}$ fraction of random ${n}$-vertex graphs. The scheme is simple to
execute but a bit tricky to state, so we refer to the paper for details. This implies that graph-isomorphism is in polynomial time for “random” graphs.
Note that there are many classes of objects, including special types of graphs, for which canonical naming schemes are known. For example, this has long been known to be true for trees. Rather than presenting that naming method, let's consider the much simpler problem of naming sets of numbers. A simple method for naming a set ${S = \{s_{1},\dots,s_{n}\}}$ is to sort the numbers and then make them into a string: if ${s_{\pi_{1}},\dots,s_{\pi_{n}}}$ is the sorted order of the set ${S}$'s elements, then the "name" of the set is the sequence
$\displaystyle s_{\pi_{1}},\dots,s_{\pi_{n}}.$
Note the key point is that two sets are equal as sets if and only if they have the exact same sequence. Thus, the set ${\{8,3,41\}}$ is named by the sequence ${3,8,41}$.
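In code, the whole naming scheme for sets is essentially one line; a minimal sketch:

```python
def set_name(s):
    """Canonical name of a finite collection of numbers: sort, then join.

    Two inputs get the same name exactly when they are equal as sets.
    """
    return ",".join(str(x) for x in sorted(set(s)))

assert set_name({8, 3, 41}) == "3,8,41"          # the example from the post
assert set_name([41, 8, 3]) == set_name({3, 8, 41})
```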
An Approach
We wish to consider a more simple-minded way to try to get an invariant or a canonical form, using any polynomial-time computable 1-1 function ${f}$ from graphs to strings. Define
$\displaystyle \ell_f(g) = \max\{f(g') : g' \text{ is isomorphic to } g\}.$
Here the maximum can be taken with regard to the standard lexicographic order on binary strings, and is called the lex-max canonical form. A decision problem that helps one find it by binary search is:
Definition 1 Decision problem ${\mathsf{CAN}(f)}$: Given ${g}$ and a string ${w \in \{0,1\}^*}$, does there exist a ${g'}$ isomorphic to ${g}$ such that ${f(g') \geq w}$?
Then computing ${\ell_f}$ is polynomial-time equivalent to deciding ${\mathsf{CAN}(f)}$. Also ${\mathsf{CAN}(f)}$ belongs to ${\mathsf{NP}}$, since we can guess ${g'}$. Thus ${\mathsf{CAN}(f)}$ is
always between GI and the ${\mathsf{NP}}$-complete problems. We wonder, is it always ${\mathsf{NP}}$-complete? Is it ever equivalent to GI itself? Strangely, no canonizing function is known to be
equivalent to GI, but many are ${\mathsf{NP}}$-hard.
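Neither ${\ell_f}$ nor ${\mathsf{CAN}(f)}$ is feasible to compute this way for large graphs, but for tiny ones we can brute-force both and illustrate the equivalence. The sketch below is ours, not from the post; it uses the row-major unraveling as the 1-1 function ${f}$, implements the ${\mathsf{CAN}(f)}$ oracle by exhaustive search, and then recovers ${\ell_f}$ using only that oracle by fixing one bit at a time (the binary-search idea):

```python
from itertools import permutations

def f1(adj):
    """Row-major unraveling of the upper triangle of adjacency matrix adj."""
    n = len(adj)
    return "".join(str(adj[i][j]) for i in range(n) for j in range(i + 1, n))

def relabel(adj, perm):
    """Adjacency matrix after renaming vertex i as perm[i]."""
    n = len(adj)
    return [[adj[perm[i]][perm[j]] for j in range(n)] for i in range(n)]

def lex_max(adj, f=f1):
    """l_f(g): brute force over all n! relabelings -- exponential, demo only."""
    return max(f(relabel(adj, p)) for p in permutations(range(len(adj))))

def can(adj, w, f=f1):
    """CAN(f) oracle: is there g' isomorphic to g with f(g') >= w?"""
    return lex_max(adj, f) >= w          # equal-length binary strings compare lexicographically

def lex_max_via_can(adj, f=f1):
    """Recover l_f using only the decision oracle, one bit at a time."""
    n = len(adj)
    L = n * (n - 1) // 2
    prefix = ""
    for i in range(L):
        trial = prefix + "1" + "0" * (L - i - 1)
        prefix += "1" if can(adj, trial, f) else "0"
    return prefix

# 4-cycle 0-1-2-3-0: both routes agree.
C4 = [[0,1,0,1], [1,0,1,0], [0,1,0,1], [1,0,1,0]]
assert lex_max(C4) == lex_max_via_can(C4)
print(lex_max(C4))  # 110011
```

The greedy loop is exactly the reduction in the post: each oracle call asks whether the maximum starts with the current prefix extended by a 1, so ${\ell_f}$ costs one ${\mathsf{CAN}(f)}$ query per bit.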
Which “Unravelings” are NP-Hard?
Let us simply try to “unravel” an adjacency matrix into a binary string. For an undirected graph, we need only the upper triangle of the matrix. The three most obvious rules we can think of are to
list those entries in order going: 1. across by the rows, 2. down by the columns, or 3. starting with the largest diagonal and ending at the upper-right corner. These are best shown by a diagram:
$\displaystyle A = \begin{array}{|ccccc|} 0 & 1 & 1 & 0 & 1\\ { } & 0 & 1 & 1 & 1\\ { } & { } & 0 & 0 & 1\\ { } & { } & { } & 0 & 0\\ { } & { } & { } & { } & 0\\ \end{array}$
$\displaystyle \begin{array}{rcl} f_1(A) &=& 1101.111.01.0 = 1101111010\\ f_2(A) &=& 1.11.010.1110 = 1110101110\\ f_3(A) &=& 1100.111.01.1 = 1100111011 \end{array}$
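As a sanity check on the three unravelings, here is a small Python sketch (ours, not the post's) that recomputes ${f_1}$, ${f_2}$ and ${f_3}$ for the matrix ${A}$ above:

```python
# Upper triangle of the example matrix A (0-indexed rows/columns).
A = [[0, 1, 1, 0, 1],
     [0, 0, 1, 1, 1],
     [0, 0, 0, 0, 1],
     [0, 0, 0, 0, 0],
     [0, 0, 0, 0, 0]]
n = 5
pairs = [(i, j) for i in range(n) for j in range(i + 1, n)]

# f1: across by rows (the natural order of `pairs`).
f1 = "".join(str(A[i][j]) for (i, j) in pairs)
# f2: down by columns -- sort by column, then row.
f2 = "".join(str(A[i][j])
             for (i, j) in sorted(pairs, key=lambda p: (p[1], p[0])))
# f3: by diagonals, largest first -- sort by offset j - i, then position.
f3 = "".join(str(A[i][j])
             for (i, j) in sorted(pairs, key=lambda p: (p[1] - p[0], p[0])))

print(f1, f2, f3)  # 1101111010 1110101110 1100111011
```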
Theorem 2 ${\mathsf{CAN}(f_2)}$ is ${\mathsf{NP}}$-complete, by reduction from Clique, and ${\mathsf{CAN}(f_3)}$ is ${\mathsf{NP}}$-complete, by reduction from Hamiltonian Path.
To prove this, given an instance ${(g,k)}$ of Clique, define ${w}$ to consist of ${k}$-choose-${2}$ ${1}$'s, followed by ${0}$'s out to length ${n}$-choose-${2}$. Then the graph represented by ${g}$ has a clique of size ${k}$ if and only if there is a permutation that numbers the vertices of the clique ${1,\dots,k}$. Applying this permutation yields an adjacency matrix ${A'}$ such that ${f_2(A')}$ begins with ${k}$-choose-${2}$ ${1}$'s, so ${f_2(A') \geq w}$. (Note that for the above ${A}$, we can show a 4-clique by swapping vertex labels ${4}$ and ${5}$.)
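The parenthetical claim about the 4-clique can be checked mechanically; a hedged sketch (our own encoding of ${A}$ as an edge list, 1-indexed):

```python
# Edges of the example matrix A: 1-2, 1-3, 1-5, 2-3, 2-4, 2-5, 3-5.
n = 5
E = {(1, 2), (1, 3), (1, 5), (2, 3), (2, 4), (2, 5), (3, 5)}

def f2(edge_set):
    """Column-major unraveling of the upper triangle (1-indexed vertices)."""
    pairs = [(i, j) for j in range(2, n + 1) for i in range(1, j)]
    return "".join("1" if (i, j) in edge_set else "0" for (i, j) in pairs)

def swap(edge_set, a, b):
    """Relabel vertices a <-> b."""
    ren = lambda v: b if v == a else a if v == b else v
    return {tuple(sorted((ren(i), ren(j)))) for (i, j) in edge_set}

Ep = swap(E, 4, 5)                       # the swap suggested in the post

k = 4
ones = k * (k - 1) // 2                   # k-choose-2
w = "1" * ones + "0" * (n * (n - 1) // 2 - ones)
print(f2(Ep), ">=", w, ":", f2(Ep) >= w)  # the swap puts the clique up front
```

After the swap, ${f_2(A')}$ starts with six ${1}$'s, witnessing the 4-clique ${\{1,2,3,5\}}$ of the original labelling.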
For ${f_3}$, given an instance ${g}$ of Hamiltonian Path, take ${w}$ to begin with ${n-1}$ ${1}$'s and the rest ${0}$'s. Then ${g}$ has such a path if and only if there is a re-numbering of the vertices in the path from ${1}$ to ${n}$, and this majorizes ${w}$.
However, we do not know whether ${\mathsf{CAN}(f_1)}$ is ${\mathsf{NP}}$-complete. If we try ${w = 1^{n-1}0\cdots 0}$ as for ${f_3}$, then having a permutation creating ${A'}$ such that ${f_1(A') \geq w}$ only means that the original graph ${g}$ has a node connected to every other node, which is easy to determine. We do not see how to choose ${w}$ to gain information much more useful than this.
If we compare graphs to sets, we can see why canonization is possibly hard. For sets of numbers the making of a canonical name is helped by our ability to sort numbers—the act of sorting preserves
sets, but places them into a canonical form. The difficulty for graphs, and therefore GI, is that the corresponding “sorting” step seems to be impossible. Or said better, today we have no idea how to
“sort” an adjacency matrix in polynomial time.
Open Problems
Is ${\mathsf{CAN}(f_1)}$ ${\mathsf{NP}}$-complete? Or is row-major order somehow weaker than the column-major and diagonal unravelings?
Is there a canonizing function that is equivalent to GI itself, or is canonizing graphs always harder than testing for isomorphism?
Are GI and canonizing really in ${\mathsf{P}}$? An ostensibly easier task is to put GI into ${\mathsf{BQP}}$ as with Factoring—see this StackExchange thread for more.
1. June 28, 2011 2:00 pm
We know at this point that no “Shor-like” quantum algorithm can solve GI. Specifically, we know that any quantum algorithm which treats the graph as a “black box”, an oracle that returns a
permuted graph in an unstructured way, requires highly-entangled measurements (Hallgren et al., http://www.cse.psu.edu/~hallgren/multireg.pdf ). Moreover, the families of entangled measurements
that we know how to carry out efficiently don’t work (Moore, Russell, and Sniady, http://arxiv.org/abs/quant-ph/0612089 ).
My personal guess at this point is that GI is in BQP if and only if it is in P. But this is because one should always make guesses that one would like to have proved wrong.
2. June 28, 2011 2:17 pm
For the general question of which equivalence classes have poly-time computable complete invariants and/or canonizations, there’s a lucid and interesting treatment in this paper of Fortnow and
□ June 28, 2011 2:25 pm
(Of course, it doesn’t *answer* the question; but it gives what I think is the right complexity-theoretic framework to think about it, and proves some structural results.)
3. June 28, 2011 8:59 pm
You mention that strongly regular graphs seem particularly hard to distinguish. That led me to consider the following family of decision problems related to GI:
Let A be a set of graphs. Define GI(A) to be this problem: given a graph H, is H isomorphic to a graph in A?
I am specifically interested in the case where A = {G_n | n = 1, 2, …}, where G_n is an n-vertex graph. GI(A) asks: is H isomorphic to the unique n-vertex graph in A?
Since the set A may not be computable, it might be best to have it as an oracle — I don’t want to try to define the problem too carefully. What I wonder is: are problems of this type easy? Or are
there in fact specific graphs which are “difficult to recognize”?
If this problem is easy, we can extend the definition of GI(A) to any set of graphs and ask the same question. If it seems as hard as GI, maybe it will help to shed light on GI.
I haven’t thought too hard about this problem, so hopefully it isn’t trivial!
□ June 28, 2011 9:10 pm
Here is I think a better formulation of the question I am really trying to ask.
Let f(G) be the number of gates needed in a circuit which outputs 1 if the input graph is isomorphic to G, and 0 otherwise. Let f(n) = max{f(G) | G has n vertices}.
How fast does f(n) grow? Is it polynomial? What families of graphs attain this maximum?
4. June 29, 2011 8:54 am
1) Are strongly regular graphs hard? See D. A. Spielman, Faster isomorphism testing of strongly regular graphs, Proc. 28th ACM STOC, 1996, 576–584 (time ${n^{O(n^{1/3}\log n)}}$).
2) I think that canonizing graphs is harder than testing for isomorphism. See A. Rahnamai Barghi and I. Ponomarenko, Non-isomorphic graphs with cospectral symmetric powers, Electronic Journal of Combinatorics, http://www.combinatorics.org/Volume_16/PDF/v16i1r120.pdf
5. June 30, 2011 7:22 am
GI is a great problem, and the evidence regarding the answer is confusing. I remember people trying to show it NP-complete in the 70s. (Now we have very strong evidence against that: it would collapse the hierarchy.) It looked unrelated to factoring before Shor's algorithm, and then suddenly it looked like the noncommutative analog, but then it looked again not so related to factoring. It is related to very interesting group theory and very interesting graph theory. The question of finding parameters that allow one to identify non-isomorphic graphs is closely related to main themes of mathematical study. The related study of isospectral graphs is quite exciting. The (somewhat remotely) possibly related edge- and vertex-reconstruction problems are also of interest.
Really a great problem!
□ June 30, 2011 7:10 pm
Thank you for the good reply! In 1979, M. R. Garey and D. S. Johnson (the well-known book "Computers and Intractability") noted that proofs of NP-completeness seem to require a certain amount of redundancy, which the graph isomorphism problem lacks (pages 155–156).
6. July 20, 2011 8:55 am
Since you mentioned cospectral graphs: it would be interesting to find graphs with equal spectrum plus equal Laplacian spectrum!
7. July 4, 2013 12:11 am
For characterizing graphs up to isomorphism it could be worthwhile to try for a complete set of invariants which can be computed easily.
Consider the following set, which seems to me a complete set of invariants:
C(G) = (C1(G), C2(G), ..., Cr(G), ..., Cp(G))
where Cr(G) = (Cr(T1), Cr(T2), ..., Cr(Tm)) is a sub-sequence of numbers representing the count of trees on (r-1) edges of each non-isomorphic type, over all non-isomorphic types possible for trees on (r-1) edges, taken in the same order for both graphs whose isomorphism is to be checked.
As an alternative way of attacking the problem, do have a look at:
MAT 281 Discrete Mathematics
This is a sample syllabus only. Ask your instructor for the official syllabus for your course.
Office hours:
Updated Course Description
Logic and sets, functions and relations, Boolean algebra and circuit design, mathematical induction, recursion, modular arithmetic and elementary number theory, counting techniques and combinatorics,
big-O notation and complexity of algorithms, graphs and trees; with applications to computers and computer programming.
MAT 281 meets for three hours of lecture per week.
Prerequisites
MAT 153, and CSC 121 or MAT 241 or CSC 111 or equivalent, with grades C or better.
MAT 281 provides mathematical foundations for various topics in discrete mathematics including those necessary for core computer science courses. Upon completing MAT 281 the student should be able to
• construct and understand rigorous logical arguments and inferences
• construct and understand proofs by mathematical induction
• understand and use permutations, combinations, binomial coefficients, the pigeonhole principle, in algorithms, counting arguments, and proofs
• use "big-oh" and "big-omega" notation
• analyze the complexity of simple algorithms
• understand basic concepts and algorithms of graph theory
• understand combinatorial circuits and their properties; Boolean functions, and synthesis of circuits.
Expected outcomes
Students should be able to demonstrate through written assignments, tests, and/or oral presentations, that they have achieved the objectives of MAT 281.
Method of Evaluating Outcomes
Evaluations are based on homework, class participation, short tests and scheduled examinations covering students' understanding of topics covered in MAT 281.
Discrete Structures with Contemporary Applications, by A. Stanoyevitch. Chapman & Hall (2011)
Course Outline
Based on a 15 week course.
Week Topics
1-2 Logic and sets.
3-4 Functions and relations, equivalence relations
5-6 Mathematical induction and recursion
7-8 Modular arithmetic and elementary number theory
9-10 Counting techniques and combinatorics
11-12 Big-O notation and the complexity of algorithms
13-14 Graphs and trees
15 Boolean algebra and circuit design
Grading Policy
Students' grades are based on homework, class participation, short tests, and scheduled examinations covering students' understanding of the topics covered in MAT 281. The instructor determines the
relative weights of these factors.
Attendance Requirements
Attendance policy is set by the instructor.
Policy on Due Dates and Make-Up Work
Due dates and policy regarding make-up work are set by the instructor.
Schedule of Examinations
The instructor sets all test dates except the date of the final exam. The final exam is given at the date and time announced in the Schedule of Classes.
Academic Integrity
The mathematics department does not tolerate cheating. Students who have questions or concerns about academic integrity should ask their professors or the counselors in the Student Development
Office, or refer to the University Catalog for more information. (Look in the index under "academic integrity".)
Accommodations for Students with Disabilities
Cal State Dominguez Hills adheres to all applicable federal, state, and local laws, regulations, and guidelines with respect to providing reasonable accommodations for students with temporary and
permanent disabilities. If you have a disability that may adversely affect your work in this class, I encourage you to register with Disabled Student Services (DSS) and to talk with me about how I
can best help you. All disclosures of disabilities will be kept strictly confidential. Please note: no accommodation may be made until you register with the DSS in WH B250. For information call (310)
243-3660 or to use telecommunications Device for the Deaf, call (310) 243-2028.
Revision history:
Revised by A. Stanoyevitch, spring 2011.
Roofing help needed please. CAUTION - posh shed content..
Is anyone able to point me in the right direction? I'm building a summerhouse which will be an irregular octagon. Two opposite straight sides are 900mm long, two opposite sides are 1350mm, and the four corner sides are 1250mm long. I should be OK to work out the common rafters, but what are the formulas to work out the hip rafters, bevels etc.? What's the formula to work out the length of the ridge beam so that the whole roof has the same pitch? (Is that even possible??)
brain overload time
It really is an irregular octagon if it has four sides and four corners!
Are the corners all 45 degrees?
It started out as a regular hexagon with all 6 internal angles being 120 degrees and all sides being 1250mm long. This was however too small, so I stretched the hexagon by adding two (opposite) sides @ 900mm and changing two of the 1250mm sides to 1350mm.
Imagine a square... diagonally cut off the four corners and replace the cuts with a line: you now have 8 sides. The diagonals you just made are 1250mm each, two (original) opposite sides are 900mm and the last two original sides are 1350mm...
You need a ready reckoner.
right i am doing this of the top of my head.
the length of the ridge will be the length of the widest span minus the shortest span.
this wont give you the correct pitch on the corners I don't think, if they were 45 degrees the would.
How longs a piece of string ?
Im not a roofer and haven't done this before but I am a scenic carpenter and have to mock things up like this now and again.
What I would do is cut a ridge beam slightly oversize and put it into position on a temporary structure (maybe a bit of 3x1) in the position you want it. Then you can work out the angles from there, cut the ridge beam to fit once the hips are attached, and then take the temporary structure out.
Or try drawing the two side elevations 1:1 on a piece of ply (you could just draw the end of the roof where the hips are).
But thinking about it Im not sure you would get the same pitch all the way around.
Maybe I'm not much help
Actually thinking about it you should be able to get the same pitch all way round depending on the length of your ridge beam.
Is this the kind of thing you had in mind? lengths are for a 500mm drop on the roof.
Let me know if this is helpful and I'll get sketchup to tell us the angles involved (and make sure I've measured the right things!) - I'm new to using it so it may take a mo or two!
edit: i can obvs calc any lengths or whatever. Equally i can change roof shape if desired!
edit2: view image to see more clearly!
That's also a solution Markie. I was thinking of two ' high-points ' with a ridge-beam connecting them..
Markie does your program spit out the angles and pitch?
Yes it can/does.
If you tell me where you want your ridge and the roof drop you want then I can (try to) set it up? All good practice for me!
great app i use builderscalcsroof i use on ipod. Enter all known details, gives plumb cuts, birds mouth cuts etc etc. To achieve same pitch all way round and same wall plate height you will need two
high points.
The idea was to have two high points but it would depend on the length of the ridge..
I'll be using shingles so I reckon on a minimum of a 30 degree pitch. I was planning to have two finials atop the structure, but if the ridge is going to be stupidly short, two finials too close together won't look right.. thoughts?
Markie, reckon on a minimum pitch of 30 degrees; if you can work out an equal pitch, will the ridge length be given automatically or not?
Yes, lengths are automatically calculated, but I don't know what pitch is
Sorry Markie, pitch is the angle of the roof, or slope if you were building a stair.
Right. 450mm! Perhaps.
The roof slopes shown are at 30 degrees.
edit: thanks, I'd guessed. and have just updated drawing with more (helpful!) measurements.
edit2: and with measurements from end of roof bar to corners.
edit3: key point, this is all with lines not beams, so you'd need to take beam thicknesses into account - or I'd imagine I could do that with this, perhaps...
Are the shed walls built yet, with only the roof left to do? I would construct the roof on the floor as long as you have a level floor. I have just done some scale drawings on paper with one finial and a 30 degree pitch, and the rafters work out to 1800mm long less the thickness of the finial. This gives an overhang of 100mm. Hope this helps.
When the chippies were doing my roof they had their tables of rafter lengths, birdsmouth positions etc, but actually they just took a full length piece, fitted it in place, marked it up and then used that as a template for all the others.....
Thanks Markie, the roof-bar or ridge is to run R to L to the two 1350mm edges. I don't know if that changes things.. So are all the rafters to be 1604mm @ 30deg?
@ Waveydave...the walls will be going up tomorrow half are in place so I'l be able to get some pics up hopefully..
I think any changes to the ridge bar from what I've got will result in measurement changes. My email is in my profile, perhaps you could send me (or post up here) a sketch of what you want and I can
try to recreate it tomorrow?
If the rafters are measured from the ends of the ridge to the corners of the octagon, then yes, with the ridge as shown in my diagrams they nominally measure 1604.6 without overhang or anything else
taken into account!
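Those quoted rafter lengths are just 3D distances from a ridge end down to a wall-plate corner; a rough sketch (the offsets below are illustrative, not taken from Markie's model):

```python
import math

def rafter_length(dx, dy, rise):
    """Straight-line length from a ridge end at height `rise` down to a
    wall-plate corner offset (dx, dy) in plan. All in mm; overhang and
    timber depth ignored."""
    return math.sqrt(dx * dx + dy * dy + rise * rise)

# e.g. a corner 1200mm away in x, 900mm in y, with a 500mm roof rise
print(round(rafter_length(1200, 900, 500), 1))  # 1581.1
```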
Hi Tymbian
I am interested to see what Markie comes up with. Just done a sketch to scale with protractor and ruler. Allowing for a ridge beam of 500mm and a finial (100mm wide) post at each end you will need... 4x rafters @ 1580mm and 4x rafters @ 1450mm. This gives an overhang of 100mm, but this also depends on guttering, so the overhang can be reduced or extended by altering the rafters. This is at 30 degree pitch with the ridge running parallel to the 1350 sides. Sorry but I cannot post a picture cos I've done it with pencil and paper, but to scale the proportions and roof look good. Let us know how you get on.
Hey Tymbian, waveydave!
What changes should I make to my ridgebeam and/or other parts of the model?
Is it a matter of spinning the ridge by 90 degrees so it runs between the two 1350 sides?
Happy to do whatever desired - all good SketchUp learning for me! - but having no 'proper' building knowledge - finials, ridge beams, rafters, etc - I'm struggling to understand how Tymbian's desired
design differs from what I've shown above!
Hi Markie, waveydave...
Yep the ridge is/was to run parallel to the 900mm sides.. but I'm also toying with the idea that 1 finial might be enough.. (Markie, a finial would be the decorative spike/sphere etc. that decorates the apex of a roof; in this situation it would be attached to a piece of timber, here an octagon that all the rafters would fix into. It can also protrude down into the summerhouse so that more fanciness can be added.)
I have added, at every angle/junction, a length of 6 x 2 (46mm wide) which protrudes outwards enough for the cut edge or end-grain of the cladding to be protected.. Don't know how this affects anything.
Thanks for your efforts guys
I'm not sure it's possible to run the ridge parallel to the 900mm sides and have a pitch of 30 on all sides - basically the rafters across the narrower span have to rise above the halfway line in
order to make the height of the rafters along the 1350 axis, ie you need a negative ridge length?!
I can account for the corner 6x2s, but need to get the structure right first - do you mean stick an octagon in instead of the ridge? And still have all pitches at 30 degrees?
edit: looks great, btw!
Markie.. if I build a single finial into the roof it will be easier then. Think of an octagonal (regular) piece of timber with sides a minimum of 46mm (which is the width of the rafters). The thickness of the timber will be the dimension of the rafter at the given pitch, i.e. if the roof (rafters 97 x 46mm) has a 30 degree pitch, 97mm cut at 30 deg. equals 115mm, so the octagonal centre piece will be 115mm thick.
Going back to the ridge beam idea, what pitch would the roof have to be to run the ridge parallel to the 900mm sides?
If there's a single finial I don't believe it can be regular, given that your octagon corners aren't those of a regular octagon. I'm working on this at the mo! Regardless of whether I'm right or wrong about the angles, the pitch of each roof plane will (almost!) certainly be different.
If the ridge is parallel with the 900 sides, and if that ridge is 450mm long and at a height of 77.1, then the pitch of the long and short side roof planes will be 30 degrees.
If what matters to you is the length of the beams to the corner junctions, I can get that no probs! How far out do your extra bits stick?
Right. Your single finial would (in as much as I've got it right!) need to be this shape, allowing for 46mm wide wood.
This obviously doesn't show the different angles you'd have to cut the rafters at in order to give the pitch (and while SketchUp can find them easily enough, I think they might be grim to cut!).
There will (I think) be three different pitches.
Angles are easily given if desired.
Hi Markie,
the finial piece seems to be a bit squished..i mean the angles on the far L & R don't look like 120 deg.
Think of a regular hexagon, internal angles are 120 deg. then add two pieces opposite each other both 900mm long. 4 of the angles stay at 120 deg.
Great work btw. how long does it take?
Hey. Yep, but with your octagon two of the sides have been stretched to 900mm and two to 1350mm?
Will have another look tonight, but I'm pretty sure on the model that all internal angles are 135 degrees.
I've been playing with SketchUp for two evenings more than this thread, two or three hours a night
I began by making a shed (because I'd like to build one in my garden) and then moved on to (badly) modelling my house and garden in order to plan where the shed should go.
This has led me to conclude that for the shed to be perfectly positioned I will need to move a brick wall
I think all the internal angles are 135 thanks to magic.
It doesn't look quite right because I don't know how to get a perfect top down view, so we're looking at it from a slight angle!
So tymbian, are some of your internal angles 120 degrees and some 135 degrees? I think Markie and me have been doing it as a regular octagon with 135 internals all round. From looking at your pics it looks like the flatter angle is about 150 degrees and the near angle looks like 120ish. A digital angle finder is a good purchase in this case.
Give us the known measurements, distances between walls would be good.
Experiment 9
Conservation of Linear Momentum
To verify the law of conservation of linear momentum for head-on collisions of two masses
An air track with gliders, two sets of photo-gates (motion sensitive timers), additional weights, a mass scale, and a calculator
If the centers of two billiard balls (moving toward each other) are exactly on the same line, Figure 1, their collision is a "head-on" collision, and after collision they tend to stay on the same
line. If their centers are not exactly on the same line but are on two different lines (parallel or nonparallel), their collision is called an "oblique collision," and after collision each takes a different direction. In this experiment, the easier case of head-on collision will be examined in order to verify the law of conservation of linear momentum. Although on an air track there are sliders (or gliders) instead of billiard balls, since the centers of mass of both sliders move exactly on the same line, the collision is considered head-on.
The Law of Conservation of Linear Momentum applied to masses M[1] and M[2] initially moving toward each other at velocities V[1] and V[2], and finally (after collision) returning at velocities U[1]
and U[2][ ]may be written as follows:
M[1]U[1] + M[2]U[2] = M[1]V[1] + M[2]V[2] (1)
Total Momentum After Collision = Total Momentum Before Collision
Elastic and Inelastic Collisions:
A collision during which some kinetic energy is lost is called an "inelastic collision." In reality, all collisions are inelastic and are associated with some K.E. loss, no matter how small. A collision during which no K.E. is lost is considered "elastic," meaning "perfectly elastic," and is an idealization. The K.E. loss may be calculated as follows:
ΔK.E. = (Total K.E.)[after collision ] - (Total K.E.)[before collision ] =
[0.5M[1](U[1])^2 + 0.5M[2](U[2])^2] - [0.5M[1](V[1])^2 + 0.5M[2](V[2])^2]
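As a quick numerical check of Equation (1), the sketch below (not part of the lab procedure; masses and velocities are made up) generates the final velocities of a perfectly elastic head-on collision and verifies that momentum is conserved and the K.E. loss is zero:

```python
def elastic_1d(m1, v1, m2, v2):
    """Final velocities for a perfectly elastic 1-D head-on collision."""
    u1 = ((m1 - m2) * v1 + 2 * m2 * v2) / (m1 + m2)
    u2 = ((m2 - m1) * v2 + 2 * m1 * v1) / (m1 + m2)
    return u1, u2

def momentum(m1, w1, m2, w2):
    return m1 * w1 + m2 * w2

def kinetic(m1, w1, m2, w2):
    return 0.5 * m1 * w1**2 + 0.5 * m2 * w2**2

m1, v1 = 0.210, 0.45     # kg, m/s -- glider 1 moving right
m2, v2 = 0.190, -0.38    # glider 2 moving left (note the sign)
u1, u2 = elastic_1d(m1, v1, m2, v2)

p_before = momentum(m1, v1, m2, v2)
p_after = momentum(m1, u1, m2, u2)
dKE = kinetic(m1, u1, m2, u2) - kinetic(m1, v1, m2, v2)
print(abs(p_after - p_before) < 1e-9, abs(dKE) < 1e-9)  # True True
```

Real glider collisions lose a little kinetic energy in the rubber bands, so the measured dKE in the experiment should come out small and negative, not exactly zero.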
1) Place the air track on an appropriate table. Connect the air pump to it and turn it on. Place two gliders (sliders) on it and use the leveling screws under its legs to level it. When leveled, the
gliders should not move to either side if both are already at rest. This guarantees that there will be no change in the P.E. of the gliders when you make them move left or right.
2) Now, turn the air pump off. Do not slide the gliders when the air pump is off. This will keep the air track scratch-free.
3) Make sure that each glider has its two sticks inserted into its top holes near its top ends. These two sticks trigger the photo gate as the glider moves under it for the purpose of time
measurement. Measure the mass of each glider (including its two rubber bands and the two sticks) using a laboratory scale. If they are exactly equal, you do not need to mark them; otherwise, mark
them to make sure you know the mass of each. Let the left glider be M[1] and the right glider M[2] as indicated in Figure 2.
4) Turn the air track on and place the gliders on the air track, one at each end with their rubber bands facing each other. Place M1 on the left and M2 on the right as shown in Fig. 2. Each rubber
band holder has an axle or a stem that gets horizontally inserted into a hole on the forehead of each glider. The holder can be turned about its horizontal stem to any angular position. Preferably,
adjust the rubber bands angles such that they collide with each other at a 90-degree angle. This provides a smooth collision without any jumping of the collider (s) . Then practice sending the
gliders toward each other by giving each a quick hit (an initial velocity) such that they meet somewhere near the middle of the air track. Two good points to put the photo gates at are at about 70cm
and 130cm from say the left side. Left side is where M1 starts. Gently adjust the height of the photo gates by loosening the appropriate screws. Each time one of the sticks on top of a glider goes
through the photo gate, the red light on top of that gate blinks, if, of course, the photo gate is already turned on. The closer the photo gates are the closer to the actual velocities before and
after collision will be measured. The goal is to measure the velocities just before and just after collision. If photo gates are placed far from each other, they measure the initial velocities V[1]
and V[2] too soon before collision and the final velocities U[1] and U[2] too late after collision allowing the photo gates to measure velocities that are not close enough to the actual velocities.
That's why practicing the process a few times is important. If you can make the gliders meet somewhere near the middle, then the 70cm and 130cm positioning of the photo gates gives better velocity measurements.
Time Measurements for the Calculation of Velocities (IMPORTANT)
5) Each time the two sticks on top of a glider go through a photo gate, the gate measures the time interval between the two generated pulses or events; therefore, each photo gate MUST be put in PULSE
mode. Dividing the measured distance between the sticks of a glider (X) by the time it measures (t) results in a speed measurement. Fortunately, there is a good feature in this photo gate. It is its
"MEMORY OPTION." If you turn the MEMORY OPTION on, the first time a glider goes through it, it records the elapsed time (t[1]) for calculating a V. Do nothing and let the glider return after
collision and go through that gate again. It has already measured (t[2]) as well. What you read on it is (t[1]). If you push the memory key to READ position, it shows you the total time (t[1]+t[2]).
Subtract (t[1]) from the total to find (t[2]). (t[2]) is the time for calculation of a U. With this feature, all you need to be good at is to make sure that the two gliders completely pass both gates
before they collide. You may want to write down the times measured for the motion of M[1] as (t[11]) and (t[12]) and the times measured for the motion of M[2] as (t[21]) and (t[22]). The speeds will
therefore be
V[1] = X[1] / t[11] ; U[1] = X[1] / t[12] and
V[2] = X[2] / t[21] ; U[2] = X[2] / t[22]. Note that velocities must be plugged into the equations and not just the speeds; in other words, do not forget the directions.
6) Apply the above procedure for the 4 cases shown in Table 1.
7) For each set of M[1] and M[2] in Table 1, plug V[1], U[1], V[2], and U[2] into Equation (1) to see if the left side and the right side become equal and if linear momentum is conserved.
8) For each case, calculate a %difference as well as the loss in the K.E. during the collision. Record your results in Table 2.
│Case│ M[1] │ M[2] │
│ 1 │ Glider + 60.0 grams* │ Glider │
│ 2 │ Glider + 40.0 grams* │ Glider + 140.0 grams* │
│ 3 │ Just the glider │ Just the glider │
│ 4 │Left glider at rest placed at the middle (V[1] = 0) │Right glider put into motion toward the middle│
│ * The mass to be added must be split equally to both sides of the glider for symmetry │
Table 1
│Case│ M[1] (grams) │ V[1] (cm/s) │ M[2] (grams) │ V[2] (cm/s) │ M[1]V[1]+M[2]V[2] (gram*cm/s) │ U[1] (cm/s) │ U[2] (cm/s) │ M[1]U[1]+M[2]U[2] (gram*cm/s) │ % diff on Total Momentum │ ΔK.E. (Joules) │
│ 1 │ │ │ │ │ │ │ │ │ │ │
│ 2 │ │ │ │ │ │ │ │ │ │ │
│ 3 │ │ │ │ │ │ │ │ │ │ │
│ 4 │ │ │ │ │ │ │ │ │ │ │
Show Calculations
Comparison of the results:
Provide the percent difference formula used as well as the calculated percent difference in each case.
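As one way to organize the arithmetic (a sketch with made-up sample numbers, not measured data; the percent-difference convention shown is one common choice), the momentum check, percent difference, and kinetic-energy loss asked for above can be computed as:

```python
# Check conservation of linear momentum and compute the %difference and
# kinetic-energy loss for one hypothetical data set
# (masses in grams, velocities in cm/s; rightward taken as positive).
M1, V1, U1 = 250.0, 30.0, -5.0    # hypothetical glider 1 data
M2, V2, U2 = 190.0, -25.0, 21.0   # hypothetical glider 2 data

p_before = M1 * V1 + M2 * V2      # total momentum before, gram*cm/s
p_after  = M1 * U1 + M2 * U2      # total momentum after
pct_diff = abs(p_before - p_after) / (abs(p_before + p_after) / 2) * 100

# Kinetic energies, converted to SI (kg, m/s) so the loss comes out in joules.
def ke(m_g, v_cms):
    return 0.5 * (m_g / 1000.0) * (v_cms / 100.0) ** 2

dKE = (ke(M1, V1) + ke(M2, V2)) - (ke(M1, U1) + ke(M2, U2))
print(f"p_before = {p_before:.0f}, p_after = {p_after:.0f} gram*cm/s")
print(f"%diff = {pct_diff:.2f}%, KE loss = {dKE * 1000:.2f} mJ")
```

With these sample numbers the momenta agree to well under 1%, as expected for a nearly elastic air-track collision.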
State your conclusions of the experiment.
Provide a discussion if necessary.
synchronized clocks with respect to rest frame
Let me give you the correct answer to your question: if the train was at rest and it starts moving, AS FAR AS WE CAN TELL UNDER THE AXIOMS OF SR, the clocks might stand on their heads and sing the
Hallelujah Chorus.
This is simply untrue. See the Usenet Physics FAQ on the topic:
We have two axioms to go on and both explicitly restrict themselves to inertial observers.
Also untrue. The postulates restrict themselves to inertial frames. You can have non-inertial observers and objects moving in an inertial frame, you just cannot build an inertial reference frame
where they are at rest. See the FAQ linked above.
However, although neither postulate explicitly mentions non-inertial reference frames, from an inertial reference frame it is simply a mathematical transform to obtain the physics of a non-inertial
reference frame. Thus, SR can deal with non-inertial reference frames as well. The two postulates do not apply directly, but the physics can nevertheless be derived from the postulates in a
mathematically rigorous way.
If you are asking about accelerating objects, then you are outside of the scope of the 1905 paper.
Also untrue. Einstein explicitly deals with accelerating clocks in section 4.
Four lines. Hardly a sufficient treatment.
Nonsense. Exactly how many lines are required for a sufficient treatment? What if I set Einstein's treatment in a larger font with a narrower column width so that it takes the required number of
lines, does the treatment suddenly become sufficient?
The sufficiency of the treatment has nothing to do with the length. If a correct result is derived or explained in a few words, then that is a credit to the treatment, not a detraction. In this case,
Einstein succinctly and clearly extended the time dilation of an inertial clock to the case of an accelerating clock. It is clearly part of the 1905 paper, and trying to pretend otherwise really
weakens your credibility.
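For concreteness (my own gloss, not part of the original exchange): under the assumption Einstein makes in section 4, that the instantaneous rate of a clock depends only on its instantaneous speed, the proper time read by an arbitrarily moving clock in an inertial frame is

```latex
\tau \;=\; \int_{0}^{t} \sqrt{1 - \frac{v(t')^{2}}{c^{2}}}\; dt'
```

which applies the inertial time-dilation factor instant by instant along the worldline, accelerated or not.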
Acceleration and gravitation are indistinguishable under GR, at least over short intervals where tidal effects aren't observable.
This actually contradicts the point you are trying to make. The whole point of the equivalence principle is that, over a small region, GR reduces locally to SR. So the fact that you can already deal
with acceleration in SR is (via the equivalence principle) what allows you to know how to deal with gravity in GR.
The Pound Rebka experiment is a classic example of this. You can analyze the Pound Rebka experiment as an experiment on an accelerating rocket far from gravity using SR. You then know immediately the
result you expect in the stationary lab under gravity using GR.
An Analysis of the First Proofs of the Heine-Borel Theorem - Cousin's Proof
Cousin's Proof
In his 1895 “Sur les fonctions de n variables complexes,” Pierre Cousin [7] extended the Heine-Borel Theorem to arbitrary covers. On page 22, he wrote:
Define a connected space S bounded by a simple or complex closed contour; if to each point of S there corresponds a circle of finite radius, then the region can be divided into a finite number of
subregions such that each subregion is interior to a circle of the given set having its center in the subregion.
Before we examine the proof, we mention a few points:
• According to Dugac, this memoir was completed on October 28, 1893 (two years before Borel), but was not published until 1895 [8, p. 97]. It is very unlikely that Cousin was aware of Borel’s
statement and he certainly did not make any reference to it. It appears that Cousin has at least some claim of priority for the theorem.
• Cousin’s statement is a higher-dimensional version of the Heine-Borel Theorem. Circles of finite radius would be equivalent to open intervals in one dimension.
• As Cousin did not work with the real number line in his proof, the form of completeness he used is not precisely one of those listed in our Introduction. However, there are higher-dimensional
analogues of these characterizations. Specifically, Cousin made use of the following higher-dimensional version of the nested interval property:
Every nested sequence of closed, bounded sets in \({\mathbb R}^2\) has a non-empty intersection.
• It would be more accurate to refer to his “circles” as “disks” because they do not consist just of the boundary.
• No mention of countability of the cover is made, nor is it required, for this proof.
Cousin started immediately with a description of his proof technique, just as we teach our students to do, and proceeded by contradiction. He used the “divide and conquer” technique that probably
appears elsewhere in a real analysis course.
Diagram 2: In this diagram we see the “divide and conquer” method that Cousin employed in his proof.
Cousin took the region S and divided it into \(n>1\) subregions (\(n\) an integer). If the entire region required an infinite number of circles to cover it, then at least one of the subregions must
also require an infinite number of circles to cover it. Cousin called that subregion S[1]. It is the light grey, lower right quadrant of Diagram 2.
We suppose, in fact, the lemma is false: we divide S into squares using parallels to the coordinate axes, in a way that the number of obtained regions is at least equal to a certain integer n;
there is at least one of these regions S[1], for which the lemma is still false.
Cousin then iterated the process, dividing S[1] into n sub-subregions. At least one of these sub-subregions must require an infinite number of circles to cover it. He called this region S[2] (the
slightly darker grey, lower left quadrant of S[1] in Diagram 2.) By continuing in this fashion, he created an infinite sequence of nested closed square regions S[1], S[2], … , S[p], … (the
successively darker grey regions in Diagram 2.) He then applied the nested interval property to show there is a point M that is common to each of these S[p], and that M is in the interior or boundary
of S.
Subdividing S[1] into squares and portions of squares in number at least equal to n, I deduce S[2], in the same way that S[1] is deduced from S; in following the reasoning, I arrive at an
indefinite series of squares or portions of squares S[1], S[2], … , S[p], … ; it is clear that S[p], for p increasing indefinitely, has for a limit a point M interior to S or on its perimeter;
Because M is in each S[p], and each S[p] cannot be covered by a finite number of circles, certainly each S[p] cannot be covered by a single circle. However, as Cousin noted, this is impossible,
because M is in S and so some circle of positive radius covers it (the blue circle in Diagram 2). That circle will contain some S[n] and all subsequent S[n+k], which is a contradiction to the way
that the S[p] were constructed.
... one arrives at this conclusion that one can find a square S[p] surrounding M or adjacent to M which is not contained in the interior of one of the circles of the statement; however this is
impossible because to the point M corresponds a circle of finite radius having this point for a center.
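Cousin's bisection argument is essentially constructive. As an illustration only (a one-dimensional analogue with hypothetical helper names, not part of Cousin's paper), the same divide-and-conquer idea can extract a finite subcover of a closed interval from an open cover by intervals:

```python
def finite_subcover(a, b, cover, depth=0, max_depth=50):
    """Extract a finite subcover of the closed interval [a, b] from
    `cover`, a list of open intervals (l, r), by recursive bisection."""
    # If a single interval of the cover already contains [a, b], stop.
    for (l, r) in cover:
        if l < a and b < r:
            return [(l, r)]
    if depth >= max_depth:
        raise RuntimeError("no finite subcover found (is this an open cover?)")
    m = (a + b) / 2
    # Otherwise, bisect -- mirroring Cousin's S, S1, S2, ... subdivision --
    # and cover each half separately.
    return (finite_subcover(a, m, cover, depth + 1, max_depth)
            + finite_subcover(m, b, cover, depth + 1, max_depth))

cover = [(-0.1, 0.6), (0.5, 1.1)]
pieces = finite_subcover(0.0, 1.0, cover)
print(pieces)
```

The recursion terminates for exactly the reason Cousin gives: were it infinite, the nested halves would shrink to a point M, yet the cover interval containing M would swallow all sufficiently small halves around it.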
The following may be helpful when considering using Cousin’s proof in a class.
• A discussion of completeness in the form of the nested interval property applied to nested, closed, bounded, connected two-dimensional regions will be required.
• This proof has the benefit of using the “divide and conquer” technique with which students may be familiar, particularly from Bolzano-Weierstrass or one of the corollaries that we discussed
above. If they have not already seen this technique, it is more likely it will appear in their studies than the “numbers of the second type” technique employed by Borel.
• Glancing through introductory analysis, topology, and set theory textbooks shows it is common to see a proof of the Heine-Borel Theorem using a technique similar to the one utilized by Cousin.
Presenting Cousin’s proof as the germ of the textbook technique may be helpful for students. Recall that Borel also published a proof using this “divide and conquer” technique in [3]. While
appearing years after Cousin’s paper, interested instructors may consider examining Borel’s presentation as well.
• This proof does not require the covering to be countable.
• Even if the course requires only the one-dimensional theorem, it makes for a nice student project to study a higher-dimensional analogue.
• Cousin provided a well-written proof that is easy to follow. His strategy is clearly presented at the beginning and the contradiction is noted at the end.
• Although Cousin’s statement is equivalent to the Heine-Borel Theorem, it certainly is not in the form that students will be using in a basic analysis course. The theorem will need to be
translated into the one-dimensional case for most of the applications.
We believe that this proof would be particularly applicable if the Heine-Borel Theorem is being taught in a point-set topology course or in an advanced calculus class where domains of functions are
often two-dimensional or higher.
infinite divisor chain
Let R be a ring. An element b of R has an infinite divisor chain iff there is an infinite sequence (a[i]) in R with:
• a[1] divides b
• a[i+1] divides a[i] for i > 0, i an integer
• Any a[i] is only associated to a finite number of elements a[j]
Where is this definition important?
Let's look at a unique factorization domain (UFD).
In an UFD, every irreducible element is a prime.
Proof: Let p be irreducible and let p divide xy, which means bp = xy for some b. We must show that p divides x or y. Let x = x[1]...x[n] and y = y[1]...y[m] be factorizations of x and y into irreducibles. Then x[1]...x[n]y[1]...y[m] is a factorization of bp. Because p is irreducible and any factorization is unique, p must be associated to some x[i] or y[j]. Therefore p divides x or y.
Now the question arises: "If R is not a UFD, are there irreducible elements which are not prime?" or "Are 'UFD' and 'irreducible implies prime' equivalent?" Note that "irreducible -> prime" makes
factorizations unique.
The answer is no, because it is possible to have reducible elements without a factorization into irreducible elements. Sounds weird, but it is true: an element might have an infinite
divisor chain.
Let's give an example:
Let R be the set of holomorphic functions on C (here: the functions must be holomorphic at every point of C).
R is a ring under the standard addition and multiplication of functions. R is commutative and an integral domain. It can be shown that any irreducible element is prime.
By the Weierstrass factorization theorem there are holomorphic functions which are infinite products of functions of the form (z-n)g(z) (where g(z) is never 0 on C). Such a product can be divided by any of the factors (z-n): throwing (z-n)g(z) out of the product again gives a holomorphic function. This obviously gives an infinite divisor chain, and an infinite divisor chain contradicts the properties of a UFD.
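A concrete instance (my illustration; the original entry does not name one): the entire function sin(πz) vanishes at every integer and has the Weierstrass product

```latex
\sin(\pi z) \;=\; \pi z \prod_{n=1}^{\infty} \left( 1 - \frac{z^{2}}{n^{2}} \right)
```

Dividing out the factors one at a time yields a sequence of entire functions, each dividing the previous one and no two of them associated, so sin(πz) heads an infinite divisor chain and admits no factorization into finitely many irreducibles.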
I had to look this example up and there seems to be no easier one.
Equation for a line.
December 7th 2007, 04:14 PM #1
Dec 2007
Hi. I need help finding a vector equation for a line. In the question I'm given 2 points which both have the same distance to the line.
Points (2,2) and (0,-2)
I managed to find a point on the line p
Try: $\left\langle {1, 0} \right\rangle + t\left\langle {-2,1} \right\rangle$
Thank you. But how did you get that vector? In the answer sheet it says (2,-1) but I guess it's the same thing. How did you come up with (-2,1)? Thanks again.
Well just look at $\left\langle {2, - 1} \right\rangle = - \left\langle { - 2,1} \right\rangle$. That means that those vectors are parallel.
So we can use either one of them or any multiple of either.
As to how I found it: I just found a vector which is perpendicular to the given vector.
Thanks again!
Oh yeah!! How silly I am. (u·v) = 0 makes them perpendicular to each other. Man, I've forgotten so much. Thanks a bunch again!
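For completeness, here is a small sketch (my own illustration, not from the thread) of how the midpoint and perpendicular direction fall out of the two given points:

```python
# The line equidistant from two points is their perpendicular bisector.
# For P = (2, 2) and Q = (0, -2):
P, Q = (2, 2), (0, -2)
mid = ((P[0] + Q[0]) / 2, (P[1] + Q[1]) / 2)   # a point on the line
seg = (Q[0] - P[0], Q[1] - P[1])               # direction from P to Q
perp = (-seg[1], seg[0])                       # rotate 90 degrees
print(mid, perp)   # (1.0, 0.0) (4, -2) -- parallel to (2, -1) and (-2, 1)
```

The dot product of seg and perp is zero, which is exactly the perpendicularity check mentioned above.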
Introduction to mathematical statistics
This classic book retains its outstanding ongoing features and continues to provide readers with excellent background material necessary for a successful understanding of mathematical statistics.
Chapter topics cover classical statistical inference procedures in estimation and testing, and an in-depth treatment of sufficiency and testing theory—including uniformly most powerful tests and
likelihood ratios. Many illustrative examples and exercises enhance the presentation of material throughout the book, supporting a more complete understanding of mathematical statistics.
Calculate The Kinetic Energy Of A 45 Gram Golf ... | Chegg.com
Calculate the kinetic energy of a 45 gram golf ball traveling at 20m/s
(Remember your standard units!!!!!!!!!!!!!)
Calculate the kinetic energy of a 200 gram baseball traveling at 40m/s
Calculate the kinetic energy of a 300 gram softball traveling at 35m/s.
THANKS! please include equation and show ALL work.
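A sketch of the requested calculation (the standard formula KE = (1/2)mv², with mass converted from grams to kilograms to get joules):

```python
# KE = (1/2) * m * v^2, with m in kilograms and v in metres per second.
def kinetic_energy(mass_g, v_ms):
    return 0.5 * (mass_g / 1000.0) * v_ms ** 2

for m, v in [(45, 20), (200, 40), (300, 35)]:
    print(f"{m} g at {v} m/s: KE = {kinetic_energy(m, v):.2f} J")
```

This yields 9 J for the golf ball, 160 J for the baseball, and 183.75 J for the softball.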
Mechanics of lipid bilayer junctions affecting the size of a connecting lipid nanotube
In this study we report a physical analysis of the membrane mechanics affecting the size of the highly curved region of a lipid nanotube (LNT) that is either connected between a lipid bilayer vesicle
and the tip of a glass microinjection pipette (tube-only) or between a lipid bilayer vesicle and a vesicle that is attached to the tip of a glass microinjection pipette (two-vesicle). For the
tube-only configuration (TOC), a micropipette is used to pull a LNT into the interior of a surface-immobilized vesicle, where the length of the tube L is determined by the distance of the
micropipette to the vesicle wall. For the two-vesicle configuration (TVC), a small vesicle is inflated at the tip of the micropipette tip and the length of the tube L is in this case determined by
the distance between the two interconnected vesicles. An electrochemical method monitoring diffusion of electroactive molecules through the nanotube has been used to determine the radius of the
nanotube R as a function of nanotube length L for the two configurations. The data show that the LNT connected in the TVC constricts to a smaller radius in comparison to the tube-only mode and that
tube radius shrinks at shorter tube lengths. To explain these electrochemical data, we developed a theoretical model taking into account the free energy of the membrane regions of the vesicles, the
LNT and the high curvature junctions. In particular, this model allows us to estimate the surface tension coefficients from R(L) measurements.
Membrane tethers have been studied extensively over the past 40 years [1-11]. These structures, also called membrane nanotubes, were observed during fluid shear deformation of live cells attached to
a substrate. As these cells were dislodged, membranous tethers remained attached to the surface displaying both the fluid and the elastic properties of the membrane [1,2]. Following this work many
naturally forming membrane nanotubes have been identified [7-10]. For example, membrane nanotubes have been shown to exist within the cell, notably in the trans golgi network [10]. Here, lipid and
protein cargo destined for various destinations throughout the cell are sorted and pinched off from the tubular membrane of the network. It has also been reported that cells have the ability to use
membrane nanotubes for the exchange of organelles [7], and this exchange has interestingly even been recognized between different cell types [8]. Thus, these tethers, which were first observed
following a dramatic manipulation, have been shown to be a common occurrence in biology.
Following their initial discovery, the lipid membrane nanotubes (LNTs) have been created artificially in several model membrane systems. By attaching a bead or a micropipette to a point on the
membrane and applying a localized mechanical force to the bilayer surface it has been shown that a lipid tether can be pulled from the vesicle membrane [3-5,11]. The size of the structure is a result
of the interplay between the curvature elasticity effects maintaining the original geometry and the membrane tension [12]. Tether pulling experiments can be used for estimations of tube diameters. By
measuring the forces required for pulling a tube, the diameter of the LNTs were estimated to be 50-200 nm [13]. From a tube coalescence method [14] and video pixel analysis of accumulated
fluorescence images as well as from micrographs obtained with differential interference contrast optics [5], the LNT diameters were determined to be in the range of 100-300 nm [13]. To complement
these methods, we developed an electrochemical method to monitor the diffusion of electroactive molecules through the LNT, thus allowing the LNT diameter to be measured as a function of nanotube
length [11]. The method relies on the formation of a vesicle-LNT network by using a micropipette technique [5,15]. The micropipette-assisted vesicle-LNT network formation allows us to create complex
systems of vesicles interconnected by LNTs, including a so-called inward configuration where a small daughter vesicle is created inside a larger mother vesicle, the two vesicles being connected by a
LNT [6] (see Figure 1A). During network formation, the LNT is pulled with a micropipette to the interior of the vesicle and thus the opening of the tube faces outward to the exterior of the vesicle.
This makes it possible to monitor the diffusion of a marker molecule from the micropipette, through the tube, and out of the nanotube opening. The concentration of the molecules measured at the
opening of the LNT is directly related to the inner diameter of a LNT of determined length [11]. In this article we use the electrochemical method for monitoring the size of a nanotube attached
directly to the micropipette in the configuration we refer to as the tube-only configuration (TOC) (see Figure 1B).
Figure 1. Experimental configurations. Sketches of the geometries of the large unilamellar vesicles interconnected with a common LNT; (A) the "two-vesicle" configuration, where the LNT is connected
between the mother vesicle and a small daughter vesicle inside of the mother vesicle, (B) the "tube-only" configuration where the LNT is connected between the tip of a glass pipette and the giant
unilamellar vesicle.
Additionally, by inflating a small ("daughter") vesicle at the tip of the micropipette, the diameter of a nanotube placed in between the inner vesicle and membrane of the outer vesicle can be
examined in a configuration here called two-vesicle configuration (TVC) (see Figure 1A). The measurements show that there is a reduction in tube diameter at shorter length, and the effect appears to
be more pronounced in the TVC. In this work we suggest a geometrical model based on direct minimization of the Helfrich's functional for the system of lipid vesicles linked to a LNT via junctions of
specific geometry. This new model presents a unified quantitative analysis of TOC and TVC and explains why the length of the LNT in the TVC is twice as high as in TOC for a given radius. Furthermore,
the model has just two parameters, which can be chosen to fit the experimental data on monitoring of the size of the LNT. This allows for identifying the contribution of the surface tension to the
free elastic energy of the system. This low-tension term has been neglected in the related publication [11], where a phenomenological description of the system was suggested and only a qualitative
consistency with experimental data was obtained.
Materials and methods
Surface-immobilized giant unilamellar soybean liposomes (SBL) were made from soybean polar lipid extract (Avanti Polar Lipids, Alabaster, AL), as previously described [5,6,11,15]. An injection
pipette pulled with a commercial pipette puller (Model PE-21, Narishige Inc., London, UK) and was back-filled with a 50 mM catechol solution. The pipette was then electro-inserted into the
unilamellar liposome with the aid of a voltage pulse generated relative to a 5 μm counter electrode (ProCFE from Dagan Corp, Minneapolis, MN), which was placed on the opposite side of the liposome
from the injection pipette. Carbon fiber working electrodes were fabricated in house and have been described elsewhere [11]. Working electrodes were held at +800 mV versus a silver/silver chloride
reference electrode (Scanbur, Sweden). All measurements were made using an Axon 200B potentiostat (Molecular Devices, Sunnyvale, CA).
Nanotube radius measurements and calculations
The flux of catechol through the nanotube was measured using carbon fiber amperometry. A 5 μm carbon fiber microelectrode was placed at the nanotube-liposome junction. The nanotube was then either
lengthened or shortened by manipulating the injection pipette. After the new length was obtained, the current was allowed to stabilize and was then recorded. This process was repeated several times
for each liposome resulting in a series of electrochemical measurements for tubes of different lengths. The electrode was then removed from the nanotube-liposome junction and allowed to reach a
steady current to establish a baseline. The difference in measured current for a nanotube versus this background together with the length of the nanotube was then used to compute the diameter of the
nanotube based on the previously derived relationship
R = [ Δi L / (π n F D ΔC) ]^(1/2)     (1)
where R is the radius of the nanotube of a given length L, Δi is the change in measured current with respect to the background, n is the number of moles of electrons transferred per mole of redox
species (for catechol, this is equal to 2), F ≈ 96485.34 C/mol is Faraday's constant, D = 7.0 × 10^-6 cm^2/s is the diffusion coefficient of the selected redox species (catechol). ΔC is the change
in concentration of catechol over the nanotube length and is equal to the concentration of electroactive species in the pipette assuming that the concentration at the electrode surface is zero (in
our experiments ΔC = 50 mM).
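As a sketch of how Equation (1) is applied (the constants are those given in the text; the current change and tube length are hypothetical example values, not measured data):

```python
import math

# Constants as given in the text (CGS units keep the arithmetic simple).
n = 2          # electrons transferred per catechol molecule
F = 96485.34   # Faraday's constant, C/mol
D = 7.0e-6     # diffusion coefficient of catechol, cm^2/s
dC = 5.0e-5    # 50 mM expressed in mol/cm^3

dI = 20e-12    # hypothetical measured current change, A (20 pA)
L = 10e-4      # hypothetical tube length: 10 um expressed in cm

# Eq. (1): R = sqrt(dI * L / (pi * n * F * D * dC))
R_cm = math.sqrt(dI * L / (math.pi * n * F * D * dC))
print(f"R ≈ {R_cm * 1e7:.0f} nm")
```

For these example values the radius comes out near 100 nm, on the scale of the tether dimensions quoted in the introduction.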
The results for the tube radius deduced from the simultaneous measurement of electrochemical current and the tube length by using formula (1) are presented in this study. In comparison with our
previous publication [11], a wider range of the length L of the tube is presented for the TOC configuration.
Theoretical approach
The system under consideration
In the first system (Figure 1A), a mother vesicle contains a small daughter vesicle on the inside with a common LNT connecting the two compartments. In the second case (Figure 1B), the lipid tube is
pulled to the inside of the vesicle and is directly fixed to the tip of the micropipette. Also, there is a source of lipid attached to the mother vesicle wall. The presence of lipid source means that
the surface tension is low. We model the membrane as a two-dimensional surface Γ. Its free elastic energy written in the form of Helfrich functional [16] reads
E = ∫_Γ [ (k/2)(2H − C[0])² + σ ] dA     (2)
Here H is the mean curvature of the surface, C[0] is the spontaneous curvature which is determined by the specific chemical composition of the membrane, k is the coefficient of membrane bending, and σ is the coefficient of membrane surface tension. The equilibrium shape of the membrane with a pulled cylindrical tubule can be found from the minimum of the functional
G = E − f L
where f is the force needed to pull the lipid tube of length L [12]. In the case when the junctions are not taken into account, the interplay between membrane bending k and membrane tension σ
produces variability in tubule radius and the force f[0]
R[0] = (k / 2σ)^(1/2),   f[0] = 2π (2kσ)^(1/2)     (3)
where f[0] is the force needed to hold the tube of radius R[0] at a fixed position [12]. However, it was shown that for lipid vesicles interconnected with LNTs, either pulled outward from the vesicle
wall [5,15] or inward into the vesicle interior [11,17], the neck elements (the junctions between the lipid tube and the vesicle body) also contribute to the total free energy of the membrane. Below
we consider a theoretical model based on the Helfrich functional to find the equilibrium shape of the membrane accounting for the junctions of the specific geometry. By comparing the results of
numerical computations with experimental data, we are able to determine the tension in the LNT after fitting the experimental data with the geometrical model described below.
The geometrical model
When the inner vesicle or the junction between the micropipette and the nanotube is subjected to the translation movement along the LNT axis, the length of the tube is changed (increasing or
decreasing its value in a controlled way, which can be monitored under the microscope). During these manipulations the radius of the tube adapts to minimize the Helfrich free energy (2) with C[0] = 0, as we neglect any contribution from spontaneous curvature.
We assume that the shape of LNT can be approximated by a cylindrical surface of radius R and length L. Since radii of both vesicles are much larger than the tube radius, the junctions between the
cylinder and vesicles are modelled by toroidal surfaces with the inner radius R + r and crossection radius r (Figure 2). In the TOC, when the inner vesicle is not present, only one junction is
considered. Although the junction between the micropipette and the tube contributes to the total free energy, it is assumed that this contribution does not depend on the tube radius R and, thus, the
corresponding term vanishes after the variation. In these settings, the radius-dependent part of the free energy is given by the expression:
Figure 2. Schematics of the geometry of the tube-junctions.
The toroidal part of the surface can be parametrised by (5) due to translation invariance of the energy functional (4). The multiplier ν assumes the value 1 for TOC and 2 for TVC to represent both configurations.
In Equation 4, L, r, σ are fixed parameters while the radius of the tube R is adjusted to satisfy
∂E/∂R = 0     (6)
The variation (6) yields the following relation between the tube length L and radius R
where r and σ are treated as parameters to be fitted to the experimental data. Assuming that the radius of the tube R is much larger than the parameter r, the first two terms of the power-series expansion of (7) with respect to r/R can also be used to quantitatively model the measured relation L(R). This simplified form of (7) reads
and allows for expressing R as a function of L
An important feature of the proposed model is its asymptotic behaviour: with growing tube length L, the radius R grows and the energy of the cylindrical part of the surface becomes dominant over the energy of the toroidal junctions. Thus, in the limit case L → ∞, we obtain the junction-free equilibrium value of R given by (3).
Fitting the parameters
For given K measurements (L[i], R[i]), i = 1 ... K, we vary r to minimize
by means of conjugate gradient minimization procedure. Here, the relation L(R) is given by (7). When the ratio r/R is small, the approximation (8) can be used instead. In this case, one can also fit
(9) to the data by minimizing the functional
The latter method is preferable when the relative measurement error for R is greater than the one for L.
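The fitting step described above can be sketched in a few lines of Python. This is illustrative only: the model function below is a placeholder (the paper's relation (7) is not reproduced in this excerpt), a grid search stands in for the conjugate-gradient minimisation, and all numerical values are made up.

```python
# Hypothetical sketch of the least-squares fit of (L_i, R_i) data.
# model_L is a stand-in for the paper's relation L(R; r, sigma/k);
# the real expression (eq. 7) is not reproduced here.
def model_L(R, r, sigma_over_k):
    # placeholder: any smooth model of L as a function of R
    return 1.0 / (2.0 * sigma_over_k * R) + r / R

def functional(data, r, sigma_over_k):
    # (10)-style sum of squared residuals in L
    return sum((L - model_L(R, r, sigma_over_k)) ** 2 for L, R in data)

def grid_fit(data, r_grid, s_grid):
    # crude stand-in for the conjugate-gradient minimisation
    return min(((functional(data, r, s), r, s)
                for r in r_grid for s in s_grid))[1:]

# synthetic data generated from known parameters (units: μm, μm^-2)
true_r, true_s = 0.0017, 89.0
data = [(model_L(R, true_r, true_s), R) for R in (0.03, 0.04, 0.05, 0.06)]
r_hat, s_hat = grid_fit(data,
                        [0.001, 0.0017, 0.003],
                        [54.0, 89.0, 120.0])
```

On exact synthetic data the grid search recovers the generating parameters, since the residual functional vanishes only at the true grid point.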
Results and discussion
When fitting the curve (7) to the dataset for the TVC, the parameter values are r ≈ 1.7 nm and σ/k ≈ 89 μm^-2. The corresponding values for the dataset in the case of TOC are r ≈ 1.2 nm and σ/k ≈ 54
μm^-2. The relation L(R) with fitted parameters is plotted in Figure 3 (blue curves) together with the measured experimental data. As expected, the parameter r is much smaller than the radius R: r/R <
0.06. Therefore, the simplified form (8) and its inverse (9) can be used for the given range of values of R. Fitting the relation (9) to the measurements by minimizing (11) yields σ/k ≈ 98 μm^-2, r
≈ 1.9 nm and σ/k ≈ 72 μm^-2, r ≈ 1.7 nm for TVC and TOC, respectively. The corresponding curves are plotted in Figure 3 in red. The model exhibits good agreement with the empirical data. A rather
large scattering of measurement points at high R values in the TOC case is reflected as an about 20% difference in parameter values when using the different approaches to find the best fit. In this case,
the values obtained through fitting (9), namely σ/k ≈ 72 μm^-2, r ≈ 1.7 nm, have higher reliability.
Figure 3. Comparison of experimental and model results. The measurement points (shown as markers) and the predictions of the model (solid lines). Parameters for the model predictions were chosen to
minimize functionals (11) (red lines) and (10) (blue lines).
Our model establishes a connection between the data from the TOC and TVC experiments. It follows directly from formula (7) that to reach a given radius R of the tube, the length L_TVC of the tube in
the TVC experiment must be double that in the TOC arrangement.
To explore this theoretical prediction, we divide the lengths obtained in the experiment with TVC by two and plot the resulting data set together with the measurements for TOC in Figure 4. The
optimal parameters of the model for this unified data are σ/k ≈ 55 μm^-2, r ≈ 1.2 nm and σ/k ≈ 71 μm^-2, r ≈ 1.6 nm for functionals (10) and (11), respectively. These values are similar to the ones
for the TOC case, since this portion of the data is more disperse and makes much greater contributions to functionals (11) and (10) when compared to the data for the TVC case. Figure 4 also shows that
the measurements are in agreement in the region where they overlap, i.e., for values of R between 0.05 and 0.06 μm.
Figure 4. Comparison of experimental and model results (unified description). The measurement points (shown as markers) for both TVC and TOC plotted after dividing the TVC length by two. Parameters
for the model predictions were chosen to minimize functionals (11) (red line) and (10) (blue line).
Assuming the well-established value of the bending modulus k = 10^-12 erg [16], the recalculated coefficients of the surface tension are found in the interval σ ~ 0.01-0.02 dyn/cm. These tension
values are much smaller compared to the lipid molecular compressibility (100 dyn/cm) [18] but much larger than the critical surface tension for the instability of the membrane cylinder and
"pearling" (10^-5 dyn/cm for DGDG/DMPC membrane LNTs of radius R ~ 0.3-5 μm found in [19]), while comparable with the magnitude of the lateral tension (upper limit) for mutual adhesion of
lecithin membranes, ~10^-4 erg/cm^2 [20].
The small value of the junction radius corresponds to the strongly deformed state of the membrane. These small values should be considered as order estimates, since they are attributes of the assumed
toroidal geometry of junctions. The real shape of these junctions is probably more complex and, thus, cannot be described by just two scalar valued parameters. Although freeze-fracture electron
microscopy does not reveal bilayers with curvature radius less than 20 nm, the value r ~ 1.5 nm which is found from the model is similar to the radius of curvature of small inverted pores (for example, it is
known that phospholipids spontaneously form inverted membrane structures with the radius varying between 0.5 and 5 nm, and the smallest fusion pores have a calculated diameter of less than 2.5 nm) [21,22].
We propose a simple geometrical model for the quantitative explanation of the experimental results on the equilibrium geometrical shape and LNT parameters, R(L), in the different configurations. The
experimental observations show that the nanotube diameter is reduced at shorter lengths and also that the diameter is consistently smaller for the TVC as compared to the TOC at a given length. The
observed effect is ascribed to the elastic junctions, since the phenomenon is accentuated in a system containing two necks connected to a vesicle membrane. We approximate the shape of
these junctions by simple geometrical shapes and express the free elastic energy of the membrane in terms of the length of the LNT, its radius, the radius of the junction and the tension of the
membrane. Variation of the energy with respect to the nanotube radius yields an explicit relation between the radius and the length. The relation is in agreement with observed values. The model
enables estimations of the current surface tension coefficient and the curvature at the junction regions. The estimated values of the surface tension are of order 10^-2 dyn/cm and the curvature values at the
junctions are comparable to those at fusion pores. Furthermore, the proposed model offers a clear explanation of the difference in measurements for TVC and TOC: in contrast to TOC, the TVC features
two junction regions; thus, the length of the LNT in this configuration must be twice as long to achieve the same value of the radius.
DGDG: digalactosyldiacylglycerol; DMPC: dimyristoylphosphatidylcholine; LNT: lipid nanotube; SBL: soybean liposomes; TOC: tube-only configuration; TVC: two-vesicle configuration.
Authors' contributions
RG contributed in development of the geometrical model, analysis of experimental data and participated in writing of the manuscript. MVV participated in the model development and analysis of
experimental data, physical interpretation of results and writing the manuscript. KLA and MK have contributed to the experimental part of the study. RK, MK, AGE, and ASC have equally participated in
writing of Sections 'Background', 'Experimental', and 'Results and discussion.' RK and MVV provided the idea for the theoretical work. All authors read and approved the final manuscript.
The authors are grateful to Prof. Sergei Rjasanow for the helpful discussion of the geometrical model. Part of this study was supported by the German Academic Exchange Service (Deutscher Akademischer
Austausch Dienst). ASC acknowledges support from the Swedish Research Council (VR) and the Knut and Alice Wallenberg Foundation. AGE acknowledges support from the European Research Council, VR and
the USA National Institutes of Health.
| {"url":"http://www.nanoscalereslett.com/content/6/1/421","timestamp":"2014-04-16T19:05:19Z","content_type":null,"content_length":"110007","record_id":"<urn:uuid:e9cf59fc-1492-4a7a-aacc-01391531e982>","cc-path":"CC-MAIN-2014-15/segments/1397609524644.38/warc/CC-MAIN-20140416005204-00101-ip-10-147-4-33.ec2.internal.warc.gz"}
Physics 2010 Formula Sheet
Physics formula. Physics 2010 Formula Sheet. Some useful (and not so useful) constants and formulas: Δx = x_f − x_i, v_x = Δx/Δt, a_x = Δv_x/Δt, Δx = v_ix t + (1/2) a_x t^2, v_fx = v_ix + a_x t,
v_fx^2 − v_ix^2 = 2 a_x Δx, g = 9.8 m/s^2, F_x = m a_x, W = mg, f_s,max = μ_s N, f_k = μ_k N, W = F d cos θ, K = (1/2) m v^2, U = mgh, U = (1/2) k x^2, K_i + U_i = K_f + U_f.
Algebra and Trigonometry based Physics Formula sheet
Algebra and Trigonometry based Physics Formula sheet Shadow J.Q. Robinson1 1Department of Physics
Formula Sheet for Physics - Welcome to the Faculty Home Page Server
Reference Guide & Formula Sheet for Physics Dr. Hoselton & Mr. Price Page 1 of 8 Version 5
PHYSICAL REVIEW B 74, 085315 2006 Generalized Kubo formula for
Generalized Kubo formula for spin transport:Atheory of linear response to non-Abelian fields Pei
Empirical Formula and Molecular Formula
The Empirical Formula and The Molecular Formula . The empirical formula is the simplest formula
Material Safety Data Sheet - Duke Physics
Synonym: Dichloromethane Chemical Name: Methylene Chloride Chemical Formula: C-H2-Cl2 Contact
Campaign Formula
Distribute your videos to as many video sharing sites, social media sites, podcast directories and blogs as possible. Your Lead Capture Site Podcast Podcast Capture
The Metric Formula
acknowledges the writers/authors, backgammon players and the backgammon community in general for their studious approach to backgammon; for their ideas and mathematical Backgammon
Quadratic Formula
1 This material is the property of the AR Dept. of Education. It may be used and reproduced for non-profit, educational purposes only after contacting the ADE Quadratic equation
Brackett, Lloyd, C. "Planned Breeding," Dog World Magazine, Chicago IL, 1961. Hedhammer,Willis, Malcomb, "Breeding Dogs" Canine Health Conference, AKC Canine Dog breeding
Focus is the formula.
RESCO® straight cutting oils include a variety of extreme-pressure lubricants with the viscosity of your choice for the most difficult machining operations. RESCO
Euler’s Formula
Euler’s Formula Math 123 March 2006 1 Purpose Although Euler’s Formula is relatively simple
Formula of a Hydrate
Since many hydrates contain water in a stoichiometric quantity, it is possible to determine the mole ratio of water to salt. First, you would Hydrates
The HighSight Formula
Maria High School (1) Mother McAuley Liberal Arts (8) Mount Carmel High School (5) Notre Dame College Prep (1) Notre Dame High School for Girls (4) Mother McAuley Liberal Arts High School
The Formula for Curvature
The Formula for Curvature Willard Miller October 26, 2007 Suppose we have a curve in the plane
82 Extra Practice THE QUADRATIC FORMULA #27 You have used factoring and the Zero Product Property
Empirical Formula
174 ChemActivity 31 Empirical Formula (Can a Molecule Be Identified by Its Percent Composition
Formula of a Hydrate
Determination of the Formula of a Hydrate Introduction The purpose of this experiment
Dehumidification Formula
Step 1: (cft) ÷ Step 2: (class factor) = _____ AHAM Pints needed minimum Step 3: AHAM Pints needed ÷ Dehu AHAM Rating = # of Dehus needed Aham
71 Morganella morganii Antibiotic resistant gram negative rod 72 Neisseria gonorrhoeae GBL strain 73 Newcastle Disease Virus ATCC VR-109 74 Parainfluenza Virus type 1 Morganella morganii
Physics 116A Winter 2006 Bernoulli numbers and the Euler-Maclaurin
Physics 116A Winter 2006 Bernoulli numbers and the Euler-Maclaurin summation formula In this note | {"url":"http://pdf-world.net/pdf/409795/Physics-2010-Formula-Sheet-pdf.php","timestamp":"2014-04-18T08:02:57Z","content_type":null,"content_length":"33790","record_id":"<urn:uuid:4121696d-151c-451a-b365-41fe403d5dfb>","cc-path":"CC-MAIN-2014-15/segments/1398223206770.7/warc/CC-MAIN-20140423032006-00359-ip-10-147-4-33.ec2.internal.warc.gz"} |
RE: st: FW: ML for logit/ologit
Notice: On March 31, it was announced that Statalist is moving from an email list to a forum. The old list will shut down at the end of May, and its replacement, statalist.org, is already up and running.
From Nick Cox <n.j.cox@durham.ac.uk>
To "'statalist@hsphsun2.harvard.edu'" <statalist@hsphsun2.harvard.edu>
Subject RE: st: FW: ML for logit/ologit
Date Wed, 16 Nov 2011 11:35:40 +0000
You need to supply the right number of arguments too. Here it looks as if rho is undefined because you are not feeding the program with enough arguments.
-----Original Message-----
From: owner-statalist@hsphsun2.harvard.edu [mailto:owner-statalist@hsphsun2.harvard.edu] On Behalf Of Thomas Murray (Department of Economics)
Sent: 16 November 2011 11:18
To: statalist@hsphsun2.harvard.edu
Subject: RE: st: FW: ML for logit/ologit
Right yes - apologies for that - I was experimenting with different arguments and emailed a faulty program.
What I type is:
program drop logittest
program define logittest
version 12.0
args lnf b1 x rho
tempvar lng
qui {
gen double `lng' = ln(invlogit(`b1'*((`x'^(1-`rho')-1)/(1-`rho')))) if $ML_y1==1
replace `lng' = ln(invlogit(-`b1'*((`x'^(1-`rho')-1)/(1-`rho')))) if $ML_y1==0
replace `lnf' = `lng'
}
end

ml model lf logittest (happy =)
ml max
The trace returns:
- version 12.0
- args lnf b1 x rho
- tempvar lng
- qui {
- gen double `lng' = ln(invlogit(`b1'*((`x'^(1-`rho')-1)/(1-`rho')))) if $ML_y1==1
= gen double __000007 = ln(invlogit(__000006*((^(1-)-1)/(1-)))) if happy==1
unknown function ^()
replace `lng' = ln(invlogit(-`b1'*((`x'^(1-`rho')-1)/(1-`rho')))) if $ML_y1==0
replace `lnf' = `lng'
This is Brendan's comment from a bit earlier:
You have "((^(1-)" in there, so the ^ doesn't apply to a variable, and therefore isn't recognised as an operator.
I think this is because your `x' expands to "".
Could ^ be failing to pick up `rho' inside the bracket?
-----Original Message-----
From: owner-statalist@hsphsun2.harvard.edu [mailto:owner-statalist@hsphsun2.harvard.edu] On Behalf Of Nick Cox
Sent: 16 November 2011 11:03
To: 'statalist@hsphsun2.harvard.edu'
Subject: RE: st: FW: ML for logit/ologit
That is not clear at all. The explanation is different. You are invoking a local rho, which you never define. Your earlier statement defines mu.
It now also seems that your earlier "along the lines of" means that you were not showing us the code that was buggy, which does help to explain why we couldn't see the bug.
Please do "Say exactly what you typed and exactly what Stata typed (or did) in response."
Thomas Murray (Department of Economics)
I have run the set trace and it is clear there is a problem with the including an argument within the squared term (I've pasted the relevant part below). I am sure this is the bug but I do not know why Stata doesn't like it.
I am confident all the quotations are correct in the program.
Many Thanks,
- mata: Mopt_search()
------------------------------------------------------------------------------------------------ begin logittest ---
- version 12.0
- args lnf b1 b2 x mu
- tempvar lng
- qui {
- gen double `lng' = ln(invlogit(`b1'*((`x'^(1-`rho')-1)/(1-`rho')))) if $ML_y1==1
= gen double __000007 = ln(invlogit(__000006*((^(1-)-1)/(1-)))) if educ1==1 unknown function ^()
replace `lng' = ln(invlogit(-`b1'*((`x'^(1-`rho')-1)/(1-`rho')))) if $ML_y1==0
replace `lnf' = `lng'
* For searches and help try:
* http://www.stata.com/help.cgi?search
* http://www.stata.com/support/statalist/faq
* http://www.ats.ucla.edu/stat/stata/ | {"url":"http://www.stata.com/statalist/archive/2011-11/msg00802.html","timestamp":"2014-04-16T20:03:05Z","content_type":null,"content_length":"12372","record_id":"<urn:uuid:572ad074-c564-4ab6-ad6a-16261a772b4e>","cc-path":"CC-MAIN-2014-15/segments/1397609524644.38/warc/CC-MAIN-20140416005204-00264-ip-10-147-4-33.ec2.internal.warc.gz"} |
Math Forum Discussions - Re: How to apply tags to expression terms?
Date: Apr 17, 2013 2:30 AM
Author: David Park
Subject: Re: How to apply tags to expression terms?
The answers you received recommending TraditionalForm are probably the way
to go in most cases. But suppose that one prefers to work in StandardForm,
or for one reason or another TraditionalForm is also not the order you want
for terms or factors? Then I think a convenience routine might be useful.
I don't think the idea of using tags and trying to compute with such
expressions is too great. One would have to almost rewrite Mathematica
algebra! But we can write a useful routine for final display expressions.
So here is a routine that allows you to reorder terms in a Plus expression
or factors in a Times expression.
HoldOrderForm::usage =
"HoldOrderForm[permutation][expr] will reorder the terms or factors \
of a Plus or Times expression according to the permutation and put \
the result in a HoldForm.";
SyntaxInformation[HoldOrderForm] = {"ArgumentsPattern" -> {_}};
HoldOrderForm[permutation_?PermutationListQ][(f : Plus | Times)[
args__]] /; Length[{args}] == Length[permutation] :=
(HoldForm @@ {(List @@ f[args])[[permutation]]}) /. List -> f
Then for your simple case:
-1 + x;
HoldOrderForm[{2, 1}]@%
x - 1
Here is a case where we first reorder terms and then reorder the resulting factors.
(1 - x + Exp[I x]) (x - 1)
Inner[#1[#2] &, {HoldOrderForm[{2, 1, 3}], HoldOrderForm[{2, 1}]},
List @@ %, Times]
HoldOrderForm[{2, 1}]@%
(1 + E^(I x) - x) (-1 + x)
(x - 1) (E^(I x) + 1 - x)
(E^(I x) + 1 - x) (x - 1)
If you like it I'll put it in Presentations. Presentations contains a number
of convenience routines for manipulating expressions to special forms. Some
of them just seemed useful to me but many of them came from questions on
MathGroup. If the ideas seem useful and not too specialized then I consider
including them. And they are in a common place where Presentations users can
find them.
David Park
From: Alexei Boulbitch [mailto:Alexei.Boulbitch@iee.lu]
Dear Community members,
I often see on this site an at the StackExchange the repeating questions of
how to rearrange some expression, that Mathematica "likes" to keep in one
form, but the user prefers another one. It is like in this trivial example:
(x^2 - 1)/(x + 1) // Simplify
where Mathematica returns -1+x, rather than the x-1 that might be wished by the user.
I have seen many answers to these questions, and have given a few of mine. The problem
here is that the answers are non-universal: they strongly depend upon the
expression in question. Besides, they require some additional programming,
and the more complex is the formula to sort, the longer will be the part of
the sorting code.
It seems that the problem of sorting terms of analytic expressions in the
desired order might be solved, if one could assign tags to the terms to be
sorted, and then sort the terms according to a specified list of such tags.
Now comes my question, do you know how to apply tags to expression terms?
I have seen an analogous functionality in the Presentation Master, the
package of David Park. There, however, the tags are used to be assigned to
sub expressions, the held part of the expression in question. David, is it
possible to assign tags to expressions that are not held?
Thank you in advance.
Alexei BOULBITCH, Dr., habil.
IEE S.A.
ZAE Weiergewan,
11, rue Edmond Reuter,
L-5326 Contern, LUXEMBOURG
Office phone : +352-2454-2566
Office fax: +352-2454-3566
mobile phone: +49 151 52 40 66 44
e-mail: alexei.boulbitch@iee.lu | {"url":"http://mathforum.org/kb/plaintext.jspa?messageID=8894880","timestamp":"2014-04-19T17:46:43Z","content_type":null,"content_length":"4907","record_id":"<urn:uuid:3b716f77-47f1-4e5d-b89a-cbf820cb10a5>","cc-path":"CC-MAIN-2014-15/segments/1398223205375.6/warc/CC-MAIN-20140423032005-00539-ip-10-147-4-33.ec2.internal.warc.gz"} |
A-n space
An $A_n$-space or $A_n$-algebra in spaces is a space (in the sense of an infinity-groupoid, usually presented by a topological space or a simplicial set) with a multiplication that is associative up
to higher homotopies involving up to $n$ variables.
• An $A_0$-space is a pointed space.
• Same with an $A_1$-space.
• An $A_2$-space is an H-space.
• An $A_3$-space is a homotopy associative H-space (but no coherence is required of the associator).
• An $A_4$-space has an associativity homotopy that satisfies the pentagon identity up to homotopy, but no further coherence.
• …
• An A-infinity space has all coherent higher associativity homotopies.
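To illustrate the coherence appearing at the $A_4$ stage, the pentagon relates the five parenthesisations of a four-fold product (a standard picture, added here for illustration): the associativity homotopy gives two composite paths

```latex
((ab)c)d \to (ab)(cd) \to a(b(cd))
\qquad\text{and}\qquad
((ab)c)d \to (a(bc))d \to a((bc)d) \to a(b(cd)),
```

and in an $A_4$-space these two composites are required to agree up to homotopy, while an $A_5$-space supplies the next coherence relating the resulting homotopies, and so on.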
Revised on January 24, 2013 20:00:08 by
Urs Schreiber | {"url":"http://nlab.mathforge.org/nlab/show/A-n+space","timestamp":"2014-04-18T19:17:14Z","content_type":null,"content_length":"20683","record_id":"<urn:uuid:db339730-d8bf-4720-8421-de9fb9209ca5>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00179-ip-10-147-4-33.ec2.internal.warc.gz"} |
Dominik Göddeke -- GPGPU
GPGPU::Reduction Tutorial
Source code
Source code of the example implementation is provided. All code has been tested using Visual Studio.NET on Windows and the Gnu and Intel compilers on Linux. Some code samples do not work currently on
ATI hardware due to ATI's lack of support for LUMINANCE render targets. See below for details.
Introduction and Prerequisites
This tutorial explains the implementation of reduction-type operations on the GPU. Reductions are characterized by (several) input vector(s) of length N >> 1 and an output scalar. In other words,
reductions are functions that map large textures of size M by M to a texture of size 1x1. An example for such an operation is the computation of the maximum of a given floating point vector, which
will be covered in this tutorial. Other applications from linear algebra include vector norms, dot products etc. The technique introduced in this tutorial is however very general and can be applied
to many GPGPU operations.
To allow abstracting from special cases, we will assume N to be a power of two throughout this tutorial. For an implementation of general reductions, please refer to the chapter Mapping Computational
Concepts to GPUs by Mark Harris, published in GPU Gems 2, and especially the freely available sample code.
This tutorial is not meant to reiterate everything that is explained in detail in the Basic Math Tutorial. The reader is expected to have a level of understanding of GPGPU computing as conveyed in
that entry-level tutorial.
Some general remarks:
• I certainly do not claim to have invented this technique. It has been around from the very beginning of GPGPU.
• The tutorial is meant to be a teaching effort, not an effort in efficiency. You are highly discouraged to time and benchmark the provided sample code.
• Reductions are one example of how to do texcoord magic on the GPU, which is one very important building block to adapting new algorithms to the GPU programming paradigm. While the actual
implementation might look like ugly index battling at first sight, it is well worth it to try and understand what is really going on.
• The kernels are implemented in Cg, and the sample applications in C++ and OpenGL. A port to GLSL is trivial and left to the interested reader.
• Porting this code to texture2Ds from the texture rectangles in use is left as an excercise to the interested reader as well. This is meant to be a tutorial conveying ideas and not a library
providing running versions for all kinds of hardware.
• Almost all code in this tutorial is presented based on a mapping of a 1D CPU array into a 2D LUMINANCE texture on the GPU. The necessary modifications to implement this technique for a mapping
into RGBA textures are straight-forward and will not be illustrated in detail. Modified source code for one of the presented techniques is however provided below.
Basic idea
Recall that on the GPU, the kernels are executed on all elements (texels) covered by the output region (viewport-sized quad). One obvious way to compute a scalar from an input vector therefore is to
render a 1x1 output and use a kernel that reads in all values from the input texture(s). This naive approach however has several drawbacks. First, only one of the parallel processing elements (PEs) would
be busy, thus eliminating all parallel execution and therefore efficiency. Second, this might exceed the maximum allowed shader length and the static instruction count on some hardware.
The idea is to perform a parallel reduction operation instead, based on global communication techniques on parallel computers. The algorithm proceeds to recursively reduce the output size, e.g. by
computing the local maximum of each 2x2 group of elements in one kernel. A related term in computer graphics is to build a mipmap pyramid, though in this example, it will not contain blended colors
of subgroups of elements, but the maximum value.
On a high-level view, a parallel reduction on the GPU is an adjustment of the input and output texture sizes and element indices. For a given vector of length M by M, the output of the first step is
a M/2 by M/2 texture. For each of its texels, the texture coordinates for the input texture are adjusted so that they cover disjunct 2 by 2 subregions. The values in these subregions are then
compared and the maximum is written to the corresponding output locations. This is recursively repeated until a 2 by 2 texture has been reduced to the final 1 by 1 "scalar" texture, yielding a
logarithmic number of iterations.
The following series of images summarizes the first reduction step of the algorithm for an 8 by 8 input texture. The image on the left shows the input texture. Highlighted in green is the range of
the output texels (which are of course located in another texture). The image on the right shows the result of the first reduction pass. Observe that each texel in the output contains the local
maximum of a corresponding 2 by 2 region in the input texture. This relation is further highlighted in the second row of images.
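The same recursive 2x2 max-reduction can be prototyped on the CPU in a few lines of Python. This is purely illustrative (the GPU version below operates on textures, not lists), but it mirrors the halving schedule exactly:

```python
import random

def reduce_max(grid):
    """Recursively halve a 2^k x 2^k grid, keeping the max of each 2x2 block."""
    while len(grid) > 1:
        half = len(grid) // 2
        grid = [[max(grid[2*y][2*x],   grid[2*y][2*x+1],
                     grid[2*y+1][2*x], grid[2*y+1][2*x+1])
                 for x in range(half)]
                for y in range(half)]
    return grid[0][0]   # the final 1x1 "texture"

# an 8x8 input, as in the figures above
random.seed(0)
field = [[random.random() for _ in range(8)] for _ in range(8)]
result = reduce_max(field)
```

For an 8x8 input this performs three halving passes (8 to 4 to 2 to 1), matching the logarithmic pass count discussed below.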
Generic loop
In this tutorial, only the necessary steps to perform the actual reduction are explained in detail. The naming conventions are based on the Basic Math Tutorial to avoid confusion. Since the actual
reduction is performed in a ping-pong manner, three different textures are required in this example: one input texture containing the input vector, and two (temporary) textures to perform the
reduction itself. The following initializations are assumed:
// texture identifiers
GLuint pingpongTexID[2];
GLuint inputTexID;
// ping pong management vars
int writeTex = 0;
int readTex = 1;
GLenum attachmentpoints[] = { GL_COLOR_ATTACHMENT0_EXT, GL_COLOR_ATTACHMENT1_EXT };
All textures are assumed to be properly set up to the size sqrt(N) by sqrt(N). The texture identified by inputTexID is assumed to be already populated with the vector of length N we want to compute
the maximum entry of.
The first step in the computation takes the texture identified by inputTexID as input and reduces it once into the currently writable ping-pong texture attached to an FBO. This first step is taken
out of the reduction loop because we do not want to overwrite the input texture during the computation.
// enable fragment profile, bind program [...]
// enable input texture
cgGLSetTextureParameter(textureParam, inputTexID);
// set render destination
glDrawBuffer (attachmentpoints[writeTex]);
// calculate output region width and height
int outputWidth = texSize / 2;
// perform computation and ping-pong output textures
drawQuad(outputWidth, outputWidth);
swap();
Here, drawQuad(w,h) and swap() are two generic helper functions that draw a quad of size w by h into the output texture and swap the roles of the read-only and write-only textures attached to the FBO, respectively.
Next, the remaining number of reduction steps is computed and the steps are executed in a loop with the usual ping-pong technique.
// calculate number of remaining passes: log_2 of current output width
int numPasses = (int)(log((double)outputWidth)/log(2.0));
// perform remaining reduction steps
for (int i=0; i<numPasses; i++) {
// input texture is read-only texture of pingpong textures
cgGLSetTextureParameter(textureParam, pingpongTexID[readTex]);
// set render destination
glDrawBuffer (attachmentpoints[writeTex]);
// calculate output region width and height
outputWidth = outputWidth / 2;
// perform computation and ping-pong output textures
drawQuad(outputWidth, outputWidth);
swap();
}
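As a sanity check, the pass schedule produced by the loop above can be mirrored in a few lines of Python (the texSize of 512 is a hypothetical example):

```python
import math

def pass_schedule(tex_size):
    """Output widths of the reduction passes for a tex_size x tex_size input."""
    width = tex_size // 2                 # first pass, done outside the loop
    widths = [width]
    num_passes = int(math.log2(width))    # remaining passes
    for _ in range(num_passes):
        width //= 2
        widths.append(width)
    return widths

print(pass_schedule(512))  # prints [256, 128, 64, 32, 16, 8, 4, 2, 1]
```

Note that a texSize by texSize input always takes log2(texSize) passes in total: one initial pass plus the num_passes loop iterations.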
In the next sections, the actual kernels used for two different ways to implement this will be discussed in detail.
Solution 1: Precomputed texcoords
Recall that we need to adjust the texture coordinates used to sample the input texture to read in 2x2 blocks. One way to do this is to pass them in using the texcoord interpolants the GL provides:
With the glMultiTexCoord() routines, we can assign several sets of texture coordinates to the vertices of the rendered quad and have them automatically interpolated across the quad. So we simply pass
the coordinates of the locations we want to sample in our reduction shader later on. The generic function drawQuad(w,h) is therefore adjusted as follows:
/**
 * Renders w x h quad in top left corner of the viewport.
 */
void drawQuad(int w, int h) {
    glBegin(GL_QUADS);
    glMultiTexCoord2f(GL_TEXTURE0, -0.5, -0.5);
    glMultiTexCoord2f(GL_TEXTURE1, 0.5, -0.5);
    glMultiTexCoord2f(GL_TEXTURE2, -0.5, 0.5);
    glMultiTexCoord2f(GL_TEXTURE3, 0.5, 0.5);
    glVertex2f(0.0, 0.0);
    glMultiTexCoord2f(GL_TEXTURE0, 2*w-0.5, -0.5);
    glMultiTexCoord2f(GL_TEXTURE1, 2*w+0.5, -0.5);
    glMultiTexCoord2f(GL_TEXTURE2, 2*w-0.5, 0.5);
    glMultiTexCoord2f(GL_TEXTURE3, 2*w+0.5, 0.5);
    glVertex2f(w, 0.0);
    glMultiTexCoord2f(GL_TEXTURE0, 2*w-0.5, 2*h-0.5);
    glMultiTexCoord2f(GL_TEXTURE1, 2*w+0.5, 2*h-0.5);
    glMultiTexCoord2f(GL_TEXTURE2, 2*w-0.5, 2*h+0.5);
    glMultiTexCoord2f(GL_TEXTURE3, 2*w+0.5, 2*h+0.5);
    glVertex2f(w, h);
    glMultiTexCoord2f(GL_TEXTURE0, -0.5, 2*h-0.5);
    glMultiTexCoord2f(GL_TEXTURE1, 0.5, 2*h-0.5);
    glMultiTexCoord2f(GL_TEXTURE2, -0.5, 2*h+0.5);
    glMultiTexCoord2f(GL_TEXTURE3, 0.5, 2*h+0.5);
    glVertex2f(0.0, h);
    glEnd();
}
Note that the quad covers a region of size w by h, whereas the texture coordinates cover 2w by 2h. The admittedly unfamiliar-looking modifications of the actual coordinates by plus/minus 0.5 are
required to define the input coordinates for texture rectangles correctly. It is highly recommended to modify the implementation to perform just one reduction pass and print out the generated texture
coordinates for a deeper insight. As texture coordinates are defined per vertex, the rasterizer will interpolate them across the quad. In the kernel, we therefore just read in the interpolated
coordinates and use them to sample the input texture four times. The resulting values are then fed into the max() function of Cg, and the maximum of the four values is returned.
float maximum (float2 left: TEXCOORD0,
float2 right: TEXCOORD1,
float2 top: TEXCOORD2,
float2 bottom: TEXCOORD3,
uniform samplerRECT texture) : COLOR {
float val1 = texRECT(texture, left);
float val2 = texRECT(texture, right);
float val3 = texRECT(texture, top);
float val4 = texRECT(texture, bottom);
return max(val1,max(val2,max(val3,val4)));
}
Note that we read in the coordinates using the register binding to the predefined TEXCOORDi identifiers, which correspond to the values we passed in per vertex using glMultiTexCoord().
A sample implementation of this solution is available for download.
Solution 2: On-the-fly texcoord manipulation
Alternatively, we can compute the offsets on-the-fly based on the output index directly in the shader. The generic function drawQuad(w,h) is in fact very simple in this approach:
void drawQuad(int w, int h) {
    glBegin(GL_QUADS);
    glVertex2f(0.0, 0.0);
    glVertex2f(w, 0.0);
    glVertex2f(w, h);
    glVertex2f(0.0, h);
    glEnd();
}
The actual computation is quite easy: First, we need to subtract 0.5 from the incoming output position to move away from the texel center to its origin. Then, we double the value and add 0.5 again to
move back to the texel center. The resulting shader is:
float maximum (float2 coords: WPOS,
uniform samplerRECT texture) : COLOR {
float2 topleft = ((coords-0.5)*2.0)+0.5;
float val1 = texRECT(texture, topleft);
float val2 = texRECT(texture, topleft+float2(1,0));
float val3 = texRECT(texture, topleft+float2(1,1));
float val4 = texRECT(texture, topleft+float2(0,1));
return max(val1,max(val2,max(val3,val4)));
}
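The index arithmetic in this shader can be checked numerically: an output texel (i, j) is rasterized at center (i + 0.5, j + 0.5) and must map to the top-left texel center (2i + 0.5, 2j + 0.5) of its 2x2 input block. A small Python check mirroring the Cg expression:

```python
def topleft(coord):
    """Mirror of the Cg expression ((coords - 0.5) * 2.0) + 0.5, per axis."""
    return (coord - 0.5) * 2.0 + 0.5

# output texel i along one axis is rasterized at center i + 0.5
for i in range(4):
    x = topleft(i + 0.5)
    # the four fetches cover input texels 2i and 2i+1 along this axis
    assert x == 2 * i + 0.5
    assert x + 1 == 2 * i + 1 + 0.5
```

Because texel centers sit at half-integer coordinates, the subtract/double/add sequence lands exactly on texel centers again, so no filtering artifacts occur with nearest sampling.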
A sample implementation of this solution is available for download. Additionally, a sample implementation using RGBA textures is provided. Except for the obvious changes regarding the texture format, the shader is modified to compute the maximum of each RGBA channel into an RGBA result; the final maximum of the resulting RGBA "scalar" is calculated on the CPU. This version runs fine on ATI hardware, which does not support the LUMINANCE render targets used throughout the tutorial. Please note that this modification is straightforward on the code level and completely independent of the technique conveyed by this tutorial. Download the source file here.
Suggestions for Improvements
The implementation allows for a multitude of improvements beyond the scope of this introductory tutorial:
• Instead of interpolating texture coordinates as float2 tuples, it is recommended to pack them into float4 groups, because the interpolation hardware in the rasterizer natively works on RGBA data.
• A vertex shader can be used to compute the coordinates on the fly, instead of calculating the offsets in every fragment.
• There is a certain point at which it is faster to terminate the reduction process and just compute everything in one pass or on the CPU. This needs to be benchmarked for the individual application.
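The overall multi-pass scheme is easy to emulate on the CPU, which is also a convenient way to verify GPU results. The sketch below (hypothetical Python, not part of the tutorial code, mirroring the 2x2 max gather of the shaders) halves a square power-of-two grid until a single value remains:

```python
def reduce_max(grid):
    """Repeatedly replace each 2x2 block by its maximum, as the shader
    passes do, until one value remains. grid is a square list of lists
    whose side length is a power of two."""
    while len(grid) > 1:
        half = len(grid) // 2
        grid = [[max(grid[2*y][2*x],     grid[2*y][2*x + 1],
                     grid[2*y + 1][2*x], grid[2*y + 1][2*x + 1])
                 for x in range(half)]
                for y in range(half)]
    return grid[0][0]

print(reduce_max([[1, 7], [3, 4]]))                                  # 7
print(reduce_max([[i + 4 * j for i in range(4)] for j in range(4)])) # 15
```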
Acknowledgements
I would like to thank Thomas Rohkämper for creating the images used in this tutorial. Andrew Burnett-Thompson helped with beta-testing and proofreading. Additionally, the forums at gpgpu.org always deserve proper credit.
Disclaimer and Copyright
(c) 2005-2007 Dominik Göddeke, University of Dortmund, Germany.
The example code for this tutorial is released under a weakened version of the zlib/libPNG licence:
This software is provided 'as-is', without any express or implied
warranty. In no event will the author be held liable for any
damages arising from the use of this software.
Permission is granted to anyone to use this software for any
purpose, including commercial applications, and to alter it
and redistribute it freely.
Feedback (preferably by e-mail) is appreciated!
DGS-Based Resonators Form Lowpass Filters
Microstrip filters formed with defected ground structures (DGSs) offer high performance levels at microwave and millimeter-wave frequencies. By building upon that technology, it has been possible to
develop a novel symmetrical split ring resonator (SSRR) DGS that has been used to fabricate lowpass microwave filters with wide rejection bands. Several filters were simulated and fabricated based on
SSRR DGS units, with good agreement between simulated and measured results.
DGS units are suitable for both microwave and millimeter-wave applications.^1-4 Known as a subcategory of electromagnetic-bandgap (EBG) structures, a DGS allows modifications in a circuit's guided-wave properties by etching lattice shapes in the ground plane of a microstrip line.^5,6 Any defect etched in the ground plane of a microstrip line disturbs its current distribution, increasing the effective capacitance and inductance. Researchers have explored numerous types of DGS shapes, including dumbbell, periodic, U-shaped, and SSRR types, with a variety of effects.^7-9
However, all of these structures exhibit relatively narrow filter stopbands, making them unsuitable for filter design, harmonic suppression, or broad out-of-band rejection without the use of loaded open stubs or compensated microstrip line.^10 Conventional lowpass filters (LPFs) using shunt stubs and stepped-impedance lines have narrow stopbands and poor cutoff responses; these can be improved by increasing the number of filter elements, but only at the cost of increased size and degraded passband characteristics.
To overcome the limitations of conventional SSRR DGS approaches, a novel SSRR DGS LPF has been developed that is compact and features wide-stopband performance, with two transmission zeros and a sharp falling edge at the cutoff frequency. Even wider stopbands are possible by cascading two or three of the novel SSRR DGS units. To demonstrate the effectiveness of the new approach, several LPFs were simulated and fabricated, with close agreement between modeled and measured performance.
Figure 1 shows the layout of the proposed circular SSRR (a) and its equivalent circuit (b). The circular SSRR DGS has two symmetrical splits in each ring, and there is an additional etched slot connected to the inner ring in the middle, which differs from a conventional SSRR DGS.^10 In the equivalent circuit of the proposed SSRR, two series resonators with inductor L[1] and capacitor C[1] result from the two symmetrical half defected circles of the outer ring; the inner ring with the additional slot corresponds to a resonator formed by capacitor C[2] and inductor L[2]. The coupling between the two outer half-circles, and between the outer and inner rings, is represented by capacitor C[p].
From electromagnetic (EM) computer-aided-engineering (CAE) simulations, two resonant frequencies were identified: f1 = 5.13 GHz is the higher resonant frequency, associated with the outer ring, and f2 = 4.48 GHz is the lower resonant frequency, associated with the inner ring (Fig. 2). Simulations were performed using the Advanced Design System (ADS) suite of software tools from Agilent Technologies as well as the High-Frequency Structure Simulator (HFSS) EM software from Ansoft, with the results from both software tools agreeing quite closely. The values of the circuit model extracted from the EM simulations are C[1] = 0.979 pF, L[1] = 1.667 nH, C[2] = 0.75 pF, L[2] = 1.683 nH, and C[p] = 0.487 pF for R[1] = 5 mm, R[2] = 3.5 mm, r = 1 mm, d = 0.5 mm, g = 0.5 mm, and w = 1.88 mm.
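As a rough plausibility check, each uncoupled branch can be treated as an ideal series LC resonator with f = 1/(2*pi*sqrt(LC)). The Python sketch below is not from the article; it deliberately ignores the coupling capacitor C[p], so the outer-branch value will not reproduce the quoted f1 (the coupling shifts it), while the inner branch alone already lands on f2:

```python
import math

def lc_resonance_hz(L, C):
    """Resonant frequency of an ideal series LC branch, in Hz."""
    return 1.0 / (2.0 * math.pi * math.sqrt(L * C))

# Extracted circuit values from the EM simulation (coupling C_p ignored)
f_outer = lc_resonance_hz(1.667e-9, 0.979e-12)
f_inner = lc_resonance_hz(1.683e-9, 0.75e-12)
print(f"outer branch: {f_outer / 1e9:.2f} GHz")  # ~3.94 GHz, shifted by C_p in practice
print(f"inner branch: {f_inner / 1e9:.2f} GHz")  # ~4.48 GHz, matching the quoted f2
```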
Compared with conventional dumbbell DGS and SSRR DGS units, the new SSRR DGS unit offers several advantages, including flatter lowpass properties and a sharper filter cutoff response. It also provides an even wider bandgap due to the introduction of two elliptic-function transmission zeros (Fig. 3).
Figure 4 offers a comparison of the resonant properties of a circular SSRR with and without the additional slot. The slot greatly increases the effective capacitance C[2], which is closely related to the slot width, g. When g increases, the equivalent capacitance C[2] decreases, and the resonant frequency, f2, moves upward in frequency. The additional slot effectively improves the width of the stopband without adding open stubs. The new SSRR DGS unit has two different resonant frequencies, since each ring corresponds to a specific resonant frequency. Both resonant frequencies can be controlled by adjusting the radius of each ring (Fig. 5).
For filter designs such as LPFs, the proposed SSRR DGS offers several advantages over conventional DGS units, including a flatter lowpass response and a sharper cutoff response. The novel SSRR DGS units can even achieve an LPF with a wider bandgap than conventional DGS units, due to their two transmission zeros, compared to only one for conventional SSRR or dumbbell DGS units. To demonstrate the effectiveness of using multiple DGS units, Fig. 6 shows the out-of-band suppression possible when using a periodic array of as many as three SSRR DGS units. In all cases, a dielectric circuit board with permittivity (dielectric constant) of 3.2 and height of 0.787 mm was used. The width of the conductor line was w = 1.88 mm, which corresponds to a characteristic impedance of 50 Ohms. The circular SSRR DGS unit has radii of R[1] = 4.8 mm (outer) and R[2] = 3.3 mm (inner) and a split gap of d = 1 mm. The width of the ring is r = 1 mm and the width of the loaded slot in the middle is g = 0.5 mm. The distance between the centers of two adjacent SSRR units is l = 13.5 mm. The length of the input and output line is s = 10.2 mm for LPFs formed of two SSRR DGS units, while l = 15 mm and s = 7.7 mm for LPFs formed of three SSRR DGS units. The slow-wave and coupling effects between neighboring DGS units lead to the suppression of higher harmonics and consequently a wide rejection band.
As shown in Fig. 6, the stopband and the cutoff characteristics of the LPFs improve as the number of DGSs is increased, without adding any compensatory stubs. This indicates that coupling effects can
improve the filter's characteristics, but when a large number of coupled resonators are used, the radiation losses in the passband become significant.
To validate the theoretical analysis, LPFs with different SSRR DGS units were designed and fabricated. Figure 7 shows the measured results of a LPF based on a single SSRR DGS unit together with EM
software simulated results. It has a 3-dB cutoff frequency of 4.3 GHz with resonant frequencies of 4.49 and 5.14 GHz. As can clearly be seen, the proposed LPF has a sharp transition domain and a wide
stopband response.
Figure 8 shows the simulated and measured results of a LPF formed with three cascaded DGS units. The results show low insertion loss of 0.695 dB, with more than 32-dB filter attenuation from 5.02 to 10.00 GHz. The measured results are consistent with the simulated results, revealing only a small frequency offset and different insertion-loss values due to fabrication tolerances. This basic LPF design is adaptable to a wide range of microwave and millimeter-wave applications.
References
1. H. J. Chen, T. H. Huang, C. S. Chang, L. S. Chen, N. F. Wang, Y. H. Wang, and M. P. Houng, "A novel cross-shape DGS applied to design ultra-wide stopband lowpass filters," IEEE Microwave and Wireless Components Letters, Vol. 16, May 2006, pp. 252-254.
2. P. Vagner and M. Kasal, "Design of novel microstrip lowpass filter using defected ground structure," Microwave and Optical Technology Letters, Vol. 50, January 2008, pp. 10-13.
3. Y. C. Or and K. W. Leung, "Compact wideband DGS lowpass filter with modified microstripline," Microwave and Optical Technology Letters, Vol. 50, April 2008, pp. 974-977.
4. W.-T. Liu, C.-H. Tsai, T.-W. Han, and T.-L. Wu, "An embedded common-mode suppression filter for GHz differential signals using periodic defected ground plane," IEEE Microwave and Wireless
Component Letters, Vol. 18, No. 4, April 2008, pp. 248-250.
5. H. J. Chen, T. H. Huang, C. S. Chang, L. S. Chen, N. F. Wang, Y. H. Wang, and M. P. Houng, "A novel cross-shape DGS applied to design ultra-wide stopband low-pass filters," IEEE Microwave and
Wireless Components Letters, Vol. 16, May 2006, pp. 252-254.
6. D. Ahn, J. S. Park, C. S. Kim, J. Kim, Y. X. Qian, and T. Itoh, "A design of the low-pass filter using the novel microstrip defected ground structure," IEEE Transactions on Microwave Theory &
Techniques, Vol. 49, No. 1, January 2001, pp. 86-93.
7. M. K. Mandal, K. Divyabrarnharn, and S. Sanyal, "Design of compact, wideband bandstop filters with sharp-rejection characteristics," Microwave and Optical Technology Letters, Vol. 50, No. 5, May
2008, pp. 1244-1248.
8. D.-J. Woo, T.-K. Lee, J.-W. Lee, C.-S. Pyo, and W.-K. Choi, "Novel u-slot and v-slot DGSs for bandstop filter with improved Q factor," IEEE Transactions on Microwave Theory & Techniques, Vol. 54,
No. 6, June 2006, pp. 2840-2846.
9. A. Balalem, A. R. Ali, J. Machac, and A. Omar, "Quasielliptic microstrip lowpass filters using an interdigital DGS slot," IEEE Microwave and Wireless Components Letters, Vol. 17, No. 8, August
2007, pp. 586-588.
10. Santanu Dwari and Subrata Sanyal, "Compact sharp cutoff wide stopband microstrip lowpass filter using complementary split ring resonator," Microwave and Optical Technology Letters, Vol. 49, No.
11, November 2007, pp. 2865-2867.
Glossary for OECD Composite Leading Indicators
AMPLITUDE ADJUSTMENT
The CLI is adjusted to ensure that its cyclical amplitude on average agrees with that of the de-trended reference series.
BUSINESS CYCLE
Business cycles are recurrent sequences of alternating phases of expansion and contraction in economic activity. The name 'business cycle' has some ambiguity, since it can refer to conceptually different economic fluctuations. Whenever the context does not eliminate this ambiguity, the following qualifiers are used to distinguish the different concepts. The 'classical cycle' refers to fluctuations in the level of economic activity (e.g. measured by GDP in volume terms); the 'growth cycle' refers to fluctuations of economic activity around its long-run potential level, or fluctuations in the output gap (e.g. measured by de-trended GDP); and the 'growth rate cycle' refers to fluctuations of the growth rate of economic activity (e.g. the GDP growth rate). The OECD CLI focuses on the 'growth cycle' concept with the amplitude-adjusted CLI, but offers translations for the two other concepts: the trend restored CLI for classical cycles and the CLI 12-month rate of change (alternatively, the year-on-year growth rate) for the growth rate cycle.
CLI (COMPOSITE LEADING INDICATOR)
The composite leading indicator (CLI) is an aggregate time series displaying a reasonably consistent leading relationship with the reference series for the business cycle in a country. The CLI is constructed by aggregating component series selected according to multiple criteria, such as economic significance, cyclical correspondence and data quality. As a result of the multi-criteria selection process, the CLI can be used to give an early indication of turning points in the reference series, but not for quantitative forecasts.
COMPONENT SERIES
Component series are economic time series which exhibit a leading relationship with a reference series at the turning points. The component series are selected from a wide range of economic sectors. The number of series used for the compilation of the OECD CLIs varies by country, typically between 5 and 10 series. Selection of the appropriate series for each country is made according to the following criteria. Economic significance: there has to be an a priori economic reason for a leading relationship with the reference series. Cyclical behaviour: cycles should lead those of the reference series, with no missing or extra cycles; at the same time, the lead at turning points should be homogeneous over the whole period. Data quality: statistical coverage of the series should be broad; series should be compiled on a monthly rather than a quarterly basis; series should be timely and easily available; there should be no breaks in the time series; series should not be revised.
CYCLE
The time span separating two turning points of the same nature (two peaks or two troughs).
DE-TRENDING
De-trending is a procedure in which the long-term trend, which may obscure cyclical variations in the component or reference series, is removed. Up to December 2008, component series were de-trended with the Phase Average Trend (PAT) method. Starting from December 2008, the OECD replaced the combined PAT/MCD approach with the Hodrick-Prescott (HP) filter, which performs de-trending and smoothing in a single operation. The HP filter is operated as a band-pass filter with a frequency cut-off at 12 months for high-frequency components (smoothing) and at 120 months for low-frequency components (de-trending).
MCD
MCD (Months for Cyclical Dominance) is defined as the shortest span of months for which the I/C ratio is less than unity, where I and C are the average month-to-month changes, without regard to sign, of the irregular and trend-cycle components of the series, respectively. By convention, the maximum value of MCD is 6. For quarterly series there is an analogous measure, Quarters for Cyclical Dominance (QCD), whose maximum value is conventionally defined as 2. Starting from December 2008, the OECD replaced the combined PAT/MCD approach with the Hodrick-Prescott filter, which makes MCD smoothing obsolete.
NORMALISATION
This transformation of the de-trended component series is required prior to aggregation into the CLI, in order to express the cyclical movements in a comparable form, on a common scale. The normalised index is obtained by subtracting the mean from the observed value and dividing the resulting difference by the mean absolute deviation. Finally, the series is relocated to have a mean of 100.
PHASE
The time span between a peak and a trough.
REFERENCE SERIES
Cyclical indicator systems are constructed around a reference series. The reference series is the economic variable whose cyclical movements the CLI is intended to predict. In the OECD system, the index of total industrial production was used as the reference series up to April 2012. Starting from April 2012, Gross Domestic Product (GDP) is used as the reference series, except for China.
SIX-MONTH RATE OF CHANGE
This measure was used until December 2008, when the OECD replaced it with the year-on-year growth rate. The annualised 6-month rate of change of the CLI is calculated by dividing the figure for a given month m by the 12-month moving average centred on m - 6.5; annualising over six months corresponds to squaring this ratio. Letting R(t) and C(t) be respectively the 6-month rate of change and the CLI at time t, and M(t - 6.5) the centred 12-month moving average:

R(t) = [ ( C(t) / M(t - 6.5) )^2 - 1 ] x 100
SMOOTHING
Smoothing eliminates noise from the series and makes the cyclical signal clearer. Up to December 2008, component series were smoothed according to their MCD (months for cyclical dominance) values to reduce irregularity. Starting from December 2008, the OECD replaced the combined PAT/MCD approach with the Hodrick-Prescott (HP) filter, which performs de-trending and smoothing in a single operation. The HP filter is operated as a band-pass filter with a frequency cut-off at 12 months for high-frequency components (smoothing) and at 120 months for low-frequency components (de-trending).
TREND RESTORED CLI
The trend restored CLI combines the trend of the reference series with the amplitude-adjusted CLI; it is directly comparable with the original reference series.
TREND ESTIMATION
In time series analysis, a given time series can be decomposed into a cyclical component, a trend component, a seasonal component and an irregular component. Since December 2008, the OECD has used the Hodrick-Prescott (HP) filter to estimate the trend. Up to December 2008, the method of trend estimation adopted by the OECD was a modified version of the phase-average trend (PAT) method developed by the United States NBER.
TURNING POINT
A turning point occurs in a series when the deviation-from-trend series reaches a local maximum (peak) or a local minimum (trough). Growth cycle peaks (end of expansion) occur when activity is furthest above its trend level; growth cycle troughs (end of contraction/recession) occur when activity is furthest below its trend level. In addition, turning points must respect various censoring rules. In the simplified Bry-Boschan procedure, used in the OECD CLI system for turning point identification, these censoring rules guarantee the alternation of peaks and troughs, while ensuring that phases last not less than 9 months and cycles last not less than 2 years.
WEIGHTING
Component series are equally weighted in the aggregation process into a country CLI. On the other hand, GDP-PPP weights are used to estimate the CLIs for groups of countries, i.e. zones.
YEAR-ON-YEAR GROWTH RATES (YoY)
Alternatively called the '12-month rate of change', this rate is calculated by dividing the figure for a given period t (a month or a quarter, depending on the frequency of the data) by the value of the corresponding period in the previous year. Letting R(t) and C(t) be respectively the year-on-year growth rate and the CLI at time t:

For monthly data: R(t) = [ C(t) / C(t - 12) - 1 ] x 100

For quarterly data: R(t) = [ C(t) / C(t - 4) - 1 ] x 100
Starting from December 2008 the OECD has decided to replace the 6-month rate of change with the year-on-year growth rate.
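As an illustration (hypothetical Python, not OECD code), the year-on-year growth rate of a CLI series can be computed as:

```python
def yoy_growth(series, lag=12):
    """Year-on-year growth rate in percent: compare each value with the
    one `lag` periods earlier (12 for monthly data, 4 for quarterly)."""
    return [(series[t] / series[t - lag] - 1.0) * 100.0
            for t in range(lag, len(series))]

# A CLI that rises from 100 to 102 over one year shows a 2% YoY rate.
cli = [100.0] * 12 + [102.0]
print(yoy_growth(cli))  # ~ [2.0]
```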
ZONE AGGREGATES
In addition to the individual country series, the OECD calculates zone aggregates for the CLIs and reference series included in the OECD CLI framework. Five indicators (the amplitude-adjusted CLI, the normalised CLI, the original reference series, the trend of the reference series and the normalised de-trended reference series) are calculated using a chain-linking formula, with country components weighted according to their GDP share (on a Purchasing Power Parity basis). Weights are changed every five years. Iceland is excluded from zone calculations, since no CLI is published for this country. Four other indicators are calculated or derived from the five indicators above; this is done to preserve the relationship between the different indicator types. The derived indicators are: the trend restored CLI and its 12-month rate of change, the ratio to trend of the original reference series, and the 12-month rate of change of the reference series.
Permanent URL for this page: www.oecd.org/std/cli/glossary
Related Documents
Composite Leading Indicators (CLI) Frequently Asked Questions (FAQs)
OECD Composite Leading Indicators: Reference Turning Points and Component Series
Composite Leading Indicators (CLI) for Zones - Weights
02. Force & Weight
Note: This is a multi-page article.
To navigate, use the drop-down lists and arrow keys at the top and bottom of each page.
This page discusses relatively simple problems involving weight and static forces on Earth's surface — for example, using the gravitational principles from the earlier discussion to compute the
weight of masses that aren't moving.
Static Force Example
• A 10-kilogram mass is sitting on a table.
• There are two equal, opposing forces — the mass presses down on the table, and the table presses up on the mass.
• The static gravitational force, the "weight", is given by:
(1) $ \displaystyle f = mg$
□ f = Force, newtons.
□ m = Mass, kilograms
□ g = Gravitational acceleration, m/s^2, described in the previous section.
• In this case, the mass presses down on the table with a force of about 98 Newtons, and the table presses against the mass with an equal, opposite force.
• Because the forces are equal, the mass doesn't move.
• Because the mass doesn't move, and for reasons explained in a later section, no power is required regardless of the amount of force involved.
• To convert from force to units of weight, one applies a conversion factor. For kilograms of weight, divide the computed force by little-g. And remember that, confusingly, the kilogram is both a
mass unit and a weight unit.
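The bullet points above amount to a one-line computation. Here is a quick sketch (illustrative Python, using the standard value g = 9.80665 m/s^2):

```python
g = 9.80665  # standard gravitational acceleration, m/s^2

def weight_newtons(mass_kg):
    """Static gravitational force (weight) of a mass at Earth's surface."""
    return mass_kg * g

f = weight_newtons(10.0)
print(f"force:  {f:.1f} N")         # about 98 N, as in the example above
print(f"weight: {f / g:.1f} kgf")   # converting back to kilograms of weight
```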
The Purpose of Little-g
At this point, some readers may wonder why little-g exists. Can't we replace it with the Gravitational Force Equation? Doesn't that produce the same results? Well, the answer is yes, this is true
— indeed it must be true. Little-g is only a convenience, a shortcut — computing results using the fundamental force equation must produce the same outcome or something is very wrong.
Little-g represents an intermediate result, a convenient acceleration term that takes some subtle issues into account, like latitude and centripetal force, but all of these could be taken into account in every computation if one wanted to proceed that way. Because of increasingly cheap computer power and advanced mathematical software, one may choose to use only one equation for all such calculations.
Little-g allows us to say $f = mg$, which seems easier to understand than this equivalent equation:
(2) $ \displaystyle f = \frac{G m_1 m_2}{r^2} - \frac{m_2 v^2 cos(\phi)^2}{r} $
□ f = Force, newtons
□ G = Universal gravitational constant, described earlier
□ $m_1$ = Mass of Earth
□ $m_2$ = Small mass being evaluated
□ r = Distance between $m_1$ and $m_2$
□ v = Earth's equatorial rotation velocity
□ $\phi$ = Latitude
On the other hand, equation (2) produces accurate results at any altitude (set r equal to Earth's radius plus the desired altitude) and any latitude ($\phi$) on Earth. For everyday gravitational acceleration calculations where only a few decimal places are needed, it might seem an overly precise solution.
Force versus Acceleration
Some of the described terms have units of acceleration (m/s^2), while others have units of force, usually newtons. Little-g expresses gravitational acceleration in units of m/s^2. It seems that
we're using a term with units of acceleration to compute a force. How can we do that? Here is how:
(3) $ \displaystyle f = ma$
(4) $ \displaystyle a = \frac{f}{m}$
□ f = Force, newtons
□ a = Acceleration, m/s^2
□ m = Mass, kilograms
In general, if there is only one mass term in an equation (usually Earth's mass), the result has units of acceleration (because of the equivalence principle — which has the effect that different
masses fall at the same rate in a gravitational field). But if there are two mass terms, in most cases the result has units of force.
Reader Feedback
Aren't they different?
I read your page because I have a few questions. On your page you have a link explaining the big G. But I don't understand how, in the equation F1 = F2 = G((m1 x m2)/r^2), F1 = F2, given the explanatory text "the attractive force (F) between two bodies is proportional to the product of their masses (m1 and m2)". If m1 is Earth and m2 is the Moon, then both should have the same force? Can't believe that, but maybe I'm mixing up the big G with g. I can understand G((m1 x m2)/r^2), but I think that it will be different for F1 and F2. I'm not sure if I wrote the equation correctly in this way.

Remember that force and acceleration are different things. To give a simple example, imagine that a Mack truck and a ping-pong ball are connected by a rubber band. The rubber band is trying to pull the Mack truck and the ping-pong ball together with a force of one newton.
Now try to explain how the force on one end of the rubber band is different than the force on the other end. How would that be possible? The ping-pong ball experiences the force in a different
direction, but it's the same amount of force.
We can compute force F, for masses M[1] and M[2], a separation between them of r, and gravitational constant G:
$ \displaystyle F = \frac{G M_1 M_2}{r^2}$
F = force, Newtons
G = universal gravitational constant
M[1] = mass 1, kilograms
M[2] = mass 2, kilograms
r = radius that separates masses M[1] and M[2], meters
The force F in the above equation is the same for both masses, no matter how different they are. The masses experience the force in opposite directions, but the amount of force is the same.

But — very important — the acceleration experienced by the ping-pong ball (if it is allowed to move) is much greater than the acceleration experienced by the Mack truck. This is because acceleration depends on mass:
$ \displaystyle a = \frac{F}{M}$
This means that, for a given force, a more massive object M[1] experiences less acceleration than a less massive object M[2]. For a given force, the acceleration an object experiences is
inversely proportional to its mass.
Here's a thought experiment: imagine a ten-kilogram object M[1] and a one-kilogram object M[2], sitting on perfectly smooth ice, connected by a rubber band. The rubber band is exerting a force of
one Newton. If the masses are released from constraint, the less massive object M[2] will move toward the more massive object M[1] at ten times the rate of its partner.
Imagine further that you anchor mass M[1] at position A on the smooth ice, and anchor M[2] at position B. You are required in advance to draw a line on the ice where they will meet when they are
released. Don't read ahead — think about it.
The line should be drawn at one-eleventh of the distance between M[1] and M[2], nearest to M[1] (the more massive object): because M[2] accelerates ten times as fast as M[1], it covers ten-elevenths of the separation while M[1] covers one-eleventh. When the masses are released, and assuming a lot of things that aren't usually true in a real experiment, like no friction and an ideal rubber band, the two masses will collide at a location 1/11 of the original distance away from mass M[1].
In the real world, with planets instead of masses on a smooth sheet of ice, two orbiting bodies, regardless of their relative masses, actually orbit around a point determined by the ratio of their masses (the barycenter). For example, if the solar system consisted only of the Sun and Jupiter, the center of their mutual rotation would not be the center of the Sun, as is commonly thought, but a location near the Sun's surface, a location determined by the ratio of their masses.
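Both the ice-rink meeting point and the Sun–Jupiter barycenter come from the same center-of-mass formula, d * m2 / (m1 + m2), measured from m1. A sketch (illustrative Python; the astronomical values are standard published figures):

```python
def center_of_mass(m1, m2, d):
    """Distance of the center of mass from m1, for two bodies separated by d."""
    return d * m2 / (m1 + m2)

# Ice-rink thought experiment: 10 kg and 1 kg, unit separation.
print(center_of_mass(10.0, 1.0, 1.0))  # 1/11 of the way from the 10 kg mass

# Sun-Jupiter barycenter (masses in kg, separation in meters).
m_sun, m_jup = 1.989e30, 1.898e27
d = 7.785e11  # mean Sun-Jupiter distance
b = center_of_mass(m_sun, m_jup, d)
print(f"{b:.3e} m vs. solar radius 6.957e8 m")  # just outside the Sun's surface
```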
The most important difference between force and acceleration is that acceleration requires motion, which requires time. To expand a bit further, in order to fully understand how force and
acceleration differ, you need to learn Calculus, a way to compute motion and dynamic change. There are a number of excellent reasons to learn Calculus, this is just one. (Obligatory link to
Calculus primer.)
I hope this helps.
Results 1 - 10 of 30
- IN REPORTS OF THE CENTRE FOR MATHEMATICS AND COMPUTER SCIENCES , 1998
Cited by 19 (1 self)
We describe an algorithm producing circular layouts for trees, that is drawings, where subtrees of a node lie within circles, and these circles are themselves placed on the circumference of a circle.
The complexity and methodology of our algorithm compares to Reingold and Tilford's algorithm for trees [11]. Moreover, the algorithm naturally admits distortion transformations of the layout. This,
added to its low complexity, makes it very well suited to be used in an interactive environment.
, 1995
Cited by 19 (10 self)
This paper examines an infinite family of proximity drawings of graphs called open and closed β-drawings, first defined by Kirkpatrick and Radke [15, 21] in the context of computational morphology. Such proximity drawings include as special cases the well-known Gabriel, relative neighborhood and strip drawings. Complete characterizations of those trees that admit open β-drawings for 0 ≤ β ≤ 1 or closed β-drawings for 0 ≤ β < 1 are given, as well as partial characterizations for other values of β. For the intervals of β in which complete characterizations are given, it can be determined in linear time whether a tree admits an open or closed β-drawing, and, if so, such a drawing can be computed in linear time in the real RAM model. Finally, a complete characterization of all graphs which admit closed strip drawings is given.
- Journal of Graph Algorithms and Applications , 2005
"... A graph with a given partition of the vertices on k concentric circles is radial level planar if there is a vertex permutation such that the edges can be routed strictly outwards without
crossings. Radial level planarity extends level planarity, where the vertices are placed on k horizontal lines an ..."
Cited by 19 (9 self)
A graph with a given partition of the vertices on k concentric circles is radial level planar if there is a vertex permutation such that the edges can be routed strictly outwards without crossings.
Radial level planarity extends level planarity, where the vertices are placed on k horizontal lines and the edges are routed strictly downwards without crossings. The extension is characterised by
rings, which are level non-planar biconnected components. Our main results are linear time algorithms for radial level planarity testing and for computing an embedding. We introduce PQR-trees as a
new data structure where R-nodes and associated templates for their manipulation are introduced to deal with rings. Our algorithms extend level planarity testing and embedding algorithms which use
- Comput. Graph. Forum
"... We provide a novel visualization method for the comparison of hierarchically organized data. Our technique visualizes a pair of hierarchies that are to be compared and simultaneously depicts how
these hierarchies are related by explicitly visualizing the relations between matching subhierarchies. El ..."
Cited by 16 (1 self)
We provide a novel visualization method for the comparison of hierarchically organized data. Our technique visualizes a pair of hierarchies that are to be compared and simultaneously depicts how
these hierarchies are related by explicitly visualizing the relations between matching subhierarchies. Elements that are unique to each hierarchy are shown, as well as the way in which hierarchy
elements are relocated, split or joined. The relations between hierarchy elements are visualized using Hierarchical Edge Bundles (HEBs). HEBs reduce visual clutter, they visually emphasize the
aforementioned splits, joins, and relocations of subhierarchies, and they provide an intuitive way in which users can interact with the relations. The focus throughout this paper is on the comparison
of different versions of hierarchically organized software systems, but the technique is applicable to other kinds of hierarchical data as well. Various data sets of actual software systems are used
to show how our technique can be employed to easily spot splits, joins, and relocations of elements, how sorting both hierarchies with respect to each other facilitates comparison tasks, and how user
interaction is supported. Categories and Subject Descriptors (according to ACM CCS): I.3.3 [Computer Graphics]: Viewing Algorithms
- Proc. of Information Visualization 2001 , 2001
"... We describe a new animation technique for supporting interactive exploration of a graph, building on the wellknown radial tree layout method. When a node is selected to become the center of
interest, our visualization performs an animated transition to a new layout. Our approach is to linearly inter ..."
Cited by 13 (0 self)
We describe a new animation technique for supporting interactive exploration of a graph, building on the wellknown radial tree layout method. When a node is selected to become the center of interest,
our visualization performs an animated transition to a new layout. Our approach is to linearly interpolate the polar coordinates of the nodes, while enforcing constraints on the layout to keep the
transition easy to follow. We apply this technique to visualizations of social networks and of the Gnutella file-sharing network, and discuss our findings and usability results. Key Words: graph
drawing, animation, interaction.
- In Proceedings of the Symposium on Graph Drawing ’99 , 1999
"... This paper presents some of the most important features of a tree visualisation system called Latour, developed for the purposes of information visualisation. This system includes a number of
interesting and unique characteristics, for example the provision for visual cues based on complexity metric ..."
Cited by 11 (0 self)
This paper presents some of the most important features of a tree visualisation system called Latour, developed for the purposes of information visualisation. This system includes a number of
interesting and unique characteristics, for example the provision for visual cues based on complexity metrics on graphs, which represent general principles that, in our view, graph based information
visualisation systems should generally offer.
- Proc. 13th International Symposium on Graph Drawing (GD’05 , 2005
"... Among various styles of tree drawing reported in the literature, balloon drawing enjoys a desirable feature of displaying tree structures in a rather balanced fashion. Each subtree in the
balloon drawing of a tree is enclosed in a circle. Along any path from the root node, the radius of each circle ..."
Cited by 6 (0 self)
Among various styles of tree drawing reported in the literature, balloon drawing enjoys a desirable feature of displaying tree structures in a rather balanced fashion. Each subtree in the balloon
drawing of a tree is enclosed in a circle. Along any path from the root node, the radius of each circle reflects the number of descendants associated with the root node of the subtree. In this paper,
we investigate various issues related to balloon drawings of rooted trees from the algorithmic viewpoint. First, we design an efficient algorithm to optimize the angular resolution and the aspect
ratio for the balloon drawings of rooted unordered trees. For the case of ordered trees for which the center of the enclosing circle of a subtree need not coincide with the root of the subtree,
flipping the drawing of a subtree (along the axis from the parent to the root of the subtree) might change both the aspect ratio and the angular resolution of the drawing. We show that optimizing the
angular resolution as well as the aspect ratio with respect to this type of rooted ordered trees is reducible to the perfect matching problem for bipartite graphs, which is solvable in polynomial
time. In addition, a related problem concerning the optimization of the drawing area can be modelled as a specific type of nonlinear programming for which there exist several robust algorithms in
practice. With a slight modification to the balloon drawing, we are able to generate the drawings of galaxy systems, H-trees, and sparse graphs, which are of practical interest.
- Proc. Computing and Combinatorics, COCOON 2005, volume 3595 of LNCS , 2005
"... Abstract. We present a simple linear time algorithm for drawing level graphs with a given ordering of the vertices within each level. The algorithm draws in a radial fashion without changing the
vertex ordering, and therefore without introducing new edge crossings. Edges are drawn as sequences of sp ..."
Cited by 6 (3 self)
Abstract. We present a simple linear time algorithm for drawing level graphs with a given ordering of the vertices within each level. The algorithm draws in a radial fashion without changing the
vertex ordering, and therefore without introducing new edge crossings. Edges are drawn as sequences of spiral segments with at most two bends.
- Journal of Graph Algorithms and Applications , 2007
"... This paper describes a within-subjects experiment. In this experiment, the effects of different spatial layouts on human sociogram perception are examined. We compare the relative effectiveness
of five sociogram drawing conventions in communicating underlying network substance, based on user task pe ..."
Cited by 6 (0 self)
This paper describes a within-subjects experiment. In this experiment, the effects of different spatial layouts on human sociogram perception are examined. We compare the relative effectiveness of
five sociogram drawing conventions in communicating underlying network substance, based on user task performance and personal preference. We also explore the impact of edge crossings, a widely
accepted readability aesthetic. Both objective performance and subjective questionnaire measures are employed in the study. Subjective data are gathered based on the methodology of Purchase et al.
[70], while objective data are collected through an online system. We found that 1) both edge crossings and drawing conventions pose significant effects on user preference and task performance of
finding groups, but neither has much impact on the perception of actor status. On the other hand, node positioning and angular resolution may be more important in perceiving actor status. In
visualizing social networks, it is important to note that the techniques that are highly preferred by users do not necessarily lead to best task performance. 2) subjects have a strong preference of
placing nodes on the top or in the center to highlight importance, and clustering nodes in the same group and separating clusters to highlight groups. They have tendency to believe that nodes on the
top or in the center are more important, and nodes in close proximity belong to the same group. Some preliminary recommendations for sociogram design and hypotheses about human reading behavior are
"... Abstract. We study methods for drawing trees with perfect angular resolution, i.e., with angles at each vertex, v, equal to 2π/d(v). We show: 1. Any unordered tree has a crossing-free
straight-line drawing with perfect angular resolution and polynomial area. 2. There are ordered trees that require e ..."
Cited by 6 (6 self)
Abstract. We study methods for drawing trees with perfect angular resolution, i.e., with angles at each vertex, v, equal to 2π/d(v). We show: 1. Any unordered tree has a crossing-free straight-line
drawing with perfect angular resolution and polynomial area. 2. There are ordered trees that require exponential area for any crossing-free straight-line drawing having perfect angular resolution. 3.
Any ordered tree has a crossing-free Lombardi-style drawing (where each edge is represented by a circular arc) with perfect angular resolution and polynomial area. Thus, our results explore what is
achievable with straight-line drawings and what more is achievable with Lombardi-style drawings, with respect to drawings of trees with perfect angular resolution.
If a reflection in a line k followed by a reflection in a line m maps a point A to A" and this mapping is the same as a 120 degrees rotation of A about point P, what is the angle X between k and m?
When a point A is reflected in a line k, the angle between AP and k (where P is the point of intersection of k and m, which lies on both lines) is equal to the angle between A'P and k, where A' is the image of A after reflecting in k.
Write angle `APk = A'Pk = theta_1`
Similarly, `A'Pm = A''Pm = theta_2`
We know that A'' is the image of A after a 120 degree rotation about P. Therefore we have that
`2theta_1 + 2theta_2 = 120^o`
`implies` `theta_1 + theta_2 = 60^o`
We also know that the angle X is equal to `theta_1 + theta_2`. Therefore,
X = 60 degrees
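The working above is an instance of a general fact about composing reflections, summarized here (assuming, as above, that P is the intersection of k and m and X is the angle from k to m):

```latex
% Reflecting in k and then in m, where k and m meet at P at angle X,
% is a rotation about P through twice that angle:
R_m \circ R_k = \mathrm{Rot}(P,\, 2X)
% The composite is given to be a 120-degree rotation, hence
2X = 120^{\circ} \quad\Longrightarrow\quad X = 60^{\circ}
```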
High quality quantitative aptitude PDF Ebooks are listed below.
Bank P.O. Examination Special-III. MOCK TEST. Quantitative Aptitude ... For the equation px. 2 + px + q = 0, the value of the. discriminant is zero. The roots of ...
... site/apicetcolleges/collegesmbamca/7087363-Solved-Quantitative-Aptitude-Paper-of-Bank ...
2002 2003 GC 300 349
... systems, behavioral neuroscience, and quantitative research methods. (Applications to ... scores from the aptitude sections of the ... systems, behavioral neuroscience, quantitative, and social
psychology. ...
http://www.asu.edu/aad/catalogs/2002-2003/graduate/GC2002-2003.pdf/2002-2003-GC-300- ...
TCS Placement Papers - All TCS placement papers till 2008
All the aptitude Questions were repeated from previous year papers from TCS. For critical ... Quantitative Aptitude(38 ques in 40 min): Note: For this section there is no need to ...
... , S K, et al. IIT Chemistry. 001.076. AGA-O15 ... Aggarwal, R S. Quantitative aptitude for competitive. examinations (fully solved) 001.076. AHU-O. Ahuja, B N. ...
accenture placement papers placement test papers, aptitude
2>mathematical questions:-very easy only sums frm r s agarwal (quantatitive) come ... Prepare from R.S .Agarwal Quantitative aptitude book. Important topics: Venn diagram ...
Object, Environment and Observer in Perception, Significance of Perception for Managers. ... 1. N.D. Vohra , Quantitative Techniques in Management , (Tata Mcgraw ...
Department of Computer Science&Engineering,AJCE.
QUANTITATIVE APTITUDE. R.S AGGARWAL. 30. ASP 3.0. DAVE MERCER. 31. THE DESIGN AND ANALYSIS OF ... QUANTITATIVE APTITUDE FOR CAT. R.S AGGARWAL. 490. DIGITAL PRINCIPLES AND. APPLICATIONS. ALBERT ...
PD/December/2009/969 Regulars
Canara Bank P.O. Exam., 2009 : Reasoning. 1120. Andhra Bank Marketing ... Quantitative Aptitude. 1125. United India Insurance Co. Administrative Officers. Exam., 2009 : ...
http://pdgroup.upkar.in/UploadedFiles/BooksAdditionalPages/PD%20English%20December% ...
(Under Section 3 of the UGC Act, 1956)
Introduction to Mathematica, Quantitative methods for managers and ... QUANTITATIVE APTITUDE FOR COMPETITIVE EXAMINATIONS: The quantitative aptitude occupies ...
PD/April/2009/1721 Regulars
... Banking Awareness. 1856. Andhra ... Allahabad Bank P.O. Exam., 2008 : Quantitative. Aptitude. 1873. Union Bank of India P.O. Exam., 2008 : Quantita- tive Aptitude. 1877 ...
Build the MBA Math and Spreadsheet Skills You'll Need
Economics: Marginal Analysis with Calculus. Moving well beyond generic GMAT aptitude ... Total Cost is fixed cost plus the sum of the marginal costs for each unit. ...
http://www.mbamath.com/sampleexercises/mba%20math%20economics%20marginal%20analysis% ...
Project Preparation
Most Bank-supported nutrition projects build on existing programs and use. existing ... a mixture of quantitative and qualitative. techniques: Bank nutrition projects have used ...
Build the MBA Math and Spreadsheet Skills You'll Need
Economics: Marginal Analysis with Formulas. Moving well beyond generic GMAT aptitude ... Total Cost is fixed cost plus the sum of the marginal costs for each unit. ...
http://www.mbamath.com/sampleexercises/mba%20math%20economics%20marginal%20analysis% ...
Course &Scheme of Examination Of Bachelor of Commerce (Honors)
... 7. P.C. Agarwal, M.D. Agarwal: Business Economics (Ramesh Book Depot, Jaipur). Paper-IV: QUANTITATIVE APTITUDE. Note: The candidates shall be ...
(Test of Reasoning, Quantitative Aptitude and General Awareness will be ... The Bank shall not be responsible for an application being rejected. which is based on wrong ...
Indian Overseas Bank, Public Sector Bank with headquarters in
Objective Test consisting of Test of Reasoning Ability, Quantitative Aptitude, General Awareness and English Language ... wrong answers marked in the objective tests ...
Management Aptitude Test December 03, 2006
Quantitative Aptitude. There were total 40 questions in this section. ... Quantitative Aptitude - 40. Quantitative Aptitude. IC : PTpnrMAT2006 (3) ...
... RBI, etc.; the course covers the Test of. English Language, Quantitative Aptitude and Reasoning for ... for the. second stage for assessment of aptitude and traits, through ...
(Test of Reasoning, Quantitative Aptitude and General Awareness will be printed ... The Bank shall not be responsible for an application being rejected ...
... of the Written Test is to assess the candidate's aptitude for ... Exam. The Tests. will be based mainly on the CAT pattern covering the three Sections, Verbal. Ability, Quantitative Aptitude and
angle between points of a line
angle between points of a line
Hi all,
I have some points as shown below.
@ 0(105, 66)
@ 1(114, 92)
@ 2(104, 11)
@ 3(99, 131)
@ 4(71, 144)
@ 5(37, 168)
Problem statement:
I have to find the angle between consecutive points of the line.
Note: the 0th index point and the 5th index point are the start and end points of the string.
I calculated each angle using atan2(dy, dx), and my angles are shown below.
angle : 70.9065
angle : -97.0379
angle : 92.3859
angle : 155.095
angle : 144.782
The issue is that the direction from the 1st index point to the 2nd index point is upward (it goes from down to up), but my angle is computed in the downward direction. Why?
Why is the upward direction inverted?
Please guide me on how to fix this issue.
Last edited by pari_alf (2014-01-17 13:01:28)
Re: angle between points of a line
hi pari_alf
Only just woken up to your problem here in the UK.
I've made a graph to show your points but I'm unclear about what angles you are measuring. The angle I've marked with an x is 40.73 which isn't any of your answers.
Using atan2(dy,dx) will give different results depending on which point you start with and which comes next when calculating the differences dy and dx. It may be that you have reversed the order.
You'll have to show the actual calculation if you want me to investigate further.
You cannot teach a man anything; you can only help him find it within himself..........Galileo Galilei
Re: angle between points of a line
Hi Bob,
I got pari_alf's angles with Geogebra (see image) - except for the negative for point B - but this is as far as my knowledge on this sort of stuff goes.
So, over to you and pari_alf...but I hope my drawing will help.
I had to rename the points because Geogebra won't accept numeric names.
0 = A
1 = B
2 = C
3 = D
4 = E
5 = F
"The good news about computers is that they do what you tell them to do. The bad news is that they do what you tell them to do." - Ted Nelson
Re: angle between points of a line
hi Phro,
Thanks for the diagram. That answers my question; the angles are for 0 to 1, 1 to 2, 2 to 3 and so on.
hi pari_alf
The issue is that the direction of 1st index point to the 2nd index point is upward (down to going up), but my angle is computed in downward direction .. why?
The upward direction is inverted ..why?
But it is going down. Look at Phro's diagram.
So what is the exact calculation you are doing? Somehow your calculator is getting this result.
Inverse trig functions often need some user input. This is because the basic trig functions are all multi-valued (eg. tan 45 = tan 225 = tan 405 ...)
So when the designers are constructing the software for atan they have to second guess which angle you want and you have to adjust the answer if it's not the answer you were expecting.
eg. I had drawn a concave polygon and wanted the internal angles. One was obviously reflex but my software gave an acute angle. I had to subtract from 360 to get the 'right' result.
http://www.mathisfunforum.com/viewtopic … 34#p249834
See picture for post 2.
You cannot teach a man anything; you can only help him find it within himself..........Galileo Galilei
Re: angle between points of a line
Sorry...I got the angle wrong for the 1st index point (point B in my drawing).
When obtaining the angle values in Geogebra I measured in the opposite direction with this one compared to the direction in which I measured all the others, and it gave the angle to the wrong side of
the point.
Here's the corrected drawing:
Last edited by phrontister (2014-01-18 13:12:22)
"The good news about computers is that they do what you tell them to do. The bad news is that they do what you tell them to do." - Ted Nelson
Re: angle between points of a line
Hi guys, thanks for replying.
Kindly see my points shown in the image below, where the green and red circles are the start and end points and all the other intermediate points are in black circles.
I computed the angle using atan2(dy, dx).
The problem is that the direction from the 1st index point to the 2nd index point is shown as roughly vertical (upward) in the attached image, but my computed angle shows the direction as downward.
The computed angles are shown below.
Original Points:
@ 0(105, 66)
@ 1(114, 92)
@ 2(104, 11)
@ 3(99, 131)
@ 4(71, 144)
@ 5(37, 168)
angle : 70.9065
angle : -97.0379
angle : 92.3859
angle : 155.095
angle : 144.782
Why is my vertical angle inverted to the downward direction when using atan2(dy, dx)?
Last edited by pari_alf (2014-01-18 18:18:38)
Re: angle between points of a line
pari_alf wrote:
Hi guys, thanks for replying.
Kindly see my points shown on image below, where the green and red circles are the start and end point and all other other intermediate points are in black circle.
I computed angle using atan2 (dy,dx),
Problem is that the direction from 1st index point to the 2nd index point is shown vertically in the attached image. but my computed angle shows the direction in downward direction.
Computed angles are shown below..
Original Points:
@ 0(105, 66)
@ 1(114, 92)
@ 2(104, 11)
@ 3(99, 131)
@ 4(71, 144)
@ 5(37, 168)
angle : 70.9065
angle : -97.0379
angle : 92.3859
angle : 155.095
angle : 144.782
Why my angle vertical angle is inverted to downward direction using atan2(dy,dx) ?
Hi, @phrontister
I saw your last figure; that seems OK.
But if you look at my figure, the angle from the 1st index to the 2nd index point is -97, which means my line is going in the downward direction, but my line is actually going upward.
How can I make my angle correct?
Re: angle between points of a line
hi pari_alf
I still am unable to answer that without knowing how you computed the angles.
Are you using a calculator? In which case, which one?
Or some software? In which case what software?
How have you determined dy and dx? Please post the actual calculation you did.
You cannot teach a man anything; you can only help him find it within himself..........Galileo Galilei
Re: angle between points of a line
I calculated the angle by making a program in c++.
The pseudocode is something like this:
for i = 1 to n-1:                        // n is the size of the points vector
    dx = points[i].x - points[i-1].x     // current minus previous (this ordering matches the angles listed above)
    dy = points[i].y - points[i-1].y
    angle = atan2(dy, dx) * 180 / pi
This is the way i am coding to compute the angle
Re: angle between points of a line
hi pari_alf
I tried your code in excel and got these values:
This is what I would expect because these are the principle values for the function atan.
I can convert my answers into yours by adding or subtracting 180 as necessary.
According to
Principal arc tangent of x, in the interval [-pi/2,+pi/2] radians.
One radian is equivalent to 180/PI degrees.
But curiously on this page
Principal arc tangent of y/x, in the interval [-pi,+pi] radians.
One radian is equivalent to 180/PI degrees.
This must be an error as atan has two possible values in this range. ???
To use atan you have to compute the division (dy/dx) and then apply the function to the result.
It would be interesting to see what answers you get if you try this.
You cannot teach a man anything; you can only help him find it within himself..........Galileo Galilei
Re: angle between points of a line
bob bundy wrote:
hi pari_alf
I tried your code in excel and got these values:
This is what I would expect because these are the principle values for the function atan.
I can convert my answers into yours by adding or subtracting 180 as necessary.
According to
Principal arc tangent of x, in the interval [-pi/2,+pi/2] radians.
One radian is equivalent to 180/PI degrees.
But curiously on this page
Principal arc tangent of y/x, in the interval [-pi,+pi] radians.
One radian is equivalent to 180/PI degrees.
This must be an error as atan has two possible values in this range. ???
To use atan you have to compute the division (dy/dx) and then apply the function to the result.
It would be interesting to see what answers you get if you try this.
Yes, these are my angles using atan(dy/dx).
But the angle from index 1 to index 2 is wrong using the atan2() method.
Last edited by pari_alf (2014-01-18 23:57:44)
Re: angle between points of a line
I'm not that familiar with C++ to know exactly how the software actually computes the angles. I've posted in Coders Corner in the hope that someone does.
In the meantime, how much does it matter? If you are just curious, then we can wait for a programmers response.
If this is holding up the completion of a project, then let's try to find a work-around. What will you do with the angles now you have found them?
According to Microsoft (http://msdn.microsoft.com/en-us/library/88c36t42.aspx):
atan2 uses the signs of both parameters to determine the quadrant of the return value.
So let's get some more evidence. In the image below I've made up some values to cover all four quadrants with the principle value results for atan. With those values for dy and dx what does C++ do?
You cannot teach a man anything; you can only help him find it within himself..........Galileo Galilei
Re: angle between points of a line
Actually, I am trying to find the direction of each line segment using the angle computed by atan2(dy, dx).
To find the direction of each segment, I did something like this:
if (angleInDegree > 0)
    directionCode = abs((angleInDegree / 22.5) - 16);
else
    directionCode = abs((angleInDegree / 22.5) + 16);
print directionCode
My results, with angles and direction codes, look like this:
Original Points:
@ 0(105, 66)
@ 1(114, 92)
@ 2(104, 11)
@ 3(99, 131)
@ 4(71, 144)
@ 5(37, 168)
angle : 70.9065
angle : -97.0379
angle : 92.3859
angle : 155.095
angle : 144.782
Print the directional codes..
@ - 0:12
@ - 1:11
@ - 2:11
@ - 3:9
@ - 4:9
You can note that the angle from the 1st index to the 2nd index point is -97, which means the downward direction.
It should be in the upward direction; I mean the angle should be around 90,
and the direction code should be 3 or 4.
Why is my angle from point index 1 to index 2 inverted?
Last edited by pari_alf (2014-01-19 02:53:48)
Re: angle between points of a line
Hi Bob,
I tried your code in excel and got these values:
ATAN2 in Excel is different from most other programs by having switched x and y: ie, ATAN2(x,y) = ATAN(y/x).
I allowed for that in my Excel calcs, with the following results:
"The good news about computers is that they do what you tell them to do. The bad news is that they do what you tell them to do." - Ted Nelson
Re: angle between points of a line
Hi pari_alf
Having thought about this some more I now can't see anything wrong with the Index 1/Index 2 angle measurement of -97.0379°, because:
- the negative sign indicates that the angle is measured in a clockwise direction from Index 1's x-axis; and
- the clockwise measuring direction combined with the angle value places the sloping line from Index 1 to Index 2 downward at 7.0379° left of vertical.
Something odd about your drawing in post #6: it is an exact vertical flip of mine in post #5. I checked that in Geogebra, and could only replicate your drawing by entering negative y values (see
Last edited by phrontister (2014-01-20 01:05:16)
"The good news about computers is that they do what you tell them to do. The bad news is that they do what you tell them to do." - Ted Nelson
Re: angle between points of a line
hi pari_alf
I do not have a C++ compiler so I cannot check this myself. But you can. Please let me know.
In the diagram below I have taken the y direction as upwards as is usual.
C++ has a basic function atan which gives the inverse of the tan function. Let's assume there is a machine code level calculation that works out atan values (possibly a Taylor series but it does not
matter how the values are worked out).
Because many angles give the same tan, the inverse would be multi-valued. So the software programmers have made a single valued function by always giving the principle value ... meaning a value
between minus pi/2 and plus pi/2. Most calculators work this way.
Let alpha be an acute angle measured anti-clockwise from the x axis. Let a and b be positive numbers.
Then atan(b/a) = atan(-b/-a) so a line in the first quadrant and a line in the third quadrant will give the same angle.
Similarly, lines in the second and fourth quadrant will give the same angle.
So the software programmers have made a second function, atan2, which works out the same angle but applies the following correction, depending on the sign of dx and dy.
(dx,dy) = (a,b) then if atan(b/a) = alpha return the value of alpha.
(dx,dy) = (-a,b) then if atan(b/a) = alpha return the value of 180 - alpha, ie. give a second quadrant angle.
(dx,dy) = (-a,-b) then if atan(b/a) = alpha return the value of alpha - 180, ie. give a third quadrant angle.
(dx,dy) = (a,-b) then if atan(b/a) = alpha return the value of minus alpha, ie. give a fourth quadrant angle.
Hope that helps,
You cannot teach a man anything; you can only help him find it within himself..........Galileo Galilei
Re: angle between points of a line
Hi pari_alf,
you guys can note the angle of 1st index to the 2nd index point is -97 that meant in downward direction.
It should be at upward direction i meant angle should be around 90.
and direction code should be 3 or 4.
Why my angle of point index 1 to index2, is inverted.
According to my calculations in Geogebra, Excel and Mathematica, all five angles (including their signs) that you gave in your first post are correct.
Have you checked out my post #15 yet?
Looking at the values of the y-axis points in your first post, and assuming the usual upward direction of the y-axis, index 5 is the highest point (168) on the y-axis and index 2 the lowest (11), yet
your drawing in post #6 has the orientation of these two points, and all the others, inverted (ie, you show index 5 as the lowest and index 2 as the highest..etc).
It appears that in your drawing all six y-axis values have been incorrectly signed. I wonder if this could be the reason that you think the line from index 1 to index 2 should be upward instead of
downward, and that the corresponding angle is inverted. To me, the line from index 1 to index 2 is downward and the corresponding angle is negative, indicating a clockwise (downward) angle from the
x-axis of index 1.
In the image below I've used atan instead of atan2, which might help clarify things because of its slightly different approach.
"The good news about computers is that they do what you tell them to do. The bad news is that they do what you tell them to do." - Ted Nelson
Re: angle between points of a line
phrontister wrote:
It appears that in your drawing all six y-axis values have been incorrectly signed. I wonder if this could be the reason that you think the line from index 1 to index 2 should be upward instead
of downward, and that the corresponding angle is inverted.
Well, if you look at my posted figure,
1. the start point, shown as a green circle, is index 0; from index 0 the line goes down diagonally, probably into the 3rd quadrant.
2. then the line to index point 1 goes upward, probably into the 2nd or 1st quadrant.
3. from point index 2 the line goes down to index point 3 in the 3rd quadrant.
4. then from index point 3 it continues diagonally in the 3rd quadrant.
That means the points of the line run in an anti-clockwise direction, because our start point is green and the end point is red in the figure in post 6.
Then how can you say that it is going in a clockwise direction?
Note: these points are exactly correct. The points are drawn in an image window, where image indexing starts at (0,0) in the top-left corner and the highest index values lie toward the bottom-right of the image.
So my problem is only with the direction of the line from point index 1 to index 2.
The figure shows the line from index 1 to index 2 going in the upward direction, which means the angle should be positive and around 97, because our points run in an anti-clockwise direction.
I don't know what the issue is, because these points are correct and we know the start and end points, as can be seen in the image attached to post 6, which clearly shows the lines going in an anti-clockwise direction.
Last edited by pari_alf (2014-01-20 13:46:29)
Re: angle between points of a line
Hi pari_alf,
Note: these points are exactly correct. The points are drawn in an image window, where image indexing starts at (0,0) in the top-left corner and the highest index values lie toward the bottom-right of the image.
Aha! Ok, that answers the question of whose drawing is the inverted one...it's mine, although I was following what I believe is the standard convention.
Sorry, but I'm home for lunch and just saw your post, but can't answer fully because I have to go out again shortly and I'll be gone for a few hours. I'll have a closer look at your post when I
return and I'll reply then...and I think I can now see the answer through the fog.
Re: angle between points of a line
phrontister wrote:
Hi pari_alf,
Note: these points are exactly correct. The points are drawn in an image window, where image indexing starts at (0,0) in the top-left corner and the highest index values lie toward the bottom-right of the image.
Aha! Ok, that answers the question of whose drawing is the inverted one...it's mine, although I was following what I believe is the standard convention.
Sorry, but I'm home for lunch and just saw your post, but can't answer fully because I have to go out again shortly and I'll be gone for a few hours. I'll have a closer look at your post when I
return and I'll reply then...and I think I can now see the answer through the fog.
Ahhhan.. Thanks .. hope to see you soon.
Re: angle between points of a line
waiting for reply
I need to fix this issue..
Re: angle between points of a line
Sorry...working on it now. I started my reply some time ago but then I had to leave it to finish something that couldn't wait. Won't be too much longer, I hope...
Re: angle between points of a line
hi pari_alf
I thought this was settled with post 16. In case you missed it here it is again.
Let alpha be an acute angle measured anti-clockwise from the x axis. Let a and b be positive numbers.
Then atan(b/a) = atan(-b/-a) so a line in the first quadrant and a line in the third quadrant will give the same angle.
Similarly, lines in the second and fourth quadrant will give the same angle.
So the software programmers have made a second function, atan2, which works out the same angle but applies the following correction, depending on the sign of dx and dy.
(dx,dy) = (a,b) then if atan(b/a) = alpha return the value of alpha.
(dx,dy) = (-a,b) then if atan(b/a) = alpha return the value of 180 - alpha, ie. give a second quadrant angle.
(dx,dy) = (-a,-b) then if atan(b/a) = alpha return the value of alpha - 180, ie. give a third quadrant angle.
(dx,dy) = (a,-b) then if atan(b/a) = alpha return the value of minus alpha, ie. give a fourth quadrant angle.
If you look at the table below you will see that atan2 is behaving exactly as expected. Your first line is in the first quadrant, the second is in the third quadrant and the remainder are in the
second quadrant. That's how they are in your diagram and that's what atan2 is reporting.
Hope that helps,
Re: angle between points of a line
Hi pari_alf,
Ok, to avoid confusion I'll depart from the entry of points via the standard convention that is used by Geogebra, Mathematica and others where the first quadrant is used, and continue instead in the
fourth quadrant that is used by your program. I've been able to redraw my atan2 and atan images in Geogebra in the fourth quadrant (see below)...by which I mean that all six index points appear in
Geogebra's fourth quadrant.
pari_alf wrote:
that meant that our points of line are going in anti clockwise direction because our start point is green and end point is red in the figure post 6.
Then how are you saying that it is going clockwise direction?
I referred only to the Index1/Index2 angle that is measured from the x-axis of Index1, and not the positional relationship between all the points...as follows:
phrontister wrote:
I now can't see anything wrong with the Index 1/Index 2 angle measurement of -97.0379°, because:
- the negative sign indicates that the angle is measured in a clockwise direction from Index 1's x-axis; and
- the clockwise measuring direction combined with the angle value places the sloping line from Index 1 to Index 2 downward at 7.0379° left of vertical.
The angles we've been calculating are those formed by a line to the x-axis of its index point, and whether or not an angle is positive or negative is determined by the direction of the creation of
the angle: anticlockwise = positive, and clockwise = negative. Btw, I didn't know any of this until trying to find the answer to your problem, but all sites I found on the net that mention this agree
on it.
Now, I should point out that what I'll say next about positive and negative angles is the opposite of how they're understood in standard convention, but, short of asking you to redraw your image to
agree with standard orientation, this is probably my best alternative. Where 'positive' is used, convention has it as 'negative' (and vice versa), and where 'clockwise' is used, convention has it as
'anticlockwise' (and vice versa). Therefore in your case anticlockwise = negative, and clockwise = positive. Confusing, no?!
In my first drawing below, imagine that line AB is lying directly along and on top of its x-axis line AB, both 'hinged' at start point A (Index0) at (105,66). AB needs to rotate on its hinge away from its x-axis line in a clockwise direction so that the B end (Index1) reaches its target position at (114,92), thus creating positive angle PAB (70.9065°).
All angles other than QBC are created the same way and are positive.
QBC is the angle formed by line BC to its x-axis line BQ, both 'hinged' at point B (Index1) at (114,92). BC needs to rotate on its hinge away from its x-axis line in an anticlockwise direction so
that the C end (Index2) reaches its target position at (104,11), thus creating negative angle QBC (-97.0379°).
The second drawing is just the atan version that I referred to in post #17, but inverted to your program's orientation.
So...there you have it. Hope that helps.
Last edited by phrontister (2014-01-21 03:29:55)
Re: angle between points of a line
hi Phro,
Looks like we posted at almost the same time. What you have said is equivalent to what I have said but you've used rotations and I've used quadrants.
ps. What's wrong with having the positive y axis downwards. Isn't that how it is for you in Oz all the time?
Pages: 1 2 3 | {"url":"http://www.mathisfunforum.com/viewtopic.php?id=20402&p=1","timestamp":"2014-04-20T13:20:37Z","content_type":null,"content_length":"65743","record_id":"<urn:uuid:957ceb6c-9717-46e8-9047-73d2f21f3536>","cc-path":"CC-MAIN-2014-15/segments/1398223207985.17/warc/CC-MAIN-20140423032007-00407-ip-10-147-4-33.ec2.internal.warc.gz"} |
Statistics - Fiction/Non-fiction books
May 10th 2006, 07:18 AM #1
Senior Member
Apr 2006
Statistics - Fiction/Non-fiction books
Could someone please help me with the following question:
Would you expect word-length in general to differ in fiction and non-fiction books?
Take two books, of different authors, one fiction, one non-fiction. Choose a reasonable sample size of words from each, and find the mean, median, modal word-length in each and standard
deviation. Make it clear how you have done the various calculations without presenting detailed arithmetic.
In the light of the figures found, comment on the initial question (no formal inferential work needed) just an informed view from the figures found.
Could someone please help me with the following question:
Would you expect word-length in general to differ in fiction and non-fiction books?
Take two books, of different authors, one fiction, one non-fiction. Choose a reasonable sample size of words from each, and find the mean, median, modal word-length in each and standard
deviation. Make it clear how you have done the various calculations without presenting detailed arithmetic.
In the light of the figures found, comment on the initial question (no formal inferential work needed) just an informed view from the figures found.
I imagine it would also depend on the authors used in the samples. For instance, Frank Herbert would probably have an inordinate number of "$5" words, whereas Madeline L'Engle wouldn't.
I imagine it would also depend on the authors used in the samples. For instance, Frank Herbert would probably have an inordinate number of "$5" words, whereas Madeline L'Engle wouldn't.
I think it would be reasonable to say that in a fiction book there are about 100 000 words and in a non-fiction book maybe 80 000 words. Any suggestions?
Could someone please help me with the following question:
Would you expect word-length in general to differ in fiction and non-fiction books?
Yes, but it depends on the authors as well.
Take two books, of different authors, one fiction, one non-fiction. Choose a reasonable sample size of words from each, and find the mean, median, modal word-length in each and standard
deviation. Make it clear how you have done the various calculations without presenting detailed arithmetic.
When you choose the books avoid non-fiction rich in mathematical, chemical
or similar notation, it will make the sampling more difficult and ambiguous.
I would suggest that the fiction be "Moby Dick", as it's almost traditional by now to use this as a literary reference test (as in testing the "Bible Codes").
You will need to consider how big a sample you will need; this will depend
on the spread of word lengths in the texts (the SD of word lengths), and
the resolution you wish to achieve in your test (that is, do you wish to
detect a difference in mean word lengths of 1, 0.1, 0.01 ... letters with
high probability).
A rough order-of-magnitude estimate that I made indicates you may be looking
at sample sizes ~>1500 if you want to detect differences in the mean word
length of ~0.1 letters.
You will need to devise a sampling frame that selects words fairly (method
of deciding which words to include in your sample). How you do this will
depend on the facilities you have available (computer text with software
that can randomly sample the texts, or doing it by hand with paper copies
of the books).
In the light of the figures found, comment on the initial question (no formal inferential work needed) just an informed view from the figures found.
That's about all I can think of for this at present.
Nov 2005 | {"url":"http://mathhelpforum.com/advanced-statistics/2905-statistics-fiction-non-fiction-books.html","timestamp":"2014-04-17T21:04:57Z","content_type":null,"content_length":"42970","record_id":"<urn:uuid:cb5cf8c3-9fb6-4f9c-bd5a-95312f4c960f>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00624-ip-10-147-4-33.ec2.internal.warc.gz"} |
A confusing sequence question
August 17th 2010, 06:27 PM #1
Mar 2008
Acolman, Mexico
A confusing sequence question
Hello, I am completely lost with this question:
Let $x_n = n^{\frac{1}{n}}$ for $n \in \mathbb{N}$.
1. Show that $x_{n+1} < x_n$ if and only if $(1+ \frac{1}{n})^n < n$ and infer that the inequality is valid for $n \geq 3$. Conclude that $(x_n)$ is ultimately decreasing and that $x = \lim(x_n)$ exists.
2. Use the fact that the subsequence $(x_{2n})$ also converges to $x$ to conclude that $x=1$.
I see why $(1+ \frac{1}{n}) < n$ given that $n \geq 3$ , since $\lim (1+ \frac{1}{n}) = e < 3$
But that's pretty much what I got =(
Last edited by akolman; August 17th 2010 at 06:39 PM.
Since you wrote " $\lim (1+ \frac{1}{n}) = e < 3$", I'm going to assume that you mean $(1+\frac{1}{n})^n$ instead.
We have the following chain of equivalences:
$\displaystyle x_{n+1} < x_n$
$\displaystyle \Leftrightarrow n^{1/n} > (n+1)^{\frac{1}{n+1}}$
$\displaystyle \Leftrightarrow n^{\frac{n+1}{n}} > n+1$
$\displaystyle \Leftrightarrow n^{1+\frac{1}{n}} > n+1$
$\displaystyle \Leftrightarrow n.n^{\frac{1}{n}} > n+1$
$\displaystyle \Leftrightarrow n^{\frac{1}{n}} > \frac{n+1}{n} = 1+\frac{1}{n}$
$\displaystyle \Leftrightarrow n > \left(1+\frac{1}{n}\right)^n$
You should be able to do the rest now!
Last edited by Vlasev; August 17th 2010 at 09:33 PM.
Jul 2010 | {"url":"http://mathhelpforum.com/differential-geometry/153951-confusing-sequence-question.html","timestamp":"2014-04-18T03:22:24Z","content_type":null,"content_length":"36703","record_id":"<urn:uuid:2ca2a30f-a11a-4937-b33a-60928cbd6e47>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00355-ip-10-147-4-33.ec2.internal.warc.gz"} |
Vista, CA Algebra 1 Tutor
Find a Vista, CA Algebra 1 Tutor
...Further, I participated in several tutor training workshops. Specific to Mathematics, I completed 2 years as a Premed student prior to switching to major in Chemistry. In those two years I completed a number of Math courses in Calculus 1 and 2, as well as a number of Math-intensive courses such as Physics 1, Physics 2 and Physical Chemistry.
16 Subjects: including algebra 1, chemistry, biology, organic chemistry
...My family on my mom's side is from Russia, so learning Russian is very intuitive to me. I grew up learning some from my grandma, and as such, I took to studying it on my own as well. I love the
language and I have a strong grasp on all aspects of the grammar as well as pronunciation.
73 Subjects: including algebra 1, reading, chemistry, Spanish
...And everybody has to take it. The good news, at least for you, is that that means qualified tutors, like myself, see Geometry all the time, and know how to tutor it. I usually have two or more
Geometry students at any given time, and I've been tutoring since I graduated in 2009.
12 Subjects: including algebra 1, calculus, algebra 2, geometry
...I have developed the patience and caring attitude that must be present in every tutor in order to conduct an informative and educated tutoring session. I will provide the best knowledge for
your child from what I gained and will answer all of their questions. I will diligently review the necess...
10 Subjects: including algebra 1, chemistry, reading, physics
...Her images have, also, been published widely, including in LIFE, The New York Times, and Architectural Record. I took extensive art history courses at the University of Rome, Italy, and also
for my BA and MFA degrees at the University of California at San Diego. I've traveled extensively in Eur...
26 Subjects: including algebra 1, reading, English, SAT reading | {"url":"http://www.purplemath.com/vista_ca_algebra_1_tutors.php","timestamp":"2014-04-18T21:49:16Z","content_type":null,"content_length":"24025","record_id":"<urn:uuid:179b9ddb-c5b5-4dd5-9fd7-84423d54e674>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00115-ip-10-147-4-33.ec2.internal.warc.gz"} |
Refining Ballistic Reticles??
I posted this awhile back on the specialty pistols forum, and thought it might be of interest to some here:
Sometimes the trajectory curve doesn't follow ballistic reticles well enough to give even 50 or 100 yd. zeros, but it's actually fairly easy to apply a little trick that the mil-dot guys use to reference their zeros in 25 or 50 yd. increments.
OK, suppose we have a 2-7X Burris that has the following stadia subtensions in MOA: 2.1, 6.2, 10.4, 15.5.
Now, run a ballistics program for your rig-- say a hypothetical 308 Encore-- 150 Ball. Tip @ 2600 fps @ 4600 elevation- 50 degrees, 200 yd sight-in distance. The stadia zeros will be:
272 yds.
Now most guys will just leave it at that, or maybe tweak it a little to try and get a better trajectory-stadia fit for more even (50 or 100 yd.) zero numbers, and then simply approximate their holdovers for in-between ranges. But suppose you want to leave it right where it's at. It's not a bad "fit", but it may not be the easiest to use to try and shoot a coyote at 550, or 459, or whatever range. But if there was a way to reference a little more accurately when interpolating between stadia, then it would provide a little edge, so to speak, for better long-range hits. As mentioned before, some of the mil-dot users apply a system that helps with interpolating by dividing the stadia (mil-dot) gaps into tenths of a unit, thereby making interpolation easier instead of just guessing. It turns out that system can be easily applied to our ballistic reticles also, like this:
1) Subtract each stadia MOA measurement from the next larger stadia to calculate total MOA gap between each stadia:
6.2 - 2.1= 4.1 MOA
10.4 - 6.2= 4.2
15.5 - 10.4= 5.1
Now we already know that 2.1 MOA (our 1st stadia) = 272 yds., and the 2nd stadia = 393 yds., but we want to know where 300, 325, 350, and 375 lie between those 2 stadia. So, if we refer to the trajectory printout, we see the following drop in MOA for each range:
300= 2.75
325= 3.75
350= 4.5
375= 5.25
Now subtract the closer STADIA MOA zero from each 25 yd. MOA calc, as follows:
300= 2.75- 2.1= .65
325= 3.75- 2.1= 1.65
350= 4.5-2.1= 2.4
375= 5.25- 2.1= 3.15
Now simply divide each remainder by the total gap MOA calc. (4.1) to get the amount of interpolation for each range between the 2 stadia (what we're doing is making imaginary stadia that are easier to reference than just guessing for each 25 yd. range increment), as follows:
300= .65/4.1= .16
325= 1.65/4.1= .4
350= 2.4/4.1= .6
375= 3.15/4.1= .75
Now calculate the rest of the 25 (or 50 -- whatever you choose) yd. zeros for the other stadia "gaps", and make a "better" range sticker for each zero. The part we just calculated would look like this on the range sticker:
272= 1 SU (stadia unit)
300= 1.16
325= 1.4
350= 1.6
375= 1.75
393= 2
I would venture to say that a guy using this system could become very proficient at placing the bullets right where they need to go, with a little practice -- even at long range.
A Rickety Stairway to SQL Server Data Mining, Part 0.1: Data In, Data Out
In the first of a series of amateur tutorials on SQL Server Data Mining (SSDM), I promised to pull off an impossible stunt: explaining the broad field of statistics in a few
paragraphs without the use of equations. What other SQL Server blog ends with a cliffhanger like that? Anyone who aims at incorporating data mining into their IT infrastructure or skill set in any
substantial way is going to have to learn to interpret equations, but it is possible to condense a few key statistical concepts in a way that will help those who aren’t statisticians – like me – to
make productive use of SSDM without them. These crude Cliff’s Notes can at least familiarize DBAs, programmers and other readers of these tutorials with the minimal bare bones concepts they will need
to know in order to interpret the data output by SSDM’s nine algorithms, as well as to illuminate the inner workings of the algorithms themselves. Without that minimal foundation, it will be more
difficult to extract useful meaning from your data mining efforts.
The first principle to keep in mind is so absurdly obvious that it is often half-consciously forgotten – perhaps because it is right before our noses – but it is indispensable to
understanding both the field of statistics and the stats output by SSDM. To wit, the numbers signify something. Some intelligence assigned meaning to them. One of the biggest hurdles when
interpreting statistical data, reading equations or learning a foreign language is the subtle, almost subconscious error of forgetting that these symbols reflect ideas in the head of another
conscious human being, which probably correspond to ideas that you also have in your head, but simply lack the symbols to express. An Englishman learning to read or write Spanish, Portuguese, Russian
or Polish may often forget that the native speakers of these languages are trying to express the exact same concepts that an English speaker would; they have the exact same ideas in their heads as we
do, but communicate them quite differently. Quite often, the seemingly incoherent quirks and rules of a particular foreign language may actually be part of a complex structure designed to convey
identical, ordinary ideas in a dissimilar, extraordinary way. It is the same way with mathematical equations: the scientists and mathematicians who use them are trying to convey ideas in the most
succinct way they know. It is often easier for laymen to understand the ideas and supporting evidence that those equations are supposed to express, when they’re not particularly well-versed in the
detailed language that equations represent. I’m a layman, like some of my readers probably are. My only claim to expertise in this area is that when I was in fourth grade, I learned enough about
equations to solve the ones my father, a college physics teacher, taught every week – but then I forgot it all, so I found myself back at Square One when I took up data mining a few years back.
On a side note, it would be wise for anyone who works with equations regularly to consciously remind themselves that they are merely symbols representing ideas, rather than the other
way around; a common pitfall among physicists and other scientists who work with equations regularly seems to be the Pythagorean heresy, i.e. the quasi-religious belief that reality actually consists
of mathematical equations. It doesn’t. If we add two apples to two apples, we end up with four apples; the equation 2 + 2 = 4 expresses the nature and reality of several apples, rather than the
apples merely being a stand-in for the equation. Reality is not a phantom that obscures some deep, dark equation underlying all we know; math is simply a shortcut to expressing certain truths about
the external world. This danger is magnified when we pile abstraction on top of abstraction, which may lead to the construction of ivory towers that eventually fall, often spectacularly. This is a
common hazard in the field of finance, where our economists often forget that money is just an abstraction based on agreements among large numbers of people to assign certain meanings to it that
correspond to tangible, physical goods; all of the periodic financial crashes that have plagued Western civilization since Tulipmania have been accompanied by a distinct forgetfulness of this fact,
which automatically produces the scourge of speculation. I’ve often wondered if this subtle mistake has also contributed to the rash of severe mental illness among mathematicians and physicists, with
John Nash (of the film A Beautiful Mind), Nicolai Tesla and Georg Cantor being among the most recognized names in a long list of victims. It may also be linked to the uncanny ineptitude of our most
brilliant physicists and mathematicians when it comes to philosophy, such as Rene Descartes, Albert Einstein, Stephen Hawking and Alan Turing. In his most famous work, Orthodoxy, 20^th Century
British journalist G.K. Chesterton noticed the same pattern, which he summed up thus: “Poets do not go mad; but chess-players do. Mathematicians go mad, and cashiers; but creative artists very
seldom. I am not, as will be seen, in any sense attacking logic: I only say that this danger does lie in logic, not in imagination.”[1] At a deeper level, some of the risk to mental health from
excessive math may pertain to seeking patterns that aren’t really there, which may be closely linked to the madness underlying ancient “arts” of divination like haruspicy and alectromancy.
This digression into philosophy and mysticism is actually quite relevant to data mining, in order to avoid some serious potential pitfalls. One is the possibility of forgetting that
the writer of an equation has assigned a meaning to it; typically, it’s not all just gobbledygook, although it may sometimes seem so to laymen like us. The most significant risk is the recognition of
patterns that aren’t really there, or of assigning unwarranted meanings to patterns we’ve found or imagined; this is an error even the most highly trained mathematicians and logicians can fall into,
including the leading data miners. To make matters even worse, intellectual dishonesty can lead people to deliberately assign unwarranted meanings to statistics, apply the wrong to stats to the wrong
question, or resort to many other means of subterfuge to obscure the truth. Our politicians do it every day, as we all know. It is because of this that British Prime Minister Benjamin Disraeli
allegedly coined the famous phrase, “There are three kinds of lies: lies, damned lies, and statistics.”[2] There are ways of spotting dishonesty in logic and statistics, but my main purpose here is
to help novice data miners avoid honest mistakes, a fate which can be avoided through a better understanding of SSDM. The data output by it has a common format using identical terms, which aids in
the critical task of assigning meaning, but there are some subtle distinctions in how those terms should be interpreted from one algorithm to the next. Furthermore, setting inputs, predictable
attributes and parameters incorrectly can lead to faulty assessments of the meaning of the output data, as can poor choices of algorithms or simply feeding them bad data. Garbage in, garbage out, as
the saying goes. Data mining is more complex, however, in that we can taint our data by using the wrong garbage cans to collect it, so to speak. These algorithms are brilliantly designed, to the
point where they can sometimes spit out useful results even when used haphazardly, but they do not work magic. They can only operate on the data they are fed; they cannot violate the Ex Nihilo
principle that information cannot simply be created out of thin air. The Time Series algorithm, for example, cannot predict the weather using only temperature data as accurately as it could if we
also included records about other variables like wind speed. DBAs and database programmers are normally skilled in modern methods of validation, de-duplication and other means of assuring data
integrity, so putting garbage data in is theoretically less of a problem with SSDM. Over-interpretation of data, inputting the wrong kinds of data and the like are much more relevant issues with data mining.
Statistical methods are just tools designed to address some of these potential problems. Data mining cannot introduce new information that is not already present, but statistical
methods can be used to conserve existing information and put it to use. It is akin to combining bubble plastic wrap and cardboard boxes to make sure that a package is not damaged in the mail, or
better yet, picture it is as using many different methods like caulking, storm windows and the like to keep the heat from leaking out of your home. Statistical methods are no different from ordinary
implements like hammers and screwdrivers, except that they are abstract tools used to conserve and present information in a useful way. Of course, we need to choose the right combination of
statistical tools for a particular job, just as we need to be careful in the tools we use to build a house or repair a bicycle; just as we wouldn’t use a wrench to pry out a Phillips head screw, we
wouldn’t measure how much two datasets resemble each other merely by comparing their modes. It might be better to picture some of the more sophisticated combination of statistical methods as a sort
of contraption made up of different cameras and lights on armatures, which can illuminate and photograph a particular object from many directions all at once; humans cannot easily visualize such a
device operating on objects of more than three dimensions, as we routinely encounter in SQL Server Analysis Services (SSAS) cubes, but the basic concept is still the same. Imagine identifying an
object as an elephant by taking photographs from different directions simultaneously to reveal a tail and a trunk, then shining a light from another angle to reveal two tusks. This is basically what
occurs in the field of statistics, which has a vast array of tools available to fit a lot of different applications. I’m only going to give a very brief summary of the tools applicable to SSDM,
without going into the many variations on them that I’m sure competent statisticians could point out. For example, probability is a simple concept that I assume all of this blog’s readers can already
understand. All of the nine algorithms make use of it, but there is no sense in delving into variations of it that SSDM doesn’t depend upon, like unordered pairs, compound events and the like.
SQL Server-Specific Statistics
It is easier to understand the statistics applicable to data mining when you realize that there is a kind of hierarchy among them, with the more complex algorithms and stats depending
on varying combinations of certain basic building blocks. At the base of the pyramid are three measures of central tendency, i.e. the predilection of data to cluster around certain values: mean,
median and mode. The last of these is the most difficult of the three to calculate (in T-SQL it requires the Count aggregate and the TOP function) but is the least applicable to SSDM;
medians, on the other hand, can be useful if data is skewed in a particular direction. It is a basic building block of quantiles, which divide datasets into fragments of n size, as the NTILE
windowing function does in T-SQL. Averages are the most important of these three measures of central tendency in SSDM, for they are a basic building block in several hierarchies of increasingly
sophisticated statistical methods. For example, both SSAS and SSDM make frequent use of variance, which is a measure of how much variability there is among the values in a dataset; this is calculated
by subtracting the mean from each number in the dataset, then squaring them to change any resulting negatives to positive numbers that can be used to compare distance from the mean. The results are
then divided by the number of items in the dataset minus one. Standard deviation is simply the square root of the variance and is useful in showing how much variety there is in a dataset; for
example, if you suspect that a few middle aged students are throwing off the average age of a college classroom, you could use standard deviation to investigate how much variety there is in the ages
in your sample. Z-scores in turn measure how closely an individual value is related to the rest of the dataset by subtracting the mean and dividing by the standard deviation. Covariance likewise
builds upon standard deviation, by averaging the products of the deviations of two datasets. It is useful in measuring how different two datasets are from each other, with high positive numbers
signifying a close relationship, negative numbers signifying a pronounced inverse relationship and numbers around zero indicating a lack of a connection. Pearson Product Moment Correlation, often
referred to simply as Correlation, in turn builds on that method by using the product of the standard deviations of two datasets as divisors with their covariance, to give a more accurate measure of
the relationship of two datasets to each other.[3] Because some of these methods are subject to a slight bias towards underestimation, particularly with small samples, a small adjustment known as
Bessel’s Correction is sometimes made by dividing by the number of points in the dataset minus the number of parameters (usually 1).[4] The result is known as an unbiased population formula, while
statistics like this which don’t apply this correction are known as biased.
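These building blocks are simple enough to sketch in a few lines of code. The following illustration is in Python rather than T-SQL, purely for compactness; it mirrors the definitions above, with a bessel flag to switch between the unbiased and biased formulas, and the sample ages are invented:

```python
import math

def mean(xs):
    return sum(xs) / len(xs)

def variance(xs, bessel=True):
    # Squared deviations from the mean; dividing by n - 1 applies
    # Bessel's correction (unbiased), dividing by n gives the biased formula.
    m = mean(xs)
    ss = sum((x - m) ** 2 for x in xs)
    return ss / (len(xs) - 1) if bessel else ss / len(xs)

def stdev(xs, bessel=True):
    # Standard deviation is simply the square root of the variance.
    return math.sqrt(variance(xs, bessel))

def z_score(x, xs):
    # How far one value sits from the rest of the dataset,
    # in units of standard deviations.
    return (x - mean(xs)) / stdev(xs)

def covariance(xs, ys, bessel=True):
    # Average of the products of the deviations of two datasets.
    mx, my = mean(xs), mean(ys)
    s = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
    return s / (len(xs) - 1) if bessel else s / len(xs)

def correlation(xs, ys):
    # Pearson Product Moment Correlation: covariance divided by
    # the product of the two standard deviations.
    return covariance(xs, ys) / (stdev(xs) * stdev(ys))

# A classroom where a few middle-aged students throw off the average:
ages = [18, 19, 19, 20, 21, 45, 52]
print(round(mean(ages), 2))    # → 27.71
print(round(stdev(ages), 2))   # → 14.37
```

The perfectly linear pair correlation([1, 2, 3], [2, 4, 6]) comes out as 1.0, while unrelated datasets hover near zero, matching the interpretation given above.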
These statistical methods have their counterparts in MDX, T-SQL and DMX. Multidimensional Expressions (MDX), the language used in SSAS cubes, provides corresponding functions like
Median, Stdev, StdevP (for biased standard deviations), Variance (or Var), VarianceP (or VarP, using the biased formula), Correlation, Covariance and CovarianceN (for unbiased covariance). T-SQL
likewise provides the AVG, STDEV, STDEVP, VAR and VARP aggregates, which are now more useful than ever before thanks to several enhancements of the OVER clause. One of the most exciting developments
in SQL Server 2012 is the complementing of ROW_NUMBER, RANK, DENSE_RANK, and NTILE with new windowing functions like LEAD, LAG, FIRST_VALUE, LAST_VALUE, PERCENTILE_CONT, PERCENTILE_DISC, PERCENT_RANK
and CUME_DIST, all of which may prove useful in massaging data on the relational side before mining. In a nutshell, the latter four calculate relative ranks of a row as percents or within
percentiles.[5] For an in-depth discussion of these T-SQL enhancements, see Itzik Ben-Gan’s excellent book, Microsoft SQL Server 2012 High-Performance T-SQL Using Window Functions. DMX (Data Mining
Extensions), the language for working in SSDM, provides several prediction functions which I will discuss towards the end of this series of tutorials, as well as statistical functions like
PredictHistogram, PredictVariance, PredictStdev and PredictAdjustedProbability. The last of these uses an undocumented internal formula to correct probability estimates, to reduce the chances of a
data value being considered probable even when it is rarely or never found in a dataset.[6]
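As an aside, the rank-based windowing functions are easy to emulate outside of SQL Server, which can be handy for sanity-checking results. This Python sketch reproduces the semantics of PERCENT_RANK and CUME_DIST over a single partition; it is an illustration of the formulas, not SQL Server's implementation:

```python
def percent_rank(values):
    # T-SQL PERCENT_RANK: (rank - 1) / (n - 1), where a row's rank is
    # one more than the number of rows strictly below it.
    n = len(values)
    return [sum(v < x for v in values) / (n - 1) for x in values]

def cume_dist(values):
    # T-SQL CUME_DIST: the fraction of rows less than or equal to this row.
    n = len(values)
    return [sum(v <= x for v in values) / n for x in values]

data = [10, 20, 20, 30]
print(percent_rank(data))  # → [0.0, 0.333..., 0.333..., 1.0]
print(cume_dist(data))     # → [0.25, 0.75, 0.75, 1.0]
```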
As I touched upon briefly in my last column, the nine algorithms Microsoft provides are much more sophisticated, sometimes by several orders of magnitude, but even the most refined of
them depends upon basic building blocks like these. Linear Regression can also be calculated in MDX through the LinRegIntercept, LinRegPoint, LinRegR2, LinRegSlope and LinRegVariance functions, but
that is about as far as we can go using solely MDX or T-SQL, unless we want to duplicate the functionality in SSDM’s algorithms, which may waste a lot of developer time and entail a lot of risk in
reinventing the wheel. Logistic Regression can be seen as an adjustment to Linear Regression to constrain for minimum and maximum output values; it constitutes the equivalent of the Neural Network
algorithm, except without a hidden layer of weighted neurons. Decision Trees in turn makes use of multiple regressions, with adjustments applied. Time Series also applies a long chain of corrections
to data obtained through Decision Trees and moving averages, as I will discuss in greater detail when the time for that algorithm’s blog post comes up for discussion. Likewise, Sequence Clustering is
a more sophisticated version of the Clustering algorithm optimized for chains of events. The simplest algorithm, Naïve Bayes, plays a hidden role in several of these more sophisticated cousins when
SQL Server performs Feature Selection under the hood. Before a dataset is trained, SQL Server may optimize performance and the usefulness of the output under certain conditions, by capping the number
of columns used in making predictions, as well as discarding those that it deems unlikely to provide any useful information. This may be determined by four different methods, two of which are
sophisticated variations on Naïve Bayes based on data distributions, known as Bayesian with K2 Prior and Bayesian Dirichlet Equivalent with Uniform Prior. The Interestingness Score is also always
used with continuous columns, while a probabilistic method known as Shannon’s Entropy is used for discrete and discretized values. Association Rules, Sequence Clustering and Time Series don’t make
use of Feature Selection, while Clustering and Linear Regression must use the Interestingness Score, which Naïve Bayes cannot use. Most of the algorithms that make use of Feature Selection have
parameters named MAXIMUM_INPUT_ATTRIBUTES and MAXIMUM_OUTPUT_ATTRIBUTES, which represent the highest number of inputs or predictable columns (the default for both is 255) that the algorithm will
accept before applying Feature Selection. Setting the number to zero disables Feature Selection altogether, which may allow you to use inputs or outputs that it disallows, at the cost of possible
degradations in performance and usefulness, of course. In algorithms that support it, MAXIMUM_STATES can also be used to cap the number of values considered for a column to the top n most popular
values, with the rest of the values being treated as Missing. SQL Server Books Online (BOL) provides a more detailed look at Feature Selection, complete with links to technical articles on the
Bayesian methods. If you choose your input columns well, however, you won’t have to worry about your choices being overridden by Feature Selection, which you can disable if necessary. The important
thing to keep in mind is that this selective sampling of your data may be going on under the hood if you’ve reached the threshold for invoking Feature Selection, in which case some of your input or
output data may unexpectedly be absent.
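Of the feature-selection methods mentioned above, Shannon's Entropy is the easiest to demonstrate. This minimal Python sketch illustrates the formula only, not SSDM's internal scoring; it shows how a discrete column dominated by one state scores lower, and therefore carries less usable information, than one with evenly spread values:

```python
import math
from collections import Counter

def shannon_entropy(values):
    # H = -sum(p * log2(p)) over the distinct values of a discrete column.
    # Low entropy means a few states dominate, so the column is unlikely
    # to contribute much predictive information.
    n = len(values)
    counts = Counter(values)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

print(shannon_entropy(list("AAAB")))  # skewed column, low entropy
print(shannon_entropy(list("ABCD")))  # uniform column, 2 bits
```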
SSDM Output: A Bird’s Nest of Nested Tables
After you process a mining structure, SQL Server populates objects in an SSAS database with the results, which can be retrieved several different ways. The most sophisticated method
is to view the charts and other fancy graphical illustrations on the Mining Model Viewer tab in BIDS or SSDT. Microsoft provides a Tree Viewer, Association Rules Viewer, Naïve Bayes Content Viewer,
Neural Network Viewer, Cluster Viewer, Sequence Clustering Viewer and Time Series Viewer (each prepended by the company’s name). Logistic Regression uses the Neural Network Viewer because of the
similarity of those two algorithms, just as Linear Regression shares the Tree Viewer with Decision Trees. Because these illustrations are so varied, I won’t go into detail on any of them until their
particular algorithms come up for discussion in later posts. It is easier to discuss the common format they all share in the Generic Content Viewer, which you can select on the Mining Model Viewer
tab, as depicted below:
This particular output is associated with a Sequence Clustering mining model, but the format is similar to that used by all of the other algorithms; essentially, the nine algorithms
spit out disparate information, which is then stored in a common format, which the graphs in the Mining Model Viewer tab depict in dissimilar ways. Because the output of several disparate algorithms
is mashed together like this, a lot of interpretation is required, especially since the data also represents several different levels of denormalized tables in flattened tree structures. This degree
of denormalization and the common format are important considerations when working with DMX, as opposed to MDX and T-SQL. The most basic method to return this raw data is to run a DMX SELECT Content
query against the name of the mining model, which I will deal with much later in the series. At that point I will discuss different ways of eliminating some of the redundancies in the data and how to
separate some of the denormalized data it returns into a series of tables linked by foreign keys, but that is an advanced topic we don’t have to worry about right now. The Generic Content Viewer is
useful in that it does some of the grunt work by representing some of the denormalized data in a tree structure for you, as you can see from the list of Clusters on the left side of the picture
above. When you retrieve any of the same data with DMX, it will be returned as a flattened tree, with several different branches included in the same table. The way to differentiate each branch from
the other is by the most important column returned in DMX Content queries, the NODE_TYPE. The tables these queries return contain multiple levels of the same tree structure, with the roots, leaves
and intermediate nodes all sharing the same columns of data, but representing completely different meanings depending on the NODE_TYPE. The potential values vary considerably from one algorithm to
the next, as Figure 1 demonstrates:
Once we know the NODE_TYPE, we can determine what kind of node we are dealing with and interpret the values in the other columns returned by the DMX Content Query, which are
represented on the right side of the previous image of the Generic Content Viewer. The information returned in the Generic Content Tree Viewer and DMX content queries contains some obvious or
superfluous information, such as the MODEL_CATALOG and MODEL_NAME, which are simply the name of the SSAS database the data mining objects and metadata are contained in and the name of the mining
model. NODE_NAME and NODE_UNIQUE_NAME are useful in that they can be used to identify a particular node by its assigned number and help correlate it with other nodes, but they are redundant.
PARENT_UNIQUE_NAME is also used to help correlate child nodes with their parents, while CHILDREN_CARDINALITY can help identify how many children a node has. NODE_DESCRIPTION, NODE_CAPTION and
ATTRIBUTE_NAME can all be used to identify the purpose of a node better, by giving a text explanation, such as a list of columns in a cluster with either of the Clustering algorithms. There is quite
a bit of overlap here though, especially when the XML representations in MSOLAP_MODELCOLUMN and MSOLAP_NODE_SHORT_CAPTION are thrown into the mix. To add to the confusion, some of the nodes are not
included at all in this common format when you perform a DMX SELECT content query with certain algorithms, such as NODE_RULE and MARGINAL_RULE in the case of Clustering. At other times, some of these
columns are simply left blank, set to NULL or identified as Missing, depending on the algorithm and NODE_TYPE in question. The same is also true with respect to the four columns that provide actual
statistics: NODE_SUPPORT (the count of cases), NODE_PROBABILITY, MARGINAL_PROBABILITY and MSOLAP_NODE_SCORE. Refer to the table below for more details:
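To see how the flattened tree can be reassembled, here is a hypothetical Python sketch that regroups content-query rows by PARENT_UNIQUE_NAME. The three sample rows and their NODE_TYPE values are invented, but the column names are the real ones discussed above:

```python
def build_tree(rows):
    # Rows come back flat from a DMX content query; NODE_UNIQUE_NAME and
    # PARENT_UNIQUE_NAME are enough to regroup them into branches.
    by_name = {r["NODE_UNIQUE_NAME"]: r for r in rows}
    roots, children = [], {}
    for r in rows:
        parent = r.get("PARENT_UNIQUE_NAME")
        if parent in by_name:
            children.setdefault(parent, []).append(r)
        else:
            roots.append(r)  # no known parent: a root of the tree
    return roots, children

rows = [
    {"NODE_UNIQUE_NAME": "0", "PARENT_UNIQUE_NAME": None, "NODE_TYPE": 1},
    {"NODE_UNIQUE_NAME": "1", "PARENT_UNIQUE_NAME": "0",  "NODE_TYPE": 5},
    {"NODE_UNIQUE_NAME": "2", "PARENT_UNIQUE_NAME": "0",  "NODE_TYPE": 5},
]
roots, children = build_tree(rows)
print(len(roots), len(children["0"]))  # → 1 2
```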
The presence of multiple levels of nodes in a single table denotes a high degree of denormalization, but NODE_DISTRIBUTION takes that one step further by nesting another table within
each row returned. Each NODE_DISTRIBUTION contains a table of values with a common format, including columns for ATTRIBUTE_NAME, ATTRIBUTE_VALUE, SUPPORT, PROBABILITY and VARIANCE whose meaning may
vary in dramatic or subtle ways from one NODE_TYPE or algorithm to the next. It also returns a flag named VALUETYPE, which may cause further confusion by assigning a different meaning to the data in
a particular row of the NODE_DISTRIBUTION table. For example, if we encounter a NODE_TYPE of 28, that means we are dealing with a periodic structure of the ARIMA algorithm within Time Series. The
NODE_DISTRIBUTION table for that row will contain its own rows, which may assign different meanings to ATTRIBUTE_NAME, ATTRIBUTE_VALUE, SUPPORT, PROBABILITY and VARIANCE, depending on whether that
subordinate row's VALUETYPE is 12 for Periodicity, 13 for Auto Regressive Order, 14 for Moving Average Order or 15 to signify a Difference Order. As with NODE_TYPE, the values
represented in VALUETYPE are sometimes limited to certain algorithms; the values 12 through 15 are unique to Time Series, for example. The figure below lists the potential values for the VALUETYPE flag:
Figure 3: Potential Values for VALUETYPE in a NODE_DISTRIBUTION Table[7]
This hodgepodge of nested tables and separate meanings for columns within each table, depending on the NODE_TYPE or VALUETYPE, can create a lot of confusion. Sometimes the differences
in meaning between statistics like NODE_PROBABILITY and VARIANCE can also vary in subtle ways. In the future, I’ll discuss some methods I’ve devised (possibly with the help of other sources on the
Internet) of getting the data into T-SQL queries and sorting out this confusing mass of information. As long as we’ve fed the right algorithms enough good data, there should be some useful
information buried in there, which we can dig out, by mining the mining results so to speak. Comparing the results of the nine algorithms can be a little like comparing apples and oranges at times,
so it is remarkable that SQL Server’s Data Mining Team was able to devise a basket capable of holding the results of all of them, in the form of this common format. To really understand the results,
however, we will have to discuss the inner workings of each algorithm separately, since each is essentially a different flavor, or rather a different kind of tool that answers dissimilar questions.
In next week’s post, A Rickety Stairway to SQL Server Data Mining, Algorithm 1: Not-So-Naïve Bayes, I’ll demonstrate how to set up a mining experiment on data that DBAs are more likely to be familiar
with, like the output of sys.dm_os_wait_stats, sys.dm_io_virtual_file_stats and the System Log. That ought to make it easier to interpret the results of the first graphs we encounter, in the Naïve
Bayes Content Viewer.
[1] See Chesterton, G.K., 2001, Orthodoxy. Image Books: London.
[2] Mark Twain attributed it to Disraeli, but it is unclear if he was actually the source. See the Wikipedia page “Lies, Damned Lies, and Statistics”, available at http://en.wikipedia.org/wiki/Lies,_damned_lies,_and_statistics.
[3] For a readable primer on the topic, see Wagner, Susan F., 1991, Introduction to Statistics. HarperPerennial: New York.
[5] pp. 68-74, Ben-Gan, Itzik, 2012, Microsoft SQL Server 2012 High-Performance T-SQL Using Window Functions. O’Reilly Media, Inc.: Sebastopol, California.
[7] I adapted this from Books Online in a roundabout way, first by pasting the table into a DMX query so that I could create a CASE statement, then pasted it back here and added some simple
commentary. So I suppose BOL should ultimately get the credit for this piece of information (as it should for a lot of other things I write about in this series), but I’m having trouble finding the
original page I cited it from.
Posted on November 28, 2012, in Uncategorized.
Gauss's Day of Reckoning
A famous story about the boy wonder of mathematics has taken on a life of its own
Making History
If Sartorius did not specify a series running from 1 to 100, where did those numbers come from? Could there be some other document from Gauss's era that supplies the missing details? Perhaps someone
to whom Gauss told the story "with amusement and relish" left a record of the occasion. The existence of such a corroborating document cannot be ruled out, but at present there is no evidence for it.
None of the works I have seen makes any allusion to another early source. If an account from Gauss's lifetime exists, it remains so obscure that it can't have had much influence on other tellers of
the tale.
In the literature I have surveyed, the 1-100 series makes its first appearance in 1938, some 80 years after Sartorius wrote his memoir. The 1-100 example is introduced in a biography of Gauss by
Ludwig Bieberbach (a mathematician notorious as the principal instrument of Nazi anti-Semitism in the German mathematical community). Bieberbach's telling of the story is also the earliest I have
seen to specify Gauss's strategy for calculating the sum—the method of forming pairs that add to 101. Should Bieberbach therefore be regarded as the source from whom scores of later authors have
borrowed these "facts"? Or is this a case of multiple independent invention?
If you think it utterly implausible that two or more authors would come up with the same example and the same method, then Bieberbach himself is disqualified as the source. A full millennium before
Gauss and Büttner had their classroom confrontation, essentially the same problem and solution appeared in an eighth-century manuscript attributed to Alcuin of York.
Furthermore, in the years since Bieberbach wrote, there is unmistakable evidence of independent invention. Not all versions agree that the sequence of numbers was the set of consecutive integers from
1 through 100. Although that series is the overwhelming favorite, many others have been proposed. Some are slight variations: 0-100 or 1-99. Several authors seem to feel that adding up 100 numbers is
too big a job for primary-school students, and so they trim the scope of the assignment, suggesting 1-80, or 1-50, or 1-40, or 1-20, or 1-10. A few others apparently think that 1-100 is too easy, and
so they give 1-1,000 or else a series in which the difference between successive terms is a constant other than 1, such as the sequence 3, 7, 11, 15, 19, 23, 27. (The example series chosen by various
authors and other features of the versions are tabulated in the table at right.)
Perhaps the most influential version of the story after that of Sartorius is the one told by Eric Temple Bell in Men of Mathematics, first published in 1937. Bell has a reputation as a highly
inventive writer (a trait not always considered a virtue in a biographer or historian). He turns the Braunschweig schoolhouse into a scene of gothic horror: "a squalid relic of the Middle Ages run by
a virile brute, one Büttner, whose idea of teaching the hundred or so boys in his charge was to thrash them into such a state of terrified stupidity that they forgot their own names." Very cinematic!
When it comes to the arithmetic, however, Bell is one of the few writers who scruple to distinguish between fact and conjecture. He doesn't claim to know the actual numerical series, but writes: "The
problem was of the following sort, 81297 + 81495 + 81693 + ... + 100899, where the step from one number to the next is the same all along (here 198), and a given number of terms (here 100) are to be
added." (Personally, I'd have a hard time even writing that problem on a small slate, much less solving it.)
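Whatever series the tellers settle on, the pairing method works for any arithmetic progression: pair the first term with the last, the second with the second-to-last, and so on, so the total is the number of terms times the sum of the first and last terms, divided by two. A short Python sketch (mine, not anything from the historical record) checks both the canonical series and Bell's harder one:

```python
def arithmetic_series_sum(first, step, terms):
    # Pair first with last, second with second-to-last, and so on;
    # every pair sums to (first + last), and there are terms/2 pairs.
    last = first + step * (terms - 1)
    return terms * (first + last) // 2

print(arithmetic_series_sum(1, 1, 100))        # the canonical 1-100 series → 5050
print(arithmetic_series_sum(81297, 198, 100))  # Bell's version → 9109800
```

The same function handles the variant series mentioned above; for 3, 7, 11, 15, 19, 23, 27 it returns 105.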
Bill Lionheart's
School of Mathematics, University of Manchester
(almost complete)
Some preprints and e-prints are on the MIMS e-print server. Automatic list of my MIMS eprints. There are also links to Zentralblatt (Zbl), MathSciNet, ArXiV preprint server. The link to DOI (Digital
object identifier) goes to the on-line version of the journal article (to which you may or may not have a subscription!).
The ISBN link goes to Wikipedia's "book source" page which may help you find the book in a library or shop near you. Also there is a link to Amazon.com, but no special endorsement of Amazon is implied.
You can also search for my publications in Google Scholar (getting better). PubMed
ArXiv.org, Smithsonian/NASA (ADS)
ISI Researcher ID has some of my publications. My MathsRev Author ID is 315924.
At the bottom of the page there is a list of the last known locations of some of my coauthors.
Refereed Publications
1. Breckon, W R, Pidcock, M K, Mathematical Aspects of Impedance Imaging, Clin. Phys. Physiol. Meas., 8, Suppl A, 1987, Ch.21, pp.77-84
DOI: 10.1088/0143-0815/8/4A/010
2. Breckon, W R, Pidcock, M K, Some Mathematical Aspects of Impedance Imaging, Mathematics and Computer Science in Medical Imaging, Ed Viergever and Todd-Pokropek, NATO ASI series F, Vol 39,
Springer, 1988.
ISBN: 3540186727
3. Breckon, W R, Pidcock, M K, Ill-Posedness and Non-Linearity in Electrical-Impedance Tomography, Information Processing in Medical Imaging, Ed de Graaf and Viergever, pp.235-244, Plenum, 1988
4. Breckon, W R, Pidcock, M K, Progress in Electrical-Impedance Tomography, Some Topics on Inverse Problems, pp.254-264, ed Sabatier, World Scientific, 1988.
Zbl: 850.35133
5. Breckon, W R, Pidcock, M K, Data errors and reconstruction algorithms in electrical impedance tomography, Clin. Phys. Physiol. Meas. 9 105-109, 1988
DOI: 10.1088/0143-0815/9/4A/018
6. Breckon, W R, Paulson, K S and Pidcock, M K, 'Parallelism in EIT Reconstruction', Information Processing in Medical Imaging, Progress in Clinical and Biological Research, Vol 363, Ed. Ortendahl,
D.A. and Llacer, J., 187-196, Wiley-Liss (1991).
7. Breckon, W R, 'Measurement and Reconstruction in EIT', Inverse Problems and Imaging, Research Notes in Mathematics 245, Ed Roach G., 130-140, Pitman(1991). ISBN: 0-582-06424-4 (hard) or
0-470-21718-9 (paper)
Zbl: 0755.92011 Amazon.com
8. Lidgey F J, Zhu Q S, McLeod, C N and Breckon W R, 'Electrode current determination from programmable voltage sources', Clinical Physics and Physiological Medicine, Vol 13A, 43-47,(1992).
9. Paulson, K S, Breckon, W R, Pidcock, M K, 'Electrode modelling in Electrical Impedance Tomography', SIAM Journal of Applied Mathematics, Vol 52, 1012-1022, (1992).
Zbl: 0760.35055
10. Paulson K S, Breckon W R and Pidcock M K, 'A hybrid phantom for Electrical Impedance Tomography', Clinical Physics and Physiological Medicine, Vol 13A, 155-161,(1992).
11. Breckon, W R, 'The problem of Anisotropy in Electrical Impedance Tomography', Proceedings of 14th International Conference of the IEEE Engineering in Medicine and Biology Society, Vol 5,
1734-1735, Paris, (1992).
MIMS e-print: 2008.44
12. Paulson, K S, Breckon, W R and Pidcock, M K, 'Optimal measurements in Electrical Impedance Tomography 'Proceedings of 14th International Conference of the IEEE Engineering in Medicine and Biology
Society, Vol 5, 1730-1731, Paris, (1992).
13. Zhu, Q, Breckon, W R, Lidgey, F J and McLeod, C N, 'A voltage driven Adaptive Current Electrical Impedance Tomograph', Proceedings of 14th International Conference of the IEEE Engineering in
Medicine and Biology Society, Vol 5, 1704-1705, Paris, (1992).
14. Zhu, Q, Denyer C W, Lidgey, F J, Lionheart, W R and McLeod C N, 'A serial data acquisition architecture for continuous Impedance Imaging', Proceedings of 15th International Conference of the IEEE
Engineering in Medicine and Biology Society, 1024-1025, Paris, (1992).
15. Paulson, K S, Breckon, W R, Pidcock, M K, 'Optimal Experiments in Electrical Impedance Tomography', IEEE Transactions on Medical Imaging, Vol 12, No 2, 681-686, (1993).
16. Paulson, K S, Lionheart, W R and Pidcock, M K, 'Fast, non-linear inversion for Electrical Impedance Tomography', Information Processing in Medical Imaging, Ed. Barrett. and Gmitro, A F, 244-258,
Springer (1993).
17. Zhu, Q, McLeod, C N, Breckon, W R, Lidgey F J, Paulson, K S and Pidcock, M.K, 'An Adaptive Current Tomograph using voltage sources', IEEE Transactions on Biomedical Engineering, Vol 40, No 2,
(163-168), (1993).
18. Zhu QS, McLeod CN, Denyer CW, Lidgey FJ, Lionheart WRB, Serial data acquisition architecture for continuous impedance imaging, Proceedings of the Annual Conference on Engineering in Medicine and
Biology, 1993, Vol. 15, Part 2, pp. 1024-1025, IEEE, Piscataway, NJ, USA
19. Paulson, K S, Lionheart, W R and Pidcock, M K, 'A fast inversion for Electrical Impedance Tomography', Image and Vision Computing, Vol 12, No 6, 367-373, (1994).
20. Zhu, Q, Denyer, C W, Lidgey, F J, Lionheart, W R and McLeod, C N, 'Development of a real-time adaptive current tomograph', Physiological Measurement, Vol 15, Supp 2A, 37-45 (1994)
DOI: 10.1088/0967-3334/15/2A/005
21. McLeod, CN, Shi, Y, Denyer CW, Lidgey, FJ, Lionheart WRB, Paulson KS, Pidcock MK, Chest Impedance Imaging using trigonometric current patterns, Proc IX International Conference on
Bio-Impedance, Heidelberg, p408-409, 1995.
MIMS e-print: 2008.42
22. Lionheart WRB, Uniqueness of solution for the anisotropic Electrical Impedance Inverse Problem, Proc IX International Conference on Bio-Impedance, Heidelberg, p515-516, 1995.
23. Paulson K, Lionheart W, Pidcock M, POMPUS - An Optimized EIT Reconstruction Algorithm, Inverse Problems, 1995, Vol.11, No.2, pp.425-437
Zbl: 0822.35152 DOI: 10.1088/0266-5611/11/2/010
24. Molebny V.V, Lionheart W.R.B, Vovk P.P., Mykytenko Y.H, Gouz V.I., Sensor Position measurement for Electroimpedance Tomography, Medical & Biological Engineering and Computing, 1996, Vol 34, Suppl
1, Part 1,
25. Lionheart W.R.B, Conformal uniqueness results in anisotropic electrical impedance imaging, Inverse Problems, Vol 13, No 1, 125-134, February 1997
Errata and Comments
DOI: 10.1088/0266-5611/13/1/010 MathSciNet: 98c:78025 Zbl: 868.35140
26. Lionheart, WRB Lidgey, FJ McLeod, CN Paulson, KS Pidcock, MK Shi, Y, Electrical Impedance Tomography for high speed chest imaging, Physica Medica, Vol 13, Suppl 1,247-249, Dec 1997,
MIMS e-print: 2008.30
27. Lionheart, WRB, Boundary Shape and Electrical Impedance Tomography, Inverse Problems, Vol 14, No 1, 139-147,1998.
Zbl: 894.35129 MathSciNet: 98m:78029 DOI: 10.1088/0266-5611/14/1/012 MIMS e-print: 2008.28
28. Arridge, S.R., Lionheart., W.R.B., "Non-Uniqueness in Diffusion-Based Optical Tomography", Optics Letters, 23, 882-884, 1998. Reprint
GoogleScholar: versns; citns
MIMS e-print: 2008.72
29. McLeod C, Lionheart, W, ELECTRIC IMPEDANCE IMAGING, Encyclopedia of Electrical and Electronics Engineering , edited by John G. Webster, Wiley 1999, ISBN: 0-471-13946-7
30. Lionheart WRB. Uniqueness, shape, and dimension in EIT, ANNALS OF THE NEW YORK ACADEMY OF SCIENCES, 1999, Vol.873, pp.466-471
31. K.Jerbi, W.R.B. Lionheart, P.J. Vauhkonen, "Image reconstruction in EIT assuming a translationally invariant conductivity distribution", Medical & Biological Engineering & Computing, Vol.37,
Suppl. 2, 1999. (European Medical and Biological Engineering Conference, EMBEC '99, Vienna, 1999). This conference proceedings supplement appears not to be online, but a few details of the
conference are on the EMBEC web site. See instead Jerbi et al 2000 below which in any case is a more detailed paper.
32. V Kolehmainen, S R Arridge, W R B Lionheart, M Vauhkonen and J P Kaipio, Recovery of region boundaries of piecewise constant coefficients of an elliptic PDE from boundary data, Inverse Problems
15 No 5 (October 1999) 1375-1391.
Zbl: 868.35140 MathSciNet: 2000g:35231 DOI: 10.1088/0266-5611/15/5/318 Preprint
33. W R B Lionheart, S R Arridge, M Schweiger, M Vauhkonen and J P Kaipio, Electrical Impedance and Diffuse Optical Tomography Reconstruction Software, Proceedings of the 1st World Congress on
Industrial Process Tomography, p474-477, Buxton, Derbyshire, 1999. Reprint
34. K. Jerbi, W.R.B. Lionheart, P.J. Vauhkonen, M. Vauhkonen Sensitivity Matrix and Reconstruction Algorithm for EIT assuming axial uniformity, Physiological Meas, Volume 21, Issue 1 ( 2000), p61-66
DOI: 10.1088/0967-3334/21/1/308
MIMS e-print: 2013.54
35. N. Kerrouche, CN McLeod, WRB Lionheart, Time series of EIT chest images using singular value decomposition and Fourier transform, Physiol. Meas. 22 No 1 (February 2001) 147-157
DOI: 10.1088/0967-3334/22/1/318
36. M Vauhkonen, W R B Lionheart, L M Heikkinen, P J Vauhkonen and J P Kaipio, A MATLAB package for the EIDORS project to reconstruct two-dimensional EIT images, Physiol. Meas. 22 No 1 (February
2001) 107-111
37. Andrea Borsic, Chris McLeod, William Lionheart, Nacer Kerrouche, Realistic 2D human thorax modelling for EIT, Physiol. Meas. 22 No 1 (February 2001) 77-83
DOI: 10.1088/0967-3334/22/1/310
38. William R B Lionheart, Jari Kaipio and Christopher N McLeod , Generalized optimal current patterns and electrical safety in EIT, Physiol. Meas. 22 No 1 (February 2001) 85-90
ArXiV preprint: physics/0003094
Papers in the Proceedings of the 2nd World Congress on Industrial Process Tomography, Hannover, Germany Wednesday 29th to Friday 31st August 2001
39. Reconstruction Algorithms for Permittivity and Conductivity Imaging (key note) W R B Lionheart, MIMS e-print: 2008.43 Page 4 -11
40. Custom Silicon for Finite Element Modelling M Komarudin, T A York, W R B Lionheart Page 272
41. Considerations in Electrical Impedance Imaging N Polydorides, W R B Lionheart, H McCann Page 387
42. Total Variation Regularisation in EIT Reconstruction A Borsic, C N McLeod, W R B Lionheart Page 433
43. Polydorides, N. Lionheart, W.R.B. McCann, H, (2002) Krylov subspace iterative techniques: on the detection of brain activity with electrical impedance tomography. IEEE Trans Med Imaging, 21, 596
DOI: 10.1109/TMI.2002.800607 PubMed MIMS e-print: 2006.240
44. Borsic A., Lionheart W.R.B., McLeod, C.N, (2002) Generation of anisotropic-smoothness regularization filters for EIT, IEEE Trans Med Imaging, 21, 579 -587
DOI: 10.1109/TMI.2002.800611 MIMS e-print: 2006.243
45. Polydorides N, Lionheart WRB, A Matlab toolkit for three-dimensional electrical impedance tomography: a contribution to the Electrical Impedance and Diffuse Optical Reconstruction Software
project, Meas. Sci. Technol. 13 (December 2002) 1871-1883
DOI: 10.1088/0957-0233/13/12/310
46. Y Z Ider, S Onart and W R B Lionheart, Uniqueness and reconstruction in magnetic resonance electrical impedance tomography (MR EIT), Physiol. Meas. 24 No 2 (May 2003) 591-604.
DOI: 10.1088/0967-3334/24/2/368 PubMed MIMS e-print: 2006.242
Papers in the Proceedings of the 3rd World Congress on Industrial Process Tomography, Banff, Canada, 2nd-5th Sept 2003
47. Prototype Hardware for Finite Element Modelling M Komarudin, W R B Lionheart, T A York, p20-26,
48. Non Iterative Inversion Method for Electrical Resistance, Capacitance and Inductance Tomography for Two Phase Materials A Tamburrino, G Rubinacci, M Soleimani, W R B Lionheart, p233-238
49. Sensitivity Analysis of 3D Magnetic Induction Tomography (MIT) W R B Lionheart, M Soleimani, A J Peyton, p239-244
50. Image Reconstruction in 3D Magnetic Induction Tomography using a FEM Forward Model M Soleimani, W R B Lionheart, A J Peyton, X Ma, p252-255
51. Forward Problem in 3D Magnetic Induction Tomography (MIT) M Soleimani, W R B Lionheart, C H Riedel, O Dossel, p275-280
52. Adjoint Formulations in Impedance Imaging N Polydorides, W R B Lionheart, p689-694
53. Process Compliant Electrical Impedance Instrumentation for Wide Scale Exploitation on Industrial Vessels B D Grieve, J L Davidson, R Mann, W R B Lionheart, T A York, p806-812
54. Higson SR, Drake P, Stamp DW, Peyton A, Binns R, Lionheart W, Lyons A Development of a sensor for visualization of steel flow in the continuous casting mould, Rev. Met. Paris, No 6 (June 2003),
pp. 629-632. Abstract
DOI: 10.1051/metal:2003125
55. Lionheart WRB, Geometric methods for anisotropic inverse boundary value problems, in New Analytic and Geometric Methods in Inverse Problems, Lectures given at the EMS Summer School and Conference
held in Edinburgh, Scotland 2000, Edited by Kenrick Bingham, Yaroslav V. Kurylev, and Erkki Somersalo, Springer 2003, 237--252.
MIMS e-print: 2006.411 Zbl: 1067.35149 MathSciNet: 2005a:35280 Amazon.com ISBN: 3540406824 Google books
R Bayford, WRB Lionheart, Editors: SPECIAL ISSUE: Biomedical Applications of Electrical Impedance Tomography - EIT 2003, Physiological Measurement, Feb 2004.
56. WRB Lionheart, Review: Developments in EIT reconstruction algorithms: pitfalls, challenges and recent developments, Physiol. Meas. 25 (February 2004) 125-142 Abstract Cited by...
DOI: 10.1088/0967-3334/25/1/021 ArXiV preprint: physics/0310151 PubMed
Papers in the proceedings of the International Conference on Electrical Bioimpedance and Electrical Impedance Tomography, 20-24 June, 2004, Gdansk, Poland. Editors Antoni Nowakowski et al
57. Rank Analysis of the Anisotropic Inverse Conductivity Problem J.P.J. Abascal, W.R.B. Lionheart 511-514
MIMS e-print: 1080
58. Bayes-MCMC Reconstruction from 3D EIT Data Using a Combined Linear and Non-Linear Forward Problem Solution Strategy M. Soleimani, R.G. Aykroyd, R.M. West, S. Meng, W.R.B. Lionheart, N.
Polydorides pp. 479-482
59. Simultaneous reconstruction of the boundary shape and conductivity in 3D electrical impedance tomography M. Soleimani, Juan Felipe P. J. Abascal, WRB Lionheart 475-478
60. Feasibility study of 3D permeability reconstruction using magnetostatic permeability tomography M. Soleimani, W.R.B. Lionheart 539-542
61. Improvement of the electrical capacitance tomography imaging using total variation regularisation M. Soleimani, W.R.B. Lionheart 543-546
62. Reconstruction of shape of inclusions in electrical resistance capacitance tomography using level set method M. Soleimani, W.R.B. Lionheart, O. Dorn 547-550
63. Image reconstruction in magnetic induction tomography using a regularized Gauss Newton method M. Soleimani, W.R.B. Lionheart 551-554
64. Hammer, H, Lionheart, B, Application of Sharafutdinov's ray transform in integrated photoelasticity J Elasticity 75 (3): 229-246 Jun 2004
ArXiV preprint: physics/0309090 Zbl: 1084.74019 MathSciNet: 2006c:53082 DOI: 10.1007/s10659-004-7191-1
65. M Soleimani, WRB Lionheart, Magnetostatic permeability tomography in material inspection, In Proc., 7th Biennial ASME Conference Engineering Systems Design and Analysis, ESDA 04.
66. M Soleimani, WRB Lionheart, A J Peyton, X Ma , Molten metal flow visualization using mutual inductance tomography , In Proc., 7th Biennial ASME Conference Engineering Systems Design and Analysis,
ESDA 04.
67. Steele; Philip, Cooper; Jerome, Lionheart; William, Through-log density detector, US Patent No. 6,784,672 August 31, 2004 (Filed June 17, 2002).
68. Rachel A Tomlinson, Hanno Hammer, and William R B Lionheart, The development of a novel method for tomographic photoelasticity, ICEM12- 12th International Conference on Experimental Mechanics 29
August - 2 September, 2004 Politecnico di Bari, Italy, ICTP E-print
69. Hammer, H, Lionheart, WRB, Reconstruction of spatially inhomogeneous dielectric tensors through optical tomography J Opt Soc America A 22 (2): 250-255 FEB 2005
Abstract DOI: 10.1364/JOSAA.22.000250 MathSciNet: 2005m:44003
70. W Lionheart, N. Polydorides and A Borsic, The reconstruction problem, Part 1 of Electrical Impedance Tomography: Methods, History and Applications, (ed) D S Holder, Institute of Physics, p3-64,
2004. ISBN: 0750309520 .
Book review
Amazon.com MIMS e-print: 2006.421 Google Books
71. MS Joshi and WRB Lionheart, An inverse boundary value problem for harmonic differential forms, Asymptotic Analysis 41(2), 2005, p93-106.
Comments and errata
Zbl: 1068.35185 MathSciNet: 2006a:35313 MIMS e-print: 2006.388 ArXiV preprint: math.AP/9911212
72. Manuchehr Soleimani and William R. B. Lionheart, Image Reconstruction in Three-Dimensional Magnetostatic Permeability Tomography, 1274-1297, IEEE Trans. Magnetics, Vol 41, No. 4, 2005
DOI: 10.1109/TMAG.2005.845158 reprint
73. M.Soleimani and WRB Lionheart, Nonlinear image reconstruction for electrical capacitance tomography using experimental data, Meas Sci Tech vol 16 pp1987-1996, 2005.
DOI: 10.1088/0957-0233/16/10/014
74. C van Berkel and WRB Lionheart, Electrostatic object reconstruction in a half space, Proceedings of the 5th International Conference on Inverse Problems in Engineering: Theory and Practice,
Cambridge, UK, 11-15th July 2005. Chapter V01, Editor D Lesnic, Leeds University Press. preprint
75. M. Soleimani, W.R.B. Lionheart, A.J Peyton, X. Ma , Inverse finite element method applied to magnetic inductance tomography experimental data, Proc. 4th World Congress on Industrial Process
Tomography, Aizu, Japan, pages 1054-1059, 2005.
76. B D Grieve, W R B Lionheart, T A York, The Industrial Application of EIDORS-3D within Fine Chemicals Processing and Formulations Technology, Proc. 4th World Congress on Industrial Process
Tomography, Aizu, Japan, pages 243-248, 2005.
77. J L Davidson, B D Grieve, L S Ruffino, W R B Lionheart, K Primrose, R Mann, T A York, Electrical Impedance Tomography Applied to Large-Scale Production Filtration using a Planar Sensor
Architecture, Proc. 4th World Congress on Industrial Process Tomography, Aizu, Japan, pages 279-284, 2005
78. D R Stephenson, J L Davidson, W R B Lionheart, B D Grieve, T A York, Comparison of 3D Image Reconstruction Techniques using Real Electrical Impedance Measurement Data, Proc. 4th World Congress on
Industrial Process Tomography, Aizu, Japan, pages 643-650, 2005
MIMS e-print: 2009.11
79. M. Soleimani, W.R.B. Lionheart, O. Dorn, Level set reconstruction of conductivity and permittivity from boundary electrical measurements using experimental data, Inverse Problems in Science and
Engineering Vol 14 No 2, pages 193-210 , March 2006.
DOI: 10.1080/17415970500264152 MIMS e-print: 2006.363
80. H Woo, S Kim, J K Seo, W R B Lionheart and E J Woo, A direct tracking method for a grounded conductor inside a pipeline from capacitance measurements, Inverse Problems, 22, 481-494, 2006
DOI: 10.1088/0266-5611/22/2/006 Zbl: 05021727 MathSciNet: 2006k:78017 MIMS e-print: 2007.150
81. A Adler and W R B Lionheart, Uses and abuses of EIDORS: An extensible software base for EIT, Physiol Meas 27, S25-S42, 2006.
DOI: 10.1088/0967-3334/27/5/S03 PubMed MIMS e-print: 2008.25
82. M. Soleimani, O Dorn, WRB Lionheart, A Narrowband Level Set Method Applied to EIT in Brain for Cryosurgery Monitoring, IEEE Trans. Biomed. Eng., vol 53, no 11, pp 2257-2264, 2006.
DOI: 10.1109/TBME.2006.877112
83. Manuchehr Soleimani, William R. B. Lionheart, Antony J. Peyton, Xiandong Ma, and Stuart R. Higson, A Three-Dimensional Inverse Finite-Element Method Applied to Experimental Eddy-Current Imaging
Data, IEEE Trans. Mag, Vol 42, No. 5, pp. 1560-1567, May 2006.
DOI: 10.1109/TMAG.2006.871255
84. Robert M West, Manuchehr Soleimani, Robert G Aykroyd, and William RB Lionheart Speed Improvement of MCMC Image Reconstruction in Tomography by Partial Linearization International Journal of
Tomography & Statistics, Vol. 4, No. S06, 13-23, 2006.
Zbl: pre05244620
85. P. H. Steele , J. E. Cooper, B. K. Mitchell, C. Boden and W. R. B. Lionheart, EIT Detection of Juvenile and Knot Wood in Southern Pine Trees, IAI01, 5th World Congress on Industrial Process
Tomography, Bergen, Norway, 3rd-6th September 2007.
86. RG Aykroyd, M Soleimani and WRB Lionheart, Conditional Bayes reconstruction for ERT data using resistance monotonicity information, Meas. Sci. Technol. 17 2405-2413, 2006.
DOI: 10.1088/0957-0233/17/9/006
87. Manuchehr Soleimani, William R. B. Lionheart, Absolute Conductivity Reconstruction in Magnetic Induction Tomography Using a Nonlinear Method, IEEE Trans Medical Imaging, 2006,25, 1521-1530
DOI: 10.1109/TMI.2006.884196
88. Ma X, Peyton AJ, Soleimani M, Lionheart, WRB, 2006 IEEE Instrumentation and Measurement technology conference proceedings, Vols 1-5, 299-303, 2006
89. Higson, S.R. Drake, P. Lyons, A. Peyton, A. Lionheart, B. Electromagnetic visualisation of steel flow in continuous casting nozzles Ironmaking & Steelmaking, Volume 33, Number 5, October 2006 ,
pp. 357-361(5)
DOI: 10.1179/174328106X149897
90. Cees van Berkel and William R B Lionheart, An iterative method for electrostatic object reconstruction in a half space, Meas. Sci. Technol. 18 41-48, 2007
DOI: 10.1088/0957-0233/18/1/005
91. Cees van Berkel and William R B Lionheart "Reconstruction of a grounded object in an electrostatic halfspace with an indicator function", Inverse Problems in Science and Engineering, Volume 15,
Issue 6 January 2007 , pages 585 - 600
DOI: 10.1080/17415970600903873
MIMS e-print: 2007.149 , MathSciNet: 2008h:35362
92. WRB Lionheart and CJP Newton, Analysis of the inverse problem for determining nematic liquid crystal director profiles from optical measurements using singular value decomposition. New J. Phys. 9
(2007) 63.
DOI: 10.1088/1367-2630/9/3/063
MIMS e-print: 2006.420
93. A Adler, T Dai, WRB Lionheart, Temporal image reconstruction in electrical impedance tomography, Physiol. Meas. 28 (2007) S1-S11.
DOI: 10.1088/0967-3334/28/7/S01
94. J F Perez-Juste Abascal , Simon R Arridge, W R B Lionheart, R H Bayford and D S Holder, Validation of a finite element solution for electrical impedance tomography in an anisotropic medium,
Physiol. Meas. 28 S129-S140(2007) .
DOI: 10.1088/0967-3334/28/7/S10
95. Storteig; Eskild, Lionheart; William R.B. Current prediction in seismic surveys, United States Patent Application 20070127312, June 7, 2007
96. M. Soleimani, W.R.B Lionheart, A. J. Peyton, Image reconstruction for high contrast conductivity imaging in mutual induction tomography for industrial applications, IEEE Trans. Ins. & Meas., vol.
56, no. 5, pages 2024-2032, 2007.
DOI: 10.1109/TIM.2007.895598
97. Cooper, J., P. Steele, B. Mitchell, C. Boden and W. Lionheart. 2007. Detecting Juvenile and Knot Wood in Southern Pine Logs with Electrical Impedance Tomography. Pages 16-26 in proceedings of the
12th International Conference on Scanning Technology and Process Optimization in the Wood Industry. Wood Machining Institute. Atlanta, GA.
98. Jia Liu, W. T. Hewitt, W.R.B. Lionheart, J Montaldi and M. Turner, A Lemon is not a Monstar: visualization of singularities of symmetric second rank tensor fields in the plane. Eurographics UK
Theory and Practice of Computer Graphics (2008) Ik Soo Lim, Wen Tang (Editors), p99-106
MIMS e-print: 2008.53
99. Andy Adler, John Arnold, Richard Bayford, Andrea Borsic, Brian Brown, Paul Dixon, Theo J.C. Faes, Inez Frerichs, Herve Gagnon, Yvo Garber, Bartlomiej Grychtol, Gunter Hahn, William R B Lionheart,
Anjum Malik, Janet Stocks, Andrew Tizzard, Norbert Weiler, Gerhard Wolf , GREIT: towards a consensus EIT algorithm for lung images, Proceedings of the EIT conference 2008, Dartmouth College, NH,
p48-51, 2008.
MIMS e-print: 2008.62
100. Jerome E Cooper, Philip H Steele, Brian K Mitchell, Craig Boden and William R B Lionheart, Detecting Juvenile Wood in Southern Pine Logs with Brush Electrodes, Proceedings of the EIT conference
2008, Dartmouth College, NH, p142-145, 2008
MIMS e-print: 2008.63
101. Alistair Boyle , William R.B. Lionheart , Camille Gomez-Laberge , Andy Adler, Evaluating Deformation Corrections in Electrical Impedance Tomography, Proceedings of the EIT conference 2008,
Dartmouth College, NH, p175-178, 2008
MIMS e-print: 2008.65
102. Andy Adler, Andrea Borsic, Nick Polydorides, William R B Lionheart, Simple FEMs aren't as good as we thought: experiences developing EIDORS v3.3, Proceedings of the EIT conference 2008,
Dartmouth College, NH, p179-182, 2008
MIMS e-print: 2008.64
103. Andy Adler, Richard Youmaran, William R B Lionheart, A measure of the information content of EIT data, Physiol Meas, 29:S101-S109, 2008. (Featured article)
DOI: 10.1088/0967-3334/29/6/S09
MIMS e-print: 2008.52
104. Nick Polydorides, Eskild Storteig, William Lionheart, Forward and Inverse Problems in Towed Cable Hydrodynamics, Ocean Engineering, Volume 35, Issues 14-15, October 2008, Pages 1429-1438
DOI: 10.1016/j.oceaneng.2008.07.001
MIMS e-print: 2008.73
105. WRB Lionheart, X Ma and A Peyton, Electromagnetic detection system for detecting presence of e.g. gun, has processing system processing measurement to generate data set characterizing detection
area, and generators and detectors mounted on support unit, Patent numbers, WO2008087385-A2; EP2108132-A2; GB2459790-A; CA2675587-A1; WO2008087385-A3; WO2008087385-A8; CN101688925-A;
US2010207624-A, 2008.
106. N Polydorides, E Storteig, W Lionheart, Ocean current prediction in towed cable hydrodynamics under dynamic steering, Inverse Problems in Science and Engineering, Volume 17, Issue 5, 2009, Pages
627 – 645
DOI: 10.1080/17415970802390010
MIMS e-print: 2008.74
107. Romina Gaburro and William R.B.Lionheart, Recovering Riemannian metrics in monotone families from boundary data, Inverse Problems, Vol 25, Issue 4, 045004 (14pp), 2009
DOI: 10.1088/0266-5611/25/4/045004
MIMS e-print: 2008.71
108. Andy Adler , John H. Arnold , Richard Bayford , Andrea Borsic , Brian Brown , Paul Dixon , Theo J.C. Faes , Inez Frerichs , Herve Gagnon , Yvo Gorber , Bartlomiej Grychtol, Gunter Hahn, William
R B Lionheart, Anjum Malik, Robert P. Patterson, Janet Stocks, Andrew Tizzard, Norbert Weiler, Gerhard K. Wolf, GREIT: a unified approach to 2D linear EIT reconstruction of lung images, Physiol
Meas, 30, S35-S55, 2009
DOI: 10.1088/0967-3334/30/6/S03
109. Vladimir Sharafutdinov and William Lionheart, Reconstruction algorithm for the polarization tomography problem with incomplete data, in Imaging Microstructures: Mathematical and Computational
Challenges, Edited by Habib Ammari and Hyeonbae Kang, Contemporary Mathematics, Volume: 494, p137-160, 2009
MIMS e-print: 2008.75
110. Andrea Borsic, Brad M. Graham, Andy Adler, William R. B. Lionheart Total Variation Regularization in Electrical Impedance Tomography, IEEE TMI, 29, 1, 44 - 54 , 2010.
DOI: 10.1109/TMI.2009.2022540
Nearly final draft
MIMS e-print: 2007.92 (Very old version)
111. Nicola Wadeson, Edward Morton and William Lionheart Scatter in an uncollimated x-ray CT machine based on a Geant4 Monte Carlo simulation. In: SPIE Medical Imaging 2010: Physics of Medical
Imaging, 15-18 February 2010, San Diego, USA.
MIMS e-print: 2010.66
112. William R. B. Lionheart and Kyriakos Paridis, Finite Elements and Anisotropic EIT reconstruction. In: 14th International Conference on Electrical Bioimpedance and the 11th Conference on
Biomedical Applications of EIT, 4-8 Apr 2010, Gainesville, FL, USA. Journal of Physics: Conference Series, vol. 224, no. 1, p. 012022, 2010.
MIMS e-print: 2010.35
DOI: 10.1088/1742-6596/224/1/012022
113. Kyriakos Paridis and William R. B. Lionheart, Shape corrections for 3D EIT. In: 14th International Conference on Electrical Bioimpedance and the 11th Conference on Biomedical Applications of
EIT, 4-8 Apr 2010, Gainesville, FL, USA. Journal of Physics: Conference Series, vol. 224, no. 1, p. 012049, 2010.
MIMS e-print: 2010.34
DOI: 10.1088/1742-6596/224/1/012049
114. Andy Adler and William R B Lionheart, Correcting for variability in mesh geometry in finite element models, Journal of Physics: Conference Series, vol. 224, no. 1, p. 012021, 2010
DOI: 10.1088/1742-6596/224/1/012021
115. F. A. Kotasidis, G.I. Angelis J. C. Matthews, W. R. Lionheart, A. J. Reader, Space variant PSF parameterization in image space using printed point source arrays on the HiRez PET/CT; Imaging
Systems and Techniques Conference Record, 2010 pp. 129 - 134
DOI: 10.1109/IST.2010.5548459
116. Andy Adler, Romina Gaburro, William Lionheart, EIT, chapter in Handbook of Mathematical Methods in Imaging, O Scherzer, Springer-Verlag, 2011.
117. M Betcke, WRB Lionheart EJ Morton, X-ray tomography system i.e. real-time tomography system, for generating three-dimensional image of object, has controller generating three-dimensional image
from reconstructed images on surface Patent Number: WO2011008787-A1, 2011.
118. Juan-Felipe P. J. Abascal , William R. B. Lionheart , Simon R. Arridge, Martin Schweiger , David Atkinson , and David S. Holder Electrical impedance tomography in anisotropic media with known
eigenvectors, Inverse Problems 27 065004 DOI: 10.1088/0266-5611/27/6/065004
119. Andy Adler, William Lionheart, Minimizing EIT image artefacts from mesh variability in Finite Element Models, Physiological Measurement, 32 (2011) 823-834. DOI: 10.1088/0967-3334/32/7/S07
120. G I Angelis, A J Reader, F A Kotasidis, W R Lionheart and J C Matthews The performance of monotonic and new non-monotonic gradient ascent reconstruction algorithms for high-resolution
neuroreceptor PET imaging 2011 Phys. Med. Biol. 56 3895 DOI: 10.1088/0031-9155/56/13/010
121. WRB Lionheart, K Paridis, A Adler, Resistor networks and transfer resistance matrices, Proceedings 13th International Conference on Biomedical Applications of Electrical Impedance Tomography,
May 23-25, 2012, Tianjin, China.
122. J. L. Davidson, R. A. Little P. Wright, M. Crabb, J. Naish, A. Morgan, G. J. M. Parker, W. R. B. Lionheart, R. Kikinis and H. McCann , MRI-informed functional EIT lung imaging, Proceedings 13th
International Conference on Biomedical Applications of Electrical Impedance Tomography, May 23-25, 2012, Tianjin, China.
123. M. G. Crabb, J. L. Davidson, R. Little, P. Wright, J. H . Naish, G. J. M. Parker, H. McCann and W. R. B. Lionheart, Accounting for electrode movement in MRI-informed functional EIT lung imaging,
100 Years of Electrical Imaging, p83-86, Editors Herve Chauris, Andy Adler, William Lionheart, Presses des Mines, Collection Sciences de la Terre et de l'Environnement, Paris 2012,
ISBN: 2911256875
124. T.A. Khairuddin and W. R. B. Lionheart, Do electro-sensing fish use the first order polarization tensor for object characterization?, 100 Years of Electrical Imaging, p149 Editors Herve Chauris,
Andy Adler, William Lionheart, Presses des Mines, Collection Sciences de la Terre et de l'Environnement, Paris 2012,
ISBN: 2911256875
125. Bartlomiej Grychtol, William R. B. Lionheart, Marc Bodenstein, Gerhard K. Wolf, and Andy Adler, Impact of Model Shape Mismatch on Reconstruction Quality in Electrical Impedance Tomography, IEEE
Trans Medical Imaging 31 , 9, 2012 , pp 1754-1760 DOI: 10.1109/TMI.2012.2200904
126. Alistair Boyle, Andy Adler and William R. B. Lionheart, Shape Deformation in Two-Dimensional Electrical Impedance Tomography, IEEE Trans Medical Imaging. 2012 Jun 12.
DOI: 10.1109/TMI.2012.2204438
127. Georgios I Angelis, Fotis A Kotasidis, Julian C Matthews, Pawel J Markiewicz, William R Lionheart, Andrew J Reader, Full field spatially-variant image-based resolution modelling reconstruction
for the HRRT, Nuclear Science Symposium and Medical Imaging Conference (NSS/MIC), 2012 IEEE, pp3434-3438, article
128. William M Thompson, William RB Lionheart and Edward J Morton, Real-Time imaging with a high speed X-Ray CT system. In: 6th International Symposium on Process Tomography, 26 - 28 March 2012, Cape
Town, South Africa.
MIMS e-print: 2012.36
129. WM Thompson, WRB Lionheart, EJ Morton, High-speed dynamic imaging with a real time tomography system, Proceedings of the Second International Conference on Image Formation in X-Ray Computed
Tomography, June 24-27, 2012, Salt Lake City, Utah, USA , p99-102.
130. GI Angelis, AJ Reader, PJ Markiewicz, FA Kotasidis, WR Lionheart, JC Matthews, Acceleration of image-based resolution modelling reconstruction using an expectation maximization nested algorithm,
2013 Phys. Med. Biol. 58 5061
DOI: 10.1088/0031-9155/58/15/5061
131. William M. Thompson, William R. B. Lionheart and Dan Oberg, Reduction of Periodic Artefacts for a Switched-Source X-ray CT Machine by Optimising the Source Firing Pattern, The 12th International
Meeting on Fully Three-Dimensional Image Reconstruction in Radiology and Nuclear Medicine, 2013, p345-348, Fully 3D proceedings
132. M G Crabb, J L Davidson, R Little, P Wright, J H Naish, G J M Parker, R Kikinis, H McCann and W R B Lionheart, Mutual information as a measure of reconstruction quality in 3D dynamic lung EIT,
2013 J. Phys.: Conf. Ser. 434 012082
DOI: 10.1088/1742-6596/434/1/012082
133. Marta M Betcke and William R B Lionheart, Multi-sheet surface rebinning methods for reconstruction from asymmetrically truncated cone beam projections: I. Approximation and optimality, 2013
Inverse Problems 29 115003
DOI: 10.1088/0266-5611/29/11/115003
134. Marta M Betcke, and William R B Lionheart, Multi-sheet surface rebinning methods for reconstruction from asymmetrically truncated cone beam projections: II. Axial deconvolution, 2013 Inverse
Problems 29 115004
DOI: 10.1088/0266-5611/29/11/115004
135. Paul D. Ledger and William R.B. Lionheart, The perturbation of electromagnetic fields at distances large compared to the object's size, submitted to IMA J Applied Maths.
136. Paul D. Ledger and William R.B. Lionheart, Characterising the shape and material properties of hidden targets from magnetic induction data, submitted to IEEE Transactions on Magnetics.
138. Romina Gaburro and William Lionheart, Determining the absorption in anisotropic media, MIMS preprint and submitted to Inverse Problems and Imaging.
MIMS e-print: 2010.89
Some coauthors
Here are some of my coauthors and collaborators and their last known locations. Especially those who have moved and might be harder to find. Please email me any updates.
• Juan Felipe Perez-Juste Abascal, finished his PhD in late 2006 in the UCL EIT group. He still has too many names making him especially hard to find. He is currently working in Paris at "Supelec"
• Andy Adler, Dept of Systems and Computer Engineering Carleton University, Ottawa Canada.
• Richard Bayford, School of Health and Social Sciences University of Mdx. (sorry for the abbreviation!)
• Richard Booth rfbooth.com
• Andrea Borsic, Thayer School of Engineering Dartmouth College.
• Romina Gaburro, Department of Mathematics and Statistics University of Limerick, Ireland.
• Karim Jerbi, Cognitive Neuroscience and Brain Imaging Lab, CNRS UPR640-LENA, Paris. (see list of members)
• Mark Joshi, Dept of Economics University of Melbourne.
• Jari Kaipio is now in the Department of Mathematics, University of Auckland, New Zealand.
• Yaroslav (Slava) Kurylev is at the Department of Mathematics of University College London.
• Jia Liu is at the Medical and Biological Engineering Research Group, University of Hull,
• Chris McLeod, Institute of Biomedical Engineering, Imperial College, London
• Dale Murphy is currently Deputy Vice-Chancellor at Swinburne University of Technology in Australia.
• Kevin Paulson, Department of Engineering University of Hull
• Nicholas Polydorides (aka Nick Polydorides), Assistant Professor at the Cyprus Institute .
• Manuchehr Soleimani, Elec Eng, University of Bath.
• Andrew Seagar is now at School of Engineering, Griffith University , Australia
• Erkki Somersalo is now at Mathematics Department of Case Western Reserve University in Cleveland, Ohio, USA
• Vladimir Sharafutdinov is at the Sobolev Institute of Mathematics, Novosibirsk State University.
• David Stephenson is with Hull Research and Technology Centre, BP Chemicals.
• Eskild Storteig as of (2007) is at Validus Engineering
• Rachel Tomlinson Dept of Mechanical Engineering University of Sheffield.
• Marko Vauhkonen Department of Physics University of Kuopio.
Popular and magazine articles
• W.R.B. Lionheart, Inverse problems in industry, Mathematics Today, vol 44 no 1, Feb 2008, p24-26
MIMS e-print: 2008.27
Some theses
Some theses by me and my students
A particularly important ramification of the property of linearity is expressed in the notion of equivalent circuits. To wit: if we are considering the response of a network at any given terminal
pair, that is, a pair of nodes that have been brought out to the outside world, it follows from the properties of linearity that, if the network is linear, the output at a single terminal pair (either
voltage or current) is the sum of two components: 1. The response that would exist if the excitation at the terminal pair were zero and 2. The response forced at the terminal pair by the exciting
voltage or current.
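The two components described here are what circuit texts call the zero-input (natural) and zero-state (forced) responses, and linearity says they simply add. Below is a minimal numeric sketch on a first-order RC circuit; every component value and function name is invented for the illustration, not taken from the question:

```python
import math

# Illustrative first-order RC circuit; all component values are made up
# for the demonstration (they are not from the question itself).
R, C = 1e3, 1e-6              # 1 kOhm, 1 uF  ->  tau = RC = 1 ms
tau = R * C
V0 = 5.0                      # initial capacitor voltage (stored energy)
Vs = 12.0                     # step source applied at t = 0

def v_full(t):
    """Capacitor voltage with BOTH the initial charge and the source."""
    return Vs + (V0 - Vs) * math.exp(-t / tau)

def v_natural(t):
    """Component 1: source set to zero; only stored energy acts."""
    return V0 * math.exp(-t / tau)

def v_forced(t):
    """Component 2: initial charge set to zero; only the source acts."""
    return Vs * (1.0 - math.exp(-t / tau))

# Linearity: the full response is exactly the sum of the two components.
for t in (0.0, 0.5e-3, 1e-3, 5e-3):
    assert abs(v_full(t) - (v_natural(t) + v_forced(t))) < 1e-12
```

Each sampled instant satisfies the decomposition exactly, which is the content of the superposition claim for this linear circuit.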
please explain these two points...@LonelyandForgotten
@stgreen could you help me..i cant get these points..
yea it makes sense...its about natural and step response of circuits
[hand-drawn circuit sketch omitted] i found this in the explanation for Thevenin's theorem... V = Voc + i*Rth ... which one is the natural one and which is the step one?
when we talk about natural and step response, we refer mostly to capacitors/inductors..natural response means the behaviour of a network when there is no source (e.g. capacitors connected in a
circuit having no voltage/current source; the initial energy stored in the capacitors drives the circuit). The circuit is source-free, and such behaviour is said to be natural
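As a concrete (hypothetical) instance of the source-free case described above, take a charged capacitor left to discharge through a resistor; the component values below are arbitrary:

```python
import math

# Source-free ("natural") RC discharge; values are invented for the example.
R, C = 1e3, 1e-6             # tau = RC = 1 ms
tau = R * C
V0 = 5.0                     # voltage the capacitor starts with

def v(t):
    """Capacitor voltage driven only by its own stored energy."""
    return V0 * math.exp(-t / tau)

assert abs(v(tau) / V0 - math.exp(-1)) < 1e-12   # ~37% left after one tau
assert v(5 * tau) < 0.01 * V0                    # essentially discharged
```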
step response is the response when a source tries to charge the inductors/capacitors, such that after a long time the connected capacitors/inductors will be charged. if the capacitors/inductors
have a higher initial voltage/current than what the source has to offer, they can discharge until they reach a steady state at the lower voltage/current.
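The discharge-toward-the-source behaviour mentioned in that reply can be checked numerically. This sketch assumes a simple series RC circuit with invented values where the capacitor starts above the source voltage:

```python
import math

# Series RC driven by a step source Vs; the capacitor deliberately starts
# ABOVE the source voltage, so it discharges down toward Vs.  All values
# are invented for the illustration.
R, C = 10e3, 100e-9          # tau = RC = 1 ms
tau = R * C
V0, Vs = 9.0, 5.0            # initial capacitor voltage > source voltage

def v(t):
    """Standard first-order step response with a nonzero initial value."""
    return Vs + (V0 - Vs) * math.exp(-t / tau)

samples = [v(k * tau) for k in range(6)]
# The voltage falls monotonically from V0 toward the steady state Vs.
assert all(a > b for a, b in zip(samples, samples[1:]))
assert abs(v(10 * tau) - Vs) < 1e-3
```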
i don't know if it makes sense to you...depends on which grade you are studying
im in first year studying about thevenins theorem...
ok if you didn't get it....i'll try to make it simpler.what say?
Thevenin's theorem is a little bit of a different thing
ok please explain..im struggling with it..
ok you can look at it this way....suppose we have a network, and we are interested in the output at two of its terminals (forget what's inside the circuit...it may have
resistors, capacitors, inductors and so on). The point is....the output we are interested in is the sum of two terms...one term comes when we have no source voltage/current applied to the circuit
and the circuit runs on the energy initially stored in it (like we connected charged capacitors in the circuit and measured the output)....the second term comes when we excite the
network with a voltage/current source..and measure the output again...sum both terms and we get the general output response...
don't confuse it with Thevenin's theorem.... Thevenin's theorem is just a simplification of a circuit..it says we can replace a whole linear circuit by just a resistor and a voltage/current source
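As a toy illustration of that simplification (with source and resistor values invented for the example), we can replace a loaded voltage divider by its Thevenin equivalent and check that both predict the same terminal voltage for several loads:

```python
# Toy check of Thevenin's theorem on a loaded voltage divider; the source
# and resistor values are invented for the example.
Vs, R1, R2 = 10.0, 2e3, 3e3  # 10 V source, R1 in series, R2 to ground

Voc = Vs * R2 / (R1 + R2)    # open-circuit voltage at the output node
Rth = R1 * R2 / (R1 + R2)    # source shorted -> R1 in parallel with R2

# Attach several loads RL and compare the Thevenin prediction with a
# full nodal analysis of the original divider.
for RL in (1e3, 4.7e3, 1e5):
    v_thevenin = Voc * RL / (Rth + RL)
    g = 1.0 / R1 + 1.0 / R2 + 1.0 / RL      # total conductance at the node
    v_full = (Vs / R1) / g                  # node voltage of the real circuit
    assert abs(v_thevenin - v_full) < 1e-9
```

The sign convention for the terminal relation (V = Voc + i*Rth vs V = Voc - i*Rth) depends on which direction the terminal current is defined; here the load simply draws current from the equivalent source.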
ok tnx..
it may be helpful to see the math related to a non-electrical system to understand the dynamics: http://ocw.mit.edu/courses/mathematics/18-03sc-differential-equations-fall-2011/unit-ii-second-order-constant-coefficient-linear-equations/exponential-response/ . The 16 min video "Inhomogeneous second order DEs definitions and models" explains the math using both the spring-mass-dashpot
physical system model and a circuit model, focusing on the key point of driven vs non-driven. Basically, the equation itself is set up in such a way that the non-driven case will
yield zero (the natural case) and the driven case will yield some non-zero result (the step case). I am still learning myself, so I am sure that my explanation is inadequate (possibly wrong), but the
professor in the link explains it very well.
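The driven/non-driven split mentioned in that reply can be sketched for a spring-mass-dashpot equation m x'' + c x' + k x = F0, with coefficients chosen arbitrarily here: the homogeneous part is the natural response, the particular part is the driven one, and their sum satisfies the full equation:

```python
import math

# Spring-mass-dashpot equation m x'' + c x' + k x = F0 with arbitrary,
# made-up coefficients (characteristic roots -1 and -2).
m, c, k, F0 = 1.0, 3.0, 2.0, 4.0
x_particular = F0 / k                 # constant driven solution

def x_homogeneous(t, A=1.5, B=-0.5):
    """A natural (undriven) solution; any constants A, B would do."""
    return A * math.exp(-t) + B * math.exp(-2.0 * t)

def x(t):
    """General solution = natural part + driven part."""
    return x_homogeneous(t) + x_particular

def residual(t, h=1e-4):
    """m x'' + c x' + k x - F0, derivatives taken by central differences."""
    x2 = (x(t + h) - 2.0 * x(t) + x(t - h)) / h**2
    x1 = (x(t + h) - x(t - h)) / (2.0 * h)
    return m * x2 + c * x1 + k * x(t) - F0

# The combined solution satisfies the driven equation at every sample.
for t in (0.0, 0.3, 1.0, 3.0):
    assert abs(residual(t)) < 1e-4
```

Setting F0 = 0 would leave only the homogeneous (natural) part, mirroring the non-driven case in the lecture.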
I should have said yield zero for the "second" term above.
zu der Diskussion zur Grundlegung der Mathematik am Sonntag, dem 7
- American Mathematical Monthly , 2001
1. INTRODUCTION. For geometers, Hilbert’s influential work on the foundations of geometry is important. For analysts, Hilbert’s theory of integral equations is just as important. But the address
“Mathematische Probleme ” [37] that David Hilbert (1862– 1943) delivered at the second International Congress of Mathematicians (ICM) in Paris has tremendous importance for all mathematicians.
Moreover, a substantial part of
South Richmond Hill Math Tutor
Get ahead in school, get ahead in life! I offer private classes to help students of all ages advance beyond their peers. A highly qualified graduate of NYU's prestigious Steinhardt School of
Education, I'm a New York State-licensed teacher and the founder of Pen Paper Imagination, providing one-on-one and group mentoring in Literacy, Reading, Writing, Creative Writing and Math.
36 Subjects: including algebra 1, geometry, reading, logic
...Having worked with a number of students with learning differences, I have developed strategies to help these alternative learners more effectively access confusing concepts, in turn bolstering
their academic confidence. I always look forward to meeting new students and helping each to meet his/h...
33 Subjects: including prealgebra, algebra 1, English, Spanish
...I am a teacher for the NYC Department of Education. I am certified in special education and general education. I also have my masters degree in elementary education.
12 Subjects: including algebra 2, algebra 1, SAT math, writing
...I pride myself on being able to find the right way to communicate scientific and mathematical concepts to anybody who wants to learn them. Sometimes you work hard and study properly, but you
can't quite understand what is in front of you. Sometimes it's because you're one of hundreds in a colle...
14 Subjects: including calculus, physics, prealgebra, GRE
...I have a master's degree from Upenn in music and work there as a guitar teacher. I also play keyboard and several other instruments and my master's degree required me to take several music
history and world music classes. I also freelance as a musician.
28 Subjects: including calculus, elementary (k-6th), physics, precalculus
Find the integral
November 15th 2010, 01:05 AM #1
Junior Member
Feb 2010
Find the integral
Hi there,
$\displaystyle \int (x^2-1)e^x dx$
I chose to integrate by parts. This is what I did.
$\displaystyle \int x^2 e^x dx-\int e^x dx$
1st integration $\displaystyle u=x^2 ,du=2x dx ,dv=e^x dx, v=e^x$
$\displaystyle (\int x^2 e^x dx = x^2 e^x -\int e^x 2xdx)-e^x +C$
2nd integration $\displaystyle u=2x, du=2dx, dv=e^x dx, v= e^x$
$\displaystyle \int 2x e^x dx=2xe^x -\int e^x 2dx$
My answer $\displaystyle x^2 e^x +2xe^x -2e^x -e^x +C$
$\displaystyle x^2 e^x +2xe^x -3e^x +C$
Is this correct?
Why do you ask? Differentiate and check whether you obtain the original integrand.
And no, it isn't correct: you have a sign error.
I see the sign error now. I needed a second opinion.
What for? To remind you that in indefinite integrals you have the tools to check whether you
reached the correct solution or not?
I always tell my students: any question that can be checked in a reasonable time (the solution to a linear system of equations, the solution of an indefinite integral, etc.) and STILL is incorrect will be heavily (de)graded.
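For what it's worth, the "differentiate and check" advice is easy to automate. A quick numeric sketch in Python (it assumes the corrected antiderivative $x^2 e^x - 2xe^x + e^x$, i.e., the first post's answer with the sign error fixed):

```python
import math

# Candidate antiderivative with the sign error fixed:
# F(x) = x^2 e^x - 2x e^x + e^x = (x^2 - 2x + 1) e^x
F = lambda x: (x * x - 2.0 * x + 1.0) * math.exp(x)
f = lambda x: (x * x - 1.0) * math.exp(x)   # the original integrand

h = 1e-6
for x in (-1.5, 0.0, 0.7, 2.0):
    dF = (F(x + h) - F(x - h)) / (2.0 * h)  # central difference, approximates F'(x)
    assert abs(dF - f(x)) < 1e-4            # F' matches the integrand
```

If the assert fails for some candidate F, that antiderivative is wrong; running the same check with the answer from the first post flags its sign error immediately.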
A Room for Imagination
Let $V := \mathbb{C}^{n}$, or more generally a finite-dimensional $\mathbb{C}$-vector space. Then we define $G(k, V)$ (or $G(k, n)$) as the set of all $k$-dimensional subspaces (which are referred to as "$k$-planes") of $V$. (Of course, we assume $0 \leq k \leq n$.)
Any $k$-plane $P$ may be represented by a $k \times n$ matrix of rank $k$: the rows of the matrix are basis elements of the $k$-plane $P$. Denote by $M_{k}(k, n)$ the set of all $k \times n$ matrices with rank $k$. Then we may identify
$G(k, n) = M_{k}(k, n)/GL(k)$, the orbit space of the left $GL(k)$-action.
Exercise 1.1. Justify the above identification by showing the following statement. Any $k \times n$ matrices $A$ and $B$ represent same $k$-planes if and only if there exists $g \in GL(k)$ such that
$B = gA$.
(Hint: Show that $A, B$ represent the same $k$-plane if and only if there is $k \times k$ matrix $C$ such that $A_{i} = B_{i}C$ for each $i$, where $A_{i}, B_{i}$ denote the rows of $A, B$,
respectively. Let $A^{i}$ and $B^{i}$ denote the $i$th columns of $A$ and $B$ respectively. Take the transpose to get $A^{i} = C^{t}B^{i}$. Thus $A, B$ represent the same $k$-planes if and only
if there are $k \times k$ matrices $g, h$ such that $B^{i} = gA^{i}$ and $A^{i} = hB^{i}$. Show that $gh = I$ so that $g, h \in GL(k)$.)
Exercise 1.2. Show that $\dim G(k, n) = k(n-k)$.
(Hint: By Exercise 1.1, a $k$-plane determines its matrix only up to left multiplication by $GL(k)$. For any rank-$k$ matrix whose left $k \times k$ block is invertible, we may normalize the matrix to the form $[\,I_{k} \mid X\,]$, where $X$ is a $k \times (n - k)$ matrix. Argue that the entries of $X$ determine the dimension of the whole grassmannian.)
Recall that we can projectivize a vector space as follows:
$\mathbb{P}(\mathbb{C}^{n+1}) := \mathbb{P}^{n} = \{\text{lines in }\mathbb{C}^{n+1}\} = G(1, n+1)$.
If $V \subseteq W$ then clearly $\mathbb{P}(V) \subseteq \mathbb{P}(W)$. We have the identification $V \leftrightarrow \mathbb{P}(V)$.
Example. We have
$G(2, 4) = \{V \subseteq \mathbb{C}^{4} : V \simeq \mathbb{C}^{2}\} \simeq \{\mathbb{P}(V) \subseteq \mathbb{P}^{3} : \mathbb{P}(V) \simeq \mathbb{P}^{1}\}$.
There is another way to describe $G(2, 4)$. We review the relevant linear algebra first.
Recall that $\wedge^{k}(V)$ is the set of alternating $k$-multilinear functions $f : V^{k} \rightarrow \mathbb{C}$. Given $f \in \wedge^{k}(V)$ and $g \in \wedge^{l}(V)$, we define
$f \wedge g := A(f \otimes g)/(k!l!) \in \wedge^{k+l}(V)$.
That is, we have
$(f \wedge g)(v_{1}, \cdots, v_{k +l}) = \sum_{\sigma \in S_{k+l}}\mathrm{sgn}(\sigma)\,f(v_{\sigma(1)}, \cdots, v_{\sigma(k)})g(v_{\sigma(k+1)}, \cdots, v_{\sigma(k+l)})/(k!l!)$.
Exercise 1.3. Show that $f \wedge g = (-1)^{kl}(g \wedge f)$.
Exercise 1.4. Let $e_{1}^{*}, \cdots, e_{n}^{*} \in (\mathbb{C}^{n})^{*}$ be the standard dual basis. Given $I = (1 \leq i_{1} < \cdots < i_{k} \leq n)$ and $J = (1 \leq j_{1} < \cdots < j_{k} \leq n)$, write $e_{I}^{*} := e_{i_{1}}^{*} \wedge \cdots \wedge e_{i_{k}}^{*}$ and $e_{J} := (e_{j_{1}}, \cdots, e_{j_{k}})$, and show that $e_{I}^{*}(e_{J}) = \delta_{IJ}$, which is $1$ when $I = J$ and $0$ otherwise. Conclude that the set $\{e_{i_{1}} \wedge \cdots \wedge e_{i_{k}} : 1 \leq i_{1} < \cdots < i_{k} \leq n\}$ forms a basis for $\wedge^{k}\mathbb{C}^{n}$. (In particular, we have $\dim \wedge^{k}\mathbb{C}^{n} = {n \choose k}$, and of course we are assuming $0 \leq k \leq n$. If $k > n$, the exterior $k$-power is just zero.)
Convention. Given $u = a_{1}e_{1} + \cdots + a_{n}e_{n} \in \mathbb{C}^{n}$, we have a unique dual image $u^{*} = a_{1}e_{1}^{*} + \cdots + a_{n}e_{n}^{*} \in V^{*}$. Thus, if $u, v \in \mathbb{C}^
{n}$, then we write $u \wedge v := u^{*} \wedge v^{*}$.
Exercise 1.5. For the standard basis $e_{1}, \cdots, e_{n} \in \mathbb{C}^{n}$, show that $(e_{1} \wedge \cdots \wedge e_{n})(v_{1}, \cdots, v_{n}) = \det[e_{i}^{*}(v_{j})]$.
Exercise 1.6. Given two bases $\{v_{1}, \cdots, v_{n}\}, \{w_{1}, \cdots, w_{n}\} \subseteq \mathbb{C}^{n}$, take the matrix $A = (a_{ij})$ given by $v_{i} = \sum_{j=1}^{n}a_{ij}w_{j}$. In other words, the $i$th row of $A$ lists the coordinates of $v_{i}$ in the basis $\{w_{j}\}_{j=1}^{n}$. Show that
$v_{1} \wedge \cdots \wedge v_{n} = \det(A) w_{1} \wedge \cdots \wedge w_{n}$.
Thus if $A = (a_{ij})$ is an $n \times n$ matrix, then
$(a_{11}, \cdots, a_{1n}) \wedge \cdots \wedge (a_{n1}, \cdots, a_{nn}) = \det(A)$.
Now we describe $G(2, 4)$ in a different way. Fix any $V \in G(2, 4) = G(2, \mathbb{C}^{4})$ and consider any basis $u, v$ of $V$. Then we have a map $(u, v) \mapsto u \wedge v \in \wedge^{2}\mathbb{C}^{4}$. If we fix a basis $e_{1}, e_{2}$ of $V$ and write $u = a_{11}e_{1} + a_{12}e_{2}$ and $v = a_{21}e_{1} + a_{22}e_{2}$, then $A = (a_{ij})$ is the $2 \times 2$ matrix whose rows are $u, v$. Writing $\det(u, v) := \det A$, we have $(u \wedge v)/\det(u, v) = e_{1}^{*} \wedge e_{2}^{*}$. Thus for any other basis $u', v'$ of $V$, we have
$u' \wedge v' = (\det(u', v')/\det(u, v))(u \wedge v)$.
Since $\wedge^{2}V = \mathbb{C}(u \wedge v)$ does not depend on the choice of basis $u, v$ of $V$, we get a well-defined map $V \mapsto \wedge^{2}V \subseteq \wedge^{2}\mathbb{C}^{4} \simeq \mathbb{C}^{6}$.
More concretely, if we write $\mathbb{C}^{4} = \mathbb{C}e_{1} \oplus \mathbb{C}e_{2} \oplus \mathbb{C}e_{3} \oplus \mathbb{C}e_{4}$. Then $\wedge^{2}\mathbb{C}^{4} = \mathbb{C}e_{12} \oplus \cdots \
oplus \mathbb{C}e_{34}$, where $e_{ij} = e_{i} \wedge e_{j} = - e_{j} \wedge e_{i} = -e_{ji}$. This means that we can recognize $\wedge^{2}\mathbb{C}^{4}$ as the space of $4 \times 4$ skew-symmetric matrices:
$\begin{pmatrix}x_{12}e_{12} + x_{13}e_{13} + x_{14}e_{14} \\ + x_{23}e_{23} + x_{24}e_{24} \\ + x_{34}e_{34}\end{pmatrix} \leftrightarrow \begin{bmatrix} 0 & x_{12} & x_{13} & x_{14} \\ -x_{12} & 0
& x_{23} & x_{24} \\ -x_{13} & -x_{23} & 0 & x_{34} \\ -x_{14} & -x_{24} & -x_{34} & 0 \end{bmatrix}$.
Since $\dim \wedge^{2}V = {2 \choose 2} = 1$, we have
$\wedge^{2} : G(2,4) \rightarrow G(1, \wedge^{2} \mathbb{C}^{4}) \simeq G(1, \mathbb{C}^{6}) = \mathbb{P}(\mathbb{C}^{6}) = \mathbb{P}^{5}$.
The following exercise gives an intuitive picture of $\wedge$.
Exercise 1.7. Given $v = (v_{1}, v_{2}, v_{3}) \in \mathbb{C}^{3}$, define $\tilde{v} := v_{1}e_{23} + v_{2}e_{31} + v_{3}e_{12}$. Given $v, w \in \mathbb{C}^{3}$, show $\widetilde{v \times w} = v \wedge w$.
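Numerically, the three coefficients of $v \wedge w$ on $e_{23}, e_{31}, e_{12}$ are exactly the components of the cross product $v \times w$. A small Python sketch (the sample vectors are made up; `wedge2` extracts the coefficient of $e_{ij}$ in $v \wedge w$):

```python
def wedge2(v, w, i, j):
    # coefficient of e_i ∧ e_j in v ∧ w  (0-based indices)
    return v[i] * w[j] - v[j] * w[i]

def cross(v, w):
    # the usual cross product in C^3 (here with real sample data)
    return (v[1] * w[2] - v[2] * w[1],
            v[2] * w[0] - v[0] * w[2],
            v[0] * w[1] - v[1] * w[0])

v, w = (1.0, 2.0, 3.0), (-2.0, 0.5, 4.0)
c = cross(v, w)
# components of v × w match the coefficients of v ∧ w on e_23, e_31, e_12
assert c == (wedge2(v, w, 1, 2), wedge2(v, w, 2, 0), wedge2(v, w, 0, 1))
```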
Exercise 1.8. Given an $n \times n$ skew symmetric matrix $A$, if $n$ is odd, then $\det(A) = 0$.
Exercise 1.9. (Hard) Given an $n \times n$ skew symmetric matrix $A$, if $n$ is even and $A = (x_{ij})$, then there is a polynomial $p(x_{ij})$ such that $\det(A) = p(x_{ij})^{2}$. This polynomial is
called the Pfaffian of $A$.
Exercise 1.10. Explain how to define the rank of elements in $\wedge^{k}\mathbb{C}^{n}$. (Hint: generalize the skew-symmetric matrix picture, using the ${n \choose k}$ coordinates of an element of $\wedge^{k}\mathbb{C}^{n}$.)
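The map $\wedge^{2} : G(2,4) \rightarrow \mathbb{P}^{5}$ above can be played with numerically. The sketch below (Python; the vectors are made up) computes the six Plücker coordinates $p_{ij} = u_{i}v_{j} - u_{j}v_{i}$ of a 2-plane, checks the classical Plücker quadric relation $p_{12}p_{34} - p_{13}p_{24} + p_{14}p_{23} = 0$ cutting out the image (a fact not proved in this post), and verifies the rescaling behavior of Exercise 1.6 under a change of basis of the plane:

```python
from itertools import combinations

# Plücker coordinates of the 2-plane spanned by u, v in C^4 are the
# 2x2 minors p_ij = u_i v_j - u_j v_i (0-based indices here).
u = (1.0, 0.0, 2.0, -1.0)
v = (0.0, 1.0, 1.0, 3.0)

p = {(i, j): u[i] * v[j] - u[j] * v[i] for i, j in combinations(range(4), 2)}

# Plücker quadric relation: p12 p34 - p13 p24 + p14 p23 = 0
pluecker = (p[(0, 1)] * p[(2, 3)]
            - p[(0, 2)] * p[(1, 3)]
            + p[(0, 3)] * p[(1, 2)])
assert abs(pluecker) < 1e-12

# A change of basis of the plane rescales all p_ij by det(g) (Exercise 1.6):
u2 = tuple(2.0 * ui + vi for ui, vi in zip(u, v))  # g = [[2,1],[0,1]], det g = 2
v2 = v
p2 = {(i, j): u2[i] * v2[j] - u2[j] * v2[i] for i, j in combinations(range(4), 2)}
assert all(abs(p2[k] - 2.0 * p[k]) < 1e-12 for k in p)
```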
A comparison between change of variables and substitution
If $\varphi:[a,b] \rightarrow \mathbb{R}$ is a $C^{1}$ function (i.e., derivative exists and continuous), then for any continuous $f:\varphi([a, b]) \rightarrow \mathbb{R}$, we have
$\displaystyle\int_{a}^{b}f(\varphi(x))\varphi'(x)dx = \int_{\varphi(a)}^{\varphi(b)}f(t)dt$.
Now consider a $C^{1}$ function $\phi : U \rightarrow \mathbb{R}^{n}$ where $U \subseteq \mathbb{R}^{n}$ is open. (Here, $C^{1}$ means that partial derivatives are continuous.) Suppose that $\phi$ is
injective and $f$ is a real-valued function compactly supported in $\phi(U)$. Then
$\displaystyle\int_{U}f(\phi(x))|\det\phi'(x)|dx = \int_{\phi(U)}f(t)dt$.
The question is: where is the absolute value in the single variable case?
Notice that for the multivariable case, we only consider the case when $\phi$ is 1-1. If we assume $\varphi$ 1-1 in the single variable case, then it is either $\varphi' > 0$ or $\varphi' < 0$. If $\
varphi' > 0$, then two cases evidently coincide, so assume that $\varphi' < 0$. Then
$\displaystyle\int_{a}^{b}f(\varphi(x))|\varphi'(x)|dx = -\int_{a}^{b}f(\varphi(x))\varphi'(x)dx = \int_{\varphi(b)}^{\varphi(a)}f(t)dt.$
But notice that $\varphi([a, b]) = [\varphi(b), \varphi(a)]$, since $\varphi$ is decreasing! Thus the two cases match.
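A numeric sanity check of the decreasing case (a sketch with a made-up $\varphi(x) = 1 - x^{2}$ on $[0, 1]$ and $f = \cos$, using midpoint Riemann sums):

```python
import math

phi  = lambda x: 1.0 - x * x        # decreasing, injective on [0, 1]
dphi = lambda x: -2.0 * x
f    = math.cos

n = 20000
h = 1.0 / n
mids = [(k + 0.5) * h for k in range(n)]

# signed version equals the oriented integral from phi(0)=1 down to phi(1)=0;
# the |phi'| version equals the integral of cos over the image [0, 1].
lhs_signed = h * sum(f(phi(x)) * dphi(x) for x in mids)       # -> -sin(1)
lhs_abs    = h * sum(f(phi(x)) * abs(dphi(x)) for x in mids)  # -> +sin(1)

assert abs(lhs_signed + math.sin(1.0)) < 1e-6
assert abs(lhs_abs - math.sin(1.0)) < 1e-6
```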
A good way to write a plane
Suppose that a plane contains two nonparallel lines with direction vectors v and w. If p is a point passing through the plane, then the plane equation is given by det(x – p, v, w) = 0.
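A minimal sketch of this claim (the point and direction vectors are made up): points of the form $p + sv + tw$ make $\det(x - p, v, w)$ vanish, while a point off the plane does not.

```python
# det(x - p, v, w) = 0 exactly when x lies on the plane through p
# with (non-parallel) direction vectors v and w.
def det3(r1, r2, r3):
    a, b, c = r1
    d, e, f = r2
    g, h, i = r3
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

p = (1.0, 2.0, 3.0)
v = (1.0, 0.0, 1.0)
w = (0.0, 1.0, -1.0)

for s, t in [(0.0, 0.0), (1.0, 2.0), (-3.0, 0.5)]:
    x = tuple(pi + s * vi + t * wi for pi, vi, wi in zip(p, v, w))
    diff = tuple(xi - pi for xi, pi in zip(x, p))
    assert abs(det3(diff, v, w)) < 1e-9       # on the plane

x_off = (p[0], p[1], p[2] + 1.0)              # off the plane
diff = tuple(xi - pi for xi, pi in zip(x_off, p))
assert abs(det3(diff, v, w)) > 0.5            # determinant is nonzero
```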
Connected component is closed
Any connected component is closed: a component is the maximal connected set containing a fixed point, and its closure is again a connected set containing that point, so the closure must coincide with the component.
From classical geometry to algebraic geometry
This post is an elementary discussion of understanding points in $k^{n}$ as points in $\mathbb{A}_{k}^{n} = \text{Spec}k[x_{1}, \cdots, x_{n}]$, where $k$ is a (not necessarily algebraically closed)
field. The reference is Ravi Vakil’s lecture notes, though the post is not identical to the notes.
Theorem. Given a point $(a_{1}, \cdots, a_{n}) \in k^{n}$, the kernel of the evaluation homomorphism (at the point) $k[x_{1}, \cdots, x_{n}] \rightarrow k$ given by $f \mapsto f(a_{1}, \cdots, a_{n})$ is $(x_{1} - a_{1}, \cdots, x_{n} - a_{n})$. In particular, the ideal $(x_{1} - a_{1}, \cdots, x_{n} - a_{n})$ is maximal, so $(x_{1} - a_{1}, \cdots, x_{n} - a_{n}) \in \text{Spec}\,k[x_{1}, \cdots, x_{n}]$.
Remark. It is rather hard to compute the kernel in a naive way. Let’s try it. Suppose that $f(a_{1}, \cdots, a_{n}) = 0$. To show that $f \in (x_{1} - a_{1}, \cdots, x_{n} - a_{n})$ directly, we need
to find $g_{1}, \cdots, g_{n} \in k[x_{1}, \cdots, x_{n}]$ such that
$f(x_{1}, \cdots, x_{n}) = g_{1}(x_{1}, \cdots, x_{n})(x_{1} - a_{1}) + \cdots + g_{n}(x_{1}, \cdots, x_{n})(x_{n} - a_{n})$.
But there is a way to get around this difficulty: to get help from morphisms!
Proof. Define $\phi : k[x_{1}, \cdots, x_{n}]/(x_{1} - a_{1}, \cdots, x_{n} - a_{n}) \rightarrow k$ by $\bar{f} \mapsto f(a_{1}, \cdots, a_{n})$. This is well-defined because if $\bar{f} = 0$, then
$f(x_{1}, \cdots, x_{n}) = \sum_{j=1}^{n}h_{j}(x_{1}, \cdots, x_{n})(x_{j} - a_{j})$ so that $f(a_{1}, \cdots, a_{n}) = 0$.
Now, define $\psi : k \rightarrow k[x_{1}, \cdots, x_{n}]/(x_{1} - a_{1}, \cdots, x_{n} - a_{n})$ by $c \mapsto \bar{c}$. Then
$\psi\phi(\bar{f}) = \overline{f(a_{1}, \cdots, a_{n})} = f(\bar{a_{1}}, \cdots, \bar{a_{n}}) = f(\bar{x_{1}}, \cdots, \bar{x_{n}}) = \bar{f}$,
and $\phi\psi(c) = \phi(\bar{c}) = c$, so $\phi$ is an isomorphism. Let $K$ be the kernel of evaluation. Then we have constructed the following isomorphism
$k[x_{1}, \cdots, x_{n}]/(x_{1} - a_{1}, \cdots, x_{n} - a_{n}) \overset{\sim}{\longrightarrow} k[x_{1}, \cdots, x_{n}]/K$
given by $f \mod (x_{1} - a_{1}, \cdots, x_{n} - a_{n}) \mapsto f \mod K$. Since zeros of each side correspond, we have $(x_{1} - a_{1}, \cdots, x_{n} - a_{n}) = K$. Q.E.D.
Remark. The key of the above proof was to recognize that
$\overline{f(x_{1}, \cdots, x_{n})} = f(\bar{x_{1}}, \cdots, \bar{x_{n}})$
in the quotient ring $k[x_{1}, \cdots, x_{n}]/I$ where $I$ is a given ideal of the polynomial ring. Let $\pi : k[x_{1}, \cdots, x_{n}] \rightarrow k[x_{1}, \cdots, x_{n}]/I$. The above identity can
be also understood as
$\pi(k)[\bar{x_{1}}, \cdots, \bar{x_{n}}] = k[x_{1}, \cdots, x_{n}]/I$,
so it is acceptable to write
$\bar{a_{n_{j}}}\bar{x_{j}}^{n} + \cdots + \bar{a_{1}}\bar{x_{j}} + \bar{a_{0}} = a_{n_{j}}\bar{x_{j}}^{n} + \cdots + a_{1}\bar{x_{j}} + a_{0}$,
which is given by the restriction of scalars given by $\pi : k \overset{\sim}{\longrightarrow} \pi(k)$. Notice that our arguments do not need $k$ to be isomorphic to $\pi(k)$! That’s why I believe
that this can be generalized as much as we want (see below).
There is another reason that it is legitimate to write $a \bar{x_{j}} = \bar{a}\bar{x_{j}}$: if we look at the quotient $k[x_{1}, \cdots, x_{n}]/I$ as a $k$-algebra, this is how one defines $k$
-action on the quotient module: if $N$ is an $A$-submodule of $M$, we define $a\bar{m} = \overline{am}$. Both views are valid here!
Question. Can we generalize this to $A[x_{1}, \cdots, x_{n}]$, where $A$ is an arbitrary commutative ring with unity?
Answer (not professional). I think we can, but it is more reasonable to assume that $A$ is an integral domain so that we can conclude that $(x_{1} - a_{1}, \cdots, x_{n} - a_{n})$ are prime ideals
given by the isomorphism $A[x_{1}, \cdots, x_{n}]/(x_{1} - a_{1}, \cdots, x_{n} - a_{n}) \simeq A$ similar to above. However, I think that the isomorphism is given over any commutative rings.
Consider an example: $\mathbb{Z}[x, y] \rightarrow \mathbb{Z}$ given by $f \mapsto f(m, n)$. Then we can construct
$\phi : \mathbb{Z}[x, y]/(x - m, y - n) \rightarrow \mathbb{Z}$
by $\bar{f} \mapsto f(m, n)$ and
$\psi : \mathbb{Z} \rightarrow \mathbb{Z}[x, y]/(x - m, y - n)$
by $s \mapsto \bar{s}$. Then we have
$\psi\phi(f(x, y)) = \phi(f(m, n)) = \overline{f(m, n)} = \overline{f(x, y)}$ and
$\phi\psi(s) = \phi(\bar{s}) = s$. This clearly generalizes to the case where $A$ is any commutative ring and we have arbitrarily many finite number of indeterminates. Again, the key is to see $\
overline{f(x, y)} = \overline{f(m, n)}$. As we saw before, this is merely because polynomials are “constructed by addition and multiplication”. Just like in Theorem, this argument is not about merely
giving an isomorphism, but it also shows that
$f(a_{1}, \cdots, a_{n}) = 0$ if and only if $f(x_{1}, \cdots, x_{n}) \in (x_{1} - a_{1}, \cdots, x_{n} - a_{n})$.
To put it differently, we have
$f(a_{1}, \cdots, a_{n}) = 0$ if and only if $f(x_{1}, \cdots, x_{n}) = 0 \mod (x_{1} - a_{1}, \cdots, x_{n} - a_{n})$.
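The membership criterion can also be checked numerically. The sketch below (a made-up example over $\mathbb{R}$ for simplicity) uses the telescoping quotients $g_{1}(x, y) = (f(x, y) - f(a, y))/(x - a)$ and $g_{2}(y) = (f(a, y) - f(a, b))/(y - b)$ (honest polynomials, since $x^{m} - a^{m}$ is divisible by $x - a$) and verifies the decomposition $f = g_{1}\cdot(x - a) + g_{2}\cdot(y - b)$ when $f(a, b) = 0$:

```python
# Hypothetical example: f(x, y) = x^2 y + x y^2 + 2 vanishes at (a, b) = (2, -1),
# so f should lie in the ideal (x - a, y - b).
a, b = 2.0, -1.0
f = lambda x, y: x * x * y + x * y * y + 2.0      # f(2, -1) = -4 + 2 + 2 = 0

# Telescoping quotients (polynomials; evaluated here as plain functions
# away from x = a and y = b):
g1 = lambda x, y: (f(x, y) - f(a, y)) / (x - a)
g2 = lambda y: (f(a, y) - f(a, b)) / (y - b)

for x in (0.0, 1.0, 3.5):
    for y in (0.0, 2.0, -4.0):
        lhs = f(x, y)
        rhs = g1(x, y) * (x - a) + g2(y) * (y - b)
        assert abs(lhs - rhs) < 1e-9              # f = g1 (x-a) + g2 (y-b)
```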
A more ambitious question. If I have not made any incorrect remark, it is natural to ask what is the notion of ideals $(x_{1} - a_{1}, \cdots, x_{n} - a_{n})$ that is the kernel of $f \mapsto f(a_
{1}, \cdots, a_{n})$. If the ground ring is not an integral domain, we necessarily have $(x_{1} - a_{1}, \cdots, x_{n} - a_{n}) \notin \mathbb{A}_{A}^{n}$, so we might want to make $\mathbb{A}_{A}^{n}$
bigger! Let us call such an ideal as a type C ideal. Now we may ask what are the type C ideals in an arbitrary ring and if it enriches the theory! However, this might be just a very shallow question.
Remark. Notice that we have not used Nullstellensatz (weak nor strong) for this discussion. Weak Nullstellensatz is only used to say that over an algebraically closed field, classical points are all
the maximal ideals of a polynomial ring over the field.
Review: Topologification 1
This post is to review some topological concepts. The material is from Topology (3rd edition) by James Munkres and several articles of proofwiki (e.g., the article about a way to create a basis) with
some additional thoughts of myself.
Given a set $X$, we are interested in ways to construct a topological structure on $X$.
One natural thought is to pick some collection of subsets $\mathscr{S} \subseteq \mathcal{P}(X)$ and ask if there is a smallest topology on $X$ that contains $\mathscr{S}$. If it exists, it must be unique; let us denote this topology by $O_{\mathscr{S}}$, calling it the topology generated by $\mathscr{S}$.
Existence of the topology generated by a set. We can construct this topology as the intersection of all topologies on $X$ containing $\mathscr{S}$; there is at least one topology being intersected, namely the discrete topology $\mathcal{P}(X)$.
Remark. Let $\mathscr{T} \subseteq \mathcal{P}(X)$ be a topology on $X$. Then $O_{\mathscr{T}} = \mathscr{T}$. Moreover, we get a functor $\mathscr{S} \mapsto O_{\mathscr{S}}$ from the category (poset) of subsets of $\mathcal{P}(X)$ to the category of topologies on $X$.
Remark. Given a collection of subsets $\mathscr{S} \subseteq \mathcal{P}(X)$, the topology $O_{\mathscr{S}}$ is the topological best approximation of $\mathscr{S}$. It deserves a name such as
“topologification” and I am pretty sure this is the left-adjoint to the forgetful functor from the category of topologies on $X$ to the powerset $\mathcal{P}(X)$.
We now want another (less categorical but more set-theoretical) criterion to compute $O_{\mathscr{S}}$. An easier case is given by the concept of a “basis”.
The subcollection $\mathscr{B} \subseteq \mathcal{P}(X)$ is called a (topological) basis if it satisfies the following axioms:
1. $\mathscr{B}$ covers $X$.
2. If $x \in B_{1} \cap B_{2}$ where $B_{1}, B_{2} \in \mathscr{B}$, then there is $B_{3} \in \mathscr{B}$ such that $x \in B_{3} \subseteq B_{1} \cap B_{2}$.
We can compute $O_{\mathscr{B}}$ as follows.
Theorem 1.1. Let $\mathscr{B}$ be a topological basis in $\mathcal{P}(X)$. Then
$O_{\mathscr{B}} = \{U \in \mathcal{P}(X) | (\forall x \in U) (\exists B \in \mathscr{B}) (x \in B \subseteq U)\}$.
Remark. Let $O$ denote the right-hand side. It is easy to show that $O$ is a topology on $X$. However, it is hard (at least for me) to show the equality!
We show that $O$ is a topology on $X$. (i) We get $\emptyset \in O$ vacuously and $X \in O$ trivially. Let $\{U_{i}\}_{i \in I} \subseteq O$. (ii) If $x \in \cup_{i \in I}U_{i}$, then $x \in B \
subseteq U_{j} \subseteq \cup_{i\in I}U_{i}$ for some $j \in I$ and some $B \in \mathscr{B}$, so $\cup_{i \in I}U_{i} \in O$. (iii) If $x \in U_{i} \cap U_{j}$, then there is $B_{i}, B_{j} \in \
mathscr{B}$ such that $x \in B_{i} \subseteq U_{i}$ and $x \in B_{j} \subseteq U_{j}$, so take $B_{k} \in \mathscr{B}$ such that $x \in B_{k} \subseteq B_{i} \cap B_{j} \subseteq U_{i} \cap U_{j}$.
Thus $U_{i} \cap U_{j} \in O$.
We have established that $O$ forms a topology on $X$ and clearly, we have $\mathscr{B} \subseteq O$. The difficulty arises when one tries to show $O \subseteq O_{\mathscr{B}}$. To finish showing the
theorem, we need to discuss about another way to compute $O_{\mathscr{B}}$ by another description of $O$.
Theorem 1.2. Let $\mathscr{B}$ be a topological basis in $\mathcal{P}(X)$. Then
$O_{\mathscr{B}} = \{U \in \mathcal{P}(X) | U \text{ is a union of some members of }\mathscr{B}\}$.
Proof. Denote by $O'$ the right-hand side. It is evident that $O' \subseteq O_{\mathscr{B}}$. If we show that $O' = O$ (see the remark above), then we can conclude at once that $O_{\mathscr{B}} = O' = O$, since $O$ is a topology on $X$ containing $\mathscr{B}$!
Let $U \in O$. For each $x \in U$, (using the axiom of choice) choose $B_{x} \in \mathscr{B}$ such that $x \in B_{x} \subseteq U$. Then $U = \cup_{x \in U}B_{x} \in O'$. It is immediate that $O' \
subseteq O$, so we finish the proof (of Theorem 1 and 2 together). Q.E.D.
Remark. Do not forget that the description given in Theorem 1 was not so trivial to prove! As a corollary, we get a nontrivial way to describe open sets of a topological space $(X, O(X))$ by $O_{O
(X)} = O(X)$.
Corollary 1.3. Let $X$ be a topological space. Then $U \subseteq X$ is open if and only if for each $x \in U$ there is an open $V \ni x$ such that $x \in V \subseteq U$.
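On a finite set everything above can be verified exhaustively. A small Python sketch (the set and basis are made up) checks the basis axioms and confirms that the two descriptions of $O_{\mathscr{B}}$ in Theorems 1.1 and 1.2 agree:

```python
from itertools import chain, combinations

# Finite sanity check: X = {1, 2, 3} with basis B = {{1}, {2}, {1, 2, 3}}.
X = frozenset({1, 2, 3})
B = [frozenset({1}), frozenset({2}), frozenset({1, 2, 3})]

# Basis axioms: B covers X, and every point of an intersection of two
# basis elements lies in some basis element inside that intersection.
assert frozenset(chain.from_iterable(B)) == X
assert all(any(x in b3 and b3 <= b1 & b2 for b3 in B)
           for b1 in B for b2 in B for x in b1 & b2)

# Theorem 1.2: O_B consists of all unions of subfamilies of B
# (the empty subfamily gives the empty set).
subfams = [c for r in range(len(B) + 1) for c in combinations(B, r)]
O_union = {frozenset(chain.from_iterable(c)) for c in subfams}

# Theorem 1.1: U is open iff every x in U has some basis element B_x
# with x in B_x ⊆ U.
subsets = [frozenset(c) for r in range(len(X) + 1) for c in combinations(X, r)]
O_point = {U for U in subsets
           if all(any(x in b and b <= U for b in B) for x in U)}

assert O_union == O_point              # the two descriptions agree
assert frozenset({3}) not in O_union   # {3} is not open in this topology
```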
Subspace and quotient space
A space with an inclusion morphism is a subspace. A space with a projection morphism (mapping each element to its equivalence class) is a quotient space.
Hey guys!
First of all thanks everyone so much for your attention.
The method i have to use is indeed the one BOB linked to (the one at WIKIPEDIA - i am not allowed to post links!).
So, I can use it for a system of linears, with ease, but i have no idea how to apply it on a system like the one i presented!!
Maybe some trick is required!?
I would be so grateful if someone has a hint!
Thanks everyone,
this forum seems so nice!
International Journal of Navigation and Observation
Volume 2008 (2008), Article ID 793868, 9 pages
Research Article
Comparison between Galileo CBOC Candidates and BOC(1,1) in Terms of Detection Performance
^1Dipartimento di Elettronica, Politecnico di Torino, Corso Duca degli Abruzzi 24, 10138 Torino, Italy
^2Istituto Superiore Mario Boella, Corso Castelfidardo 30/A, 10138 Torino, Italy
^3Galileo Unit, European Commission DG-Tren, 28 Rue de Mot, 1049 Brussels, Belgium
Received 31 July 2007; Revised 30 December 2007; Accepted 25 February 2008
Academic Editor: Gerard Lachapelle
Copyright © 2008 Fabio Dovis et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any
medium, provided the original work is properly cited.
Many scientific activities within the navigation field have been focused on the analysis of innovative modulations for both GPS L1C and Galileo E1 OS, after the 2004 agreement between United States
and European Commission on the development of GPS and Galileo. The joint effort by scientists of both parties has been focused on the multiplexed binary offset carrier (MBOC) which is defined on the
basis of its spectrum, and in this sense different time waveforms can be selected as possible modulation candidates. The goal of this paper is to present the detection performance of the composite
BOC implementation of an MBOC signal in terms of detection and false alarm probabilities. A comparison among the CBOC and BOC(1,1) modulations is also presented to show how the CBOC solution,
designed to have excellent tracking performance and multipath rejection capabilities, does not limit the acquisition process.
1. Introduction
The agreement reached in 2004 by the United States (US) and the European Commission (EC) [1], focused on Galileo and GPS coexistence, clearly stated as a central point the selection of a common signal-in-space (SIS) baseline structure, the BOC(1,1). In addition, the same agreement paved the way for common signal optimization with the goal to provide increased performance as well as considerable flexibility to receiver manufacturers.
Therefore, EC and US started to analyze possible innovative modulation strategies [2] in the view of Galileo E1 OS optimization and for the future L1C signals of the new generation GPS satellites.
Considering the recent activities carried out by the Galileo signal task force (STF) jointly to US experts in the Working Group A, it came out that the multiplexed binary offset carrier (MBOC) could
be a good candidate for both GPS and Galileo satellites. In fact, on the 26th of July 2007 US and EC announced their decision to jointly implement the MBOC on the Galileo open service (OS) and the
GPS IIIA civil signal as reported in [3].
The MBOC power spectral density (PSD) is a mixture of BOC(1,1) spectrum and BOC(6,1) spectrum; then different time waveforms can be combined to produce the MBOC-like spectral density. The
contribution of the BOC(6,1) subcarrier brings in an increased amount of power at higher frequencies, which leads to signals with narrower correlation functions, thus yielding better performance at the receiver level.
The European approach to the MBOC implementation consists in adding in time a BOC(1,1) and a BOC(6,1), defined as composite BOC (CBOC) modulation. At the time of writing, the US is likely to choose a
time-multiplexed implementation, named TMBOC. Throughout the paper, the CBOC features will be described and clarified taking also into account different implementation options like, for example, the
allocation of the power among the data and pilot channels of the E1 signal.
Regardless of the kind of CBOC, such a signal structure allows the receivers to obtain high performance in terms of multipath rejection and tracking [4, 5]. This is mainly due to a higher transition
rate brought by the BOC(6,1) on top of the BOC(1,1). However, the optimization process must also consider the signal candidates in terms of their acquisition performance. It is known that CBOC
signals have sharper correlation functions [4, 5] than the BOC(1,1) solution and this characteristic, as described in [6, 7], makes the acquisition process more challenging. In this paper, the
acquisition of a CBOC signal in terms of its detection and false alarm probabilities (more related to the modulation characteristics and less connected to the acquisition implementation) is
investigated and compared to the performance of the pure BOC(1,1) modulation as well as the detection performance of a BOC(1,1) legacy receiver acquiring a CBOC signal. In this paper, the mean
acquisition time is not investigated, since it is connected to the detection rate performance as well as to the acquisition solution being implemented, and thus depends not only on the signal modulation.
The results show that, from the acquisition standpoint, thanks to the 10/11 of the power allocated to the BOC(1,1) component in the MBOC spectrum, compatibility with the state-of-the-art BOC(1,1) receiver baseline is assured.
Moreover, it is assumed to use the Galileo acquisition engines presented in [8] which work on a pilot channel with a secondary code, that further modulates the primary pseudorandom sequences (any
kind of BOC or MBOC).
The paper is organized as follows: Section 2 reports the main features of the MBOC approach while Section 3 presents the correlations properties as well as the possible CBOC candidates in terms of
power allocation. Then, Section 4 is devoted to the description of the acquisition problem from a theoretical aspect, and Section 5 presents the related simulation results for the CBOCs and BOC(1,1)
modulated signals. Finally, Section 6 draws some conclusions.
2. MBOC Definition and Spectrum Characteristics
As reported in [9], the MBOC signal is obtained defining its power spectral density as a combination of the BOC(1,1) and BOC(6,1) power spectra (i.e., including both pilot and data channel
components). The notation introduced in [9] is MBOC(6,1,1/11), where the term (6,1) refers to the BOC(6,1), and the ratio 1/11 represents the power split between the BOC(1,1) and BOC(6,1) spectrum
components as given by
G_MBOC(6,1,1/11)(f) = (10/11) G_BOC(1,1)(f) + (1/11) G_BOC(6,1)(f),
where G_BOC(m,n)(f) is the unit-power spectral density of a sine-phased BOC(m,n) modulation as defined in [10].
Figure 1 shows the comparison among the PSDs of the BOC(1,1) and the MBOC(6,1,1/11) foreseen for the Galileo E1 signal as well as for the future GPS L1C. In the picture, the increased power at frequencies offset by about 6 MHz from the E1 center frequency, due to the presence of the BOC(6,1) component, is evident.
It is important to remark that MBOC is defined starting from the power spectrum. In this sense, many possible time-domain implementations can result with the same approximation of the defined
3. CBOC Features
In CBOC implementations, each ranging code is modulated by a weighted combination of the BOC(1,1) and BOC(6,1) subcarriers:
where the chip duration is expressed in seconds.
The notation usually reported for the composite BOC signal is CBOC(6,1,γ/ρ), where the parameters γ and ρ are related to the power split between the BOC(1,1) and BOC(6,1) contributions. However, such a notation does not take into account that the actual overall signal is obtained by combining data and pilot channels, which introduces a further degree of freedom. Furthermore, it is not mandatory for the BOC(6,1) contribution to be present on both data and pilot channels, which opens additional implementation options.
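To make the structure of the composite subcarrier concrete, the following illustrative sketch (the names and the sampling grid are mine, not the paper's) builds one chip of a CBOC subcarrier with a positive BOC(6,1) contribution, using amplitude weights sqrt(10/11) and sqrt(1/11) to match the 10/11 to 1/11 power split:

```python
import math

W1, W2 = math.sqrt(10 / 11), math.sqrt(1 / 11)  # amplitude weights

def square(cycles, x):
    """Sine-phased square subcarrier with `cycles` periods per chip, x in [0,1)."""
    return 1.0 if math.sin(2 * math.pi * cycles * x) >= 0 else -1.0

def cboc_subcarrier(x):
    """CBOC('+') subcarrier over one chip: weighted BOC(1,1) + BOC(6,1)."""
    return W1 * square(1, x) + W2 * square(6, x)

# Sample one chip on a grid that avoids the switching instants.
raw = [cboc_subcarrier((k + 0.5) / 24) for k in range(24)]
levels = sorted(set(round(v, 6) for v in raw))
```

Sampling one chip yields exactly four amplitude levels, plus/minus (W1 + W2) and plus/minus (W1 - W2), and unit average power, consistent with the four-level spreading sequence discussed below.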
Therefore, the time-domain signal on the E1 data channel can be expressed as
where the modulating sequence is the product of the navigation message and the spreading code, and α is the fraction of power allocated to the data channel. In the same way, the E1 pilot channel can be expressed as
where the modulating sequence is the pilot spreading code, and β is the fraction of power allocated to the pilot channel.
Two additional sign parameters are used to model the presence (or absence) of the BOC(6,1) subcarrier and its sign on the two channels.
It is important to remark that, since data and pilot channels use spreading codes whose residual cross-correlation (for the sequences chosen for Galileo) can be considered negligible, the overall spectrum in the E1 band is the sum of the power spectra of the pilot and data channels. Different combinations of the parameters α, β, γ, and ρ, together with the sign parameters, can be chosen in order to obtain signals whose power spectral density resembles the spectral mask defined for the MBOC signal.
Table 1 shows some possible selections of the parameters, associated with different power splits between data and pilot channels.
As already remarked, the CBOC need not be selected for both data and pilot channels (see the third row of Table 1). In any case, the most probable implementation selected by the EC will be the CBOC(6,1,1/11) option (with both sign parameters positive) and 50% of the power on each channel. This choice is due to the relatively high data rate on the E1 data channel, which is also known to carry integrity messages.
Regardless of the power splitting, the CBOC time-domain signal shows a four-level spreading sequence, as depicted in Figure 2, where a CBOC(6,1,1/11) realization with a positive contribution from the BOC(6,1) subcarrier is reported.
The presence of a higher transition rate (due to the BOC(6,1) component) creates a sharper correlation function than the BOC(1,1) baseline. The normalized autocorrelation functions of the CBOC(6,1,1/11) and CBOC(6,1,4/33) are compared to the BOC(1,1) correlation in Figure 3.
The larger the contribution of the BOC(6,1) subcarrier (i.e., the larger the γ/ρ ratio) in the CBOC implementation, the sharper the correlation function.
This characteristic will be further highlighted in the following sections, considering its impact on the detection performance of the acquisition stage of the receiver.
To better highlight the sharper CBOC correlation functions, a zoom of Figure 3 around the main peak is reported in Figure 4.
The CBOC autocorrelation function can be written by means of the BOC(1,1) and BOC(6,1) autocorrelations and cross-correlations as

R_CBOC(τ) = w1^2 R_BOC(1,1)(τ) + w2^2 R_BOC(6,1)(τ) + 2 w1 w2 R_1,6(τ),    (5)

where w1 and w2 are the amplitude weights of the BOC(1,1) and BOC(6,1) subcarriers, and R_1,6(τ) is the cross-correlation term between the BOC(1,1) and BOC(6,1).
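The decomposition of the CBOC autocorrelation into component auto- and cross-correlations can be checked numerically. The sketch below is my own construction (sample grid, code length, and names are assumptions): it correlates sampled subcarrier-modulated sequences and verifies that the ACF of the weighted sum equals the weighted combination of the component ACFs plus the cross terms.

```python
import math, random

random.seed(1)
SPC = 12                       # samples per chip (multiple of 12 for BOC(6,1))
code = [random.choice((-1, 1)) for _ in range(64)]   # toy +/-1 spreading code

def sampled(code, cycles):
    """Sample a sine-phased square subcarrier (`cycles` per chip) times the code."""
    out = []
    for c in code:
        for k in range(SPC):
            x = (k + 0.5) / SPC   # midpoints avoid the switching instants
            out.append(c * (1.0 if math.sin(2 * math.pi * cycles * x) >= 0 else -1.0))
    return out

w1, w2 = math.sqrt(10 / 11), math.sqrt(1 / 11)
b11, b61 = sampled(code, 1), sampled(code, 6)
cboc = [w1 * a + w2 * b for a, b in zip(b11, b61)]

def acf(x, y, lag):
    """Normalized circular correlation between sequences x and y at a lag."""
    n = len(x)
    return sum(x[i] * y[(i + lag) % n] for i in range(n)) / n

lag = 3  # a fractional-chip offset (3/12 of a chip)
lhs = acf(cboc, cboc, lag)
rhs = (w1 ** 2 * acf(b11, b11, lag) + w2 ** 2 * acf(b61, b61, lag)
       + w1 * w2 * (acf(b11, b61, lag) + acf(b61, b11, lag)))
```

By bilinearity the two sides agree exactly; the cross terms are the numerical counterpart of the cross-correlation factor in (5).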
The presence of the cross-correlation term in (5) creates small differences with respect to the MBOC spectrum as defined in (1). Therefore, ongoing studies aim to define implementation strategies that remove this cross term. Among them, the most promising is to alternate the BOC(6,1) and BOC(1,1) phases on adjacent code chips (see, e.g., [11]).
4. Acquisition of the Optimized CBOC Signal
The first operation performed by any GNSS receiver is signal acquisition, which is in charge of determining which satellites are in the line of sight and of providing the tracking stages with a coarse estimate of the received code delay and a rough estimate of the Doppler frequency shift.
The declaration of the presence or absence of a satellite (together with the determination of both code delay and Doppler shift) is obtained by evaluating a two-dimensional matrix called the search space. Each element (cell) of this matrix corresponds to the value assumed by the two-dimensional correlation for a specific code delay/Doppler shift pair. This two-dimensional correlation is also known as the cross-ambiguity function.
As shown in [8], several solutions for signal acquisition can be found in the literature (serial search, fast acquisition, and parallel acquisition in the frequency domain), but they differ only in the way the search space is obtained and are equivalent in terms of detection performance. Other acquisition techniques, known as differential acquisition strategies, are nowadays used in the GNSS field [12], but since their mathematical details differ from the previously mentioned methodologies, they will not be considered in this paper.
Any acquisition technique can be characterized, for a given signal-to-noise ratio, by its false alarm and detection probabilities. Here, only the cell false alarm and detection probabilities will be considered since, as discussed in [13], the characteristics of the acquisition engine can always be related to these fundamental values.
Such probability functions are usually evaluated considering only the peak amplitude of the correlation function and the presence of noise. This characterization does not take into account the fact that, for a given Doppler shift and code phase error, the correlation generally does not achieve its maximum possible value. This effect can be modeled as a correlation loss, which depends on the shape of the cross-ambiguity function and on its resolution (i.e., the Doppler shift and code delay steps) in the search space [8].
In order to take this effect into account in the comparison of the BOC(1,1) and CBOC(6,1,γ/ρ) functions, the behavior around the peak in the search space has to be studied.
Once the acquisition threshold has been chosen, the cell false alarm probability can be easily evaluated as the integral of the tail of the distribution of the search-matrix cells in a misalignment condition (or, equivalently, when the signal is absent) (see [14]).
The distribution of the cell values can be shown to assume the expression [8]:
where, when the local spreading code has unit power and the signal is digitized respecting the Nyquist criterion, the normalization constant is fixed; the remaining parameters are the number of samples coherently integrated and the variance of the Gaussian noise affecting the received signal.
Equation (7) holds for the case of a noncoherent integration process applied to a serial or parallel acquisition technique. With this technique, the detection decision can be taken over the summation of several correlation outputs, so as to reduce the noise impact and increase the acquisition detection rate [8].
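As a sanity check on the false-alarm side, a small Monte Carlo run can be compared against a closed-form tail. The sketch below assumes a single-dwell envelope-squared detector whose cell value under noise only is I^2 + Q^2 with Gaussian I/Q, i.e., an exponential law with mean twice the noise variance; that simplified model, and all names, are illustrative assumptions rather than the paper's exact expression (7).

```python
import math, random

random.seed(42)
sigma = 1.0            # noise standard deviation on each of I and Q
threshold = 6.0        # acquisition threshold on the cell value
trials = 100_000

def cell_under_h0():
    """One search-space cell under noise only: squared envelope of I/Q noise."""
    i = random.gauss(0.0, sigma)
    q = random.gauss(0.0, sigma)
    return i * i + q * q

p_fa_mc = sum(cell_under_h0() > threshold for _ in range(trials)) / trials
p_fa_analytic = math.exp(-threshold / (2 * sigma ** 2))  # exponential tail
```

Raising the threshold lowers the false alarm probability exponentially, which is the trade-off the ROC curves of Section 5 explore.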
The probability density function of the correlator output depends on two variables: the code displacement error and the Doppler shift error. The probability density function conditioned on the hypothesis of perfect code and Doppler alignment is demonstrated in [8] to assume the expression in (8), where the noncentrality term is proportional to the received signal power. The corresponding conditional detection probability is then the integral over the tail of this density (see [8, 15]), which leads to the expression in (9).
Equation (9) involves the Marcum Q function, discussed and defined in [15]. It is remarked that (9) still does not consider the shape of the correlation function of the signal being acquired.
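The Marcum Q function in (9) can be evaluated without a special-function library. The sketch below computes the first-order version by direct numerical integration; the series-based Bessel I0 and the integration cutoff are my own choices, offered as an illustrative aid rather than the paper's implementation.

```python
import math

def bessel_i0(x):
    """Modified Bessel function of the first kind, order 0 (power series)."""
    term, total, k = 1.0, 1.0, 0
    while term > 1e-16 * total:
        k += 1
        term *= (x / (2 * k)) ** 2
        total += term
    return total

def marcum_q1(a, b, steps=10_000, upper=30.0):
    """First-order Marcum Q by trapezoidal integration of
    Q1(a, b) = integral_b^inf x * exp(-(x^2 + a^2) / 2) * I0(a x) dx."""
    if b >= upper:
        return 0.0
    h = (upper - b) / steps
    total = 0.0
    for i in range(steps + 1):
        x = b + i * h
        w = 0.5 if i in (0, steps) else 1.0
        total += w * x * math.exp(-(x * x + a * a) / 2) * bessel_i0(a * x)
    return total * h
```

Two standard identities make good checks: Q1(0, b) = exp(-b^2 / 2), and Q1(a, 0) = 1 for any a.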
This correlation function can be locally approximated around the peak as the product of the one-dimensional correlations along the code delay and Doppler axes [7, 8], that is,
It is evident that in real applications, where a residual error remains in the estimation of the code phase and Doppler shift, the acquisition does not operate at the maximum possible correlation value. This situation can be modeled as an additional loss, or impairment, which depends on the shapes of both the code and Doppler correlation functions.
The approximation reported in (10) is extremely important because it makes it possible to separate the effect of code errors from that of the Doppler shift, so that the total loss can be considered simply as the product of two single impairments.
As far as the code error loss is concerned, the reduction of the correlation output can be expressed on a dB amplitude scale as in (11) [8].
A plot of the code correlation losses for the CBOC(6,1,1/11), CBOC(6,1,4/33), and BOC(1,1) is depicted in Figure 5.
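For intuition, the code loss can be evaluated in closed form for an ideal infinite-bandwidth BOC(1,1), whose normalized autocorrelation near the main peak is 1 - 3|tau| (tau in chips, |tau| <= 1/2); this ACF shape is a standard textbook result used here as an illustrative assumption, not the paper's bandlimited curve.

```python
import math

def boc11_acf(tau):
    """Ideal infinite-bandwidth BOC(1,1) autocorrelation near the peak,
    tau in chips with |tau| <= 0.5."""
    return 1.0 - 3.0 * abs(tau)

def code_loss_db(tau):
    """Correlation loss in dB for a residual code offset tau (chips)."""
    return 20.0 * math.log10(boc11_acf(tau))
```

At an eighth of a chip the peak drops to 0.625, about -4.1 dB; sharper CBOC correlations lose more at the same offset, which is what Figure 5 compares.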
Similarly to the code correlation loss, the residual Doppler error in the acquisition process produces a reduction of the correlation peak that is demonstrated, again in [8], to be governed by the Dirichlet kernel, as in (12). Since the number of coherently integrated samples grows with the integration time, it is clear that this correlation loss depends on the integration time. Figure 6 reports the trend of the Doppler loss when the integration time goes from T = 4 milliseconds up to T = 12 milliseconds in steps of 4 milliseconds.
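The Dirichlet-kernel attenuation mentioned above is easy to reproduce. In the sketch below (my own normalization: the residual Doppler error is expressed as a digital frequency in cycles per sample), the loss at a fixed Doppler error grows with the number of coherently integrated samples, i.e., with the integration time:

```python
import math

def dirichlet_gain(fd, n):
    """Attenuation of a coherent sum of n samples for a residual Doppler
    error fd (normalized digital frequency, cycles per sample)."""
    if fd == 0.0:
        return 1.0  # removable singularity: no error, no loss
    return abs(math.sin(math.pi * n * fd) / (n * math.sin(math.pi * fd)))

def doppler_loss_db(fd, n):
    """Doppler correlation loss in dB."""
    return 20.0 * math.log10(dirichlet_gain(fd, n))
```

Doubling the number of coherently integrated samples at a fixed Doppler error roughly quadruples the argument of the main lobe, hence the steeper losses for longer integration times seen in Figure 6.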
It is necessary to model the probability distributions of the code phase offset and of the Doppler shift error in order to include the different losses in the conditional detection probability reported in (9). Two realistic hypotheses can be made on the basis of the functioning of the acquisition engine:
(i) the code resolution used in the acquisition phase is usually some integer fraction of a chip, so the code phase offset can be assumed uniformly distributed within plus or minus half the code step;
(ii) similarly, the Doppler error, expressed as a normalized digital frequency (a natural frequency in Hz normalized by the sampling frequency), can be assumed uniformly distributed within plus or minus half the Doppler bin width.
Furthermore, the Doppler frequency and code phase errors can be considered independent and uncorrelated. With all these assumptions, the combined loss simply becomes the sum (because it is expressed in dB) of the code and Doppler contributions. Thus, according to the definition of [6, 8], an expected value of the detection probability, which also accounts for the particular shape of the cross-ambiguity function and for the impairments due to the residual code phase and Doppler errors, can be derived from the conditional detection probability defined in (9) by integrating over the two assumed distributions, as in (13).
Therefore, since different modulations have different correlation shapes, different detection rates must clearly be expected from (13).
The expected value of the detection rate averages over all the possible code phase and Doppler offsets the acquisition can deal with, and it can be seen as an averaged probability, even though in the following it will be referred to simply as a probability.
5. Detection Performance of the CBOC Modulation Candidates
The CBOC candidates and BOC(1,1) modulations have been compared considering the impairments addressed in Section 4. Both false alarm and detection probabilities have been obtained by means of Monte
Carlo simulations.
A classical acquisition technique, not tailored for the new modulations, has been considered, and the false alarm probability as well as the detection rate has been determined considering an integration period of 4 milliseconds (one Galileo primary code duration). All the simulated signals (BOC(1,1) and CBOCs) have been sampled at 12 MSamples/s, considering the front end operating under the Nyquist criterion (i.e., a 12 MHz two-sided bandwidth).
A common way to express the detection performance of an acquisition engine is by means of the so-called receiver operating characteristic (ROC) curves, where the detection probability is reported versus the false alarm probability at a specific signal-to-noise ratio. In this performance analysis, a C/N0 of 40 dB·Hz has been considered to obtain all the ROC curves, where C/N0 refers to the single-channel (pilot or data component) carrier-to-noise density ratio.
In Figure 7, a comparison among the BOC(1,1) modulation and two different CBOC implementations is depicted. The ROC curves for the three modulations are reported, varying from simulation to simulation the code search resolution, starting from a value of half a chip down to an eighth of a chip.
It is evident from this comparison that, when the code search step is reduced, higher detection rates can be achieved for the same false alarm probabilities with all the modulations. These trends can be explained by remembering that the larger the code phase error, the larger the correlation loss averaged in (13), and hence the lower the detection rate.
The sharper correlation functions of the CBOC implementations lead to a larger code loss contribution with respect to the BOC(1,1). However, as demonstrated in Figure 7, the degradation stemming from the different code losses of the BOC(1,1) and CBOCs can be reduced by decreasing the code phase step, which is in any case often necessary to guarantee the pull-in of the tracking stages.
Another way to assess the acquisition detection performance is given by graphs which depict the detection probability for a given false alarm probability versus the C/N0, as done in the comparison of Figure 8.
Here, the BOC(1,1) and CBOC modulations are compared considering a fixed false alarm probability of 10^-4, varying the C/N0 from a minimum of 30 up to 55 dB·Hz.
In this operative scenario, the C/N0 necessary to acquire the CBOCs with a detection probability of 0.9 is reported in Table 2, together with the degradation with respect to the case of a BOC(1,1) modulation.
Considering that MBOC has better self spectral separation coefficients (SSCs) than BOC(1,1), the intrasystem interference coming from satellites transmitting MBOC with a different PRN will be reduced.
In addition, the SSC between the MBOC and the C/A code is also lower; thus, in summary, the intersystem and intrasystem interference is also reduced [16]. The equivalent noise due to interference from other satellites is then around 0.1–0.2 dB lower for MBOC than for BOC(1,1), and the equivalent C/N0 is expected to be 0.1–0.2 dB·Hz better [16].
In addition, the interplex modulation product is around 4 dB lower with CBOC than with BOC(1,1); thus, the net effect is that, for the same transmitted power from the satellite, the received power at the ground increases by approximately another half a dB.
Therefore, the degradation due to the sharper correlation function is mostly compensated in all the cases by the increased power at the ground and by the better SSCs, and in some cases the CBOC should actually outperform the BOC(1,1), at least when the code search step is reduced to one quarter of a chip or less.
It should not be forgotten that one of the aims of the CBOC modulations is to maintain interoperability and compatibility with the existing systems. In fact, the BOC(1,1) contribution in the CBOC definition (cf. (3) and (4)) still assures a nonzero cross-correlation function between CBOCs and BOC(1,1).
Figure 9 reports the correlation functions obtained demodulating a CBOC signal with a local BOC(1,1) code.
This might be the working scenario of a BOC(1,1) legacy receiver processing the new optimized MBOC signal.
The two cross-correlation functions depicted in Figure 9 are practically identical.
They are mainly characterized by a reduction of the peak maximum, due to the cross loss given by the BOC(6,1) term, but the correlation slopes and widths are comparable to one another.
This is the case for the ROC curves reported in Figure 10, where the detection performance of the CBOC(6,1,1/11) demodulated by a BOC(1,1) replica is reported together with the performance of the standalone BOC(1,1) and CBOC(6,1,1/11).
The comparison is made considering only code delay steps of 0.25 and 0.125 chip. The detection probability versus the C/N0 for a false alarm probability of 10^-4 is reported in Figure 11.
It is interesting to notice that, when the search space has a code phase resolution of 0.25 chip, higher detection probabilities can be obtained by demodulating the CBOC(6,1,1/11) with a local BOC(1,1) replica. When the code step used in the search space is reduced to 0.125 chip, the solution of demodulating the CBOC(6,1,1/11) with a local BOC(1,1) no longer outperforms the pure CBOC(6,1,1/11) solution.
Similar considerations can be made for the CBOC(6,1,4/33) detection performance, which has been reported in Figures 12 and 13.
When the code step is reduced, the maximum peak reduction of the cross-correlation functions plays a more significant role in the total averaged loss, explaining the change of performance outlined in the previous comments.
6. Conclusions
On the basis of the modernization activities around the future Galileo E1 signals, this paper has focused on the analysis of the acquisition detection performance of two CBOC solutions, namely the CBOC(6,1,1/11) and the CBOC(6,1,4/33).
Such an activity, carried out with an acquisition engine implemented in software, is a key step for signal comparison, considering that the CBOC modulation, due to its sharper correlation function, might present some acquisition losses with respect to the BOC(1,1). Through simulations, it has been shown that, in practical operative conditions, thanks to the better SSCs deriving from the use of an MBOC spectrum and to the increased power level of the signal at the ground (which results in about 0.7 dB·Hz of improvement in the equivalent C/N0 seen by the receiver antennas), those losses can be compensated.
Moreover, this work has also shown that the CBOC candidate modulations still assure compatibility and interoperability with BOC(1,1) legacy receivers in terms of acquisition.
All these considerations, together with the major advantages in terms of better tracking performance and multipath rejection capabilities, clearly justify the selection of the CBOC as the implementation of the agreed MBOC.
References
1. United States-European Commission, “Agreement on the Promotion, Provision and Use of Galileo and GPS Satellite-Based Navigation Systems and Related Applications,” http://pnt.gov//public/docs/
2. “Joint Statement on Galileo and GPS Signal Optimization By the European Commission (EC) and the United States (US),” Bruxelles March 2006, http://useu.usmission.gov/Dossiers/Galileo_GPS/
3. “United States and the European Union announce final design for GPS-Galileo common civil signal,” http://europa.eu/rapid/pressReleasesAction.do?reference=IP/07/1180&format=HTML&aged=0&language=EN
&guiLanguage=fr .
4. G. W. Hein, J. A. Avila-Rodriguez, L. Ries, et al., “A candidate for the Galileo L1OS optimized signal,” in Proceedings of the Institute of Navigation (ION '05), Long Beach, Calif, USA, September
5. G. W. Hein, J. A. Avila-Rodriguez, S. Wallner, et al., “MBOC: the new optimized spreading modulation recommended for GALILEO L1 OS and GPS L1C,” in Proceedings of the IEEE/ION Position, Location,
and Navigation Symposium (PLANS '06), p. 883, San Diego, Calif, USA, April 2006.
6. D. Borio, M. Fantino, L. Lo Presti, and L. Camoriano, “Acquisition analysis for Galileo BOC modulated signals: theory and simulation,” in Proceedings of the European Navigation Conference (ENC
'06), Manchester, UK, May 2006.
7. H. Mathis, P. Flammant, and A. Thiel, “An analytic way to optimize the detector of a post-correlation FFT acquisition algorithm,” in Proceedings of the International Technical Meeting of the
Satellite Division of the Institute of Navigation (ION GPS/GNSS '03), p. 689, Portland, Ore, USA, September 2003.
8. M. Fantino, Study of architectures and algorithms for software Galileo receivers, Ph.D. dissertation, Electronic Department, Politecnico di Torino, Torino, Italy, May 2006.
9. GPS-Galileo Working Group A MBOC Recommendations, “Recommendations on L1 OS/L1C Optimization,” March 2006, http://www.galileoju.com/page3.cfm.
10. J. W. Betz, “Design and performance of code tracking for the GPS M code signal,” in Proceedings of the 13th International Technical Meeting of the Satellite Division of the Institute of
Navigation (ION GPS '00), Salt Lake City, Utah, USA, September 2000.
11. J. A. Avila-Rodriguez, S. Wallner, G. W. Hein, et al., “CBOC—an implementation of MBOC,” in Proceedings of the 1st CNES Workshop on Galileo Signals and Signal Processing, Tolouse, France, October
12. S. K. Shanmugam, R. Watson, J. Nielsen, and G. Lachapelle, “Differential signal processing schemes for enhanced GPS acquisition,” in Proceedings of the 18th International Technical Meeting of the
Satellite Division of the Institute of Navigation (ION GNSS '05), p. 212, Long Beach, Calif, USA, September 2005.
13. D. Borio, L. Camoriano, and L. Lo Presti, “Impact of the acquisition searching strategy on the detection and false alarm probabilities in a CDMA receiver,” in Proceedings of the IEEE/ION
Position, Location, and Navigation Symposium (PLANS '06), p. 1100, San Diego, Calif, USA, April 2006.
14. E. D. Kaplan and C. Hegarty, Understanding GPS: Principles and Applications, Mobile Communications Series, Artech House, Boston, Mass, USA, 2nd edition, 2006.
15. J. I. Marcum, “A statistical theory of target detection by pulsed radar,” IEEE Transactions on Information Theory, vol. 6, no. 2, 59 pages, 1947.
16. S. Wallner, G. W. Hein, and J. A. Avila-Rodriguez, “Interference computations between several GNSS systems,” in Proceedings of the ESA Workshop on Satellite Navigation User Equipment Technologies
(NAVITEC '06), Noordwijk, The Netherlands, December 2006.
What are IV flow rate calculation examples?
Examples. 1) Give a 50cc IVPB ... The doctor orders an IV to infuse at 125cc/hr. Calculate the flow rate using 10 drop/min IV tubing. select "x = milliters/hour" x = 125 calibration = 10. ... You may
be wondering why a second answer is displayed in the calculator. The first answer displays the ...
Example. IV flow rate calculations determine the flow rate of solution administered over a given time. This is normally expressed as mL/Hr or mL/min. However, you should also know how to determine
how much volume would be administered using time and rate.
How to calculate IV flow rates: Intravenous fluid must be given at a specific rate, neither too fast nor too slow. The specific rate may be measured as ml/hour, L/hour or drops/min.
Learn dosage calculations with this free tutorial complete with explanations, examples, and practice questions. Determining IV drop rate using the drop factor for an order based on volume per time
explained in this section.
Examples. 1) If an order was written to infuse a liter of IV fluid every 8 hours, at what rate would the IV pump be set for? Answer is 125 mL/hour. Using the calculator, select the time to equal
hours (it's already preselected).
Nursing electronic IV flow rate calculator to solve the regulator setting given total ordered volume and time. Calculations are for controllers, infusion pumps and syringe pumps.
Another flow rate calculation example. Question from Angel on Yahoo! Answers A 110lb woman is started on a nitroglycerin IV drip.The order is to administer the nitroglycerin at 5mcg/min.
Intravenous Flow Rates Quiz: 20 questions are presented to test your skills and enhance your learning of intravenous drug calculations. Instructions: Enter your answer in the box provided; Click
check to see if you have answered correctly;
This will give you your final IV flow rate. In this example, you would divide 100 mL by 46 minutes to get... How to Calculate IV Drip Rates ... Determining the correct IV dosage requires a series of
calculations. Other People Are Reading.
It is important that the IV flow rate is calculated correctly so that the patient receives the proper amount of medication ... The resulting number is the IV flow rate. For example, if the medicine
in Step 1 is to be given over a ... IV Flow Rate Calculation; Photo Credit Comstock/Comstock ...
IV drip rate calculations may require a few extra steps, however the basic process is the same as you learned how to calculate liquid medication dosages. ... Here are your free examples. You have a
bottle of Amiodarone with 900mg/500ml.
Flow rate is the amount of fluid flowing in the given time. ... Solved Examples. Question 1: ... air flow rate calculator. Flow Rate Calculator. water flow rate calculator. flow calculator. Related
Worksheets. Distance Rate Time Worksheet.
IV Flow Rates and Duration. IV Solutions may be administered using a gravity flow administration set. ... For example: There is 400 mL left in the IV bag at 0900 hrs. ... Flow rate calculations
follow in Quiz 5.
Learn how to calculate the intravenous flow rate value from this tutorial, given with the definition, formula and example. Related Calculator: >> IV Flow Rate Calculator
For example: Lets say you have to give 80mg of IV lasix STAT x1 so that your patient, ... Click here when you are ready to learn about IV drip rate calculations. Click here to go to the main Nursing
Math page.
... (IV) infusions, you need to know the flow rate, infusion time, and total volume. Fortunately, ... Part of the Medical Dosage Calculations For Dummies Cheat Sheet. Whenever you’re administering
intravenous ... For example, if you must administer 1 L (1,000 mL) ...
Online iv flow rate calculator to calculate the intravenous flow rates of fluids
... the flow rate is Example 11.14 The prescriber ordered Vancomycin 1.5 g in 200 ... (potassium chloride) in 100 mL of IV fluid at the rate of 10 mEq/h. What is the flow rate in microdrops per ...
chapter can be found on the Prentice Hall Dosage Calculation Tutor that accompanies this text ...
Calculating Flow Rates. In order to set up a IV, we need to know the flow rate; ... there is a certain amount of error introduced into the calculation - for example, I will have to round off to the
nearest whole drop, because I cannot count partial drops.
Basic IV flow rate calculations.
Nursing 1/11/11 cs G ASC Lab IV Drip Calculation* Practice Problems IV Formulas: 1. Infusion Rate (mL/hr) = Total Volume in mL (How much volume is
Vocabulary words for IV FLOW RATE CALCULATIONS. Includes studying games and tools such as flashcards. ... all macrodrip infusion set (above examples) have the same calibrations. either 10, 15, 0r 20
gtt/ml ...
calc iv flow rate. ... Volume to be infused (VTBI) x Drop factor (DF) / Time (minutes). Example: 1000 mL x 15 / 480 minutes = 31 gtt/min. b. mL/hr x DF / 60.
Example: 125 mL x 15 / 60 = 31 gtt/min. 2.
IV CALCULATION: To calculate rate using minutes:
Formula: Flow Rate (gtt/min) = Volume (mL) x Drop Factor (gtt/mL) / Time (min)
Example: Calculate the IV flow rate for 1200 mL of NS to be infused in 6 hours.
Formula: Flow Rate (mL/hr) = Volume (mL) / Time (hr)
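The two formulas above can be wrapped in a tiny calculator. This is an illustrative sketch (the function names are mine, and the 15 gtt/mL tubing in the drip example is an assumption); it reproduces the 1200 mL over 6 hours example and the 1000 mL / 480 min / 15 gtt/mL example quoted elsewhere on this page:

```python
def drip_rate_gtt_per_min(volume_ml, time_min, drop_factor_gtt_per_ml):
    """Manual IV drip rate: volume x drop factor / time in minutes."""
    return volume_ml * drop_factor_gtt_per_ml / time_min

def flow_rate_ml_per_hr(volume_ml, time_hr):
    """Pump setting: total volume divided by infusion time in hours."""
    return volume_ml / time_hr

# 1200 mL of NS over 6 hours, assuming 15 gtt/mL tubing for the drip rate:
pump_setting = flow_rate_ml_per_hr(1200, 6)      # 200 mL/hr
drip = drip_rate_gtt_per_min(1200, 6 * 60, 15)   # 50 gtt/min
```

Rounding to the nearest whole drop, 1000 mL with 15 gtt/mL tubing over 480 minutes gives the 31 gtt/min figure used in several of the examples above.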
The calculation is listed below. ... (below) describes: How to calculate IV flow rates: Intravenous fluid must be given at a specific rate, neither too fast nor too slow. ... Example: Calculate the
IV flow rate for 200 mL of 0.9% NaCl IV over 120 minutes.
For example, if three sheets of paper are provided, three sheets must be returned. ... Sample Problems for I.V. Drip Rate Calculations and Infusion Times 8. ... flow rate. Answers at the end of study
The nurse needs to know how long a volume of fluid in the IV bag at the current flow rate will last, ... Formula: Volume Flow rate = (Infusion) time Example: The nurse makes rounds and notes that the
current IV bag contains ... Extraneous information for calculation: Cleocin 150 mg IV q 8 ...
... Dopamine 5 mg/kg/day IV q4h in 120 mL of ... Patient weights 135 lbs Dilute the desired dose in 120 mL of D5W Calculate the flow rate in ml/hr for the. allnurses.com. ... and print this out and
tell your faculty we think it's an exceptionally bad idea to use examples that are ...
... for example, mg per minute, mcg per minute, or units per hour. Dimensional analysis can be used to calculate flow rate, or when the flow rate is known, ... Assess your skills in critical care IV
calculations in Quiz 8.
Figuring IV Flow Rate, ... For example, if you must administer 1 L (1,000 mL) ... The flow rate is 250 mL/hr. Common Conversion Factors in Medical Dosage Calculations. As a healthcare professional,
you have to convert patient weights, fluid volumes, medication weights, and more.
Flow Rate Calculation Many intravenous fluids are frequently ordered on the basis of mL/hr. The size of the drops is regulated by the size of the IV tubing. ... Example: One liter of Normal Saline is
charted over 9 hours. The drop factor is 15.
When calculating flow rate for IV PIGGYBACK do you use the drip rate on the secondary IV tubing even though you are using the dial on the primary tubing which has a different flow rate?
Flow Rate Calculations For IV IV Flow Rate Questions, IV Flow Rate Calculations, Flow Rate, Flow Rate Calculations for IV, IV Flow Rates by Gravity
Advanced Adult Intravenous Calculations. Examples: Client has APTT of: 40 70 88 IV Flow Rate Ordered “Per Minute” Critical Care IV calculations Determine electronic IV...
6-8 About IV Flow Rate 9 IV Flow Rate Problem Set 1 10 IV Flow Rate Problem Set 2 ... RULES FOR CALCULATIONS USING THE FLOW RATE FORMULA 1. ... Examples of IV Solutions: 5%DW means 5 parts dextrose :
Specifications and Flow Rate Calculations; Calculations; IV; WPCo Formula Rate Calculations 2009; RECAP calculations; Pipe Flow-Friction Factor Calculations with Excel;
This calculator determines the drip rate, watch count or flow rate setting for a manually controlled IV. The input parameters or variables are:
Critical Thinking: IV Calculations This course has been awarded three (3.0) ... the nurse must maintain competency in basic medication calculations. Calculating Flow Rate ... A variety of examples of
IV Drip Rate Calculator. Enter the volume to be infused: litres millilitres Enter the length of the infusion: hours minutes Enter the drop factor: drops / millilitre Drops ... No claims are made or
implied regarding the accuracy of calculations.
Vocabulary words for Calculations: Calculation of IV Flow Rate. ... What is the flow rate in milliliters per hour? ... We can’t access your microphone! Click the icon above to update your browser
permissions above and try again. Example: Reload the page to try again! Reload.
IV flow rate formula : by pacer: Tue Oct 14 2003 at 2:14:18: 1. Information required to determine IV flow rate. a) Total volume to be infused. b) Time period over which it is to be infused. c)
Properties of the administration set.
Drug dose calculations made easier: for simple IV fluid flow rates (no medications involved), flow rate (gtt/min) = drop factor (gtt/mL) × volume to be infused (mL) ÷ infusion time (min). Example (macrodrop tubing at 15 gtt/mL, 200 mL of solution over 2 hours): x gtt/min = 15 gtt/1 mL × 200 mL/2 hrs × 1 hr/60 min; solving for x gives x = 25 gtt/min.
A check of the pharmacy's calculation of IV solutions containing additives should be done prior to administration, and all machine flow rates should be double-checked for accuracy by calculating the rates manually (using the tubing's drop factor — in this example, 10 gtt/mL).
Streamlining the basic equation for titrated I.V. drips: once the mL/hour equivalent of the base dose is known (2.25 mL/hour in this example), scale it as appropriate. For example: 3 × 2.25 = 6.75 mL/hour (0.3 mcg/kg/minute); 4 × 2.25 = 9 mL/hour; and so on.
Example 1: Order: 3000 mL NS IV over 24 hrs; drip factor of tubing: 15 gtt/mL. (3000 mL × 15 gtt/mL) ÷ (24 hr × 60 min) = 45000 ÷ 1440 = 31.25 ≈ 31 gtt/min. Calculate drip rates to the nearest whole number.
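The drip-rate arithmetic above can be checked with a small Python helper (the function name is mine; a sketch for checking arithmetic, not clinical software):

```python
def drip_rate_gtt_per_min(volume_ml, time_hr, drop_factor):
    # gtt/min = (volume in mL x drop factor in gtt/mL) / infusion time in minutes
    return volume_ml * drop_factor / (time_hr * 60)

# Example 1 above: 3000 mL over 24 h with 15 gtt/mL tubing
print(drip_rate_gtt_per_min(3000, 24, 15))  # 31.25, rounded to 31 gtt/min
```

The same helper reproduces the earlier macrodrop example (200 mL over 2 h at 15 gtt/mL gives 25 gtt/min).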
A straightforward IV flow rate calculation in which no conversion is required: Order: give 500 mg of dopamine in 250 mL of D5W to infuse at 20 mg/h. Calculate the flow rate in mL/h.
The nurse is responsible for maintaining the proper flow rate while assuring the comfort and safety of the patient.
Special types of intravenous calculations: increase the IV rate 6 mL/hr q30min until labour is established. Example: set the pump at 6 mL/hr (1 milliunit/min = 6 mL/hr); patient weight is 70 kg.
Example (from Program LA_GESVX_EXAMPLE)
The results below are computed with
The call:
CALL LA_GESVX(A, B, X, FERR=FERR, BERR=BERR, &
RCOND=RCOND, RPVGRW=RPVGRW )
FERR, BERR, RCOND and RPVGRW on exit:
The forward and backward errors of the three solution vectors are:
The estimate of the reciprocal condition number of
The reciprocal pivot growth factor is 1.12500.
The solution of the system
Susan Blackford 2001-08-19
Magnetic Effects of Current and Magnetism Theory
Force on a conductor carrying a current placed in a magnetic field (F) –
F = i (l × B)
Or F = i l B sin θ
Where F = magnitude of the force on the conductor (in Newton)
i = current in ampere flowing through conductor
l = length of the conductor (in metres)
B = magnetic field at the place of conductor (in Wb/m^2 or Tesla)
and θ = angle between the magnetic field and the length of the conductor
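A quick numerical check of F = i l B sin θ (the values below are illustrative, not from the notes):

```python
import math

def force_on_conductor(i, l, B, theta_deg):
    # F = i * l * B * sin(theta), in newtons
    return i * l * B * math.sin(math.radians(theta_deg))

# illustrative values: 5 A through a 0.5 m conductor in a 0.2 T field at 90 degrees
print(force_on_conductor(5, 0.5, 0.2, 90))  # ~0.5 N
```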
Force on a moving charge in a magnetic field –
F = Q (v x B)
Or F = QvB sin θ
Where Q = charge (in coulomb)
v = velocity of the moving charge (in metres/second)
Lorentz equation for force (F) between moving charges:
F = q E + q ( v x B)
Where E and B are the electric and magnetic field strengths respectively at the location of the charge, and v is the velocity of the moving charge q.
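The magnitude of the Lorentz force can be sketched numerically (a scalar simplification that assumes the electric and magnetic forces act along the same line; values are illustrative):

```python
import math

def lorentz_force_magnitude(q, E, v, B, theta_deg=90.0):
    # scalar simplification of F = qE + q(v x B): assumes both force
    # components act along the same line
    return q * E + q * v * B * math.sin(math.radians(theta_deg))

# illustrative: q = 1.6e-19 C, no electric field, v = 1e6 m/s across a 0.01 T field
print(lorentz_force_magnitude(1.6e-19, 0.0, 1e6, 0.01))  # ~1.6e-15 N
```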
Ampere's swimming rule – It states that if a person is imagined to swim above the conductor with his face downwards and in the direction of the current, then the north pole of the needle placed below the conductor gets deflected towards his left hand.
Maxwell's cork screw rule – If a right-handed cork screw is imagined to be rotated in such a direction that the tip of the screw advances in the direction of the current, then the direction of rotation of the head of the screw gives the direction of the magnetic field.
Right hand rule – If a conductor carrying current is grasped in the right hand such that the thumb points in the direction of the current, then the direction of the curl of the rest of fingers gives
the directions of the magnetic field.
Biot-Savart Law (or Laplace's Law) – It states that the magnetic field dB at a point due to a current element of length dl is given by –
dB = (μ/4π) · (i dl sin θ)/r²
where i = current flowing in the conductor
θ = angle between the current element and the line joining the element to the point
r = distance of the point from the current element
μ = absolute permeability of the medium
Magnetic field due to a straight infinite conductor –
B = μ[0] i / (2π r)
Where i = current in ampere and r = perpendicular distance of the point from the conductor
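The standard field B = μ₀i/(2πr) for an infinite straight wire can be evaluated directly (illustrative values):

```python
import math

MU0 = 4e-7 * math.pi  # permeability of free space, T*m/A

def field_of_infinite_wire(i, r):
    # B = mu0 * i / (2 * pi * r)
    return MU0 * i / (2 * math.pi * r)

# illustrative: 10 A at a perpendicular distance of 5 cm
print(field_of_infinite_wire(10, 0.05))  # ~4e-05 T
```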
Force between two parallel straight wires carrying currents –
F = μ[0] i[1] i[2] l / (2π r)
Where i[1] & i[2] = currents flowing in the two conductors respectively
r = distance between the two conductors
and l = length of the conductor under consideration
Unit of current (ampere) – The expression for the force helps us to define the ampere as follows: "the electric current which, when flowing in each of two infinitely long parallel straight conductors of negligible cross-section placed 1 m apart in empty space, makes them exert a force of 2 x 10^–7 newtons per metre of length on each other".
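The 2 × 10⁻⁷ N/m figure in the definition follows directly from the parallel-wire formula with i₁ = i₂ = 1 A and r = 1 m:

```python
import math

MU0 = 4e-7 * math.pi  # permeability of free space, T*m/A

def force_per_metre(i1, i2, r):
    # F/l = mu0 * i1 * i2 / (2 * pi * r)
    return MU0 * i1 * i2 / (2 * math.pi * r)

# two 1 A currents, 1 m apart, as in the definition of the ampere
print(force_per_metre(1, 1, 1))  # ~2e-07 N per metre
```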
Magnetic filed due to a circular coil –
(i) At the centre of the coil –
B = μ[0] n i / (2a)
(ii) At the axis of the coil –
B = μ[0] n i a² / (2(a² + x²)^(3/2))
Where n = number of turns in the coil
i = current flowing in the coil
a = radius of the circular coil
And x = distance of the point on the axis, from the centre of the coil.
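A consistency check on the two coil formulas: at x = 0 the on-axis expression must reduce to the centre expression (values are illustrative):

```python
import math

MU0 = 4e-7 * math.pi  # permeability of free space, T*m/A

def coil_field_centre(n, i, a):
    # B = mu0 * n * i / (2a) at the centre of the coil
    return MU0 * n * i / (2 * a)

def coil_field_on_axis(n, i, a, x):
    # B = mu0 * n * i * a^2 / (2 * (a^2 + x^2)^(3/2)) on the axis
    return MU0 * n * i * a**2 / (2 * (a**2 + x**2) ** 1.5)

# at x = 0 the axial expression reduces to the centre expression
print(math.isclose(coil_field_on_axis(100, 2, 0.1, 0.0),
                   coil_field_centre(100, 2, 0.1)))  # True
```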
Magnetic field due to a long solenoid –
(i) At any point on the axis –
B = (μ[0] n i / 2)(cos θ[1] + cos θ[2])
(ii) At any point well inside a long solenoid –
B = μ[0] n i
(iii) At the ends of the solenoid –
B = μ[0] n i / 2
Where n = number of turns per unit length
i = current in ampere
θ[1] & θ[2] = angles subtended at the point of consideration by the first and last turns of the solenoid.
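The interior and end-field solenoid values can be evaluated directly; by construction the end field is half the interior field (illustrative values):

```python
import math

MU0 = 4e-7 * math.pi  # permeability of free space, T*m/A

def solenoid_inside(n_per_m, i):
    # B = mu0 * n * i well inside a long solenoid
    return MU0 * n_per_m * i

def solenoid_end(n_per_m, i):
    # the field at either end is half the interior value
    return solenoid_inside(n_per_m, i) / 2

# illustrative: 1000 turns per metre carrying 2 A
print(solenoid_inside(1000, 2))  # ~2.51e-03 T
```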
Moving coil galvanometer – Current (i) in a moving coil galvanometer is –
i = K Φ
where K = C / (N B A) = Instrument constant
C = Torsional rigidity of the suspension wire
N = Number of turns in the coil
B = Magnetic field induction due to the permanent magnet
A = Area of the coil
Φ = Deflection of the coil in radians
Cyclotron – is a device used to accelerate charged particles to very high energies.
Cyclotron radius or gyro-radius (r) –
r = m v / (q B)
where m = mass of the charged particle
v = component of velocity of the charged particle perpendicular to the magnetic field.
B = magnetic field
q = charge on the particle
Cyclotron Frequency or gyro frequency (f) –
f = q B / (2π m)
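Both cyclotron formulas can be evaluated numerically; note the frequency is independent of the particle's speed (the proton values below are illustrative, rounded constants):

```python
import math

def gyro_radius(m, v_perp, q, B):
    # r = m * v_perp / (q * B)
    return m * v_perp / (q * B)

def gyro_frequency(q, B, m):
    # f = q * B / (2 * pi * m), independent of the particle's speed
    return q * B / (2 * math.pi * m)

# illustrative: a proton (m ~ 1.67e-27 kg, q ~ 1.6e-19 C) in a 1 T field
print(gyro_frequency(1.6e-19, 1.0, 1.67e-27))  # ~1.5e7 Hz
```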
Magnetic Poles – are the parts of the magnet towards which the external magnetizing force tends to converge (south pole) or from which it tends to diverge (north pole).
Coulomb’s law of force – States that the force exerted by an isolated magnetic pole of strength m[1] on another pole of strength m[2] in free space is directly proportional to the product of their
strengths, and inversely proportional to the square of the distance r between them.
F = K m[1] m[2] / r², where K = μ[0]/4π = 10^–7 weber per ampere-metre
and μ[0] = permeability of the free space.
Unit magnetic pole – is that imaginary pole which when placed in air at a distance of one unit length from an equal and a similar pole repels it with a force of one unit.
Pole strength (m) – The force on the poles per unit field of induction is called the pole strength.
Magnetic field – The space around a magnet (or a current carrying conductor) in which the magnetic effect can be felt is called the magnetic field.
Intensity of Magnetic field (H) – It is numerically equal to the force which a unit north pole would experience if placed at that point, it being assumed that unit pole itself does not affect the
magnetic field. The S. I. unit of H is ampere per metre (A/m).
Magnetic field vector (B) – The magnetic field vector (also called the magnetic flux density or the magnetic induction) B is related to H as B = μ[0] H. Its S.I. unit is weber per square metre, or tesla (T).
Magnetic dipole moment (M) – It is the moment of the couple acting on the magnet when placed at right angles to a uniform magnetic field of unit intensity. Its value is:
M = m x 2l = 2ml
Magnetic intensity due to a short bar magnet –
(i) End on position (or Tan-A position) –
B[a] = (μ[0]/4π) · 2M / r³
Where B[a] = Magnetic intensity at a point distant 'r' from the centre and on the axis of the magnet.
(ii) Broad side on position (or Tan-B position) –
B[b] = (μ[0]/4π) · M / r³
Where B[b] = Magnetic intensity at a point distant 'r' from the centre of the magnet and on the equatorial line of the magnet.
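A quick check of the two short-magnet positions: for the same moment and distance, the end-on field is exactly twice the broadside field (values are illustrative):

```python
K = 1e-7  # mu0 / (4 * pi) in SI units

def end_on(M, r):
    # Tan-A position: B_a = (mu0 / 4pi) * 2M / r^3
    return K * 2 * M / r**3

def broadside(M, r):
    # Tan-B position: B_b = (mu0 / 4pi) * M / r^3
    return K * M / r**3

# the end-on field is twice the broadside field at equal M and r
print(end_on(1.5, 0.2) / broadside(1.5, 0.2))  # 2.0
```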
Magnetic meridian – If a magnetic needle is hanged freely from its centre of gravity, then the vertical plane passing through the axis of the magnetic needle, is known as the magnetic meridian.
Angle of declination – At any place, the acute angle in between magnetic meridian and the geographical meridian is known as the angle of declination.
Angle of Dip – Is the angle at any place between the horizontal direction and the direction of the Earth's magnetic field in the magnetic meridian. The angle of dip (θ) is given by –
tan θ = V / H
Where V = Vertical component of Earth’s magnetic field
and H = Horizontal component of Earth’s magnetic field
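The dip relation tan θ = V/H can be inverted numerically; equal vertical and horizontal components give a dip of 45° (values are illustrative):

```python
import math

def dip_angle_deg(V, H):
    # tan(theta) = V / H  =>  theta = atan2(V, H)
    return math.degrees(math.atan2(V, H))

# equal vertical and horizontal components give a 45 degree dip
print(dip_angle_deg(30.0, 30.0))  # ~45 degrees
```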
Torque (τ) acting on a Magnet in a Uniform Magnetic field (B)
τ = 2ml B sin θ = MB sin θ
where m = pole strength of the magnet
2l = effective length of the magnet, inclined at angle θ to the field B
and M = 2ml = magnetic dipole moment.
Pushing Palm Rule for determining the direction of the force on moving charged particles entering a uniform magnetic field B in a direction perpendicular to B –
Use the left palm for negative charges and the right palm for positive charges. Keep the palm straight and the thumb outstretched. If the fingers point in the direction of B and the outstretched thumb in the direction of the velocity v of the moving charged particle, the pushing direction of the palm gives the direction of the force F acting on the charged particle; obviously F · B = 0 and F · v = 0.
The "Full" Network
The “full” network isn’t.
Mann et al 2008 describe a “full” network consisting of 1209 proxies:
We made use of a multiple proxy ("multiproxy") database consisting of a diverse (1,209) set of annually (1,158) and decadally (51) resolved proxy series … Reconstructions were performed based on both the "full" proxy data network and on a "screened" network (Table S1)
The locations of the "full" network are illustrated in their Figure 1, captioned "spatial distribution of full (A) and screened (B)" network. Their Figure S7 compares "long-term CPS NH land (a) and EIV NH land plus ocean (full global proxy network) (b) with and without using tree-ring data". Their Figure S8 compares "long-term CPS NH land (a) and EIV NH land plus ocean (b) reconstructions (full global proxy network)". Their Figure S11 shows "plots of the NH CPS Land reconstruction (A) and EIV Land plus ocean (full global proxy network)".
Table S1 shows breakdown counts by dendro and non-dendro, by annual-resolved and total, by NH, SH and Global. The top portion of Table S1 is excerpted here:
The problem is that their actual calculations appear almost certainly not to have used the network with the count statistics shown in Table S1, but a truncated version of that network, with their "full" network using only 663 (560 NH) of the 1209 sites (1036 NH). Jean S has run the gridproxy.m program using the FULL option and so we have been able to "prove" this for this part of the program (CPS) through an actual run of the program. (I'm not sure that the EIV portion of the program can be run with the present code.)
First, here is a table showing counts that compare to the top-right counts (NH all) of Table S1. The column headed Matlab shows the counts obtained from the gridproxy.m program, while the column headed "Reported" shows the corresponding column from Table S1. (I'll explain the calculation of the other 3 columns below; I calculated them from first principles. For now, you can see that my NH total in the third column matches the counts from gridproxy.m exactly in nearly every case and, in no case, differs by more than 1.)
│NH Dendro │NH Other│NH Total│Matlab│Reported│
│421 │139 │560 │560 │1036 │
│297 │129 │426 │427 │700 │
│162 │124 │286 │287 │408 │
│90 │120 │210 │211 │277 │
│62 │39 │101 │102 │151 │
│28 │34 │62 │62 │102 │
│19 │30 │49 │49 │77 │
│11 │30 │41 │41 │63 │
│6 │24 │30 │30 │46 │
│4 │21 │25 │25 │40 │
│4 │20 │24 │24 │37 │
│4 │20 │24 │24 │34 │
│4 │19 │23 │23 │33 │
│4 │18 │22 │22 │28 │
│4 │18 │22 │22 │28 │
│3 │18 │21 │21 │27 │
│2 │17 │19 │19 │24 │
An Unreported Screening Procedure on the Full Network
The difference appears to result from an unreported screening procedure on the “full” network.
Mann's SI does describe a screening procedure on his "screened" network:

Screening Procedure. To pass screening, a series was required to exhibit a statistically significant (P < 0.10) correlation with either one of the two closest instrumental surface temperature grid points over the calibration interval (although see discussion below about the influence of temporal autocorrelation, which reduces the effective criterion to roughly P < 0.13 in the case of correlations at the annual time scales)
The implementation of a procedure along these lines can be discerned in gridproxy.m in the case (confusingly) entitled "WHOLE", which here means screened. At one stage of the program, correlation hurdles are defined for the various proxy classes (using the proxy class codes to set the hurdle). "Annual" and "decadally" resolved proxies have different benchmarks. Documentary and speleothem series (the 5000 and 6000 classes) have somewhat different benchmarks, and corals (the 7000 class) are checked for negative correlation. The relevant lines are:
case 'WHOLE'
ia=4; %% annually resolved
ib=5; %% decadally resolved
corra=0.106; %% 9000,8000,7500,4000,3000,2000
corra2=-0.106; %% 7000
corra3=0.136; %% 6000, 5000
corrb=0.337; %% same as above +1
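As a rough sanity check on where the 0.106 figure might come from, a large-sample normal approximation to a one-sided p = 0.10 correlation test reproduces it if one assumes roughly 146 annual calibration values (the calibration length and the exact test are my assumptions; they are not stated in this excerpt):

```python
from statistics import NormalDist

def critical_r_one_sided(p, n):
    # large-sample normal approximation: r_crit ~ z_(1-p) / sqrt(n)
    z = NormalDist().inv_cdf(1 - p)
    return z / n ** 0.5

# with ~146 annual calibration values, one-sided p = 0.10 gives roughly 0.106
print(round(critical_r_one_sided(0.10, 146), 3))  # 0.106
```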
Later in the program, they test the observed correlation for each proxy against the relevant benchmark (their code is very inelegant; they speak Matlab with a heavy Fortran accent). For the
“screened” network, corra is 0.106.
for i=1:m1-1 % This is for searching annually-resolved proxies
if (z(3,i)==9000 | z(3,i)==8000 | z(3,i)==7500 | z(3,i)==4000 | z(3,i)==3000 | z(3,i)==2000) &…
x(kk,i+1)>-99999 & x(kkk,i+1)>-99999 &…
z(1,i)>=ilon1 & z(1,i)<=ilon2 & z(2,i)>=ilat1 & z(2,i)<=ilat2 & z(ia,i)>=corra
For corals and speleothems, they do it a little differently, testing the absolute value of the correlation:
… z(1,i)>=ilon1 & z(1,i)<=ilon2 & z(2,i)>=ilat1 & z(2,i)<=ilat2 & abs(z(ia,i))>=corra3
Now watch what happens in the “FULL” case. Here’s how they set their hurdles. Can you see what happens?
case 'FULL_'
If a series has a negative correlation to temperature (as about half of the tree ring series do), the test shown above will lead to the exclusion of this series. So this test either intentionally or
unintentionally eliminates all the series with negative correlations to gridcell temperature – thus the reduction in NH count from 1036 to 560. Jean S got this count from running gridproxy.m and I
got this number by independently comparing reported correlations to a benchmark of 0. So while it’s not impossible that I’ve misinterpreted something here, until we see some alternate explanation, I
think that this interpretation should be regarded as valid.
Now this creates problems – both with the statistical significance of what was reported and with the accuracy of the methodological description in the PNAS article, both familiar themes in this
We’ve discussed on many occasions that you can “get” a HS merely from picking upward-trending series from networks of red noise (David Stockwell had a good note on this phenomenon on his blog a
couple of years ago. My first experiments of this type were on the cherry picks in the original Jacoby network.) Since gridcell temperature series from the mid-19th century to late 20th century by
and large have an upward trend, this test is equivalent to a simple pick operation. Whatever the merits of effectively screening the data set for upward trending data, this affects statistical
benchmarks, as the null is applying the same sort of operation to red noise networks.
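To illustrate the point about red-noise benchmarks, here is a toy simulation (my own sketch, not the authors' code): generate AR(1) "proxies", screen them against an upward-trending pseudo-instrumental series at the same 0.106 hurdle, and the survivors' composite acquires an uptick over the calibration window even though the data are pure noise.

```python
import random
import statistics

random.seed(0)
N, YEARS, CAL = 200, 150, 50            # proxies, series length, calibration window
target = [t / CAL for t in range(CAL)]  # upward-trending pseudo "instrumental" series

def red_noise(n, rho=0.5):
    # simple AR(1) red noise
    x, out = 0.0, []
    for _ in range(n):
        x = rho * x + random.gauss(0, 1)
        out.append(x)
    return out

def corr(a, b):
    # Pearson correlation
    ma, mb = statistics.mean(a), statistics.mean(b)
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    den = (sum((x - ma) ** 2 for x in a) * sum((y - mb) ** 2 for y in b)) ** 0.5
    return num / den

proxies = [red_noise(YEARS) for _ in range(N)]
# the FULL-style screen: keep only series positively correlated with the target
kept = [p for p in proxies if corr(p[-CAL:], target) >= 0.106]

# the screened composite acquires an uptick over the calibration window
early = statistics.mean(statistics.mean(p[-CAL:-CAL + 10]) for p in kept)
late = statistics.mean(statistics.mean(p[-10:]) for p in kept)
print(len(kept), late > early)
```

The point is not the particular numbers but the selection effect: screening pure noise against a trending target manufactures a trending composite, which is why significance benchmarks must apply the same screen to the null.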
Secondly, whatever one may think of the Mannian PC algorithm (and defenders are reduced to a pretty hard core right now), few people defend the non-reporting of their modification of the PC algorithm
and their failure to assess the statistical significance and properties of these changes. (Quite aside from whether they can “get” a HS using some other method.) We’re into something pretty similar
here. Again there seems to be an unreported screening operation on the “full” network. The value of archiving code is once again demonstrated as the unreported operation can only be discerned through
parsing code. Because Mann et al commendably archived code concurrent with publication, it hasn’t taken years to identify the issue. (I note that the precise identification of the problem with
Mannian PCs also was identified when a little bit of code was inadvertently left in a directory identified for the public after MM03.) So code really does help pin down problems.
41 Comments
1. Steve: It looks like you forgot to close a tag in the second code listing above:
– Sinan
□ Re: Sinan Unur (#1),
He also forgot to close a [b] tag in html.
2. There seems to be an interaction between blockquote and strong in WordPress. I did it right but WordPress removed the end bold inside the blockquote, so I tried to manually fix it.
3. If the period for testing for correlation with temperature was something like 500 years, then maybe those with no correlation are “bad” proxies without it being potentially spurious (a “mining
for hockey sticks” exercise), but even then negative correlations would be just as good as positive (just be careful with the sign). Using R2 they wouldn’t have made this error of thinking a
negative correlation means no predictive power.
4. Again this is great detective work.
I guess I am bemused by the decision to use r> |.106| as the criteria for including a proxy series. Why not r > |.206|? Since r> |.106| leads to an r2 of 1%, which means that there are
innumerable other factors – some likely to be more potent – contributing to the proxy metric. Without specifying these how does one go about reconstructing the past temperature record without
necessarily including error bars that make the whole exercise problematic. I mean if you had two thermometers in the same grid cell that caliberated r=.10 would you use either of them? Or do I
have the wrong end of the stick here?
5. #5. Also recall the pick two procedure for their calculation of correlations.
The other aspect to this is reverse engineering. It’s hard to rationally justify the purposes of many of these methods, but they all end up affecting weights. Some preliminary calcs by Jean S
indicate that the weights are highly concentrated on a few proxies. Lucy and Charlie Brown one more time. So the issue will be whether this few proxies (bristlecones, Finnish sediments with Mann
orientation) are wonder thermometers. Same old same old.
6. If one assumes the missing ‘abs’ to be a bug, rather than a feature – what is the effect on the results of adding the abs to:
z(1,i)>=ilon1 & z(1,i)<=ilon2 & z(2,i)>=ilat1 & z(2,i)<=ilat2 & z(ia,i)>=corra
in the non-corals/speleotherms case?
7. Steve,
I tried sending you an email about this, but it seems to have become lost. There are quite a few dead links to climate2003.com on the CA homepage.
Thought you might like to know for cleanliness purposes.
Steve: That’s an old website. I got tired of paying for it when I had space at CA but need to spend some time posting the pages again under a new alter ego. So much to do, so little time.
8. If you want to prove that the mean climate in the past 1000 years was without structure, picking proxies with a r=.1 with climate is ideal. Take a bunch of proxies that explain barely more than
1% of the variation in temperature and average them and you get 0 for each point in time…a nice flat hockey stick handle.
9. #7. Hard to say. I don’t think that anyone has got the next portion to work yet.
There are any number of calculations which can be used to show the pointlessness of certain operations. At some point, you’d think that people in the trade have to take some responsibility for
underwriting subprime proxies.
10. Also, what is with all the magic numbers in the code, like the following:
if (z(3,i)==9000 | z(3,i)==8000 | z(3,i)==7500 | z(3,i)==4000 | z(3,i)==3000 | z(3,i)==2000) …
Whoever wrote that needs to take a CS101 refresher.
Steve: those are codes for the types of proxies. It’s a bit inelegant but I find their Fortran style do-loops in preference to logical vectors and functions to be more annoying,
11. I have to mention that it is not just the picking process that results in creation of the hockey stick. It is the amplification process, an ideal scale and offset match the temperatures in recent
times to the best possible extent which deamplifies historic data.
A simple picking process amplifies current data while averaging the remainder. If there is a real historic signal, it is on the correct scale. I demonstrated this effect on this post. The data
was not amplified, just picked.
Amplification of the sorted data results in a statistical demagnification and offsetting of historic data relative to present data. As I showed in this post.
It is very important that the discussion doesn’t become, why not use better or higher correlation values. Any noise created by processes other than the one you are looking for will cause this
effect. Higher correlation sorting of a set of data gives the same shaped result as low correlation with a noisier graph due to less series passing calibration.
Jeff, here are some posts by David Stockwell along similar lines. http://landshape.org/enm/2006/03/ (the pictures seem to be lost; I'll check with him about this.) This was in part pursuant to passim remarks here. I'll re-read your post and try to understand the interaction.
An elementary exercise that grad students have to go through before collecting data is to see whether the sample size they plan to collect, given the expected variability, will enable them to detect an effect of a given size (a power analysis). With a correlation of .1 as a cutoff, it is unlikely that any signal (such as the MWP) would be detectable, no matter how many proxies are tossed in the mix.
So this test either intentionally or unintentionally eliminates all the series with negative correlations to gridcell temperature
It is intentional. They state in the main text:
Where the sign of the correlation could a priori be specified (positive for tree-ring data, ice-core oxygen isotopes, lake sediments, and historical documents, and negative for coral
oxygen-isotope records), a one-sided significance criterion was used. Otherwise, a two-sided significance criterion was used.
So the first if (and the corresponding for the “decadal” proxies later) tests for positive correlation. This includes categories 9000 (tree-width), 8000 (ice-core oxygen isotopes? – yes), 7500
(MXD), 4000 (lake sediments), 3000 (composite??? – SM: 3000 – a few temperature reconstructions from tree rings unaccountably not classed with dendro; 3001- ocean sediments) and 2000
(Luterbacher). It is worth mentioning that four Lake Korttajärvi series are in the category 4000.
The second if
if ( z(3,i)==7000) &…
x(kk,i+1)>-99999 & x(kkk,i+1)>-99999 &…
z(1,i)>=ilon1 & z(1,i)<=ilon2 & z(2,i)>=ilat1 & z(2,i)<=ilat2 & z(ia,i)<=corra2
tests for negative correlation and is design only for the category 7000 (coral oxygen-isotope records?).
Now the third if for the categories 6000 (speleothems) and 5000 (what is this category??? [SM- documentary, 5000 mostly precipitation, 5001 mostly Chinese ] , it sure looks like the historical
documents category… maybe 5000 and 4000 should be switched??) is the one for series with a priori unknown sign of the correlation. The interesting thing is the line right after
So although the sign of the correlation is not known, it is known a priori that the series with negative correlation should be flipped!
□ Re: Jean S (#15),
It is intentional. They state in the main text:
Where the sign of the correlation could a priori be specified (positive for tree-ring data, ice-core oxygen isotopes, lake sediments, and historical documents, and negative for coral
oxygen-isotope records), a one-sided significance criterion was used. Otherwise, a two-sided significance criterion was used.
Steve, does this mean they expected a positive correlation for the Lake Korttajarvi sediments you mentioned in your “It’s Saturday Night Live” post? Are lake sediment studies normally
positively correlated and the Korttajarvi sediments involved a different kind of data collection procedure which actually gives a negative correlation?
Steve: I can’t imagine that all the possible lake sediment parameters are known a priori to be positive.
15. I’ve made a few inline comments on #15. In my post, I said that 5000,6000 were speleothems and corals. Brain cramp. Corals are 7000 series, 6000 is speleothem, 5000 documentary.
I’m still not convinced that they intentionally shrunk the “full” network without reporting the screen and shrink. While their stats and commentary on the “full” network are incorrect, I suspect
that this particular screen was inadvertent; otherwise, they are making intentional misrepresentation – something that I’m reluctant to think on these facts.
16. I’m sure that you’ve noticed that they use Mannian smoothing – wouldn’t it be nice to encounter one method that was used in conventional literature.
The function lowpassmin cycles through end point padding alternatives. I’ve experimented with 10-point Butterworth filters using the R-function from signal package and noticed very serious end
point ringing for the Socotra and Dongge series. I’m not used to Butterworth filters and am not over-interpreting this, but the effect is quite large in a first pass
17. Steve
Read some EE Texts about Butterworth filters. That is where they originate. Very nice but they have some spurious outputs that are known in the EE world.
18. Butterworth basically just means “no ripple”. i.e. the frequency response falls off from a shelf into a nice smooth curve with no wiggling around. There are filters with steeper falloff but they
have ripple in the “shelf” portion in exchange. There are also filters with less phase shift but they also have ripple.
Regarding the coding style: I think what Soronel was trying to say is that a good programmer uses named constants to make the code readable. In C you could do:
#define CODE_DOCUMENTARY 5000
if( type == CODE_DOCUMENTARY ) …
and so on.
19. Here are some graphs comparing the three primary types of low pass filter in electronics – Butterworth, Bessel and Chebyshev, showing the strengths and weaknesses of each.
20. Electronics texts are not very helpful on the impact of these filters on noisy stochastic series, which are a different design issue.
The issue that I'm looking at is end effects, where the Butterworth filter seems to really produce some spurious ringing effects at the ends of some series, e.g. Dongge. But I'm also not used to these filters and there may be some differences between the R and Matlab implementations that are producing an artifact. Jean S has done a Matlab calc on Dongge but I'm stuck for now on transposing it to R.
Given that the smoothing is probably not a good idea in the first place(see MAtt Briggs), spending time on potential end point issues resulting from Mannian smoothing is a very unedifying
□ Re: Steve McIntyre (#21),
The issue that I’m looking at is end effects, where the Butterworth filter seems to really produce some spurious ringing effects at the ends of some series, e.g. Dongge.
I assume, from this comment, that you are interested in the time domain response of the Butterworth filter and not the frequency domain that previous posters have described. It has been a long
time since my circuit lectures, but your comment seems to be describing the impulse response of the filter. Perhaps more knowledgeable people can comment, but this is an inherent part of the
filter design. The ringing will change depending on the shape of the signal. A signal with an abrupt cutoff will have a perceptible ring. This can be heard on the audible ringing tone of a
telephone system. The tone will be low-pass filtered after D-to-A conversion. Depending on the state of the signal when it is cut off, a pop (ringing) may be heard. So the signal sounds like
□ Re: Steve McIntyre (#21),
Electronics texts are not very helpful on the impact of these filters on noisy stochastic series, which are a different design issue.
If the end-padding method is fixed, RomanM’s impulse response matrix tells all we need ( assuming the processes in question are Gaussian.. )
21. I think “The Scientist and Engineer’s Guide to Digital Signal Processing” might be more along the lines you’re looking for. The whole book is free to download. Chapter 20, “Chebyshev Filters”, is
about Chebyshev and Butterworth filters.
22. From an earth sciences perspective, a more important, underlying question is whether signal processing/filtering methods developed for electronics are, in fact, applicable to climatological data
and analysis, both for real data and for proxies.
Regarding the coding style: I think what Soronel was trying to say is that a good programmer uses named constants to make the code readable. In C you could do:
#define CODE_DOCUMENTARY 5000
if( type == CODE_DOCUMENTARY ) …
Exactly, I’ve had several jobs where dropping the numbers directly into the test would get you a serious dressing down and people who couldn’t learn got canned.
24. Steve McI writes (#6),
Also recall the pick two procedure for their calculation of correlations.
Last weekend I tried implementing the Mannian Pick Two procedure on my golf game: the course wasn’t busy, so I tried teeing off twice on each hole and playing the better of the two drives. This
gave me a much better score than the stuffy conventional rules. The PGA should take note!
25. Can anyone give an overview of the “5000 – Documentary” proxies? Because the central European historians are pretty emphatic about the Medieval Warm Period, and it would be interesting to see if
any of that information is in here in the first place.
From tracking wheat harvests through birthrates and taxes, there’s a fair chunk of records.
26. Mr. McIntyre : I think Don is right and you are indeed dealing with the impulse response properties of the filter.
I was just trying to help by describing what the term “Butterworth” actually means. What it describes is a frequency domain property (maximal passband flatness and no ripple). I realize this
isn’t very helpful statistically but thought it would be good to at least know what the term means.
I am fairly sure that you are both right that a Butterworth filter will exhibit significant ringing after a step change. It’s arguable what the correct method is for using a Butterworth on a
series which has finite length. The only 100% valid approach is to truncate the smoothed result by the filter width, so that all your output values involve only valid input sample values.
If you’re not going to do that you’d have to, at the very least, replicate the end values to avoid injecting high frequencies into the filter near the ends. Step changes are not what this type of
filter is designed to handle at all. But even that isn’t really a very clever thing to do, since you’re affecting the trend by having a dramatic loss of all frequency components of the signal
near the ends. Neither is linear interpolation very clever, that fixes the first derivative but not the second derivative, etc.
So really where does it stop? You’d basically need a way to generate data at the ends which has the same spectral properties of the data itself but won’t affect the trend and append that. I’m
still not sure that’s a totally valid solution but it’s the best I can come up with. I’d truncate it.
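To make the padding question concrete, here is a toy sketch (mine, not from the study): a plain centered moving average rather than a 10-point Butterworth IIR filter, but it shows how the end-padding rule alone changes the smoothed endpoints of a series.

```python
def smooth(series, half_width, pad):
    """Centered moving average of width 2*half_width + 1.

    pad = "repeat" replicates the end values past the boundary;
    pad = "zero" pads with zeros, injecting an artificial step at the ends.
    """
    n = len(series)

    def value(i):
        if 0 <= i < n:
            return series[i]
        # Outside the data: apply the chosen padding rule.
        return series[0 if i < 0 else n - 1] if pad == "repeat" else 0.0

    width = 2 * half_width + 1
    return [sum(value(i + k) for k in range(-half_width, half_width + 1)) / width
            for i in range(n)]

flat = [1.0] * 12
print(smooth(flat, 2, "repeat")[0])  # 1.0 – end value preserved
print(smooth(flat, 2, "zero")[0])    # 0.6 – zero padding drags the end down
```

A higher-order IIR filter like a Butterworth amplifies this effect, since the step injected by a poor padding choice excites the filter’s ringing.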
27. Ah-hah:
Here they state:
Butterworth maximally flat magnitude
Some overshoot and ringing in step response.
They go on to say that if you want minimal ringing after a step change you should use a Thomson filter. Maybe that’s the better way to go in this case. So if you’re going to do your own study,
you might want to look into that. Won’t help for pre-existing studies using Butterworth though.
28. Me, I don’t “want” to do anything with this particular filter. It’s just that it’s embedded into this study. (Actually Mann used it before, we just haven’t looked at it before.)
29. What is worth investigating is the impact of such an extensive time duration of a 10th order IIR filter.
30. Causality issues notwithstanding, of course.
31. EA and Steve #27:
Maybe that is why the X-ray density sign was swopped. Otherwise the lake sediments would have a negative correlation to other methods!
32. I’m amused by the topic of filter design having cropped up. It’s many years since I did any of this kind of thing and I never really understood the electronics. But I knew someone who did. His
standard technique for testing analogue filters was to put a square wave into it, and look at the output on a scope. This allows you to see the impulse response. A Butterworth filter would, if I
recall, ring a bit. You’d get a high frequency spike that then decayed very quickly. Bessel filters wouldn’t. They were, in effect, critically damped.
I’m more than a little bit amazed to find this coming up in a discussion on statistics.
But I’m having real trouble imagining what applying such a filter to this kind of data means. If one believed that the climate was a collection of overlapping cycles of different lengths – then
this technique could be used to remove the shortest cycles.
But proxies aren’t a continuous signal. They are a series of data points, and the variation between successive data points could be quite large – not dissimilar to the change at an end point. If
you are getting ringing at an endpoint, are you also getting ringing within the dataset? This rather depends on the frequency of sampling (Nyquist?) and also on what smoothing might already have been
Steve: Take a look at the Dongge cave smoothing on the new thread. Is this the sort of effect that you’ve noticed:
□ Re: Nick Moon (#36),
But I knew someone who did. His standard technique for testing analogue filters was to put a square wave into it, and look at the output on a scope. This allows you to see the impulse response.
Actually that allows you to see the step response (assuming the period of the pulse had sufficient duration), which is different than the impulse response.
33. I can say from my experience in a different field (audio engineering) that Butterworth filters are commonly used as low- or bandpass filters when “treating” noisy analogue sound recordings,
because of their nice flat response in the frequency domain. In my own audio restoration work I am *not* using them any more however, as I find the VERY noticeable impulse distortion
objectionable – a steep BW filter will turn a sharp audio impulse (no matter if caused by a scratched record or by a percussive instrument) into a declining sinusoidal wave of perceptible length,
about 5 to 10 times as long as the original impulse, audible as a “ringing” noise like from a dampened plucked string. Definitely NOT what you’d want to use if – like with the temperature curves
discussed here – you’d need to preserve the curve SHAPE rather than (a certain part of) its spectral properties! By now, there are linear-phase digital lowpass and bandpass filters available for
audio and video use that avoid these defects. Sorry I have no idea how exactly these are implemented mathematically, but IMHO this would be a field where to look for better filtering/smoothing
34. I’m starting a thread on Butterworth filters. Given that frequency fidelity is not an issue in these reconstructions (nobody argues for stable underlying cycles), as others observe, why use this
particular method? Anyway, please take this to the other thread.
□ Re: Steve McIntyre (#39),
Anyway, please take this to the other thread.
What happened to the other thread? It seems to have disappeared?
[I suppose it will re-appear as soon as I post this, however.]
Steve: sorry about that. I was adding some info from UC and inadvertently left it offline.
35. Re: #36
Steve: Take a look at the Dongge cave smoothing on the new thread. Is this the sort of effect that you’ve noticed:
Yes the spike at the end really does look very like what you see on an oscilloscope.
Re: #38 – I stand fully corrected.
June 2000
Can science be used to further our understanding of art? This question triggers reservations from both scientists and artists. However, for the abstract paintings produced by Jackson Pollock in
the late 1940s, the answer is a resounding "yes".
Pollock dripped paint from a can on to vast canvases rolled out across the floor of his barn. Although recognised as a crucial advancement in the evolution of modern art, the precise quality and
significance of the patterns created by this unorthodox technique remain controversial. Here we analyse Pollock's patterns and show that they are fractal - the fingerprint of Nature.
Figure 1: Detail of non-chaotic (top) and chaotic (middle) drip trajectories generated by a pendulum and detail of Pollock's 'Number 14' painting from 1948 (bottom).
In contrast to the broken lines painted by conventional brush contact with the canvas surface, Jackson Pollock used a constant stream of paint to produce a uniquely continuous trajectory as it
splattered on to the canvas below. A typical canvas would be reworked many times over a period of several months, with Pollock building a dense web of paint trajectories. This repetitive, cumulative,
"continuous dynamic" painting process is strikingly similar to the way patterns in Nature evolve.
Other parallels with natural processes are also apparent. Gravity plays a central role for both Pollock and Nature. Furthermore, by abandoning the easel, the horizontal canvas became for Pollock a
physical terrain to be traversed. His approach in working from all four sides replicated the isotropy (having the same properties in all directions) and homogeneity of many natural patterns. His
canvases were also large and unframed, similar to a natural environment. Can these shared characteristics be the signature of a deeper common approach?
Since its discovery in the 1960s, chaos theory has experienced spectacular success in explaining many of Nature's processes. Could Pollock's painting process therefore also be chaotic?
There are two revolutionary aspects to Pollock's application of paint and both have potential to introduce chaos. The first is his motion around the canvas. In contrast to traditional brush-canvas
contact techniques, where the artist's motions are limited to hand and arm movements, Pollock used his whole body to introduce a wide range of length scales into his painting motion. In doing so,
Pollock's dashes around the canvas possibly followed Levy flights: a special distribution of movements, first investigated by Paul Levy in 1936, which has recently been used to describe the
statistics of chaotic systems.
The second revolutionary aspect concerns his application of paint by letting it drip on to the canvas. In 1984, a study of a dripping tap showed that small adjustments could change the falling fluid
from a non-chaotic to chaotic flow, and Pollock could have likewise mastered a chaotic flow.
To investigate this possibility, a simple system can be designed to generate drip trajectories where the degree of chaos can be tuned. The system consists of a pendulum which records its motion by
dripping an identical paint trajectory on to a horizontal canvas positioned below. When left to swing on its own, the pendulum follows a predictable, non-chaotic motion. However, by knocking the
pendulum at a frequency slightly lower than the one at which it naturally swings, the system becomes a "kicked rotator".
By tuning the kick (which can be applied very precisely using, for example, electromagnetic driving coils), chaotic motion can be generated. Example sections of non-chaotic (top) and chaotic (middle)
drip paintings are shown in Fig. 1. Since Pollock's paintings are built from many criss-crossing trajectories, these pendulum paintings likewise feature a number of trajectories generated by varying
the launch conditions. For comparison, the bottom picture is a section of Pollock's dripped trajectories.
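The "kicked rotator" described above is conventionally modelled by the Chirikov standard map; a minimal sketch (function and parameter names are my own, not from the article):

```python
import math

def standard_map(theta, p, K, steps):
    """Iterate the Chirikov standard map, a textbook model of a kicked rotator.

    K = 0: the unkicked rotator, regular (non-chaotic) motion.
    Large K (e.g. K > 1): widespread chaotic motion.
    """
    orbit = []
    two_pi = 2.0 * math.pi
    for _ in range(steps):
        p = (p + K * math.sin(theta)) % two_pi   # the periodic "kick"
        theta = (theta + p) % two_pi             # free rotation between kicks
        orbit.append((theta, p))
    return orbit

# Without kicks the momentum is conserved; with kicks it wanders chaotically.
regular = standard_map(1.0, 0.5, 0.0, 100)
chaotic = standard_map(1.0, 0.5, 5.0, 100)
```

Tuning K here plays the role of tuning the kick strength applied to the paint-dripping pendulum.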
Figure 2: Photographs of (top) a 0.1m section of snow on the ground, (middle) a 50m section of forest and (bottom) a 2.5m section of Pollock's 'One: Number 31' painted in 1950.
A striking visual similarity exists between the drip patterns of Pollock and those generated by a chaotic drip system. If both drip patterns are generated by chaos, what common quality would be
expected in the patterns left behind? Many natural chaotic systems form fractals in the patterns that record the process.
Nature builds its fractals using statistical self-similarity: the patterns observed at different magnifications, although not identical, are described by the same statistics. The results are visually
more subtle than the instantly identifiable, artificial patterns generated using exact self-similarity, where the patterns repeat exactly at different magnifications.
Fortunately, there are visual clues which help to identify statistical self-similarity. The first relates to "fractal scaling". The visual consequence of obeying the same statistics at different
magnifications is that it becomes difficult to judge the magnification and hence the length scale of the pattern being viewed. This is demonstrated in Fig. 2(top) and Fig.2(middle) for Nature's
fractal scenery and in Fig. 2(bottom) for Pollock's painting.
A second visual clue relates to "fractal displacement", which refers to the pattern's property of being described by the same statistics at different spatial locations. As a visual consequence, the
patterns gain a uniform character and this is confirmed for Pollock's work in the upper inset of Fig. 3, where the pattern density
Figure 3: A plot of log N(L) versus log L for the aluminium trajectories of the painting 'Blue Poles'. The black line is the data (composed of 1523 data points within the first decade). The red and
blue lines indicate the two gradients. Note that the graph remains linear beyond the range shown. The upper inset shows a plot of pattern density P versus the X and Y positions across the painting
'Number 14' (0.57m by 0.78m). P is defined as the percentage of the canvas surface area filled by the pattern within a square of side length L = 0.05m. The plotted ranges are 0 < P < 100% and
0 < X, Y < 0.43m. The lower inset is a schematic representation of the box counting technique (see text for details).
These visual clues to fractal content can be confirmed by calculating the fractal dimension D of Pollock's drip paintings. The large amount of repeating structure within a fractal pattern causes it
to occupy more space than a one dimensional line but not to the extent of completely filling the two dimensional plane. To detect and quantify this intermediate dimensionality of fractals, we
calculate D using the well-established "box-counting" method.
In this method, we cover the scanned photograph of a Pollock painting with a computer-generated mesh of identical squares. The number of squares
The largest size of square is chosen to match the canvas size (
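A minimal sketch of the box-counting calculation described above (pure Python; function names are my own, and the article's actual implementation is not given). On exactly self-similar test shapes the method recovers the known dimension:

```python
import math

def box_count(points, size):
    """Number of size-by-size boxes containing at least one point."""
    return len({(x // size, y // size) for x, y in points})

def fractal_dimension(points, sizes):
    """Least-squares slope of log N(L) against log(1/L): the box-counting D."""
    xs = [math.log(1.0 / s) for s in sizes]
    ys = [math.log(box_count(points, s)) for s in sizes]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    return (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))

# Sanity checks: a line has D = 1, a filled plane region has D = 2.
diagonal = [(i, i) for i in range(64)]
filled = [(x, y) for x in range(64) for y in range(64)]
print(round(fractal_dimension(diagonal, [1, 2, 4, 8]), 3))  # → 1.0
print(round(fractal_dimension(filled, [1, 2, 4, 8]), 3))    # → 2.0
```

A drip painting's fractal dimension falls strictly between these two extremes.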
The two chaotic processes proposed for generating Pollock’s paint trajectories - Pollock’s body motions and the dripping fluid motions - operate across distinctly different length scales. These
scales can be estimated from film and still photography of Pollock’s painting process.
Based on the physical range of his body motions and the canvas size, his Levy flights over the canvas are expected to cover the approximate length scales
We would therefore expect the fractal analysis to reveal two
A consequence of multiple
The analysis of Fig. 3 produces
How did Pollock construct and refine his fractal patterns? In many paintings (though not all), he introduced the different colours more or less sequentially: the majority of trajectories with the
same colour were deposited during the same period in the painting’s evolution. To investigate how Pollock built his fractal patterns, we have therefore electronically de-constructed the paintings
into their constituent coloured layers and calculated each layer’s fractal content.
We find that each of the individual layers consist of a uniform, fractal pattern. As each of the coloured patterns is re-incorporated to build up the complete pattern, the fractal dimension of the
overall painting rises. Thus the combined pattern of many colours has a higher fractal dimension than those of the individual coloured contributions.
The layer he painted first plays a pivotal role - it has a significantly higher
Fig. 4: A comparison of (left) the black anchor layer and (right) the complete pattern consisting of four layers (black, brown, white and grey on a beige canvas) for the painting 'Autumn Rhythm:
Number 30' (2.66m by 5.30m) painted in 1950. The complete pattern occupies 47% of the canvas surface area. The anchor layer occupies 32%.
Pollock died in 1956, before chaos and fractals were discovered. It is highly unlikely, therefore, that Pollock consciously understood the fractals he was painting. Nevertheless, his introduction of
fractals was deliberate. For example, the colour of the anchor layer was chosen to produce the sharpest contrast against the canvas background and this layer also occupies more canvas space than the
other layers, suggesting that Pollock wanted this highly fractal anchor layer to visually dominate the painting.
Furthermore, after the paintings were completed, he would dock the canvas to remove regions near the canvas edge where the pattern density was less uniform. He also took steps to perfect the "drip
and splash" technique itself. His initial drip paintings of 1943 consisted of a single layer of trajectories which, although distributed across the whole canvas, only occupied 20% of the
Finally we note that, because
Richard P. Taylor, Adam P. Micolich and David Jonas,
School of Physics, University of New South Wales, Sydney, 2052, Australia.
This article first appeared in Physics World, Volume 12, Issue 10, October 1999.
Sampling Procedure
The sample for the Iraq Multiple Indicator Cluster Survey was designed to provide estimates on a large number of indicators on the situation of children and women at the national level; for areas of residence of Iraq represented by rural and urban (metropolitan and other urban) areas; for the18 governorates of Iraq; and also for metropolitan, other urban, and rural areas for each governorate. Thus, in total, the sample consists of 56 different sampling domains, that includes 3 sampling domains in each of the 17 governorates outside the capital city Baghdad (namely, a metropolitan area domain representing the governorate city centre, an other urban area domain representing the urban area outside the governorate city centre, and a rural area domain) and 5 sampling domains in Baghdad (namely, 3 metropolitan areas representing Sadir City, Resafa side, and Kurkh side, an other urban area sampling domain representing the urban area outside the three Baghdad governorate city centres, and a sampling domain comprising the rural area of Baghdad).
The sample was selected in two stages. Within each of the 56 sampling domains, 54 PSUs were selected with linear systematic probability proportional to size (PPS).
After mapping and listing of households were carried out within the selected PSU or segment of the PSU, linear systematic samples of six households were drawn. Cluster sizes of 6 households were selected to accommodate the current security conditions in the country, allowing the survey teams to complete a full cluster in a minimal time. The total sample size for the survey is 18144 households. The sample is not self-weighting. For reporting national level results, sample weights are used.
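A sketch of the linear systematic PPS selection used in the first stage (function and parameter names are my own; the survey's actual selection software is not specified). Units are picked at a fixed step through the cumulated size measure, so each unit's selection probability is proportional to its size:

```python
import random

def systematic_pps(sizes, n, seed=0):
    """Linear systematic PPS: select n units with probability proportional
    to size, in one pass over the cumulated size measure."""
    rng = random.Random(seed)
    step = sum(sizes) / n            # sampling interval on the size scale
    start = rng.uniform(0, step)     # single random start
    targets = [start + k * step for k in range(n)]
    picks, cum, j = [], 0.0, 0
    for i, size in enumerate(sizes):
        cum += size
        while j < n and targets[j] < cum:
            picks.append(i)          # unit i covers this target point
            j += 1
    return picks
```

For each of the 56 domains here, something like `systematic_pps(psu_household_counts, 54)` would select the 54 PSUs.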
The sampling procedures are more fully described in the sampling appendix of the final report and can also be found in the list of technical documents within this archive.
(Extracted from the final report: Central Organisation for Statistics & Information Technology and Kurdistan Statistics Office. 2007. Iraq Multiple Indicator Cluster Survey 2006, Final Report. Iraq.)
Deviation from Sample Design
No major deviations from the original sample design were made. One cluster of the 3024 clusters selected was not completed; all other clusters were accessed.
Response Rates
Of the 18144 households selected for the sample, 18123 were found to be occupied. Of these, 17873 were successfully interviewed for a household response rate of 98.6 percent. In the interviewed households, 27564 women (age 15-49 years) were identified. Of these, 27186 were successfully interviewed, yielding a response rate of 98.6 percent. In addition, 16570 children under age five were listed in the household questionnaire. Of these, questionnaires were completed for 16469 which correspond to a response rate of 99.4 percent. Overall response rates of 97.3 and 98.0 are calculated for the women's and under-5's interviews respectively.
The Iraq Multiple Indicator Cluster Survey sample is not self-weighted. Essentially, by allocating equal numbers of households to each of the sampling domains, different sampling fractions were used in each sampling domain since the size of the sampling domains varied. For this reason, sample weights were calculated and these were used in the subsequent analyses of the survey data.
The major component of the weight is the reciprocal of the sampling fraction employed in selecting the number of sample households in that particular sampling domain:
Wh = 1 / fh
The term fh, the sampling fraction at the h-th stratum, is the product of probabilities of selection at every stage in each sampling domain:
fh = P1h * P2h * P3h
where Pih is the probability of selection of the sampling unit in the i-th stage for the h-th sampling domain.
Since the estimated numbers of households per enumeration area prior to the first stage selection (selection of primary sampling units) and the updated number of households per enumeration area were different, individual sampling fractions for households in each enumeration area (cluster) were calculated. The sampling fractions for households in each enumeration area (cluster) therefore included the probability of selection of the enumeration area in that particular sampling domain and the probability of selection of a household in the sample enumeration area (cluster).
A second component which has to be taken into account in the calculation of sample weights is the level of non-response for the household and individual interviews. The adjustment for household non-response is equal to the inverse value of:
RR = Number of interviewed households / Number of occupied households listed
After the completion of fieldwork, response rates were calculated for each sampling domain. These were used to adjust the sample weights calculated for each cluster.
Similarly, the adjustment for non-response at the individual level (women and under-5 children) is equal to the inverse value of:
RR = Completed women's (or under-5's) questionnaires / Eligible women (or under-5s)
Numbers of eligible women and under-5 children were obtained from the household listing in the Household Questionnaire in households where interviews were completed.
The unadjusted weights for the households were calculated by multiplying the above factors for each enumeration area. These weights were then standardized (or normalized), one purpose of which is to make the sum of the interviewed sample units equal the total sample size at the national level. Normalization is performed by multiplying the aforementioned unadjusted weights by the ratio of the number of completed households to the total unadjusted weighted number of households. A similar standardization procedure was followed in obtaining standardized weights for the women's and under-5's questionnaires. Adjusted (normalized) weights varied between 0.110 and 3.721 in the 56 sampling domains.
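The weighting steps described above (design weight, household non-response adjustment, normalization) can be sketched as follows. This is an illustration, not the survey's production code; the field names are my own:

```python
def normalized_weights(domains):
    """Per-household weights: design weight 1/f, adjusted for household
    non-response, then normalized so the weights sum to the number of
    completed interviews.

    Each domain dict holds: "f" (overall sampling fraction), "occupied"
    (occupied households listed) and "completed" (interviewed households).
    """
    raw = []
    for d in domains:
        rr = d["completed"] / d["occupied"]   # household response rate RR
        w = (1.0 / d["f"]) / rr               # Wh = (1/fh) x (1/RR)
        raw += [w] * d["completed"]           # one weight per interviewed household
    scale = len(raw) / sum(raw)               # normalization factor
    return [w * scale for w in raw]
```

After normalization the weighted household count equals the interviewed sample size, as the text describes, while the relative weights across domains are preserved.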
Sample weights (Table SD.4) were appended to all data sets and analyses were performed by weighting each household, woman or under-5 with these sample weights.
(Extracted from the final report Appendix A: Central Organisation for Statistics & Information Technology and Kurdistan Statistics Office. 2007. Iraq Multiple Indicator Cluster Survey 2006, Final Report. Iraq.)
Gaming prediction markets: Equilibrium strategies with a market maker
We survey the literature on prediction mechanisms, including prediction markets and peer prediction systems. We pay particular attention to the design process, highlighting the objectives and
properties that are important in the design of good prediction mechanisms. Mechanism design has been described as “inverse game theory. ” Whereas game theorists ask what outcome results from a game,
mechanism designers ask what game produces a desired outcome. In this sense, game theorists act like scientists and mechanism designers like engineers. In this article, we survey a number of
mechanisms created to elicit predictions, many newly proposed within the last decade. We focus on the engineering questions: How do they work and why? What factors and goals are most important in
, 1009
"... Ensuring sufficient liquidity is one of the key challenges for designers of prediction markets. Various market making algorithms have been proposed in the literature and deployed in practice,
but there has been little effort to evaluate their benefits and disadvantages in a systematic manner. We int ..."
Cited by 5 (0 self)
Add to MetaCart
Ensuring sufficient liquidity is one of the key challenges for designers of prediction markets. Various market making algorithms have been proposed in the literature and deployed in practice, but
there has been little effort to evaluate their benefits and disadvantages in a systematic manner. We introduce a novel experimental design for comparing market structures in live trading that ensures
fair comparison between two different microstructures with the same trading population. Participants trade on outcomes related to a two-dimensional random walk that they observe on their computer
screens. They can simultaneously trade in two markets, corresponding to the independent horizontal and vertical random walks. We use this experimental design to compare the popular inventory-based
logarithmic market scoring rule (LMSR) market maker and a new information based Bayesian market maker (BMM). Our experiments reveal that BMM can offer significant benefits in terms of price stability
and expected loss when controlling for liquidity; the caveat is that, unlike LMSR, BMM does not guarantee bounded loss. Our investigation also elucidates some general properties of market makers in
prediction markets. In particular, there is an inherent tradeoff between adaptability to market shocks and convergence during market equilibrium.
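For reference, the LMSR market maker compared in this abstract is defined by a simple cost function; a minimal sketch of the standard formulation (my own code, not from the paper):

```python
import math

def lmsr_cost(q, b):
    """LMSR cost function C(q) = b * log(sum_i exp(q_i / b)).
    q = outstanding shares per outcome, b = liquidity parameter."""
    return b * math.log(sum(math.exp(qi / b) for qi in q))

def lmsr_price(q, b, i):
    """Instantaneous price of outcome i: a softmax of the inventories,
    so prices are probabilities that sum to 1."""
    z = sum(math.exp(qj / b) for qj in q)
    return math.exp(q[i] / b) / z

def trade_cost(q, delta, b):
    """What a trader pays to move outstanding shares from q to q + delta."""
    return lmsr_cost([qi + di for qi, di in zip(q, delta)], b) - lmsr_cost(q, b)
```

Larger b gives more liquidity (prices move less per trade) but a larger worst-case loss for the market maker, bounded by b times the log of the number of outcomes.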
- In Proc. of the Second Conference on Auctions, Market Mechanisms, and Their Applications , 2011
We describe the design of Instructor Rating Markets in which students trade on the ratings that will be received by instructors, with new ratings revealed every two weeks. The markets provide useful
dynamic feedback to instructors on the progress of their class, while at the same time enabling the controlled study of prediction markets where traders can affect the outcomes they are trading on.
More than 200 students across the Rensselaer campus participated in markets for ten classes in the Fall 2010 semester. We show that market prices convey useful information on future instructor
ratings and contain significantly more information than do past ratings. The bulk of useful information contained in the price of a particular class is provided by students who are in that class,
showing that the markets are serving to disseminate insider information. At the same time, we find little evidence of attempted manipulation of the liquidating dividends by raters. The markets are
also a laboratory for comparing different microstructures and the resulting price dynamics, and we show how they can be used to compare market making algorithms.
We study information revelation in scoring rule and prediction market mechanisms in settings in which traders have conflicting incentives due to opportunities to profit from the market operator’s
subsequent actions. In our canonical model, an agent Alice is offered an incentive-compatible scoring rule to reveal her beliefs about a future event, but can also profit from misleading another
trader Bob about her information and then making money off Bob’s error in a subsequent market. We show that, in any weak Perfect Bayesian Equilibrium of this sequence of two markets, Alice and Bob
earn payoffs that are consistent with a minimax strategy of a related game. We can then characterize the equilibria in terms of an information channel: the outcome of the first scoring rule is as if
Alice had only observed a noisy version of her initial signal, with the degree of noise indicating the adverse effect of the second market on the first. We provide a partial constructive
characterization of when this channel will be noiseless. We show that our results on the canonical model yield insights into other settings of information extraction with conflicting incentives.
Cited by 3 (2 self)
Prediction markets are designed to elicit information from multiple agents in order to predict (obtain probabilities for) future events. A good prediction market incentivizes agents to reveal their
information truthfully; such incentive compatibility considerations are commonly studied in mechanism design. While this relation between prediction markets and mechanism design is well understood at
a high level, the models used in prediction markets tend to be somewhat different from those used in mechanism design. This paper considers a model for prediction markets that fits more
straightforwardly into the mechanism design framework. We consider a number of mechanisms within this model, all based on proper scoring rules. We discuss basic properties of these mechanisms, such
as incentive compatibility. We also draw connections between some of these mechanisms and cooperative game theory. Finally, we speculate how one might build a practical prediction market based on
some of these ideas.
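The mechanisms in this abstract are all built on proper scoring rules, whose defining incentive-compatibility property can be sketched concretely. The example below uses the logarithmic scoring rule (a standard proper rule; the specific choice and the numbers are illustrative, not taken from the paper): a risk-neutral agent maximizes its expected score exactly by reporting its true belief.

```python
import math

def log_score(report, outcome):
    """Logarithmic scoring rule: reward log(r) if the event occurs, log(1 - r) otherwise."""
    return math.log(report) if outcome else math.log(1.0 - report)

def expected_score(belief, report):
    """Expected score of reporting `report` when the agent's true belief is `belief`."""
    return belief * log_score(report, True) + (1.0 - belief) * log_score(report, False)

# Properness: over a grid of candidate reports, the truthful one scores best.
belief = 0.7
best = max((r / 100.0 for r in range(1, 100)),
           key=lambda r: expected_score(belief, r))
```

Because the expected score is strictly concave in the report with its maximum at the true belief, any misreport strictly loses in expectation, which is the incentive-compatibility property the mechanisms rely on.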
[Extended Abstract], 2012
Cited by 3 (2 self)
With the proliferation of online labor markets and other human computation platforms, online experiments have become a low-cost and scalable way to empirically test hypotheses and mechanisms in both
human computation and social science. Yet, despite the potential in designing more powerful and expressive online experiments using multiple subjects, researchers still face many technical and
logistical difficulties. We see synchronous and longitudinal experiments involving real-time interaction between participants as a dual-use paradigm for both human computation and social science, and
present TurkServer, a platform that facilitates these types of experiments on Amazon Mechanical Turk. Our work has the potential to make more fruitful online experiments accessible to researchers in
many different fields.
Cited by 2 (2 self)
We study a new type of proof system, where an unbounded prover and a polynomial time verifier interact, on inputs a string x and a function f, so that the Verifier may learn f(x). The novelty of our
setting is that there are no longer “good” or “malicious” provers, but only rational ones. In essence, the Verifier has a budget c and gives the Prover a reward r ∈ [0, c] determined by the
transcript of their interaction; the Prover wishes to maximize his expected reward, and his reward is maximized only if the Verifier correctly learns f(x). Rational proof systems are as powerful
as their classical counterparts for polynomially many rounds of interaction, but are much more powerful when we only allow a constant number of rounds. Indeed, we prove that if f ∈ #P, then f is
computable by a one-round rational Merlin-Arthur game, where, on input x, Merlin’s single message actually consists of sending just the value f(x). Further, we prove that CH, the counting hierarchy,
coincides with the class of languages computable by a constant-round rational Merlin-Arthur game. Our results rely on a basic and crucial connection between rational proof systems and proper scoring
rules, a tool developed to elicit truthful information from experts.
- In ACM EC, 2012
Cited by 2 (1 self)
Ensuring sufficient liquidity is one of the key challenges for designers of prediction markets. Variants of the logarithmic market scoring rule (LMSR) have emerged as the standard. LMSR market makers
are loss-making in general and need to be subsidized. Proposed variants, including liquidity sensitive market makers, suffer from an inability to react rapidly to jumps in population beliefs. In this
paper we propose a Bayesian Market Maker for binary outcome (or continuous 0-1) markets that learns from the informational content of trades. By sacrificing the guarantee of bounded loss, the
Bayesian Market Maker can simultaneously offer: (1) significantly lower expected loss at the same level of liquidity, and, (2) rapid convergence when there is a jump in the underlying true value of
the security. We present extensive evaluations of the algorithm in experiments with intelligent trading agents and in human subject experiments. Our investigation also elucidates some general
properties of market makers in prediction markets. In particular, there is an inherent tradeoff between adaptability to market shocks and convergence during market equilibrium.
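The LMSR baseline this abstract departs from has a standard closed form, which is worth seeing to appreciate the trade-off described. The sketch below is the generic textbook LMSR (not the paper's Bayesian Market Maker): the market maker prices all trades through a cost function, its instantaneous prices form a probability distribution, and its worst-case loss is bounded by b·log(n), which is the subsidy the abstract mentions.

```python
import math

def lmsr_cost(q, b=100.0):
    """LMSR cost function C(q) = b * log(sum_i exp(q_i / b))."""
    return b * math.log(sum(math.exp(qi / b) for qi in q))

def lmsr_price(q, i, b=100.0):
    """Instantaneous price of outcome i; prices across outcomes sum to 1."""
    total = sum(math.exp(qj / b) for qj in q)
    return math.exp(q[i] / b) / total

def trade_cost(q, delta, b=100.0):
    """Amount a trader pays to move outstanding shares from q to q + delta."""
    q_new = [qi + di for qi, di in zip(q, delta)]
    return lmsr_cost(q_new, b) - lmsr_cost(q, b)
```

For example, starting from no outstanding shares in a binary market, both outcomes are priced at 0.5, and buying shares of one outcome pushes its price up. The liquidity parameter b controls how fast prices move per share, which is exactly the adaptability-versus-loss tension the paper's Bayesian Market Maker revisits.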
Cited by 1 (0 self)
We consider the problem of evaluating the performance of human contributors for tasks involving answering a series of questions, each of which has a single correct answer. The answers may not be
known a priori. We assert that the measure of a contributor’s judgments is the amount by which having these judgments decreases the entropy of our discovering the answer. This quantity is the
pointwise mutual information between the judgments and the answer. The expected value of this metric is the mutual information between the contributor and the answer prior, which can be computed
using only the prior and the conditional probabilities of the contributor’s judgments given a correct answer, without knowing the answers themselves. We also propose using multivariable information
measures, such as conditional mutual information, to measure the interactions between contributors’ judgments. These metrics have a variety of applications. They can be used as a basis for
contributor performance evaluation and incentives. They can be used to measure the efficiency of the judgment collection process. If the collection process allows assignment of contributors to
questions, they can also be used to optimize this scheduling.
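The key claim of this abstract, that the expected pointwise mutual information can be computed from the answer prior and the conditional probabilities of a contributor's judgments alone, without the answers, can be sketched directly (the two-answer setup below is an illustrative toy, not data from the paper):

```python
import math

def mutual_information(prior, cond):
    """I(J; A) in bits, i.e. the expected pointwise mutual information
    log2(p(j|a) / p(j)), computed from only the answer prior p(a)
    and the conditionals cond[a][j] = p(j | a)."""
    n_j = len(cond[0])
    # Marginal probability of each judgment: p(j) = sum_a p(a) * p(j | a).
    p_j = [sum(prior[a] * cond[a][j] for a in range(len(prior)))
           for j in range(n_j)]
    mi = 0.0
    for a, pa in enumerate(prior):
        for j in range(n_j):
            if cond[a][j] > 0:
                mi += pa * cond[a][j] * math.log2(cond[a][j] / p_j[j])
    return mi
```

On a uniform binary answer prior, a contributor whose judgment always matches the answer contributes one full bit, while a contributor whose judgments are independent of the answer contributes zero, matching the intuition that the metric rewards judgments that reduce the entropy of discovering the answer.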