content
stringlengths
86
994k
meta
stringlengths
288
619
Tutte’s edge-coloring conjecture - IN SURVEYS IN COMBINATORICS, 1999 267 201-222. THE ELECTRONIC JOURNAL OF COMBINATORICS 8 (2001), #R34 8 , 1999 "... A graph is a minor of another if the first can be obtained from a subgraph of the second by contracting edges. An excluded minor theorem describes the structure of graphs with no minor isomorphic to a prescribed set of graphs. Splitter theorems are tools for proving excluded minor theorems. We disc ..." Cited by 9 (0 self) Add to MetaCart A graph is a minor of another if the first can be obtained from a subgraph of the second by contracting edges. An excluded minor theorem describes the structure of graphs with no minor isomorphic to a prescribed set of graphs. Splitter theorems are tools for proving excluded minor theorems. We discuss splitter theorems for internally 4-connected graphs and for cyclically 5-connected cubic graphs, the graph minor theorem of Robertson and Seymour, linkless embeddings of graphs in 3-space, Hadwiger’s conjecture on t-colorability of graphs with no Kt+1 minor, Tutte’s edge 3-coloring conjecture on edge 3-colorability of 2-connected cubic graphs with no Petersen minor, and Pfaffian orientations of bipartite graphs. The latter are related to the even directed circuit problem, a problem of Pólya about permanents, the 2-colorability of hypergraphs, and sign-nonsingular matrices. , 2003 "... A graph is perfect if for every induced subgraph, the chromatic number is equal to the maximum size of a complete subgraph. The class of perfect graphs is important for several reasons. For instance, many problems of interest in practice but intractable in general can be solved e#ciently when restr ..." Cited by 7 (3 self) Add to MetaCart A graph is perfect if for every induced subgraph, the chromatic number is equal to the maximum size of a complete subgraph. The class of perfect graphs is important for several reasons. For instance, many problems of interest in practice but intractable in general can be solved e#ciently when restricted to the class of perfect graphs. Also, the question of when a certain class of linear programs always have an integer solution can be answered in terms of perfection of an associated graph. In the first - Surveys in Combinatorics, LMS Lecture Note Series "... We discuss splitter theorems for internally 4-connected graphs and for cyclically 5-connected cubic graphs, the graph minor theorem, linkless embeddings, Hadwiger's conjecture, Tutte's edge 3-coloring conjecture, and Pfaffian orientations of bipartite graphs. ..." Cited by 3 (1 self) Add to MetaCart We discuss splitter theorems for internally 4-connected graphs and for cyclically 5-connected cubic graphs, the graph minor theorem, linkless embeddings, Hadwiger's conjecture, Tutte's edge 3-coloring conjecture, and Pfaffian orientations of bipartite graphs. , 1999 "... A graph is quasi 4-connected if it is simple, 3-connected, has at least five vertices, and for every partition (A, B, C) of V(G) either |C|≥4, or G has an edge with one end in A and the other end in B, orone of A,B has at most one vertex. We show that any quasi 4-connected nonplanar graph with minim ..." Cited by 3 (1 self) Add to MetaCart A graph is quasi 4-connected if it is simple, 3-connected, has at least five vertices, and for every partition (A, B, C) of V(G) either |C|≥4, or G has an edge with one end in A and the other end in B, orone of A,B has at most one vertex. We show that any quasi 4-connected nonplanar graph with minimum degree at least three and no cycle of length less than five has a minor isomorphic to P − 10, the Petersen graph with one edge deleted. We deduce the following weakening of Tutte’s Four Flow Conjecture: every 2-edge connected graph with no minor isomorphic to P − 10 has a nowhere-zero 4-flow. This extends a result of Kilakos and Shepherd who proved the same for 3-regular graphs.
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=1307943","timestamp":"2014-04-18T20:09:23Z","content_type":null,"content_length":"21442","record_id":"<urn:uuid:30d69079-e067-4071-bb70-a2134eefcc2a>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00328-ip-10-147-4-33.ec2.internal.warc.gz"}
HVAC Charts - Duct Sizing CalculatorHVAC Charts - Your Source for Heating & Air Conditioning Charts and Supplies Duct Sizing Calculator The Duct Sizing Calculator is a hand held sliding calculator used for sizing supply and return duct systems using the equal friction, or velocity reduction design method. Front of calculator includes scales for friction, velocity, round duct size, weight per linear foot of round duct, surface area in square feet per linear foot of round duct, rectangular equivalents of round duct sizes, weight per linear foot of rectangular duct and surface area in square feet per linear foot of Rectangular duct. Back of calculator has equivalent feet of straight duct for commonly used fittings, corrected pressure drop chart, and many HVAC formulas. Pull out slide has recommended duct velocities, recommended gauges for sheet metal, and much more. The photo shows front and back sides of the calculator. Calculator is made of thick durable water resistant paper stock with glued edges and corner rivets. Calculator measures 4 inches high and 8.5 inches wide..
{"url":"http://www.hvaccharts.com/Duct_color.html","timestamp":"2014-04-16T16:14:30Z","content_type":null,"content_length":"5417","record_id":"<urn:uuid:dcb0049f-349b-483d-bacb-c496a616b849>","cc-path":"CC-MAIN-2014-15/segments/1397609524259.30/warc/CC-MAIN-20140416005204-00516-ip-10-147-4-33.ec2.internal.warc.gz"}
At A Glance Featuring a 279 Built-In Functions, 2-Line Big Display and Solar Plus, the FX-115MSPlus is permitted on College Entrance Examination Boards like the SAT and PSAT/NMSQT. Manual Additional Functions Technical Specs 1-Line/# of Digits Not Available 2-Line/# of Digits 11/10 + 2 Natural Textbook Display Not Available Store/Recall Yes / Yes Recall & Edit Values Yes # of Constant Memories 6 Dual (Solar and Battery) Yes Solar Not Available Battery Not Available Automatic Power Down Yes General Features Function/Mode Menus Yes Scrolling Keys Yes Review & Edit Previous Entries Yes Clear Last Entry & Clear All Yes Backspace Yes Fixed Decimal Capabilities Yes Math Functions Operating System VPAM Add, Subtract, Multiply, Divide Yes Correct Order of Operations (M, D, A, S) Yes Integer Division with Remainder Not Available Constant Feature Not Available Parenthesis Yes Change Sign (+/-) Yes X^2/Square Root Yes/Yes X^3/Cube Root Yes/Yes Exponents (^key)/Powers of 10 Yes/Yes xth Root Yes Pi Yes Percent Calculations Yes Display True Fraction Form Not Available Fraction<>Decimal, Decimal<>Fraction Yes Improper Fraction<>Mixed Number Yes Simplification Yes Sin, Cos, Tan, & Inverse Yes Hyperbolic Functions Yes Converts between DEG, RAD, GRAD Yes One-Variable Statistics/Two-Variable Statistics Yes/Yes Mean, Sum, # Elements Yes Standard Deviation Yes Linear Regression Yes Log, In, Inverse Log, Exponential Yes Store and Edit Data in Memory Yes Quadratic, Log, Exp, Power, Inverse Regression Yes nPr, cPr, x! Yes Random Number Generator Yes Complex Number Calculations Yes Numeric Integration Yes Numeric Differential Calculations Yes Additional Math Functions DMS<>DD Conversions Yes Number Bases - Dec, Hex, Oct, Binary Yes Polar<>Rectangular Conversions Yes Scientific Notation Yes Simultaneous Equations & Polynomial Equations Yes Engineering Symbol Calculations Yes Boolean Logic Operations Yes Solve Function Yes Hardware - Other Protective Hard Case Yes Color Coded Keys Grouped by Function Not Available Overhead Projectable Model Not Available Instruction Manual Yes Quick Reference Guide Yes Available in Teacher Packs (10 units) Yes Available in Classroom Sets (30 units) Yes Resource Materials Not Available Technical Specifications Subject to Change
{"url":"http://www.casio.com/products/Calculators_%26_Dictionaries/Fraction_%26_Scientific/FX-115MSPlus/","timestamp":"2014-04-17T09:36:26Z","content_type":null,"content_length":"22636","record_id":"<urn:uuid:06603e7f-9f49-4eaf-aec7-09444b641a81>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00077-ip-10-147-4-33.ec2.internal.warc.gz"}
Transient Cognitive Dynamics, Metastability, and Decision Making The idea that cognitive activity can be understood using nonlinear dynamics has been intensively discussed at length for the last 15 years. One of the popular points of view is that metastable states play a key role in the execution of cognitive functions. Experimental and modeling studies suggest that most of these functions are the result of transient activity of large-scale brain networks in the presence of noise. Such transients may consist of a sequential switching between different metastable cognitive states. The main problem faced when using dynamical theory to describe transient cognitive processes is the fundamental contradiction between reproducibility and flexibility of transient behavior. In this paper, we propose a theoretical description of transient cognitive dynamics based on the interaction of functionally dependent metastable cognitive states. The mathematical image of such transient activity is a stable heteroclinic channel, i.e., a set of trajectories in the vicinity of a heteroclinic skeleton that consists of saddles and unstable separatrices that connect their surroundings. We suggest a basic mathematical model, a strongly dissipative dynamical system, and formulate the conditions for the robustness and reproducibility of cognitive transients that satisfy the competing requirements for stability and flexibility. Based on this approach, we describe here an effective solution for the problem of sequential decision making, represented as a fixed time game: a player takes sequential actions in a changing noisy environment so as to maximize a cumulative reward. As we predict and verify in computer simulations, noise plays an important role in optimizing the gain. Author Summary The modeling of the temporal structure of cognitive processes is a key step for understanding cognition. Cognitive functions such as sequential learning, short-term memory, and decision making in a changing environment cannot be understood using only the traditional view based on classical concepts of nonlinear dynamics, which describe static or rhythmic brain activity. The execution of many cognitive functions is a transient dynamical process. Any dynamical mechanism underlying cognitive processes has to be reproducible from experiment to experiment in similar environmental conditions and, at the same time, it has to be sensitive to changing internal and external information. We propose here a new dynamical object that can represent robust and reproducible transient brain dynamics. We also propose a new class of models for the analysis of transient dynamics that can be applied for sequential decision making. Citation: Rabinovich MI, Huerta R, Varona P, Afraimovich VS (2008) Transient Cognitive Dynamics, Metastability, and Decision Making. PLoS Comput Biol 4(5): e1000072. doi:10.1371/journal.pcbi.1000072 Editor: Karl J. Friston, University College London, United Kingdom Received: December 5, 2007; Accepted: March 27, 2008; Published: May 2, 2008 Copyright: © 2008 Rabinovich et al. This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited. Funding: This work was supported by ONR N00014-07-1-0741. PV acknowledges support from Spanish BFU2006-07902/BFI and CAM S-SEM-0255-2006. Competing interests: The authors have declared that no competing interests exist. The dynamical approach for studying brain activity has a long history and is currently one of strong interest [1]–[7]. Cognitive functions are manifested through the generation and transformation of cooperative modes of activity. Different brain regions participate in these processes in distinct ways depending on the specific cognitive function and can prevail in different cognitive modes. Nevertheless, the mechanisms underlying different cognitive processes may rely on the same dynamical principles, e.g., see [8]. The execution of cognitive functions is based on fundamental asymmetries of time – often metaphorically described as the arrow of time. This is inseparably connected to the temporal ordering of cause-effect pairs. The correspondence between causal relations and temporal directions requires specific features in the organization of cognitive system interactions, and on the microscopic level, specific network interconnections. A key requirement for this organization is the presence of nonsymmetrical interactions because, even in brain resting states, the interaction between different subsystems of cognitive modes also produces nonstationary activity that has to be reproducible. One plausible mechanism of mode interaction that supports temporal order is nonreciprocal competition. Competition in the brain is a widespread phenomenon (see [9] for a remarkable example in human memory systems). At all levels of network complexity, the physiological mechanisms of competition are mainly implemented through inhibitory connections. Symmetric reciprocal inhibition leads to multistability and this is not an appropriate dynamical regime for the description of reproducible transients. As we have shown in [5],[10], nonsymmetric inhibition is an origin of reproducible transients in neural networks. Recently functional magnetic-resonance imaging (fMRI) and EEG have opened new possibilities for understanding and modeling cognition [11]–[15]. Experimental recordings have revealed detailed (spatial and temporal) pictures of brain dynamics corresponding to the temporal performance of a wide array of mental and behavioral tasks, which usually are transient and sequential [16]–[18]. Several groups have formulated large-scale dynamical models of cognition. Based on experimental data these models demonstrate features of cognitive dynamics such as metastability and fast transients between different cognitive modes [15], [16], [19]–[24]. There is experimental evidence to support that metastability and transient dynamics are key phenomena that can contribute to the modeling of cortex processes and thus yield a better understanding of a dynamical brain [18], [25]–[30]. Common features of many cognitive processes are: (i) incoming sensory information is coded both in space and time coordinates, (ii) cognitive modes sensitively depend on the stimulus and the executed function, (iii) in the same environment cognitive behavior is deterministic and highly reproducible, and (iv) cognitive modes are robust against noise. These observations suggest (a) that a dynamical model which possesses these characteristics should be strongly dissipative so that its orbits rapidly “forget” the initial state of the cognitive network when the stimulus is present, and (b) that the dynamical system executes cognitive functions through transient trajectories, rather than attractors following the arrow of time. In this paper we suggest a mathematical theory of transient cognitive activity that considers metastable states as the basic elements. This paper is organized as follows. In the Results section we first provide a framework for the formal description of metastable states and their transients. We introduce a mathematical image of robust and reproducible transient cognition, and present a basic dynamical model for the analysis of such transient behavior. Then, we generalize this model taking into account uncertainty and use it for the analysis of decision making. In the Discussion, we focus on some open questions and possible applications of our theory to different cognitive problems. In the Methods section, a rigorous mathematical approach is used to formulate the conditions for robustness and reproducibility. Metastability and Cognitive Transient Dynamics A dynamical model of cognitive processes can use as variables the activation level A[i](t)≥0 of cognitive states (i = 1…N) of specific cognitive functions [31]. The phase space of such model is then the set of A[i](t) with a well-defined metric where the trajectories are sets of cognitive states ordered in time. To build this model, we introduce here several theoretical ideas that associate metastable states and robust and reproducible transients with new concepts of nonlinear dynamics, i.e., stable heteroclinic sequences and heteroclinic channels [4], [5], [10], [32]–[34]. The main ideas are the following: • Metastable states of brain activity can be represented in a high-dimensional phase space of a dynamical model (that depends on the cognitive function) by saddle sets, i.e., saddle fixed points or saddle limit cycles. • In turn, reproducible transients can be represented by a stable heteroclinic channel (SHC), which is a set of trajectories in the vicinity of a heteroclinic skeleton that consists of saddles and unstable separatrices that connect their surroundings (see Figure 1). The condensation of the trajectories in the SHC and the stability of such channel are guaranteed by the sequential tightness along the chain of the saddles around a multi-dimensional stable manifold. The SHC is structurally stable in a wide region of the control parameter space (see Methods). • The SHC concept is able to solve the fundamental contradiction between robustness against noise and sensitivity to the informational input. Even close informational inputs induce the generation of different modes in the brain. Thus, the topology of the corresponding stable heteroclinic channels sensitively depends on the stimuli, but the heteroclinic channel itself, as an object in the phase space (similar to traditional attractors), is structurally stable and robust against noise. Figure 1. Schematic representation of a stable heteroclinic channel. The SHC is built with trajectories that condense in the vicinity of the saddle chain and their unstable separatrices (dashed lines) connecting the surrounding saddles (circles). The thick line represents an example of a trajectory in the SHC. The interval t[k][+1]−t[k] is the characteristic time that the system needs to move from the metastable state k to the k+1. Based on these ideas we model the temporal evolution of alternating cognitive states by equations of competitive metastable modes. The structure of these modes can be reflected in functional neuroimage experiments. Experimental evidence suggests that for the execution of specific cognitive functions the mind recruits the activity from different brain regions [35]–[37]. The dynamics of such networks is represented by sequences of switchings between cognitive modes, i.e., as we hypothesize, a specific SHC for the cognitive function of interest. Mathematical Image and Models We suggest here that the mathematical image of reproducible cognitive activity is a stable heteroclinic channel including metastable states that are represented in the phase space of the corresponding dynamical model by saddle sets connected via unstable separatrices (see Figure 1). Note that the topology of Figure 1 reminds a ‘chaotic itinerancy’ [38]. However, based only on Milnor attractors we cannot demonstrate the reproducibility phenomena which is the main feature of the SHC. To make our modeling more transparent let us use as an example the popular dynamical image of rhythmic neuronal activity, i.e., a limit cycle. At each level of complexity of a neural system, its description and analysis can be done in the framework of some basic model like a phase equation. The questions that can be answered in this framework are very diverse: synchronization in small neuronal ensembles like CPGs, generation of brain rhythms [39], etc. Our approach here is similar. We formulate a new paradigm for the mathematical description of reproducible transients that can be applied at different levels of the network complexity pyramid. This paradigm is the Stable Heteroclinic Channel. As a limit cycle, the SHC can be described by the same basic equation on different levels of the system complexity. The sense of the variables A[i](t)≥0, of course, is different at each level. Before we introduce the basic model for the analysis of reproducible transient cognitive dynamics, it is important to discuss two general features of the SHC that do not depend on the model. These are: (i) the origin of the structural stability of the SHC, and (ii) the long passage time in the vicinity of saddles in the presence of moderate noise. To understand the conditions of the stability of SHC we have to take into account that an elementary phase volume in the neighborhood of a saddle is compressed along the stable separatrices and it is stretched along an unstable separatrix. Let us to order the eigenvalues of a saddle as The number is called the saddle value. If v[i]>1 (the compressing is larger than the stretching), the saddle is named as a dissipative saddle. Intuitively it is clear that the trajectories do not leave the heteroclinic channel if all saddles in the heteroclinc chain are dissipative. A rigorous analysis of the structural stability of the heteroclinic channel supports our intuition (see Methods The problem of the temporal characteristics of the transients is related to the “exit problem” for small random perturbations of dynamical systems with saddle sets. This problem was first solved by Kifer [40] and then discussed in several papers, in particular, in [41]. A local stability analysis in the vicinity of a saddle fixed point allows us to estimate the time that the system spends in the vicinity of the saddle:(1) where τ(p) is the mean passage time, |η| is the level of noise, and λ is an eigenvalue corresponding to the unstable separatrix of the saddle. A biologically reasonable model that is able to generate stable and reproducible behavior represented in the phase space by the SHC has to (i) be convenient for the interpretation of the results and for its comparison with experimental data, (ii) be computationally feasible, (iii) have enough control parameters to address a changing environment and the interaction between different cognitive functions (e.g., learning and memory). We have argued that the dynamical system that we are looking for has to be strongly dissipative and nonlinear. For simplicity, we chose as dynamical variables the activation level of neuronal clusters that consist of correlated/synchronized neurons. The key dynamical feature of such models is the competition between different metastable states. Thus, in the phase space of this basic model there must be several (in general many) saddle states connected by unstable separatrices. Such chain represents the process of sequential switching of activity from one cognitive mode to the next one. This process can be finite, i.e., ending on a simple attractor or repetitive. If we choose the variables A[j](t) as the amount of activation of the different modes, we can suppose that the saddle points are disposed on the axes of an N-dimensional phase space, and the separatrices connecting them are disposed on a (N−n)-dimensional manifold (n<N−1), which are the boundaries of the phase space. We will use two types of models that satisfy the above conditions: (i) the Wilson-Cowan model for excitatory and inhibitory neural clusters [42], and (ii) generalized Lotka-Volterra equations – a basic model for the description of competition phenomena with many participants [32],[43]. Both models can be represented in a general form as:(2) Here A[j](t)≥0 is the activation level of the j-th cluster, Θ[z] is a nonlinear function, i.e., a sigmoid function in the case of the Wilson-Cowan model and a polynomial one for the generalized Lotka-Volterra model. The connectivity matrix ρ[ji] can depend on the stimulus or change as a result of learning. σ(I) is a parameter characterizing the dependence of the cognitive states on the incoming information I. The parameter β represents other types of external inputs or noise. In the general case, A[j] (t) is a vector function whose number of components depends on the complexity of the intrinsic dynamics of the individual brain blocks. The cognitive mode dynamics can be interpreted as a nonlinear interaction of such blocks that cooperate and compete with each other. To illustrate the existence of a stable heteroclinic channel in the phase space of Equation 2, let us consider a simple network that consists of three competitive neural clusters. This network can be described by the Wilson-Cowan type model as(3) where ρ[jj]<0, ρ[j≠i]≥0, β>0, N = 3. The network can also be described by a Lotka-Volterra model of the form:(4) where ρ[ji]≥0. In all our examples below we will suppose that the connection matrix is non symmetric, i.e., ρ[ji]≠ρ[ij], which is a necessary condition for the existence of the SHC. Figure 2 illustrates how the dynamics of these two models with N = 3 can produce a robust sequential activity: both models have SHC in their phase-spaces. The main difference between the dynamics of the Wilson-Cowan and Lotka-Volterra models is the type of attractors. System 3 contains a stable limit cycle in a SHC and a stable fixed point (the origin of the coordinates for β = 0). In contrast, there is one attractor, i.e., a SHC, in the phase space of System 4. Figure 2. Closed stable heteroclinic sequence in the phase space of three coupled clusters. (A) Wilson-Cowan clusters. (B) Lotka-Volterra clusters. Both models demonstrate robust transient (sequential) activity even for many interacting modes. An example of this dynamics is presented in Figure 3. This figure shows the dynamics of a two-component Wilson-Cowan network of 100 excitatory and 100 inhibitory modes. The parameters used in these simulations are the same as those reported in [44] where the connectivity was drawn from a Bernoulli random process but with the probability of connections slightly shifted with respect to the balanced excitatory-inhibitory network. The system is organized such that a subgroup of modes fall into a frozen component and the rest produce the sequential activity. The model itself is sufficiently general to be translated to other concepts and ideas as the one proposed here in the form of cognitive Figure 3. Robust transient dynamics of 200 cognitive modes modeled with Wilson-Cowan equations. (A) The activation level of three cognitive modes are shown (E14, E11, E35), (B) Time series illustrating sequential switching between modes: 10 different modes out of the total 200 interacting modes are shown. Figure 4 illustrates the reproducibility of transient sequential dynamics of Model 4 with N = 20 modes. This simulation corresponds to the following conditions: (i) ρ[ji]≠ρ[ij] and (ii) v[i]>1 (see [10] for details). In this figure each mode is depicted by a different color and the level of activity is represented by the saturation of the color. The system of equations was simulated 10 times, each trial starting from a different random initial condition within the hypercube . Note the high reproducibility of the sequential activation among the modes, which includes the time interval between the switchings. Figure 4. Reproducibility of a transient sequential dynamics of 20 metastable modes corresponding to SHC in Model 4. The figure shows the time series of 10 trials. Simulations of each trial were initiated at a different random initial condition. The initial conditions influence the trajectory only at the beginning due to the dissipativeness of the saddles (for details see also [10]). Because of the complexity of System 4 with large N, the above conditions cannot guarantee the absence of other invariant sets in this system. However we did not find them in our computer simulations. For a rigorous demonstration of the structural stability of the SHC see Methods section. It is important to emphasize that the SHC may consist of saddles with more than one unstable manifold. These sequences can also be feasible because, according to [40] and [45], if a dynamical system is subjected to the influence of small noise, then for any trajectory going through an initial point in a neighborhood of such saddle, the probability to escape this neighborhood following a strongly unstable direction is almost one. The strongly unstable direction corresponds to the maximal eigenvalue of the linearization at the saddle point. In other words, everything occurs in the same way as for the SHC; one must only replace the unstable separatrices in the SHC by strongly unstable manifolds of saddle points. As we mentioned above, the variables A[i](t)≥0 in the basic Equations 2 or 4 can be interpreted in several different ways. One of them which is related to experimental work is the following. Using functional Principal Component (PC) analysis of fMRI data (see, for example [46]) it is possible to build a cognitive “phase space” based on the main orthogonal PCs. A point in such phase space characterizes the functional cognitive state at instant t. The set of states in subsequent instants of time is a cognitive trajectory that represents the transient cognitive dynamics. Sequential Decision Making Decisions have to be reproducible to allow for memory and learning. On the other hand, a decision making (DM) system also has to be sensitive to new information from the environment. These requirements are fundamentally contradictory, and current approaches [47]–[50] are not sufficient to explain the use of sequential activity for DM. Here, we formulate a new class of models suitable for analyzing sequential decision making (SDM) based on the SHC concept, which is a generalization of Model 4. A key finding in Decision Theory [51] is that the behavior of an individual shifts from risk-aversion (when possible gains are predicted) to risk seeking (when possible losses are predicted). In particular, Kahneman and Tversky [52] conducted several experiments to test decision making under uncertainty. They showed that when potential profits are concerned, decision-makers are risk averse, but when potential losses are concerned, subjects become risk seeking. Other classical paradigms assume that decision makers should always be risk averse, both when a potential profit and when a possible loss are predicted. SDM model. To illustrate how the SHC concept can be applied to the execution of a specific cognitive function, let us consider a simple fixed time (T*) game: a player is taking sequential actions in a changing environment so as to maximize the reward. The success of the game depends on the decision strategy. Formally, the SDM model consists of: (i) a set of environment states σ(I); (ii) a set of dynamical variables A[j]≥0 characterizing the level of activity of the cognitive modes that correspond to the execution of the decision strategy; and (iii) a scalar representing the cumulative reward that depends on the number of achieved steps in the available time T*, and on the values of the instantaneous reward at the steps along different transients, i.e., different choices. Depending on the environment conditions, the game can end at step (k+1), or it can continue using one or many different ways based on the different choices. It is clear that to get the maximum cumulative reward the player has to pass as many steps within the game's time T*. Thus, the strategy that will make the game successful has to be based on two conditions: 1. the game does not have to end in an attractor (stable fixed point) at time t<T*, and 2. the player has to encounter as many metastable states as possible during the time T*. It is difficult to estimate analytically which strategy is the best to solve the first problem. It can be done in a computer simulation, but we can make a prediction for the second problem. Let us assume that we have a successful game and, for the sake of simplicity, that the reward on each state is identical (as our computer simulations indicate, the results do not qualitatively change if the rewards for each step are different). Thus, the game dynamics in the phase space can be described by the system(5) where A[j]≥0, m[k] is the number of admissible values of σ[j] at the decision step t[k], represents the stimulus determined by the environment information I[k] at the step t[k], and η[j] is a multiplicative noise. We can think that the game is a continued process that is represented by a trajectory arranged in a heteroclinic channel (see Figure 1). The saddle vicinities correspond to the decision steps. Evidently, the number of such steps increases with the speed of the game that depends on the time that the system spends in the vicinity of the saddle (metastable state) as given by Equation 1: t[k] = 1/λ[k] ln (1/|η|) where |η| is the level of perturbation (average distance between the game trajectory and the saddle at decision step t[k]), and λ[k] is a maximal increment that corresponds to the unstable separatrices of this saddle. From this estimate we can make a clear prediction. If the system does not stop in the middle of the game (see Problem 1 above), to get the best reward a player has to choose the σ(I[k]) that correspond to the maximal λ[k] and to have an optimal level of the noise (not too much to avoid leaving the heteroclinic channel). Suppose that we have noise in the input I that controls the next step of the decision making. Since(7) such additive informational noise appears on the right side of the dynamical model as a multiplicative noise. Computer modeling. The parameters of the model were selected according to a uniform distribution in the range . As a proof of concept, the specific order of the sequence is not important. Therefore, the sequence order is set from saddle 0 to N which is obtained by setting a connectivity matrix so that . Note that there are infinite matrices that will produce the same sequence. All the rest of the parameters that form the basis of all possible perturbations or stimulations at each of the saddles or decision steps were taken from a uniform distribution . The specific selection of these parameters does not have any impact on the results that are shown throughout this paper. For the sake of simplicity, we assume that the external perturbations at each of the decision steps are uncorrelated. The dynamical systems 5 and 6 was integrated using a standard explicit variable Runge-Kutta method. When the trajectory reaches the vicinity of a saddle point within some radius ε = 0.1, then the decision making function is applied. The rule applied in this case is the high-risk rule, which is implemented as follows. At each saddle we calculate the increments λ[j][(q)i] = σ[j][(q)]−ρ[j][(q)i]σ[j][(q)] with q = 1,…, m[k] such that a specific q is chosen to obtain a maximal λ[j][(q)i] at each saddle. In other words, we choose the maximal increment, which corresponds to the fastest motion away from the saddle S[i], and therefore, the shortest time for reaching the next saddle. To evaluate the model, we analyzed the effect of the strength of uncorrelated multiplicative noise 〈η[j](t)η[j](t′)〉 = μδ(t−t′). The results are shown in Figure 5. As the theory predicted, the noise plays a key role in the game, and there exists an optimal level of noise. For low noise the system travels through most of the saddles in a slower manner (see Equation 1), while for increasing values of the noise the number of metastable states involved in the game are reduced. Figure 5A shows the cumulative reward for different noise levels. Two interesting cases were investigated. As we can see from the figure, the optimal cumulative reward is obtained for a particular noise level. For levels of moderate noise the system enters partially repeated sequences, because the two or more unstable directions allow the system to move to two or more different places in a random fashion. The reproducibility measure of the obtained sequences is shown in Figure 5B. We can see that the most reproducible sequences are generated for a slightly smaller level of noise than the one that corresponds to the maximum cumulative reward. To estimate the reproducibility across sequences we used the Levenshtein distance that basically finds the easiest way to transform one sequence into another [53]. This distance is appropriate to identify the repetitiveness of the sequence and it is used in multiple applications. Sometimes it happens that the sequence becomes repetitive, and in other cases it just dies. The error bars in this figure denote the standard deviation. While the Levenshtein distance displays not too large error bars, the cumulative reward does because for that level of noise is common to enter limit cycles that reach the maximum time. It is more likely to find two extremes: (i) ending quickly and (ii) reaching a limit cycle. Figure 5. Estimation of the cumulative reward for different noise levels using multiplicative noise. (A) Cumulative reward calculated as the number of cognitive states that the system travels through until the final time of the game T* which is 100 in this case. For each level of noise, 1000 different sequences are generated (for N = 15 and a total of 15 choices). (B) Reproducibility index of the sequence calculated with the average Levenshtein distance across all generated sequences. The lower the distance, the more similar the sequences are for 1000 different runs. The pair distances are calculated and averaged to obtain the mean and the standard deviation which is represented by the error bars. Concerning the formation of a habit it is important to note that the memorized sequence is subjected to the external stimulation that can change the direction at any given time. This fact is reflected in the results shown in Figure 5 where the Levenshtein distance does not go exactly to zero. The heteroclinic skeleton that forms the SHC can be broken and can even repeat itself to produce limit cycles for a given set of external stimulus. So the model does have alternatives that are induced by the set of external perturbations under the risk taking decision making rule. This simple game illustrates a type of transient cognitive dynamics with multiple metastable states. We suggest that other types of sequential decision making could be represented by similar dynamical mechanisms. We have provided in this paper a theoretical description of the dynamical mechanisms that may underlie some cognitive functions. Any theoretical model of a very complex process such as a cognitive task should emphasize those features that are most important and should downplay the inessential details. The main difficulty is to separate one from another. To build our theory we have chosen two key experimental observations: the existence of metastable cognitive states and the transitivity of reproducible cognitive processes. We have not separated the different parts of the brain that form the cognitive modes for the execution of a specific function. The main goal of such coarse grain theory is to create a general framework of transient cognitive dynamics that is based on a new type of model that includes uncertainty in a natural way. The reproducible transient dynamics based on SHC that we have discussed contains two different time scales, i.e., a slow time scale in the vicinity of the saddles and a fast time scale in the transitions between them (see Figure 1). Taking this into account, it is possible to build a dynamical model based not on ODEs but on a Poincare map (see for a review [5]), which can be computationally very efficient for modeling a complex system. Winnerless competitive dynamics (represented by a number of saddle states whose vicinities are connected by their unstable manifolds to form a heteroclinic sequence) is a natural dynamical image for many transient cognitive activities. In particular we wish to mention transient synchronization in the brain [54], where authors have studied the dynamics of transitions between different phase-synchronized states of alpha activity in spontaneous EEG. Alpha activity has been characterized as a series of globally synchronized states (quasi-stable patterns on the scalp). We think that this dynamics can be described on the framework of the winnerless competition principle. From the theoretical point of view, a heteroclinic network between partially synchronized phase clusters has been analyzed in [55],[56]. The SHC concept allows considering transitions even between synchronized states with strongly different basic frequencies (like gamma and beta frequencies). Cognitive functions can strongly influence each other. For example, when we model decision making we have to take into account attention, working memory and different information sources. In particular, the dynamic association of various contextual cues with actions and rewards is critical to make effective decisions [57]. A crucial question here is how to combine several reward predictions, each of which is based on different information: some reward predictions may only depend on visual cues, but others may utilize not only visual and auditory cues but also the action taken by a subject. Because the accuracy of different reward predictions varies dynamically during the course of learning, the combination of predictions is important [58]. In a more general view, the next step of the theory has to be the consideration of mutual interaction of models like Model 4 that represent the execution of different cognitive functions. The dynamical mechanisms discussed in this paper can contribute to the interpretation of experimental data obtained from brain imaging techniques, and also to design new experiments that will help us better understand high level cognitive processes. In particular, we think that the reconstruction of the cognitive phase space based on principal component analysis of fMRI data will allow finding the values of the dynamical model parameters for specific cognitive functions. To establish a direct relation between model variables and fMRI data will be extremely useful to implement novel protocols of assisted neurofeedback [59]–[62], which can open a wide variety of new medical and brain-machine applications. Stable Heteroclinic Sequence We consider a system of ordinary differential equations(M1) where the vector field X is C^2-smooth. We assume that the system M1 has N equilibria Q[1], Q[2], …, Q[N], such that each Q[i] is a hyperbolic point of saddle type with one dimensional unstable manifold that consists of Q[i] and two “separatrices”, the connected components of which we denote by . We assume also that , the stable manifold of Q[i][+1]. The set is called the heteroclinic sequence. We denote by the eigenvalues of the matrix . By the assumption above one of them is positive and the others have negative real parts. Without loss of generality one can assume that they are ordered in such a way that We will use below the saddle value (see Equation 1) For readers who are interested in understanding the details of these results we recommend, as a first step, to read references [63],[64]. Definition M1. The heteroniclic sequence Γ is called the stable heteroclinic sequence (SHS) if(M2) It was shown in [10],[32] that the conditions M2 imply stability of Γ in the sense that every trajectory started at a point in a vicinity of Q[1] remains in a neighborhood of Γ until it comes to a neighborhood of Q[N]. In fact, the motion along this trajectory can be treated as a sequence of switchings between the equilibria Q[i][ = 1, 2,…,N] Of course, the condition indicates the fact that the system M1 is not structurally stable and can only occur either for exceptional values of parameters or for systems of a special form. As an example of such a system one may consider the generalized Lotka-Volterra Model 4 (see [10],[32]). Stable Heteroclinic Channel We consider now another system, say,(M3) that also has N equilibria of saddle type Q[1], Q[2], …, Q[N] with one dimensional unstable manifold , and with v[i]>1, i = 1,…,N. Denote by U[i] a small open ball of radius ε centered at Q[i] (one may consider, of course, any small neighborhood of Q[i]) that does not contain invariant sets but Q[i]. The stable manifold divides U[i] into two parts: containing a piece of , and another one . Assume that , and denote by the connected component of containing Q[i] and that . Denote by the δ-neighborhood of in ℜ^d. Definition M2. Let. We say that the System M3 has a stable heteroclinic channel in V(ε,δ) if there exits a set of initial points such that for every x[0] ⊂ U there exits T>0 for which the solution x(t,x[0]), 0≤t≤T, of M3 satisfies the following conditions: 1. x(0, x[0]) = x[0] 2. for each 0≤t≤T, x(t,x[0]) Є V(ε,δ) 3. for each 1≤i≤N there exists t[i]<T such that Thus, if ε and δ are small enough, then the motion on the trajectory corresponding to x(t,x[0]) can be treated as a sequence of switchings along the pieces of unstable separatrices between the saddles Q[i], i = 1,…,N. It follows that the property to possess a SHC is structurally stable: if a System M3 has a SHC then a C^1- close to System M3 also has it. We prove this fact here under additional conditions. Denote by the intersection . It is a segment for which one end point is Q[i] while the other one, say P[i], belongs to the boundary . Let , the piece of the stable manifold of Q[i] and , where O[γ](B) is the γ-neighborhood of a set B in ℜ^d. The boundary ∂V[i](γ) consists of , a (d-1)-dimensional ball, B[i], “parallel to” and a “cylinder” homeomorphic to S ^d^−2×I, where S ^d^−2 is the (d-2)-dimensional sphere and I is the interval [0,1]. We denote by C[i] (γ) this cylinder. The proof of the following lemma is rather standard and can be performed by using a local technique in a neighborhood of a saddle equilibrium (see [63]–[65]). Lemma M1. There is 0<ε[0]<1 such that for any ε<ε[0] and any 1≤i≤N there exist ε[i]<ε[0] and 1<μ[i]<v[i] for which the following statement holds: if then(M4) where “dist” is the distance in ℜ^d, τ[i]>0 is the time and x(τ[i], x[0]) is the point of exit of the solution of M3, going through x[0], from. A segment has two end points: one of which is P[i] and the other one, say. Fix ε<ε[0]. Lemma M2. There exists members K[i]>1 and γ[i]>0 such that if, then: 1. there is such that 2. every point x(t, x[0]), belongs to the-neighborhood of. The lemma is a direct corollary of the theorem of continuous dependence of a solution of ODE on initial conditions on a finite interval of time. Now, fix the numbers μ[i], ε[i] satisfying Lemma M1. Then we impose a collection of assumptions that will guarantee the existence of the SHC. Assumption M[N]. The point. The lemma M2 implies that there exits such that for every . Fix a number such that(M6) Assumption M[N−1]. The point. Again, there exits such that for every . Fix a number such that(M7) Continuing we come to Assumption M[i]. (i = 1,…,N−2) The point. We choose such that(M8) where is fixed in such a way that provided that . The following theorem is a direct corollary of Lemmas 1 and 2, the assumptions M[N]−M[2] and the choice of numbers : Theorem M2. Under the assumptions above, the System M3 has a SHC in V(ε, δ) where and the set of initial points (see Definition M2). There exists σ>0 such that every system where also has a SHC in V(ε, δ), maybe with a smaller open set U of initial points. The proof of Corollary is based: 1. on the fact that the local stable and unstable manifolds of a saddle point for an original and a perturbed system are C^1-close to each other; 2. on the theorem of smooth dependence of a solution of ODE on parameters and 3. on the open nature of all assumptions of Theorem M2. The conditions look rather restrictive, in general. Nevertheless, for an open set of perturbations of a system possessing a SHS, they certainly occur. Theorem M3. If a System M1 has a SHS then there is an open set U in the Banach space of vector fields with the C^1-norm such that the system has a SHC, for every ZЄU. The proof can be made by a rather standard construction. Since for the system (M1) then in some local coordinates around a point the System M1 can be written as(M9) where x[1]ЄP, x[2]ЄP^d^−1, x = (x[1],x[2]), and the inequality x[1]>0 determines the side of belongs to. Denote by the “cup-function”: a C^1-smooth function ℜ^d→ℜ^+ such that . Now the system(M10) will have a piece of the separatrix satisfying the assumption M[i] if 0<δ[i]<<1. We perturb the System M1 in such a way for every i = 1, …,N−1 and obtain a System M3 having SHC provided that all δ[i] >0 and sufficiently small. Author Contributions Conceived and designed the experiments: MR VA. Performed the experiments: RH. Analyzed the data: RH PV. Contributed reagents/materials/analysis tools: RH PV. Wrote the paper: MR RH PV.
{"url":"http://www.ploscompbiol.org/article/info:doi/10.1371/journal.pcbi.1000072?imageURI=info:doi/10.1371/journal.pcbi.1000072.g003","timestamp":"2014-04-21T09:43:08Z","content_type":null,"content_length":"236601","record_id":"<urn:uuid:097587f6-843a-40bc-8a82-682a64de3c96>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00400-ip-10-147-4-33.ec2.internal.warc.gz"}
Exponential and Logarithmic Models Contents: This page corresponds to § 4.5 (p. 359) of the text. Suggested problems from text: p. 366 #7,9,15,17,25,33,35,69 Exponential Growth and Decay Two common types of mathematical models are Exponential Growth: y = a e^ bx, b > 0. Exponential Decay: y = a e^ -bx, b > 0. Example 1. During the 1980s the population of a certain city went from 100,000 to 205,000. Populations by year are listed in the table below. │ Year │1980│1981│1982│1983│1984│1985│1986│1987│1988│1989│ │Population in thousands │100 │108 │117 │127 │138 │149 │162 │175 │190 │205 │ This data is approximated well by the exponential growth model P = 100 e^0.08t, where t is the number of years since 1980. In other words, the year 1980 corresponds to t = 0, 1981 corresponds to t = 1, etc. The data points and model are graphed below. Population data points and model P = 100 e^0.08t where t is number of years since 1980. Problem 1: Use the model to predict the population of the city in 1994. 1991 corresponds to t = 11, so our model predicts that the population will be P = 100 e^0.08*11 = 241 thousand. Problem 2: According to our model, when will the population reach 300 thousand? To solve this problem we set 100 e^0.08t equal to 300 and solve for t. 100 e^0.08t = 300 e^0.08t = 3 Take the natural logarithm of both sides. ln e^0.08t = ln 3 0.08t = ln 3 t = (ln 3)/0.08 = 13.73, approximately. Therefore, the population is expected to reach 300 thousand about three fourths of the way through the year 1993. It is important to recognize the limitations of this model. While it is obvious from the graph that for t between 0 and 9, the model values are very close to the actual population values, we should not assume that our model will give an accurate prediction of population for values of t much larger than 9. For instance, the model predicts that in the year 2080 (t = 100), the population of the city will be almost 300 million! That is not likely. Suppose we know that a variable y can be expressed in the form ae^bx, but we don't know the values a and b. If we are given any two points on the graph of y, then it is possible to find the numbers a and b. The simplest case, and one that is often encountered in applications, is where we know the value of y when x = 0 and one other point on the graph of y. Example 2. Assume that a population P is growing exponentially, so P = ae^bt, where t is measured in years. If P = 15000 in 1990, and P has grown to 17000 in 1993, find the formula for P. Let t be the number of years since 1990. Then a = 15000, the value of P when t = 0. Note that t could have been chosen differently. For instance, we could let t be the number of years since 1900. But then we would not know the value of P when t = 0, so we would not know the value of a immediately. We still need to find b. All we know about b is that it is positive, since the population is growing. Using the value we have found for a, we have P = 15000 e^bt. The year 1993 corresponds to t = 3, so we substitute P = 17000 and t = 3 in the equation above and solve for b. 17000 = 15000 e^3b We have solved equations of this form several times. The first step is to isolate the exponential term. Then take the natural logarithm of both sides. 17/15 = e^3b ln (17/15) = ln e^3b ln (17/15) = 3b b = (ln(17/15))/3 = 0.0417 (approximately) Therefore P = 15000 e^0.0417 t Based on this model, when will the population reach 20000? Set P equal to 20000 and solve for t. 20000 = 15000 e^0.0417 t 4/3 = e^0.0417 t ln (4/3) = ln e^0.0417 t ln (4/3) = 0.0417 t t = (ln (4/3))/0.0417 = 6.9 years So, we expect the population to reach 20000 toward the end of 1996. Find a model of the type P = ae^bt, where t is the number of years since 1970, if P = 30000 in 1970 and P = 36000 in 1977. Use this model to predict the value of P in 1980. When the coefficient of x (or whatever the independent variable is named) is negative, then we are modeling a decreasing variable. This is called exponential decay. We will illustrate exponential decay by considering a radioactive substance. A sample of a radioactive substance decays with time. Example 3. The mass (in grams) of radioactive material in a sample is given by N = 100e^-0.0017t, where t is measured in years. Find the half-life of this radioactive substance. The half-life of a radioactive substance is the amount of time required for half of a give sample to decay. Note that half-life is independent of the size of the sample. If the half-life of a certain radioactive material is 700 years, then if the initial mass of the sample is 1000 grams, in 700 years there will be 500 grams. If the initial mass of the sample is only 8 grams, in 700 yeas there will be 4 grams. In this example, the mass of the radioactive material is 100 grams at time t = 0. Therefore the half-life is the amount of time necessary for the sample to decay to 50 grams. So we can find the half-life by setting N equal to 50 and solving for t. 100e^-0.0017t = 50 e^-0.0017t = 0.5 Note: If the initial amount had been 800 grams, then we would have to solve the equation 800e^-0.0017t=400, and after dividing both sides by 800 we would have e^-0.0017t = 0.5, which is the same as the equation above. That is why half-life is independent of initial quantity. ln e^-0.0017t = ln 0.5 -0.0017t = ln 0.5 t = (ln 0.5)/ -0.0017 = 408 years (approximately) Question: Why is (ln 0.5)/-0.0017 equal to (ln 2)/0.0017 ? In an earlier section we discussed an important example of exponential growth, namely continuous compounding of interest. Computing the half-life of a radioactive substance is very similar to computing the doubling time for an investment. Example 4. If $1000 is invested at 9% annual interest compounded continuously, how long will it take for the investment to double? Using the compound interest formula A = Pe^rt, we have A = 1000e^0.09t We want to find the amount of time it will take for the investment to double, that is grow to $2000, so we set A equal to 2000 and solve for t. 1000e^0.09t = 2000 Divide both sides by 1000. e^0.09t = 2 Note: Just like half-life, doubling time is independent of the initial investment P. If we had started with $50 and asked how long it will be before we have $100, then we would have solved the equation 50e^0.09t = 100, and after dividing both sides by 50 we would again have e^0.09t = 2. ln e^0.09t = ln 2 0.09 t = ln 2 t = (ln 2)/0.09 = 7.7 years (approximately). Question: If you invest $3000 at 9% compounded continuously, about how much will you have in 15 and a half years? Answer: Around $12000. This is easy to approximate because doubling time is 7.7 years, so 15.5 years corresponds to just over two doublings. 3000 -> 6000 -> 12000. How long does it take an investment to triple at %11 compounded continuously? Answer Other Models Example 5. Consider the following data points. │x│ 1 │ 2 │ 3 │ 4 │ 5 │ 6 │ 7 │ When we plot these points we see that they do not seem to lie on any exponential curve, but the shape is very much like a logarithmic graph. This suggests that a logarithmic model is reasonable. The graph below shows the data points and the function y = 0.63 + 2.7 ln x which fits the data points quite well.
{"url":"http://dl.uncw.edu/digilib/mathematics/algebra/mat111hb/eandl/elmodels/elmodels.html","timestamp":"2014-04-17T09:55:39Z","content_type":null,"content_length":"15341","record_id":"<urn:uuid:cd6b6a43-2490-4c66-b458-1a83eff473ea>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00325-ip-10-147-4-33.ec2.internal.warc.gz"}
Figure This Math Challenges for Families - Did You Know? French mathematician Peter Gustav Lejeune Dirichlet (1805-1859) first described the pigeonhole principle. The pigeonhole principle was so named because if 10 homing pigeons return to 9 holes, then at least one hole must have two pigeons in it. A good approximation of the number of hairs on the human head is about 100,000.
{"url":"http://www.figurethis.org/challenges/c28/did_you_know.htm","timestamp":"2014-04-19T22:06:17Z","content_type":null,"content_length":"14463","record_id":"<urn:uuid:40f891a1-77e2-4317-af55-3f28c782897a>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00052-ip-10-147-4-33.ec2.internal.warc.gz"}
Dynamic finite-element calculation of excitation modes in periodic ASA 125th Meeting Ottawa 1993 May 2aPA3. Dynamic finite-element calculation of excitation modes in periodic and disordered Lucite--steel composite systems. Ping Sheng Minyao Zhou Exxon Res. and Eng. Co., 79 Rte. 22 East, Annadale, NJ 08801-0998 Recent experiment by L. Ye et al. on Lucite--steel composite system [Phys. Rev. Lett. 69, 3080 (1992)] has demonstrated the localization of bending waves. This is particularly surprising on first sight because the huge acoustic impedance mismatch between Lucite and steel would seem to imply that Lucite should have very little effect on the steel. A dynamic finite-element approach was used to calculate the excitation modes in a 2-D periodic structure consisting of a steel plate decorated by a checkerboard pattern of Lucite blocks. The results show that the coupling between the steel plate bending wave and the flexural resonances of the Lucite blocks can drastically alter the dispersion relation of the bending wave, whereas the effect of periodicity is minimal. Since the Lucite flexural resonance frequencies depend on the Lucite block height, its effect on the bending wave can be tuned in terms of the frequency as demonstrated experimentally by Ye et al. This talk will also report the results of calculations on a random 1-D Lucite--steel system. In particular, the localization characteristics will be emphasized.
{"url":"http://www.auditory.org/asamtgs/asa93ott/2aPA/2aPA3.html","timestamp":"2014-04-21T07:07:45Z","content_type":null,"content_length":"1822","record_id":"<urn:uuid:db280ac8-23d0-4727-b7ec-707a07acd71f>","cc-path":"CC-MAIN-2014-15/segments/1397609539665.16/warc/CC-MAIN-20140416005219-00527-ip-10-147-4-33.ec2.internal.warc.gz"}
Answers to math word problems free? Browse Ask Answer Search Join/Login Answers to math word problems free? Asked Jan 31, 2012, 01:37 PM — 1 Answer Rachel needs a mixture of 55 pounds (lb) of nuts consisting of peanuts and cashews. Let p represent the number of pounds of peanuts in the mixture. Write an algebraic expression for the number of pounds of cashews that she needs to add. Share | 1 Answer Not your question? Ask your question View similar questions Thread Tools Search this Thread Show Printable Version Email this Page Check out some similar questions! Answers to math word problems free? [ 7 Answers ] It usually takes Brian 1 1/2 hours to get to work from the time he gets out of bed. His drive to the office takes 3/4 hour. How much time does he spend getting ready for work? Free answers to math word problems? [ 3 Answers ] If al eats 50 cookies every 60 minutes, how many cookies will he have eaten in 3 days? Free Math Word Problems Answers? [ 4 Answers ] The normal body temp of a camel is 97.7 if it has no water by noon its body temp can be greater than 104 write the inequality of the camel temp at noon Answers to math word problems for free [ 1 Answers ] Smitty isn’t superstitious, but whenever she finds a coin in the street, she makes a wish and tosses it into a fountain. Last week, she hit the sidewalk jackpot. On each of six days (Monday through Saturday), she found a U.S. coin of a different denomination (penny, $0.01; nickel, $0.05; dime,... View more questions Search EN NL Home Blog Contact Us Privacy Policy Experts Help Answers to math word problems free? Asked Jan 31, 2012, 01:37 PM — 1 Answer Rachel needs a mixture of 55 pounds (lb) of nuts consisting of peanuts and cashews. Let p represent the number of pounds of peanuts in the mixture. Write an algebraic expression for the number of pounds of cashews that she needs to add. Rachel needs a mixture of 55 pounds (lb) of nuts consisting of peanuts and cashews. Let p represent the number of pounds of peanuts in the mixture. Write an algebraic expression for the number of pounds of cashews that she needs to add. Thread Tools Search this Thread Show Printable Version Email this Page It usually takes Brian 1 1/2 hours to get to work from the time he gets out of bed. His drive to the office takes 3/4 hour. How much time does he spend getting ready for work? If al eats 50 cookies every 60 minutes, how many cookies will he have eaten in 3 days? The normal body temp of a camel is 97.7 if it has no water by noon its body temp can be greater than 104 write the inequality of the camel temp at noon Smitty isn’t superstitious, but whenever she finds a coin in the street, she makes a wish and tosses it into a fountain. Last week, she hit the sidewalk jackpot. On each of six days (Monday through Saturday), she found a U.S. coin of a different denomination (penny, $0.01; nickel, $0.05; dime,... EN NL Home Blog Contact Us Privacy Policy Experts Help
{"url":"http://www.askmehelpdesk.com/middle-school/answers-math-word-problems-free-632156.html","timestamp":"2014-04-18T10:37:31Z","content_type":null,"content_length":"37525","record_id":"<urn:uuid:45767328-0dde-47d4-9800-b7e26d196e9e>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00226-ip-10-147-4-33.ec2.internal.warc.gz"}
Enumeration of words by the sum of differences between adjacent letters Toufik Mansour We consider the sum $u$ of differences between adjacent letters of a word of $n$ letters, chosen uniformly at random from a given alphabet. This paper obtains the enumerating generating function for the number of such words with respect to the sum $u$, as well as explicit formulas for the mean and variance of $u$. Full Text: PDF PostScript
{"url":"http://www.dmtcs.org/dmtcs-ojs/index.php/dmtcs/article/viewArticle/705","timestamp":"2014-04-16T22:09:13Z","content_type":null,"content_length":"10640","record_id":"<urn:uuid:9b7821b8-1732-458b-b7ad-b4d4b5c2d086>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00151-ip-10-147-4-33.ec2.internal.warc.gz"}
MAP estimator: Laplacian error January 18th 2011, 05:44 PM MAP estimator: Laplacian error suppose we have N noisy versions of the parameter x. The (additive) noise is Laplacian distributed and the parameter x is known a priori to have a gaussian distribution. What is required is to estimate x given the N noisy measurements, using the MAP estimator. Since the absolute function is non-differentiable it's approximated with a hyperbola. However, finding the (approximate) MAP solution requires evaluating the following finite series, first: f(x) = sum { (y_k - x)/del * 1/sqrt(1+ ((y_k - x)/del)^2) } , k=1:N, (y_k is the k-th measurement). Did any of the members here come across a similar problem?
{"url":"http://mathhelpforum.com/advanced-statistics/168722-map-estimator-laplacian-error-print.html","timestamp":"2014-04-19T15:35:11Z","content_type":null,"content_length":"3695","record_id":"<urn:uuid:210c375c-c80a-4849-9105-61c6f818817c>","cc-path":"CC-MAIN-2014-15/segments/1397609537271.8/warc/CC-MAIN-20140416005217-00164-ip-10-147-4-33.ec2.internal.warc.gz"}
iterated integral April 22nd 2010, 11:05 AM #1 Junior Member Dec 2009 iterated integral Hello, I want to calculate: $\int_{A} y d(x,y)$ where $A={(x,y): y\ge0; x^2+y^2\le1; x^2+y^2-2x\le0 }$ $\int_{A} y d(x,y) = \int^{1/2}_0 \int^{\sqrt{2x-x^2}}_0 y dy dx + \int^{1}_{1/2} \int^{\sqrt{1-x^2}}_0 y dy dx$ Solving I have: $\int_{A} y d(x,y) = \frac{5}{24}$ I have tried to solve it with polar coordinate: $\int_{A} y d(x,y) = \int^{\pi/3}_0 \int^1_0 r^2 sin\theta dr d\theta + \int^{\pi/2}_{\pi/3} \int^{2cos\theta}_1 r^2 sin\theta dr d\theta$ Solving this one I have: $\int_{A} y d(x,y) = \frac{1}{24}$ As we can see, second result is 5 times the first one, so there must be any mistakes. Any suggestions? Thank you very much. I have made the graphic with A region, I think maybe there is a mistake with polar coordinate? Thank you. April 23rd 2010, 02:25 AM #2 Junior Member Dec 2009
{"url":"http://mathhelpforum.com/calculus/140743-iterated-integral.html","timestamp":"2014-04-24T16:37:02Z","content_type":null,"content_length":"32789","record_id":"<urn:uuid:ba439f94-942d-4211-bd76-3e9ec4b0be42>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00038-ip-10-147-4-33.ec2.internal.warc.gz"}
Math Forum Discussions Math Forum Ask Dr. Math Internet Newsletter Teacher Exchange Search All of the Math Forum: Views expressed in these public forums are not endorsed by Drexel University or The Math Forum. Topic: Is logic part of mathematics - or is mathematics part of logic? Replies: 10 Last Post: Jul 9, 2013 4:09 AM Messages: [ Previous | Next ] Re: Is logic part of mathematics - or is mathematics part of logic? Posted: Jul 9, 2013 4:09 AM On Jul 8, 2013, at 11:37 PM, GS Chandy <gs_chandy@yahoo.com> wrote: > The way RH had learned math is in total contradiction to the 'teaching philosophy' he has been expounding here at Math-teach these many years. I think I have been pretty clear that my ultimate goal in teaching is the art of the subject. What probably happened is that you didn't get that. Bob Hansen
{"url":"http://mathforum.org/kb/message.jspa?messageID=9161057","timestamp":"2014-04-19T09:43:29Z","content_type":null,"content_length":"28431","record_id":"<urn:uuid:67ee76ce-a707-45b0-9be0-c610d8e8cc44>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00421-ip-10-147-4-33.ec2.internal.warc.gz"}
Energy in Electromagnetic Waves Next: Worked Examples Up: Electromagnetic Waves Previous: Effect of Dielectric Materials From Sect. 233, the energy stored per unit volume in an electromagnetic wave is given by It is clear, from the above, that half the energy in an electromagnetic wave is carried by the electric field, and the other half is carried by the magnetic field. As an electromagnetic field propagates it transports energy. Let i.e., It follows, from the definition of so that Since half the energy in an electromagnetic wave is carried by the electric field, and the other half is carried by the magnetic field, it is conventional to convert the above expression into a form involving both the electric and magnetic field strengths. Since, Equation (338) specifies the power per unit area transported by an electromagnetic wave at any given instant of time. The peak power is given by where average power per unit area transported by an electromagnetic wave is half the peak power, so that The quantity intensity of the wave. Next: Worked Examples Up: Electromagnetic Waves Previous: Effect of Dielectric Materials Richard Fitzpatrick 2007-07-14
{"url":"http://farside.ph.utexas.edu/teaching/316/lectures/node119.html","timestamp":"2014-04-18T08:43:06Z","content_type":null,"content_length":"12384","record_id":"<urn:uuid:08e60505-230a-4008-9f7b-a00225e9a750>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00465-ip-10-147-4-33.ec2.internal.warc.gz"}
On Hyperideals in Left Almost Semihypergroups ISRN Algebra Volume 2011 (2011), Article ID 953124, 8 pages Research Article On Hyperideals in Left Almost Semihypergroups Department of Mathematics & Computer Science, Faculty of Natural Sciences, University of Gjirokastra, Gjirokastra 6001, Albania Received 7 June 2011; Accepted 11 July 2011 Academic Editors: A. V. Kelarev and A. Kiliçman Copyright © 2011 Kostaq Hila and Jani Dine. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. This paper deals with a class of algebraic hyperstructures called left almost semihypergroups (LA-semihypergroups), which are a generalization of LA-semigroups and semihypergroups. We introduce the notion of LA-semihypergroup, the related notions of hyperideal, bi-hyperideal, and some properties of them are investigated. It is a useful nonassociative algebraic hyperstructure, midway between a hypergroupoid and a commutative hypersemigroup, with wide applications in the theory of flocks, and so forth. We define the topological space and study the topological structure of LA-semihypergroups using hyperideal theory. The topological spaces formation guarantee for the preservation of finite intersection and arbitrary union between the set of hyperideals and the open subsets of resultant 1. Introduction and Preliminaries The applications of mathematics in other disciplines, for example in informatics, play a key role, and they represent, in the last decades, one of the purposes of the study of the experts of hyperstructures theory all over the world. Hyperstructure theory was introduced in 1934 by a French mathematician Marty [1], at the 8th Congress of Scandinavian Mathematicians, where he defined hypergroups based on the notion of hyperoperation, began to analyze their properties, and applied them to groups. In the following decades and nowadays, a number of different hyperstructures are widely studied from the theoretical point of view and for their applications to many subjects of pure and applied mathematics and computer science by many mathematicians. In a classical algebraic structure, the composition of two elements is an element, while in an algebraic hyperstructure, the composition of two elements is a set. Some principal notions about hyperstructures and semihypergroups theory can be found in [1–7]. The Theory of ideals, in its modern form, is a contemporary development of mathematical knowledge to which mathematicians of today may justly point with pride. Ideal theory is important not only for the intrinsic interest and purity of its logical structure but because it is a necessary tool in many branches of mathematics and its applications such as in informatics, physics, and others. As an example of applications of the concept of an ideal in informatics, let us mention that ideals of algebraic structures have been used recently to design efficient classification systems, see [8–12]. The study of LA-semigroup as a generalization of commutative semigroup was initiated in 1972 by Kazim and Naseeruddin [13]. They have introduced the concept of an LA-semigroup and have investigated some basic but important characteristics of this structure. They have generalized some useful results of semigroup theory. Since then, many papers on LA-semigroups appeared showing the importance of the concept and its applications [13–23]. In this paper, we generalize this notion introducing the notion of LA-semihypergroup which is a generalization of LA-semigroup and semihypergroup, proposing so a new kind of hyperstructure for further studying. It is a useful nonassociative algebraic hyperstructure, midway between a hypergroupoid and a commutative hypersemigroup, with wide applications in the theory of flocks etc. Although the hyperstructure is nonassociative and noncommutative, nevertheless, it possesses many interesting properties which we usually find in associative and commutative algebraic hyperstructures. A several properties of hyperideals of LA-semihypergroup are investigated. In this note, we define the topological space and study the topological structure of LA-semihypergroups using hyperideal theory. The topological spaces formation guarantee for the preservation of finite intersection and arbitrary union between the set of hyperideals and the open subsets of resultant topologies. Recall first the basic terms and definitions from the hyperstructure theory. Definition 1.1. A map is called hyperoperation or join operation on the set , where is a nonempty set and denotes the set of all nonempty subsets of . Definition 1.2. A hyperstructure is called the pair , where is a hyperoperation on the set . Definition 1.3. A hyperstructure is called a semihypergroup if for all , , which means that If and are nonempty subsets of , then Definition 1.4. A nonempty subset of a semihypergroup is called a sub-semihypergroup of if , and is called in this case super-semihypergroup of . Definition 1.5. Let be a semihypergroup. Then is called a hypergroup if it satisfies the reproduction axiom, for all , . Definition 1.6. A hypergrupoid is called an LA-semihypergroup if, for all , Every LA-semihypergroup satisfies the medial law, that is, for all , In every LA-semihypergroup with left identity, the following law holds: for all . An element in an LA-semihypergroup is called identity if . An element 0 in a semihypergroup is called zero element if . A subset of an LA-semihypergroup is called a right (left) hyperideal if and is called a hyperideal if it is two-sided hyperideal, and if is a left hyperideal of , then becomes a hyperideal of . By a bi-hyperideal of an LA-semihypergroup , we mean a sub-LA-semihypergroup of such that . It is easy to note that each right hyperideal is a bi-hyperideal. If has a left identity, then it is not hard to show that is a bi-hyperideal of and . If denotes the set of all idempotents subsets of with left identity , then forms a hypersemilattice structure, also if , then . The intersection of any set of bi-hyperideals of an LA-semihypergroup is either empty or a bi-hyperideal of . Also the intersection of prime bi-hyperideals of an LA-semihypergroup is a semiprime bi-hyperideal of . 2. Main Results Proposition 2.1. Let be an LA-semihypergroup with left identity, a left hyperideal, and a bi-hyperideal of . Then and are bi-hyperideals of . Proof. Using the medial law (1.4), we get also Hence, is a bi-hyperideal of . we obtain also Hence, is a bi-hyperideal of . Proposition 2.2. Let be an LA-semihypergroup with left identity and two bi-hyperideals of . Then is a bi-hyperideal of . Proof. Using (1.4), we get By the above, if and are nonempty, then and are connected bi-hyperideals. Proposition 2.1 leads us to an easy generalization, that is, if are bi-hyperideals of an LA-semihypergroup with left identity, then are bi-hyperideals of , consequently the set of bi-hyperideals forms an LA-semihypergroup. If is an LA-semihypergroup with left identity , then and are bi-hyperideals of . It can be easily shown that , , and . Hence, this implies that and . Also, , , , , and (if is an idempotent), consequently . It is easy to show that . Lemma 2.3. Let be an LA-semihypergroup with left identity, and let be an idempotent bi-hyperideal of . Then is a hyperideal of . Proof. By the definition of LA-semihypergroup (1.3), we have and every right hyperideal in with left identity is left. Lemma 2.4. Let be an LA-semihypergroup with left identity , and let be a proper bi-hyperideal of . Then . Proof. Let us suppose that . Since , using (1.3), we have . It is impossible. So, . It can be easily noted that . Proposition 2.5. Let be an LA-semihypergroup with left identity, and let be bi-hyperideals of . Then the following statements are equivalent: (1)every bi-hyperideal is idempotent, (2),(3)the hyperideals of form a hypersemilattice , where . Proof. (1)⇒(2). Using Lemma 2.3, it is easy to note that . Since implies , hence . (2)⇒(3). and . Similarly, associativity follows. Hence, is a hypersemilattice. (3)⇒(1). . A bi-hyperideal of an LA-semihypergroup is called a prime bi-hyperideal if implies either or for every bi-hyperideal and of . The set of bi-hyperideals of is totally ordered under the set inclusion if for all bi-hyperideals either or . Theorem 2.6. Let be an LA-semihypergroup with left identity. Every bi-hyperideal of is prime if and only if it is idempotent and the set of the bi-hyperideals of is totally ordered under the set Proof. Let us assume that every bi-hyperideal of is prime. Since is a hyperideal and so is prime which implies that , hence is idempotent. Since is a bi-hyperideal of (where and are bi-hyperideals of ) and so is prime. Now by Lemma 2.3, either or which further implies that either or . Hence, the set of bi-hyperideals of is totally ordered under set inclusion. Conversely, let us assume that every bi-hyperideals of is idempotent and the set of bi-hyperideals of is totally ordered under set inclusion. Let and be the bi-hyperideals of with and without loss of generality assume that . Since is an idempotent, so implies that , and, hence, every bi-hyperideal of is prime. A bi-hyperideal of an LA-semihypergroup is called strongly irreducible bi-hyperideal if implies either or for every bi-hyperideal and of . Theorem 2.7. Let be an LA-semihypergroup with zero. Let be the set of all bi-hyperideals of , and the set of all strongly irreducible proper bi-hyperideals of , then forms a topology on the set , where and :Bi-hyperideal preserves finite intersection and arbitrary union between the set of bi-hyperideals of and open subsets of . Proof. Since is a bi-hyperideal of and 0 belongs to every bi-hyperideal of , then , also which is the first axiom for the topology. Let , then , where is a bi-hyperideal of generated by . Let and , if , then and . Let us suppose , this implies that either or . It is impossible. Hence, which further implies that . Thus . Now if , then and . Thus and , therefore , which implies that . Hence is the topology on . Define :Bi-hyperideal by , then it is easy to note that preserves finite intersection and arbitrary union. A hyperideal of an LA-semihypergroup is called prime if implies that either or for all hyperideals and in . Let denotes the set of proper prime hyperideals of an LA-semihypergroup absorbing 0. For a hyperideal of , we define the sets and . Theorem 2.8. Let be an LA-semihypergroup with zero. The set constitutes a topology on the set . Proof. Let , if , then and and . Let which implies that either or , which is impossible. Hence, . Similarly . The remaining proof follows from Theorem 2.7. The assignment preserves finite intersection and arbitrary union between the hyperideal and their corresponding open subsets of . Let be a left hyperideal of an LA-semihypergroup . is called quasiprime if for left hyperideals of such that , we have or . Theorem 2.9. Let be an LA-semihypergroup with left identity . Then a left hyperideal of is quasiprime if and only if implies that either or . Proof. Let be a left hyperideal of . Let us assume that , then that is, Hence, either or . Conversely, let us assume that , where and are left hyperideal of such that . Then there exists such that . Now, by the hypothesis, we have for all . Since , so by hypothesis, for all , we obtain . This shows that is quasiprime. An LA-semihypergroup is called an antirectangular if , for all . It is easy to see that . In the following results for an antirectangular LA-semihypergroup , . Proposition 2.10. Let be an LA-semihypergroup. If are hyperideals of , then is a hyperideal. Proof. Using (1.4), we have also which shows that is a hyperideal. Consequently, if are hyperideals of , then are hyperideals of and the set of hyperideals of form an antirectangular LA-semihypergroup. Lemma 2.11. Let be an antirectangular LA-semihypergroup. Any subset of is left hyperideal if and only if it is right. Proof. Let be a right hyperideal of , then using (1.3), we get . Conversely, let us suppose that is a left hyperideal of , then using (1.3), we have . It is fact that . From the above lemma, we remark that every quasiprime hyperideal becomes prime in an antirectangular LA-semihypergroup. Lemma 2.12. Let be an anti-rectangular LA-semihypergroup. If is a hyperideal of , then . Proof. Let , then . Hence . Also, . An hyperideal of an LA-semihypergroup is called an idempotent if . An LA-semihypergroup is said to be fully idempotent if every hyperideal of is idempotent. Proposition 2.13. Let be an antirectangular LA-semihypergroup, and, be hyperideals of . Then the following statements are equivalent: (1) is fully idempotent, (2),(3)the hyperideals of form a hypersemilattice where . The proof follows from Proposition 2.5. The set of hyperideals of is totally ordered under set inclusion if for all hyperideals either or and denoted by hyperideal. Theorem 2.14. Let be an antirectangular LA-semihypergroup. Then every hyperideal of is prime if and only if it is idempotent and hyperideal is totally ordered under set inclusion. Proof. The proof follows from Theorem 2.6. In conclusion, let us mention that it would be interesting to investigate whether it is possible to apply hyperideals of hyperstructures to the construction of classification systems similar to those introduced in [8–12]. The authors are highly grateful to referees for their valuable comments and suggestions. 1. F. Marty, “Sur une generalization de la notion de group, 8th Congres Math,” Scandinaves, pp. 45–49, 1934. 2. P. Corsini, Prolegomena of Hypergroup Theory, Aviani Editore, 2nd edition, 1993. View at Zentralblatt MATH 3. P. Corsini and V. Leoreanu, Applications of Hyperstructure Theory, vol. 5 of Advances in Mathematics, Kluwer Academic Publishers, Dordrecht, The Netherlands, 2003. View at Zentralblatt MATH 4. B. Davvaz and V. Leoreanu-Fotea, Hyperring Theory and Applications, International Academic Press, 2007. 5. T. Vougiouklis, Hyperstructures and Their Representations, Hadronic Press, Palm Harbor, Fla, USA, 1994. View at Zentralblatt MATH 6. K. Hila, B. Davvaz, and K. Naka, “On quasi-hyperideals in semihypergroups,” Communications in Algebra. In press. 7. K Hila, B. Davvaz, and J. Dine, “Study on the structure of Γ-semihypergroups,” Communications in Algebra. In press. 8. A. V. Kelarev, J. L. Yearwood, and M. A. Mammadov, “A formula for multiple classifiers in data mining based on Brandt semigroups,” Semigroup Forum, vol. 78, no. 2, pp. 293–309, 2009. View at Publisher · View at Google Scholar · View at Zentralblatt MATH 9. A. V. Kelarev, J. L. Yearwood, and P. W. Vamplew, “A polynomial ring construction for the classification of data,” Bulletin of the Australian Mathematical Society, vol. 79, no. 2, pp. 213–225, 2009. View at Publisher · View at Google Scholar · View at Zentralblatt MATH 10. A. V. Kelarev, J. L. Yearwood, and P. Watters, “Rees matrix constructions for clustering of data,” Journal of the Australian Mathematical Society, vol. 87, no. 3, pp. 377–393, 2009. View at Publisher · View at Google Scholar 11. A. V. Kelarev, J. L. Yearwood, P. Watters, X. Wu, J. H. Abawajy, and L. Pan, “Internet security applications of the Munn rings,” Semigroup Forum, vol. 81, no. 1, pp. 162–171, 2010. View at Publisher · View at Google Scholar · View at Zentralblatt MATH 12. A. V. Kelarev, J. L. Yearwood, and P. A. Watters, “Optimization of classifiers for data mining based on combinatorial semigroups,” Semigroup Forum, vol. 82, no. 2, pp. 242–251, 2011. View at Publisher · View at Google Scholar 13. M. A. Kazim and M. Naseeruddin, “On almost semigroups,” The Aligarh Bulletin of Mathematics, vol. 2, pp. 1–7, 1972. View at Zentralblatt MATH 14. Q. Mushtaq and S. M. Yusuf, “On LA-semigroups,” The Aligarh Bulletin of Mathematics, vol. 8, pp. 65–70, 1978. View at Zentralblatt MATH 15. Q. Mushtaq and S. M. Yusuf, “On locally associative LA-semigroups,” The Journal of Natural Sciences and Mathematics, vol. 19, no. 1, pp. 57–62, 1979. View at Zentralblatt MATH 16. Q. Mushtaq, “Abelian groups defined by LA-semigroups,” Studia Scientiarum Mathematicarum Hungarica, vol. 18, no. 2–4, pp. 427–428, 1983. View at Zentralblatt MATH 17. Q. Mushtaq and Q. Iqbal, “Decomposition of a locally associative LA-semigroup,” Semigroup Forum, vol. 41, no. 2, pp. 155–164, 1990. View at Publisher · View at Google Scholar · View at Zentralblatt MATH 18. Q. Mushtaq and M. S. Kamran, “On left almost groups,” Proceedings of the Pakistan Academy of Sciences, vol. 33, no. 1-2, pp. 53–55, 1996. 19. P. V. Protić and N. Stevanović, “On Abel-Grassmann's groupoids,” in Proceedings of the Mathematical Conference in Priština, pp. 31–38. View at Zentralblatt MATH 20. Q. Mushtaq and M. S. Kamran, “On LA-semigroup with weak associative law,” Scientiffic Khyber, vol. 1, pp. 69–71, 1989. 21. Q. Mushtaq and M. Khan, “M-systems in LA-semigroups,” Southeast Asian Bulletin of Mathematics, vol. 33, no. 2, pp. 321–327, 2009. View at Zentralblatt MATH 22. Q. Mushtaq and M. Khan, “Topological structures on Abel-Grassman's grupoids,” arXiv:0904.1650v1, 2009. 23. P. Holgate, “Groupoids satisfying a simple invertive law,” The Mathematics Student, vol. 61, no. 1–4, pp. 101–106, 1992. View at Zentralblatt MATH
{"url":"http://www.hindawi.com/journals/isrn.algebra/2011/953124/","timestamp":"2014-04-21T11:02:24Z","content_type":null,"content_length":"343741","record_id":"<urn:uuid:59be480f-9330-45d6-a4fb-31b58b95411a>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00534-ip-10-147-4-33.ec2.internal.warc.gz"}
Complexity Results for Modal Dependence Logic when quoting this document, please refer to the following URN: urn:nbn:de:0030-drops-25240 URL: http://drops.dagstuhl.de/opus/volltexte/2010/2524/ Lohmann, Peter ; Vollmer, Heribert Complexity Results for Modal Dependence Logic Modal dependence logic was introduced very recently by Väänänen. It enhances the basic modal language by an operator dep. For propositional variables p_1,...,p_n, dep(p_1,...,p_(n-1);p_n) intuitively states that the value of p_n only depends on those of p_1,...,p_(n-1). Sevenster (J. Logic and Computation, 2009) showed that satisfiability for modal dependence logic is complete for nondeterministic exponential time. In this paper we consider fragments of modal dependence logic obtained by restricting the set of allowed propositional connectives. We show that satisfibility for poor man's dependence logic, the language consisting of formulas built from literals and dependence atoms using conjunction, necessity and possibility (i.e., disallowing disjunction), remains NEXPTIME-complete. If we only allow monotone formulas (without negation, but with disjunction), the complexity drops to PSPACE-completeness. We also extend Väänänen's language by allowing classical disjunction besides dependence disjunction and show that the satisfiability problem remains NEXPTIME-complete. If we then disallow both negation and dependence disjunction, satistiability is complete for the second level of the polynomial hierarchy. In this way we completely classify the computational complexity of the satisfiability problem for all restrictions of propositional and dependence operators considered by Väänänen and Sevenster. BibTeX - Entry author = {Peter Lohmann and Heribert Vollmer}, title = {Complexity Results for Modal Dependence Logic}, booktitle = {Circuits, Logic, and Games}, year = {2010}, editor = {Benjamin Rossman and Thomas Schwentick and Denis Th{\'e}rien and Heribert Vollmer}, number = {10061}, series = {Dagstuhl Seminar Proceedings}, ISSN = {1862-4405}, publisher = {Schloss Dagstuhl - Leibniz-Zentrum fuer Informatik, Germany}, address = {Dagstuhl, Germany}, URL = {http://drops.dagstuhl.de/opus/volltexte/2010/2524}, annote = {Keywords: Dependence logic, satisfiability problem, computational complexity, poor man's logic} Keywords: Dependence logic, satisfiability problem, computational complexity, poor man's logic Seminar: 10061 - Circuits, Logic, and Games Issue date: 2010 Date of publication: 26.04.2010 DROPS-Home | Fulltext Search | Imprint
{"url":"http://drops.dagstuhl.de/opus/volltexte/2010/2524/","timestamp":"2014-04-21T00:37:42Z","content_type":null,"content_length":"8625","record_id":"<urn:uuid:bcca1a9b-b365-4317-b80e-5acca2ccaba3>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00599-ip-10-147-4-33.ec2.internal.warc.gz"}
Neighbor joining algorithms for inferring phylogenies via LCA-distances , 2009 "... Distance based reconstruction methods of phylogenetic trees consist of two independent parts: first, inter-species distances are inferred assuming some stochastic model of sequence evolution; then the inferred distances are used to construct a tree. In this paper we concentrate on the task of inters ..." Cited by 6 (4 self) Add to MetaCart Distance based reconstruction methods of phylogenetic trees consist of two independent parts: first, inter-species distances are inferred assuming some stochastic model of sequence evolution; then the inferred distances are used to construct a tree. In this paper we concentrate on the task of interspecies distance estimation. Specifically, we characterize the family of valid distance functions for the assumed substitution model and show that deliberate selection of distance function significantly improves the accuracy of distance estimates and, consequently, also improves the accuracy of the reconstructed tree. Our contribution consists of three parts: First, we present a general framework for constructing families of additive distance functions for stochastic evolutionary models. Then, we present a method for selecting (near) optimal distance functions, and we conclude by presenting simulation results which support our theoretical analysis. 1 Introduction. One of the most popular approaches to phylogenetic reconstruction is the distance based approach. This approach associates lengths to the edges of the phylogenetic tree. The additive distance between two - In SODA: ACM-SIAM Symposium on Discrete Algorithms , 2008 "... Phylogenetic reconstruction is the problem of reconstructing an evolutionary tree from sequences corresponding to leaves of that tree. A central goal in phylogenetic reconstruction is to be able to reconstruct the tree as accurately as possible from as short as possible input sequences. The sequence ..." Cited by 6 (2 self) Add to MetaCart Phylogenetic reconstruction is the problem of reconstructing an evolutionary tree from sequences corresponding to leaves of that tree. A central goal in phylogenetic reconstruction is to be able to reconstruct the tree as accurately as possible from as short as possible input sequences. The sequence length required for correct topological reconstruction depends on certain properties of the tree, such as its depth and minimal edge-weight. Fast converging reconstruction algorithms are considered state-of the-art in this sense, as they require asymptotically minimal sequence length in order to guarantee (with high probability) correct topological reconstruction of the entire tree. However, when the original phylogenetic tree contains very short edges, this minimal sequence-length is still too long for practical purposes. Short , 2007 "... In this work we consider hierarchical clustering algorithms, such as UPGMA, which follow the closest-pair joining scheme. We study optimal O(n 2)-time implementations of such algorithms which use a ‘locally closest ’ joining scheme, and specify conditions under which this relaxed joining scheme is e ..." Cited by 4 (1 self) Add to MetaCart In this work we consider hierarchical clustering algorithms, such as UPGMA, which follow the closest-pair joining scheme. We study optimal O(n 2)-time implementations of such algorithms which use a ‘locally closest ’ joining scheme, and specify conditions under which this relaxed joining scheme is equivalent to the original one (i.e. ‘globally closest’). Key Words: Hierarchical clustering, UPGMA, design of algorithms, input-output specification, computational complexity , 2007 "... This work considers the problem of reconstructing a phylogenetic tree from triplet dissimilarities, which are dissimilarities defined over taxontriplets. Triplet dissimilarities are possibly the simplest generalization of pairwise dissimilarities, and were used for phylogenetic reconstructions in th ..." Add to MetaCart This work considers the problem of reconstructing a phylogenetic tree from triplet dissimilarities, which are dissimilarities defined over taxontriplets. Triplet dissimilarities are possibly the simplest generalization of pairwise dissimilarities, and were used for phylogenetic reconstructions in the past few years. We study the hardness of finding a tree best fitting a given triplet-dissimilarity table under the ℓ ∞ norm. We show that the corresponding decision problem is NP-hard and that the corresponding optimization problem cannot be approximated in polynomial time within a constant multiplicative factor smaller than 1.4. On the positive side, we present a polynomial time constant-rate approximation algorithm for this problem. We also address the issue of best-fit under maximal distortion, which corresponds to the largest ratio between matching entries in two triplet-dissimilarity tables. We show that it is NP-hard to approximate the corresponding optimization problem within any constant multiplicative factor. 1
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=344922","timestamp":"2014-04-18T19:52:44Z","content_type":null,"content_length":"21188","record_id":"<urn:uuid:1ada4e1e-fb3e-4046-9a46-298d4982b3d5>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00061-ip-10-147-4-33.ec2.internal.warc.gz"}
Trying to find a good way to calculate Production rates February 2nd 2011, 09:18 AM Trying to find a good way to calculate Production rates I hope this is the right subforum for this. I have a quandary that I think math can solve I'm just not sure where to start. I am trying to calculate the production rates of the construction of a building. It is a large hospital and there are hundreds of workers involved. There is a main schedule with activities. Each activity has a duration (planned and actual). If the activities contained all the same components then it would be easy to find the production. That is not the case. Lets take an example. Plumbing for the first floor might be a good example. There are multiple components in this activity. One is the piping. Thats got a length. Thats easy. Then there are elbows and T's and Y's etc. I could just brush over the fittings but then my calculations will not be accurate. It takes a lot longer to install a fitting then piping of the same length. So I am looking for a way to calculate the production of activates with variables. Maybe take the first month of quantities complete and have length as one variable and the amount of connections be other variables. So a 90 deg elbow would be two connections a coupling would be 2 connections, a Y' would be 3 connections and so fourth. Then compare the next months with this first months results. I am not sure exactly how to do this. If anyone can help I would greatly appreciate it. I have tried Goggling this subject but I keep getting economic results. Also just to let you know normally this is figured with man hours involved but I'm trying to get an alternate method without man hours. Just raw production rates. Another option would be to take the quantities of each of the different variables and figure it some how with the planned schedule. Then as components are completed somehow compare it to the planned duration. Thanks for any in-site you can give on this.
{"url":"http://mathhelpforum.com/statistics/170015-trying-find-good-way-calculate-production-rates-print.html","timestamp":"2014-04-21T15:18:58Z","content_type":null,"content_length":"4915","record_id":"<urn:uuid:d12bf4e1-8aed-4a3a-9241-ef5d83be789a>","cc-path":"CC-MAIN-2014-15/segments/1397609540626.47/warc/CC-MAIN-20140416005220-00119-ip-10-147-4-33.ec2.internal.warc.gz"}
(chain rule) implicit differentition 1. April 21st 2009, 07:53 PM #1 (chain rule) implicit differentition using chain rule F(x,y)= y^5+x^2 y^3-ye^x^2-1=0 thank you again 2. April 21st 2009, 08:16 PM #2 3. April 21st 2009, 08:17 PM #3 4. April 21st 2009, 08:46 PM #4 Last edited by TheEmptySet; April 21st 2009 at 09:01 PM. Reason: missing negative sign 5. April 21st 2009, 08:48 PM #5 ty theemptyset, but can u tell me how you come about the answer?? thank you so much, i stink at math, but i like it 6. April 21st 2009, 08:49 PM #6 7. April 21st 2009, 08:56 PM #7 8. April 21st 2009, 09:00 PM #8 well i just did it seperately dF/dx= 2xy^3-2xye^x^2 dF/dy= 5y^4+3x^2y^2-e^x^2 not sure it helped, basically the same thing as TheEmptySet 9. April 21st 2009, 09:02 PM #9 Last edited by pickslides; April 21st 2009 at 09:03 PM. Reason: Typo 10. April 21st 2009, 09:02 PM #10 Oh ok. You didn't do implicit differentiation. You used partial differentiation. That is fine of course I just assumed by the thread title that you were in a Calc I or II class, not Calc III. 11. April 21st 2009, 09:03 PM #11 yea im in calc III, sry for the misunderstanding 12. April 21st 2009, 09:04 PM #12 Nice website thxs, BOOKMARKED 13. April 21st 2009, 09:09 PM #13 eeppp I need to pay more attention. I thought that is what we wanted. I will add this for completeness Suppose $y=f(x)$ and that $F(x,y)=0$ then we get $F(x,y)=0$ then by implict differentation and the chain rule we get $\frac{dF}{dx}\cdot \frac{dx}{dx}+\frac{dF}{dy}\frac{dy}{dx}=0$ $F_x+F_y\frac{dy}{dx}=0 \iff F_y\frac{dy}{dx}=-F_x \iff \frac{dy}{dx}=-\frac{F_x}{F_y}$ Similar Math Help Forum Discussions Search Tags
{"url":"http://mathhelpforum.com/calculus/84943-chain-rule-implicit-differentition.html","timestamp":"2014-04-16T06:36:43Z","content_type":null,"content_length":"70631","record_id":"<urn:uuid:253863f6-787a-49a0-a9ef-ee8260f18b92>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00313-ip-10-147-4-33.ec2.internal.warc.gz"}
Math Forum Discussions Math Forum Ask Dr. Math Internet Newsletter Teacher Exchange Search All of the Math Forum: Views expressed in these public forums are not endorsed by Drexel University or The Math Forum. Topic: AMATYC conference attachments- New challenges to math departments and need to improve math success Replies: 2 Last Post: Apr 2, 2012 1:44 PM Messages: [ Previous | Next ] Re: AMATYC conference attachments- New challenges to math departments and need to improve math success Posted: Apr 1, 2012 11:16 PM The commercial, below, seems to be misusing the heading of "AMATYC conference attachments." Also, a cursory scan of the two attachments seems to indicate that Nolting's perception of "mathematics" is symptomatic of why most American students have difficulty with curricular mathematics ... so generating an adult population over half of whom suffer from math-distress. Traditionally, American curricular mathematics instruction serves primarily to try to temporarily train students to perform as directed by teachers and whatever learning media they use ... rather than to develop students' lasting functional personal mathematical intelligence. Mathematical comprehension is NOT a matter of temporarily learning data-processing routines or verbiage. It is a matter of abstracting concepts, drawing conclusions, and developing fluencies ... all through personal exercise of mathematical common sense. I see no evidence that Nolting has the foggiest idea of what mathematical comprehension/intelligence is all about, or its crucial role in students' personal mathematical development. [I do hope to learn that my initial impression is very wrong.] From: pnolting@aol.com Sent: Sunday, April 01, 2012 6:34 PM To: Pnolting@aol.com Subject: AMATYC conference attachments- New challenges to math departments and need to improve math success Dear AMATYC Participant: (forgot the attachments) Online math courses are on the increase as well as the focus to improve math success. As well, many two and four year institutions are looking at the emporium model to organize math curriculum. One constant, whether your math department is following this trend or not, is the continued pressure to increase the success rates of math students. Now is the time to help students develop improved learning strategies and conduct research at your colleges/universities to improve math success. My interview with Dr. Hunter Boylan, in the special math issue of the Journal of Developmental Education, has intensified the urgency to help online and ground-based math students succeed in their classes, particularly those students in developmental math courses (see attachment). I suggested strategies, research topics and reasons for the large number of developmental math students being The following solutions have evolved out of my twenty years of working with colleges across the nation and working with math students at my own institution. 1. Web-Based Program for Online Math Students with an Online Math Readiness Survey and six remediation modules. The remediation modules are in Organization and Procrastination, Math Learning Skills, Math Test Anxiety and Test-Taking, Math Academic Skills, Computer Skills and Learning Styles. The Web site is now being field tested in several colleges (www.mathreadiness) and will be ready for summer. Email me to gain access to the Web site (see page 5 in attached catalog). 2. The Winning at Math student math study skills text is the only text that has research supporting the improvement of math learning and grades. It also has the How to Reduce Test Anxiety CD attached to the inside back cover and the Test Attitudes Inventory in chapter two. If you are planning to use the Winning at Math as a required text for your math course, math lab or freshman seminar/first year experience/study skills course, I will send you instructions to access to the Instructors Resource site that includes the teacher?s manual at no cost. If you want an additional copy of the text email me and I will send you one. The text also has a companion Math Study Skills Evaluation (MSSE). The MSSE is a diagnostic survey that has subtest scores and suggests chapters and pages to read in the Winning at Math. You can go to www.academicsucdcess.com for more information. 3. To help your students improve math success we also have the Mastering Math Video Series using DVDs or video streams on subjects such as note-taking, test-taking and math anxiety. Right before or after the workshops on test anxiety I demonstrate our new DVD on Managing Math and Test Anxiety using a video clip from my Web site (www.academicsuccess.com). You can go to the Web site and see these video clips and we offer a 30 day money back guarantee. If you want additional information you can email me at PNolting@aol.com or call me at (941) 951-8160. Remember that Kim and I also facilitate faculty, staff and student training. Sometimes a special training helps to kick off a new initiative! Paul Nolting, Ph.D. Date Subject Author 4/1/12 Re: AMATYC conference attachments- New challenges to math departments and need to improve math success Clyde Greeno @ MALEI 4/2/12 Re: AMATYC conference attachments- New challenges to math departments and need to improve math success Alain Schremmer
{"url":"http://mathforum.org/kb/message.jspa?messageID=7759146","timestamp":"2014-04-19T09:57:03Z","content_type":null,"content_length":"22929","record_id":"<urn:uuid:424ebb48-ac81-4f7e-9b9d-def129795a0d>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00269-ip-10-147-4-33.ec2.internal.warc.gz"}
Oaks, PA Statistics Tutor Find an Oaks, PA Statistics Tutor I am available to tutor all levels of mathematics from Algebra to graduate discrete mathematics. I taught mathematics at two major universities. In addition to the usual subjects, I am qualified to tutor actuarial math, statistics and probability, theoretical computer science, combinatorics and introductory graduate topics in discrete mathematics. 18 Subjects: including statistics, calculus, geometry, GRE ...Upon arrival in Florida, I operated the Adolescent Language Program (ALP) in the Palm Beach County School District for 3 years before moving into the Child Care arena. I am a licensed Missionary with the United Fellowship of Churches, Inc. I also currently host an Internet Radio program, "Learning to Worship Him," where I teach a half-hour bible study each week. 51 Subjects: including statistics, English, reading, geometry ...I have tutored in libraries, coffee shops and even make house calls. I am willing to meet you in any location conducive to your learning as long as it is safe for the both of us. I believe that as a tutor, I should be trying to put myself out of business. 26 Subjects: including statistics, chemistry, calculus, geometry ...There is no one-size-fits-all when it comes to education. Let's think outside the box and help you to succeed! I hold two degrees that are associated with Biology. 20 Subjects: including statistics, reading, algebra 2, biology ...My name is Kristin and I have taught middle school math for the past 6 years. I've enjoyed tutoring students in elementary and middle school for the past 10 years. I'm patient, caring, and I'd love to help you improve in school! 21 Subjects: including statistics, reading, algebra 1, SAT math Related Oaks, PA Tutors Oaks, PA Accounting Tutors Oaks, PA ACT Tutors Oaks, PA Algebra Tutors Oaks, PA Algebra 2 Tutors Oaks, PA Calculus Tutors Oaks, PA Geometry Tutors Oaks, PA Math Tutors Oaks, PA Prealgebra Tutors Oaks, PA Precalculus Tutors Oaks, PA SAT Tutors Oaks, PA SAT Math Tutors Oaks, PA Science Tutors Oaks, PA Statistics Tutors Oaks, PA Trigonometry Tutors
{"url":"http://www.purplemath.com/Oaks_PA_Statistics_tutors.php","timestamp":"2014-04-20T13:59:57Z","content_type":null,"content_length":"23855","record_id":"<urn:uuid:4702538f-0b2d-4c0d-b287-82deff771041>","cc-path":"CC-MAIN-2014-15/segments/1398223210034.18/warc/CC-MAIN-20140423032010-00626-ip-10-147-4-33.ec2.internal.warc.gz"}
The Unclosed Sum I couldn’t remember how to construct an example of a two closed subspaces of a Banach space such that their sum is not closed, so I searched online. I found a discussion of an example at physicsforum , and a paper by Schochetman, Smith, and Tsui that characterizes when the sum of two closed subspaces of a Hilbert space will be closed.
{"url":"http://www.arsmathematica.net/2009/02/27/the-unclosed-sum/","timestamp":"2014-04-16T07:15:07Z","content_type":null,"content_length":"8855","record_id":"<urn:uuid:a70157af-c544-411d-b71b-7742f8cb732a>","cc-path":"CC-MAIN-2014-15/segments/1397609521558.37/warc/CC-MAIN-20140416005201-00561-ip-10-147-4-33.ec2.internal.warc.gz"}
Re: st: Obtaining Descriptive Statistics with Multiply Imputed Data [Date Prev][Date Next][Thread Prev][Thread Next][Date Index][Thread Index] Re: st: Obtaining Descriptive Statistics with Multiply Imputed Data From Richard Williams <Richard.A.Williams.5@ND.edu> To "statalist@hsphsun2.harvard.edu" <statalist@hsphsun2.harvard.edu>, "'statalist@hsphsun2.harvard.edu'" <statalist@hsphsun2.harvard.edu> Subject Re: st: Obtaining Descriptive Statistics with Multiply Imputed Data Date Wed, 01 Jul 2009 14:45:16 -0500 At 01:18 PM 7/1/2009, Engstrom, Malitta wrote: Dear all, I'm new to Stata and hoping you can offer some assistance. I'm having a tough time finding syntax that will allow me to obtain descriptive statistics (including frequencies, means and standard deviations) from my multiply imputed data. Any assistance you can offer would be appreciated. Thank you, First off, since you are using multiple imputation you may want to upgrade to Stata 11. See http://www.stata.com/stata11/mi.html. 2nd, if no one else has any easier ideas, you can often get descriptive stats using estimation commands, e.g. mean x proportion x reg x -mean- and -proportion- and -reg- all work with the -mim- command. Richard Williams, Notre Dame Dept of Sociology OFFICE: (574)631-6668, (574)631-6463 HOME: (574)289-5227 EMAIL: Richard.A.Williams.5@ND.Edu WWW: http://www.nd.edu/~rwilliam * For searches and help try: * http://www.stata.com/help.cgi?search * http://www.stata.com/support/statalist/faq * http://www.ats.ucla.edu/stat/stata/
{"url":"http://www.stata.com/statalist/archive/2009-07/msg00033.html","timestamp":"2014-04-18T03:01:10Z","content_type":null,"content_length":"7723","record_id":"<urn:uuid:ffa55bab-e933-44f3-be17-c5790eca364a>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00616-ip-10-147-4-33.ec2.internal.warc.gz"}
psychology statistics Posted by Brain on Tuesday, February 28, 2012 at 2:07am. Submit your answers to the following questions using the ANOVA source table below. The table depicts a two-way ANOVA in which gender has two groups (male and female), marital status has three groups (married, single never married, divorced), and the means refer to happiness scores (n = 100): 1. What is/are the independent variable(s)? What is/are the dependent variable(s)? 2. What would be an appropriate null hypothesis? Alternate hypothesis? 3. What are the degrees of freedom for 1) gender, 2) marital status, 3) interaction between gender and marital status, and 4) error or within variance? 4. Calculate the mean square for 1) gender, 2) marital status, 3) interaction between gender and marital status, and 4) error or within variance. 5. Calculate the F ratio for 1) gender, 2) marital status, and 3) interaction between gender and marital status. 6. Identify the criterion Fs at alpha = .05 for 1) gender, 2) marital status, and 3) interaction between gender and marital status. 7. If alpha is set at .05, what conclusions can you make? Source Sum of Squares (degrees of freedom [df]) Mean Square Fobt. Fcrit. Gender 68.15 ? ? ? ? Marital Status 127.37 ? ? ? ? Gender * Marital Status (A x B) 41.90 ? ? ? ? Error (Within) 864.82 ? ? NA NA Total 1102.24 99 NA NA NA Please Note: The table that you see in the assignment has been slightly modified from the one presented in the module notes since it is beyond the scope of this unit to have students calculate p values. Instead you are asked to calculate the F value and compare it to the critical F value to determine whether the test is significant or not. By Monday, February 27, 2012, deliver your assignment to the M4: Assignment 2 Dropbox. Assignment 2 Grading Criteria Maximum Points Correctly identified the independent and dependent variables. Provided appropriate null and alternative hypotheses. Correctly identified the degrees of freedom for 1) gender, 2) marital status, 3) interaction between gender and marital status, and 4) error or within variance. Correctly calculated the mean square for 1) gender, 2) marital status, 3) interaction between gender and marital status, and 4) error or within variance. Correctly calculated the F ratio for 1) gender, 2) marital status, and 3) interaction between gender and marital status. Correctly identified the criterion Fs at alpha = .05 for 1) gender, 2) marital status, and 3) interaction between gender and marital status. Provided the correct conclusion based on alpha = .05. Wrote in a clear, concise, and organized manner; demonstrated ethical scholarship in accurate representation and attribution of sources; displayed accurate spelling, grammar, and punctuation. Related Questions statistics - 5. The table below shows Psychology exam scores, Statistics Exam ... statistics - 5. The table below shows Psychology exam scores, Statistics Exam ... Psychology - Need help double checking these tricky psychology questions. I put ... Psychology - Need help double checking these tricky psychology questions. I put ... CHEMISTRY (WEBWORK) - SPARTAN: Enter the numerical energy values (X, Y, & Z) on ... Chemistry(Webwork) - Enter the numerical energy values (X, Y, & Z) on the y-axis... Research and Statistics - 5. The table below shows Psychology exam scores, ... statistics - 4. Suppose you administered an anxiety test to a large sample of ... Psychology - 1. You are conducting research on sex differences in e-mails. Your ... Statistics - The Econ Dpt final has 30 questions that are multiple choice. There...
{"url":"http://www.jiskha.com/display.cgi?id=1330412844","timestamp":"2014-04-21T07:44:34Z","content_type":null,"content_length":"10883","record_id":"<urn:uuid:28a209fa-92b3-49c0-b0d8-a6bed802ad52>","cc-path":"CC-MAIN-2014-15/segments/1397609539665.16/warc/CC-MAIN-20140416005219-00362-ip-10-147-4-33.ec2.internal.warc.gz"}
Tutorial Details: Introduction to Linear Programming with AMPL Date: Tuesday, August 3, 2010, 01:00 pm - 03:00 pm Location: 575 Walter Instructor(s): Haoyu Yu, MSI Linear programming is a mathematical approach to formulating optimization problems with linear objectives and constraints that is widely used in many fields - operations research, economics, business, computer science, etc. This tutorial will provide a hands-on introduction to solving such problems with AMPL, a comprehensive, powerful and flexible algebraic modeling language for the linear, nonlinear and integer programming problems often encountered in optimization. We will describe the different options that AMPL provides for representing a variety of linear programming problems and demonstrate how to use AMPL solvers, such as the ILOG CPLEX system from IBM, to solve linear programming problems expressed in AMPL. Prerequisites: Some familiarity with the concept of optimization
{"url":"https://www.msi.umn.edu/tutorial/529","timestamp":"2014-04-19T02:55:51Z","content_type":null,"content_length":"10028","record_id":"<urn:uuid:4c11ed24-d1e7-4a75-ac1e-392edcb05c12>","cc-path":"CC-MAIN-2014-15/segments/1397609537864.21/warc/CC-MAIN-20140416005217-00014-ip-10-147-4-33.ec2.internal.warc.gz"}
Programming challenge: wildcard exclusion in cartesian products Dirk Thierbach dthierbach at usenet.arcornews.de Wed Mar 22 15:14:00 CET 2006 [Had to drop alt.comp.lang.haskell, otherwise my newsserver doesn't accept it] Dinko Tenev <dinko.tenev at gmail.com> wrote: > OK, here's a case that will make your program run in exponential time: > S = { a, b }, W = { *a*b, *b*a } -- on my machine, it starts getting > ugly as soon as n is 15 or so. Note that S^n - W = { a^n, b^n }. > In general, whenever all the patterns in the set match against the last > position, your current implementation is guaranteed to have to sift > through all of S^n. I'd say the very idea of checking against a > blacklist is fundamentally flawed, as far as performance is concerned. If more time during preprocessing is allowed, another idea is to treat the wildcard expressions as regular expressions, convert each into a finite state machine, construct the "intersection" of all these state machines, minimize it and then swap final and non-final states. Then you can use the resulting automaton to efficiently enumerate S^n - W. In the above case, the resulting FSM would have just three states. And it doesn't really matter what language you use to implement this algorithm, it's the idea that counts. Notation aside, all implementations will be quite similar. - Dirk More information about the Python-list mailing list
{"url":"https://mail.python.org/pipermail/python-list/2006-March/381995.html","timestamp":"2014-04-17T22:32:06Z","content_type":null,"content_length":"4074","record_id":"<urn:uuid:234f09c6-ffe1-4173-b4a2-d18dc387d3c5>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00605-ip-10-147-4-33.ec2.internal.warc.gz"}
An introduction to fibrations, topos theory, the effective topos and modest sets An introduction to fibrations, topos theory, the effective topos and modest sets Wesley Phoa Abstract: A topos is a categorical model of constructive set theory. In particular, the effective topos is the categorical `universe' of recursive mathematics. Among its objects are the modest sets, which form a set-theoretic model for polymorphism. More precisely, there is a fibration of modest sets which satisfies suitable categorical completeness properties, that make it a model for various polymorphic type theories. These lecture notes provide a reasonably thorough introduction to this body of material, aimed at theoretical computer scientists rather than topos theorists. Chapter 2 is an outline of the theory of fibrations, and sketches how they can be used to model various typed lambda-calculi. Chapter 3 is an exposition of some basic topos theory, and explains why a topos can be regarded as a model of set theory. Chapter 4 discusses the classical PER model for polymorphism, and shows how it `lives inside' a particular topos - the effective topos - as the category of modest sets. An appendix contains a full presentation of the internal language of a topos, and a map of the effective topos. Chapters 2 and 3 provide a sampler of categorical type theory and categorical logic, and should be of more general interest than Chapter 4. They can be read more or less independently of each other; a connection is made at the end of Chapter 3. The main prerequisite for reading these notes is some basic category theory: limits and colimits, functors and natural transformations, adjoints, cartesian closed categories. No knowledge of indexed categories or categorical logic is needed. Some familiarity with `ordinary' logic and typed lambda-calculus is assumed. LFCS report ECS-LFCS-92-208 This is a 150-page report which is available as a 1Mb PostScript file or as a 315kb gzipped PostScript file.
{"url":"http://www.lfcs.inf.ed.ac.uk/reports/92/ECS-LFCS-92-208/index.html","timestamp":"2014-04-21T12:07:59Z","content_type":null,"content_length":"5911","record_id":"<urn:uuid:3605ea7e-c122-487e-865a-fe91f44841e1>","cc-path":"CC-MAIN-2014-15/segments/1397609539776.45/warc/CC-MAIN-20140416005219-00080-ip-10-147-4-33.ec2.internal.warc.gz"}
Calculating convertable energy Oh OK... You want to know the power in the movement of the water given the water speed (v=6.6m/s) and the volume flow rate (dV/dt = 5gpm). Numbers off post #1. I'll walk you through the stages - the trick is to keep in mind what everything means so it's not just an abstract calculation. The formula is: ##\dot{m}## is the mass flow rate and ##v## is the speed. You already have the volume flow rate (##\dot{V}##) in gpm, so we need to use ##\dot{m}=\rho\dot{V}## to get the mass flow rate. You want it in SI units. Looking up the conversion factor tells me: /s = 15852gpm So what is the volume flow rate in cubic-meters per second? That is convenient units since the density of water (##\rho##) is 1000kg/m ... so what is the mass flow rate in kilograms per second? You've already done the speed calculation so v=6.6m/s So now you can find the power.
{"url":"http://www.physicsforums.com/showthread.php?t=673512","timestamp":"2014-04-21T04:44:00Z","content_type":null,"content_length":"45380","record_id":"<urn:uuid:a86dc33f-9187-48c6-bd1e-0b0a849969e3>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00640-ip-10-147-4-33.ec2.internal.warc.gz"}
Chapter 15 - The Laws of Thermodynamics Thermodynamics describes the processes whereby energy is transferred as heat and work, e.g., in steam engines, electric power generating plants, energy is transferred from coal (heat as it is burned) to electrical energy which can do useful work. First Law of Thermodynamics DU = Q - W The change in internal energy, DU is equal to the heat added minus work done by the system. (Q is positive when heat is added to the system, and work, W, is positive when work is done by the system; Otherwise they are negative.) The first law is a statement of energy conservation. Isothermal process means temperature, T, is constant. Since internal energy is proportional to temperature, the change in internal energy, DU, for an isothermal process is zero. Thus Q = W. Adiabatic process means no heat is added or given off, i.e., Q = 0. Work done by simple system e.g., a piston undergoes expansion or contraction. Work W = PDV where P is constant and DV = V[f] - V[i] is change in volume. This is an isobaric process since the pressure is constant. Second Law of Thermodynamics - Can be stated in many different forms. 1. Heat flows naturally from a hot object to a cold object and not vice versa. 2. It is impossible to get only work from heat. 3. The entropy of a system and its surrounding always increases as a result of natural process. Efficiency e = W/Q[H] where Q[H] is heat that flows from the hot reservoir. Q[L] = Q[H] - W is heat discharged to the cold (or low temperature) reservoir. (Efficiency is what you get divided by what you put in.) Thus e = 1 - Q[L]/Q[H] Carnot cycle (ideal or maximum) efficiency e[ideal] = 1 - T[L]/T[H] where T[L] and T[H] are the temperatures of the cold and hot reservoirs respectively. The Carnot efficiency describes the maximum efficiency possible. It is an ideal system. No real systems can have greater than this efficiency. It is always less. Refrigerators, air conditioners and heat pumps Coefficient of performance - is a measure of how efficient the refrigerator is. The higher the number, the better the performance. For an ideal system, Where T[L] is the inside temperature and T[H] is the outside temperature. Change in entropy DS = Q/T where Q is the heat added or removed and T is the temperature in Kelvin. Entropy is a measure of order or disorder of the system. Restatement of second law of thermodynamics - Natural processes tend to move toward a state of greater disorder. (Just look at the mess in your room!) Homework solutions 3. Sketch a PV diagram. Before you can sketch the PV diagram you have to find all the P and V values. State 1: V[1] = 2.0 L, P[1] = 1 atm State 2: V[2] = 1.0 L, P[2] = 1 atm State 3: Isothermal expansion V[3] = 2.0 L, by ideal gas law P[3] = (P[2]V[2])/V[3] = 0.5 atm State 4 same as state 1: P[4] = P[1] and V[4] = V[1] = 2.0 L 9. Two-step process as shown on the figure below. Heat loss going from a to b at constant volume, and heat added going from b to c where the temperature at c is the same as in a. Calculate, (a) total work, change in internal energy of the gas, and (c) total heat flow into or out of the gas. (Conversion factors 1 atm = 1.0135´10^5 N/m^2, and 1 L = 10^-3 m^3.) a. Going from a to b, the work done is zero since W = DV and the volume is constant. However, going from b to c the work done is W[bc] = PDV[bc] = (1.5 atm)(1.0135´10^5 N/m^2×atm)(10.0 L - 6.8 L)´10^-3 m^3/L = 486 J. Total work W = 486 J. b. The change in internal energy is given by DU = ^3/[2]nR DT = ^3/[2]nR(T[c] - T[a]) But given T[c] = T[a]. Thus DU = 0. (This does not mean that the change in internal energy along ab or along bc is zero. One is the negative of the other.) c. The heat flow can be derived from the first law of thermodynamics. Since DU = 0, Q = W = 486 J. 14. Calculate the metabolic rate for a 24-h activity. │Activity │ Calorie count│Total │ │Sleeping for 8 h │ 60 kcal/h´8 h =│480 kcal │ │Desk for 8 h │ 100 kcal/h´8 h =│800 kcal │ │Light work for 4 h │ 200 kcal/h´4 h =│800 kcal │ │TV for 2 h │ 100 kcal/h´2 h =│200 kcal │ │Tennis for 1.5 h │ 400 kcal/h´1.5 h =│600 kcal │ │Run for 0.5 h │ 1000 kcal/h´05 h =│500 kcal │ │ │ Total =│3380 kcal│ Thus the metabolic rate for a 24 h period is 3380 kcal. 17. Engine does 7200 J of work in each cycle while absorbing 12.0 kcal from a high temperature reservoir. What is the efficiency? Q[H] = 12.0 kcal = (12.0´10^3)(4.186 J) Efficiency i.e., output over energy put in = 7200 J/50,232 = 0.143 = 14.3%. 20. Nuclear power plant operates at 75% of its maximum theoretical efficiency between T[H] = 600°C and T[L] = 350°C. Plant produces electric energy at 1.1 GW (i.e., 10^9 W), how much exhaust heat is discharge per hour? T[H] = 600°C + 273 = 873 K T[L] = 350°C + 272 = 623 K (Temperature must be in Kelvin) Maximum efficiency for Carnot cycle is However, the plant is operating at 75% of the maximum possible efficiency. This gives the operating efficiency to be Basic definition of efficiency e = W/Q[H] or Q[H] = W/e = 1.1´10^9 J/0.215 = 5.12´10^9 J Q[H] is the total heat input of which 1.1´10^9 J comes out as work. Thus Q[H] = 5.12´10^9 J - 1.1´10^9 J = 4.01´10^9 J is the amount of heat discharged per second. In 1 h total heat discharge = (4.02´10^9 J)(3600 s/h) 23. Heat engine utilizes a heat source at 550°C with an ideal efficiency of 30%. Find new high temperature if the efficiency is to be increased to 40%. For an ideal Carnot engine, the efficiency is For a 30% efficient engine, 0.3 = 1 - T[L]/(273 + 550)K Solving gives T[L] = 576 K Now for a 40% efficient engine, 0.4 = 1 - 576 K/T[H] The new high temperature is T[H] = 960 K = 687°C. Note: To use any of the formula in this chapter the temperature must be in Kelvin. 28. A restaurant refrigerator has coefficient of performance, COP = 5.0. (COP is a measure of how efficient the refrigerator is. The higher the number, the more efficient it is.) For refrigerators, Solving gives T[L] = 252 K = -21°C. 33. 1 kilogram of H[2]0 heated from 0°C to 100°C. Estimate the change in entropy of H[2]0. From 0°C to 100°C Heat Q = mcT = (1 kg)(4186 J/kg×C°)(100°C - 0°C) = 4.186´10^5 J. Entropy change DS = Q/T Since temperature goes from 0°C to 100°C, take average, T[avg] = 50°C = 50°C + 273K = 323 K 38. A 5.0 kg piece of Al at 30°C placed in 1.0 kg of water in a Styrofoam container at room temperature (20°C = 293 K). Calculate the approximate net change in entropy of the system. Need the equilibrium temperature. Heat lost by Al = heat gained by water m[Al]c[Al]DT[Al] = m[w]c[w]DT[w] (5.0 kg)(900 J/kg×C°)(30°C - T[c]) = (1.0 kg)(4186 J/kg×C°)(T[c] - 20°C) Solving gives T[c] = 25.24°C. Thus heat lost by Al, Q[Al] = m[Al]c[Al]DT[Al] = 2.142´10^4 J. Average temperature of aluminum = (30 + 25.24)/2 = 27.629°C = 300.62 K. Hence entropy change for aluminum, Average temperature of water is (20 + 25.24)/2 = 22.62°C = 295.62 K And entropy change for water, Hence net entropy change is DS[net] = DS[Al] + DS[w] The net entropy change is greater than or equal to 0 as required by the 2^nd law of thermodynamics. Access counter: This page last updated on September 19, 1997. © 1996 Dr. H. K. Ng. All Rights Reserved.
{"url":"http://www.physics.fsu.edu/users/ng/Courses/phy2053c/HW/Ch15/ch15.htm","timestamp":"2014-04-19T01:48:47Z","content_type":null,"content_length":"14826","record_id":"<urn:uuid:b79f46c9-1253-4f13-90ae-b09640bf043c>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00339-ip-10-147-4-33.ec2.internal.warc.gz"}
Oracle, Relativization, and P vs NP, [Philosophical] up vote 4 down vote favorite I can understand why $P^A = NP^A$ does not imply $P=NP$, $A$ can "contain" the powers of NP. However, why does $P^B \neq NP^B$ not imply $P \neq NP$? It seems like if $P$ and $NP$ denote the same classes; then we should be able to arbitrarily substitute one for the other (as long as the only thing of interest is the computational model), and everything should stay the same. More generally, why is $A^X \neq B^X$ not a poof for $A \neq B$ ? I feel like there's a very fundamental piece of logic / reasoning I'm missing here. EDIT: I understand the construction of the oracles A & B. However, I still don't understand why the existence of $B$ not prove that $P \neq NP$. 1 An analogy is given at terrytao.wordpress.com/2009/08/01/… – Colin McQuillan Mar 9 '11 at 15:59 2 There is probably no function $f$ such that $f(A,X)=A^X$. – Bjørn Kjos-Hanssen Mar 9 '11 at 16:07 2 What's philosophical about the question? It appears to me as a technical question with a very definite answer. – Thierry Zell Mar 9 '11 at 19:38 add comment 5 Answers active oldest votes The notation is deceptive. $P^A$ is not something constructed from objects $P$ and $A$, but rather something analogous to $P$. In fact, $P$ is a special case of $P^A$, namely $P=P^\ varnothing$. The same holds, mutatis mutandis, for $NP$. Removing the contrapositive for extra clarity, the proper way of stating your question is thus: Why $P^\varnothing=NP^\varnothing$ does not imply $P^A=NP^A$ for every $A$? up vote 11 Then it should be clear that there is no reason for this to hold, just like, say, $x^0=y^0$ does not imply $x^a=y^a$ for real $x,y,a$. down vote accepted EDIT: I'm not sure I should mention this, as it will probably just add to the confusion. However, under a proper notion of relativization, the implication $C_1^A\ne C_2^A\Rightarrow C_1 \ne C_2$ does actually hold for “small classes” $C_1,C_2$, such as $AC^0[m]$, $TC^0$, $NC^1$, $L$, $NL$. See this paper by Aehlig, Cook, and Nguyen. The main reason which makes it work is that for all these classes, the depth of dependence of the oracle queries on each other is constant, hence any relativized function can be written as a finite composition of unrelativized functions and parallel oracle calls. add comment In the case you are interested, adding an oracle for a language $B$ to a machine means that such machine is able to do some non-trivial computation at cost of one atomic operation (ignoring the resources needed to prepare the query to the oracle). This make such machine more powerful (maybe not strictly ...). up vote 4 It is quite possible that a polynomial time non-deterministic machine $N$ can harness such power in ways that a mere deterministic machine $D$ cannot. That would mean that in presence of down vote such oracle for $B$, the computational speed-up for $N$ is larger than the one for $D$. add comment $P^A$ and $NP^A$ are shorthand notation to be used with care. What we mean by $\mathcal C^A$ depends heavily on the syntactical definition of the class $\mathcal C$ and is not well-defined if it's not clear which syntactical definition of $\mathcal C$ we are referring to. $\mathcal C^A$ is more an invariant of $\mathcal C$-machines than of the set of languages decided by $\ mathcal C$-machines. up vote 2 While $P=NP$ means that $P$-machines can decide exactly the same languages as $NP$-machines, $P^B\neq NP^B$ for some oracle $B$ as proven by Baker, Gill, and Solovay (1975) means that down vote $P$-machines with additional access to the oracle $B$ can decide strictly fewer languages than $NP$-machines with access to $B$. Baker, Gill, and Solovay also give an oracle $A$ with respect to which these two types of machines can decide exactly the same languages. add comment I have wondered this before and the best answer I have heard is this: It is possible that while $P=NP$, $NP$ can get far more use out of an oracle than $P$ can. In essence, this is what happens in the construction of $B$ in your original question. We make use of the fact that an $NP$ algorithm can query exponentially many values of the oracle in only polynomial time, up vote 1 something that a $P$ algorithm could never do, even if $P=NP$. down vote add comment P = NP does not imply P^A = NP^A for every set A because for some A, NP^A allows more schemes of solutions than P^A does (via non-determinism). In particular, it is known for some A and B that P^A = NP^A and P^B not = NP^B. The reason for confusion here may result from the fact that the notation NP^A is misleading (or outright illogical); it would be less confusing if N(P^A) were used, instead. Then it would be more clear why P = NP does not necessarily imply P^A = N(P^A). Or, if one would like to really dot the "i", P(TM) should be used in lieu of P, P(NTM) in lieu of NP, P(TM^A) in lieu of P^A, and P(NTM^A) in lieu of NP^A, with PM meaning Turing machines and NTM - non-deterministic Turing machines, in obvious sense. up vote 0 Then it would become clear that any of the two axioms P(TM) = P(NTM) or P(TM) not = P(NTM) does not logically entail P(TM^A) = P(NTM^A) or P(TM^A) not = P(NTM^A) simply because the down vote classes TM and NTM are not equal (despite the fact that they have the same "computational power") and, trivially, P(X) = P(Y) for some X, Y not equal to each other. The above is true under assertion that P, NP and similar (derivative) sets are fairly arbitrary (except, perhaps, P \subseteq NP, etc.). Otherwise, if P = NP is true then P non = NP (vacuously) proves everything, and so does P = NP if P non = NP is true. Marek Suchenek add comment Not the answer you're looking for? Browse other questions tagged computational-complexity or ask your own question.
{"url":"http://mathoverflow.net/questions/57965/oracle-relativization-and-p-vs-np-philosophical","timestamp":"2014-04-18T05:43:32Z","content_type":null,"content_length":"72607","record_id":"<urn:uuid:f0dd15a5-acc7-4d26-8022-89413cffe388>","cc-path":"CC-MAIN-2014-15/segments/1397609539776.45/warc/CC-MAIN-20140416005219-00548-ip-10-147-4-33.ec2.internal.warc.gz"}
Efficient method to find collision free random numbers up vote 15 down vote favorite I have a users table, the user ID is public. But I want to obfuscate the number of registered user and trends of the project, so I don't want to have public incrementing IDs. When a new user is created I want to find a random integer number that is greater than a certain number and that is not yet in the database. Naive code: $found = false; while(!$found) { $uid = rand(1000000000,4294967295) // find random number betwen minimum and maximum // check if user id is in use, and if not insert it if($dbh->query("SELECT * FROM users WHERE uid = $uid")) { $dbh->exec("INSERT INTO users (uid) VALUES ($uid)"); $found = true; // we just got our new uid ... This will work it however may become inefficient. True that there is a big range and the probability of hitting an unused uid is high. But what if I want to use a smaller range, because I don't want to have so long userids? Example of my concerns: • 60% of all user ids are in use • the chance of hitting an unused uid are 0.4 • the first attempt has 0.4% success rate • if 1st not successful the second attempt has 0.6*0.4 probability • so with a maximum of two tries i have 0.4 + 0.6*0.4 proability (is that right??) So one method to optimize is that came to my mind is the following: • find a random number, check if its free, if not, increment it by 1 and try again and so on • if the maximum number is hit, continue with the minimum number That should give me a number with a maximum runtime of O(range) That sounds pretty bad but I think it is not, because I submit random numbers to the database and that they are all at the beginnig is very unlikely. So how good/bad is it really? I think this would work just fine but I want it BETTER So what about this? • find a random number • query the database for how many numbers are occupied in the range whole range, starting from that number (this first step is trivial...) • if there are numbers occupied in that range, divide the range by half and try again. starting with the initial number • if there are numbers occupied divide the range by half and try again. starting with the initial number If I am thinking correctly this will give ma a number with a maximum of O(log(range)) time. That is pretty satisfying because log() is pretty good. However I think this method will often be as bad as possible. Because with our random numbers we will probably always hit numbers in the large So at the beginning our pure random method is probably better. So what about having a limit like this • select current number of used numbers • is it greater than X, logarithmic range approach • if it is not, use pure random method What would X be and why? So final question: This is pretty easy and pretty complicated at the same time. I think this is a standard problem because lots and lots of system use random ids (support tickets etc), so I cannot imagine I am the first one to stumble across this. How would you solve this? Any input is appriciated! Is there maby an existing class / procedure for this I can use? Or maby some database functions that I can use? I would like to do it in PHP/Mysql I just thought about the range/logarithmic solution. It seems to be complete bullshit sorry for my wording because: • what if i hit an occupied number at start? Then I am dividing my range so long if it is only 1. And even then the number is occoupied. So its completely the same as the pure random method from start, only worse.... I am a bit embarassed I made this up but I will leave it in because I think its a good example of overcomplicated thinknig! php mysql random primary-key 3 My advice that doesn't answer the question necessarily is don't make the user id public if it's being used a key. Why do users need to know this? Why can't they just know username if they need to identify a user. You are opening up an implementation detail which is bad architecturally and also a potential security risk. – James Gaunt Jul 24 '11 at 17:53 You can assume for the problem that the public exporuse of the ID is no security risk. The reason behind this is that there is no name. Or better said, I want it to be changeable by the user, but keep a unique identifier. The user is also not required to choose a username, and I don't want to display him with a huge random GUID. FYI: Stackoverflow does knid of the same. Try opening a new account with no username, there will be somethin like user12345 assigend as your nickname. Apart form that, I am interested in the problem itself, but thanks for your input. That was important 2b said. +1 – The Surrican Jul 24 '11 at 17:56 Why not do something similar to stackoverflow and assign some sort of username in the same format to them then if that is the functionality that you are looking for? – Mike Jul 24 '11 at 18:08 2 BTW, if it's 0.4 chance of success each time, then for n attempts the chance of success is (1 - 0.6^n) which gets close to 1 pretty damn quick. – James Gaunt Jul 24 '11 at 18:10 I would probably use UUID v4 in your situation. See stackoverflow.com/questions/2040240/… – Finbarr Jul 24 '11 at 18:11 add comment 8 Answers active oldest votes If p is the proportion of ids in use, your "naive" solution will, on average, require 1/(1-p) attempts to find an unused id. (See Exponential distribution). In the case of 60% occupancy, that is a mere 1/0.4 = 2.5 queries ... Your "improved" solution requires about log(n) database calls, where n is the number of ids in use. That is quite a bit more than the "naive" solution. Also, your improved solution is incomplete (for instance, it does not handle the case where all number in a subrange are taken, and does not elaborate with subrange you recurse into) and is more complex to up vote 10 down implement to boot. vote accepted Finally, note that your implementation will only be thread safe if the database provides very strict transaction isolation, which scales poorly, and might not be the default behaviour of your database system. If that turns out to be a problem, you could speculatively insert with a random id, and retry in the event of a constraint violation. But I always have to keep an eye on that the range does not get filled up? Because if 99% are in use it would take on average 100 queries - correct? – The Surrican Jul 24 '11 at 4 If 99% of your user id space is in use, you (will soon) have bigger problems than slow performance. – Ilmari Karonen Jul 24 '11 at 22:07 add comment If you don't want to test for used numbers, you can create a function that calculates a random looking id $id_k based on the automatically incrementing id from the database $id: $id_k = transpose($id); That function has either a reverse cousin or is able to transpose transparently back (ideally): up vote 2 $id = transpose($id_k); down vote You can then used the transposed IDs on your site. Another idea that comes into my mind is that you precalculate the random id's per incrementing ids so you can control the use of the database better. the problem is that this 'security' is based on the knowledge of function transpose(). Once somebody knows the algorithm, he can easily guess the ids. – TMS Jul 24 '11 at 18:11 Sure, OP has asked for obfuscation only specifically. Even with "true" random keys in your database, someone who knows the database data, knows the ids as well. – hakre Jul 24 '11 at Someone who knows database data knows everything :-) This is complete different security break than knowning the algorithm. – TMS Jul 24 '11 at 20:24 2 Actually, this can work, if transpose is good enough... but good enough here basically means a cryptographically secure (adjustable size) block cipher with a secret key, which is probably a rather heavy tool to use for a problem like this. – Ilmari Karonen Jul 24 '11 at 22:29 add comment How about you compose the number from something that won't clash and a random number in some small range. up vote 2 down vote The ddddd being for example the number of days your system has been live, the rrr is a random number chosen each day, the n is the increment within day. Given any one number a person not knowing rrr for the day cannot deduce how many users have been created on a given day. that, sir, is a pretty awesome idea. – The Surrican Jul 24 '11 at 18:15 add comment Joe, just implement your algorithm as above with no worry. Just look: if the probability of hitting used id is p = 0.6, then the probability that you hit used id N consecutive times is p^ N. This goes down exponentially! I'd recommend to set the ID density lower, e.g. to p = 0.1. Then the probability that you don't suceed for 10 consecutive attempts is p^10 = 0.1^10 = 1e-10!!! Absolutely negligible. up vote 2 down vote Don't worry of the collisions and go for your solution. I like this solution the best. If Joe can't let go of the worry of collisions, then Joe could precompute a long list of unique random numbers and hand them out as needed. – emory Jul 24 '11 at 20:41 :-)) good idea :) – TMS Jul 24 '11 at 20:48 add comment How about when you start the application, pick a random range of 100 numbers (e.g. 100 - 199, 1000 - 1099, 5400 - 5499), check the first one, if it's not in the database we know (based on this algorithm) that all 100 are free. Store the start of this range in memory. Then just allocate these until you run out (or your application recycles) and then pick another random range. So you only need to go to the database once every 100 users. up vote 1 down vote This is similar to the Nhibernate hi/lo approach (except for the random bit). Obviously tweak to 100 depending on the rate at which you allocate ids compared to the typical life span of the application in memory. But this way once you see one id the other ids could be easily guessed... moreover, the i/o issues are low level, let's focus on the algorithm. BTW, AFAIK in PHP the lifetime of the 'app' is one http query. – TMS Jul 24 '11 at 18:18 add comment You could simply use any hash shuffle algorithm to generate new Id value by the known number of users (holding of this value is a usual practice). This approach could be better than your up vote 1 current solution because the corresponding algorithm will likely generate less collisions. The key point is to choose algorithm with appropriate strength and distribution uniformity. down vote I don't quite understand by what you mean with 'You could simply use any hash shuffle algorithm to generate new Id value by the known number of users (holding of this value is a usual practice)' - could you explain it further please? – TMS Jul 24 '11 at 18:13 @Tomas, I mean that hash function could be more efficient for this purpose than the random one. For example, you can get ids with the following formula id = (user_num*777+ (user_num>>4) *3) mod 111111. This one is pretty simple, but it could be more complicated. – tyz Jul 24 '11 at 18:23 OK, normal hash. But then I don't see how is that more effective than the proposed solution. You have to check for collisions also and moreover, you have to store this new id also. I think it is much easier to generate a random number with much more assured result regarding the distribution uniformity. – TMS Jul 24 '11 at 20:34 @Tomas. Lets assume that you want to generate ids in range from 10 to 20 for 10 users. Hash function for id generation 10 + user_num*7 mod 10 will never lead to collision. If you use the random function with the same input load the probability of collision on the last step is 0.9. Of course this is a toy example, but it illustrates the idea. – tyz Jul 24 '11 at add comment To expand on on meriton's and Tomas Telensky's answers, if you want to keep your user IDs short while ensuring that you don't run out of them, you could select each ID randomly from, say, the range 1 to 10*n+1000, where n is the current number of users you have. That way, your effective user ID space will never be more than 10% full, while the user IDs will (in the long run) be only about one digit longer than if you'd assigned them sequentially. up vote 1 The down side, of course, is that the IDs will no longer be completely uncorrelated with registration order: if someone has the ID 5851, you know they must be at least the 486th registered down vote user and that they're unlikely to be, say, the 50000th. (Of course, you introduce the same kind of correlations if you ever manually adjust the range to accommodate more users.) Of course you can adjust the constants 10 and 1000 above: the bigger they are, the longer and more random your user IDs will be. add comment You could use symmetric-key encryption (e.g. AES) to encrypt a counter. If you use the entire output (128-bits for AES) then you're guaranteed no collisions, and it's a reversible up vote 1 down 128 bits might be more than you want to deal with though—it's a 32-digit hex number, 39-digit decimal number. You could use a 64-bit encryption algorithm such as DES, Blowfish or vote Misty (16-digit hex number, 20-digit decimal number). add comment Not the answer you're looking for? Browse other questions tagged php mysql random primary-key or ask your own question.
{"url":"http://stackoverflow.com/questions/6808537/efficient-method-to-find-collision-free-random-numbers","timestamp":"2014-04-17T22:03:34Z","content_type":null,"content_length":"118869","record_id":"<urn:uuid:206adf97-47c3-4d67-b626-25071a504efc>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00105-ip-10-147-4-33.ec2.internal.warc.gz"}
IGEM:IMPERIAL/2006/project/Oscillator/project browser/Test Sensing Predator Construct/Modelling From OpenWetWare │Super Parts│ Predator Construct │ │Actual Part│ │ │ Sub Parts │<bbpart>R0062</bbpart>│<bbpart>B0034</bbpart>│<bbpart>C0062</bbpart>│<bbpart>I13504</bbpart> │ Model assumptions and relevance • General assumptions on gene expression modelling: □ Quasi-steady state hypothesis on mRNA expression. □ Gene activation can be approximated by Hill equations. • Assumptions linked to the quorum sensing: □ As a first approximation, we assume that luxR and AHL molecules form a heterodimer (even if it has been found that the complex formed is more complicated). □ The degradation rate of luxR and aiiA is mainly due to the growth dilution. Model description of the oscillator Full derivation of the above equations. Model description of the growth of the predator • mathematical description of the predator growth and death: □ $\frac{d[luxR]}{dt} = \frac{c * [AHL] * [luxR]}{(c0 + [AHL] * [luxR])} - gd * [luxR]$ □ $\frac{d[aiiA]}{dt} = \frac{c * [AHL] * [luxR]}{(c0 + [AHL] * [luxR])} - gd * [aiiA]$ • insert a graphical representation if possible (e.g. CellDesigner display) • link to SBML file or matlab. Model variables and parameters for the growth of the predator (list all the variables and parameters of the model in a table, specifying if their values are known, unknown, to be measured.) │Variables │ │Name│ Description │Initial Value│ Confidence │Reference│ │AHL │homoserine lactone acting as the prey-molecule │0 │depends how good is the control of the prey positive feedback │links │ │luxR│molecule acting as the sensing module for the predator │0 │to be measured as we might have to deal with some leakage of the promoter│links │ │aiiA│molecule acting as the killing module of the prey for the predator│0 │to be measured as we might have to deal with some leakage of the promoter│links │ │Parameters │ │Name│ Description │ Value │ Confidence │Reference│ │c │maximum synthesis rate of the pLux promoter │to be characterized│to be measured│links │ │c0 │dissociation constant according to Hill eq │to be characterized│to be measured│links │ │gd │growth dilution │to be characterized│to be measured│links │ • Parameters c & c0 for characterization have to be extracted. • gd[luxR] and gd[aiiA] can be controlled by the chemostat.
{"url":"http://www.openwetware.org/wiki/IGEM:IMPERIAL/2006/project/Oscillator/project_browser/Test_Sensing_Predator_Construct/Modelling","timestamp":"2014-04-18T14:55:21Z","content_type":null,"content_length":"21975","record_id":"<urn:uuid:eaceb5aa-32bd-43a2-8dbd-2aa32a9c1287>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00422-ip-10-147-4-33.ec2.internal.warc.gz"}
Simple problem February 23rd 2009, 11:48 AM #1 Oct 2008 Simple problem I haven't taken any statistics course but I need to solve this problem: There are two guys in two different rooms, each of them has an exam with four options, let's say, a,b,c,d What's the probability of both getting the same letter? Person is room A chooses say choice d. How many choices does person in room B have? How many ways will B's choice match person A? Person is room A chooses say choice d. How many choices does person in room B have? The person in room B has 4 options How many ways will B's choice match person A? There is only 1 way for B to match person A that means 25% probabilities? 1/4, please! Same question: You roll two four-sided dice. What's the probability you get the same number of spots on each .... February 23rd 2009, 12:15 PM #2 February 23rd 2009, 07:05 PM #3 Oct 2008 February 23rd 2009, 07:25 PM #4
{"url":"http://mathhelpforum.com/statistics/75346-simple-problem.html","timestamp":"2014-04-17T16:33:55Z","content_type":null,"content_length":"40064","record_id":"<urn:uuid:006734cc-78f4-4e45-9e62-7812eb873012>","cc-path":"CC-MAIN-2014-15/segments/1397609530136.5/warc/CC-MAIN-20140416005210-00088-ip-10-147-4-33.ec2.internal.warc.gz"}
"Discrete Mathematics in the Schools" DIMACS Series in Discrete Mathematics and Theoretical Computer Science VOLUME Thirty Six TITLE: "Discrete Mathematics in the Schools" EDITORS: Joseph G. Rosenstein, Deborah S. Franzblau and Fred S. Roberts. Published by the American Mathematical Society and the National Council of Teachers of Mathematics Ordering Information This volume may be obtained from the AMS or NCTM or through bookstores in your area. To order through AMS contact the AMS Customer Services Department, P.O. Box 6248, Providence, Rhode Island 02940-6248 USA. For Visa, Mastercard, Discover, and American Express orders call You may also visit the AMS Bookstore and order directly from there. DIMACS does not distribute or sell these books. To order through NCTM contact National Council of Teachers of Mathematics, P.O. Box 25405, Richmond VA 23260-5405 or 703/620-9840. Orders may be placed at 800/235-7566 or orders@nctm.org. Discrete mathematics can and should be taught in K-12 classrooms. This volume, a collection of articles by experienced educators, explains why and how, including evidence for ``why'' and practical guidance on ``how''. It also discusses how discrete mathematics can be used as a vehicle for achieving the broader goals of the major effort now underway to improve mathematics education. This volume is intended for several different audiences. Teachers at all grade levels will find here a great deal of valuable material that will help them introduce discrete mathematics in their classrooms, as well as examples of innovative teaching techniques. School and district curriculum leaders will find articles that address their questions of whether and how discrete mathematics can be introduced into their curricula. College faculty will find ideas and topics that can be incorporated into a variety of courses, including mathematics courses for prospective teachers. A description of the organization of this volume and an annotated summary of the articles it contains can be found in the Overview and Abstracts. This volume developed from a conference that took place at Rutgers University on October 2-4, 1992. The conference, entitled ``Discrete Mathematics in the Schools: How Do We Make an Impact?'' was attended by 33 people, from high schools and colleges, who had played leadership roles in introducing discrete mathematics at precollege levels.^1 The conference was sponsored by the Center for Discrete Mathematics and Theoretical Computer Science (DIMACS)^2 and funded by the National Science Foundation (NSF). The invitation to the conference noted that ``Although primarily a research center, DIMACS is committed to educational programs involving discrete mathematics... as discrete mathematics activities at K-12 levels increase, it is appropriate for a national center in discrete mathematics to bring together those associated with such activities for an opportunity to reflect on how all of our activities can make an impact on mathematics education nationally.'' The rationale for the conference is further described in the Introduction, and the Vision Statement concerning discrete mathematics in the schools that emerged from the conference appears directly after this Preface. This volume was originally conceived as the proceedings of the conference. However, as we began receiving and reviewing articles, we realized that an expanded and more comprehensive book would have greater value and impact. Accordingly, we solicited additional articles from appropriate authors; approximately two-thirds of the articles are based on conference presentations, and the remainder were written independently. All of the authors received comments and suggestions from both anonymous referees and the editors, and revised their articles accordingly; this lengthened considerably the time to produce the volume, but greatly enhanced its quality. The editors wish to thank the authors for their cooperation and patience, as well as for their contributions. We also thank the referees for their assistance, Reuben Settergren for many hours spent in editorial work, typesetting, and creating figures, Pat Pravato for her able secretarial help, and NSF for a supplementary grant that enabled us to complete the volume. Compiling a volume like this, involving 34 articles from different authors, is not an easy task, and we are quite pleased that this task has now been completed. Joseph G. Rosenstein Deborah S. Franzblau Fred S. Roberts 1. A list of conference participants and an abbreviated conference program appear as appendices to the Introduction. 2. DIMACS is an NSF-funded Science and Technology Center which was founded in 1989 as a consortium of Rutgers and Princeton Universities, AT&T Bell Laboratories, and Bellcore (Bell Communications Research). With the reorganization of AT&T Bell Laboratories in 1996, it was replaced in the DIMACS consortium by AT&T Labs and Bell Labs (part of Lucent Technologies). DIMACS is also funded by the New Jersey Commission on Science and Technology, its partner organizations, and numerous other agencies. Discrete Mathematics in the Schools: An Opportunity to Revitalize School Mathematics Joseph G. Rosenstein Section 1. The Value of Discrete Mathematics: Views from the Classroom The Impact of Discrete Mathematics in My Classroom Bro. Patrick Carney Three for the Money: An Hour in the Classroom Nancy Casey Fibonacci Reflections: It's Elementary! Janice C. Kowalczyk Using Discrete Mathematics to Give Remedial Students a Second Chance Susan H. Picker What We've Got Here is a Failure to Cooperate Reuben J. Settergren Section 2. The Value of Discrete Mathematics: Achieving Broader Goals Implementing the Standards: Let's Focus on the First Four Nancy Casey and Michael R. Fellows Discrete Mathematics: A Vehicle for Problem Solving and Excitement Margaret B. Cozzens Logic and Discrete Mathematics in the Schools Susanna S. Epp Writing Discrete(ly) Rochelle Leibowitz Discrete Mathematics and Public Perceptions of Mathematics Joseph Malkevitch Mathematical Modeling and Discrete Mathematics Henry O. Pollak The Role of Applications in Teaching Discrete Mathematics Fred S. Roberts Section 3. What is Discrete Mathematics: Two Perspectives What is Discrete Mathematics? The Many Answers Stephen B. Maurer A Comprehensive View of Discrete Mathematics: Chapter 14 of the New Jersey Mathematics Curriculum Framework [PostScript] Joseph G. Rosenstein Section 4. Integrating Discrete Mathematics into Existing Mathematics Curricula, Grades K-8 Discrete Mathematics in K-2 Classrooms Valerie A. DeBellis Rhythm and Pattern: Discrete Mathematics with an Artistic Connection for Elementary School Teachers Robert E. Jamison Discrete Mathematics Activities for Middle School Evan Maletsky Section 5. Integrating Discrete Mathematics into Existing Mathematics Curricula, Grades 9-12 Putting Chaos into Calculus Courses Robert L. Devaney Making a Difference with Difference Equations John A. Dossey Discrete Mathematical Modeling in the Secondary Curriculum: Rationale and Examples from the Core-Plus Mathematics Project Eric W. Hart A Discrete Mathematics Experience with General Mathematics Students Bret Hoyer Algorithms, Algebra, and the Computer Lab Philip G. Lewis Discrete Mathematics is Already in the Classroom -- But It's Hiding Joan Reinthaler Integrating Discrete Mathematics into the Curriculum: An Example James T. Sandefur Section 6. High School Courses on Discrete Mathematics The Status of Discrete Mathematics in the High Schools Harold F. Bailey Discrete Mathematics: A Fresh Start for Secondary Students L. Charles Biehl A Discrete Mathematics Textbook for High Schools Nancy Crisler, Patience Fisher, and Gary Froelich Section 7. Discrete Mathematics and Computer Science Computer Science, Problem Solving, and Discrete Mathematics Peter B. Henderson The Role of Computer Science and Discrete Mathematics in the High School Curriculum Viera K. Proulx Section 8. Resources for Teachers Discrete Mathematics Software for K-12 Education Nathaniel Dean and Yanxi Liu Deborah S. Franzblau and Janice C. Kowalczyk The Leadership Program in Discrete Mathematics Joseph G. Rosenstein and Valerie A. DeBellis Computer Software for the Teaching of Discrete Mathematics in the Schools Mario Vassallo and Anthony Ralston Contacting the Center Document last modified on October 28, 1998.
{"url":"http://dimacs.rutgers.edu/Volumes/Vol36.html","timestamp":"2014-04-19T11:56:37Z","content_type":null,"content_length":"12988","record_id":"<urn:uuid:02ab157e-dd27-42b7-b36b-e925c9c6c843>","cc-path":"CC-MAIN-2014-15/segments/1397609537186.46/warc/CC-MAIN-20140416005217-00564-ip-10-147-4-33.ec2.internal.warc.gz"}
Tavistock, NJ Algebra 2 Tutor Find a Tavistock, NJ Algebra 2 Tutor ...I am especially personable and I know I have the ability to inspire students to have success beyond their expectations especially with the creative method I use for teaching. I believe that the best approach for teaching is to help students conceptualize some seemingly abstract topics in these s... 16 Subjects: including algebra 2, Spanish, calculus, physics ...I feel very comfortable helping a student in either of these subjects, and I find both of these subjects very interesting. Throughout college, I was required to take a CAD course, which I received an A in. I continued to use CAD software throughout my education, as well as in my summer internships. 30 Subjects: including algebra 2, reading, English, biology ...I enjoy working with teens, and I generally get along pretty well with them. Although, adults are okay too (they tend to ask better questions). So if you are someone in need of tutoring in math and/or physics (and there is no shame in asking for extra help), then please feel free to contact me. ... 16 Subjects: including algebra 2, English, physics, calculus ...On the other hand, if you have chosen one of the other talented tutors, keep on learning! "An expert problem solver must be endowed with two incomparable qualities: a restless imagination and a patient pertinacity." - Howard W. Eves, American MathematicianMy expertise is in the field of analyti... 9 Subjects: including algebra 2, chemistry, geometry, algebra 1 ...I now work as a college admissions consultant for a university prep firm and volunteer as a mentor to youth in Camden. After graduating Princeton I lived and worked for about two years in Singapore, where I taught business IT (focusing on advanced MS Excel) in the business and accounting school ... 36 Subjects: including algebra 2, reading, English, geometry Related Tavistock, NJ Tutors Tavistock, NJ Accounting Tutors Tavistock, NJ ACT Tutors Tavistock, NJ Algebra Tutors Tavistock, NJ Algebra 2 Tutors Tavistock, NJ Calculus Tutors Tavistock, NJ Geometry Tutors Tavistock, NJ Math Tutors Tavistock, NJ Prealgebra Tutors Tavistock, NJ Precalculus Tutors Tavistock, NJ SAT Tutors Tavistock, NJ SAT Math Tutors Tavistock, NJ Science Tutors Tavistock, NJ Statistics Tutors Tavistock, NJ Trigonometry Tutors
{"url":"http://www.purplemath.com/Tavistock_NJ_Algebra_2_tutors.php","timestamp":"2014-04-16T04:42:42Z","content_type":null,"content_length":"24235","record_id":"<urn:uuid:5d14b2d3-1604-49f5-8b36-5ac28d1f5552>","cc-path":"CC-MAIN-2014-15/segments/1398223202457.0/warc/CC-MAIN-20140423032002-00405-ip-10-147-4-33.ec2.internal.warc.gz"}
Succeed In Math Do you want to do better in math? We have some suggestions that will help you. Tips for Students Why Studying Math is Different Take Charge and Take Action Preparing for Tests Why Studying Math is Different · Reading a math text is not like reading a novel. You may have to stop and think about some lines before proceeding. · Math is a cumulative subject. If you miss a concept one day, it may come back to haunt you and could even prevent you from understanding concepts you study later. Always get help as soon as you recognize that you have a problem. · Build up a network of math partners you can consult if you run into a roadblock. These are the days of easy communication. Telephone, email, and instant messaging are all available. Use them. Take Charge and Take Action · Take responsibility for your own success. If you find that you don't know or understand something, take whatever steps are necessary to fix the problem. Do not let others distract you from your · Be an active participant in the classroom. Volunteer answers to questions and offer to place solutions on the blackboard. Ask questions immediately when you think you have lost the thread of the · Math is learned by doing problems. Although you need to know some facts and procedures, you get really good at math by working through problems. It's wise to work on a problem yourself as much as possible. You may need to ask for help at some point, but don't give up too easily. The more you can do on your own, the more your brain will develop and the easier future problems will seem. · Problem-solving is one of the key skills in the study of math. There are tips for problem-solving starting on page xiii in the front of MathLinks 8. Your teacher will show you additional strategies that you can use. In short, the steps are: – Understand the problem. – Plan how to solve it. – Do It! Carry out your plan. – Look Back. Review how you solved the problem and the answer you received. Does it make sense? If the answer seems unreasonable, it may be necessary to look for errors or select another strategy. · Before beginning an assignment, review your class notes. Ensure that you understand the worked examples and the meaning of any new terms. Consider highlighting important concepts, equations, or · As you work on each chapter, use the Foldables™ idea at the beginning of that chapter to keep track of information from the chapter, including Key Words, examples, Key Ideas, and what you need to work on. · If you have completed the assigned problems, but still don't feel comfortable with the concepts, do a few more. Most teachers will assign about half of the problems in a given exercise. If you run out of practice questions before you feel comfortable with the concepts, ask the teacher for more. This Student Centre section of the MathLinks 8 Online Learning Centre has many things to help you. · If you find that you need some help or a hint to proceed with the solution to a problem, be careful not to get too much help. You want a coach, not a handout. Once you see where to go, thank your coach. Don't ask for the entire solution. That robs you of an important learning opportunity. · You have not failed at solving a question until you quit. Sometimes it is useful to skip a tricky question after thinking about it for a few minutes and then come back to it later. · If there is any reason why you cannot finish your entire math assignment, it is better to do a few problems from each part than to do just the first problems in the assignment. · If the homework load is light on a given day, use the extra time to review and practise concepts covered earlier in the course. · Allow a few minutes at the end of your math work session to have a look at the next lesson so that you know what is coming up. It isn't necessary to work through the lesson, just to get a feeling for what is going to happen in the next math class. Preparing for Tests · If you do your homework conscientiously and work at fixing problems as they occur, then preparing for tests becomes much easier. All you need to do is remind yourself of the concepts that you are going to be tested on and do some sample problems to sharpen up your skills. · When you receive your test, take a minute or two to look it over. You don't have to do question #1 first. If you see that you know how to attack question #3, then do that one first. · Don't get bogged down on a question. If your strategy doesn't seem to be working and you are stuck for an alternative, go on to another question. · Sometimes you will not finish a test in the time allotted. If this seems to be happening, do not panic. Accept that you are not going to finish. Make it your goal to do as many questions as you can before the time runs out. · Read each question carefully. Be sure that you answer what was asked. · Be sure to show your work. If you make an error and arrive at the wrong answer, you will at least get partial marks. · If you have time left, use it to verify your answers. You can sometimes work backwards to do this. Alternatively, you can solve the same question a different way. Be sure to check calculations. A slip of the finger on a calculator can easily lead to a wrong answer. · Watch out for panic attacks or "freezeups". This occasionally happens to many students on a test. Time may be short, solutions are not going well, and you have an overwhelming sense of panic. The best thing to do is STOP. Turn the test over on your desk. Take several deep breaths, exhaling slowly. Remind yourself that you prepared for this test and that you can do most, probably all, of the questions on it. Then, return to the test, select a question that you can do, and work through it. · If panic becomes a serious problem, consider learning one or more relaxation techniques or consulting a counsellor for other strategies. Keep in mind that these will not help if the real source of the panic is inadequate preparation for the test.
{"url":"http://highered.mcgraw-hill.com/sites/0070973385/student_view0/succeed_in_math.html","timestamp":"2014-04-20T13:19:17Z","content_type":null,"content_length":"52866","record_id":"<urn:uuid:b6bd3314-152b-476c-98f9-948537559182>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00640-ip-10-147-4-33.ec2.internal.warc.gz"}
Highlands, TX Math Tutor Find a Highlands, TX Math Tutor ...I also have experience tutoring for the Texas State Test (TAKS & STAAR) resulting in 'recognized' or 'advanced' in the Reading and Science and outstanding achievements in Mathematics. I have a very encouraging & fun teaching style & believe anything can be easily understood if it is presented in... 12 Subjects: including prealgebra, English, geometry, algebra 1 ...As one who has helped thousands in developing an understanding of themselves. I am well aware and understand how to assist a student in connecting with a subject matter thereby allowing them to maximize their learning potential. As an educator I hold myself to the highest standard as exhibited ... 15 Subjects: including algebra 1, English, prealgebra, algebra 2 ...John's School but his ISEE math score was not high enough. I worked with him for 2 months, and he achieved a 9 (out of 10) on his ISEE math and was accepted to St. John's. 22 Subjects: including calculus, Java, government & politics, discrete math ...I have also tutored high school students in various locations. My reputation at the Air Force Academy was the top calculus instructor. I have taught precalculus during the past two years and have enjoyed success with the accomplishments of my students. 11 Subjects: including trigonometry, statistics, algebra 1, algebra 2 ...I have written at least ten (10) speeches of various length. I do not write out my speeches, I make an outline to follow. I do this because I think speeches are a conversation between the speaker and the listener. 18 Subjects: including algebra 1, SAT math, prealgebra, geometry Related Highlands, TX Tutors Highlands, TX Accounting Tutors Highlands, TX ACT Tutors Highlands, TX Algebra Tutors Highlands, TX Algebra 2 Tutors Highlands, TX Calculus Tutors Highlands, TX Geometry Tutors Highlands, TX Math Tutors Highlands, TX Prealgebra Tutors Highlands, TX Precalculus Tutors Highlands, TX SAT Tutors Highlands, TX SAT Math Tutors Highlands, TX Science Tutors Highlands, TX Statistics Tutors Highlands, TX Trigonometry Tutors Nearby Cities With Math Tutor Bellaire, TX Math Tutors Brookside Village, TX Math Tutors Channelview Math Tutors Cove, TX Math Tutors Crosby, TX Math Tutors Dayton, TX Math Tutors Huffman Math Tutors Jacinto City, TX Math Tutors Kemah Math Tutors Lynchburg, TX Math Tutors Mcnair, TX Math Tutors Mont Belvieu Math Tutors Morgans Point, TX Math Tutors Shoreacres, TX Math Tutors Wallisville Math Tutors
{"url":"http://www.purplemath.com/Highlands_TX_Math_tutors.php","timestamp":"2014-04-21T13:20:22Z","content_type":null,"content_length":"23700","record_id":"<urn:uuid:f900ad5c-c555-4b03-b8e5-e1a309d4c932>","cc-path":"CC-MAIN-2014-15/segments/1397609539776.45/warc/CC-MAIN-20140416005219-00084-ip-10-147-4-33.ec2.internal.warc.gz"}
Abstract Algebra: An Introduction This is good basic introduction to abstract algebra at the undergraduate level. It is more concrete than most such books, and is unusual in starting with rings first, making heavy use of the integers and polynomial rings as guides, and then doubling back to groups. This third edition has added a little bit of alternate treatment of some topics to provide a path starting with group theory. The book has a strong number-theoretic flavor, with the exception of the field theory parts, which have a strong theory-of-equations flavor. I think this rings-first approach works well; although groups are simpler, the integers are much more familiar and it’s easier to go through the abstraction process. This is not primarily a proofs book, and it recognizes the possibility that this may be the first course where students have to prove things. The book shows many examples to motivate the theorem and make it plausible, and then gives a proof. The book is well-equipped with exercises, although most of these are drill, examples, and proofs that test understanding rather than pose challenging problems or provide further developments of ideas in the body. Very Good Feature: extensive symbol glossary and index printed on the endpapers. Very Peculiar Feature: three Technology Tips scattered through the book showing how to use a TI graphing calculator for some number theory calculations. The programs are available for download from the publisher’s web site. Two comparable books are Herstein’s Abstract Algebra and Fraleigh’s A First Course in Abstract Algebra. Both of these follow the more traditional groups-rings-fields ordering. Herstein’s book is even more basic than Hungerford’s and is half the length. In particular, it omits Galois theory and solvable groups, although it does show the beginning of Galois theory and develops enough about field extensions to handle the classic construction problems in geometry. Fraleigh is fairly close to Hungerford in coverage, but is generally more abstract, puts more emphasis on structure and homomorphisms, has more applications, and goes a little deeper into the subject. Hungerford does a lot more hand-holding that the other two books, and sets a very leisurely pace, which accounts for much of the length of his book. Allen Stenger is a math hobbyist and retired software developer. He is webmaster and newsletter editor for the MAA Southwestern Section and is an editor of the Missouri Journal of Mathematical Sciences. His mathematical interests are number theory and classical analysis. He volunteers in his spare time at MathNerds.org, a math help site that fosters inquiry learning.
{"url":"http://www.maa.org/publications/maa-reviews/abstract-algebra-an-introduction-0?device=desktop","timestamp":"2014-04-20T11:17:46Z","content_type":null,"content_length":"97892","record_id":"<urn:uuid:454a1e53-66b7-4e82-bfcb-b428c5669b99>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00559-ip-10-147-4-33.ec2.internal.warc.gz"}
Quantum Hall effect gets new dimensions - physicsworld.com ‘The higher-dimensional generalization of the quantum Hall effect has been sought for a long time, but no one has succeeded before,’ Zhang told PhysicsWeb. ‘The mathematical structure could be very relevant to string theory, but the model is far, far from a realistic model of the universe.’ Most condensed matter systems can be explained by ignoring the interactions or correlations between electrons and calculating the properties of charged excitations in the ‘sea’ of electrons in the system. However, there is a growing number of strongly correlated systems - such as high-temperature superconductors - in which electron interactions are important and the conventional approach breaks down. Most of these systems develop long range order in their ground state. However, the quantum Hall effect and the ‘Luttinger liquid’ are the only known examples of quantum disordered ground The quantum Hall effect is observed when the resistance of a two-dimensional gas of electrons is measured in a magnetic field. In 1980 Klaus von Klitzing discovered that the resistance of the gas is quantized when the magnetic field is high and the temperature is very low. This integer quantum Hall effect could be explained without electron correlations, but this conventional approach failed when the fractional quantum Hall effect was discovered in 1982. This effect was subsequently explained by Robert Laughlin in terms of electron correlations leading to fractionally charged excitations. The Luttinger liquid can also be understood in terms of fractionally charged excitations. When Zhang and Hu extended the theory of the quantum Hall effect to four dimensions, they found that the equations that described excitations at the boundary of the system were similar to Maxwell’s equations of classical electromagnetism, and also to the linear version of Einstein’s General Theory of Relativity. They also found that the excitations could be used to model relativistic particles without mass, such as the photon and the graviton, and also other particles without analogues in high-energy physics. The results suggest that it might be possible to think of special and general relativity as theories that emerge from quantum mechanics, rather than as completely different theories. ‘Although this work is still very limited,’ write Zhang and Hu, ‘we hope that this framework will stimulate investigations on the deep connection between condensed matter and elementary particle
{"url":"http://physicsworld.com/cws/article/news/2001/oct/30/quantum-hall-effect-gets-new-dimensions","timestamp":"2014-04-18T22:18:14Z","content_type":null,"content_length":"38589","record_id":"<urn:uuid:31d194d4-ee48-443b-ab69-a056220cdaea>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.9/warc/CC-MAIN-20140416005215-00578-ip-10-147-4-33.ec2.internal.warc.gz"}
Relativity and Cosmology 1103 Submissions [36] viXra:1103.0113 [pdf] replaced on 25 Apr 2011 The Discrepancy in Redshift Between Associated Galaxies and Quasars Authors: Michael J Savins Comments: 8 pages There is considerable evidence to suggest that many apparent quasar galaxy pairs, actually are pairs despite their difference in red shift. Therefore, the light leaving the quasar must be gravitationally red shifted at source. However, one might expect the quasar to be blue shifted relative to the galaxy in view of the truly gigantic amount of energy being released ejecting huge volumes of material at relativistic speed. The problem then is how to account for the difference, what mechanism is capable of turning a blue shift into a considerable red shift other than cosmological expansion? The answer is time or more precisely the rate of flow of time. A quasar alters the rate of flow of time in its locality by the following Rt = E/m + -ma. The quasar by attempting to reverse time in its vicinity slows down the universal rate of flow of time in its locality. This slow down of the rate of flow of time or time dilation results in light leaving the quasar being far red shifted at source. This rules out a black hole as being a candidate for the quasar as black holes and galaxies share the same time frame, this means it has to be a white hole which is time reversed in comparison to the rest of the universe Category: Relativity and Cosmology [35] viXra:1103.0111 [pdf] submitted on 28 Mar 2011 Some Studies on K-Essence Lagrangian Authors: Somnath Mukherjee Comments: 11 pages It has by now established that the universe consists of roughly 25 percent dark matter and 70 percent dark energy. Parametric lagrangian from an exact k-essence lagrangian is studied of an unified dark matter and dark energy model. Category: Relativity and Cosmology [34] viXra:1103.0109 [pdf] replaced on 2012-06-06 11:37:23 The General Theory of Relativity, Metric Theory of Relativity and Covariant Theory of Gravitation. Axiomatization and Critical Analysis Authors: Sergey G. Fedosin Comments: 14 Pages. The axiomatization of general theory of relativity (GR) is done. The axioms of GR are compared with the axioms of the metric theory of relativity and the covariant theory of gravitation. The need to use the covariant form of the total derivative with respect to the proper time of the invariant quantities, the 4-vectors and tensors is indicated. The definition of the 4-vector of force density in Riemannian spacetime is deduced. Category: Relativity and Cosmology [33] viXra:1103.0108 [pdf] submitted on 27 Mar 2011 New Theory of Light and Ether Authors: Udrea Sergiu Comments: 7 pages. Light as a wave needs a medium of propagation. The ether is the medium through which propagate light waves. Light shows a surprising number of properties that were difficult to explain relatively to the ether, but which must appear as naturals, a normal result of the properties of ether. The ether must support the phenomena related to light, particles and their interactions. Category: Relativity and Cosmology [32] viXra:1103.0104 [pdf] replaced on 11 Jun 2011 Quantum Field Theory with Electric-Magnetic Duality and Spin-Mass Duality But Without Grand Unification and Supersymmetry Authors: Rainer W. Kühne Comments: 25 pages. I present a generalization of quantum electrodynamics which includes Dirac magnetic monopoles and the Salam magnetic photon. This quantum electromagnetodynamics has many attractive features. (1) It explains the quantization of electric charge. (2) It describes symmetrized Maxwell equations. (3) It is manifestly covariant. (4) It describes local four-potentials. (5) It avoids the unphysical Dirac string. (6) It predicts a second kind of electromagnetic radiation which can be verified by a tabletop experiment. An effect of this radiation may have been observed by August Kundt in 1885. Furthermore I discuss a generalization of General Relativity which includes Cartan's torsion. I discuss the mathematical definition, concrete description, and physical meaning of Cartan's torsion. I argue that the electric-magnetic duality of quantum electromagnetodynamics is analogous to the spin-mass duality of Einstein-Cartan theory. A quantum version of this theory requires that the torsion tensor corresponds to a spin-3 boson called tordion which is shown to have a rest mass close to the Planck mass. Moreover I present an empirically satisfied fundamental equation of unified field theory which includes the fundamental constants of electromagnetism and gravity. I conclude with the remark that the concepts presented here require neither Grand Unification nor supersymmetry. Category: Relativity and Cosmology [31] viXra:1103.0103 [pdf] replaced on 5 Apr 2011 Pioneer Anomaly – A Confirmation of Relativity Authors: Michael J Savins Comments: 3 pages. The Pioneer anomaly is due to time dilation that is caused by the gravity of the Solar System. As pioneer leaves the solar System the rate of flow of time increases causing a doppler blue shift relative to our perspective. This blue shift reduces the expected red shift so the red shift is not as far red shifted as expected. The craft is where it is supposed to be, it just appears to be closer to us than it is. Category: Relativity and Cosmology [30] viXra:1103.0102 [pdf] submitted on 25 Mar 2011 The Phoenix Theory of the Universe Authors: Michael J Savins Comments: 20 pages. Much knowledge has been discovered about the Universe and is readily available. The Phoenix Theory attempts to re-interpret it in a simple and intuitive manner that better fits what we see. The Phoenix theory builds upon Einstein's theories of Relativity and General Relativity. His simple equation E = mc^2 explains a lot more about the universe than it has been credited with. This theory fills in many of the missing details and attempts to complete the picture. Everything in the universe is made of energy. This theory attempts to explain the relationship between mass, energy and time in a cyclic universe, matter, antimatter ad infinitum. Category: Relativity and Cosmology [29] viXra:1103.0097 [pdf] submitted on 23 Mar 2011 ZPE Zero Point Energy Examples Around Black Holes. Authors: Leo Vuyk Comments: 11 pages. FUNCTION FOLLOWS FORM in Quantum FFF THEORY. The FORM and MICROSTRUCTURE of elementary particles, is supposed to be the origin of FUNCTIONAL differences between Higgs- Graviton- Photon- and Fermion particles. As a consequence, a NEW splitting, accelerating and pairing MASSLESS BLACK HOLE, able to convert vacuum energy (ZPE) into real energy by entropy decrease, seems to be able to explain quick Galaxy- and Star formation, down to Sunspots, Comets and even Ball Lightning. Category: Relativity and Cosmology [28] viXra:1103.0096 [pdf] replaced on 2014-04-05 04:28:50 Three Fundamental Masses Derived by Dimensional Analysis Authors: Dimitar Valev Comments: 6 pages, minor changes, paper is published in Am. J. Space Sci., 2013, Vol. 1, Issue 2, pp. 145-149 Three new mass dimension quantities have been derived by dimensional analysis, in addition to the famous Planck mass mp ~ 10^(-8) kg. These masses have been derived by means of fundamental constants – the speed of light (c), the gravitational constant (G), the Plank constant (h_bar) and the Hubble constant (H). The enormous mass m1 ~ 10^53 kg practically coincides with the Hoyle-Carvalho formula for the mass of the observable universe. The extremely small mass m2 ~ 10^(-33) eV has been identified with the minimum quantum of energy, which seems close to the graviton mass. It is noteworthy that the Planck mass appears geometric mean of the masses m1 and m2. The mass m3 ~ 10^7 GeV could not be unambiguously identified at present time. Besides, the order of magnitude of the total density of the universe has been estimated by this approach. Category: Relativity and Cosmology [27] viXra:1103.0088 [pdf] submitted on 23 Mar 2011 One-Way Speed of Light Relative to a Moving Observer Authors: Stephan J. G. Gift Comments: 8 pages. The one-way speed of light relative to a moving observer is determined using the range measurement equation of the Global Positioning System. This equation has been rigorously tested and verified in the Earth-Centred Inertial frame where light signals propagate in straight lines at constant speed c. The result is a simple demonstration of light speed anisotropy that is consistent with light speed anisotropy detected in other experiments and inconsistent with the principle of light speed constancy. This light speed anisotropy was not observed before because there has been no direct one-way measurement of light speed relative to a moving observer. Category: Relativity and Cosmology [26] viXra:1103.0087 [pdf] replaced on 25 Apr 2011 Reduced Total Energy Requirements For The Original Alcubierre and Natario Warp Drive Spacetimes-The Role Of Warp Factors. Authors: Fernando Loup Comments: 21 pages. Warp Drives are solutions of the Einstein Field Equations that allows Superluminal Travel within the framework of General Relativity. There are at the present moment two known solutions: The Alcubierre Warp Drive discovered in 1994 and the Natario Warp Drive discovered in 2001. However as stated by both Alcubierre and Natario themselves the Warp Drive violates all the known energy conditions because the stress energy momentum tensor(the right side of the Einstein Field Equations) for the Einstein tensor G[00] is negative implying in a negative energy density. While from a classical point of view the negative energy is forbidden the quantum theory allows the existence of very small amounts of it being the Casimir effect a good example as stated by Alcubierre himself. But the stress energy momentum tensor of both Alcubierre and Natario Warp Drives have the speed of the ship raised to the square inside its mathematical structure which means to say that as fast the ship goes by then more and more amounts of negative energy are needed in order to maintain the Warp Drive. Since the total energy requirements to maintain the Warp Drive are enormous and since quantum theory only allows small amounts of it,many authors regarded the Warp Drive as unphysical and impossible to be achieved. We compute the negative energy density requirements for a Warp Bubble with a radius of 100 meters(large enough to contain a ship) moving with a speed of 200 times light speed(fast enough to reach stars at 20 light-years away in months not in years)and we verify that the negative energy density requirements are of about 10^28 times the positive energy density of Earth!!!(We multiply the mass of Earth by c^2 and divide by Earth volume for a radius of 6300km). However both Alcubierre and Natario Warp Drives as members of the same family of the Einstein Field Equations requires the so-called Shape Functions in order to be mathematically defined. We present in this work two new Shape Functions one for the Alcubierre and another for the NatarioWarp Drive Spacetimes that allows arbitrary Superluminal speeds while keeping the negative energy density at "low" and "affordable" levels.We do not violate any known law of quantum physics and we maintain the original geometries of both Alcubierre and Natario Warp Drive Spacetimes. Category: Relativity and Cosmology [25] viXra:1103.0074 [pdf] replaced on 9 May 2011 On the Expansion of the Universe Authors: Jorge de Sousa e Meneses Comments: 6 pages It is accepted from the beginning that nothing can escape from the Universe and a distance is found at which the expansion and compression of the space around a mass are in equilibrium. With this in mind the density of the space is calculated. The value obtained matches the value obtained experimentally by measuring cosmologic redshifts. Applying this concept to the mass of the Universe a second equation is found. This equation, together with the first one, allows the age of the Universe to be calculated and a value is found which is between the normally accepted limits. The same equations allow the deduction of the density equation calculated by Milne and the relativistic equation deduced by Friedmann. Finally, with these equations, the relation between the mass of the Universe, the speed of light and the universal constant of gravitation is found. This relation indicates possibly new areas of investigation. Category: Relativity and Cosmology [24] viXra:1103.0067 [pdf] submitted on 16 Mar 2011 Addressing Strength of GW Radiation Affected by Additional Dimensions Authors: A. Beckwith Comments: 15 pages, 3 tables, two figures.; Has arguments in simplified form as to what will be brought up for presentation by the author in rencontres de Moriond, in a GW symposium We examine whether gravitational waves would be generated during the initial phase, δ[0] of the universe when triggered by changes in space-time geometry; i.e. We hope to find traces of the breakdown of the Entropy/QM space-time regime during δ[0]. In particular, we look at if higher dimensions affect the relative strength of Ω[GW] , and comment as to how this magnitude may affect opportunities for detection of GW from relic sources. In particular, we will explain the reason why GW O of the pre big bang model is so strong, up to 10 to the - 6 power, while the Ω[GW] of ordinary inflation is so weak. In relic conditions. Category: Relativity and Cosmology [23] viXra:1103.0061 [pdf] replaced on 15 Mar 2011 Relativity the Theory of Information Authors: John R. McWhinnie Comments: 14 pages This paper shows that the theory that we know as the Theory Of Relativity is more accurately described as A Theory Of Information. Explained from an informational perspective and the conclusions that its author, Albert Einstein came to, come into question through the natural viewpoint as the entire theory being simply about the transfer of information between informational systems. Category: Relativity and Cosmology [22] viXra:1103.0060 [pdf] submitted on 14 Mar 2011 The General Relativity Precession of Mercury. Authors: Javier Bootello Comments: 7 pages The solution to the unexplained anomalous precession of the perihelion of Mercury, was the first success of GR (Einstein 1915), event which is near to reach its first centenary. We propose in this paper, to update the classic test of relativity, studying the gradual progression of one-orbit precession, not only in its perihelion, but also along a complete trajectory around the Sun. Just to underline GR results, we have confronted it with other virtual and mathematical potentials which, leading to an identical secular advance of the perihelion, offer different equations of motion with only theoretical meaning. Spacecraft Messenger will begin to orbit Mercury next March 18, and during twelve months, both will make 4.2 revolutions around the Sun. That event should afterwards allow us, to measure and draw accurately, the geometry of the whole geodesic orbit as an open free-fall path, isolated from other planets gravitational interference. This update must verify the GR issues with modern standards, throughout an accessible test to perform, with clear results, unlike a complex test, expensive and with uncertain conclusions. Category: Relativity and Cosmology [21] viXra:1103.0057 [pdf] submitted on 14 Mar 2011 Serious Anomalies in the Reported Geometry of Einstein's Gravitational Field Authors: Stephen J. Crothers Comments: 6 pages Careful reading of the reported geometry of Einstein's gravitational field reveals that the physicists have committed fatal errors in the elementary differential geometry of a pseudo-Riemannian metric manifold. These elementary errors in mathematics invalidate much of the reported physics of Einstein's gravitational field. The consequences for astrophysical theory are significant. Category: Relativity and Cosmology [20] viXra:1103.0056 [pdf] submitted on 14 Mar 2011 Fundamental Errors in the General Theory of Relativity Authors: Stephen J. Crothers Comments: 15 pages The notion of black holes voraciously gobbling up matter, twisting spacetime into contortions that trap light, stretching the unwary into long spaghetti-like strands as they fall inward to ultimately collide and merge with an infinitely dense point-mass singularity, has become a mantra of the astrophysical community. There are almost daily reports of scientists claiming that they have again found black holes again here and there. It is asserted that black holes range in size from micro to mini, to intermediate and on up through to supermassive behemoths and it is accepted as scientific fact that they have been detected at the centres of galaxies. Images of black holes interacting with surrounding matter are routinely included with reports of them. Some physicists even claim that black holes will be created in particle accelerators, such as the Large Hadron Collider, potentially able to swallow the Earth, if care is not taken in their production. Yet contrary to the assertions of the astronomers and astrophysicists of the black hole community, nobody has ever found a black hole, anywhere, let alone imaged one. The pictures adduced to convince are actually either artistic impressions (i.e. drawings) or photos of otherwise unidentified objects imaged by telescopes and merely asserted to be due to black holes, ad hoc. Category: Relativity and Cosmology [19] viXra:1103.0055 [pdf] submitted on 14 Mar 2011 On Line-Elements and Radii: A Correction Authors: Stephen J. Crothers Comments: 2 pages Using a manifold with boundary various line-elements have been proposed as solutions to Einstein's gravitational field. It is from such line-elements that black holes, expansion of the Universe, and big bang cosmology have been alleged. However, it has been proved that black holes, expansion of the Universe, and big bang cosmology are not consistent with General Relativity. In a previous paper disproving the black hole theory, the writer made an error which, although minor and having no effect on the conclusion that black holes are inconsistent with General Relativity, is corrected herein for the record. Category: Relativity and Cosmology [18] viXra:1103.0054 [pdf] submitted on 14 Mar 2011 Planck Particles and Quantum Gravity Authors: Stephen J. Crothers Comments: 4 pages The alleged existence of so-called Planck particles is examined. The various methods for deriving the properties of these "particles" are examined and it is shown that their existence as genuine physical particles is based on a number of conceptual flaws which serve to render the concept invalid. Category: Relativity and Cosmology [17] viXra:1103.0053 [pdf] submitted on 14 Mar 2011 On Theoretical Contradictions and Physical Misconceptions in the General Theory of Relativity Authors: Stephen J. Crothers Comments: 6 pages It is demonstrated herein that:- 1. The quantity 'r' appearing in the so-called "Schwarzschild solution" is neither a distance nor a geodesic radius in the manifold but is in fact the inverse square root of the Gaussian curvature of the spatial section and does not generally determine the geodesic radial distance (the proper radius) from the arbitrary point at the centre of the spherically symmetric metric manifold. 2. The Theory of Relativity forbids the existence of point-mass singularities because they imply infinite energies (or equivalently, that a material body can acquire the speed of light in vacuo); 3. Ric=R[μν] =0 violates Einstein's 'Principle of Equivalence' and so does not describe Einstein's gravitational field; 4. Einstein's conceptions of the conservation and localisation of gravitational energy are invalid; 5. The concepts of black holes and their interactions are ill-conceived; 6. The FRW line-element actually implies an open, infinite Universe in both time and space, thereby invalidating the Big Bang cosmology. Category: Relativity and Cosmology [16] viXra:1103.0052 [pdf] submitted on 14 Mar 2011 Geometric and Physical Defects in the Theory of Black Holes Authors: Stephen J. Crothers Comments: 12 pages The so-called 'Schwarzschild solution' is not Schwarzschild's solution, but a corruption of the Schwarzschild/Droste solution due to David Hilbert (December 1916), wherein m is allegedly the mass of the source of the alleged associated gravitational field and the quantity r is alleged to be able to go down to zero (although no proof of this claim has ever been advanced), so that there are two alleged 'singularities', one at r=2m and another at r=0. It is routinely alleged that r=2m is a 'coordinate' or 'removable' singularity which denotes the so-called 'Schwarzschild radius' (event horizon) and that the 'physical' singularity is at r=0. The quantity r in the usual metric has never been rightly identified by the physicists, who effectively treat it as a radial distance from the alleged source of the gravitational field at the origin of coordinates. The consequence of this is that the intrinsic geometry of the metric manifold has been violated in the procedures applied to the associated metric by which the black hole has been generated. It is easily proven that the said quantity r is in fact the inverse square root of the Gaussian curvature of a spherically symmetric geodesic surface in the spatial section of Schwarzschild spacetime and so does not denote radial distance in the Schwarzschild manifold. With the correct identification of the associated Gaussian curvature it is also easily proven that there is only one singularity associated with all Schwarzschild metrics, of which there is an infinite number that are equivalent. Thus, the standard removal of the singularity at r=2m is actually a removal of the wrong singularity, very simply demonstrated herein. Category: Relativity and Cosmology [15] viXra:1103.0051 [pdf] submitted on 14 Mar 2011 The Schwarzschild Solution and Its Implications for Gravitational Waves Authors: Stephen J. Crothers Comments: 27 pages The so-called 'Schwarzschild solution' is not Schwarzschild's solution, but a corruption, due to David Hilbert (December 1916), of the Schwarzschild/Droste solution, wherein m is allegedly the mass of the source of a gravitational field and the quantity r is alleged to be able to go down to zero (although no proof of this claim has ever been advanced), so that there are two alleged 'singularities', one at r=2m and another at r=0. It is routinely asserted that r=2m is a 'coordinate' or 'removable' singularity which denotes the so-called 'Schwarzschild radius' (event horizon) and that the 'physical' singularity is at r=0. The quantity r in the so-called 'Schwarzschild solution' has never been rightly identified by the physicists, who, although proposing many and varied concepts for what r therein denotes, effectively treat it as a radial distance from the claimed source of the gravitational field at the origin of coordinates. The consequence of this is that the intrinsic geometry of the metric manifold has been violated. It is easily proven that the said quantity r is in fact the inverse square root of the Gaussian curvature of a spherically symmetric geodesic surface in the spatial section of the 'Schwarzschild solution' and so does not in itself define any distance whatsoever in that manifold. With the correct identification of the associated Gaussian curvature it is also easily proven that there is only one singularity associated with all Schwarzschild metrics, of which there is an infinite number that are equivalent. Thus, the standard removal of the singularity at r=2m is, in a very real sense, removal of the wrong singularity, very simply demonstrated herein. This has major implications for the localisation of gravitational energy i.e. gravitational waves. Category: Relativity and Cosmology [14] viXra:1103.0050 [pdf] submitted on 14 Mar 2011 Proof that the Black Hole is Fallacious (Without Mathematics) Authors: Stephen J. Crothers Comments: 4 pages The proponents of the black hole make much fanfare of the quantity r that appears in the so-called "Schwarzschild solution". They treat this issue with complicated mathematics and thereby confuse those not versed in the relevant mathematics. They routinely claim that this r is the radius, one way or another. However, this is false because it is not even a distance in "Schwarzschild" spacetime since it is easily proven that it strictly plays the role of the inverse square root of the Gaussian curvature of the spherically symmetric geodesic surface in the spatial section of "Schwarzschild" spacetime and so does not itself denote any distance whatsoever in "Schwarzschild" spacetime. Now one does not even need to understand the abstruse mathematics surrounding this issue to see that the black hole is invalid, making this complicated mathematical matter irrelevant, as I now show. Category: Relativity and Cosmology [13] viXra:1103.0049 [pdf] submitted on 14 Mar 2011 The Fictitious 'interior' of Schwarzschild Spacetime (And a Couple of Other Fallacies) Authors: Stephen J. Crothers Comments: 8 pages All arguments for the black hole are based upon the same fundamental idea in that they conceive of a region that in actual fact does not exist. This fictitious region the relativists call the 'interior', i.e. the region they think is contained by their a spherically symmetric surface they call the 'event horizon'. But there is no such region. The notion comes from a failure to recognise that the centre of spherical symmetry of the problem at hand is not located where they think it is, at their r = 0. I shall discuss this now in more detail. Category: Relativity and Cosmology [12] viXra:1103.0048 [pdf] submitted on 14 Mar 2011 LIGO, LISA Destined to Detect Nothing Authors: Stephen J. Crothers Comments: 5 pages It is claimed that the LIGO and LISA projects will detect Einstein's gravitational waves. The existence of these waves is entirely theoretical. Over the past forty years or so no Einstein gravitational waves have been detected. How long must the search go on, at great expense to the public purse, before the astrophysical scientists admit that their search is fruitless and a waste of vast sums of public money? The fact is, from day one, the search for these elusive waves has been destined to detect nothing. Here are some reasons why. Category: Relativity and Cosmology [11] viXra:1103.0047 [pdf] replaced on 2012-08-29 01:15:26 The Black Hole, the Big Bang – A Cosmology in Crisis (A Detailed Analysis) Authors: Stephen J. Crothers Comments: 38 Pages. It is often claimed that cosmology became a true scientific inquiry with the advent of the General Theory of Relativity. A few subsequent putative observations have been misconstrued in such a way as to support the prevailing Big Bang model by which the Universe is alleged to have burst into existence from an infinitely dense point-mass singularity. Yet it can be shown that the General Theory of Relativity and the Big Bang model are in conflict with well-established experimental facts. Category: Relativity and Cosmology [10] viXra:1103.0046 [pdf] submitted on 14 Mar 2011 Concerning Fundamental Mathematical and Physical Defects in the General Theory of Relativity Authors: Stephen J. Crothers Comments: 13 pages The physicists have misinterpreted the quantity 'r' appearing in the socalled "Schwarzschild solution" as it is neither a distance nor a geodesic radius but is in fact the inverse square root of the Gaussian curvature of a spherically symmetric geodesic surface in the spatial section of the Schwarzschild manifold, and so it does not directly determine any distance at all in the Schwarzschild manifold - in other words, it determines the Gaussian curvature at any point in a spherically symmetric geodesic surface in the spatial section of the manifold. The concept of the black hole is consequently invalid. It is also shown herein that the Theory of Relativity forbids the existence of point-mass singularities because they imply infinite energies (or equivalently, that a material body can acquire the speed of light in vacuo), and so the black hole is forbidden by the Theory of Relativity. That Ric=R[μν] = 0 violates Einstein's 'Principle of Equivalence' and so does not describe Einstein's gravitational field, is demonstrated. It immediately follows that Einstein's conceptions of the conservation and localisation of gravitational energy are invalid - the General Theory of Relativity violates the usual conservation of energy and momentum. Category: Relativity and Cosmology [9] viXra:1103.0045 [pdf] submitted on 14 Mar 2011 Black Holes, Unicorns, and All That Jazz Authors: Stephen J. Crothers Comments: 5 pages The notion of black holes voraciously gobbling up matter, twisting spacetime into contortions that trap light, stretching the unwary into long spaghetti-like strands as they fall inward to ultimately collide and merge with an infinitely dense point-mass singularity, has become a mantra of the astrophysical community, so much so that even primary-school children know about the sinister black hole, waiting patiently, like the Roman child's Hannibal, for an opportunity to abduct the unruly and the misbehaved. There are almost daily reports of scientists claiming that they have again found black holes again here and there. It is asserted that black holes range in size from micro to mini, to intermediate and on up through to supermassive behemoths. Black holes are glibly spoken of and accepted as scientific facts and it is routinely claimed that they have been detected at the centres of galaxies. Images of black holes having their wicked ways with surrounding matter are routinely included with reports of them. Some physicists even claim that black holes will be created in particle accelerators, such as the Large Hadron Collider, potentially able to swallow the Earth, if care is not taken in their production. Yet despite all this hoopla, contrary to the assertions of the astronomers and astrophysicists of the black hole community, nobody has ever found a black hole, anywhere, let alone 'imaged' one. The pictures adduced to convince are actually either artistic impressions (i.e. drawings) or photos of otherwise unidentified objects imaged by telescopes and merely asserted to be due to black holes, ad hoc. Category: Relativity and Cosmology [8] viXra:1103.0044 [pdf] submitted on 14 Mar 2011 Authors: A. Beckwith Comments: 12 pages, one figure, three tables. Predictions of page 8 will be proved rigorously in the near future. We examine whether gravitational waves would be generated during the initial phase, of the universe when triggered by changes in spacetime geometry; i.e. We hope to find traces of the breakdown of the Entropy/QM spacetime regime during initial phase change induced by the geometry of space time alterations. Category: Relativity and Cosmology [7] viXra:1103.0037 [pdf] submitted on 12 Mar 2011 If the Speed of Light is a Constant, Then Light is a Wave Authors: Constantinos Ragazas Comments: 3 pages In this short note we mathematically prove that if we assume that the speed of light is constant, then light propagates as a wave. Category: Relativity and Cosmology [6] viXra:1103.0033 [pdf] submitted on 11 Mar 2011 How to Prove that the Transition from Pre Planckian to Planckian Space Time Physics May Allow Octonionic Gravity Conditions to Form. and How to Measure that Transition from Pre Octionic to Octionic Gravity, and Check Into Conditions Permitting Possible Multiple Universes Authors: A. Beckwith Comments: 22 pages, three figures, 4 tables. Provides foundations of GW astronomy as being developed by Chonqujing University department of physics We ask if Octonionic quantum gravity [1] is a relevant consideration near the Planck scale. Furthermore, we examine whether gravitational waves would be generated during the initial phase, and look into multiple universe generation. And an Ergodic mapping which may allow multiple universe embedding of Octonionic gravity as a starting point to inflation Category: Relativity and Cosmology [5] viXra:1103.0023 [pdf] submitted on 9 Mar 2011 Emergent Gravity in Start of Cosmological Inflation as by Product of Dynamical Systems Mappings ? Authors: Andrew Walcott Beckwith Comments: 6 pages, one table. An entry into the 11 DSTA conference, in Lotz, Poland, in December 5-8, 2011m put in to facilitate mathematical development and continuation/ improvement of a suggestion brought up by Dr. R. Penrose which the author was priviliged to attend in summer , 2007, at the inagural opening of the new Penn State gravitational physics center. We present how a Gaussian mapping, combined with what we hope to turn into a strange attractor for re cycling prior universe matter-energy may enable quantum gravity to form. And embed it in a larger- non linear theory. The key development to be worked upon would be turning into a strange attractor the supposition R. Penrose made as to re cycling the 'history' of the universe without the necessity of a 'big crunch', i.e. a contracting universe. The nature of the attractor would be instrumental in helping us come up with conditions enabling the evolution of pre Planckian embedding of non linear ' analog reality' ( 'classical' ) physics meshing into, with an increase in degrees of freedom into 'digital reality' ( 'quantum mechanics') and de facto quantum gravity , at the start of Planckian space time. This Planckian space time would mark the beginning of inflation. Category: Relativity and Cosmology [4] viXra:1103.0020 [pdf] submitted on 7 Mar 2011 Detailing Minimum Parameters as Far as Red Shift, Frequency, Strain, and Wavelength of Gravity Waves / Gravitons, and Possible Impact Upon GW Astronomy Authors: A. Beckwith Comments: 11 pages, 3 tables. Submitted by invitation to Advances in Astronomy. Coverse material part of presentation for This document will briefly outline some of the issues pertinent to early inflation and how it affects both strain readings for a GW detector, GW wave lengths, the number of gravitons which may be collected per phase space, among other issues. Different inflation models will also be briefly alluded to.explain in part what may be happening, as far as rates of alternations of wavelengths of GW 's from their genesis in terms of pre inflation to inflationary generation. We also mention a standard as far as GW measurement and how the 'metric' of measurement varies between the different models To summarize we state that the best chances for relic GW measurements are Ω[GW] ~ 10^6 are in the 1Hz < f < 10 GHz range. This according to the pre big bang models, and the QIM model. Category: Relativity and Cosmology [3] viXra:1103.0019 [pdf] submitted on 7 Mar 2011 Deceleration Parameter Q(Z) and Examination if a Joint DM-DE Model is Feasible, with Applications to " Atoms of Space Time " Thermodynamcs and BBN Authors: A. Beckwith Comments: 15 pages, 3 figures. Submitted to Advances in Astronomy The case for a four dimensional graviton mass (non zero) influencing reacceleration of the universe in five dimensions is stated, with particular emphasis upon if five dimensional geometries as given below give us new physical insight as to cosmological evolution. One noticeable datum, that a calculated inflaton φ (t) may partly re-emerge after fading out in the aftermath of inflation. The inflaton may be a contributing factor to, with non zero graviton mass, in re acceleration of the universe a billion years ago. Many theorists assume that the inflaton is the source of entropy. The inflaton also may be the source of re acceleration of the universe, especially if the effects of a re emergent inflaton are in tandem with the appearance of macro effects of a small graviton mass, leading to a speed up of the rate of expansion of the universe one billion years ago, at red shift value of Z ~ .423. The key formula, for joint DM-DE shows up in terms of deceleration parameter, Q (z). The choice of the DM-DE eqn. may eventually illuminate how early BBN may affect the formation of low levels of lithium for early star formation which we reference toward the end of this document. We also discuss what is necessary for not only proper BBN, but also to the implications for 'atoms' of space time congruent with relic GW production, i.e. the thermodynamics of emergent Category: Relativity and Cosmology [2] viXra:1103.0013 [pdf] submitted on 3 Mar 2011 The First 10^-35 Seconds from the Imaginary to the Real a First Course in the Exploration of the Unknown Authors: Nikiforos A. Sideris Comments: 137 pages. The first Edition of this work was published as a book in English under the title "THE FIRST 10^-35 SECONDS" (ISBN: 960-630-425-6) in Athens on 2005. The circulation of this book however was restricted mainly in the little community of the Greek physicists. Thanks to the very smart development of the viXra.org. I considered that it would be good to present it to the wide International Physics Community in electronic form since the ideas and findings from my work bring new information in the sectors of Elementary Particle Physics and in Cosmology. Some of these new results are based on other works of mine that have already been published mainly in the international journal Physics Essays and also in my new book published in Greek under the title "The Machinary of Newtonian Gravitation and the fallacies of General Relativity" (ISBN: 978-960-8160-49-1). I hope that for many physicists will be useful to be informed that to many as yet unsolved problems of physics, this presentation will give answers that may be discussed. I thank in advance and I congratulate the viXra. organization for their contribution to transfer to the physics community new ideas that perhaps, to my opinion, will bring a little restlessness to some of the top leading minds of the contemporary physics. Perhaps this is one of the reasons that new ideas are prevented to be exposed by some of the top journals on physics. But the ancient Greeks had a proverb: Nothing can be hidden under the Sun. Category: Relativity and Cosmology [1] viXra:1103.0005 [pdf] submitted on 2 Mar 2011 On the Theory of Everything Authors: Bertrand Wong Comments: 4 pages. A theory of everything , or, grand unified theory (which Einstein had been working on without success, with Superstring Theory now being a good candidate), is one which unites all the forces of nature, viz., gravity, electromagnetism, the strong nuclear force and the weak nuclear force. Important as this theory might be, it is lacking in one important fundamental aspect, viz., the role of consciousness, which could in fact be considered the most fundamental aspect of physics. This paper explains that a theory of consciousness is more important than a theory of everything or grand unified theory and should be the theory of everything instead, or, at least, a part of the theory of everything. Category: Relativity and Cosmology
{"url":"http://vixra.org/relcos/1103","timestamp":"2014-04-18T08:02:35Z","content_type":null,"content_length":"53902","record_id":"<urn:uuid:392b71e4-8d1d-4190-ab42-92f8619cc729>","cc-path":"CC-MAIN-2014-15/segments/1397609533121.28/warc/CC-MAIN-20140416005213-00062-ip-10-147-4-33.ec2.internal.warc.gz"}
the definition of convergent-sequence an infinite sequence, x [1] x [2] , …, whose terms are points in there exists a point such that the limit as goes to infinity of x[n] y if and only if for every ε>0, there exists a number such that i N j N implies | x[i] x[j] Also called Cauchy sequence, convergent sequence. complete ( def 10b )
{"url":"http://dictionary.reference.com/browse/convergent-sequence","timestamp":"2014-04-23T11:58:46Z","content_type":null,"content_length":"91037","record_id":"<urn:uuid:b9c3adf5-7330-4c7d-bf81-e35b17d80221>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00220-ip-10-147-4-33.ec2.internal.warc.gz"}
Real integration of oscillating functions We present new results on real integration of oscillating functions, for example, Fourier transforms of subanalytic functions; it is joint work with Aschenbrenner and Rolin. We begin to control very good the transcendental functions we have to add, (namely, oscillating versions of basic Abelian integrals), in order to describe these parameterized integrals. The method is in principle algorithmic, with a similar algorithm as to compute motivic oscillating integrals. Yet, conjectures linking real and motivic integrals remain unsolvable.
{"url":"http://www.newton.ac.uk/programmes/MAA/Abstract1/cluckers.html","timestamp":"2014-04-18T18:17:06Z","content_type":null,"content_length":"2618","record_id":"<urn:uuid:a0e250aa-e1ac-4452-9bb6-d2d6e2d47916>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00517-ip-10-147-4-33.ec2.internal.warc.gz"}
Non-linear GxE model I am running models of GxE where I have age as a continuous moderator of a continuous phenotype. I have initially run the standard linear GxE model as outlined by Purcell (2002). However, I had problems getting this to fit. Out of interest, I ran the non-linear script, and this was successful. In addition to running the full model I have run nested models where the influence of the moderating terms are successively dropped. I am just wondering whether it is logically implausible to run a model where I retain quadratic paths (for the non-linear interaction), but drop the linear ones? The best fitting model appears to be one where only the quadratic moderation of E is retained and all other moderating paths (i.e. linear A, C, E, quadratic A and C) are dropped. Any advice would be much appreciated! Thanks!
{"url":"http://openmx.psyc.virginia.edu/print/854","timestamp":"2014-04-16T19:22:11Z","content_type":null,"content_length":"8266","record_id":"<urn:uuid:16a5b4c9-40e3-440b-acff-b87236249add>","cc-path":"CC-MAIN-2014-15/segments/1397609524644.38/warc/CC-MAIN-20140416005204-00540-ip-10-147-4-33.ec2.internal.warc.gz"}
Fibonacci Number Formula The Fibonacci numbers are generated by setting F[0]=0, F[1]=1, and then using the recursive formula F[n] = F[n-1] + F[n-2] to get the rest. Thus the sequence begins: 0, 1, 1, 2, 3, 5, 8, 13, 21, 34, ... This sequence of Fibonacci numbers arises all over mathematics and also in nature. However, if I wanted the 100th term of this sequence, it would take lots of intermediate calculations with the recursive formula to get a result. Is there an easier way? Yes, there is an exact formula for the n-th term! It is: a[n] = [ Phi^n - (phi)^n ]/Sqrt[5]. where Phi=(1+Sqrt[5])/2 is the so-called golden mean, and phi=(1-Sqrt[5])/2 is an associated golden number, also equal to (-1/Phi). This formula is attributed to Binet in 1843, though known by Euler before him. The Math Behind the Fact: The formula can be proved by induction. It can also be proved using the eigenvalues of a 2x2-matrix that encodes the recurrence. You can learn more about recurrence formulas in a fun course called discrete mathematics. How to Cite this Page: Su, Francis E., et al. "Fibonacci Number Formula." Math Fun Facts. <http://www.math.hmc.edu/funfacts>.
{"url":"http://www.math.hmc.edu/funfacts/ffiles/10002.4-5.shtml","timestamp":"2014-04-19T04:19:16Z","content_type":null,"content_length":"20280","record_id":"<urn:uuid:16aa42eb-7260-4693-b4a8-c86fffe9665b>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00315-ip-10-147-4-33.ec2.internal.warc.gz"}
Posts by Posts by yuni Total # Posts: 26 bio lab what should i labeled on giant chromosome slide? -things that should have on that chromosome. chem lab KMn04 with iron(II) ammonium sulphate hexahydrate-redox titration. 1-what is the purpose of adding H2SO4 to iron sample before the titration? 2-why is there no indicator used to determine the end point for this titration? 3-suggest another possible oxidising agent and reducing... the common metallic elements have been invetigated for their relative reactivities and have been arranged into what is called electromotive series. Explain the series in terms of choosing the suitable electrode in setting up an electrochemical series. chem lab does the voltaic cell(in exp) must be the same with the standard cell that we calculated based on the formula? is the answer 1392 s? The reaction AB(aq)->A(g)+ B(g)is second order in AB and has a rate constant of 0.0282 M^{-1} s^{-1} at 25 Celsius. A reaction vessel initially contains 250 mL of 0.105 M AB which is allowed to react to form the gaseous product. The product is collected over water at 25 Cel... are blood pressures have differences between genders?explain your answer. -from what i know, women have lower risk to get high blood pressure but for the range of normal blood pressure, i think there is no differences. pls correct me if i wrong and if im right can i know the a... i'm sorry. the electric field are from A to B. dear drwls,you didnt answer my question. i know about the concept but why the v is negative? or how can it be proven mathematically by the formula?. potential difference=40v b2n plate A & B. find work done if q=+3C from a) B to A b) A to B *i already know the ans but i need the explanation for the formula. -this is my work a)w=-qv b)-120J =-3 (-40) =120 J what i want to ask is: 1-why must we add - sign for v is not that if ... if a & b are perpendicular vectors, show that a)(a+2b).(a-b)=|a|^2-2|b|^2 b)(a+b).(a-b)=|a|^2+|b|^2 *how to show it?just multiply it?if yes where does||come from? does||show that it should be a Find the interior angle formed by points A(1,5,3) , B(-2,3,7) , C(3,2,5). does this mean i need to find all the 3 angles of triangle? the position vectors of the point A,B & C respect to O are a=3i-2j+4k, b=i+3j-4k, c=-2i+5j-3k. Find a)the angle between a+b and a-b b)lamdar if vector a is perpendicular to b+lamdar k. prove cos(x+y) / cos(x-y) = (1-tan x tan y)/(1+tan x tan y) rate=k[A2][B2]^2. by how much would the rate change if: i) [A2] is doubled? ii) [B2] is halved? *does the rate change equal with k? the atom of element A has 2 electrons in its highest energy level N. a)what is the special name given to elements in the same gp as A? i answered as alkali earth metal because the valence electron is 2. am i right? 150 cm3 0.24 M naoh solution react with excess al(no3)3, calculate the mass of saodium aluminate produced. al(no3)3 + 4naoh -> naalo2 + 3 nano3 +2h20 i got 0.738g.. is my answer correct? if im wrong could u show me the calculation? biology lab im sorry but b4 this i already seacrh at the internet but i didnt found what i want. so i hope there will be another answer coming on. biology lab how to know from the experiment if the oxygen produced as a result of carbon fixation or electron transport? my experiment result show that average bubble/min for tap water is 4.7 while NaHCO3 is biology lab -rate of photosyhthesis in presence of light intensity & CO2. ~A : sodium bicarbonate in darkness pH b4 xperiment-9 " @ 20 min- 8 " 60 min- 8 ~B: sodium bicarbonate in room light pH b4 xperiment-9 " @ 20 min-9 " 60 min-8 ~C: sodium bicarbonate 20 cm from li... what is the original color of Phenolphthalein solution? pink or colourless? can any1 plz answer this question. A piece of wire of length 240 cm is bent into the shape of trapezium where the eqtn is = 13x+13x+y+y+24x. Find the value of x and y for which A is maximum, hence find the maximum area. a. express y in term of x b. show that the area A cm^2 enclosed by the wire is given by 2880... the volume of an cylindrical can with a radius r cm and height h cm is 128000 c^3, show that the surface area of the can is A=2(22/7)r^2 + 246000/r. Find the value for r to minimize the surface area. *i know what the quest. ask but i do not know how to apply it. a liquid form of penicillin is sold in bulk at a price of $ 200 per unit.If the total cost for x unit is C(x)=500000+80x+0.003X^2 and the production capacity is at most 30000 at a specified timed, how many of the penicillin must be manufactured and sold in that time to maximiz... Chemistry Lab if i get 4 % of yield in the experiment of chemical reactions of copper and percent yield. Is there a possibility the experiment wa success? ~in my opinion it was success because ou theoritical yield is below 100 % ~i just need some opinion.
{"url":"http://www.jiskha.com/members/profile/posts.cgi?name=yuni","timestamp":"2014-04-19T07:17:09Z","content_type":null,"content_length":"12164","record_id":"<urn:uuid:9787fe04-cd90-4911-8a9c-512819f41ad8>","cc-path":"CC-MAIN-2014-15/segments/1398223206672.15/warc/CC-MAIN-20140423032006-00396-ip-10-147-4-33.ec2.internal.warc.gz"}
The origin of sets? up vote 33 down vote favorite The history of set theory from Cantor to modern times is well documented. However, the origin of the idea of sets is not so clear. A few years ago, I taught a set theory course and I did some digging to find the earliest definition of sets. My notes are a little scattered but it appears that the one of the earliest definition that I found was due to Bolzano in Paradoxien des Unendlichen: There are wholes which, although they contain the same parts $A$, $B$, $C$, $D$,..., nevertheless present themselves as different when seen from our point of view or conception (this kind of difference we call 'essential'), e.g. a complete and a broken glass viewed as a drinking vessel. [...] A whole whose basic conception renders the arrangement of its parts a matter of indifference (and whose rearrangement therefore changes nothing essential from our point of view, if only that changes), I call a set. (The original German text is here, §4; I don't remember where I got the translation.) According to my notes, Bolzano wrote this in 1847. Since Boole's An Investigation of the Laws of Thought was published a just few years later in 1854, it seems that the idea of sets was already well known at that time. What was the earliest definition of 'set' in the mathematical literature? Historical queries of this type are hopelessly vague, so let me give some more specific criteria for what I am looking for. The object doesn't have to be called "set" but it must be an independent container object where the arrangement of the parts doesn't matter. • It should also be fairly general in what the set can contain. A general set of points in the plane is probably not enough in terms of generality but if the same concept is also used for collections of lines then we're talking. • It shouldn't have implicit or explicit structure. Line segments, intervals, planes and such are too structured even if the arrangement of the parts technically doesn't matter. • It should be an independent object intended to be used and manipulated for its own sake. For example, the first time a collection of points in general position was considered in the literature doesn't make the cut since there was no intent to manipulate the collection for its own sake. • It should be a definition. Formal definitions as we see them today are a relatively new phenomenon but it should be fairly clear that this is the intent, such as when Bolzano says "I call a set" at the end of the quote above. • It should be mathematical concept. The strict divisions we have today are very recent but it should be clear that the sets in question are intended for mathematical purposes. Paradoxien des Unendlichen is perhaps more of a philosophical treatise than a mathematical one, but it is clear that Bolzano is considering sets in a mathematical way. That said, any input that doesn't quite meet all of these criteria is welcome since the ultimate goal is to understand how the modern idea of set came to be. ho.history-overview set-theory lo.logic This is a borderline case for CW. I don't expect a definite answer, but it is not intended as a collection of resources to be sorted and the idea of a useful answer makes sense. What do people think? – François G. Dorais♦ Dec 22 '12 at 21:35 5 I don't think it should be CW. If only because a good answer would definitely be worthy of every vote and vote given to it. It's a very interesting question too! – Asaf Karagila Dec 22 '12 at I've read that Dedekind also made important early contributions. I don't know the chronology of the contributions themselves, but Dedekind was 14 years older than Cantor and 50 years younger than Bolzano. Probably you know much more about this than I do. – Tom Leinster Dec 22 '12 at 21:52 3 Find some examples by looking for "set" at MathWord jeff560.tripod.com/mathword.html – Gerald Edgar Dec 22 '12 at 22:16 Thanks Gerald! There is only one example there that significantly predates Bolzano: "In 1796, William Frend used the phrase “set of numbers” in The Principles of Algebra. This use of the word was found by Stanley Burris, who wites, 'This was certainly not an influential book since Frend did not accept negative numbers, but it suggests the use of the word set in math texts may have been common.'" – François G. Dorais♦ Dec 22 '12 at 22:26 show 4 more comments protected by Andres Caicedo Oct 5 '13 at 23:00 Thank you for your interest in this question. Because it has attracted low-quality answers, posting an answer now requires 10 reputation on this site. Would you like to answer one of these unanswered questions instead? 4 Answers active oldest votes This isn't meant entirely seriously as an answer to your question, but: on page 344 of Practical Foundations of Mathematics, Paul Taylor writes: Adam of Balsham (1132) observed that the difference between finite and infinite sets is that the latter admit proper self-inclusions, such as $n \mapsto 2n$. up vote 16 down vote Obviously this is staggeringly early and it would be astonishing if this dude Adam had anything like our present-day conception of set. Paul doesn't appear to give a reference, but perhaps he (Paul, not Adam) will see this and tell us more. 1 Impressive. Until now I thought that most of the achievement of the middle ages mathematics were to preserve copies of Greek literature and to figure out the harmonic series diverges. But now it seems that they invented set theory too! – Asaf Karagila Dec 23 '12 at 0:17 8 This is often referred to as Galileo's paradox. Galilei discusses it in some detail, but it goes back further. The following web site gives references going back as far as Plutarch: earlham.edu/~peters/writing/infinity.htm – Michael Renardy Dec 23 '12 at 1:14 8 Adam of Balsham is mentioned in Styazhkin's "History of Mathematical Logic from Leibniz to Peano". I don't have access to that book now, or any other notes on this topic. Historical references like this in my book were meant simply to point out that many important ideas are much older than the followers of Cantor would have you believe. I would not be surprised to be told that some Arabic mathematician or philosopher had made this comment much earlier. Or even Archimedes. – Paul Taylor Dec 23 '12 at 8:28 6 Paul, using the sentence "much older than the followers of Cantor would you have believe." makes it sound like you claim there is some conspiracy to make Cantor a great mathematician, even if he wasn't. And that there is some shadow society which works hard to keep in the shadow all those who discussed sets before Cantorian times... I just want to say that if there is such secret society, and they are reading this, I would like to be a member! :-) – Asaf Karagila Dec 23 '12 at 11:14 Adam of Balsham was known as Parvipontanus because he taught near the Petit Pont in Paris. He wrote a book called Ars Disserendi. Regarding Cantor, I am not sure whether he is 2 officially regarded as a Great Mathematician, say alongside Dedekind, but there is most certainly a conspiracy regarding his ideas: just try getting a job in a Pure Mathematics department if you are an atheist regarding Set Theory. I am not sure if Asaf is declaring himself an atheist too, but we are not alone. – Paul Taylor Dec 23 '12 at 21:21 show 3 more comments Euler in Lettres à une princesse d'Allemagne sur divers sujets de physique et de philosophie, 17-24 feb 1761, writes about objects he calls spaces (my emphasis): As a general notion encompasses an infinity of individual objects, one regards it as a space within which all these individuals are enclosed: thus, for the notion of man, one makes a space (fig. 39) in which one conceives that all men are comprised. For the notion of mortal, one also makes a space (fig. 40), where one conceives that everything mortal is comprised. Then, when I say that all men are mortal, that comes down to the former figure being contained in the latter. up vote 13 (...) down vote These round figures or rather these spaces (for it doesn't matter what shape we give them) are very well-suited to facilitating our reflections (...) etc., and illustrates this with what we would call ensemblist diagrams (fig. 39 to 89), famously reproduced on Swiss banknotes. The applications he gives here are to everyday logic, so perhaps less mathematical than intended by the question. (I don't know if he ever wrote again on the subject.) Merci beaucoup! Euler doesn't quite capture the lack of structure as well as Bolzano. Also, as you observe, it's not clearly a mathematical object. Nevertheless, this is an important example! – François G. Dorais♦ Dec 23 '12 at 2:35 3 And those figures rondes are Venn diagrams a century before Venn :-) – Mariano Suárez-Alvarez♦ Dec 23 '12 at 2:47 add comment "A new and compendious system of practical arithmetick", by William Pardon in 1738, contains the passage: Here if the first Series or Set of Numbers increases by 1, and the second decreases by 1; the third increases by 2, ... up vote 7 down vote The emphasis is in the original, so that it is not set that is being described. So in 1738, it's meaning was already taken for granted. 1 Published in 1738 according to WorldCat - worldcat.org/title/… – François G. Dorais♦ Dec 22 '12 at 22:54 1 I don't know if emphasis was used in that way in 1738. – François G. Dorais♦ Dec 27 '12 at 4:19 add comment From the Wikipedia article on Euler diagrams: "The first use of "Eulerian circles" is commonly attributed to Swiss mathematician Leonhard Euler (1707–1783)." up vote 1 down vote "Venn diagrams are a more restrictive form of Euler diagrams. A Venn diagram must contain all $2^n$ logically possible zones of overlap between its $n$ curves, representing all combinations of inclusion/exclusion of its constituent sets, but in an Euler diagram some zones might be missing if they are empty sets." Possibly an instance of Baez's theorem, where 'Venn diagram' I've seen applied to what are apparently called Eulerian diagrams. – David Roberts Dec 23 '12 at 9:33 I'm afraid this meets none of the criteria in the question. – François G. Dorais♦ Dec 23 '12 at 20:22 Was this intended as a comment to Mariano Suarez-Alvarez's comment to François Ziegler's answer? – François G. Dorais♦ Dec 23 '12 at 23:21 Sort of. I wanted it to be a standalone answer so it would include the image, but the image hasn't appeared. I was not aware until recently that there was a more specific definition of Venn diagrams. I had never heard of Euler circles until I read Wikipedia. – Stxmqs Dec 24 '12 at 7:34 Looking back at my old school textbook I see that all the examples given are indeed proper Venn diagrams. However the book doesn't give any actual definition so one could easily come away thinking that Euler diagrams are also Venn diagrams. – Stxmqs Dec 24 '12 at 7:48 add comment Not the answer you're looking for? Browse other questions tagged ho.history-overview set-theory lo.logic or ask your own question.
{"url":"http://mathoverflow.net/questions/117051/the-origin-of-sets/117052","timestamp":"2014-04-21T09:49:15Z","content_type":null,"content_length":"87556","record_id":"<urn:uuid:73232cff-5f08-44a6-b717-5f4df806816a>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00150-ip-10-147-4-33.ec2.internal.warc.gz"}
Fenerbahce v Viktoria Plzen Sukru Saracoglu Stadium , Turkey Referee: Manuel Grafe * Local time based on your geographic location. • Volkan Demirel takes a direct freekick with his right foot from his own half. Outcome: open play • Vladimir Darida commits a foul on Volkan Demirel resulting on a free kick for Fenerbahce • Matus Kozacik makes a very good save (Catch) • Caner Erkin hits a good right footed shot. Outcome: save • David Limbersky takes a direct freekick with his right foot from his own half. Outcome: open play • Caner Erkin commits a foul on David Limbersky resulting on a free kick for Viktoria Plzen • Reto Ziegler takes the corner kick from the right byline with his right foot and hits an inswinger to the centre, resulting in: open play • Fenerbahce makes a sub: Egeman Korkmaz enters for Baroni Cristian. Reason: Tactical • Matus Kozacik makes a very good save (Round Post) • Dirk Kuyt hits a good right footed shot. Outcome: save • Throw-in: Radim Reznik takes it (Attacking) • Vladimir Darida takes the corner kick from the left byline with his right foot and hits an inswinger to the centre, resulting in: open play • Matus Kozacik takes an indirect freekick with his right foot from his own half. Outcome: open play • Offside called on Dirk Kuyt • Marian Cisovsky blocks the shot • Mehmet Topuz hits a good right footed shot. Outcome: blocked • Baroni Cristian takes a direct freekick with his right foot from the right channel. Outcome: pass • David Limbersky commits a foul on Caner Erkin resulting on a free kick for Fenerbahce • Mehmet Topuz takes a direct freekick with his right foot from the right wing. Outcome: pass • Viktoria Plzen makes a sub: Martin Fillo enters for Frantisek Rajtoral. Reason: Tactical • Vaclav Prochazka is awarded a yellow card. Reason: unsporting behaviour • Handball called on Vaclav Prochazka • Throw-in: Gokhan Gonul takes it (Defending) • Daniel Kolár is awarded a yellow card. Reason: unsporting behaviour • Volkan Demirel takes a long goal kick • Fenerbahce makes a sub: Mehmet Topuz enters for Moussa Sow. Reason: Tactical • Michal Duris hits a good header, but it is off target. Outcome: over bar • David Limbersky crosses the ball. Outcome: shot • Volkan Demirel makes a very good save (Punch) • Vladimir Darida crosses the ball. Outcome: save • Vladimir Darida takes a direct freekick with his right foot from the left wing. Outcome: cross • Gokhan Gonul is awarded a yellow card. Reason: unsporting behaviour • Gokhan Gonul commits a foul on David Limbersky resulting on a free kick for Viktoria Plzen • Vaclav Prochazka takes an indirect freekick with his right foot from his own half. Outcome: pass • Offside called on Moussa Sow • Vaclav Prochazka takes a direct freekick with his right foot from his own half. Outcome: pass • Moussa Sow commits a foul on Jan Kovarik resulting on a free kick for Viktoria Plzen • Throw-in: Reto Ziegler takes it (Attacking) • Volkan Demirel takes a long goal kick • Frantisek Rajtoral hits a good right footed shot, but it is off target. Outcome: miss left • Reto Ziegler takes a direct freekick with his left foot from his own half. Outcome: open play • Michal Duris commits a foul on Reto Ziegler resulting on a free kick for Fenerbahce • Selcuk Sahin takes a direct freekick with his right foot from his own half. Outcome: pass • Daniel Kolár commits a foul on Salih Ucan resulting on a free kick for Fenerbahce • Throw-in: Vladimir Darida takes it (Attacking) • Throw-in: Reto Ziegler takes it (Defending) • Throw-in: Reto Ziegler takes it (Defending) • Viktoria Plzen makes a sub: Michal Duris enters for Pavel Horvath. Reason: Tactical • David Limbersky hits a good right footed shot, but it is off target. Outcome: hit post • Volkan Demirel makes a very good save (Punch) • Pavel Horvath takes the corner kick from the right byline with his left foot and hits an inswinger to the centre, resulting in: save • Throw-in: Gokhan Gonul takes it (Defending) • Matus Kozacik takes an indirect freekick with his right foot from his own half. Outcome: open play • Offside called on Moussa Sow • Reto Ziegler takes a direct freekick with his right foot from his own half. Outcome: open play • Frantisek Rajtoral commits a foul on Reto Ziegler resulting on a free kick for Fenerbahce • David Limbersky takes a direct freekick with his right foot from his own half. Outcome: pass • Viktoria Plzen makes a sub: Stanislav Tecl enters for Marek Bakos. Reason: Tactical • Dirk Kuyt is awarded a yellow card. Reason: unsporting behaviour • Dirk Kuyt commits a foul on Pavel Horvath resulting on a free kick for Viktoria Plzen • Throw-in: Gokhan Gonul takes it (Defending) • Matus Kozacik takes an indirect freekick with his right foot from his own half. Outcome: pass • Offside called on Caner Erkin • Bekir Irtegun takes a direct freekick with his right foot from his own half. Outcome: pass • Marek Bakos is awarded a yellow card. Reason: unsporting behaviour • Marek Bakos commits a foul on Gokhan Gonul resulting on a free kick for Fenerbahce • Vladimir Darida hits a good right footed shot. Outcome: goal • Matus Kozacik takes a short goal kick • Baroni Cristian hits a good right footed shot, but it is off target. Outcome: miss left • Matus Kozacik takes a short goal kick • Throw-in: Radim Reznik takes it (Attacking) • Bekir Irtegun takes a direct freekick with his right foot from his own half. Outcome: pass • Handball called on Daniel Kolár • Matus Kozacik makes a good save (Catch) • Baroni Cristian hits a good right footed shot. Outcome: save • Matus Kozacik takes a long goal kick • Baroni Cristian hits a right footed shot, but it is off target. Outcome: over bar • Volkan Demirel takes a long goal kick • Throw-in: Radim Reznik takes it (Attacking) • Volkan Demirel takes a long goal kick • Pavel Horvath hits a good left footed shot, but it is off target. Outcome: over bar • Joseph Yobo takes a direct freekick with his left foot from his own half. Outcome: pass • Marek Bakos commits a foul on Joseph Yobo resulting on a free kick for Fenerbahce • Volkan Demirel makes a good save (Catch) • Vladimir Darida hits a good left footed shot. Outcome: save • Matus Kozacik takes an indirect freekick with his right foot from his own half. Outcome: open play • Offside called on Moussa Sow • Vladimir Darida takes the corner kick from the left byline with his right foot and passes it to a teammate resulting in: open play • Gokhan Gonul clears the ball from danger. • Frantisek Rajtoral hits a good right footed shot, but it is off target. Outcome: clearance • Volkan Demirel takes a long goal kick • Matus Kozacik takes a long goal kick • Bekir Irtegun hits a good left footed shot, but it is off target. Outcome: miss left • Vaclav Prochazka clears the ball from danger. • Caner Erkin takes the corner kick from the right byline with his left foot and hits an inswinger to the centre, resulting in: clearance • Matus Kozacik takes a long goal kick • Dirk Kuyt hits a good right footed shot, but it is off target. Outcome: miss right • Pavel Horvath takes a direct freekick with his left foot from the left channel. Outcome: open play • Baroni Cristian commits a foul on Pavel Horvath resulting on a free kick for Viktoria Plzen • Throw-in: Gokhan Gonul takes it (Defending) • That last goal was assisted by Baroni Cristian (Pass from Right Penalty Area) • Salih Ucan hits a good right footed shot. Outcome: goal • Throw-in: Reto Ziegler takes it (Attacking) • Vaclav Prochazka clears the ball from danger. • Baroni Cristian crosses the ball. Outcome: clearance • Baroni Cristian takes a direct freekick with his right foot from the right channel. Outcome: cross • Marian Cisovsky commits a foul on Moussa Sow resulting on a free kick for Fenerbahce • Matus Kozacik makes a very good save (Catch) • Baroni Cristian takes the corner kick from the left byline with his left foot and hits an outswinger to the centre, resulting in: save • Matus Kozacik makes a very good save (Round Post) • Moussa Sow hits a good right footed shot. Outcome: save • Matus Kozacik takes an indirect freekick with his right foot from his own half. Outcome: pass • Offside called on Moussa Sow • Throw-in: Gokhan Gonul takes it (Attacking) • Throw-in: Gokhan Gonul takes it (Defending) • Pavel Horvath takes the corner kick from the right byline with his left foot and hits an inswinger to the centre, resulting in: open play • Reto Ziegler clears the ball from danger. • Daniel Kolár crosses the ball. Outcome: clearance • Throw-in: Marian Cisovsky takes it (Defending) • Volkan Demirel takes a long goal kick • Throw-in: Gokhan Gonul takes it (Defending) • Throw-in: Vladimir Darida takes it (Attacking) • Fenerbahce makes a sub: Salih Ucan enters for Mehmet Topal. Reason: Injury • Throw-in: Gokhan Gonul takes it (Defending) • Throw-in: David Limbersky takes it (Attacking) • Throw-in: Gokhan Gonul takes it (Defending) • Volkan Demirel takes a long goal kick • Daniel Kolár hits a good header, but it is off target. Outcome: miss left • Radim Reznik crosses the ball. Outcome: shot • Throw-in: David Limbersky takes it (Attacking) • Throw-in: Radim Reznik takes it (Attacking) • Volkan Demirel takes a long goal kick • Throw-in: Frantisek Rajtoral takes it (Attacking) • Matus Kozacik takes a short goal kick • Throw-in: Gokhan Gonul takes it (Defending) • Throw-in: Gokhan Gonul takes it (Attacking) • Matus Kozacik takes a long goal kick • Caner Erkin hits a good left footed shot, but it is off target. Outcome: miss right • Throw-in: Reto Ziegler takes it (Attacking) • Throw-in: Radim Reznik takes it (Defending) • Throw-in: David Limbersky takes it (Attacking) • Volkan Demirel takes a long goal kick • Vladimir Darida hits a good left footed shot, but it is off target. Outcome: miss left • Matus Kozacik makes a good save (Catch) • Gokhan Gonul crosses the ball. Outcome: save • Throw-in: Radim Reznik takes it (Attacking) • Volkan Demirel takes an indirect freekick with his right foot from his own half. Outcome: open play • Offside called on Jan Kovarik • Matus Kozacik takes a short goal kick • Moussa Sow hits a good left footed shot, but it is off target. Outcome: miss left • Caner Erkin hits a left footed shot that gets deflected, but it is off target. Outcome: miss left • Volkan Demirel takes a long goal kick • Frantisek Rajtoral hits a good right footed shot, but it is off target. Outcome: miss left • Selcuk Sahin clears the ball from danger. • Pavel Horvath crosses the ball. Outcome: clearance • Pavel Horvath takes a direct freekick with his left foot from the right wing. Outcome: cross • Caner Erkin commits a foul on Radim Reznik resulting on a free kick for Viktoria Plzen • Throw-in: Radim Reznik takes it (Attacking) • Matus Kozacik makes a good save (Catch) • Caner Erkin hits a good left footed shot. Outcome: save • Volkan Demirel takes a long goal kick • Joseph Yobo clears the ball from danger. • Pavel Horvath takes the corner kick from the left byline with his right foot and hits an inswinger to the centre, resulting in: clearance • Gokhan Gonul clears the ball from danger. • Vladimir Darida crosses the ball. Outcome: clearance • Throw-in: Jan Kovarik takes it (Attacking) • Matus Kozacik takes a short goal kick • Caner Erkin blocks the shot • David Limbersky hits a right footed shot that gets deflected, but it is off target. Outcome: blocked • Pavel Horvath takes the corner kick from the right byline with his right foot and passes it to a teammate resulting in: open play • Caner Erkin blocks the cross • Frantisek Rajtoral crosses the ball. Outcome: blocked • Matus Kozacik takes a long goal kick • Baroni Cristian hits a good right footed shot, but it is off target. Outcome: miss left • Gokhan Gonul takes a direct freekick with his right foot from the right wing. Outcome: pass • Pavel Horvath commits a foul on Baroni Cristian resulting on a free kick for Fenerbahce • Throw-in: David Limbersky takes it (Attacking) • Throw-in: David Limbersky takes it (Defending) • Volkan Demirel takes a long goal kick • Daniel Kolár hits a good header, but it is off target. Outcome: over bar • Vladimir Darida takes the corner kick from the left byline with his right foot and hits an inswinger to the centre, resulting in: shot • Mehmet Topal clears the ball from danger. • Frantisek Rajtoral crosses the ball. Outcome: clearance • Throw-in: David Limbersky takes it (Defending) • Bekir Irtegun takes a direct freekick with his right foot from his own half. Outcome: open play • Daniel Kolár commits a foul on Mehmet Topal resulting on a free kick for Fenerbahce • Volkan Demirel makes a good save (Catch) • Marek Bakos hits a good header. Outcome: save • Pavel Horvath crosses the ball. Outcome: shot • Pavel Horvath takes a direct freekick with his left foot from the right wing. Outcome: cross • Mehmet Topal commits a foul on Radim Reznik resulting on a free kick for Viktoria Plzen • Vaclav Prochazka clears the ball from danger. • Baroni Cristian takes the corner kick from the left byline with his right foot and hits an inswinger to the centre, resulting in: clearance • Reto Ziegler hits a good left footed shot, but it is off target. Outcome: miss left • Dirk Kuyt crosses the ball. Outcome: shot • Throw-in: Gokhan Gonul takes it (Attacking) • Shots (on goal) • tackles • Fouls • possession • Fenerbahce • - • Viktoria Plzen • - Match Stats • Fenerbahce • Viktoria Plzen 15(6) Shots (on goal) 12(2) 7 Fouls 10 4 Corner kicks 7 6 Offsides 1 46% Time of Possession 54% 2 Yellow Cards 3 0 Red Cards 0 4 Saves 7
{"url":"http://espnfc.com/en/gamecast/362175/gamecast.html?soccernet=true&cc=","timestamp":"2014-04-17T04:20:49Z","content_type":null,"content_length":"143305","record_id":"<urn:uuid:fed21672-892b-4299-a663-7ebd75a83ed0>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00218-ip-10-147-4-33.ec2.internal.warc.gz"}
roblems with stepwise regression What are some of the problems with stepwise regression? Title Problems with stepwise regression Author Bill Sribney, StataCorp Date May 1998; minor revisions July 2011 Note: All of this material is quoted from emails that originally appeared on STAT-L/SCI.STAT.CONSULT in 1996. Thanks go to Richard Ulrich, who originally compiled these comments, and to Frank Ivis, who did minor editing and posted them to Statalist. Frank Harrell’s comments: Here are some of the problems with stepwise variable selection. 1. It yields R-squared values that are badly biased to be high. 2. The F and chi-squared tests quoted next to each variable on the printout do not have the claimed distribution. 3. The method yields confidence intervals for effects and predicted values that are falsely narrow; see Altman and Andersen (1989). 4. It yields p-values that do not have the proper meaning, and the proper correction for them is a difficult problem. 5. It gives biased regression coefficients that need shrinkage (the coefficients for remaining variables are too large; see Tibshirani [1996]). 6. It has severe problems in the presence of collinearity. 7. It is based on methods (e.g., F tests for nested models) that were intended to be used to test prespecified hypotheses. 8. Increasing the sample size does not help very much; see Derksen and Keselman (1992). 9. It allows us to not think about the problem. 10. It uses a lot of paper. “All possible subsets” regression solves none of these problems. “The degree of correlation between the predictor variables affected the frequency with which authentic predictor variables found their way into the final model.” “The number of candidate predictor variables affected the number of noise variables that gained entry to the model.” “The size of the sample was of little practical importance in determining the number of authentic variables contained in the final model.” “The population multiple coefficient of determination could be faithfully estimated by adopting a statistic that is adjusted by the total number of candidate predictor variables rather than the number of variables in the final model.” Altman, D. G. and P. K. Andersen. 1989. Bootstrap investigation of the stability of a Cox regression model. Statistics in Medicine 8: 771–783. Copas, J. B. 1983. Regression, prediction and shrinkage (with discussion). Journal of the Royal Statistical Society, Series B 45: 311–354. Shows why the number of CANDIDATE variables and not the number in the final model is the number of degrees of freedom to consider. Derksen, S. and H. J. Keselman. 1992. Backward, forward and stepwise automated subset selection algorithms: frequency of obtaining authentic and noise variables. British Journal of Mathematical and Statistical Psychology 45: 265–282. Hurvich, C. M. and C. L. Tsai. 1990. The impact of model selection on inference in linear regression. American Statistician 44: 214–217. Mantel, Nathan. 1970. Why stepdown procedures in variable selection. Technometrics 12: 621–625. Roecker, Ellen B. 1991. Prediction error and its estimation for subset—selected models. Technometrics 33: 459–468. Shows that all-possible regression can yield models that are too small. Tibshirani, Robert. 1996. Regression shrinkage and selection via the lasso. Journal of the Royal Statistical Society, Series B 58: 267–288. Ronan Conroy’s comments: I am struck by the fact that Judd and McClelland in their excellent book Data Analysis: A Model Comparison Approach (Harcourt Brace Jovanovich, ISBN 0-15-516765-0) devote less than two pages to stepwise methods. What they do say, however, is worth repeating: 1. Stepwise methods will not necessarily produce the best model if there are redundant predictors (common problem). 2. All-possible-subset methods produce the best model for each possible number of terms, but larger models need not necessarily be subsets of smaller ones, causing serious conceptual problems about the underlying logic of the investigation. 3. Models identified by stepwise methods have an inflated risk of capitalizing on chance features of the data. They often fail when applied to new datasets. They are rarely tested in this way. 4. Since the interpretation of coefficients in a model depends on the other terms included, “it seems unwise,” to quote J and McC, “to let an automatic algorithm determine the questions we do and do not ask about our data”. 5. I quote this last point directly, as it is sane and succinct: “It is our experience and strong belief that better models and a better understanding of one’s data result from focussed data analysis, guided by substantive theory,” (p. 204). They end with a quote from Henderson and Velleman's paper “Building multiple regression models interactively” (1981, Biometrics 37: 391–411): “The data analyst knows more than the computer,” and they add “failure to use that knowledge produces inadequate data analysis”. Personally, I would no more let an automatic routine select my model than I would let some best-fit procedure pack my suitcase.
{"url":"http://www.stata.com/support/faqs/statistics/stepwise-regression-problems/","timestamp":"2014-04-18T08:24:51Z","content_type":null,"content_length":"29790","record_id":"<urn:uuid:fb500e23-d78c-4a15-b8e0-039af1fc7cfc>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00403-ip-10-147-4-33.ec2.internal.warc.gz"}
MathGroup Archive: January 2010 [00774] [Date Index] [Thread Index] [Author Index] Re: Journals dying?, apparently rather slowly (was , • To: mathgroup at smc.vnet.net • Subject: [mg106836] Re: [mg106791] Journals dying?, apparently rather slowly (was , • From: Richard Fateman <fateman at cs.berkeley.edu> • Date: Sun, 24 Jan 2010 05:44:32 -0500 (EST) • References: <27994965.1264251543203.JavaMail.root@n11> <006e01ca9c5f$6e81d8b0$4b858a10$@net> David Park wrote: > Two topics: technical publishing in general and technical publishing with > Mathematica. > All those journals are known as vanity publishing, Richard. You pay the > publisher, they don't pay you. Certainly not all of them. If they were journals that merely published papers from authors who paid them, then my university library would not (knowingly) subscribe. Some of them come from universities, or learned societies, or professional societies. > In the technical world it is hugely > expensive, interminably slow, and grossly inefficient at making the useful > communication links. I agree that it is inefficient. So is internet search, which sometimes uncovers lots of garbage. > More and more, work is published on the web. It is the better archive. There are many problems associated with work published (only) on the web, including identifying an archival copy. If I keep my own copy of my own paper online, I can (and sometimes do) update it, so that someone who follows a citation to it may see a different version from the one cited. And sometimes the link is broken. Now there are solutions to these and other problems, but they are not universally > Everyone has access to it. Only if they have internet access and the copy is free of charge. > An adequate descriptive title and good search > engine will find any paper. There is at least one very important paper that > is only available on the web. I don't know what that might be, but I could download it and print it, and then it would be available on paper (in my office.) > Any useful paper is probably useful to a > rather small selected group of people. And some papers published today are useful to no one except possibly the author who is using it to get tenure somewhere. > Active, dynamic Mathematica notebooks are a better medium for communicating > technical information than static papers. The potential for the medium is not the issue. There are many excellent papers that are "static". They are excellent because of their content. Just as there are excellent movies that were filmed without color. Of course, someone might argue that in the future, all movies will be in 3-D, because that medium has more avenues for expression. But today there are still people using b&w film. And non-3D. ... I expect that in the future there will continue to be (a small percentage) of excellent static papers. I expect that the percentage of excellent ".nb" papers will also be small. Perhaps smaller. If I had an excellent idea and I wished to publish it, I would certainly shy away from a presentation that required the reader to own a Mathematica license, and I expect that would be the case for very many other people. Does the presentation require dynamic notebooks, or is it just a gimmick ? Now there may be ideas that really can't be satisfactorily expressed within the constraints of a linear static presentation, in which case I might be tempted to try something more dynamic. I might write a Java applet then. Or attach a program in some other free program. It may be that the authors of the Digital Library of Mathematical Functions at NIST will eventually produce a kind of dynamic interactive document for their reference work, and one that other people could use (free, presumably). This might influence the publication of work in (especially) applied mathematics, special functions, and related areas. Eventually. I've been disappointed with the pace of that project, but it is not likely that they would just say, Oh, let's use Mathematica!. > Of course, Mathematica notebooks as papers are only useful if they can be > universally and freely evaluated. That's the sticking point. Not the only one, in my opinion.
{"url":"http://forums.wolfram.com/mathgroup/archive/2010/Jan/msg00774.html","timestamp":"2014-04-20T16:13:07Z","content_type":null,"content_length":"29156","record_id":"<urn:uuid:8e9fc6ce-ef2d-43bc-8b70-e3bb69c907c1>","cc-path":"CC-MAIN-2014-15/segments/1398223204388.12/warc/CC-MAIN-20140423032004-00420-ip-10-147-4-33.ec2.internal.warc.gz"}
Math Help March 26th 2008, 12:09 PM #1 Mar 2008 Use the fact that sqrt2 is irrational to prove that sqrt2 + sqrt3 is irrational ...im not sure if I'm posting this right...i dont know how to put the sqrt symbol up. but any help would be much appreciated!! Suppose that $\sqrt{2}+\sqrt{3}$ is rational, then there exist integers $a$ and $b$ such that: from this point it should be simple, so I will leave you to complete the proof by contradiction. March 27th 2008, 03:55 AM #2 Grand Panjandrum Nov 2005
{"url":"http://mathhelpforum.com/algebra/32150-proofs.html","timestamp":"2014-04-21T13:46:48Z","content_type":null,"content_length":"33491","record_id":"<urn:uuid:1b2fb4c0-7eab-40d4-946f-efe97289d0ed>","cc-path":"CC-MAIN-2014-15/segments/1397609539776.45/warc/CC-MAIN-20140416005219-00614-ip-10-147-4-33.ec2.internal.warc.gz"}
Aaron Turon and Mitchell Wand. A separation logic for refining concurrent objects. In Proceedings ACM Symposium on Programming Languages, pages 247--258, January 2011. Abstract: Fine-grained concurrent data structures are crucial for gaining performance from multiprocessing, but their design is a subtle art. Recent literature has made large strides in verifying these data structures, using either atomicity refinement or separation logic with rely-guarantee reasoning. In this paper we show how the ownership discipline of separation logic can be used to enable atomicity refinement, and we develop a new rely-guarantee method that is localized to the definition of a data structure. The result is a comprehensive and tidy account of concurrent data refinement that clarifies and consolidates the existing approaches. Aaron Turon and Mitchell Wand. A resource analysis of the pi-calculus. In Proceedings 27th Conf. on the Mathematical Foundations of Programming Semantics (MFPS), Electronic Notes in Computer Science, May 2011. to appear. Abstract: We give a new treatment of the pi-calculus based on the semantic theory of separation logic, continuing a research program begun by Hoare and O'Hearn. Using a novel resource model that distinguishes between public and private ownership, we refactor the operational semantics so that sending, receiving, and allocating are commands that influence owned resources. These ideas lead naturally to two denotational models: one for safety and one for liveness. Both models are fully abstract for the corresponding observables, but more importantly both are very simple. The close connections with the model theory of separation logic (in particular, with Brookes's action trace model) give rise to a logic of processes and resources. Paul Stansifer and Mitchell Wand. Parsing Reflective Grammars. In Workshop on Language Descriptions, Tools, and Applications (LDTA), pages 73--79, Saarbrucken, Germany, March 2011. Abstract: Existing technology can parse arbitrary context-free grammars, but only a single, static grammar per input. In order to support more powerful syntax-extension systems, we propose reflective grammars, which can modify their own syntax during parsing. We demonstrate and prove the correctness of an algorithm for parsing reflective grammars. The algorithm is based on Earley's algorithm, and we prove that it performs asymptotically no worse than Earley's algorithm on ordinary context-free grammars. Olin Shivers and Mitchell Wand. Bottom-up beta-reduction: uplinks and lambda-DAGs (journal version). to appear, July 2010. Abstract: If we represent a lambda-calculus term as a DAG rather than a tree, we can efficiently represent the sharing that arises from beta-reduction, thus avoiding combinatorial explosion in space. By adding uplinks from a child to its parents, we can efficiently implement beta-reduction in a bottom-up manner, thus avoiding combinatorial explosion in time required to search the term in a top-down fashion. We present an algorithm for performing beta-reduction on lambda terms represented as uplinked DAGs; describe its proof of correctness; discuss its relation to alternate techniques such as Lamping graphs, explicit-substitution calculi and director strings; and present some timings of an implementation. Besides being both fast and parsimonious of space, the algorithm is particularly suited to applications such as compilers, theorem provers, and type-manipulation systems that may need to examine terms inbetween reductions---ie, the ``readback'' problem for our representation is trivial. Like Lamping graphs, and unlike director strings or the suspension lambda-calculus, the algorithm functions by side-effecting the term containing the redex; the representation is not a ``persistent'' one. The algorithm additionally has the charm of being quite simple; a complete implementation of the data structure and algorithm is 180 lines of SML. Aaron Turon and Mitchell Wand. A Separation Logic for the Pi-calculus. unpublished, July 2009. Abstract: Reasoning about concurrent processes requires distinguishing communication from interference, and is especially difficult when the means of interaction change over time. We present a new logic for the pi-calculus that combines temporal and separation logic, and treats channels as resources that can be gained and lost by processes. The resource model provides a lightweight way to constrain interference. By interpreting process terms as formulas, our logic directly supports compositional reasoning. Christos Dimoulas and Mitchell Wand. The Higher-Order Aggregate Update Problem. In Neil~D. Jones and Markus Muller Olm, editors, Verification, Model Checking, and Abstract Interpretation, 10th International Conference, volume 5403 of Lecture Notes in Computer Science, pages 44--58, Berlin, Heidelberg, and New York, January 2009. Springer-Verlag. Abstract: We present a multi-pass interprocedural analysis and transformation for the functional aggregate update problem. Our solution handles untyped programs, including unrestricted closures and nested arrays. Also, it can handle programs that contain a mix of functional and destructive updates. Correctness of all the analyses and of the transformation itself is proved. Dimitris Vardoulakis and Mitchell Wand. A Compositional Trace Semantics for Orc. In Coordination Models and Languages: 10th International Conference, COORDINATION 2008, Oslo, Norway, June 4-6, 2008, volume 5052 of Lecture Notes in Computer Science, pages 331--346, Berlin, Heidelberg, and New York, 2008. Springer-Verlag. http://dx.doi.org/10.1007/978-3-540-68265-3\_21. Abstract: Orc (Kitchin, Cook, and Misra 2006) is a language for task orchestration. It has a small set of primitives, but is sufficient to express many useful programs succinctly. We show that the operational and denotational semantics given in Kitchin et al. (2006) do not agree, by giving counterexamples to their Theorems 2 and 3. We remedy this situation by providing new operational and denotational semantics with a better treatment of variable binding, and proving an adequacy theorem to relate them. Our semantics validates some useful equivalences between Orc processes; since the semantics is compositional these automatically become congruences. Last, we consider an alternative semantics that is insensitive to internal events. David Herman and Mitchell Wand. A Theory of Hygienic Macros. In Programming Languages and Systems 17th European Symposium on Programming, ESOP 2008, Held as Part of the Joint European Conferences on Theory and Practice of Software, ETAPS 2008, Budapest, Hungary, March 29-April 6, 2008., volume 4960 of Lecture Notes in Computer Science, pages 48--62, Berlin, Heidelberg, and New York, 2008. Springer-Verlag. http://dx.doi.org/10.1007/978-3-540-78739-6\_4. Abstract: Hygienic macro systems automatically rename variables to prevent unintentional variable capture-- in short, they "just work." But hygiene has never been presented in a formal way, as a specification rather than an algorithm. According to folklore, the definition of hygienic macro expansion hinges on the preservation of alpha-equivalence. But the only known definition of alpha-equivalence for Scheme depends on the results of macro expansion! We break this circularity by introducing binding specifications for macros, permitting a definition of alpha-equivalence independent of expansion. We define a semantics for a first-order subset of Scheme macros and prove hygiene as a consequence of confluence. Daniel~P. Friedman and Mitchell Wand. Essentials of Programming Languages. MIT Press, Cambridge, MA, third edition, 2008. Mitchell Wand. On the Correctness of the Krivine Machine. Higher-Order and Symbolic Computation, 20(3):231--236, September 2007. Abstract: We provide a short proof of the correctness of the Krivine machine by showing how it simulates weak head reduction. Vasileios Koutavas and Mitchell Wand. Reasoning About Class Behavior. In Informal Workshop Record of FOOL 2007, January 2007. Abstract: We present a sound and complete method for reasoning about contextual equivalence between different implementations of classes in an imperative subset of Java. To the extent of our knowledge this is the first such method for a language with unrestricted inheritance, where the context can arbitrarily extend classes to distinguish presumably equivalent implementations. Similar reasoning techniques for class-based languages don't consider inheritance at all, or forbid the context from extending related classes. Other techniques that do consider inheritance study whole-program equivalence. Our technique also handles public, protected, and private interfaces of classes, imperative fields, and invocations of callbacks. Using our technique we were able to prove equivalences in examples with higher-order behavior, where previous methods for functional calculi admit limitations. Furthermore we use our technique as a tool to understand the exact effect of inheritance on contextual equivalence. We do that by deriving conditions of equivalence for a language without inheritance and compare them to those we get after we extend the language with it. In a similar way we show that adding a cast operator is a conservative extension of the language. Vasileios Koutavas and Mitchell Wand. Small Bisimulations for Reasoning About Higher-Order Imperative Programs. In Proceedings 33rd ACM Symposium on Programming Languages, pages 141--152, January Abstract: We introduce a new notion of bisimulation for showing contextual equivalence of expressions in an untyped lambda-calculus with an explicit store, and in which all expressed values, including higher-order values, are storable. Our notion of bisimulation leads to smaller and more tractable relations than does the method of Sumii and Pierce [2004]. In particular, our method allows one to write down a bisimulation relation directly in cases where Sumii and Pierce 2004 requires an inductive specification, and where the method of Pitts and Stark [1998] is inapplicable. Our method can also express examples with higher-order functions, in contrast with the most widely known previous methods [Sumii-Pierce 2005, Pitts-Stark 1998, Benton-Leperchey 2005], which are limited in their ability to deal with examples containing higher-order functions. The bisimulation conditions are derived by manually extracting proof obligations from a hypothetical direct proof of contextual equivalence. Vasileios Koutavas and Mitchell Wand. Bisimulations for Untyped Imperative Objects. In Peter Sestoft, editor, Proc. ESOP 2006, volume 3924 of Lecture Notes in Computer Science, pages 146--161, Berlin, Heidelberg, and New York, March 2006. Springer-Verlag. Abstract: We present a sound and complete method for reasoning about contextual equivalence in the untyped, imperative object calculus of Abadi and Cardelli [1]. Our method is based on bisimulations, following the work of Sumii and Pierce [26, 27] and our own [15]. Using our method we were able to prove equivalence in more complex examples than the ones of Gordon, Hankin and Lassen [7] and Gordon and Rees [8]. We can also write bisimulations in closed form in cases where similar bisimulation methods [27] require an inductive specification. To derive our bisimulations we follow the same technique as we did in [15], thus indicating the extensibility of this method. Olin Shivers and Mitchell Wand. Bottom-up beta-reduction: uplinks and lambda-DAGs. In Mooly Sagiv, editor, Programming Languages and Systems: 14th European Symposium on Programming, ESOP 2005, Held as Part of the Joint European Conferences on Theory and Practice of Software, ETAPS 2005, Edinburgh, UK, April 4-8, 2005. Proceedings, volume 3444 of Lecture Notes in Computer Science, Berlin, Heidelberg, and New York, 2005. Springer-Verlag. expanded version available as BRICS Technical Report RS-04-38, Department of Computer Science, University of {\AA}rhus. Abstract: This paper presents a new representation of lambda-caluclus terms that allows for fast, space-efficient beta-reduction. This representation is surprisingly simple, and is based on two ideas: (1) representing terms as a directed acyclic graph, allowing sharing, and (2) using explicit backpointers from children to parents, allowing us to replace blind search with minimal, directed traversals. We discuss the formal correctness of the algorithm, compare it with alternate techniques, and present comparative timings of implementations. An appendix contains a complete annotated code listing of the core data structrues and algorithms, in Standard ML Philippe Meunier, Robby Findler, Paul~A. Steckler, and Mitchell Wand. Selectors Make Analyzing Case-Lambda Too Hard. Higher-Order and Symbolic Computation, 18(3-4):245--269, December 2005. Abstract: Flanagan's set-based analysis (SBA) uses selectors to choose data flowing through expressions. For example, the rng selector chooses the ranges of procedures flowing through an expression. The MrSpidey static debugger for PLT Scheme is based on Flanagan's formalism. In PLT Scheme, a case-lambda is a procedure with possibly several argument lists and clauses. When a case-lambda is applied at a particular call site, at most one clause is actually invoked, chosen by the number of actual arguments. Therefore, an analysis should propagate data only through appropriate case-lambda clauses. MrSpidey propagates data through all clauses of a case-lambda, lessening its usefulness as a static debugger. Wishing to retain Flanagan's framework, we extended it to better analyze case-lambda with rest parameters by annotating selectors with arity information. The resulting analysis gives strictly better results than MrSpidey. Unfortunately, the improved analysis is too expensive because of overheads imposed by the use of selectors. Nonetheless, a closure-analysis style SBA (CA-SBA) eliminates these overheads and can give comparable results within cubic time. Mitchell Wand and Dale Vaillancourt. Relating Models of Backtracking. In Proc. ACM SIGPLAN International Conference on Functional Programming, pages 54--65, 2004. Abstract: Past attempts to relate two well-known models of backtracking computatation have met with only limited success. We relate these two models using logical relations. We accommodate higher-order values and infinite computations. We also provide an operational semantics, and we prove it adequate for both models. Mitchell Wand, Gregor Kiczales, and Christopher Dutchyn. A Semantics for Advice and Dynamic Join Points in Aspect-Oriented Programming. TOPLAS, 26(5):890--910, September 2004. Earlier versions of this paper were presented at the 9th International Workshop on Foundations of Object-Oriented Languages, January 19, 2002, and at the Workshop on Foundations of Aspect-Oriented Languages (FOAL), April 22, 2002. Abstract: A characteristic of aspect-oriented programming, as embodied in AspectJ, is the use of _advice_ to incrementally modify the behavior of a program. An advice declaration specifies an action to be taken whenever some condition arises during the execution of the program. The condition is specified by a formula called a _pointcut designator_ or _pcd_. The events during execution at which advice may be triggered are called _join points_. In this model of aspect-oriented programming, join points are dynamic in that they refer to events during the execution of the program. We give a denotational semantics for a minilanguage that embodies the key features of dynamic join points, pointcut designators, and advice. Mitchell Wand. Understanding Aspects (Extended Abstract). In Proc. ACM SIGPLAN International Conference on Functional Programming, August 2003. Summary of invited talk to be given at ICFP 2003. Abstract: We report on our adventures in the AOP community, and suggest a narrative to explain the main ideas of aspect-oriented programming. We show how AOP as currently practiced invalidates conventional modular program reasoning, and discuss a reconceptualization of AOP that we hope will allow an eventual reconciliation between AOP and modular reasoning. Jens Palsberg and Mitchell Wand. CPS Transformation of Flow Information. Journal of Functional Programming, 13(5):905--923, September 2003. Abstract: We consider the question of how a continuation-passing-style (CPS) transformation changes the flow analysis of a program. We present an algorithm that takes the least solution to the flow constraints of a program and constructs in linear time the least solution to the flow constraints for the CPS-transformed program. Previous studies of this question used CPS transformations that had the effect of duplicating code, or of introducing flow sensitivity into the analysis. Our algorithm has the property that for a program point in the original program and the corresponding program point in the CPS-transformed program, the flow information is the same. By carefully avoiding both duplicated code and flow-sensitive analysis, we find that the most accurate analysis of the CPS-transformed program is neither better nor worse than the most accurate analysis of the original. Thus a compiler that needed flow information after CPS transformation could use the flow information from the original program to annotate some program points, and it could use our algorithm to find the the rest of the flow information quickly, rather than having to analyze the CPS-transformed program. Mitchell Wand and Galen~B. Williamson. A Modular, Extensible Proof Method for Small-step Flow Analyses. In Daniel~Le M{\'e}tayer, editor, Programming Languages and Systems, 11th European Symposium on Programming, ESOP 2002, held as Part of the Joint European Conference on Theory and Practice of Software, ETAPS 2002, Grenoble, France, April 8-12, 2002, Proceedings, volume 2305 of Lecture Notes in Computer Science, pages 213--227, Berlin, Heidelberg, and New York, 2002. Springer-Verlag. Abstract: We introduce a new proof technique for showing the correctness of 0CFA-like analyses with respect to small-step semantics. We illustrate the technique by proving the correctness of 0CFA for the pure lambda-calculus under arbitrary beta-reduction. This result was claimed by Palsberg in 1995; unfortunately, his proof was flawed. We provide a correct proof of this result, using a simpler and more general proof method. We illustrate the extensibility of the new method by showing the correctness of an analysis for the Abadi-Cardelli object calculus under small-step Mitchell Wand and Karl Lieberherr. Navigating through Object Graphs Using Local Meta-Information. unpublished report, June 2002. Abstract: Traversal through object graphs is needed for many programming tasks. We show how this task may be specified declaratively at a high level of abstraction, and we give a simple and intuitive semantics for such specifications. The algorithm is implemented in a Java library called DJ. Mitchell Wand and William~D. Clinger. Set Constraints for Destructive Array Update Optimization. Journal of Functional Programming, 11(3):319--346, May 2001. Abstract: Destructive array update optimization is critical for writing scientific codes in functional languages. We present set constraints for an interprocedural update optimization that runs in polynomial time. This is a multi-pass optimization, involving interprocedural flow analyses for aliasing and liveness. We characterize and prove the soundness of these analyses using small-step operational semantics. We also prove that any sound liveness analysis induces a correct program transformation. A preliminary version of this paper appeared in ICCL '98. Mitchell Wand. A Semantics for Advice and Dynamic Join Points in Aspect-Oriented Programming. In Proceedings of SAIG '01, Lecture Notes in Computer Science, Berlin, Heidelberg, and New York, September 2001. Springer-Verlag. invited talk. Philippe Meunier, Robby Findler, Paul~A. Steckler, and Mitchell Wand. Selectors Make Analyzing Case-Lambda Too Hard. In Proc. Scheme 2001 Workshop, Technical Report: Informatique, Signaux et Systemes de Sophia-Antipolis, I3S/RT-2001-12-FR, pages 54--64, September 2001. Abstract: Flanagan's set-based analysis (SBA) uses selectors to choose data flowing through expressions. For example, the rng selector chooses the ranges of procedures flowing through an expression. The MrSpidey static debugger for PLT Scheme is based on Flanagan's formalism. In PLT Scheme, a case-lambda is a procedure with possibly several argument lists and clauses. When a case-lambda is applied at a particular call site, at most one clause is actually invoked, chosen by the number of actual arguments. Therefore, an analysis should propagate data only through appropriate case-lambda clauses. MrSpidey propagates data through all clauses of a case-lambda, lessening its usefulness as a static debugger. Wishing to retain Flanagan's framework, we extended it to better analyze case-lambda with rest parameters by annotating selectors with arity information. The resulting analysis gives strictly better results than MrSpidey. Unfortunately, the improved analysis is too expensive because of overheads imposed by the use of selectors. Nonetheless, a closure-analysis style SBA (CA-SBA) eliminates these overheads and can give comparable results within cubic time. Daniel~P. Friedman, Mitchell Wand, and Christopher~T. Haynes. Essentials of Programming Languages. MIT Press, Cambridge, MA, second edition, 2001. Mitchell Wand and Igor Siveroni. Constraint Systems for Useless Variable Elimination. In Proceedings 26th ACM Symposium on Programming Languages, pages 291--302, 1999. Abstract: A useless variable is one whose value contributes nothing to the final outcome of a computation. Such variables are unlikely to occur in human-produced code, but may be introduced by various program transformations. We would like to eliminate useless parameters from procedures and eliminate the corresponding actual parameters from their call sites. This transformation is the extension to higher-order programming of a variety of dead-code elimination optimizations that are important in compilers for first-order imperative languages. Shivers has presented such a transformation. We reformulate the transformation and prove its correctness. We believe that this correctness proof can be a model for proofs of other analysis-based transformations. Mitchell Wand. Continuation-Based Multiprocessing Revisited. Higher-Order and Symbolic Computation, 12(3):283, October 1999. Abstract: This is a short introduction to the republication of "Continuation-Based Multiprocessing," which originally appeared in the 1980 Lisp Conference. Mitchell Wand. Continuation-Based Multiprocessing. Higher-Order and Symbolic Computation, 12(3):285--299, October 1999. Originally appeared in the 1980 Lisp Conference. Abstract: Any multiprocessing facility must include three features: elementary exclusion, data protection, and process saving. While elementary exclusion must rest on some hardware facility (e.g. a test-and-set instruction), the other two requirements are fulfilled by features already present in applicative languages. Data protection may be obtained through the use of procedures (closures or funargs), and process saving may be obtained through the use of the CATCH operator. The use of CATCH, in particular, allows an elegant treatment of process saving. We demonstrate these techniques by writing the kernel and some modules for a multiprocessing system. The kernel is very small. Many functions which one would normally expect to find inside the kernel are completely decentralized. We consider the implementation of other schedulers, interrupts, and the implications of these ideas for language design. This paper originally appeared in the 1980 Lisp Conference. Johan Ovlinger and Mitchell Wand. A Language for Specifying Recursive Traversals of Object Structures. In Proceedings of the 1999 ACM SIGPLAN Conference on Object-Oriented Programming, Systems, Languages, and Applications (OOPSLA`99), pages 70--81, November 1999. Abstract: We present a domain-specific language for specifying recursive traversals of object structures, for use with the visitor pattern. Traversals are traditionally specified as iterations, forcing the programmer to adopt an imperative style, or are hard-coded into the program or visitor. Our proposal allows a number of problems best approached by recursive means to be tackled with the visitor pattern, while retaining the benefits of a separate traversal specification. Steven~E. Ganz, Daniel~P. Friedman, and Mitchell Wand. Trampolined Style. In Proc. 1999 ACM SIGPLAN International Conference on Functional Programming, pages 18--27, Paris, September 1999. Abstract: A trampolined program is organized as a single loop in which computations are scheduled and their execution allowed to proceed in discrete steps. Writing programs in trampolined style supports primitives for multithreading without language support for continuations. Various forms of trampolining allow for different degrees of interaction between threads. We present two architectures based on an only mildly intrusive trampolined style. Concurrency can be supported at multiple levels of granularity by performing the trampolining transformation multiple times. Mitchell Wand and William~D. Clinger. Set Constraints for Destructive Array Update Optimization. In Proc. IEEE Conf. on Computer Languages '98, pages 184--193. IEEE, April 1998. Abstract: Destructive array update optimization is critical for writing scientific codes in functional languages. We present set constraints for an interprocedural update optimization that runs in polynomial time. This is a highly non-trivial optimization, involving interprocedural flow analyses for aliasing and liveness. We characterize the sondness of these analyses using small-step operational semantics. We have shown that all solutions to our set constraints are sound. We have also proved that any sound liveness analysis induces a correct program transformation. Mitchell Wand. The Theory of Fexprs is Trivial. Lisp and Symbolic Computation, 10:189--199, 1998. Abstract: We provide a very simple model of a reflective facility based on the pure lambda-calculus, and we show that its theory of contextual equivalence is trivial: two terms in the language are contextually equivalent iff they are alpha-congruent. Jonathan~A. Rees, William~D. Clinger, et al. Revised{$^5$} Report on the Algorithmic Language Scheme. Higher-Order and Symbolic Computation, 11(1):7--104, August 1998. also appeared in SIGPLAN Notices 33:9, September 1998. Mitchell Wand and Gregory~T. Sullivan. Denotational Semantics Using an Operationally-Based Term Model. In Proceedings 23rd ACM Symposium on Programming Languages, pages 386--399, 1997. Abstract: We introduce a method for proving the correctness of transformations of programs in languages like Scheme and ML. The method consists of giving the programs a denotational semantics in an operationally-based term model in which interaction is the basic observable, and showing that the transformation is meaning-preserving. This allows us to consider correctness for programs that interact with their environment without terminating, and also for transformations that change the internal store behavior of the program. We illustrate the technique on some of the Meyer-Sieber examples, and we use it to prove the correctness of assignment elimination for Scheme. The latter is an important but subtle step for Scheme compilers; we believe ours is the first proof of its Mitchell Wand. Types in Compilation: Scenes from an Invited Lecture. Invited talk at Workshop on Types in Compilation, held in conjunction with ICFP97, June 1997. Abstract: We consider some of the issues raised by the use of typed intermediate languages in compilers for higher-order languages. After a brief introduction, we consider typed vs.\ untyped equivalences, the relation between type analysis and flow analysis, and methods for proving the corrrectness of analysis-based program transformations. Paul~A. Steckler and Mitchell Wand. Lightweight Closure Conversion. ACM Transactions on Programming Languages and Systems, 19(1):48--86, January 1997. Original version appeared in Proceedings 21st ACM Symposium on Programming Languages, 1994. Abstract: We consider the problem of lightweight closure conversion, in which multiple procedure call protocols may coexist in the same code. A lightweight closure omits bindings for some of the free variables of the procedure that it represents. Flow analysis is used to match the protocol expected by each procedure and the protocol used at each of its possible call sites. We formulate the flow nalysis as a deductive system tht generates a labelled transition system and a set of constraints. We show that any solution to the constraints justifies the rsulting transformation. Some of the techniques used a similar to those of abstract interpretation, but others appear to be novel. Jens Palsberg, Mitchell Wand, and Patrick O'Keefe. Type Inference with Non-Structural Subtyping. Formal Aspects of Computer Science, 9:49--67, 1997. Abstract: We present an O(n^3) time type inference algorithm for a type system with a largest type \top, a smallest type \bot, and the usual ordering between function types. The algorithm infers type annotations of minimal size, and it works equally well for recursive types. For the problem of typability, our algorithm is simpler than the one of Kozen, Palsberg, and Schwartzbach for type inference without \bot. This may be surprising, especially because the system with \bot is strictly more powerful. Jerzy Tiuryn and Mitchell Wand. Untyped Lambda-Calculus with Input-Output. In H. Kirchner, editor, Trees in Algebra and Programming: CAAP'96, Proc.~21st International Colloquium, volume 1059 of Lecture Notes in Computer Science, pages 317--329, Berlin, Heidelberg, and New York, April 1996. Springer-Verlag. Abstract: We introduce an untyped lambda-calculus with input-output, based on Gordon's continuation-passing model of input-output. This calculus is intended to allow the classification of possibly infinite input-output behaviors, such as those required for servers or distributed systems. We define two terms to be operationally approximate iff they have similar behaviors in any context. We then define a notion of applicative approximation and show that it coincides with operational approximation for these new behaviors. Last, we consider the theory of pure lambda-terms under this notion of operational equivalence. Gregory~T. Sullivan and Mitchell Wand. Incremental Lambda Lifting: An Exercise in Almost-Denotational Semantics. manuscript, November 1996. Abstract: We prove the correctness of incremental lambda-lifting, an optimization that attempts to reduce the closure allocation overhead of higher-order programs by changing the scope of nested procedures. This optimization is invalid in the standard denotational semantics of Scheme, because it changes the storage behavior of the program. Our method consists of giving Scheme a denotational semantics in an operationally-based term model in which interaction is the basic observable. Lambda lifting is then shown to preserve meaning in the model. Jonathan~G. Rossie, Daniel~P. Friedman, and Mitchell Wand. Modeling Subobject-based Inheritance. In Pierre Cointe, editor, Proc. European Conference on Object-Oriented Programming, volume 1098 of Lecture Notes in Computer Science, pages 248--274, Berlin, Heidelberg, and New York, July 1996. Springer-Verlag. Abstract: A model of subobjects and subobject selection gives us a concise expression of key semantic relationships in a variety of inheritance-based languages. Subobjects and their selection have been difficult to reason about explicitly because they are not explicit in the languages that support them. The goal of this paper is to present a relatively simple calculus to describe subobjects and subobject selection explicitly. Rather than present any deep theorems here, the goal is to present a general calculus that can be used to explore the design of inheritance systems. David Gladstein and Mitchell Wand. Compiler Correctness for Concurrent Languages. In Paolo Ciancarini and Chris Hankin, editors, Proc. Coordination '96 (Cesena, Italy), volume 1061 of Lecture Notes in Computer Science, pages 231--248, Berlin, Heidelberg, and New York, April 1996. Springer-Verlag. Abstract: This paper extends previous work in compiler derivation and verification to languages with true-concurrency semantics. We extend the lambda-calculus to model process-centered concurrent computation, and give the semantics of a small language in terms of this calculus. We then define a target abstract machine whose states have denotations in the same calculus. We prove the correctness of a compiler for our language: the denotation of the compiled code is shown to be strongly bisimilar to the denotation of the source program, and the abstract machine running the compiled code is shown to be branching-bisimilar tothe source program's denotation. Mitchell Wand and Gregory~T. Sullivan. A Little Goes a Long Way: A Simple Tool to Support Denotational Compiler-Correctness Proofs. Technical Report NU-CCS-95-19, Northeastern University College of Computer Science, November 1995. Abstract: In a series of papers in the early 80's we proposed a paradigm for semantics-based compiler correctness. In this paradigm, the source and target languages are given denotational semantics in the same lambda-theory, so correctness proofs can be carried out within this theory. In many cases, the proofs have a highly structured form. We show how a simple proof strategy, based on an algorithm for alpha-matching, can be used to build a tool that can automate all the routine cases of these proofs. Mitchell Wand, Patrick O'Keefe, and Jens Palsberg. Strong Normalization with Non-structural Subtyping. Mathematical Structures in Computer Science, 5(3):419--430, September 1995. Abstract: We study a type system with a notion of subtyping that involves a largest type $\top$, a smallest type $\bot$, atomic coercions between base types, and the usual ordering of function types. We prove that any $\lambda$-term typable in this system is strongly normalizing; this solves an open problem of Thatte. We also prove that the fragment without $\bot$ types strictly fewer terms. This demonstrates that $\bot$ adds power to a type system. Mitchell Wand. Compiler Correctness for Parallel Languages. In Functional Programming Languages and Computer Architecture, pages 120--134, June 1995. Abstract: We present a paradigm for proving the correctness of compilers for languages with parallelism. The source language is given a denotational semantics as a compositional translation to a higher-order process calculus. The target language is also given a denotational semantics as a compositional translation to the same process calculus. We show the compiler is correct in that it preserves denotation up to bisimulation. The target language is also given an operational semantics, and this operational semantics is shown correct in the sense that it is branching-bisimilar to the denotational semantics of the target language. Together, these results show that for any program, the operational semantics of the target code is branching-bisimilar to the semantics of the source code. Dino~P. Oliva, John~D. Ramsdell, and Mitchell Wand. The VLISP Verified PreScheme Compiler. Lisp and Symbolic Computation, 8(1/2):111--182, 1995. Abstract: This paper describes a verified compiler for PreScheme, the implementation language for the {\vlisp} run-time system. The compiler and proof were divided into three parts: A transformational front end that translates source text into a core language, a syntax-directed compiler that translates the core language into combinator-based tree-manipulation language, and a linearizer that translates combinator code into code for an abstract stored-program machine with linear memory for both data and code. This factorization enabled different proof techniques to be used for the different phases of the compiler, and also allowed the generation of good code. Finally, the whole process was made possible by carefully defining the semantics of {\vlisp} PreScheme rather than just adopting Scheme's. We believe that the architecture of the compiler and its correctness proof can easily be applied to compilers for languages other than PreScheme. Joshua~D. Guttman and Mitchell Wand, editors. VLISP: A Verified Implementation of Scheme. Kluwer, Boston, 1995. Originally published as a special double issue of the journal {\it Lisp and Symbolic Computation} (Volume 8, Issue 1/2). Abstract: The VLISP project undertook to verify rigorously the implementation of a programming language. The project began at The MITRE Corporation in late 1989, under the company's Technology Program. The work was supervised by the Rome Laboratory of the United States Air Force. Northeatern University became involved a year later. This research work has also been published as a special double issue of the journal {\it Lisp and Symbolic Computation} (Volume 8, Issue 1/2). Joshua Guttman, John Ramsdell, and Mitchell Wand. VLISP: A Verified Implementation of Scheme. Lisp and Symbolic Computation, 8(1/2):5--32, 1995. Abstract: The Vlisp project showed how to produce a comprehensively verified implementation for a programming language, namely Scheme. This paper introduces two more detailed studies on Vlisp. It summarizes the basic techniques that were used repeatedly throughout the effort. It presents scientific conclusions about the applicability of the these techniques as well as engineering conclusions about the crucial choices that allowed the verification to succeed. Mitchell Wand and Zheng-Yu Wang. Conditional Lambda-Theories and the Verification of Static Properties of Programs. Information and Computation, 113(2):253--277, 1994. Preliminary version appeared in {\lics{5th}}, 1990, pp.~321-332. Abstract: We present a proof that a simple compiler correctly uses the static properties in its symbol table. We do this by regarding the target code produced by the compiler as a syntactic variant of a \l-term. In general, this \l-term $C$ may not be equal to the semantics $S$ of the source program: they need be equal only when information in the symbol table is valid. We formulate this relation as a {\it {conditional \l-judgement}\/} $\Gbar \imp S = C$, where \Gbar\ is a formula that represents the invariants implicit in symbol table \G. We present rules of inference for conditional \l-judgements and prove their soundness. We then use these rules to prove the correctness of a simple compiler that relies on a symbol table. The form of the proof suggests that such proofs may be largely mechanizable. Mitchell Wand and Paul Steckler. Tracking Available Variables for Lightweight Closures. In Proceedings of Atlantique Workshop on Semantics-Based Program Manipulation, pages 63--70. University of Copenhagen DIKU Technical Report 94/12, 1994. Mitchell Wand and Paul Steckler. Selective and Lightweight Closure Conversion. In Conf. Rec. 21th ACM Symp. on Principles of Prog. Lang., pages 435--445, 1994. Revised version appeared in {\em TOPLAS 19}:1, January, 1997, pp. 48-86. Abstract: We consider the problem of selective and lightweight closure conversion, in which multiple procedure-calling protocols may coexist in the same code. Flow analysis is used to match the protocol expected by each procedure and the protocol used at each of its possible call sites. We formulate the flow analysis as the solution of a set of constraints, and show that any solution to the constraints justifies the resulting transformation. Some of the techniques used are suggested by those of abstract interpretation, but others arise out of alternative approaches. Mitchell Wand. Type Inference for Objects with Instance Variables and Inheritance. In Carl Gunter and John~C. Mitchell, editors, Theoretical Aspects of Object-Oriented Programming, pages 97--120. MIT Press, 1994. Originally appeared as Northeastern University College of Computer Science Technical Report NU-CCS-89-2, February, 1989. Abstract: We show how to construct a complete type inference system for object systems with protected instance variables, publicly accessible methods, first-class classes, and single inheritance. This is done by extending Remy's scheme for polymorphic record typing to allow potentially infinite label sets, and interpreting objects in the resulting language. Paul Steckler and Mitchell Wand. Selective Thunkification. In Baudouin~Le Charlier, editor, Static Analysis: First International Static Analysis Symposium, volume 864, pages 162--178. Springer-Verlag, Berlin, Heidelberg, and New York, September 1994. Abstract: Recently, Amtoft presented an analysis and transformation for mapping typed call-by-name programs to call-by-value equivalents. Here, we present a comparable analysis and transformation for untyped programs using dataflow analysis. In the general case, the transformation generates thunks for call site operands of a call-by-name program. Using strictness information derived as part of a larger flow analysis, we can determine that some operands are necessarily evaluated under call-by-name, so the transformation does not need to generate thunks for them. The dataflow analysis is formulated as the solution to a set of constraints. We show that any solution to the constraints is sound, and that any such solution justifies the resulting transformation. Mitchell Wand. Specifying the Correctness of Binding-Time Analysis. Journal of Functional Programming, 3(3):365--387, July 1993. Preliminary version appeared in {\it Conf. Rec. 20th ACM Symp. on Principles of Prog. Lang.\/} (1993), 137--143. Abstract: Mogensen has exhibited a very compact partial evaluator for the pure lambda calculus, using binding-time analysis followed by specialization. We give a correctness criterion for this partial evaluator and prove its correctness relative to this specification. We show that the conventional properties of partial evaluators, such as the Futamura projections, are consequences of this specification. By considering both a flow analysis and the transformation it justifies together, this proof suggests a framework for incorporating flow analyses into verified compilers. Jerzy Tiuryn and Mitchell Wand. Type Reconstruction with Recursive Types and Atomic Subtyping. In {CAAP '93: 18th Colloquium on Trees in Algebra and Programming}, July 1993. Abstract: We consider the problem of type reconstruction for \l-terms over a type system with recursive types and atomic subsumptions. This problem reduces to the problem of solving a finite set of inequalities over infinite trees. We show how to solve such inequalities by reduction to an infinite but well-structured set of inequalities over the base types. This infinite set of inequalities is solved using \Buchi\ automata. The resulting algorithm is in {\em DEXPTIME}. This also improves the previous {\em NEXPTIME} upper bound for type reconstruction for finite types with atomic subtyping. We show that the key steps in the algorithm are {\em PSPACE}-hard. Mitchell Wand and Dino~P. Oliva. Proving the Correctness of Storage Representations. In {\lfp{1992}}, pages 151--160, 1992. Abstract: Conventional techniques for semantics-directed compiler de\-ri\-vation yield abstract machines that manipulate trees. However, in order to produce a real compiler, one has to represent these trees in memory. In this paper we show how the technique of {\em {storage-layout relations}} (introduced by Hannan \cite{Hannan}) can be applied to verify the correctness of storage representations in a very general way. This technique allows us to separate denotational from operational reasoning, so that each can be used when needed. As an example, we show the correctness of a stack implementation of a language including dynamic catch and throw. The representation uses static and dynamic links to thread the environment and continuation through the stack. We discuss other uses of these techniques. Mitchell Wand and Patrick~M. O'Keefe. Type Inference for Partial Types is Decidable. In B. Krieg Br{\"u}ckner, editor, European Symposium on Programming '92, volume 582 of Lecture Notes in Computer Science, pages 408--417, Berlin, Heidelberg, and New York, 1992. Springer-Verlag. Abstract: The type inference problem for partial types, introduced by Thatte \cite{Thatte}, is the problem of deducing types under a subtype relation with a largest element \O\ and closed under the usual antimonotonic rule for function types. We show that this problem is decidable by reducing it to a satisfiability problem for type expressions over this partial order and giving an algorithm for the satisfiability problem. The satisfiability problem is harder than the one conventionally given because comparable types may have radically different shapes. Mitchell Wand. Lambda Calculus. In S.C. Shapiro, editor, Encyclopedia of Artificial Intelligence, pages 760--761. Wiley-Inter\-science, 2nd edition, 1992. Mitchell Wand. Correctness of Procedure Representations in Higher-Order Assembly Language. In S. Brookes, editor, Proceedings Mathematical Foundations of Programming Semantics '91, volume 598 of Lecture Notes in Computer Science, pages 294--311. Springer-Verlag, Berlin, Heidelberg, and New York, 1992. Abstract: Higher-order assembly language (HOAL) generalizes combinator-based target languages by allowing free variables in terms to play the role of registers. We introduce a machine model for which HOAL is the assembly language, and prove the correctness of a compiler from a tiny language into HOAL. We introduce the notion of a lambda-representation, which is an abstract binding operation, show how some common representations of procedures and continuations can be expressed as lambda-representations. Last, we prove the correctness of a typical procedure-calling convention in this framework. Dino~P. Oliva and Mitchell Wand. A Verified Run-Time Structure for Pure PreScheme. Technical Report NU-CCS-92-97, Northeastern University College of Computer Science, 1992. Abstract: This document gives a summary of activities under MITRE Corporation Contract Number F19628-89-C-0001. It gives an operational semantics of an abstract machine for Pure PreScheme and of its implementation as a run-time structure on an Motorola 68000 microprocessor. The relationship between these two models is stated formally and proved. Dino~P. Oliva and Mitchell Wand. A Verified Compiler for Pure PreScheme. Technical Report NU-CCS-92-5, Northeastern University College of Computer Science, February 1992. Abstract: This document gives a summary of activities under MITRE Contract Number F19628-89-C-001. It gives a detailed denotational specification of the language Pure Pre\-Scheme. A bytecode compiler, derived from the semantics, is presented, followed by proofs of correctness of the compiler with respect to the semantics. Finally, an assembler from the bytecode to an actual machine architecture is shown. Daniel~P. Friedman, Mitchell Wand, and Christopher~T. Haynes. Essentials of Programming Languages. MIT Press, Cambridge, MA, 1992. Mitchell Wand and Patrick O'Keefe. Automatic Dimensional Inference. In J.L. Lassez and G.D. Plotkin, editors, Computational Logic: in honor of J. Alan Robinson, pages 479--486. MIT Press, 1991. Abstract: While there have been a number of proposals to integrate dimensional analysis into existing compilers, it appears that no one has made the easy observation that dimensional analysis fits neatly into the pattern of ML-style type inference. In this paper we show how to add dimensions to the simply-typed lambda calculus, and we show that every typable dimension-preserving term has a principal type. The principal type is unique up to a choice of basis. Mitchell Wand. Type Inference for Record Concatenation and Multiple Inheritance. Information and Computation, 93:1--15, 1991. Preliminary version appeared in {\it Proc. 4th IEEE Symposium on Logic in Computer Science\/} (1989), 92--97. Abstract: We show that the type inference problem for a lambda calculus with records, including a record concatenation operator, is decidable. We show that this calculus does not have principal types, but does have finite complete sets of types: that is, for any term $M$ in the calculus, there exists an effectively generable finite set of type schemes such that every typing for $M$ is an instance of one the schemes in the set. We show how a simple model of object-oriented programming, including hidden instance variables and multiple inheritance, may be coded in this calculus. We conclude that type inference is decidable for object-oriented programs, even with multiple inheritance and classes as first-class values. Margaret Montenyohl and Mitchell Wand. Correctness of Static Flow Analysis in Continuation Semantics. Science of Computer Programming, 16:1--18, 1991. Preliminary version appeared in {\popl {15th}} (1988), 204--218. Boleslaw Ciesielski and Mitchell Wand. Using the Theorem Prover Isabelle-91 to Verify a Simple Proof of Compiler Correctness. Technical Report NU-CCS-91-20, Northeastern University College of Computer Science, December 1991. William~D. Clinger, J. Rees, et al. Revised Report on the Algorithmic Language Scheme. Lisp Pointers, 4(3):1--55, July-September 1991. Has also appeared as MIT, Indiana University and University of Oregon technical reports. Mitchell Wand. A Short Proof of the Lexical Addressing Algorithm. Information Processing Letters, 35:1--5, 1990. Abstract: The question of how to express binding relations, and in particular, of proving the correctness of lexical addressing techniques, has been considered primarily in the context of compiler correctness proofs. Here we consider the problem in isolation. We formulate the connections between three different treatments of variables in programming language semantics: the environment coding, the natural coding, and the lexical-address coding (sometimes called the Frege coding, the Church coding, and the deBruijn coding, respectively). By considering the problem in isolation, we obtain shorter and clearer proofs. The natural coding seems to occupy a central place, and the other codings are proved equivalent by reference to it. Mitchell Wand and Patrick~M. O'Keefe. On the Complexity of Type Inference with Coercion. In Conf. on Functional Programming Languages and Computer Architecture, 1989. Abstract: We consider the following problem: Given a partial order $(C, \le)$ of base types and coercions between them, a set of constants with types generated from $C$, and a term $M$ in the lambda calculus with these constants, does $M$ have a typing with this set of types? This problem abstracts the problem of typability over a fixed set of base types and coercions ({\it e.g.}\ int $\le$ real, or a fixed set of coercions between opaque data types). We show that in general, the problem of typability of lambda-terms over a given partially-ordered set of base types is NP-complete. However, if the partial order is known to be a tree, then the satisfiability problem is solvable in (low-order) polynomial time. The latter result is of practical importance, as trees correspond to the coercion structure of single-inheritance object systems. Mitchell Wand. The Register-Closure Abstract Machine: A Machine Model to Support {CPS} Compiling. Technical Report NU-CCS-89-24, Northeastern University College of Computer Science, Boston, MA, July 1989. Abstract: We present a new abstract machine model for compiling languages based on their denotational semantics. In this model, the output of the compiler is a lambda-term which is the higher-order abstract syntax for an assembly language program. The machine operates by reducing these terms. This approach is well-suited for generating code for modern machines with many registers. We discuss how this approach can be used to prove the correctness of compilers, and how it improves on our previous work in this area. Mitchell Wand. From Interpreter to Compiler via Higher-Order Abstract Assembly Language. Technical report, Northeastern University College of Computer Science, 1989. Abstract: In this paper, we give a case study of transforming an interpreter into a compiler. This transformation improves on our previous work through the use of {\it {higher-order abstract assembly language}\/}. Higher-order abstract assembly language (or HOAL) uses a Church-style, continuation-passing encoding of machine operations. This improves on the use of combinator-based encoding in allowing a direct treatment of register usage, and thereby giving the compiler writer a clearer idea of how to incorporate new constructs in the source language or machine. For example, it allows a denotational exposition of stack layouts. We show how to do the transformation for a simple language, for a language with procedures, and for a compiler using lexical Margaret Montenyohl and Mitchell Wand. Incorporating Static Analysis in a Semantics-Based Compiler. Information and Computation, 82:151--184, 1989. Abstract: We show how restructuring a denotational definition leads to a more efficient compiling algorithm. Three semantics-preserving transformations (static replacement, factoring, and combinator selection) are used to convert a continuation semantics into a formal description of a semantic analyzer and code generator. The compiling algorithm derived below performs type checking before code generation so that type-checking instructions may be omitted from the target code. The optimized code is proved correct with respect to the original definition of the source language. The proof consists of showing that all transformations preserve the semantics of the source language. Richard~P. Gabriel et al. Draft Report on Requirements for a Common Prototyping System. SIGPLAN Notices, 24(3):93--165, March 1989. Mitchell Wand and Daniel~P. Friedman. The Mystery of the Tower Revealed: A Non-Reflective Description of the Reflective Tower. Lisp and Symbolic Computation, 1(1):11--37, 1988. Reprinted in {\it Meta-Level Architectures and Reflection} (P. Maes and D. Nardi, eds.) North-Holland, Amsterdam, 1988, pp. 111--134. Preliminary version appeared in {\it Proc. 1986 ACM Conf. on Lisp and Functional Programming,\/} 298--307. Abstract: In an important series of papers [8,9], Brian Smith has discussed the nature of programs that know about their text and the context in which they are executed. He called this kind of knowledge reflection. Smith proposed a programming language, called 3-LISP, which embodied such self-knowledge in the domain of metacircular interpreters. Every 3-LISP program is interpreted by a metacircular interpreter, also written in 3-LISP. This gives rise to a picture of an infinite tower of metacircular interpreters, each being interpreted by the one above it. Such a metaphor poses a serious challenge for conventional modes of understanding of programming languages. In our earlier work on reflection [4], we showed how a useful species of reflection could be modeled without the use of towers. In this paper, we give a semantic account of the reflective tower. This account is self-contained in the sense that it does not employ reflection to explain reflection. Matthias Felleisen, Mitchell Wand, Daniel~P. Friedman, and Bruce~F. Duba. Abstract Continuations: A Mathematical Semantics for Handling Functional Jumps. In Proc. 1988 ACM Conf. on Lisp and Functional Programming, pages 52--62, 1988. Abstract: Continuation semantics is the traditional mathematical formalism for specifying the semantics of imperative control facilities. Modern Lisp-like languages, however, contain advanced control structures like full functional jumps and control delimiters for which continuation semantics is insufficient. We solve this problem by introducing an abstract domain of {\it rests\/} of computations with appropriate operations. Beyond being useful for the problem at hand, these {\it abstract continuations} turn out to have applications in a much broader context, \eg, the explication of parallelism, the modeling of control facilities in parallel languages, and the design of new control structures. Mitchell Wand. A Simple Algorithm and Proof for Type Inference. Fundamenta Infomaticae, 10:115--122, 1987. Abstract: We present a simple proof of Hindley's Theorem: that it is decidable whether a term of the untyped lambda calculus is the image under type-erasing of a term of the simply typed lambda calculus. The proof proceeds by a direct reduction to the unification problem for simple terms. This arrangement of the proof allows for easy extensibility to other type inference problems. Mitchell Wand. Lambda Calculus. In S.C. Shapiro, editor, Encyclopedia of Artificial Intelligence, pages 441--443. Wiley-Inter\-science, 1987. Mitchell Wand. Complete Type Inference for Simple Objects. In Proc. 2nd IEEE Symposium on Logic in Computer Science, pages 37--44, 1987. {Corrigendum}, \lics{3rd}, 1988, page 132. Abstract: We consider the problem of strong typing for a model of object-oriented programming systems. These systems permit values which are records of other values, and in which fields inside these records are retrieved by name. We propose a type system which allows us to classify these kinds of values and to classify programs by the type of their result, as is usual in strongly-typed programming languages. Our type system has two important properties: it admits multiple inheritance, and it has a syntactically complete type inference system. Stefan Kolbl and Mitchell Wand. Linear Future Semantics and its Implementation. Science of Computer Programming, 8:87--103, 1987. Abstract: We describe linear future semantics, an extension of linear history semantics as introduced by Francez, Lehmann, and Pnueli, and show how it can be used to add multiprocessing to languages given by standard continuation semantics. We then demonstrate how the resulting semantics can be implemented. The implementation uses functional abstractions and non-determinacy to represent the sets of answers in the semantics. We give an example, using a semantic prototyping system based on the language Scheme. Eugene~M. Kohlbecker and Mitchell Wand. Macro-by-Example: Deriving Syntactic Transformations from their Specifications. In {\popl{14th}}, pages 77--84, 1987. Mitchell Wand. Finding the Source of Type Errors. In Conf. Rec. 13th ACM Symp. on Principles of Prog. Lang., pages 38--43, 1986. Jonathan~A. Rees, William~D. Clinger, et al. Revised{$^3$} Report on the Algorithmic Language Scheme. SIGPLAN Notices, 21(12):37--79, December 1986. D.P Friedman, C.T. Haynes, and Mitchell Wand. Obtaining Coroutines with Continuations. J. of Computer Languages, 11(3/4):143--153, 1986. Mitchell Wand. From Interpreter to Compiler: A Representational Derivation. In H. Ganzinger and N.D. Jones, editors, Workshop on Programs as Data Objects, volume 217 of Lecture Notes in Computer Science, pages 306--324. Springer-Verlag, Berlin, Heidelberg, and New York, 1985. Mitchell Wand. Embedding Type Structure in Semantics. In {\popl{12th}}, pages 1--6, 1985. Abstract: We show how a programming language designer may embed the type structure of of a programming language in the more robust type structure of the typed lambda calculus. This is done by translating programs of the language into terms of the typed lambda calculus. Our translation, however, does not always yield a well-typed lambda term. Programs whose translations are not well-typed are considered meaningless, that is, ill-typed. We give a conditionally type-correct semantics for a simple language with continuation semantics. We provide a set of static type-checking rules for our source language, and prove that they are sound and complete: that is, a program passes the typing rules if and only if its translation is well-typed. This proves the correctness of our static semantics relative to the well-established typing rules of the typed lambda-calculus. Albert~R. Meyer and Mitchell Wand. Continuation Semantics in Typed Lambda-Calculi. In R. Parikh, editor, Logics of Programs (Brooklyn, June, 1985), volume 193 of Lecture Notes in Computer Science , pages 219--224. Springer-Verlag, Berlin, Heidelberg, and New York, 1985. Abstract: This paper reports preliminary work on the semantics of the continuation transform. Previous work on the semantics of continuations has concentrated on untyped lambda-calculi and has used primarily the mechanism of inclusive predicates. Such predicates are easy to understand on atomic values, but they become obscure on functional values. In the case of the typed lambda-calculus, we show that such predicates can be replaced by retractions. The main theorem states that the meaning of a closed term is a retraction of the meaning of the corresponding continuationized term. W. Clinger, D.P. Friedman, and Mitchell Wand. A Scheme for a Higher-Level Semantic Algebra. In J. Reynolds and M. Nivat, editors, Algebraic Methods in Semantics: Proceedings of the US-French Seminar on the Application of Algebra to Language Definition and Compilation, pages 237--250. Cambridge University Press, Cambridge, 1985. Mitchell Wand. What is Lisp? American Mathematical Monthly, 91:32--42, 1984. Mitchell Wand. A Types-as-Sets Semantics for {Milner}-style Polymorphism. In {\popl{11th}}, pages 158--164, 1984. Mitchell Wand. A Semantic Prototyping System. In Proceedings ACM SIGPLAN '84 Compiler Construction Conference, pages 213--221, 1984. Abstract: We have written a set of computer programs for testing and exercising programming language specifications given in the style of denotational semantics. The system is built largely in Scheme 84, a dialect of LISP that serves as an efficient lambda-calculus interpreter. The system consists of: a syntax-directed transducer, which embodies the principle of compositionality, a type checker, which is extremely useful in debugging semantic definitions, and an interface to the yacc parser-generator, which allows the system to use concrete syntax rather than the often cumbersome abstract syntax for its programs. In this paper, we discuss the design of the system, its implementation, and discuss its use. Daniel~P. Friedman and Mitchell Wand. Reification: Reflection without Metaphysics. In {\lfp{1984}}, pages 348--355, August 1984. Daniel~P. Friedman, Christopher~T. Haynes, and Mitchell Wand. Continuations and Coroutines: An Exercise in Metaprogramming. In {\lfp{1984}}, pages 293--298, August 1984. Daniel~P. Friedman, Christopher~T. Haynes, Eugene Kohlbecker, and Mitchell Wand. The Scheme 84 Reference Manual. Technical Report 153, Indiana University Computer Science Department, Bloomington, IN, March 1984. Revised June 1985. Mitchell Wand. A Semantic Algebra for Logic Programming. Technical Report 148, Indiana University Computer Science Department, Bloomington, IN, August 1983. Mitchell Wand. Loops in Combinator-Based Compilers. Information and Computation, 57(2-3):148--164, 1983. Previously appeared in {\popl{10th}}, 1983, pages 190--196. Abstract: In our paper [Wand 82a], we introduced a paradigm for compilation based on combinators. A program from a source language is translated (via a semantic definition) to trees of combinators; the tree is simplified via associative and distributive laws) to a linear, assembly-language-like format; the ``compiler writer's virtual machine'' operates by simulating a reduction sequence of the simplified tree. The correctness of these transformations follows from general results about the $\lambda$-calculus. The code produced by such a generator is always tree-like. In this paper, the method is extended to produce target code with explicit loops. This is done by re-introducing variables into the terms of the target language in a restricted way, along with a structured binding operator. We also consider general conditions under which these transformations hold. Mitchell Wand. Specifications, Models, and Implementations of Data Abstractions. Theoretical Computer Science, 20:3--32, 1982. Mitchell Wand. Semantics-Directed Machine Architecture. In {\popl{9th}}, pages 234--241, 1982. Mitchell Wand. Deriving Target Code as a Representation of Continuation Semantics. ACM Transactions on Programming Languages and Systems, 4(3):496--517, July 1982. Mitchell Wand. SCHEME Version 3.1 Reference Manual. Technical Report 93, Indiana University Computer Science Department, Bloomington, IN, June 1980. Mitchell Wand. Induction, Recursion, and Programming. North Holland, New York, 1980. Mitchell Wand. First-Order Identities as a Defining Language. Acta Informatica, 14:337--357, 1980. Mitchell Wand. Different Advice on Structuring Compilers and Proving Them Correct. Technical Report 95, Indiana University Computer Science Department, Bloomington, IN, September 1980. Mitchell Wand. Continuation-Based Program Transformation Strategies. Journal of the ACM, 27:164--180, 1980. Mitchell Wand. Continuation-Based Multiprocessing. In J. Allen, editor, Conference Record of the 1980 LISP Conference, pages 19--28, Palo Alto, CA, 1980. The Lisp Company. Republished by ACM. Mitchell Wand. Fixed-Point Constructions in Order-Enriched Categories. Theoretical Computer Science, 8:13--30, 1979. Mitchell Wand. Final Algebra Semantics and Data Type Extensions. Journal of Computer and Systems Science, 19:27--44, 1979. Mitchell Wand and Daniel~P. Friedman. Compiling Lambda Expressions Using Continuations and Factorizations. Journal of Computer Languages, 3:241--263, 1978. Mitchell Wand. A New Incompleteness Result for {Hoare}'s System. Journal of the ACM, 25:168--175, 1978. Mitchell Wand. A Characterization of Weakest Preconditions. Journal of Computer and Systems Science, 15:209--212, 1977. Mitchell Wand. Algebraic Theories and Tree Rewriting Systems. Technical Report 66, Indiana University Computer Science Department, Bloomington, IN, July 1977. Stuart~C. Shapiro and Mitchell Wand. The Relevance of Relevance. Technical Report 46, Indiana University, November 1976. D.P. Friedman, D.S. Wise, and Mitchell Wand. Recursive Programming Through Table Lookup. In Proc. 1976 ACM Symposium on Symbolic and Algebraic Computation, pages 85--89, 1976. David~S. Wise, Daniel~P. Friedman, Stuart~C. Shapiro, and Mitchell Wand. Boolean-Valued Loops. BIT, 15:431--451, 1975. Mitchell Wand. On the Recursive Sepcification of Data Types. In E. Manes, editor, Category Theory Applied to Computation and Control, volume 25 of Lecture Notes in Computer Science, pages 214--217. Springer-Verlag, Berlin, Heidelberg, and New York, 1975. Mitchell Wand. Free, Iteratively Closed Categories of Complete Lattices. Cahiers de Topologie et G\'eom\'etrie Diff\'erentielle, 16:415--424, 1975. Mitchell Wand. An Algebraic Formulation of the Chomsky Hierarchy. In E. Manes, editor, Category Theory Applied to Computation and Control, volume 25 of Lecture Notes in Computer Science, pages 209--213. Springer-Verlag, Berlin, Heidelberg, and New York, 1975. Mitchell Wand. An Unusual Application of Program-Proving. In Proc. 5th ACM Symposium on Theory of Computing, pages 59--66, Austin, TX, 1973. Mitchell Wand. Mathematical Foundations of Formal Language Theory. PhD thesis, MIT, 1973. MIT Project MAC TR-108 (December, 1973). Mitchell Wand. A Concrete Approach to Abstract Recursive Definitions. In Maurice Nivat, editor, Automata, Languages, and Programming, pages 331--345. North-Holland, Amsterdam, 1973. Mitchell Wand. The Elementary Algebraic Theory of Generalized Finite Automata. Notices AMS, 19(2):A325, 1972. Mitchell Wand. Theories, Pretheories, and Finite-State Transducers on Trees. Artificial Intellligence Memo 216, MIT, May 1971.
{"url":"http://www.ccs.neu.edu/home/wand/pubs.html","timestamp":"2014-04-21T12:21:10Z","content_type":null,"content_length":"84565","record_id":"<urn:uuid:454a34b7-5314-4f11-8338-45d630a91a5e>","cc-path":"CC-MAIN-2014-15/segments/1397609539776.45/warc/CC-MAIN-20140416005219-00008-ip-10-147-4-33.ec2.internal.warc.gz"}
SAS-L archives -- November 2005, week 2 (#449)LISTSERV at the University of Georgia --- Tony Yang <tonyyangsxz@GMAIL.COM> wrote: > Dear Listers; > One more question about PROC NLMIXED, if I have a nominal variable > COLOR, > which has 4 levels, and re-set another three variables for COLOR, > color1, > color2, color3(letting level 4 as reference), then I include these > three > variables in the modelling. > My question is, if I want to test if there is difference between > color1 and > color2, color1 and color 3, color2 and color3, then how should I do > this > analysis? > I tried the following code, > contrast "b1=b2" b1 1 b2 -1; > contrast "b1=b3" b1 1 b3 -1; > contrast "b2=b3" b2 1 b3 -1; > there are error for this coding, hence I tried in SAS direction > > contrast "b1=b2" b1 +1, b2 -1; > contrast "b1=b3" b1 +1, b3 -1; > contrast "b2=b3" b2 +1, b3 -1; > they works, while according to the syntax notes for contrast in PROC > NLMIXED, it seems SAS is doing the simulaneous test for b1=-1 and > b2=1, etc. > > Then I am not sure how to do this further, any suggestions will be > appreciated. Thanks in advance for your time. > > -- > Best regards, > Tony > The CONTRAST statement in NLMIXED does not follow the same construction principals as, say, the GLM CONTRAST statement. In GLM, you name an effect followed by the contrast coefficients for each level of the effect. In NLMIXED, you use regular data step type code to construct an equation for each orthogonal contrast. The NLMIXED contrast statement has similarity to the GLM contrast statement in that commas are employed to indicate additional linear combination which are employed to construct a multiple df hypothesis test. The correct code to test whether the parameters b1=b2, b1=b3, and b2=b3 would be as follows: contrast "b1=b2" b1 - b2; contrast "b1=b3" b1 - b3; contrast "b2=b3" b2 - b3; Let me make one other note about the NLMIXED contrast statement. You can perform an omnibus test that b1=b2=b3=0 by specifying contrast "Omnibus test for COLOR" b1, b2, b3; Now, it might also be instructive to examine just what was coded by your CONTRAST statements which "worked". You coded contrast "b1=b2" b1 +1, b2 -1; contrast "b1=b3" b1 +1, b3 -1; contrast "b2=b3" b2 +1, b3 -1; First, we see that each of these is a multiple df test because of the commas which separate each linear combination. The two linear combinations of the first contrast statement are 1 - H0: b1 + 1 = 0 OR b1 = -1 2 - H0: b2 - 1 = 0 OR b2 = 1 Since there are two independent linear combinations, then you get the simultaneous test that b1=-1 and b2=1, just as you indicated. --------------------------------------- Dale McLerran Fred Hutchinson Cancer Research Center mailto: dmclerra@NO_SPAMfhcrc.org Ph: (206) 667-2926 Fax: (206) 667-5977 __________________________________ Yahoo! Mail - PC Magazine Editors' Choice 2005 http://mail.yahoo.com
{"url":"http://listserv.uga.edu/cgi-bin/wa?A2=ind0511b&L=sas-l&D=0&P=50078&F=P","timestamp":"2014-04-17T12:30:33Z","content_type":null,"content_length":"12709","record_id":"<urn:uuid:77747991-e8c2-4da3-9a61-2e97e0115277>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00562-ip-10-147-4-33.ec2.internal.warc.gz"}
The material on this page is mainly textual and represents some of the stuff that I couldn't fit in either book, but wanted to write up. This material is all in draft form, and it may be revised from time to time. Please remember where you found it, and always give credit if you use it. Again, I would appreciate suggestions for further additions and/or clarification. This is a discussion of alternative ways to handle missing data, whether those data come from a multigroup experiment or are continuous variables used in a regression problem. I have written a chapter on missing data for the Handbook of Social Science Methodology, edited by Outhwaite and Turner and published by Sage. A preprint of that chapter is available by writing to me at David.Howell@uvm.edu I also recommend that you look at the following entry if your missing data occur in repeated measures designs. The purpose of this page is more about handling missing data than about logistic regression, but I think that it is helpful for both. The issues about missing data are very similar whether you are planning on a basic multiple regression or whether you have a dichotomous dependent variable and are using logistic regression. The use of mixed models offers another way of looking at repeated measures designs, and will return the same results as standard repeated measures ANOVA, but only when the design is balanced and there is no missing data. When there are missing data, the mixed model approach has a great deal to offer. You can obtain Part II at Mixed Models for Repeated Measures Designs-Part II This is a discussion that begins to show how sample sizes can affect the interpretation of a study when you have unequal cell frequencies. Near the end of the article is an e-mail message that I sent to someone else, illustrating how what appears to be one effect can actually come out to be a different effect. This is a discussion under construction of what it means to talk about power when we wouldn't be satisfied just to prove that one mean is trivially greater than another. If you are correlating variables (such as scores from twins or gay partners) where there is no ordering within a pair (e.g. either twin could be considered twinA or twinB), you want an intraclass correlation coefficient. This is a discussion of ways to run multiple comparisons when you have a repeated measure. It addresses the often-asked question "How do I do a Tukey test on my repeated measure?". I was asked for a demonstration of how you would compute the different types of sums of squares using the general linear model. Here that is. This is a discussion of the traditional Pearson's chi-square, Fisher's Exact Test, and randomization tests of 2 x 2 contingency tables. It demonstrates that when marginal totals are not considered as fixed, the standard chi-square test is to be preferred to Fisher's Exact Test. This is a response to a question about how I used the normal distribution to create the power tables at the end of both books and in Table 15.1 in the Fundamentals book.. This is a discussion different ways of testing hypotheses about contingency tables. The basic idea is chi-square, but there is more to it than that. This is a discussion of the designs in which you have two different groups, one receiving an intervention, and you test both before and after an intervention, if given, occurred. There are a number of ways to create permutation tests with factorial analyses of variance, and this document covers a number of them. Code is also given for programming in R. There are a number of very good graphical demonstrations of statistical issues available in the R package. This document shows how to download R and run those demonstrations even if you don't know R. I have had very little to say about single case studies in what I have written, but an excellent web site on this topic is available at the above link. John Crawford has published extensively in this field, and his web page is excellent. I have published several papers with John, but that really means that he does 95% of the work and I tell him how good the paper is. David C. Howell Last revised: 02/23/200910/30/2008
{"url":"http://www.uvm.edu/~dhowell/StatPages/More_Stuff/additional.html","timestamp":"2014-04-18T13:51:49Z","content_type":null,"content_length":"8460","record_id":"<urn:uuid:5a1dd4a2-75fb-45f3-b6c8-2b77f09d7605>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00341-ip-10-147-4-33.ec2.internal.warc.gz"}
Mathematical model shows how groups split into factions As changes in individual relationships spread through a group, eventually a split evolves. (PhysOrg.com) -- The school dance committee is split; one group wants an "Alice in Wonderland" theme; the other insists on "Vampire Jamboree." Mathematics could have predicted it. Social scientists have long argued that when under stress, social networks either end up all agreeing or splitting into two opposing factions. Either condition is referred to as "structural balance." New Cornell research has generated a mathematical description of how this evolves. Previous mathematical approaches to structural balance have proven that when conditions are right, the result of group conflict will be a split into just two groups, the researchers said. The new work shows for the first time the steps through which friendships and rivalries shift over time and who ends up on each side. This video is not supported by your browser at this time. Watch the animation. "Structural balance theory applies in situations where there's a lot of social stress -- gossip disparaging to one person, countries feeling pressure, companies competing -- where we need to make alliances and find our friends and enemies," said Cornell Ph.D. candidate Seth Marvel, lead author of a paper explaining the research published during the week of Jan. 3, 2011, in the online edition of the Proceedings of the National Academy of Sciences with co-authors Jon Kleinberg, the Tisch University Professor of Computer Science, Robert Kleinberg, assistant professor of computer science, and Steven Strogatz, the Jacob Gould Schurman Professor of Applied Mathematics. People may form alliances based on shared values, or may consider the social consequences of allying with a particular person, Marvel said. "The model shows that the latter is sufficient to divide a group into two factions," he said. The model is a simple differential equation applied to a matrix, or grid of numbers, that can represent relationships between people, nations or corporations. The researchers tested their model on a classic sociological study of a karate club that split into two groups and got results that matched what happened in real life. They plugged in data on international relations prior to World War II and got almost perfect predictions on how the Axis and Allied alliances formed. The smallest unit in such a network is a "relationship triangle," between, say, Bob and Carol and Ted, which can have four states: • They're all good friends. That's a stable or "balanced" situation. • Bob and Carol are friends, but neither gets along with Ted. That's also balanced. • Bob and Carol dislike each other, but both are friends with Ted. Ted will try to get Bob and Carol together; he will either succeed or end up alienating one or both of his friends, so the situation is unbalanced and has a tendency to change. • All three dislike each other. Various pairs will try to form alliances against the third, so this situation is also unbalanced. "Every choice has consequences for other triangles," Strogatz explained, so unbalanced triangles kick off changes that propagate through the entire system. Too often the final state consists of two groups, each with all positive connections among themselves and all negative connections with members of the opposite group. Social groups can be broken down to "relationship triangles" with four possibilities: A) A, B and C are mutual friends: balanced. B) A is friends with B and C, but they don't get along with each other: not balanced. C) A and B are friends with C as a mutual enemy: balanced. D) A, B, and C are mutual enemies: not balanced. Unbalanced triangles set off changes that spread through the group. From "Networks" by Jon Kleinberg and David Easley Is there a way to avoid the mathematical certainty of a split, perhaps to make Republicans and Democrats in Congress less polarized? It depends on the initial conditions, Marvel said. The model shows that if the "mean friendliness" -- the average strength of connections across the entire network is positive, the system evolves to a single, all-positive pattern. "The model shows how to influence the result, but it doesn't tell you how to get there," Kleinberg cautioned. Marvel plans to test the model on other social networks, and perhaps work with psychologists to do lab experiments with human subjects. But he too cautions against leaning too heavily on the equations for practical advice. "This is a simple model and deterministic, and people aren't deterministic," he said. The research was supported in part by the John D. and Catherine T. MacArthur Foundation, Google, the Yahoo! Research Alliance, a Microsoft Research New Faculty Fellowship, the Air Force Office of Scientific Research and the National Science Foundation. 5 / 5 (1) Jan 04, 2011 That is actually pretty interesting and visible in everyday life, the result, in hindsight that is. This really gets into a subject from one of my favorite podcasts, Radiolab, on choice, which questions whether we are actually choosing anything. 4 / 5 (1) Jan 04, 2011 That is actually pretty interesting and visible in everyday life, the result, in hindsight that is. This really gets into a subject from one of my favorite podcasts, Radiolab, on choice, which questions whether we are actually choosing anything. I read that book "The Illusion of Conscious Will" that they referred to in the radiolab podcast (it was really good, and although the subject is difficult, the book was surprisingly easy to read and entertaining). The very next book I read was Taleb's Black Swan. I've come to a realization that what we call 'random' is really chaos. It's very possible nothing in our universe is random. We're just a big machine of equal and opposite reactions... and this probably includes human behavior. I also came to realize how little I know and how little the world knows. 5 / 5 (2) Jan 04, 2011 Uh, IIRC, this math explains why juries are a dozen, and why big committees, to get *anything* done, split into working parties... 3 / 5 (4) Jan 04, 2011 Uh, IIRC, this math explains why juries are a dozen, and why big committees, to get *anything* done, split into working parties... Which reminds me of a saying I heard 50 years ago: "That must be the answer - God is a committee"
{"url":"http://phys.org/news/2011-01-mathematical-groups-factions.html","timestamp":"2014-04-16T19:50:44Z","content_type":null,"content_length":"77598","record_id":"<urn:uuid:aa8caaae-2baa-4e4e-b512-0a89eb282d2c>","cc-path":"CC-MAIN-2014-15/segments/1398223210034.18/warc/CC-MAIN-20140423032010-00283-ip-10-147-4-33.ec2.internal.warc.gz"}
[SOLVED] Double angle identities February 25th 2009, 12:51 AM [SOLVED] Double angle identities Stuck on all of these and can't see to get my answers anywhere close to the answers a) Express $\frac{\sin2x}{1 - 2\cos2x}$ in terms of $\tan(x)$ b) Prove the identity $\frac{1 + \sin2A - \cos2A}{1 + \sin2A + \cos2A} \equiv \tan(A)$ and finally c) Given that $\cos2A = \tan^{2}x$ show that $\cos2x = \tan^{2}A$ February 25th 2009, 01:36 AM $sin(2x) =2sin(x)cos(x)$ $cos(2x) = cos^2(x) -sin^2(x)$ $\frac{\sin2x}{1 - 2\cos2x}$ $=\frac{2sin(x)cos(x)}{1-2(cos^2(x) -sin^2(x))}$ Divide numerator and denominator by $cos^2(x)$ $\frac{\frac{2sin(x)}{cos(x)}} {\frac{1}{cos^2(x)}-\frac{2cos^2(x)}{cos^2(x)}+\frac{2sin^2(x)}{cos^2( x)}}$ I wont tell you that $tan(x)= \frac{sin(x)}{cos(x)}$ and that $tan^2(x)+1 = sec^2(x)$ Neither will I tell you that your answer is $\frac{2tan(x)} {1+tan^2(x)-2+2tan^2(x)}$ $= ~\frac{2tan(x)}{-1+3tan^2(x)}$ February 25th 2009, 01:50 AM Using the formula We will get numerator = $1 +2sin(A)cos(A) - cos^2(A) + sin^2(A)$ Now we use $1-cos^2(x) = sin^2(x)$ So numerator $= 2sin^2(A) +2sin(A)cos(A) = 2sin(A)[sin(A) +cos(A)]$ Similarly Denominator will become $1 +2sin(A)cos(A) +cos^2(A)-sin^2(A)$ $= 2cos^2(A) +2sin(A)cos(A)$ $= 2cos(A) (sin(A) +cos(A))$ So our term is now $=\frac{2sin(A)(sin(A) +cos(A))}{2cos(A)(sin(A) +cos(A))}$ Now cancel the terms common in numerator and denominator And that's the end of the Second Show(Party)(Dance) February 25th 2009, 02:05 AM $cos(2A) = cos^2(A) - sin^2(A) = 2cos^2(x) - 1$ Hence equation now is $2cos^2(A) - 1 = tan^2(x)$ $2cos^2(A)= sec^2(x)$ $cos^2(A) = sec^2(x)/2$ Hence $sec^2(A) = \frac{2}{sec^2(x) } = 2 cos^2(x)$ $sec^2(A)=2 cos^2(x)$ Substract one from both sides And (Dance)
{"url":"http://mathhelpforum.com/trigonometry/75660-solved-double-angle-identities-print.html","timestamp":"2014-04-18T14:39:31Z","content_type":null,"content_length":"14419","record_id":"<urn:uuid:b93d3b16-f319-4745-9853-4f725229d2e9>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00057-ip-10-147-4-33.ec2.internal.warc.gz"}
Hardness of Approximation G22.3033-007 Spring 2008 Probabilistically Checkable Proofs and Hardness of Approximation General Information and Announcements Assignment 1: Due Monday, May 5^th, 3:30pm. Course Summary Many optimization problems of theoretical and practical interest are NP-complete, meaning it is impossible to compute exact solutions to these problems in polynomial time unless P = NP. A natural way to cope with this curse of NP-completeness is to seek approximate solutions instead of exact solutions. An algorithm with approximation ratio C computes, for every problem instance, a solution whose cost is within a factor C of the optimum. Optimization problems exhibit a wide range of behavior in their approximability. It is well-known that Bin-Packing has an approximation algorithm with ratio 1+\epsilon for every \epsilon > 0. In theory jargon, we say that Bin-Packing has a polynomial-time approximation scheme (PTAS). However, it wasn't known till the early 90s whether problems like MAX-3SAT, Vertex Cover, and MAX-CUT have a PTAS. A celebrated result called the PCP Theorem finally showed that these problems have no PTAS unless P = NP. Such results that rule out the possibility of good approximation algorithms (under complexity theoretic assumptions like P != NP) are called inapproximability results or hardness of approximation results. The PCP Theorem has an equivalent formulation from the point of view of proof checking. The PCP theorem states that every NP-statement has a probabilistically checkable proof, i.e. a proof which can be "spot-checked" by reading only a constant number of bits from the proof. These bits are selected by a randomized process using a very limited amount of randomness. The checking process always accepts a correct proof of a correct statement and rejects any cheating proof of an incorrect statement with high probability. The term "holographic proof" is sometimes used to highlight this feature that a cheating proof must be wrong everywhere and therefore, can be detected by a spot-check. The discovery of the connection between proof checking and inapproximability results is one of the most exciting theoretical developments in the last decade. Since then, PCPs have led to several breakthrough results in inapproximability theory, e.g. tight hardness results for Clique, MAX-3SAT, and Set Cover. This course will cover many of the inapproximability results and PCPs used to prove them. No prior knowledge will be assumed, except the basic theory of NP-completeness. Participants are expected to scribe notes for one lecture, but this is optional (let this not deter you from taking the course). No assignments/exams ! Administrative Information Lectures: Mon 1:30-3:20 (WWH 513) Professor: Subhash Khot – Off 821, Ph: 212-998-4859 Template latex files for scribe-notes can be found here (stolen from Sanjeev Arora's course at Princeton). Course Syllabus Here is a tentative list of topics, not necessarily in the order of presentation. · PCP Theorem: Original proof. Low degree testing, Linearity testing · PCP Theorem: Dinur’s proof. · Long codes, Hastad's 3-bit PCP, Hardness of MAX-3SAT · Hardness of Set Cover, Closest Vector Problem · Hardness of Clique, FGLSS Reduction · Hardness of Edge Disjoint Paths · Hardness of Shortest Vector Problem · Hardness of Minimum Distance of Code · Hardness of Asymmetric k-Center Problem · Hardness of Hypergraph Vertex Cover · Unique Games Conjecture and its consequences · Integrality Ratio (for MAX-CUT, Asymmetric TSP, Sparsest Cut,....) Lecture notes for the same course I taught at Georgia Tech (first few): Lecture 1 (Basic definitions) Lecture 2 (Equivalence of PCP Theorem to inapproximability of MAX-3SAT) Lecture 3 (Hardness of Set Cover) Lecture 4 (Hastad’s 3-bit PCP) Lecture 5 (Hastad’s 3-bit PCP continued) Lecture 6 (Hardness of Minimum distance of code) Lecture 7 (Hardness of Clique, FGLSS) Lecture notes for similar course taught by Venkatesan Guruswami and Ryan O’donnell at U. Washington can be found here. Check out Dinur’s proof of PCP Theorem. PCP literature is extensive and often very technical. Here are good places to check out. ● Luca Trevisan's recent survey can be found here . ● Survey by Sanjeev Arora and Carsten Lund (chapter in a book) . Several open problems are no longer open. ● Sanjeev's PhD Thesis for the proof of PCP Theorem. ● Vijay Vazirani's book on Approximation Algorithms. ● Most papers are available from authors' webpages.
{"url":"http://www.cs.nyu.edu/~khot/pcp-course.html","timestamp":"2014-04-20T03:13:17Z","content_type":null,"content_length":"49733","record_id":"<urn:uuid:29d5adb8-0ae0-4004-bf13-870f5269f73f>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00444-ip-10-147-4-33.ec2.internal.warc.gz"}
4.2 Starling's Hypothesis Fluid Physiology 4.2 Starling's Hypothesis Starling Forces and Factors A quote from Starling (1896) "... there must be a balance between the hydrostatic pressure of the blood in the capillaries and the osmotic attraction of the blood for the surrounding fluids. " " ... and whereas capillary pressure determines transudation, the osmotic pressure of the proteids of the serum determines absorption." Starling’s hypothesis states that the fluid movement due to filtration across the wall of a capillary is dependent on the balance between the hydrostatic pressure gradient and the oncotic pressure gradient across the capillary. The four Starling’s forces are: • hydrostatic pressure in the capillary (Pc) • hydrostatic pressure in the interstitium (Pi) • oncotic pressure in the capillary (pc ) • oncotic pressure in the interstitium (pi ) The balance of these forces allows calculation of the net driving pressure for filtration. Net Driving Pressure = [ ( Pc - Pi ) - ( pc - pi ) ] Net fluid flux is proportional to this net driving pressure. In order to derive an equation to measure this fluid flux several additional factors need to be considered: • the reflection coefficient • the filtration coefficient (Kf ) An additional point to note here is that the capillary hydrostatic pressure falls along the capillary from the arteriolar to the venous end and the driving pressure will decrease (& typically becomes negative) along the length of the capillary. The other Starling forces remain constant along the capillary. The reflection coefficient can be thought of as a correction factor which is applied to the measured oncotic pressure gradient across the capillary wall. Consider the following: The small leakage of proteins across the capillary membrane has two important effects: • the interstitial fluid oncotic pressure is higher then it would otherwise be. • not all of the protein present is effective in retaining water so the effective capillary oncotic pressure is lower than the measured oncotic pressure (in the same way that there is a difference between osmolality and tonicity). Both these effects decrease the oncotic pressure gradient. The interstitial oncotic pressure is accounted for as its value is included in the calculation of the gradient. The reflection coefficient is used to correct the magnitude of the measured gradient to take account of the ‘effective oncotic pressure’. It can have a value from 0 to 1. For example, CSF & the glomerular filtrate have very low protein concentrations and the reflection coefficient for protein in these capillaries is close to 1. Proteins cross the walls of the hepatic sinusoids relatively easily and the protein concentration of hepatic lymph is very high. The reflection coefficient for protein in the sinusoids is low. The reflection coefficient in the pulmonary capillaries is intermediate in value: about 0.5. Starling Equation The net fluid flux (due to filtration) across the capillary wall is proportional to the net driving pressure. The filtration coefficient (Kf) is the constant of proportionality in the flux equation which is known as the Starling’s equation. The filtration coefficient consists of two components as the net fluid flux is dependent on: • the area of the capillary walls where the transfer occurs • the permeability of the capillary wall to water. (This permeability factor is usually considered in terms of the ‘hydraulic conductivity’ of the wall.) The filtration coefficient is the product of these two components: Kf = Area x Hydraulic conductivity A ‘leaky’ capillary (eg due to histamine) would have a high filtration coefficient. The glomerular capillaries are naturally very leaky as this is necessary for their function; they have a high filtration coefficient. │ Typical values of Starling Forces in Systemic Capillaries (mmHg) │ │ │ Arteriolar end of capillary │ Venous end of capillary │ │ Capillary hydrostatic pressure │ 25 │ 10 │ │ Interstitial hydrostatic pressure │ -6 │ -6 │ │ Capillary oncotic pressure │ 25 │ 25 │ │ Interstitial oncotic pressure │ 5 │ 5 │ The net driving pressure is outward at the arteriolar end and inward at the venous end of the capillary. This change in net driving pressure is due to the decrease in the capillary hydrostatic pressure along the length of the capillary. The values quoted in various sources vary but most authors ‘adjust’ the values to ensure the net gradients are in the appropriate direction they wish to show. The method (used in some sources) of just summing the various forces takes no account of the reflection coefficient. The values for hydrostatic pressure are not fixed and vary quite widely in different tissues and indeed within the same tissue. Contraction of precapillary sphincters and/or arterioles can drop the capillary hydrostatic pressure quite low and the capillary will close. When first measured by Landis in 1930 in a capillary loop in a finger held at heart level, the hydrostatic pressures found were 32 mmHg at the arteriolar end and 12 mmHg at the venous end. The later discovery of negative values for interstitial hydrostatic pressure by Guyton did upset the status quo a bit. The Starling equation cannot be used quantitatively in clinical work. To actually use the Starling equation clinically requires measurement of six unknowns. This is simply not possible and this limits the usefulness of the equation in patient care. It can be used in a general way to explain observations (eg to explain generalised oedema as due to hypoalbuminaemia). Special Cases of Starling's Equation The microcirculations of the kidney, the lung and the brain are special cases in the use of the Starling equation and are considered in the next three sections.
{"url":"http://www.anaesthesiamcq.com/FluidBook/fl4_2.php","timestamp":"2014-04-17T12:47:05Z","content_type":null,"content_length":"8565","record_id":"<urn:uuid:db76a716-f860-4e91-b6cd-293ca2c0f276>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00226-ip-10-147-4-33.ec2.internal.warc.gz"}
Mathematical model shows how groups split into factions As changes in individual relationships spread through a group, eventually a split evolves. (PhysOrg.com) -- The school dance committee is split; one group wants an "Alice in Wonderland" theme; the other insists on "Vampire Jamboree." Mathematics could have predicted it. Social scientists have long argued that when under stress, social networks either end up all agreeing or splitting into two opposing factions. Either condition is referred to as "structural balance." New Cornell research has generated a mathematical description of how this evolves. Previous mathematical approaches to structural balance have proven that when conditions are right, the result of group conflict will be a split into just two groups, the researchers said. The new work shows for the first time the steps through which friendships and rivalries shift over time and who ends up on each side. This video is not supported by your browser at this time. Watch the animation. "Structural balance theory applies in situations where there's a lot of social stress -- gossip disparaging to one person, countries feeling pressure, companies competing -- where we need to make alliances and find our friends and enemies," said Cornell Ph.D. candidate Seth Marvel, lead author of a paper explaining the research published during the week of Jan. 3, 2011, in the online edition of the Proceedings of the National Academy of Sciences with co-authors Jon Kleinberg, the Tisch University Professor of Computer Science, Robert Kleinberg, assistant professor of computer science, and Steven Strogatz, the Jacob Gould Schurman Professor of Applied Mathematics. People may form alliances based on shared values, or may consider the social consequences of allying with a particular person, Marvel said. "The model shows that the latter is sufficient to divide a group into two factions," he said. The model is a simple differential equation applied to a matrix, or grid of numbers, that can represent relationships between people, nations or corporations. The researchers tested their model on a classic sociological study of a karate club that split into two groups and got results that matched what happened in real life. They plugged in data on international relations prior to World War II and got almost perfect predictions on how the Axis and Allied alliances formed. The smallest unit in such a network is a "relationship triangle," between, say, Bob and Carol and Ted, which can have four states: • They're all good friends. That's a stable or "balanced" situation. • Bob and Carol are friends, but neither gets along with Ted. That's also balanced. • Bob and Carol dislike each other, but both are friends with Ted. Ted will try to get Bob and Carol together; he will either succeed or end up alienating one or both of his friends, so the situation is unbalanced and has a tendency to change. • All three dislike each other. Various pairs will try to form alliances against the third, so this situation is also unbalanced. "Every choice has consequences for other triangles," Strogatz explained, so unbalanced triangles kick off changes that propagate through the entire system. Too often the final state consists of two groups, each with all positive connections among themselves and all negative connections with members of the opposite group. Social groups can be broken down to "relationship triangles" with four possibilities: A) A, B and C are mutual friends: balanced. B) A is friends with B and C, but they don't get along with each other: not balanced. C) A and B are friends with C as a mutual enemy: balanced. D) A, B, and C are mutual enemies: not balanced. Unbalanced triangles set off changes that spread through the group. From "Networks" by Jon Kleinberg and David Easley Is there a way to avoid the mathematical certainty of a split, perhaps to make Republicans and Democrats in Congress less polarized? It depends on the initial conditions, Marvel said. The model shows that if the "mean friendliness" -- the average strength of connections across the entire network is positive, the system evolves to a single, all-positive pattern. "The model shows how to influence the result, but it doesn't tell you how to get there," Kleinberg cautioned. Marvel plans to test the model on other social networks, and perhaps work with psychologists to do lab experiments with human subjects. But he too cautions against leaning too heavily on the equations for practical advice. "This is a simple model and deterministic, and people aren't deterministic," he said. The research was supported in part by the John D. and Catherine T. MacArthur Foundation, Google, the Yahoo! Research Alliance, a Microsoft Research New Faculty Fellowship, the Air Force Office of Scientific Research and the National Science Foundation. 5 / 5 (1) Jan 04, 2011 That is actually pretty interesting and visible in everyday life, the result, in hindsight that is. This really gets into a subject from one of my favorite podcasts, Radiolab, on choice, which questions whether we are actually choosing anything. 4 / 5 (1) Jan 04, 2011 That is actually pretty interesting and visible in everyday life, the result, in hindsight that is. This really gets into a subject from one of my favorite podcasts, Radiolab, on choice, which questions whether we are actually choosing anything. I read that book "The Illusion of Conscious Will" that they referred to in the radiolab podcast (it was really good, and although the subject is difficult, the book was surprisingly easy to read and entertaining). The very next book I read was Taleb's Black Swan. I've come to a realization that what we call 'random' is really chaos. It's very possible nothing in our universe is random. We're just a big machine of equal and opposite reactions... and this probably includes human behavior. I also came to realize how little I know and how little the world knows. 5 / 5 (2) Jan 04, 2011 Uh, IIRC, this math explains why juries are a dozen, and why big committees, to get *anything* done, split into working parties... 3 / 5 (4) Jan 04, 2011 Uh, IIRC, this math explains why juries are a dozen, and why big committees, to get *anything* done, split into working parties... Which reminds me of a saying I heard 50 years ago: "That must be the answer - God is a committee"
{"url":"http://phys.org/news/2011-01-mathematical-groups-factions.html","timestamp":"2014-04-16T19:50:44Z","content_type":null,"content_length":"77598","record_id":"<urn:uuid:aa8caaae-2baa-4e4e-b512-0a89eb282d2c>","cc-path":"CC-MAIN-2014-15/segments/1398223203235.2/warc/CC-MAIN-20140423032003-00283-ip-10-147-4-33.ec2.internal.warc.gz"}
2 Conics Questions I don't get! May 24th 2009, 06:27 PM #1 Super Member Nov 2008 2 Conics Questions I don't get! Can someone please explain the last two questions of my homework please because I really don't get it. Any help would be greatly appreciated! Thanks in advance! Set up a coordinate system so that the origin is at the center of the barbell, x-axis horizontal, y-axis vertical. The vertices of the hyperbola are at (-2, 0) and (2, 0). Further, the foci are at (-3, 0) and (3, 0) because the lie on vertical lines connecting the ends of radii of the hemispheres. For the hyperbola [tex]\frac{x^2}{a^2}- \frac{y^2}{b^2}= 1, the distance from the center to each focus is given by $f^2= a^2+ b^2$ so $9= 4+ b^2$ and $b= \sqrt{5}$. The equation of the hyperbola is $\frac{x^2}{4}- \frac{y^2}{5}= 1$. At the ends of the hyperbolic section, x= 3 and -3, the same as for the foci which are on those lines. At those points $\frac{9}{4}- \frac{y^2}{5}= 1$. Solve for y. The length of the hyperbolic section is 2y because of the symmetry, and the entire length is that plus the two hemispheric radii: 2y+ 6. May 25th 2009, 03:11 AM #2 MHF Contributor Apr 2005
{"url":"http://mathhelpforum.com/math-topics/90345-2-conics-questions-i-don-t-get.html","timestamp":"2014-04-19T05:58:51Z","content_type":null,"content_length":"34666","record_id":"<urn:uuid:66480632-699c-4eed-9136-7d8a66f9f4a1>","cc-path":"CC-MAIN-2014-15/segments/1397609535775.35/warc/CC-MAIN-20140416005215-00450-ip-10-147-4-33.ec2.internal.warc.gz"}
Mathematical model shows how groups split into factions As changes in individual relationships spread through a group, eventually a split evolves. (PhysOrg.com) -- The school dance committee is split; one group wants an "Alice in Wonderland" theme; the other insists on "Vampire Jamboree." Mathematics could have predicted it. Social scientists have long argued that when under stress, social networks either end up all agreeing or splitting into two opposing factions. Either condition is referred to as "structural balance." New Cornell research has generated a mathematical description of how this evolves. Previous mathematical approaches to structural balance have proven that when conditions are right, the result of group conflict will be a split into just two groups, the researchers said. The new work shows for the first time the steps through which friendships and rivalries shift over time and who ends up on each side. This video is not supported by your browser at this time. Watch the animation. "Structural balance theory applies in situations where there's a lot of social stress -- gossip disparaging to one person, countries feeling pressure, companies competing -- where we need to make alliances and find our friends and enemies," said Cornell Ph.D. candidate Seth Marvel, lead author of a paper explaining the research published during the week of Jan. 3, 2011, in the online edition of the Proceedings of the National Academy of Sciences with co-authors Jon Kleinberg, the Tisch University Professor of Computer Science, Robert Kleinberg, assistant professor of computer science, and Steven Strogatz, the Jacob Gould Schurman Professor of Applied Mathematics. People may form alliances based on shared values, or may consider the social consequences of allying with a particular person, Marvel said. "The model shows that the latter is sufficient to divide a group into two factions," he said. The model is a simple differential equation applied to a matrix, or grid of numbers, that can represent relationships between people, nations or corporations. The researchers tested their model on a classic sociological study of a karate club that split into two groups and got results that matched what happened in real life. They plugged in data on international relations prior to World War II and got almost perfect predictions on how the Axis and Allied alliances formed. The smallest unit in such a network is a "relationship triangle," between, say, Bob and Carol and Ted, which can have four states: • They're all good friends. That's a stable or "balanced" situation. • Bob and Carol are friends, but neither gets along with Ted. That's also balanced. • Bob and Carol dislike each other, but both are friends with Ted. Ted will try to get Bob and Carol together; he will either succeed or end up alienating one or both of his friends, so the situation is unbalanced and has a tendency to change. • All three dislike each other. Various pairs will try to form alliances against the third, so this situation is also unbalanced. "Every choice has consequences for other triangles," Strogatz explained, so unbalanced triangles kick off changes that propagate through the entire system. Too often the final state consists of two groups, each with all positive connections among themselves and all negative connections with members of the opposite group. Social groups can be broken down to "relationship triangles" with four possibilities: A) A, B and C are mutual friends: balanced. B) A is friends with B and C, but they don't get along with each other: not balanced. C) A and B are friends with C as a mutual enemy: balanced. D) A, B, and C are mutual enemies: not balanced. Unbalanced triangles set off changes that spread through the group. From "Networks" by Jon Kleinberg and David Easley Is there a way to avoid the mathematical certainty of a split, perhaps to make Republicans and Democrats in Congress less polarized? It depends on the initial conditions, Marvel said. The model shows that if the "mean friendliness" -- the average strength of connections across the entire network is positive, the system evolves to a single, all-positive pattern. "The model shows how to influence the result, but it doesn't tell you how to get there," Kleinberg cautioned. Marvel plans to test the model on other social networks, and perhaps work with psychologists to do lab experiments with human subjects. But he too cautions against leaning too heavily on the equations for practical advice. "This is a simple model and deterministic, and people aren't deterministic," he said. The research was supported in part by the John D. and Catherine T. MacArthur Foundation, Google, the Yahoo! Research Alliance, a Microsoft Research New Faculty Fellowship, the Air Force Office of Scientific Research and the National Science Foundation. 5 / 5 (1) Jan 04, 2011 That is actually pretty interesting and visible in everyday life, the result, in hindsight that is. This really gets into a subject from one of my favorite podcasts, Radiolab, on choice, which questions whether we are actually choosing anything. 4 / 5 (1) Jan 04, 2011 That is actually pretty interesting and visible in everyday life, the result, in hindsight that is. This really gets into a subject from one of my favorite podcasts, Radiolab, on choice, which questions whether we are actually choosing anything. I read that book "The Illusion of Conscious Will" that they referred to in the radiolab podcast (it was really good, and although the subject is difficult, the book was surprisingly easy to read and entertaining). The very next book I read was Taleb's Black Swan. I've come to a realization that what we call 'random' is really chaos. It's very possible nothing in our universe is random. We're just a big machine of equal and opposite reactions... and this probably includes human behavior. I also came to realize how little I know and how little the world knows. 5 / 5 (2) Jan 04, 2011 Uh, IIRC, this math explains why juries are a dozen, and why big committees, to get *anything* done, split into working parties... 3 / 5 (4) Jan 04, 2011 Uh, IIRC, this math explains why juries are a dozen, and why big committees, to get *anything* done, split into working parties... Which reminds me of a saying I heard 50 years ago: "That must be the answer - God is a committee"
{"url":"http://phys.org/news/2011-01-mathematical-groups-factions.html","timestamp":"2014-04-16T19:50:44Z","content_type":null,"content_length":"77598","record_id":"<urn:uuid:aa8caaae-2baa-4e4e-b512-0a89eb282d2c>","cc-path":"CC-MAIN-2014-15/segments/1397609524644.38/warc/CC-MAIN-20140416005204-00283-ip-10-147-4-33.ec2.internal.warc.gz"}
Patent US6192336 - Method and system for searching for an optimal codevector The present invention relates to lossy coding systems, and more particularly to vector quantization for use in codebook-based coding systems. Digital speech processing typically can serve several purposes in computers. In some systems, speech signals are merely stored and transmitted. Other systems employ processing that enhances speech signals to improve the quality and intelligibility. Further, speech processing is often utilized to generate or synthesize waveforms to resemble speech, to provide verification of a speaker's identity, and/or to translate speech inputs into written outputs. In some speech processing systems, speech coding is performed to reduce the amount of data required for signal representation, often with analysis by synthesis adaptive predictive coders, including various versions of vector or code-excited coders. In the predictive systems, models of the vocal cord shape, i.e., the spectral envelope, and the periodic vibrations of the vocal cord, i.e., the spectral fine structure of speech signals, are typically utilized and efficiently performed through slowly, time-varying linear prediction filters. These models typically utilize parameters to replicate as closely as possible the original speech signal. There tends to be numerous parameters involved in such modeling. Compression schemes are often employed to reduce the number of parameters requiring transmission in the processing system. One such technique is known as vector quantization. Generally, vector quantization schemes, whether in speech processing or other large data modeling systems, such as image processing systems, employ a codebook or vocabulary of codevectors, and an index to the codebook. An optimal codevector in the codebook relative to an input vector is usually determined through intensive computations. An index to the optimal codevector is then transmitted. Thus, vector quantization effectively reduces the amount of information transmitted by transmitting only an indexed reference to the codevector, rather than the entire codevector. Unfortunately, the intensive computations typically involved in the determination of the optimal codevector are time-consuming and thus expensive. Accordingly, a need exists for a more efficient search strategy in vector quantization. The present invention addresses such a need and provides method and system aspects for searching for an optimal codevector from a plurality of codevectors in a codebook, the optimal codevector having a minimum weighted distance to a given vector. The preferred aspects determine a partial distance with a current vector component of a current codevector and the corresponding component of the given vector, compare the partial distance to a saved “renormalized” minimum partial distance, and proceed to a next codevector when the saved renormalized minimum partial distance is smaller than the partial distance. In addition, the present invention proceeds to a next vector component when the partial distance is smaller than the corresponding saved renormalized minimum partial distance. When the partial distance computed with each next vector component is smaller than the saved renormalized minimum partial distance, the present invention calculates a full weighted distance value. Further, the weighted full distance is compared to a saved minimum full weighted distance, and when the full weighted distance is smaller than the saved minimum full weighted distance, an optimal index to the current codevector is updated, the saved minimum full weighted distance is updated to the full weighted distance, and the new renormalized minimum partial distances are determined and stored. The operation then continues with a next codevector until all codevectors have been used. An optimal index to identify the optimal codevector is then returned when all codevectors in the codebook have been used. With the present invention, an advantageous computation of partial distance values reduces the computational load of searching for an optimal codevector in a given codebook. Further, the present invention achieves such advantage without restricting the codebook to a predefined structure or performing preliminary processing to determine a structure for the codebook. These and other advantages of the aspects of the present invention will be more fully understood in conjunction with the following detailed description and accompanying drawings. FIG. 1 illustrates a flow diagram of a renormalized, weighted vector quantization search strategy of the prior art. FIG. 2 illustrates a flow diagram of a weighted vector quantization search strategy in accordance with a preferred embodiment of the present invention. FIG. 3 illustrates a block diagram of a computer system capable of facilitating the implementation of the preferred embodiment. The present invention relates to weighted vector quantization for use in lossy coding systems, e.g., audio and video processing system. The following description is presented to enable one of ordinary skill in the art to make and use the invention and is provided in the context of a patent application and its requirements. Various modifications to the preferred embodiment will be readily apparent to those skilled in the art and the generic principles herein may be applied to other embodiments. Thus, the present invention is not intended to be limited to the embodiment shown but is to be accorded the widest scope consistent with the principles and features described herein. For weighted vector quantization based coding systems, typically a given, N-dimensional vector, ‘v’, is approximated by a coded vector, ‘u’, in a codebook of M codevectors. By way of example, the given vector, ‘v’, may be a sequence of ‘N’ linear predictive coefficients derived from an analysis of a speech segment in a speech encoding system. Each codevector ‘u’ in the codebook is indexed by an integer index, j. In choosing the codevector ‘u’ that best approximates the given vector ‘v’, a weighted quadratic distance, D^2, to the given vector ‘v’ is determined via $D 2 = ∑ i = 0 N - 1 w i ( u i - v i ) 2 ;$ equation (1), where {v[i]} are the components of the given vector ‘v’, {u[i]} are the components of the codevector ‘u’, and {w[i]} are non-negative weights chosen prior to and appropriate for the search, as is well understood by those skilled in the art. The best codevector is thus the codevector that minimizes the weighted quadratic distance to the given vector ‘v’, and is conventionally determined using a direct search strategy. FIG. 1 illustrates a typical direct search strategy. The method begins with an initialization of an index variable, j, and a minimum weighted distance variable, D^2 [o ](step 100). The method continues with a determination of the weighted distance, D^2, (step 102 ) by substituting the appropriate values into equation (1). The variable u^j [i], as shown in FIG. 1, suitably represents the ith component of the jth codevector. Once the computation is completed, a comparison between the computed weighted distance value, D^2, and the stored minimum weighted distance value, D^2 [o], is performed (step 104). When the computed weighted distance value is smaller than the stored minimum weighted distance value, the minimum weighted distance value is updated to the computed weighted distance value and a variable for the position of the best index value, j[best], is updated to the current index value j (step 106). When the computed weighted distance value is not smaller than the stored minimum weighted distance value or upon completion of the updating, the index value is incremented (step 108). The process then continues with steps 102, 104, 106, and 108, until all M codevectors in the codebook have been tested (step 110). Once all of the codevectors have been tested, the value stored as j[best ]is returned (step 112) to point to the optimal codevector in the codebook that minimizes the weighted distance to ‘v’. While such methods do return an optimal codevector from a codebook for minimizing the weighted distance to the current vector, the heavy computation involved in performing the weighted distance calculation of every codevector is highly time-consuming. Some schemes have been proposed that reduce the burden such computations produce and are described in “Vector Quantization and Signal Compression”, Gersho, A., et al., Kluwer Academic Publishers, Boston, 1995. The schemes generally fall into one of three categories. These categories comprise a category for those schemes that constrain the codebook to have a predefined structure and are not tolerant of a given codebook that cannot be constrained, a category for those schemes that, for a given codebook, determine some structure for the codebook in an initial phase and have significant memory requirements to memorize the structure, and a category for those schemes that simply accept any codebook and do not perform any analysis of the codebook. The present invention suitably belongs to third category and capably achieves a reduction in the average computational expense for a given codebook in a distinctive and effective manner. FIG. 2 illustrates a preferred embodiment of the present invention. In general, the present invention recognizes that if the partial distance d[i]=(u[i]-v[i]) is greater than or equal to C/w[i], where C is a constant value, then the weighted distance, as defined by equation (1), will be greater than or equal to C, deriving immediately from the fact that the weights {w[i]} are all non-negative. Hence, if the minimum weighted distance found while testing a number of codevectors is D^2 [o], and for the next codevector being tested, the ith partial distance d[i ]is greater than or equal to the “renormalized” minimum partial distance A[i], where A[i]=D^2 [o]/w[i], the codevector is not the best one. Thus, the preferred embodiment proceeds with an initialization of several variables including a variable j, representing an index to the codebook, a set of variables {A[i]}, representing the renormalized minimum partial distance values, a variable D^2 [o ]representing the minimum weighted distance, and a computation of the values for the inverse of each of the weights, w[i], (step 200). A variable i, representing an index of the vector component, is set equal to zero (step 202). A partial distance value d[i ]is then computed (step 204), as shown by the equation in FIG. 2. A comparison between the computed partial distance value and the stored renormalized minimum partial distance value is then suitably performed (step 206). When the computed partial distance value is not smaller than the stored renormalized minimum partial distance value, the index of the codevector is incremented (step 208), so that a next codevector can be tested. When the computed partial distance is smaller than the stored renormalized minimum partial distance, the index to the component of the codevector is incremented (step 210), and a determination of whether all of the ‘N’ components of the codevector have been tested is made (step 212). If all of the components have not been tested, the process continues from step 204 with a partial distance calculation with a next component. When all the components have been tested and all of the partial distances are smaller than the stored minimum partial distance, the process suitably continues by computing a full weighted distance value, D^2, (step 214) by summing together the partial distances, after multiplication with the corresponding weights, as shown by FIG. 2. While each partial distance may be smaller than the corresponding renormalized partial distance, the codevector may still not provide a weighted distance smaller than D^2 [o ]when the weight values are taken more fully into consideration in the full weighted distance computation. Thus, a comparison is then appropriately performed between the computed full weighted distance and the stored minimum weighted distance value (step 216). When the full weighted distance is smaller than the stored minimum weighted distance, the full weighted distance value updates the stored minimum weighted distance value, the index of the codevector updates a best index variable, j[best], and the renormalized minimum partial distance values are updated by multiplying the full weighted distance by the computed inverse weight values (step 218). When the full weighted distance is not smaller than the stored minimum weighted distance D^2 [o], or when the updating (step 218) is completed, the process continues with the next codevector (step 208). When all of the M codevectors in the codebook have been tested (step 220), the index of the best codevector, j[best], is returned (step 222), and the process is completed. Comparing the number of computations between the direct search strategy, as described with reference to FIG. 1, and the search strategy of the present invention, as described with reference to FIG. 2, illustrates more particularly the advantages of the present invention. For the direct search strategy, the computation of D^2 with one vector having ‘N’ parameters involves 2N multiplications and (2N−1) sums. Thus, for a codebook with ‘M’ codevectors, the direct approach requires 2NM multiplications, (2N−1)M additions, and (M−1) comparisons. For the approach of the present invention, the number of operations depends on many factors, including how close the first codevectors being tested are to the input vector. An average number of calculations can be determined, however, as is well understood by those skilled in the art, for use in the comparison of the computational burden of each of the approaches. Thus, on average, the present invention requires approximately (13/20 NM +N log[2]M) multiplications, (13/20 NM −M/10) sums, and (11/20 NM +M/10) comparisons. Of course, there are also ‘N’ divisions at the beginning of the search to determine the value for 1/w[i], however, the cost of those divisions is often split over the approximation of several vectors when the same weights are used. With a system that has vectors characterizing 10 parameters (N=10) and a codebook with 256 codevector entries (M=256), the direct search strategy requires 5120 multiplications, 4864 additions, and 255 comparisons. For the present invention under such circumstances, on average, the strategy requires approximately 1744 multiplications, 1639 sums, and 1433 comparisons. Thus, a clear advantage is achieved through the present invention to reduce the computational burden of performing searches in weighted vector quantization. Such advantageous determinations are suitably performed by and implemented in a computer system, e.g., the computer system of FIG. 3, which illustrates a block diagram of a computer system capable of coordinating the vector quantization search strategy in accordance with the present invention. Included in the computer system are a central processing unit (CPU) 310, coupled to a bus 311 and interfacing with one or more input devices 312, including a cursor control/mouse/stylus device, keyboard, and speech/sound input device, such as a microphone, for receiving speech signals. The computer system further includes one or more output devices 314, such as a display device/monitor, sound output device/speaker, printer, etc, and memory components, 316, 318, e.g., RAM and ROM, as is well understood by those skilled in the art. Of course, other components, such as A/D converters, digital filters, etc., are also suitably included for speech signal generation of digital speech signals, e.g., from analog speech input, as is well appreciated by those skilled in the art. The computer system preferably controls operations necessary for the search strategy in weighted vector quantization of the present invention, suitably performed using a programming language, such as C, C++, and the like, and stored on an appropriate storage medium 320, such as a hard disk, floppy diskette, etc., which also suitably stores the given codebook. Although the present invention has been described in accordance with the embodiments shown, one of ordinary skill in the art will readily recognize that there could be variations to the embodiments and those variations would be within the spirit and scope of the present invention. For example, although the foregoing is described with reference to a speech processing system, other systems, such as image processing systems, that employ weighted vector quantization searching strategies are suitably improved by incorporating the features of the present invention. Accordingly, many modifications may be made by one of ordinary skill in the art without departing from the spirit and scope of the appended claims.
{"url":"http://www.google.com/patents/US6192336?dq=5,825,242","timestamp":"2014-04-18T06:32:39Z","content_type":null,"content_length":"84466","record_id":"<urn:uuid:4a67331a-36b6-44e8-8376-14c2f2f88789>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00370-ip-10-147-4-33.ec2.internal.warc.gz"}
Workshop on Intensionality in Mathematics Saturday and Sunday, May 11-12, 2013, Kungshuset, Lundagard The theme of the workshop is the mathematical and philosophical investigation of the choice of background models, with particular attention to models for computation and the choice of axiom systems for arithmetic which have great importance for the epistemology of mathematics. The specific light in which these topics will be investigated is the role of intensionality. The general aim of the workshop is to foster the dialogue among researches working on issues related to intensionality in logic, philosophy of language, philosophy of mathematics, computer science, computability theory, number theory and also study the reasons for intensionality from the cognitive science perspective. Some examples of the issues that the workshop aims at addressing include the Frege-Hilbert controversy on the axiomatic method and the distinction between syntax and semantics; structuralist versus (neo)-Fregean approaches to the choice of axioms (the existence of a mathematical structure is granted by the possibility of describing it with a coherent and hopefully categorical set of axioms, versus the idea that the first principles of a mathematical theory should capture the properties of the mathematical entities in question); to what extent does the choice of axioms determine what is further knowable about the mathematical structure which is being described; what intensional logics can better tackle intensional and epistemic paradoxes; what are logical connectives, whether there are intensional constraints on the choice of natural numbers as the domain for the formal treatment of the informal notion of effective computability, and what would be the philosophical consequences of understanding Church's Thesis on arbitrary domains. Moreover, we will discuss topics related to the formation of mathematical concept as studied in psychology or cognitive sciences, and also the usefulness of cognitive science methods in epistemology of mathematics. The workshop is a continuation of the Workshop on Philosophy and Computation held in Lund in May 2012. Immediately preceding the present event is a workshop on the Philosophy of Information and Information Quality (Friday, May 10). All welcome! However, to join us for lunches and dinners, please sign up here not later than Monday, May 6th. Practical Information Address: Kungshuset, room 318, Map Invited Speakers Titie: A Theory of Fregean Abstract Objects Abstract:In this paper, I am going to present a theory of Fregean abstract objects. The theory deploys a comprehension principle for concepts, and a comprehension principle for objects. On top of it, a principle of plural comprehension is added. These principles are tweaked to the effect that their interaction is consistent, and they also prove to be strong enough to derive as theorems several Fregean abstraction principles in their full mathematical strength. Titie: Soundness, reflection, and intensionality Abstract:It is commonly held that acceptance of the axioms of an arithmetical theory T (such as Peano arithmetic) obligate us to accept various formal expressions of T's soundness -- e.g. Con (T), or the uniform or global reflection principles for T. In this talk, I will discuss the nature of this commitment in light of several phenomena which are intensional in character: the paradoxes of Montague (1963) and Thomason (1980), Feferman's (1960) observations about the formulation of consistency statements, and Kreisel & Levy's (1968) observations about the relationship between reflection principles, mathematical induction, and the internal provability of cut elimination. Titie: On logicality, invariance, and definability Abstract: The traditional account of what a logical consequence is says that A follows logical from T if for every (re-)interpretation of the non-logical expressions in T and A; if all the sentences in T are true then so is A. This definition rests on the fact that we know how to distinguish between logical and non-logical expressions, this is the problem of identifying the logical constants. I will focus on a model theoretic approach to solve this problem: An operator is a logical constant if it is invariant under the most general transformations. Apart from giving some background I will present recent results (jointly with Denis Bonnay) on Galois correspondences between invariance and definability: The dual character of invariance under transformations and definability by some operations has been used in classical work by for example Galois and Klein. In this talk I will study this duality from a logical viewpoint and generalize results from Krasner and McGee into a full Galois correspondence of invariance under permutations and definability in L_{\infty\infty}. I will also present a similar correspondence related to definability in L_{\infty\infty}^-, the logic without equality. Titie: Mathematical intensions and intensionality in mathematics Abstract: There are a number of different research programs associated with the topic of intensionality in mathematics, as will be seen in the variety of talks in this workshop. This talk considers some of the common issues as well as some differences in approaches. In common there is a philosophical and mathematical history, as well as an interest in the epistemology of mathematics. There are significant differences, however, between the way the epistemology of mathematics is approached. I will argue that some of this variation can be associated with the decision to focus on different puzzles, or asymmetries, produced by the distinction between an intension and "its" extension. I conclude by suggesting that we still lack a satisfying account of the basic relation between mathematical intensions and extensions, that is, one that explains which intensions produce definite extensions and why. And for this, intuition is still an appealing "prop". Titie: Models for absolute provability Abstract:In this presentation I will discuss models for the notion of absolute provability. Kripke models have been the standard tools for modelling the notion of necessity. And they have also been used fairly successfully to generate models for the concept of absolute provability. Yet I will tentatively argue that branching time models are more suitable for modeling absolute Titie: Consistency statements in semi-euclidean systems Abstract: In logical systems where the axioms and rules of inference are allowed to change over time - like in Jeroslow's Experimental Logics - a reasonable definition of theoremhood makes it possible for the statement of the system's consistency to be among its theorems. The talk discusses examples of this, the intension of such statements and knowability of consistency. Titie: Modality in Mathematics Abstract: The talk will be concerned with the role that modality can play in a broadly classical mathematics and in particular with the question of how this modality can be interpreted. Titie: The intensional side of algebraic-topological representation theorems Abstract: Stone representation theorems and the like are a central ingredient in the metatheory of philosophical logics and are used to establish modal embedding results in a general but indirect and non-constructive way. Their use will be reviewed and it will be shown how, through the methods of analytic proof theory, they can be circumvented in favour of direct and constructive arguments. Titie: The First Few Numbers: How Children Learn Them and Why It Matters Abstract: Cognitive and developmental scientists since Piaget have been interested in how humans acquire number concepts. In this talk, I'll briefly review a decade of research on how children acquire the first few numbers, and explain why this topic is both interesting and important. Then I will present data on the socio-economic gap in young children's number knowledge in Orange County, California; and describe two intervention studies in progress in my lab, both designed to help kids learn numbers. I'll wrap up by discussing how the results of these studies may contribute to our understanding of the origins of number concepts. Titie: Mathematics & Logic: Between Theory & Reality Abstract: In this talk I will discuss the relation between the objects (objectual grounding, source of truth) of mathematics and logic and the form their theories take. In particular, I will examine what in reality, if anything, mathematics and logic are about (or grounded in) and how this is related to the structure of their theories. This will lead to the discovery of certain gaps between theory and objects (objectual grounds) in the two disciplines. In the case of logic, the gap has to do with the linguistic nature of our logical theory and the non-linguistic reality grounding it. In the case of mathematics it has to do with the order (level) of mathematical theories and the order (level) of their objects. I will offer a solution to these gaps and show how it enables us to tackle well-known problems in the philosophies of mathematics and logic. Contributed speakers • Staffan Angere (Bristol University) Titie: On the logic of up to Isomorphism • Michael Gabbay (King's College London) Title: Making sense of maths: a formalist theory of mathematical intension Abstract: This paper proposes a solution to the problem of the intensional content of mathematical assertions in terms of their proof theoretic properties. It is proposed that an extensional relation symbol, such as =, has extensional content given by its standard mathematical interpretation, as well as inten- sional content constituted by a set of sound formal mathematical theories. It is claimed that we can use this to give a compositional theory of the intension of a mathematical assertion that can serve, among other things, as a semantics of mathematical propositional attitude ascriptions. • Jan Heylen (University of Leuven) Title: Peano numerals as buck-stoppers Abstract: I will examine three claims made by Ackerman (1978) and Kripke (1992). First, they claim that not any arithmetical terms is eligible for universal instan- tiation and existential generalisation in doxastic or epistemic contexts. Second, Ackerman claims that Peano numerals are eligible for universal instantiation and existential generalisation in doxastic or epistemic contexts. Kripke’s position is a bit more subtle. Third, they claim that the successor relation and the smaller- than must be effectively calculable. These three claims will be examined from the framework of modal-epistemic arithmetic, i.e. arithmetic extended with cer- tain modal, epistemic and modal-epistemic principles. I will present theorems that give support to the claims made by Ackerman and Kripke. Call for Papers [Closed] We have two or three slots for contributed papers. Please submit your abstract (max. 2000 words), prepared for blind review, to paula.quinon@fil.lu.se by Monday MARCH 11 (local time). Expect decisions with two weeks. We expect to be able to cover/subsidize travel and accommodation expenses. Budgetary approval pending, participants may participate in both workshops (see above), or parts thereof. Program Committee • Denis Bonnay • Sebastian Enqvist • Alessandro Facchini • Patrick Girard • Toby Meadows • Sebastian Sequoiah-Grayson • Giulia Terzian • Sean Walsh Utskrift 2014-04-19 av http://www.fil.lu.se/forskning/konferenser/konferenser-2013/workshop-on-intensionality-in-mathematics/
{"url":"http://www.fil.lu.se/index.php?id=18879","timestamp":"2014-04-19T01:47:02Z","content_type":null,"content_length":"37487","record_id":"<urn:uuid:291efbe3-251c-4d72-b067-e43a4e434d87>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00601-ip-10-147-4-33.ec2.internal.warc.gz"}
Throwing a football, Part II In part I of this post, I talked about the basics of projectile motion with no air resistance. Also in that post, I showed that (without air resistance) the angle to throw a ball for maximum range is 45 degrees. When throwing a football, there is some air resistance this means that 45 degree is not necessarily the angle for the greatest range. Well, can’t I just do the same thing as before? It turns out that it is a significantly different problem when air resistance is added. Without air resistance, the acceleration was constant. Not so now, my friend. The problem is that air resistance depends on the velocity of the object. Search your feelings, you know this to be true. When you are driving (or riding) in a car and you stick your hand out the window, you can feel the air pushing against your hand. The faster the car moves, the greater this force. The air resistance force depends on: • Velocity of the object. The typical model used for objects like a football would depend on the direction and the square of the magnitude of the velocity. • The density of air. • The cross sectional area of the object. Compare putting an open hand out the car window to a closed fist out the car window. • Some air drag coefficient. Imagine a cone and a flat disk, both with the same radius (and thus same cross sectional area). These two objects would have different air resistances due to the shape, this is the coefficient of drag (also called other things I am sure). So, since the air force depends on the velocity, it will not be a constant acceleration. Kinematic equations won’t really work. To easily solve this problem, I will use numerical methods. The basic idea in numerical calculations is to break the problem into a whole bunch of little steps. During these small steps, the velocity does not change much so that I can “pretend” like the acceleration is constant. Here is a diagram of the forces on the ball while in the air. Before I go any further, I would like to say that there has been some “stuff” done on throwing a football before – and they probably do a better job than this post. Here are a few references (especially with more detailed discussion about the coefficient of drag for a spinning football): And now for some assumptions: • I hereby assume that the air resistance is proportional to the square of the magnitude of the velocity of the object. • The orientation of the football is such that the coefficient of drag is constant. This may not actually be true. Imagine if the ball were thrown and spinning with the axis parallel to the ground. If the axis stayed parallel to the ground, for part of the motion the direction of motion would not be along the axis. Get it? • Ignore aerodynamic lift effects. • Mass of the ball is .42 kg. • The density of air is 1.2 kg/m^3. • The coefficient of drag for the football is 0.05 to 0.14 • Typical initial speed of a thrown football is around 20 m/s. And finally, here is the recipie for my numerical calculation (in vpython of course): • Set up initial conditions • Set the angle of the throw • Calculate the new position assuming a constant velocity. • Calculate the new momentum (and thus velocity) assuming a constant force. • Calculate the force (it changes when the velocity changes) • Increase the time. • Keep doing the above until the ball gets back to y=0 m. • Change the angle and do all the above again. The answer First, I ran the program with an initial velocity of 20 m/s. Here is the data: At 35 degrees, this gives a distance of 23 meters (25 yards). This doesn’t seem right. I know a quarterback can throw farther than that. What if I change the coefficient to 0.05? Then the greatest angle is closer to 40 degrees and it goes 28 meters. Still seems low (think Doug Flutie). What about with no air resistance? Then it goes 41 meters (at 45 degrees). So, here is the Doug Flutie throw. From the video, it looks like he threw the ball from the 36ish yard line to about the 2 yard line. This would be 62 yards (56.7 meters). I am going to assume a coefficient of 0.07 (randomly). So, what initial speed will get this far? If I put in an initial velocity of 33 m/s, the ball will go 55.7 meters at an angle of 35 degrees. Really the thing that amazes me is that someone (not me) can throw a ball that far and essentially get it where they want it. Even if they are only sometimes successful, it is still amazing. How is it that humans can throw things somewhat accurately? We obviously do not do projectile motion calculations in our head – or maybe we do? 1. #1 Uncle Al December 29, 2008 The football is launched, spinning, with its long axis at 35-45 degrees elevation from the ground. Does it real world impact with its leading pole still up in the air or pointing toward the 2. #2 Pablo Sanches October 9, 2009 where is the video i want to watch it 3. #3 Rhett October 9, 2009 Is the video not showing up in the above post? If not, here is the direct url from youtube: 4. #4 Patrice Vezeau July 3, 2010 human don’t calculate we use Memory,muscle memory and intuitive deduction from those memory and previous experience to estimate how to throw the ball. Personalty i am playing tennis and I know every time I start to play i must warm up hitting many ball to adjust depending on the court condition like heat, wind,humidity, type of ball,and if I play with a used string bed or new one. But I start to hit from my previous experience and do little adjustment with trial and error then the confidence grow and I can hit harder or very sharp angle shot without missing because I know how the ball will react.
{"url":"http://scienceblogs.com/dotphysics/2008/12/29/throwing-a-football-part-ii/","timestamp":"2014-04-16T19:47:01Z","content_type":null,"content_length":"65124","record_id":"<urn:uuid:4118f493-444f-4800-bc1e-b188df72318f>","cc-path":"CC-MAIN-2014-15/segments/1397609524644.38/warc/CC-MAIN-20140416005204-00576-ip-10-147-4-33.ec2.internal.warc.gz"}
Answer - Market The smallest possible number of eggs that Mrs. Covey could have taken to market is 719. After selling half the number and giving half an egg over she would have 359 left; after the second transaction she would have 239 left; after the third deal, 179; and after the fourth, 143. This last number she could divide equally among her thirteen friends, giving each 11, and she would not have broken an egg
{"url":"http://www.pedagonet.com/puzzles/eccentric1.htm","timestamp":"2014-04-16T07:46:04Z","content_type":null,"content_length":"10432","record_id":"<urn:uuid:0422acfa-d4c0-444e-abde-93989188b15f>","cc-path":"CC-MAIN-2014-15/segments/1397609521558.37/warc/CC-MAIN-20140416005201-00081-ip-10-147-4-33.ec2.internal.warc.gz"}
arbitrage pricing model arbitrage pricing model (uncountable) 1. (finance) An asset pricing model using one or more common factors to price returns. With only one factor, representing the market portfolio, it is called a single factor model. With two or more factors, it is called a multifactor model. Related termsEdit • capital asset pricing model Last modified on 16 June 2013, at 18:44
{"url":"http://en.m.wiktionary.org/wiki/arbitrage_pricing_model","timestamp":"2014-04-18T13:25:52Z","content_type":null,"content_length":"15103","record_id":"<urn:uuid:d380ed94-4ec2-4194-8993-8882cc6d25fe>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00348-ip-10-147-4-33.ec2.internal.warc.gz"}
Matches for: Return to List Algebraic Theory of Measure and Integration: Second English Edition &nbsp &nbsp &nbsp &nbsp &nbsp &nbsp &nbsp AMS Chelsea By generalizing the concept of point function to that of a function ("soma" function) over a Boolean ring, Carathéodory gives in this book an elegant algebraic treatment of measure Publishing and integration. 1963; 378 pp; • Somas: 1.1-2 The axiomatic method; 1.3-7 Elementary theory of somas; 1.8-13 Somas as elements of a Boolean algebra; 1.14-16 The main properties of the union; 1.17-22 The hardcover decomposability of somas; 1.23-24 The intersection of an infinite number of somas; 1.25-32 Limits and bounds • Sets of Somas: 2.33-40 Sets of somas closed under a binary operation; 2.41-46 Complete rings; 2.47-53 Ordinal numbers of the second class; 2.54-55 Hereditary sets of somas; Volume: 161 2.56-64 Homomorphisms of rings of somas • Place Functions: 3.65-68 Finitely-valued place functions; 3.69-75 Nests of somas; 3.76-79 Altering the domain of definition; 3.80-88 Principal properties of the soma functions \ ISBN-10: (\alpha(X)\) and \(\beta(X)\) 0-8218-5273-6 • Calculation with Place Functions: 4.89-94 Limit processes; 4.95-106 Elementary operations on place functions; 4.107-110 Uniform and absolute convergence; 4.111-117 Composition of place functions; 4.118-125 Homomorphisms of place functions ISBN-13: • Measure Functions: 5.126-128 Additive and union-bounded soma functions; 5.129-130 Measurability; 5.131-135 Measure functions; 5.136-140 The measure function on its ring of 978-0-8218-5273-6 measurability; 5.141-143 Sequences of measure functions and their limits; 5.144-147 Transformation of measure functions by homomorphisms; 5.148-153 The Borel-Lebesgue content • The Integral: 6.154 Fields of place functions; Measurable place functions; 6.155-162 The notion of the integral; 6.163-166 Linearity of the integral and the integration of place List Price: US$53 functions of arbitrary sign; 6.167-172 Comparable measure functions and the Lebesgue decomposition; 6.173-175 Abstract differentials; 6.176-177 The absolute continuity of two comparable measure functions; 6.178-180 Transformation of the integral by means of homomorphisms Member Price: • Application of the Theory of Integration to Limit Processes: 7.181-183 The theorem of Egoroff; 7.184-189 Continuity of the integral as a functional; 7.190-197 Convergence in the US$47.70 mean; 7.198-205 Ergodic theory • The Computation of Measure Functions: 8.206-210 Maximal measure functions; 8.211-215 The bases of an arbitrary measure function; 8.216-221 Relative measurability Order Code: CHEL/ • Regular Measure Functions: 9.222-224 The definition and principal properties of regular measure functions; 9.225-229 Inner measure; 9.230-235 Comparison of inner and outer 161.H measures; 9.236-240 The arithmetic mean of the inner and outer measures • Isotypic Regular Measure Functions: 10.241-244 The principal properties of isotypic measure functions; 10.245-248 The Jordan decomposition of completely additive soma functions; 10.249-255 The difference of two isotypic regular measure functions; 10.256-257 Comparable outer measures • Content Functions: 11.258-259 The definition of content functions; 11.260-267 Reduced content functions and their homomorphisms; 11.268-271 The Jessen infinite-dimensional torus; 11.272-278 The Vitali covering theorem; 11.279-282 The Lebesgue integral; 11.283-284 Comparable content functions; 11.285-289 Linear measure • Appendix: Somas as elements of partially ordered sets: 12.290-297 A new axiom system for somas; 12.298-302 The partitioning of a set into classes; 12.303-304 Partially ordered sets; 12.305-308 Applications to the theory of somas; 12.309-312 Systems of somas that are not isomorphic to systems of subsets of a set • Bibliography: Earlier publications by Constantin Carathéodory on the algebraization of measure and integral • List of symbols • Index
{"url":"http://cust-serv@ams.org/bookstore?fn=20&arg1=chelsealist&ikey=CHEL-161-H","timestamp":"2014-04-19T14:59:05Z","content_type":null,"content_length":"17483","record_id":"<urn:uuid:560073aa-2ddf-4f13-82d3-602ff8c34e67>","cc-path":"CC-MAIN-2014-15/segments/1397609537271.8/warc/CC-MAIN-20140416005217-00347-ip-10-147-4-33.ec2.internal.warc.gz"}
Spreadsheets function for subtotal Spreadsheets 1/4/09 How can I use subtotal function at spreadsheets.? function for E.mali 2:20 AM Re: Spreadsheets 1/4/09 Google Docs spreadsheet doesn not support a SUBTOTAL function. You will have to use a workaround that fits your spreadsheet... :( function for ahab 3:45 AM Re: Spreadsheets Carlos 1/25/09 You can use the table gadget. Insert gadget, select table. It supports auto-filtering, groups with subtotals and other functions. It`s not exactly the same function for Ribeiro 1:51 AM (it's a separate object, not a regular spreadsheet) but if all you need are the totals it should be enough for you. Carlos - I'm sure your answer is good but I've never used tables. Can you be more detailed/clear ??? I have multiple entries by day, need each days total Re: Spreadsheets 8/11/09 and then a month total. Subtotal in excel worked fine. what do I need to do in Google? function for loyknight 1:41 PM subtotal Any hope subtotal support will be added? to expand on Ahab's reply, if your "Description" is in column B3 to B40, and the data to sum is in column C3 to C40, you can enter "SUBTOTAL" in Column B on Re: Spreadsheets the rows containing subtotals, and use the following formula in column C. function for hohockjim 4/5/10 subtotal 9:04 PM Is this still true? NO subtotals? Re: Spreadsheets function for Jaytkay 5/12/10 I find that hard to believe, it's such an elementary part of spreadsheets. subtotal 2:30 PM I tried the table gadget. I hate it. Its one of those weird omissions that keep hindering us moving to use Google Docs instead of Excel & Word. The workarounds are just not viable when you are trying to transition work from Excel to Google - yes I can understand why some of the really obscure stuff in Re: Spreadsheets 5/15/10 Excel would be missing - like the pivot tables etc , but come on SUBTOTAL - its one of the most basic functions and with no easy workaround because it function for MitraEarth 11:50 AM conditionally sums based on other lines not being subtotals. - Mitra Re: Spreadsheets 5/29/10 Without subtotals working as they do in excel, Google Docs just won't be very useful for us. function for alan.kushnir 4:04 PM alan Re: Spreadsheets 6/2/10 I need a subtotal mechanism to order my Amazon.com transactions. function for Carl.Simon 6:54 PM Re: Spreadsheets 6/12/10 Are subtotals something that could be addressed with some scripting? function for P39 9:38 AM "Are subtotals something that could be addressed with some scripting?" No, it would require that custom spreadsheet functions would also be able to set the formatting and alignment of their output. In essence a SUBTOTAL can be emulated like this: =TEXT( SUM( range ) ; "format" ) ) <- right align this cell This can produce a string with the same format as the numbers used in the range, and only needs right alignment to look like just another number. Re: Spreadsheets 6/12/10 function for ahab 9:52 AM A1: $1.00 A2: $2.00 A3: $3.00 A4: =TEXT( SUM(A1:A3) ; "'$'#,###.00") <- right align this cell A5: $5.00 A6: $6,00 A7: =TEXT( SUM(A1:A6) ; "'$'#,###.00") <- right align this cell In this column of numbers A4 and A7 act as real subtotals, i.e. they are ignored by other any other (sub-)total summation. Just being able to provide the same function doesn't address the real issue, which is a lack of compatability with even basic Excel functions like SUBTOTAL, Re: Spreadsheets which makes it hard to even suggest to a team that their spreadsheets move onto Google Docs. function for MitraEarth 6/17/10 subtotal 12:55 PM - Mitra Re: Spreadsheets 7/11/10 Can someone help me out I just imported this spreadsheet from excel to find out there is no subtotal function in Google doc's this sheet is for flight time function for jettechfsr 8:30 AM tracking if you enter time into Colum D Leg Time it fills in the rest and links to sheet 2 Totals in yellow to email out every so often to the OEM Re: Spreadsheets jettechfsr, function for ahab 7/11/10 subtotal 10:07 AM Please take a look at the 'Copy of...' sheets in your spreadsheet. Are these calculating the expected results? Re: Spreadsheets 7/14/10 @ahab thanks so much this will really help function for jettechfsr 5:13 AM
{"url":"https://productforums.google.com/forum/?_escaped_fragment_=category-topic/docs/how-do-i/SNY8eAoXdGo","timestamp":"2014-04-20T16:04:45Z","content_type":null,"content_length":"10322","record_id":"<urn:uuid:b3845e37-11f4-48f7-bde1-67dd878b8e04>","cc-path":"CC-MAIN-2014-15/segments/1397609538824.34/warc/CC-MAIN-20140416005218-00629-ip-10-147-4-33.ec2.internal.warc.gz"}
May the force be with us? Precise measurements test quantum electrodynamics, constrain possible fifth fundamental force - Technology Org May the force be with us? Precise measurements test quantum electrodynamics, constrain possible fifth fundamental force Posted on June 6, 2013 Quantum electrodynamics (QED) – the relativistic quantum field theory of electrodynamics – describes how light and matter interact – achieves full agreement between quantum mechanics and special relativity. (QED can also be described as a perturbation theory of the electromagnetic quantum vacuum.) QED solves the problem of infinities associated with charged pointlike particles and, perhaps more importantly, includes the effects of spontaneous particle-antiparticle generation from the vacuum. Recently, scientists at VU University, The Netherlands, published two papers in quick succession that, respectively, tested QED to extreme precision by comparing values for the electromagnetic coupling constant1, and applied these measurements to obtain accurate results from frequency measurements on neutral hydrogen molecules that can be interpreted in terms of constraints on possible fifth-force interactions beyond the Standard Model of physics2. In addition, the researchers point out that while the Standard Model explains physical phenomena observed at the microscopic scale, so-called dark matter and dark energy at the cosmological scale are considered as unsolved problems that hints at physics beyond the Standard Model. Prof. Wim Ubachs discussed the research he and his colleagues (at University of San Carlos, Philippines; Mickiewicz University, Poland; and University of Warsaw, Poland) undertook, citing some of the challenges they faced, in a conversation with Phys.org. “The challenges in testing QED to extreme precision by comparing values for the electromagnetic coupling constant are twofold,” Ubachs says. “Using lasers, we measured transition frequencies as accurately as possible. These measurements, in turn, had to be compared with calculations, which also had to be performed at the highest accuracy levels, involving many steps: First, solving the Schrodinger equation for the H2 molecule, and secondly calculating the relativistic corrections and the terms associated with quantum electrodynamics.” The latter, he notes, involves calculating the interaction of the particles with the quantum vacuum – that is, with the spontaneously generated particles from the void. Read more at: Phys.org
{"url":"http://www.technology.org/2013/06/06/may-the-force-be-with-us-precise-measurements-test-quantum-electrodynamics-constrain-possible-fifth-fundamental-force/","timestamp":"2014-04-18T07:26:32Z","content_type":null,"content_length":"33640","record_id":"<urn:uuid:86fbd25c-e01b-4ac5-981d-d960d97f2053>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00344-ip-10-147-4-33.ec2.internal.warc.gz"}
Two Unusual Ways of Estimating Pi Date: 03/24/97 at 21:55:10 From: Phydeaux Subject: Two Unusual Ways of Estimating Pi Hello, Dr. Math, I have to write a detailed five page report on the history of Pi and two ways to calculate it. The history of pi is no big deal to me. It's the ways to estimate pi that is bugging me. I have found the information on Buffon's needle problem but I'm looking for something VERY unusual. I've also discovered (through Buffon's needle problem) that pi is associated with things that don't have anything to do with circles or spheres. Help me out, please. As I said, I'm looking for two unusual ways to estimate pi. Muchas gracias, Tim Dwyer Date: 03/25/97 at 14:49:12 From: Doctor Steven Subject: Re: Two Unusual Ways of Estimating Pi Pi is defined as the ratio of the area of a circle to its radius squared. If you inscribe polygons with the same radius as the circle, then their area is smaller than the circle. Find the ratio of their area to the radius and you have an estimate that is too small for Pi. If you circumscribe polygons with the same radius as the circle then they have an area that is larger than the circle. Find the ratio of their area to the radius squared and you get an estimate that is too high for Pi. You've now squeezed Pi between two values. The more sides you add to your polygons the better the approximations, and the tighter the interval where you know Pi to be. (Hint: if you let the radius equal one then the area of the polygons should approximate Pi). The probability that an integer picked at random will have repeated prime factors is 6/PI^2. Using the 100 million decimal place expansion of Pi, take every 100 digits as an integer, find whether it has repeated prime factors. The proportion of integers that do should equal 6/Pi^2. Using this method "researchers" have found the value of Pi to about 20 decimal places. (Not a very useful approximation method for Pi, but a very neat one.) Hope this helps. -Doctor Steven, The Math Forum Check out our web site! http://mathforum.org/dr.math/
{"url":"http://mathforum.org/library/drmath/view/58289.html","timestamp":"2014-04-21T05:24:22Z","content_type":null,"content_length":"6961","record_id":"<urn:uuid:e13e014a-48b4-42d1-9171-cb2251d1af8f>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00559-ip-10-147-4-33.ec2.internal.warc.gz"}
Directory of Experts Click on an expert's name to see his or her full details. Click on a keyword to see experts in related fields. Dr Francesco Mezzadri Post(s): Reader in Applied Mathematics, School of Mathematics Areas of expertise: Random matrix theory has found applications in a variety of areas of physics, pure and applied mathematics, probability,... Keywords: random matrix theory | quantum chaos | statistical mechanics | number theory Dr Martin Sieber Post(s): Reader in Applied Mathematics, School of Mathematics Areas of expertise: My main interests lie in asymptotical approximations for wave theories that are valid in the limit of short wave lengths.... Keywords: wave theories | quantum mechanics | geometrical optics | quantum chaos | random matrix theory | diffraction | wavefunctions | microlasers Dr Nina Snaith Post(s): Reader, School of Mathematics Areas of expertise: My main interest is the connection between Random Matrix Theory and certain number theoretical functions such as the Riemann... Keywords: random matrix theory | theoretical functions | Riemann zeta function | L-functions | statistics University of Bristol | Feedback Senate House, Tyndall Avenue, Bristol BS8 1TH, UK. Tel: +44 (0)117 928 9000
{"url":"http://bristol.ac.uk/media/experts/jsp/public_view/expertsByKeyword?keywordID=3376","timestamp":"2014-04-16T13:32:40Z","content_type":null,"content_length":"21026","record_id":"<urn:uuid:3bd8f114-c84a-427f-865f-78fa3902f0af>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00204-ip-10-147-4-33.ec2.internal.warc.gz"}
Linear Regression Equations 7.4: Linear Regression Equations Created by: CK-12 Practice Linear Regression Equations Suppose you have a large database that includes the scores on physics exams and calculus exams from high school students across your state who took both tests. You want to find out whether there is a correlation between these two sets of scores. What tools could you use to find out this information in an efficient way? Watch This First watch this video to learn about linear regression equations. CK-12 Foundation: Chapter7LinearRegressionEquationsA Then watch this video to see some examples. CK-12 Foundation: Chapter7LinearRegressionEquationsB Watch this video for more help. James Sousa Linear Regression on the TI84 - Example 1 Scatter plots and lines of best fit can also be drawn by using technology. The TI-83 is capable of graphing both a scatter plot and of inserting the line of best fit onto the scatter plot. The calculator is also able to find the correlation coefficient $(r)$coefficient of determination $(r^2)$ The correlation coefficient will have a value between $-1$$-1$ Example A The following table consists of the marks achieved by 9 students on chemistry and math tests. Create a scatter plot for the data with your calculator. Student A B C D E F G H I Chemistry Marks 49 46 35 58 51 56 54 46 53 Math Marks 29 23 10 41 38 36 31 24 ? Example B Draw a line of best fit for the data that you plotted in Example A. Use the line of best fit to calculate the predicted value for Student I's math test mark. The calculator can now be used to determine a linear regression equation for the given values. The equation can be entered into the calculator, and the line will be plotted on the scatter plot. From the line of best fit, the calculated value for Student I's math test mark is 33.6. Example C Determine the correlation coefficient and the coefficient of determination for the linear regression equation that you found in Example B. Is the linear regression equation a good fit for the data? The correlation coefficient and the coefficient of determination for the linear regression equation are found the same way that the linear regression equation is found. In other words, to find the correlation coefficient and the coefficient of determination, after entering the data into your calculator, press $\boxed{\text{STAT}}$$\boxed{\text{ENTER}}$$\boxed{\text{ENTER}}$ You can see that $r$$r^2$ Guided Practice The data below gives the fuel efficiency of cars with the same-sized engines when driven at various speeds. $& \text{Speed (m/h)} \qquad \qquad \qquad \ \quad 32 \quad 64 \quad 77 \quad 42 \quad 82 \quad 57 \quad 72\\& \text{Fuel Efficiency (m/gal)} \qquad \quad 40 \quad 27 \quad 24 \quad 37 \quad 22 \quad 36 \quad 28$ a. Draw a scatter plot and a line of best fit using technology. What is the equation of the line of best fit? b. What is the correlation coefficient and the coefficient of determination of the linear regression equation? Is the linear regression equation a good fit for the data? c. If a car were traveling at a speed of 47 m/h, estimate the fuel efficiency of the car. d. If a car has a fuel efficiency of 29 m/gal, estimate the speed of the car. From the following screen, the equation of the line of best fit is approximately $y=-0.36x+52.6$ b. As can be seen in the screen in the answer to part a, the correlation coefficient is 0.9534582451, while the coefficient of determination is 0.9090826251. This means that the linear regression equation is a moderately good fit, but not a great fit, for the data. c. Using the TI-83 to calculate the value, the fuel efficiency of a car traveling at a speed of 47 m/h would be approximately 35 m\gal. d. From the calculator, the equation of the line of best fit is approximately $y=-0.36x+52.6$$y$$x$ Using this equation: $y & = -0.36x+52.6\\29 & = -0.36x+52.6\\29-52.6& = -0.36x+52.6-52.6\\\frac{-23.6}{-0.36} & = \frac{-0.36x}{-0.36}\\65.6 \text{ m/h} & = x$ The speed of the car would be approximately 65.6 miles per hour. Interactive Practice 1. Which of the following calculations will create the line of best fit on the TI-83? 1. quadratic regression 2. cubic regression 3. exponential regression 4. linear regression $(ax + b)$ The linear regression below was performed on a data set with a TI calculator. Use the information shown on the screen to answer the following questions: 2. What is the linear regression equation? 3. What is the correlation coefficient and the coefficient of determination? Is the linear regression equation a good fit for the data? 4. According to the linear regression equation, what would be the approximate value of y when x = 3? The linear regression below was performed on a data set with a TI calculator. Use the information shown on the screen to answer the following questions: 5. What is the linear regression equation? 6. What is the correlation coefficient and the coefficient of determination? Is the linear regression equation a good fit for the data? 7. According to the linear regression equation, what would be the approximate value of y when x = 10? The linear regression below was performed on a data set with a TI calculator. Use the information shown on the screen to answer the following questions: 8. What is the linear regression equation? 9. What is the correlation coefficient and the coefficient of determination? Is the linear regression equation a good fit for the data? 10. According to the linear regression equation, what would be the approximate value of x when y = 8? Files can only be attached to the latest version of Modality
{"url":"http://www.ck12.org/book/CK-12-Basic-Probability-and-Statistics-Concepts---A-Full-Course/r11/section/7.4/","timestamp":"2014-04-21T15:09:20Z","content_type":null,"content_length":"132738","record_id":"<urn:uuid:4922be48-b2c2-4d17-bcb4-cc436aa7e480>","cc-path":"CC-MAIN-2014-15/segments/1397609540626.47/warc/CC-MAIN-20140416005220-00101-ip-10-147-4-33.ec2.internal.warc.gz"}
Pitch Processor Board from Lexicon Model 2400 Time Compressor/Expander, Dattorro This is the pitch processor board from the Lexicon Model 2400 Time Compressor Expander. (There is a separate Splice Board I designed to do the splicing and digital filtering.) In the upper right-hand corner is the label Confidence Board because this board also produced an estimate of assurity of the measured pitch. I designed this board and I also laid it out with the help of Joe Zagami on a Mentor Graphics workstation. The unit is controlled by two TMS 32010 DSP chips (purple). The remaining chips comprise a discrete parallel and pipelined mathematical coprocessor operating with a clock rate of 20 MHz. (In 1986, that was fast.) Each of two parallel computational paths makes a U so that data physically flows away from the TMS 320s and then back again. The algorithm executed by this hardware is an average-magnitude-difference periodicity detector that I made capable of detecting sub-harmonics of polyphonic sounds. It was also made capable of distinguishing noise from pitched sounds. The latency is about 5 milliseconds, so a pitch estimate was always requested for some particular point in the future. This board was capable of fulfilling hundreds of pitch estimate requests per second. To give a rule of thumb for the requirements, every 1% absolute change in playback time requires about 1 splice per second. The 2400 provided only 0.75 - 1.333 factor changes in runtime. To determine the optimal splice point, many estimates of pitch were requested to find just the right place. The unit would hold off splicing as long as possible until a good splice point was found. Confidence in the estimate was weighed against urgency to splice. As time went on, urgency outweighed the confidence and so a splice would eventually be made. If perfect splice points were going by and the time compression factor were not exactly 1, then perfect splices would be made ahead of any urgency requirement. The splice points were thus only statistically characterizeable. When runtime is sped, splicing eliminates redundant information. When runtime is slowed, the splicing algorithm synthesizes audio from neighboring wave cycles. So, in that sense, the device is a In either case, a raised cosine is used to make a splice because that shape can be shown to produce least distortion in the event of error in a periodicity estimate.
{"url":"https://ccrma.stanford.edu/~dattorro/2400.htm","timestamp":"2014-04-16T13:25:06Z","content_type":null,"content_length":"3474","record_id":"<urn:uuid:3b38ae43-4129-46e7-a864-5c75425f536d>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00383-ip-10-147-4-33.ec2.internal.warc.gz"}
How to Creatively Integrate Science and Math November 11, 2011 | Ben Johnson Why is the sky blue? I remember in my physical science class, our teacher showed us a possible reason why the sky is blue. He took a canister of liquid oxygen and poured it out on the table. I saw the blueness of the liquid as it flowed out and then disappeared. Then we talked about color, frequencies, and absorption, reflected and radiated light. I wondered how scientist ever figured these things out? Duh -- math! How can you really teach science without math? It is impossible. Science is the application of math. In science, geometric principles such as symmetry, reflection, shape, and structure reach down to the atomic levels. In science, algebraic balance is required in chemical formulas, growth ratios, and genetic matrices. In science, math is used to analyze nature, discover its secrets and explain its existence and this is the big problem. Science is so complex and getting more so each day. In order to study, analyze and interpret science, mathematical tools are required. In math class one of the biggest needs is relevance. Why not use science to teach math? Since one of the biggest uses of mathematics in science is data gathering and analysis, that is the best place to start. When a teacher gives students a real science problem to solve -- one that requires math tools -- the teacher is giving the students a reason to use math. Math then becomes something useful, not something to be dreaded. Being able to teach math better and being able to teach science better are powerful reasons for the math and science teacher collaborate with each other. According to a case study conducted by Jennifer Dennis and Mary John O'Hair, another reason that math and science teachers should collaborate is that science helps provide relevance to math that is all too often abstract and isolated calculation operations. Ultimately, as another study reported, the students' increased conceptual understanding of math and science is the greatest benefit of math and science teacher collaboration. Unfortunately, knowing that increased teacher collaboration in math and science will benefit students and teachers is not enough. Teachers are so busy that finding time to collaborate is difficult. Add to this, the structure of the school inhibits collaboration when math and science teachers are spread out in a large campus. How do you overcome this? Well, a simple request to the principal might do the trick. Another solution is that even though geographically speaking the math and science teachers may be isolated, everyone has cellphone, texting, Facebook or even email can be considered forms of collaboration. What are ways you work with your companion subject teacher (math or science) to help students understand math and science better? see more see less Comments (39) Comment RSSSign in or register to post comments Apologies to the allusion to another name. I did not know if you were serious. Thanks for sharing your experience. You seem imminently qualified to respond to my question. How would you go about getting Science and math teachers to work more closely together? Since you and I are a former administrators, I am very interested in hearing your perspective of the role of leadership in helping students learn math and science better. Ben Johnson San Antonio, Texas You hit it on the head. The impulse for writing my thoughts was prompted by observations that made me ask why more teachers do not collaborate in math and science. Then I started thinking what could be done and who is doing it. It is not an in-depth article, or treatise, but my goal was to get conversation going about this topic. Thanks for understanding. Ben Johnson San Antonio, Texas Here's what I'm reading in a nutshell: "A good way to teach math would be together with an application. Science would do the trick" What's so deadly wrong with that? Also, I'm quite surprised with the lack of politeness here. Here's what I'm reading in a nutshell: "A good way to teach math would be together with an application. Science would do the trick" What's so deadly wrong with that? Also, I'm quite surprised with the lack of politeness here. Yes it is my real name and there is nothing spooky about it. I am aware that it is the same as that of a serial killer. Thanks for bringing it up. What can I do? Let's start with what I did. I was an educator for 42 years, 14 of which was as a principal. I won awards, including Teracher of the year in a large city. I developed/invented/adapted numerous devices, procedures, concepts for application in classrooms. I wrote a complete history of our state in mnemonic verse. My complaint with your article is that you title it "How to..." then simpy state an idea that has been around for ages. Your article does not tell how to integrate math and science. Keep in mind that in hundreds of thousands of elementary classrooms teachers teach every subject. Many of these teachers integrate the two at least by referencing applicable math concepts and operations to science procedures. What can I do? I can spot BS. Collaboration may be difficult, but it's not necessary as an introductory course. I mean an introduction to both science and mathematics in the same discipline. There the lecturer could start asking simple questions to introduce science problems and start developing the mathematical tools to tackle them. The days of one size fits all are numbered. When the failure of current "reform" efforts becomes obvious to all, the pendulum will swing the other way. After ninth grade (at least, maybe for ninth grade as well) high school students should have a degree of specialization. Not irrevocable, they can still change. ALL students do not need to learn quadratic equations. I taught AP Physics for 17 years. Even in AP Physics, problems involving solution of quadratic equations were explicitly ruled out and never seen on the AP test. Math is taught as a mostly abstract subject. Much science is taught as a bunch of words, formulas, and procedures to memorize. That's just wrong. Both disciplines develop important habits of thought. Yes, even quadratic equations. BTW, those equations come in for quite a workout in fairly simple science courses. However, the typical class in math or science does not focus on thinking but on memorizing. There have always been exceptions, but exceptions should be the rule. Then, more integration might be useful. However, you cannot lose the rational logical process of thinking that math provides. Tradition rules. The traditional one-size-fits-all high school math (algebra-geometry-calculus) or science (biology-chemistry-physics) curriculum has never been a useful match of skills, concepts and information learned to real world or career needs. Only for a very small percentage of students who become mathematicians or scientists. How many people have ever used a quadratic equation at home or at work? We value math and science blindly, as most educational policymakers are ignorant enough about either that they don't dare mess with them. All they know is that both are important, so let's keep on doing what we're doing. Sort of like Latin. If the Catholic Church still said masses in Latin, maybe Latin would still be a staple of American education. Sol Garfunkel and David Mumford have some great ideas about replacing algebra-geometry-calculus with finance-data-basic engineering for most students. I, on the other hand, would replace a lot of the study of literature and replace it with technical and scientific reading (and writing). Pipe dreams, however, for Garfunkel, Mumford and myself. Methinks unfortunately. Our school began integrating Algebra I and Science-9 this year! It's going well. We implemented a team-teaching, PBL based course for ninth graders in math and science as well as one in Social Studies and English. The relevance and critical thinking is truly there. I've been advocating for this sort of idea for decades. It's axiomatic to me, but then I've been involved in math and science all of my life. Some educators even are so bold as to suggest that English skills can be improved through science too. The current structure of our education systems prevents effective collaboration as some of the previous comments say. The entire concept of four core subjects plus ancillary arts and physical education seems truly archaic. Nevertheless, I have a couple of quibbles with this article. First off, you can teach science without math. Science is a way of thinking, not a bunch of equations. Sure, scientists constantly use one sort or another of math. It's also true that you should collect quantitative data when performing science experiments if possible. However, we're talking about science education here, not science itself. You can go far with just a smidgen of arithmetic when teaching science. Of course, I like it much better with lots of partial differential equations, advanced statistical analysis, and multiple analysis of variance. But that's just me. I certainly don't expect young students to have that bias. I also cannot buy into "science is application of math." Science is a way of thinking, not of calculating. Math often is applied to science, yes. On to the bigger issues. Math can be unexciting as usually taught. The same is true of science. Big deal! However, an inspired science course can engage students so that learning the math and language skills necessary to do the science becomes an attractive proposition. Before science courses can become valuable math learning exercises, they have to be reformed themselves. And what is this red herring about a large campus. I assume that we're discussing high school education here. How many high schools have such a great separate between math and science classrooms? Geography is the least of the problems in getting collaboration between any two subjects. Mathematics has always been taught with abstractions and with a few "word problems" that have little bearing on real life. (My apologies to math teachers who have broken that mold.) Science can be relevant but rarely is. Often, when it is, it is very forced and unnatural. Mathematics began as a practical tool, not an abstract one. It was vital to measuring land, keeping accounts, and lots of military activities. These alternate fields pushed people to figure out math in the first place. It just makes lots of sense for math to be taught in these contexts instead as a collection of x and y values, at least through most of high school. I strongly believe in the basic concept of using science more effectively to learn math, reading, and, most importantly, thinking skills. IMO, this article makes a relatively weak case for that approach. However, I applaud the effort. see more see less
{"url":"http://www.edutopia.org/blog/integrating-math-science-creatively-ben-johnson?page=2","timestamp":"2014-04-18T00:27:50Z","content_type":null,"content_length":"125677","record_id":"<urn:uuid:9a70021d-b255-4c16-9368-8e582bb90ed4>","cc-path":"CC-MAIN-2014-15/segments/1397609532374.24/warc/CC-MAIN-20140416005212-00251-ip-10-147-4-33.ec2.internal.warc.gz"}
The Existence of Countably Many Positive Solutions for Nonlinear th-Order Three-Point Boundary Value Problems 1. Introduction The existence of positive solutions for nonlinear second-order and higher-order multipoint boundary value problems has been studied by several authors, for example, see [1–12] and the references therein. However, there are a few papers dealing with the existence of positive solutions for the 13] discussed the existence and multiplicity of positive solutions for the following In [14], Kaufmann and Kosmatov showed that there exist countably many positive solutions for the two-point boundary value problems with infinitely many singularities of following form: In [15], Ji and Guo proved the existence of countably many positive solutions for the with one of the following Motivated by the result of [13–15], in this paper we are interested in the existence of countably many positive solutions for nonlinear Suppose that the following conditions are satisfied. There exists a sequence There exists Assuming that 15, Example The paper is organized as follows. In Section 2, we provide some necessary background material such as the Krasnosel'skii fixed-point theorem and Leggett-Williams fixed point theorem in cones. In Section 3, the associated Green's function for the 2. Preliminary Results Definition 2.1. Definition 2.2. The map for all for all Definition 2.3. The following Krasnosel'skii fixed point theorem and Leggett-Williams fixed point theorem play an important role in this paper. Theorem 2.4 ([16], Krasnosel'skii fixed point theorem). is a completely continuous operator such that, either Theorem 2.5 ([17], Leggett-Williams fixed point theorem). In order to establish some of the norm inequalities in Theorems 2.4 and 2.5 we will need Holder's inequality. We use standard notation of where the integral is understood in the Lebesgue sense. The norm on Theorem 2.6 ([18], Holder's inequality). 3. Preliminary Lemmas To prove the main results, we need the following lemmas. Lemma 3.1 (see [15]). has a unique solution Lemma 3.2 (see [15]). The Green's function for the boundary value problem is given by Lemma 3.3 (see [15]). The Green's function Lemma 3.4. has a unique solution The general solution of By solving the above equations, we get Therefore, (3.7) has a unique solution Lemma 3.5. is given by We omit the proof as it is immediate from Lemma 3.4 and (3.4). Lemma 3.6. Next, we prove that (3.15) holds. From Lemma 3.3 and (3.14), for for all We use inequality (3.15) to define our cones. Let Define the operator Theorems 2.4 and 2.5 require the operator Lemma 3.7. The operator for all Clearly operator (3.21) is continuous. By the Arzela-Ascoli theorem 4. Main Results In this section we present that problem (1.5) has countably many solutions if For convenience, we denote Theorem 4.1. Suppose conditions Then problem (1.5) has countably many positive solutions Consider the sequences By condition Now let It is obvious that We observe here that, for each For convenience, we denote Theorem 4.2. Suppose conditions Then problem (1.5) has three infinite families of solutions for each We note first that In a completely analogous argument, condition We now show that condition Therefore, condition Finally, we show that condition Therefore, condition for each 5. Example In this section, we cite an example (see [15]) to verify existence of Example 5.1. As an example of problem (1.5), we mention the boundary value problem where 15, Example We notice that If we take It follows from a direct calculation that In addition, if we take Then all the conditions of Theorem 4.1 are satisfied. Therefore, by Theorem 4.1 we know that problem (5.1) has countably many positive solutions Example 5.2. As another example of problem (1.5), we mention the boundary value problem where 15, Example We notice that If we take It follows from a direct calculation that In addition, if we take Then all the conditions of Theorem 4.2 are satisfied. Therefore, by Theorem 4.2 we know that problem (5.7) has countably many positive solutions for each Remark 5.3. In [8–12], the existence of solutions for local or nonlocal boundary value problems of higher-order nonlinear ordinary (fractional) differential equations that has been treated did not discuss problems with singularities. In [13], the singularity only allowed to appear at 14, 15] seem to have considered the existence of countably many positive solutions for the second-order and higher-order boundary value problems with infinitely many singularities in 15], only the boundary conditions The project is supported by the Natural Science Foundation of Hebei Province (A2009000664), the Foundation of Hebei Education Department (2008153), the Foundation of Hebei University of Science and Technology (XL2006040), and the National Natural Science Foundation of PR China (10971045). Sign up to receive new article alerts from Boundary Value Problems
{"url":"http://www.boundaryvalueproblems.com/content/2009/1/572512","timestamp":"2014-04-18T15:52:52Z","content_type":null,"content_length":"120501","record_id":"<urn:uuid:2d34691f-d951-4c19-8444-718746ffb059>","cc-path":"CC-MAIN-2014-15/segments/1397609533957.14/warc/CC-MAIN-20140416005213-00461-ip-10-147-4-33.ec2.internal.warc.gz"}
The effect of national culture on countries’ innovation efficiency Halkos, George and Tzeremes, Nickolaos (2011): The effect of national culture on countries’ innovation efficiency. Download (184Kb) | Preview This paper contributes to the link between social and cultural factors with countries innovation performance. By measuring 25 countries’ innovation efficiency with the use of conditional and unconditional DEA (Data Envelopment Analysis) frontiers the paper provides empirical evidence of the effect of culture on countries’ innovation efficiency. Particularly, conditional and unconditional full frontier models are used alongside with bootstrap techniques in order to determine the effect of national culture on countries’ innovation performance. The study illustrates how the recent developments in efficiency analysis and statistical inference can be applied when evaluating such issues. The results reveal that national culture has an impact on countries’ innovation efficiency. Analytically, the results indicate that higher PDI (power distance index), IDV (individualism) and UAI (uncertainty avoidance) values have a negative effect on countries innovation efficiency, whereas masculinity values appear to have a positive effect on countries innovation performance. Item Type: MPRA Paper Original The effect of national culture on countries’ innovation efficiency Language: English Keywords: National culture; Innovation efficiency; Conditional efficiency; Bootstrap procedures C - Mathematical and Quantitative Methods > C1 - Econometric and Statistical Methods and Methodology: General > C14 - Semiparametric and Nonparametric Methods: General C - Mathematical and Quantitative Methods > C0 - General > C02 - Mathematical Methods Subjects: C - Mathematical and Quantitative Methods > C6 - Mathematical Methods; Programming Models; Mathematical and Simulation Modeling > C61 - Optimization Techniques; Programming Models; Dynamic Analysis Z - Other Special Topics > Z1 - Cultural Economics; Economic Sociology; Economic Anthropology > Z13 - Economic Sociology; Economic Anthropology; Social and Economic Stratification O - Economic Development, Technological Change, and Growth > O1 - Economic Development > O14 - Industrialization; Manufacturing and Service Industries; Choice of Technology Item ID: 30100 Depositing Nickolaos Tzeremes Date 10. Apr 2011 00:47 Last 13. Feb 2013 05:44 Banerjee, A., Newman, A. 1993. Occupational choice and the process of economic development. J Polit Econ. 101, 274-298. Banker, R.D., Charnes, A., Cooper, W.W. 1984. Some Models for Estimating Technical and Scale Inefficiencies in Data Envelopment Analysis. Manage Sci 30, 1078 – 1092. Baumol, W.J. 1968. Entrepreneurship in economic theory. Am Econ Rev. 58, 64-71. Baumol, W.J. 2000. What Marshall didn’t know: on the twentieth century’s contributions to economics. Q J Econ. CXV, 1-43. Charnes, A., Cooper, W.W., Rhodes, E. 1978. Measuring Efficiency of Decision Making Units. Eur J Oper Res. 3, 429–444. Cohran, T.C. 1960. Cultural factors in economic growth. J Econ Hist. 20, 515-530. Coviello, N.E., Jones, M.V. 2004. Methodological issues in international entrepreneurship research. J Bus Venturing. 19, 485-508. Daraio, C., Simar, L. 2007. Advanced robust and nonparametric methods in efficiency analysis. Springer Science, New York. Daraio, C., Simar, L. 2005. Introducing Environmental Variables in Nonparametric Frontier Models: a Probabilistic Approach. J Prod Anal. 24, 93-121. Derpins, D., Simar, L., Tulkens, H. 1984. Measuring labor efficiency in post offices, in: Marchand, M., Pestieau, P., Tulkens, H. (Eds.), The performance of public enterprises: Concepts and measurement. North-Holland, Amstredam, pp. 243-267. Efron, B. 1979. Bootstrap methods: another look at the jackknife. Ann Stat. 7, 1-16. EIS. 2007. European innovation scoreboard 2007 database. INNO Metrics, available at the following address: http://www.proinno-europe.eu/index.cfm?fuseaction=page. display&topicID=275& Engelen, A., Heinemann, F., Brettel, M. 2009. Cross-cultural entrepreneurship research: Current status and framework for future studies. J Int Entrep. 7, 163-189. Evans, D.D., Leighton, L.S. 1989. Some empirical aspects of entrepreneurship. Am Econ Rev. 79, 519-535. Evans, G.H. 1949. The entrepreneur and economic theory: a historical and analytical approach. Am Econ Rev. 39, 336-348. Farrell, M. 1957. The measurement of productive efficiency. J R Stat Soc. 120, 253–281. Fu, X., Yang, Q.G. 2009. Exploring the cross-country gap in patenting: A Stochastic Frontier Approach. Res Policy. 38, 1203-1213. George, G., Zahra, S.A. 2002. Culture and its consequences for entrepreneurship. Entrep Theor Pract. 26, 5–8. Granovetter, M. 1985. Economic action and social structure: The problem of embeddedness. Am J Socio. 91, 481-510. Griliches, Z. 1990. Patent statistics as economic indicators: a survey? J Econ Lit. 28, 1661–1707. Guice, J. 1999. Designing the future: the culture of new trends in science and technology. Res Policy. 28, 81-98. Halkos, G.E., Tzeremes, N.G. 2010a. Measuring biodiversity performance: A conditional efficiency measurement approach. Environ Modell Softw. doi: 10.1016/j.envsoft.2010.04.014. Halkos, G.E., Tzeremes, N.G. 2010b. The effect of foreign ownership on SMEs performance: An efficiency analysis perspective. J Prod Anal. doi: 10.1007/s11123-010-0174-2. References: Hashimoto, A., Haneda, S. 2008. Measuring the change in R&D efficiency of the Japanese pharmaceutical industry. Res Policy. 37, 1829-1836. Hofstede, G. 1980. Culture's Consequences, International Differences in Work-Related Values. Sage Publications, Beverly Hills CA. Hollanders, H., Esser, F.C. 2007. Measuring Innovation Efficiency, INNO-Metrics Thematic Paper. available at http://www.proinnoeurope.eu/admin/uploaded documents/eis 2007 Innovation Howells, J. 1995. A socio-cognitive approach to innovation. Res Policy. 24, 883-894. Iyigun, M.F., Owen, A.I. 1998. Risk, entrepreneurship, and human-capital accumulation. Am Econ Rev. 88, 454-457. Lee, S.M., Peterson, S.J. 2000. Culture, Entrepreneurial orientation and Global competitiviness. J World Bus. 35, 401-416. Leinbenstein, H. 1968. Entrepreneurship and development. Am Econ Rev. 58, 72-83. Lynn, L.H., Reddy, N.M., Aram, J.D. 1996. Linking technology and institutions: the innovation community framework. Res Policy. 25, 91-106. Nadaraya, E.A. 1964. On estimating regression. Theor Probab Appl. 9, 141-142. Pakes, A., Griliches, Z. 1984. Patents and R&D at the firm level: a first look, in: Griliches, Z. (Ed.), R&D Patents and Productivity. University of Chicago Press, Chicago. Pavitt, K. 1998. The social shaping of the national science base. Res Policy. 27, 793-805. Schumpeter, A.J. 1934. The Theory of Economic Development. Harvard University Press, Cambridge, MA. Shephard, R.W. 1970. Theory of cost and production function. Princecton, NJ, Princeton University Press. Silverman, B.W. 1986. Density estimation for Statistics and Data Analysis. Monographs on Statistics and Applied Probabilities No 26. Chapman and Hall/CRC. Simar, L., Wilson, P. 2008. Statistical interference in nonparametric frontier models: recent developments and perspectives, in: Fried H. Lovell CAK, Schmidt S (Eds.), The measurement of productive efficiency and productivity change, Oxford University Press, New York. Simar, L., Wilson, P.W. 1998. Sensitivity Analysis of Efficiency Scores: How to Bootstrap in Nonparametric Frontier Models. Manag Sc. 44, 49–61. Simar, L., Wilson, P.W. 2000. A general methodology for bootstrapping in non-parametric frontier models. J Appl Stat. 27, 779 -802. Simar, L., Wilson, P.W. 2002. Nonparametric tests of returns to scale. Eur J Oper Res. 139, 115– 132. Singh, J. 1995. Measurement issues in cross-national research. J Int Bus Stud. 26, 597-619. Soltow, J.H. 1968. The entrepreneur and economic history. Am Econ Rev. 58, 84-92. Tiessen, J.H. 1997. Individualism, collectivism and entrepreneurship: a framework for international comparative research. J Bus Venturing.12, 367-384. Vaskarelis, N.C. 2001. The impact of patent protection, economy openness and national culture on R&D investment: a cross-country empirical investigation. Res Policy. 30, 1059-1068. Wang, E.C., Huang, W. 2007. Relative efficiency of R&D activities: A cross-country study accounting for environmental factors in the DEA approach. Res Policy. 36, 260-273. Watson, G.S. 1964. Smooth regression analysis. Sankhya Ser A. 26, 359-372. URI: http://mpra.ub.uni-muenchen.de/id/eprint/30100
{"url":"http://mpra.ub.uni-muenchen.de/30100/","timestamp":"2014-04-21T04:38:14Z","content_type":null,"content_length":"32690","record_id":"<urn:uuid:f7cf1ad3-eed1-4550-9d16-b898cd5b1ab5>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00093-ip-10-147-4-33.ec2.internal.warc.gz"}
North Andover Geometry Tutor Find a North Andover Geometry Tutor ...My teaching philosophy for mathematics in general, and Calculus in particular, emphasizes two key skills: First understanding the meaning behind the equations and symbols with examples and word problems that relate a math problem to a real world description. Second is to learn systematic approa... 12 Subjects: including geometry, chemistry, algebra 2, calculus ...I can develop and explain examples in many different scientific contexts to help students. I like teaching elementary math. It's a great way to lay a solid foundation for higher math later on. 55 Subjects: including geometry, English, reading, ESL/ESOL ...I also have 4 years tutoring for NCLB grades 1-8 in Math and English. I have been tutoring for 25+ years, both Middle school (math and English ) and H.S. - Math. I have tutored COOP, HSPT, ISEE, SSAT, PSAT (Math and Verbal) ACT (Math and Verbal) SAT (Math and Verbal). I feel that I am definitely qualified to tutor for COOP/HSPT prep. 19 Subjects: including geometry, GRE, algebra 1, algebra 2 ...I've also written sections of physics workbooks and digital content for major publishers. The best way to understand most concepts in physics that are new to a student is to represent it on paper. More than any other discipline, academic physics emphasizes the use of diagrams, cartoons, and "be... 23 Subjects: including geometry, chemistry, writing, calculus ...I have the philosophy that anything can be understood if it is explained correctly. Teachers and professors can get caught up using too much jargon which can confuse students. I find real life examples and a crystal clear explanation are crucial for success. 19 Subjects: including geometry, Spanish, chemistry, calculus
{"url":"http://www.purplemath.com/north_andover_ma_geometry_tutors.php","timestamp":"2014-04-18T13:40:04Z","content_type":null,"content_length":"23941","record_id":"<urn:uuid:5810f88f-3d4c-4f73-b3d5-31d1d620d50a>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00519-ip-10-147-4-33.ec2.internal.warc.gz"}
Hoare Logic for a Simple WHILE Language Language and logic This directory contains an implementation of Hoare logic for a simple WHILE language. The constructs are • SKIP • _ := _ • _ ; _ • IF _ THEN _ ELSE _ FI • WHILE _ INV {_} DO _ OD Note that each WHILE-loop must be annotated with an invariant. After loading theory Hoare, you can state goals of the form VARS x y ... {P} prog {Q} where prog is a program in the above language, P is the precondition, Q the postcondition, and x y ... is the list of all program variables in prog. The latter list must be nonempty and it must include all variables that occur on the left-hand side of an assignment in prog. Example: VARS x {x = a} x := x+1 {x = a+1} The (normal) variable a is merely used to record the initial value of x and is not a program variable. Pre/post conditions can be arbitrary HOL formulae mentioning both program variables and normal The implementation hides reasoning in Hoare logic completely and provides a method vcg for transforming a goal in Hoare logic into an equivalent list of verification conditions in HOL: apply vcg If you want to simplify the resulting verification conditions at the same time: apply vcg_simp which, given the example goal above, solves it completely. For further examples see Examples. IMPORTANT: This is a logic of partial correctness. You can only prove that your program does the right thing if it terminates, but not that it terminates. Notes on the implementation The implementation loosely follows Mike Gordon. Mechanizing Programming Logics in Higher Order Logic. University of Cambridge, Computer Laboratory, TR 145, 1988. published as Mike Gordon. Mechanizing Programming Logics in Higher Order Logic. In Current Trends in Hardware Verification and Automated Theorem Proving , edited by G. Birtwistle and P.A. Subrahmanyam, Springer-Verlag, 1989. The main differences: the state is modelled as a tuple as suggested in J. von Wright and J. Hekanaho and P. Luostarinen and T. Langbacka. Mechanizing Some Advanced Refinement Concepts. Formal Methods in System Design, 3, 1993, 49-81. and the embeding is deep, i.e. there is a concrete datatype of programs. The latter is not really necessary.
{"url":"http://www.cl.cam.ac.uk/research/hvg/Isabelle/dist/library/HOL/HOL-Hoare/README.html","timestamp":"2014-04-18T08:04:39Z","content_type":null,"content_length":"3185","record_id":"<urn:uuid:bec94c8c-a212-4bc6-9f3c-38c812707bf6>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00205-ip-10-147-4-33.ec2.internal.warc.gz"}
188 helpers are online right now 75% of questions are answered within 5 minutes. Most Active Subjects Questions Asked Questions Answered Medals Received Questions Asked Questions Answered Medals Received is replying to Can someone tell me what button the professor is hitting... • Teamwork 19 Teammate • Problem Solving 19 Hero • Engagement 19 Mad Hatter • You have blocked this person. • ✔ You're a fan Checking fan status... Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy. This is the testimonial you wrote. You haven't written a testimonial for Owlfred.
{"url":"http://openstudy.com/users/doublemacburger/asked","timestamp":"2014-04-20T16:28:50Z","content_type":null,"content_length":"86067","record_id":"<urn:uuid:f8c88fc8-cfdb-45b9-ae9c-be593ca7d001>","cc-path":"CC-MAIN-2014-15/segments/1397609538824.34/warc/CC-MAIN-20140416005218-00455-ip-10-147-4-33.ec2.internal.warc.gz"}
Linden, NJ Statistics Tutor Find a Linden, NJ Statistics Tutor ...My teaching experience includes varied levels of students (high school, undergraduate and graduate students).For students whose goal is to achieve high scores on standardized tests, I focus mostly on tips and material relevant to the test. For students whose goal is to learn particular subjects,... 15 Subjects: including statistics, chemistry, calculus, algebra 2 ...I have also worked with middle and high school students. Over the years, I have gained experience working with students who have a wide variety of learning styles. For something to ‘click’ it must be presented in a way that makes sense to you based on what you already understand and how you process information. 10 Subjects: including statistics, calculus, geometry, algebra 1 ...I have over 6 years of experience tutoring and 4 years of experience working with middle-school students from minority backgrounds who are struggling with reading, writing and math. I love working with students from all grade levels and helping them get motivated to set learning goals and to suc... 25 Subjects: including statistics, reading, English, writing ...I look forward to hearing from you and scheduling a tutoring session! Thank you.I have worked at a non profit organization for 8 months where I use Microsoft Outlook on a daily basis. I am well versed with the mail, calendar, and contact tools featured on Outlook. 49 Subjects: including statistics, Spanish, English, reading ...I also served as a reference for students, by request of the professor, in Advanced Logic and Computability. In Modal Logic, I assisted two classmates. I am currently tutoring a high school student in Symbolic Logic. 32 Subjects: including statistics, physics, calculus, geometry Related Linden, NJ Tutors Linden, NJ Accounting Tutors Linden, NJ ACT Tutors Linden, NJ Algebra Tutors Linden, NJ Algebra 2 Tutors Linden, NJ Calculus Tutors Linden, NJ Geometry Tutors Linden, NJ Math Tutors Linden, NJ Prealgebra Tutors Linden, NJ Precalculus Tutors Linden, NJ SAT Tutors Linden, NJ SAT Math Tutors Linden, NJ Science Tutors Linden, NJ Statistics Tutors Linden, NJ Trigonometry Tutors
{"url":"http://www.purplemath.com/Linden_NJ_Statistics_tutors.php","timestamp":"2014-04-17T21:31:09Z","content_type":null,"content_length":"23967","record_id":"<urn:uuid:5c012110-e74f-4c34-a8e2-4216edac12b7>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00395-ip-10-147-4-33.ec2.internal.warc.gz"}
Algebraic Geometry Seminar: Vivek Shende • Date: 01/08/2013 • Time: 15:30 Lecturer(s): Vivek Shende, MIT University of British Columbia Special divisors on hyperelliptic curves A divisor on a curve is called "special'' if its linear equivalence class is larger than expected. On a hyperelliptic curve, all such come from pullbacks of points from the line. But one can ask subtler questions. Fix a degree zero divisor Z; consider the space parameterizing divisors D where D and D+Z are both special. In other words, we wish to study the intersection of the theta divisor with a translate; the main goal is to understand its singularities and its cohomology.The real motivation comes from number theory. Consider, in products of the moduli space of elliptic curves, points whose coordinates all correspond to curves with complex multiplication. The Andre-Oort conjecture controls the Zariski closure of sequences of such points (and in this case is a theorem of Pila) and a rather stronger equidistribution statement was conjectured by Zhang. The locus introduced above arises naturally in the consideration of a function field analogue of this conjecture. This talk presents joint work with Jacob Tsimerman.
{"url":"http://www.pims.math.ca/scientific-event/130108-agsvs","timestamp":"2014-04-18T03:08:02Z","content_type":null,"content_length":"17152","record_id":"<urn:uuid:d6cab772-ff33-4b51-9bd7-3249995d812d>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00639-ip-10-147-4-33.ec2.internal.warc.gz"}
Calculate angels of a point object Join Date Aug 2011 Rep Power want to calculate the x & y of a point object after it hits a frame. Are you drawing a shape that hits another shape as it is moving across your window? You need to consider where the two shapes are located in 2 dimensions and see when the boundary of one shape touches the boundary of the other shape. Drawing this on paper will help you to see where the boundaries are and where the boundary of one touches the boundary of the other. the angle of the point need to be multiplied in -1 after it hits a frame Not sure what you mean here. A point doesn't have an angle. Do you mean the angle between the path of the point and the boundary of the shape that the moving point is coming in contact with? Are you trying to have the point bounce off the boundary and move in a new direction? Do you have some code you can post to show your problem? Join Date Sep 2008 Voorschoten, the Netherlands Blog Entries Rep Power If a point travels with a speed (dx, dy) toggle the sign of dx if the point hits a vertical wall; toggle the sign of dy if it hits a horizontal wall. kind regards, Last edited by JosAH; 11-16-2011 at 08:19 AM. Reason: fixed a stupid typo ... cenosillicaphobia: the fear for an empty beer glass Join Date Aug 2011 Rep Power No, I'm drawing a shape that hits the window frames. I know! I just trying to find a way to calculate the new values after the object confluenction. Do you have some code you can post to show your problem? Truthly, no. I just got a simple point on my screen. Looks like I got 2 problems now: 1)I want the point to move randomly on the screen. Moving it with 90 degrees it's easy .. just x++ or y++ but I want the point to move as the ball move on the picture I attached. 2)When it hits the window frame, what the values will be then? Yea I know that but what happens if the ball is not re painting by x++ or y++, if it's x+=3, y+=10 Thanks guys. edit: JoshAH, I think your answer is working for all the cases. I will test it now. I want the point to move as the ball move on the picture I attached. If you change the x and/or y values over time, the location will change. When it hits the window frame, what the values will be then? Print out the values of x and y as the ball moves to see their values. Do you know the size of the window frame? Join Date Sep 2008 Voorschoten, the Netherlands Blog Entries Rep Power
{"url":"http://www.java-forums.org/new-java/51276-calculate-angels-point-object.html","timestamp":"2014-04-17T17:02:48Z","content_type":null,"content_length":"90736","record_id":"<urn:uuid:db7b5157-6ab8-47d7-b7f0-55984db991f1>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00269-ip-10-147-4-33.ec2.internal.warc.gz"}
Combinatorial problem June 13th 2008, 05:43 PM #1 May 2008 Combinatorial problem Can anyone check if my below given questions answer is correct or not... (a) How many different strings can be formed by rearranging all the letters of the word MISSISSAUGA? Briefly justify your answer. my ans) 9! (b) How many of the rearrangements in part (a) contain 4 S’s in a row? Briefly justify your answer. my ans) Permutation(8,4) Thanks for the help! Hello, robocop_911! Sorry, your answers are wrong . . . (a) How many different strings can be formed by rearranging all the letters of the word MISSISSAUGA? There are 11 letters: . $\underbrace{A\;A}_2\;G\;\underbrace{I\;I}_2\:M\;\u nderbrace{S\;S\;S\;S}_4\;U$ There are: . $\frac{11!}{2!2!4!} \:=\:415,800$ possible strings. (b) How many of the rearrangements in part (a) contain 4 S’s in a row? Tape the four S's together. Then we have 8 "letters" to arrange: . $\underbrace{A\;A}_2\;G\;\underbrace{I\;I}_2\;M\;\b oxed{SSSS}\;U$ There are: . $\frac{8!}{2!2!} \:=\:10,080$ arrangements. By strings, I assume the first one asks for how many arrangements can be made from the word MISSISSAUGA?. We have 4 S's, 2 A's, 2 A's. Now, for the second one: Tie the S's together as one big letter. There is 8 places to put them and then arrange the other 7 letters in $\frac{7!}{2!2!}=1260$ ways. Last edited by galactus; June 13th 2008 at 06:23 PM. Reason: Soroban beat me but I can see we concur. June 13th 2008, 06:14 PM #2 Super Member May 2006 Lexington, MA (USA) June 13th 2008, 06:22 PM #3
{"url":"http://mathhelpforum.com/statistics/41502-combinatorial-problem.html","timestamp":"2014-04-20T13:06:31Z","content_type":null,"content_length":"38968","record_id":"<urn:uuid:79da5c0b-a78d-460a-80eb-b3c6164a9f6b>","cc-path":"CC-MAIN-2014-15/segments/1397609538423.10/warc/CC-MAIN-20140416005218-00465-ip-10-147-4-33.ec2.internal.warc.gz"}
arcTo method Draws an arc of a fixed radius between two tangents that are defined by the current point in a path and two additional points. CanvasRenderingContext2D.arcTo(x1, y1, x2, y2, radius); x1 [in] Type: number The x-coordinate for the first tangent that intersects with the current path point. y1 [in] Type: number The y-coordinate for the first tangent that intersects with the current point. x2 [in] Type: number The x-coordinate for the second tangent that intersects with the x1 and y1 points. y2 [in] Type: number The y-coordinate for the second tangent that intersects with the x1 and y1 points. radius [in] Type: number The radius of the arc to create. Return value This method does not return a value. Exception Condition IndexSizeError The radius given is Standards information The arcTo method creates an arc of radius radius between two tangents. The first tangent is defined by an imaginary line that is drawn through the last point in a path and the point (x1, y1). The second tangent is defined by an imaginary line that is drawn through the point (x1, y1) and the point (x2, y2). The arc is drawn between the two tangents using radius as the radius. ArcTo will draw a straight line from the last point of the path to the start of the arc which lies on the tangent that contains the last point on the path and x1 and y1. When arcTo draws an arc, it tries to fit the arc between the two tangents. The following illustration shows two graphics. Both graphics create a path, and both draw horizontal and vertical lines and use the same parameters for arcTo. However, the second graphic moves the last point on the path down by 20 pixels, which changes the angle between the two tangents. In the first graphic, the values make the arc complete a rounded corner. In the second example, because the angle of the intersecting tangents is narrower, arcTo needs to move the fixed radius arc to a point where it fits. The following code example shows how arcTo creates two different arcs of the same radius based on the angle of the tangents. The difference between the arcs is the position of the last path point. For illustrative purposes, the example uses the translate method to move the second arc down on the screen while preserving the same basic coordinate values. The second arc has the same radius as the first arc. But because moveTo moves the tangential lines, the arc appears in a different position. Because the radius is a fixed value, arcTo calculates the tangential lines and moves the arc to a position where it fits. Both examples include blue lines to show the tangential lines that arcTo uses. See the example in action. <!DOCTYPE html> <title>ArcTo example</title> <h1>ArcTo example</h1> <canvas id="myCanvas" width="300" height="600">This browser or document mode doesn't support canvas</canvas> var canvas = document.getElementById("myCanvas"); if (canvas.getContext) { var ctx = canvas.getContext("2d"); // Draw the imaginary tangents in blue. ctx.lineWidth = "3"; ctx.strokeStyle = "blue"; ctx.moveTo(80, 100); ctx.lineTo(240, 100); ctx.moveTo(200, 60); ctx.lineTo(200, 220); ctx.stroke(); // Draw it. // Create two lines that have a connecting arc that could be used as a start to a rounded rectangle. ctx.strokeStyle = "black"; ctx.lineWidth = "5"; ctx.moveTo(120, 100); // Create a starting point. ctx.lineTo(180, 100); // Draw a horizontal line. ctx.arcTo(200, 100, 200, 120, 20); // Create an arc. ctx.lineTo(200, 180); // Continue with a vertical line of the rectangle. ctx.stroke(); // Draw it. // Use the translate method to move the second example down. ctx.translate(0, 220); // Move all y-coordinates down 220 pixels to see more clearly. // Draw the imaginary tangents in blue. ctx.strokeStyle = "blue"; ctx.lineWidth = "3"; ctx.moveTo(200, 60); ctx.lineTo(200, 220); ctx.moveTo(220, 80); ctx.lineTo(120, 180); // Create a line, move the last path point to a point below, and then create an arc. ctx.strokeStyle = "black"; ctx.lineWidth = "5"; ctx.moveTo(120, 100); // Same starting point as above. ctx.lineTo(180, 100); // Same horizontal line as above. ctx.moveTo(180, 120); // Move the last path point down 20 pixels. ctx.arcTo(200, 100, 200, 120, 20); // Create an arc. ctx.lineTo(200, 180); // Continue with a vertical line of the rectangle. See also
{"url":"http://msdn.microsoft.com/en-us/library/windows/apps/hh465753.aspx","timestamp":"2014-04-17T19:43:35Z","content_type":null,"content_length":"48785","record_id":"<urn:uuid:44db809c-3bc5-4640-950d-ac2d48a02e0c>","cc-path":"CC-MAIN-2014-15/segments/1397609530895.48/warc/CC-MAIN-20140416005210-00321-ip-10-147-4-33.ec2.internal.warc.gz"}
9.3 Some Type M-Measures of Correlation Chapter 9 Correlation and Tests of Independence 451 percentage bend correlation is easily the most satisfactory, with the estimated probability of a type I error (based on simulations with 10,000 replications) ranging between .046 and .062. If the usual correlation, r , is used instead, the probability of type I error can exceed .2, and when using method GR, it exceeds .15. 9.3.5 R Function pball The R function pball(m,beta=0.2), computes the percentage bend correlation for all pairs of random variables, and it tests the hypothesis that all of the correlations are equal to zero. Here, m is an n-by- p matrix of data. If the data are not stored in a matrix, the function prints an error message and terminates. (Use the R command matrix to store the data in the proper way. See Becker, Chambers, & Wilks, 1988, for details.) Again beta, which is in Table 9.2, defaults to 0.2. The function returns a p-by-p matrix of correlations in pball$pbcorm, another matrix indicating the p-values for the hypotheses that each correlation is zero, plus the test statistic H and its corresponding p-value.
{"url":"http://my.safaribooksonline.com/book/-/9780123869838/9dot3-some-type-m-measures-of-correlation/935_r_function_pball","timestamp":"2014-04-20T07:00:55Z","content_type":null,"content_length":"104702","record_id":"<urn:uuid:b21d0a17-de9e-4b25-a4ac-682a08fffb86>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00177-ip-10-147-4-33.ec2.internal.warc.gz"}
Statistics[ScatterPlot3D] - generate 3D scatter plots Calling Sequence ScatterPlot3D(XYZ, options, plotoptions) XYZ - Array or Matrix of numeric data, of size mx3 options - (optional) equation(s) of the form option=value where option is one of lowess, bandwidth, fitorder, rule, strictorder, or showpoints; specify options for generating the scatter plot plotoptions - options to be passed to the plots[display] command The options argument can contain one or more of the options shown below. All unrecognized options will be passed to the plots[display] command. See plot[options] for details. Designates whether lowess smoothing should be applied to the scatter plot. The smoothing behavior is modified by the options bandwidth, fitorder, rule, and strictorder; see these options for more details. The default value is false. This option is used to control the bandwidth of the lowess smoothing algorithm, when lowess fitting is enabled. The value of this option specifies the ratio of the size of the rectangular fitting window to the entire range of the independent data. The default value is . At the value of all data points in the sample will be used to compute each plotted grid value, which is quite expensive and not really in the spirit of lowess smoothing. As this value is decreased, fewer data points will be found within the window and used for each individual local fit, and this will decrease the duration of the whole computation. As this value is increased, more points farther away will influence the output value for each local fit, and this will also increase the duration of the whole • fitorder=identical(0,1,2) The degree of the bivariate polynomial used in lowess smoothing, when lowess fitting is enabled. The default value is . Designates the rule by which the nearby points falling in the window specified by bandwidth are weighted. The default value is , which denotes the tri-cubed rule. A value of 0 for this option means that all points found in the window will have the same weight. Designates whether the order of the fitting curve may not be reduced in the case that the number of points found in the window is less than what would be necessary for the supplied fitorder option. The default value is false, which allows reduction of the order at any individual computed point. Designates whether the pointplot component will be included in the output. If false then only the surface will be included. The default value is true. • The ScatterPlot3D command generates a 3D scatter plot for the specified 2D data together with a surface approximated using lowess smoothing (LOcally Weighted Scatterplot Smoothing). • The first parameter, XYZ, is the data sample - given as a Matrix or Array with three columns and as many rows as there are distinct data points. Each row represents the x-, y-, and z-coordinate of a data point. • The collection of x- and y-components of all the data points need not collectively form a regular grid in the x-y plane. The data points may be irregularly spaced when projected onto the x-y • As this is a smoothing technique, the resulting surface will not necessarily pass exactly through all the the 3D data points. • The Statistics[ScatterPlot3D] command was introduced in Maple 16. • For more information on Maple 16 changes, see Updates in Maple 16. First, some data is constructed and noise is then added to the z-component. The view from above shows the irregular spacing of the x-y components of the data. A fitting order of 0 produces a form of weighted moving average. Linear or quadratic fitting, with a fitting order of 1 or 2 respectively, produce smoother plots. See Also CurveFitting, Statistics, Statistics[ScatterPlot], Statistics[Visualization], plots[surfdata], examples,Interpolation_and_Smoothing Was this information helpful? Please add your Comment (Optional) E-mail Address (Optional) What is This question helps us to combat spam
{"url":"http://www.maplesoft.com/support/help/Maple/view.aspx?path=Statistics/ScatterPlot3D","timestamp":"2014-04-21T10:26:24Z","content_type":null,"content_length":"142767","record_id":"<urn:uuid:1ef0fd19-7ce7-4ef4-b008-ffb285707a23>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00517-ip-10-147-4-33.ec2.internal.warc.gz"}
Answer Key for Postal Assistant/Sorting Assistant Exam held on 12 May 2013- Mathematics Section Mathematics Section Solved Q1. If the digits of a two digit number are interchanged, the newly formed number is more than the original number by 18. If the sum of the digits is 8, then what was the original number? (A) 26 (B) 17 (C) 35 (D) Data inadequate Ans. (C) 35 Q2. One year ago, a mother was 4 times older than her son. After 6 years, her age becomes more than double her son’s age by 5 years. The present ration of their age will be: (A) 17:2 (B) 25:7 (C) 29:6 (D) None of these Ans. (B) 25:7 Q3. If m:n is 2:3, what is the value of (2m+5n)/(6m-n): (A) 5/3 (B) 7/3 (C) 3/7 (D) None of these Ans. (B) 7/3 Q4. A triple successive discount of 20%, 10% and 5% is equal to a single discount of : (A) 31.60% (B) 25.7% (C) 40% (D) 35% Ans. (A) 31.60% Q5. Three items are purchased at Rs.380/- each. One of them is sold at a loss of 10%. The others are sold so as to gain25% on the whole transaction. What is the gain % on these two items? (A) 42.5% (B) 40.5% (C) 44.5% (D) None of these Ans. (A) 42.5% Q6. The size of a bag that could hold 6 Kg of oranges has now been increased so that it can hold 8 kg. What is the percentage increase in the size? (A) 30 1/3 % (B) 33 1/3% (C) 20% (D) 25% Ans. (B) 33 1/3% Q7. If w/x=6/11, y/z=16/23, w/y=9/6 then what is the value of x/y? (A) 13/16 (B) 11/16 (C) 6/16 (D) None of these Ans. (D) None of these Q8. What approximate value should come in place of (?) in the following question? (A) 8 (B) 10 (C) 53 (D) None of these Ans. (B) 10 Q9. Mid points of the side of an equilateral triangle of side 18 cm are joined to form another triangle, whose midpoints are further joined to form a different triangle and this process is repeated indefinitely. The sum of the perimeters of all triangle will be: (A) 144cm (B) 172cm (C) 72cm (D) 108cm Ans. (D) 108cm Hint: use GP Q10. A sum of Rs.8500/-is to be divided among 5 men, 6 women and 8 boys in the ratio of 10:7:1. The share of 01 boy will be: (A) Rs.595/- (B) Rs.850/- (C) Rs.85/- (D) None of these Ans. (C) Rs.85/- Q11. Ram and Shyam work in the same factory. Ram can produce 45 articles in one hour and Shyam can produce 40 articles in 0ne hour. During one week Shyam worked 5 more hours than Ram but produced same number of articles as Ram. How many hours did Ram work that week: (A) 50 (B) 43 (C) 45 (D) 40 Ans. (D) 40 (A) 3/4 (B) 5 ¼ (C) 1 (D) 1 1/3 Ans. (C) 1 Q13. Which of the following numbers has got the highest value? (A) 127/25 (B) 121/9 (C) 53/4 (D) 25/6 Ans. (B) 121/9 Q14. The average score of Sachin Tendulkar in IPL 15 matches is 70 runs and the average score in Border-Gavaskar T-20 matches is 45 runs in 7 matches. If he has played 10 more International T-20 matches and his overall average score in all T-20 matches was 73 runs. What was his total score in 10 International T-20 matches: (A) 971 (B) 982 (C) 990 (D) None of these Ans. (A) 971 Q15. Number of diagonals in a 30 sided convex polygon will be: (A) 818 (B) 378 (C) 405 (D) 955 Ans. (C) 405 Q16. Two trains leave stations P and Q, 110km apart. Train from P to Q travels at 25Km/hr and train from Q to P at 30Km/hr. If they both start at 8 AM, they meet at: (A) 09:00Am (B) 09:45 Am (C) 10:40Am (D) None of these Ans. (D) None of these Q17. A sphere of radius x is melted and its volume is divided into two equal parts. One part is cast into a cylinder of height 10cm and second a cone of same height. The ratio of cylinder radius to the cone radius is: (A) 1:3 (B) √3:2 (C) 1:√3 (D) None of these Ans. (C) 1:√3 Q18. A man buys milk at a certain price per Kg. and after mixing it with water sells it again at the same price. How many grams of water he mixes in every Kg. of milk if he makes a profit of 25%: (A) 150g (B) 30g (C) 250g (D) 200g Ans. (C) 250g Q19. A sum of Rs.8000 generates Rs.1261 as compound interest in 03 years, interest being compounded annually. The rate of compound interest is: (A) 10% (B) 5% (C) 20% (D) 2.5% Ans. (B) 5% Q20. By how much is two-thirds of 96 less than three fifths of 210? (A) 114 (B) 62 (C) 206 (D) None of these Ans. (B) 62 Q21. If √.00000676 = .0026, then the square root of 67,60,000 is: (A) 260 (B) 2600 (C) 1/26 (D) 26 Ans. (B) 2600 Q22. The average temperature of three days is 24˚C. If the temperature on first two days is 20˚ and 25˚C respectively, then temperature on third day is: (A) 27˚C (B) 24˚C (C) 22 ½˚C (D) 23˚C Ans. (A) 27˚C Q23. A five-year cash certificate with a maturity value of Rs.300 is purchased for Rs.200. The annual rate of simple interest is: (A) 10% (B) 15% (C) 5% (D) 7 ½ % Ans. (A) 10% Q24. You bought some apples. On the first day you ate one and ¼ of the reminder. On the second day you ate 2 and ¼ of the remainder. On the third day you ate the entire remaining balance of 3. How many apples did you buy? (A) 13 (B) 19 (C) 9 (D) None of these Ans. (C) 9 Q25. Two card start from place A and B, 100km apart, towards each other. Both card start simultaneously. A bird sitting on one car starts at the same time towards the other car, and as soon as it reaches the second car, it flies back to the first carand it continues in the manner flying backwards and forwards from one car to the other, until the cars meet. Both cars travel at speed of 50Kmph and the bird flies at 100kmph. Total distance covered by the bird will be: (A) 100km (B) 200km (C) 50km (D) None of these Ans. (A) 100km * Comment if there is discrepancy in answers to any questions. 48 comments: 1. qus 11.40hours is the time he wrk plus 5 hours he wrkd more.so 45 cud b the answr. 2. qus no 8.answr s 10 na?hw cud it b 53?pls solve 3. Please check the answer of ques no. 8.... I think the right answer is B. 10 4. i think ans for qstn no. 25 and 10 is d. 5. answer of ques no 8 is surely 10. use a calculater 6. Question num 10. Right answer is 59, its not in the option. So none of these D 7. pls upload the key of remaining two sections. thanks for your help. 8. ques no:10 answer is 59.. 9. Yes...question no:10...i'm also getting the ans as d) none of these.... 1. i think answer 85 is correct 2. how? 3. but how come it is 85??? 10. i think answer for ans 10 is 85 itself. it is the correct answer. 11. i have scored 76 marks in the exam as per the available key. any chances for selection? 1. yes there is chance for selection 2. In which category do u belong to??? 3. i belong to OBC.. 12. I got above 10 marks in maths section as per the key given here.Also I got 10 marks in all other sections.Is there any chance for me?? 13. plz upload the key of english section ,i have scored 47 marks from 3 section each 14,12,21 is there is any chance for me 14. i got 70 marks in total whether ter is possibility for me to select in obc category 15. i got 75 marks and i come under general category... is there any hope??? 16. hey guys...pls tell ur score... 1. i have scored 92 as per key. 2. 92??so job is sure for u..... :) 3. will +2 mark will be considered i have 93% 4. i think no.....only apti marks vl b considered.... how much is your apti mark as per the key given above?? 5. i got 92 mark as per key 6. k..you vl get the job for sure.... 7. thank you.but i can"t join because i have got selection for other job 8. oh k....congrats!!!!wat job u got selected for?? 9. got selection as army officer( women entry). 10. cool...and u belong to which circle?? 11. i think you are asking about postal assistant circle.i belong to kerala circle. have you written postal assistant exam??? any hope 12. yeah!!! i also belong to kerala circle....i have written and score is 78.don't know whether to hope or not :) 13. i think you have more probability for selection.work hard in speed typing (computer).practice a lot and achieve the goal.may god bless you...... 14. how this test would be??i have no clue....is it typing a passage or what?? and also how many candidates would they call for this test??is there a good chance for me to be called for this? 17. i got 78 marks...and i'm in general category.. any chance for me?? 18. I got 70 marks (kerala circle). In general category. Any hope to be included in the list? 19. I got 58 marks totally is there any chance for me? becoz the qualifying mark is 10 marks in each part is 10 marks in each part.i have 10 marks in every part.But total is 58 20. I got 51 marks totally is there any chance for me? becoz the qualifying mark is 10 marks in each part is 10 marks in each part.i have 10 marks in every part.But total is 51 21. Any one know the eligible criteria??? pls rply me... 22. number doesn't get you job you are now just eligible for typing & data entry test ...and i can assure you that's quite a tough deal 1. i got 78 marks...and i'm in general category,kerala.. any chance for me?? 23. Is typing test is very tough???????????????? 1. no idea.... 24. i got 58 mark if there is any chance for me 1. r u in which category??? 25. when did the results come?????????
{"url":"http://www.currentaffairs4examz.com/2013/05/answer-key-for-postal-assistantsorting.html","timestamp":"2014-04-16T10:33:27Z","content_type":null,"content_length":"280312","record_id":"<urn:uuid:fcdef18d-8e8c-4973-b59d-13a322989f7f>","cc-path":"CC-MAIN-2014-15/segments/1398223210034.18/warc/CC-MAIN-20140423032010-00011-ip-10-147-4-33.ec2.internal.warc.gz"}
[FOM] Avron's questions about intuitionistic A.P. Hazen a.hazen at philosophy.unimelb.edu.au Fri Nov 4 22:37:09 EST 2005 Arnon Avron writes: >I have several qustions to intuitionists (the first two - also >to nonintuitionists who might know the answers). >1) Is there a definable (in the same way implication is definable >in classical logic in terms of disjunction and negation) unary >connective @ of intuitionistic logic such that for every A, B we have >that A and @@A intuitionistically follow from each other, >and B intuitionistically follows from the set {A, at A}? -------No. If such a connective were definable, it would express a "truth function" in every matrix for intuitionistic logic. Thus, to show indefinability, it suffices to exhibit a Heyting Algebra over which no such operation can be defined. Consider the three-element chain, <T,U,F>. (This is the algebra of propositions in the two-world Kripke model: T = true in both worlds, U = true only at the second world, F = true in neither world.) By Avron's condition (ii) B intuitionistically follows from the set {A, at A} @A intuitionistically implies ~A (the usual intuitionistic negation of A). Thus @(x) must be less than (or equal to) ~(x) for each value, so @(T)=@(U)=F. @(F), however must be T: by Avron's (ia) A intuitionistically implies @@A @@(T) has to be T, but @(T) is F. But now (ib) @@A intuitionistically implies A yields a contradiction: @(U) is F, so @@(U) = T, which does not imply U. >2) Can one define in intuitionistic logic counterparts of the >classical connectives so that the resulting translation >preserves the consequence relation of classical logic? (Obviously,a >negative answer to question 1 entails a negative one to question 2). -----There are the "negative translations," discovered by Glivenko, Gödel and Gentzen. Those that preserve the classical consequence relation, however, do not provide "generally applicable" analogues of the classical connectives: defined connectives which, applied to arbitrary formulas, "act classical". In particular, there is a negative translation, t, for which the translation of classical negation is simply intuitionistic negation (t[~A] = ~[t[A]]) and which allows t[A] to be inferred from t[~~A], but on it t[S] is always built up from "pseudo-atoms" of the form ~~p. >3) In several postings it was emphasized that LEM applies >in intuitionistic logic to "decidable" relations. Is there >an intuitionist definition of "decidable" according to which >this claim conveys more than a trivial claim of the form "A implies A"? ----The claim is tautologous if we DEFINE "F is decidable" as "Ax(Fx v ~Fx)" is (intuitionistically) true. On the other hand, very weak intuitionistic systems of arithmetic can represent recursive functions. So in general, if a predicate is decidable by means of an algorithm, and you can prove that the algorithm works -- i.e. that Ax((Fx <=> f(x)=1)&(~Fx <=> fx = 0)) -- in a reasonable intuitionistic system, then you can also prove Ax(Fx v ~Fx) in the same system. So, if you think of decidability in terms of algorithms, it can be informative to say that excluded middle holds for decidable predicates. >4) Some postings mentioned also "undecidable" relations (or predicates). >What is the definition of "undecidable" here? is a relation P >intuitionistically undecidable if there is procedure that produces >a proof of absurdity from a procedure that given any x either provides >a proof of P(x) or a procedure that carries any proof of P(x) to >a proof of absurdity? >5) Does an intutionistically-undecidable predicate intutionistically-exist? -------It is provable in intuitionistic analysis that it is not the case that every real number is either equal to or distinct from 0: ~Ax(Fx v ~Fx) is provable, where F means "is equal to 0" and the variable ranges over real numbers. Allen Hazen Philosophy Department University of Melbourne More information about the FOM mailing list
{"url":"http://www.cs.nyu.edu/pipermail/fom/2005-November/009306.html","timestamp":"2014-04-24T01:31:52Z","content_type":null,"content_length":"6966","record_id":"<urn:uuid:3fd53ac1-78f3-4523-aea9-06132ce19162>","cc-path":"CC-MAIN-2014-15/segments/1398223204388.12/warc/CC-MAIN-20140423032004-00404-ip-10-147-4-33.ec2.internal.warc.gz"}
Right Triangle Proof Date: 11/19/2004 at 10:26:03 From: Kaustav Subject: Inscribed Circle In right triangle ABC, let CD be the altitude to the hypotenuse. If r1,r2,r3 are radii of the incircles of triangles ABC, ADC, and BDC, respectively, prove CD = r1 + r2 + r3. I'm not sure how to approach it. I have considered using areas. If I can prove the ratio of radii to the sides, I think it can be solved. Any hints? Date: 11/19/2004 at 12:36:46 From: Doctor Schwa Subject: Re: Inscribed Circle Hi Kaustav, By cutting each triangle into three, you can prove that the area of each triangle = 1/2 * radius of inscribed circle * perimeter of triangle. So: Area ABC = 1/2 * r1 * (AB + BC + AC) Area ADC = 1/2 * r2 * (AD + DC + AC) Area BDC = 1/2 * r3 * (BD + DC + BC) and since area ABC = area ADC + area BDC, r1*AB + r1*BC + r1*AC = r2*AD + r2*DC + r2*AC + r3*BD + r3*DC + r3*BC which is some progress, since now we know DC = (r1*AB + r1*BC + r1*AC - r2*AD - r2*AC - r3*BD - r3*BC)/(r2 + r3) but this still looks pretty ugly. In my argument, though, I never used the fact that the triangle was right! So, what else do we get? Area ABC = 1/2 * AC * BC and also some nice proportions like AD/DC = AC/CB = DC/DB. There's some other nice facts, too, like because tangent segments to a circle must be equal, r1 = (AC + BC - AB)/2 (do you see how to prove this fact?) and analogous relationships in the smaller triangles as well. I think this last idea is VERY promising ... try writing down the expressions for r2 and r3 that you get by this same method and see if you can find that r1 + r2 + r3 = CD! - Doctor Schwa, The Math Forum
{"url":"http://mathforum.org/library/drmath/view/67182.html","timestamp":"2014-04-19T20:22:24Z","content_type":null,"content_length":"6724","record_id":"<urn:uuid:0bee60a1-260f-4c54-9860-42c1ee65b834>","cc-path":"CC-MAIN-2014-15/segments/1397609537376.43/warc/CC-MAIN-20140416005217-00029-ip-10-147-4-33.ec2.internal.warc.gz"}
GMAT Tip of the Week If You Don’t Love the Situation, I’m Going to Make You Love the Situation As we wind down the first decade of the new millennium, it’s only human to reflect on the decade as a whole to attempt to place it in context. What will people remember as the lasting legacy of The Oughts? The election of Barack Obama? 9/11? The financial debacle? If you’ve been watching television this last month, you’ve likely noticed that a new contender has emerged: MTV’s newest reality show Jersey Shore has become an instant phenomenon, providing the world with both an extraordinary amount of entertainment and another reason to hate America’s Generation Y. Oddly enough, this so-horrendous-you-can’t-turn-away group of fist-pumping do-nothings may also provide you with a framework to improve your GMAT performance. Arguably the most interesting character, named “The Situation”, has arguably the greatest signature one-liner in recent television memory: upon approaching fellow revelers, he’s keen on saying of himself “if you don’t love The Situation, I’m going to make you love The Situation.” As you prepare for the GMAT, you may want to use this mantra as your own. The GMAT asks questions in unique ways, each of which is layered with traps and pitfalls designed to provide you with opportunities to make mistakes. Studying for the GMAT can be a frustrating pursuit – perform all the necessary calculations to solve for x, and the question may actually ask you for the value of 15-x (but cleverly provide you with your value for x as an answer choice). On a Data Sufficiency problem, you might determine that statement one proves that “no, x is not greater than 0″, and therefore eliminate statement one from your checklist…only to have made the crucial mistake of not recognizing that a definitive “no” answer means that statement one IS SUFFICIENT to answer the question. Remember – your job on yes/no Data Sufficiency problems (see example below) is to determine when you have enough information to solve the problem; you don’t need to ensure that the answer is “yes”. Consider: Is x >0? 1) x^2 + 2x + 1 = 0 This statement demonstrates that x must be a negative number – any positive value of x would create a positive result for the entire quadratic, and therefore not equal zero. Only if x is negative can the equation be true (you may even recognize that this equation factors out to (x + 1)^2 = 0, meaning that -1 is the only solution – but also note that you don’t need to solve for x in this case as long as you know that it cannot be positive, as you only have to answer the overall question). You may be inclined, once you realize that statement one provides the answer “NO” to the overall question, to cross off statement one, but it actually is sufficient; it gives us a definitive answer to the question, which is all we need. Back to “The Situation”, as frustrating as you may find these subtleties of the GMAT, if you learn to love the GMAT situation you can attack these problems as fun brainteasers. Data Sufficiency problems have subtle rules just like Tetris does, or your computer’s time-killing solitaire card games do. If you see the GMAT as a worthy mental challenge, and not as a frustrating task, you’ll be much better equipped to quickly spot the traps and pitfalls that might otherwise catch you and lower your score. If you don’t love the situation, you should learn to love the situation. And once you love the situation, you’re much more likely to dominate the situation. For more GMAT prep tips and resources, give us a call at (800) 925-7737. And, as always, be sure to follow us on Twitter!
{"url":"http://www.veritasprep.com/blog/2009/12/gmat-tip-of-the-week-57/","timestamp":"2014-04-19T06:53:44Z","content_type":null,"content_length":"47333","record_id":"<urn:uuid:4e9d55ed-89e3-4141-b669-21505823b860>","cc-path":"CC-MAIN-2014-15/segments/1397609536300.49/warc/CC-MAIN-20140416005216-00276-ip-10-147-4-33.ec2.internal.warc.gz"}
Finding unique solutions August 17th 2012, 05:58 PM #1 Jun 2012 Finding unique solutions "Find an interval around x = 0 for which the given initial-value problem has a unique solution." (x-2)y" + 3y = x, y(0) = 0, y'(0) = 1 How do I do this without solving the DE, which I haven't learned yet (but will soon)? Re: Finding unique solutions Hi, phys251. There is an existence theorem for second order linear differential equations that goes as follows: Main Theorem Consider the Initial Value Problem (IVP) $y''(x)+p(x)y'(x)+q(x)y(x)=g(x)$, $y(x_{0})=y_{0}$ and $y'(x_{0})=y_{1}.$ If the functions $p(x), q(x)$ and $g(x)$ are continuous on the open interval $I$ that contains the point $x_{0},$ then the IVP has exactly one solution throughout the interval $I$. To start, $x_{0}=0$ in our example. I would suggest dividing through by $x-2$ and seeing if you can use the Main Theorem from there. Think about it a little more, and if you're still stuck I'll detail a little further what I had in mind for this exercise. Good luck! Last edited by GJA; August 17th 2012 at 09:32 PM. Re: Finding unique solutions some crazy partial fractions, but I think this definitely can't be right as I'm supposed to end up with another piecewise function as my answer. Could anyone point me in the right direction? I have never done a piecewise function with functions of t as the values. chaussures jordan 6 timberland bottes timberland bottes pas cher August 17th 2012, 06:49 PM #2 Jul 2012 November 9th 2012, 05:48 AM #3 Nov 2012
{"url":"http://mathhelpforum.com/differential-equations/202277-finding-unique-solutions.html","timestamp":"2014-04-17T21:55:53Z","content_type":null,"content_length":"37087","record_id":"<urn:uuid:deb3b3f2-d21f-48a8-a742-27f7788d9fe9>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00061-ip-10-147-4-33.ec2.internal.warc.gz"}
Devon Calculus Tutor Find a Devon Calculus Tutor ...My expertise is in Algebra 1, Algebra 2, Geometry, Trigonometry, Pre-Calculus, Calculus and SAT/Act preparations. I am a flexible, enthusiastic, and encouraging tutor. My experience allows me to identify the weak areas of students and then effectively finding ways to explain mostly by relating to real life situations. 15 Subjects: including calculus, physics, geometry, algebra 1 ...Hello Students! If you need help with mathematics, physics, or engineering, I'd be glad to help out. With dedication, every student succeeds, so don’t despair! 14 Subjects: including calculus, physics, geometry, ASVAB ...I have taught students from kindergarten to college age, and I build positive tutoring relationships with students of all ability and motivation levels. All of my students have seen grade improvement within their first two weeks of tutoring, and all of my students have reviewed me positively. T... 38 Subjects: including calculus, Spanish, English, reading ...I can provide that. A continuation of Algebra 1 (see course description). Use of irrational numbers, imaginary numbers, quadratic equations, graphing, systems of linear equations, absolute values, and various other topics. May be combined with some basic geometry. 32 Subjects: including calculus, English, geometry, biology ...I recently graduated from Jacksonville University with a bachelor's degree in mathematics. I took Linear algebra course in college and passed it with a B. I have experience with the uses of linear algebra and matrices. 13 Subjects: including calculus, geometry, GRE, algebra 1
{"url":"http://www.purplemath.com/Devon_Calculus_tutors.php","timestamp":"2014-04-20T08:57:19Z","content_type":null,"content_length":"23526","record_id":"<urn:uuid:e7b858a2-ab5d-41d5-9783-d93a72ac9f73>","cc-path":"CC-MAIN-2014-15/segments/1397609538110.1/warc/CC-MAIN-20140416005218-00657-ip-10-147-4-33.ec2.internal.warc.gz"}
Venation Skeleton-Based Modeling Plant Leaf Wilting International Journal of Computer Games Technology Volume 2009 (2009), Article ID 890917, 8 pages Research Article Venation Skeleton-Based Modeling Plant Leaf Wilting National Engineering Research Center for Information Technology in Agriculture, Beijing 100097, China Received 1 September 2008; Revised 30 December 2008; Accepted 19 February 2009 Academic Editor: Xiaopeng Zhang Copyright © 2009 Shenglian Lu et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. A venation skeleton-driven method for modeling and animating plant leaf wilting is presented. The proposed method includes five principal processes. Firstly, a three-dimensional leaf skeleton is constructed from a leaf image, and the leaf skeleton is further used to generate a detailed mesh for the leaf surface. Then a venation skeleton is generated interactively from the leaf skeleton. Each vein in the venation skeleton consists of a segmented vertices string. Thirdly, each vertex in the leaf mesh is banded to the nearest vertex in the venation skeleton. We then deform the venation skeleton by controlling the movement of each vertex in the venation skeleton by rotating it around a fixed vector. Finally, the leaf mesh is mapped to the deformed venation skeleton, as such the deformation of the mesh follows the deformation of the venation skeleton. The proposed techniques have been applied to simulate plant leaf surface deformation resulted from biological responses of plant wilting. 1. Introduction Realistic modeling of plant leaves has a long history in computer graphics. This is partly due to their either beautiful or colorful images, and partly they have a strong visual effect on the audience. Many techniques have been proposed for modeling the geometry or shape of leaves. Most of these methods, however, just describe the shape of leaves formed in normal natural conditions and do not account for the shapes formed under stress, for example, curled or withered leaves. Additionally, there has been a great deal of previous work on simulating motions of plant, including plant growth, motion in the wind, and so on. But less work has focused on modeling leaf surface deformation and simulating subtle behaviors of plant, such as wilting of leaves suffering from insufficient water supply. This paper presents a venation skeleton-based deformation method for plant leaves and aims to develop an approximately kinematic model of leaf for simulating motions of plant leaves, especially wilting. Firstly, an initial leaf skeleton is exacted from scanned image of leaf. The leaf skeleton plays two roles: it is used to generate a venation skeleton for later deformation; a geometric mesh for the leaf surface is also constructed from the leaf skeleton. Furthermore, a subdivision scheme is applied to generate a detailed triangular mesh from the initial mesh, and each vertex in the mesh is mapped to its nearest vertex in the venation skeleton. Then the venation skeleton is deformed interactively to desired shape. Lastly, the detailed mesh is deformed according to the deformed venation skeleton. Applications of our approach to simulate wilting plant leaves with realistic results illustrate the flexibility and effectiveness of our model. 2. Background and Related Work 2.1. Venation Patterns Both the outline and venation system of a leaf are essential in the recognition of plant species. Various venation structures can be found in plant kingdom. It is believed that venation patterns correlate closely with the taxonomic groups of plants and the shapes of leaves. Hickey [1] has given a classification for the leaf venation patterns, in which pinnate venation and actinodromous venation are two common found categories (see Figure 1). Pinnate venation characterized by a single primary vein (the midvein) being attached by several secondary veins; the primary vein originates at the base and extends toward the leaf tip. Whereas in actinodromous venation, three or more primary veins diverge radially from a single point. Primary veins support sequences of secondary (lateral) veins, which may branch further into higher-order veins. The secondary veins and their descendants may be free ending, which produces an open, tree-like venation pattern, or they may connect, forming loops characteristic of a closed pattern. Although the interrelationships between topological or geometric properties of the various leaf venation patterns and functional aspects are far from being well understood, it is believed that the leaf venation system brings various functional properties. More information about this can be found in [2]. 2.2. Model Deformation and Motions of Plant Leaf in Computer Graphics Some researchers have endeavored to generate curled shapes of plant leaves. Prusinkiewicz et al. [3] provided a detailed representation of combining interaction and parameterized algorithms for realistic plant modeling and scene creation involving plants, including curled leaves. Mündermann et al. [4] proposed a method for modeling lobed leaves; curled surface of leaves could be generated by using free deformation in their framework. Recently, Hong et al. [5] proposed an interactive method for modeling curled leaf surface. But it could involve excessive manual interactions for generating a desired curled shape of leaf by using their method. Studies on curvature of plant leaves from biophysical perspective have raised the question of what role, if any, genes play in controlling the curvature of leaves [6]. Yet there are some researchers study waved or wrinkled pattern in leaves with physical analysis [7]. But these may go beyond our focus in this paper. In aspects of modeling motions of plant, most work has been done on modeling dynamic motions of tree in the wind, such as the work demonstrated in [8]. Based on the fact that plant growth is generally influenced by gravity and tropisms, Jirasek and Prusinkiewicz [9] proposed a biomechanical model for creating curved plant branches by using physically based modeling. Hart et al. [10] extended their idea in modeling plant growth considering physical properties of the plant. Note that none of the above models simulated leaves behaviors. Wang et al. [11] had simulated physically the growth of a plant leaf; the physical model used in their simulation is the governing equations of fluid mechanics—the Navier-Stokes equations. But they just tested their model in 2D. Much work has been done on surface deformation. And many techniques, such as multiresolution mesh representations [12], skeleton-driven global free-form shape deformations [13], and differential deformation [14, 15], have been developed to help artists deform object shapes. But none of these techniques has been tested by modeling leaf deformation. 3. Overview of the Proposed Method Figure 2 gives an overview how our proposed method works. The modeling processes include constructing the venation skeleton from a leaf image, deforming the leaf surface, and so forth. These schematics label the transition processes in uppercase letters, A, B, and so forth. The interactive simulating system that we propose tries to strike a pragmatic balance between processes that can be automated and those that seem to require interaction to achieve the desired level of realism. It has primarily been designed to support our experimentation with interactive animation of leaf motions. Each process will be detailed in the following sections. 4. Generating Venation Skeleton The venation structure of a leaf plays a major biological role in determining the leaf surface shape and controlling its deformation; therefore, we use it to control the deformation of a leaf blade. To generate the venation skeleton, currently we consider an interactive method. In Figure 2, process A and C illustrate the steps for generating venation skeleton of a leaf. 4.1. Extracting a Leaf Boundary We use a representation of a leaf skeleton consisting of two boundary curves and a midvein curve which consists of feature points, as shown in the left of process A. These boundary curves can be reconstructed from feature points in the boundary of a leaf, while these feature points can be extracted automatically from a scanned digital image by using a standard edge detection algorithm, or obtained by using a 3D digitizer. To meet the need of interactive design, the leaf skeleton can also be constructed automatically with a parametric method, in which the length of midvein, the width of leaf blade, and the number of feature points are all initialized with parameters. 4.2. Generating Venation Skeleton The skeleton was originally introduced by Blum [16] for 2D shapes in order to provide a symmetry-based shape representation for shape perception and recognition. Recently the skeleton in 3D has been studied in connection with a research on shape organization [17] and shape manipulation. Practical extraction of the skeleton of a 3D shape is usually based on 3D Voronoi diagram techniques [18]. For our needs, we develop an interface for generating interactively venation skeleton from a leaf skeleton. As Figure 2 shows (from process A to process C), the leaf skeleton can be obtained from a scanned image; a venation skeleton is then generated from the leaf skeleton, and each vein is segmented. The result of process C demonstrates a generated venation skeleton consisting of one midvein and four secondary veins; the black vertices string segmented each vein into several line segments. The process of generating venation skeleton involves several manual interactions including defining the start point and end point for each vein, and specifying parameters for segmenting each vein. Note that the venation skeleton is not unique. It can be created according to actual needs. Users can decide how the midvein crosses the leaf skeleton, and how many secondary veins are attached to the midvein. 5. Leaf Surface Meshing and Banding 5.1. Constructing Leaf Surface We have constructed a leaf skeleton with two boundary curves and a midvein curve as shown in Figure 2 (process A). To mesh the void area within these boundary curves, we employ Delaunay triangulation scheme, because it can deal with the problem of existing concave area in the leaf blade, which is difficult to render directly by simple polygon. For example, lobed leaves often have irregular silhouette characterizing by a number of concave outline. When using Delaunay triangulation, we can use directly the feature points in the midvein and silhouette, or extract a series of points from the midvein curve and silhouette curves with a fixed interval. The mesh resulted from process B of Figure 2 is generated from the result of process A by using Delaunay triangulation scheme. The initial mesh of a leaf surface generated by using Delaunay triangulation is generally irregular and rough. It is necessary to refine the mesh for later deformation. Currently, we use a simple method to subdivide the initial mesh. Generally each triangle can be divided into four triangles. To meet users’ requirements of interaction, we provide two parameters for the subdivision: one is the times of iteration steps of subdivision, the other is the minimal distance of an edge in the mesh. The iterations timeis used to control the number of iterations in the subdividing, while using the constraint of the minimal distance for an edge is to avoid generating too short edge. The lower figure of process D in Figure 2 illustrates a mesh subdividing the initial mesh shown in the upper figure at the same process with iteration steps and minimal distance being specified 2 and 0.5, respectively. 5.2. Banding The banding is to attach all the vertices in the subdivision mesh to the initial venation skeleton of the leaf. The banding is based on the distance of each vertex to the venation skeleton. In other words, each vertex in the mesh is banded to the nearest vertex in the venation skeleton. Figure 3 gives an example of banding a leaf mesh to its venation skeleton which consists of a midvein and two secondary veins. 6. Venation Skeleton-Driven Leaf Surface Deformation Skeleton-based methods have been extensively used for mesh deformation in computer animation and computer modeling. In this method, two or more “bones” meet at each joint, to control shape deformation. This allows intuitive control, naturally describing the way in which many objects, for example, animals, deform the muscles and other tissues follow motions of the underlying bones. Traditional skeleton-based methods commonly require a tedious process of weight selection to obtain satisfactory results. Note that the natural deformation of plant leaves is different from the deformation of humans’ or animals’ organs, and spontaneous motions of a leaf blade are relatively simple. The most regular motions of plant leaves which can be seen by our naked eyes are curling and wilting. So we can constraint the movement of vertices in the venation skeleton for simulating the movements of leaves. The major goal of our approach is to develop an approximately kinematic model for simulating these motions of plant leaves. For our needs, we restrict the movement of each vertex in the venation skeleton by rotating it around a fixed vector. Figure 4(a) illustrates how a venation skeleton works. For convenience, the venation skeleton includes a vein only, which consists of four segments. The light black vertex serves as the root node. Note that the movement of a leaf blade is always downward during its wilting, as such each joint in the skeleton segment does spherical movement. Take vertex , for example, vector will align gradually vector during wilting, in which is a downward vector reverse to -axis, while vector is perpendicular to the plane which contains vector and . To the movement of vertex , it can be looked as is rotated round vector . To obtain a motion sequence of a vertex in a skeleton, the simplest method is to rotate the vertex with a fixed angle, such as θ in Figure 4(a), and the angle is given commonly by the users. But a conventional method is using inverse kinematics [19]. We have mentioned before that a vertex in the skeleton will do spherical rotation during the wilting of its corresponding leaf surface. As Figure 4(a) illustrates, we can calculate the new position of vertex by using the following parametric equation: where . Further, the motion sequence of can be obtained by increasing parameter . This may simplify the rotation operation. Note that vector needs to be recalculated when vertex is repositioned, but it is always downward. And also, all child segment of a vertex in the venation skeleton will follow the movement of the vertex; this can be done by passing a displacement and rotation angle to its child vertices when the vertex is being rotated. Figure 4(b) gives results of rotating four vertices in a skeleton successively, in which the blue is the rotated vertices while the red is the base point. We have detailed the mechanism for controlling movement of the venation skeleton and methods for constructing the leaf mesh in our approach. The last step is to deform the leaf surface according to the deformed venation skeleton. This process can be illustrated as process F in Figure 2. Firstly all the vertices in the subdivision mesh of the leaf are banded to the initial venation skeleton. Then the initial venation skeleton is deformed by using the method described in Section 6.1. For example, at process E, we can generate the shape of the venation skeleton shown bottom in Figure 2E from the initial skeleton shown top in Figure 2E (with different number of joints in each vein). Lastly the position of each vertex in the mesh is recalculated according to the new coordinate of its banded vertex in the venation skeleton. The left-bottom figure in Figure 2 shows the resulted mesh. The right-bottom figure in Figure 2 illustrates the rendering result. The texture mapping is calculated before the deformation, and no need to remap after the deformation. It needs to be noted that the number of joints in each vein in the venation skeleton will influence the effects of deformation. It is easy to imagine that the larger the number of joints is, the smoother the deformed surface will be. And large deformation needs large number of joints. But larger number of joints in the venation skeleton also means more computation and more difficult controlling over the deformation. In our experiments, we find that pleasing visual effects of the simulated wilted leaves shape can be achieved when the number of joints in each vein is not smaller than 4. Users can also obtain a satisfactory result by interactive experiments. Constraints and collision detection are usually the common issues in surface deformation. For constraint, we have stated that each vertex in the venation skeleton can rotate around a fixed vector. Additionally, the rotation needs to meet some extra constraints. For example, the leaf surface will be always drooped during its wilting. In simulating the effect of wilting leaf surface, a vertex in the leaf mesh could not be rotated after it had reached the maximal drooped distance. When simulating curling of a leaf, it needs to avoid overlap of the leaf surface. This can be done by keeping the included angle of two adjacent line sections on each vein being larger than a predefined angle. Collision detection and response is usually the most time consuming process for the overall simulation. Currently we just consider a collision detection to avoid self-intersect in the deformation. During deforming a leaf mesh, each handling currently vertex needs to be checked if its movement will pierce some triangle in the mesh. Piercing means that there is intersection in the triangle mesh. If no piercing occurs, no response is made. If there is an intersection, then we calculate a maximal displacement from the precalculated displacement for the vertex to move to avoid intersection, and correct the displacement of corresponding vertex in the venation skeleton. 7. Applications and Discussion We implemented our algorithm for venation skeleton-driven leaf surface deformation in C++ on a PC with a 2.8GHz Pentium D processor and a NVIDIA GeFore 7900GS graphics card, and used OpenGL to render the results. In this section we report the modeling results. Firstly we simulate wilting effect of a plant leaf. The plant used in this experiment is cucumber. Figure 5 shows a photo of a real wilting cucumber in greenhouse. In our experiment, we first simulate the wilted effect of a cucumber leaf. As shown in Figure 6, Figure 6(a) is the initial shape of the simulated leaf surface while Figures 6(b) and 6(c) are two deformed results corresponding to two different levels of wilted leaves. The initial leaf surface is created interactively from a scanned image within 3 minutes, with 187 vertices and 286 triangles. The iterations time for subdividing the leaf surface is specified to 0. In this experiment, the venation skeleton consists of a midvein and two secondary veins parameters for segmenting these veins are 20, 4, respectively, and the total time for simulating the leaf wilting is 8 seconds with 22 frames. We then use the venation skeleton model to simulate the process of a cucumber wilting. Figure 7 demonstrates the simulated results, in which Figure 7(a) is the initial shape, Figure 7(b) simulates a slight wilting leaves, and Figure 7(c) simulates an acute wilting one. This initial plant is created within 5 minutes by using an interactive interface that we have developed for designing crop structure. The number of vertices and triangles of this model are 4082 and 6150, respectively. We use three instances of leaf surface in the cucumber model, and the venation skeleton in each instance is different from each other, but with the same number of veins (three). The venation skeleton is deformed automatically by rotating the vertices in the venation skeleton downward from the boundary to root of the leaf, by using (1), and the above leaves start wilting later than the lower leaves do, whereas the speed of wilting can be adjusted by modifying parameter t. This simulation consists of 300 frames and runs for approximately 2 minutes. The second application example is simulating a watermelon leaf wilting. Watermelon leaf is a typical lobed leaf. We use a venation skeleton shown in Figure 8(a) to control its deformation of the leaf blade. The venation skeleton consists of one midvein and four secondary veins, and the parameter for segmenting the midvein is specified to 30; parameter for segmenting secondary vein is 10. The initial shape of the leaf is shown in Figure 8(b), which contains 239 vertices and 316 triangles, while Figure 8(c) demonstrates three wilting effects, respectively. We do not apply subdivision to the mesh of the leaf surface (the iterations time is 0), but the results are plausible. To test the influence of the iterations time for subdividing leaf surface to the simulation of leaf wilting, we make an experimental comparison, and the simulated results can be seen in Figure 9. It can be concluded that the larger the iterations time for subdividing, the smoother the simulated wilted leaf surface. The above application examples demonstrate that the proposed venation skeleton-driven approach for simulating wilting of leaf surface is effective and flexible. All of these simulations are running in real time. It can generate realistic effects of wilted leaves similar to natural shape. Currently generating the venation skeleton is manual and interactive in our framework, and controlling the motions of leaves in the scale of a plant is still simple. In fact, leaves wilting may be a natural response for plant adapting to the environment basing on their inner state. An attractive area for future work might involve combining our dynamic modeling technique with physiological model of the leaf. In addition, we just consider a single plant or a leaf in our framework. It is desirable to simulate the motions of plant leaves in an ecosystem scale. Compared to previous related work on modeling shape of plant leaves, in our knowledge, our method is a first attempt to model plant leaves wilting in computer graphics. The proposed approach cannot only generate wilted leaves shape, but also simulate the wilting process of a plant. This provides an intuitive mechanism for animating subtle motions of plant. 8. Conclusion We have presented a model for modeling wilted leaf surface and simulating motions of plant leaves. This model deforms a leaf surface by driving a venation skeleton which is embedded into the geometric mesh of a leaf. The venation skeleton can be created from any polygonal mesh of leaf surface, whereas the polygonal mesh can be captured from real leaves, which makes it easy to create highly realistic leaf appearance models. Currently generating the venation skeleton is manual and interactive in our framework, and with a parametric method. We have demonstrated our model by simulating wilting of a cucumber plant and a watermelon leaf. But it needs to be noted that motions of plant leaves would result from a series of complex reasons, which are difficult to be revealed and simulated, as such mechanism of motions of plant leaves is not easy to model. The leaf deformation model presented in this paper is an example of a model that provides intuitive control for simulating of some motions of plant leaves. An exciting area for future work is the development of a framework for virtual agronomic experiment for broader classes of This work is supported by National High Tech R&D Program of China under Grant no. 2007AA10Z226, Beijing Natural Science Foundation of China under Grant no. 4081001, and the National 11th Five-year Plan for Science & Technology of China under Grant no. 2006BAD10A07. 1. L. J. Hickey, Anatomy of the Dicotyledons, Clarendon Press, Oxford, UK, 2nd edition, 1979. 2. A. Roth-Nebelsick, D. Uhl, V. Mosbrugger, and H. Kerp, “Evolution and function of leaf venation architecture: a review,” Annals of Botany, vol. 87, no. 5, pp. 553–566, 2001. View at Publisher · View at Google Scholar 3. P. Prusinkiewicz, L. Mündermann, R. Karwowski, and B. Lane, “The use of positional information in the modeling of plants,” in Proceedings of the 28th Annual Conference on Computer Graphics and Interactive Techniques (SIGGRAPH '01), pp. 289–300, Los Angeles, Calif, USA, August 2001. View at Publisher · View at Google Scholar 4. L. Mündermann, P. MacMurchy, J. Pivovarov, and P. Prusinkiewicz, “Modeling lobed leaves,” in Proceedings of Computer Graphics International (CGI '03), pp. 60–65, Tokyo, Japan, July 2003. 5. S. M. Hong, B. Simpson, and G. V. G. Baranoski, “Interactive venation-based leaf shape modeling,” Computer Animation and Virtual Worlds, vol. 16, no. 3-4, pp. 415–427, 2005. View at Publisher · View at Google Scholar 6. U. Nath, B. C. W. Crawford, R. Carpenter, and E. Coen, “Genetic control of surface curvature,” Science, vol. 299, no. 5611, pp. 1404–1407, 2003. View at Publisher · View at Google Scholar 7. E. Sharon, B. Roman, and H. L. Swinney, “Geometrically driven wrinkling observed in free plastic sheets and leaves,” Physical Review E, vol. 75, no. 4, Article ID 046211, 7 pages, 2007. View at Publisher · View at Google Scholar 8. J. Beaudoin and J. Keyser, “Simulation levels of detail for plant motion,” in Proceedings of the ACM SIGGRAPH/Eurographics Symposium on Computer Animation (SCA '04), pp. 297–304, Grenoble, France, August 2004. View at Publisher · View at Google Scholar 9. C. Jirasek and P. Prusinkiewicz, “A biomechanical model of branch shape in plants,” in Proceedings of the 9th Western Computer Graphics Symposium (WCGS '98), M. Lantin, Ed., pp. 23–26, Whistler, Canada, April 1998. 10. J. C. Hart, B. Baker, and J. Michaelraj, “Structural simulation of tree growth and response,” The Visual Computer, vol. 19, no. 2-3, pp. 151–163, 2003. 11. I. R. Wang, J. W. L. Wan, and G. V. G. Baranoski, “Physically-based simulation of plant leaf growth,” Computer Animation and Virtual Worlds, vol. 15, no. 3-4, pp. 237–244, 2004. View at Publisher · View at Google Scholar 12. L. P. Kobbelt, T. Bareuther, and H.-P. Seidel, “Multiresolution shape deformations for meshes with dynamic vertex connectivity,” in Proceedings of the 21st Annual Eurographics Conference (EUROGRAPHICS '00), pp. 249–260, Interlaken, Switzerland, August 2000. 13. J. Bloomenthal and C. Lim, “Skeletal methods of shape manipulation,” in Proceedings of the International Conference on Shape Modeling and Applications, pp. 44–47, Aizu-Wakamatsu, Japan, March 14. M. Alexa, “Differential coordinates for local mesh morphing and deformation,” The Visual Computer, vol. 19, no. 2-3, pp. 105–114, 2003. 15. Y. Yu, K. Zhou, D. Xu, et al., “Mesh editing with poisson-based gradient field manipulation,” ACM Transactions on Graphics, vol. 23, no. 3, pp. 644–651, 2004. View at Publisher · View at Google 16. H. Blum, “A transformation for extracting new descriptors of shape,” in Proceedings of the Symposium on Models for the Perception of Speech and Visual Form, pp. 362–380, MIT Press, Cambridge, Mass, USA, 1967. 17. P. Giblin and B. B. Kimia, “A formal classification of 3D medial axis points and their local geometry,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (CVPR '00), vol. 1, pp. 566–573, Hilton Head, SC, USA, June 2000. View at Publisher · View at Google Scholar 18. N. Amenta, M. Bern, and M. Kamvysselis, “A new Voronoi-based surface reconstruction algorithm,” in Proceedings of the 25th Annual Conference on Computer Graphics and Interactive Techniques (SIGGRAPH '98), pp. 415–421, Orlando, Fla, USA, July 1998. View at Publisher · View at Google Scholar 19. J. Weber, “Run-time skin deformation,” in Proceedings of Game Developers Conference (GDC '00), pp. 703–721, San Jose, Calif, USA, March 2000.
{"url":"http://www.hindawi.com/journals/ijcgt/2009/890917/","timestamp":"2014-04-17T19:12:08Z","content_type":null,"content_length":"71076","record_id":"<urn:uuid:08837a9d-fb09-462c-8cc9-7eb822ae0802>","cc-path":"CC-MAIN-2014-15/segments/1397609530895.48/warc/CC-MAIN-20140416005210-00074-ip-10-147-4-33.ec2.internal.warc.gz"}
Re: 3NF but not BCNF From: Jan Hidders <hidders_at_REMOVE.THIS.win.tue.nl> Date: 15 May 2001 13:41:29 GMT Message-ID: <9drbm9$hij$1@news.tue.nl> Phil Cook wrote: > I am presented with the following question: > Argue that if a relation schema R is in Third Normal Form but not > in Boyce-Codd Normal Form with respect to a set of functional > dependencies F, then it must have at least two distinct keys for > R with respect to F which overlap, i.e. such that their > intersection is nonempty. > Unfortunately, my textbook only mentions this point in passing, > referring to some paper by Vincent and Srinivasan. This paper does > not appear to be available online, however. > Any explanation would be greatly appreciated. The proof rests upon the following two Lemmas: Lemma 1: If you have a candidate key K and a functional dependency X -> A (with X a set of attributes and A a single attribute) that is left-irreducible (i.e. there is not a FD X' -> A with X' a strict subset of X) then (K - {A}) + X (with '-' the set minus and '+' the set union) is also a candidate key. For example, if you know that ABC is a candidate key and FD DE->B holds, then ADEC is also a candidate key. (Note that we simply replaced in ABC the B with DE.) [Note: Proof of this is easy but not trivial.] Lemma 2: If a relation is in 3NF but not in BCNF then there will be a left-irreducible FD X -> A such that it is 1. non-trivial, 2. A is in a candidate key and 3. X is not a superkey. [Note: This is also easy to prove but not trivial. ] With these lemmas we can prove for a relation in 3NF but not in BCNF there are at least two different candidate keys such that these have an overlap. The general idea of the proof is that we can use the FD of Lemma 1 to derive another candidate key with Lemma 2 that has an overlap with the one that A is in: Assume that there are only the candidate keys K_1, ..., K_n. Then by Lemma 1 there is a left-irreducible non-trivial FD X -> A such that A is in some candidate key K_i and X is not a superset of any of K_1, ..., K_n. By Lemma 2 it follows that then (K_i - {A}) + X is also a candidate key. For this candidate key we can show two things: 1. It is different from K_i. 2. Its intersection with K_i is non-empty. This can be shown as follows. 1. Assume that (K_i - {A}) + X = K_i. Since A is in K_i it follows that X must be a superset of {A}. But this implies that X -> A is trivial which leads to a contradiction. The assumption that (K_i □ {A}) + X = K_i must therefore be false. 2. Assume that the intersection of K_i and (K_i - {A}) + X is empty. It then follows that K_i = {A} and therefore that (K_i - {A}) + X □ ({A} - {A}) + X = X is a candidate key. But this implies that X is a superkey which leads to a contradiction. The assumption that the intersection of K_i and (K_i - {A}) + X is empty must therefore be false. It follows from a. and b. that ther are at least two candidate keys such that these have an overlap. Jan Hidders Received on Tue May 15 2001 - 08:41:29 CDT
{"url":"http://www.orafaq.com/usenet/comp.databases.theory/2001/05/15/0144.htm","timestamp":"2014-04-21T08:33:50Z","content_type":null,"content_length":"10044","record_id":"<urn:uuid:dc44974d-d43d-4a87-b5ec-e6bef66b8f8c>","cc-path":"CC-MAIN-2014-15/segments/1397609539665.16/warc/CC-MAIN-20140416005219-00387-ip-10-147-4-33.ec2.internal.warc.gz"}
Calculate angels of a point object Join Date Aug 2011 Rep Power want to calculate the x & y of a point object after it hits a frame. Are you drawing a shape that hits another shape as it is moving across your window? You need to consider where the two shapes are located in 2 dimensions and see when the boundary of one shape touches the boundary of the other shape. Drawing this on paper will help you to see where the boundaries are and where the boundary of one touches the boundary of the other. the angle of the point need to be multiplied in -1 after it hits a frame Not sure what you mean here. A point doesn't have an angle. Do you mean the angle between the path of the point and the boundary of the shape that the moving point is coming in contact with? Are you trying to have the point bounce off the boundary and move in a new direction? Do you have some code you can post to show your problem? Join Date Sep 2008 Voorschoten, the Netherlands Blog Entries Rep Power If a point travels with a speed (dx, dy) toggle the sign of dx if the point hits a vertical wall; toggle the sign of dy if it hits a horizontal wall. kind regards, Last edited by JosAH; 11-16-2011 at 08:19 AM. Reason: fixed a stupid typo ... cenosillicaphobia: the fear for an empty beer glass Join Date Aug 2011 Rep Power No, I'm drawing a shape that hits the window frames. I know! I just trying to find a way to calculate the new values after the object confluenction. Do you have some code you can post to show your problem? Truthly, no. I just got a simple point on my screen. Looks like I got 2 problems now: 1)I want the point to move randomly on the screen. Moving it with 90 degrees it's easy .. just x++ or y++ but I want the point to move as the ball move on the picture I attached. 2)When it hits the window frame, what the values will be then? Yea I know that but what happens if the ball is not re painting by x++ or y++, if it's x+=3, y+=10 Thanks guys. edit: JoshAH, I think your answer is working for all the cases. I will test it now. I want the point to move as the ball move on the picture I attached. If you change the x and/or y values over time, the location will change. When it hits the window frame, what the values will be then? Print out the values of x and y as the ball moves to see their values. Do you know the size of the window frame? Join Date Sep 2008 Voorschoten, the Netherlands Blog Entries Rep Power
{"url":"http://www.java-forums.org/new-java/51276-calculate-angels-point-object.html","timestamp":"2014-04-17T17:02:48Z","content_type":null,"content_length":"90736","record_id":"<urn:uuid:db7b5157-6ab8-47d7-b7f0-55984db991f1>","cc-path":"CC-MAIN-2014-15/segments/1397609530136.5/warc/CC-MAIN-20140416005210-00269-ip-10-147-4-33.ec2.internal.warc.gz"}
Manipulation of (Imaginary?) Roots Date: 8/18/96 at 19:46:41 From: Anonymous Subject: Manipulation of (Imaginary?) Roots I would appreciate it if you could point out the mistake I'm making. Let r,s, and t be the roots of x^3-6x^2+5x-7=0. Find 1/r^2+1/s^2+1/t^2. Simplifying the fractions into one, I get Let the numerator be x. Expand (rs+st+rt)^2 = x+2rst(r+s+t) Using the coefficients from the original equation I get 25 = x+2*7*6 or x = -59 and the desired fraction is -59/49. The problem is that everything is squared in the desired fractions, so they cannot be negative. Two of the roots are imaginary, but I don't think that should affect the result. Date: 8/19/96 at 4:34:42 From: Doctor Pete Subject: Re: Manipulation of (Imaginary?) Roots The answer you obtained is indeed correct. In fact, it is precisely because two of the roots are imaginary that the above result is To see why this is so, take a look at the original expression, 1/r^2 + 1/s^2 + 1/t^2 . If, say, r and s were imaginary, r^2 and s^2 would be real and *negative*. Thus 1/r^2 and 1/s^2 would be negative, and one could easily expect the entire sum to be negative as well. In this case, the approximate numerical values of the three roots are {5.30633, 0.346833 + 1.09494 I, 0.346833 - 1.09494 I} Since the squares of complex conjugates are also complex conjugates, it is clear that the imaginary component vanishes when the sum of the reciprocals is taken. Since the real root is relatively large, the reciprocal of its square will be very small - small enough that the net result is negative. -Doctor Pete, The Math Forum Check out our web site! http://mathforum.org/dr.math/
{"url":"http://mathforum.org/library/drmath/view/52986.html","timestamp":"2014-04-19T15:22:56Z","content_type":null,"content_length":"6698","record_id":"<urn:uuid:00907056-34ec-44fd-846b-ecb7a7b167a8>","cc-path":"CC-MAIN-2014-15/segments/1397609537271.8/warc/CC-MAIN-20140416005217-00205-ip-10-147-4-33.ec2.internal.warc.gz"}
Mathcad: Using Vectors Instead of Range Variables Much of the material below is from Chapters 1, 5, and 11 of the book Essential Mathcad ©2009, Elsevier Inc. Note: For this worksheet, ORIGIN is set to 1 A vector is simply a matrix with only one column. Use the Insert Matrix dialog box to create a matrix. This dialog box can be accessed in three ways: Selecting Matrix from the Insert menu; typing the shortcut CTRL+ M.; or selecting the matrix icon (showing a three by three matrix) on the Matrix toolbar. For a vector, the columns field in the dialog box should be set to 1. Range Variables Range variables have a beginning value, an incremental value, and an ending value. To define a range variable: 1. Type the variable name followed by a colon; 2. In the placeholder, type the beginning value followed by a comma. 3. In the new placeholder type the incremental value followed by a semicolon. This inserts two dots and adds a third placeholder. 4. Now enter the ending value. Range variables are best used to increment expressions, iterate calculations, and to set plotting limits. When you use range variables to iterate a calculation, it is important to understand that Mathcad begins at the beginning value and iterates every value in the range. You cannot tell Mathcad to only use part of the range variable. If you use a range variable as an argument for a function, the result is another range variable, which means that the result is displayed, but you cannot access individual elements of the result. Also, you cannot assign the result to a variable. Even though it is possible to use a range variable as an argument for a function, it is best to use a vector, so that each element of the result can be assigned and accessed. It is important to understand the difference between a range variable and a vector. The following two examples illustrate the benefits of using vectors rather than range variables when you need to assign and access the results of a function or equation. Example 1 Range variable vs vector in user-defined functions This example compares the difference between using a range variable and a vector as input to a function. 1. Using a Range Variable Create range variable RV Create user-defined function The range variable may be used as the argument of the function. Units may also be added to the function argument. The range variables do not need to be integers when using them in a function. The results are displayed, but you cannot access or reuse individual results. Mathcad gives you an error if you try to assign this result to a variable. 2. Using a Vector In order to make each result accessible, you need to assign the results to a vector. Create vector "v" similar to the range variable. Attach units of cm to the vector. Type CTRL+ M to start the vector. The answers are the same as above, but by using a vector, each element of the result is now accessible. Note: The above subscript is an array subscript typed by using the " [ " key. It is different from a literal subscript typed by using the " . " key. Array subscripts are used with vectors and Example 2 Range variable vs vector in an expression Use the range variable "RV" and the vector "v" from the previous example. 1. Using a range variable Type the expression π *RV2 to use the previously defined variable "RV." The results are displayed, but are not accessible, and the expression cannot be assigned to a variable. Mathcad gives you an error if you try to assign this result to a variable. 2. Using a Vector Assign the variable CircleArea_1 to the expressions π *v2, where "v" is a previously defined vector variable. Each element of the results is now accessible. A user-defined function to create a range vector Mathcad does not have a simple way to create a vector with a range of values. The following function creates a "range vector" similar to creating a range variable. It is from Figure 11.8 of the book Essential Mathcad. This function creates a vector with a range of values. This is based on a function in the February 2002 Mathcad Advisor Newsletter. The for loop creates a range variable for iterating from the three input arguments. The vector "Vec" is a local variable. Create a variable with range from 0 to 1 with 0.1 increment. These two variables appear to be identical, but they behave very differently. If you would like to have the above function always available in your worksheets, add the function to your default template "Normal.xmct." Much of the material above is from Chapters 1, 5, and 11 of the book Essential Mathcad ©2009, Elsevier Inc.
{"url":"http://learningexchange.ptc.com/tutorial/146/mathcad-using-vectors-instead-of-range-variables","timestamp":"2014-04-16T10:49:49Z","content_type":null,"content_length":"28824","record_id":"<urn:uuid:dd871e6d-2119-4da0-a8e3-da7f4f6da286>","cc-path":"CC-MAIN-2014-15/segments/1397609523265.25/warc/CC-MAIN-20140416005203-00125-ip-10-147-4-33.ec2.internal.warc.gz"}
French Powers February 11th 2007, 10:23 AM #1 Nov 2006 French Powers I got this from New Scientist magazine. Don't worry, I've already figured it out and it's too late to be sent in for the £15 In the following statement, digits have been consistently replaced by capital letters, different letters being used for different digits: DIX is the product of two primes, CENT is a square, MILLE is a cube; none of them starts with a zero. Which numbers are represented (in this order) by DIX, CENT and MILLE? The closing date is the 14th of March, and its the first letter opened on that date with the correct answer that wins. But this is metaphor for correct answer selected at random, they do not order them by date Well get solving then! I won't post in any answers - I have already worked it out! >_< February 12th 2007, 04:32 AM #2 Grand Panjandrum Nov 2005 February 12th 2007, 07:57 AM #3 Nov 2006
{"url":"http://mathhelpforum.com/math-challenge-problems/11481-french-powers.html","timestamp":"2014-04-16T04:51:22Z","content_type":null,"content_length":"36357","record_id":"<urn:uuid:06d4d0a3-2092-46e4-97fe-476b27a8c91d>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00281-ip-10-147-4-33.ec2.internal.warc.gz"}
A New Technique of Removing Blind Spots to Optimize Wireless Coverage in Indoor Area International Journal of Antennas and Propagation Volume 2013 (2013), Article ID 509878, 10 pages Research Article A New Technique of Removing Blind Spots to Optimize Wireless Coverage in Indoor Area ^1Department of Electrical Engineering, Faculty of Engineering, University of Malaya, 50603 Kuala Lumpur, Malaysia ^2Department of Electrical and Electronic Engineering, Faculty of Engineering, National Defence University of Malaysia, 57000 Kuala Lumpur, Malaysia Received 6 December 2012; Revised 14 February 2013; Accepted 27 March 2013 Academic Editor: Stefano Selleri Copyright © 2013 A. W. Reza et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. Blind spots (or bad sampling points) in indoor areas are the positions where no signal exists (or the signal is too weak) and the existence of a receiver within the blind spot decelerates the performance of the communication system. Therefore, it is one of the fundamental requirements to eliminate the blind spots from the indoor area and obtain the maximum coverage while designing the wireless networks. In this regard, this paper combines ray-tracing (RT), genetic algorithm (GA), depth first search (DFS), and branch-and-bound method as a new technique that guarantees the removal of blind spots and subsequently determines the optimal wireless coverage using minimum number of transmitters. The proposed system outperforms the existing techniques in terms of algorithmic complexity and demonstrates that the computation time can be reduced as high as 99% and 75%, respectively, as compared to existing algorithms. Moreover, in terms of experimental analysis, the coverage prediction successfully reaches 99% and, thus, the proposed coverage model effectively guarantees the removal of blind spots. 1. Introduction Recent trend shows that it is one of the greatest challenges to remove the blind spots from the user-specified environment as well as install the optimal communication system while designing the wireless networks. Blind spot [1] refers to that position where signal cannot be reached or too weak to be considered as a significant signal, which affects the overall performance of the communication system. There are not many existing methods [2, 3] available in the area of radio signal propagation prediction to optimize the indoor wireless coverage. In [2], a recent coverage prediction model using ray-tracing (RT) technique for field prediction, associated with genetic algorithm (GA) to optimize the base station antenna location in an indoor environment, is presented. The model is based on the image method and, thus, computational time is increased with the number of objects in the indoor environment. On the other hand, GA is used to find the location of optimal antenna that maximizes the signal strength calculated over the region of interest. The GA starts with a set of solutions, known as population, and solutions from a population are used to generate a new population. The solutions that will generate a new population are selected based on their fitness function. This procedure is repeated until a certain condition is satisfied. If the condition is not attained, genetic operators are applied to form a new population. The generation of new population completely depends on the linear combination of variables of each point in the optimization space in a certain generation. Therefore, implementation of this type of coverage prediction model is very troublesome and also computationally inefficient. Conversely, Yun’s algorithm [3] also consists of GA and RT to determine the minimum number of transmitting antennas as well as their appropriate locations to provide the optimized wireless coverage for indoor environment and according to Yun et al. [3], all existing coverage models failed to guarantee the optimum wireless coverage by completely eliminating blind spots from the indoor environment. However, it may be impractical to simulate the complex environments using Yun’s algorithm [3] due to expensive computational complexity that would sometimes be beyond the capacity of the contemporary computers. Therefore, the main research objectives are illustrated as follows: (i) to determine the best location of transmitter to provide the optimum network deployment, (ii) to determine the number of transmitters required to ensure optimal coverage, and (iii) to remove the blind spots from indoor as well as obtain the maximum wireless coverage using the minimum number of transmitters while designing the wireless networks (i.e., to find a solution where the number of transmitters is minimum and all the receivers are covered by those transmitters, which means no receiver or blind spot will exist in the indoor). To address the current challenges and prime concerns as discussed, this study exploits both RT [4–11] and GA [12–16] together with depth first search (DFS) [17] (the DFS is a last in first out searching in terms of live nodes where list of live nodes forms a stack) and branch-and-bound method as a new technique that guarantees the removal of blind spots and subsequently determines the optimal wireless coverage. The RT is used here to follow the trajectory from the transmitter to the receiver. For detailed explanations of RT model, we may refer to [6]. In GA, each chromosome is represented by a binary pattern that keeps the coverage information of a transmitter. By recombination of some patterns, it finds the optimal solution. The DFS is used to search those transmitters that are required to cover all the receivers (a generated node whose child nodes have not yet been explored is called live node and the node which is being explored is called E-node [12 ]). While searching, the recombination theory of GA is applied to the coverage pattern of the transmitters. And, branch-and-bound method is a backtracking procedure [12], where bounding functions are applied to avoid the generation of subtrees (to avoid unnecessary searching, thus making the algorithm faster) that have no answer node. In contrast to [12], the proposed algorithm consumed less memory requirement as it stores only a stack of nodes representing a single path. If any solution is found without exploring unnecessary nodes based on the bounding functions, both space and time complexities can be reduced further. The superiority of the proposed system will be verified in the subsequent discussions. 2. Proposed Coverage Algorithm The working principle of the proposed algorithm based on the DFS is as follows. (i)Explore the root to generate its child.(ii)Visit the child nodes and follow on with the child of them and so on until a leaf node is found. (iii)Step back to the second child of the previous root, and so on. For simplicity, the following notations are being used in the subsequent discussions. (i)If the number of sampling points is , is the th transmitter where . (ii)The coverage pattern of a transmitter is . (iii)The number of good and bad sampling points of a coverage pattern is and , respectively. (iv)Let be the edge that is connected with the root and immediate child of the root node. The edge has a real-valued weight , where is the th bit of coverage pattern . Each node also has a weight of . The weight of the root node is , the weight of the child node at level 1 is , and the child node at level 2 along the same path has the weight of . Therefore, the weight of each node can be expressed as , where and represent the depth and the weight of an edge of the tree, respectively. In this study, the coverage information of a transmitter is described by a chromosome, which consists of a coverage pattern. The coverage pattern works as follows. Assume that an indoor area where sampling points are covered by transmitters , . Here, each transmitter has a coverage pattern like , where and = “0” or “1” for . The value indicates the th sampling point as a good sampling point, which is covered by the th transmitter and specifies that th sampling point is a blind spot. Again, two patterns and can generate the resultant pattern based on union operation as follows: In the best case, the resultant coverage pattern is where that makes summation value equal to , which indicates 100% coverage. Thus, if the set of unique coverage patterns is , the purpose is to find a subset , where the number of covered sampling points is For illustration, suppose there are 8 sampling points in an indoor area. If two transmitters and cover the sampling points 1, 3, 5, 8 and 2, 3, 4, respectively, their coverage patterns ( and ) and the resultant pattern are as shown in Table 1. Here, the resultant pattern is created based on logical “OR” operation (where the result is “1”, if either first or second bit is “1” or both bits are “1;” otherwise, the result is “0”). The resultant pattern consists of six good sampling points (1, 2, 3, 4, 5, and 8) and two blind spots (6 and 7). Figure 1 also gives some ideas on good sampling points and blind spots, where 40 receivers have been deployed. In Figure 1(a), an area surrounded by the dotted pentagon has been chosen as the transmitter position. Here, 21 receivers (small filled circles) receive the signals from the transmitter and other 19 receivers (small filled circles surrounded by the triangle) do not receive any signal. The blue lines are the emanated rays from the transmitter, while the red lines represent the specular reflections and transmissions of waves between the objects (obstacles). Therefore, in case of a single transmitter in Figure 1(a), 21 good sampling points and 19 blind spots are found. Similarly, while using two transmitters in Figure 1(b), still there exist 6 blind spots. And, finally for Figure 1(c), where 3 transmitters are being used, there is no blind spot available that indicates that the optimal coverage for indoor wireless has been achieved and those three transmitting positions (surrounded by the dotted pentagon) are considered as at their optimized locations. In this study, the DFS uses branch-and-bound terminology while exploring the search tree. For the indoor environment having number of sampling points, the proposed bounding functions are as follows. (i)If the set of transmitters representing the path to the E-node is , then the set of child nodes with parent-child labeling is , where the coverage pattern is not covered by the resultant pattern , that is, .(ii)The number of blind spots in the resulting pattern generated from the set of transmitters until the path of the E-node is less than or equal to the summation of the good sampling points of the subsequent coverage patterns that correspond to the subtree of the root , that is, .(iii)The new node will be generated from the current E-node, if the weight (where ) of the current E-node at level is less than or equal to the number of sampling points . Consider a floor plan of a building with six sampling points as presented in [3]. It is assumed that the transmitters are placed to the slight left of the sampling points by an arbitrary distance of 36cm [3]. The reason behind this is if the positions of the transmitter and receiver are identical, then the proposed algorithm will skip the corresponding sampling point while calculating the received power as both positions of transmitter and sampling points are overlapped. Now, the ray tracer for each of the six sampling positions is executed to calculate the field distribution among sampling points and the generated coverage patterns (–) of those six transmitters (–) are in Table 2. Here, each row is a coverage pattern and each binary value in the column refers to a sampling point as good or blind spot for the corresponding transmitter. From Table 2, the cost functions of the coverage patterns ( to ) are 3, 3, 3, 4, 3, and 5, respectively. Here, the cost function (effectiveness) of a coverage pattern is calculated based on the blind spots; that is, the lower the number of blind spots, the higher the effectiveness. It is noticed that both and are the same (i.e., the sampling points covered by have already been covered by ). Hence, is ignored marking it as a duplicate pattern. Thus, the proposed algorithm continues with 5 patterns except to select an optimal subset of them. Figure 2 generates a DFS tree using those 5 transmitters to achieve the optimal coverage. The edge between two nodes represents a transmitter. According to the proposed algorithm, the exploration of a node is suspended as soon as a new unexplored node is generated. Then, the exploration of the new node is immediately begun. The algorithm starts from the root node 1 at level 0, which indicates no transmitter in the area. The root is explored and its first child node 2 is the next E-node of level 1, where the coverage algorithm is based on single transmitter. Node 2 generates its first child 3 at level 2, which is expanded as usual to generate nodes 4 and 5 at level 3. However, it cannot be generated because of the third bounding function because the weight of each node at level 3 is greater than the sum of the number of sampling points, that is, . As the node 3 is a leaf node, the algorithm switches back to its parent node 2. The second child node 4 of node 2 is generated and the algorithm switches to the second level again. Children of node 4 should be transmitters and , respectively, but they cannot be generated because of first, second, and third bounding functions, respectively. The set of transmitters that consists of the path to the E-node 4 is , the resultant coverage pattern of which is as follows:(4) The pattern covers the pattern that violates the first bounding function. The pattern has 2 blind spots. The set of transmitters that forms the sub-tree of root is . Therefore, according to the second boundary function: As the condition of (5) is wrong, the algorithm will not generate the child node of . And, if the node 5 is intended to add, the weight of the node 5 at level 3 exceeds the sum of the number of sampling points, that is, , which violates the third bounding function. Hence, it prevents the generation of node 5. If the algorithm continues, the tree in Figure 2 will be constructed, where the cross (X) marked node of has duplicate coverage pattern. In Figure 2, the optimal coverage is formed by the transmitters {, , and } and the solution path consists of the nodes 1, 6, 8, and 9, Figure 3 shows some sample simulations with different numbers of sampling points generated by the proposed coverage algorithm. Here, the small filled solid circles represent the sampling points Rx (receiving points), the solid circles in hollow circles represent the optimized positions of the transmitters Tx that cover all the sampling points from those positions as well as eliminate all the blind spots, and the rectangles are the objects (working as obstacles). 3. Results and Discussion In this section, both time and space complexities of the proposed algorithm will be derived and compared with the existing Algorithms 1 and 2, as reported in [3, 12], respectively. Both time and space complexities of the existing Algorithm 1 reported in [3] were derived in [12] and it was observed that both time and space complexities were as the same as follows: Here, represents the number of sampling points. As the time complexity depends on the number of nodes generated or expanded until the required solution has been found, the time complexity of the existing Algorithm 2 can be expressed as follows by modifying (6): Here, and are the number of rejected coverage patterns and unexplored nodes. In [12], it is already proven that the time complexity is according to the bounding function where BFS (breadth first search) algorithm is used. On the other hand, this paper is an enhanced version of our existing work, and hence, we can express the time complexity of DFS algorithm which is as well, because the time complexity of DFS is generally the same as BFS. However, in the proposed method, we have used new bounding functions and some criteria to limit the search space tree. According to the third bounding function, the solution must be found within the path length for the proposed algorithm. Therefore, the improved time complexity of the proposed method is obtained as follows by replacing value with : As the only single path from the root to a leaf node is stored in memory stack, the space complexity of the proposed algorithm can be expressed as where is the number of nodes unexplored in a single path. The proposed algorithm will be compared with the existing Algorithm 1 [3] and the existing Algorithm 2 [12], as presented in Table 3. Based on this table, it can be deduced that the time complexity of the proposed algorithm is much better than the existing Algorithms 1 and 2 (the time complexity of both existing algorithms is increasing exponentially while it remains almost constant for the proposed algorithm). This is further demonstrated in Figure 4. In this figure, the values of and are randomly assigned to 1 and 2, respectively. That is, it has been considered that the number of rejected duplicate patterns is and the number of unexplored nodes is . According to Figure 4, as the value of and increases, the complexity difference rises which indicates better performance of the proposed algorithm. In addition, Figure 5 and Table 4 show that both existing Algorithms 1 and 2 experience exponential increase in space complexity while only a linear increase with the proposed algorithm. Let the space complexities of the existing Algorithm 1 and the proposed algorithm be and , respectively. Thus, it can be written that , which indicates less space complexity of the proposed algorithm from the existing Algorithm 1. Again suppose that the space complexities of the existing Algorithm 2 and the proposed algorithm are and , respectively. Therefore, it can also be written that , which indicates less space complexity of the proposed algorithm from the existing Algorithm 2. Therefore, space complexity of the proposed algorithm is much less than that of both existing Algorithms 1 and 2 that can be best described in a tabular format as in Table 4, where the values of , , and are also randomly assigned to 1, 2, and 1, respectively. That is, it has been considered that the number of rejected duplicate patterns is , the number of unexplored nodes in the whole search space is , and the number of unexplored nodes in a single path of the DFS tree is . From Table 4 and Figure 5, it can be decided that the space complexity of both existing Algorithms 1 and 2 is increasing exponentially while the proposed algorithm is increasing linearly, which indicates an outstanding performance of the proposed algorithm. The comparison based on the computation time among proposed algorithm and existing Algorithms 1 and 2 has been exposed in Table 5 and the reduction of computation time of the proposed algorithm from existing Algorithms 1 and 2 has been presented as a graphical view for better understanding in Figure 6. Here, 10 different scenarios containing different numbers of sampling points have been considered. The number of transmitters in Table 5 indicates how many transmitters are involved to cover all the sampling points as well as to remove all the blind spots from the corresponding propagation area. From Table 5, it is revealed that the execution time of the proposed algorithm is always much less than both existing Algorithms 1 and 2. It is also established that 75% reduction of the execution time in average is possible by the proposed algorithm while comparing with the existing Algorithm 2 and above 99% reduction (in average) is possible while comparing with the existing Algorithm 1. In this study, specular reflections and transmissions of waves between the walls and furniture are considered, because the RT model used in this paper is a 2D ray tracer. Indoor measurements are performed at a carrier frequency of 2.4GHz. The transmitter (Tx) used in this experiment is R&S SMU200A Vector Signal Generator and the receiver (Rx) is R&S FSV Signal and Spectrum Analyzer. Here, the semispherical antenna is used for both of Tx and Rx to receive the signals from both directions. The obstacles are composed of arbitrary materials. The walls of the room are made of brick. The doors are made of wood and the windows are made of glass. The inside furniture of the room is made of wood and plastic board. There are some partitions that are made of plastic board and glass. Relative permittivity and refractive indexes of different materials (each and every kind of furniture in the building), such as brick, glass, wood, and plastic board in the testing room, are chosen as standard value [18, 19]. It should be noted here that a highly sophisticated simulation software is developed using Visual Studio 2008 (C# code) in this work that provides a Graphical User Interface (GUI). Therefore, this software will allow customizing the properties and the thicknesses of particular objects by a “mouse click” operation. In this study, all walls are assumed to have the same thickness and same properties except at locations wherever specified. Besides, the detailed parameters, such as thickness, dielectric constant, and conductivity of particular objects, are taken for the realistic field calculations. When the properties of each object are modified, suppose that permittivity of the wall is changed, then the received signal strength will also change, because the received signal is directly related to the reflection loss, which is related to the permittivity of the materials. On the other hand, when any partition is changed by other materials, for instance, a partition made of brick is changed by the glass sheet, then the overall predicted signal is also changed, which will have an effect on the number of receivers covered by a particular transmitter, which means that the number of blind spots will also be changed. Figure 7 shows that the number of blind spots is changed because of changing of material used in the indoor environment. In Figure 7(a), one transmitter and four receivers are placed in different locations. After rays are launched from the transmitter, two receivers remain uncovered which means that two blind spots exist within the coverage area, because of brick partition used in the dotted squared area. On the other hand, four receivers are covered by the transmitter when the partition walls (that exist in the dotted rectangle as shown in Figure 7(b)) are replaced by the glass sheet. Therefore, none of the blind spots exits in the coverage area. It can be summarized that when the properties of objects used in the simulation environment are changed, then the coverage pattern of the transmitter also changes. In addition, real measurements of Figure 1 are carried out to account for the strength of the signal, because providing coverage with assessing the intensity of the signal is sufficient to claim that there are no blind spots. Bad sampling points are determined as when the received power from any transmitter (Tx) is less than −50dB [3]. Figure 8 shows a relationship between the received power and coverage for different number of Tx(s), where the transmitted power is considered as constant and the number of Tx(s) is increased to obtain the maximum coverage. From Figure 8, it is also observed that when there is a single Tx on the environment, it covers a less number of Rx(s). Therefore, the total power also becomes less. For two Tx(s), the number of covered Rx(s) increases, which also increases the received power. For three Tx(s), almost all Rx(s) are covered and, also, the received power is getting higher. According to the indication of the figures, it is clear that the Rx power is coming from all of the used Tx(s). When we use single Tx, the Rx(s) power is from that single transmitter. When there are two Tx(s), it is obvious that the Rx(s) receive power from both of the two Tx(s) and the average of the two received powers has been shown here. In case of three Tx(s), the Rx(s) receive from all three Tx(s) and here, also, the average received power is shown in the figure. During this experiment, the Tx power was kept constant at −30dBm. However, this experiment can also be done for any range of Tx power. From Figure 8, it is observed that the percentage of coverage is increasing with the increase in the received power, which guarantees coverage of the receive point (i.e., there are no blind spots). The received power is also increased when the number of Txs is increasing. It is found that, when the number of Txs is 3, the percentage of coverage is almost 99% (almost all sampling points are covered). 4. Conclusion An efficient and novel optimization algorithm of removing blind spots from the indoor area has been proposed in this study. The advantage of the algorithm is that the memory requirement is only linear. This is in contrast to other existing algorithms [3, 12], which require more space. The reason behind that is that, a stack of nodes on the path from the root to the current node is needed to store by the algorithm. As mentioned earlier, if the proposed algorithm finds the solution without exploring unnecessary nodes in a path, the space and time it takes will be much less. Thus, it has less time and space complexities in average and the more the number of sampling points in the indoor environment, the more the complexity difference between the proposed and the existing algorithms. Thus, it is revealed that the average execution time can be reduced as high as 99% and 75%, respectively, because of outstanding bounding functions as well as the concept of binary coverage pattern. Therefore, a conclusion can be drawn that the proposed algorithm excels in both algorithmic complexity and computation time. Conversely, the wireless coverage model by [3] repeats the ray tracer for every generated chromosome. As a result, there is a high probability of repeating the RT program multiple times for the same transmitter position, which consumes a significant amount of time. The model [3] also cannot reuse the information of a transmitter location and, therefore, it runs the ray tracer repeatedly. The proposed algorithm does not run the ray tracer more than once for a single transmitting position. The proposed coverage model guarantees the removal of blind spots completely from the indoor area and subsequently continues recursively until it covers all the sampling (receiving) points. Theoretically, the coverage algorithm always covers all the receiving points to offer 100% coverage. Moreover, the performance and accuracy of the proposed coverage model are verified by making comparisons between simulation results and measured data. It is observed that the percentage of coverage is increasing with the increase in the received power that guarantees coverage of the receiving point. The received power is also increased when the number of transmitters is increasing. Therefore, in terms of measurement results, the percentage of obtaining coverage is 99% and almost all sampling points are successfully covered in the given area. Hence, the proposed algorithm can be a great competitor of other optimization techniques and the outcome of this study can be easily applied to real system engineering. 1. C. H. Loo, A. Z. Elsherbeni, F. Yang, and D. Kajfez, “Experimental and simulation investigation of RFID blind spots,” Journal of Electromagnetic Waves and Applications, vol. 23, no. 5, pp. 747–760, 2009. View at Publisher · View at Google Scholar · View at Scopus 2. S. Grubisic, W. P. Carpes, and J. P. A. Bastos, “Optimization model for antenna positioning in indoor environments using 2-D ray-tracing technique associated to a real-coded genetic algorithm,” IEEE Transactions on Magnetics, vol. 45, no. 3, pp. 1626–1629, 2009. View at Publisher · View at Google Scholar · View at Scopus 3. Z. Yun, S. Lim, and M. F. Iskander, “An integrated method of ray tracing and genetic algorithm for optimizing coverage in indoor wireless networks,” IEEE Antennas and Wireless Propagation Letters , vol. 7, pp. 145–148, 2008. View at Publisher · View at Google Scholar · View at Scopus 4. M. S. Sarker, A. W. Reza, and K. Dimyati, “A novel ray-tracing technique for indoor radio signal prediction,” Journal of Electromagnetic Waves and Applications, vol. 25, no. 8-9, pp. 1179–1190, 2011. View at Publisher · View at Google Scholar · View at Scopus 5. C. Takahashi, Z. Yun, M. F. Iskander, G. Poilasne, V. Pathak, and J. Fabrega, “Propagation-prediction and site-planning software for wireless communication systems,” IEEE Antennas and Propagation Magazine, vol. 49, no. 2, pp. 52–60, 2007. View at Publisher · View at Google Scholar · View at Scopus 6. A. W. Reza, K. Dimyati, K. A. Noordin, and M. S. Sarker, “Intelligent ray-tracing: an efficient indoor ray propagation model,” IEICE Electronics Express, vol. 8, no. 22, pp. 1920–1926, 2011. View at Publisher · View at Google Scholar 7. A. Tayebi, J. Gomez, F. Saez de Adana, and O. Gutierrez, “The application of ray-tracing to mobile localization using the direction of arrival and received signal strength in multipath indoor environments,” Progress in Electromagnetics Research, vol. 91, pp. 1–15, 2009. View at Scopus 8. J. Gomez, A. Tayebi, F. S. de Adana, and O. Gutierrez, “Localization approach based on ray-tracing including the effect of human shadowing,” Progress In Electromagnetics Research Letters, vol. 15, pp. 1–11, 2010. View at Scopus 9. Y. B. Tao, H. Lin, and H. J. Bao, “KD-tree based fast ray tracing for RCS prediction,” Progress in Electromagnetics Research, vol. 81, pp. 329–341, 2008. View at Scopus 10. F. Saez de Adana, O. Gutiérrez, M. A. Navarro, and A. S. Mohan, “Efficient time-domain ray-tracing technique for the analysis of ultra-wideband indoor environments including Lossy materials and multiple effects,” International Journal of Antennas and Propagation, vol. 2009, Article ID 390782, 8 pages, 2009. View at Publisher · View at Google Scholar 11. C. Lièbe, P. Combeau, A. Gaugue et al., “Ultra-wideband indoor channel modelling using ray-tracing software for through-the-wall imaging radar,” International Journal of Antennas and Propagation, vol. 2010, Article ID 934602, 14 pages, 2010. View at Publisher · View at Google Scholar · View at Scopus 12. A. W. Reza, M. S. Sarker, and K. Dimyati, “A novel integrated mathematical approach of ray-tracing and genetic algorithm for optimizing indoor wireless coverage,” Progress in Electromagnetics Research, vol. 110, pp. 147–162, 2010. View at Scopus 13. E. Agastra, G. Bellaveglia, L. Lucci et al., “Genetic algorithm optimization of high-efficiency wide-band multimodal square horns for discrete lenses,” Progress in Electromagnetics Research, vol. 83, pp. 335–352, 2008. View at Scopus 14. Z. Meng, “Autonomous genetic algorithm for functional optimization,” Progress in Electromagnetics Research, vol. 72, pp. 253–268, 2007. View at Publisher · View at Google Scholar 15. Y. Rahmat-Samii and E. Michielssen, Electromagnetic Optimization by Genetic Algorithms, Wiley-Interscience, New York, NY, USA, 1999. 16. A. W. Reza, K. Dimyati, K. A. Noordin, A. S. M. Z. Kausar, and M. S. Sarker, “A comprehensive study of optimization algorithm for wireless coverage in indoor area,” Optimization Letters, 2012. View at Publisher · View at Google Scholar 17. D. E. Knuth, The Art of Computer Programming, vol. 1, Addison-Wesley, Boston, Mass, USA, 3rd edition, 1997. 18. D. M. Dobkin, RF Engineering for Wireless Networks: Hardware, Antennas, and Propagation, Elsevier/Newnes, Amsterdam, The Netherlands, 2005. 19. T. S. Rappaport, Wireless Communication Principles and Practice, Communications Engineering and Emerging Technology, Prentice Hall, Upper Saddle River, NJ, USA, 2002.
{"url":"http://www.hindawi.com/journals/ijap/2013/509878/","timestamp":"2014-04-16T14:15:28Z","content_type":null,"content_length":"180397","record_id":"<urn:uuid:ca7e6e23-15a8-4e30-ac56-5f059dc864cc>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00305-ip-10-147-4-33.ec2.internal.warc.gz"}
shaded region Well, we know that: $\left|\int_a^b [f(x)]dx\right| = Area$ Well, we know the following: $a = 1$ We know this because the -1 at the end shifts it to the right 1. $b = 4$ It is labeled as the end of the region. $y = f(x) = \frac{1}{x^2} - 1$ So, therefore: $\left|\int_1^4 [\frac{1}{x^2} - 1]dx\right|$ So now we have to rewrite it in a way that would make this easier to integrate: $\left|\int_1^4 [x^{-2} - 1]dx\right|$ Now we integrate: $|[-x^{-1} - x]_1^4|$ $\left|\left[\left(-\frac{1}{4} - 4\right) - (-1 - 1)\right]\right|$ $\left|\left[\left(-\frac{5}{4}\right) + 2\right]\right|$ $\left|\ frac{3}{4}\right| \text{units}^2 = \frac{3}{4} \text{units}^2 = Area$ There you go.
{"url":"http://mathhelpforum.com/calculus/30583-shaded-region.html","timestamp":"2014-04-18T14:20:34Z","content_type":null,"content_length":"33835","record_id":"<urn:uuid:14b2cca8-a1df-41a7-b023-f61b7214962a>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00305-ip-10-147-4-33.ec2.internal.warc.gz"}
e to NAEP Assessment Item, Grade 8: Use a scale to find a distance between two points 6, 7, 8 Professional Commentary Students use proportional reasoning to calculate a distance on a scale drawing. This multiple-choice question is a sample test item used in grades 4 and 8 in the 2003 National Assessment of Educational Progress (see About NAEP). The URL link (above) takes the user directly to the NAEP test item, with access to performance data by various subgroups of students, a scoring key, and discussion of the content on which the item is based. The NAEP website allows users to build their own printable database of test items by clicking on Add Question in the upper right hand corner of the screen. NAEP Reference Number: 2003-8M6, No. 19. (sw) Collections Containing This Resource Ohio Mathematics Academic Content Standards (2001) Number, Number Sense and Operations Standard Benchmarks (5–7) Use a variety of strategies, including proportional reasoning, to estimate, compute, solve and explain solutions to problems involving integers, fractions, decimals and percents. Benchmarks (8–10) Estimate, compute and solve problems involving real numbers, including ratio, proportion and percent, and explain solutions. Grade Level Indicators (Grade 6) Use proportional reasoning, ratios and percents to represent problem situations and determine the reasonableness of solutions. Grade Level Indicators (Grade 8) Estimate, compute and solve problems involving rational numbers, including ratio, proportion and percent, and judge the reasonableness of solutions. Principles and Standards for School Mathematics Number and Operations Standard Understand numbers, ways of representing numbers, relationships among numbers, and number systems Expectations (6–8) understand and use ratios and proportions to represent quantitative relationships; Compute fluently and make reasonable estimates Expectations (6–8) understand and use ratios and proportions to represent quantitative relationships; develop, analyze, and explain methods for solving problems involving proportions, such as scaling and finding equivalent ratios.
{"url":"http://ohiorc.org/record/2906.aspx","timestamp":"2014-04-18T03:04:39Z","content_type":null,"content_length":"25788","record_id":"<urn:uuid:db9b8582-4483-48e8-86c8-216f66048977>","cc-path":"CC-MAIN-2014-15/segments/1397609532480.36/warc/CC-MAIN-20140416005212-00429-ip-10-147-4-33.ec2.internal.warc.gz"}
Quick Review 3-1 Unit 3 - Objective 1 - Tangent Lines and Normal Lines Example 1 To find the equation of the tangent line to the curve at a given point, you need to find the 1st derivative of the function. The derivative is then evaluated at a given point which will be the slope of the tangent line. Then using the point-slope form of the equation To find the equation of the normal line (perpendicular to the tangent line) to the curve at a given point, you also find the 1st derivative of the function. The derivative is then evaluated at a given point which will be the slope of the tangent line. You then take the negative reciprocal of this slope to find the slope of the normal line. Then using the point-slope form of the equation, you can find the equation of the normal line. Try it! Find the equation of the tangent line to the curve y = 2x² - 3 at the point (2,5) You now need to find the slope of the tangent line to the curve at any point which would be: You now need to find the slope of the tangent line to the curve at the point (2,5). Evaluate the 1st derivative. You can now use the point-slope equation to find the equation of the tangent line to the curve at the point (2,5). Some other correct forms of the same equation are: -8x + y = -11 or -8x + y + 11 = 0 Example 2 If you want to find the equation of the normal line, you would take the negative reciprocal of 8 which would be -1/8. This would be the slope of the normal line. You can now use the point-slope equation to find the equation of the normal line to the curve at the point (2,5). Some other forms of the same equation are: 8y = -x + 42 x + 8y = 42 x + 8y - 42 = 0 Try it! Find the equation of the tangent line and normal line to the curve. 5x² + 20y² = 40 at the point (2,1) Find the 1st derivative (dy/dx). Now evaluate the derivative to find the slope of the tangent line at the point (2,1). The equation of the tangent line to the curve at the point (2,1) would be: y - (1) = - y - 1 = - y - 1 = - y = - Other forms are: x + 2y = 4 The slope of the normal line would be 2 so the equation of the normal line to the curve at the point (2,1) would be: y - 1 = 2 (x - 2) y - 1 = 2x - 4 y = 2x - 3 Other forms are: -2x + y = -3 -2x + y + 3 = 0 [Unit 3 Outline] [Course Outline] [Home Page] Copyright © 1996 by B. Chambers and P. Lowry. All Rights Reserved.
{"url":"http://www.ltu.edu/courses/lowry/techcalc/mod3-1.htm","timestamp":"2014-04-19T22:32:13Z","content_type":null,"content_length":"4865","record_id":"<urn:uuid:83bd9d1c-79cd-49b3-bba7-2177ee7a94cd>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00591-ip-10-147-4-33.ec2.internal.warc.gz"}